
With today's news of the early stages of drought hitting the U.S. northeast states, I decided to springboard off of yesterday's post and show a more practical use of hexbin state maps than the built-in (and still purpose-unknown-to-me) "bees" data.

The U.S. Drought Monitor site supplies more than just a pretty county-level map. There's plenty of data, and you can dynamically retrieve data tables for the whole U.S., U.S. states and U.S. counties. Since we're working with state hexbins, we just need the state-level data. Drought levels for all five stages are reported per-state, so we can take all this data and create a faceted/small-multiples map from it.

This builds quite a bit on the previous work, so you’ll see some familiar code. Most of the new code is actually making the map look nice (the great part about this is that once you have the idiom down, it’s just a matter of running the script each day vs a billion mouse clicks). The other bit of new code is the data-retrieval component:

library(readr)
library(tidyr)
library(dplyr)

intensity <- c(D0="Abnormally Dry", D1="Moderate Drought", D2="Severe Drought", 
               D3="Extreme Drought", D4="Exceptional Drought")

today <- format(Sys.Date(), "%Y%m%d")

read_csv(sprintf("http://droughtmonitor.unl.edu/USDMStatistics.ashx/?mode=table&aoi=state&date=%s", today)) %>% 
  gather(drought_level, value, D0, D1, D2, D3, D4) %>% 
  mutate(intensity=factor(intensity[drought_level], 
                          levels=as.character(intensity), ordered=TRUE)) -> drought

This:

  • sets up a fast way to add the prettier descriptions of the drought levels (vs just D0, D1, etc.)
  • dynamically uses today's date as the parameter for the URL we read with read_csv (from the readr package)
  • converts the data from wide to long
  • adds the intensity description
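
A quick glimpse of the result is a decent sanity check before plotting (the exact set of pass-through columns depends on the table the site returns that day):

glimpse(drought)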

The ggplot code will facet on the intensity level to make the overall map:

library(rgdal)
library(rgeos)
library(ggplot2)
library(readr)
library(tidyr)
library(dplyr)
library(grid)

# get map from https://gist.github.com/hrbrmstr/51f961198f65509ad863#file-us_states_hexgrid-geojson

us <- readOGR("us_states_hexgrid.geojson", "OGRGeoJSON")

centers <- cbind.data.frame(data.frame(gCentroid(us, byid=TRUE), id=us@data$iso3166_2))

us_map <- fortify(us, region="iso3166_2")

intensity <- c(D0="Abnormally Dry", D1="Moderate Drought", D2="Severe Drought",
               D3="Extreme Drought", D4="Exceptional Drought")

today <- format(Sys.Date(), "%Y%m%d")

read_csv(sprintf("http://droughtmonitor.unl.edu/USDMStatistics.ashx/?mode=table&aoi=state&date=%s", today)) %>%
  gather(drought_level, value, D0, D1, D2, D3, D4) %>%
  mutate(intensity=factor(intensity[drought_level],
                          levels=as.character(intensity), ordered=TRUE)) -> drought

gg <- ggplot()
gg <- gg + geom_map(data=us_map, map=us_map,
                    aes(x=long, y=lat, map_id=id),
                    color="white", size=0.5)
gg <- gg + geom_map(data=drought, map=us_map,
                    aes(fill=value, map_id=State))
gg <- gg + geom_map(data=drought, map=us_map,
                    aes(map_id=State),
                    fill="#ffffff", alpha=0, color="white",
                    show_guide=FALSE)
gg <- gg + geom_text(data=centers, aes(label=id, x=x, y=y), color="white", size=4)
gg <- gg + scale_fill_distiller(name="State\nDrought\nCoverage", palette="RdPu", na.value="#7f7f7f",
                                labels=sprintf("%d%%", c(0, 25, 50, 75, 100)))
gg <- gg + coord_map()
gg <- gg + facet_wrap(~intensity, ncol=2)
gg <- gg + labs(x=NULL, y=NULL, title=sprintf("U.S. Drought Conditions as of %s\n", Sys.Date()))
gg <- gg + theme_bw()
gg <- gg + theme(plot.title=element_text(face="bold", hjust=0, size=24))
gg <- gg + theme(panel.border=element_blank())
gg <- gg + theme(panel.margin=unit(3, "lines"))
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(axis.ticks=element_blank())
gg <- gg + theme(axis.text=element_blank())
gg <- gg + theme(strip.background=element_blank())
gg <- gg + theme(strip.text=element_text(face="bold", hjust=0, size=14))
gg <- gg + theme(legend.position=c(0.75, 0.15))
gg <- gg + theme(legend.direction="horizontal")
gg <- gg + theme(legend.title.align=1)

png(sprintf("%s.png", today), width=800, height=800)
print(gg)
dev.off()

[Figure: faceted U.S. drought conditions map generated 2015-05-15]

Now you can easily animate these over time to show the progression/regression of the drought conditions (a sketch of one way to do that is below). If you're sure your audience can work with SVG files, you can use those for very crisp/sharp maps (and even feed them to D3 or path editing tools). If you have an example of how you're using hexbin choropleths, drop a note in the comments. The code from above is also on github.
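
Here's a rough sketch of that animation idea (assumptions: the drought monitor endpoint serves historical dates in the same YYYYMMDD format, make_drought_map() is a hypothetical wrapper around the full script above, and ImageMagick's convert is on the PATH):

dates <- format(seq(as.Date("2015-01-06"), Sys.Date(), by="1 week"), "%Y%m%d")

for (d in dates) {
  gg <- make_drought_map(d) # hypothetical wrapper around the script above
  png(sprintf("%s.png", d), width=800, height=800)
  print(gg)
  dev.off()
}

# stitch the frames into an animated gif with ImageMagick
system("convert -delay 80 -loop 0 *.png drought-2015.gif")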

There’s been lots of buzz about “statebin” maps of late. A recent tweet by @andrewxhill referencing work by @dannydb pointed to a nice shapefile (alternate link) that ends up being a really great way to handle statebin maps (and I feel like a fool for not considering it for a more generic solution earlier).

Here’s a way to use the GeoJSON version in R. I like GeoJSON since it’s a single file vs a directory of files and is readable vs binary. If you’re in a TL;DR hurry, you can just review the code in this gist. Read on for the exposition.

Taking a look around

When you download the GeoJSON, it should be in a file called us_states_hexgrid.geojson. We can see what’s in there with R pretty easily:

library(rgdal)

ogrInfo("us_states_hexgrid.geojson", "OGRGeoJSON")

## Source: "us_states_hexgrid.geojson", layer: "OGRGeoJSON"
## Driver: GeoJSON number of rows 51 
## Feature type: wkbPolygon with 2 dimensions
## Extent: (-137.9747 26.39343) - (-69.90286 55.3132)
## CRS: +proj=longlat +datum=WGS84 +no_defs  
## Number of fields: 6 
##         name type length typeName
## 1 cartodb_id    0      0  Integer
## 2 created_at    4      0   String
## 3 updated_at    4      0   String
## 4      label    4      0   String
## 5       bees    2      0     Real
## 6  iso3166_2    4      0   String

Along with the basic shapefile goodness, we have some data, too! We’ll use all this to make a choropleth hexbin of “bees” (I have no idea what that field is, but assume it has something to do with bee populations, which are a serious problem on the planet right now). Let’s dig in.

Plotting the bins

First we need to read in the data, which is pretty simple:


us <- readOGR("us_states_hexgrid.geojson", "OGRGeoJSON")

That ends up being a fairly complex object with polygons and data. However, we can take a quick look at it with base R graphics:


plot(us)

[Figure: default base R plot of the hex grid]

Yay! While we could do most (if not all) the remainder of the graphics in base R, I personally believe ggplot is more intuitive and expressive, so let's do the same thing with ggplot. First, we'll have to get the data structure into something ggplot can handle:

library(ggplot2)

us_map <- fortify(us, region="iso3166_2")
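
A quick peek at the result confirms the structure (fortify returns one row per polygon vertex, with columns long, lat, order, hole, piece, id & group; our iso3166_2 values end up in the id column):

head(us_map)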

That gives us a data frame with the 2-letter state abbreviations as the "region" keys. Now we can do a basic ggplot:

ggplot(data=us_map, aes(map_id=id, x=long, y=lat)) + 
  geom_map(map=us_map, color="black", fill="white")

[Figure: first-pass ggplot version of the hex grid]

Ugh. Talk about ugly. But, at least it works! Now all we need to do is turn it into a choropleth, remove some map chart junk and make it look prettier!

Upping the aesthetics

There's a pretty good idiom for making maps in R. There's a handy layer/geom called geom_map which takes care of a ton of details under the hood. We can use it for making outlines and fills and add as many layers of them as we want/need. For our needs, we'll:

  • put down a base layer of polygons
  • add a fill layer for our data
  • get rid of map chart junk

This is all pretty straightforward once you get the hang of it:

gg <- ggplot()

# Plot base map -----------------------------------------------------------

gg <- gg + geom_map(data=us_map, map=us_map,
                    aes(x=long, y=lat, map_id=id),
                    color="white", size=0.5)

# Plot filled polygons ----------------------------------------------------

gg <- gg + geom_map(data=us@data, map=us_map,
                    aes(fill=bees, map_id=iso3166_2))

# Remove chart junk for the "map" -----------------------------------------

gg <- gg + labs(x=NULL, y=NULL)
gg <- gg + theme_bw()
gg <- gg + theme(panel.border=element_blank())
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(axis.ticks=element_blank())
gg <- gg + theme(axis.text=element_blank())
gg

[Figure: filled hexbin choropleth, default styling]

Definitely better, but it still needs work. Outlines would be good and it definitely needs a better color palette. It would also be nice if the polygons weren't "warped". We can fix these issues by adding in a few other elements:

gg <- ggplot()
gg <- gg + geom_map(data=us_map, map=us_map,
                    aes(x=long, y=lat, map_id=id),
                    color="white", size=0.5)
gg <- gg + geom_map(data=us@data, map=us_map,
                    aes(fill=bees, map_id=iso3166_2))

# Overlay borders without ugly line on legend -----------------------------

gg <- gg + geom_map(data=us@data, map=us_map,
                    aes(map_id=iso3166_2),
                    fill="#ffffff", alpha=0, color="white",
                    show_guide=FALSE)

# ColorBrewer scale; using distiller for discrete vs continuous -----------

gg <- gg + scale_fill_distiller(palette="RdPu", na.value="#7f7f7f")

# coord_map mercator works best for the display ---------------------------

gg <- gg + coord_map()

gg <- gg + labs(x=NULL, y=NULL)
gg <- gg + theme_bw()
gg <- gg + theme(panel.border=element_blank())
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(axis.ticks=element_blank())
gg <- gg + theme(axis.text=element_blank())
gg

[Figure: hexbin choropleth with white outlines, RdPu palette and map projection]

Much better. We use a "hack" to keep the legend free of white slash marks for the polygon outlines (see the comments for a less-hackish way) and coord_map to let the projection handle the "unwarping". By using the distiller fill, we get discrete color bins vs continuous shades (use what you feel is most appropriate, though, for your own work).

Where are we?

Most statebin/hexbin maps still (probably) need state labels since (a) Americans are notoriously bad at geography and (b) even if they were good at geography, we've removed much of the base references for folks to work from accurately.

The data exists in the shapefile, but to get the labels put in the centers of each polygon we have to do a bit of work:

library(rgeos)

centers <- cbind.data.frame(data.frame(gCentroid(us, byid=TRUE), id=us@data$iso3166_2))

That gets us a data frame of the x & y centers of each polygon along with the (abbreviated) state name. We can now add a layer with geom_text to place the label. The following is the complete solution:

library(rgdal)
library(rgeos)
library(ggplot2)

us <- readOGR("us_states_hexgrid.geojson", "OGRGeoJSON")

centers <- cbind.data.frame(data.frame(gCentroid(us, byid=TRUE), id=us@data$iso3166_2))

us_map <- fortify(us, region="iso3166_2")

gg <- ggplot()
gg <- gg + geom_map(data=us_map, map=us_map,
                    aes(x=long, y=lat, map_id=id),
                    color="white", size=0.5)
gg <- gg + geom_map(data=us@data, map=us_map,
                    aes(fill=bees, map_id=iso3166_2))
gg <- gg + geom_map(data=us@data, map=us_map,
                    aes(map_id=iso3166_2),
                    fill="#ffffff", alpha=0, color="white",
                    show_guide=FALSE)
gg <- gg + geom_text(data=centers, aes(label=id, x=x, y=y), color="white", size=4)
gg <- gg + scale_fill_distiller(palette="RdPu", na.value="#7f7f7f")
gg <- gg + coord_map()
gg <- gg + labs(x=NULL, y=NULL)
gg <- gg + theme_bw()
gg <- gg + theme(panel.border=element_blank())
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(axis.ticks=element_blank())
gg <- gg + theme(axis.text=element_blank())
gg

[Figure: final hexbin choropleth with state labels]

Wrapping up

This is a pretty neat way to work with "statebins" and I'll probably take some time over the summer to update my statebins package to use shapefiles and allow for more generic shapes. Ramnath Vaidyanathan has also done some work with statebins and javascript, so I'll see what I can do to merge all the functionality into one package.

If you've got an alternate way to work with these or have some interesting "bins" to show, drop a note in the comments.

I’m an avid NPR listener and also follow a number of their programs and people on Twitter. I really dig their [quotable](https://github.com/nprapps/quotable) tweets. Here’s a sample of a recent one:

I asked @brianboyer & @alykat (two members of the _awesome_ NPR Visuals team) if they had a workflow for this (they publish quite a bit of open source tools) and, well, [of course they did](https://twitter.com/hrbrmstr/status/596656939720962048).

After poking around a bit, I determined that the workflow would be great for something like NPR but was a bit overkill for me and also did not meet all of my requirements. I really only needed something that would:

– work from the command-line (on OS X)
– take a JSON file for the quote
– generate an SVG (for blogging) and PNG (for tweeting)

Thus, [quotebox](http://github.com/hrbrmstr/quotebox) was born.

For the time being, `quotebox` is a python script that lets you pass in a JSON config file with quote components and spits out some files to use in blog posts & tweets. I can see it morphing into more, but for now you can create a quote file like:

{
  "quote" : "Making these quotes by hand is fine &amp; NPR Visuals web app is cool, but this has some advantages, too.",
  "source" : "Data-Driven Security co-author Bob Rudis on making “NPR-like quoteboxes” at the command-line.", 
  "logo" : "ddsec.png"
}

which will generate:

[Figure: generated quotebox (PNG)]

or (SVG version):

[Figure: generated quotebox (SVG)]

The script auto-adds “smart” quotes and an em-dash plus base64 encodes the logo file to a data URI so you can easily use the SVG file without needing to bring along the associated logo asset. The SVG itself has a couple (at least to me) interesting components. First, it uses the [Lato](https://www.google.com/fonts/specimen/Lato) font from Google Fonts via `defs`:

<defs>
  <style type="text/css">@import url('http://fonts.googleapis.com/css?family=Lato:300,900');</style>
</defs>

It then uses two `foreignObject` tags for the main quote and the attribution line. Here’s an example:

<foreignObject x="25" y="25" width="590" height="165">
  <body xmlns="http://www.w3.org/1999/xhtml"
    style="background: white; font-family: Lato; font-weight:900; font-stretch:normal; font-size:36px; line-height:36px; text-indent:-16px; margin-left:20px; color:#333333">
    <p>&#8220;Making these quotes by hand is fine &amp; NPR Visuals web app is cool, but this has some advantages, too.&#8221;</p>
  </body>
</foreignObject>

Like most SVG elements, you have full precision as to how you place and size the `foreignObject` and this object contains XHTML, so all text or graphics in them can be fully styled. Not every browser or command-line rendering engine supports this, but the latest Chrome & Safari browsers do along with the >=2.0.0 of [phantomjs](http://phantomjs.org/).

Once the SVG is built, the script uses phantomjs to render the resultant png file. You now have two assets (SVG & png) that can be used wherever you like, but posting it as a Twitter image post will require the use of png.

### Requirements & Usage

You’ll need python, [Pillow](https://python-pillow.github.io/) and [phantomjs](http://phantomjs.org/) to run the script and just need to execute

python quotebox.py test.json

to generate `test.svg` and `test.png`. On OS X I was using

python quotebox.py test.json && open test.svg && open test.png

to test out the image generation & conversion.

Note that many (most?) folks should use the NPR Visuals [quotable](https://github.com/nprapps/quotable) instead of this since it has full workflow management. This script was created primarily to oversimplify local usage (I don’t work in a newsroom) and play with SVG.

The python is not bulletproof (no error checking, no use of temp files/dirs) but it’s a decent basic foundation.

### Future work

There will definitely be command line switches for some options (quotes or no quotes) and possibly an extension to the JSON config file for fonts and colors. I think grabbing a logo from a URL is needed, too, along with automagic resizing to a series of constraints. I can also envision spinning up a quotebox app on Heroku or Google Apps and bridging it to a bookmarklet, but fork away and PR as you see fit. The creativity & ingenuity of the community knows no bounds.

The SVE service is a fake service put out solely to facilitate some inane puzzle. You can reference SVE-2015-9999 via this URL; hopefully this will make it possible for Google to get it back in their index.

I saw a fly-by `#rstats` mention of more airplane accident data on — of all places — LinkedIn (email) today which took me to a [GitHub repo](https://github.com/philjette/CrashData) by @philjette. It seems there’s a [web site](http://www.planecrashinfo.com/) (run by what seems to be a single human) that tracks plane crashes. Here’s a tweet from @philjette announcing it:

The repo contains the R code that scrapes the site and it’s (mostly) in old-school R and works really well. I’m collecting and conjuring many bits of R for the classes I’m teaching in the fall and thought that it would be useful to replicate @philjette’s example in modern Hadleyverse style (i.e. `dplyr`, `rvest`, etc). I even submitted a [pull request](https://github.com/philjette/CrashData/pull/1) to him with the additional version. I’ve replicated it below with some additional comments for those wanting to jump into the Hadleyverse. No shiny `ggplot2` graphs this time, I’m afraid. This is all raw code, but will hopefully be useful to those learning the modern ropes.

Just to get the setup bits out of the way, here’s all the packages I’ll be using:

library(dplyr)
library(rvest)
library(magrittr)
library(stringr)
library(lubridate)
library(pbapply)

Phil made a function to grab data for a whole year, so I did the same and gave it a default parameter of the current year (programmatically). I also tossed in some parameter checking for good measure.

The basic setup is to:

– grab the HTML for the page of a given year
– extract and format the crash dates
– extract location & operator information, which is made slightly annoying since the site uses a `<br>` separator and includes spurious newlines within a single `<td>` element
– extract aircraft type and registration (same issues as previous column)
– extract accident details, which are embedded in a highly formatted column that requires `str_match_all` to handle (well)

Some things worth mentioning:

– `data_frame` is super-helpful in not-creating `factors` from the character vectors
– `bind_rows` and `bind_cols` are a nice alternative to using `data.table` functions
– I think `stringr` needs a more pipe-friendly replacement for `gsub` and, perhaps, even `ifelse` (yes, I guess I could submit a PR; see the sketch after this list). The `.` just feels wrong in pipes to me, still
– if you’re not using `pbapply` functions (free progress bars for everyone!) you _should_ be, especially for long scraping operations
– sometimes XPath entries can be less verbose than CSS (and easier to craft) and I have no issue mixing them in scraping code when necessary
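
To make that `ifelse` gripe concrete, here's one possible pipe-friendly wrapper (a hypothetical helper, not part of `stringr` or `dplyr`; the `get_data()` function below sticks with the stock idiom):

# map a sentinel value to NA without repeating the magrittr dot
na_if_equal <- function(x, sentinel) ifelse(x == sentinel, NA, x)

# e.g. ... %>% na_if_equal("?") instead of ... %>% ifelse(.=="?", NA, .)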

Here’s the new `get_data` function (_updated per comment and to also add some more hadleyverse goodness_):

#' retrieve crash data for a given year
#' defaults to current year
#' earliest year in the database is 1920
get_data <- function(year=as.numeric(format(Sys.Date(), "%Y"))) {
 
  crash_base <- "http://www.planecrashinfo.com/%d/%s.htm"
 
  if (year < 1920 | year > as.numeric(format(Sys.Date(), "%Y"))) {
    stop("year must be >=1920 and <=current year", call.=FALSE)
  }
 
  # get crash date
 
  pg <- html(sprintf(crash_base, year, year))
  pg %>%
    html_nodes("table > tr > td:nth-child(1)") %>%
    html_text() %>%
    extract(-1) %>%
    dmy() %>%
    data_frame(date=.) -> date
 
  # get location and operator
 
  # one iteration per crash row (date is a data_frame, so use nrow(), not length())
  loc_op <- bind_rows(lapply(seq_len(nrow(date)), function(i) {
 
    pg %>%
      html_nodes(xpath=sprintf("//table/tr/td[2]/*/br[%d]/preceding-sibling::text()", i)) %>%
      html_text() %>%
      str_trim() %>%
      str_replace_all("^(Near|Off) ", "") -> loc
 
    pg %>%
      html_nodes(xpath=sprintf("//table/tr/td[2]/*/br[%d]/following-sibling::text()", i)) %>%
      html_text() %>%
      str_replace_all("(^[[:space:]]*|[[:space:]]*$|\\n)", "") -> op
 
    data_frame(location=loc, operator=op)
 
  }))
 
  # get type & registration
 
  type_reg <- bind_rows(lapply(seq_len(nrow(date)), function(i) {
 
    pg %>%
      html_nodes(xpath=sprintf("//table/tr/td[3]/*/br[%d]/preceding-sibling::text()", i)) %>%
      html_text() %>%
      str_replace_all("(^[[:space:]]*|[[:space:]]*$|\\n)", "") %>%
      ifelse(.=="?", NA, .) -> typ
 
    pg %>% html_nodes(xpath=sprintf("//table/tr/td[3]/*/br[%d]/following-sibling::text()", i)) %>%
      html_text() %>%
      str_replace_all("(^[[:space:]]*|[[:space:]]*$|\\n)", "") %>%
      ifelse(.=="?", NA, .) -> reg
 
    data_frame(type=typ, registration=reg)
 
  }))
 
  # get fatalities
 
  pg %>% html_nodes("table > tr > td:nth-child(4)") %>%
    html_text() %>%
    str_match_all("([[:digit:]]+)/([[:digit:]]+)\\(([[:digit:]]+)\\)") %>%
    lapply(function(x) {
      data_frame(aboard=as.numeric(x[2]), fatalities=as.numeric(x[3]), ground=as.numeric(x[4]))
    }) %>%
    bind_rows %>% tail(-1) -> afg
 
  bind_cols(date, loc_op, type_reg, afg)
 
}

While that gets one year, it’s super-simple to get all crashes since 1950:

crashes <- bind_rows(pblapply(1950:2015, get_data))

Yep. That’s it. Now `crashes` contains a `data.frame` (well, `tbl_df`) of all the crashes since 1950, ready for further analysis.
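
As a quick example of that further analysis, crash counts by year is a one-liner from here (using `lubridate`’s `year()` from the setup above):

crashes %>%
  mutate(year=year(date)) %>%
  count(year) %>%
  arrange(desc(n))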

For the class I’m teaching, I’ll be extending this to grab the extra details for each crash link and then performing more data science-y operations.

If you’ve got any streamlining tips or alternate ways to handle the scraping Hadleyverse-style please drop a note in the comments. Also, definitely check out Phil’s great solution, especially to compare it to this new version.

Over on The DO Loop, @RickWicklin does a nice job [visualizing the causes of airline crashes](http://blogs.sas.com/content/iml/2015/03/30/visualizing-airline-crashes/) in SAS using a mosaic plot. More often than not, I find mosaic plots a bit difficult to grok, but Rick’s use was spot on and I believe it shows the data pretty well. I also thought I’d take the opportunity to:

– Give @jennybc’s new [googlesheets](http://github.com/jennybc/googlesheets) a spin
– Show some `dplyr` & `tidyr` data wrangling (never can have too many examples)
– Crank out some `ggplot` zero-based streamgraph-y area charts for the data with some extra `ggplot` wrangling for good measure

I also decided to use the colors in the [original David McCandless/Kashan visualization](http://www.informationisbeautiful.net/visualizations/plane-truth-every-single-commercial-plane-crash-visualized/).

#### Getting The Data

As I mentioned, @jennybc made a really nice package to interface with Google Sheets, and the IIB site [makes the data available](https://docs.google.com/spreadsheet/ccc?key=0AjOUPqcIwvnjdEx2akx5ZjJXSk9oM1E3dWpqZFJ6Nmc&usp=drive_web#gid=1), so I copied it to my Google Drive and gave her package a go:

library(googlesheets)
library(ggplot2) # we'll need the rest of the libraries later
library(dplyr)   # but just getting them out of the way
library(tidyr)
 
# this will prompt for authentication the first time
my_sheets <- list_sheets()
 
# which one is the flight data one
grep("Flight", my_sheets$sheet_title, value=TRUE)
 
## [1] "Copy of Flight Risk JSON" "Flight Risk JSON" 
 
# get the sheet reference then the data from the second tab
flights <- register_ss("Flight Risk JSON")
flights_csv <- flights %>% get_via_csv(ws = "93-2014 FINAL")
 
# take a quick look
glimpse(flights_csv)
 
## Observations: 440
## Variables:
## $ date       (chr) "d", "1993-01-06", "1993-01-09", "1993-01-31", "1993-02-08", "1993-02-28", "...
## $ plane_type (chr) "t", "Dash 8-311", "Hawker Siddeley HS-748-234 Srs", "Shorts SC.7 Skyvan 3-1...
## $ loc        (chr) "l", "near Paris Charles de Gualle", "near Surabaya Airport", "Mt. Kapur", "...
## $ country    (chr) "c", "France", "Indonesia", "Indonesia", "Iran", "Taiwan", "Macedonia", "Nor...
## $ ref        (chr) "r", "D-BEAT", "PK-IHE", "9M-PID", "EP-ITD", "B-12238", "PH-KXL", "LN-TSA", ...
## $ airline    (chr) "o", "Lufthansa Cityline", "Bouraq Indonesia", "Pan Malaysian Air Transport"...
## $ fat        (chr) "f", "4", "15", "14", "131", "6", "83", "3", "6", "2", "32", "55", "132", "4...
## $ px         (chr) "px", "20", "29", "29", "67", "22", "56", "19", "22", "17", "38", "47", "67"...
## $ cat        (chr) "cat", "A1", "A1", "A1", "A1", "A1", "A1", "A1", "A1", "A2", "A1", "A1", "A1...
## $ phase      (chr) "p", "approach", "initial_climb", "en_route", "en_route", "approach", "initi...
## $ cert       (chr) "cert", "confirmed", "probable", "probable", "confirmed", "probable", "confi...
## $ meta       (chr) "meta", "human_error", "mechanical", "weather", "human_error", "weather", "h...
## $ cause      (chr) "cause", "pilot & ATC error", "engine failure", "low visibility", "pilot err...
## $ notes      (chr) "n", NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,...
 
# the spreadsheet has a "helper" row for javascript, so we nix it
flights_csv <- flights_csv[-1,] # js vars removal
 
# and we convert some columns while we're at it
flights_csv %>%
  mutate(date=as.Date(date),
         fat=as.numeric(fat),
         px=as.numeric(px)) -> flights_csv

#### A Bit of Cleanup

Despite being a spreadsheet, the data needs some cleanup and there’s no real need to include “grounded” or “unknown” in the flight phase given the limited number of incidents in those categories. I’d actually mention that descriptively near the visual if this were anything but a blog post.

The area chart also needs full values for each category combo per year, so we use `expand` from `tidyr` with `left_join` and `mutate` to fill in the gaps.

Finally, we make proper, ordered labels:

flights_csv %>%
  mutate(year=as.numeric(format(date, "%Y"))) %>%
  mutate(phase=tolower(phase),
         phase=ifelse(grepl("take", phase), "takeoff", phase),
         phase=ifelse(grepl("climb", phase), "takeoff", phase),
         phase=ifelse(grepl("ap", phase), "approach", phase)) %>%
  count(year, meta, phase) %>%
  left_join(expand(., year, meta, phase), ., c("year", "meta", "phase")) %>% 
  mutate(n=ifelse(is.na(n), 0, n)) %>% 
  filter(!phase %in% c("grounded", "unknown")) %>%
  mutate(phase=factor(phase, 
                      levels=c("takeoff", "en_route", "approach", "landing"),
                      labels=c("Takeoff", "En Route", "Approach", "Landing"),
                      ordered=TRUE)) -> flights_dat

I probably took some liberties lumping “climb” in with “takeoff”, but I’d’ve asked an expert for a production piece, just as I would hope folks doing work for infosec reports or visualizations would consult someone knowledgeable in cybersecurity.

#### The Final Plot

I’m a big fan of an incremental, additive build idiom for `ggplot` graphics. By using the `gg <- gg + …` style one can move lines around, comment them out, etc. without dealing with errant `+` signs. It also forces a logical separation of ggplot elements. Personally, I tend to keep my build order as follows:

– main `ggplot` call with mappings if the graph is short, otherwise add the mappings to the `geom`s
– all `geom_` or `stat_` layers in the order I want them, using line breaks to logically separate elements (like `aes`) or to wrap long lines for easier readability
– all `scale_` elements in order from axes to line to shape to color to fill to alpha; I’m not as consistent as I’d like here, but keeping to this makes it really easy to quickly hone in on areas that need tweaking
– `facet` call (if any)
– label setting, always with `labs` unless I really have a need for using `ggtitle`
– base `theme_` call
– all other `theme` elements, one per `gg <- gg +` line

I know that’s not everyone’s cup of tea, but it’s just how I roll `ggplot`-style. For this plot, I use a smoothed stacked plot with a custom smoother and also use Futura Medium for the text font. Substitute your own fav font if you don’t have Futura Medium.

flights_palette <- c("#702023", "#A34296", "#B06F31", "#939598", "#3297B0")
 
gg <- ggplot(flights_dat, aes(x=year, y=n, group=meta)) 
gg <- gg + stat_smooth(mapping=aes(fill=meta), geom="area",
                       position="stack", method="gam", formula=y~s(x)) 
gg <- gg + scale_fill_manual(name="Reason:", values=flights_palette, 
                             labels=c("Criminal", "Human Error",
                                      "Mechanical", "Unknown", "Weather"))
gg <- gg + scale_y_continuous(breaks=c(0, 5, 10, 13))
gg <- gg + facet_grid(~phase)
gg <- gg + labs(x=NULL, y=NULL, title="Crashes by year, by reason & flight phase")
gg <- gg + theme_bw()
gg <- gg + theme(legend.position="bottom")
gg <- gg + theme(text=element_text(family="Futura Medium"))
gg <- gg + theme(plot.title=element_text(face="bold", hjust=0))
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(panel.border=element_blank())
gg <- gg + theme(strip.background=element_rect(fill="#525252"))
gg <- gg + theme(strip.text=element_text(color="white"))
gg

That ultimately produces:

[Figure: smoothed, stacked area charts of crashes by year, by reason & flight phase]

with the facets ordered by the takeoff, en route, approach and landing phases. Overall, things have gotten way better, though I haven’t had time to look into the _bump_ between 2005 and 2010 for landing crashes (a quick filter, below, is where I’d start).
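
Something like this (using the `flights_dat` built above) would be my first pass at that bump:

# landing-phase crash counts for the 2005-2010 window, by cause
flights_dat %>%
  filter(phase == "Landing", year >= 2005, year <= 2010) %>%
  arrange(desc(n))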

As an aside, Boeing has a [really nice PDF](http://www.boeing.com/news/techissues/pdf/statsum.pdf) on some of this data with quite a bit more detail.

It seems Ruben C. Arslan had the waffle idea about the same time I did. Apart from some extra spiffy XKCD-like styling, one other thing his waffling routines allowed for was using FontAwesome icons. When you use an icon vs a block, you are really making a basic version of an isotype pictogram. Icons can add a dimension to the story you’re trying to tell without using any words. I’ve added two parameters to a pre-release CRAN version that I’d like folks to kick the tyres on a bit: use_glyph, which is either FALSE or a character string naming a FontAwesome icon (more on that in a bit), and glyph_size, a numeric value for the font size (needed since the glyph won’t scale when the graphic resizes).

Fonts in R & waffle

One part of R that is (with apologies to Winston and others) weak is fonts. You can use fonts, but doing so is often not pretty (despite guidance on the subject) and not without problems. We tried using a custom font again for this year’s DBIR graphics, failed miserably (again) due to font issues with R, and had to have the graphics folks substitute the fonts in.

To use the FontAwesome glyphs you need to:

  • grab the ttf version from here
  • install it on your system
  • install the extrafont package
  • run font_import() (get some coffee/scotch while you wait)
  • load extrafont when you need to use these glyphs
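
Condensed into code, that setup looks something like this (a sketch; it assumes the FontAwesome TTF is already installed at the system level, and you may need to adjust the pattern to match the ttf filename on your box):

install.packages("extrafont")
library(extrafont)
font_import(pattern="fontawesome", prompt=FALSE) # the coffee/scotch step
loadfonts()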

Once you do that, you’re probably ready to make isotype pictograms with waffle. I say probably since this process worked on two of my OS X systems but not a third. Same R version. Same RStudio version. Same import process. (This is part of the reason for my lament of the state of fonts since I’m not exactly an n00b with either R, Macs or fonts.)

Making isotype pictograms

I did borrow some code from Ruben, but I hate typing unicode characters and I suspect most folks do as well. If you do any work in straight HTML/CSS, you know you can just refer to the various FontAwesome glyphs by name. To use FontAwesome glyphs with waffle you specify the glyph name (no fa- prefix) vs a unicode character. If you want to see what’s available (and don’t want to bookmark the FontAwesome site) you can run either fa_list(), which will give you a list of available FontAwesome glyph names, or fa_grep(), supplying a search pattern. For example, running fa_grep("car") gives you:

##  [1] "car"                  "caret-down"           "caret-left"          
##  [4] "caret-right"          "caret-square-o-down"  "caret-square-o-left" 
##  [7] "caret-square-o-right" "caret-square-o-up"    "caret-up"            
## [10] "cart-arrow-down"      "cart-plus"            "cc-mastercard"       
## [13] "credit-card"          "shopping-cart"

Any grep regex will work in that function.
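
Anchors work as you'd expect, too; this should match just the single "car" glyph from the list above:

fa_grep("^car$")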

You’ll need to devtools::install_github("hrbrmstr/waffle", ref="cran") to use the dev/pre-CRAN version of waffle before doing anything.

To make an isotype pictogram version of the health records breaches waffle chart, you can do the following:

library(waffle)
library(extrafont)
parts <- c(`Un-breached\nUS Population`=(318-11-79), `Premera`=11, `Anthem`=79)
waffle(parts/10, rows=3, colors=c("#969696", "#1879bf", "#009bda"),
       use_glyph="medkit", glyph_size=8)

[Figure: isotype pictogram version of the health records breaches waffle chart]

So, please kick the tyres, post comments about your font successes & woes and definitely link to any isotype pictograms you create.

Vis expert Naomi Robbins did an excellent [critique](http://www.forbes.com/sites/naomirobbins/2015/03/19/color-problems-with-figures-from-the-jerusalem-post/) of the [graphics](http://www.jpost.com/Israel-Elections/Analysis-The-Israel-election-decided-by-one-vote-394229) that went along with an article on Israeli election in the Jerusalem Post.

Non-uniform categorical colors, color-blind-unfriendly palettes and disproportionate arc sizes are three substantial issues in that series of visualizations. We can rectify all of them with two new packages of mine: [waffle](http://github.com/hrbrmstr/waffle) & [adobecolor](http://github.com/hrbrmstr/adobecolor). The former provides a good alternative to pie charts (no charts at all are a good alternative to pie charts) and the latter makes it possible to share color palettes without passing around long strings of hex-encoded colors.

Using [XScope](http://xscopeapp.com/) I encoded a color-blind-friendly palette from [Brian Connelly](http://bconnelly.net/2013/10/creating-colorblind-friendly-figures/) and saved the palette off as an Adobe Color file (`ACO`). I then took the values from the charts and mapped each party to a particular color. Then I made ordered and proportional waffle charts using the values and aligned colors. The results are below:

# install.packages("waffle")
# devtools::install_github("hrbrmstr/swatches")
 
library(waffle)
library(swatches)
 
national_unity <- c(`Zionist Union (27)`=27,
                    `Likud (27)`=27,
                    `Kulanu (10)`=10,
                    `Shas (7)`=7,
                    `UTJ (6)`=6)
 
right_wing <- c(`Likud (27)`=27,
                `Kulanu (10)`=10,
                `Bayit Yehudi (8)`=8,
                `Shas (7)`=7,
                `UTJ (6)`=6,
                `Yisrael Beytenu (5)`=5)
 
herzog_led <- c(`Zionist Union (27)`=27,
                `Kulanu (10)`=10,
                `Shas (7)`=7,
                `UTJ (6)`=6,
                `Meretz (5)`=5)
 
party_colors <- rev(read_aco("http://rud.is/dl/israel.aco"))
 
zion <- party_colors[1]
likud <- party_colors[2]
kulanu <- party_colors[3]
shas <- party_colors[4]
utj <- party_colors[5]
visrael <- party_colors[6]
meretz <- party_colors[7]
bayit <- party_colors[6]
 
nw <- waffle(national_unity, rows=5,
             colors=c(zion, likud, kulanu, shas, utj),
             title="\nNational unity government") +
  theme(plot.title=element_text(size=12, face="bold"))
 
rw <- waffle(right_wing, rows=5,
             colors=c(likud, kulanu, bayit, shas, utj, visrael),
             title="\nRight Wing", pad=3) +
  theme(plot.title=element_text(size=12, face="bold"))
 
hw <- waffle(herzog_led, rows=5,
             colors=c(zion, kulanu, shas, utj, meretz),
             title="\nHerzog led", pad=5) +
  theme(plot.title=element_text(size=12, face="bold"))
 
iron(nw, rw, hw)

[Figure: waffle charts of the three Israeli coalition scenarios]

If I knew my audience did not have color processing issues, I’d use a better palette. Regardless, these results are far better than the careless pies presented in the original story. The squares represent the same quantities in each chart and the colors also map to the parties.

Honestly, though, you could get a better idea with simple, un-tweaked base graphics bar charts:

par(mfrow=c(3,1))
barplot(national_unity, col=c(zion, likud, kulanu, shas, utj), main="National unity government")
barplot(right_wing, col=c(likud, kulanu, bayit, shas, utj, visrael), main="Right Wing")
barplot(herzog_led, col=c(zion, kulanu, shas, utj, meretz), main="Herzog led")

[Figure: base R bar charts of the same three coalition scenarios]

Please consider your readers and the message you’re trying to convey when developing visualizations, especially when you have as large an audience as the Jerusalem Post.