
Mozilla released the [MetricsGraphics.js library](http://metricsgraphicsjs.org/) back in November of 2014 ([gh repo](https://github.com/mozilla/metrics-graphics)) and it was greeted with great fanfare. Its primary focus is on crisp, clean layouts for interactive time-series data, but it has support for other chart types as well (though said support is far from comprehensive).

I had been pondering building an R package to help generate these charts when Ramnath Vaidyanathan, Kenton Russell & JJ Allaire came up with the insanely awesome [htmlwidgets](http://www.htmlwidgets.org/) R package, which is the best javascript<->R bridge to date. Here’s a quick take on how to make a basic line chart before going into some package (and MetricsGraphics) details:

library(metricsgraphics)
 
tmp <- data.frame(year=seq(1790, 1970, 10), uspop=as.numeric(uspop))
 
tmp %>%
  mjs_plot(x=year, y=uspop) %>%
  mjs_line() %>%
  mjs_add_marker(1850, "Something Wonderful") %>%
  mjs_add_baseline(150, "Something Awful")

Example of a basic MetricsGraphics chart

One of the package goals (which should be evident from the example) is that it had to conform to the new “piping” idiom, made popular through the [magrittr](https://github.com/smbache/magrittr), [ggvis](http://ggvis.rstudio.com/) and [dplyr](http://github.com/dplyr) packages. This made it possible to avoid one function with a ton of parameters and helps break the chart building into logical steps. While it may not have the flexibility of `ggplot2`, you can do some neat things with MetricsGraphics charts, like use multiple lines:

set.seed(1492)
stocks <- data.frame(
  time = as.Date('2009-01-01') + 0:9,
  X = rnorm(10, 0, 1),
  Y = rnorm(10, 0, 2),
  Z = rnorm(10, 0, 4))
 
stocks %>%
  mjs_plot(x=time, y=X, width=500, height=350) %>%
  mjs_line() %>%
  mjs_add_line(Y) %>%
  mjs_add_line(Z) %>%
  mjs_axis_x(xax_format="date") %>%
  mjs_add_legend(c("X", "Y", "Z"))

Stocks X, Y & Z over time

and pretty configurable scatterplots:

library(RColorBrewer)
 
mtcars %>%
  mjs_plot(x=wt, y=mpg, width=500, height=350) %>%
  mjs_point(color_accessor=cyl,
            x_rug=TRUE, y_rug=TRUE,
            size_accessor=carb,
            size_range=c(5, 10),
            color_type="category",
            color_range=brewer.pal(n=11, name="RdBu")[c(1, 5, 11)]) %>%
  mjs_labs(x="Weight of Car", y="Miles per Gallon")

Motor Trend Cars – mpg~wt

The `htmlwidgets` developers go into [great detail](http://www.htmlwidgets.org/develop_intro.html) on how to create a widget, but there are some central points I’ll cover and potentially reiterate.

First, use the `htmlwidgets::scaffoldWidget` function to kickstart your project. It’ll set up the essentials and free your time up to work on the interface components. You will need to edit the generated `yaml` file to use the minified javascript files for things like jquery or d3, since Chrome will be unhappy if you don’t.
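
For example, here’s a minimal sketch of kicking off a new widget package (the package name is just a placeholder and I’m assuming `devtools` is installed):

library(devtools)
library(htmlwidgets)
 
create("mywidget")                      # bare-bones package skeleton
setwd("mywidget")
scaffoldWidget("mywidget", edit=FALSE)  # generates R/mywidget.R, inst/htmlwidgets/mywidget.js
                                        # and inst/htmlwidgets/mywidget.yaml (edit this one)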

Next, remember that all you’re doing is building an R object with data to be passed into a javascript function/environment. MetricsGraphics made this a bit easier for me since the main graphic configuration is one giant parameter list (take a look at the `metricsgraphics.js` source on github).
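
Here’s a stripped-down sketch of that idea (the names are illustrative only, not the actual `metricsgraphics` source): gather the chart configuration into a single list and hand it to `htmlwidgets::createWidget`, which serializes it for the javascript binding:

library(htmlwidgets)
 
# illustrative only: bundle a MetricsGraphics-style config into one list ("x")
# and let createWidget() ship it to the javascript side
mjs_sketch <- function(data, x, y, width=NULL, height=NULL) {
  params <- list(data=data,
                 x_accessor=as.character(substitute(x)),
                 y_accessor=as.character(substitute(y)),
                 chart_type="line")
  createWidget(name="metricsgraphics", x=params,
               width=width, height=height,
               package="metricsgraphics")
}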

Third, if you need to customize the html generation function in the main `packagename_html` file, ensure you pass in `class` to the main `div` element. I was very pleased to discover that you can return a list of HTML elements vs a single one:

metricsgraphics_html <- function(id, style, class, ...) {
  list(tags$div(id = id, class = class, style=style),
       tags$div(id = sprintf("%s-legend", id), class = sprintf("%s-legend", class)))
}

and that may eventually enable support for facet-like functionality without manually creating multiple plots.

Fourth, try to build around the piping idiom. It makes it so much easier to add parameters and manage the data environment.
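
Here’s a sketch of what a pipe-friendly modifier looks like under the hood (a hypothetical helper, not the actual package source): each step takes the widget, tweaks its `x` payload and returns the widget so the next `%>%` can pick it up:

# hypothetical helper: append a baseline spec to the widget's config and
# return the (modified) widget so it keeps flowing through the pipe
mjs_add_baseline_sketch <- function(mjs, value, label) {
  mjs$x$baselines <- c(mjs$x$baselines, list(list(value=value, label=label)))
  mjs
}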

Fifth, use `iframe`s for embedding your visualizations in other documents (like this blog post). It avoids potential namespace collisions and frees you from having to cut/paste HTML from one doc to another.
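
A minimal sketch of that workflow (the file name and `iframe` dimensions are just placeholders): save the widget to a standalone HTML file, then point an `iframe` at it from the host document:

library(metricsgraphics)
library(htmlwidgets)
 
m <- mtcars %>%
  mjs_plot(x=wt, y=mpg) %>%
  mjs_point()
 
saveWidget(m, "mjs_scatter.html", selfcontained=TRUE)
 
# then, in the host document:
# <iframe src="mjs_scatter.html" width="520" height="370" frameborder="0"></iframe>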

And, lastly, remember that you can generate your own `elementId` in the event you need to use it with your javascript visualization library (like I had to).

Currently, `metricsgraphics` is at 0.4.1 and has support for most of the basic chart types along with linking charts (in `Rmd` files). You can install it from the [github repo](https://github.com/hrbrmstr/metricsgraphics) and make sure to file all issues or feature requests there. If you make something with it like @abresler [did](http://asbcllc.com/blog/2015/January/ww2_tanks/), drop a note in the comments!

Now, go forth and wrap some libraries!

Even though it’s still at version `0.4`, the `ggvis` package has quite a bit of functionality and is highly useful for exploratory data analysis (EDA). I wanted to see how geographical visualizations would work under it, so I put together six examples that show how to use various features of `ggvis` for presenting static & interactive cartographic creations. Specifically, the combined exercises demonstrate:

– basic map creation
– basic maps with points/labels
– dynamic choropleths (with various scales & tooltips)
– applying projections and custom color fills (w/tooltips)
– applying projections and projecting coordinates for plotting (w/tooltips that handle missing data well)

If you want to skip the post and head straight to the code you can [head on over to github](https://github.com/hrbrmstr/ggvis-maps), [peruse the R markdown file on RPubs](http://rpubs.com/hrbrmstr/ggvis-maps) or play with the [shiny version](https://hrbrmstr.shinyapps.io/ggvis-maps/). You’ll need that code to actually run any of the snippets below since I’m leaving out some code-cruft for brevity. Also, all the map graphics below were generated by saving the `ggvis` output as PNG files (for best browser compatibility), right from the `ggvis` renderer popup. Click/tap each for a larger version.

### Basic Polygons


Even though we still need the help of `ggplot2`’s `fortify`, it’s pretty straightforward to crank out a basic map in `ggvis`:

maine <- readOGR("data/maine.geojson", "OGRGeoJSON")
 
map <- ggplot2::fortify(maine, region="name")
 
map %>%
  ggvis(~long, ~lat) %>%
  group_by(group, id) %>%
  layer_paths(strokeOpacity:=0.5, stroke:="#7f7f7f") %>%
  hide_legend("fill") %>%
  hide_axis("x") %>% hide_axis("y") %>%
  set_options(width=400, height=600, keep_aspect=TRUE)

The code is very similar to one of the ways we render the same image in `ggplot`. We first read in the shapefile, convert it into a data frame we can use for plotting, group the polygons properly, render them with `layer_paths` and get rid of chart junk. Now, `ggvis` (to my knowledge as of this post) has no equivalent of `coord_map`, so we have to rely on the positioning in the projection and work out the proper `height` and `width` parameters to use with a uniform aspect ratio (`keep_aspect=TRUE`).

>For those not familiar with `ggvis`, the `~` operator lets us tell `ggvis` which columns (or expressions using columns) to map to function parameters, and the `:=` operator just tells it to use a raw, un-scaled value. You can find out more about [why the tilde was chosen](https://github.com/rstudio/ggvis/issues/173) or about the [various other special operators](http://ggvis.rstudio.com/ggvis-basics.html).
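
A tiny illustration of the difference (using `mtcars` just for convenience): `~` maps a column through a scale, while `:=` sets a raw value directly:

library(ggvis)
 
mtcars %>%
  ggvis(~wt, ~mpg) %>%
  layer_points(size=~cyl,        # mapped: cyl goes through a scale (and gets a legend)
               fill:="#7f7f7f")  # set: the raw color value is used as-is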

### Basic Annotations


You can annotate maps in an equally straightforward way.

county_centers <- maine %>%
  gCentroid(byid=TRUE) %>%
  data.frame %>%
  cbind(name=maine$name %>% gsub(" County, ME", "", .) )
 
map %>%
  group_by(group, id) %>%
  ggvis(~long, ~lat) %>%
  layer_paths(strokeWidth:=0.25, stroke:="#7f7f7f") %>%
  layer_points(data=county_centers, x=~x, y=~y, size:=8) %>%
  layer_text(data=county_centers,
             x=~x+0.05, y=~y, text:=~name,
             baseline:="middle", fontSize:=8) %>%
  hide_legend("fill") %>%
  hide_axis("x") %>% hide_axis("y") %>%
  set_options(width=400, height=600, keep_aspect=TRUE)

>Note that the `group_by` works either before or after the `ggvis` call. Consistent pipe idioms FTW!

Here, we’re making a data frame out of the county centroids and names, then using that in calls to `layer_points` and `layer_text`. Note how you can change the data source for each layer and use expressions, just like in `ggplot` (we moved the text just slightly to the right of the dot).

>Since `ggvis` outputs [vega](http://trifacta.github.io/vega/) and uses [D3](http://d3js.org/) for rendering, you should probably take a peek at those frameworks as it will help you understand the parameter name differences between `ggvis` and `ggplot`.

### Basic Choropleths


There are actually two examples of this basic state choropleth in the code, but one just uses a different color scale, so I’ll just post the code for one here. This is also designed for interactivity (it has tooltips and lets you change the fill variable) so you should run it locally or look at the [shiny version](https://hrbrmstr.shinyapps.io/ggvis-maps/).

# read in some crime & population data for maine counties
me_pop <- read.csv("data/me_pop.csv", stringsAsFactors=FALSE)
me_crime <- read.csv("data/me_crime.csv", stringsAsFactors=FALSE)
 
# get it into a form we can use (and only use 2013 data)
 
crime_1k <- me_crime %>%
  filter(year==2013) %>%
  select(1,5:12) %>%
  left_join(me_pop) %>%
  mutate(murder_1k=1000*(murder/population_2010),
         rape_1k=1000*(rape/population_2010),
         robbery_1k=1000*(robbery/population_2010),
         aggravated_assault_1k=1000*(aggravated_assault/population_2010),
         burglary_1k=1000*(burglary/population_2010),
         larceny_1k=1000*(larceny/population_2010),
         motor_vehicle_theft_1k=1000*(motor_vehicle_theft/population_2010),
         arson_1k=1000*(arson/population_2010))
 
# normalize the county names
 
map %<>% mutate(id=gsub(" County, ME", "", id)) %>%
  left_join(crime_1k, by=c("id"="county"))
 
# this is for the tooltip. it does a lookup into the crime data frame and
# then uses those values for the popup
 
crime_values <- function(x) {
  if(is.null(x)) return(NULL)
  y <- me_crime %>% filter(year==2013, county==x$id) %>% select(1,5:12)
  sprintf("<table width='100%%'>%s</table>",
          paste0("<tr><td style='text-align:left'>", names(y),
         ":</td><td style='text-align:right'>", format(y), collapse="</td></tr>"))
}
 
map %>%
  group_by(group, id) %>%
  ggvis(~long, ~lat) %>%
  layer_paths(fill=input_select(label="Crime:",
                                choices=crime_1k %>%
                                  select(ends_with("1k")) %>%
                                  colnames %>% sort,
                                id="Crime",
                                map=as.name),
              strokeWidth:=0.5, stroke:="white") %>%
  scale_numeric("fill", range=c("#bfd3e6", "#8c6bb1" ,"#4d004b")) %>%
  add_tooltip(crime_values, "hover") %>%
  add_legend("fill", title="Crime Rate/1K Pop") %>%
  hide_axis("x") %>% hide_axis("y") %>%
  set_options(width=400, height=600, keep_aspect=TRUE)

You can omit the `input_select` bit if you just want to do a single choropleth (just map `fill` to a single variable). The `input_select` tells `ggvis` to make a minimal bootstrap sidebar-layout scaffold around the actual graphic to enable variable interaction. In this case we let the user explore different types of crimes (per 1K population) and we also have a tooltip that shows the counts for each crime in each county as we hover.
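
For instance, a static murder-rate choropleth (just a sketch, reusing the `map` data frame and color range from above) would swap the `input_select` for a plain mapping:

map %>%
  group_by(group, id) %>%
  ggvis(~long, ~lat) %>%
  layer_paths(fill=~murder_1k, strokeWidth:=0.5, stroke:="white") %>%
  scale_numeric("fill", range=c("#bfd3e6", "#8c6bb1", "#4d004b")) %>%
  add_legend("fill", title="Murders/1K Pop") %>%
  hide_axis("x") %>% hide_axis("y") %>%
  set_options(width=400, height=600, keep_aspect=TRUE)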

### Projections and Custom Colors


We’re pretty much (mostly) re-creating a [previous post](http://rud.is/b/2014/11/16/moving-the-earth-well-alaska-hawaii-with-r/) in this example and making a projected U.S. map with drought data (as of 2014-12-23).

us <- readOGR("data/us.geojson", "OGRGeoJSON")
us <- us[!us$STATEFP %in% c("02", "15", "72"),]
 
# same method to change the projection
 
us_aea <- spTransform(us, CRS("+proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +a=6370997 +b=6370997 +units=m +no_defs"))
 
map <- ggplot2::fortify(us_aea, region="GEOID")
 
droughts <- read.csv("data/dm_export_county_20141223.csv")
droughts$id <- sprintf("%05d", as.numeric(as.character(droughts$FIPS)))
droughts$total <- with(droughts, (D0+D1+D2+D3+D4)/5)
 
map_d <- merge(map, droughts, all.x=TRUE)
 
# pre-make custom colors per county
 
ramp <- colorRampPalette(c("white", brewer.pal(n=9, name="YlOrRd")), space="Lab")
 
map_d$fill_col <- as.character(cut(map_d$total, seq(0,100,10), include.lowest=TRUE, labels=ramp(10)))
map_d$fill_col <- ifelse(is.na(map_d$fill_col), "#FFFFFF", map_d$fill_col)
 
drought_values <- function(x) {
  if(is.null(x) | !(x$id %in% droughts$id)) return(NULL)
  y <- droughts %>% filter(id==x$id) %>% select(1,3,4,6:10)
  sprintf("<table width='100%%'>%s</table>",
          paste0("<tr><td style='text-align:left'>", names(y),
         ":</td><td style='text-align:right'>", format(y), collapse="</td></tr>"))
}
 
map_d %>%
  group_by(group, id) %>%
  ggvis(~long, ~lat) %>%
  layer_paths(fill:=~fill_col, strokeOpacity := 0.5, strokeWidth := 0.25) %>%
  add_tooltip(drought_values, "hover") %>%
  hide_legend("fill") %>%
  hide_axis("x") %>% hide_axis("y") %>%
  set_options(width=900, height=600, keep_aspect=TRUE)

It’s really similar to the previous code (and you may/should be familiar with the Albers transform from the previous post).

### World Domination


world <- readOGR("data/ne_50m_admin_0_countries.geojson", layer="OGRGeoJSON")
world <- world[!world$iso_a3 %in% c("ATA"),]
world <- spTransform(world, CRS("+proj=wintri"))
 
map_w <- ggplot2::fortify(world, region="iso_a3")
 
# really quick way to get coords from a KML file
 
launch_sites <- rbindlist(lapply(ogrListLayers("data/launch-sites.kml")[c(1:2, 4:9)], function(layer) {
  tmp <- readOGR("data/launch-sites.kml", layer)
  places <- data.table(coordinates(tmp)[,1:2], as.character(tmp$Name))
}))
setnames(launch_sites, colnames(launch_sites), c("lon", "lat", "name"))
 
# now, project the coordinates we extracted
 
coordinates(launch_sites) <-  ~lon+lat
launch_sites <- as.data.frame(SpatialPointsDataFrame(spTransform(
  SpatialPoints(launch_sites, CRS("+proj=longlat")), CRS("+proj=wintri")),
  launch_sites@data))
 
map_w %>%
  group_by(group, id) %>%
  ggvis(~long, ~lat) %>%
  layer_paths(fill:="#252525", stroke:="white", strokeOpacity:=0.5, strokeWidth:=0.25) %>%
  layer_points(data=launch_sites, 
               x=~lon, y=~lat, 
               fill:="#cb181d", stroke:="white", 
               size:=25, fillOpacity:=0.5, strokeWidth:=0.25) %>%
  hide_legend("fill") %>%
  hide_axis("x") %>% hide_axis("y") %>%
  set_options(width=900, height=500, keep_aspect=TRUE)

The main difference in this example is the re-projection of the data we’re using. I grabbed a KML file of rocket launch sites from Wikipedia, made it into a data frame and then [re]projected those points into Winkel-Tripel for use with the Winkel-Tripel world map made at the beginning of the example. `ggplot`’s `coord_map` handles these transforms for you, so until there’s a `ggvis` equivalent you’ll need to do it this way (though there’s no Winkel-Tripel projection in the `mapproject` package, so you kind of need to do it this way for `ggplot` as well).

### Wrapping Up

There’s code up on github for the “normal”, `Rmd` and Shiny versions of these examples. Give each a go and try tweaking various parameters, changing up the tooltips or using your own data. Don’t forget to drop a note in the comments with any of your creations and use github for any code issues.

@awpiii and I were trading news about the power outages in Maine & New Hampshire last night and he tweeted the link to the @PSNH [Outage Map](http://www.psnh.com/outage/). As if the Bing Maps tiles weren’t bad enough, the use of a categorical color scale instead of a sequential one[[1](http://earthobservatory.nasa.gov/blogs/elegantfigures/2011/05/20/qualitative-vs-sequential-color-scales/)] caused sufficient angst that I whipped up an alternate version in R between making pies and bread for Thanksgiving (even with power being out for us).

PSNH provides a text version of outages (by town) that ends up being a pretty clean HTML table, and a quick Google search led me to a fairly efficient town-level [shapefile](http://www.mass.gov/anf/research-and-tech/it-serv-and-support/application-serv/office-of-geographic-information-massgis/datalayers/adjacent-states-town-boundaries.html) for New Hampshire. With these data files at the ready, it was time to make a better map.

**Step 0 – Environment Setup**

So, I lied. There are six steps. “5” just works way better in attention-grabbing list headlines. The first one is setting up the project and loading all the libraries we’ll need. I use RStudio for most of my R coding and their IDE has the concept of a “project”, which has its own working directory, workspace, history, and source documents separate from any other RStudio windows you have open. Projects are a great way to organize your analyses and experiments. I have my own “new project” [script](http://rud.is/dl/newprj.sh) that sets up additional directory structures, configures the `Rproj` file with my preferences and initializes a git repository for the project.

I also use the setup step to load up a ggplot2 map theme I keep in a gist.

library(sp)
library(rgdal)
library(dplyr)
library(rvest)
library(stringi)
library(scales)
library(RColorBrewer)
library(ggplot2)
 
# for theme_map
devtools::source_gist("https://gist.github.com/hrbrmstr/33baa3a79c5cfef0f6df")

**Step 1 – Read in the map**

This is literally a one-liner:

nh <- readOGR("data/nhtowns/NHTOWNS_POLY.shp", "NHTOWNS_POLY")

My projects all have a `data` directory and that’s where I normally store shapefiles. I used `ogrinfo -al NHTOWNS_POLY.shp` at the command line to determine the layer name.

**Step 2 – Read in the outage data**

The `rvest` package is nothing short of amazing. It makes very quick work of web scraping and—despite some newlines in the mix—this qualifies as a one-liner in my book:

outage <- html("http://www.psnh.com/outagelist/") %>%
  html_nodes("table") %>%
  html_table() %>%
  .[[1]]

That bit of code grabs the whole page, extracts all the HTML tables (there is just one in this example), turns it into a list of data frames and then returns the first one.

**Step 3 – Data wrangling**

While this step is definitely not as succinct as the two previous ones, it’s pretty straightforward:

outage <- outage[complete.cases(outage),]
colnames(outage) <- c("id", "total_customers", "without_power", "percentage_out")
outage$id <- stri_trans_totitle(outage$id)
outage$out <- cut(outage$without_power,
    breaks=c(0, 25, 100, 500, 1000, 5000, 10000, 20000, 40000),
    labels=c("1 - 25", "26 - 100", "101 - 500", "501 - 1,000",
             "1,001 - 5,000", "5,001- 10,000", "10,001 - 20,000",
             "20,001 - 40,000"))

We filter out the `NA`’s (this expunges the “total” row), rename the columns, convert the town name to the same case used in the shapefile (NOTE: I could have just `toupper`ed all the town names, but I really like this function from the `stringi` package) and then use `cut` to make an 8-level factor out of the customer outage count (to match the PSNH map legend).

**Step 4 – Preparing the map for plotting with `ggplot`**

This is another one-liner:

nh_map <- fortify(nh, region="NAME")

and makes it possible to use the town names when specifying the polygon regions we want to fill with our spiffy color scheme.

**Step 5 – Plotting the map**

It is totally possible to do this in one line, but many kittens will lose their lives if you do. I like this way of structuring the creation of a `ggplot` graphic since it makes it very easy to comment out or add various layers or customizations without worrying about stray `+` signs.

gg <- ggplot(data=nh_map, aes(map_id=id))
gg <- gg + geom_map(map=nh_map, aes(x=long, y=lat),
                    color="#0e0e0e", fill="white", size=0.2)
gg <- gg + geom_map(data=outage, map=nh_map, aes(fill=out),
                    color="#0e0e0e", size=0.2)
gg <- gg + scale_fill_brewer(type="seq", palette="RdPu",
                             name="Number of\ncustomer outages\nin each town")
gg <- gg + coord_equal()
gg <- gg + labs(title=sprintf("%s Total PSNH Customers Without Power",
                              comma(sum(outage$without_power))))
gg <- gg + theme_map()
gg <- gg + theme(legend.position="right")
gg

That sequence starts the base `ggplot` object creation, sets up the base map colors and then overlays the town outage colors. We use the `RdPu` [Color Brewer](http://colorbrewer2.org/) sequential palette and give the legend the same title as the PSNH counterpart.

The shapefile is already projected (Lambert Conformal Conic—take a look at it with `ogrinfo -al`), so we can get away with using `coord_equal` vs re-projecting it, and we do a tally of outages to stick in the title. My base `theme_map` is designed for Maine, hence the extra `theme` call to move the legend.

**The Finished Product**

Crisp SVG polygons, no cluttered Bing Maps tiles and a proper color palette.

![img](http://rud.is/dl/psnh.svg)

All the code is [up on github](https://github.com/hrbrmstr/psnh).

UPDATE: You can now run this as a local Shiny app by entering `shiny::runGist("95ec24c1b0cb433a76a5", launch.browser=TRUE)` at an R prompt (provided all the dependent libraries (below) are installed) or use it interactively over at Shiny Apps.

The impending arrival of the first real snowfall of the year in my part of Maine got me curious about what the most likely “first snow” dates are for my region. The U.S. Historical Climatology Network (USHCN) maintains [historical daily climate records](http://cdiac.ornl.gov/epubs/ndp/ushcn/daily_doc.html) for each station in each state and has data (for some stations) going back as far as the 1800’s. A quick look at their [data files](http://cdiac.ornl.gov/ftp/ushcn_daily/) indicated that they would definitely help satiate my curiosity (and make for a late night of cranking out some R code and ggplot visualizations).

To start, we’ll need a bit more than base R to get the job done:

library(pbapply)
library(data.table)
library(dplyr)
library(ggplot2)
library(grid)
library(gridExtra)
library(stringi)

In all honesty, `pbapply`, `dplyr` and `stringi` are not necessary, but they definitely make life easier by (respectively) giving us:

– free progress bars for `*apply` operations,
– highly effective data manipulation idioms, and
– a handy utility for converting strings to title case (quick example just after this list).
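
That last one is a quick win since the USHCN station names are all upper case:

library(stringi)
 
stri_trans_totitle("PORTLAND JETPORT")
## [1] "Portland Jetport"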

With setup out of the way, the first real task is to see which observer station is closest to my area. To figure that out we need to read in the station data file which is, sadly, in fixed-width format. Some stations have `#` characters in their titles, so we have to account for that when we call `read.fwf`. After reading in the station database we use a naive-but-usable distance calculation to find the closest station:

stations <- read.fwf("data/ushcn-stations.txt",
                     widths=c(6, 9, 10, 7, 3, 31, 7, 7, 7, 3),
                     col.names=c("coop_id", "latitude", "longitude", "elevation",
                                 "state", "name", "component_1", "component_2",
                                 "component_3", "utc_offset"),
                     colClasses=c("character", "numeric", "numeric", "numeric",
                                  "character", "character", "character", "character",
                                  "character", "character"),
                     comment.char="", strip.white=TRUE)
 
# not a great circle, but it gets the job done here
closestStation <- function(stations, lat, lon) {
  index <- which.min(sqrt((stations$latitude-lat)^2 +
                          (stations$longitude-lon)^2))
  stations[index,]
}
 
# what's the closest station?
closestStation(stations, 43.2672, -70.8617)
 
##     coop_id latitude longitude elevation state   name component_1 component_2 component_3 utc_offset
## 633  272174    43.15    -70.95      24.4    NH DURHAM      ------      ------      ------         +5

As a Mainer, I’m not thrilled that this is the actual closest station, so we’ll also see what the closest one is in Maine:

closestStation(stations %>% filter(state=="ME"), 43.2672, -70.8617)
##    coop_id latitude longitude elevation state             name component_1 component_2 component_3 utc_offset
## 10  176905  43.6497  -70.3003      13.7    ME PORTLAND JETPORT      ------      ------      ------         +5

The analysis is easy enough to do for both, so we’ll first take a look at Durham, New Hampshire then do the exact same evaluation for Portland, Maine.

Despite being fixed-width, the station database was not too difficult to wrangle. The state-level files that contain the readings are another matter:

| Variable | Columns | Type |
|----------|---------|-----------|
| COOP ID | 1-6 | Character |
| YEAR | 7-10 | Integer |
| MONTH | 11-12 | Integer |
| ELEMENT | 13-16 | Character |
| VALUE1 | 17-21 | Integer |
| MFLAG1 | 22 | Character |
| QFLAG1 | 23 | Character |
| SFLAG1 | 24 | Character |
| VALUE2 | 25-29 | Integer |
| MFLAG2 | 30 | Character |
| QFLAG2 | 31 | Character |
| SFLAG2 | 32 | Character |
| … | … | … |
| VALUE31 | 257-261 | Integer |
| MFLAG31 | 262 | Character |
| QFLAG31 | 263 | Character |
| SFLAG31 | 264 | Character |

We have fixed-width, wide-format records with 31 days for each month, which proves the existence of truly insidious people in the world. Rather than use `read.fwf` again, we’ll take a different approach (since we ultimately need the data in long format) and use `readLines` to read in all the records from the NH data file, then filter out everything but snowfall entries from the station we’re interested in.

Next, we setup nested `lapply` calls to build a long data frame from each month then combine them all together into a single data frame:

snow <- readLines("data/state27_NH.txt")
 
snow <- grep("SNOW", snow, value=TRUE)
snow <- grep("^272174", snow, value=TRUE)
 
snow_dat <- rbindlist(pblapply(snow, function(x) {
 
  rbindlist(lapply(1:31, function(i) {
 
    # record format described here:
    # http://cdiac.ornl.gov/ftp/ushcn_daily/data_format.txt
 
    start <- 17 + (i-1)*8
 
    list(coop_id=substr(x, 1, 6),
         date=sprintf("%s-%02d-%02d", substr(x, 7, 10), as.numeric(substr(x, 11, 12)), i),
         element=substr(x, 13, 16),
         value=as.numeric(substr(x, start, start+4)),
         mflag=substr(x, start+5, start+5),
         qflag=substr(x, start+6, start+6),
         sflag=substr(x, start+7, start+7))
 
  }))
 
}))

Now, we’ll clean up the records even further by removing invalid entries (those with a `value` == `-9999`) and convert record dates to actual `Date` objects and filter out invalid dates:

snow_dat <- snow_dat %>% filter(value != -9999)
 
# since the data file has 31 days for each record regardless of whether
# that's valid or not, we take a shortcut to remove invalid dates by doing
# a vectorized Date conversion, then removing records with NA dates
 
snow_dat$date <- as.Date(snow_dat$date)
snow_dat <- snow_dat %>% filter(!is.na(date))
 
# having the year extracted is handy for filtering
snow_dat$year <- format(snow_dat$date, "%Y")

Given that Winter in the U.S. spans two calendar years, we need a way to keep dates in January-May associated with the previous year (yes, that adds an inherent assumption that no first snow is in June, which might not hold true for Alaska). To facilitate this, we’ll convert each date to its corresponding day-of-year value, then add the total number of days in the start year to those values for all months <= May. We really do need to do this, too, since the first snowfall lands in January-March in many states.

snow_dat$doy <- as.numeric(format(snow_dat$date, "%j"))
snow_dat$doy <- ifelse(snow_dat$doy<=180,
                       snow_dat$doy + as.numeric(format(as.Date(sprintf("%s-12-31", snow_dat$year)), "%j")),
                       snow_dat$doy)

Now, the fun begins. We use (mostly) `dplyr` to extract the first snowfall day from each year, then make a dot-line plot from the data:

first <- snow_dat %>%
  filter(value>0) %>%                           # ignore 0 values
  filter(date>=as.Date("1950-01-01")) %>%       # start at 1950 (arbitrary)
  merge(stations, by="coop_id", all.x=TRUE) %>% # merge station details
  group_by(coop_id, year) %>%                   # group by station and year
  arrange(doy) %>%                              # sort by our munged day of year
  filter(row_number(doy) == 1) %>%              # grab the first entry by group
  select(name, state, date, value, doy)         # we only need some variables
 
title_1 <- sprintf("First observed snowfall (historical) at %s, %s", stri_trans_totitle(unique(first$name)), unique(first$state))
 
gg <- ggplot(first, aes(y=year, x=doy))
gg <- gg + geom_segment(aes(xend=min(first$doy)-20, yend=year), color="#9ecae1", size=0.25)
gg <- gg + geom_point(aes(color=coop_id), shape=8, size=3, color="#3182bd")
gg <- gg + geom_text(aes(label=format(date, "%b-%d")), size=3, hjust=-0.2)
gg <- gg + scale_x_continuous(expand=c(0, 0), limits=c(min(first$doy)-20, max(first$doy)+20))
gg <- gg + labs(x=NULL, y=NULL, title=title_1)
gg <- gg + theme_bw()
gg <- gg + theme(legend.position="none")
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(panel.border=element_blank())
gg <- gg + theme(axis.ticks.x=element_blank())
gg <- gg + theme(axis.ticks.y=element_blank())
gg <- gg + theme(axis.text.x=element_blank())
gg <- gg + theme(axis.text.y=element_text(color="#08306b"))
by_year <- gg

While that will help us see the diversity across years, we have to do quite a bit of eye tracking to get the most likely date range for the first snowfall, so we’ll add a boxplot into the mix and use `summary` to figure out the quartiles (for labeling the chart) for the actual date values:

wx_range <- summary(as.Date(format(first$date, "2013-%m-%d")))
names(wx_range) <- NULL
min_wx <- gsub("2013-", "", wx_range[2])
max_wx <- gsub("2013-", "", wx_range[5])
 
title_2 <- sprintf("Most likely first snowfall will be between %s & %s", min_wx, max_wx)
 
gg <- ggplot(first %>% mutate(name="0000"), aes(name, doy))
gg <- gg + geom_boxplot(fill="#3182bd", color="#08306b", outlier.colour="#08306b")
gg <- gg + scale_y_continuous(expand=c(0, 0),
                              limits=c(min(first$doy)-20, max(first$doy)+20))
gg <- gg + coord_flip()
gg <- gg + labs(x=NULL, y=NULL, title=title_2)
gg <- gg + theme_bw()
gg <- gg + theme(legend.position="none")
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(panel.border=element_blank())
gg <- gg + theme(axis.ticks.x=element_blank())
gg <- gg + theme(axis.text.x=element_blank())
gg <- gg + theme(axis.ticks.y=element_line(color="white"))
gg <- gg + theme(axis.text.y=element_text(color="white"))
gg <- gg + theme(plot.title=element_text(size=11))
box_wx <- gg

Finally, we’ll combine them together to get the finished product:

grid.arrange(by_year, box_wx, nrow=2, heights=unit(c(0.9, 0.1), "npc"))

First observed snowfall at Durham, NH (combined chart)

And, do the same for Portland:

First observed snowfall at Portland, ME (combined chart; click for larger)

There are many more analyses and visualizations that can be performed on these data sets, but be wary when creating them as I’ve seen a few files with fixed-width formatting errors and have also noticed missing records for some observer stations.

You can find the complete, commented code up on [github](https://github.com/hrbrmstr/snowfirst).

In a previous post we looked at how to use D3 TopoJSON files with R and make some very D3-esque maps. I mentioned that one thing missing was moving Alaska & Hawaii a bit closer to the continental United States and this post shows you how to do that.

The D3 folks have it easy. They just use the built in modified Albers composite projection. We R folk have to roll up our sleeves a bit (but not much) to do the same. Thankfully, we can do most of the work with the elide (“ih lied”) function from the maptools package.

We’ll start with some package imports:

library(maptools)
library(mapproj)
library(rgeos)
library(rgdal)
library(RColorBrewer)
library(ggthemes)
library(ggplot2)

I’m using a GeoJSON file that I made from the 2013 US Census shapefile. I prefer GeoJSON mostly due to it being a single file and the easy conversion to TopoJSON if I ever need to use the same map in a D3 context (I work with information security data most of the time, so I rarely have to use maps at all for the day job). I simplified the polygons a bit (passing -simplify 0.01 to ogr2ogr) to reduce processing time.
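
In case it’s helpful, the conversion/simplification step looks something like this (a sketch with assumed input/output file names; `-simplify` takes a tolerance in the layer’s coordinate units and requires the GDAL command-line tools):

# assumed file names: convert + simplify the Census county shapefile to GeoJSON
system("ogr2ogr -f GeoJSON -simplify 0.01 data/us.geojson cb_2013_us_county_20m.shp")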

We read in that file and then transform the projection to Albers equal area and join the polygon ids to the shapefile data frame:

# https://www.census.gov/geo/maps-data/data/cbf/cbf_counties.html
# read U.S. counties moderately-simplified GeoJSON file
us <- readOGR(dsn="data/us.geojson", layer="OGRGeoJSON")

# convert it to Albers equal area
us_aea <- spTransform(us, CRS("+proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +a=6370997 +b=6370997 +units=m +no_defs"))
us_aea@data$id <- rownames(us_aea@data)

Now, to move Alaska & Hawaii, we have to:

  • extract them from the main shapefile data frame
  • perform rotation, scaling and transposing on them
  • ensure they have the right projection set
  • merge them back into the original spatial data frame

The elide function has parameters for all the direct shape munging, so we’ll just do that for both states. I took a peek at the D3 source code for the Albers projection to get a feel for the parameters. You can tweak those if you want them in other positions or feel the urge to change the Alaska rotation angle.

# extract, then rotate, shrink & move alaska (and reset projection)
# need to use state IDs via # https://www.census.gov/geo/reference/ansi_statetables.html
alaska <- us_aea[us_aea$STATEFP=="02",]
alaska <- elide(alaska, rotate=-50)
alaska <- elide(alaska, scale=max(apply(bbox(alaska), 1, diff)) / 2.3)
alaska <- elide(alaska, shift=c(-2100000, -2500000))
proj4string(alaska) <- proj4string(us_aea)

# extract, then rotate & shift hawaii
hawaii <- us_aea[us_aea$STATEFP=="15",]
hawaii <- elide(hawaii, rotate=-35)
hawaii <- elide(hawaii, shift=c(5400000, -1400000))
proj4string(hawaii) <- proj4string(us_aea)

# remove old states and put new ones back in; note the different order
# we're also removing puerto rico in this example but you can move it
# between texas and florida via similar methods to the ones we just used
us_aea <- us_aea[!us_aea$STATEFP %in% c("02", "15", "72"),]
us_aea <- rbind(us_aea, alaska, hawaii)

Rather than just show the resultant plain county map, we’ll add some data to it. The first example uses US drought data (from November 11th, 2014). Drought conditions are pretty severe in some states, but we’ll just show areas that have any type of drought (levels D0-D4). The color ramp shows the % of drought coverage in each county (you’ll need a browser that can display SVGs to see the maps):

# get some data to show...perhaps drought data via:
# http://droughtmonitor.unl.edu/MapsAndData/GISData.aspx
droughts <- read.csv("data/dm_export_county_20141111.csv")
droughts$id <- sprintf("%05d", as.numeric(as.character(droughts$FIPS)))
droughts$total <- with(droughts, (D0+D1+D2+D3+D4)/5)

# get ready for ggplotting it... this takes a cpl seconds
map <- fortify(us_aea, region="GEOID")

# plot it
gg <- ggplot()
gg <- gg + geom_map(data=map, map=map,
                    aes(x=long, y=lat, map_id=id, group=group),
                    fill="#ffffff", color="#0e0e0e", size=0.15)
gg <- gg + geom_map(data=droughts, map=map, aes(map_id=id, fill=total),
                    color="#0e0e0e", size=0.15)
gg <- gg + scale_fill_gradientn(colours=c("#ffffff", brewer.pal(n=9, name="YlOrRd")),
                                na.value="#ffffff", name="% of County")
gg <- gg + labs(title="U.S. Areas of Drought (all levels, % county coverage)")
gg <- gg + coord_equal()
gg <- gg + theme_map()
gg <- gg + theme(legend.position="right")
gg <- gg + theme(plot.title=element_text(size=16))
gg

U.S. Areas of Drought (all levels, % county coverage)

While that shows Alaska & Hawaii in D3-Albers-style, it would be more convincing if we actually used the FIPS county codes on Alaska & Hawaii, so we’ll riff off the previous post and make a county land-mass area choropleth (since we have the land mass area data available in the GeoJSON file):

gg <- ggplot()
gg <- gg + geom_map(data=map, map=map,
                    aes(x=long, y=lat, map_id=id, group=group),
                    fill="white", color="white", size=0.15)
gg <- gg + geom_map(data=us_aea@data, map=map, aes(map_id=GEOID, fill=log(ALAND)),
                    color="white", size=0.15)
gg <- gg + scale_fill_gradientn(colours=c(brewer.pal(n=9, name="YlGn")),
                                na.value="#ffffff", name="County Land\nMass Area (log)")
gg <- gg + labs(title="U.S. County Area Choropleth (log scale)")
gg <- gg + coord_equal()
gg <- gg + theme_map()
gg <- gg + theme(legend.position="right")
gg <- gg + theme(plot.title=element_text(size=16))
gg

U.S. County Area Choropleth (log scale)

Now, you have one less reason to be envious of the D3 cool kids!

The code & shapefiles are available on github.


The Washington Post did another great story+vis, this time on states [Spending seized assets](http://www.washingtonpost.com/wp-srv/special/investigative/asset-seizures/).

According to their sub-head:

>_Since 2008, about 5,400 police agencies have spent $2.5 billion in proceeds from cash and property seized under federal civil forfeiture laws. Police suspected the assets were linked to crime, although in 81 percent of cases no one was indicted._

Their interactive visualization lets you drill down into each state to examine the spending in each category. Since the WaPo team made the [data available](http://www.washingtonpost.com/wp-srv/special/investigative/asset-seizures/data/all.json) [JSON], I thought it might be interesting to take a look at a comparison across states (i.e. who are the “big spenders” of this seized hoard). Here’s a snippet of the JSON:

{"states": [
  {
  "st": "AK",
  "stn": "Alaska",
  "total": 8470032,
  "cats":
     [{ "weapons": 1649832, 
     "electronicSurv": 402490, 
     "infoRewards": 760730, 
     "travTrain": 848128, 
     "commPrograms": 121664, 
     "salaryOvertime": 776766, 
     "other": 1487613, 
     "commComp": 1288439, 
     "buildImprov": 1134370 }],
  "agencies": [
     {
     "aid": "AK0012700",
     "aname": "Airport Police & Fire Ted Stevens Anch Int'L Arpt",
     "total": 611553,
     "cats":
        [{ "weapons": 214296, "travTrain": 44467, "other": 215464, "commComp": 127308, "buildImprov": 10019 }]
     },
     {
     "aid": "AK0010100",
     "aname": "Anchorage Police Department",
     "total": 3961497,
     "cats":
        [{ "weapons": 1104777, "electronicSurv": 94741, "infoRewards": 743230, "travTrain": 409474, "salaryOvertime": 770709, "other": 395317, "commComp": 249220, "buildImprov": 194029 }]
     },

Getting the data was easy (in R, of course!). Let’s setup the packages we’ll need:

library(data.table)
library(dplyr)
library(tidyr)
library(ggplot2)
library(scales)
library(grid)
library(statebins)
library(gridExtra)

We also need `jsonlite`, but only to parse the data (which I’ve downloaded locally), so we’ll just do that in one standalone line:

data <- jsonlite::fromJSON("all.json", simplifyVector=FALSE)

It’s not fair (or valid) to just compare totals since some states have a larger population than others, so we’ll show the data twice, once in raw totals and once with a per-capita lens. For that, we’ll need population data:

pop <- read.csv("http://www.census.gov/popest/data/state/asrh/2013/files/SCPRC-EST2013-18+POP-RES.csv", stringsAsFactors=FALSE)
colnames(pop) <- c("sumlev", "region", "divison", "state", "stn", "pop2013", "pop18p2013", "pcntest18p")
pop$stn <- gsub(" of ", " Of ", pop$stn)

We have to fix the `District of Columbia` since the WaPo data capitalizes the `Of`.

Now we need to extract the agency data. This is really straightforward with some help from the `data.table` package:

agencies <- rbindlist(lapply(data$states, function(x) {
  rbindlist(lapply(x$agencies, function(y) {
    data.table(st=x$st, stn=x$stn, aid=y$aid, aname=y$aname, rbindlist(y$cats))
  }), fill=TRUE)
}), fill=TRUE)

The `rbindlist` `fill` option is super-handy in the event we have varying columns (and we do in this case). It’s also wicked-fast.
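
A tiny example of what `fill=TRUE` buys us (toy rows, just to show the behavior):

library(data.table)
 
a <- data.table(st="AK", weapons=1649832)
b <- data.table(st="AL", weapons=500000, travTrain=25000)
 
# the missing travTrain column for "AK" comes back as NA instead of an error
rbindlist(list(a, b), fill=TRUE)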

Now, we use some `dplyr` and `tidyr` to integrate the population information and summarize our data (OK, `gather` is really just `melt` by another name; some habits are hard to break):

c_st <- agencies %>%
  merge(pop[,5:6], all.x=TRUE, by="stn") %>%
  gather(category, value, -st, -stn, -pop2013, -aid, -aname) %>%
  group_by(st, category, pop2013) %>%
  summarise(total=sum(value, na.rm=TRUE), per_capita=sum(value, na.rm=TRUE)/pop2013) %>%
  select(st, category, total, per_capita)

Let’s use a series of bar charts to compare state against state. We’ll do the initial view with just raw totals. There are 9 charts, so this graphic scrolls a bit and you can select it to make it larger:

# hack to order the bars, by kohske: http://stackoverflow.com/a/5414445/1457051 #####
 
c_st <- transform(c_st, category2=factor(paste(st, category)))
c_st <- transform(c_st, category2=reorder(category2, rank(-total)))
 
# pretty names #####
 
levels(c_st$category) <- c("Weapons", "Travel, training", "Other",
                           "Communications, computers", "Building improvements",
                           "Electronic surveillance", "Information, rewards",
                           "Salary, overtime", "Community programs")
gg <- ggplot(c_st, aes(x=category2, y=total))
gg <- gg + geom_bar(stat="identity", aes(fill=category))
gg <- gg + scale_y_continuous(labels=dollar)
gg <- gg + scale_x_discrete(labels=c_st$st, breaks=c_st$category2)
gg <- gg + facet_wrap(~category, scales = "free", ncol=1)
gg <- gg + labs(x="", y="")
gg <- gg + theme_bw()
gg <- gg + theme(strip.background=element_blank())
gg <- gg + theme(strip.text=element_text(size=15, face="bold"))
gg <- gg + theme(panel.margin=unit(2, "lines"))
gg <- gg + theme(panel.border=element_blank())
gg <- gg + theme(legend.position="none")
gg

Comparison of Spending Category by State (raw totals)


There are definitely a few repeating “big spenders” in that view, but is that the _real_ story? Let’s take another look, but factoring in state population:

# change bar order to match per-capita calculation #####
 
c_st <- transform(c_st, category2=reorder(category2, rank(-per_capita)))
 
# per-capita bar plot #####
 
gg <- ggplot(c_st, aes(x=category2, y=per_capita))
gg <- gg + geom_bar(stat="identity", aes(fill=category))
gg <- gg + scale_y_continuous(labels=dollar)
gg <- gg + scale_x_discrete(labels=c_st$st, breaks=c_st$category2)
gg <- gg + facet_wrap(~category, scales = "free", ncol=1)
gg <- gg + labs(x="", y="")
gg <- gg + theme_bw()
gg <- gg + theme(strip.background=element_blank())
gg <- gg + theme(strip.text=element_text(size=15, face="bold"))
gg <- gg + theme(panel.margin=unit(2, "lines"))
gg <- gg + theme(panel.border=element_blank())
gg <- gg + theme(legend.position="none")
gg

Comparison of Spending Category by State (per-capita)


That certainly changes things! Alaska, West Virginia, and D.C. definitely stand out for “Weapons”, “Other” & “Information”, respectively (what’s Rhode Island hiding in “Other”?!), and the “top 10” in each category are very different from the raw-totals view. We can look at this per-capita view with the `statebins` package as well:

st_pl <- vector("list", 1+length(unique(c_st$category)))
 
j <- 0
for (i in unique(c_st$category)) {
  j <- j + 1
  st_pl[[j]] <- statebins_continuous(c_st[category==i,], state_col="st", value_col="per_capita") +
    scale_fill_gradientn(labels=dollar, colours=brewer.pal(6, "PuBu"), name=i) +
    theme(legend.key.width=unit(2, "cm"))
}
st_pl[[1+length(unique(c_st$category))]] <- list(ncol=1)
 
grid.arrange(st_pl[[1]], st_pl[[2]], st_pl[[3]],
             st_pl[[4]], st_pl[[5]], st_pl[[6]],
             st_pl[[7]], st_pl[[8]], st_pl[[9]], ncol=3)

Per-capita “Statebins” view of WaPo Seizure Data

(Doing this exercise also showed me I need to add some flexibility to the `statebins` package).

The [code in this gist](https://gist.github.com/hrbrmstr/27b8f44f573539dc2971) shows how to build a top-level category data table (along with the rest of the code in this post). I may spin this data up into an interactive D3 visualization in the next week or two (as I think it might work better than large faceted bar charts), so stay tuned!

A huge thank you to the WaPo team for making data available to others. Go forth and poke at it with your own questions and see what you can come up with (perhaps comparing by area of state)!

Markus Gesmann (@MarkusGesmann) did a beautiful job [Visualising the seasonality of Atlantic windstorms](http://www.magesblog.com/2014/10/visualising-seasonality-of-atlantic.html) using small multiples, which was inspired by both a [post](http://freakonometrics.hypotheses.org/17113) by Arthur Charpentier (@freakonometrics) on using Markov spatial processes to “generate” hurricanes—which was [tweaked a bit](http://robertgrantstats.wordpress.com/2014/10/01/transparent-hurricane-paths-in-r/) by Robert Grant (@robertstats)—and [Gaston Sanchez](https://github.com/gastonstat)’s [Visualizing Hurricane Trajectories](http://rpubs.com/gaston/hurricanes) RPub.

I have [some history](http://rud.is/b/2012/10/28/watch-sandy-in-r-including-forecast-cone/) with hurricane data and thought I’d jump on the bandwagon using the same data and making some stop-frame animations. I borrowed from previous work (hence starting with all the credits above) but have used `dplyr` idioms for data-frame filtering & mutating and my own month/year extraction code.

The first animation accumulates storm tracks in-year and displays the names of the storms in a list down the left side while the second does a full historical accumulation of tracks. I changed the storm path gradient but kept most of the other formatting bits and made the plots suitable for 1080p output/playback.

Rather than go the `ffmpeg` route, I used [ImageMagick](http://www.imagemagick.org/) since it makes equally quick work out of converting a bunch of `png` files to an `mp4` file. I made the animations go quickly, but they can be advanced forward/back one frame at a time in any decent player.

library(maps)
library(data.table)
library(dplyr)
library(ggplot2)
library(grid)
library(RColorBrewer)
 
# takes in a numeric vector and returns a sequence from low to high
rangeseq <- function(x, by=1) {
  rng <- range(x)
  seq(from=rng[1], to=rng[2], by=by)
}
 
# extract the months (as a factor of full month names) from
# a date+time "x" that can be converted to a POSIXct object
extractMonth <- function(x) {
  months <- format(as.POSIXct(x), "%m")
  factor(months, labels=month.name[rangeseq(as.numeric(months))])
}
 
# extract the years (as a factor of full 4-character years) from
# a date+time "x" that can be converted to a POSIXct object
extractYear <- function(x) {
  factor(as.numeric(format(as.POSIXct(x), "%Y")))
}
 
# get from: ftp://eclipse.ncdc.noaa.gov/pub/ibtracs/v03r06/all/csv/Allstorms.ibtracs_all.v03r06.csv.gz
storms_file <- "data/Allstorms.ibtracs_all.v03r06.csv"
storms <-  fread(storms_file, skip=10, select=1:18)
 
col_names <- c("Season", "Num", "Basin", "Sub_basin", "Name", "ISO_time", "Nature",
             "Latitude", "Longitude", "Wind.kt", "Pressure.mb", "Degrees_North", "Deegrees_East")
setnames(storms, paste0("V", c(2:12, 17, 18)), col_names)
 
# use dplyr idioms to filter & mutate the data
 
storms <- storms %>%
  filter(Latitude > -999,                                  # remove missing data
         Longitude > -999,
         Wind.kt > 0,
         !(Name %in% c("UNNAMED", "NONAME:UNNAMED"))) %>%
  mutate(Basin=gsub(" ", "", Basin),                       # clean up fields
         ID=paste(Name, Season, sep="."),
         Month=extractMonth(ISO_time),
         Year=extractYear(ISO_time)) %>%
  filter(Season >= 1989, Basin %in% "NA")                  # limit to North Atlantic basin
 
season_range <- paste(range(storms$Season), collapse=" - ")
knots_range <- range(storms$Wind.kt)
 
# setup base plotting parameters (these won't change)
 
base <- ggplot()
base <- base + geom_polygon(data=map_data("world"),
                            aes(x=long, y=lat, group=group),
                            fill="gray25", colour="gray25", size=0.2)
base <- base + scale_color_gradientn(colours=rev(brewer.pal(n=9, name="RdBu")),
                                     space="Lab", limits=knots_range)
base <- base + xlim(-138, -20) + ylim(3, 55)
base <- base + coord_map()
base <- base + labs(x=NULL, y=NULL, title=NULL, colour = "Wind (knots)")
base <- base + theme_bw()
base <- base + theme(text=element_text(family="Arial", face="plain", size=rel(5)),
                     panel.background = element_rect(fill = "gray10", colour = "gray30"),
                     panel.margin = unit(c(0,0), "lines"),
                     panel.grid.major = element_blank(),
                     panel.grid.minor = element_blank(),
                     plot.margin = unit(c(0,0,0,0), "lines"),
                     axis.text.x = element_blank(),
                     axis.text.y = element_blank(),
                     axis.ticks = element_blank(),
                     legend.position = c(0.25, 0.1),
                     legend.background = element_rect(fill="gray10", color="gray10"),
                     legend.text = element_text(color="white", size=rel(2)),
                     legend.title = element_text(color="white", size=rel(5)),
                     legend.direction = "horizontal")
 
# loop over each year, producing plot files that accumulate tracks over each month
 
for (year in unique(storms$Year)) {
 
  storm_ids <- unique(storms[storms$Year==year,]$ID)
 
  for (i in 1:length(storm_ids)) {
 
    storms_yr <- storms %>% filter(Year==year, ID %in% storm_ids[1:i])
 
    # stuff takes a while, so it's good to have a progress message
    message(sprintf("%s %s", year, storm_ids[i]))
 
    gg <- base
    gg <- gg + geom_path(data=storms_yr,
                         aes(x=Longitude, y=Latitude, group=ID, colour=Wind.kt),
                         size=1.0, alpha=1/4)
    gg <- gg + geom_text(label=year, aes(x=-135, y=51), size=rel(6), color="white", vjust=1)
    gg <- gg + geom_text(label=paste(gsub(".[[:digit:]]+$", "", storm_ids[1:i]), collapse="\n"),
                         aes(x=-135, y=49.5), size=rel(4.5), color="white", vjust=1)
 
    # change "quartz" to "cairo" if you're not on OS X
 
    png(filename=sprintf("output/%s%03d.png", year, i),
        width=1920, height=1080, type="quartz", bg="gray25")
    print(gg)
    dev.off()
 
  }
 
}
 
# convert to mp4 animation - needs imagemagick
system("convert -delay 8 output/*png output/hurr-1.mp4")
# unlink("output/*png") # do this after verifying convert works

# take an alternate approach for accumulating the entire hurricane history
# start with the base, but add to the ggplot object in a loop, which will
# accumulate all the tracks.
 
gg <- base
 
for (year in unique(storms$Year)) {
 
  storm_ids <- unique(storms[storms$Year==year,]$ID)
 
  for (i in 1:length(storm_ids)) {
 
    storms_yr <- storms %>% filter(ID %in% storm_ids[i])
 
    message(sprintf("%s %s", year, storm_ids[i]))
    gg <- gg + geom_path(data=storms_yr,
                         aes(x=Longitude, y=Latitude, group=ID, colour=Wind.kt),
                         size=1.0, alpha=1/4)
 
    png(filename=sprintf("output/%s%03d.png", year, i),
        width=1920, height=1080, type="quartz", bg="gray25")
    print(gg)
    dev.off()
 
  }
 
}
 
system("convert -delay 8 output/*png output/hurr-2.mp4")
# unlink("output/*png") # do this after verifying convert works

Full code in [this gist](https://gist.github.com/hrbrmstr/23bf06784e898871dd61).