Archive for the ‘R’ Category

Overcoming D3 Cartographic Envy With R + ggplot

When I used one of the Scotland TopoJSON files for a recent post, it really hit me just how much D3 cartography envy I had/have as an R user. Don’t get me wrong, I can conjure up D3 maps pretty well [1] [2] and the utility of an interactive map visualization goes without saying, but we can make great static maps in R without a great deal of effort, so I decided to replicate a few core examples from the D3 topojson gallery in R.

I chose five somewhat different examples, each focusing on various aspects of creating map layers and trying not to be too U.S. focused. Here they are (hit the main link to go to the gist for the example and the bl.ocks URL to see its D3 counterpart):

I used the TopoJSON/GeoJSON files provided with each example, so you’ll need a recent gdal (>= 1.11) and, consequently, a suitable build of rgdal to work through the examples.

The Core Mapping Idiom

While the details may vary with each project you work on, the basic flow to present a map in R with ggplot is (a minimal sketch follows the list):

  • read in the map features (I use readOGR in these examples)
  • convert that into something ggplot can handle
  • identify values you wish to pair with those features (optional if we’re just plotting a plain map)
  • determine which portion of the map is to be displayed
  • plot the map features
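Here’s that flow as a minimal, end-to-end sketch (the file and layer names are placeholders, not from the examples below; substitute your own):

library(rgdal)   # readOGR
library(ggplot2) # fortify, geom_map

# hypothetical file & layer names -- swap in real ones
map <- readOGR("some-map.json", "some-layer")  # read in the map features
map_df <- fortify(map)                         # convert for ggplot

gg <- ggplot()
gg <- gg + geom_map(data=map_df, map=map_df,   # plot the map features
                    aes(map_id=id, x=long, y=lat, group=group),
                    color="white", fill="#bbbbbb", size=0.25)
gg <- gg + coord_map()                         # project the displayed portion
gg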

Words & abbreviations mean things, just like map symbols mean things, and if you’re wondering what this “OGR” is, here’s the answer from the official FAQ:

OGR used to stand for OpenGIS Simple Features Reference Implementation. However, since OGR is not fully compliant with the OpenGIS Simple Feature specification and is not approved as a reference implementation of the spec the name was changed to OGR Simple Features Library. The only meaning of OGR in this name is historical. OGR is also the prefix used everywhere in the source of the library for class names, filenames, etc.

The readOGR function can work with a wide variety of file formats and OGR files can hold a wide variety of data. The most basic use for our mapping is to read in these TopoJSON/GeoJSON files and use the right features from them to make our maps. Features/layers can be almost anything (counties, states, countries, rivers, lakes, etc) and we can see what features we want to work with by using the ogrListLayers function (you can do this from an operating system command line as well, but we’ll stay in R for now). Let’s take a look at the layers available in the map from the Costa Rica example:

ogrListLayers("division.json")
 
## [1] "limites"    "provincias" "cantones"   "distritos" 
attr(,"driver")
## [1] "GeoJSON"
attr(,"nlayers")
## [1] 4

Those translate to “country”, “provinces”, “cantons”, & “districts”. Each layer has polygons and associated data for the polygons (and overall layer), including information about the type of projection. If you’re sensing a “math trigger warning”, fear not; I won’t be delving into too much more cartographic detail as you probably just want to see the maps & code.

Swiss Cantons

If you’re from the U.S. you (most likely) have no idea what a canton is. The quickest explanation is that it is an administrative division within a country and, in this specific example, the 26 cantons of Switzerland are the member states of the federal state of Switzerland.

The D3 Swiss Cantons example uses a TopoJSON/GeoJSON file that has only one layer (i.e. the cantons) along with metadata about the canton id and name:

ogrInfo("readme-swiss.json", "cantons")
 
## Source: "readme-swiss.json", layer: "cantons"
## Driver: GeoJSON number of rows 26 
## Feature type: wkbPolygon with 2 dimensions
## Extent: (5.956 45.818) - (10.492 47.808)
## Number of fields: 2 
##   name type length typeName
## 1   id    4      0   String
## 2 name    4      0   String

NOTE: you should learn to get pretty adept with the OGR functions or command-line tools as you can do some really amazing things with them, including extracting only certain features, simplifying the polygons or fixing issues. Some of the TopoJSON/GeoJSON files you’ll find with D3 examples may have missing or invalid components and you can fix some of them with these tools. We’ll be working around errors and missing values in these examples.
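For instance, you can drive those command-line tools from R with system() (a sketch; ogr2ogr ships with gdal and the output file names here are made up):

# pull just the "provincias" layer out into its own GeoJSON file
system("ogr2ogr -f GeoJSON provincias.json division.json provincias")

# write a simplified copy (tolerance is in the units of the source coordinates)
system("ogr2ogr -f GeoJSON -simplify 0.01 simplified.json division.json provincias")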

The D3 example displays the canton name at the centroid of the polygon, so that’s what we’ll do in R:

library(rgeos)
library(rgdal) # needs gdal > 1.11.0
library(ggplot2)
 
# ggplot map theme
devtools::source_gist("https://gist.github.com/hrbrmstr/33baa3a79c5cfef0f6df")
 
map = readOGR("readme-swiss.json", "cantons")
 
map_df <- fortify(map)

The map object is a SpatialPolygonsDataFrame and has a fairly complex structure:

slotNames(map)
## [1] "data"        "polygons"    "plotOrder"   "bbox"       
## [5] "proj4string"
 
names(map)
## [1] "id"   "name"
 
# execute these on your own and poke around the data structures after determining the class
class(map@data)
class(map@polygons)
class(map@plotOrder)
class(map@bbox)
class(map@proj4string)

The fortify function turns all that into something we can use with ggplot. Normally, we’d be able to get fortify to associate the canton name to the polygon points it encodes via the region parameter. That did not work with these TopoJSON/GeoJSON files and I didn’t really poke around much to determine why since it’s easy enough to work around. In this case, I manually merged the names with the fortified map data frame.

#  create mapping for id # to name since "region=" won't work
dat <- data.frame(id=0:(length(map@data$name)-1), canton=map@data$name)
map_df <- merge(map_df, dat, by="id")

We can get the centroid via the gCentroid function, and we’ll make a data frame of those center points and the name of the canton for use with a geom_text layer after plotting the base outlines of the cantons (with a rather bland fill, but I didn’t pick the color):

# find canton centers
centers <- data.frame(gCentroid(map, byid=TRUE))
centers$canton <- dat$canton
 
# make a map!
gg <- ggplot()
gg <- gg + geom_map(data=map_df, map=map_df,
                    aes(map_id=id, x=long, y=lat, group=group),
                    color="#ffffff", fill="#bbbbbb", size=0.25)
# gg <- gg + geom_point(data=centers, aes(x=x, y=y))
gg <- gg + geom_text(data=centers, aes(label=canton, x=x, y=y), size=3)
gg <- gg + coord_map()
gg <- gg + labs(x="", y="", title="Swiss Cantons")
gg <- gg + theme_map()

The coord_map() works with the mapproj package to help us display maps in reasonable projections (or really dumb ones). The default is "mercator" and we’ll stick with that since the D3 examples use it (but, winkel-tripel FTW!).
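If you want to experiment, coord_map passes extra parameters straight through to mapproj::mapproject. For example (a sketch; the standard parallels are just reasonable picks for Switzerland), an Albers equal-area view instead of mercator:

# use in place of the plain coord_map() call above
gg <- gg + coord_map("albers", lat0=46, lat1=47.5)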

Here’s the result of our hard work (select map for larger version):

If you ignore the exposition above and just take into account non-blank source code lines, we did all that in ~16 LOC and have a scalable SVG file as a result. You can have some fun with the above code: remove the static fill="#bbbbbb", move fill to the mapping aesthetic parameter and tie its value to the canton name.
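That variation looks something like this (a sketch; it just moves fill into aes() and mutes the resulting 26-color legend):

gg <- ggplot()
gg <- gg + geom_map(data=map_df, map=map_df,
                    aes(map_id=id, x=long, y=lat, group=group, fill=canton),
                    color="#ffffff", size=0.25)
gg <- gg + geom_text(data=centers, aes(label=canton, x=x, y=y), size=3)
gg <- gg + coord_map()
gg <- gg + theme_map()
gg <- gg + theme(legend.position="none")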

Costa Rica

The TopoJSON/GeoJSON file provided with the D3 example is a good example of encoding multiple layers into a single file (see the first ogrListLayers above). We’ll create a fortified version of each layer and then plot each with a geom_map layer using different line colors, sizes and fills:

limites = readOGR("division.json", "limites")
provincias = readOGR("division.json", "provincias")
cantones = readOGR("division.json", "cantones")
distritos = readOGR("division.json", "distritos")
 
limites_df <- fortify(limites)
cantones_df <- fortify(cantones)
distritos_df <- fortify(distritos)
provincias_df <- fortify(provincias)
 
gg <- ggplot()
gg <- gg + geom_map(data=limites_df, map=limites_df,
                    aes(map_id=id, x=long, y=lat, group=group),
                    color="white", fill="#dddddd", size=0.25)
gg <- gg + geom_map(data=cantones_df, map=cantones_df,
                    aes(map_id=id, x=long, y=lat, group=group),
                    color="red", fill="#ffffff00", size=0.2)
gg <- gg + geom_map(data=distritos_df, map=distritos_df,
                    aes(map_id=id, x=long, y=lat, group=group),
                    color="#999999", fill="#ffffff00", size=0.1)
gg <- gg + geom_map(data=provincias_df, map=provincias_df,
                    aes(map_id=id, x=long, y=lat, group=group),
                    color="black", fill="#ffffff00", size=0.33)
gg <- gg + coord_map()
gg <- gg + labs(x="", y="", title="Costa Rica TopoJSON")
gg <- gg + theme_map()

The result is pretty neat and virtually identical to the D3 version:

Try playing around with the order of the geom_map layers (or remove some) and also the line color/size/fill & alpha values to see how it changes the map.
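As one starting point (a sketch), drop the distritos layer and give the provinces a translucent fill:

gg <- ggplot()
gg <- gg + geom_map(data=limites_df, map=limites_df,
                    aes(map_id=id, x=long, y=lat, group=group),
                    color="white", fill="#dddddd", size=0.25)
gg <- gg + geom_map(data=provincias_df, map=provincias_df,
                    aes(map_id=id, x=long, y=lat, group=group),
                    color="black", fill="#2b8cbe22", size=0.33)
gg <- gg + coord_map()
gg <- gg + theme_map()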

Area Choropleth

I’m not a huge fan of the colors used in the D3 version and I’m not going to spend any time moving Hawaii & Alaska around (that’s a whole different post). But, I will show how to make a similar area choropleth:

# read in the county borders
map = readOGR("us.json", "counties")
 
# calculate (well retrieve) the area since it's part of the polygon structure
# and associate it with the polygon id so we can use it later. We need to do
# the merge manually again since the "us.json" file threw errors again when
# trying to use the fortify "region" parameter.
 
map_area <- data.frame(id=0:(length(map@data$id)-1),
                       area=sapply(slot(map, "polygons"), slot, "area") )
 
# read in the state borders
states = readOGR("us.json", "states")
states_df <- fortify(states)
 
# create map data frame and merge area info
map_df <- fortify(map)
map_df <- merge(map_df, map_area, by="id")
 
gg <- ggplot()
 
# thin white border around counties and shades of yellow-green for area (log scale)
gg <- gg + geom_map(data=map_df, map=map_df,
                    aes(map_id=id, x=long, y=lat, group=group, fill=log1p(area)),
                    color="white", size=0.05)
 
# thick white border for states
gg <- gg + geom_map(data=states_df, map=states_df,
                    aes(map_id=id, x=long, y=lat, group=group),
                    color="white", size=0.5, alpha=0)
gg <- gg + scale_fill_continuous(low="#ccebc5", high="#084081")
 
# US continental extents - not showing alaska & hawaii
gg <- gg + xlim(-124.848974, -66.885444)
gg <- gg + ylim(24.396308, 49.384358)
 
gg <- gg + coord_map()
gg <- gg + labs(x="", y="", title="Area Choropleth")
gg <- gg + theme_map()
gg <- gg + theme(legend.position="none")

Play with the colors and use different values instead of the polygon area (perhaps use sample or runif to generate some data) to see how it changes the choropleth outcome.
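For example (a sketch), swap the areas for uniform random values before the merge and re-run the plotting code unchanged:

# random values instead of county areas
map_area$area <- runif(nrow(map_area), min=1, max=100)
map_df <- merge(fortify(map), map_area, by="id")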

Blocky Counties

The example from the D3 wiki is more “how to work with shapefiles and map coordinates” than it is useful, but we have the same flexibility in R, so we’ll make the same plot by using the bbox function to make a data frame of bounding boxes we can use with geom_rect (there’s no geom_map in this example, just using the coordinate system to plot boxes):

# use the topojson from the bl.ocks example
map = readOGR("us.json", "counties")
 
# build our map data frame of rects
 
map_df <- do.call("rbind", lapply(map@polygons, function(p) {
 
  b <- bbox(p) # get bounding box of polygon and put it into a form we can use later
 
  data.frame(xmin=b["x", "min"],
             xmax=b["x", "max"],
             ymin=b["y", "min"],
             ymax=b["y", "max"])
 
}))
map_df$id <- map$id # add the id even though we aren't using it now
 
gg <- ggplot(data=map_df)
gg <- gg + geom_rect(aes(xmin=xmin, xmax=xmax,
                         ymin=ymin, ymax=ymax),
                     color="steelblue", alpha=0, size=0.25)
 
# continental us only
gg <- gg + xlim(-124.848974, -66.885444)
gg <- gg + ylim(24.396308, 49.384358)
gg <- gg + coord_map()
gg <- gg + labs(x="", y="", title="Blocky Counties")
gg <- gg + theme_map()
gg <- gg + theme(legend.position="none")

To re-emphasize: we’re just working with ggplot layers, so play around and perhaps color in only the odd-numbered counties.
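Here’s a sketch of that odd-county variant (it assumes the county ids coerce cleanly to numbers):

# flag odd-numbered county ids
map_df$odd <- (as.numeric(as.character(map_df$id)) %% 2) == 1

gg <- ggplot(data=map_df)
gg <- gg + geom_rect(aes(xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax, fill=odd),
                     color="steelblue", size=0.25)
gg <- gg + scale_fill_manual(values=c("white", "steelblue"))
gg <- gg + xlim(-124.848974, -66.885444)
gg <- gg + ylim(24.396308, 49.384358)
gg <- gg + coord_map()
gg <- gg + theme_map()
gg <- gg + theme(legend.position="none")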

County Circles (OK, Ovals)

The last D3 example I’m copying swaps squares for circles, which makes this more of a challenge to do in R+ggplot since ggplot has no “circle” geom (and holey geom_points do not count). So, we’ll borrow and slightly adapt a function from StackOverflow by joran that builds a data frame of polygon points derived from a center & diameter. We’ll add an id value (for each of the counties) and make one really big data frame (well, big for use in ggplot) that we can then plot as grouped geom_paths. Unlike our cantons example, the gCentroid function coughed up errors on this TopoJSON/GeoJSON file, so I resorted to approximating the center from the rectangular bounding box. Also, I don’t project the circle coordinates before plotting, so they’re ovals. While it doesn’t mirror the D3 example perfectly, it should help reinforce how to work with the map’s metadata and draw arbitrary components on a map:

# adapted from http://stackoverflow.com/questions/6862742/draw-a-circle-with-ggplot2
# computes a circle from a given diameter. we add "id" so we can have one big
# data frame and group them for plotting
 
circleFun <- function(id, center = c(0,0),diameter = 1, npoints = 100){
    r = diameter / 2
    tt <- seq(0,2*pi,length.out = npoints)
    xx <- center[1] + r * cos(tt)
    yy <- center[2] + r * sin(tt)
    return(data.frame(id=id, x = xx, y = yy))
}
 
# us topojson from the bl.ocks example
map = readOGR("us.json", "counties")
 
# this topojson file gives rgeos_getcentroid errors here
# so we approximate the centroid
 
map_df <- do.call("rbind", lapply(map@polygons, function(p) {
 
  b <- bbox(p)
 
  data.frame(x=b["x", "min"] + ((b["x", "max"] - b["x", "min"]) / 2),
             y=b["y", "min"] + ((b["y", "max"] - b["y", "min"]) / 2))
 
}))
 
# get area & diameter
map_df$area <- sapply(slot(map, "polygons"), slot, "area")
map_df$diameter <- sqrt(map_df$area / pi) * 2
 
# make our big data frame of circles
circles <- do.call("rbind", lapply(1:nrow(map_df), function(i) {
  circleFun(i, c(map_df[i,]$x, map_df[i,]$y), map_df[i,]$diameter)
}))
 
gg <- ggplot(data=circles, aes(x=x, y=y, group=id))
gg <- gg + geom_path(color="steelblue", size=0.25)
 
# continental us
gg <- gg + xlim(-124.848974, -66.885444)
gg <- gg + ylim(24.396308, 49.384358)
gg <- gg + coord_map()
gg <- gg + labs(x="", y="", title="County Circles (OK, Ovals)")
gg <- gg + theme_map()
gg <- gg + theme(legend.position="none")

If you poke around a bit at the various map libraries in R, you should be able to figure out how to get those plotted as circles (and learn a lot in the process).
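One cheap approximation (a sketch, not a true projection-aware fix): stretch each circle’s longitude offsets by 1/cos(latitude) so the paths render roughly circular under the mercator-ish display:

circleFunAdj <- function(id, center=c(0,0), diameter=1, npoints=100) {
    r <- diameter / 2
    tt <- seq(0, 2*pi, length.out=npoints)
    # a degree of longitude spans cos(latitude) times the distance of a
    # degree of latitude, so widen the x offsets to compensate
    xx <- center[1] + (r / cos(center[2] * pi / 180)) * cos(tt)
    yy <- center[2] + r * sin(tt)
    data.frame(id=id, x=xx, y=yy)
}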

Wrapping Up

R ggplot maps won’t and shouldn’t replace D3 maps for many reasons, paramount of which is interactivity. The generated SVG files are also fairly large and the non-SVG versions don’t look nearly as crisp (and aren’t as flexible). However, this should be a decent introductory primer on mapping and shapefiles and might come in handy when you want to use R to enhance maps with other data and write out (yep, R can read and write OGR) your own shapefiles for use in D3 (or other tools/languages).

Don’t forget that all the source code (including TopoJSON/GeoJSON files and sample SVGs) is in per-example gists.

If you figure out what is causing some of the errors I mentioned or make some epic maps of your own, don’t hesitate to drop a note in the comments.

Seeing the (day)light with R

The arrival of the autumnal equinox foreshadows the reality of longer nights and shorter days here in the northeast US. We can both see that reality and distract ourselves from it at the same time by firing up RStudio (or your favorite editor) and taking a look at the sunrise & sunset times based on our map coordinates using some functions from the R maptools package.

The sunriset function takes in a lat/lon pair, a range of dates and whether we want sunrise or sunset calculated and returns when those ephemeral events occur. For example, we can see the sunrise time for Portsmouth, NH on Christmas day this year (2014) via:

library(maptools)
 
# these functions need the lat/lon in an unusual format
portsmouth <- matrix(c(-70.762553, 43.071755), nrow=1)
for_date <- as.POSIXct("2014-12-25", tz="America/New_York")
sunriset(portsmouth, for_date, direction="sunrise", POSIXct.out=TRUE)
 
##         day_frac                time
## newlon 0.3007444 2014-12-25 07:13:04

We can pass a vector of dates to this function, which means we’ll have data points we can use to visualize the change. Let’s wrap the sequence generation into a function of our own and extract:

  • sunrise
  • sunset
  • solar noon
  • # hours of daylight

for every day in the sequence, returning the result as a data frame.

# adapted from http://r.789695.n4.nabble.com/maptools-sunrise-sunset-function-td874148.html
ephemeris <- function(lat, lon, date, span=1, tz="UTC") {
 
  # convert to the format we need
  lon.lat <- matrix(c(lon, lat), nrow=1)
 
  # make our sequence - using noon gets us around daylight saving time issues
  day <- as.POSIXct(date, tz=tz)
  sequence <- seq(from=day, length.out=span , by="days")
 
  # get our data
  sunrise <- sunriset(lon.lat, sequence, direction="sunrise", POSIXct.out=TRUE)
  sunset <- sunriset(lon.lat, sequence, direction="sunset", POSIXct.out=TRUE)
  solar_noon <- solarnoon(lon.lat, sequence, POSIXct.out=TRUE)
 
  # build a data frame from the vectors
  data.frame(date=as.Date(sunrise$time),
             sunrise=as.numeric(format(sunrise$time, "%H%M")),
             solarnoon=as.numeric(format(solar_noon$time, "%H%M")),
             sunset=as.numeric(format(sunset$time, "%H%M")),
             day_length=as.numeric(sunset$time-sunrise$time))
 
}

Now we can take a look at these values over 10 days near All Hallows Eve:

ephemeris(43.071755, -70.762553, "2014-10-31", 10, tz="America/New_York")
 
##          date sunrise solarnoon sunset day_length
## 1  2014-10-31     716      1226   1736  10.332477
## 2  2014-11-01     717      1226   1734  10.289145
## 3  2014-11-02     518      1026   1533  10.246169
## 4  2014-11-03     620      1126   1632  10.203563
## 5  2014-11-04     621      1126   1631  10.161346
## 6  2014-11-05     622      1126   1629  10.119535
## 7  2014-11-06     624      1126   1628  10.078148
## 8  2014-11-07     625      1126   1627  10.037204
## 9  2014-11-08     626      1126   1626   9.996721
## 10 2014-11-09     627      1126   1625   9.956719

We now have everything we need to visualize the seasonal daylight changes. We’ll use ggplot (with some help from the grid package) and build a two panel graph, one that gives us a “ribbon” view of what hours of the day are in daylight and the other just showing the changes in the total number of hours of daylight available during the day. We’ll build the function so that it will:

  • optionally show the current date/time (TRUE by default)
  • optionally show when solar noon is (FALSE by default)
  • optionally plot the graphs (TRUE by default)
  • return an arrangeGrob of the charts in the event we want to use them in other charts

library(ggplot2)
library(scales)
library(gridExtra)
 
# create two formatter functions for the y-axis displays
 
# for graph #1 y-axis
time_format <- function(hrmn) substr(sprintf("%04d", hrmn),1,2)
 
# for graph #2 y-axis
pad5 <- function(num) sprintf("%2d", num)
 
daylight <- function(lat, lon, place, start_date, span=2, tz="UTC", 
                     show_solar_noon=FALSE, show_now=TRUE, plot=TRUE) {
 
  stopifnot(span>=2) # really doesn't make much sense to plot 1 value
 
  srss <- ephemeris(lat, lon, start_date, span, tz)
 
  x_label = ""
 
  gg <- ggplot(srss, aes(x=date))
  gg <- gg + geom_ribbon(aes(ymin=sunrise, ymax=sunset), fill="#ffeda0")
 
  if (show_solar_noon) gg <- gg + geom_line(aes(y=solarnoon), color="#fd8d3c")
 
  if (show_now) {
    gg <- gg + geom_vline(xintercept=as.numeric(as.Date(Sys.time())), color="#800026", linetype="longdash", size=0.25)
    gg <- gg + geom_hline(yintercept=as.numeric(format(Sys.time(), "%H%M")), color="#800026", linetype="longdash", size=0.25)
    x_label = sprintf("Current Date / Time: %s", format(Sys.time(), "%Y-%m-%d / %H:%M"))
  }
 
  gg <- gg + scale_x_date(expand=c(0,0), labels=date_format("%b '%y"))
  gg <- gg + scale_y_continuous(labels=time_format, limits=c(0,2400), breaks=seq(0, 2400, 200), expand=c(0,0))
  gg <- gg + labs(x=x_label, y="",
                  title=sprintf("Sunrise/set for %s\n%s ", place, paste0(range(srss$date), sep=" ", collapse="to ")))
  gg <- gg + theme_bw()
  gg <- gg + theme(panel.background=element_rect(fill="#525252"))
  gg <- gg + theme(panel.grid=element_blank())
 
  gg1 <- ggplot(srss, aes(x=date, y=day_length))
  gg1 <- gg1 + geom_area(fill="#ffeda0")
  gg1 <- gg1 + geom_line(color="#525252")
 
  if (show_now) gg1 <- gg1 + geom_vline(xintercept=as.numeric(as.Date(Sys.time())), color="#800026", linetype="longdash", size=0.25)
 
  gg1 <- gg1 + scale_x_date(expand=c(0,0), labels=date_format("%b '%y"))
  gg1 <- gg1 + scale_y_continuous(labels=pad5, limits=c(0,24), expand=c(0,0))
  gg1 <- gg1 + labs(x="", y="", title="Day(light) Length (hrs)")
  gg1 <- gg1 + theme_bw()
 
  if (plot) grid.arrange(gg, gg1, nrow=2)
 
  arrangeGrob(gg, gg1, nrow=2)
 
}

We can test our new function using the same location and graph the sunlight data for a year starting September 1, 2014 (select graph for full-size version):

daylight(43.071755, -70.762553, "Portsmouth, NH", "2014-09-01", 365, tz="America/New_York")


With the longer nights approaching we can further enhance the plotting function to add markers for solstices and perhaps even make a new version that compares sunlight across different geographical locations.
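The solstice markers are a small tweak inside daylight (a sketch with hard-coded, approximate dates for this span; drop it in next to the geom_ribbon layer):

# approximate solstice dates for the plotted year
solstices <- as.Date(c("2014-12-21", "2015-06-21"))
gg <- gg + geom_vline(xintercept=as.numeric(solstices),
                      color="#ffffff", linetype="dotted", size=0.25)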

Complete code example is in this gist.

Charting/Mapping the Scottish Vote with R (an rvest/dplyr/tidyr/TopoJSON/ggplot tutorial)

The BBC did a pretty good job live tracking the Scotland secession vote, but I really didn’t like the color scheme they chose and decided to use the final tally site as the basis for another tutorial using the tools from the Hadleyverse and taking advantage of the fact that newer gdal libraries can read in TopoJSON/GeoJSON files, meaning we can use most of the maps the D3-ers create/use right in R.

We’ll need a few R packages to help us get, clean, format and chart the data:

library(rvest)
library(dplyr)
library(httr) # >0.5
library(tidyr)
library(gpclib)
library(rgeos)
library(sp)
library(maptools)
library(rgdal) # needs gdal > 1.11.0
library(ggplot2)
library(reshape2)
library(gridExtra)
library(grid)   # textGrob()/gpar() for the combined chart title
library(scales) # percent() for the bar chart labels

The new rvest package makes it super-fun (and easy) to get data out of web pages (as I’ve mentioned on the sister blog), but said data is still web page data, usually geared towards making things render well in a browser, and we end up having to clean up the extracted fields to get useful data. Since we usually want a data frame from the extraction, an rvest idiom I’ve been playing with involves bundling the element extraction & cleanup code into one function and then using that to build the columns:

# extract data from rvest-ed <div>'s and clean it up a bit
# pass in the rvested HTML object and the CSS selector to extract, also 
# indicating whether we want a number or character vector returned
 
extractAndCleanup <- function(data, selector, make_numeric=FALSE) {
  x <- data %>% html_nodes(selector) %>% html_text()
  x <- gsub("^[[:punct:][:space:]]*|[[:punct:][:space:]]*$", "", x)
  if (make_numeric) x <- as.numeric(gsub("[,[:space:]]*", "", x))
  x
}
 
bbc_vote <- html("http://www.bbc.com/news/events/scotland-decides/results")
 
secede <- data.frame(
  council=bbc_vote %>% extractAndCleanup(".body-row__cell--council"),
  electorate=bbc_vote %>% extractAndCleanup(".body-row__cell--electorate", TRUE),
  yes=bbc_vote %>% extractAndCleanup(".body-row__cell--yes", TRUE),
  no=bbc_vote %>% extractAndCleanup(".body-row__cell--no", TRUE),
  stringsAsFactors=FALSE)

We can then compute whether the vote tally was to secede or not and assign a color in the event we choose to use base graphics for plotting (we won’t for this tutorial). I chose a muted version of the Union Jack red and the official Scottish blue for this exercise.

secede <- secede %>% mutate(gone=yes>no,
                            color=ifelse(gone, "#0065BD", "#CF142B77"))

Getting the map from the BBC site is just as simple. An inspection of the site in any decent browser with a “Developer” mode lets us see the elements being downloaded. For the BBC map, it reads the data from: http://static.bbci.co.uk/news/1.46.4-1166/js/data/maps/l/scotland-elections.js which is a TopoJSON object wrapped in two lines of extra javascript code. We’ll grab that file, clean it up and read the map into R using httr‘s new-ish ability to save to a data file:

GET("http://static.bbci.co.uk/news/1.46.4-1166/js/data/maps/l/scotland-elections.js",
    write_disk("data/scotland.json"), progress())
tmp <- readLines("data/scotland.json")
writeLines(tmp[2], "data/scotland.json")
 
map = readOGR("data/scotland.json", "scotland-elections")

We’ll want to work with the map using Council names, so we need to ensure the names from the extracted div elements match what’s in the TopoJSON file:

secede$council %in% map@data$name
 
##  [1]  TRUE  TRUE  TRUE FALSE  TRUE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE
## [13] FALSE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE
## [25]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE

It looks like we’ll need to clean the names up a bit, but thankfully the names aren’t too far off:

secede$council <- gsub("&", "and", secede$council)
secede[secede$council=="Edinburgh",]$council = "City of Edinburgh"
secede[secede$council=="Glasgow",]$council = "Glasgow City"
secede[secede$council=="Comhairle nan Eilean Siar",]$council = "Na h-Eileanan an Iar"

If we were using base graphics for plotting, we’d also have to ensure the data was in the right order:

secede$council <- factor(secede$council, map@data$name, ordered=TRUE)
secede <- secede %>% arrange(council)

We’re going to use ggplot for the mapping portion, but the normal fortify process didn’t work on this TopoJSON file (some polygon errors emerged), so we’ll take another route and do the data Council name↔id mapping after the fortify call and merge the rest of our data into the map data frame:

map_df <- fortify(map)
 
# manually associate the map id's with the Council names and vote data
councils <- data.frame(id=0:(length(map@data$name)-1),
                       council=as.character(map@data$name))
map_df <- merge(map_df, councils, by="id")
map_df <- merge(map_df, secede, by="council")

Now we can generate the choropleth:

gg <- ggplot()
gg <- gg + geom_map(data=map_df, map=map_df,
                    aes(map_id=id, x=long, y=lat, group=group, fill=color),
                    color="white", size=0.25)
gg <- gg + scale_fill_manual(values=rev(unique(secede$color)),
                             labels=c("Yes", "No"), name="Secede?")
gg <- gg + xlim(extendrange(r=range(coordinates(map)[,1]), f=0.15))
gg <- gg + ylim(extendrange(r=range(coordinates(map)[,2]), f=0.07))
gg <- gg + coord_map()
gg <- gg + labs(x="", y="")
gg <- gg + theme_bw()
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(legend.position="none")
gg <- gg + theme(panel.border=element_blank())
gg <- gg + theme(axis.ticks=element_blank())
gg <- gg + theme(axis.text=element_blank())

A choropleth is all well and good but, since we have the data, let’s add the bar chart to complete the presentation. We’ll combine some dplyr and tidyr calls to melt and subset our data frame:

secede_m <- secede %>%
  gather(variable, value, -council) %>%
  filter(variable %in% c("yes", "no")) %>%
  mutate(value=as.numeric(value))

For this exercise, we’ll plot the 100% stacked bars in order of the “No” votes, and we’ll pre-process this ordering to make the ggplot code easier on the eyes. We start by merging some data back into our melted data frame so we can build the sorted factor by the “No” value column and then make sure the Councils will be in that order:

secede_m <- merge(secede_m, secede, by="council")
secede_m$variable <- factor(secede_m$variable,
                            levels=c("yes", "no"), ordered=TRUE)
secede_m <- secede_m %>% arrange(no, variable)
secede_m$council <- factor(secede_m$council,
                           unique(secede_m$council), ordered=TRUE)

Finally, we can create the 100% stacked bar plot and combine it with the choropleth to build the final product:

gg1 <- ggplot(secede_m, aes(x=council, y=value, fill=factor(variable)))
gg1 <- gg1 + geom_bar(stat="identity", position="fill")
gg1 <- gg1 + scale_fill_manual(values=rev(unique(secede$color)),
                             labels=c("Yes", "No"), name="Secede?")
gg1 <- gg1 + geom_hline(yintercept=0.50, color="gray80")
gg1 <- gg1 + geom_text(aes(label=percent(yes/100)), y=0.08, color="white", size=3)
gg1 <- gg1 + geom_text(aes(label=percent(no/100)), y=0.92, color="white", size=3)
gg1 <- gg1 + coord_flip()
gg1 <- gg1 + labs(x="", y="")
gg1 <- gg1 + theme_bw()
gg1 <- gg1 + theme(panel.grid=element_blank())
gg1 <- gg1 + theme(legend.position="top")
gg1 <- gg1 + theme(panel.border=element_blank())
gg1 <- gg1 + theme(axis.ticks=element_blank())
gg1 <- gg1 + theme(axis.text.x=element_blank())
 
vote <- arrangeGrob(gg1, gg, ncol=2,
                     main=textGrob("Scotland Votes", gp=gpar(fontsize=20)))


(Click for larger version)

I’ve bundled this code up into its own github repo. The full project example has a few extra features:

  • it shows how to save the resultant data frame to an R data file (in case the BBC nukes the site)
  • also saves the cleaned-up JSON (getting minimal Scotland shapefiles is tricky so this one’s a keeper even with the polygon errors)
  • wraps all that in if statements so future analysis/vis can work with or without the live data being available.

Hadley really has to stop making R so fun to work with :-)

UPDATE

Paul Drake suggested in a comment that the BBC choropleth (and, hence, my direct clone of it) could be made more informative by showing the vote difference. Since it’s just changing two lines of code, here it is in-situ vs creating a new post.

gg <- gg + geom_map(data=map_df, map=map_df,
                    aes(map_id=id, x=long, y=lat, group=group, fill=yes-no),
                    color="white", size=0.25)
gg <- gg + scale_fill_gradient(low="#CF142B", high="#0065BD", 
                               name="Secede?\n(vote margin)", guide="legend")

R version of “An exploratory technique for visualizing the distributions of 100 variables”

Rick Wicklin (@RickWicklin) made a recent post to the SAS blog on An exploratory technique for visualizing the distributions of 100 variables. It’s a very succinct tutorial on both the power of boxplots and how to make them in SAS (of course). I’m not one to let R be “out-boxed”, so I threw together a quick re-creation of his example, mostly as tutorial for any nascent R folks that come across it. (As an aside, I catch Rick’s and other cool, non-R stuff via the Stats Blogs blog aggregator.)

The R implementation (syntax notwithstanding) is extremely similar. First, we’ll need some packages to assist with data reshaping and pretty plotting:

library(reshape2)
library(ggplot2)

Then, we set up a list so we can pick from the same four distributions and set the random seed to make this example reproducible:

dists <- c(rnorm, rexp, rlnorm, runif)
 
set.seed(1492)

Now, we generate a data frame of the 100 variables with 1,000 observations, normalized from 0-1:

many_vars <- data.frame(sapply(1:100, function(x) {
 
  # generate 1,000 random samples
  tmp <- sample(dists, 1)[[1]](1000)
 
  # normalize them to be between 0 & 1
  (tmp - min(tmp)) / (max(tmp) - min(tmp))
 
}))

The sapply iterates over the numbers 1 through 100, passing each number into a function. Each iteration samples an object from the dists list (which are actual R functions) and then calls the function, telling it to generate 1,000 samples and normalize the result to be values between 0 & 1. By default, R will generate column names that begin with X:

str(many_vars[1:5]) # show the structure of the first 5 cols
 
## 'data.frame':    1000 obs. of  5 variables:
##  $ X1: num  0.1768 0.4173 0.5111 0.0319 0.0644 ...
##  $ X2: num  0.217 0.275 0.596 0.785 0.825 ...
##  $ X3: num  0.458 0.637 0.115 0.468 0.469 ...
##  $ X4: num  0.5186 0.0358 0.5927 0.1138 0.1514 ...
##  $ X5: num  0.2855 0.0786 0.2193 0.433 0.9634 ...

We’re going to plot the boxplots, sorted by the third quartile (the 0.75 quantile, just like in Rick’s example), so we’ll calculate their rank and then use those ranks (shortly) to order a factor variable:

ranks <- names(sort(rank(sapply(colnames(many_vars), function(x) {
  as.numeric(quantile(many_vars[,x], 0.75))
}))))

There’s a lot going on in there. We pass the column names from the many_vars data frame to a function that will return the quantile we want. Since sapply preserves the names we passed in as well as the values, we extract them (via names) after we rank and sort the named vector, giving us a character vector in the order we’ll need:

str(ranks)
 
##  chr [1:100] "X29" "X8" "X92" "X43" "X11" "X52" "X34" ...

Just like in the SAS post, we’ll need to reshape the data into long format from wide format, which we can do with melt:

many_vars_m <- melt(as.matrix(many_vars))
 
str(many_vars_m)
 
## 'data.frame':    100000 obs. of  3 variables:
##  $ Var1 : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ Var2 : Factor w/ 100 levels "X1","X2","X3",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ value: num  0.1768 0.4173 0.5111 0.0319 0.0644 ...

And now we’ll use our ordered column names to ensure that our boxplots will be presented in the right order (they would be in alpha order otherwise). Factor variables in R are space-efficient and allow for handy manipulations like this (amongst other things). By default, many_vars_m$Var2 was in alpha order and this call just re-orders that factor.

many_vars_m$Var2 <- factor(many_vars_m$Var2, ranks)
 
str(many_vars_m)
 
## 'data.frame':    100000 obs. of  3 variables:
##  $ Var1 : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ Var2 : Factor w/ 100 levels "X29","X8","X92",..: 24 24 24 24 24 24 24 24 24 24 ...
##  $ value: num  0.1768 0.4173 0.5111 0.0319 0.0644 ...

Lastly, we plot all our hard work (click/touch for larger version):

gg <- ggplot(many_vars_m, aes(x=Var2, y=value))
gg <- gg + geom_boxplot(fill="#BDD7E7", notch=TRUE, outlier.size=1)
gg <- gg + labs(x="")
gg <- gg + theme_bw()
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(axis.text.x=element_text(angle=-45, hjust=0.001, size=5))
gg


Here’s the program in its entirety:

library(reshape2)
library(ggplot2)
 
dists <- c(rnorm, rexp, rlnorm, runif)
 
set.seed(1492)
many_vars <- data.frame(sapply(1:100, function(x) {
  tmp <- sample(dists, 1)[[1]](1000)
  (tmp - min(tmp)) / (max(tmp) - min(tmp))
}))
 
ranks <- names(sort(rank(sapply(colnames(many_vars), function(x) {
  as.numeric(quantile(many_vars[,x], 0.75))
}))))
 
many_vars_m <- melt(as.matrix(many_vars))
 
many_vars_m$Var2 <- factor(many_vars_m$Var2, ranks)
 
gg <- ggplot(many_vars_m, aes(x=Var2, y=value))
gg <- gg + geom_boxplot(fill="#BDD7E7", notch=TRUE, outlier.size=1)
gg <- gg + labs(x="")
gg <- gg + theme_bw()
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(axis.text.x=element_text(angle=-45, hjust=0.001, size=5))
gg

I tweaked the boxplot, using a notch and making the outliers take up fewer pixels.

I’m definitely in agreement with Rick that this is an excellent way to compare many distributions.

Bonus points for the commenter who shows code to color the bars by which distribution generated them!
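If you want a head start, here’s a sketch of one approach: record which distribution generated each column up front, then map that to the boxplot fill (this reworks the generation step; everything else stays the same):

library(reshape2)

set.seed(1492)
dist_names <- c("rnorm", "rexp", "rlnorm", "runif")
picks <- sample(dist_names, 100, replace=TRUE)

many_vars <- data.frame(sapply(seq_along(picks), function(i) {
  tmp <- get(picks[i])(1000)
  (tmp - min(tmp)) / (max(tmp) - min(tmp))
}))

# associate each X1..X100 column with the distribution that made it
dist_map <- data.frame(Var2=colnames(many_vars), dist=picks)
many_vars_m <- merge(melt(as.matrix(many_vars)), dist_map, by="Var2")

# then: geom_boxplot(aes(fill=dist), notch=TRUE, outlier.size=1)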

Rforecastio Package Update (1.1.0)

I’ve bumped up the version number of Rforecastio (github) to 1.1.0. The new features are:

  • removing the SSL certificate bypass check (it doesn’t need it anymore)
  • using plyr for easier conversion of JSON->data frame
  • adding in a new daily forecast data frame
  • roxygen2 inline documentation

library(Rforecastio)
library(ggplot2)
library(plyr)
 
# NEVER put API keys in revision control systems or source code!
fio.api.key <- readLines("~/.forecast.io")
 
my.latitude = "43.2673"
my.longitude = "-70.8618"
 
fio.list <- fio.forecast(fio.api.key, my.latitude, my.longitude)
 
fio.gg <- ggplot(data=fio.list$hourly.df, aes(x=time, y=temperature))
fio.gg <- fio.gg + labs(y="Readings", x="Time", title="Hourly Readings")
fio.gg <- fio.gg + geom_line(aes(y=humidity*100), color="green")
fio.gg <- fio.gg + geom_line(aes(y=temperature), color="red")
fio.gg <- fio.gg + geom_line(aes(y=dewPoint), color="blue")
fio.gg <- fio.gg + theme_bw()
fio.gg


fio.gg <- ggplot(data=fio.list$daily.df, aes(x=time, y=temperature))
fio.gg <- fio.gg + labs(y="Readings", x="Time", title="Daily Readings")
fio.gg <- fio.gg + geom_line(aes(y=humidity*100), color="green")
fio.gg <- fio.gg + geom_line(aes(y=temperatureMax), color="red")
fio.gg <- fio.gg + geom_line(aes(y=temperatureMin), color="red", linetype=2)
fio.gg <- fio.gg + geom_line(aes(y=dewPoint), color="blue")
fio.gg <- fio.gg + theme_bw()
fio.gg


Moving From system() calls to Rcpp Interfaces

Over on the Data Driven Security Blog there’s a post on how to use Rcpp to interface with an external library (in this case ldns for DNS lookups). It builds on another post which uses system() to make a call to dig to lookup DNS TXT records.

The core code is at both the aforementioned blog post and this gist. The post walks you through creating a simple interface and a future post will cover how to build a full package interface to an external library.
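For context, the system()-based approach being replaced looks roughly like this (a sketch, not the post’s actual code; it assumes the dig binary is on your PATH):

# shell out to dig for TXT records; intern=TRUE captures stdout
# as a character vector we can parse in R
lookup_txt <- function(domain) {
  system(sprintf("dig +short txt %s", domain), intern=TRUE)
}

lookup_txt("google.com")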

Mapping the March 2014 California Earthquake with ggmap

I had no intention to blog this, but @jayjacobs convinced me otherwise. I was curious about the recent (end of March, 2014) California earthquake “storm” and did a quick plot for “fun” and personal use using ggmap/ggplot.

I used data from the Southern California Earthquake Center (that I cleaned up a bit and that you can find here) but would have used the USGS quake data if the site hadn’t been down when I tried to get it from there.

The code/process isn’t exactly rocket-science, but if you’re looking for a simple way to layer some data on a “real” map (vs handling shapefiles on your own) then this is a really compact/self-contained tutorial/example.

You can find the code & data over at github as well.

There’s lots of ‘splainin in the comments (which are probably easier to read on the github site) but drop a note in the comments or on Twitter if it needs any further explanation. The graphic is SVG, so use a proper browser :-) or run the code in R if you can’t see it here.


(click for larger version)

library(ggplot2)
library(ggmap)
library(plyr)
library(grid)
library(gridExtra)
 
# read in cleaned up data
dat <- read.table("quakes.dat", header=TRUE, stringsAsFactors=FALSE)
 
# map decimal magnitudes into an integer range
dat$m <- cut(dat$MAG, c(0:10))
 
# convert to dates
dat$DATE <- as.Date(dat$DATE)
 
# so we can re-order the data frame
dat <- dat[order(dat$DATE),]
 
# not 100% necessary, but get just the numeric portion of the cut factor
dat$Magnitude <- factor(as.numeric(dat$m))
 
# sum up by date for the barplot
dat.sum <- count(dat, .(DATE, Magnitude))
 
# start the ggmap bit
# It's super-handy that it understands things like "Los Angeles" #spoffy
# I like the 'toner' version. Would also use a stamen map but I can't get 
# to it consistently from behind a proxy server
la <- get_map(location="Los Angeles", zoom=10, color="bw", maptype="toner")
 
# get base map layer
gg <- ggmap(la) 
 
# add points. Note that the plot will produce warnings for all points not in the
# lat/lon range of the base map layer. Also note that i'm encoding magnitude by
# size and color and using alpha for depth. because of the way the data is sorted
# the most recent quakes in the set should be on top
gg <- gg + geom_point(data=dat,
                      mapping=aes(x=LON, y=LAT, 
                                  size=MAG, fill=m, alpha=DEPTH), shape=21, color="black")
 
# this takes the magnitude domain and maps it to a better range of values (IMO)
gg <- gg + scale_size_continuous(range=c(1,15))
 
# this bit makes the right size color ramp. i like the reversed view better for this map
gg <- gg + scale_fill_manual(values=rev(terrain.colors(length(levels(dat$Magnitude)))))
gg <- gg + ggtitle("Recent Earthquakes in CA & NV")
 
# no need for a legend as the bars are pretty much the legend
gg <- gg + theme(legend.position="none")
 
 
# now for the bars. we work with the summarized data frame
gg.1 <- ggplot(dat.sum, aes(x=DATE, y=freq, group=Magnitude))
 
# normally, i dislike stacked bar charts, but this is one time i think they work well
gg.1 <- gg.1 + geom_bar(aes(fill=Magnitude), position="stack", stat="identity")
 
# fancy, schmanzy color mapping again
gg.1 <- gg.1 + scale_fill_manual(values=rev(terrain.colors(length(levels(dat$Magnitude)))))
 
# show the data source!
gg.1 <- gg.1 + labs(x="Data from: http://www.data.scec.org/recent/recenteqs/Maps/Los_Angeles.html", y="Quake Count")
gg.1 <- gg.1 + theme_bw() #stopthegray
 
# use grid.arrange to make the sizes work well
grid.arrange(gg, gg.1, nrow=2, ncol=1, heights=c(3,1))

Guardian Words: Visualized

Andy Kirk (@visualisingdata) & Lynn Cherny (@arnicas) tweeted about the Guardian Word Count service/archive site, lamenting the lack of visualizations.

This gave me a chance to bust out another Shiny app over on our Data Driven Security shiny server.

I used my trusty “Google-Drive-spreadsheet-IMPORTHTML-to-CSV” workflow (you can access the automagically updated data here) to make the CSV that updates daily on the site and is referenced by the Shiny/R code.

The code has been gist-ified, and I’ll be re-visiting it to refactor the data.frame creation bits and add some more charts as the data set gets larger.


(Don’t forget to take a peek at our new book, Data-Driven Security!)

Using Twitter as a Data Source For Monitoring Password Dumps

I shot a quick post over at the Data Driven Security blog explaining how to separate Twitter data gathering from R code via the Ruby t (github repo) command. Using t frees R code from having to be a Twitter processor and lets the analyst focus on analysis and visualization, plus you can use t as a substitute for Twitter GUIs if you’d rather play at the command-line:

$ t timeline ddsecblog
   @DDSecBlog
   Monitoring Credential Dumps Plus Using Twitter As a Data Source http://t.co/ThYbjRI9Za
 
   @DDSecBlog
   Nice intro to R + stats // Data Analysis and Statistical Inference free @datacamp_com course
   http://t.co/FC44FF9DSp
 
   @DDSecBlog
   Very accessible paper & cool approach to detection // Nazca: Detecting Malware Distribution in
   Large-Scale Networks http://t.co/fqrSaFvUK2
 
   @DDSecBlog
   Start of a new series by new contributing blogger @spttnnh! // @AlienVault rep db Longitudinal
   Study Part 1 : http://t.co/XM7m4zP0tr
 
   ...

The DDSec post shows how to mine the well-formatted output from the @dumpmon Twitter bot to visualize dump trends over time, and has the code in-line and over at the DDSec github repo [R].

One More (Yet-another?) Olympic Medal Live-tracking Shiny App

I’m posting this mostly to show how to:

  • use the Google spreadsheet data-munging “hack” from the previous post in a Shiny context
  • include it seamlessly into a web page, and
  • run it locally without a great deal of wrangling

The code for the app is in this gist. It is, unsurprisingly, much like other spiffy code you’ve seen, apart from my aesthetic choices (Sochi blue! lines+dots! and current rankings next to country names).

I won’t regurgitate the code here since it’s just as easy to view on github. You’re seeing the live results of the app below (unless you’ve been more conservative than most folks with your browser security settings), but the app is actually hosted over at Data Driven Security, a blog and (woefully underpowered, so reload if it coughs up blood, pls) Shiny server that I run with @jayjacobs. It appears in this WordPress post with the help of an IFRAME. It’s essentially the same technique the RStudio/Shiny folks use in many of their own examples.

The app uses bootstrapPage() to help make a more responsive layout which will react nicely in an IFRAME setting (since you won’t know the width of the browser area you’re trying to fit the Shiny output into).

In the ui.R file, I have the plotOutput() configured to scale to 100% of container width:

plotOutput("medalsPlot", width="100%")

and then create a seamless IFRAME that also sizes to max-width:

<iframe src="http://shiny.dds.ec/sochi2014/" 
        style="max-width:100%" 
        width="100%"
        height="500px"
        scrolling="no" 
        frameborder="no" 
        seamless="seamless">
</iframe>

The really cool part (IMO) about many Shiny apps is that you don’t need to rely on the external server to work with the visualization/output. Provided that:

  • the authors have coded their app to support local execution…
  • and presented the necessary ui.R, server.R, global.R, HTML/CSS & data files either as a github gist or a zip/gz/tar.gz file…
  • and you have the necessary libraries installed

then, you can start the app with a simple Rscript one-liner:

Rscript -e "shiny::runGist(8949172, launch.browser=TRUE)"

or

Rscript -e "shiny::runUrl('http://dds.ec/apps/sochi2014.tar.gz', launch.browser=TRUE)"

There is some danger doing this if you haven’t read through the R code beforehand, since it’s possible to stick some fairly malicious operations in an R script (hey, I’m an infosec professional, so we’re always paranoid :-). But, if you stick with using a gist and do examine the code, you should be fine.
