Archive for the ‘R’ Category

Mapping the March 2014 California Earthquake with ggmap

I had no intention to blog this, but @jayjacobs convinced me otherwise. I was curious about the recent (end of March, 2014) California earthquake “storm” and did a quick plot for “fun” and personal use using ggmap/ggplot.

I used data from the Southern California Earthquake Center (that I cleaned up a bit and that you can find here) but would have used the USGS quake data if the site hadn’t been down when I tried to get it from there.

The code/process isn’t exactly rocket-science, but if you’re looking for a simple way to layer some data on a “real” map (vs handling shapefiles on your own) then this is a really compact/self-contained tutorial/example.

You can find the code & data over at github as well.

There’s lots of ‘splainin in the comments (which are prbly easier to read on the github site) but drop a note in the comments or on Twitter if it needs any further explanation. The graphic is SVG, so use a proper browser :-) or run the code in R if you can’t see it here.

(click for larger version)

# needed libraries
library(ggplot2)
library(ggmap)
library(plyr)      # count()
library(gridExtra) # grid.arrange()

# read in cleaned up data
dat <- read.table("quakes.dat", header=TRUE, stringsAsFactors=FALSE)
# map decimal magnitudes into an integer range
dat$m <- cut(dat$MAG, c(0:10))
# convert to dates
dat$DATE <- as.Date(dat$DATE)
# so we can re-order the data frame
dat <- dat[order(dat$DATE),]
# not 100% necessary, but get just the numeric portion of the cut factor
dat$Magnitude <- factor(as.numeric(dat$m))
# sum up by date for the barplot
dat.sum <- count(dat, .(DATE, Magnitude))
# start the ggmap bit
# It's super-handy that it understands things like "Los Angeles" #spoffy
# I like the 'toner' version. Would also use a stamen map but I can't get 
# to it consistently from behind a proxy server
la <- get_map(location="Los Angeles", zoom=10, color="bw", maptype="toner")
# get base map layer
gg <- ggmap(la) 
# add points. Note that the plot will produce warnings for all points not in the
# lat/lon range of the base map layer. Also note that i'm encoding magnitude by
# size and color and using alpha for depth. because of the way the data is sorted
# the most recent quakes in the set should be on top
gg <- gg + geom_point(data=dat,
                      mapping=aes(x=LON, y=LAT, 
                                  size=MAG, fill=m, alpha=DEPTH), shape=21, color="black")
# this takes the magnitude domain and maps it to a better range of values (IMO)
gg <- gg + scale_size_continuous(range=c(1,15))
# this bit makes the right size color ramp. i like the reversed view better for this map
gg <- gg + scale_fill_manual(values=rev(terrain.colors(length(levels(dat$Magnitude)))))
gg <- gg + ggtitle("Recent Earthquakes in CA & NV")
# no need for a legend as the bars are pretty much the legend
gg <- gg + theme(legend.position="none")
# now for the bars. we work with the summarized data frame
gg.1 <- ggplot(dat.sum, aes(x=DATE, y=freq, group=Magnitude))
# normally, i dislike stacked bar charts, but this is one time i think they work well
gg.1 <- gg.1 + geom_bar(aes(fill=Magnitude), position="stack", stat="identity")
# fancy, schmanzy color mapping again
gg.1 <- gg.1 + scale_fill_manual(values=rev(terrain.colors(length(levels(dat$Magnitude)))))
# show the data source!
gg.1 <- gg.1 + labs(x="Data from:", y="Quake Count")
gg.1 <- gg.1 + theme_bw() #stopthegray
# use grid.arrange to make the sizes work well
grid.arrange(gg, gg.1, nrow=2, ncol=1, heights=c(3,1))

Guardian Words: Visualized

Andy Kirk (@visualisingdata) & Lynn Cherny (@arnicas) tweeted about the Guardian Word Count service/archive site, lamenting the lack of visualizations:

This gave me a chance to bust out another Shiny app over on our Data Driven Security shiny server:

I used my trusty “Google-Drive-spreadsheet-IMPORTHTML-to-CSV” workflow (you can access the automagically updated data here) to make the CSV that updates daily on the site and is referenced by the Shiny/R code.
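
If you want to pull that same auto-updating CSV into your own R session, a minimal sketch looks like this (the spreadsheet key in the URL is a placeholder, not the real one; the read pattern mirrors the one used later in this archive):

library(RCurl)

# placeholder key -- swap in the published-sheet CSV export link mentioned above
guardian.csv.url <- "https://docs.google.com/spreadsheets/d/YOUR_SPREADSHEET_KEY/export?format=csv&gid=0"
words <- read.csv(textConnection(getURL(guardian.csv.url)),
                  stringsAsFactors=FALSE)
str(words)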

The code has been gist-ified, and I’ll be re-visiting it to refactor the data.frame creation bits and add some more charts as the data set gets larger.

(Don’t forget to take a peek at our new book, Data-Driven Security!)

Using Twitter as a Data Source For Monitoring Password Dumps

I shot a quick post over at the Data Driven Security blog explaining how to separate Twitter data gathering from R code via the Ruby t (github repo) command. Using t frees R code from having to be a Twitter processor and lets the analyst focus on analysis and visualization, plus you can use t as a substitute for Twitter GUIs if you’d rather play at the command-line:

$ t timeline ddsecblog
   Monitoring Credential Dumps Plus Using Twitter As a Data Source
   Nice intro to R + stats // Data Analysis and Statistical Inference free @datacamp_com course
   Very accessible paper & cool approach to detection // Nazca: Detecting Malware Distribution in
   Large-Scale Networks
   Start of a new series by new contributing blogger @spttnnh! // @AlienVault rep db Longitudinal
   Study Part 1 :
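
As a rough sketch of that division of labor (the -n and --csv flags come from t’s documented options, but treat the exact invocation as an assumption), you can dump a timeline to CSV at the shell and then pick it up in R with zero Twitter API plumbing:

# at the shell (assumed flags: -n tweet count, --csv CSV output):
#   t timeline dumpmon -n 200 --csv > dumpmon.csv
# then, in R, it's just a plain CSV read
dumps <- read.csv("dumpmon.csv", stringsAsFactors=FALSE)
str(dumps)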

The DDSec post shows how to mine the well-formatted output from the @dumpmon Twitter bot to visualize dump trends over time:

and has the code in-line and over at the DDSec github repo [R].

One More (Yet-another?) Olympic Medal Live-tracking Shiny App

I’m posting this mostly to show how to:

  • use the Google spreadsheet data-munging “hack” from the previous post in a Shiny context
  • include it seamlessly into a web page, and
  • run it locally without a great deal of wrangling

The code for the app is in this gist. It is unsurprisingly just like some spiffy other code you’ve seen, apart from my aesthetic choices (Sochi blue! lines+dots! and current rankings next to country names).

I won’t regurgitate the code here since it’s just as easy to view on github. You’re seeing the live results of the app below (unless you’ve been more conservative than most folks with your browser security settings),

but the app is actually hosted over at Data Driven Security, a blog and (woefully underpowered so reload if it coughs up blood, pls) Shiny server that I run with @jayjacobs. It appears in this WordPress post with the help of an IFRAME. It’s essentially the same technique the RStudio/Shiny folks use in many of their own examples.

The app uses bootstrapPage() to help make a more responsive layout which will react nicely in an IFRAME setting (since you won’t know the width of the browser area you’re trying to fit the Shiny output into).

In the ui.R file, I have the plotOutput() configured to scale to 100% of container width:

plotOutput("medalsPlot", width="100%")
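
For reference, a minimal ui.R along those lines might look like the sketch below (the heading text is illustrative; the output name matches the plotOutput() call above):

library(shiny)

shinyUI(bootstrapPage(
  # bootstrapPage() yields a fluid, responsive container, so the plot
  # adapts to whatever width the enclosing IFRAME provides
  h4("Sochi 2014 Medal Standings"),
  plotOutput("medalsPlot", width="100%")
))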

and then create a seamless IFRAME that also sizes to max-width:

<iframe src="" width="100%" seamless="seamless"></iframe>

The really cool part (IMO) about many Shiny apps is that you don’t need to rely on the external server to work with the visualization/output. Provided that:

  • the authors have coded their app to support local execution…
  • and presented the necessary ui.R, server.R, global.R, HTML/CSS & data files either as a github gist or a zip/gz/tar.gz file…
  • and you have the necessary libraries installed

then, you can start the app with a simple Rscript one-liner:

Rscript -e "shiny::runGist(8949172, launch.browser=TRUE)"

or, if the app is bundled up as a zip/gz/tar.gz file at a URL:

Rscript -e "shiny::runUrl('', launch.browser=TRUE)"

There is some danger in doing this if you haven’t read through the R code beforehand, since it’s possible to stick some fairly malicious operations in an R script (hey, I’m an infosec professional, so we’re always paranoid :-). But if you stick with using a gist and do examine the code, you should be fine.

Live Google Spreadsheet For Keeping Track Of Sochi Medals

The “medals” R post by TRInker and the re-blog by Revolutions were both spiffy, and a live example of why there’s no point in not publishing raw data.

You don’t need to have R (or any other language) do the scraping, though. The “IMPORTHTML” function (yes, function names seem to be ALL CAPS now over at Google Drive) in Google Drive Spreadsheets can easily do the scraping with just a simple formula of the general form

=IMPORTHTML("url-of-the-medal-standings-page", "table", table_index)

that will refresh on demand and every hour.

Here’s a live URL that will give back a CSV of the results which can easily be used in R thusly:

library(RCurl)  # for getURL()

sochi.medals.URL = ""
medals <- read.csv(textConnection(getURL(sochi.medals.URL)), 
                   stringsAsFactors = FALSE)

str(medals)
'data.frame':  89 obs. of  6 variables:
$ Rank   : chr  "1" "2" "3" "4" ...
$ Country: chr  "Norway" "Canada" "Netherlands" "United States" ...
$ Gold   : int  4 4 3 2 2 1 1 1 1 1 ...
$ Silver : int  3 3 2 1 0 3 2 0 0 0 ...
$ Bronze : int  4 2 3 3 0 3 0 1 0 0 ...
$ Total  : int  11 9 8 6 2 7 3 2 1 1 ...
medals
   Rank                                   Country Gold Silver Bronze Total
1     1                                    Norway    4      3      4    11
2     2                                    Canada    4      3      2     9
3     3                               Netherlands    3      2      3     8
4     4                             United States    2      1      3     6
5     5                                   Germany    2      0      0     2
6     6                              Russian Fed.    1      3      3     7
7     7                                   Austria    1      2      0     3
8     8                                    France    1      0      1     2
9    =9                                   Belarus    1      0      0     1
10   =9                                     Korea    1      0      0     1
11   =9                                    Poland    1      0      0     1
12   =9                                  Slovakia    1      0      0     1
13   =9                               Switzerland    1      0      0     1
14   14                                    Sweden    0      3      1     4
15   15                            Czech Republic    0      2      1     3
16   16                                  Slovenia    0      1      2     3
17   17                                     Italy    0      1      1     2
18  =18                                     China    0      1      0     1
19  =18                                   Finland    0      1      0     1
20  =20                             Great Britain    0      0      1     1
21  =20                                   Ukraine    0      0      1     1
22    -                                   Albania    0      0      0     0
23    -                                   Andorra    0      0      0     0
24    -                                 Argentina    0      0      0     0
25    -                                   Armenia    0      0      0     0
87    -                             Virgin Isl, B    0      0      0     0
88    -                            Virgin Isl, US    0      0      0     0
89    -                                  Zimbabwe    0      0      0     0

All of which frees you from dealing with the scraping and lets you focus solely on the data.

You can set it up in your own Google Docs as well; just make sure to publish the spreadsheet to the web (with ‘everyone’ read permissions), strip off the pubhtml at the end of the published URL and add export?format=csv&gid=0 in its place.
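
Here’s a quick sketch of that URL surgery in R (the spreadsheet key below is a stand-in, not a real one):

library(RCurl)

# the "publish to the web" URL Google hands you ends in /pubhtml
pub.url <- "https://docs.google.com/spreadsheets/d/YOUR_SPREADSHEET_KEY/pubhtml"
# swap the pubhtml ending for the CSV export of the first sheet (gid=0)
csv.url <- sub("pubhtml$", "export?format=csv&gid=0", pub.url)
my.medals <- read.csv(textConnection(getURL(csv.url)), stringsAsFactors=FALSE)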

Data Driven Security Roundup: betaPERT, Shiny, Honeypots, Passwords & Reproducible Research

Jay Jacobs (@jayjacobs)—my co-author of the soon-to-be-released book Data-Driven Security—& I have been hard at work over at the book’s sister-blog cranking out code to help security domain experts delve into the dark art of data science.

We’ve covered quite a bit of ground since January 1st, but I’m using this post to focus more on what we’ve produced using R, since that’s our go-to language.

Jay used the blog to do a long-form answer to a question asked by @dseverski on the SIRA mailing list and I piled on by adding a Shiny app into the mix (both posts make for a pretty #spiffy introduction to expert-opinion risk analyses in R).

Jay continued by releasing a new honeypot data set and corresponding two-part[1,2] post series to jump start analyses on that data. (There’s a D3 geo-visualization stuck in-between those posts if you’re into that sort of thing).

I got it into my head to start a project to build a password dump analytics tool in R (with much more coming soon on that, including a full-on R package + Shiny app combo) and also continue the discussion we started in the book on the need for the infusion of reproducible research principles and practices in the information security domain by building off of @sucuri_security’s Darkleech botnet research.

You can follow along at home with the blog via its RSS feed or via the @ddsecblog Twitter account. You can also play along at home if you feel you have something to contribute. It’s as simple as a github pull request and some really straightforward markdown. Take a look at the blog’s github repo and hit me up (@hrbrmstr) for details if you’ve got something to share.

Lies, Damn Lies, “Data Journalism” and Charts That Don’t Start at 0

This tweet by @moorehn (who usually is a superb economic journalist) really bugged me:

I grabbed the raw data from EPI, properly started the graph at 0 on the y-axis, and also broke out men & women (since the Excel spreadsheet had the data). It’s a really different picture:


I’m not saying employment is great right now, but it’s nowhere near a “ski jump”. So much for the state of data journalism at the start of 2014.

Here’s the hastily crafted R-code:

library(ggplot2)
library(reshape2)  # melt()
library(ggthemes)  # theme_economist()

a <- read.csv("empvyear.csv")
b <- melt(a, id.vars="Year")
gg <- ggplot(data=b, aes(x=Year, y=value, group=variable))
gg <- gg + geom_line(aes(color=variable))
gg <- gg + ylim(0, 100)
gg <- gg + theme_economist()
gg <- gg + labs(x="Year", y="Employment as share of population (%)", 
                title="Employment-to-population ratio, age 25–54, 1975–2011")
gg <- gg + theme(legend.title = element_blank())

And, here’s the data extracted from the Excel file:


Change The Default “Shell…” Action In RStudio for OS X

RStudio is my R development environment of choice and I work primarily on/in Mac OS X. While it’s great that Apple provides a built-in Terminal application, I prefer to use iTerm 2 when I need to do work at a shell. The fine folks at RStudio provide a handy Shell… menu item off the Tools menu, but it (rightly) defaults to using Apple’s Terminal.app since they can’t assume which terminal application you are using.

To change it to iTerm (or whatever your favorite terminal application is) you need to fire up a text editor and change (make a backup first!) /Applications/ to contain the following modified AppleScript:

on run argv
  set dir to quoted form of (first item of argv)
  tell app "iTerm"
    tell the first terminal
      launch session "Default Session"
      tell the last session
        set name to "RStudio Session"
        write text "cd " & dir
      end tell
    end tell
  end tell
end run

That will open a new tab in iTerm, set the session (tab) name to “RStudio Session” and change the directory to the current working directory in RStudio.

More often than not, I’m just using Alfred to kick up iTerm and doing the cd myself, but this added tweak (which you have to do every time you upgrade RStudio) reduces the churn when I do end up using the feature within RStudio.

Points, Polygons and Power Outages

Most of my free coding time has been spent tweaking a D3-based live power outage tracker for Central Maine Power customers (there’s also a woefully less-featured Shiny app for it). There is some R associated with the D3 vis, but it’s limited to a cron job that makes the CSV files for the sparklines in the D3 vis by:

  • reading historical outage data from a database of scraped readings I’ve been keeping
  • filling out the time series
  • reducing it to an hourly time series, and
  • trimming the data set to the last 30 days of records:

# needed libraries
library(RMySQL)  # DBI + MySQL driver
library(zoo)
library(xts)     # to.hourly() / last()

# running in a cron job so no spurious text output pls
m <- dbDriver("MySQL");
con <- dbConnect(m, user='DBUSER', password='DBPASSWORD', host='localhost', dbname='DBNAME');
res <- dbSendQuery(con, "SELECT * FROM outage") # cld just pull the 2 fields I need
outages <- fetch(res, n = -1)
outages$ts <- as.POSIXct(gsub("\\:[0-9]+\\..*$","", outages$ts), format="%Y-%m-%d %H:%M")
# for each county we have data for
for (county in unique(outages$county)) {
  # get the 2 fields I need (shld prbly filter that in the SELECT)
  outage.raw <- outages[outages$county == county,c(1,4)]
  # make it a zoo object
  outage.zoo <- zoo(outage.raw$withoutpower, outage.raw$ts)
  # fill out the 15-minute readings
  complete.zoo <- merge(outage.zoo, zoo(, seq(start(outage.zoo), max(outages$ts), by="15 min")), all=TRUE)
  complete.zoo[] <- 0
  # shrink to hourly and trim at 30 days
  hourly.zoo <- last(to.hourly(complete.zoo), "30 days")
  # crank out a CSV  
  df <- data.frame(hourly.zoo)
  df <- data.frame(ts=rownames(df), withoutPower=df$complete.zoo.High)
  write.csv(df, sprintf("OUTPUT_LOCATION/%s.csv", county), row.names=FALSE)
}

I knew there were other power companies in Maine, but CMP is the largest, so I focused my attention on getting data from it. After hearing an outage update on MPBN, I decided to see if Bangor Hydro Electric had scrape-able content, and it turns out there was a lovely (well, as lovely as XML can be) XML file delivered on the page with this “meh” Google push-pin map:


The XML file is used to build the markers for the map and has marker tags that look like this:

<marker name="Exeter" outages="18" 
        lat="44.96218" lng="-69.12253" />

I’m really only tracking county-level data and BHE does not provide that, even in the huge table of street-level outages that you’ll see on that outage page. I decided to use R to aggregate the BHE data to the county level via the “point-in-polygon” method.

Getting Right To the Point

To perform the aggregation in R, I needed county-level polygons for Maine. I already had that thanks to the previous work, but I wanted to optimize the search process, so I took the US counties shapefile and used OGR from the GDAL (Geospatial Data Abstraction Library) suite to extract just the Maine county polygons:

ogr2ogr -f "ESRI Shapefile" \
        -where "STATE_NAME = 'MAINE'" maine.shp counties.shp
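
If you’d rather stay in R for that step, a roughly equivalent sketch (assuming the rgdal package is installed) looks like this:

library(rgdal)

# the same STATE_NAME filter, done in R instead of at the shell
counties.all <- readOGR(".", "counties")
maine.only   <- counties.all[counties.all$STATE_NAME == "MAINE", ]
writeOGR(maine.only, ".", "maine", driver="ESRI Shapefile")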

You can see both a reduction in file size and complexity via ogrinfo:

$ ll *shp
-rwxr-xr-x@ 1 bob  staff  1517624 Jan  8  2010 counties.shp
-rw-r--r--  1 bob  staff    12588 Dec 26 23:03 maine.shp
$ ogrinfo -sql "SELECT COUNT(*) FROM counties" counties.shp
INFO: Open of 'counties.shp'
      using driver 'ESRI Shapefile' successful.
Layer name: counties
Geometry: Polygon
Feature Count: 1
Layer SRS WKT:
COUNT_*: Integer (0.0)
  COUNT_* (Integer) = 3141
$ ogrinfo -sql "SELECT COUNT(*) FROM maine" maine.shp
INFO: Open of 'maine.shp'
      using driver 'ESRI Shapefile' successful.
Layer name: maine
Geometry: Polygon
Feature Count: 1
Layer SRS WKT:
COUNT_*: Integer (0.0)
  COUNT_* (Integer) = 16

The conversion reduces the file size from 1.5MB to ~12K and shrinks the number of polygons from 3,141 to 16. The counties.shp and maine.shp shapefiles were built from U.S. census data and have scads more information that you might want to use, perhaps for correlation with the outage info (though storms are the prime causal entity for the outages :-). A rough sketch of that idea follows the field listing below:

$ ogrinfo -al -so counties.shp
INFO: Open of 'counties.shp'
      using driver 'ESRI Shapefile' successful.
Layer name: counties
Geometry: Polygon
Feature Count: 3141
Extent: (-176.806138, 18.921786) - (-66.969271, 71.406235)
Layer SRS WKT:
NAME: String (32.0)
STATE_NAME: String (25.0)
STATE_FIPS: String (2.0)
CNTY_FIPS: String (3.0)
FIPS: String (5.0)
POP2000: Integer (9.0)
POP2007: Integer (9.0)
POP00_SQMI: Real (10.1)
POP07_SQMI: Real (7.1)
WHITE: Integer (9.0)
BLACK: Integer (9.0)
AMERI_ES: Integer (9.0)
ASIAN: Integer (9.0)
HAWN_PI: Integer (9.0)
OTHER: Integer (9.0)
MULT_RACE: Integer (9.0)
HISPANIC: Integer (9.0)
MALES: Integer (9.0)
FEMALES: Integer (9.0)
AGE_UNDER5: Integer (9.0)
AGE_5_17: Integer (9.0)
AGE_18_21: Integer (9.0)
AGE_22_29: Integer (9.0)
AGE_30_39: Integer (9.0)
AGE_40_49: Integer (9.0)
AGE_50_64: Integer (9.0)
AGE_65_UP: Integer (9.0)
MED_AGE: Real (9.1)
MED_AGE_M: Real (9.1)
MED_AGE_F: Real (9.1)
HOUSEHOLDS: Integer (9.0)
AVE_HH_SZ: Real (9.2)
HSEHLD_1_M: Integer (9.0)
HSEHLD_1_F: Integer (9.0)
MARHH_CHD: Integer (9.0)
MARHH_NO_C: Integer (9.0)
MHH_CHILD: Integer (9.0)
FHH_CHILD: Integer (9.0)
FAMILIES: Integer (9.0)
AVE_FAM_SZ: Real (9.2)
HSE_UNITS: Integer (9.0)
VACANT: Integer (9.0)
OWNER_OCC: Integer (9.0)
RENTER_OCC: Integer (9.0)
NO_FARMS97: Real (11.0)
AVG_SIZE97: Real (11.0)
CROP_ACR97: Real (11.0)
AVG_SALE97: Real (7.2)
SQMI: Real (8.1)
OID: Integer (9.0)
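
Just as a hedged sketch of that correlation idea (it assumes the maine.shp polygons plus the county-level outage counts produced by the code further down; POP2000 comes from the field list above):

library(maptools)
library(plyr)

# county polygons carry the census attributes (NAME, POP2000, ...)
counties <- readShapePoly("maine.shp", repair=TRUE, IDvar="NAME")
# 'by.county' stands in for the count() result shown later in this post
by.county <- count(doc.sub, c("county"), wt_var="outages")
# join outage counts to population and normalize to outages per 10k people
outages.pop <- merge(counties@data[, c("NAME", "POP2000")],
                     by.county, by.x="NAME", by.y="county")
outages.pop$per.10k <- 10000 * outages.pop$freq / outages.pop$POP2000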

With the new shapefile in hand, the basic workflow to get BHE outages at the county level is:

  • Read and parse the BHE outages XML file to get the lat/long pairs
  • Build a SpatialPoints object out of those pairs
  • Make a SpatialPolygonsDataFrame out of the reduced Maine counties shapefile
  • Overlay the points in the polygons and get the feature metadata intersection (including county)
  • Aggregate the outage data

The R code (below) does all that and is liberally commented. One has to appreciate how succinct the XML parsing is and the beautiful simplicity of the over() function (which does all the really hard work).

# needed libraries
library(XML)
library(sp)
library(maptools)  # readShapePoly()
library(plyr)      # count()

# Small script to get county-level outage info from Bangor Hydro
# Electric's town(-ish) level info
# BHE's outage google push-pin map is at
# read BHE outage XML file that was intended for the google map
# yep. One. Line. #takethatpython
doc <- xmlTreeParse("", useInternalNodes=TRUE)
# xmlToDataFrame() coughed up blood on that simple file, so we have to
# resort to menial labor to bend the XML to our will
doc.ls <- xmlToList(doc)
doc.attrs <- doc.ls$.attrs
doc.ls$.attrs <- NULL
# this does the data frame conversion magic, tho it winces a bit
suppressWarnings(doc.df <- data.frame(do.call(rbind, doc.ls),
                                      stringsAsFactors=FALSE))
# need numbers for some of the columns (vs strings)
doc.df$outages <- as.numeric(doc.df$outages)
doc.df$lat <- as.numeric(doc.df$lat)
doc.df$lng <- as.numeric(doc.df$lng)
# SpatialPoints likes matrices, note that it's in LON, LAT order
# that always messes me up for some reason
doc.m <- as.matrix(doc.df[,c(4,3)])
doc.pts <- SpatialPoints(doc.m)
# I trimmed down the country-wide counties file from
# with
#   ogr2ogr -f "ESRI Shapefile" -where "STATE_NAME = 'MAINE'" maine.shp counties.shp
# to both save load time and reduce the number of iterations for over() later
counties <- readShapePoly("maine.shp", repair=TRUE, IDvar="NAME")
# So, all the above was pretty much just for this next line which does  
# the "is this point 'a' in polygon 'b'" test automagically for us.
found.pts <- over(doc.pts, counties)
# steal the column we need (county name) and squirrel it away with outage count
doc.df$county <- found.pts$NAME
doc.sub <- doc.df[,c(2,7)]
# aggregate the result to get outage count by county
count(doc.sub, c("county"), wt_var="outages")
##      county freq
##1    Hancock 4440
##2  Penobscot  869
##3      Waldo   28
##4 Washington  545
##5       <NA>  328

Astute readers will notice unresolved points (the NAs). I suspect they are right on coastal boundaries that were probably missed in these simplified county polygons. We can see what they are by looking at the NA entries in the merged data frame:

           name outages      lat       lng
35    Deer Isle       1 44.22451 -68.67778
38   Harborside     166 44.34900 -68.81555
39     Sorrento      43 44.47341 -68.17723
62    Bucksport      71 44.57369 -68.79562
70    Penobscot      40 44.44552 -68.81780
78      Bernard       1 44.24119 -68.35585
79   Stonington       5 44.15619 -68.66669
80 Roque Bluffs       1 44.61286 -67.47971

But a map would be more useful for those not familiar with Maine geography/extents:


library(ggplot2)
library(mapproj)  # coord_map() needs it for projections

ff <- fortify(counties, region = "NAME")
missing <- doc.df[$county),]
gg <- ggplot(ff, aes(x = long, y = lat))
gg <- gg + geom_path(aes(group = group), size=0.15, fill="black")
gg <- gg + geom_point(data=missing, aes(x=lng, y=lat), 
                      color="#feb24c", size=3)
gg <- gg + coord_map(xlim=extendrange(range(missing$lng)), ylim=extendrange(range(missing$lat)))
gg <- gg + theme_bw()
gg <- gg + labs(x="", y="")
gg <- gg + theme(plot.background = element_rect(fill = "transparent",colour = NA),
                 panel.border = element_blank(),
                 panel.background =element_rect(fill = "transparent",colour = NA),
                 panel.grid = element_blank(),
                 axis.text = element_blank(),
                 axis.ticks = element_blank())
gg

The “zoom in” is done by taking and slightly extending the range of the extracted points via range() and extendrange(), reproduced below:

range(missing$lng)
[1] -68.81780 -67.47971
range(missing$lat)
[1] 44.15619 44.61286
extendrange(range(missing$lng))
[1] -68.88470 -67.41281
extendrange(range(missing$lat))
[1] 44.13336 44.63569

It turns out my suspicion was right, so to use this in “production” I’ll need a more accurate shapefile for Maine counties (which I have, but Descent is calling me, so it will have to wait for another day).

I’ll leave you with a non-Google push-pin map of outages that you can build upon (it needs some tweaking):


gg <- ggplot(ff, aes(x = long, y = lat))
gg <- gg + geom_polygon(aes(group = group), size=0.15, fill="black", color="#7f7f7f")
gg <- gg + geom_point(data=doc.df, aes(x=lng, y=lat, alpha=outages, size=outages))
gg <- gg + coord_map(xlim=c(-71.5,-66.75), ylim=c(43,47.5))
gg <- gg + theme_bw()
gg <- gg + labs(x="", y="")
gg <- gg + theme(plot.background = element_rect(fill = "transparent",colour = NA),
                 panel.border = element_blank(),
                 panel.background =element_rect(fill = "transparent",colour = NA),
                 panel.grid = element_blank(),
                 axis.text = element_blank(),
                 axis.ticks = element_blank())
gg

You can find all the R code in one, compact gist.

Mapping Power Outages in Maine Dynamically with Shiny/R

I decided to forego the D3 map mentioned in the previous post in favor of a Shiny one since I had 90% of the mapping code written.

I binned the ranges into three groups, changed the color over to something more pleasant (with RColorBrewer), added an interactive table for the counties with outages, and have the elements updating every minute.
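
For the curious, the binning/palette idea is roughly the sketch below (the data frame name, break points and palette name are illustrative choices, not necessarily what the live app uses):

library(RColorBrewer)

# assuming a data frame 'outages.df' with a per-county 'outages' count column:
# cut the counts into three bins and map them onto a sequential palette
outages.df$bin  <- cut(outages.df$outages, breaks=c(0, 100, 1000, Inf),
                       labels=c("1-100", "101-1,000", "1,000+"))
bin.palette <- brewer.pal(3, "YlOrRd")
outages.df$bin.color <- bin.palette[outages.df$bin]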

You can see the Live Outage Map over at its live Shiny server. Source is below or over at github if you’ve got blockers enabled.
