Archive for the ‘R’ Category

Charting/Mapping the Scottish Vote with R (an rvest/dplyr/tidyr/TopoJSON/ggplot tutorial)

The BBC did a pretty good job live-tracking the Scotland secession vote, but I really didn’t like the color scheme they chose, so I decided to use the final tally site as the basis for another tutorial using tools from the Hadleyverse. It also takes advantage of the fact that newer gdal libraries can read TopoJSON/GeoJSON files, meaning we can use most of the maps the D3-ers create/use right in R.

We’ll need a few R packages to help us get, clean, format and chart the data:

library(httr)  # >0.5
library(rvest)
library(dplyr)
library(tidyr)
library(ggplot2)
library(gridExtra)
library(grid)
library(rgdal) # needs gdal > 1.11.0

The new rvest package makes it super-fun (and easy) to get data out of web pages (as I’ve mentioned on the sister blog), but said data is still web page data, usually geared towards making things render well in a browser, and we end up having to clean up the extracted fields to get useful data. Since we usually want a data frame from the extraction, an rvest idiom I’ve been playing with involves bundling the element extraction & cleanup code into one function and then using that to build the columns:

# extract data from rvest-ed <div>'s and clean it up a bit
# pass in the rvested HTML object and the CSS selector to extract, also 
# indicating whether we want a number or character vector returned
extractAndCleanup <- function(data, selector, make_numeric=FALSE) {
  x <- data %>% html_nodes(selector) %>% html_text()
  x <- gsub("^[[:punct:][:space:]]*|[[:punct:][:space:]]*$", "", x)
  if (make_numeric) x <- as.numeric(gsub("[,[:space:]]*", "", x))
  x
}

bbc_vote <- html("") # the BBC final-tally results URL goes here
secede <- data.frame(
  council=bbc_vote %>% extractAndCleanup(".body-row__cell--council"),
  electorate=bbc_vote %>% extractAndCleanup(".body-row__cell--electorate", TRUE),
  yes=bbc_vote %>% extractAndCleanup(".body-row__cell--yes", TRUE),
  no=bbc_vote %>% extractAndCleanup(".body-row__cell--no", TRUE),
  stringsAsFactors=FALSE)
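Since the cleanup regex does double duty (trimming leading/trailing punctuation & whitespace, with a second pass stripping thousands separators before numeric conversion), a quick sanity check on some made-up scraped strings shows what it does:

```r
# same cleanup regex as in extractAndCleanup() above
clean <- function(x) gsub("^[[:punct:][:space:]]*|[[:punct:][:space:]]*$", "", x)

clean("  Aberdeen City. ")
# "Aberdeen City"

# interior commas survive the trim, so numbers need the second gsub
as.numeric(gsub("[,[:space:]]*", "", clean(" 175,745 ")))
# 175745
```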

We can then compute whether the vote tally was to secede or not and assign a color in the event we choose to use base graphics for plotting (we won’t for this tutorial). I chose a muted version of the Union Jack red and the official Scottish blue for this exercise.

secede <- secede %>% mutate(gone=yes>no,
                            color=ifelse(gone, "#0065BD", "#CF142B77"))
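The gone/color logic is just a vectorized comparison plus ifelse(); on a couple of hypothetical tallies:

```r
yes <- c(27243, 53342)  # made-up vote counts
no  <- c(37153, 43903)
gone <- yes > no                      # TRUE where "Yes" won
ifelse(gone, "#0065BD", "#CF142B77")  # "#CF142B77" "#0065BD"
```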

Getting the map from the BBC site is just as simple. An inspection of the site in any decent browser with a “Developer” mode lets us see the elements being downloaded. For the BBC map, it reads the data from a single URL (elided here), which is a TopoJSON object wrapped in two lines of extra JavaScript code. We’ll grab that file, clean it up and read the map into R using httr‘s new-ish ability to save a response straight to disk:

# map_url is the TopoJSON URL noted above
GET(map_url, write_disk("data/scotland.json"), progress())
tmp <- readLines("data/scotland.json")
writeLines(tmp[2], "data/scotland.json") # keep just the TopoJSON line
map <- readOGR("data/scotland.json", "scotland-elections")

We’ll want to work with the map using Council names, so we need to ensure the names from the extracted div elements match what’s in the TopoJSON file:

secede$council %in% map@data$name

It looks like we’ll need to clean the names up a bit, but thankfully the names aren’t too far off:

secede$council <- gsub("&", "and", secede$council)
secede[secede$council=="Edinburgh",]$council = "City of Edinburgh"
secede[secede$council=="Glasgow",]$council = "Glasgow City"
secede[secede$council=="Comhairle nan Eilean Siar",]$council = "Na h-Eileanan an Iar"
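The same fix-ups on a toy vector (hypothetical values) show the pattern: normalize the easy cases with gsub(), then patch the stragglers by direct assignment:

```r
council <- c("Dumfries & Galloway", "Edinburgh", "Glasgow")
council <- gsub("&", "and", council)
council[council == "Edinburgh"] <- "City of Edinburgh"
council[council == "Glasgow"]   <- "Glasgow City"
council
# "Dumfries and Galloway" "City of Edinburgh" "Glasgow City"
```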

If we were using base graphics for plotting, we’d also have to ensure the data was in the right order:

secede$council <- factor(secede$council, map@data$name, ordered=TRUE)
secede <- secede %>% arrange(council)
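That factor() + arrange() pairing is the standard R idiom for imposing an external ordering on data; a minimal base-R illustration (toy names):

```r
x <- c("Fife", "Angus", "Moray")                           # data order
f <- factor(x, c("Moray", "Fife", "Angus"), ordered=TRUE)  # desired (map) order
x[order(f)]
# "Moray" "Fife" "Angus"
```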

We’re going to use ggplot for the mapping portion, but the normal fortify process didn’t work on this TopoJSON file (some polygon errors emerged), so we’ll take another route and do the data Council name↔id mapping after the fortify call and merge the rest of our data into the map data frame:

map_df <- fortify(map)
# manually associate the map id's with the Council names and vote data
councils <- data.frame(id=0:(length(map@data$name)-1),
                       council=map@data$name)
map_df <- merge(map_df, councils, by="id")
map_df <- merge(map_df, secede, by="council")
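merge() joins on the shared column and (by default) sorts the result by the join key; with toy stand-ins for the two merges above:

```r
councils <- data.frame(id=0:2, council=c("Fife", "Moray", "Angus"))
votes    <- data.frame(council=c("Angus", "Fife", "Moray"), yes=c(100, 200, 300))
merge(councils, votes, by="council")
#   council id yes
# 1   Angus  2 100
# 2    Fife  0 200
# 3   Moray  1 300
```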

Now we can generate the choropleth:

gg <- ggplot()
gg <- gg + geom_map(data=map_df, map=map_df,
                    aes(map_id=id, x=long, y=lat, group=group, fill=color),
                    color="white", size=0.25)
gg <- gg + scale_fill_manual(values=rev(unique(secede$color)),
                             labels=c("Yes", "No"), name="Secede?")
gg <- gg + xlim(extendrange(r=range(coordinates(map)[,1]), f=0.15))
gg <- gg + ylim(extendrange(r=range(coordinates(map)[,2]), f=0.07))
gg <- gg + coord_map()
gg <- gg + labs(x="", y="")
gg <- gg + theme_bw()
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(legend.position="none")
gg <- gg + theme(panel.border=element_blank())
gg <- gg + theme(axis.ticks=element_blank())
gg <- gg + theme(axis.text=element_blank())

A choropleth is all well and good but, since we have the data, let’s add the bar chart to complete the presentation. We’ll combine some dplyr and tidyr calls to melt and subset our data frame:

secede_m <- secede %>%
  gather(variable, value, -council) %>%
  filter(variable %in% c("yes", "no"))

For this exercise, we’ll plot the 100% stacked bars in order of the “No” votes, and we’ll pre-process this ordering to make the ggplot code easier on the eyes. We start by merging some data back into our melted data frame so we can build the sorted factor by the “No” value column and then make sure the Councils will be in that order:

secede_m <- merge(secede_m, secede, by="council")
secede_m$variable <- factor(secede_m$variable,
                            levels=c("yes", "no"), ordered=TRUE)
secede_m <- secede_m %>% arrange(no, variable)
secede_m$council <- factor(secede_m$council,
                           unique(secede_m$council), ordered=TRUE)
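The sort-then-unique() trick above generalizes: order the frame by the driving column, then freeze that order into the factor levels. In miniature (made-up values):

```r
df <- data.frame(council=c("A", "B", "C"), no=c(5, 2, 9))
df <- df[order(df$no), ]                                   # sort by "No" votes
df$council <- factor(df$council, unique(df$council), ordered=TRUE)
levels(df$council)
# "B" "A" "C"
```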

Finally, we can create the 100% stacked bar plot and combine it with the choropleth to build the final product:

gg1 <- ggplot(secede_m, aes(x=council, y=value, fill=factor(variable)))
gg1 <- gg1 + geom_bar(stat="identity", position="fill")
gg1 <- gg1 + scale_fill_manual(values=rev(unique(secede$color)),
                             labels=c("Yes", "No"), name="Secede?")
gg1 <- gg1 + coord_flip()
gg1 <- gg1 + labs(x="", y="")
gg1 <- gg1 + theme_bw()
gg1 <- gg1 + theme(panel.grid=element_blank())
gg1 <- gg1 + theme(legend.position="top")
gg1 <- gg1 + theme(panel.border=element_blank())
gg1 <- gg1 + theme(axis.ticks=element_blank())
gg1 <- gg1 + theme(axis.text.x=element_blank())
vote <- arrangeGrob(gg1, gg, ncol=2,
                     main=textGrob("Scotland Votes", gp=gpar(fontsize=20)))


I’ve bundled this code up into its own github repo. The full project example has a few extra features, as

  • it shows how to save the resultant data frame to an R data file (in case the BBC nukes the site)
  • also saves the cleaned-up JSON (getting minimal Scotland shapefiles is tricky so this one’s a keeper even with the polygon errors)
  • wraps all that in if statements so future analysis/vis can work with or without the live data being available.

Hadley really has to stop making R so fun to work with :-)

R version of “An exploratory technique for visualizing the distributions of 100 variables:”

Rick Wicklin (@RickWicklin) made a recent post to the SAS blog on An exploratory technique for visualizing the distributions of 100 variables. It’s a very succinct tutorial on both the power of boxplots and how to make them in SAS (of course). I’m not one to let R be “out-boxed”, so I threw together a quick re-creation of his example, mostly as tutorial for any nascent R folks that come across it. (As an aside, I catch Rick’s and other cool, non-R stuff via the Stats Blogs blog aggregator.)

The R implementation (syntax notwithstanding) is extremely similar. First, we’ll need some packages to assist with data reshaping and pretty plotting:

library(reshape2) # for melt()
library(ggplot2)
Then, we set up a list so we can pick from the same four distributions and set the random seed to make this example reproducible:

set.seed(1492) # seed value is arbitrary; any fixed seed makes this reproducible
dists <- c(rnorm, rexp, rlnorm, runif)

Now, we generate a data frame of the 100 variables with 1,000 observations, normalized from 0-1:

many_vars <- data.frame(sapply(1:100, function(x) {
  # generate 1,000 random samples
  tmp <- sample(dists, 1)[[1]](1000)
  # normalize them to be between 0 & 1
  (tmp - min(tmp)) / (max(tmp) - min(tmp))
}))

The sapply iterates over the numbers 1 through 100, passing each number into a function. Each iteration samples an object from the dists list (which are actual R functions) and then calls the function, telling it to generate 1,000 samples and normalize the result to be values between 0 & 1. By default, R will generate column names that begin with X:

str(many_vars[1:5]) # show the structure of the first 5 cols
## 'data.frame':    1000 obs. of  5 variables:
##  $ X1: num  0.1768 0.4173 0.5111 0.0319 0.0644 ...
##  $ X2: num  0.217 0.275 0.596 0.785 0.825 ...
##  $ X3: num  0.458 0.637 0.115 0.468 0.469 ...
##  $ X4: num  0.5186 0.0358 0.5927 0.1138 0.1514 ...
##  $ X5: num  0.2855 0.0786 0.2193 0.433 0.9634 ...
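The pick-a-function-from-a-list idiom is worth seeing in isolation (the seed here is arbitrary, just to make the snippet deterministic):

```r
dists <- c(rnorm, rexp, rlnorm, runif)  # c() on functions builds a list
set.seed(2)
f <- sample(dists, 1)[[1]]  # randomly choose one generator function...
x <- f(1000)                # ...and call it for 1,000 samples
x01 <- (x - min(x)) / (max(x) - min(x))
range(x01)
# 0 1
```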

We’re going to plot the boxplots, sorted by the third quartile (just like in Rick’s example), so we’ll calculate their rank and then use those ranks (shortly) to order a factor variable:

ranks <- names(sort(rank(sapply(colnames(many_vars), function(x) {
  as.numeric(quantile(many_vars[,x], 0.75))
}))))

There’s a lot going on in there. We pass the column names from the many_vars data frame to a function that will return the quantile we want. Since sapply preserves the names we passed in as well as the values, we extract them (via names) after we rank and sort the named vector, giving us a character vector in the order we’ll need:

##  chr [1:100] "X29" "X8" "X92" "X43" "X11" "X52" "X34" ...
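The name-preserving behavior that makes this work can be seen on a tiny named vector:

```r
v <- sapply(c(a=1, b=3, c=2), function(x) x * 10)  # sapply keeps the names
names(sort(rank(v)))
# "a" "c" "b"
```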

Just like in the SAS post, we’ll need to reshape the data into long format from wide format, which we can do with melt:

many_vars_m <- melt(as.matrix(many_vars))
str(many_vars_m)
## 'data.frame':    100000 obs. of  3 variables:
##  $ Var1 : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ Var2 : Factor w/ 100 levels "X1","X2","X3",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ value: num  0.1768 0.4173 0.5111 0.0319 0.0644 ...
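If you want to see exactly what melt() is doing without reaching for reshape2, the same wide→long reshape can be sketched in base R:

```r
wide <- data.frame(X1=c(0.1, 0.2), X2=c(0.3, 0.4))
long <- data.frame(
  Var1  = rep(seq_len(nrow(wide)), times=ncol(wide)),  # row index
  Var2  = rep(names(wide), each=nrow(wide)),           # source column name
  value = unlist(wide, use.names=FALSE)
)
long
#   Var1 Var2 value
# 1    1   X1   0.1
# 2    2   X1   0.2
# 3    1   X2   0.3
# 4    2   X2   0.4
```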

And, now we’ll use our ordered column names to ensure that our boxplots will be presented in the right order (it would be in alpha order if not). Factor variables in R are space-efficient and allow for handy manipulations like this (amongst other things). By default, many_vars_m$Var2 was in alpha order and this call just re-orders that factor.

many_vars_m$Var2 <- factor(many_vars_m$Var2, ranks)
str(many_vars_m)
## 'data.frame':    100000 obs. of  3 variables:
##  $ Var1 : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ Var2 : Factor w/ 100 levels "X29","X8","X92",..: 24 24 24 24 24 24 24 24 24 24 ...
##  $ value: num  0.1768 0.4173 0.5111 0.0319 0.0644 ...

Lastly, we plot all our hard work:

gg <- ggplot(many_vars_m, aes(x=Var2, y=value))
gg <- gg + geom_boxplot(fill="#BDD7E7", notch=TRUE, outlier.size=1)
gg <- gg + labs(x="")
gg <- gg + theme_bw()
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(axis.text.x=element_text(angle=-45, hjust=0.001, size=5))


Here’s the program in its entirety:

library(reshape2)
library(ggplot2)

set.seed(1492) # arbitrary fixed seed for reproducibility
dists <- c(rnorm, rexp, rlnorm, runif)
many_vars <- data.frame(sapply(1:100, function(x) {
  tmp <- sample(dists, 1)[[1]](1000)
  (tmp - min(tmp)) / (max(tmp) - min(tmp))
}))
ranks <- names(sort(rank(sapply(colnames(many_vars), function(x) {
  as.numeric(quantile(many_vars[,x], 0.75))
}))))
many_vars_m <- melt(as.matrix(many_vars))
many_vars_m$Var2 <- factor(many_vars_m$Var2, ranks)
gg <- ggplot(many_vars_m, aes(x=Var2, y=value))
gg <- gg + geom_boxplot(fill="steelblue", notch=TRUE, outlier.size=1)
gg <- gg + labs(x="")
gg <- gg + theme_bw()
gg <- gg + theme(panel.grid=element_blank())
gg <- gg + theme(axis.text.x=element_text(angle=-45, hjust=0.001))

I tweaked the boxplot, using a notch and making the outliers take up fewer pixels.

I’m definitely in agreement with Rick that this is an excellent way to compare many distributions.

Bonus points for the commenter who shows code to color the bars by which distribution generated them!

Rforecastio Package Update (1.1.0)

I’ve bumped up the version number of Rforecastio (github) to 1.1.0. The new features are:

  • removing the SSL certificate bypass check (it doesn’t need it anymore)
  • using plyr for easier conversion of JSON->data frame
  • adding in a new daily forecast data frame
  • roxygen2 inline documentation
# NEVER put API keys in revision control systems or source code!
fio.api.key= readLines("~/")
my.latitude = "43.2673"
my.longitude = "-70.8618"
fio.list <- fio.forecast(fio.api.key, my.latitude, my.longitude)

gg <- ggplot(data=fio.list$hourly.df, aes(x=time, y=temperature))
gg <- gg + labs(y="Readings", x="Time", title="Hourly Readings")
gg <- gg + geom_line(aes(y=humidity*100), color="green")
gg <- gg + geom_line(aes(y=temperature), color="red")
gg <- gg + geom_line(aes(y=dewPoint), color="blue")
gg <- gg + theme_bw()

gg.daily <- ggplot(data=fio.list$daily.df, aes(x=time, y=temperature))
gg.daily <- gg.daily + labs(y="Readings", x="Time", title="Daily Readings")
gg.daily <- gg.daily + geom_line(aes(y=humidity*100), color="green")
gg.daily <- gg.daily + geom_line(aes(y=temperatureMax), color="red")
gg.daily <- gg.daily + geom_line(aes(y=temperatureMin), color="red", linetype=2)
gg.daily <- gg.daily + geom_line(aes(y=dewPoint), color="blue")
gg.daily <- gg.daily + theme_bw()


Moving From system() calls to Rcpp Interfaces

Over on the Data Driven Security Blog there’s a post on how to use Rcpp to interface with an external library (in this case ldns for DNS lookups). It builds on another post which uses system() to make a call to dig to lookup DNS TXT records.

The core code is below and at both the aforementioned blog post and this gist. The post walks you through creating a simple interface, and a future post will cover how to build a full package interface to an external library.

Mapping the March 2014 California Earthquake with ggmap

I had no intention to blog this, but @jayjacobs convinced me otherwise. I was curious about the recent (end of March, 2014) California earthquake “storm” and did a quick plot for “fun” and personal use using ggmap/ggplot.

I used data from the Southern California Earthquake Center (that I cleaned up a bit and that you can find here) but would have used the USGS quake data if the site hadn’t been down when I tried to get it from there.

The code/process isn’t exactly rocket-science, but if you’re looking for a simple way to layer some data on a “real” map (vs handling shapefiles on your own) then this is a really compact/self-contained tutorial/example.

You can find the code & data over at github as well.

There’s lots of ‘splainin in the comments (which are prbly easier to read on the github site) but drop a note in the comments or on Twitter if it needs any further explanation. The graphic is SVG, so use a proper browser :-) or run the code in R if you can’t see it here.


# read in cleaned up data
dat <- read.table("quakes.dat", header=TRUE, stringsAsFactors=FALSE)
# map decimal magnitudes into an integer range
dat$m <- cut(dat$MAG, c(0:10))
# convert to dates
dat$DATE <- as.Date(dat$DATE)
# so we can re-order the data frame
dat <- dat[order(dat$DATE),]
# not 100% necessary, but get just the numeric portion of the cut factor
dat$Magnitude <- factor(as.numeric(dat$m))
# sum up by date for the barplot
dat.sum <- count(dat, .(DATE, Magnitude))
# start the ggmap bit
# It's super-handy that it understands things like "Los Angeles" #spoffy
# I like the 'toner' version. Would also use a stamen map but I can't get 
# to it consistently from behind a proxy server
la <- get_map(location="Los Angeles", zoom=10, color="bw", maptype="toner")
# get base map layer
gg <- ggmap(la) 
# add points. Note that the plot will produce warnings for all points not in the
# lat/lon range of the base map layer. Also note that i'm encoding magnitude by
# size and color and using alpha for depth. because of the way the data is sorted
# the most recent quakes in the set should be on top
gg <- gg + geom_point(data=dat,
                      mapping=aes(x=LON, y=LAT, 
                                  size=MAG, fill=m, alpha=DEPTH), shape=21, color="black")
# this takes the magnitude domain and maps it to a better range of values (IMO)
gg <- gg + scale_size_continuous(range=c(1,15))
# this bit makes the right size color ramp. i like the reversed view better for this map
gg <- gg + scale_fill_manual(values=rev(terrain.colors(length(levels(dat$Magnitude)))))
gg <- gg + ggtitle("Recent Earthquakes in CA & NV")
# no need for a legend as the bars are pretty much the legend
gg <- gg + theme(legend.position="none")
# now for the bars. we work with the summarized data frame
gg.1 <- ggplot(dat.sum, aes(x=DATE, y=freq, group=Magnitude))
# normally, i dislike stacked bar charts, but this is one time i think they work well
gg.1 <- gg.1 + geom_bar(aes(fill=Magnitude), position="stack", stat="identity")
# fancy, schmanzy color mapping again
gg.1 <- gg.1 + scale_fill_manual(values=rev(terrain.colors(length(levels(dat$Magnitude)))))
# show the data source!
gg.1 <- gg.1 + labs(x="Data from:", y="Quake Count")
gg.1 <- gg.1 + theme_bw() #stopthegray
# use grid.arrange to make the sizes work well
grid.arrange(gg, gg.1, nrow=2, ncol=1, heights=c(3,1))
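The cut() call near the top of the script is what bins the decimal magnitudes into integer ranges; in isolation, with toy magnitudes:

```r
mag <- c(2.3, 4.7, 1.1)
m <- cut(mag, c(0:10))      # bins (0,1], (1,2], ... (9,10]
as.numeric(m)               # integer bin index: 3 5 2
levels(m)[as.numeric(m)]    # "(2,3]" "(4,5]" "(1,2]"
```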

Guardian Words: Visualized

Andy Kirk (@visualisingdata) & Lynn Cherny (@arnicas) tweeted about the Guardian Word Count service/archive site, lamenting the lack of visualizations:

This gave me a chance to bust out another Shiny app over on our Data Driven Security shiny server:

I used my trusty “Google-Drive-spreadsheet-IMPORTHTML-to-CSV” workflow (you can access the automagically updated data here) to make the CSV that updates daily on the site and is referenced by the Shiny/R code.

The code has been gist-ified, and I’ll be re-visiting it to refactor the data.frame creation bits and add some more charts as the data set gets larger.

(Don’t forget to take a peek at our new book, Data-Driven Security!)

Using Twitter as a Data Source For Monitoring Password Dumps

I shot a quick post over at the Data Driven Security blog explaining how to separate Twitter data gathering from R code via the Ruby t (github repo) command. Using t frees R code from having to be a Twitter processor and lets the analyst focus on analysis and visualization, plus you can use t as a substitute for Twitter GUIs if you’d rather play at the command-line:

$ t timeline ddsecblog
   Monitoring Credential Dumps Plus Using Twitter As a Data Source
   Nice intro to R + stats // Data Analysis and Statistical Inference free @datacamp_com course
   Very accessible paper & cool approach to detection // Nazca: Detecting Malware Distribution in
   Large-Scale Networks
   Start of a new series by new contributing blogger @spttnnh! // @AlienVault rep db Longitudinal
   Study Part 1 :

The DDSec post shows how to mine the well-formatted output from the @dumpmon Twitter bot to visualize dump trends over time:

and has the code in-line and over at the DDSec github repo [R].

One More (Yet-another?) Olympic Medal Live-tracking Shiny App

I’m posting this mostly to show how to:

  • use the Google spreadsheet data-munging “hack” from the previous post in a Shiny context
  • include it seamlessly into a web page, and
  • run it locally without a great deal of wrangling

The code for the app is in this gist. It is unsurprisingly just like some spiffy other code you’ve seen apart from my aesthetic choices (Sochi blue! lines+dots! and, current rankings next to country names).

I won’t regurgitate the code here since it’s just as easy to view on github. You’re seeing the live results of the app below (unless you’ve been more conservative than most folks with your browser security settings),

but the app is actually hosted over at Data Driven Security, a blog and (woefully underpowered so reload if it coughs up blood, pls) Shiny server that I run with @jayjacobs. It appears in this WordPress post with the help of an IFRAME. It’s essentially the same technique the RStudio/Shiny folks use in many of their own examples.

The app uses bootstrapPage() to help make a more responsive layout which will react nicely in an IFRAME setting (since you won’t know the width of the browser area you’re trying to fit the Shiny output into).

In the ui.R file, I have the plotOutput() configured to scale to 100% of container width:

plotOutput("medalsPlot", width="100%")

and then create a seamless IFRAME that also sizes to max-width:

<iframe src="" width="100%" frameborder="0" scrolling="no"></iframe>

The really cool part (IMO) about many Shiny apps is that you don’t need to rely on the external server to work with the visualization/output. Provided that:

  • the authors have coded their app to support local execution…
  • and presented the necessary ui.R, server.R, global.R, HTML/CSS & data files either as a github gist or a zip/gz/tar.gz file…
  • and you have the necessary libraries installed

then, you can start the app with a simple Rscript one-liner:

Rscript -e "shiny::runGist(8949172, launch.browser=TRUE)"


Rscript -e "shiny::runUrl('', launch.browser=TRUE)"

There is some danger doing this if you haven’t read through the R code prior, since it’s possible to stick some fairly malicious operations in an R script (hey, I’m an infosec professional, so we’re always paranoid :-). But, if you stick with using a gist and do examine the code, you should be fine.

Live Google Spreadsheet For Keeping Track Of Sochi Medals

The “medals” R post by TRinker and its re-blog by Revolutions were both spiffy, and a live example of why there’s no point in not publishing raw data.

You don’t need to have R (or any other language) do the scraping, though. The “IMPORTHTML” function (yes, function names seem to be ALL CAPS now over at Google Drive) in Google Drive Spreadsheets can easily do the scraping with just a simple:

=IMPORTHTML("<medals page URL>", "table", 1)
that will refresh on demand and every hour.

Here’s a live URL that will give back a CSV of the results which can easily be used in R thusly:

library(RCurl) # for getURL()

sochi.medals.URL = "" # the published-CSV URL (see below) goes here
medals <- read.csv(textConnection(getURL(sochi.medals.URL)), 
                   stringsAsFactors = FALSE)
str(medals)
'data.frame':  89 obs. of  6 variables:
$ Rank   : chr  "1" "2" "3" "4" ...
$ Country: chr  "Norway" "Canada" "Netherlands" "United States" ...
$ Gold   : int  4 4 3 2 2 1 1 1 1 1 ...
$ Silver : int  3 3 2 1 0 3 2 0 0 0 ...
$ Bronze : int  4 2 3 3 0 3 0 1 0 0 ...
$ Total  : int  11 9 8 6 2 7 3 2 1 1 ...
   Rank                                   Country Gold Silver Bronze Total
1     1                                    Norway    4      3      4    11
2     2                                    Canada    4      3      2     9
3     3                               Netherlands    3      2      3     8
4     4                             United States    2      1      3     6
5     5                                   Germany    2      0      0     2
6     6                              Russian Fed.    1      3      3     7
7     7                                   Austria    1      2      0     3
8     8                                    France    1      0      1     2
9    =9                                   Belarus    1      0      0     1
10   =9                                     Korea    1      0      0     1
11   =9                                    Poland    1      0      0     1
12   =9                                  Slovakia    1      0      0     1
13   =9                               Switzerland    1      0      0     1
14   14                                    Sweden    0      3      1     4
15   15                            Czech Republic    0      2      1     3
16   16                                  Slovenia    0      1      2     3
17   17                                     Italy    0      1      1     2
18  =18                                     China    0      1      0     1
19  =18                                   Finland    0      1      0     1
20  =20                             Great Britain    0      0      1     1
21  =20                                   Ukraine    0      0      1     1
22    -                                   Albania    0      0      0     0
23    -                                   Andorra    0      0      0     0
24    -                                 Argentina    0      0      0     0
25    -                                   Armenia    0      0      0     0
87    -                             Virgin Isl, B    0      0      0     0
88    -                            Virgin Isl, US    0      0      0     0
89    -                                  Zimbabwe    0      0      0     0

Which frees you up from dealing with the scraping and lets you focus solely on the data.
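The read.csv(textConnection(...)) pattern works on any in-memory CSV text, so here’s a self-contained miniature (rows copied from the table above):

```r
csv_text <- "Rank,Country,Gold,Silver,Bronze,Total
1,Norway,4,3,4,11
2,Canada,4,3,2,9"
medals <- read.csv(textConnection(csv_text), stringsAsFactors=FALSE)
medals$Total
# 11 9
```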

You can set it up in your own Google Docs as well; just make sure to publish the spreadsheet to the web (with ‘everyone’ read permissions), strip off the pubhtml at the end of the published URL and add export?format=csv&gid=0 in its place.
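That URL surgery is a one-line sub(); the key below is a placeholder, not a real spreadsheet:

```r
pub_url <- "https://docs.google.com/spreadsheets/d/SOMEKEY/pubhtml"
csv_url <- sub("pubhtml$", "export?format=csv&gid=0", pub_url)
csv_url
# "https://docs.google.com/spreadsheets/d/SOMEKEY/export?format=csv&gid=0"
```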

Data Driven Security Roundup: betaPERT, Shiny, Honeypots, Passwords & Reproducible Research

Jay Jacobs (@jayjacobs)—my co-author of the soon-to-be-released book Data-Driven Security—& I have been hard at work over at the book’s sister-blog cranking out code to help security domain experts delve into the dark art of data science.

We’ve covered quite a bit of ground since January 1st, but I’m using this post to focus more on what we’ve produced using R, since that’s our go-to language.

Jay used the blog to do a long-form answer to a question asked by @dseverski on the SIRA mailing list and I piled on by adding a Shiny app into the mix (both posts make for a pretty #spiffy introduction to expert-opinion risk analyses in R).

Jay continued by releasing a new honeypot data set and corresponding two-part[1,2] post series to jump start analyses on that data. (There’s a D3 geo-visualization stuck in-between those posts if you’re into that sort of thing).

I got it into my head to start a project to build a password dump analytics tool in R (with much more coming soon on that, including a full-on R package + Shiny app combo) and also continue the discussion we started in the book on the need for the infusion of reproducible research principles and practices in the information security domain by building off of @sucuri_security’s Darkleech botnet research.

You can follow along at home with the blog via its RSS feed or via the @ddsecblog Twitter account. You can also play along at home if you feel you have something to contribute. It’s as simple as a github pull request and some really straightforward markdown. Take a look at the blog’s github repo and hit me up (@hrbrmstr) for details if you’ve got something to share.
