Not even going to put an `R` category on this since I don’t want to pollute R-bloggers with this tiny post, but I had to provide the option to let folks specify `ssl.verifypeer=FALSE` (so I made it a generic option to pass in any CURL parameters), and I also fixed a couple of gaping bugs that I had missed by not clearing out my environment before building & testing.
I’ve bumped up the version number of `Rforecastio` ([github](https://github.com/hrbrmstr/Rforecastio)) to `1.1.0`. The new
features are:
– removing the SSL certificate bypass check (it doesn’t need it
anymore)
– using `plyr` for easier conversion of JSON->data frame
– adding in a new `daily` forecast data frame
– roxygen2 inline documentation
```r
library(Rforecastio)
library(ggplot2)
library(plyr)

# NEVER put API keys in revision control systems or source code!
fio.api.key <- readLines("~/.forecast.io")

my.latitude <- "43.2673"
my.longitude <- "-70.8618"

fio.list <- fio.forecast(fio.api.key, my.latitude, my.longitude)

fio.gg <- ggplot(data=fio.list$hourly.df, aes(x=time, y=temperature))
fio.gg <- fio.gg + labs(y="Readings", x="Time", title="Hourly Readings")
fio.gg <- fio.gg + geom_line(aes(y=humidity*100), color="green")
fio.gg <- fio.gg + geom_line(aes(y=temperature), color="red")
fio.gg <- fio.gg + geom_line(aes(y=dewPoint), color="blue")
fio.gg <- fio.gg + theme_bw()
fio.gg
```

```r
fio.gg <- ggplot(data=fio.list$daily.df, aes(x=time, y=temperature))
fio.gg <- fio.gg + labs(y="Readings", x="Time", title="Daily Readings")
fio.gg <- fio.gg + geom_line(aes(y=humidity*100), color="green")
fio.gg <- fio.gg + geom_line(aes(y=temperatureMax), color="red")
fio.gg <- fio.gg + geom_line(aes(y=temperatureMin), color="red", linetype=2)
fio.gg <- fio.gg + geom_line(aes(y=dewPoint), color="blue")
fio.gg <- fio.gg + theme_bw()
fio.gg
```

Over on the [Data Driven Security Blog](http://datadrivensecurity.info/blog/posts/2014/Apr/making-better-dns-txt-record-lookups-with-rcpp/) there’s a post on how to use `Rcpp` to interface with an external library (in this case `ldns` for DNS lookups). It builds on [another post](http://datadrivensecurity.info/blog/posts/2014/Apr/firewall-busting-asn-lookups/) which uses `system()` to make a call to `dig` to look up DNS `TXT` records.
The core code is available at both the aforementioned blog post and [this gist](https://gist.github.com/hrbrmstr/11286662). The post walks you through creating a simple interface, and a future post will cover how to build a full package interface to an external library.
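For context, the simpler `system()`/`dig` approach that the `Rcpp` post improves upon looks roughly like this (a sketch, not the code from either post; the domain is just an illustrative example):

```r
# shell out to `dig` and capture the TXT record(s) for a name;
# +short returns just the record data, one entry per line
dig_txt <- function(domain) {
  system(sprintf("dig +short -t TXT %s", domain), intern=TRUE)
}

dig_txt("google.com")
```

It works, but every lookup pays the cost of spawning a process, which is exactly the overhead the `ldns`-based `Rcpp` version avoids.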
Andreas Diesner’s `#spiffy` [Fit2Tcx](https://github.com/adiesner/Fit2Tcx) command-line utility is a lightweight way to convert Garmin/ANT [FIT](http://www.thisisant.com/resources/fit) files to [TCX](http://en.wikipedia.org/wiki/Training_Center_XML) for further processing.
On a Linux system, installing it is as simple as:

```sh
sudo add-apt-repository ppa:andreas-diesner/garminplugin
sudo apt-get update
sudo apt-get install fit2tcx
```
On a Mac OS X system, you’ll first need to grab the `tinyxml` support library from `homebrew`:

```sh
brew install tinyxml
```
After a `git clone` of the Fit2Tcx repository, change the

```
DFLAGS += -s $(CREATE_LIB) $(CREATE_DEF)
```

line in `Makefile.in` to

```
DFLAGS += $(CREATE_LIB) $(CREATE_DEF)
```

and then do the typical `./configure && make` (there is no `test` target).
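Putting the OS X steps together, the build boils down to something like this (a sketch; the `sed` line just automates the `Makefile.in` edit described above, and the final copy may need adjusting depending on where the binary lands):

```sh
brew install tinyxml
git clone https://github.com/adiesner/Fit2Tcx.git
cd Fit2Tcx
# drop the "-s" flag from the DFLAGS line (BSD sed needs the empty '' for -i)
sed -i '' 's/DFLAGS += -s /DFLAGS += /' Makefile.in
./configure && make
# copy the resulting fit2tcx binary wherever you keep local binaries
cp fit2tcx /usr/local/bin/
```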
You’ll now have a relatively small `fit2tcx` binary that you can move to `/usr/local/bin` or wherever you like command-line utilities to be put.
You can also grab the [pre-compiled binary](http://rud.is/dl/fit2tcx.gz) (built on `OS X 10.9.2` with “latest” `Xcode`).
I had no intention to blog this, but @jayjacobs convinced me otherwise. I was curious about the recent (end of March, 2014) [California earthquake](http://www.latimes.com/local/lanow/la-me-ln-an-estimated-17-million-people-felt-51-earthquake-in-california-20140331,0,2465821.story#axzz2xfGBteq0) “storm” and did a quick plot for “fun” and personal use using `ggmap`/`ggplot`.
I used data from the [Southern California Earthquake Center](http://www.data.scec.org/recent/recenteqs/Maps/Los_Angeles.html) (that I cleaned up a bit and that you can find [here](/dl/quakes.dat)) but would have used the USGS quake data if the site hadn’t been down when I tried to get it from there.
The code/process isn’t exactly rocket-science, but if you’re looking for a simple way to layer some data on a “real” map (vs handling shapefiles on your own) then this is a really compact/self-contained tutorial/example.
You can find the code & data over at [github](https://gist.github.com/hrbrmstr/9921419) as well.
There’s lots of ‘splainin in the comments (which are prbly easier to read on the github site) but drop a note in the comments or on Twitter if it needs any further explanation. The graphic is SVG, so use a proper browser :-) or run the code in R if you can’t see it here.
```r
library(ggplot2)
library(ggmap)
library(plyr)
library(grid)
library(gridExtra)

# read in cleaned up data
dat <- read.table("quakes.dat", header=TRUE, stringsAsFactors=FALSE)

# map decimal magnitudes into an integer range
dat$m <- cut(dat$MAG, c(0:10))

# convert to dates
dat$DATE <- as.Date(dat$DATE)

# so we can re-order the data frame
dat <- dat[order(dat$DATE),]

# not 100% necessary, but get just the numeric portion of the cut factor
dat$Magnitude <- factor(as.numeric(dat$m))

# sum up by date for the barplot
dat.sum <- count(dat, .(DATE, Magnitude))

# start the ggmap bit
# It's super-handy that it understands things like "Los Angeles" #spiffy
# I like the 'toner' version. Would also use a stamen map but I can't get
# to it consistently from behind a proxy server
la <- get_map(location="Los Angeles", zoom=10, color="bw", maptype="toner")

# get base map layer
gg <- ggmap(la)

# add points. Note that the plot will produce warnings for all points not in the
# lat/lon range of the base map layer. Also note that i'm encoding magnitude by
# size and color and using alpha for depth. because of the way the data is sorted
# the most recent quakes in the set should be on top
gg <- gg + geom_point(data=dat,
                      mapping=aes(x=LON, y=LAT, size=MAG, fill=m, alpha=DEPTH),
                      shape=21, color="black")

# this takes the magnitude domain and maps it to a better range of values (IMO)
gg <- gg + scale_size_continuous(range=c(1,15))

# this bit makes the right size color ramp. i like the reversed view better for this map
gg <- gg + scale_fill_manual(values=rev(terrain.colors(length(levels(dat$Magnitude)))))

gg <- gg + ggtitle("Recent Earthquakes in CA & NV")

# no need for a legend as the bars are pretty much the legend
gg <- gg + theme(legend.position="none")

# now for the bars. we work with the summarized data frame
gg.1 <- ggplot(dat.sum, aes(x=DATE, y=freq, group=Magnitude))

# normally, i dislike stacked bar charts, but this is one time i think they work well
gg.1 <- gg.1 + geom_bar(aes(fill=Magnitude), position="stack", stat="identity")

# fancy, schmanzy color mapping again
gg.1 <- gg.1 + scale_fill_manual(values=rev(terrain.colors(length(levels(dat$Magnitude)))))

# show the data source!
gg.1 <- gg.1 + labs(x="Data from: http://www.data.scec.org/recent/recenteqs/Maps/Los_Angeles.html",
                    y="Quake Count")

gg.1 <- gg.1 + theme_bw() #stopthegray

# use grid.arrange to make the sizes work well
grid.arrange(gg, gg.1, nrow=2, ncol=1, heights=c(3,1))
```
Andy Kirk (@visualisingdata) & Lynn Cherny (@arnicas) tweeted about the Guardian Word Count service/archive site, lamenting the lack of visualizations:
> Want to know num of words written in each day's Guardian paper by section + approx reading time? http://t.co/wP4W1EzUsx via @bengoldacre
>
> — Andy Kirk (@visualisingdata) March 15, 2014
This gave me a chance to bust out another [Shiny](http://www.rstudio.com/shiny/) app over on our [Data Driven Security](http://datadrivensecurity.info) [shiny server](http://shiny.dds.ec/guardian-words/):
I used my trusty “`Google-Drive-spreadsheet-IMPORTHTML-to-CSV`” workflow (you can access the automagically updated data [here](https://docs.google.com/spreadsheets/d/10CZhMhpFxTPWcLauam-ydKeFrdNgHEIehKznVMHFRM0/pubhtml)) to make the CSV that updates daily on the site and is referenced by the Shiny/R code.
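For those unfamiliar with that workflow, the general shape of it is (a sketch with placeholder URLs, not the exact formula or links used for this app): an `IMPORTHTML()` formula in a Google Sheet scrapes the table from the Guardian page, the sheet is published to the web as CSV, and the R/Shiny side just reads that CSV on each run.

```r
# In a Google Sheet cell (scrapes the first HTML table at the URL):
#   =IMPORTHTML("<guardian-word-count-url>", "table", 1)
#
# With the sheet "published to the web" in CSV format, the R side is simply:
words <- read.csv("<published-csv-url>", stringsAsFactors=FALSE)
```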
The code has been [gist-ified](https://gist.github.com/hrbrmstr/9570488), and I’ll be re-visiting it to refactor the `data.frame` creation bits and add some more charts as the data set gets larger.
I shot a quick post over at the [Data Driven Security blog](http://bit.ly/1hyqJiT) explaining how to separate Twitter data gathering from R code via the Ruby `t` ([github repo](https://github.com/sferik/t)) command. Using `t` frees R code from having to be a Twitter processor and lets the analyst focus on analysis and visualization, plus you can use `t` as a substitute for Twitter GUIs if you’d rather play at the command-line:
```
$ t timeline ddsecblog
   @DDSecBlog
   Monitoring Credential Dumps Plus Using Twitter As a Data Source
   http://t.co/ThYbjRI9Za

   @DDSecBlog
   Nice intro to R + stats // Data Analysis and Statistical Inference free
   @datacamp_com course http://t.co/FC44FF9DSp

   @DDSecBlog
   Very accessible paper & cool approach to detection // Nazca: Detecting
   Malware Distribution in Large-Scale Networks http://t.co/fqrSaFvUK2

   @DDSecBlog
   Start of a new series by new contributing blogger @spttnnh! // @AlienVault
   rep db Longitudinal Study Part 1 : http://t.co/XM7m4zP0tr
   ...
```
The DDSec post shows how to mine the well-formatted output from the @dumpmon Twitter bot to visualize dump trends over time, and it has the code in-line and over at the [DDSec github repo](https://github.com/ddsbook/blog/blob/master/extra/src/R/dumpmon.R).
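If you want to experiment along the same lines, the general pattern looks something like the sketch below (the tweet count and column handling are assumptions; check `t help timeline` for the exact options):

```r
# pull the most recent @dumpmon tweets to CSV via the `t` gem
# (`--csv` emits CSV output; `-n` controls how many tweets are fetched)
system("t timeline @dumpmon --csv -n 200 > dumpmon.csv")

# read the CSV into R for parsing/plotting
dumps <- read.csv("dumpmon.csv", stringsAsFactors=FALSE)
head(dumps)
```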
I’m posting this mostly to show how to:
– use the Google spreadsheet data-munging “hack” from the [previous post](http://rud.is/b/2014/02/11/live-google-spreadsheet-for-keeping-track-of-sochi-medals/) in a Shiny context
– embed it seamlessly in a web page, and
– run it locally without a great deal of wrangling
The code for the app is [in this gist](https://gist.github.com/hrbrmstr/8949172). It is unsurprisingly just like [some spiffy other code](http://www.r-bloggers.com/winter-olympic-medal-standings-presented-by-r/) you’ve seen, apart from my aesthetic choices (Sochi blue! lines+dots! and current rankings next to country names).
I won’t regurgitate the code here since it’s just as easy to view on [github](https://gist.github.com/hrbrmstr/8949172). You’re seeing the live results of the app below (unless you’ve been more conservative than most folks with your browser security settings),
but the app is actually hosted over at [Data Driven Security](http://shiny.dds.ec/sochi2014/), a blog and (woefully underpowered so reload if it coughs up blood, pls) Shiny server that I run with @jayjacobs. It appears in this WordPress post with the help of an `IFRAME`. It’s essentially the same technique the RStudio/Shiny folks use in many of their own examples.
The app uses [bootstrapPage()](http://www.rdocumentation.org/packages/shiny/functions/bootstrapPage) to help make a more responsive layout which will react nicely in an `IFRAME` setting (since you won’t know the width of the browser area you’re trying to fit the Shiny output into).
In the `ui.R` file, I have the [plotOutput()](http://www.rdocumentation.org/packages/shiny/functions/plotOutput) configured to scale to 100% of container width:
plotOutput("medalsPlot", width="100%")
and then create a seamless `IFRAME` that also sizes to max-width:
```html
<iframe src="http://shiny.dds.ec/sochi2014/" style="max-width:100%"
        width="100%" height="500px" scrolling="no" frameborder="no"
        seamless="seamless"> </iframe>
```
The *really cool* part (IMO) about many Shiny apps is that you don’t need to rely on the external server to work with the visualization/output. Provided that:
– the authors have coded their app to support local execution…
– and presented the necessary `ui.R`, `server.R`, `global.R`, HTML/CSS & data files either as a github gist or a zip/gz/tar.gz file…
– and **you** have the necessary libraries installed
then, you can start the app with a simple [Rscript](http://www.rdocumentation.org/packages/utils/functions/Rscript) one-liner:
Rscript -e "shiny::runGist(8949172, launch.browser=TRUE)"
or
Rscript -e "shiny::runUrl('http://dds.ec/apps/sochi2014.tar.gz', launch.browser=TRUE)"
There is *some* danger in doing this if you haven’t read through the R code first, since it’s possible to stick some fairly malicious operations in an R script (hey, I’m an infosec professional, so we’re always paranoid :-). But if you stick with using a gist and do examine the code, you should be fine.