
I work with internet-scale data and do my fair share of macro-analyses on vulnerabilities. I use the R semver package for most of my work and wanted to blather on a bit about it since it’s super-helpful for this work and doesn’t get the attention it deserves. semver makes it possible to create charts like this:

which are very helpful when conducting exposure analytics.

We’ll need a few packages to help us along the way:

library(here) # file mgmt
library(semver) # the whole purpose of the blog post
library(rvest) # we'll need this to get version->year mappings
library(stringi) # b/c I'm still too lazy to switch to ore
library(hrbrthemes) # pretty graphs
library(tidyverse) # sane data processing idioms

By issuing a stats command to a memcached instance you can get a full list of statistics for the server. The recent newsmaking DDoS used this feature in conjunction with address spoofing to create 30 minutes of chaos for GitHub.

I sent a stats command (followed by a newline) to a vanilla memcached installation and it returned 53 lines (1108 bytes) of STAT results that look something like this:

STAT pid 7646
STAT uptime 141
STAT time 1520447469
STAT version 1.4.25 Ubuntu
STAT libevent 2.0.21-stable
...

The version bit is what we’re after, but there are plenty of other variables you could just as easily focus on if you use memcached in any production capacity.
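
If you want to poke at an instance you control, you can replicate that stats query straight from R with a raw socket connection. This is a rough sketch; the host and port are placeholders for a vanilla local install and the loop just reads until memcached sends its closing END line:

con <- socketConnection("127.0.0.1", 11211, blocking = TRUE, open = "r+")
writeLines("stats\r", con) # the memcached text protocol terminates commands with CRLF
flush(con)

stats <- character(0)
repeat {
  line <- readLines(con, n = 1)
  if (length(line) == 0 || trimws(line) == "END") break # "END" closes the stats response
  stats <- c(stats, line)
}
close(con)

grep("^STAT version", stats, value = TRUE)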

I extracted raw version response data from our most recent scan for open memcached servers on the internet. For ethical reasons, I cannot blindly share the entire raw data set but hit up research@rapid7.com if you have a need or desire to work with this data.

Let’s read it in and take a look:

version_strings <- read_lines(here("data", "versions.txt"))

set.seed(2018-03-07)

sample(version_strings, 50)

##  [1] "STAT version 1.4.5"             "STAT version 1.4.17"           
##  [3] "STAT version 1.4.25"            "STAT version 1.4.31"           
##  [5] "STAT version 1.4.25"            "STAT version 1.2.6"            
##  [7] "STAT version 1.2.6"             "STAT version 1.4.15"           
##  [9] "STAT version 1.4.17"            "STAT version 1.4.4"            
## [11] "STAT version 1.4.5"             "STAT version 1.2.6"            
## [13] "STAT version 1.4.2"             "STAT version 1.4.14 (Ubuntu)"  
## [15] "STAT version 1.4.7"             "STAT version 1.4.39"           
## [17] "STAT version 1.4.4-14-g9c660c0" "STAT version 1.2.6"            
## [19] "STAT version 1.2.6"             "STAT version 1.4.14"           
## [21] "STAT version 1.4.4-14-g9c660c0" "STAT version 1.4.37"           
## [23] "STAT version 1.4.13"            "STAT version 1.4.4"            
## [25] "STAT version 1.4.17"            "STAT version 1.2.6"            
## [27] "STAT version 1.4.37"            "STAT version 1.4.13"           
## [29] "STAT version 1.4.25"            "STAT version 1.4.15"           
## [31] "STAT version 1.4.25"            "STAT version 1.2.6"            
## [33] "STAT version 1.4.10"            "STAT version 1.4.25"           
## [35] "STAT version 1.4.25"            "STAT version 1.4.9"            
## [37] "STAT version 1.4.30"            "STAT version 1.4.21"           
## [39] "STAT version 1.4.15"            "STAT version 1.4.31"           
## [41] "STAT version 1.4.13"            "STAT version 1.2.6"            
## [43] "STAT version 1.4.13"            "STAT version 1.4.15"           
## [45] "STAT version 1.4.19"            "STAT version 1.4.25 Ubuntu"    
## [47] "STAT version 1.4.37"            "STAT version 1.4.4-14-g9c660c0"
## [49] "STAT version 1.2.6"             "STAT version 1.4.25 Ubuntu"

It’s in decent shape, but it needs some work if we’re going to do a version analysis with it. Let’s clean it up a bit:

data_frame(
  string = stri_match_first_regex(version_strings, "STAT version (.*)$")[,2]
) -> versions

count(versions, string, sort = TRUE) %>%
  knitr::kable(format="markdown")
| string          |    n |
|:----------------|-----:|
| 1.4.15          | 1966 |
| 1.2.6           | 1764 |
| 1.4.17          | 1101 |
| 1.4.37          |  949 |
| 1.4.13          |  725 |
| 1.4.4           |  531 |
| 1.4.25          |  511 |
| 1.4.20          |  368 |
| 1.4.14 (Ubuntu) |  334 |
| 1.4.21          |  309 |
| 1.4.25 Ubuntu   |  290 |
| 1.4.24          |  259 |

Much better! However, we really only need the major parts of the semantic version string for a macro view, so let’s remove non-version strings completely and extract just the major, minor and patch bits:

filter(versions, !stri_detect_fixed(string, "UNKNOWN")) %>% # get rid of things we can't use
  mutate(string = stri_match_first_regex(
    string, "([[:digit:]]+\\.[[:digit:]]+\\.[[:digit:]]+)")[,2] # for a macro-view, the discrete sub-versions aren't important
  ) -> versions

count(versions, string, sort = TRUE) %>%
  knitr::kable(format="markdown")
| string |    n |
|:-------|-----:|
| 1.4.15 | 1966 |
| 1.2.6  | 1764 |
| 1.4.17 | 1101 |
| 1.4.37 |  949 |
| 1.4.25 |  801 |
| 1.4.4  |  747 |
| 1.4.13 |  727 |
| 1.4.14 |  385 |
| 1.4.20 |  368 |
| 1.4.21 |  309 |
| 1.4.24 |  264 |

Much, much better! Now, let’s dig into the versions a bit. Using semver is dirt-simple. Just use parse_version() to get the usable bits out:

ex_ver <- semver::parse_version(head(versions$string[1]))

ex_ver
## [1] Maj: 1 Min: 4 Pat: 25

str(ex_ver)
## List of 1
##  $ :Class 'svptr' <externalptr> 
##  - attr(*, "class")= chr "svlist"

It’s a special class, referencing an external pointer (the package relies on an underlying C++ library and wraps everything up in a bow for us).

These objects can be compared, ordered, sorted, etc but I tend to just turn the parsed versions into a data frame that can be associated back with the main strings. That way we keep things pretty tidy and have tons of flexibility.
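
For instance, individual parsed versions compare the way you would hope, assuming the comparison operators behave as described in the package docs (a quick sketch; the data frame route below is what I actually use, but the operators are handy for spot checks):

vs <- semver::parse_version(c("1.2.6", "1.4.4", "1.4.25"))

vs[[3]] > vs[[2]] # TRUE: 1.4.25 is newer than 1.4.4, even though it sorts earlier as a plain string
vs[[1]] < vs[[2]] # TRUE: 1.2.6 predates 1.4.4

With that out of the way, here’s the data frame approach: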

bind_cols(
  versions,
  pull(versions, string) %>%
    semver::parse_version() %>%
    as.data.frame()
) %>%
  arrange(major, minor, patch) %>%
  mutate(string = factor(string, levels = unique(string))) -> versions

versions
## # A tibble: 11,157 x 6
##    string major minor patch prerelease build
##    <fct>  <int> <int> <int> <chr>      <chr>
##  1 1.2.0      1     2     0 ""         ""   
##  2 1.2.0      1     2     0 ""         ""   
##  3 1.2.5      1     2     5 ""         ""   
##  4 1.2.5      1     2     5 ""         ""   
##  5 1.2.5      1     2     5 ""         ""   
##  6 1.2.5      1     2     5 ""         ""   
##  7 1.2.5      1     2     5 ""         ""   
##  8 1.2.5      1     2     5 ""         ""   
##  9 1.2.5      1     2     5 ""         ""   
## 10 1.2.5      1     2     5 ""         ""   
## # ... with 11,147 more rows

Now we have a tidy data frame, and I did the extra step of creating an ordered factor out of the version strings since they are ordinal values. With just this step, we have everything we need to do a basic plot showing the version counts in order:

count(versions, string) %>%
  ggplot() +
  geom_segment(
    aes(string, n, xend = string, yend = 0),
    size = 2, color = "lightslategray"
  ) +
  scale_y_comma() +
  labs(
    x = "memcached version", y = "# instances found",
    title = "Distribution of memcached versions"
  ) +
  theme_ipsum_ps(grid = "Y") +
  theme(axis.text.x = element_text(hjust = 1, vjust = 0.5, angle = 90))

memcached versions (raw)

That chart is informative on its own since we get the perspective that there are some really old versions exposed. But, how old are they? Projects like Chrome or Firefox churn through versions regularly/quickly (on purpose). To make more sense out of this we’ll need more info on releases.

This is where things can get ugly for folks who do not have commercial software management databases handy (or are analyzing a piece of software that hasn’t made it to one of those databases yet). The memcached project maintains a wiki page of version history that’s mostly complete, and definitely complete enough for this exercise. It will take some processing before we can associate a version with a year.

GitHub does not allow scraping of their site and — off the top of my head — I do not know if there is a “wiki” API endpoint, but I do know that you can tack .wiki.git onto the end of a GitHub repo URL to clone the wiki pages, so we’ll use that knowledge and the git2r package to gain access to the ReleaseNotes.md file that has the data we need:

td <- tempfile("wiki", fileext="git") # temporary "directory"

dir.create(td)

git2r::clone(
  url = "git@github.com:memcached/memcached.wiki.git",
  local_path = td,
  credentials = git2r::cred_ssh_key() # need GH ssh keys setup!
) -> repo
## cloning into '/var/folders/1w/2d82v7ts3gs98tc6v772h8s40000gp/T//Rtmpb209Sk/wiki180eb3c6addcbgit'...
## Receiving objects:   1% (5/481),    8 kb
## Receiving objects:  11% (53/481),    8 kb
## Receiving objects:  21% (102/481),   49 kb
## Receiving objects:  31% (150/481),   81 kb
## Receiving objects:  41% (198/481),  113 kb
## Receiving objects:  51% (246/481),  177 kb
## Receiving objects:  61% (294/481),  177 kb
## Receiving objects:  71% (342/481),  192 kb
## Receiving objects:  81% (390/481),  192 kb
## Receiving objects:  91% (438/481),  192 kb
## Receiving objects: 100% (481/481),  192 kb, done.

read_lines(file.path(repo@path, "ReleaseNotes.md")) %>%
  keep(stri_detect_fixed, "[[ReleaseNotes") %>%
  stri_replace_first_regex(" \\* \\[\\[.*]] ", "") %>%
  stri_split_fixed(" ", 2, simplify = TRUE) %>%
  as_data_frame() %>%
  set_names(c("string", "release_year")) %>%
  mutate(string = stri_trim_both(string)) %>%
  mutate(release_year = stri_replace_first_fixed(release_year, "(", "")) %>% # remove leading parens
  mutate(release_year = stri_replace_all_regex(release_year, "\\-.*$", "")) %>% # we only want year so remove remaining date bits from easy ones
  mutate(release_year = stri_replace_all_regex(release_year, "^.*, ", "")) %>% # take care of most of the rest of the ugly ones
  mutate(release_year = stri_replace_all_regex(release_year, "^[[:alpha:]].* ", "")) %>% # take care of the straggler
  mutate(release_year = stri_replace_last_fixed(release_year, ")", "")) %>% # remove any trailing parens
  mutate(release_year = as.numeric(release_year)) -> memcached_releases # make it numeric

unlink(td, recursive = TRUE) # cleanup the git repo we downloaded

memcached_releases
## # A tibble: 49 x 2
##    string release_year
##    <chr>         <dbl>
##  1 1.5.6          2018
##  2 1.5.5          2018
##  3 1.5.4          2017
##  4 1.5.3          2017
##  5 1.5.2          2017
##  6 1.5.1          2017
##  7 1.5.0          2017
##  8 1.4.39         2017
##  9 1.4.38         2017
## 10 1.4.37         2017
## # ... with 39 more rows

We have more versions in our internet-scraped memcached version data set than this wiki page has on it, so we need to restrict the official release history to what we have. Then, we only want a single instance of each year for the annotations, so we’ll have to do some further processing:

filter(memcached_releases, string %in% unique(versions$string)) %>%
  mutate(string = factor(string, levels = levels(versions$string))) %>%
  group_by(release_year) %>%
  arrange(desc(string)) %>%
  slice(1) %>%
  ungroup() -> annotation_df

knitr::kable(annotation_df, "markdown")
| string | release_year |
|:-------|-------------:|
| 1.4.4  |         2009 |
| 1.4.5  |         2010 |
| 1.4.10 |         2011 |
| 1.4.15 |         2012 |
| 1.4.17 |         2013 |
| 1.4.22 |         2014 |
| 1.4.25 |         2015 |
| 1.4.33 |         2016 |
| 1.5.4  |         2017 |
| 1.5.6  |         2018 |

Now, we’re ready to add the annotation layers! We’ll take a blind stab at it before adding in further aesthetic customization:

version_counts <- count(versions, string) # no piping this time

ggplot() +
  geom_blank(data = version_counts, aes(string, n)) + # prime the scales
  geom_vline(
    data = annotation_df, aes(xintercept = as.numeric(string)),
    size = 0.5, linetype = "dotted", color = "orange"
  ) +
  geom_segment(
    data = version_counts,
    aes(string, n, xend = string, yend = 0),
    size = 2, color = "lightslategray"
  ) +
  geom_label(
    data = annotation_df, aes(string, Inf, label=release_year),
    family = font_ps, size = 2.5, color = "lightslateblue",
    hjust = 0, vjust = 1, label.size = 0
  ) +
  scale_y_comma() +
  labs(
    x = "memcached version", y = "# instances found",
    title = "Distribution of memcached versions"
  ) +
  theme_ipsum_ps(grid = "Y") +
  theme(axis.text.x = element_text(hjust = 1, vjust = 0.5, angle = 90))

Almost got it in one go! We need to tweak this so that the labels do not overlap each other and do not obstruct the segment bars. We can do most of this work in geom_label() itself, plus add a bit of a tweak to the Y axis scale:

ggplot() +
  geom_blank(data = version_counts, aes(string, n)) + # prime the scales
  geom_vline(
    data = annotation_df, aes(xintercept = as.numeric(string)),
    size = 0.5, linetype = "dotted", color = "orange"
  ) +
  geom_segment(
    data = version_counts,
    aes(string, n, xend = string, yend = 0),
    size = 2, color = "lightslategray"
  ) +
  geom_label(
    data = annotation_df, aes(string, Inf, label=release_year), vjust = 1,
    family = font_ps, size = 2.5, color = "lightslateblue", label.size = 0,
    hjust = c(1, 0, 1, 1, 0, 1, 0, 0, 1, 0),
    nudge_x = c(-0.1, 0.1, -0.1, -0.1, 0.1, -0.1, 0.1, 0.1, -0.1, 0.1)
  ) +
  scale_y_comma(limits = c(0, 2050)) +
  labs(
    x = "memcached version", y = "# instances found",
    title = "Distribution of memcached versions"
  ) +
  theme_ipsum_ps(grid = "Y") +
  theme(axis.text.x = element_text(hjust = 1, vjust = 0.5, angle = 90))

Now we have version and year info, so we can get a better idea of the scope of exposure (and just how much technical debt many organizations have accrued).

With the ordinal version information we can also perform other statistical operations. All thanks to the semver package.
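
For example, you can join the instance-level versions back to the release-year table we pulled from the wiki to get a rough read on how stale the exposed population is. This is just a sketch using the data frames built above; versions without a matching wiki entry fall out as NAs:

# share of exposed instances running something released before 2016
left_join(
  mutate(versions, string = as.character(string)), # factor -> chr for the join
  memcached_releases,
  by = "string"
) %>%
  summarise(share_pre_2016 = mean(release_year < 2016, na.rm = TRUE))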

You can find this R project over at GitHub.

Tis the season for finding out how well Maine fisherfolk did last year; specifically, Maine lobsterfolk.

Most of the news sites in Maine do a feature on the annual landings (here’s one from Bangor Daily News). There was a marked decline — the largest ever — in both poundage and revenue in 2017 and many sources point to the need to improve fishery management to help ensure both the environmental and economic health of the state.

My preferred view for this annual catch comparison is a connected scatterplot, tracing a path along the years. That way you get the feel of a time-series with the actual poundage-to-value without having to resort to two charts or (heaven forbid) a dual-geom/dual-axis chart.

The State of Maine Department of Marine Resources makes the data available, but it’s in a PDF.

Thankfully, the PDF is not obfuscated and is just a plain table, so it’s easy to parse and turn into a proper chart.

The code to retrieve the PDF, parse it and produce said connected scatterplot is below.

library(stringi)
library(pdftools)
library(hrbrthemes)
library(tidyverse)

lobster_by_county_url <- "https://www.maine.gov/dmr/commercial-fishing/landings/documents/lobster.county.pdf"
lobster_by_county_fil <- basename(lobster_by_county_url)

if (!file.exists(lobster_by_county_fil)) download.file(lobster_by_county_url, lobster_by_county_fil)

# read in the PDF
lobster_by_county_pgs <- pdftools::pdf_text(lobster_by_county_fil)

map(lobster_by_county_pgs, stri_split_lines) %>% # split each page into lines
  flatten() %>%
  flatten_chr() %>%
  keep(stri_detect_fixed, "$") %>% # keep only lines with "$" in them
  stri_trim_both() %>% # clean up white space
  stri_split_regex("\ +", simplify = TRUE) %>% # get the columns
  as_data_frame() %>%
  mutate_at(c("V3", "V4"), lucr::from_currency) %>% # turn the formatted text into numbers
  set_names(c("year", "county", "pounds", "value")) %>% # better column names
  filter(county != "TOTAL") %>% # we'll calculate our own, thank you
  mutate(year = as.Date(sprintf("%s-01-01", year))) %>% # I like years to be years for plotting
  mutate(county = stri_trans_totitle(county)) -> lobster_by_county_df

arrange(lobster_by_county_df, year) %>%
  mutate(value = value / 1000000, pounds = pounds / 1000000) %>% # easier on the eyes
  group_by(year) %>%
  summarise(pounds = sum(pounds), value = sum(value)) %>%
  mutate(year_lab = lubridate::year(year)) %>%
  mutate(highlight = ifelse(year_lab == 2017, "2017", "Other")) %>% # so we can highlight 2017
  ggplot(aes(pounds, value)) +
  geom_path() +
  geom_label(
    aes(label = year_lab, color = highlight, size = highlight),
    family = font_ps, show.legend = FALSE
  ) +
  scale_x_comma(name = "Pounds (millions) →", limits = c(0, 150)) +
  scale_y_comma(name = "$ USD (millions) →", limits = c(0, 600)) +
  scale_color_manual(values = c("2017" = "#742111", "Other" = "#2b2b2b")) +
  scale_size_manual(values = c("2017" = 6, "Other" = 4)) +
  labs(
    title = "Historical Maine Fisheries Landings Data — Lobster (1964-2017)",
    subtitle = "All counties combined; Not adjusted for inflation",
    caption = "The 2002 & 2003 landings may possibly reflect the increased effort by DMR to collect voluntary landings from some lobster dealers;\nLobster reporting became mandatory in 2004 for all Maine dealers buying directly from harvesters.\nSource: <https://www.maine.gov/dmr/commercial-fishing/landings/historical-data.html>"
  ) +
  theme_ipsum_ps(grid = "XY")

What’s Up?

The NPR Visuals Team created and maintains a javascript library that makes it super easy to embed iframes on web pages and have said documents still be responsive.

The widgetframe R htmlwidget uses pym.js to bring this (much needed) functionality into widgets and (eventually) shiny apps.
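
If you are not sure whether you’ve used it, the typical pattern looks something like this (a sketch with a throwaway leaflet map standing in for whatever widget you actually framed):

library(leaflet)
library(widgetframe)

l <- leaflet() %>%
  addTiles() %>%
  setView(-93.65, 42.0285, zoom = 4)

frameWidget(l) # wraps the htmlwidget in a responsive, pym.js-powered iframe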

NPR reported a critical vulnerability in this library on February 15th, 2018 with no details (said details will be coming next week).

Per NPR’s guidance, any production code using pym.js needs to be pulled or updated to use this new library.

I created an issue & pushed up a PR that incorporates the new version. NOTE that the YAML config file in the existing CRAN package and GitHub dev version incorrectly has 1.3.2 as the version (it’s really the 1.3.1 dev version).

A look at the diff:

suggests that the library was not performing URL sanitization (and now is).

Watch Out For Standalone Docs

Any R markdown docs compiled in “standalone” mode will need to be recompiled and re-published as the vulnerable pym.js library comes along for the ride in those documents.

Regardless of “standalone mode”, if you used widgetframe in any context, anything you created with it is vulnerable, standalone compilation or not.
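
One rough way to triage your own output is to scan the rendered HTML for pym.js references. This is just a sketch (the "docs" path is a placeholder for wherever your rendered files live), and it will flag patched output too, so review the hits by hand:

library(purrr)

html_files <- list.files("docs", pattern = "\\.html?$", recursive = TRUE, full.names = TRUE)
keep(html_files, ~any(grepl("pym", readLines(.x, warn = FALSE), fixed = TRUE))) # docs that reference pym.js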

FIN

Once the final details are released I’ll update this post and may do a new post. Until then:

  • check if you’ve used widgetframe (directly or indirectly)
  • REMOVE ALL VULNERABLE DOCS from RPubs, GitHub pages, your web site (etc) today
  • regenerate all standalone documents ASAP
  • regenerate your blogs, books, dashboards, etc ASAP with the patched code; DO THIS FOR INTERNAL as well as internet-facing content.
  • monitor this space

Much of what I need to do for work-work involves using tools that are (for the moment) not in R. Today, I needed to test the validity of (and other processing on) DMARC records and I’m loathe to either reinvent the wheel or reticulate bits from a fragmented programming language ecosystem unless absolutely necessary. Thankfully, there’s libopendmarc which works well on sane operating systems, but it is a C library that needs an interface to use in R.

However, I also really didn’t want to start a new package for this just yet (there will eventually be one, though, and I prefer working in a package context for Rcpp work). I just needed to run opendmarc_policy_store_dmarc() against a decent-sized chunk of domain names and already-retrieved DMARC TXT records. So, I decided to write a small “inline” cppFunction() to get’er done.
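
(For context, a domain’s DMARC policy lives in a DNS TXT record at _dmarc.<domain>, so you can pull one down to play with if you have dig on your PATH; bit.ly’s record shows up again in the test data later in this post.)

# assumes a system `dig` binary is available
system("dig +short TXT _dmarc.bit.ly", intern = TRUE)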

Why am I blogging about this?

Despite growing popularity and a nice examples site, many newcomers to Rcpp (literally the way you want to go when it comes to bridging C[++] and R) still voice discontent about there not being enough “easy” examples. Granted, they are quite likely looking for full-bore tutorials covering different, explicit use cases. The aforelinked Gallery has some of those and there are codified examples in — literally — RcppExamples. But there definitely need to be more blog posts, books and such linking to them and expanding upon them.

Having mentioned that I’m using cppFunction(), one could, further, ask “cppFunction() has a help page with an example, so why blather about using it?” Fair point! And there is a reason, which was hinted at in the opening paragraph.

I need to use libopendmarc and that requires making a “plugin” if I’m going to do this “inline”. For some other DMARC processing I also need libresolv, since the library makes DNS requests and uses resolv. You don’t need a plugin for a package version of this since you’d just boilerplate up some “find these libraries and get their paths right for Makevars.in” code and add the linking flags there as well. Here, we need to register two plugins that provide metadata for the magic that happens under the covers when Rcpp takes your inline code, compiles it and makes the resulting shared object available in R.

Plugins can be complex and do transformations, but the two I needed to write are just helping ensure the right #include lines are there along with the right linker libraries. Here they are:

library(Rcpp)

registerPlugin(
  name = "libresolv",
  plugin = function(x) {
    list(
      includes = "",
      env = list(PKG_LIBS="-lresolv")
    )
  }
)

registerPlugin(
  name = "libopendmarc",
  plugin = function(x) {
    list(
      includes = "#include <opendmarc/dmarc.h>",
      env = list(PKG_LIBS="-lopendmarc")
    )
  }
)

All they do is make data structures available in the environment. We can use inline::getPlugin() to see them:

inline::getPlugin("libresolv")
## $includes
## [1] ""
##
## $env
## $env$PKG_LIBS
## [1] "-lresolv"


inline::getPlugin("libopendmarc")
## $includes
## [1] "#include <opendmarc/dmarc.h>"
## 
## $env
## $env$PKG_LIBS
## [1] "-lopendmarc"

Finally, the tiny bit of C/C++ code to take in the necessary parameters and return the result. In this case, we’re passing in a character vector of domain names and DMARC records and getting back a logical vector with the test results. Apart from the necessary initialization and cleanup code for libopendmarc this is an idiom you’ll recognize if you look over packages that use Rcpp.

cppFunction('
std::vector< bool > is_dmarc_valid(std::vector< std::string> domains,
                                   std::vector< std::string> dmarc_records) {

  std::vector< bool > out(dmarc_records.size());

  DMARC_POLICY_T *pctx;
  OPENDMARC_STATUS_T status;

  pctx = opendmarc_policy_connect_init((u_char *)"1.2.3.4", 0);

  for (unsigned int i=0; i<dmarc_records.size(); i++) {

    status = opendmarc_policy_store_dmarc(
      pctx,
      (u_char *)dmarc_records[i].c_str(),
      (u_char *)domains[i].c_str(),
      NULL
    );

    out[i] = (status == DMARC_PARSE_OKAY);

    pctx = opendmarc_policy_connect_rset(pctx);

  }

  pctx = opendmarc_policy_connect_shutdown(pctx);

  return(out);

}
', plugins = c("libresolv", "libopendmarc"))

(Note: the C++ source is passed to cppFunction() as one long character string, quoted here with single quotes since the code itself contains double quotes; make sure both quotes come along if you’re cutting and pasting at home.)

Right at the end, the final parameter is telling cppFunction() what plugins to use.

Executing that line shunts a modified version of the function to disk, compiles it and lets us use the function in R (use the cacheDir, showOutput and verbose parameters to control how many gory details lie underneath this pristine shell).

After running the function, is_dmarc_valid() is available in the environment and ready to use.

domains <- c("bit.ly", "bizible.com", "blackmountainsystems.com", "blackspoke.com")
dmarc <-  c("v=DMARC1; p=none; pct=100; rua=mailto:dmarc@bit.ly; ruf=mailto:ruf@dmarc.bitly.net; fo=1;", 
            "v=DMARC1; p=reject; fo=1; rua=mailto:postmaster@bizible.com; ruf=mailto:forensics@bizible.com;", 
            "v=DMARC1; p=quarantine; pct=100; rua=mailto:demarcrecords@blkmtn.com, mailto:ttran@blkmtn.com", 
            "user.cechire.com.")

is_dmarc_valid(domains, dmarc)
## [1]  TRUE  TRUE  TRUE FALSE

Processing those four took just about 10 microseconds, which meant I could process the ~1,000,000 domains+DMARCs in no time at all. And, I have something I can use in a DMARC utility package (coming “soon”).
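
If you want to sanity-check that timing on your own box, a quick benchmark run does the trick (assuming you have the microbenchmark package installed):

microbenchmark::microbenchmark(
  is_dmarc_valid(domains, dmarc),
  times = 100
)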

Hopefully this was a useful reference for both hooking up external libraries to “inline” Rcpp functions and for how to go about doing this type of thing in general.

Time for another look at what’s new and interesting in the world with the help of Peter Meissner’s (@marvin_dpr) crossword.r.

The answers to last week’s puzzle have been posted (it seemed to make more sense posting the answers a week later vs the Monday after).

There is a dedicated category — puzzler — to make it easier to find these later on, all in one place. That category also has its own RSS feed.

Peter Meissner (@marvin_dpr) released crossword.r to CRAN today. It’s a spiffy package that makes it dead simple to generate crossword puzzles.

He also made a super spiffy javascript library to pair with it, which can turn crossword model output into an interactive puzzle.

I thought I’d combine those two creations with a way to highlight new/updated packages from the previous week, cool/useful packages in general, and some R functions that might come in handy. Think of it as a weekly way to get some R information while having a bit of fun!

This was a quick, rough creation and I’ll be changing the styles a bit for next Friday’s release, but Peter’s package is so easy to use that I have absolutely no excuse to not keep this a regular feature of the blog.

I’ll release a static, ggplot2 solution to each puzzle the following Monday(s). If you solve it before then, tweet a screen shot of your solution with the tag #rstats #puzzler and I’ll pick the first time-stamped one to highlight the following week.

I’ll also get a GitHub setup for suggestions/contributions to this effort + to hold the puzzle data.

ANSWERS

library(crossword.r)

cw <- Crossword$new(rows = 12, columns = 12)

cw$add_words(
  
  words = c(
    "crosswordr",
    "searchr",
    "kerasformula",
    "fs",
    "crypto",
    "mgcv",
    "startsWith",
    "akima",
    "rcompgen",
    "broom",
    "nord"
  ),
  
  clues = c(
    "New package to generate crosswords (w/o '.')",
    "Interpolation-focused package",
    "Core generalized additive modelling package",
    "Package facilitating searching Bing, Google and more from an R console",
    "New, high-level interface package to 'keras'",
    "Consistent, cross-platform, vectorised filesystem operations package",
    "Interface package to all digital/crypto currency market data",
    "base function to test if a string starts with another string",
    "utils function to generation command completion networks",
    "Package that makes it easy to tidy statistical analyses objects",
    "An arctic, north-bluish color palette package"#,
  )
  
)

We’re doing some interesting studies (cybersecurity-wise, not finance-wise) on digital currency networks at work-work and — while I’m loathe to create a geo-map from IPv4 geolocation data — we:

  • do get (often, woefully inaccurate) latitude & longitude data from our geolocation service (I won’t name-and-shame here); and,
  • there are definite geo-aspects to the prevalence of mining nodes — especially Bitcoin; and,
  • I have been itching to play with the nascent nord palette in a cartographical context…

so I went on a small diversion to create a bubble plot of geographical Bitcoin node-prevalence.

I tweeted out said image and someone asked if there was code, hence this post.

You’ll be able to read about the methodology we used to capture the Bitcoin node data that underpins the map below later this year. For now, all I can say is that it wasn’t garnered from joining the network proper.

I’m including the geo-data in the gist, but not the other data elements (you can easily find Bitcoin node data out on the internets from various free APIs and our data is on par with them).

I’m using swatches for the nord palette since I was hand-picking colors, but you should use @jakekaupp’s most excellent nord package if you want to use the various palettes more regularly.

I’ve blathered a bit about nord, so let’s start with that (and include the various other packages we’ll use later on):

library(swatches)
library(ggalt) # devtools::install_github("hrbrmstr/ggalt")
library(hrbrthemes) # devtools::install_github("hrbrmstr/hrbrthemes")
library(tidyverse)

nord <- read_palette("nord.ase")

show_palette(nord)

It may not be a perfect palette (accounting for all forms of vision issues and other technical details) but it was designed very well (IMO).

The rest is pretty straightforward:

  • read in the bitcoin geo-data
  • count up by lat/lng
  • figure out which colors to use (that took a bit of trial-and-error)
  • tweak the rest of the ggplot2 canvas styling (that took a wee bit longer)

I’m using development versions of two packages due to their added functionality not being on CRAN (yet). If you’d rather not use a dev-version of hrbrthemes just use a different ipsum theme vs the new theme_ipsum_tw().

read_csv("bitc.csv") %>%
  count(lng, lat, sort = TRUE) -> bubbles_df

world <- map_data("world")
world <- world[world$region != "Antarctica", ]

ggplot() +
  geom_cartogram(
    data = world, map = world,
    aes(x = long, y = lat, map_id = region),
    color = nord["nord3"], fill = nord["nord0"], size = 0.125
  ) +
  geom_point(
    data = bubbles_df, aes(lng, lat, size = n), fill = nord["nord13"],
    shape = 21, alpha = 2/3, stroke = 0.25, color = "#2b2b2b"
  ) +
  coord_proj("+proj=wintri") +
  scale_size_area(name = "Node count", max_size = 20, labels = scales::comma) +
  labs(
    x = NULL, y = NULL,
    title = "Bitcoin Network Geographic Distribution (all node types)",
    subtitle = "(Using bubbles seemed appropriate for some, odd reason)",
    caption = "Source: Rapid7 Project Sonar"
  ) +
  theme_ipsum_tw(plot_title_size = 24, subtitle_size = 12) +
  theme(plot.title = element_text(color = nord["nord14"], hjust = 0.5)) +
  theme(plot.subtitle = element_text(color = nord["nord14"], hjust = 0.5)) +
  theme(panel.grid = element_blank()) +
  theme(plot.background = element_rect(fill = nord["nord3"], color = nord["nord3"])) +
  theme(panel.background = element_rect(fill = nord["nord3"], color = nord["nord3"])) +
  theme(legend.position = c(0.5, 0.05)) +
  theme(axis.text = element_blank()) +
  theme(legend.title = element_text(color = "white")) +
  theme(legend.text = element_text(color = "white")) +
  theme(legend.key = element_rect(fill = nord["nord3"], color = nord["nord3"])) +
  theme(legend.background = element_rect(fill = nord["nord3"], color = nord["nord3"])) +
  theme(legend.direction = "horizontal")

As noted, the RStudio project associated with this post is in this gist. Also, upon further data-inspection by @jhartftw, we’ve discovered yet-more inconsistencies in the geo-mapping service data (there are way too many nodes in Paris, for example), but the main point of the post was to mostly show and play with the nord palette.

NOTE: The likelihood of this recipe being added to the recent practice bookdown book is slim, but I’ll try to keep the same format for the blog post.

Problem

You want to collect all the tweets in a Twitter thread.

Solution

Use a few key functions in rtweet to piece the thread elements back together.

Discussion

In Twitterland, a “thread” is a series of tweets by an author that are in a reply chain to each other which enables them to be displayed sequentially to form a larger & (ostensibly) more cohesive message. Even with the recent 280 character tweet-length increase, threads are still popular and used daily. They’re very easy to distinguish on Twitter but there is no Twitter API call to collect up all the pieces of these threads.

Let’s build a function — get_thread() — that will take as input a starting thread URL or status id and return a data frame of all the tweets in the thread (in order). As a bonus, we’ll also include a way to include all first-level retweets and replies to each threaded tweet (that, too, happens quite a bit).

There are documentation snippets in the code block (below), but the essence of the function is:

  • first, finding the tweet that belongs to the status id to get some metadata
  • then doing a search for tweets from the author that occurred after that tweet (we do this to save on API calls and we grab a bunch of them)
  • rather than do a bunch of things by hand, we make from/to pairs to feed in as vertex edges into igraph
  • once that’s done, separate out the graph into unique subgraphs and find the one containing the starting status id
  • since that subgraph is just a set of status ids, rebuild the data frame from it and put it in order.

There may be occasions where we want to grab the replies or RTs of any of the original thread tweets. They aren’t always useful, but when they are it’d be good to have this context. So, we’ll add an option that — if TRUE — will cause the function to go down the list of threaded tweets and pull the first-level replies and RTs (excluding the ones from the author). We’ll do this using the Twitter search API as it’ll ultimately save on API calls and it puts the filtering closer to the data (I’m generally “a fan” of putting computation as close to the data as possible for any given task). If there were any, they’ll be in a replies column which can be unnested at-will.

Here’s the complete function:

get_thread <- function(first_status, include_replies=FALSE, .timeline_history=3000) {

  require(rtweet, quietly=TRUE, warn.conflicts=FALSE)
  require(igraph, quietly=TRUE, warn.conflicts=FALSE)
  require(tidyverse, quietly=TRUE, warn.conflicts=FALSE)

  first_status <- if (str_detect(first_status[1], "^https?://")) basename(first_status[1]) else first_status[1]

  # get first status
  orig <- rtweet::lookup_tweets(first_status)

  # grab the author's timeline to search
  author_timeline <- rtweet::get_timeline(orig$screen_name, n=.timeline_history, since_id=first_status)

  # build a data frame containing from/to pairs (anything the author
  # replied to) that also includes the `first_status` id.
  suppressWarnings(
    dplyr::filter(
      author_timeline,
      (status_id == first_status) | (reply_to_screen_name == orig$screen_name)
    ) %>%
      dplyr::select(status_id, reply_to_status_id) %>%
      igraph::graph_from_data_frame() -> g
  ) # build a graph from this

  # decompose the graph into unique subgraphs and return them to data frames
  igraph::decompose(g) %>%
    purrr::map(igraph::as_data_frame) -> threads_dfs

  # find the thread with our `first_status` ids

  thread_df <- purrr::keep(threads_dfs, ~any(which(unique(unlist(.x, use.names=FALSE)) == first_status)))

  # BONUS: we get them in the order we need!
  thread_order <- purrr::discard(rev(unique(unlist(thread_df))), str_detect, "NA")

  # filter out the thread from the timeline corpus & sort it
  dplyr::filter(author_timeline, status_id %in% pull(thread_df[[1]], from)) %>%
    dplyr::mutate(status_id = factor(status_id, levels=thread_order)) %>%
    dplyr::arrange(status_id) -> tweet_thread

  if (include_replies) {
    # for each status, lookup 1st-level references to it, excluding ones from the original author
    mutate(
      tweet_thread,
      replies = purrr::map(
        as.character(status_id),
        ~rtweet::search_tweets(sprintf("%s -from:%s", .x, orig$screen_name[1]))
      )
    ) -> tweet_thread
  }

  class(tweet_thread) <-  c("tweet_thread", class(tweet_thread))

  return(tweet_thread)

}

Now, if we grab this thread, the function will return the following:

xdf <- get_thread("https://twitter.com/petersagal/status/952910499825451009")

glimpse(select(xdf, 1:5))
## Observations: 10
## Variables: 5
## $ status_id   <fctr> 952910499825451009, 952910695804305408, 952911012990193664, 952911632077852679, 9529121...
## $ created_at  <dttm> 2018-01-15 14:29:02, 2018-01-15 14:29:48, 2018-01-15 14:31:04, 2018-01-15 14:33:31, 201...
## $ user_id     <chr> "14985228", "14985228", "14985228", "14985228", "14985228", "14985228", "14985228", "149...
## $ screen_name <chr> "petersagal", "petersagal", "petersagal", "petersagal", "petersagal", "petersagal", "pet...
## $ text        <chr> "Funny you mention that. I talked to Minniejean (Brown) Trickey, one of the Little Rock ...

purrr::map(xdf$text, strwrap) %>% 
  purrr::map_chr(paste0, collapse="\n") %>% 
  cat(sep="\n\n")
## Funny you mention that. I talked to Minniejean (Brown) Trickey, one of the Little Rock Nine, about
## that very day in front of CHS for my documentary, "Constitution USA." https://t.co/MRwtlfZtvp
## 
## You would think that of all people, she would be satisfied with the government's response to racism
## and hate. Ike sent the 101st Airborne to escort her to class!
## 
## But what I didn't know is that after the 101st left, CHS expelled her on a trumped up charge of
## assault after she spilled some chili on a white student.
## 
## She spilled some chili. After being tripped by another white kid. "We got rid of one of them!" the
## teachers bragged.
## 
## Then, of course, rather than continue to allow black students to attend CHS, the governor of Alabama
## closed the schools. https://t.co/2DfBEI0OTL "The_Lost_Year"
## 
## Ms Brown looked around the country post high school. She saw Jim Crow, firehoses turned on Blacks,
## the murder of the Birmingham Four and the Mississippi Three. She moved to Canada.
## 
## As of 2012, she found herself coming back to Little Rock, a place she told me she never wanted to
## see again. But she had family. And the National Historic Site center was there. She liked to drop
## by, talk to the kids about what happened.
## 
## Now she lives in Little Rock full time. She doesn't care that her name is inscribed on a bench in
## front of the school. She doesn't care that your dad welcomed her back in '99.  She spends time at
## the Center, telling people what really happened. You should go talk to her.
## 
## (Sorry: Arkansas, obviously. Typing too quickly.)
## 
## Here's me, talking to Ms Trickey and Marty Sammon, who served with the 101st at Little Rock. Buddy
## Squiers on camera. CHS is off to the left. https://t.co/ft4LUBf3sr
## 
## https://t.co/EHLbe1finj

The replies data frame looks much the same as the thread data frame — it’s essentially just another rtweet data frame, so we won’t waste electrons showing it.

While that map/map/cat sequence isn’t bad to type, it’d be more convenient if we had a print() method for this structure (this is one reason we added a class to it). It’d be even spiffier if this print() method made it easier to distinguish the main thread from the RT’s/replies — but still show those extra bits of info. We’ll use the crayon package for added emphasis:

print.tweet_thread <- function(x, ...) {
  
  cat(crayon::cyan(sprintf("@%s - %s\n\n", x$screen_name[1], x$created_at[1])))
  
  if (!("replies" %in% colnames(x))) x$replies <- purrr::map(1:nrow(x), ~list())
  
  purrr::walk2(x$text, x$replies, ~{
    
    cat(crayon::green(paste0(strwrap(.x), collapse="\n")), "\n\n", sep="")
    
    if (length(.y) > 0) {
      purrr::walk2(.y$screen_name, .y$text, ~{
        sprintf("@%s\n%s", .x, .y) %>%
          strwrap(indent=8, exdent=8) %>%
          paste0(collapse="\n") %>%
          crayon::silver$italic() %>%
          cat("\n\n", sep="")
      })
    }
    
  })
  
}

Let’s re-capture the tweet thread but also include replies this time and print it out:

ydf <- get_thread("https://twitter.com/petersagal/status/952910499825451009", include_replies=TRUE)

ydf

See Also

I’ve git-chatted with Sir Kearney to see where to best put this function. I mention that as there are some upcoming posts that kick the aforeblogged tweet_shot() up a notch or two and all of this may work better in a tweetview package.

Regardless, drop a note in the comments if there are other bits of functionality or function options you think belong in get_thread().