
If you follow me on Twitter or monitor @Rapid7’s Community Blog you know I’ve been involved a bit in the WannaCry ransomworm triage.

One thing I’ve been doing is making charts of the hourly contributions to the Bitcoin addresses that the current/main attackers are using to accept ransom payments. (You really shouldn’t pay at this point, even if you are impacted: it’s unlikely they’re actually giving up keys anymore, since the likelihood of them getting cash out of the wallets without getting caught is pretty slim.)

There’s a full-on CRAN-ified Rbitcoin package but I didn’t need the functionality in it (yet) to do the monitoring. I posted a hastily-crafted gist on Friday so folks could play along at home, but the code here is a bit more nuanced (and does more).

In the spirit of these R⁶ posts, the following is presented without further commentary apart from the interwoven code comments, with one caveat: this method captures super-micro-payments that do not necessarily translate 1:1 to victim count. It’s well within ball-park estimates, but not precise without introspecting each transaction.

library(jsonlite)
library(hrbrthemes)
library(tidyverse)

# the wallets accepting ransom payments

wallets <- c(
  "115p7UMMngoj1pMvkpHijcRdfJNXj6LrLn",
  "12t9YDPgwueZ9NyMgw519p7AA8isjr6SMw",
  "13AM4VW2dhxYgXeQepoHkHSQuy6NgaEb94"
)

# easy way to get each wallet info vs bringing in the Rbitcoin package

sprintf("https://blockchain.info/rawaddr/%s", wallets) %>%
  map(jsonlite::fromJSON) -> chains

# get the current USD conversion (tho the above has this, too)

curr_price <- jsonlite::fromJSON("https://blockchain.info/ticker")

# calculate some basic stats

tot_bc <- sum(map_dbl(chains, "total_received")) / 1e8
tot_usd <- tot_bc * curr_price$USD$last
tot_xts <- sum(map_dbl(chains, "n_tx"))

# This needs to be modified once the counters go above 100 and also needs to
# account for rate limits in the blockchain.info API

paged <- which(map_dbl(chains, "n_tx") > 50)
chains2 <- list() # keeps the bind_rows() below happy if no wallet needs paging
if (length(paged) > 0) {
  sprintf("https://blockchain.info/rawaddr/%s?offset=50", wallets[paged]) %>%
    map(jsonlite::fromJSON) -> chains2
}

# We want hourly data across all transactions

map_df(chains, "txs") %>%
  bind_rows(map_df(chains2, "txs")) %>% 
  mutate(xts = anytime::anytime(time),
         xts = as.POSIXct(format(xts, "%Y-%m-%d %H:00:00", tz="GMT"), tz="GMT")) %>%
  count(xts) -> xdf

# Plot it

ggplot(xdf, aes(xts, y = n)) +
  geom_col() +
  scale_y_comma(limits = c(0, max(xdf$n))) +
  labs(x = "Day/Time (GMT)", y = "# Transactions",
       title = "Bitcoin Ransom Payments-per-hour Since #WannaCry Ransomworm Launch",
       subtitle=sprintf("%s transactions to-date; %s total bitcoin; %s USD; Chart generated at: %s EDT",
                        scales::comma(tot_xts), tot_bc, scales::dollar(tot_usd), Sys.time())) +
  theme_ipsum_rc(grid="Y")
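
As noted in the code comments, the paging shim above only grabs one extra page of 50 transactions per wallet. Below is a hedged sketch of a more general pager; get_all_txs() is a hypothetical helper (not from any package), and the pause length is just a guess at staying polite to the blockchain.info rate limits.

# hypothetical helper: page through a wallet's transactions 50 at a time
get_all_txs <- function(wallet, page_size = 50, pause = 10) {
  first <- jsonlite::fromJSON(sprintf("https://blockchain.info/rawaddr/%s", wallet))
  n_pages <- ceiling(first$n_tx / page_size)
  extra <- map_df(seq_len(max(n_pages - 1, 0)), function(pg) {
    Sys.sleep(pause) # stay under the (unofficial) rate limit
    sprintf("https://blockchain.info/rawaddr/%s?offset=%d", wallet, pg * page_size) %>%
      jsonlite::fromJSON() %>%
      .$txs
  })
  bind_rows(first$txs, extra)
}

# usage (then re-run the hourly count() on the combined result):
# all_txs <- map_df(wallets, get_all_txs)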

I hope all goes well with everyone as you try to ride out this ransomworm storm over the coming weeks. It will likely linger for quite a while, so make sure you patch!

Tagging this as #rstats-related since many R coders use Travis-CI to automate package builds (and other things). Security researcher Ivan Vyshnevskyi did some ++gd responsible disclosure to the Travis-CI folks letting them know they were leaking the contents of “secure” environment variables in the build logs.

The TL;DR on “secure” environment variables is that they let you store secrets — such as OAuth keys or API tokens — ostensibly “securely” (they have to be decrypted to be used, so someone/something has the keys to do that, which means they’re not really “secure”). That is, they should not leak them in build logs. Except that they did…for a bit.

As mentioned, this flaw was reported and is now fixed. Regen your “secrets” and keep an eye on Travis security announcements moving forward.

Political machinations are a tad insane in the U.S. these days & I regularly hit up @ProPublica & @GovTrack sites (& sub to the GovTrack e-mail updates) as I try to be an informed citizen, especially since I’ve got a Senator and Representative who seem to be easily swayed.

I’ve always appreciated the ProPublica and GovTrack cartograms as they present a great deal of information in a compact space (especially the House versions). Something nudged me into starting an R package to let folks create them in R (mainly with ggplot2 but an htmlwidget version is planned), which I’ve dubbed voteogram.

With the voteogram package, you can:

  • pull ProPublica roll call vote data for the 101st Congress up through today (via roll_call())
  • plot ProPublica-esque Senate roll call vote cartograms
  • plot ProPublica-esque House roll call vote cartograms
  • plot GovTrack-esque House roll call vote cartograms

GovTrack uses — what I’ve seen @thosjleeper refer to as — a “parliamentary plot” for their version of the Senate roll call cartogram and sir Leeper already has that type of plot covered in ggparliament, so I’ve just focused on the other ones here.

Roll Call

You need data for these cartogram generation functions and you can specify your own populated data frame (the needed columns are in the manual pages for the cartogram plotters). However, you’ll likely want to plot existing data that others have tallied and ProPublica makes that super simple since each vote is in a standalone JSON file. All you have to do is specify whether you want the roll call vote for the house or senate, the Congress number (current one is 115), the session number (current one is 1) and the roll call vote number.

For example, we can see all the Representatives who voted, recently, to repeal the ACA with the following function call:

(h256 <- roll_call("house", 115, 1, 256))
## 115th Congress / Session: 1 / House Roll Call: 256 / May  4, 2017
## 
## American Health Care Act
## 
## Result: Passed

str(h256, max.level = 1)
## List of 29
##  $ vote_id              : chr "H_115_1_256"
##  $ chamber              : chr "House"
##  $ year                 : int 2017
##  $ congress             : chr "115"
##  $ session              : chr "1"
##  $ roll_call            : int 256
##  $ needed_to_pass       : int 216
##  $ date_of_vote         : chr "May  4, 2017"
##  $ time_of_vote         : chr "02:18 PM"
##  $ result               : chr "Passed"
##  $ vote_type            : chr "RECORDED VOTE"
##  $ question             : chr "On Passage"
##  $ description          : chr "American Health Care Act"
##  $ nyt_title            : chr "On Passage"
##  $ total_yes            : int 217
##  $ total_no             : int 213
##  $ total_not_voting     : int 1
##  $ gop_yes              : int 217
##  $ gop_no               : int 20
##  $ gop_not_voting       : int 1
##  $ dem_yes              : int 0
##  $ dem_no               : int 193
##  $ dem_not_voting       : int 0
##  $ ind_yes              : int 0
##  $ ind_no               : int 0
##  $ ind_not_voting       : int 0
##  $ dem_majority_position: chr "No"
##  $ gop_majority_position: chr "Yes"
##  $ votes                :Classes ‘tbl_df’, ‘tbl’ and 'data.frame':  435 obs. of  11 variables:
##  - attr(*, "class")= chr [1:2] "pprc" "list"

As you can see, it has a custom print function and the usable data (for cartographic needs) is in $votes. You can go to town with just that information, making bar charts or tracking individual Congress-critter votes.
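
For instance, here’s a quick tally of votes by party and position; party and position are assumed column names on my part, so check names(h256$votes) for the exact spellings:

library(dplyr) # if not already loaded via tidyverse

count(h256$votes, party, position) %>%
  arrange(desc(n))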

Do your best to cache this data as you retrieve it. ProPublica is a non-profit and the JSON files are served from AWS. While Amazon’s S3 service allots a certain amount of free bandwidth per month, it’s best to make sure you’re not tipping them over the limit in any given month. Plus, the vote data doesn’t change once it’s recorded. Consider donating to them if you decide to always grab fresh copies.
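
Since roll call results never change once recorded, even a dirt-simple disk cache goes a long way. Here’s a minimal sketch; cached_roll_call() is a hypothetical wrapper of my own, not part of the voteogram package:

# hypothetical wrapper: fetch once, then re-read the saved copy from disk
cached_roll_call <- function(chamber, congress, session, number, dir = "rollcall-cache") {
  dir.create(dir, showWarnings = FALSE)
  fil <- file.path(dir, sprintf("%s_%s_%s_%s.rds", chamber, congress, session, number))
  if (file.exists(fil)) return(readRDS(fil))
  res <- roll_call(chamber, congress, session, number)
  saveRDS(res, fil)
  res
}

# h256 <- cached_roll_call("house", 115, 1, 256)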

There’s a fortify function for this object (it’s classed pprc) so you can pass it right into ggplot() for use or pipe it into a dplyr chain for aggregation & filtering.

House Rules

With the data in hand, we can make some cartograms (the real purpose of the package). I riffed off the ProPublica colors (and haven’t fully finished copying them yet as I need to search for 2 more categories of Independent voting colors) but you can replace them with anything you want. Just reset the scale and use the names in the exposed color value vectors.

There’s also a theme_voteogram() which is designed to augment any base theme (like hrbrthemes::theme_ipsum_rc()); it’s much like ggthemes::theme_map().

Here’s the ProPublica view for that particular vote:

house_carto(h256) +
  labs(x=NULL, y=NULL, 
       title="House Vote 256 - Passes American Health Care Act,\nRepealing Obamacare") +
  theme_ipsum_rc(plot_title_size = 24) +
  theme_voteogram()

The house_carto() function defaults to the ProPublica cartogram, but you can easily change that:

house_carto(h256, "gt") +
  labs(x=NULL, y=NULL, 
       title="House Vote 256 - Passes American Health Care Act,\nRepealing Obamacare") +
  theme_ipsum_rc(plot_title_size = 24) +
  theme_voteogram()

Senate Drools

Again, the senate_carto() function only has the ProPublica-esque cartogram available and works pretty much the same way after getting the Senate vote data:

sen <- roll_call("senate", 115, 1, 110)

senate_carto(sen) +
  labs(title="Senate Vote 110 - Invokes Cloture on Neil Gorsuch Nomination") +
  theme_ipsum_rc(plot_title_size = 24) +
  theme_voteogram()

FIN

There’s a bit of work left to do in the package (including an htmlwidget version). You’re invited to file PRs or Issues as you are so moved.

I caught a glimpse of a tweet by @dataandme on Friday.

Mara is — without a doubt — the best data science promoter in the Twitterverse. She seems to have her finger on the pulse of everything that’s happening in the data science world and is one of the most ardent amplifiers there is.

The post she linked to was a bit older (2015) and had a very “stream of consciousness” feel to it. I actually wish more R folks took to their blogs like this to post their explorations into various topics. The code in that post likely worked at the time it was written and accomplished the desired goal (which makes it ultimately decent code). Said practice will help both you and others.

Makeover Time

As I’ve noted before, web scraping has some rules, even though they can be tough to find. This post made a very common mistake of not putting in a time delay between requests (a cardinal scraping rule) which we’ll fix in a moment.

There are a few other optimizations we can make. The first is moving from a for loop to something a bit more vectorized. Another is to figure out how many pages we need to scrape from information in the first set of results.

However, an even bigger one is to take advantage of the underlying XHR POST request that the new version of the site ultimately calls (it appears this site has undergone some changes since the blog post and it’s unlikely the code in the post actually works now).

Let’s start by setting up a function to grab individual pages:

library(httr)
library(rvest)
library(stringi)
library(tidyverse)

get_page <- function(i=1, pb=NULL) {
  
  if (!is.null(pb)) pb$tick()$print()
  
  POST(url = "http://www.propwall.my/wp-admin/admin-ajax.php", 
       body = list(action = "star_property_classified_list_change_ajax", 
                   tab = "Most Relevance", 
                   page = as.integer(i), location = "Mont Kiara", 
                   category = "", listing = "For Sale", 
                   price = "", keywords = "Mont Kiara, Kuala Lumpur", 
                   filter_id = "17", filter_type = "Location", 
                   furnishing = "", builtup = "", 
                   tenure = "", view = "list", 
                   map = "on", blurb = "0"), 
       encode = "form") -> res
  
  stop_for_status(res)
  
  res <- content(res, as="parsed") 
  
  Sys.sleep(sample(seq(0,2,0.5), 1))
  
  res
  
}

The i parameter gets passed into the body of the POST request. You can find that XHR POST request via the Network tab of your browser Developer Tools view. You can either transcribe it by hand or use the curlconverter package (which is temporarily off CRAN so you’ll need to get it from github) to auto-convert it to an httr::VERB request.

We also add a parameter (default to NULL) to support the use of a progress bar (so we can see what’s going on). If we pass in a populated dplyr progress bar, this will tick it down for us.

Now, we can use that to get the total number of listings.

get_page(1) %>% 
  html_node(xpath=".//a[contains(., 'Classifieds:')]") %>% 
  html_text() %>% 
  stri_match_last_regex("([[:digit:],]+)$") %>% 
  .[,2] %>% 
  stri_replace_all_fixed(",", "") %>% 
  as.numeric() -> classified_ct

total_pages <- 1 + (classified_ct %/% 20)
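
A minor aside: when the classified count lands exactly on a multiple of 20, the arithmetic above requests one extra (empty) page; ceiling() avoids that if you want to be exact:

total_pages <- ceiling(classified_ct / 20)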

We’ll setup another function to extract the listing URLs and titles:

get_listings <- function(pg) {
  data_frame(
    link = html_nodes(pg, "div#list-content > div.media * h4.media-heading > a:nth-of-type(1)" ) %>%  html_attr("href"),
    description = html_nodes(pg, "div#list-content > div.media * h4.media-heading > a:nth-of-type(1)" ) %>% html_text(trim = TRUE)  
  )
}

Rather than chain calls to html_nodes(), we take advantage of well-formed CSS selectors (which ultimately get auto-translated to XPath strings). This has the advantage of speed (though that’s not necessarily an issue when web scraping) as well as brevity.
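
If you’re curious what rvest actually does with that selector, selectr (the package rvest uses for the CSS-to-XPath translation) will happily show you the generated XPath:

selectr::css_to_xpath("div#list-content > div.media * h4.media-heading > a:nth-of-type(1)")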

Now, we’ll scrape all the listings:

pb <- progress_estimated(total_pages)
listings_df <- map_df(1:total_pages, ~get_listings(get_page(.x, pb)))

Yep. That’s it. Everything’s been neatly abstracted into functions and we’ve taken advantage of some modern R idioms to accomplish our first task.

FIN

With the above code you should be able to do your own makeover of the remaining code in the original post. Remember to:

  • add a delay when you sequentially scrape pages from a site
  • abstract out common operations into functions
  • take advantage of purrr functions (or built-in *apply functions) to avoid for loops

I’ll close with a note about adhering to site terms of service / terms and conditions. Nothing I found when searching for the ToS/ToC on the site suggested that scraping, automated grabbing or use of the underlying data in bulk was prohibited. Many sites do have such restrictions — like IMDb (I mention that as it’s been used a lot lately by R folks and it really shouldn’t be). LinkedIn recently sued scrapers for such ToS violations.

I fundamentally believe violating ToS is unethical behavior and should be avoided on those grounds alone. When I come across sites I need information from that have restrictive ToS, I contact the site owner (when I can find them) and ask for permission, and I have only been refused a small handful of times. Given those recent legal actions, it’s also better to be safe than sorry.

Once I realized that my planned, larger post would not come to fruition today I took the R⁶ post (i.e. “minimal expository, keen focus”) route, prompted by a Twitter discussion with some R mates who needed to convert “lightly formatted” Microsoft Word (docx) documents (think: a simple document with a heading, some bold and some italics) to markdown output like this:

Does pandoc work?
=================

Simple document with **bold** and *italics*.

This is definitely a job that pandoc can handle.

pandoc is a Haskell (yes, Haskell) program created by John MacFarlane and is an amazing tool for transcoding documents. And, if you’re a “modern” R/RStudio user, you likely use it every day because it’s ultimately what powers rmarkdown / knitr.

Yes, you read that correctly. Your beautiful PDF, Word and HTML R reports are powered by — and, would not be possible without — Haskell.

Doing the aforementioned conversion from docx to markdown is super-simple from R:

rmarkdown::pandoc_convert("simple.docx", "markdown", output="simple.md")

Give the help on rmarkdown::pandoc_convert() a read as well as the very thorough and helpful documentation over at pandoc.org to see the power available at your command.
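
And if you have a whole folder of these to convert, the same call loops nicely. A quick sketch, assuming the docx files sit in the current working directory:

library(purrr)

# convert every docx in the working directory to a same-named .md file
walk(
  list.files(pattern = "\\.docx$"),
  ~rmarkdown::pandoc_convert(.x, "markdown", output = sub("\\.docx$", ".md", .x))
)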

Just One More Thing

This section — technically — violates the R⁶ principle so you can stop reading if you’re a purist :-)

There’s a neat, not-on-CRAN package by François Keck called subtools (https://github.com/fkeck/subtools) which can slice, dice and reformat digital content subtitles. There are multiple formats for these subtitle files and it seems to be able to handle them all.

There was a post (earlier in April) about Ranking the Negativity of Black Mirror Episodes. That post is in Python and I’ve never had time to fully replicate it in R.

Here’s a snippet (sans expository) that can get you started pulling subtitles into R and tidytext. I would have written scraper code, but the various subtitle aggregation sites make that a task suited for something like my splashr package and I just had no cycles to write the code. So, I grabbed the first season of “The Flash” and used the Bing sentiment lexicon from tidytext to see how the season looked.

The overall scoring for a given episode is naive and can definitely be improved upon (one quick alternative is sketched just after the code below).

Definitely drop a link to anything you create in the comments!

# devtools::install_github("fkeck/subtools")

library(subtools)
library(tidytext)
library(hrbrthemes)
library(tidyverse)

data(stop_words)

bing <- get_sentiments("bing")
afinn <- get_sentiments("afinn")

fils <- list.files("flash/01", pattern = "srt$", full.names = TRUE)

pb <- progress_estimated(length(fils))

map_df(1:length(fils), ~{

  pb$tick()$print()

  read.subtitles(fils[.x]) %>%
    sentencify() %>%
    .$subtitles %>%
    unnest_tokens(word, Text) %>%
    anti_join(stop_words, by="word") %>%
    inner_join(bing, by="word") %>%
    inner_join(afinn, by="word") %>%
    mutate(season = 1, ep = .x)

}) %>% as_tibble() -> season_sentiments


count(season_sentiments, ep, sentiment) %>%
  mutate(pct = n/sum(n),
         pct = ifelse(sentiment == "negative", -pct, pct)) -> bing_sent

ggplot() +
  geom_ribbon(data = filter(bing_sent, sentiment=="positive"),
              aes(ep, ymin=0, ymax=pct, fill=sentiment), alpha=3/4) +
  geom_ribbon(data = filter(bing_sent, sentiment=="negative"),
              aes(ep, ymin=0, ymax=pct, fill=sentiment), alpha=3/4) +
  scale_x_continuous(expand=c(0,0.5), breaks=seq(1, 23, 2)) +
  scale_y_continuous(expand=c(0,0), limits=c(-1,1),
                     labels=c("100%\nnegative", "50%", "0", "50%", "positive\n100%")) +
  labs(x="Season 1 Episode", y=NULL, title="The Flash — Season 1",
       subtitle="Sentiment balance per episode") +
  scale_fill_ipsum(name="Sentiment") +
  guides(fill = guide_legend(reverse=TRUE)) +
  theme_ipsum_rc(grid="Y") +
  theme(axis.text.y=element_text(vjust=c(0, 0.5, 0.5, 0.5, 1)))
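
As promised, here’s one quick, less-naive alternative: the AFINN values are already joined in, so a per-episode mean is nearly a one-liner. Note that the AFINN column is score in the tidytext release used here; newer releases renamed it to value, so adjust accordingly.

# average AFINN word score per episode instead of a raw positive/negative tally
group_by(season_sentiments, ep) %>%
  summarise(mean_afinn = mean(score)) %>%
  ggplot(aes(ep, mean_afinn)) +
  geom_col() +
  scale_x_continuous(breaks = seq(1, 23, 2)) +
  labs(x = "Season 1 Episode", y = "Mean AFINN score per word") +
  theme_ipsum_rc(grid = "Y")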

The intrepid @ma_salmon cranked out another blog post, remixing classical music schedule data from Radio Swiss Classic. It’s a fun post and you should read it before continuing here.

Seriously, click the link and go read it before continuing.

No, I mean it. Click the link or the rest of this makes no sense ;-)

OK, good. You finally read her post.

Now, I’m riffing off of said post here for four reasons. Three of the reasons are short, one is longer.

The first, short one is: be kind to web servers when scraping. If you ran a site and suddenly got hit with 3,000+ immediately sequential requests you might not be able to handle it depending on your server config. At a minimum add a Sys.sleep(sample(seq(0,1,0.25), 1)) before each sequential scrape and — if you can spare the time — sample(5,1) would be even better for a delay.

The second, short one is: purrr::safely() is your bff when it comes to xml2::read_html() and other network-ops. The internet is fundamentally broken. Nodes die. Pages get lost. Links rot. You have to be able to handle exceptions and if you define something like s_read_html <- safely(read_html) then when you do s_read_html("https://example.com/") the $result component will be NULL if the network request failed but will contain valid, parsed HTML if it succeeds. It is silent by default and works quite well (as we’ll see below).
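
A tiny illustration of that (standalone, not part of the scraper below):

library(purrr)
library(xml2)

s_read_html <- safely(read_html)

ok  <- s_read_html("https://example.com/")
bad <- s_read_html("https://this-domain-does-not-exist.invalid/")

is.null(ok$result)   # FALSE: the parsed page lives in ok$result
is.null(bad$result)  # TRUE: the failure is captured, quietly, in bad$error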

The third, short one is: MPGA (Make Progress-bars Great Again). dplyr::progress_estimated() can really simplify the usage of progress bars in purrr calls (drop a note in the comments if the code is confusing and I’ll add some expository).

The last requires the code example for context:

library(rvest)
library(stringi)
library(lubridate)
library(tidyverse)

s_read_html <- safely(read_html)

# helper for brevity
xtract_nodes <- function(node, css) {
  html_nodes(node, css) %>% html_text(trim = TRUE)
}

get_one_day_program <- function(date=Sys.Date(),
                                base_url="http://www.radioswissclassic.ch/en/music-programme/search/%s",
                                pb=NULL) {

  if (!is.null(pb)) pb$tick()$print()

  Sys.sleep(sample(seq(0,1,0.25), 1)) # ideally, make this sample(5,1)

  date <- ymd(date) # handles case where input is character ISO date

  pg <- s_read_html(sprintf(base_url, format(date, "%Y%m%d")))

  if (!is.null(pg$result)) {

    data_frame(

      date = date,
      duration = xtract_nodes(pg$result, 'div[class="playlist"] *
                                            span[class="time hidden-xs"]') %>% hm() %>% as.numeric(),
      artist = xtract_nodes(pg$result, 'div[class="playlist"] * span[class="titletag"]'),
      title = xtract_nodes(pg$result, 'div[class="playlist"] * span[class="artist"]'),

      hour = purrr::map(0:23, ~{
        if (.x<23) {
          nod <- html_nodes(pg$result,
                             xpath=sprintf(".//div[@id='%02d']/following-sibling::div[contains(@class, 'item-row')
                                                                 and (following-sibling::div[@id='%02d'])]", .x, .x+1))
        } else {
          nod <- html_nodes(pg$result,
                            xpath=sprintf(".//div[@id='%02d']/following-sibling::div[contains(@class, 'item-row')]", .x))
        }
        rep(.x, length(nod))
      }) %>%
        flatten_int()

    )

  } else {
    closeAllConnections()
    NULL
  }

}

search_dates <- seq(from = ymd("2008-09-01"), to = ymd("2017-04-22"), by = "1 day")

pb <- progress_estimated(length(search_dates[1:5]))
programs_df <- map_df(search_dates[1:5], get_one_day_program, pb=pb)
programs_df
## # A tibble: 825 × 5
##          date duration                    artist                                                                         title  hour
##        <date>    <dbl>                     <chr>                                                                         <chr> <int>
## 1  2008-09-01       60   Franz Anton Hoffmeister "Andante grazioso" From Flute Quartet In A Major (After Mozart's KV 331) (CH)     0
## 2  2008-09-01      360     Johann Nepomuk Hummel                              "Rondo brillante" Op. 56 For Piano And Orchestra     0
## 3  2008-09-01     1380            Franz Schubert                       "Andante con moto" From Symphony No. 9 In C Major D 944     0
## 4  2008-09-01     2340       Camille Saint-Saëns                                       Violin Concerto No. 1 In A Major Op. 20     0
## 5  2008-09-01     3000        Alexander Scriabin                                           Nocturne In A Flat Major Op. posth.     0
## 6  2008-09-01     3180        Alexander Glazunov                                          Valse From "Scènes de ballet" Op. 52     0
## 7  2008-09-01     3540 Carl Philipp Emanuel Bach                                                           Symphony In G Major     0
## 8  2008-09-01     4200            Giuseppe Verdi                      "O Signore, dal tetto natio" From The Opera "I Lombardi"     1
## 9  2008-09-01     4440             Franz Krommer                                 Clarinet Concerto In E Flat Major Op. 36 (CH)     1
## 10 2008-09-01     5820            Georges Onslow             "Andantino molto cantabile" From Symphony No. 4 In G Major Op. 71     1
## # ... with 815 more rows

One of the reasons Maëlle created her post was to use XPath. Now, I was around when XML was defined and I have a sad, long history with the format, so XPath & I are old “friends” (read: adversaries). However, there are simpler ways to target some of the nodes.

xpath="//span[@class='time hidden-xs']//text()" is ++gd XPath but it doesn’t need to be if we switch to using html_nodes() which will automatically translate CSS selectors to XPath for us. That bit of XPath turns into div[class="playlist"] * span[class="time hidden-xs"]. Why the extra selector at the beginning? Read on!

div[class="playlist"] * span[class="time hidden-xs"] actually translates to the following XPath:

selectr::css_to_xpath('div[class="playlist"] * span[class="time hidden-xs"]')
## [1] "descendant-or-self::div[@class = 'playlist']/descendant::*/descendant::span[@class = 'time hidden-xs']"

I use the parent playlist <div> because a few of the code bits in Maëlle’s post have to subtract away the last node: the XPath expression is a bit too greedy and also grabs the “now playing” info instead of just the “what played that day” info. That’s not strictly necessary for the time-code, but it is for the artist & title. You can see that it simplifies the scraping a bit.

However, we can use XPath to scrape the hour each song played and use it to fill the resultant data frame.

The expression .//div[@id='%02d']/following-sibling::div[contains(@class, 'item-row') and (following-sibling::div[@id='%02d'])] is not the most complex XPath, but it is pretty gnarly, and it shows the power of XPath. What we’re doing in that purrr::map() call (which said XPath is in) is:

  • if the hour is 0 through 22, get all the sibling target nodes between one <div id="hh"> and the next <div id="hh">
  • if the hour is 23, get all the target nodes until there are no more siblings
  • for either result, make an integer vector containing the hour repeated n times (n being the length of the number of songs played in the hour)
  • flatten it all into one big integer vector

(also: note that whitespace is your bff as well when it comes to formatting XPath queries)

If any read_html request is “bad” NULL will be returned instead of a data_frame, which purrr::map_df() will ignore.

I only did 5 scrapes since I won’t be using the data, but it’s working well on other random sequences I tried.

I tossed in a few more alternative ways to get some of the data, which you can pick up on if you compare the code bits to each other.

Drop any questions, jibes or better XPath queries (once you post an XPath query on the internet the XPath wonks — like me — come out of hiding to prey on innocent bloggers) in the comments.

I caught this “gem” of a chart in the Wall Street Journal tonight.

It’s pretty hard to compare store-to-store, even though it is fairly clear which ones are going-going-gone. If we want to see the relative percentage of each store closing and also want to see how they stack up against each other, then let’s make a column of 100% bars and label total stores in each:

library(hrbrthemes)
library(tidyverse)

read.table(text='store,closing,total
"Radio Shack",550,1500
"Payless",400,2600
"Rue21",400,1100
"The Limited",250,250
"bebe",180,180
"Wet Seal",170,170
"Crocs",160,560
"JCPenney",138,1000
"American Apparel",110,110
"Kmart",109,735
"hhgregg",88,220
"Sears",41,695', sep=",", header=TRUE, stringsAsFactors=FALSE) %>% 
  as_tibble() %>% 
  mutate(remaining = total - closing,
         gone = round((closing/total) * 100)/100,
         stay = 1-gone,
         rem_lab = ifelse(remaining == 0, "", scales::comma(remaining))) %>% 
  arrange(desc(stay)) %>% 
  mutate(store=factor(store, levels=store)) -> closing_df

update_geom_font_defaults(font_rc)

ggplot(closing_df) +
  geom_segment(aes(0, store, xend=gone, yend=store, color="Closing"), size=8) +
  geom_segment(aes(gone, store, xend=gone+stay, yend=store, color="Remaining"), size=8) +
  geom_text(aes(x=0, y=store, label=closing), color="white", hjust=0, nudge_x=0.01) +
  geom_text(aes(x=1, y=store, label=rem_lab), color="white", hjust=1, nudge_x=-0.01) +
  scale_x_percent() +
  scale_color_ipsum(name=NULL) +
  labs(x=NULL, y=NULL, 
       title="Selected 2017 Store closings (estimated)",
       subtitle="Smaller specialty chains such as Bebe and American Apparel are closing their stores,\nwhile larger chains such as J.C. Penney and Sears are scaling back their footprint.") +
  theme_ipsum_rc(grid="X") +
  theme(axis.text.x=element_text(hjust=c(0, 0.5, 0.5, 0.5, 1))) +
  theme(legend.position=c(0.875, 1.025)) +
  theme(legend.direction="horizontal")

One might try circle packing or a treemap to show both relative store count and percentage, but I think the bigger story is the percent reduction for each retail chain. It’d be cool to see what others come up with.
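
If you want to try the dual-encoding route, here’s a hedged sketch using the treemapify package (an assumption on my part that you have it installed), sizing tiles by total store count and shading by the fraction closing:

library(treemapify) # assumption: not used elsewhere in this post

ggplot(closing_df, aes(area = total, fill = gone, label = store)) +
  geom_treemap() +
  geom_treemap_text(colour = "white") +
  scale_fill_continuous(name = "Share closing", labels = scales::percent) +
  labs(title = "Selected 2017 store closings (estimated)")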

It’s likely you’ve seen the news regarding yet-another researcher showing off a phishing domain attack. The technique is pretty simple:

  • find a target domain you want to emulate
  • register a homoglpyh version of it
  • use the hacker’s favorite tool, Let’s Encrypt to serve it up with a nice, shiny green lock icon
  • deploy some content
  • phish someone
  • Profit!

The phishing works since Internationalized Domain Names (IDNs) have been “a thing” for a while (anything for the registrars to make more money) and Let’s Encrypt provides a domain-laundering service for these attackers. But, why should attackers have all the fun? Let’s make some domain homoglyphs in R.

Have Glyph, Will Hack

Rob Dawson has a spiffy homoglyph generator and even has a huge glyph-alike file, but we don’t need the full list to don the hacker cap for this exercise. I’ve made a stripped-down version of it that has (mostly) glyphs that should display correctly in “western” locales. You can pull the full list and tweak the example to broaden the attack capabilities. Let’s take a look:

library(stringi)
library(urltools)
library(purrr)

URL <- "https://rud.is/dl/homoglyphs.txt" # trimmed down from https://github.com/codebox/homoglyph
fil <- basename(URL)
invisible(try(httr::GET(URL, httr::write_disk(fil)), silent = TRUE))

chars <- stri_read_lines(fil)
idx_char <- stri_sub(chars, 1,1)
stri_sub(chars, 1, 1) <-  ""
chars <- set_names(chars, idx_char)

tail(chars)
##                                         u 
##          "ʋυцս\u1d1cu??????????????????" 
##                                         v 
##        "νѵט\u1d20ⅴ∨⋁v??????????????????" 
##                                         w 
##                                      "w" 
##                                         x 
##                "×хᕁᕽ᙮ⅹ⤫⤬⨯x?????????????" 
##                                         y 
## "ɣʏγуүყ\u1d8c\u1effℽy??????????????????" 
##                                         z 
##                   "\u1d22z?????????????"

What we did there was to read in the homoglyph lines and create a lookup table for Latin characters. Now we need a transformation function.

to_homoglyph <- function(domain) {

  suf <- suffix_extract(domain)
  domain <- stri_replace_last_fixed(domain, sprintf(".%s", suf$suffix[1]), "")

  domain_split <- stri_split_boundaries(domain, type="character")[[1]]

  map_chr(domain_split, ~{
    found <-  chars[.x]
    pos <- sample(stri_count_boundaries(found, type="character"), 1)
    stri_sub(found, pos, pos)
  }) %>%
    c(".", suf$suffix[1]) %>%
    stri_join(collapse="")

}

The basic idea is to:

  • carve out the domain suffix (we need to ensure valid TLDs/suffixes are used in the final domain)
  • split the input domain into separate characters
  • select a homoglyph of the character at random
  • join the separate glyphs and the TLD/suffix back together.

We can try it out with a very familiar domain:

(converted <- to_homoglyph("google.com"))
## [1] "ƍ၀໐?|?.com"

Now, that’s using all possible homoglyphs and it might not look like google.com to you, but imagine whittling down the list to ones that are really close to Latin character set matches. Or, imagine you’re in a hurry and see that version of Google’s URL with a shiny, green lock icon from Let’s Encrypt. You might not really give it a second thought if the page looked fine (or if you were on a mobile browser without a location bar showing).

What’s the solution?

Firefox has a configuration setting to turn these IDNs into punycode in the location bar. What does that mean? We can use the urltools::puny_encode() function to find out:

puny_encode("ƍ၀໐?|?.com")
## [1] "xn--|-npa992hbmb6w79iesa.com"

Most folks will be much less likely to trust that domain name (if they bother looking in the location bar). Note that it will still have the reassuring green Let’s Encrypt lock icon, but you shouldn’t be trusting SSL/TLS anymore for integrity or authenticity anyway.

Chrome Canary (super early bird alpha versions) expands IDNs to punycode by default today and a shorter-cycle release to stable channel is forthcoming. I’m told Edge does somewhat sane things with IDNs and if Safari doesn’t presently handle them Apple will likely release an interstitial security update to handle it.

FIN

See if you can generate some fun look-alikes, such as ???????.com, and drop some latte change to register an IDN and add a free hacking certificate to it to see just how easy this entire process is. Note that attackers are automating this process, so they may have beaten you to your favorite homoglyph IDN.

If you’re on Chrome, give the Punycode Alert extension a go if you’d like some extra notification/protection from these domains.

NOTE: to_homoglyph() is not vectorised (it’s an exercise left to the reader).
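
If you want the quick way out of that exercise, one minimal (and admittedly boring) approach is to map the scalar function over a character vector:

# simple vectorised wrapper around the scalar to_homoglyph()
to_homoglyphs <- function(domains) purrr::map_chr(domains, to_homoglyph)

# to_homoglyphs(c("google.com", "apple.com"))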