
Waffle House announced it was closing hundreds of stores this week due to SARS-CoV-2 (a.k.a. COVID-19). This move garnered quite a bit of media attention since former FEMA Administrator Craig Fugate used the restaurant chain as an indicator of both the immediate and overall severity of a natural disaster. [He’s not the only one](https://www.ehstoday.com/emergency-management/article/21906815/what-do-waffles-have-to-do-with-risk-management). The original concept was pretty straightforward:

> For example, if a Waffle House store is open and offering a full menu, the index is green. If it is open but serving from a limited menu, it’s yellow. When the location has been forced to close, the index is red. Because Waffle House is well prepared for disasters, Kouvelis said, it’s rare for the index to hit red. For example, the Joplin, Mo., Waffle House survived the tornado and remained open.
>
> “They know immediately which stores are going to be affected and they call their employees to know who can show up and who cannot,” he said. “They have temporary warehouses where they can store food and most importantly, they know they can operate without a full menu. This is a great example of a company that has learned from the past and developed an excellent emergency plan.”

SARS-CoV-2 is not a tropical storm, so conditions are a bit different and a tad more complex when it comes to gauging the severity of this particular disaster (mostly caused by inept politicians across the globe), which gave me an idea for how to make the Waffle House Index a proper index, i.e. a _“statistical measure of change in a representative group of individual data points”_¹.

In the case of an outbreak, rather than a simple green/yellow/red condition state, using the ratio of closed to open Waffle House locations as a numeric index — [0-1] — seems to make more sense since it may better help indicate:

  • when shelter-in-place became mandatory where a given restaurant is located
  • the severity of SARS-CoV-2-caused symptoms for a given location
  • disruptions in the supply chain for a given location due to SARS-CoV-2

I kinda desperately needed a covidistraction so I set out to see how hard it would be to build such an index metric.

Waffle House lets you find locations via a standard map/search interface. They provide lots of data via that map which can be used to figure out which stores are open and which are closed. There’s a nascent R package which contains all the recipes necessary for the data gathering. However, you don’t need to use it, since it powers wafflehouseindex.us, which collects the data whenever the store-closings info changes and provides a daily snapshot of the latest data (direct CSV link).
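If you’d rather roll your own calculation from that daily snapshot, something like the following sketch will get you the ratio. (I’m guessing at the CSV column names and values here, so treat it as illustrative rather than the package’s actual internals.)

library(tidyverse)

# read the daily snapshot
locs <- read_csv("http://wafflehouseindex.us/data/latest.csv")

# compute the share of closed locations; `status` and "closed" are assumed names
locs %>%
  summarise(
    total  = n(),
    closed = sum(status == "closed"),
    index  = closed / total   # the [0-1] ratio described above
  )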

The historical data will make it to a git repo at some point in the near future.

The current index value is 21.2, up quickly from the initial value of 18.1 (that jump was the catalyst for getting the site up and the package done), and the closed locations are shown on the map at the beginning of the post. I went with three qualitative levels on the gauge mostly to keep things simple.

There will absolutely be more location closings and it will be interesting (and, ultimately, very depressing and likely grave) to see how high the index goes and how long it stays above zero.

FIN

The metric is — for the time being — computed across all stores. As noted earlier, this could be broken down into regional index scores to intuit the aforementioned three indicators on a more local level. The historical data (apart from the first closings announcement) is being saved so it will be possible to go back and compute regional indexes when I’ve got more time.
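For the impatient, a regional breakdown could look something like the sketch below, again assuming the snapshot has a state column and a closed-status flag (both names are placeholders):

# per-state index values from the same (assumed) snapshot columns as above
locs %>%
  group_by(state) %>%
  summarise(index = mean(status == "closed")) %>%
  arrange(desc(index))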

I shall reiterate that you should grab the data from http://wafflehouseindex.us/data/latest.csv rather than use the R package, since there’s no point in duplicating the gathering, and the historical data will be up and maintained soon.

Stay safe, folks.

Stuff you need

  • 370g all-purpose flour
  • 7g baking powder
  • 300g sugar
  • 80g shortening
  • 7.5g salt
  • 140g eggs (~2 proper jumbo)
  • 75ml coconut milk or cashew milk or almond milk yogurt
  • 75ml oat milk
  • 15ml vanilla
  • 85ml veg oil
  • 10oz bag Ghirardelli dark chocolate chips

Stuff you do

Oven @ 375°F.

Paddle sugar, shortening and salt. 3-5 mins.

Whisk eggs, milk, yogurt, vanilla & oil.

In three batches, mix/fold ^^ into the paddled mixture.

Sift together dry ingredients and mix until moist. Don’t over-mix.

Fold in chips.

Let sit for 3 mins.

While ^^, put liners in a 12-cup muffin tin.

Evenly distribute batter. It’s ~100g batter (~1/2 dry measuring cup) per muffin.

22-30m in the oven (it really depends on your oven type). You should not be afraid to skewer to test nor to move the tin around to evenly brown.

Cool on wire rack.

Über Tuesday has come and almost gone (some state results will take a while to coalesce) and I’m relieved to say that {catchpole} did indeed work, with the example code from before producing this on first run:

If we tweak the buffer space around the squares, I think the cartogram looks better:

but, you should likely use a different palette (see this Twitter thread for examples).

I noted in the previous post that borders might be possible. While I haven’t solved that use-case for individual states, I did manage to come up with a method for making a light version of the cartogram usable:

library(sf)
library(hrbrthemes) 
library(catchpole)
library(tidyverse)

delegates <- read_delegates()

candidates_expanded <- expand_candidates()

gsf <- left_join(delegates_map(), candidates_expanded, by = c("state", "idx"))

m <- delegates_map()

# split off each "area" on the map so we can make a border+background
list(
  setdiff(state.abb, c("HI", "AK")),
  "AK", "HI", "DC", "VI", "PR", "MP", "GU", "DA", "AS"
) %>% 
  map(~{
    suppressWarnings(suppressMessages(st_buffer(
      x = st_union(m[m$state %in% .x, ]),
      dist = 0.0001,
      endCapStyle = "SQUARE"
    )))
  }) -> m_borders

gg <- ggplot()
for (mb in m_borders) {
  gg <- gg + geom_sf(data = mb, col = "#2b2b2b", size = 0.125)
}

gg + 
  geom_sf(
    data = gsf,
    aes(fill = candidate),
    col = "white", shape = 22, size = 3, stroke = 0.125
  ) +
  scale_fill_manual(
    name = NULL,
    na.value = "#f0f0f0",
    values = c(
      "Biden" = '#f0027f',
      "Sanders" = '#7fc97f',
      "Warren" = '#beaed4',
      "Buttigieg" = '#fdc086',
      "Klobuchar" = '#ffff99',
      "Gabbard" = '#386cb0',
      "Bloomberg" = '#bf5b17'
    ),
    limits = intersect(unique(delegates$candidate), names(delegates_pal))
  ) +
  guides(
    fill = guide_legend(
      override.aes = list(size = 4)
    )
  ) +
  coord_sf(datum = NA) +
  theme_ipsum_es(grid="") +
  theme(legend.position = "bottom")

{ssdeepr}

Researcher pals over at Binary Edge added web page hashing (pre- and post-javascript scraping) to their platform using ssdeep. This approach is in the category of context triggered piecewise hashes (CTPH), a.k.a. locality-sensitive hashing, similar to my R adaptation/packaging of Trend Micro’s tlsh.

Since I’ll be working with BE’s data off-and-on and the ssdeep project has a well-crafted library (plus we might add ssdeep support at $DAYJOB), I went ahead and packaged that up as well.

I recommend using the hash_con() function if you need to read large blobs since it doesn’t require you to read everything into memory first (though hash_file() doesn’t either, but that’s a direct low-level call to the underlying ssdeep library file reader and not as flexible as R connections are).

These types of hashes are great for seeing whether something has changed on a website (or seeing how similar two things are to each other). For instance, how closely do CRAN mirrors match the mothership?

library(ssdeepr) # see the links above for installation

cran1 <- hash_con(url("https://cran.r-project.org/web/packages/available_packages_by_date.html"))
cran2 <- hash_con(url("https://cran.biotools.fr/web/packages/available_packages_by_date.html"))
cran3 <- hash_con(url("https://cran.rstudio.org/web/packages/available_packages_by_date.html"))

hash_compare(cran1, cran2)
## [1] 0

hash_compare(cran1, cran3)
## [1] 94

I picked on cran.biotools.fr as I saw they were well-behind CRAN-proper on the monitoring page.

I noted that BE was doing pre- and post-javascript hashing as well. Why, you may ask? Well, websites behave differently with javascript running, plus they can behave differently when different user-agents are set. Let’s grab a page from Wikipedia a few different ways to show how they are not alike at all, depending on the retrieval context. First, let’s grab some web content!

library(httr)
library(ssdeepr)
library(splashr)

# regular grab
h1 <- hash_con(url("https://en.wikipedia.org/wiki/Donald_Knuth"))

# you need Splash running for javascript-enabled scraping this way
sp <- splash(host = "mysplashhost", user = "splashuser", pass = "splashpass")

# js-enabled with one ua
sp %>%
  splash_user_agent(ua_macos_chrome) %>%
  splash_go("https://en.wikipedia.org/wiki/Donald_Knuth") %>%
  splash_wait(2) %>%
  splash_html(raw_html = TRUE) -> js1

# js-enabled with another ua
sp %>%
  splash_user_agent(ua_ios_safari) %>%
  splash_go("https://en.wikipedia.org/wiki/Donald_Knuth") %>%
  splash_wait(2) %>%
  splash_html(raw_html = TRUE) -> js2

h2 <- hash_raw(js1)
h3 <- hash_raw(js2)

# same way {rvest} does it
res <- httr::GET("https://en.wikipedia.org/wiki/Donald_Knuth")

h4 <- hash_raw(content(res, as = "raw"))

Now, let’s compare them:

hash_compare(h1, h4) # {ssdeepr} built-in vs httr::GET() => not surprising that they're equal
## [1] 100

# things look way different with js-enabled

hash_compare(h1, h2)
## [1] 0
hash_compare(h1, h3)
## [1] 0

# and with variations between user-agents

hash_compare(h2, h3)
## [1] 0

hash_compare(h2, h4)
## [1] 0

# only doing this for completeness

hash_compare(h3, h4)
## [1] 0

For this example, just content size would have been enough to tell the difference (mostly, note how the hashes are equal despite more characters coming back with the {httr} method):

length(js1)
## [1] 432914

length(js2)
## [1] 270538

nchar(
  paste0(
    readLines(url("https://en.wikipedia.org/wiki/Donald_Knuth")),
    collapse = "\n"
  )
)
## [1] 373078

length(content(res, as = "raw"))
## [1] 374099

FIN

If you were in a U.S. state with a primary yesterday and were eligible to vote (and had something to vote for, either a (D) candidate or a state/local bit of business) I sure hope you did!

The ssdeep library works on Windows, so I’ll be figuring out how to get that going in {ssdeepr} fairly soon (mostly to try out the Rtools 4.0 toolchain vs deliberately wanting to support legacy platforms).

As usual, drop issues/PRs/feature requests where you’re comfortable for any of these or other packages.

For folks who are smart enough not to go near Twitter, I’ve been on a hiatus from the platform insofar as reading the Twitter feed goes. “Why” isn’t the subject of this post so I won’t go into it, but I’ve broken this half-NYE resolution on more than one occasion and am very glad I did so late January when I caught a RT of this tweet by WSJ’s Brian McGill:

You can find it here, and a static copy of a recent one is below:

I kinda wanted to try to make a woefully imperfect static version of it in R with {ggplot2}, so I poked around at that URL’s XHR objects and javascript to see if I could find the cartogram and the data source.

The data source was easy as it’s an XHR loaded JSON file: https://asset.wsj.net/wsjnewsgraphics/election/2020/delegates.json.

The cartogram bits… were not. Brian’s two days of manual effort still needed to be put into something that goes onto a web page, and news outlets are super-talented at making compact, fast-loading interactive visualizations, which means one tool they use is “webpack”-esque bundling to combine many small javascript files into one. I did traipse through it to see if there was a back-end JSON or CSV somewhere but could not locate it. However, their cartogram library builds the SVG you see on the page. If you use Developer Tools to inspect any element of the SVG, you can copy the whole SVG “outer HTML” and save it to a local file:

After using an intercept proxy, it turns out this is a dynamically loaded resource, too: https://asset.wsj.net/wsjnewsgraphics/election/delegate-tracker/carto.svg.

That SVG has three top layer groups and has some wicked transforms in it. There was no way I was going to attempt a {statebins}-esque approach to this copycat project (i.e. convert the squares to a grid and map things manually like Brian did) but I had an idea and used Adobe Illustrator to remove the state names layer and the background polygon layer, then “flatten” the image (which — to over-simplify the explanation — flattens all the transforms), and save it back out.

Then, I added some magic metadata prescribed by svg2geojson to turn the SVG into a GeoJSON file (which {sf} can read!). (That sentence just made real cartographers & geocomp’ers weep, btw.)

Now that I had something R could use more easily, there was still work to be done. The SVG 1-px <rect> elements ended up coming across as POLYGONs, and many, many more point-squares came along for the ride (in retrospect, I think they may have been the borders around the states; more on that in a bit).

I used {purrr} and sf::st_coordinates() to figure out where all the 1-px “polygons” were in the {sf} object and isolated them, then added an index field (1:n, n being the number of delegate squares for a given state).
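In case it helps anyone attempting something similar, here’s roughly what that isolation step looks like. The object name, filename, and size threshold are illustrative (this is a sketch, not the exact code I ran):

library(sf)
library(tidyverse)

doc <- st_read("wsj-carto.geojson", quiet = TRUE)  # the svg2geojson output (hypothetical filename)

# a "1-px" square has a tiny bounding box in both dimensions
is_tiny <- map_lgl(st_geometry(doc), ~{
  xy <- st_coordinates(.x)
  diff(range(xy[, "X"])) < 1e-4 && diff(range(xy[, "Y"])) < 1e-4
})

delegate_squares <- doc[is_tiny, ]  # the per-state 1:n index gets added after the XML merge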

I read in the original SVG with {xml2} and extracted the named state groups. Thankfully the order and number of “blocks” matched the filtered {sf} object. I merged them together, turned the 1-px POLYGONs into POINTs, and made the final {sf} object which I put in the nascent {catchpole} package (location below). Here’s a quick view of it using plot():

library(catchpole) # hrbrmstr/catchpole

plot(delegates_map()[1])

delegates_map()
## Simple feature collection with 3979 features and 2 fields
## geometry type:  POINT
## dimension:      XY
## bbox:           xmin: -121.9723 ymin: 37.36802 xmax: -121.9581 ymax: 37.37453
## epsg (SRID):    4326
## proj4string:    +proj=longlat +datum=WGS84 +no_defs
## First 10 features:
##    state idx                   geometry
## 1     WY   1 POINT (-121.9693 37.37221)
## 2     WY   2 POINT (-121.9693 37.37212)
## 3     WY   3 POINT (-121.9691 37.37221)
## 4     WY   4 POINT (-121.9691 37.37212)
## 5     WY   5 POINT (-121.9691 37.37203)
## 6     WY   6 POINT (-121.9691 37.37194)
## 7     WY   7 POINT (-121.9691 37.37185)
## 8     WY   8  POINT (-121.969 37.37221)
## 9     WY   9  POINT (-121.969 37.37212)
## 10    WY  10  POINT (-121.969 37.37203)

All that was needed was to try it out with the real data.

I simplified that process quite a bit in {catchpole} but also made it possible to work with the individual bits on your own. gg_catchpole() will fetch the WSJ delegate JSON and build the basic map for you using my dark “ipsum” theme:

library(sf)
library(catchpole) # hrbrmstr/catchpole
library(hrbrthemes)
library(tidyverse)

gg_catchpole() +
  theme_ft_rc(grid="") +
  theme(legend.position = "bottom")

BONUS!

Now that you have the WSJ JSON file, you can do other basic visualizations with it:

library(hrbrthemes) 
library(waffle)
library(geofacet)
library(tidyverse)

jsonlite::fromJSON(
  url("https://asset.wsj.net/wsjnewsgraphics/election/2020/delegates.json"),
  simplifyDataFrame = FALSE
) -> del

c(
  "Biden" = "#5ac4c2",
  "Sanders" = "#63bc51",
  "Warren" = "#9574ae",
  "Buttigieg" = "#007bb1",
  "Klobuchar" = "#af973a",
  "Bloomberg" = "#AA4671",
  "Steyer" = "#4E4EAA",
  "Yang" = "#C76C48",
  "Gabbard" = "#7B8097"
) -> dcols

bind_cols(del$data$US$delCount) %>% 
  gather(candidate, delegates) %>% 
  filter(delegates > 0) %>%
  arrange(desc(delegates)) %>% 
  mutate(candidate = fct_inorder(candidate)) %>%
  ggplot(aes(candidate, delegates)) +
  geom_col(fill = ggthemes::tableau_color_pal()(1), width = 0.55) +
  labs(
    x = NULL, y = "# Delegates",
    title = "2020 Democrat POTUS Race Delegate Counts",
    subtitle = sprintf("Date: %s", Sys.Date()),
    caption = "Data source: WSJ <https://asset.wsj.net/wsjnewsgraphics/election/2020/delegates.json>\n@hrbrmstr #rstats"
  ) +
  theme_ipsum_rc(grid="Y")

bind_cols(del$data$US$delCount) %>% 
  gather(candidate, delegates) %>% 
  filter(delegates > 0) %>%
  arrange(desc(delegates)) %>% 
  mutate(candidate = fct_inorder(candidate)) %>%
  ggplot(aes(fill=candidate, values=delegates)) +
  geom_waffle(color = "white", size = 0.5) +
  scale_fill_manual(name = NULL, values = dcols) +
  coord_fixed() +
  labs(
    x = NULL, y = "# Delegates",
    title = "2020 Democrat POTUS Race Delegate Counts",
    subtitle = sprintf("Date: %s", Sys.Date()),
    caption = "Data source: WSJ <https://asset.wsj.net/wsjnewsgraphics/election/2020/delegates.json>\n@hrbrmstr #rstats"
  ) +
  theme_ipsum_rc(grid="") +
  theme_enhance_waffle()

state_del <- del
state_del$data[["US"]] <- NULL

map_df(state_del$data, ~bind_cols(.x$delCount), .id = "state") %>% 
  gather(candidate, delegates, -state) %>% 
  filter(delegates > 0) %>% 
  ggplot(aes(candidate, delegates)) +
  geom_col(aes(fill = candidate), col = NA, width = 0.55) +
  scale_fill_manual(name = NULL, values = dcols) +
  facet_geo(~state) +
  labs(
    x = NULL, y = "# Delegates",
    title = "2020 Democrat POTUS Race Delegate Counts by State",
    subtitle = sprintf("Date: %s", Sys.Date()),
    caption = "Data source: WSJ <https://asset.wsj.net/wsjnewsgraphics/election/2020/delegates.json>\n@hrbrmstr #rstats"
  ) +
  theme_ipsum_rc(grid="Y") +
  theme(axis.text.x = element_blank()) +
  theme(panel.spacing.x = unit(0.5, "lines")) +
  theme(panel.spacing.y = unit(0.1, "lines")) +
  theme(legend.position = c(0.95, 0.1)) +
  theme(legend.justification = c(1, 0))

FIN

More work needs to be done on the map and {catchpole} itself but there’s a sufficient base for others to experiment with (PRs and your own blog posts welcome!).

W/r/t the “more on that later” bits: the extra polygons were very likely borders, and I think borders would help the cartogram, but we can make them with {sf}, too. We can also add in a layer for state names and/or just figure out the centroid for each point grouping (with {sf}) and get places for labels that way. Not sure I’ll have time for any of that (this whole process went quickly, believe it or not).
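For the curious, a minimal sketch of the centroid-label idea (untested against the real map, so consider it a starting point):

library(sf)
library(catchpole)
library(tidyverse)

delegates_map() %>%
  group_by(state) %>%
  summarise() %>%          # {sf} unions the delegate points within each state
  st_centroid() -> state_ctrs

# then add labels with something like:
#   geom_sf_text(data = state_ctrs, aes(label = state), size = 2)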

Also: ggiraph::geom_sf_interactive() can be used as a poor-dude’s popup to turn this (quickly) into an interactive piece.
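Here’s a rough sketch of what that might look like, reusing the gsf object from the border example above (the tooltip contents are just an example):

library(ggplot2)
library(ggiraph)

gg_int <- ggplot() +
  geom_sf_interactive(
    data = gsf,
    aes(fill = candidate, tooltip = sprintf("%s: %s", state, candidate)),
    col = "white", shape = 22, size = 3, stroke = 0.125
  ) +
  coord_sf(datum = NA)

girafe(ggobj = gg_int)  # renders the interactive version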

If you hit up https://git.rud.is/hrbrmstr/catchpole you’ll find the package and URLs to other social coding sites (though GitUgh has been plagued with downtime and degraded performance the past few weeks, so you should really think about moving your workloads to a real service).

Have fun mapping Über Tuesday and share your creations, PR’s, ideas, etc for the package wherever you’re most comfortable.

It seems that MX, DKIM, SPF, and DMARC records were just not enough acronyms (and setup tasks) for modern email setups, resulting in the creation of yet another acronym — BIMI, or Brand Indicators for Message Identification. The goal of BIMI is to “provide a mechanism for mail senders to publish a validated logotype that mail receivers can display with the senders’ messages.” You can read about the rationale for BIMI and the preliminary RFC for crafting BIMI DNS TXT records over a few caffeinated beverages. I’ll try to TL;DR the high points below.

The idea behind BIMI is to provide a visual indicator of the brand associated with a mail message; i.e. you’ll have an image to look at somewhere in the mail list display and/or mail message display of your mail client if it supports BIMI. This visual indicator is merely an image URL association with a brand mail domain through the use of a new special-prefix DNS TXT record. Mail intermediaries and mail clients are only supposed to allow presentation of BIMI-record provided images after verifying that the email domain itself conforms to the DMARC standard (which you should be using if you’re an organization/brand and shame on you if you’re not by now). In fact, the goal of BIMI is to help ensure:

  • the organization is legitimate
  • the domain names are controlled by the organization
  • the organization has current rights to display the indicator

and that, when BIMI validation is being performed, the party requesting validation is currently authorized to do so by the organization and is who they say they are.

If you’re having flashbacks to the lost era of when SSL certificates were supposed to have similar integrity assertions, you’re not alone (thanks, LE).

What’s Really Going On?

I’m not part of any working group associated with BIMI, I just measure and study the internet for a living. As someone who is as likely to use alpine to peruse mail as I am a thick email client or (heaven forbid) web client, BIMI will be of little value to me since I’m not really going to see said images anyway.

Reading through all the BIMI (and associated) RFCs, email security & email marketing vendor blogs/papers, and general RFC commentators, BIMI isn’t solving any problem that well-armored DMARC configurations aren’t already solving. It appears to be driven mainly by brand marketing wonks who just want to shove brand logos in front of you and have one more way to track you.

Yep, tracking email perusals (even if it’s just a list view) will be one of the benefits (to brands and marketing firms) and is most assuredly a non-stated primary goal of this standard. To help illustrate this, let’s look at the BIMI record for one of the most notorious tracking brands on the planet, Verizon (in this case, Verizon Wireless). When you receive a BIMI-“enhanced” email from verizonwireless.com the infrastructure handling the email receipt will look for and process the BIMI header that was sent along for the ride and eventually query a TXT record for default._bimi.verizonwireless.com (or whatever the sender has specified instead of default — more on that in a bit). In this case the response will be:

v=BIMI1; l=https://ecrm.e.verizonwireless.com/AC/Global/Bling/Images/checkmark/verizon.svg;

which means the image they want displayed is at that URL. Your client will have to fetch that during an interactive session, so your IP address — at a minimum — will be leaked when that fetch happens.
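If you want to spot-check a single brand’s record yourself without a full {zdnsr} run, a plain dig call from R (or the shell) will do, assuming dig is installed:

# one-off lookup of the BIMI TXT record shown above
system2(
  "dig",
  c("+short", "TXT", "default._bimi.verizonwireless.com"),
  stdout = TRUE
)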

Brands can specify something other than the default selector with the email, so they could easily customize that to be a unique identifier which will “be you” and know when you’ve at least looked at said email in a list view (provided that’s how your email client will show it), if not in the email proper. Since this is a “high integrity” visual component of the message, it’s likely not going to be subject to the “do not load external images/content” rules you have set up (you do view emails with images turned off initially, right?).

So, this is likely just one more way the IETF RFC system is being abused by large corporations to continue to erode our privacy (and get their horribly designed logos in our faces).

Let’s see who are the early adopters of BIMI.

BIMI Through the Alexa Looking Glass

Amazon had stopped updating the Alexa Top 1m sites list for a while, but it’s been back for quite some time, so we can use it to see how many sites in the top 1m have BIMI records.

We’ll use the {zdnsr} package (also on GitLab, SourceHut, BitBucket, and GitUgh) to perform a million default._bimi prefix queries and see how many valid BIMI TXT record responses we get.

library(zdnsr) # hrbrmstr/zdnsr on social coding sites
library(stringi)
library(urltools)
library(tidyverse)

refresh_publc_nameservers_list() # get a current list of active nameservers we can use

# read in the top1m
top1m <- read_csv("~/data/top-1m.csv", col_names = c("rank", "domain")) # http://s3.amazonaws.com/alexa-static/top-1m.csv.zip

# fire off a million queries, storing good results where we can pick them up later
zdns_query(
  entities = sprintf("%s.%s", "default._bimi", top1m$domain),
  query_type = "TXT",
  num_nameservers = 500,
  output_file = "~/data/top1m-bimi.json"
)

# ~10-30m later depending on your system/network/randomly chosen resolvers

bmi <- jsonlite::stream_in(file("~/data/top1m-bimi.json")) # using jsonlite vs ndjson since i don't want a "flat" structure

idx <- which(lengths(bmi$data$answers) > 0) # find all the ones with non-0 results

# start making a tidy data structure
tibble(
  answer = bmi$data$answers[idx]
) %>%
  unnest(answer) %>%
  filter(grepl("^v=BIM", answer)) %>% # only want BIMI records, more on this in a bit
  mutate(
    l = stri_match_first_regex(answer, "l=([^;]+)")[,2], # get the image link
    l_dom = domain(l) # get the image domain
  ) %>% 
  bind_cols(
    suffix_extract(.$name) # so we can get the apex domain below
  ) %>% 
  mutate(
    name_apex = glue::glue("{domain}.{suffix}"),
    name_stripped = stri_replace_first_regex(
      name, "^default\\._bimi\\.", ""
    )
  ) %>% 
  select(name, name_stripped, name_apex, l, l_dom, answer) -> bimi_df

Here’s what we get:

bimi_df
## # A tibble: 321 x 6
##    name       name_stripped  name_apex  l                            l_dom               answer                       
##    <chr>      <chr>          <glue>     <chr>                        <chr>               <chr>                        
##  1 default._… ebay.com       ebay.com   https://ir.ebaystatic.com/p… ir.ebaystatic.com   v=BIMI1; l=https://ir.ebayst…
##  2 default._… linkedin.com   linkedin.… https://media.licdn.com/med… media.licdn.com     v=BIMI1; l=https://media.lic…
##  3 default._… wish.com       wish.com   https://wish.com/static/img… wish.com            v=BIMI1; l=https://wish.com/…
##  4 default._… dropbox.com    dropbox.c… https://cfl.dropboxstatic.c… cfl.dropboxstatic.… v=BIMI1; l=https://cfl.dropb…
##  5 default._… spotify.com    spotify.c… https://message-editor.scdn… message-editor.scd… v=BIMI1; l=https://message-e…
##  6 default._… ebay.co.uk     ebay.co.uk https://ir.ebaystatic.com/p… ir.ebaystatic.com   v=BIMI1; l=https://ir.ebayst…
##  7 default._… asos.com       asos.com   https://content.asos-media.… content.asos-media… v=BIMI1; l=https://content.a…
##  8 default._… wix.com        wix.com    https://valimail-app-prod-u… valimail-app-prod-… v=BIMI1; l=https://valimail-…
##  9 default._… cnn.com        cnn.com    https://amplify.valimail.co… amplify.valimail.c… v=BIMI1; l=https://amplify.v…
## 10 default._… salesforce.com salesforc… https://c1.sfdcstatic.com/c… c1.sfdcstatic.com   v=BIMI1; l=https://c1.sfdcst…
## # … with 311 more rows

I should re-run this mass query since it usually takes 3-4 runs to get a fully comprehensive set of results (I should also really use work’s infrastructure to do the lookups against the authoritative nameservers for each organization like we do for our FDNS studies, but this was a spur-of-the-moment project idea to see if we should add BIMI to our studies and my servers are “free” whereas AWS nodes most certainly are not).

To account for the aforementioned “comprehensiveness” issues, we’ll round up the total from 310 to 400 (the average difference between 1 and 4 bulk queries is more like 5% than 20% but I’m in a generous mood), so 0.04% of the domains in the Alexa Top 1m have BIMI records. Not all of those domains are going to have MX records but it’s safe to say less than 1% of the brands on the Alexa Top 1m have been early BIMI adopters. This is not surprising since it’s not really a fully baked standard and no real clients support it yet (AOL doesn’t count, apologies to the Oathers). Google claims to be “on board” with BIMI, so once they adopt it, we should see that percentage go up.

Tracking isn’t limited to a tricked out dynamic DNS configuration that customizes selectors for each recipient. Since many brands use third party services for all things email, those clearinghouses are set to get some great data on you if these preliminary results are any indicator:

count(bimi_df, l_dom, sort=TRUE)
## # A tibble: 255 x 2
##    l_dom                                                                          n
##    <chr>                                                                      <int>
##  1 irepo.primecp.com                                                             13
##  2 www.letakomat.sk                                                               9
##  3 valimail-app-prod-us-west-2-auth-manager-assets.s3.us-west-2.amazonaws.com     8
##  4 static.mailkit.eu                                                              7
##  5 astatic.ccmbg.com                                                              5
##  6 def0a2r1nm3zw.cloudfront.net                                                   4
##  7 static.be2.com                                                                 4
##  8 www.christin-medium.com                                                        4
##  9 amplify.valimail.com                                                           3
## 10 bimi-host.250ok.com                                                            3
## # … with 245 more rows

The above code counted how many BIMI URLs are hosted at a particular domain and the top 5 are all involved in turning you into the product for other brands.

Speaking of brands, these are the logos of the early adopters which I made by generating some HTML from an R script and screen capturing the browser result:
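That HTML generation step was nothing fancy; a hedged approximation (the output filename is arbitrary) looks like this:

library(glue)

# one <img> tag per discovered BIMI logo URL
img_tags <- glue('<img src="{bimi_df$l}" height="64" alt="{bimi_df$name_stripped}"/>')

writeLines(c("<html><body>", img_tags, "</body></html>"), "bimi-logos.html")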

FIN

The data from the successful BIMI results of the mass DNS query is at https://rud.is/dl/2020-02-21-bimi-responses.json.gz. Knowing there are results to be had, I’ll be setting up a regular (proper) mass-query of the Top 1m and see how things evolve over time and possibly get it on the work docket. We may just do a mass BIMI prefix query against all FDNS apex domains just to see a broader scale result, so stay tuned.

Drop a note if you discover any more insights from the data (there are a few in there I’m saving for a future post) or from your own BIMI inquiries; also drop a note if you have a good defense for BIMI other than marketing and tracking.

As the maintainer of RSwitch — and developer of my own (for personal use) macOS, iOS, watchOS, iPadOS and tvOS apps — I need the full Apple Xcode install around (more R-focused macOS folk can get away with just the command-line tools being installed). As an Apple Developer who insanely runs the macOS & Xcode betas as they are released, I also have the misery of dealing with Xcode usurping authority over .R files every time it receives an update. Sure, I can right-click on an R script, choose “Open With => Other…”, pick RStudio and make it the new default, but clicks interrupt the train of thought and take more time than executing a quick shell command at a terminal prompt (which I always have up).

Enter duti (https://github.com/moretension/duti) — a small command-line tool that lets you change the default application just by knowing the bundle id of the application you want to make the default. For instance, RStudio’s id is org.rstudio.RStudio, which can be obtained via:

$ osascript -e 'id of app "RStudio"'
org.rstudio.RStudio

and, we can use that value in a quick call to duti:

$ duti -s org.rstudio.RStudio .R all

If you’d rather have Visual Studio Code or Sublime Text be the default for .R files, their bundle ids are com.microsoft.VSCode and com.sublimetext.3, respectively. If you’d rather use Atom, well, you really need to think about your life choices.

We can see what the current default for R scripts is via:

$ duti -x R
RStudio.app
/Applications/RStudio.app
org.rstudio.RStudio

You can turn the “setter” into a shell alias (preferably zsh or sh alias since bash is going away soon) or shell script for quick use.

Installing duti

Homebrew users can just brew install duti and get on with their day. Folks can also grab the latest release and get on with their day with just a little more effort.

The duti utility can also be compiled on your own (which is preferred, so you can look at the source and make sure you aren’t being compromised by a random developer on the internet); but, if you have macOS 10.15 (Catalina), you’ll need to jump through a few hoops since it doesn’t compile out-of-the-box on that platform yet. Thankfully those hoops aren’t too bad thanks to a helpful pull request that adds support for the current version of macOS. (You’ll need at least the command-line developer tools installed for this to work and will likely need to brew install autoconf automake libtool to ensure all the toolchain bits that are needed are in place.):

At a terminal prompt, go to where you normally go to clone git repositories and grab the source:

$ git clone git@github.com:moretension/duti.git
$ cd duti
$ git fetch origin pull/39/head:pull_39 # add and fetch the origin for the PR
$ git checkout pull_39                  # switch to the branch
$                                       # review the source code
$                                       # no, really, review the source code!
$ autoconf                              # run autoconf to generate the configure script
$ ./configure                           # generate the Makefile (there will be "checking" and "creating" messages)
$ make                                  # build it! (there will be macOS API deprecation warnings but no errors)
$ make install                          # install it! (you may need to prefix with "sudo -H"; this will put the binary in `/usr/local/bin/` and the man page in `/usr/local/share/man/man`)

NOTE: If you only have the macOS Xcode command line tools (vs the entirety of Xcode) you’ll need to edit aclocal.m4 before you run autoconf and change line 9 to be:

sdk_path="/Library/Developer/CommandLineTools/SDKs"

since the existing setting assumes you have the full Xcode installation available.

FIN

I’ll be adding this functionality to the next version of RSwitch, letting you specify the application(s) you want to own various R-ish files. It will check for the proper values being in place on a regular basis and set them to your defined preferences (I also need to see if there’s an event I can have RSwitch watch for to trigger the procedure).

If you have another, preferred way to keep ownership of R files, drop a blog post link in the comments (or just drop a note in the comments with said procedure).

macOS R users who tend to work on the bleeding edge likely noticed some downtime at <mac.r-project.org> this past weekend. Part of the issue was an SSL/TLS certificate expiration situation. Moving forward, we can monitor this with R using the super spiffy {openssl} and {pushoverr} packages whilst also generating a daily report with {rmarkdown} and {DT}.

The Basic Process

The {openssl} package has a handy function — download_ssl_cert() — which will, by default, hit a given host on the standard HTTPS port (443/TCP) and grab the site certificate and issuer. We’ll grab the “validity end” field and convert that to a date to use for comparison.
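The core of the check for a single host looks like this (mirroring what the full script below does across all the domains):

library(openssl)

cert <- download_ssl_cert("mac.r-project.org", port = 443)[[1]]  # site cert comes first

expires <- as.Date(
  cert[["validity"]][[2]],                  # the "validity end" field
  format = "%b %d %H:%M:%S %Y", tz = "GMT"
)

as.numeric(expires - Sys.Date(), "days")    # days until expiration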

To get the target list of sites to check I used Rapid7’s FDNS data set and a glance at a few certificate transparency logs to put together a current list of “r-project” domains that have been known to have SSL certs. This process could be made more dynamic, but things don’t change that quickly in r-project domain land.

Finally, we use the {DT} package to build a pretty HTML table and the {pushoverr} package to send notifications at normal priority for certs expiring within a week and critical priority for certs that have expired (the package has excellent documentation which will guide you through setting up a Pushover account).

I put this all in a plain R script named r-project-ssl-notify.R that’s then called from a Linux CRON job which runs:

/usr/bin/Rscript -e 'rmarkdown::render(input="PATH_TO/r-project-ssl-notify.R", output_file="PATH_TO/r-project-cert-status/index.html", quiet=TRUE)'

once a day at 0930 ET to make this status page and also fire off any notifications which I have going to my watch and phone (I did a test send by expanding the delta to 14 days):

(watch and phone notification screenshots)

Here’s the contents of r-project-ssl-notify.R:

#' ---
#' title: "r-project SSL/TLS Certificate Status"
#' date: "`r format(Sys.time(), '%Y-%m-%d')`"
#' output:
#'   html_document:
#'     keep_md: false
#'     theme: simplex
#'     highlight: monochrome
#' ---
#+ init, include=FALSE
knitr::opts_chunk$set(
  message = FALSE, 
  warning = FALSE, 
  echo = FALSE, 
  collapse=TRUE
)

#+ libs
library(DT)
library(openssl)
library(pushoverr)
library(tidyverse)

# Setup -----------------------------------------------------------------------------------------------------------

# This env config file contains two lines:
#
# PUSHOVER_USER=YOUR_PUSHOVER_USER_STRING
# PUSHOVER_APP=YOUR_PUSHOVER_APP_KEY
#
# See the {pushoverr} package for how to setup your Pushover account
readRenviron("~/jobs/conf/r-project-ssl-notify.env")


# Check certs -----------------------------------------------------------------------------------------------------

# r-project.org domains retrieved from Rapid7's FDNS data set
# (https://opendata.rapid7.com/sonar.fdns_v2/) and cert transparency logs

#+ work
c(
  "beta.r-project.org", "bugs.r-project.org", "cloud.r-project.org", 
  "cran-archive.r-project.org", "cran.at.r-project.org", "cran.ch.r-project.org", 
  "cran.es.r-project.org", "cran.r-project.org", "cran.uk.r-project.org", 
  "cran.us.r-project.org", "developer.r-project.org", "ess.r-project.org", 
  "ftp.cran.r-project.org", "journal.r-project.org", "lists.r-forge.r-project.org", 
  "mac.r-project.org", "r-project.org", "svn.r-project.org", "translation.r-project.org", 
  "user2011.r-project.org", "user2014.r-project.org", "user2016.r-project.org", 
  "user2018.r-project.org", "user2019.r-project.org", "user2020.r-project.org", 
  "user2020muc.r-project.org", "win-builder.r-project.org", "www.cran.r-project.org", 
  "www.r-project.org", "www.user2019.fr"
) -> r_doms

# grab each cert

r_certs <- map(r_doms, openssl::download_ssl_cert)

# make a nice table
tibble(
  dom = r_doms,
  expires = map_chr(r_certs, ~.x[[1]][["validity"]][[2]]) %>% # this gets us the "validity end"
    as.Date(format = "%b %d %H:%M:%S %Y", tz = "GMT"),        # and converts it to a date object
  delta = as.numeric(expires - Sys.Date(), "days")            # this computes the delta from the day this script was called
) %>% 
  arrange(expires) -> r_certs_expir

# Status page generation ------------------------------------------------------------------------------------------

# output nice table  
DT::datatable(r_certs_expir, list(pageLength = nrow(r_certs_expir))) # if the # of r-proj doms gets too large we'll cap this for pagination

# Notifications ---------------------------------------------------------------------------------------------------

# See if we need to notify abt things expiring within 1 week
# REMOVE THIS or edit the delta max if you want less noise
one_week <- filter(r_certs_expir, between(delta, 1, 7))
if (nrow(one_week) > 0) {
  pushover_normal(
    title = "There are r-project SSL Certs Expiring Within 1 Week", 
    message = "Check which ones: https://rud.is/r-project-cert-status"
  )
}

# See if we have expired certs
expired <- filter(r_certs_expir, delta <= 0)
if (nrow(expired) > 0) {
  pushover_critical(
    title = "There are expired r-project SSL Certs!", 
    message = "Check which ones: https://rud.is/r-project-cert-status"
  )
}

FIN

With just a tiny bit of R code we have the ability to monitor expiring SSL certs via a diminutive status page and alerts to any/all devices at our disposal.

Each year the World Economic Forum releases their Global Risk Report around the time of the annual Davos conference. This year’s report is out and below are notes on the “cyber” content to help others speed-read through those sections (in the event you don’t read the whole thing). Their expert panel is far from infallible, but IMO it’s worth taking the time to read through their summarized viewpoints. Some of your senior leadership are represented at Davos and either contributed to the report or will be briefed on the report, so it’s also a good idea just to keep an eye on what they’ll be told.

Direct link to report PDF: http://www3.weforum.org/docs/WEF_Global_Risk_Report_2020.pdf.

“Cyber” Cliffs Notes

  • Cyberattacks moved out of the Top 5 Global Risks in terms of Likelihood (page 2)

  • Cyberattacks remain in the upper-right risk quadrant (page 3)

  • Cyberattacks likelihood estimation reduced slightly but impact moved up a full half point to ~4.0 (out of 5.0) (page 4)

  • Cyberattacks are placed as directly related to named risks of: (page 5)

    • information infrastructure breakdown, (76.2% of the 200+ member expert panel on short-term outlook)
    • data fraud/theft, (75.0% of the 200+ member expert panel on short-term outlook) and
    • adverse tech advances (<70% of the 200+ member expert panel on short-term outlook)

    All three of which have their own relationships (it’s worth tracing them out as an exercise in downstream impact potential if one hasn’t worked through a risk relationship exercise before)

  • Cyberattacks remain on the long-term outlook (next 10 years) for both likelihood and impact by all panel sectors

  • Pages 61-71 cover the “Fourth Industrial Revolution” (4IR) and cyberattacks are mentioned on every page.

    • There are 2025 market projections that might be useful as deck fodder.
    • Interesting statistic that 50% of the world’s population is online and that one million additional people are joining the internet daily.
    • The notion of nation-state mandated “parallel cyberspaces” is posited (we’re seeing that develop in Russia and some other countries right now).
    • They also mention the proliferation of patents to create and enforce a first-mover advantage
    • Last few pages of the section have a wealth of external resources that are worth perusing
  • In the health section on page 78 they mention the susceptibility of health data to cyberattacks

  • They list out specific scenarios in the back; many have a cyber component

    • Page 92: “Geopolitical risk”: Interstate conflict with regional consequences — A bilateral or multilateral dispute between states that escalates into economic (e.g. trade/currency wars, resource nationalization), military, cyber, societal or other conflict.

    • Page 92: “Technological risk”: Breakdown of critical information infrastructure and networks — Cyber dependency that increases vulnerability to outage of critical information infrastructure (e.g. internet, satellites) and networks, causing widespread disruption.

    • Page 92: “Technological risk”: Large-scale cyberattacks — Large-scale cyberattacks or malware causing large economic damage, geopolitical tensions or widespread loss of trust in the internet.

    • Page 92: “Technological risk”: Massive incident of data fraud or theft — Wrongful exploitation of private or official data that takes place on an unprecedented scale.

FIN

Hopefully this saved folks some time, and I’m curious as to how others view the Ouija board scrawls of this expert panel when it comes to cybersecurity predictions, scenarios, and ratings.