
The new year begins with me being on the hook to crank out a book on advanced web-scraping in R by July (more on that in a future blog post). The bookdown package seemed to be the best way to go about doing this, but I had only played with the toy/default examples of it and wanted to test out the platform with a “Hello, World”-like example of a “real” book to iron out issues and avoid more refactoring later on than I know I will have to do. I’ve been on an rtweet kick as of late (I have no idea why) and had an e-copy of O’Reilly’s 21 Recipes for Mining Twitter in their synced Dropbox folder (it was a free giveaway a few years ago), so I decided to make an rtweet version of it in a bookdown project.

You can find the GitHub repo for it here and the rendered version here. NOTE: I will likely not finish the remaining two chapters (I need to spend the time on the real book :-) but will gladly add you as a co-author if you shoot over a PR.

I began with Sean Kross’ quick start and decided to work primarily in Sublime Text and use a Makefile to manage the build process. Since the goal was to iron out kinks for a real production book, here’s a bullet list of some tips as a result of figuring out what worked for me:

  • Get Yihui Xie’s book. I have a physical copy, but having either the print or e-version will help you when things get frustrating (and they do get frustrating at times)
  • Use git. However you instantiate the project, put it under git source control so you don’t lose your hard work. Be aware, though, that some files and directories are excluded by the default .gitignore! You may want to modify the line with *.rds in .gitignore to be a bit less brutal if you happen to generate rds files outside of the project but use them in chapter examples. Also, make sure to put other, sensitive items (like .httr-oauth) in that .gitignore to avoid having to reset credentials.
  • Use a Makefile. I like RStudio, but have far more editing tools in Sublime Text for book-ish work. Plus it has an easy build system manager, and I find it easier to navigate files.
  • Make liberal use of code chunks. Chapter 16 has a structure that I used in many of the chapters. One block for library calls (no caching); load fonts (hidden, and primarily for PDF rendering); named, cached logical sections that go with the flow of the chapter text; custom figure dimensions to ensure they come out as desired. Caching will speed up rendering time immensely.
  • Use saved data and a mixture of echo=FALSE, eval=TRUE / echo=TRUE, eval=FALSE chunk options for things you generated outside of the book source code (because they may be long-running operations you don’t want to wait for even once during rendering) but want to show in the book (perhaps with slightly modified source); see the sketch after this list.
  • Despite using git, create a daily compressed archive of the directory tree and stick it on Dropbox (that can be part of the Makefile). Your work is valuable and you need to make sure it’s backed up.
  • Learn about references. Yihui Xie’s book shows how to deal with in- and cross-chapter references; read and use them!
  • Use bookdown::word_document2 vs PDF and make a custom Word template for it. The default PDF output is fine for basic things, but you’ll get a better-looking final document by going through Word.
  • When things stop rendering properly, save your recently edited files and go back in time with git to a working state. This happened to me a few times as I worked across different machines. git makes glitches almost stress free.
  • Use rsync for publishing. I need to add this to the Makefile but one, short command-line call can publish your work in seconds to a web server.
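
Here’s a minimal sketch of that saved-data chunk pattern from the bullets above (the chunk names, file path and rtweet call are illustrative, not lifted from the book):

```{r libs, include=FALSE, cache=FALSE}
library(rtweet)
library(tidyverse)
```

```{r get-tweets, echo=TRUE, eval=FALSE}
# shown in the chapter but never executed while rendering (it's long-running)
user_tl <- get_timeline("hrbrmstr", n = 3200)
saveRDS(user_tl, "data/user_tl.rds")
```

```{r show-tweets, echo=FALSE, eval=TRUE, cache=TRUE}
# actually executed (and cached) while rendering, using the pre-saved copy
user_tl <- readRDS("data/user_tl.rds")
```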

I’ll likely have more tips as the year goes on and will have a follow-up post for using web server access logs to generate “kindle-like” reading statistics for your tomes.

(You can find all R⁶ posts here)

UPDATE 2018-01-01 — this has been added to rtweet (GH version).

A Twitter discussion that spawned from Maëlle’s recent look-back post turned into a quick function for capturing an image of a Tweet/thread using webshot, rtweet, magick and glue.

Pass in a status id or a twitter URL and the function will grab an image of the mobile version of the tweet.

The ultimate goal is to make a function that builds a tweet using only R and magick. This will have to do until the new year.

tweet_shot <- function(statusid_or_url, zoom=3) {

  require(glue, quietly=TRUE)
  require(rtweet, quietly=TRUE)
  require(magick, quietly=TRUE)
  require(webshot, quietly=TRUE)

  x <- statusid_or_url[1]

  is_url <- grepl("^http[s]?://", x) # make the "s" optional so plain http URLs match too

  if (is_url) {

    is_twitter <- grepl("twitter", x)
    stopifnot(is_twitter)

    is_status <- grepl("status", x)
    stopifnot(is_status)

    already_mobile <- grepl("://mobile\\.", x)
    if (!already_mobile) x <- sub("://twi", "://mobile.twi", x)

  } else {

    x <- rtweet::lookup_tweets(x)
    stopifnot(nrow(x) > 0)
    x <- glue_data(x, "https://mobile.twitter.com/{screen_name}/status/{status_id}")

  }

  tf <- tempfile(fileext = ".png")
  on.exit(unlink(tf), add=TRUE)

  webshot(url=x, file=tf, zoom=zoom)

  img <- image_read(tf)
  img <- image_trim(img)

  if (zoom > 1) img <- image_scale(img, scales::percent(1/zoom))

  img

}

Now just do one of these:

tweet_shot("947082036019388416")
tweet_shot("https://twitter.com/jhollist/status/947082036019388416")

to get a trimmed PNG of the rendered tweet.

2017 is nearly at an end. We humans seem to need these cycles to help us on our path forward and have, throughout history, used these annual demarcation points as a time of reflection of what was, what is and what shall come next.

To that end, I decided it was about time to help quantify a part of the soon-to-be previous annum in R through the fabrication of a reusable template. Said template contains various incantations that will enable the wielder to enumerate their social contributions on:

  • StackOverflow
  • GitHub
  • Twitter
  • WordPress

through the use of a parameterized R markdown document.
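
If you haven’t poked at parameterized R Markdown before, the gist is that the Rmd declares parameters in its YAML header and the chunks reference them via params$…. Here’s a hedged sketch (the parameter names are illustrative, not necessarily the template’s):

---
title: "2017 Year in Review"
params:
  github_user: "hrbrmstr"
  twitter_user: "hrbrmstr"
  stackoverflow_user: "hrbrmstr"
---

Inside chunks you reference params$github_user (etc.), and you can render a personalized copy with:

rmarkdown::render("year-in-review.Rmd", params = list(github_user = "yourhandle"))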

The result of one such execution can be found here (for those who want a glimpse of what I was publicly up to in 2017).

Want to see where you contributed the most on SO? There’s a vis for that:

What about your GitHub activity? There’s a vis for that, too:

Perhaps you just want to see your top blog posts for the year. There’s also a vis for that:

Or — maybe — you just want to see how much you blathered on Twitter. There’s even a vis for that:

Take the Rmd for a spin. File issues & PRs for things that need work and take some time to look back on 2017 with a more quantified eye than you may have in years past.

Here’s to 2018 being full of magic, awe, wonder and delight for us all!

It’s been a long time coming, but swatches is now on CRAN.

What is “swatches”?

First off, swatches has nothing to do with those faux-luxury brand Swiss-made timepieces. swatches is all about color.

R/CRAN has plenty of color picking packages. The colourlovers package by @thosjleeper is one of my favs. But, color palettes have been around for ages. Adobe has two: Adobe Color (ACO) and Adobe Swatch Exchange (ASE); GIMP has “GPL”; OpenOffice has “SOC” and KDE has the unimaginative “colors”. So. Many. Formats. Wouldn’t it be great if there were a package that read them all in with a simple read_palette() function? Well, now there is.

I threw together a fledgling version of swatches a few years ago to read in ACO files from a $DAYJOB at the time and it cascaded from there. I decided to resurrect it and get it on CRAN to support a forthcoming “year in review” post that will make its way to your RSS feeds on-or-about December 31st.

True Colors Shining Through

Let’s say you want to get ahead of the game in 2018 and start preparing to dazzle your audience by using a palette that incorporates PANTONE’s 2018 Color of the Year (yes, that’s “a thing”): Ultra Violet.

If you scroll down there, you’ll see a download link for an ASE version of the palettes. We can skip that and start with some R code:

library(swatches)
library(hrbrthemes)
library(tidyverse)

download.file("https://www.pantone.com/images/pages/21348/adobe-ase/Pantone-COY18-Palette-ASE-files.zip", "ultra_violet.zip")
unique(dirname((unzip("ultra_violet.zip"))))
## [1] "./Pantone COY18 Palette ASE files"
## [2] "./__MACOSX/Pantone COY18 Palette ASE files"


dir("./Pantone COY18 Palette ASE files")
#  [1] "PantoneCOY18-Attitude.ase"         "PantoneCOY18-Desert Sunset.ase"   
#  [3] "PantoneCOY18-Drama Queen.ase"      "PantoneCOY18-Floral Fantasies.ase"
#  [5] "PantoneCOY18-Intrigue.ase"         "PantoneCOY18-Kindred Spirits.ase" 
#  [7] "PantoneCOY18-Purple Haze.ase"      "PantoneCOY18-Quietude.ase"

Ah, if only the designers cleaned up their ZIP file.
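
(If that stray __MACOSX directory bugs you as much as it bugs me, one line of R clears it out:)

unlink("./__MACOSX", recursive = TRUE)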

We’ve got eight palettes to poke at, and hopefully one will be decent enough to use for our plots.

Let’s take a look:

par(mfrow=c(8,1))

dir("./Pantone COY18 Palette ASE files", full.names=TRUE) %>% 
  walk(~{
    pal_name <- gsub("(^[[:alnum:]]+-|\\.ase$)", "", basename(.x))
    show_palette(read_palette(.x))
    title(pal_name)
  })

par(mfrow=c(1,1))

I had initially thought I’d go for “Attitude”, but f.lux kicked in and “Intrigue” warmed better, so let’s go with that one.

(intrigue <- read_palette("./Pantone COY18 Palette ASE files/PantoneCOY18-Intrigue.ase"))
## PANTONE 19-4053 TCX PANTONE 17-4328 TCX PANTONE 18-3838 TCX PANTONE 18-0324 TCX PANTONE 19-3917 TCX 
##           "#195190"           "#3686A0"           "#5F4B8B"           "#757A4E"           "#4E4B51" 
## PANTONE 15-0927 TCX PANTONE 14-5002 TCX PANTONE 14-3949 TCX 
##           "#BD9865"           "#A2A2A1"           "#B7C0D7"

Having the PANTONE names is all-well-and-good, but those are going to be less-useful in a ggplot2 context due to the way factors are mapped to names in character color vectors in manual scales, so let’s head that off at the pass:

(intrigue <- read_palette("./Pantone COY18 Palette ASE files/PantoneCOY18-Intrigue.ase", use_names=FALSE))
## [1] "#195190" "#3686A0" "#5F4B8B" "#757A4E" "#4E4B51" "#BD9865" "#A2A2A1" "#B7C0D7"

Beautiful.

Let’s put our new color scale to work! We’ve got 8 colors to work with, but won’t need all of them (at least for a quick example):

ggplot(economics_long, aes(date, value)) +
  geom_area(aes(fill=variable)) +
  scale_y_comma() +
  scale_fill_manual(values=intrigue) +
  facet_wrap(~variable, scales = "free", nrow = 2, strip.position = "bottom") +
  theme_ipsum_rc(grid="XY", strip_text_face="bold") +
  theme(strip.placement = "outside") +
  theme(legend.position=c(0.85, 0.2))

This is far from a perfect palette, but it definitely helped illustrate basic package usage without inflicting ocular damage (remember: I could have picked an obnoxious Christmas palette :-)

More Practical Uses

If your workplace or the workplace you’re consulting for has brand guidelines, then they likely have swatches in one of the supported formats. Lots do.

You can keep those color swatches in their native format and try out different ones as your designers refresh their baseline styles.
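
For example (the file names here are hypothetical; use_names=FALSE keeps the vectors ggplot2-friendly, as above):

brand_fall <- read_palette("brand-guidelines/fall-2018.ase", use_names = FALSE)
brand_spring <- read_palette("brand-guidelines/spring-2018.gpl", use_names = FALSE)

ggplot(mtcars, aes(factor(cyl), fill = factor(cyl))) +
  geom_bar() +
  scale_fill_manual(values = brand_fall)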

FIN

As always, kick the tyres, file issues, questions or PRs and hopefully the package will help refresh some designs for all of us in the coming year.

GitHub (2017-12-21 post-time) started adding obnoxious boxes to their activity feed. I use that to discover new projects/developers. While I also have it in RSS and that’s nice and compact, I do browse the activity feed directly.
Those giant boxes had to go.

If you’ve got uBlock installed, these rules filter them out:

github.com##.follow > .body > .py-3.border-gray-light.border-bottom.flex-items-baseline.d-flex > .width-full.flex-column.d-flex > .my-2.p-3.rounded-1.border

github.com##.watch_started > .body > .py-3.border-gray-light.border-bottom.flex-items-baseline.d-flex > .width-full.flex-column.d-flex > .my-2.p-3.rounded-1.border

github.com##.create > .body > .py-3.border-gray-light.border-bottom.flex-items-baseline.d-flex > .width-full.flex-column.d-flex > .my-2.p-3.rounded-1.border

github.com##.public > .body > .py-3.border-gray-light.border-bottom.flex-items-baseline.d-flex > .width-full.flex-column.d-flex > .my-2.p-3.rounded-1.border

(For first-timers, R⁶ tagged posts are short & sweet with minimal expository; R⁶ feed)

At work-work I mostly deal with medium-to-large-ish data. I often want to poke at new or existing data sets w/o working across billions of rows. I also use Apache Drill for much of my exploratory work.

Here’s how to uniformly sample data from Apache Drill using the sergeant package:

library(sergeant)

db <- src_drill("sonar")
tbl <- tbl(db, "dfs.dns.`aaaa.parquet`")

summarise(tbl, n=n())
## # Source:   lazy query [?? x 1]
## # Database: DrillConnection
##          n
##      <int>
## 1 19977415

mutate(tbl, r=rand()) %>% 
  filter(r <= 0.01) %>% 
  summarise(n=n())
## # Source:   lazy query [?? x 1]
## # Database: DrillConnection
##        n
##    <int>
## 1 199808

mutate(tbl, r=rand()) %>% 
  filter(r <= 0.50) %>% 
  summarise(n=n())
## # Source:   lazy query [?? x 1]
## # Database: DrillConnection
##         n
##     <int>
## 1 9988797

And, for groups (using a different/larger “database”):

fdns <- tbl(db, "dfs.fdns.`201708`")

summarise(fdns, n=n())
## # Source:   lazy query [?? x 1]
## # Database: DrillConnection
##            n
##        <int>
## 1 1895133100

filter(fdns, type %in% c("cname", "txt")) %>% 
  count(type)
## # Source:   lazy query [?? x 2]
## # Database: DrillConnection
##    type        n
##   <chr>    <int>
## 1 cname 15389064
## 2   txt 67576750

filter(fdns, type %in% c("cname", "txt")) %>% 
  group_by(type) %>% 
  mutate(r=rand()) %>% 
  ungroup() %>% 
  filter(r <= 0.15) %>% 
  count(type)
## # Source:   lazy query [?? x 2]
## # Database: DrillConnection
##    type        n
##   <chr>    <int>
## 1 cname  2307604
## 2   txt 10132672

I will (hopefully) be better at cranking these bite-sized posts more frequently in 2018.

I know some folks had a bit of fun with the previous post since it exposed the fact that I left out unique MQTT client id generation from the initial 0.1.0 release of the in-development package (client ids need to be unique).

There have been some serious improvements since said post and I thought a (hopefully not-too-frequent) blog-journal of the development of this particular package might be interesting/useful to some folks, especially since I’m delving into some not-oft-blogged (anywhere) topics as I use some new tricks in this particular package.

Thank The Great Maker for C++

I’m comfortable and not-too-shabby at wrapping C/C++ things with an R bow and I felt quite daft seeing this after I had started banging on the mosquitto C interface. Yep, that’s right: it has a C++ interface. It’s waaaaay easier (in my experience) bridging C++ libraries since Dirk/Romain’s (et al, as I know there are many worker hands involved as well) Rcpp has so many tools available to do that very thing.

As an aside, if you do any work with Rcpp or want to start doing work in Rcpp, Masaki E. Tsuda’s Rcpp for Everyone is an excellent tome.

I hadn’t used Rcpp Modules before (that link goes to a succinct but very helpful post by James Curran) but they make exposing C++ library functionality even easier than I had experienced before. So easy, in fact, that it made it possible to whip out an alpha version of a “domain specific language” (or a pipe-able, customized API — however you want to frame these things in your head) for the package. But, I’m getting ahead of myself.

The mosquittopp class in the mosqpp namespace is much like the third bowl of porridge: just right. It’s neither too low-level nor too high-level and it was well thought out enough that it only required a bit of tweaking to use as an Rcpp Module.

First there are more than a few char * parameters that needed to be std::strings. So, something like:

int username_pw_set(const char *username, const char *password);

becomes:

int username_pw_set(std::string username, std::string password);

in our custom wrapper class.

Since the whole point of the mqtt package is to work in R vs C[++] or any other language, the callbacks — the functions that do the work when message, publish, subscribe, etc. events are triggered — need to be in R. I wanted to have some default callbacks during the testing phase and they’re really straightforward to set up in Rcpp:

Rcpp::Environment pkg_env = Rcpp::Environment::namespace_env("mqtt");

Rcpp::Function ccb = pkg_env[".mqtt_connect_cb"];
Rcpp::Function dcb = pkg_env[".mqtt_disconnect_cb"];
Rcpp::Function pcb = pkg_env[".mqtt_publish_cb"];
Rcpp::Function mcb = pkg_env[".mqtt_message_cb"];
Rcpp::Function scb = pkg_env[".mqtt_subscribe_cb"];
Rcpp::Function ucb = pkg_env[".mqtt_unsubscribe_cb"];
Rcpp::Function lcb = pkg_env[".mqtt_log_cb"];
Rcpp::Function ecb = pkg_env[".mqtt_error_cb"];

The handy thing about that approach is you don’t need to export the functions (it works like the ::: does).
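
(In plain R terms, that pkg_env lookup is doing roughly the same thing as:)

mcb <- mqtt:::.mqtt_message_cb # reach an unexported package function via its namespace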

But the kicker is the Rcpp Module syntactic sugar:

RCPP_MODULE(MQTT) {

  using namespace Rcpp;

  class_<mqtt_r>("mqtt_r")
    .constructor<std::string, std::string, int>("id/host/port constructor")
    .constructor<std::string, std::string, int, std::string, std::string>("id/host/port/user/pass constructor")
    .constructor<std::string, std::string, int, Rcpp::Function, Rcpp::Function, Rcpp::Function>("id/host/post/con/mess/discon constructor")
    .method("connect", &mqtt_r::connect)
    .method("disconnect", &mqtt_r::disconnect)
    .method("reconnect", &mqtt_r::reconnect)
    .method("username_pw_set", &mqtt_r::username_pw_set)
    .method("loop_start", &mqtt_r::loop_start)
    .method("loop_stop", &mqtt_r::loop_stop)
    .method("loop", &mqtt_r::loop)
    .method("publish_raw", &mqtt_r::publish_raw)
    .method("publish_chr", &mqtt_r::publish_chr)
    .method("subscribe", &mqtt_r::subscribe)
    .method("unsubscribe", &mqtt_r::unsubscribe)
    .method("set_connection_cb", &mqtt_r::set_connection_cb)
    .method("set_discconn_cb", &mqtt_r::set_discconn_cb)
    .method("set_publish_cb", &mqtt_r::set_publish_cb)
    .method("set_message_cb", &mqtt_r::set_message_cb)
    .method("set_subscribe_cb", &mqtt_r::set_subscribe_cb)
    .method("set_unsubscribe_cb", &mqtt_r::set_unsubscribe_cb)
    .method("set_log_cb", &mqtt_r::set_log_cb)
    .method("set_error_cb", &mqtt_r::set_error_cb)
   ;

}

That, combined with RcppModules: MQTT in the DESCRIPTION file and a MQTT <- Rcpp::Module("MQTT") just above where you’d put an .onLoad handler means you can do something like (internally, since it’s not exported):

mqtt_obj <- MQTT$mqtt_r

mqtt_conn_obj <- new(mqtt_obj, "unique_client_id", "test.mosquitto.org", 1883L)

and have access to each of those methods right from R (e.g. mqtt_conn_obj$subscribe(0, "topic", 0)).

If you’re careful with your C++ class code, you’ll be able to breeze through exposing functionality.

Because of the existence of Rcpp Modules, I was able to do what follows in the next section in near record time.

“The stump of a %>% he held tight in his teeth”

I felt compelled to get a Christmas reference in the post and it’s relevant to this section. I like %>%, recommend the use of %>% and use %>% in my own day-to-day R coding (it’s even creeping into internal package code, though I still try not to do that). I knew I wanted to expose a certain way of approaching MQTT workflows in this mqtt package and that meant coming up with an initial — but far from complete — mini-language or pipe-able API for it. Here’s the current thinking/implementation:

  • Setup connection parameters with mqtt_broker(). For now, it takes some parameters, but there is a URI scheme for MQTT so I want to be able to support that as well at some point.
  • Support authentication with mqtt_username_pw(). There will also be a function for dealing with certificates and other security-ish features which look similar to this.
  • Make it dead-easy to subscribe to topics and associate callbacks with mqtt_subscribe() (more on this below)
  • Support an “until you kill it” workflow with mqtt_run() that loops either forever or for a certain number of iterations
  • Support user-controlled iterations with mqtt_begin(), mqtt_loop() and mqtt_end(). An example (in a bit) should help explain this further, but this one is likely to be especially useful in a Shiny context.

Now, as hopefully came across in the previous post, the heart of MQTT message processing is the callback function. You write a function with a contractually defined set of parameters and operate on the values passed in. While we should all likely get into a better habit of using named function objects vs anonymous functions, anonymous functions are super handy, and short ones don’t cause code to get too gnarly. However, in this new DSL/API I’ve cooked up, each topic message callback has six parameters, so that means if you want to use an anonymous function (vs a named one) you have to do something like this in mqtt_subscribe():

mqtt_subscribe("sometopic",  function(id, topic, payload, qos, retain, con) {})

That’s not very succinct, elegant or handy. Since those are three attributes I absolutely love about most things related to R, I had to do something about it.

Since I’m highly attached to the ~{} syntax introduced with purrr and now expanding across the Tidyverse, I decided to make a custom version of it for mqtt_subscribe(). As a result, the code above can be written as:

mqtt_subscribe("sometopic",  ~{})

and, you can reference id, topic, payload, etc inside those brackets without the verbose function declaration.

How is this accomplished? Via:

as_message_callback <- function(x, env = rlang::caller_env()) {
  rlang::coerce_type(
    x, rlang::friendly_type("function"),
    closure = { x },
    formula = {
      if (length(x) > 2) rlang::abort("Can't convert a two-sided formula to an mqtt message callback function")
      f <- function() { x }
      formals(f) <- alist(id=, topic=, payload=, qos=, retain=, con=)
      body(f) <- rlang::f_rhs(x)
      f
    }
  )
}

It’s a shortened version of some Tidyverse code that’s more generic in nature. That as_message_callback() function looks to see if you’ve passed in a ~{} or a named/anonymous function. If ~{} was used, that function builds a function with the contractually obligated/expected signature, otherwise it shoves in what you gave it.

A code example is worth a thousand words (which is, in fact, the precise number of “words” up until this snippet, at least insofar as the WordPress editor counts them):

library(mqtt)

# We're going to subscribe to *three* BBC subtitle feeds at the same time!
#
# We'll distinguish between them by coloring the topic and text differently.

# this is a named function object that displays BBC 2's subtitle feed when it get messages
moar_bbc <- function(id, topic, payload, qos, retain, con) {
  if (topic == "bbc/subtitles/bbc_two_england/raw") {
    cat(crayon::cyan(topic), crayon::blue(readBin(payload, "character")), "\n", sep=" ")
  }
}

mqtt_broker("makmeunique", "test.mosquitto.org", 1883L) %>% # connection info
  
  mqtt_silence(c("all")) %>% # silence all the development screen messages
  
  # subscribe to BBC 1's topic using a fully specified anonymous function
  
  mqtt_subscribe(
    "bbc/subtitles/bbc_one_london/raw",
    function(id, topic, payload, qos, retain, con) { # regular anonymous function
      if (topic == "bbc/subtitles/bbc_one_london/raw")
        cat(crayon::yellow(topic), crayon::green(readBin(payload, "character")), "\n", sep=" ")
    }) %>%
  
  # as you can see we can pipe-chain as many subscriptions as we like. the package 
  # handles the details of calling each of them. This makes it possible to have
  # very focused handlers vs lots of "if/then/case_when" impossible-to-read functions.
  
  # Ahh. A tidy, elegant, succinct ~{} function instead
  
  mqtt_subscribe("bbc/subtitles/bbc_news24/raw", ~{ # tilde shortcut function (passing in named, pre-known params)
    if (topic == "bbc/subtitles/bbc_news24/raw")
      cat(crayon::yellow(topic), crayon::red(readBin(payload, "character")), "\n", sep=" ")
  }) %>%
  
  # And, a boring, but -- in the long run, better (IMO) -- named function object
  
  mqtt_subscribe("bbc/subtitles/bbc_two_england/raw", moar_bbc) %>% # named function
  
  mqtt_run() -> res # this runs until you Ctrl-C

There’s in-code commentary, so I’ll refrain from blathering about it more here except for noting there are a staggering number of depressing stories on BBC News and an equally staggering amount of un-hrbrmstr-like language use in BBC One and BBC Two shows. Apologies if any of the GH README.md snippets or animated screenshots ever cause offense, as it’s quite unintentional.

But you said something about begin/end/loop before?

Quite right! For that we’ll use a different example.

I came across a topic — “sfxrider/+/locations” — on broker.mqttdashboard.com. It looks like live data from folks who do transportation work for “Shadowfax Technologies” (which is a crowd-sourced transportation/logistics provider in India). It publishes the following in the payload:

| device:6170774037 | latitude:28.518363 | longitude:77.095753 | timestamp:1513539899000 |
| device:6170774037 | latitude:28.518075 | longitude:77.09555 | timestamp:1513539909000 |
| device:6170774037 | latitude:28.518015 | longitude:77.095488 | timestamp:1513539918000 |
| device:8690150597 | latitude:28.550963 | longitude:77.13432 | timestamp:1513539921000 |
| device:6170774037 | latitude:28.518018 | longitude:77.095492 | timestamp:1513539928000 |
| device:6170774037 | latitude:28.518022 | longitude:77.095495 | timestamp:1513539938000 |
| device:6170774037 | latitude:28.518025 | longitude:77.095505 | timestamp:1513539947000 |
| device:6170774037 | latitude:28.518048 | longitude:77.095527 | timestamp:1513539957000 |
| device:6170774037 | latitude:28.518075 | longitude:77.095573 | timestamp:1513539967000 |
| device:8690150597 | latitude:28.550963 | longitude:77.13432 | timestamp:1513539975000 |
| device:6170774037 | latitude:28.518205 | longitude:77.095603 | timestamp:1513539977000 |
| device:6170774037 | latitude:28.5182 | longitude:77.095587 | timestamp:1513539986000 |
| device:6170774037 | latitude:28.518202 | longitude:77.095578 | timestamp:1513539996000 |
| device:6170774037 | latitude:28.5182 | longitude:77.095578 | timestamp:1513540006000 |
| device:6170774037 | latitude:28.518203 | longitude:77.095577 | timestamp:1513540015000 |
| device:6170774037 | latitude:28.518208 | longitude:77.095577 | timestamp:1513540025000 |

Let’s turn that into proper, usable, JSON (we’ll just cat() it out for this post):

library(mqtt)
library(purrr)
library(stringi)

# turn the pipe-separated, colon-delimited lines into a proper list
.decode_payload <- function(.x) {
  .x <- readBin(.x, "character")
  .x <- stri_match_all_regex(.x, "([[:alpha:]]+):([[:digit:]\\.]+)")[[1]][,2:3]
  .x <- as.list(setNames(as.numeric(.x[,2]), .x[,1]))
  .x$timestamp <- as.POSIXct(.x$timestamp/1000, origin="1970-01-01 00:00:00")
  .x
}

# do it safely as the payload in MQTT can be anything
decode_payload <- purrr::safely(.decode_payload)

# change the client id
mqtt_broker("makemeuique", "broker.mqttdashboard.com", 1883L) %>%
  mqtt_silence(c("all")) %>%
  mqtt_subscribe("sfxrider/+/locations", ~{
    x <- decode_payload(payload)$result
    if (!is.null(x)) {
      cat(crayon::yellow(jsonlite::toJSON(x, auto_unbox=TRUE), "\n", sep=""))
    }
  }) %>%
  mqtt_run(times = 10000) -> out

What if you wanted to do that one-by-one so you could plot the data live in a Shiny map? Well, we won’t do that in this post, but the user-controlled loop version would look like this:

mqtt_broker("makemeuique", "broker.mqttdashboard.com", 1883L) %>%
  mqtt_silence(c("all")) %>%
  mqtt_subscribe("sfxrider/+/locations", ~{
    x <- decode_payload(payload)$result
    if (!is.null(x)) {
      cat(crayon::yellow(jsonlite::toJSON(x, auto_unbox=TRUE), "\n", sep=""))
    }
  }) %>%
  mqtt_begin() -> tracker # _begin!! not _run!!

# call this individually and have the callback update a
# larger scoped variable or Redis or a database. You
# can also just loop like this `for` setup.

for (i in 1:25) mqtt_loop(tracker, timeout = 1000)

mqtt_end(tracker) # this cleans up stuff!

FIN

Whew. 1,164 words later and I hope I’ve kept your interest through it all. I’ve updated the GH repo for the package and also updated the requirements for the package in the README. I’m also working on a configure script (mimicking @opencpu’s ‘anti-conf’ approach) and found Win32 library binaries that should make this easier to get up and running on Windows, so stay tuned for the next installment and don’t hesitate to jump on board with issues, questions, comments or PRs.

The goal for the next post is to cover reading from either that logistics feed or OwnTracks and dynamically display points on a map with Shiny. Stay tuned!

Most of us see the internet through the lens of browsers and apps on our laptops, desktops, watches, TVs and mobile devices. These displays are showing us — for the most part — content designed for human consumption. Sure, apps handle API interactions, but even most of that communication happens over ports 80 or 443. But, there are lots of ports out there; 0:65535, in fact (at least TCP-wise). And, all of them have some kind of data, and most of that is still targeted to something for us.

What if I told you the machines are also talking to each other using a thin/efficient protocol that allows one, tiny sensor to talk to hundreds — if not thousands — of systems without even a drop of silicon-laced sweat? How can a mere, constrained sensor do that? Well, it doesn’t do it alone. Many of them share their data over a fairly new protocol dubbed MQTT (Message Queuing Telemetry Transport).

An MQTT broker watches for devices to publish data under various topics and then also watches for other systems to subscribe to said topics and handles the rest of the interchange. The protocol is lightweight enough that fairly low-powered (CPU- and literal electric-use-wise) devices can easily send their data chunks up to a broker, and the entire protocol is robust enough to support a plethora of connections and an equal plethora of types of data.

Why am I telling you all this?

Devices that publish to MQTT brokers tend to be in the spectrum of what folks sadly call the “Internet of Things”. It’s a terrible, ambiguous name, but it’s all over the media and most folks have some idea what it means. In the context of MQTT, you can think of it as, say, a single temperature sensor publishing its data to an MQTT broker so many other things — including programs written by humans to capture, log and analyze that data — can receive it. This is starting to sound like something that might be right up R’s alley.

There are also potential use-cases where an online data processing system might want to publish data to many clients without said clients having to poll a poor, single-threaded R server constantly.

Having MQTT connectivity for R could be really interesting.

And, now we have the beginnings of said connectivity with the mqtt package.

Another Package? Really?

Yes, really.

Besides the huge potential for having a direct R-bridge to the MQTT world, I’m work-interested in MQTT since we’ve found over 35,000 exposed MQTT brokers on the default, plaintext port (1883) alone:

There are enough of them that I don’t even need to show a base map.

Some of these servers require authentication and others aren’t doing much of anything. But, there are a number of them hosted by corporations and individuals that are exposing real data. OwnTracks seems to be one of the more popular self-/badly-hosted ones.

Then, there are others — like test.mosquitto.org — which deliberately run open MQTT servers for “testing”. There definitely is testing going on there, but there are also real services using it as a production broker. The mqtt package is based on the mosquitto C library, so it’s only fitting that we show a few examples from its own test site here.

For now, there’s really one function: topic_subscribe(). Eventually, R will be able to publish to a broker and do more robust data collection operations (say, to make a live MQTT dashboard in Shiny). The topic_subscribe() function is an all-in one tool that enables you to:

  • connect to a broker
  • subscribe to a topic
  • pass in R callback functions which will be executed on connect, disconnect and when new messages come in

That’s plenty of functionality to do some fun things.

(Update: if you had tried this during the ~24 hours after the blog post was up, you may have run into an issue where the client id was not unique and got wonky results. That’s fixed, now, but it’s a good idea to use your own, unique client id when making MQTT requests).
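
(If you need a quick way to mint a reasonably unique client id, something like this works; it’s just a suggestion, not package API:)

my_client_id <- paste0("hrbrmstr-", paste(sample(c(letters, 0:9), 10, replace = TRUE), collapse = ""))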

What’s the frequency, er, temperature, Kenneth?

The mosquitto test server has one topic — /outbox/crouton-demo/temperature — which is a fake temperature sensor that just sends data periodically so you have something to test with. Let’s capture 50 samples and plot them.

Since we’re using a callback we have to use the tricksy <<- operator to store/update variables outside the callback function. And, we should pre-allocate space for said data to avoid needlessly growing objects. Here’s a complete code-block:

library(mqtt) # devtools::install_github("hrbrmstr/mqtt")
library(jsonlite)
library(hrbrthemes)
library(tidyverse)

i <- 0 # initialize our counter
max_recs <- 50 # max number of readings to get

readings <- vector("character", max_recs)

# our callback function
temp_cb <- function(id, topic, payload, qos, retain) {

  i <<- i + 1 # update the counter
  readings[i] <<- readBin(payload, "character") # update our larger-scoped vector

  return(if (i==max_recs) "quit" else "go") # need to send at least "". "quit" == done

}

topic_subscribe(
  topic = "/outbox/crouton-demo/temperature",
  message_callback=temp_cb
)

# each reading looks like this:
# {"update": {"labels":[4631],"series":[[68]]}}
map(readings, fromJSON) %>%
  map(unlist) %>%
  map_df(as.list) %>%
  ggplot(aes(update.labels, update.series)) +
  geom_line() +
  geom_point() +
  labs(x="Reading", y="Temp (F)", title="Temperature via MQTT") +
  theme_ipsum_rc(grid="XY")

We setup temp_cb() to be our callback and topic_subscribe() ensures that the underlying mosquitto library will call it every time a new message is published to that topic. The chart really shows how synthetic the data is.

Subtitles from the Edge

Temperature sensors are just the sort of thing that MQTT was designed for. But, we don’t need to be stodgy about our use of MQTT.

Just about a year ago from this post, the BBC launched live subtitles for iPlayer. Residents of the Colonies may not know what iPlayer is, but it’s the “app” that lets UK citizens watch BBC programmes on glowing rectangles that aren’t proper tellys. Live subtitles are hard to produce well (and get right) and the BBC making the effort to do so also on their digital platform is quite commendable. We U.S. folks will likely be charged $0.99 for each set of digital subtitles now that net neutrality is gone.

Now, some clever person(s) wired up some of these live subtitles to MQTT topics. We can wire up our own code in R to read them live:

bbc_callback <- function(id, topic, payload, qos, retain) {
  cat(crayon::green(readBin(payload, "character")), "\n", sep="")
  return("") # ctrl-c will terminate
}

mqtt::topic_subscribe(topic = "bbc/subtitles/bbc_news24/raw",
                      connection_callback=mqtt::mqtt_silent_connection_callback,
                      message_callback=bbc_callback)

In this case, control-c terminates things (cleanly).

You could easily modify the above code to have a bot that monitors for certain keywords then sends windowed chunks of subtitled text to some other system (Slack, database, etc). Or, create an online tidy text analysis workflow from them.
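
Here’s a sketch of that first idea (the keyword list and the message() stand-in for a Slack/database call are mine, not part of the package):

bbc_watch <- function(id, topic, payload, qos, retain) {
  txt <- readBin(payload, "character")
  # watch for keywords; swap message() for a Slack webhook or database insert
  if (grepl("\\b(brexit|storm|election)\\b", txt, ignore.case = TRUE)) {
    message("MATCH: ", txt)
  }
  return("") # keep listening; ctrl-c terminates
}

mqtt::topic_subscribe(topic = "bbc/subtitles/bbc_news24/raw",
                      connection_callback = mqtt::mqtt_silent_connection_callback,
                      message_callback = bbc_watch)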

Shiny MQTT

This is an update to the post. I had posited (below) the potential for future use in a Shiny context. If some missed points are acceptable, you can use this in Shiny now. Here’s code for a live viewer of the first temperature data example (above). (NOTE: I heavily borrowed, er, stole from Miles/Alicia’s cool webrockets example for this.)

library(shiny)
library(mqtt)
library(hrbrthemes)
library(tidyverse)

ui <- fluidPage(
  plotOutput('plot')
)

server <- function(input, output) {

  get_temps <- function(n) {

    i <- 0
    max_recs <- n
    readings <- vector("character", max_recs)

    temp_cb <- function(id, topic, payload, qos, retain) {
      i <<- i + 1
      readings[i] <<- readBin(payload, "character")
      return(if (i==max_recs) "quit" else "go")
    }

    mqtt::topic_subscribe(topic = "/outbox/crouton-demo/temperature",
                    connection_callback = mqtt::mqtt_silent_connection_callback,
                    message_callback = temp_cb)

    purrr::map(readings, jsonlite::fromJSON) %>%
      purrr::map(unlist) %>%
      purrr::map_df(as.list)

  }

  values <- reactiveValues(x = NULL, y = NULL)

  observeEvent(invalidateLater(450), {
    new_response <- get_temps(1)
    if (length(new_response) != 0) {
      values$x <- c(values$x, new_response$update.labels)
      values$y <- c(values$y, new_response$update.series)
    }
  }, ignoreNULL = FALSE)

  output$plot <- renderPlot({
    xdf <- data.frame(xval = values$x, yval = values$y)
    ggplot(xdf, aes(x = xval, y=yval)) + 
      geom_line() +
      geom_point() +
      theme_ipsum_rc(grid="XY")
  })

}

shinyApp(ui = ui, server = server)

FIN

The code is on GitHub and all input/contributions are welcome and encouraged. Some necessary TBDs are authentication & encryption. But, how would you like the API to look for using it, say, in Shiny apps? What should publishing look like? What helper functions would be useful (ones to slice & dice topic names or another to convert raw message text more safely)? Should there be an R MQTT “DSL”? Lots of things to ponder and so many sites to “test”!

P.S.

In case you are concerned about the unusually boring R package name, I wanted to use RIoT (lower-cased, of course) but riot is, alas, already taken.