
Tag Archives: r6

@RMHoge asked on Twitter how to extract the text from an epub file in R.

Here’s one way to do that which doesn’t rely on pandoc (pandoc can easily do this and ships with RStudio, but shelling out for it is cheating :-)

We’ll need some help (NOTE: two of these are GitHub-only packages):

library(archive) # install_github("jimhester/archive") + 3rd party library
library(hgr) # install_github("hrbrmstr/hgr")
library(stringi)
library(tidyverse)

We’ll use one of @hadleywickham’s books since it’s O’Reilly and they do epubs well. The archive package lets us treat the epub (which is really just a ZIP file) as a mini-filesystem and embraces “tidy” so we have lovely data frames to work with:

bk_src <- "~/Data/R Packages.epub"

bk <- archive::archive(bk_src)

bk
## # A tibble: 92 x 3
##    path                           size date               
##    <chr>                         <dbl> <dttm>             
##  1 mimetype                        20. 2015-03-24 21:49:16
##  2 OEBPS/assets/cover.png      211616. 2015-06-03 16:16:56
##  3 OEBPS/content.opf            10193. 2015-03-24 21:49:16
##  4 OEBPS/toc.ncx                30037. 2015-03-24 21:49:16
##  5 OEBPS/cover.html               315. 2015-03-24 21:49:16
##  6 OEBPS/titlepage01.html         466. 2015-03-24 21:49:16
##  7 OEBPS/copyright-page01.html   3286. 2015-03-24 21:49:16
##  8 OEBPS/toc01.html             17557. 2015-03-24 21:49:16
##  9 OEBPS/preface01.html         17784. 2015-03-24 21:49:16
## 10 OEBPS/part01.html              444. 2015-03-24 21:49:16
## # ... with 82 more rows

We care not about crufty bits and only want HTML files (NOTE: I use html for the pattern since they can be .xhtml files as well):
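
A quick sketch of that filter (it’s the same stri_detect_fixed() call used in the full conversion further down):

filter(bk, stri_detect_fixed(path, "html"))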

## # A tibble: 26 x 3
##    path                          size date               
##    <chr>                        <dbl> <dttm>             
##  1 OEBPS/cover.html              315. 2015-03-24 21:49:16
##  2 OEBPS/titlepage01.html        466. 2015-03-24 21:49:16
##  3 OEBPS/copyright-page01.html  3286. 2015-03-24 21:49:16
##  4 OEBPS/toc01.html            17557. 2015-03-24 21:49:16
##  5 OEBPS/preface01.html        17784. 2015-03-24 21:49:16
##  6 OEBPS/part01.html             444. 2015-03-24 21:49:16
##  7 OEBPS/ch01.html             12007. 2015-03-24 21:49:16
##  8 OEBPS/ch02.html             28633. 2015-03-24 21:49:18
##  9 OEBPS/part02.html             454. 2015-03-24 21:49:18
## 10 OEBPS/ch03.html             28629. 2015-03-24 21:49:18
## # ... with 16 more rows

Let’s read in one file (as a test) and convert it to text and show the first few lines of it:

archive::archive_read(bk, "OEBPS/preface01.html") %>%
  read_lines() %>%
  paste0(collapse = "\n") -> chapter

hgr::clean_text(chapter) %>%
  stri_sub(1, 1000) %>%
  cat()
## Preface
## 
## 
## In This Book
## 
## This book will guide you from being a user of R packages to being a creator of R packages. In , you’ll learn why mastering this skill is so important, and why it’s easier than you think. Next, you’ll learn about the basic structure of a package, and the forms it can take, in . The subsequent chapters go into more detail about each component. They’re roughly organized in order of importance:
## 
## 
##  The most important directory is R/, where your R code lives. A package with just this directory is still a useful package. (And indeed, if you stop reading the book after this chapter, you’ll have still learned some useful new skills.)
##  
##  The DESCRIPTION lets you describe what your package needs to work. If you’re sharing your package, you’ll also use the DESCRIPTION to describe what it does, who can use it (the license), and who to contact if things go wrong.
##  
##  If you want other people (including “future you”!) to understand how to use the functions in your package, you’

hgr::clean_text() uses some XSLT magic to pull text. My jericho package can often do a better job, but it’s rJava-based and thus a bit painful for some folks to get running.

Now, we’ll convert all the files:

filter(bk, stri_detect_fixed(path, "html")) %>%
  mutate(content = map_chr(path, ~{
    archive::archive_read(bk, .x) %>%
      read_lines() %>%
      paste0(collapse = "\n") %>%
      hgr::clean_text()
  })) %>%
  print(n=27)
## # A tibble: 26 x 4
##    path                          size date                content         
##    <chr>                        <dbl> <dttm>              <chr>           
##  1 OEBPS/cover.html              315. 2015-03-24 21:49:16 Cover           
##  2 OEBPS/titlepage01.html        466. 2015-03-24 21:49:16 "R Packages\n\n…
##  3 OEBPS/copyright-page01.html  3286. 2015-03-24 21:49:16 "R Packages\n\n…
##  4 OEBPS/toc01.html            17557. 2015-03-24 21:49:16 "navPrefaceIn T…
##  5 OEBPS/preface01.html        17784. 2015-03-24 21:49:16 "Preface\n\n\nI…
##  6 OEBPS/part01.html             444. 2015-03-24 21:49:16 Getting Started 
##  7 OEBPS/ch01.html             12007. 2015-03-24 21:49:16 "Introduction\n…
##  8 OEBPS/ch02.html             28633. 2015-03-24 21:49:18 "Package Struct…
##  9 OEBPS/part02.html             454. 2015-03-24 21:49:18 Package Compone…
## 10 OEBPS/ch03.html             28629. 2015-03-24 21:49:18 "R Code\n\nThe …
## 11 OEBPS/ch04.html             31275. 2015-03-24 21:49:18 "Package Metada…
## 12 OEBPS/ch05.html             42089. 2015-03-24 21:49:18 "Object Documen…
## 13 OEBPS/ch06.html             31484. 2015-03-24 21:49:18 "Vignettes: Lon…
## 14 OEBPS/ch07.html             28594. 2015-03-24 21:49:18 "Testing\n\nTes…
## 15 OEBPS/ch08.html             30808. 2015-03-24 21:49:18 "Namespace\n\nT…
## 16 OEBPS/ch09.html             12125. 2015-03-24 21:49:18 "External Data\…
## 17 OEBPS/ch10.html             42013. 2015-03-24 21:49:18 "Compiled Code\…
## 18 OEBPS/ch11.html              8933. 2015-03-24 21:49:18 "Installed File…
## 19 OEBPS/ch12.html              3897. 2015-03-24 21:49:18 "Other Componen…
## 20 OEBPS/part03.html             446. 2015-03-24 21:49:18 Best Practices  
## 21 OEBPS/ch13.html             59493. 2015-03-24 21:49:18 "Git and GitHub…
## 22 OEBPS/ch14.html             44702. 2015-03-24 21:49:18 "Automated Chec…
## 23 OEBPS/ch15.html             39450. 2015-03-24 21:49:18 "Releasing a Pa…
## 24 OEBPS/ix01.html             75277. 2015-03-24 21:49:20 IndexAad hoc te…
## 25 OEBPS/colophon01.html         974. 2015-03-24 21:49:20 "About the Auth…
## 26 OEBPS/colophon02.html        1653. 2015-03-24 21:49:20 "Colophon\n\nTh…
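
If the goal is plaintext files on disk, one more step gets you there. This is a quick sketch that assumes you assign the result of the pipeline above to book_text instead of just printing it:

dir.create("book_text", showWarnings = FALSE)

# write each chapter's extracted text to its own .txt file
walk2(
  book_text$content,
  file.path("book_text", paste0(basename(book_text$path), ".txt")),
  writeLines
)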

I wasn’t planning on wrapping this into a package anytime soon since it’s a pretty basic flow that may not require one, but it has since been wrapped into a small package dubbed pubcrawl.

Drop a note in the comments with your hints/workflows on converting epub to plaintext!

(You can find all R⁶ posts here)

UPDATE 2018-01-01 — this has been added to rtweet (GH version).

A Twitter discussion that spawned from Maëlle’s recent look-back post turned into a quick function for capturing an image of a Tweet/thread using webshot, rtweet, magick and glue.

Pass in a status id or a twitter URL and the function will grab an image of the mobile version of the tweet.

The ultimate goal is to make a function that builds a tweet using only R and magick. This will have to do until the new year.

tweet_shot <- function(statusid_or_url, zoom=3) {

  require(glue, quietly=TRUE)
  require(rtweet, quietly=TRUE)
  require(magick, quietly=TRUE)
  require(webshot, quietly=TRUE)

  x <- statusid_or_url[1]

  # treat the input as a URL only if it starts with http:// or https://
  is_url <- grepl("^https?://", x)

  if (is_url) {

    is_twitter <- grepl("twitter", x)
    stopifnot(is_twitter)

    is_status <- grepl("status", x)
    stopifnot(is_status)

    already_mobile <- grepl("://mobile\\.", x)
    if (!already_mobile) x <- sub("://twi", "://mobile.twi", x)

  } else {

    x <- rtweet::lookup_tweets(x)
    stopifnot(nrow(x) > 0)
    x <- glue_data(x, "https://mobile.twitter.com/{screen_name}/status/{status_id}")

  }

  tf <- tempfile(fileext = ".png")
  on.exit(unlink(tf), add=TRUE)

  webshot(url=x, file=tf, zoom=zoom)

  img <- image_read(tf)
  img <- image_trim(img)

  if (zoom > 1) img <- image_scale(img, scales::percent(1/zoom))

  img

}

Now just do one of these:

tweet_shot("947082036019388416")
tweet_shot("https://twitter.com/jhollist/status/947082036019388416")

to get an image of the rendered tweet (returned as a magick object).
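
That also means you can pipe the capture straight to disk. A quick sketch using magick’s image_write():

tweet_shot("947082036019388416") %>%
  image_write("tweet-947082036019388416.png")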

(For first-timers: R⁶-tagged posts are short & sweet with minimal exposition; R⁶ feed)

At work-work I mostly deal with medium-to-large-ish data. I often want to poke at new or existing data sets w/o working across billions of rows. I also use Apache Drill for much of my exploratory work.

Here’s how to uniformly sample data from Apache Drill using the sergeant package:

library(sergeant)

db <- src_drill("sonar")
tbl <- tbl(db, "dfs.dns.`aaaa.parquet`")

summarise(tbl, n=n())
## # Source:   lazy query [?? x 1]
## # Database: DrillConnection
##          n
##      <int>
## 1 19977415

mutate(tbl, r=rand()) %>% 
  filter(r <= 0.01) %>% 
  summarise(n=n())
## # Source:   lazy query [?? x 1]
## # Database: DrillConnection
##        n
##    <int>
## 1 199808

mutate(tbl, r=rand()) %>% 
  filter(r <= 0.50) %>% 
  summarise(n=n())
## # Source:   lazy query [?? x 1]
## # Database: DrillConnection
##         n
##     <int>
## 1 9988797
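
If you’re curious what Drill actually receives, dbplyr can show the translated SQL for the sampling pipeline (a sketch; the exact translation will depend on your sergeant/dbplyr versions):

mutate(tbl, r=rand()) %>% 
  filter(r <= 0.01) %>% 
  summarise(n=n()) %>% 
  dplyr::show_query()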

And, for groups (using a different/larger “database”):

fdns <- tbl(db, "dfs.fdns.`201708`")

summarise(fdns, n=n())
## # Source:   lazy query [?? x 1]
## # Database: DrillConnection
##            n
##        <int>
## 1 1895133100

filter(fdns, type %in% c("cname", "txt")) %>% 
  count(type)
## # Source:   lazy query [?? x 2]
## # Database: DrillConnection
##    type        n
##   <chr>    <int>
## 1 cname 15389064
## 2   txt 67576750

filter(fdns, type %in% c("cname", "txt")) %>% 
  group_by(type) %>% 
  mutate(r=rand()) %>% 
  ungroup() %>% 
  filter(r <= 0.15) %>% 
  count(type)
## # Source:   lazy query [?? x 2]
## # Database: DrillConnection
##    type        n
##   <chr>    <int>
## 1 cname  2307604
## 2   txt 10132672

I will (hopefully) be better about cranking out these bite-sized posts more frequently in 2018.

Matt @stiles is a spiffy data journalist at the @latimes and he posted an interesting chart on U.S. Attorneys General longevity (given that the current US AG is on thin ice):

I thought it would be neat (since Matt did the data scraping part already) to look at AG tenure distribution by party, while also pointing out where Sessions falls.

Now, while Matt did scrape the data, it’s tucked away in a JavaScript variable in an iframe on the page that contains his vis.

It’s still easier to get it from there than to re-scrape Wikipedia (like Matt did), thanks to the V8 package by @opencpu.

The following code:

  • grabs the vis iframe
  • extracts and evaluates the target javascript to get a nice data frame
  • performs some factor re-coding (for better grouping and to make it easier to identify Sessions)
  • plots the distributions using the beeswarm quasirandom algorithm

library(V8)
library(rvest)
library(ggbeeswarm)
library(hrbrthemes)
library(tidyverse)

pg <- read_html("http://mattstiles.org/dailygraphics/graphics/attorney-general-tenure-20172517/child.html?initialWidth=840&childId=pym_0&parentTitle=Chart%3A%20If%20Ousted%2C%20Jeff%20Sessions%20Would%20Have%20a%20Historically%20Short%20Tenure%20%7C%20The%20Daily%20Viz&parentUrl=http%3A%2F%2Fthedailyviz.com%2F2017%2F07%2F25%2Fchart-if-ousted-jeff-sessions-would-have-a-historically-short-tenure%2F")

ctx <- v8()
ctx$eval(html_nodes(pg, xpath=".//script[contains(., 'DATA')]") %>% html_text())

ctx$get("DATA") %>% 
  as_tibble() %>% 
  readr::type_convert() %>% 
  mutate(party = ifelse(is.na(party), "Other", party)) %>% 
  mutate(party = fct_lump(party)) %>% 
  mutate(color1 = case_when(
    party == "Democratic" ~ "#313695",
    party == "Republican" ~ "#a50026",
    party == "Other" ~ "#4d4d4d")
  ) %>% 
  mutate(color2 = ifelse(grepl("Sessions", label), "#2b2b2b", "#00000000")) -> ags

ggplot() + 
  geom_quasirandom(data = ags, aes(party, amt, color = color1)) +
  geom_quasirandom(data = ags, aes(party, amt, color = color2), 
                   fill = "#ffffff00", size = 4, stroke = 0.25, shape = 21) +
  geom_text(data = data_frame(), aes(x = "Republican", y = 100, label = "Jeff Sessions"), 
            nudge_x = -0.15, family = font_rc, size = 3, hjust = 1) +
  scale_color_identity() +
  scale_y_comma(limits = c(0, 4200)) +
  labs(x = "Party", y = "Tenure (days)", 
       title = "U.S. Attorneys General",
       subtitle = "Distribution of tenure in office, by days & party: 1789-2017",
       caption = "Source data/idea: Matt Stiles <bit.ly/2vXAHTM>") +
  theme_ipsum_rc(grid = "XY")

I turned the data into a CSV and stuck it in this gist if folks want to play w/o doing the js scraping.

I’ve been doing intermittent prep work for a follow-up to an earlier post on store closings and came across this CNN Money “article” on it. Said “article” is a deliberately obfuscated or lazily crafted series of GIF images that contain all the Radio Shack impending store closings. It’s the most comprehensive list I’ve found, but the format is terrible and there’s no easy, in-browser way to download them all.

CNN has ToS that prevent automated data gathering from CNN-proper. But they used Adobe Document Cloud for these images, which has no similar restrictions from a quick glance at its ToS. That means you get an R⁶ post on how to grab the 38 individual images and combine them into one PDF. I did this all with the hope of OCRing the text, which has not panned out too well since the image quality and font were likely deliberately chosen to make it hard to do precisely what I’m trying to do.

If you work through the example, you’ll get a feel for:

  • using sprintf() to take a template and build a vector of URLs
  • using dplyr progress bars
  • customizing httr verb options to ensure you can get to the content
  • using purrr to iterate through a process of turning raw image bytes into image content (via magick) and turning a list of images into a PDF

library(httr)
library(magick)
library(tidyverse)

url_template <- "https://assets.documentcloud.org/documents/1657793/pages/radioshack-convert-p%s-large.gif"

pb <- progress_estimated(38)

sprintf(url_template, 1:38) %>% 
  map(~{
    pb$tick()$print()
    GET(url = .x, 
        add_headers(
          accept = "image/webp,image/apng,image/*,*/*;q=0.8", 
          referer = "http://money.cnn.com/interactive/technology/radio-shack-closure-list/index.html", 
          authority = "assets.documentcloud.org"))    
  }) -> store_list_pages

map(store_list_pages, content) %>% 
  map(image_read) %>% 
  reduce(image_join) %>% 
  image_write("combined_pages.pdf", format = "pdf")

I figured out the Document Cloud links and necessary httr::GET() options by using Chrome Developer Tools and my curlconverter package.

If any academic-y folks have a test subject (er, summer intern) with a free hour and would be willing to have them transcribe this list and stick it on GitHub, you’d have my eternal thanks.

If you follow me on Twitter or monitor @Rapid7’s Community Blog you know I’ve been involved a bit in the WannaCry ransomworm triage.

One thing I’ve been doing is making charts of the hourly contributions to the Bitcoin addresses that the current/main attackers are using to accept ransom payments (which you really shouldn’t pay at this point, even if you are impacted: it’s unlikely they’re still handing over decryption keys, and their odds of cashing out the wallets without getting caught are slim).

There’s a full-on CRAN-ified Rbitcoin package but I didn’t need the functionality in it (yet) to do the monitoring. I posted a hastily-crafted gist on Friday so folks could play along at home, but the code here is a bit more nuanced (and does more).

In the spirit of these R⁶ posts, the following is presented without further commentary apart from the interwoven comments, with one caveat: this method captures super-micro-payments that do not necessarily translate 1:1 to victim count (it’s well within ball-park estimates but not precise without inspecting each transaction).

library(jsonlite)
library(hrbrthemes)
library(tidyverse)

# the wallets accepting ransom payments

wallets <- c(
  "115p7UMMngoj1pMvkpHijcRdfJNXj6LrLn",
  "12t9YDPgwueZ9NyMgw519p7AA8isjr6SMw",
  "13AM4VW2dhxYgXeQepoHkHSQuy6NgaEb94"
)

# easy way to get each wallet info vs bringing in the Rbitcoin package

sprintf("https://blockchain.info/rawaddr/%s", wallets) %>%
  map(jsonlite::fromJSON) -> chains

# get the current USD conversion (tho the above has this, too)

curr_price <- jsonlite::fromJSON("https://blockchain.info/ticker")

# calculate some basic stats (total_received is in satoshi, so divide by 1e8 to get BTC)

tot_bc <- sum(map_dbl(chains, "total_received")) / 1e8
tot_usd <- tot_bc * curr_price$USD$last
tot_xts <- sum(map_dbl(chains, "n_tx"))

# This needs to be modified once the counters go above 100 and also needs to
# account for rate limits in the blockchain.info API

chains2 <- list() # ensure chains2 exists even if no wallet has gone past the 50-transaction page

paged <- which(map_dbl(chains, "n_tx") > 50)
if (length(paged) > 0) {
  sprintf("https://blockchain.info/rawaddr/%s?offset=50", wallets[paged]) %>%
    map(jsonlite::fromJSON) -> chains2
}

# We want hourly data across all transactions

map_df(chains, "txs") %>%
  bind_rows(map_df(chains2, "txs")) %>% 
  mutate(xts = anytime::anytime(time),
         xts = as.POSIXct(format(xts, "%Y-%m-%d %H:00:00", tz = "GMT"), tz = "GMT")) %>%  # truncate to the hour, in GMT
  count(xts) -> xdf

# Plot it

ggplot(xdf, aes(xts, y = n)) +
  geom_col() +
  scale_y_comma(limits = c(0, max(xdf$n))) +
  labs(x = "Day/Time (GMT)", y = "# Transactions",
       title = "Bitcoin Ransom Payments-per-hour Since #WannaCry Ransomworm Launch",
       subtitle=sprintf("%s transactions to-date; %s total bitcoin; %s USD; Chart generated at: %s EDT",
                        scales::comma(tot_xts), tot_bc, scales::dollar(tot_usd), Sys.time())) +
  theme_ipsum_rc(grid="Y")
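
For a quick per-wallet breakdown as well, here’s a small sketch that reuses the chains list and conversion rate from above:

tibble(
  wallet = wallets,
  btc = map_dbl(chains, "total_received") / 1e8,
  n_tx = map_dbl(chains, "n_tx")
) %>%
  mutate(usd = btc * curr_price$USD$last) %>%
  arrange(desc(usd))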

I hope all goes well with everyone as you try to ride out this ransomworm storm over the coming weeks. It will likely linger for quite a while, so make sure you patch!

Once I realized that my planned, larger post would not come to fruition today, I took the R⁶ post (i.e. “minimal exposition, keen focus”) route, prompted by a Twitter discussion with some R mates who needed to convert “lightly formatted” Microsoft Word (docx) documents to markdown. Something like this:

to:

Does pandoc work?
=================

Simple document with **bold** and *italics*.

This is definitely a job that pandoc can handle.

pandoc is a Haskell (yes, Haskell) program created by John MacFarlane and is an amazing tool for transcoding documents. And, if you’re a “modern” R/RStudio user, you likely use it every day because it’s ultimately what powers rmarkdown / knitr.

Yes, you read that correctly. Your beautiful PDF, Word and HTML R reports are powered by — and, would not be possible without — Haskell.

Doing the aforementioned conversion from docx to markdown is super-simple from R:

rmarkdown::pandoc_convert("simple.docx", "markdown", output="simple.md")

Give the help on rmarkdown::pandoc_convert() a read as well as the very thorough and helpful documentation over at pandoc.org to see the power available at your command.
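
And if you have a whole folder of these lightly formatted docx files, the same call maps cleanly over a directory listing. A quick sketch (adjust the path/pattern for your own files):

library(purrr)

list.files(pattern = "\\.docx$") %>%
  walk(~rmarkdown::pandoc_convert(.x, to = "markdown",
                                  output = sub("\\.docx$", ".md", .x)))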

Just One More Thing

This section — technically — violates the R⁶ principle so you can stop reading if you’re a purist :-)

There’s a neat, not-on-CRAN package by François Keck called subtools (https://github.com/fkeck/subtools) which can slice, dice and reformat digital content subtitles. There are multiple formats for these subtitle files and it seems to be able to handle them all.

There was a post (earlier in April) about Ranking the Negativity of Black Mirror Episodes. That post is in Python and I’ve never had time to fully replicate it in R.

Here’s a snippet (sans exposition) that can get you started pulling subtitles into R and tidytext. I would have written scraper code, but the various subtitle aggregation sites make that a task better suited for something like my splashr package and I just had no cycles to write it. So, I grabbed the first season of “The Flash” and used the Bing sentiment lexicon from tidytext to see how the season looked.

The overall scoring for a given episode is naive and can definitely be improved upon.

Definitely drop a link to anything you create in the comments!

# devtools::install_github("fkeck/subtools")

library(subtools)
library(tidytext)
library(hrbrthemes)
library(tidyverse)

data(stop_words)

bing <- get_sentiments("bing")
afinn <- get_sentiments("afinn")

fils <- list.files("flash/01", pattern = "srt$", full.names = TRUE)

pb <- progress_estimated(length(fils))

map_df(1:length(fils), ~{

  pb$tick()$print()

  read.subtitles(fils[.x]) %>%
    sentencify() %>%
    .$subtitles %>%
    unnest_tokens(word, Text) %>%
    anti_join(stop_words, by="word") %>%
    inner_join(bing, by="word") %>%
    inner_join(afinn, by="word") %>%
    mutate(season = 1, ep = .x)

}) %>% as_tibble() -> season_sentiments


count(season_sentiments, ep, sentiment) %>%
  mutate(pct = n/sum(n),
         pct = ifelse(sentiment == "negative", -pct, pct)) -> bing_sent

ggplot() +
  geom_ribbon(data = filter(bing_sent, sentiment=="positive"),
              aes(ep, ymin=0, ymax=pct, fill=sentiment), alpha=3/4) +
  geom_ribbon(data = filter(bing_sent, sentiment=="negative"),
              aes(ep, ymin=0, ymax=pct, fill=sentiment), alpha=3/4) +
  scale_x_continuous(expand=c(0,0.5), breaks=seq(1, 23, 2)) +
  scale_y_continuous(expand=c(0,0), limits=c(-1,1),
                     labels=c("100%\nnegative", "50%", "0", "50%", "positive\n100%")) +
  labs(x="Season 1 Episode", y=NULL, title="The Flash — Season 1",
       subtitle="Sentiment balance per episode") +
  scale_fill_ipsum(name="Sentiment") +
  guides(fill = guide_legend(reverse=TRUE)) +
  theme_ipsum_rc(grid="Y") +
  theme(axis.text.y=element_text(vjust=c(0, 0.5, 0.5, 0.5, 1)))
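
As noted above, the per-episode scoring is naive. One quick improvement is to weight by the AFINN scores that are already joined in. A sketch (this assumes the AFINN column is named score, as it was when this lexicon shipped with tidytext; newer releases call it value):

season_sentiments %>%
  group_by(ep) %>%
  summarise(mean_afinn = mean(score)) %>%
  arrange(mean_afinn)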

RStudio is a great way to work through analysis tasks, and I suspect most folks use the “desktop” version of the product on their local workstations.

The fine folks at RStudio also make a server version (the RStudio codebase can build either the desktop or the server product, and the two are generally at 100% feature parity for interactive use). You only need to set it up on a compatible server and then access it via any modern web browser to get virtually the same experience you have on the desktop.

I use RStudio Server as well as RStudio Desktop and have never been comfortable mixing web browsing tasks and analysis tasks in the same browser (it’s one of the many reasons I dislike Jupyter notebooks). I also keep many apps open and would inevitably try to cmd-tab (I’m on macOS) between apps to find the RStudio Server one, only to realize I should have been tabbing through Chrome tabs instead.

Now, it’s not too hard to fire up a separate Chrome or Safari instance to get a dedicated window, but it’d be great if there were a way to make it “feel” more like an app, just like RStudio Desktop. Well, it turns out there is a way: nativefier.

If you use the Slack standalone desktop client, the Atom text editor or a few other “modern” apps, they are pretty much just web pages wrapped in a browser shell using something like Electron. Jia Hao came up with the idea of being able to do the same thing for any web page.
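
nativefier itself is a Node.js command-line tool; installing it globally via npm is the usual route:

npm install -g nativefier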

To create a standalone RStudio Server client for a particular RStudio Server instance, just do the following after installing nativefier:

nativefier "https://rstudio.example.com:8787/" --name "RStudio @ Example"

Replace the URL with the one you currently use in-browser (and, please consider using SSL/TLS when connecting over the public internet) and use any name that will be identifiable to you. You get a safe, standalone application and will never have to worry about browser crashes impacting your workflow.

There are many customizations you can make to the app shell and you can even use your own icons to represent servers differently. It’s all super-simple to setup and get working.
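
For example, pointing --icon at a local image gives each server app its own look (the icon path here is just a placeholder):

nativefier "https://rstudio.example.com:8787/" --name "RStudio @ Example" --icon ~/icons/rstudio-example.png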

Note that macOS folks have had a pre-nativefier way to do this same thing with a tool called Fluid, but it uses the Apple WebKit/Safari shell rather than the Chrome shell, and I prefer the Chrome shell and the cross-platform app-making ability.

Hopefully this quick R⁶ tip will help you corral your RStudio Server connections. And, don’t forget to join in on the R⁶ bandwagon and share your own quick tips, snippets and hints to help the broader R community level-up their work.