

Here’s a quick, easy recipe for soft roll dough that you can use to make dairy-free Parker House rolls, knot rolls or just plain rolls.

NOTE: It makes ~3 lbs of dough. I freeze the extra to save time during the workweek. Also, weigh your ingredients. I have [this scale](http://amzn.to/1o6uRuL).

Oven at 375°F

### Ingredients

– Bread flour (750g)
– Instant dry yeast (12g)
– Almond milk, room temperature (400ml)
– Dairy-free butter substitute, soft (I use [Earth Balance](http://amzn.to/1QwpagJ)) (75g)
– Eggs (55°F) (75g) (~2 medium)
– Sugar (I use light brown) (75g)
– Salt (19g)

### Work

1. Combine the flour and yeast well with a whisk. Fold in the rest of the ingredients. Using a stand mixer, mix on slow for 4 minutes and then medium for 3 minutes. You’re looking for a firm, elastic dough with good gluten development.
2. Ferment the whole batch for 1 hour or until it’s doubled. I leave it in the mixer bowl, but shape it into a rough ball, top it with a lid from a pot and set it on the stove with the oven at 200°F, since we live in Maine and keep our house cold in the winter.
3. Remove & shape the dough into a nice round. Let it rest for 20 minutes on a floured surface or a board with parchment paper.
4. Divide it into 50g pieces and make rounds.
5. Flour the ones you won’t be using right away and freeze them.

**For Parker House**

1. Roll out each round into an oval 3mm thick. Make sure there’s no excess flour. Fold in half lengthwise. Roll to make a wedge shape that tapers from roughly 6mm thick to 3mm thick.
2. Put them on a baking sheet with parchment paper, giving some distance between them. Brush with fake butter or really nice olive oil. Let rise for 40m. You should be able to poke one (gently) and have it resume its shape.
3. Bake at 375°F (with convection turned on if you have that kind of oven) for ~20m or until golden brown.
4. Brush with more oil or fake butter.

**For Knots**

1. Roll each round into a 6-inch rope.
2. Make a knot (you really can’t mess it up, but hit up YouTube for what it looks like).
3. Arrange on a sheet pan with parchment paper.
4. If you want to, make a _room temperature_ egg wash and brush it on.
5. Let them rise for ~50m. Should be springy, as in step #2 of the Parker House instructions.
6. Brush again if you used egg wash.
7. Bake at 375°F (with convection turned on if you have that kind of oven) for ~15m or until golden brown.

**For “Rolls”**

1. Ensure each round is about the same size and as spherical as you can make it.
2. Arrange on a sheet pan with parchment paper.
3. Let them rise for ~25m. Should be springy, as in step #2 of the Parker House instructions.
4. Bake at 375°F (with convection turned on if you have that kind of oven) for ~25m or until golden brown.

High-resolution and SVG versions of the new R logo are finally available.

I converted the SVG to WKT (file here), which means we can use it like we would a shapefile in R. That includes plotting!

Here’s a short example of how to read that WKT and plot the logo using ggplot2:

library(sp)
library(maptools)
library(rgeos)     # readWKT()
library(ggplot2)
library(ggthemes)  # theme_map()
 
# grab the WKT version of the logo (cached locally after the first run)
r_wkt_gist_file <- "https://gist.githubusercontent.com/hrbrmstr/07d0ccf14c2ff109f55a/raw/db274a39b8f024468f8550d7aeaabb83c576f7ef/rlogo.wkt"
if (!file.exists("rlogo.wkt")) download.file(r_wkt_gist_file, "rlogo.wkt")
rlogo <- readWKT(paste(readLines("rlogo.wkt", warn=FALSE), collapse=""))
 
# the WKT holds two polygons: the gray "halo" and the blue "R"
rlogo_shp <- SpatialPolygonsDataFrame(rlogo, data.frame(poly=c("halo", "r")))
rlogo_poly <- fortify(rlogo_shp, region="poly")
 
ggplot(rlogo_poly) + 
  geom_polygon(aes(x=long, y=lat, group=id, fill=id)) + 
  scale_fill_manual(values=c(halo="#b8babf", r="#1e63b5")) +
  coord_equal() +  # keep the logo's aspect ratio
  theme_map() + 
  theme(legend.position="none")


UPDATE: `curlconverter` will now return (as the function return value) a working R function. See the README for examples.


When you visit a site like the LA Times’ NH Primary Live Results site, you may wish you had the data they used to make the tables & visualizations on the page.


Sometimes getting it is as simple as opening up your browser’s “Developer Tools” console and looking for XHR (XML HTTP Request) calls.


You can actually see a preview of those requests (usually JSON) right in the panel.


While you could go through all the headers and cookies and transcribe them into httr::GET or httr::POST requests, that’s tedious, especially when most browsers present an option to “Copy as cURL”. cURL is a command-line tool (with a corresponding systems programming library) that you can use to grab data from URIs. The RCurl and curl packages in R are built on that underlying library. The cURL command line captures all of the information necessary to replicate the request the browser made for a resource. The cURL command line for the URL that gets the Republican data is:

curl 'http://graphics.latimes.com/election-2016-31146-feed.json' \
  -H 'Pragma: no-cache' \
  -H 'DNT: 1' \
  -H 'Accept-Encoding: gzip, deflate, sdch' \
  -H 'X-Requested-With: XMLHttpRequest' \
  -H 'Accept-Language: en-US,en;q=0.8' \
  -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.39 Safari/537.36' \
  -H 'Accept: */*' \
  -H 'Cache-Control: no-cache' \
  -H 'If-None-Match: "7b341d7181cbb9b72f483ae28e464dd7"' \
  -H 'Cookie: s_fid=79D97B8B22CA721F-2DD12ACE392FF3B2; s_cc=true' \
  -H 'Connection: keep-alive' \
  -H 'If-Modified-Since: Wed, 10 Feb 2016 16:40:15 GMT' \
  -H 'Referer: http://graphics.latimes.com/election-2016-new-hampshire-results/' \
  --compressed

While that’s easier than manual copy/paste transcription, these requests are uniform enough that there Has To Be A Better Way. And, now there is, with curlconverter.

The curlconverter package has (for the moment) two main functions:

  • straighten() : which returns a list with all of the necessary parts to craft an httr POST or GET call
  • make_req() : which actually returns a working httr call, pre-filled with all of the necessary information.

By default, either function reads from the clipboard (envision the workflow where you do the “Copy as cURL” then switch to R and type make_req() or req_params <- straighten()), but they can take in a vector of cURL command lines, too (NOTE: make_req() is currently limited to one while straighten() can handle as many as you want).

Let’s show what happens using the election results cURL command line:

REP <- "curl 'http://graphics.latimes.com/election-2016-31146-feed.json' -H 'Pragma: no-cache' -H 'DNT: 1' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'X-Requested-With: XMLHttpRequest' -H 'Accept-Language: en-US,en;q=0.8' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.39 Safari/537.36' -H 'Accept: */*' -H 'Cache-Control: no-cache'  -H 'Cookie: s_fid=79D97B8B22CA721F-2DD12ACE392FF3B2; s_cc=true' -H 'Connection: keep-alive' -H 'If-Modified-Since: Wed, 10 Feb 2016 16:40:15 GMT' -H 'Referer: http://graphics.latimes.com/election-2016-new-hampshire-results/' --compressed"
 
resp <- curlconverter::straighten(REP)
jsonlite::toJSON(resp, pretty=TRUE)
 
    ## [
    ##   {
    ##     "url": ["http://graphics.latimes.com/election-2016-31146-feed.json"],
    ##     "method": ["get"],
    ##     "headers": {
    ##       "Pragma": ["no-cache"],
    ##       "DNT": ["1"],
    ##       "Accept-Encoding": ["gzip, deflate, sdch"],
    ##       "X-Requested-With": ["XMLHttpRequest"],
    ##       "Accept-Language": ["en-US,en;q=0.8"],
    ##       "User-Agent": ["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.39 Safari/537.36"],
    ##       "Accept": ["*/*"],
    ##       "Cache-Control": ["no-cache"],
    ##       "Connection": ["keep-alive"],
    ##       "If-Modified-Since": ["Wed, 10 Feb 2016 16:40:15 GMT"],
    ##       "Referer": ["http://graphics.latimes.com/election-2016-new-hampshire-results/"]
    ##     },
    ##     "cookies": {
    ##       "s_fid": ["79D97B8B22CA721F-2DD12ACE392FF3B2"],
    ##       "s_cc": ["true"]
    ##     },
    ##     "url_parts": {
    ##       "scheme": ["http"],
    ##       "hostname": ["graphics.latimes.com"],
    ##       "port": {},
    ##       "path": ["election-2016-31146-feed.json"],
    ##       "query": {},
    ##       "params": {},
    ##       "fragment": {},
    ##       "username": {},
    ##       "password": {}
    ##     }
    ##   }
    ## ]

You can then use the items in the returned list to make a GET request manually (but still tediously).
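
Something like this (a rough sketch, using the element names shown in the JSON output above):

library(httr)
 
# first (and only) request from straighten()
req <- resp[[1]]
 
# rebuild the browser's request by hand from the captured pieces
res <- GET(req$url,
           do.call(add_headers, as.list(unlist(req$headers))),
           set_cookies(.cookies = unlist(req$cookies)))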

curlconverter’s make_req() will try to do this conversion for you automagically using httr’s little-used VERB() function. It’s easier to show than to tell:

curlconverter::make_req(REP)
VERB(verb = "GET", url = "http://graphics.latimes.com/election-2016-31146-feed.json", 
     add_headers(Pragma = "no-cache", 
                 DNT = "1", `Accept-Encoding` = "gzip, deflate, sdch", 
                 `X-Requested-With` = "XMLHttpRequest", 
                 `Accept-Language` = "en-US,en;q=0.8", 
                 `User-Agent` = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.39 Safari/537.36", 
                 Accept = "*/*", 
                 `Cache-Control` = "no-cache", 
                 Connection = "keep-alive", 
                 `If-Modified-Since` = "Wed, 10 Feb 2016 16:40:15 GMT", 
                 Referer = "http://graphics.latimes.com/election-2016-new-hampshire-results/"))

You probably don’t need all those headers, but it’s far easier to delete the ones you don’t need than to build the request by hand through trial and error. Try assigning the output of that function to a variable and inspecting what’s returned. I think you’ll find this is a big enhancement to your workflows (if you do a lot of this “scraping without scraping”).
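
Per the UPDATE at the top of this post, make_req() returns that call as a working function, so the whole “Copy as cURL”-to-data workflow can be as short as this sketch (exact behavior may vary by package version):

# make_req() hands back a callable function wrapping the VERB() call
get_rep <- curlconverter::make_req(REP)
res <- get_rep()
dat <- jsonlite::fromJSON(httr::content(res, as="text"))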

You can find the package on GitHub. It’s built with V8 and uses a modified version of the curlconverter Node module by Nick Carneiro.

It’s still in beta and could use some tyre kicking. Convos in the comments, issues or feature requests in GH (pls).

The `knitr`/R markdown system is a great way to organize reports and analyses. However, the built-in templates (the ones that come with RStudio/the `rmarkdown` package) rely on Bootstrap and also use jQuery. There’s nothing wrong with that, but the generated standalone HTML documents (which are a great way to distribute reports) don’t really need all that cruft, and it’s fun & informative to check out new frameworks from time to time. Also, jQuery is a heavy crutch I’m working hard to not need anymore.

To that end, I created a package — [`markdowntemplates`](https://github.com/hrbrmstr/markdowntemplates) — that contains three alternate templates that you can use out of the box or (hopefully) clone, customize and use in your future work. One template is based on the [Bulma CSS framework](http://bulma.io), the other is based on the [Skeleton CSS framework](http://getskeleton.com) and the last one is a super-minimal template with no formatting (i.e. it’s a good one to build on).

The GitHub repo has screenshots of the basic formatting.

I tried to keep with the base formatting of each theme, but I went a bit crazy and showed how to have a fixed banner in the Skeleton version.

### How it works

There are three directories inside `inst/rmarkdown/templates`, each with a similar structure:

– a `resources` directory with CSS (and potentially JavaScript)
– a `skeleton` directory which holds example `Rmd` “skeleton” files
– a `base.html` file which is the parameterized HTML template for the Rmd
– a `template.yaml` file which is how RStudio/`knitr` knows there’s one or more R markdown templates in your package
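
To use one of the templates from an `Rmd` document, you point the YAML `output` field at the package. A minimal example (assuming, per the note further down, an exported output format named for each template, e.g. `markdowntemplates::skeleton`):

---
title: "Sample Report"
output: markdowntemplates::skeleton
---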

The `minimal` `base.html` is small enough to include here:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"$if(lang)$ lang="$lang$" xml:lang="$lang$"$endif$>
 
<head>
 
<meta charset="utf-8">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="generator" content="pandoc" />
<meta name="viewport" content="width=device-width, initial-scale=1">
 
<title>$if(title)$$title$$endif$</title>
 
$for(header-includes)$
$header-includes$
$endfor$
 
$if(highlightjs)$
<style type="text/css">code{white-space: pre;}</style>
<link rel="stylesheet"
      href="$highlightjs$/$if(highlightjs-theme)$$highlightjs-theme$$else$default$endif$.css"
      $if(html5)$$else$type="text/css" $endif$/>
<script src="$highlightjs$/highlight.js"></script>
<script type="text/javascript">
if (window.hljs && document.readyState && document.readyState === "complete") {
   window.setTimeout(function() {
      hljs.initHighlighting();
   }, 0);
}
</script>
$endif$
 
$if(highlighting-css)$
<style type="text/css">code{white-space: pre;}</style>
<style type="text/css">
$highlighting-css$
</style>
$endif$
 
$for(css)$
<link rel="stylesheet" href="$css$" $if(html5)$$else$type="text/css" $endif$/>
$endfor$
 
</head>
 
<body>
<div class="container">
 
<h1>$if(title)$$title$$endif$</h1>
 
$for(include-before)$
$include-before$
$endfor$
 
$if(toc)$
<div id="$idprefix$TOC">
$toc$
</div>
$endif$
 
$body$
 
$for(include-after)$
$include-after$
$endfor$
 
</div>
 
$if(mathjax-url)$
<!-- dynamically load mathjax for compatibility with self-contained -->
<script>
  (function () {
    var script = document.createElement("script");
    script.type = "text/javascript";
    script.src  = "$mathjax-url$";
    document.getElementsByTagName("head")[0].appendChild(script);
  })();
</script>
$endif$
 
</body>
</html>

I kept a bit of the RStudio template code for source code formatting, but grokking the actual template language should be pretty straightforward. You should be able to pick out `$title$` in there and you can add as many parameters to the `Rmd` YAML section as you like (with corresponding counterparts in that template). I added a corresponding, exported R function for each supported template to show how easy it is to customize the parameters while also accepting further customizations in the YAML of each `Rmd`.
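
For example, here’s what a custom parameter round trip might look like (a hypothetical `navlink` parameter, named purely for illustration). First, add it to the `Rmd` YAML:

---
title: "Sample Report"
navlink: "[My Org](https://example.com)"
output: markdowntemplates::skeleton
---

Then reference it in `base.html` with the same conditional syntax shown above:

$if(navlink)$
<div class="banner">$navlink$</div>
$endif$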

Imagine building a base template with your personal or organization’s branding *and* having it set apart from the cookie-cutter RStudio `rmarkdown` Bootstrap/jQuery template. Or, create course-specific templates like the [`s20x` package](https://github.com/cran/s20x). Once you peek behind the curtain to see how things are done, it’s not so complex and the sky is the limit.

I’ll try to get these onto CRAN as soon as possible. If you have a preference for another CSS framework (I’m thinking of adding a “Metro” CSS kit and a Google web starter CSS kit), shoot me an issue or PR and I’ll incorporate it. The more examples we have, the easier it will be for folks to create new templates.

Any & all feedback is most welcome.

(If you don’t know what XML is, you should probably [read a primer](https://en.wikipedia.org/wiki/XML) before reading this post.)

When working with data, one inevitably comes across things encoded in XML. I’m in the “anti-XML” camp, but deal with my fair share of XML in “cyber” and help out enough people who have to work with XML that I’ve become pretty proficient when slicing & dicing it.

R has two main packages to deal with XML: the original `XML` package and the more lightweight and modern `xml2` package. If you really need all the power of `libxml2` (the C library that powers both packages) or are _creating_ XML from R, then you probably know your way around the `XML` package and are pretty self-sufficient.

Most folks can get by with the `xml2` package if their goal is to work with XML data. By “work with” I mean read in files or data from APIs that come in XML format and find the nuggets of gold in between all those `<` and `>` tags. Doing so means pinpointing the node(s) you are after with a query language called `XPath`. Working with `XPath` can be pretty daunting for those who went to school to ultimately cure diseases, build high-performing stock portfolios, target advertising to everyone or perform a host of other real work. Becoming an expert in `XPath` was probably never on your bucket list, but to work with XML you will need to be familiar with it.
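
To make that concrete, here’s the flavor of `XPath` work we’re talking about, done with `xml2` (this uses the same NPR feed as the example further down; `//item/title` selects every `<title>` inside an `<item>`, wherever it sits in the tree):

library(xml2)
 
doc <- read_xml("http://www.npr.org/rss/rss.php?id=1001")
xml_text(xml_find_all(doc, "//item/title"))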

The [`xmlview`](https://github.com/hrbrmstr/xmlview) package provides a way to visually inspect XML and interactively test out `XPath` expressions. It’s as simple to use as:

devtools::install_github("ramnathv/htmlwidgets") # we use some bleeding edge features
devtools::install_github("hrbrmstr/xmlview")
library(xml2)
library(xmlview)
 
# plain text XML
xml_view("<note><to>Tove</to><from>Jani</from><heading>Reminder</heading><body>Don't forget me this weekend!</body></note>")
 
# read-in XML document
doc <- read_xml("http://www.npr.org/rss/rss.php?id=1001")
xml_view(doc, add_filter=TRUE)

(There’s also an experimental `xml_tree_view()` in there by @timelyportfolio that we’ll be adding features to at a pretty rapid pace.)

Here’s a screenshot of it in action:


There are options to change the CSS styling for the formatted code. Yep, it will format and highlight XML for you so it’s easier to work with. There’s an animated gif of a screencast over [on github](https://github.com/hrbrmstr/xmlview) as well.

Once you perfect your `XPath` expression, hit the “R” button and it will generate the code you can copy back into RStudio. It understands namespaces, but try not to stuff a huge XML document in there, as browsers don’t work well with large data elements (the viewer is an `htmlwidget` and is, hence, browser-based).

It works with plain character XML/HTML and many `xml2` data types. I have no current plans for `XML` package object support, but toss up an issue on GitHub if you really need it (or, better yet, a PR). If there are other desired features (especially from educators), please post a request in a GitHub issue as well.

Watch for more features in the coming weeks and a CRAN release once the bleeding-edge `htmlwidgets` package makes it to CRAN.

Despite being a cybersecurity professional, it’s pretty easy to social engineer me.

I’ll note that @jayjacobs does it all the time to me.

I took Thorsten’s tweet as a challenge to ggplot2-ize the Bloomberg visualizations as closely as possible.

All the code is [on github](https://github.com/hrbrmstr/forceaccounted) and you can see the finished product (knitted from an Rmd file) [on this project page](http://rud.is/projects/force_accounted.html) or mini-scroll below in the `iframe`.

I encourage folks to look at the project (it’s actually a package) source as it has quite a bit of data munging and ggplot2 “tricks” that could be useful in “real” visualizations.

`iptools` is a set of tools for working with IP addresses. Not just work, but work _fast_. It’s backed by `Rcpp` and now uses the [AsioHeaders](http://dirk.eddelbuettel.com/blog/2016/01/07/#asioheaders_1.11.0-1) package by Dirk Eddelbuettel, which means it no longer needs to _link_ against the monolithic Boost libraries and *works on Windows*!

What can you do with it? One thing you can do is take a vector of domain names and turn them into IP addresses:

library(iptools)
 
hostname_to_ip(c("rud.is", "dds.ec", "ironholds.org", "google.com"))
 
## [[1]]
## [1] "104.236.112.222"
## 
## [[2]]
## [1] "162.243.111.4"
## 
## [[3]]
## [1] "104.131.2.226"
## 
## [[4]]
##  [1] "2607:f8b0:400b:80a::100e" "74.125.226.101"           "74.125.226.102"          
##  [4] "74.125.226.100"           "74.125.226.96"            "74.125.226.104"          
##  [7] "74.125.226.99"            "74.125.226.103"           "74.125.226.105"          
## [10] "74.125.226.98"            "74.125.226.97"            "74.125.226.110"

That means you can pump a bunch of domain names from logs into `iptools` and get current IP address allocations out for them.

You can also do the reverse:

library(magrittr)
library(purrr)
library(iptools)
 
hostname_to_ip(c("rud.is", "dds.ec", "ironholds.org", "google.com")) %>% 
  flatten_chr() %>% 
  ip_to_hostname() %>% 
  flatten_chr()
 
##  [1] "104.236.112.222"           "dds.ec"                    "104.131.2.226"            
##  [4] "yyz08s13-in-x0e.1e100.net" "yyz08s13-in-f5.1e100.net"  "yyz08s13-in-f6.1e100.net" 
##  [7] "yyz08s13-in-f4.1e100.net"  "yyz08s13-in-f0.1e100.net"  "yyz08s13-in-f8.1e100.net" 
## [10] "yyz08s13-in-f3.1e100.net"  "yyz08s13-in-f7.1e100.net"  "yyz08s13-in-f9.1e100.net" 
## [13] "yyz08s13-in-f2.1e100.net"  "yyz08s13-in-f1.1e100.net"  "yyz08s13-in-f14.1e100.net"

Notice that it handled IPv6 addresses and also cases where no reverse mapping existed for an IP address.

You can convert IPv4 addresses to and from long integer format (the 4-octet version of IPv4 addresses exists primarily to make them easier for humans to grok), generate random IP addresses for testing, test IP addresses for validity and type, and reference data sets with registered assignments (so you can see allocated IP groups). Plus, it includes `xff_extract()`, which can help identify an actual IP address (helpful when connections come from behind proxies).
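
A quick sketch of a few of those helpers (function names as of `iptools` 0.3.0; check the package docs for the full list):

library(iptools)
 
# round-trip between dotted-quad and long integer form
ip_to_numeric("192.168.0.1")  # 3232235521
numeric_to_ip(3232235521)     # "192.168.0.1"
 
# classify addresses by type
ip_classify(c("192.168.0.1", "2607:f8b0:400b:80a::100e"))
## [1] "IPv4" "IPv6"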

We can’t thank Dirk enough for cranking out `AsioHeaders` since it means there will be many more network/”cyber” packages coming for R and available on every platform.

You can find `iptools` version `0.3.0` [on CRAN](https://cran.r-project.org/web/packages/iptools/) now (it may take your mirror a bit to catch up), grab the source [release](https://github.com/hrbrmstr/iptools/releases/tag/v0.3.0) on GitHub or check out the [repo](https://github.com/hrbrmstr/iptools/), poke around, submit issues and/or contribute!

Isn’t it great when an R package can help you with resolutions in the new year?

Gone are the days when one had a single computer plugged directly into a modem (cable, DSL or good ol’ Hayes). Even the days when there were just one or two computers connected via wires or invisible multi-gigahertz waves passing through the air are long gone. Today (as you’ll see in the February 2016 [OUCH! newsletter](http://securingthehuman.sans.org/resources/newsletters/ouch/2016)), there are scads of devices of all kinds on your home network. How can you keep track of them all?

Some router & wireless access point vendors provide tools on their device “admin” pages to see what’s connected, but they are inconsistent at best (and usually pretty ugly & cumbersome to navigate to). Thankfully, app purveyors have jumped in to fill the gap. Here’s a list of free or “freemium” (basic features for free, advanced features cost extra) tools for mobile devices and Windows or OS X (if you’re running Linux at home, I’m assuming you’re familiar with the tools available for Linux).

### iOS

– Fing
– iNet – Network Scanner
– Network Analyzer Lite – wifi scanner, ping & net info

### Android

– Fing – (google); (amazon)
– Pamn (google); (direct source)

### Windows

– Advanced IP Scanner
– Angry IP Scanner
– Fing
– MiTec Network Scanner
– nmap

### OS X

– Angry IP Scanner
– Fing
– IP Scanner
– LanScan
– nmap

Some of these tools are easier to work with than others, but they all install pretty easily (though “Fing” and “nmap” work at the command line on Windows & OS X, so if you’re not a “power user”, you may want to use other tools on those platforms). In most cases, it’s up to you to keep a copy of the output and perform your own “diffs”. One “pro” option for tools like “Fing” is the ability to have the tool store scan results “in the cloud” and perform this comparison for you.

Drop a note in the comments if you have other suggestions, but _vendors be warned_: I’ll be moderating all comments to help ensure no evil links or blatant product shilling makes it to reader eyeballs.