
Tag Archives: XML

As I was putting together the `coord_proj` ggplot2 extension, I posted a gist that I shared on Twitter. Said gist received a comment (several, in fact), and a bunch of us were painfully reminded that there is no built-in way to receive notifications of gist comment activity.

@jennybryan posited that it could be possible to use IFTTT as a broker for these notifications, but after some checking that ended up not being directly doable since there are no “gist comment” triggers to act upon in IFTTT.

There are a few standalone Ruby gems that programmatically retrieve gist comments, but I wasn’t interested in managing a Ruby workflow [ugh]. I did find a Heroku-hosted service (Ruby-based again) that will turn gist comments into an RSS/Atom feed. I gave it a shot and hooked it up to IFTTT, but my feed is far enough down the food chain there that it never gets updated. It was possible to deploy that app on my own Heroku instance, but, again, I’m not interested in managing a Ruby workflow.

The Ruby scripts pretty much:

– grab your main gist RSS/Atom feed
– visit each gist in the feed
– extract comments & comment metadata from them (if any)
– return a composite data structure you can do anything with
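Those steps are easy to mirror in most languages. As a rough sketch, here is the first step (parsing the gist Atom feed into title/link pairs) in Python's standard library; `gist_entries` is a hypothetical name, and steps 2–4 would just be per-link fetches and scrapes:

```python
import xml.etree.ElementTree as ET

# Atom elements are namespaced, so queries must qualify them
ATOM = {"atom": "http://www.w3.org/2005/Atom"}

def gist_entries(feed_xml):
    """Step 1: parse a user's gist Atom feed into (title, link) pairs.
    Steps 2-4 would fetch each link and scrape its comments."""
    root = ET.fromstring(feed_xml)
    return [
        (entry.findtext("atom:title", default="", namespaces=ATOM),
         entry.find("atom:link", ATOM).get("href"))
        for entry in root.findall("atom:entry", ATOM)
    ]
```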

That’s super-easy to duplicate in R, so I decided to build a small R script that does all that and generates an RSS/Atom file which I added to my Feedly feeds (I’m pretty much always scanning RSS, so really didn’t need the IFTTT notification setup). I put it into a `cron` job that runs every hour. When Feedly refreshes the feed, a new entry will appear whenever there’s a new comment.

The script is below and on GitHub (ironically, as a gist). Here’s what you’ll grok from the code:

– one way to deal with the “default namespace” issue in R+XML
– one way to deal with error checking for scraping
– how to build an XML file (and, specifically, an RSS/Atom feed) with R
– how to escape XML entities with R
– how to get an XML object as a character string in R

You’ll definitely need to tweak this a bit for your own setup, but it should be a fairly complete starting point for you to work from. To see the output, grab the generated feed.

# Roll your own GitHub Gist Comments Feed in R

library(xml2)    # github version
library(rvest)   # github version
library(stringr) # for str_trim & str_replace
library(dplyr)   # for data_frame & bind_rows
library(pbapply) # free progress bars for everyone!
library(XML)     # to build the RSS feed

who <- "hrbrmstr" # CHANGE ME!

# Grab the user's gist feed -----------------------------------------------

gist_feed <- sprintf("", who)
feed_pg <- read_xml(gist_feed)
ns <- xml_ns_rename(xml_ns(feed_pg), d1 = "feed")

# Extract the links & titles of the gists in the feed ---------------------

links <- xml_attr(xml_find_all(feed_pg, "//feed:entry/feed:link", ns), "href")
titles <- xml_text(xml_find_all(feed_pg, "//feed:entry/feed:title", ns))

#' This function does the hard part by iterating over the
#' links/titles and building a tbl_df of all the comments per-gist
get_comments <- function(links, titles) {

  bind_rows(pblapply(1:length(links), function(i) {

    # get gist
    pg <- read_html(links[i])

    # look for comments
    ref <- tryCatch(html_attr(html_nodes(pg, "div.timeline-comment-wrapper a[href^='#gistcomment']"), "href"),
                    error=function(e) character(0))

    # in theory if 'ref' exists then the rest will
    if (length(ref) != 0) {
      # if there were comments, get all the metadata we care about
      author <- html_text(html_nodes(pg, "div.timeline-comment-wrapper"))
      timestamp <- html_attr(html_nodes(pg, "div.timeline-comment-wrapper time"), "datetime")
      contentpg <- str_trim(html_text(html_nodes(pg, "div.timeline-comment-wrapper div.comment-body")))
    } else {
      ref <- author <- timestamp <- contentpg <- character(0)
    }

    # bind_rows ignores 0-row tbl_df's, so return an empty one
    # when a gist has no comments
    if (sum(lengths(list(ref, author, timestamp, contentpg)) == 0)) {
      return(data_frame())
    }

    return(data_frame(title=titles[i], link=links[i],
                      ref=ref, author=author,
                      timestamp=timestamp, contentpg=contentpg))

  }))

}

comments <- get_comments(links, titles)

# Build the Atom feed -----------------------------------------------------

feed <- xmlTree("feed")
feed$addNode("id", sprintf("user:%s", who))
feed$addNode("title", sprintf("%s's gist comments", who))
feed$addNode("icon", "")
feed$addNode("link", attrs=list(href=sprintf("", who)))
feed$addNode("updated", format(Sys.time(), "%Y-%m-%dT%H:%M:%SZ", tz="GMT"))

for (i in 1:nrow(comments)) {
  feed$addNode("entry", close=FALSE)
    feed$addNode("id", sprintf("gist:comment:%s:%s", who, comments[i, "timestamp"]))
    feed$addNode("link", attrs=list(href=sprintf("%s%s", comments[i, "link"], comments[i, "ref"])))
    feed$addNode("title", sprintf("Comment by %s", comments[i, "author"]))
    feed$addNode("updated", comments[i, "timestamp"])
    feed$addNode("author", close=FALSE)
      feed$addNode("name", comments[i, "author"])
    feed$closeTag()
    # entity-escape the comment HTML so it survives as Atom 'html' content
    feed$addNode("content", saveXML(xmlTextNode(as.character(comments[i, "contentpg"])), prefix=""),
                 attrs=list(type="html"))
  feed$closeTag()
}

rss <- str_replace(saveXML(feed), "<feed>", '<feed xmlns="">')
writeLines(rss, con="feed.xml")
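The `xml_ns_rename()` call near the top of the script deals with the classic "default namespace" problem: Atom's elements carry no prefix in the document but still live in the Atom namespace, so unqualified XPath queries match nothing. As an aside, the same issue shows up in other XML libraries; a tiny illustration in Python's standard library:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<feed xmlns="http://www.w3.org/2005/Atom"><title>demo</title></feed>'
)

# An unqualified search finds nothing: <title> is in the default namespace.
assert doc.find("title") is None

# Binding the namespace URI to a prefix of your choosing (the moral
# equivalent of xml_ns_rename(xml_ns(feed_pg), d1 = "feed") in R)
# makes the query work.
ns = {"feed": "http://www.w3.org/2005/Atom"}
assert doc.find("feed:title", ns).text == "demo"
```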

To get that RSS feed into something an internet service can process, you have to make sure `feed.xml` is written to a directory that translates to a publicly accessible web location (mine is the generated feed mentioned above, if you want to see it).
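For comparison, the feed-construction and entity-escaping steps translate directly to other languages. A minimal Python sketch, where `build_feed` and its input shape are hypothetical, loosely mirroring the `comments` columns from the R script:

```python
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

# escape() is the counterpart of letting xmlTextNode() entity-encode content
assert escape("AT&T <3") == "AT&amp;T &lt;3"

def build_feed(who, comments):
    """Build a minimal Atom feed; 'comments' is a list of dicts with
    author/timestamp/contentpg keys (hypothetical shape)."""
    feed = ET.Element("feed", xmlns="http://www.w3.org/2005/Atom")
    ET.SubElement(feed, "title").text = "%s's gist comments" % who
    for c in comments:
        entry = ET.SubElement(feed, "entry")
        ET.SubElement(entry, "title").text = "Comment by %s" % c["author"]
        ET.SubElement(entry, "updated").text = c["timestamp"]
        # the serializer entity-encodes the HTML, which type="html" requires
        ET.SubElement(entry, "content", type="html").text = c["contentpg"]
    return ET.tostring(feed, encoding="unicode")
```

The script's final `writeLines(rss, con="feed.xml")` then corresponds to a plain file write of the returned string.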

On the internet-facing Ubuntu box that generated the feed I’ve got a `cron` entry:

30  * * * * /home/bob/bin/gengcfeed.R

which means it’s going to check GitHub for comment updates once an hour, at 30 minutes past. Tune said parameters to your liking.

At the top of `gengcfeed.R` I have an `Rscript` shebang (typically of the form `#!/usr/bin/env Rscript`), and the execute bit is set on the file.

Run the file by hand first, then test the feed with an online feed validator to ensure it’s accessible and validates correctly. Now you can enter that feed URL into your favorite newsfeed reader (I use @feedly).

One of my most popular blog posts — 24,000 reads — in the old, co-mingled site was a short snippet on how to strip HTML tags from a block of content in Objective-C. It’s been used by many-an-iOS developer (which was the original intent).

An intrepid reader & user (“Brian” – no other attribution available) found a memory leak that really rears its ugly head when parsing large content blocks. The updated code is below (with the original post text) and also in the comments on the old site. If Brian reads this, please post full attribution info in the comments or to @hrbrmstr so I can give you proper credit.

I needed to strip the tags from some HTML that was embedded in an XML feed so I could display a short summary from the full content in a UITableView. Rather than go through the effort of parsing HTML on the iPhone (as I already parsed the XML file) I built this simple method from some half-finished snippets I found. It has worked in all of the cases I have needed, but your mileage may vary. It is at least a working method (which cannot be said about most of the other examples). It works both in iOS (iPhone/iPad) and in plain-old OS X code, too.

- (NSString *) stripTags:(NSString *)str {

    NSMutableString *html = [NSMutableString stringWithCapacity:[str length]];

    NSScanner *scanner = [NSScanner scannerWithString:str];
    NSString *tempText = nil;

    while (![scanner isAtEnd]) {

        // copy everything up to the next tag opener
        [scanner scanUpToString:@"<" intoString:&tempText];

        if (tempText != nil)
            [html appendString:tempText];

        // skip over the tag itself
        [scanner scanUpToString:@">" intoString:NULL];

        if (![scanner isAtEnd])
            [scanner setScanLocation:[scanner scanLocation] + 1];

        // release the scanned chunk each pass -- this is the memory-leak fix
        tempText = nil;
    }

    return html;
}
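For readers outside the Apple ecosystem, the same scan-and-skip technique is a few lines in any language. Here is a Python sketch (not a robust HTML parser; it inherits the NSScanner version's behavior, including silently dropping everything after an unterminated trailing tag):

```python
def strip_tags(html):
    """Scanner-style tag stripper: copy text up to each '<',
    then skip ahead to the matching '>'."""
    out = []
    i = 0
    while i < len(html):
        lt = html.find("<", i)
        if lt == -1:               # no more tags: keep the rest
            out.append(html[i:])
            break
        out.append(html[i:lt])     # keep text before the tag
        gt = html.find(">", lt)
        if gt == -1:               # unterminated tag: drop the remainder
            break
        i = gt + 1                 # resume just past the tag
    return "".join(out)
```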