
Author Archives: hrbrmstr

Don’t look at me…I do what he does — just slower. #rstats avuncular • Resistance Fighter • Cook • Christian • [Master] Chef des Données de Sécurité @ @rapid7

Just joining in the fray of “where I’m speaking/where I’ll be the week of @RSAConference” posts…

SEM-003 – Information Security Leadership Development: Surviving as a Security Leader (Half Day – Delegates only)

WHEN: Monday : 0830-1130

I’m very pleased to be able to join:

– Derek Brink, Vice President & Research Fellow for IT Security & IT GRC, Aberdeen Group, a Harte-Hanks Company
– Justin Peavey, SVP Information Services & Security, CISO, Omgeo
– Dave Notch, President, Intensity Analytics
– Evan Wheeler, Director, Information Security, Omgeo
– James Burrell, Deputy Assistant Director, Federal Bureau of Investigation
– John Iatonna, SVP, Information Security, Edelman, Inc.

In this session, I’ll be covering “Are you fighting the wrong battles?” and participating in a panel discussion.

GRC-T18 – Data Analysis and Visualization for Security Professionals

WHEN: Tuesday : 1430-1530

@JayJacobs & I will be delving into the dark arts & science of conducting & communicating data analyses through data visualization with a plethora of background material and two case studies.

SPO1-R33 – Achievement Unlocked: Designing a Compelling Security Awareness Program

WHEN: Thursday : 1040-1140

@csoandy & I will be entertaining and educating folks on how to kick your security awareness program up a notch. It should be great fun, and animated interaction is greatly encouraged.

☛ PhöCon

WHEN: Thursday : 1800+

Third year in a row where a bunch of us go out for Vietnamese food. Ping me on Twitter (@hrbrmstr) for more details.

Metricon 8

WHEN: Friday : All Day!

A day of facilitated working sessions designed to radically transform critical areas of security metrics across the industry.

When not speaking, I’ll be attending many sessions, will have the “shield” on most of the time and would love to meet as many folks as possible during my time in SFO.

Here’s a quick example of a couple of additional ways to use the netintel R package I’ve been tinkering with. This could easily be done on the command line with other tools, but if you’re already doing scripting/analysis with R, this provides a quick way to tell if a list of IPs is in the @AlienVault IP reputation database. Zero revelations here for regular R users, but it might help some folks who are working to make R more of a first-class scripting citizen.

I whipped up the following bit of code to check how many IP addresses in the @Mandiant APT-1 FQDN dump were already in the AlienVault database. Reverse resolution of the Mandiant APT-1 FQDN list is a bit dubious at this point, so a cross-check with known current data is a good idea. I should also point out that not all the addresses resolved “well” (there are 2,046 FQDNs and my quick dig only yielded 218 usable IPs).

library(netintel)
 
# get the @AlienVault reputation DB
av.rep = Alien.Vault.Reputation()
 
# read in resolved APT-1 FQDNs list
apt.1 = read.csv("apt-1-ips.csv")
 
# basic set operation
whats.left = intersect(apt.1$ip,av.rep$IP)
 
# how many were in the quickly resolved apt-1 ip list?
length(apt.1$ip)
[1] 218
 
# how many are common across the lists?
length(whats.left)
[1] 44
 
# take a quick look at them
whats.left
[1] "12.152.124.11"   "140.112.19.195"  "161.58.182.205"  "165.165.38.19"   "173.254.28.80"  
[6] "184.168.221.45"  "184.168.221.54"  "184.168.221.56"  "184.168.221.58"  "184.168.221.68" 
[11] "192.31.186.141"  "192.31.186.149"  "194.106.162.203" "199.59.166.109"  "203.170.198.56" 
[16] "204.100.63.18"   "204.93.130.138"  "205.178.189.129" "207.173.155.44"  "207.225.36.69"  
[21] "208.185.233.163" "208.69.32.230"   "208.73.210.87"   "213.63.187.70"   "216.55.83.12"   
[26] "50.63.202.62"    "63.134.215.218"  "63.246.147.10"   "64.12.75.1"      "64.12.79.57"    
[31] "64.126.12.3"     "64.14.81.30"     "64.221.131.174"  "66.228.132.20"   "66.228.132.53"  
[36] "68.165.211.181"  "69.43.160.186"   "69.43.161.167"   "69.43.161.178"   "70.90.53.170"   
[41] "74.14.204.147"   "74.220.199.6"    "74.93.92.50"     "8.5.1.34"
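
As an aside, the apt-1-ips.csv file used above came from a quick-and-dirty resolution pass along these lines. This is a sketch only: the apt-1-fqdns.txt file name and the reliance on the dig binary being on the PATH are my assumptions, not part of the original workflow.

# resolve each FQDN and keep only IPv4-looking answers
fqdns = readLines("apt-1-fqdns.txt")
resolve.fqdn = function(fqdn) {
  # "+short" returns just the answer section (A records, CNAME targets, etc.)
  ans = system(sprintf("dig +short %s", fqdn), intern=TRUE)
  grep("^[0-9.]+$", ans, value=TRUE)
}
ips = unique(unlist(lapply(fqdns, resolve.fqdn)))
write.csv(data.frame(ip=ips), "apt-1-ips.csv", row.names=FALSE)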

So, that’s roughly a 20% overlap between the (quickly, and I’m sure incompletely) resolved “clean” APT-1 FQDN IPs and the AlienVault reputation database.
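
That figure is just the ratio of the two counts above:

# overlap fraction: common IPs / usable resolved IPs
length(whats.left) / length(apt.1$ip)
[1] 0.2018349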

For kicks, we can see where all the resolved APT-1 nodes live (BGP/network-wise) in relation to each other using some of the other library functions:

library(netintel)
library(igraph)
library(plyr)
 
apt.1 = read.csv("apt-1-ips.csv")
ips = apt.1$ip
 
# get BGP origin & peers
origin = BulkOrigin(ips)
peers = BulkPeer(ips)
 
# start graphing
g = graph.empty()
 
# Make IP vertices; IP endpoints are red
g = g + vertices(ips,size=1,color="red",group=1)
 
# Make BGP vertices; BGP nodes are orange
g = g + vertices(unique(c(peers$Peer.AS, origin$AS)),size=1.5,color="orange",group=2)
 
# no labels
V(g)$label = ""
 
# Make IP/BGP edges
ip.edges = lapply(ips,function(x) {
  iAS = origin[origin$IP==x,]$AS
  lapply(iAS,function(y){
    c(x,y)
  })
})
 
# Make BGP/peer edges
bgp.edges = lapply(unique(origin$BGP.Prefix),function(x) {
  startAS = unique(origin[origin$BGP.Prefix==x,]$AS)
  lapply(startAS,function(z) {
    pAS = peers[peers$BGP.Prefix==x,]$Peer.AS
    lapply(pAS,function(y) {
      c(z,y)
    })
  })
})
 
# tally how often each node appears in the edge lists (a handy sanity check)
node.count = table(c(unlist(ip.edges),unlist(bgp.edges)))
 
# add edges 
g = g + edges(unlist(ip.edges))
g = g + edges(unlist(bgp.edges))
 
# base edge weight == 1
E(g)$weight = 1
 
# simplify the graph
g = simplify(g, edge.attr.comb=list(weight="sum"))
 
# no arrows
E(g)$arrow.size = 0
 
# best layout for this
L = layout.fruchterman.reingold(g)
 
# plot the graph using the layout computed above
plot(g,layout=L,margin=0)

[Figure: force-directed graph of the resolved APT-1 IPs (red) and their origin/peer ASNs (orange)]

If we take the BGP peer relationships out of the graph (i.e., don’t add the bgp.edges in the above code), we can see the mal-host clusters even more clearly (the pseudo “Death Star” look is unintentional but apropos):

[Figure: the same graph without BGP peer edges; the per-ASN mal-host clusters are much more distinct]

We can also determine which ASNs the bigger clusters belong to by checking vertex degree. The “top” 5 clusters are:

16509 40676 36351 26496 15169 
    7     8     8    13    54
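
Here’s one way to produce that tally, assuming the graph g built above (the group vertex attribute separates the ASN nodes from the IP nodes):

# degree of just the ASN (group 2) vertices; the largest values
# correspond to the biggest clusters
asn.deg = degree(g)
tail(sort(asn.deg[V(g)$group == 2]), 5)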

While my library doesn’t support direct ASN detail lookup yet (an oversight), we can take those ASNs, check them out manually, and see the results:

16509   | US | arin     | 2000-05-04 | AMAZON-02 - Amazon.com, Inc.
40676   | US | arin     | 2008-02-26 | PSYCHZ - Psychz Networks
36351   | US | arin     | 2005-12-12 | SOFTLAYER - SoftLayer Technologies Inc.
26496   | US | arin     | 2002-10-01 | AS-26496-GO-DADDY-COM-LLC - GoDaddy.com, LLC
15169   | US | arin     | 2000-03-30 | GOOGLE - Google Inc.
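
Until that lookup makes it into the package, here’s a sketch of how the manual step could be scripted. It reuses the same socket approach as the bulk lookup functions and assumes Team Cymru’s whois service accepts the “AS<number>” verbose query form (verify against their current docs):

# sketch: fetch raw ASN detail records from the Team Cymru whois service
ASNDetail = function(asns, host="whois.cymru.com", port=43) {
  cmd = sprintf("begin\nverbose\n%s\nend\n",
                paste(sprintf("AS%s", asns), collapse="\n"))
  con = socketConnection(host=host, port=port, blocking=TRUE, open="r+")
  cat(cmd, file=con)
  response = readLines(con)
  close(con)
  response[-1] # drop the header line; one raw record per ASN
}
ASNDetail(c(16509, 40676, 36351, 26496, 15169))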

So Google servers are hosting the most mal-nodes from the resolved APT-1 list, followed by GoDaddy. I actually expected Amazon to be higher up in the list.

I’ll be adding igraph and ASN lookup functions to the netintel library soon. Also, if anyone has a better APT-1 IP list, please shoot me a link.

I happened across [Between Hype and Understatement: Reassessing Cyber Risks as a Security Strategy](http://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1107&context=jss) [PDF] when looking for something else at the [Journal of Strategic Security](http://scholarcommons.usf.edu/jss/) site and thought it was a good enough primer to annoy everyone with a tweet about it.

The paper is—well—_kinda_ wordy and has a Flesch-Kincaid grade reading level of 16*, making it well suited for academia but not for rapid consumption in this blog era we abide in. I promised some folks that I’d summarize it (that phrase always reminds me [of this](http://www.youtube.com/watch?v=uwAOc4g3K-g)), and so I shall (try).
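
(Aside for the data-inclined: the F/K grade is just a formula: 0.39 × average words per sentence + 11.8 × average syllables per word − 15.59. Here’s a rough, back-of-the-envelope R version; the sentence splitter and vowel-group syllable counter are deliberately naive approximations:)

# crude Flesch-Kincaid grade level estimate
fk.grade = function(txt) {
  sentences = max(1, sum(gregexpr("[.!?]+", txt)[[1]] > 0))
  words = unlist(strsplit(txt, "\\s+"))
  syllables = sum(sapply(gregexpr("[aeiouy]+", tolower(words)),
                         function(m) max(1, sum(m > 0))))
  0.39 * (length(words) / sentences) + 11.8 * (syllables / length(words)) - 15.59
}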

The fundamental arguments are:

– we underrate & often overlook pre-existing software weaknesses (a.k.a. vulnerabilities)
– we undervalue the costs of cybercrime by focusing solely on breaches & not including preventative/deterrence costs
– we get distracted from identifying real threats by over-hyped ones
– we suck at information sharing (not enough of it; incomplete, at times; too many “standards”)
– we underreport incidents—and that this actually _enables_ attackers
– we need a centralized body to report incidents to
– we should develop a complete & uniform taxonomy
– we must pay particular attention to vulnerabilities in critical infrastructure
– we must pressure governments & vendors to take an active role in “encouraging” removing vulnerabilities from software during the SDLC, not after deployment

The author discusses specific media references (there are a plethora of links in the endnotes) when it comes to hype and notes specific government initiatives when it comes to other topics such as incident handling/threat sharing (the author has a definite UK slant).

I especially liked this quote on threat actors/actions/motives & information sharing:

> _[the] distributed nature of the Internet can make it difficult to clearly attribute some incidents [as] criminal, terrorist actions, or acts of war. Consequently, to affirm that “the principal difference” between [these] “is in the attacker’s intent” is far too simplistic when many cyber-attackers cannot be identified. It is also quite simplistic to attribute financial motivation only to cyber criminals since terrorists can be motivated by monetary gain in order to finance their political actions. [An] added difficulty is that a pattern of cyber incidents may not reveal itself unless information is shared between the different stakeholders. For example, taken in isolation, a bank’s website being temporarily unavailable may look innocuous and not worth reporting to the competent agencies. Yet, when associated with other cyber incidents in which the victims and timeframe are similar, it may reveal a concerted effort to target a particular type of business or e-government resources, a pattern of behavior that could amount to crime (fraud, espionage) or terrorism if the motive can be established. Detection thus may depend on information being shared._

She does spend quite a bit of ink on vulnerabilities. Some choice (shorter) quotes:

> _[the] economic analysis adopted by software companies does not take into account (or not sufficiently) that the costs of non-secure software are significant, that these costs will be borne by others on the network and ultimately by themselves in clean-up operations_

> _Of course, to fix the vulnerabilities after release is laudable; it is also commendable that those companies participate in huge clean-up operations of botnets like Microsoft did in 2010. However, there is nothing more paradoxical than Microsoft (and others) spending money to circumscribe the effects of the very vulnerabilities they contributed to create in the first place_

She concludes by suggesting that governments work with ISPs to severely restrict or disable the internet connections of users found to be infected and contributing to spam/botnets, positing that this will cause users to demand more of software vendors or use the free market to shift their loyalties to providers who do more to build less vulnerable software.

Again, I think it’s a good primer on the subject (despite some dubious analogies peppered throughout), but I also think there is too much focus on vulnerabilities and not enough on threat actors/actions/motives. I do like how she mixes economic theory into a topic that is usually defined solely in terms of warfare without diminishing the potential impacts of either.

The influence of Beck & Giddens would have been evident even if her references to [Risk Society](http://en.wikipedia.org/wiki/Risk_society) didn’t bookend the prose. I’ll leave you with what might just be her own one-sentence summary of the entire paper, one that is definitely apropos for our current “cyber” situation:

> _[the] risks that industrialization and modernization created tend to be global, systemic with a “boomerang effect,” and denied, overlooked, or overhyped._

*Ironically enough, this blog post comes out at F/K-level 22-23
dat <- data.frame(t=seq(0, 2*pi, by=0.1))
xhrt <- function(t) 16*sin(t)^3
yhrt <- function(t) 13*cos(t)-5*cos(2*t)-2*cos(3*t)-cos(4*t)
dat$y <- yhrt(dat$t)
dat$x <- xhrt(dat$t)
# polygon() needs an open plot to draw into, so set up blank axes first
with(dat, plot(x, y, type="n", axes=FALSE, xlab="", ylab=""))
with(dat, polygon(x, y, col="hotpink"))

i heaRt you!

[Figure: the resulting heart curve, filled in hot pink]

(R code inspired by/lifted from: DWin on StackOverflow)

So, I’ve had some quick, consecutive blog posts around this R package I’m working on, and this one is more of an answer to my own, self-identified question of “so what?”. As I was working on an importer for AlienVault’s IP reputation database, I thought it might be interesting to visualize aspects of that data using some of the meta-information gained from the other “netintel” (my working name for the package) functions.

Acting on that impulse, I extracted all IPs that were uniquely identified as “Malicious Hosts” (it’s a category in their database), did ASN & peer lookups for them, and made two DrL graphs from them (I did a test singular graph, but it would require a Times Square monitor to view).
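
Here’s a rough sketch of that extraction and layout step. The Type and IP column names for the reputation data frame are assumptions; check str(av.rep) against the actual importer output:

library(netintel)
library(igraph)

av.rep = Alien.Vault.Reputation()
# NOTE: the Type/IP column names below are assumptions
mal = unique(av.rep[av.rep$Type == "Malicious Host",]$IP)

origin = BulkOrigin(mal)
# guard against IPs that didn't get an origin answer back
mal = mal[mal %in% origin$IP]

g = graph.empty()
g = g + vertices(mal,size=1,color="red")                       # hosts
g = g + vertices(unique(origin$AS),size=1.5,color="lightblue") # ASNs
g = g + edges(unlist(lapply(mal,function(x) {
  lapply(origin[origin$IP==x,]$AS,function(y) c(x,y))
})))
E(g)$arrow.size = 0
plot(g, layout=layout.drl(g), vertex.label=NA, margin=0)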

[Figure: first DrL graph of malicious hosts (red) clustered by their ASNs (blue)]

[Figure: second DrL graph of malicious hosts (red) clustered by their ASNs (blue)]

You’ll need to click both images to view them at a larger, more legible size. Red nodes are hosts; blue ones are the ASNs they belong to.

While some of the visualized data was pretty obvious from the data table (nigh-consecutive IP addresses, in some cases), seeing the malicious clusters per ASN was (to me) pretty interesting. I don’t perform malicious host/network analysis as part of the day job, so the apparent clustering (and the “disconnected” nodes) may not be interesting to anyone but me, but it gave me a practical test case for the library I’m working on and may be interesting to others. It also shows you can make pretty graphs with R.

I’ve got the crufty R code up on github now and will keep poking at it as I have time. I’ll add the code that made the above image to the repository over the weekend.

The small igraph visualization in the previous post shows the basics of what you can do with the BulkOrigin & BulkPeer functions, and I thought a larger example with some basic D3 tossed in might be even more useful.

Assuming you have the previous functions in your environment, the following builds a larger graph structure (the IPs came from an overnight sample of pcap captured communication between my MacBook Pro & cloud services) and plots a similar circular graph:

library(igraph)
 
ips = c("100.43.81.11","100.43.81.7","107.20.39.216","108.166.87.63","109.152.4.217","109.73.79.58","119.235.237.17","128.12.248.13","128.221.197.57","128.221.197.60","128.221.224.57","129.241.249.6","134.226.56.7","137.157.8.253","137.69.117.58","142.56.86.35","146.255.96.169","150.203.4.24","152.62.109.57","152.62.109.62","160.83.30.185","160.83.30.202","160.83.72.205","161.69.220.1","168.159.192.57","168.244.164.254","173.165.182.190","173.57.120.151","175.41.236.5","176.34.78.244","178.85.44.139","184.172.0.214","184.72.187.192","193.164.138.35","194.203.96.184","198.22.122.158","199.181.136.59","204.191.88.251","204.4.182.15","205.185.121.149","206.112.95.181","206.47.249.246","207.189.121.46","207.54.134.4","209.221.90.250","212.36.53.166","216.119.144.209","216.43.0.10","23.20.117.241","23.20.204.157","23.20.9.81","23.22.63.190","24.207.64.10","24.64.233.203","37.59.16.223","49.212.154.200","50.16.130.169","50.16.179.34","50.16.29.33","50.17.13.221","50.17.43.219","50.18.234.67","63.71.9.108","64.102.249.7","64.31.190.1","65.210.5.50","65.52.1.12","65.60.80.199","66.152.247.114","66.193.16.162","66.249.71.143","66.249.71.47","66.249.72.76","66.41.34.181","69.164.221.186","69.171.229.245","69.28.149.29","70.164.152.31","71.127.49.50","71.41.139.254","71.87.20.2","74.112.131.127","74.114.47.11","74.121.22.10","74.125.178.81","74.125.178.82","74.125.178.88","74.125.178.94","74.176.163.56","76.118.2.138","76.126.174.105","76.14.60.62","76.168.198.238","76.22.130.45","77.79.6.37","81.137.59.193","82.132.239.186","82.132.239.97","8.28.16.254","83.111.54.154","83.251.15.145","84.61.15.10","85.90.76.149","88.211.53.36","89.204.182.67","93.186.30.114","96.27.136.169","97.107.138.192","98.158.20.231","98.158.20.237")
origin = BulkOrigin(ips)
peers = BulkPeer(ips)
 
g = graph.empty() + vertices(ips,size=10,color="red",group=1)
g = g + vertices(unique(c(peers$Peer.AS, origin$AS)),size=10,color="lightblue",group=2)
V(g)$label = c(ips, unique(c(peers$Peer.AS, origin$AS)))
ip.edges = lapply(ips,function(x) {
  c(x,origin[origin$IP==x,]$AS)
})
bgp.edges = lapply(unique(origin$BGP.Prefix),function(x) {
  startAS = unique(origin[origin$BGP.Prefix==x,]$AS)
  pAS = peers[peers$BGP.Prefix==x,]$Peer.AS
  lapply(pAS,function(y) {
    c(startAS,y)
  })
})
g = g + edges(unlist(ip.edges))
g = g + edges(unlist(bgp.edges))
E(g)$weight = 1
g = simplify(g, edge.attr.comb=list(weight="sum"))
E(g)$arrow.size = 0
g$layout = layout.circle
plot(g)

I’ll let you run that to see how horrid a large, style-/layout-unmodified circular layout graph looks.
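
If you’d rather not squint at the circle, one quick improvement (purely a suggestion, not part of the original snippet) is to reuse the force-directed layout from the APT-1 graphs above and suppress the labels:

# swap the circle for a force-directed layout; labels off for legibility
g$layout = layout.fruchterman.reingold(g)
plot(g, vertex.label=NA, margin=0)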

Thanks to a snippet on StackOverflow, it’s really easy to get this into D3:

library(RJSONIO)

# nodes: vertex names + group (group drives the D3 color-coding)
temp<-cbind(V(g)$name,V(g)$group)
colnames(temp)<-c("name","group")
js1<-toJSON(temp)

# links: dump the igraph edge list and re-read it as source/target
# index pairs (these line up with D3's zero-based nodes array)
write.graph(g,"/tmp/edgelist.csv",format="edgelist")
edges<-read.csv("/tmp/edgelist.csv",sep=" ",header=F)
colnames(edges)<-c("source","target")
edges<-as.matrix(edges)
js2<-toJSON(edges)

# stitch the fragments into the {"nodes":...,"links":...} structure
# the D3 force-directed examples expect
asn<-paste('{"nodes":',js1,',"links":',js2,'}',sep="")
write(asn,file="/tmp/asn.json")

We can take the resulting asn.json file and use it as a drop-in replacement for one of the example D3 force-directed layout building blocks and produce this:

[Figure: D3 force-directed rendering of the IP/ASN graph (click through for a larger version)]

Rather than view a static image, you can view the resulting D3 visualization (warning: it’s fairly big).

Both the conversion snippet and the D3 code can be easily tweaked to add more detail and be a tad more interactive/informative, but I’m hoping this larger example provides further inspiration for folks looking to do computer network analysis & visualization with R and may also help some others build more linkages between R & D3.
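
As one illustrative tweak (not in the original snippet), you could carry vertex degree along in the nodes array so the D3 side can size nodes by connectivity; the D3 code would need a matching change to read the new field:

# add a degree column so D3 can scale node radius by connectivity
temp<-cbind(V(g)$name,V(g)$group,degree(g))
colnames(temp)<-c("name","group","degree")
js1<-toJSON(temp)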

This is part of a larger project I’m working on, but it’s useful enough to share (github version coming soon).

The fine folks at @TeamCymru have a great service to map IP addresses to ASN/BGP information en masse.

There are libraries for Python, Perl and other languages but none for R (that I could find). So, I threw together a quick set of functions to interface to @TeamCymru’s service. Unlike many other modern services, this one isn’t XML or JSON over a RESTful interface, so the code uses a socketConnection() over the standard WHOIS TCP port to post and retrieve simple text lists.

#
# bulkorigin.R - perform bulk IP to ASN mapping via Team Cymru whois service
#
# Author: @hrbrmstr
# Version: 0.1
# Date: 2013-02-07
#
# Copyright 2013 Bob Rudis
# 
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#   
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
# 
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
 
library(plyr)
 
# short function to trim leading/trailing whitespace
trim <- function (x) gsub("^\\s+|\\s+$", "", x)
 
BulkOrigin <- function(ip.list,host="v4.whois.cymru.com",port=43) {
 
  # Retrieves BGP Origin ASN info for a list of IP addresses
  #
  # NOTE: IPv4 version
  #
  # NOTE: Team Cymru's service is NOT a GeoIP service!
  # Do not use this function for that, as your results will not
  # be accurate.
  #
  # Args:
  #   ip.list : character vector of IP addresses
  #   host: which server to hit for lookup (defaults to Team Cymru's server)
  #   port: TCP port to use (defaults to 43)
  #
  # Returns:
  #   data frame of BGP Origin ASN lookup results
 
 
  # setup query
  cmd = "begin\nverbose\n" 
  ips = paste(unlist(ip.list), collapse="\n")
  cmd = sprintf("%s%s\nend\n",cmd,ips)
 
  # setup connection and post query
  con = socketConnection(host=host,port=port,blocking=TRUE,open="r+")  
  cat(cmd,file=con)
  response = readLines(con)
  close(con)
 
  # trim header, split fields and convert results
  response = response[2:length(response)]
  response = lapply(response,function(n) {
    sapply(strsplit(n,"|",fixed=TRUE),trim)
  })  
  response = adply(response,c(1))
  response = response[,2:length(response)]
  names(response) = c("AS","IP","BGP.Prefix","CC","Registry","Allocated","AS.Name")
 
  return(response)
 
}
 
BulkPeer <- function(ip.list,host="v4-peer.whois.cymru.com",port=43) {
 
  # Retrieves BGP Peer ASN info for a list of IP addresses
  #
  # NOTE: IPv4 version
  #
  # NOTE: Team Cymru's service is NOT a GeoIP service!
  # Do not use this function for that, as your results will not
  # be accurate.
  #
  # Args:
  #   ip.list : character vector of IP addresses
  #   host: which server to hit for lookup (defaults to Team Cymru's server)
  #   port: TCP port to use (defaults to 43)
  #
  # Returns:
  #   data frame of BGP Peer ASN lookup results
 
 
  # setup query
  cmd = "begin\nverbose\n" 
  ips = paste(unlist(ip.list), collapse="\n")
  cmd = sprintf("%s%s\nend\n",cmd,ips)
 
  # setup connection and post query
  con = socketConnection(host=host,port=port,blocking=TRUE,open="r+")  
  cat(cmd,file=con)
  response = readLines(con)
  close(con)
 
  # trim header, split fields and convert results
  response = response[2:length(response)]
  response = lapply(response,function(n) {
    sapply(strsplit(n,"|",fixed=TRUE),trim)
  })  
  response = adply(response,c(1))
  response = response[,2:length(response)]
  names(response) = c("Peer.AS","IP","BGP.Prefix","CC","Registry","Allocated","Peer.AS.Name")
  return(response)
 
}

Take a list of IPs, open a socket connection, formulate a bulk query, and convert the results. Here’s a small script to test it:

ips = c("100.43.81.11","100.43.81.7")
origin = BulkOrigin(ips)
str(origin)
peers = BulkPeer(ips)
str(peers)

That code outputs:

'data.frame':	2 obs. of  7 variables:
 $ AS        : chr  "13238" "13238"
 $ IP        : chr  "100.43.81.11" "100.43.81.7"
 $ BGP.Prefix: chr  "100.43.64.0/19" "100.43.64.0/19"
 $ CC        : chr  "US" "US"
 $ Registry  : chr  "arin" "arin"
 $ Allocated : chr  "2011-12-06" "2011-12-06"
 $ AS.Name   : chr  "YANDEX Yandex LLC" "YANDEX Yandex LLC"

and

'data.frame':	8 obs. of  7 variables:
 $ Peer.AS     : chr  "174" "3257" "9002" "10310" ...
 $ IP          : chr  "100.43.81.11" "100.43.81.11" "100.43.81.11" "100.43.81.11" ...
 $ BGP.Prefix  : chr  "100.43.64.0/19" "100.43.64.0/19" "100.43.64.0/19" "100.43.64.0/19" ...
 $ CC          : chr  "US" "US" "US" "US" ...
 $ Registry    : chr  "arin" "arin" "arin" "arin" ...
 $ Allocated   : chr  "2011-12-06" "2011-12-06" "2011-12-06" "2011-12-06" ...
 $ Peer.AS.Name: chr  "COGENT Cogent/PSI" "TINET-BACKBONE Tinet SpA" "RETN-AS ReTN.net Autonomous System" "YAHOO-1 - Yahoo!" ...

respectively for each str().

Nothing super-sexy, but it’s part of a mission I’m on to make IP addresses “first class citizens” in R. I’m starting with building some smaller functions that accumulate IP metadata and will ultimately collect them all into a compact R library.

In the interim, I thought these two routines might be useful to some folks.

With just these two functions, you can use various graphing libraries to get a picture of the network connectivity. Here’s a small sample to get you started:

library(igraph)
 
ips = c("100.43.81.11")
origin = BulkOrigin(ips)
peers = BulkPeer(ips)
 
g = graph.empty() + vertices(c(ips, peers$Peer.AS, origin$AS),size=30)
V(g)$label = c(ips, peers$Peer.AS, origin$AS)
e = lapply(peers$Peer.AS,function(x) {
  c(origin$AS,x)
})
g = g + edges(unlist(e))
g = g + edge(ips, origin$AS)
g$layout = layout.circle
plot(g)

[Figure: circular igraph plot of 100.43.81.11, its origin ASN and its BGP peers]

If you know of any other R libraries or code that provide functions that operate on IP addresses or interface to services that provide IP address metadata, please drop a note in the comments or ping me on Twitter.

Yesterday, I took a very short video capture from [wind map](http://hint.fm/wind/) of the massive wind flows buffeting the northeast. I did the same this morning and stitched them together to see what a difference a day makes.

Nothing earth-shattering here, but it is amazing to see how quickly the patterns can change. (They changed intra-day yesterday as well, but I didn’t have time to do a series of videos this morning; the major change was that the very clear northerly pattern seen earlier in the day was replaced by a very clear easterly flow, with much more power than seen in today’s capture.)

[Video: Winter Wind Flows]