
Author Archives: hrbrmstr


If you haven’t viewed/read Wendy Nather’s (@451Wendy) insightful [Living Below The Security Poverty Line](https://451research.com/t1r-insight-living-below-the-security-poverty-line) you really need to do that before continuing (we’ll still be here when you get back).

Unfortunately, the catalyst for this post came from two recent, real-world events: my renewed exposure to the apparently ever-increasing homelessness problem in San Francisco (a side effect of choosing a hotel 10 blocks away from Moscone) and the hacking of a [small, local establishment](http://www.tnhonline.com/works-bakery-customers-targeted-by-cyber-thieves-1.2988390#.UTMuF-tASS0) resulting in the exposure of customer credit cards.

If you do any mom-and-pop, brick-and-mortar shopping you’ve seen it: the Windows-based point-of-sale terminal that is the *only* computer for the owners. Your credit card will be scanned on the same machine where cat videos are watched and e-mail is read. In many small shops, that machine is also where accounting functions are performed.

These truly small business (TSB) owners aren’t living below the security poverty line, they are security hobos. They *kinda* know they need to care about the safety of their data, but their focus is on their business or creative processes. When they do have time to care about security, that part of their world is so complex that it’s far easier to ignore it than to do something about it. If your immediate reaction was to disagree with my complexity posit, here are just a few tasks a TSB owner must face in a world of modern commerce:

– Applying operating system patches
– Updating browser software
– Updating Flash
– Updating Java
– Maintaining the web site/Twitter/Facebook presence securely
– Recognizing phishing e-mails/posts/tweets
– Understanding browser security
– Keeping signature-based anti-malware up-to-date
– Remembering passwords for the system, POS vendor, government sites, e-mail, etc.
– Maintaining a secure Wi-Fi network and Internet firewall
– Maintaining physical security (e.g. cameras)

Those tasks may be as automatic as breathing for security folk and technically-savvy users, but they are extraneous, confusing chores for most TSB owners and may often cause instability issues with the wretched POS software options out in the marketplace. These folks also cannot afford to hire security consultants to do this work for them.

Verizon’s 2012 DBIR & Trustwave’s 2012 report both showed that [these types of businesses](http://www.slate.com/articles/technology/technology/2012/03/verizon_s_data_breach_investigations_report_reveals_that_restaurants_are_the_easiest_target_for_hackers_.single.html#pagebreak_anchor_2) were part of the groups most targeted by criminals, yet the best our industry can do is dress up folks in schoolgirl costumes at @RSAConference whilst telling TSBs to keep their systems up-to-date and not re-use passwords. It’s the security equivalent of walking by a truly desperate person on the street without even making eye contact as your body language exudes the “get a job” vibe.

We have to do better than this.

Until software and hardware vendors start to—or are forced to—actually care about security, it will be up to security professionals to create the digital equivalent of a soup kitchen to make the situation better. What can you do?

– speak at local Chamber of Commerce meetings and provide practical take-aways for those who attend
– discuss security topics with friends or relatives who are TSB owners
– have your [ISSA|ISC2|NAISG] chapter set up a booth at conventions which attract TSBs (y’know…get out of the echo chamber, mebbe?)
– raise awareness through blogging and other media outlets
– produce & distribute awareness materials—a great example would be @Veracode’s non-domain [infographics](http://www.veracode.com/blog/category/infographics/)
– demand better (in general) out of your security vendors
– lobby government for better security standards

It may not seem like much, but we have to start somewhere if we’re going to find a way to help protect those who are most vulnerable, especially since it will also mean helping to keep *our own* information safe.

In case you are a truly small business owner who is reading this post, there are some things you can do to help ensure you won’t be a victim:

– Use a dedicated machine for your POS work—an iPad with [Square](https://squareup.com/) is a good option but doesn’t work for everyone
– Do not browse the Internet or perform other online tasks on the system you use for accounting
– Use @1Password to create, store & manage all your passwords on all your systems/devices
– Use [Secunia PSI](http://secunia.com/vulnerability_scanning/personal/) to help keep your Windows systems up-to-date
– Set all operating system and anti-malware software to auto-update
– Do not put your security cameras on the Internet; if you do, password protect them
– Research what your responsibilities are and what actions you’ll need to take in the event you do discover that your business or customer information has been exposed

Come join us for a PhöCon good time at 18:30 (6:30PM) Thursday! The neighborhood is…interesting…but it’s close to Moscone and has had good food for the past couple years. I’ll be heading up there from the Expo area at ~1815. Hit me up on Twitter if you want to head out together.

[Miss Saigon](http://misssaigonsf.com) @ 100 6th Street



Many thanks to all who attended the talk @jayjacobs & I gave at RSA on Tuesday, February 26, 2013. It was really great to be able to talk to so many of you afterwards as well.

We’ve enumerated quite a bit of non-slide-but-in-presentation information that we wanted to aggregate into a blog post so you can viz along at home. If you need more of a guided path, I strongly encourage you to take a look at some of the free courses over at [Coursera](https://www.coursera.org/).

For starters, here’s a bit.ly bundle of data analysis & visualization bookmarks that @dseverski & I maintain. We’ve been doing (IMO) a pretty good job adding new resources as they come up, and it may contain some duplicates of the ones below.

People Mentioned

– [Stephen Few’s Perceptual Edge blog](http://www.perceptualedge.com/) : Start from the beginning to learn from a giant in information visualization
– [Andy Kirk’s Visualising Data blog](http://www.visualisingdata.com/) (@visualisingdata) : Perhaps the quintessential leader in the modern visualization movement.
– [Mike Bostock’s blog](http://bost.ocks.org/mike/) (@mbostock) : Creator of D3 and producer of amazing, interactive graphics for the @NYTimes
– [Edward Tufte’s blog](http://www.edwardtufte.com/tufte/) : The father of what we would now identify as our core visualization principles & practices

Tools Mentioned

– [R](http://www.r-project.org/) : Jay & I probably use this a bit too much as a hammer (i.e. treat every data project as a nail) but it’s just far too flexible and powerful to not use as a go-to resource
– [RStudio](http://www.rstudio.com/) : An *amazing* IDE for R. I, personally, usually despise IDEs (yes, I even dislike Xcode), but RStudio truly improves workflow by several orders of magnitude. There are both desktop and server versions of it; the latter gives you the ability to set up a multi-user environment and use the IDE from practically anywhere you are. RStudio also makes generating [reproducible research](http://cran.r-project.org/web/views/ReproducibleResearch.html) a joy with built-in easy access to tools like [knitr](http://yihui.name/knitr/) (there’s a tiny worked example of that workflow right after this list).
– [iPython](http://ipython.org/) : This enhanced interactive Python environment takes an already amazing language and kicks it up a few notches. It brings it up to the level of R+RStudio, especially with its knitr-like [iPython Notebooks](http://ipython.org/ipython-doc/dev/interactive/htmlnotebook.html) for–again–reproducible research.
– [Mondrian](http://www.theusrus.de/Mondrian/) : This tool needs far more visibility. It enables extremely quick visualization of even very large data sets. The interface takes a bit of getting used to, but it’s faster than typing R commands or fumbling in Excel.
– [Tableau](http://www.tableausoftware.com/) : This tool may be one of the most accessible, fast & flexible ways to explore data sets to get an idea of where you need to/can do further analysis.
– [Processing](http://processing.org/) : A tool that was designed from the ground up to help journalists create powerful, interactive data visualizations that you can slipstream directly onto the web via the [Processing.js](http://processingjs.org/) library.
– [D3](http://d3js.org/) : The foundation of modern, data-driven visualization on the web.
– [Gephi](https://gephi.org/) : A very powerful tool when you need to explore networks & create beautiful, publication-worthy visualizations.
– [MongoDB](http://www.mongodb.org/) : NoSQL database that’s highly & easily scalable without a steep learning curve.
– [CRUSH Tools by Google](https://code.google.com/p/crush-tools/) : Kicks up your command-line data munging.
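
As a tiny illustration of the R + knitr reproducible-research workflow mentioned in the list above, here’s a self-contained sketch. The data are simulated and the file name is made up; save it as analysis.R, run knitr::spin("analysis.R"), and the code, commentary, and figure get rendered into a shareable report.

#' ## Failed logins per day (simulated data, purely for illustration)
set.seed(1492)
logins = data.frame(
  day   = seq(as.Date("2013-01-01"), by="day", length.out=30),
  fails = rpois(30, lambda=40)
)
 
#' A quick base-R look at the trend
plot(logins$day, logins$fails, type="h",
     xlab="Day", ylab="Failed logins",
     main="Failed logins per day (simulated)")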

Just joining in the fray of “where I’m speaking/where I’ll be the week of @RSAConference” posts…

SEM-003 – Information Security Leadership Development: Surviving as a Security Leader (Half Day – Delegates only)

WHEN: Monday : 0830-1130

I’m very pleased to be able to join:

– Derek Brink, Vice President & Research Fellow for IT Security & IT GRC, Aberdeen Group, a Harte-Hanks Company
– Justin Peavey, SVP Information Services & Security, CISO, Omgeo
– Dave Notch, President, Intensity Analytics
– Evan Wheeler, Director, Information Security, Omgeo
– James Burrell, Deputy Assistant Director, Federal Bureau of Investigation
– John Iatonna, SVP, Information Security, Edelman, Inc.

In this session, I’ll be covering “Are you fighting the wrong battles?” and participating in a panel discussion.

GRC-T18 – Data Analysis and Visualization for Security Professionals

WHEN: Tuesday : 1430-1530

@JayJacobs & I will be delving into the dark arts & science of conducting & communicating data analyses through data visualization with a plethora of background material and two case studies.

SPO1-R33 – Achievement Unlocked: Designing a Compelling Security Awareness Program

WHEN: Thursday : 1040-1140

@csoandy & I will be entertaining and educating folks on how to kick your security awareness program up a notch. Should be great fun and animated interaction is greatly encouraged.

☛ PhöCon

WHEN: Thursday : 1800+

Third year in a row where a bunch of us go out for Vietnamese food. Ping me on Twitter (@hrbrmstr) for more details.

Metricon 8

WHEN: Friday : All Day!

A day of facilitated working sessions designed to radically transform critical areas of security metrics across the industry.

When not speaking, I’ll be attending many sessions, will have the “shield” on most of the time and would love to meet as many folks as possible during my time in SFO.

Here’s a quick example of couple additional ways to use the netintel R package I’ve been tinkering with. This could easily be done on the command line with other tools, but if you’re already doing scripting/analysis with R, this provides a quick way to tell if a list of IPs is in the @AlienVault IP reputation database. Zero revelations here for regular R users, but it might help some folks who are working to make R more of a first class scripting citizen.

I whipped up the following bit of code to check to see how many IP addresses in the @Mandiant APT-1 FQDN dump were already in the AlienVault database. Reverse resolution of the Mandiant APT-1 FQDN list is a bit dubious at this point so a cross-check with known current data is a good idea. I should also point out that not all the addresses resolved “well” (there are 2046 FQDNs and my quick dig only yielded 218 usable IPs).
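
In case it’s useful, here’s roughly what that quick dig-based resolution step looks like from R (a sketch: the input file name is hypothetical and a sturdier version would want CNAME/NXDOMAIN handling):

# read the Mandiant APT-1 FQDN list (file name here is a guess) and resolve
# each name by shelling out to dig
fqdns = readLines("apt-1-fqdns.txt")
resolve = function(h) system(paste("dig +short", h), intern=TRUE)
ips = unlist(lapply(fqdns, resolve))
 
# keep only things that look like IPv4 addresses & de-duplicate
ips = unique(grep("^([0-9]{1,3}\\.){3}[0-9]{1,3}$", ips, value=TRUE))
 
# write out the CSV the code below reads back in
write.csv(data.frame(ip=ips), "apt-1-ips.csv", row.names=FALSE)

With the resolved IPs written out, the cross-check against the AlienVault reputation database is just a set intersection: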

library(netintel)
 
# get the @AlienVault reputation DB
av.rep = Alien.Vault.Reputation()
 
# read in resolved APT-1 FQDNs list
apt.1 = read.csv("apt-1-ips.csv")
 
# basic set operation
whats.left = intersect(apt.1$ip,av.rep$IP)
 
# how many were in the quickly resolved apt-1 ip list?
length(apt.1$ip)
[1] 218
 
# how many are common across the lists?
length(whats.left)
[1] 44
 
# take a quick look at them
whats.left
[1] "12.152.124.11"   "140.112.19.195"  "161.58.182.205"  "165.165.38.19"   "173.254.28.80"  
[6] "184.168.221.45"  "184.168.221.54"  "184.168.221.56"  "184.168.221.58"  "184.168.221.68" 
[11] "192.31.186.141"  "192.31.186.149"  "194.106.162.203" "199.59.166.109"  "203.170.198.56" 
[16] "204.100.63.18"   "204.93.130.138"  "205.178.189.129" "207.173.155.44"  "207.225.36.69"  
[21] "208.185.233.163" "208.69.32.230"   "208.73.210.87"   "213.63.187.70"   "216.55.83.12"   
[26] "50.63.202.62"    "63.134.215.218"  "63.246.147.10"   "64.12.75.1"      "64.12.79.57"    
[31] "64.126.12.3"     "64.14.81.30"     "64.221.131.174"  "66.228.132.20"   "66.228.132.53"  
[36] "68.165.211.181"  "69.43.160.186"   "69.43.161.167"   "69.43.161.178"   "70.90.53.170"   
[41] "74.14.204.147"   "74.220.199.6"    "74.93.92.50"     "8.5.1.34"

So, roughly a 20% overlap between (quickly-I’m sure there’s a more comprehensive list) resolved & “clean” APT-1 FQDNs IPs and the AlienVault reputation database.
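
(That percentage is just the ratio of the two lengths above:)

# overlap as a percentage of the quickly resolved APT-1 IPs
round(100 * length(whats.left) / length(apt.1$ip), 1)
[1] 20.2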

For kicks, we can see where all the resolved APT-1 nodes live (BGP/network-wise) in relation to each other using some of the other library functions:

library(netintel)
library(igraph)
library(plyr)
 
apt.1 = read.csv("apt-1-ips.csv")
ips = apt.1$ip
 
# get BGP origin & peers
origin = BulkOrigin(ips)
peers = BulkPeer(ips)
 
# start graphing
g = graph.empty()
 
# Make IP vertices; IP endpoints are red
g = g + vertices(ips,size=1,color="red",group=1)
 
# Make BGP vertices; BGP nodes are orange
g = g + vertices(unique(c(peers$Peer.AS, origin$AS)),size=1.5,color="orange",group=2)
 
# no labels
V(g)$label = ""
 
# Make IP/BGP edges
ip.edges = lapply(ips,function(x) {
  iAS = origin[origin$IP==x,]$AS
  lapply(iAS,function(y){
    c(x,y)
  })
})
 
# Make BGP/peer edges
bgp.edges = lapply(unique(origin$BGP.Prefix),function(x) {
  startAS = unique(origin[origin$BGP.Prefix==x,]$AS)
  lapply(startAS,function(z) {
    pAS = peers[peers$BGP.Prefix==x,]$Peer.AS
    lapply(pAS,function(y) {
      c(z,y)
    })
  })
})
 
# count how many times each node appears across the edge lists (not used below, but handy for eyeballing hubs)
node.count = table(c(unlist(ip.edges),unlist(bgp.edges)))
 
# add edges 
g = g + edges(unlist(ip.edges))
g = g + edges(unlist(bgp.edges))
 
# base edge weight == 1
E(g)$weight = 1
 
# simplify the graph
g = simplify(g, edge.attr.comb=list(weight="sum"))
 
# no arrows
E(g)$arrow.size = 0
 
# best layout for this
L = layout.fruchterman.reingold(g)
 
# plot the graph
plot(g,margin=0)

(Figure: the full graph of resolved APT-1 IP nodes and their BGP/ASN nodes)

If we take out the BGP peer relationships from the graph (i.e. don’t add the bgp.edges in the above code) we can see the mal-host clusters even more clearly (the pseudo “Death Star” look is unintentional but apropos):

(Figure: the same graph without the BGP peer edges, showing the per-ASN malicious-host clusters)
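
For concreteness, that variant is just the construction above minus the BGP/peer edges; a minimal sketch (it also drops the now-isolated peer-AS vertices):

# IP-only graph: same styling, but only the IP -> origin-AS edges
g.ip = graph.empty()
g.ip = g.ip + vertices(ips,size=1,color="red",group=1)
g.ip = g.ip + vertices(unique(origin$AS),size=1.5,color="orange",group=2)
g.ip = g.ip + edges(unlist(ip.edges))
g.ip = simplify(g.ip)
 
V(g.ip)$label = ""
E(g.ip)$arrow.size = 0
 
plot(g.ip, layout=layout.fruchterman.reingold(g.ip), margin=0)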

We can also determine which ASNs the bigger clusters belong to by checking out the degree. The “top” 5 clusters are:

16509 40676 36351 26496 15169 
    7     8     8    13    54
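
Here’s one way those per-AS counts can be pulled out programmatically (a sketch using the IP-only graph g.ip from the snippet above; not necessarily how the numbers shown here were produced):

# in the IP-only graph an AS vertex's degree == the number of malicious IPs
# that resolve into that AS; group == 2 marks the AS vertices
asn.deg = degree(g.ip)[V(g.ip)$group == 2]
tail(sort(asn.deg), 5)
 
# or, skipping the graph entirely, straight from the BGP origin lookups
sort(table(origin$AS), decreasing=TRUE)[1:5]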

While my library doesn’t support direct ASN detail lookup yet (an oversight), we can take those ASNs, check them out manually and see the results:

16509   | US | arin     | 2000-05-04 | AMAZON-02 - Amazon.com, Inc.
40676   | US | arin     | 2008-02-26 | PSYCHZ - Psychz Networks
36351   | US | arin     | 2005-12-12 | SOFTLAYER - SoftLayer Technologies Inc.
26496   | US | arin     | 2002-10-01 | AS-26496-GO-DADDY-COM-LLC - GoDaddy.com, LLC
15169   | US | arin     | 2000-03-30 | GOOGLE - Google Inc.
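
(Those rows are in Team Cymru’s whois output format; one way to do the manual lookup from R, assuming a standard whois client is installed, is something like:)

# ask Team Cymru's whois service about a single ASN (verbose output)
system("whois -h whois.cymru.com ' -v AS15169'", intern=TRUE)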

So Google servers are hosting the most mal-nodes from the resolved APT-1 list, followed by GoDaddy. I actually expected Amazon to be higher up in the list.

I’ll be adding igraph and ASN lookup functions to the netintel library soon. Also, if anyone has a better APT-1 IP list, please shoot me a link.

I happened across [Between Hype and Understatement: Reassessing Cyber Risks as a Security Strategy](http://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1107&context=jss) [PDF] when looking for something else at the [Journal of Strategic Security](http://scholarcommons.usf.edu/jss/) site and thought it was a good enough primer to annoy everyone with a tweet about it.

The paper is—well—_kinda_ wordy and has a Flesch-Kincaid grade reading level of 16*, making it well suited for academia but not for rapid consumption in this blog era we abide in. I promised some folks that I’d summarize it (that phrase always reminds me [of this](http://www.youtube.com/watch?v=uwAOc4g3K-g)) and so I shall (try).

The fundamental arguments are:

– we underrate & often overlook pre-existing software weaknesses (a.k.a. vulnerabilities)
– we undervalue the costs of cybercrime by focusing solely on breaches & not including preventative/deterrence costs
– we get distracted from identifying real threats by over-hyped ones
– we suck at information sharing (not enough of it; incomplete, at times; too many “standards”)
– we underreport incidents—and that this actually _enables_ attackers
– we need a centralized body to report incidents to
– we should develop a complete & uniform taxonomy
– we must pay particular attention to vulnerabilities in critical infrastructure
– we must pressure governments & vendors to take an active role in “encouraging” the removal of vulnerabilities from software during the SDLC, not after deployment

The author discusses specific media references (there are a plethora of links in the endnotes) when it comes to hype and notes specific government initiatives when it comes to other topics such as incident handling/threat sharing (the author has a definite UK slant).

I especially liked this quote on threat actors/actions/motives & information sharing:

> _[the] distributed nature of the Internet can make it difficult to clearly attribute some incidents [as] criminal, terrorist actions, or acts of war. Consequently, to affirm that “the principal difference” between [these] “is in the attacker’s intent” is far too simplistic when many cyber-attackers cannot be identified. It is also quite simplistic to attribute financial motivation only to cyber criminals since terrorists can be motivated by monetary gain in order to finance their political actions. [An] added difficulty is that a pattern of cyber incidents may not reveal itself unless information is shared between the different stakeholders. For example, taken in isolation, a bank’s website being temporarily unavailable may look innocuous and not worth reporting to the competent agencies. Yet, when associated with other cyber incidents in which the victims and timeframe are similar, it may reveal a concerted effort to target a particular type of business or e-government resources, a pattern of behavior that could amount to crime (fraud, espionage) or terrorism if the motive can be established. Detection thus may depend on information being shared._

She does spend quite a bit of ink on vulnerabilities. Some choice (shorter) quotes:

> _[the] economic analysis adopted by software companies does not take into account (or not sufficiently) that the costs of non-secure software are significant, that these costs will be borne by others on the network and ultimately by themselves in clean-up operations_

> _Of course, to fix the vulnerabilities after release is laudable; it is also commendable that those companies participate in huge clean-up operations of botnets like Microsoft did in 2010. However, there is nothing more paradoxical than Microsoft (and others) spending money to circumscribe the effects of the very vulnerabilities they contributed to create in the first place_

She then concludes with suggesting that governments work with ISPs to actually severely restrict or disable internet connections of users found to be infected and contributing to spam/botnets, positing that this will cause users to demand more out of software vendors or use the free market to shift their loyalties to other software providers who do more to build less vulnerable software.

Again, I think it’s a good primer on the subject (despite some dubious analogies peppered throughout), but I also think there is too much focus on vulnerabilities and not enough on threat actors/actions/motives. I do like how she mixes economic theory into a topic that is usually defined solely in terms of warfare without diminishing the potential impacts of either.

The influence of Beck & Giddens would have been evident even if her references to [Risk Society](http://en.wikipedia.org/wiki/Risk_society) didn’t bookend the prose. I’ll leave you with what might just be her own one-sentence summary of the entire paper and definitely apropos for our current “cyber” situation:

> _[the] risks that industrialization and modernization created tend to be global, systemic with a “boomerang effect,” and denied, overlooked, or overhyped._

*Ironically enough, this blog post comes out at F/K-level 22-23

dat <- data.frame(t=seq(0, 2*pi, by=0.1))
# parametric heart curve
xhrt <- function(t) 16*sin(t)^3
yhrt <- function(t) 13*cos(t)-5*cos(2*t)-2*cos(3*t)-cos(4*t)
dat$y <- yhrt(dat$t)
dat$x <- xhrt(dat$t)
# draw the outline first (polygon() needs an open plot), then fill it in
with(dat, plot(x, y, type="l"))
with(dat, polygon(x, y, col="hotpink"))

i heaRt you!

(Figure: the resulting heart plot)

(R code inspired by/lifted from: DWin on StackOverflow)

So, I’ve had some quick, consecutive blog posts around this R package I’m working on, and this one is more of an answer to my own, self-identified question of “so what?”. As I was working on an importer for AlienValut’s IP reputation database, I thought it might be interesting to visualize aspects of that data using some of the meta-information gained from the other “netintel” (my working name for the package) functions.

Acting on that impulse, I extracted all IPs that were uniquely identified as “Malicious Host” (it’s a category in their database), did ASN & peer lookups for them and made two DrL graphs from them (I did test a single, combined graph, but it would require a Times Square monitor to view).
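
For anyone who wants to poke at the same idea, the rough shape of that pipeline is sketched below. This is not the exact code behind the images; in particular the reputation-database column name and the category label are stand-ins, so check them against what the importer actually returns.

library(netintel)
library(igraph)
 
# pull the @AlienVault reputation DB & keep the malicious-host entries
# (column name "Activity" and the label are assumptions -- adjust as needed)
av.rep = Alien.Vault.Reputation()
mal.ips = unique(as.character(av.rep$IP[grepl("Malicious Host", av.rep$Activity)]))
 
# ASN & peer lookups via the package's bulk functions
origin = BulkOrigin(mal.ips)
peers = BulkPeer(mal.ips)
 
# IP (red) & ASN (blue) vertices, IP -> origin-AS edges, DrL layout
g = graph.empty()
g = g + vertices(mal.ips, size=1, color="red")
g = g + vertices(unique(as.character(origin$AS)), size=1.5, color="lightblue")
g = g + edges(as.vector(rbind(as.character(origin$IP), as.character(origin$AS))))
g = simplify(g)
V(g)$label = ""
E(g)$arrow.size = 0
plot(g, layout=layout.drl(g), margin=0)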

(Figures: the two DrL graphs of the malicious hosts and their ASNs)

You’ll need to select both images to make them bigger to view them more easily. Red nodes are hosts, blue ones are the ASNs they belong to.

While some of the visualized data was pretty obvious from the data table (nigh consecutive IP addresses in some cases), seeing the malicious clusters (per ASN) was (to me) pretty interesting. I don’t perform malicious host/network analysis as part of the day job, so the apparent clustering (and also the “disconnected” nodes) may not be interesting to anyone but me, but it gave me a practical test case for the library I’m working on and may be interesting to others. It also shows you can make pretty graphs with R.

I’ve got the crufty R code up on github now and will keep poking at it as I have time. I’ll add the code that made the above image to the repository over the weekend.