
Category Archives: Cybersecurity

Cybersecurity is a domain that really likes surveys, or at the very least has many folks within it who like to conduct and report on them. One recent survey on threat intelligence is in its second year, so it sets about comparing answers across years. Rather than go into the many technical/statistical issues with this survey, I’d like to focus on alternate ways to visualize the comparison across years.

We’ll use the data that makes up this chart (Figure 3 from the report):

[Figure 3 from the report: the original grouped bar chart]

since it’s pretty representative of the remainder of the figures.

Let’s start by reproducing this figure with ggplot2:

library(dplyr)
library(tidyr)
library(stringr)
library(ggplot2)
library(scales)
library(ggthemes)
library(extrafont)

loadfonts(quiet=TRUE)

read.csv("question.csv", stringsAsFactors=FALSE) %>%
  gather(year, value, -belief) %>%
  mutate(year=factor(sub("y", "", year)),
         belief=str_wrap(belief, 40)) -> question

beliefs <- unique(question$belief)
# reorder the factor levels so the beliefs plot in the report's order
question$belief <- factor(question$belief, levels=rev(beliefs[c(1,2,4,5,3,7,6)]))

gg <- ggplot(question, aes(belief, value, group=year))
gg <- gg + geom_bar(aes(fill=year), stat="identity", position="dodge",
                    color="white", width=0.85)
gg <- gg + geom_text(aes(label=percent(value)), hjust=-0.15,
                     position=position_dodge(width=0.85), size=3)
gg <- gg + scale_x_discrete(expand=c(0,0))
gg <- gg + scale_y_continuous(expand=c(0,0), labels=percent, limits=c(0,0.8))
gg <- gg + scale_fill_tableau(name="")
gg <- gg + coord_flip()
gg <- gg + labs(x=NULL, y=NULL, title="Fig 3: Reasons for fully participating\n")
gg <- gg + theme_tufte(base_family="Arial Narrow")
gg <- gg + theme(axis.ticks.x=element_blank())
gg <- gg + theme(axis.text.x=element_blank())
gg <- gg + theme(axis.ticks.y=element_blank())
gg <- gg + theme(legend.position="bottom")
gg <- gg + theme(plot.title=element_text(hjust=0))
gg

Now, the survey does caveat the findings and talks about non-response bias, sampling-frame bias and self-reporting bias. However, nowhere does it talk about the margin of error or anything relating to uncertainty. Thankfully, both the 2014 and 2015 reports communicate population and sample sizes, so we can figure out the margin of error:

library(samplesize4surveys)

moe_2014 <- e4p(19915, 701, 0.5)
## With the parameters of this function: N = 19915 n =  701 P = 0.5 DEFF =  1 conf = 0.95 . 
## The estimated coefficient of variation is  3.709879 . 
## The margin of error is 3.635614 . 
## 

moe_2015 <- e4p(18705, 692, 0.5)
## With the parameters of this function: N = 18705 n =  692 P = 0.5 DEFF =  1 conf = 0.95 . 
## The estimated coefficient of variation is  3.730449 . 
## The margin of error is 3.655773 .
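
As a quick sanity check (my own sketch, not part of the report; moe_pct() is just an illustrative name), the same figures fall out of the standard margin-of-error formula for a proportion with a finite population correction:

# margin of error (in percentage points) for a proportion estimate,
# with finite population correction
moe_pct <- function(N, n, p=0.5, conf=0.95) {
  z <- qnorm(1 - (1 - conf)/2)        # ~1.96 for 95% confidence
  fpc <- sqrt((N - n) / (N - 1))      # finite population correction
  100 * z * sqrt(p * (1 - p) / n) * fpc
}

moe_pct(19915, 701)  # 2014: ~3.64
moe_pct(18705, 692)  # 2015: ~3.66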

They are both roughly 3.65%, so let's take a look at our dodged bar chart again with this new information:

# add the ~3.65% margin of error as lower/upper bounds for each value
mutate(question, ymin=value-0.0365, ymax=value+0.0365) -> question

gg <- ggplot(question, aes(belief, value, group=year))
gg <- gg + geom_bar(aes(fill=year), stat="identity",
                    position=position_dodge(0.85),
                    color="white", width=0.85)
gg <- gg + geom_linerange(aes(ymin=ymin, ymax=ymax),
                         position=position_dodge(0.85),
                         size=1.5, color="#bdbdbd")
gg <- gg + scale_x_discrete(expand=c(0,0))
gg <- gg + scale_y_continuous(expand=c(0,0), labels=percent, limits=c(0,0.85))
gg <- gg + scale_fill_tableau(name="")
gg <- gg + coord_flip()
gg <- gg + labs(x=NULL, y=NULL, title="Fig 3: Reasons for fully participating\n")
gg <- gg + theme_tufte(base_family="Arial Narrow")
gg <- gg + theme(axis.ticks.x=element_blank())
gg <- gg + theme(axis.text.x=element_blank())
gg <- gg + theme(axis.ticks.y=element_blank())
gg <- gg + theme(legend.position="bottom")
gg <- gg + theme(plot.title=element_text(hjust=0))
gg

Hrm. There seems to be a bit of overlap. Let's just focus on that:

gg <- ggplot(question, aes(belief, value, group=year))
gg <- gg + geom_pointrange(aes(ymin=ymin, ymax=ymax),
                         position=position_dodge(0.25),
                         size=1, color="#bdbdbd", fatten=1)
gg <- gg + scale_x_discrete(expand=c(0,0))
gg <- gg + scale_y_continuous(expand=c(0,0), labels=percent, limits=c(0,1))
gg <- gg + scale_fill_tableau(name="")
gg <- gg + coord_flip()
gg <- gg + labs(x=NULL, y=NULL, title="Fig 3: Reasons for fully participating\n")
gg <- gg + theme_tufte(base_family="Arial Narrow")
gg <- gg + theme(axis.ticks.x=element_blank())
gg <- gg + theme(axis.text.x=element_blank())
gg <- gg + theme(axis.ticks.y=element_blank())
gg <- gg + theme(legend.position="bottom")
gg <- gg + theme(plot.title=element_text(hjust=0))
gg

The report actually makes hard claims based on the year-over-year change in the answers to many of the questions (not just this chart), and most have similarly overlapping intervals. Now, I understand that a paying customer who asks for a report wouldn't really be satisfied with a one-pager saying "See last year's report", but not communicating the uncertainty in these results seems like a significant omission.
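
To make that concrete, here's a rough check (my own addition, not part of the report) of which year-over-year changes exceed the combined margin of error for a difference of two proportions (roughly 5.2 percentage points here), using the question data frame built above:

# which beliefs changed year-over-year by more than the combined margin of
# error? For a difference of two independent proportions, the margin is
# approximately sqrt(moe_2014^2 + moe_2015^2)
combined_moe <- sqrt(0.0365^2 + 0.0365^2)

question %>%
  select(belief, year, value) %>%
  spread(year, value) %>%
  mutate(change=`2015`-`2014`,
         exceeds_moe=abs(change) > combined_moe) %>%
  arrange(desc(abs(change)))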

But, I digress. There are better (or at least alternate) ways than bars to show this comparison. One is a "dumbbell chart".

# for each belief, flag the direction of the 2015 change (used to color the
# connecting line) and pick which side to place the 2015 value label on
question %>%
  group_by(belief) %>%
  mutate(line_col=ifelse(diff(value)<0, "2015", "2014"),
         hjust=ifelse(diff(value)<0, -0.5, 1.5)) %>%
  ungroup() -> question

gg <- ggplot(question)
gg <- gg + geom_path(aes(x=value, y=belief, group=belief, color=line_col))
gg <- gg + geom_point(aes(x=value, y=belief, color=year))
gg <- gg + geom_text(data=filter(question, year=="2015"),
                     aes(x=value, y=belief, label=percent(value),
                         hjust=hjust), size=2.5)
gg <- gg + scale_x_continuous(expand=c(0,0), limits=c(0,0.8))
gg <- gg + scale_color_tableau(name="")
gg <- gg + labs(x=NULL, y=NULL, title="Fig 3: Reasons for fully participating\n")
gg <- gg + theme_tufte(base_family="Arial Narrow")
gg <- gg + theme(axis.ticks.x=element_blank())
gg <- gg + theme(axis.text.x=element_blank())
gg <- gg + theme(axis.ticks.y=element_blank())
gg <- gg + theme(legend.position="bottom")
gg <- gg + theme(plot.title=element_text(hjust=0))
gg

I've used line color to indicate whether the 2015 value increased or decreased from 2014.

But, we still have the issue of communicating the margin of error. One way I came up with (which is not perfect) is to superimpose the dot-plot on top of the entire margin-of-error interval. While it doesn't show the discrete start/end margin for each year, it does help show that making definitive statements on the value comparisons is not exactly a good idea:

# one band per belief spanning the union of the two years' error intervals
group_by(question, belief) %>%
  summarize(xmin=min(ymin), xmax=max(ymax)) -> band

gg <- ggplot(question)
gg <- gg + geom_segment(data=band,
                        aes(x=xmin, xend=xmax, y=belief, yend=belief),
                        color="#bdbdbd", alpha=0.5, size=3)
gg <- gg + geom_path(aes(x=value, y=belief, group=belief, color=line_col),
                     show.legend=FALSE)
gg <- gg + geom_point(aes(x=value, y=belief, color=year))
gg <- gg + geom_text(data=filter(question, year=="2015"),
                     aes(x=value, y=belief, label=percent(value),
                         hjust=hjust), size=2.5)
gg <- gg + scale_x_continuous(expand=c(0,0), limits=c(0,0.8))
gg <- gg + scale_color_tableau(name="")
gg <- gg + labs(x=NULL, y=NULL, title="Fig 3: Reasons for fully participating\n")
gg <- gg + theme_tufte(base_family="Arial Narrow")
gg <- gg + theme(axis.ticks.x=element_blank())
gg <- gg + theme(axis.text.x=element_blank())
gg <- gg + theme(axis.ticks.y=element_blank())
gg <- gg + theme(legend.position="bottom")
gg <- gg + theme(plot.title=element_text(hjust=0))
gg

Finally, the year-to-year nature of the data was just begging for a slopegraph:

question %>% mutate(vjust=0.5) -> question
question[(question$belief=="Makes threat data more actionable") &
           (question$year=="2015"),]$vjust <- -1
question[(question$belief=="Reduces the cost of detecting and\npreventing cyber attacks") &
           (question$year=="2015"),]$vjust <- 1.5

# extra, unused year levels (kept via drop=FALSE below) pad the x-axis so the
# left-hand 2014 labels and the long right-hand 2015 labels have room
question$year <- factor(question$year, levels=c("2013", "2014", "2015", "2016", "2017", "2018"))

gg <- ggplot(question)
gg <- gg + geom_path(aes(x=year, y=value, group=belief, color=line_col))
gg <- gg + geom_point(aes(x=year, y=value), shape=21, fill="black", color="white")
gg <- gg + geom_text(data=filter(question, year=="2015"),
                     aes(x=year, y=value,
                         label=sprintf("\u2000%s %s", percent(value),
                                       gsub("\n", " ", belief)),
                         vjust=vjust), hjust=0, size=3)
gg <- gg + geom_text(data=filter(question, year=="2014"),
                     aes(x=year, y=value, label=percent(value)),
                     hjust=1.3, size=3)
gg <- gg + scale_x_discrete(expand=c(0,0.1), drop=FALSE)
gg <- gg + scale_color_tableau(name="")
gg <- gg + labs(x=NULL, y=NULL, title="Fig 3: Reasons for fully participating\n")
gg <- gg + theme_tufte(base_family="Arial Narrow")
gg <- gg + theme(axis.ticks=element_blank())
gg <- gg + theme(axis.text=element_blank())
gg <- gg + theme(legend.position="none")
gg <- gg + theme(plot.title=element_text(hjust=0))
gg

It doesn't help communicate uncertainty, but it's a nice alternative to bars.

Hopefully this helps provide some alternatives to bars for these types of comparisons and also ways to communicate uncertainty without confusing the reader (communicating uncertainty to a broad audience is hard).

Perhaps those conducting surveys (or data analyses in general) could subscribe to a "data visualizers" paraphrase of a quote from Epidemics, Book I, of the Hippocratic school:

"Practice two things in your dealings with data: either help or do not harm the reader."

The full Rmd and data for this post are in this gist.

The basic technique of cybercrime statistics—measuring the incidence of a given phenomenon (DDoS, trojan, APT) as a percentage of overall population size—had entered the mainstream of cybersecurity thought only in the previous decade. Cybersecurity as a science was still in its infancy, as many of its basic principles had yet to be established.

At the same time, the scientific method rarely intersected with the development and testing of new detection & prevention regimens. When you read through that endless stream of quack cybercures published daily on the Internet and at conferences like RSA, what strikes you most is not that they are all, almost without exception, based on anecdotal or woefully inadequate evidence. What’s striking is that they never apologize for the shortcoming. They never pause to say, “Of course, this is all based on anecdotal evidence, but hear me out.” There’s no shame in these claims, no awareness of the imperfection of the methods, precisely because it seems so eminently reasonable that the local observation of a handful of minuscule cases might serve as the silver bullet for cybercrime, if you look hard enough.


But, cybercrime couldn’t be studied in isolation. It was as much a product of the internet expansion as news and social media, where it was so uselessly anatomized. To understand the beast, you needed to think on the scale of the enterprise, from the hacker’s-eye view. You needed to look at the problem from the perspective of Henry Mayhew’s balloon. And you needed a way to persuade others to join you there.

Sadly, that’s not a modern story. It’s an adapted quote from chapter 4 (pp. 97-98, paperback) of The Ghost Map, by Steven Johnson, a book on the cholera epidemic of 1854.

I won’t ruin the book nor continue my attempt at analogy any further. Suffice it to say, you should read the book—if you haven’t already—and join me in calling out for the need for the John Snow of our cyber-time to arrive.

Far too many interesting bits to spam on Twitter individually but each is worth getting the word out on:


– It’s [π Day](https://www.google.com/search?q=pi+day)*
– Unless you’re living in a hole, you probably know that [Google Reader is on a death march](http://www.bbc.co.uk/news/technology-21785378). I’m really liking self-hosting [Tiny Tiny RSS](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CDEQFjAA&url=https%3A%2F%2Fgithub.com%2Fgothfox%2FTiny-Tiny-RSS&ei=YtlBUfOLJvLe4AOHtoDIAQ&usg=AFQjCNGwtEr8slx-i0vNzhQi4b4evRVXFA&bvm=bv.43287494,d.dmg) so far, and will follow up with a standalone blog post on it.
– @jayjacobs & I are speaking soon at both [SOURCE Boston](http://www.sourceconference.com/boston/) & [Secure360](http://secure360.org/). We hope to have time outside the talk to discuss security visualization & analysis with you, so go on and register!
– Speaking of datavis, [VizSec 2013](http://www.vizsec.org/) is on October 14th in Atlanta, GA this year and the aligned [VAST 2013 Challenge](http://vacommunity.org/VAST+Challenge+2013) (via @ieeevisweek) will have another infosec-themed mini-challenge. Watch that space, grab the data and start analyzing & visualizing!
– If you’re in or around Boston, it’d be great to meet you at @openvisconf on May 16-17.
– Majestic SEO has released a new data set for folks to play with: [a list of the top 100,000 websites broken down by the country in which they are hosted](http://blog.majesticseo.com/research/top-websites-by-country/). It requires a free reg to get the link, but no spam so far. (h/t @teamcymru)
– And, finally, @MarketplaceAPM aired a [good, accessible story](http://www.marketplace.org/topics/tech/who-pays-bill-cyber-war) on “Who pays the bill for a cyber war?”. I always encourage infosec industry folk to listen to shows like Marketplace to see how we sound to non-security folk and to glean ways we can communicate better with the people we are ultimately trying to help.

*Image via davincismurf

I happened across [Between Hype and Understatement: Reassessing Cyber Risks as a Security Strategy](http://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1107&context=jss) [PDF] when looking for something else at the [Journal of Strategic Security](http://scholarcommons.usf.edu/jss/) site and thought it was a good enough primer to annoy everyone with a tweet about it.

The paper is—well—_kinda_ wordy and has a Flesch-Kincaid grade reading level of 16*, making it well suited for academia but not for rapid consumption in this blog era we abide in. I promised some folks that I’d summarize it (that phrase always reminds me [of this](http://www.youtube.com/watch?v=uwAOc4g3K-g)) and so I shall (try).
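
(As an aside, and purely for illustration rather than the tool used to score the paper or this post, the Flesch-Kincaid grade-level formula is easy to approximate in R with a naive vowel-run syllable count; fk_grade below is a hypothetical helper.)

# rough Flesch-Kincaid grade level: 0.39*(words/sentences) +
# 11.8*(syllables/words) - 15.59, with syllables approximated as vowel runs
fk_grade <- function(txt) {
  sentences <- unlist(strsplit(txt, "[.!?]+"))
  sentences <- sentences[grepl("[[:alpha:]]", sentences)]
  words <- unlist(strsplit(tolower(txt), "[^a-z']+"))
  words <- words[nzchar(words)]
  syllables <- vapply(words,
                      function(w) max(1L, length(gregexpr("[aeiouy]+", w)[[1]])),
                      integer(1))
  0.39 * (length(words) / length(sentences)) +
    11.8 * (sum(syllables) / length(words)) - 15.59
}

fk_grade("Short words in short sentences score low. Interminable, polysyllabic academic construction does not.")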

The fundamental arguments are:

– we underrate & often overlook pre-existing software weaknesses (a.k.a. vulnerabilities)
– we undervalue the costs of cybercrime by focusing solely on breaches & not including preventative/deterrence costs
– we get distracted from identifying real threats by over-hyped ones
– we suck at information sharing (not enough of it; incomplete, at times; too many “standards”)
– we underreport incidents—and that this actually _enables_ attackers
– we need a centralized body to report incidents to
– we should develop a complete & uniform taxonomy
– we must pay particular attention to vulnerabilities in critical infrastructure
– we must pressure governments & vendors to take an active role in “encouraging” the removal of vulnerabilities from software during the SDLC, not after deployment

The author discusses specific media references (there are a plethora of links in the endnotes) when it comes to hype and notes specific government initiatives when it comes to other topics such as incident handling/threat sharing (the author has a definite UK slant).

I especially liked this quote on threat actors/actions/motives & information sharing:

> _[the] distributed nature of the Internet can make it difficult to clearly attribute some incidents [as] criminal, terrorist actions, or acts of war. Consequently, to affirm that “the principal difference” between [these] “is in the attacker’s intent” is far too simplistic when many cyber-attackers cannot be identified. It is also quite simplistic to attribute financial motivation only to cyber criminals since terrorists can be motivated by monetary gain in order to finance their political actions. [An] added difficulty is that a pattern of cyber incidents may not reveal itself unless information is shared between the different stakeholders. For example, taken in isolation, a bank’s website being temporarily unavailable may look innocuous and not worth reporting to the competent agencies. Yet, when associated with other cyber incidents in which the victims and timeframe are similar, it may reveal a concerted effort to target a particular type of business or e-government resources, a pattern of behavior that could amount to crime (fraud, espionage) or terrorism if the motive can be established. Detection thus may depend on information being shared._

She does spend quite a bit of ink on vulnerabilities. Some choice (shorter) quotes:

> _[the] economic analysis adopted by software companies does not take into account (or not sufficiently) that the costs of non-secure software are significant, that these costs will be borne by others on the network and ultimately by themselves in clean-up operations_

> _Of course, to fix the vulnerabilities after release is laudable; it is also commendable that those companies participate in huge clean-up operations of botnets like Microsoft did in 2010. However, there is nothing more paradoxical than Microsoft (and others) spending money to circumscribe the effects of the very vulnerabilities they contributed to create in the first place_

She then concludes with suggesting that governments work with ISPs to actually severely restrict or disable internet connections of users found to be infected and contributing to spam/botnets, positing that this will cause users to demand more out of software vendors or use the free market to shift their loyalties to other software providers who do more to build less vulnerable software.

Again, I think it’s a good primer on the subject (despite some dubious analogies peppered throughout), but I also think there is too much focus on vulnerabilities and not enough on threat actors/actions/motives. I do like how she mixes economic theory into a topic that is usually defined solely in terms of warfare without diminishing the potential impacts of either.

The influence of Beck & Giddens would have been pretty evident even if her references to [Risk Society](http://en.wikipedia.org/wiki/Risk_society) didn’t bookend the prose. I’ll leave you with what might just be her own one-sentence summary of the entire paper, and it’s definitely apropos for our current “cyber” situation:

> _[the] risks that industrialization and modernization created tend to be global, systemic with a “boomerang effect,” and denied, overlooked, or overhyped._

*Ironically enough, this blog post comes out at F/K-level 22-23