
Category Archives: Information Security

Sadly, I could not make it to this year’s Workshop on the Economics of Information Security. However, the intrepid conference organizers were quick to post the papers that were presented, and I had a chance to sift through them to pick out what I believe to be the best of the best (they are all worth reading).

A Focus On The Bottom Line

First up is “Measuring the Cost of Cybercrime” by Ross Anderson, Chris Barton, Rainer Böhme, Richard Clayton, Michel J.G. van Eeten, Michael Levi, Tyler Moore & Stefan Savage. They developed an interesting framework that tries to cover all angles of loss (including costs of defense) as well as the gains made by the criminals. They don’t just talk theory & math. They did actual investigations and have produced a great breakdown of costs & criminal gains on page 24 of the paper.

Beyond the details of their methodology, I include them in this list – in part – because of this paragraph:

The straightforward conclusion to draw on the basis of the comparative figures collected in this study is that we should perhaps spend less in anticipation of computer crime (on antivirus, firewalls etc.) but we should certainly spend an awful lot more on catching and punishing the perpetrators.

What a great, data-backed discussion-starter at your next security conference!

Might As Well Jump

Next up is a very maths-y offering by Adrian Baldwin, Iffat Gheyas, Christos Ioannidis, David Pym & Julian Williams on “Contagion in Cybersecurity Attacks“.

If you’re put off by math symbols, jump to the bottom of page four to start your reading (right after reading the abstract & introduction). The authors used DShield data and focused on ten services (DNS, ssh, Oracle [they got the port #’s wrong], SQL, LDAP, http/s, SMB, IMAP/S, SMTP) sampled daily for the period 1 January 2003 to 28 February 2011. You can read the paper for their particular findings in this data set, but this extract homes in on the utility of their methodology:

Security threats to data, its quality and accessibility, represent potential losses to the integrity of the operations of the organization. Security managers, in assessing the potential risks, should be interested in the relationship between the contagious threats to these different security attributes. The nature of the inter-relationship between the threats provides additional information to assist managers in making their choices of mitigating responses. For example, if the inter-relationship between threats is constant, independently of the frequency and intensity of threats, security managers can adopt smooth mitigation profiles to meet the threat. In the absence of such stable relationships, the managers’ responses must be adjusted dynamically: for given temporal relationships between the number of attacks, their change (or ‘jump’) in frequency, and their change in size (extent of impact).

I can envision some product extensions incorporating this threat analysis into their offering or even service providers such as Akamai (they have deep, active threat intel) creating a broad, anonymized “contagion” report for public consumption with private, direct (paid) offerings for their clients.

That Is The Question

Lukas Demetz & Daniel Bachlechner hope to help security managers choose investment analysis strategies in their work on “To invest or not to invest? Assessing the economic viability of a policy and security configuration management tool“. They take eleven economic investment models and work through each of them for a selected tool/technology investment, pointing out the strengths & weaknesses of each in a summary table:

Unsurprisingly (at least for me), none were optimal, but this is the perfect paper for anyone who ever wanted to look at a summary/overview of the “should we invest?” work with an eye on real practicality.

Physician, Secure Thy Data

Martin S. Gaynor, Muhammad Zia Hydari & Rahul Telang aim to assess the impact of market competition on information security and privacy in their work on “Is Patient Data Better Protected in Competitive Healthcare Markets?“.

I first have to hand it to these researchers for including the “WORK IN PROGRESS – PLEASE DO NOT QUOTE” tag right up front in the paper. Our industry seems to be one to jump on “facts” way too soon and this should give any infosec pundits pause.

However (ignoring my own advice from the previous paragraph), if the authors’ continued analysis does end up supporting their initial conclusion that increased competition is associated with a decline in the quality of patient data protection, it may show that security faces an uphill battle getting onto the “service differentiator” list.

The authors do take a moment to theorize as to why there seems to be an inverse relationship between competition & security:

We posit that hospitals in more competitive markets may be inclined to shift resources to more consumer visible activities from the less consumer visible activity of data protection

Is That A USB Of Patches In Your Pocket?

In “Online Promiscuity: Prophylactic Patching and the Spread of Computer Transmitted Infections“, Timothy Kelley & L. Jean Camp examine the efficacy of various aggregate patching and recovery behaviors using real world data and a plethora of interesting simulations.

If you listened to the SFS “Front Porch” conversation with @joshcorman, @armorguy & yours truly, you’ll know how I feel about patching, and I believe this paper helps support the somewhat progressive approach to both the need for patching and the need for intelligent patching (with the latter also requiring #spiffy incident response). The authors may say it best, though:

We show, using our model and a real world data set, that small increases in patch rates and recovery speed are the most effective approaches to reduce system wide vulnerabilities due to unprotected computers. Our results illustrate that a public health approach may be feasible, as what is required is that a subpopulation adopt prophylactic actions rather than near-universal immunization.
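The public-health framing above can be illustrated with a toy epidemic-style simulation. This is a minimal sketch of the general idea (patching as immunization in an SIR-like model), not the authors’ actual model; the update rule and every parameter value here are invented for illustration.

```python
# Toy SIR-style simulation of malware spread with patching, loosely
# following the public-health framing of the paper. All parameters
# are illustrative, not fitted to any real data set.

def simulate(beta=0.3, recovery=0.05, patch=0.0, steps=365, i0=0.01):
    """Return the peak infected fraction over a `steps`-day run.

    beta     -- infection rate per contact per day
    recovery -- fraction of infected machines cleaned per day
    patch    -- fraction of susceptible machines patched (immunized) per day
    """
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i   # susceptible machines infected today
        new_rec = recovery * i   # infected machines cleaned today
        new_pat = patch * s      # susceptible machines patched today
        s -= new_inf + new_pat
        i += new_inf - new_rec
        peak = max(peak, i)
    return peak

# Even a small daily patch rate in a subpopulation lowers the epidemic
# peak -- no near-universal immunization required.
baseline = simulate(patch=0.0)
patched = simulate(patch=0.01)   # patch just 1% of susceptibles per day
print(f"peak infected, no patching: {baseline:.2f}")
print(f"peak infected, 1%/day patching: {patched:.2f}")
```

Comparing runs at different `patch` values is a quick way to see the paper’s point that modest increases in patch and recovery rates have outsized system-wide effects.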

What About The Green Jack?

Finally getting to the coding side of the security economics equation, Stephan Neuhaus & Bernhard Plattner look at whether software vulnerability fix rates decrease and if the time between successive fixes goes up as vulnerabilities become fewer and harder to fix in “Software Security Economics: Theory, in Practice“.

They chose Mozilla, Apache httpd and Apache Tomcat as targets of examination and did a thorough investigation of both vulnerability findings and code commits for each product using well-described and documented statistical methods (pretty graphs, too :-).

Here are the salient bits in their own words:

Our findings do not support the hypothesis that vulnerability fix rates decline. It seems as if the supply of easily fixable vulnerabilities is not running out and returns are not diminishing (yet).

and:

With this data and this analysis, we cannot confirm a Red Queen race.

Folks may not be too surprised with the former, but I suspect the latter will also be good conference debate fuel.

Law & Order : DBU (Data Breach Unit)

Sasha Romanosky, David Hoffman & Alessandro Acquisti analyzed court dockets for over 230 federal data breach lawsuits from 2000 to 2010 for their work on “Empirical Analysis of Data Breach Litigation“.

Why look at breach litigation outcomes? For starters, such analysis “can help provide firms with prescriptive guidance regarding the relative chances of being sued, and having to settle.” For insurance companies, this type of analysis can also be of help in crafting cyberinsurance policies. It can also help companies that have customer data as their primary asset/product better understand their obligations as custodians of such information.

But, you want to know what they found, so here’s the skinny:

Our results suggest that the odds of a firm being sued are 3.5 times greater when individuals suffer financial harm, but 6 times lower when the firm provides free credit monitoring. Moreover, defendants settle 30% more often when plaintiffs allege financial loss, or when faced with a certified class action suit. By providing the first comprehensive empirical analysis of data breach litigation, these findings offer insights in the debate over privacy litigation versus privacy regulation.

It’s a quick read and should be something you forward to your legal & compliance folk.

Achievement: Unlocked

On a topic close to home, Toshihiko Takemura & Ayako Komatsu investigate “Who Sometimes Violates the Rule of the Organizations?: Empirical Study on Information Security Behaviors and Awareness“.

The authors develop a behavioral model based on:

  • Attitude
  • Motivation toward the behavior
  • Information security awareness
  • Workplace environment

and use a survey-based approach to acquire their data.

The “money quote” (IMO) is this:

With regard to the information security awareness, in many cases it is found that the higher the awareness is, the less the tendency to violate the rule is.

Get cranking on your awareness programs!

(If you made it this far and went through these or other WEIS 2012 papers, which ones were most impactful for you?)

Given the focus on actual development of the PySlopegraph tool in most of the blog posts of late, folks may be wondering why an infosec/inforisk guy is obsessing so much on a tool and not talking security. Besides the fixation on filling a void and promoting an underused visualization tool, I do believe there is a place for slopegraphs in infosec data analysis and will utilize some data from McAfee’s recent Q1 2012 Threat Report [PDF] to illustrate how one might use slopegraphs in interpreting the “Spam Volume” data presented in the “Messaging Threats” section (pages 11 & 12 of the report).

The report shows individual graphs of spam volume per country from April of 2011 through March of 2012. Each individual graph conveys useful information, but I put together two slopegraphs that each show alternate and aggregate views which let you compare spam volume data relative to each country (versus just in-country).

When first doing this exploration, the scale problem reared its ugly head again since the United States is a huge spam outlier and causes the chart to be as tall as my youngest son when printed. I really wanted to show relative spam volume between countries as well as the increase or decrease between years in one chart and — after chatting with @maximumyin a bit — decided to test out using a log scale option for the charting:

This chart — Spam Volume by Country — instantly shows that:

  • overall volume has declined for most countries
  • two countries have remained steady
  • one country (Germany) has increased

The next chart — Spam Volume Percentage by Country — also needed to be presented on a log scale and has some equally compelling information:

Despite holding steady count-wise, the United States percentage of global spam actually increased and is joined by seven other countries, with Germany having the second largest percentage increase. Both charts present an opportunity to further explore why the values changed (since the best metrics are supposed to both inform and be actionable in some way).
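The log-scale trick that tames the United States outlier can be sketched without any plotting library: position each country by the order of magnitude of its spam volume rather than the raw count. The volume figures below are made up for demonstration and are not taken from the McAfee report.

```python
import math

# Illustrative (left, right) spam-volume pairs for two periods.
# These numbers are invented, not the report's actual figures.
spam = {
    "United States": (1.2e12, 1.2e12),
    "Brazil": (9.0e10, 4.0e10),
    "Germany": (3.0e10, 4.5e10),
}

def slope_rows(data):
    """Return (label, left_pos, right_pos, slope) tuples on a log10 scale.

    Using log10 keeps an outlier like the US from stretching the chart:
    a country's vertical position is its order of magnitude, so a
    country at 1.2e12 sits near 12 while one at 3e10 sits near 10.5.
    """
    rows = []
    for label, (left, right) in data.items():
        l, r = math.log10(left), math.log10(right)
        rows.append((label, l, r, r - l))
    # order top-to-bottom by the left-hand column, as a slopegraph would
    return sorted(rows, key=lambda t: -t[1])

for label, l, r, slope in slope_rows(spam):
    trend = "up" if slope > 0 else ("down" if slope < 0 else "flat")
    print(f"{label:15s} {l:5.2f} -> {r:5.2f} ({trend})")
```

The sign of the slope column is the “instantly shows” part of the chart: positive for the Germany-style increases, zero for countries holding steady, negative for the general decline.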

I’m going to extract some more data from the McAfee report and some other security reports to show how slopegraphs can be used to interpret the data. Feedback on both the views and the use of the log scale would be greatly appreciated by general data scientists as well as those in the infosec community.

I posted a link to Twitter earlier on a recent discovery of the ability to clone RSA SecurID soft tokens:

https://twitter.com/hrbrmstr/status/204908233645764609

It (rightfully so) received some critical responses by @wh1t3rabbit & @wikidsystems since, apart from what the hypesters may say, this is a low-risk weakness.

Think about it. Just looking at the two most likely threat actors & actions: an insider trying to siphon off soft tokens and an external attacker using crafted malware to grab soft tokens. The former (most likely) knows your organization is using soft tokens (and probably has one herself). The latter is unlikely to just try to blanket siphon off soft tokens so they’ll have to do some research to target an organization (which costs time/money).

Once a victim (or set of victims) is identified, the cloning steps would have to be perfectly executed (and I’m not convinced that’s a given). Let’s say that it is, though. Now both the insider and external agent have access to the bits to clone a token. It is easier for the insider to get that data, but the external attacker has to successfully exfiltrate it somehow (more complexity/time/cost).

To be useful, the attacker needs the user id, PIN and – in most implementations – a password. An insider would (most likely) know the user id (since she probably has one herself) but that data would require more time/effort/cost to the external attacker (think opportunistic keylogger/screenscraper with successful exfiltration). For both attackers, getting the password requires either social engineering or the use of a keylogger. Even then, there’s a time-limit of 90 days or less (since, if you’re using soft tokens, you probably have a 90 day password policy). That shrinks the amount of time the attack can be successful.

Now, both attackers need to know where this soft token can be used and have direct access to those systems. Again, probably easier for an insider and fairly costly for an external attacker.

Looking at this, there’s definitely a greater risk associated with an insider from this weakness than there is from an external party (as pointed out by the aforementioned twitter commentators). As @wikidsystems further pointed out, this also shows the inherent positives of multi-factor authentication :: you need far more component parts to execute a successful attack, making the whole thing very costly to obtain. Security economics FTW!
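The “more component parts” argument can be made concrete with a back-of-the-envelope model: each step in an attack chain must succeed, so the chain’s overall probability is the product of the per-step probabilities. Every number below is invented purely for illustration; the point is the structure, not the values.

```python
# Toy attack-chain model for the two actors discussed above.
# Per-step success probabilities are illustrative guesses only.

from functools import reduce

def chain_success(steps):
    """Probability that every step in an attack chain succeeds."""
    return reduce(lambda p, q: p * q, steps.values(), 1.0)

insider = {
    "obtain token seed": 0.6,    # already inside the perimeter
    "know user id & PIN": 0.8,   # likely has an account herself
    "reach target system": 0.7,
}

external = {
    "research target org": 0.4,
    "deploy crafted malware": 0.3,
    "clone token correctly": 0.5,
    "exfiltrate seed data": 0.4,
    "capture id/PIN/password": 0.2,
    "reach target system": 0.2,
}

print(f"insider chain:  {chain_success(insider):.3f}")
print(f"external chain: {chain_success(external):.4f}")
```

Because each extra authentication factor multiplies in another fractional step, the external chain’s odds collapse much faster than the insider’s, which is the security-economics point: multi-factor authentication makes a successful end-to-end attack very costly.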

My comment has been that if using the TPM store for Windows-based SecurID soft token implementations negates this weakness, then why not do it? Does the added deployment & management complexity really cost that much?

In the end, I would categorize this weakness as a low risk to most organizations using soft tokens with a non-TPM storage configuration. Unless you know you’re a nation-state target (my guess as to the origin of the attacker) – and, even then, you’re probably using hard tokens – far too many celestial bodies need to align for this weakness to be exploited successfully.

NOTE: This post was not meant to be a comprehensive risk assessment of the weakness and does not cover all attack scenarios. I left out many, including Windows desktop administrators and privileged script access. I was merely trying to do my part to counter whatever hype ensues from this weakness. Comments on those vectors or the analysis in general are most welcome.

While the slides will be officially available from SIRA web site in the not-too-distant future—complete with video (for all the talks)—I figured it wouldn’t hurt to put them up here as well.

My sincere thanks, again, to @jayjacobs and the SIRA board for allowing me to have the privilege of being the first speaker at the first ever SIRA conference. If you didn’t go, you really missed some of the best thinking and content I’ve heard in this space. Every talk had useful takeaways and the in-talk and hallway exchanges were nothing short of amazing.

Mark your calendars for next year!

If you went to SOURCE Boston this year (2012), attended my security awareness talk and liked the Angry Birds theme to the slides, here’s a copy of the Keynote theme (it’s not really a true Keynote theme as there are divergent slides I’ve included). Here’s a sample:

You’re going to need the “Feast of Flesh BB” font (local source) by Blambot Comic Fonts & Lettering if you want to keep consistent with the Angry Birds lettering on various slides.

You can also grab my talk slides at the conference site or from my local archive.

BTW: In the event you’re also looking for a shortcut method of making some of the font-effects in the slides, I strongly suggest using some of the font manipulation tools in Microsoft Word if you don’t have more expensive tools like Adobe Acrobat kicking around. You can do some really cool things in Word, save as PDF, crop in Preview and import into Keynote or Photoshop with great results.

UPDATE: I forgot to include the MP3 of the theme song which I played as part of a transition from “blah” slides to the Angry Birds title slide. (Original files over at the Angry Birds Nest).

Just a quick post as I noticed that my nginx configuration was vulnerable to the BEAST attack thanks to the #spiffy SSL Certificate Tester from Qualys (I scored an “A”, btw :-).

The nginx docs now show how to do this, and it’s pretty simple (very similar to the Apache configuration, in fact):

  ssl_ciphers RC4:HIGH:!aNULL:!MD5;
  ssl_prefer_server_ciphers on;

Set it to prefer RC4 ciphers and — BOOM! — you’re done.

Like many other system admins, I should have done this a long time ago. And, like many other system admins, I’ve got many other things going on. I let this slip (even though I’ve kept up on nginx patches) and I shouldn’t have. Thankfully, this was a low risk item as the site doesn’t perform truly critical transactions.

I definitely encourage folks to use the SSL Labs tool to help ensure you’ve got your site’s configuration up to snuff.

Also, make sure to follow @ivanristic on Twitter if you care at all about web app security.

I swear to fulfill, to the best of my ability and judgment, this covenant:

I will respect the hard-fought empirical gains of those practitioners in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.

I will apply, for the benefit of those who need it, all measures [that] are required, avoiding those twin traps of FUD and solutions that are unnecessary.

I will remember that there is art to security as well as science, and that respect, sympathy, and understanding may outweigh the metasploit or other blunt instruments.

I will not be ashamed to say “I don’t know”, nor will I fail to call in my colleagues when the skills of another are needed to solve a problem.

I will respect the privacy of those I serve, for their problems are not disclosed to me that the world may know. Most especially must I tread with care in matters of NPPI, PCI & HIPAA. If it is given to me to solve a problem, all thanks. But it may also be within my power to identify problems; this awesome responsibility must be faced with great humbleness and awareness of my own frailty. Above all, I must not play at God.

I will remember that I do not treat a server, a router, an application, but a fragile system, whose problems may affect a whole company and general economic stability. My responsibility includes these related problems, if I am to provide adequately for those that need help.

I will prevent issues from occurring whenever I can, for prevention is preferable to remediation.

I will remember that I remain a member of society with special obligations to all my fellow human beings, those sound of mind and body as well as those who also need assistance.

If I do not violate this oath, may I enjoy life and art, respected while I live and remembered with affection thereafter. May I always act so as to preserve the finest traditions of my calling and may I long experience the joy of aiding those who seek my help.

It’s rare that two of my passions—food and information security—intersect, but thanks to the USDA’s announcement of their Blueprint For Stronger Service, I can touch on both in one post.

In 2011, the Obama administration challenged all departments to reduce costs in an effort dubbed the “Campaign to Cut Waste“. In response, the USDA has managed to trim annual expenses by $150 million through a number of efforts. One such effort is to close 259 domestic USDA offices (you can see which states are impacted below).

I’m going to expand on why this is a bad idea over at #nom later this week, but 2011 was not a good year in terms of controlling food poisoning in the United States and I don’t think closing offices will make for better oversight.

Other efforts focus on the elimination of redundancies and inefficiencies. The Blueprint has 27 initial (or to-be-implemented immediately) improvements that include the following:

  • Consolidate more than 700 cell phone plans into about 10
  • Standardize civil rights training and purchases of cyber security products
  • Centralize civil rights, human resource, procurement, and property management functions

So, they were either getting gouged by suppliers (unlikely since there is negotiated pricing for the government) or the USDA’s “cyber-security” strategy was severely fragmented (and, thus, broken) enough that even finance folks could see the problem. Regardless of the source, it had to be pretty bad to make it to the top three of 27 immediate items (and called out in every sub-department press release) and even more so amongst over 160 initiatives that are being or have been put in place.

I still cannot find the details of the plan or budget analysis that went into the focus on cyber security products (links appreciated if you have them), but as private organizations continue their efforts to defend against existing and emerging threats, it might be worth looking at your strategy and spend a bit more closely. Would your infosec department be included in a similar list if your organization went through such a sweeping cost-cutting analysis program? Is your portfolio of security products as optimized as it can be? Could you use a budget sweep as an opportunity to leapfrog your security capabilities (e.g. move to whitelisting vs signature-based anti-malware) vs just pressure your existing vendors and re-negotiate contracts?

Unfortunately, the government being the government, I’m now even more concerned that the USDA may need to worry about increased infections on both the food-level and the “cyber” level.