
Category Archives: Metrics

IT Security Metrics: A Practical Framework for Measuring Security & Protecting Data has solid reviews by Richard Bejtlich (@TaoSecurity), David J. Elfering (@icxc) & Dr. Anton Chuvakin (@anton_chuvakin), amongst others. You can get it (for a short time) for just about fourteen Washingtons by doing the following.

First, go to this Amazon link and enter “ETXTBOOK” (no quotes) as the code, and you’ll get a credit of $10.00USD for Amazon Kindle textbooks. That credit expires on January 9th, 2012, btw.

Now, if you view IT Security Metrics: A Practical Framework for Measuring Security & Protecting Data on Amazon and order it (again, by January 9th, 2012), it will cost you a whole ~$14.00USD

NOTE: This is a re-post from a topic I started on the SecurityMetrics & SIRA mailing lists. Wanted to broaden the discussion to anyone not on those (and, why aren’t you on them?)

I had not heard the term micromort prior to listening to David Spiegelhalter’s Do Lecture and the concept of it really stuck in my (albeit thick) head all week.

I haven’t grabbed the paper yet, but the abstract for “Microrisks for Medical Decision Analysis” seems to extrapolate directly to the risks we face in infosec:

“Many would agree on the need to inform patients about the risks of medical conditions or treatments and to consider those risks in making medical decisions. The question is how to describe the risks and how to balance them with other factors in arriving at a decision. In this article, we present the thesis that part of the answer lies in defining an appropriate scale for risks that are often quite small. We propose that a convenient unit in which to measure most medical risks is the microprobability, a probability of 1 in 1 million. When the risk consequence is death, we can define a micromort as one microprobability of death. Medical risks can be placed in perspective by noting that we live in a society where people face about 270 micromorts per year from interactions with motor vehicles.

Continuing risks or hazards, such as are posed by following unhealthful practices or by the side-effects of drugs, can be described in the same micromort framework. If the consequence is not death, but some other serious consequence like blindness or amputation, the microrisk structure can be used to characterize the probability of disability.

Once the risks are described in the microrisk form, they can be evaluated in terms of the patient’s willingness-to-pay to avoid them. The suggested procedure is illustrated in the case of a woman facing a cranial arteriogram of a suspected arterio-venous malformation. Generic curves allow such analyses to be performed approximately in terms of the patient’s sex, age, and economic situation. More detailed analyses can be performed if desired.

Microrisk analysis is based on the proposition that precision in language permits the soundness of thought that produces clarity of action and peace of mind.”

When my CC is handy and I feel like giving up some privacy I’ll grab the whole paper, but the correlations seem pretty clear from just that bit.

I must have missed Schneier’s blog post about it earlier this month where he links to understandinguncertainty.org/micromorts which links to plus.maths.org/content/os/issue55/features/risk/index (apologies for the link leapfrogging, but it provides background context that I did not have prior).

At a risk to my credibility, I’ll add another link to a Wikipedia article that lists some actual micromorts and include a small sample here:

Risks that increase the annual death risk by one micromort, and their associated cause of death:

  • smoking 1.4 cigarettes (cancer, heart disease)
  • drinking 0.5 liter of wine (cirrhosis of the liver)
  • spending 1 hour in a coal mine (black lung disease)
  • spending 3 hours in a coal mine (accident)
  • living 2 days in New York or Boston (air pollution)

I asked on Twitter if anyone thought we had an equivalent – a “micropwn”, say – for our discipline. Do we have enough high-level data to produce a generic micropwn for something like:

  • 1 micropwn for every 3 consecutive days of missed DAT updates
  • 1 micropwn for every 10 Windows desktops with users with local Administrator privileges
  • 1 micropwn for every 5 consecutive days of missed IDS/IDP signature updates
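The arithmetic behind a list like that is simple enough to sketch. A minimal illustration (all rates below are the made-up examples from the list, not real measurements):

```python
# Hypothetical micropwn tally using the made-up rates from the list above.
# None of these rates are real measurements; they only illustrate the arithmetic.

MICROPWN_RATES = {
    "days_missed_dat_updates": 1 / 3,   # 1 micropwn per 3 consecutive days
    "desktops_local_admin": 1 / 10,     # 1 micropwn per 10 such desktops
    "days_missed_ids_sigs": 1 / 5,      # 1 micropwn per 5 consecutive days
}

def micropwns(exposures):
    """Sum micropwns (1-in-a-million chances of compromise) across exposures."""
    return sum(MICROPWN_RATES[k] * v for k, v in exposures.items())

total = micropwns({
    "days_missed_dat_updates": 6,   # 2 micropwns
    "desktops_local_admin": 50,     # 5 micropwns
    "days_missed_ids_sigs": 10,     # 2 micropwns
})
print(total)   # ~9 micropwns, i.e. roughly a 9-in-a-million chance of "a pwn"
```

Just like micromorts, the unit only becomes useful once the rates themselves are backed by broad incident data.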

Just like on the medical side of things, the micropwn calculation can be refined depending on the level of detail. For example (these are all made up for medicine):

  • 1 micromort for smoking 0.5 cigarettes if you are an overweight man in his 50’s
  • 1 micromort for smoking 0.25 cigarettes if you are an overweight man in his 50’s with a family genetic history of lung cancer

(again, I don’t have the paper, but the abstract seems to suggest this is how medical micromorts work)

Similarly, the micropwn calculation could get more granular by factoring in type of industry, geographic location, breach history, etc.

Also, a micropwn (just like a micromort) doesn’t necessarily mean a “catastrophic” breach (I dislike that word as I think of it as a broad term when most folks associate it directly with sensitive record loss). It could mean a successful malware infection, in my view.

So, to further refine the question I originally posed on Twitter: Do we have enough broad data to provide input for micropwn calculations and can we define a starter-list of micropwns that would prove valuable in helping articulate risk within and outside our discipline?

You were right… (UPDATED with PK talk appearance)

Speaker: Jennifer Bayuk

 

Based on work for Stevens Institute of Technology.

How do professional systems engineers work?

History:

  1. Mainframe
  2. physical security (punch cards)
  3. cables to terminals
  4. network to workstations (some data moves there & on floppies) *spike in misuse & abuse
  5. modems and dedicated links to external providers/partners
  6. added midrange servers (including e-mail)
  7. added dial-back procedures to modem
  8. e-mail & other issues begat firewalls
  9. firewalls begat the “port 80” problem
  10. modems expanded to the remote access issue
  11. remote access issue begat multi-factor auth
  12. then an explosion of midrange begat more malware
  13. internal infestation from web sites & more e-mail
  14. added proxy servers
  15. made anti-virus ubiquitous
  16. kicked in SSL on web servers that now host critical biz apps
  17. (VPN sneaks in for vendors & remote access)
  18. more customers begat identity management
  19. increasing attacks begat IDS
  20. formalized “policies” in technical security enforcement devices
  21. now we have data & access everywhere, begets log management
  22. data loss begat disk encryption on servers & workstations
  23. increasingly common app vulns begat WAFs

 

Reference: Stevens Inst. “systems thinking”

Used a systemogram to show what systems are supposed to do (very cool visualization for differing views of “security systems thinking”)

Applied that systemogram model to a real-world example of the Stevens school computer lab

 

Shows the “Vee Model” (her diagram is more thorough – GET THE PRESENTATION)

 

Advantages of this approach include:

  • Manage complexity
  • Top-down requirements tracing
  • Black box modeling
  • Logical flow analysis
  • Documentation
  • Peer review
  • Detailed Communication

We must advance and move beyond the insidious threat → countermeasure cycle.

 

The traditional requirements process involves gathering functional requirements, interface definitions and system-wide “ilities” – security needs to get in before the interface level (high-level “black box”)

The major vulnerabilities are at the functional decomposition level

Many security vulns are introduced at the interface level as well

Unfortunately, it’s usually put at the system-wide level (as is done with availability, etc.)

 

What Do Security Requirements Look Like Today?

  • Functional – what is necessary for mission assurance
  • Nonfunctional: what is necessary for system survival
  • V&V: what is necessary to ensure requirements are met

 

V&V – Verification: did we build the system right? Validation: did we build the right system? (akin to correctness & effectiveness)

There are more similarities than system architects really want to believe or understand.

 

Much of security metrics is really verification vs. validation

 

Validation Criteria

  • content
  • face
  • criterion
  • construct

Speaker: Jared Pfost (@JaredPfost)

Framing: IT Security Metrics in an Enterprise

 

If metrics are valuable, why aren’t we measuring them? There’s virtually no research on them.

 

The Chase

  • Measuring metric program maturity would be easy, but not valuable
  • Metric programs aren’t a priority for enough CISOs for a benchmark to matter
  • Additional proof needed: correlate maturity and losses

Bottom line: which metrics impact actual loss?

 

Make a Difference?

  • Metrics don’t matter to enough people
  • Results would not inspire action
  • We need benchmarking to communicate security posture and key attributes of effective controls, and to hold control owners accountable with visibility

 

When you provide visibility into the efficacy of your environment it drives behaviour. (Even if it’s not necessarily the behaviour you wanted…at least they make a risk-based decision)

 

Metrics are too hard to get now :: get vendors to improve metric reports

 

Actions

  • Reactive: anytime a loss occurs, measure metric maturity (relevant to root cause)
  • Proactive: ITPI-type measurements needed; require metrics to be defined before budget approval
  • Good example: @Tripwire’s “Cost of Compliance”?

 

It’s crucial to effective compliance initiatives to have solid, real metrics that define program success.

 

UPDATE – 2011-02-26: Alfonso has posted his slides and BeeWise is open!

Speaker: Alfonso De Gregorio

How do we build a future in software security?

 

/me: the slides that will be posted have a ton of detail that Alfonso sped through. you’ll get a very good feel from them

 

Metrics are the servants of risk management and RM is about making decisions

we have incomplete information about # & severity of vulns

software products are highly defective and have no accountability

 

Bugs & Carrots

discussion around what software vendors are incented to do/why

features > security

bug fix > vuln fix

time to market > test/verify

 

M&Ms

(Markets & Metrics)

we need to put a cost on the software flaws with laws/regs & change in liability models

create feedback mechanisms (/me: open group work on security architecture?)

 

investment metrics to-date have challenges, especially in severity and probability of events

market-based metrics would provide a different context (e.g. stock market pricing)

create an infosec security market?

  • bug challenges
  • auctions
  • vuln brokers
  • infosec insurance
  • exploit derivatives

 

info function / incentive function / risk balancing function efficiency – all factors in creating a vulnerability market

/me: make a table with bullets above as rows and factors list as columns to do a comparison

suggests an Exploit Derivatives market (futures contracts for vulns)

[side-talk: discussion about derivatives vs. futures and how the profit incentives may be conflicting]

[side-talk: what will make software companies pay attention to what seems to be a market that only makes speculators rich?]

[side-talk: is this legal? can we get this baked into contracts?]

[side-talk: degraded convo down to responsibility of software companies]

[side-talk: interesting analogy to the airline industry needing to be in the oil futures market to software companies needing to be in this potential vuln/exploit market]

another example is weather derivatives

 

cites two examples of how prediction markets can incent change

cites tradesports.com and a FIFA prediction market

 

Speakers: Fruhwirth, Proschinger, Lendl, Savola

“On the use of name server log data as input for security measurements”

 

CERT.at ERT

  • coordinates security efforts & incident response for IT security problems on a national level in Austria
  • constituted of IT company security teams and local CERTs

 

Why name server data?

  • CERT.at is mandated to inform and facilitate comm.
  • DNS data is a rich data source, easily obtainable
  • DNS logs’ usefulness increases if you can get them from the largest number of users

 

DNS 101

  • gTLDs & ccTLDs
  • ccTLDs have local policies
  • Passive collection will enable determination of IP addr changes, NS changes & domains/IP, but it’s impracticable to have sensors everywhere

 

DNS Reporting View

The DNS view is a matrix of the security chain/measurements against stakeholders:

Rows: Vuln, Exp, Threat, Risk, Countermeasure, Incident
Columns: CERT | Large Co | SME | User

third dimension – field of view – DNS hierarchy changes view picture

 

4 example use cases CERT.at worked on:

Aurora

  • CnC server was based on dynamic DNS

/me: their matrix analysis makes it easy to see where DNS logs provided insight into signs of vuln, severity of threat per stakeholder, whether it was something an org could have identified on their own (data source)

 

Conficker

Pseudorandom domains: B – 250 regs/day, C – 450 .at domains/day

Aconet CERT runs nameservers and a sinkhole

CERT.at uses the DNS data to generate warnings

/me: the table view shows that you can both detect with DNS and deploy countermeasures with DNS (and what org can do what)
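The detection half of that workflow is easy to picture in code. A minimal sketch of matching DNS query logs against a feed of known Conficker rendezvous domains (the domain names and log format below are invented for illustration, not CERT.at's actual data):

```python
# Match DNS query logs against a feed of known Conficker rendezvous domains and
# emit one warning per querying client. Domains and log format are hypothetical.

KNOWN_CONFICKER_DOMAINS = {"abcdef.at", "qwerty.at", "zxcvbn.at"}  # stand-in feed

def warnings_from_dns_log(log):
    """log: iterable of (client_ip, queried_domain) tuples."""
    infected = {}
    for client, domain in log:
        if domain in KNOWN_CONFICKER_DOMAINS:
            infected.setdefault(client, set()).add(domain)
    return [f"{ip}: queried {len(doms)} known C&C domain(s)"
            for ip, doms in sorted(infected.items())]

log = [("10.0.0.5", "example.at"), ("10.0.0.5", "abcdef.at"),
       ("10.0.0.9", "qwerty.at"), ("10.0.0.9", "zxcvbn.at")]
for warning in warnings_from_dns_log(log):
    print(warning)
```

The same match list works for countermeasures too: point those domains at a sinkhole instead of just logging them.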

 

Kaminsky DNS Bug

CERT.at used logs to score resolvers

score = port changes/queries & ports/min  (higher score == better)

they were able to see how quickly servers were patched (very interesting view)

/me: the chart is a bit hard to read but it shows the difficulty of not having a larger view of DNS to help detect subtle issues like this one
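The scoring idea translates to a short sketch: from name-server logs, score each resolver by how much it varies its DNS source ports, since a Kaminsky-patched resolver randomizes them. The log tuple layout here is my assumption, not CERT.at's actual format:

```python
# Score resolvers by source-port randomization: more distinct ports per query
# means a higher score (likely patched). Log format is invented for illustration.
from collections import defaultdict

def score_resolvers(log_lines):
    """log_lines: iterable of (resolver_ip, src_port) pairs from DNS query logs."""
    queries = defaultdict(int)
    ports = defaultdict(set)
    for resolver, port in log_lines:
        queries[resolver] += 1
        ports[resolver].add(port)
    # "port changes / queries": fraction of queries arriving on a distinct port
    return {r: len(ports[r]) / queries[r] for r in queries}

logs = [
    ("198.51.100.1", 53), ("198.51.100.1", 53), ("198.51.100.1", 53),          # fixed port
    ("198.51.100.2", 1025), ("198.51.100.2", 49152), ("198.51.100.2", 61301),  # randomized
]
scores = score_resolvers(logs)
# 198.51.100.1 scores ~0.33 (suspect); 198.51.100.2 scores 1.0 (likely patched)
```

Tracked over time, a score like this shows exactly the patch-adoption curve the talk described.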

 

Stuxnet

CnC attempts visible in DNS logs

/me: the chart shows that if you knew the domains, you could have detected in your own network

 

There are blind spots: lack of visibility in the top-down view; DNS can’t really show severity

info exchange on signs of exploited vulns; focus on info exchange of incidents

[side-talk: how do we incent folks to share data… “ask nicely!”…]

[side-talk: what % did not know who CERT.at was? 80% of crit infra knew; highly dependent on sector; CERT.at deliberately hired across sectors to help promote /me: good q]

[side-talk: good discussion on CERT practices; how they detect and then how they engage constituents]

Speaker: Juhani Eronen

“The Autoreporter Project” – Background

Goal: make Finland mostly harmless to the rest of the internet

(that’s actually in the law – Protection of Privacy in Electronic Comms/Finland)

 

/me: I’ll need to put some verbiage around this tonight to give you a good picture of what Juhani was conveying… really good description of their charter, goals, challenges, successes

 

What’s a “Finnish” system:

  • any autonomous system on Finnish soil, operated or owned by Finnish orgs
  • .fi & .ax domains
  • +358 telephone prefix
  • other networks owned by Finnish orgs
  • Finnish banks/brands/CCs

 

Telcos are mandated to report infosec incidents as well as major faults affecting users, networks or the provider’s ability to operate

 

FICORA

Regulation for finnish security providers: Basic security of facilities & processes, Business continuity, spam blocking

  • Setup mandatory reporting for ISPs
  • Establish CERT-FI

 

Issues

Problem: Finland cleans up its own house, but they still end up getting attacked!

Problem: Most incidents are out of scope in mandated reporting

Problem: Establishing CERT-FI – no ownership or visibility of the network; 3 people who in theory are expected to be there 7×24!

Huge increase in incidents [reported] from 2002-2006. It’s a pretty graph, but it really shows that the CERT-FI workforce increased and that processes were honed

 

How many incidents affect finnish networks?

How do we compare to our neighbors? (Would love to take a data-driven jab at the Swedes.)

 

So, workforce, regulatory and other constraints & the need for actionable data == build an automated system.

 

2006: created an automated system to capture incident reports (mostly malware) from various monitoring projects around the globe.

Daily reports, e-mailed in CSV format with pre-defined, agreed-upon subjects; digitally signed; incidents reported in the body.
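A report in that shape is trivial to consume programmatically, which is the whole point of agreeing on the format up front. An illustrative parser (the column names below are my assumptions; the actual agreed-upon format isn't shown in the talk notes):

```python
# Parse a daily CSV incident report of the kind described above.
# Column names are hypothetical, invented for this sketch.
import csv
import io

SAMPLE_REPORT = """\
timestamp,src_ip,asn,incident_type
2006-05-01T03:12:00Z,192.0.2.10,AS64500,malware
2006-05-01T04:47:00Z,192.0.2.77,AS64501,malware
"""

def parse_report(text):
    """Return one dict per reported incident."""
    return list(csv.DictReader(io.StringIO(text)))

incidents = parse_report(SAMPLE_REPORT)
print(len(incidents), incidents[0]["incident_type"])
```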

 

How CERT-FI handles abuse:

  • detection
  • reports (e-mail/phone/fax) – Funny story: one woman printed out all the spam she received and sent it to CERT-FI, until asked not to anymore.
  • Scraping feeds, normalizing/correlating data
  • Finding owners
    • Map bad events to netblocks
    • maintain contact list (& contact prefs!)
    • manage customer expectations
  • Report out stats, trends, chronic cases
  • Assist in incident response

 

There are dozens of projects, data sources, blacklists, etc., but they vary in format (even timestamps), purpose and channel (IRC, HTTP, FTP)

  • data is frequently missed due to downtime, system availability
  • info integrity is difficult to gauge
  • bugs in feeds data & reporting
  • wildly differing frequency of updates (realtime to monthly)
  • taxonomies are diverse
  • detail level not discrete

 

Ensuring Focus of CERT-FI

  • What are we not seeing?
  • What should I prepare for?
  • Who is the target of damage & who is just collateral?
  • Can the data/sources be trusted?

 

[side-talk: CERT-FI manages intake and the privacy laws make it difficult to delegate collection to the ISPs]

[side-talk: 5.5 million population in Finland, very high # of folks with internet access, everyone has a cell phone. Internet considered a basic human right]

 

CERT-FI shows ISP incident graphs in comparison to other ISPs. /me: the embarrassment factor is a good motivator

interesting: Conficker is still a problem

CERT-FI autoreporter can actually report out incidents per broadband customer (trending)

 

Abusehelper: http://code.google.com/p/abusehelper/wiki/README

Abuse Helper is a toolkit for CERT and abuse teams. It is a modular, (hopefully) scalable and robust framework to help you in your abuse handling.

With Abuse Helper you can:

  • Retrieve Internet Abuse Handling related information via several sources which are
    • near-real-time (such as IRC)
    • periodic (such as Email reports), or
    • request/response (such as HTTP).
  • You can then aggregate that information based on different keys, such as AS numbers or country codes
  • Send out reports in different formats, via different transports and using different timings

Abuse Helper features include:

  • Fully modular (you can utilize different readers, parsers, transports, splitters, combiners in a pipe-like manner)
  • Scalable: you can distribute the work to different machines and different geolocations
  • Observable: you can use your favourite XMPP client to observe the bots at work
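To make the "pipe-like manner" concrete: this is not AbuseHelper's actual API, just a toy sketch of the modular idea, a reader, a parser and an aggregator composed in sequence, each swappable independently:

```python
# Toy illustration of AbuseHelper-style pipe composition (not its real API).

def reader(raw_feed):
    """Source stage: yields raw report lines (stand-in for IRC/e-mail/HTTP feeds)."""
    yield from raw_feed

def parser(lines):
    """Normalize each raw line into a record."""
    for line in lines:
        ip, country = line.split(",")
        yield {"ip": ip, "country": country}

def aggregate_by(records, key):
    """Aggregate records on a key such as a country code or AS number."""
    out = {}
    for rec in records:
        out.setdefault(rec[key], []).append(rec["ip"])
    return out

feed = ["192.0.2.1,FI", "192.0.2.2,SE", "192.0.2.3,FI"]
by_country = aggregate_by(parser(reader(feed)), "country")
print(by_country)
```

Swapping the reader for a periodic e-mail fetcher, or the aggregator key for AS numbers, changes one stage without touching the rest; that is the modularity the feature list is describing.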

 

Great overall presentation on the rationale for reporting incidents outside your org