

Speaker: Jennifer Bayuk

 

Based on work for Stevens Institute of Technology.

How do professional systems engineers work?

History:

  1. Mainframe
  2. physical security (punch cards)
  3. cables to terminals
  4. network to workstations (some data moves there & on floppies) *spike in misuse & abuse
  5. modems and dedicated links to external providers/partners
  6. added midrange servers (including e-mail)
  7. added dial-back procedures to modem
  8. e-mail & other issues begat firewalls
  9. firewalls begat the “port 80” problem
  10. modems expanded to the remote access issue
  11. remote access issue begat multi-factor auth
  12. then an explosion of midrange begat more malware
  13. internal infestation from web sites & more e-mail
  14. added proxy servers
  15. made anti-virus ubiquitous
  16. kicked in SSL on web servers that now host critical biz apps
  17. (VPN sneaks in for vendors & remote access)
  18. more customers begat identity management
  19. increasing attacks begat IDS
  20. formalized “policies” in technical security enforcement devices
  21. now we have data & access everywhere, begets log management
  22. data loss begat disk encryption on servers & workstations
  23. increasingly common app vulns begat WAFs

 

Reference: Stevens Inst. “systems thinking”

Use systemogram to show what systems are supposed to do (very cool visualization for differing views of “security systems thinking”)

Applied that systemogram model to a real-world example: the Stevens school computer lab

 

Shows the “Vee Model” (her diagram is more thorough – GET THE PRESENTATION)

 

Advantages of this approach include:

  • Manage complexity
  • Top-down requirements tracing
  • Black box modeling
  • Logical flow analysis
  • Documentation
  • Peer review
  • Detailed Communication

Must advance beyond the insidious threat->countermeasure cycle.

 

The traditional requirements process involves gathering functional requirements, interface definitions, and system-wide “ilities” – security needs to get in before the interface level (the high-level “black box”)

The major vulnerabilities are at the functional decomposition level

Many security vulns are introduced at the interface level as well

Unfortunately, security is usually put at the system-wide level (as is done with availability, etc.)

 

What Do Security Requirements Look Like Today?

  • Functional: what is necessary for mission assurance
  • Nonfunctional: what is necessary for system survival
  • V&V: what is necessary to ensure requirements are met

 

V&V: Verification: did we build it right? Validation: did we build the right thing? (akin to correctness & effectiveness)

There are more similarities than system architects really want to believe or understand.

 

Much of security metrics is really about verification rather than validation

 

Validation Criteria

  • content validity
  • face validity
  • criterion validity
  • construct validity

Speaker: Jared Pfost (@JaredPfost)

Framing: IT Security Metrics in an Enterprise

 

If metrics are valuable, why aren’t we measuring them? There’s virtually no research on them.

 

The Chase

  • Measuring metric program maturity would be easy, but not valuable
  • Metric programs aren’t a priority for enough CISOs for a benchmark to matter
  • Additional proof needed: correlate maturity and losses

Bottom line: which metrics impact actual loss?

 

Make a Difference?

  • Metrics don’t matter to enough people
  • Results would not inspire action
  • We need benchmarking to communicate security posture and key attributes of effective controls, and to hold control owners accountable with visibility

 

When you provide visibility into the efficacy of your environment it drives behaviour. (Even if it’s not necessarily the behaviour you wanted…at least they make a risk-based decision)

 

Metrics are too hard to get now :: get vendors to improve metric reports

 

Actions

  • Reactive: anytime a loss occurs, measure metric maturity (relevant to root cause)
  • Proactive: ITPI-type measurements needed; require metrics to be defined before budget approval
  • Good example: @Tripwire’s “Cost of Compliance”?

 

It’s crucial to effective compliance initiatives to have solid, real metrics that define program success.

 

UPDATE – 2011-02-26: Alfonso has posted his slides and BeeWise is open!

Speaker: Alfonso De Gregorio

How do we build a future in software security?

 

/me: the slides that will be posted have a ton of detail that Alfonso sped through. you’ll get a very good feel from them

 

Metrics are the servants of risk management and RM is about making decisions

we have incomplete information about # & severity of vulns

software products are highly defective and have no accountability

 

Bugs & Carrots

discussion around what software vendors are incented to do/why

features > security

bug fix > vuln fix

time to market > test/verify

 

M&Ms

(Markets & Metrics)

we need to put a cost on the software flaws with laws/regs & change in liability models

create feedback mechanisms (/me: open group work on security architecture?)

 

investment metrics to-date have challenges, especially in severity and probability of events

market-based metrics would provide a different context (e.g. stock market pricing)

create an infosec market?

  • bug challenges
  • auctions
  • vuln brokers
  • infosec insurance
  • exploit derivatives

 

info function / incentive function / risk balancing function efficiency – all factors in creating a vulnerability market

/me: make a table with bullets above as rows and factors list as columns to do a comparison

suggests an Exploit Derivatives market (futures contracts for vulns)
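/me: a rough sketch of how such a contract could work (illustrative numbers only, not from the talk) – a binary contract pays 1 if a qualifying exploit for product X is published before date T and 0 otherwise, so the market price doubles as an implied probability and a vendor can hedge by buying contracts:

// Illustrative only – not from the presentation.
// Binary "exploit derivative": pays 1 if a qualifying exploit for
// product X is published before date T, else 0.
var price = 0.18;                        // current market price per contract
var impliedProbability = price;          // with a 1/0 payoff, price ≈ the market's estimate of the probability
var contracts = 100000;                  // contracts a vendor buys to hedge its exposure
var hedgeCost = contracts * price;       // 18,000 up front
var payoutIfExploited = contracts * 1;   // 100,000 if the event occurs, offsetting response costs
// If no exploit appears, the premium is lost – but the price itself is the security signal.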

[side-talk: discussion about derivatives vs futures and how the profit incentives may be conflicting]

[side-talk: why would software companies pay attention to what seems to be a market that only makes speculators rich?]

[side-talk: is this legal? can we get this baked into contracts?]

[side-talk: conversation devolved into the responsibility of software companies]

[side-talk: interesting analogy: just as the airline industry needs to be in the oil futures market, software companies would need to be in this potential vuln/exploit market]

another example is weather derivatives

 

cites two examples of how prediction markets can incent change

cites tradesports.com and a FIFA prediction market

 

Speakers: Fruhwirth, Proschinger, Lendl, Savola

“On the use of name server log data as input for security measurements”

 

CERT.at ERT

  • coordinate security efforts & incident response for IT security problems on a national level in Austria
  • constituted of IT company security teams and local CERTs

 

Why name server data?

  • CERT.at is mandated to inform and facilitate comm.
  • DNS data is a rich data source, easily obtainable
  • DNS logs’ usefulness increases if you can get them from the largest number of users

 

DNS 101

  • gTLDs & ccTLDs
  • ccTLDs have local policies
  • Passive collection enables determination of IP addr changes, NS changes & domain/IP mappings, but it’s impractical to have sensors everywhere

 

DNS Reporting View

The DNS view is a matrix of stakeholders vs. the security chain/measurements:

Columns (stakeholders): CERT | Large Co | SME | User

Rows (security chain): Vuln | Exploit | Threat | Risk | Countermeasure | Incident

a third dimension – field of view – position in the DNS hierarchy changes the picture

 

4 example use cases CERT.at worked on:

Aurora

  • CnC server was based on dynamic DNS

/me: their matrix analysis makes it easy to see where DNS logs provided insight into signs of vuln, severity of threat per stakeholder, and whether it was something an org could have identified on their own (data source)

 

Conficker

Pseudorandom domains: variant B – 250 registrations/day, variant C – 450 .at domains/day

ACOnet CERT runs nameservers and a sinkhole

CERT.at uses the DNS data to generate warnings

/me: the table view shows that you can both detect with DNS and deploy countermeasures with DNS (and what org can do what)

 

Kaminsky DNS Bug

CERT.at used logs to score resolvers

score based on port changes per query & distinct ports per minute (higher score == better, i.e. better source-port randomization)

they were able to see how quickly servers were patched (very interesting view)

/me: the chart is a bit hard to read but it shows the difficulty of not having a larger view of DNS to help detect subtle issues like this one
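/me: a minimal sketch of the kind of scoring this implies (my reconstruction from the formula above, assuming hypothetical log records with resolver IP and source port – not CERT.at’s actual code): the more distinct source ports a resolver uses per query, the better its source-port randomization.

// Hypothetical reconstruction – not CERT.at's actual scoring code.
// queries: [{ resolver: "10.0.0.1", srcPort: 53211 }, ...] taken from name server logs
function scoreResolvers(queries) {
  var byResolver = {};
  queries.forEach(function (q) {
    var r = byResolver[q.resolver] || (byResolver[q.resolver] = { total: 0, ports: {} });
    r.total += 1;
    r.ports[q.srcPort] = true;
  });
  var scores = {};
  Object.keys(byResolver).forEach(function (ip) {
    var r = byResolver[ip];
    scores[ip] = Object.keys(r.ports).length / r.total; // ~1.0 = good randomization, near 0 = fixed port (likely unpatched)
  });
  return scores;
}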

 

Stuxnet

CnC attempts visible in DNS logs

/me: the chart shows that if you knew the domains, you could have detected in your own network
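/me: a minimal sketch of that kind of self-check (hypothetical log lines and made-up indicator domains, purely illustrative): grep your own resolver logs for the published CnC names.

// Purely illustrative – hypothetical log format and made-up indicator domains.
var knownBadDomains = ["cnc-example.dyndns.example", "another-cnc.example"];
function findHits(logLines) {
  return logLines.filter(function (line) {
    return knownBadDomains.some(function (d) { return line.indexOf(d) !== -1; });
  });
}
// Any hit means a host on your own network asked for a known CnC name.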

 

There are blind spots: lack of visibility in the top-down view; DNS can’t really show severity

info exchange on signs of exploited vulns; focus on info exchange of incidents

[side-talk: how do we incent folks to share data… “ask nicely!”…]

[side-talk: what % did not know who CERT.at was? 80% of crit infra knew; highly dependent on sector; CERT.at deliberately hired across sectors to help promote /me: good q]

[side-talk: good discussion on CERT practices; how they detect and then how they engage constituents]

Speaker: Juhani Eronen

“The Autoreporter Project” – Background

Goal: make Finland mostly harmless to the rest of the internet

(that’s actually in the law – Protection of Privacy in Electronic Comms/Finland)

 

/me: I’ll need to put some verbiage around this tonight to give you a good picture of what Juhani was conveying…really good description of their charter, goals, challenges, successes

 

What’s a “Finnish” system:

  • any autonomous system on Finnish soil, operated or owned by Finnish orgs
  • .fi and .ax domains
  • +358 telephone prefix
  • other networks owned by Finnish orgs
  • Finnish banks/brands/credit cards

 

Telcos mandated to report infosec incidents as well as major faults affecting users, networks or provider ability to operate

 

FICORA

Regulation for Finnish providers: basic security of facilities & processes, business continuity, spam blocking

  • Setup mandatory reporting for ISPs
  • Establish CERT-FI

 

Issues

Problem: Finland cleans up its own house, but they still end up getting attacked!

Problem: Most incidents are out of scope in mandated reporting

Problem: Establishing CERT-FI – no ownership or visibility of network; 3 ppl that in theory are expected to be there 7×24!

Huge increase in incidents [reported] from 2002-2006. It’s a pretty graph, but it really shows that the CERT-FI workforce increased and that processes were honed

 

How many incidents affect finnish networks?

How do we compare to neighbors? (would love to take a data-driven jab at the Swedes)

 

So, workforce, regulatory and other constraints & need for actionable data == make automated system.

 

2006: created automated system to capture incident reports (mostly malware) from various monitoring projects around the globe.

Daily reports, e-mailed in CSV format with pre-defined, agreed-upon subjects; digitally signed; reported incidents in the body.

 

How CERT-FI handles abuse:

  • detection
  • reports (e-mail/phone/fax) – Funny story: one woman printed out all the spam she received and sent to CERT-FI, until asked not to anymore.
  • Scraping feeds, normalizing/correlating data
  • Finding owners
    • map bad events to netblocks
    • maintain contact list (& contact prefs!)
    • manage customer expectations
  • Report out stats, trends, chronic cases
  • Assist in incident response

 

There are dozens of projects, data sources, blacklists etc but they vary in format (even timestamps), purpose, channel (IRC, http, ftp)

  • data is frequently missed due to downtime, system availability
  • info integrity is difficult to gauge
  • bugs in feeds data & reporting
  • wildly differing frequency of updates (realtime to monthly)
  • taxonomies are diverse
  • detail level not discrete
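/me: a minimal sketch of what the normalization step looks like (hypothetical feed names, formats and field names – not CERT-FI’s actual code): every feed’s timestamp and record layout gets mapped onto one common event shape before correlation and owner lookup.

// Purely illustrative – hypothetical feeds, not CERT-FI's actual formats.
function normalize(feedName, record) {
  switch (feedName) {
    case "feedA": // CSV-style line: "2011-02-08 13:05:00;81.22.33.44;conficker"
      var f = record.split(";");
      return { ts: Date.parse(f[0].replace(" ", "T") + "Z"), ip: f[1], type: f[2] };
    case "feedB": // JSON-style record with epoch seconds
      return { ts: record.time * 1000, ip: record.src_ip, type: record.malware };
    default:
      return null; // unknown feed – queue for manual handling
  }
}
// Downstream steps (map to netblocks, find owners, report out) then work on a single shape.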

 

Ensuring Focus of CERT-FI

  • What are we not seeing?
  • What should I prepare for?
  • Who is the target of damage & who is just collateral
  • Can the data/sources be trusted?

 

[side-talk: CERT-FI manages intake and the privacy laws make it difficult to delegate collection to the ISPs]

[side-talk: 5.5 million population in Finland, very high # of folks with internet access, everyone has a cell phone. internet considered a basic human right]

 

CERT-FI shows ISP incident graphs in comparison to other ISPs. /me: the embarrassment factor is a good motivator

interesting: conficker is still a problem

CERT-FI autoreporter can actually report out incidents per broadband customer (trending)

 

Abusehelper: http://code.google.com/p/abusehelper/wiki/README

Abuse Helper is a toolkit for CERT and Abuse teams. It is a modular, (hopefully) scalable and robust framework to help you in your abuse handling.

With Abuse Helper you can:

  • Retrieve Internet Abuse Handling related information via several sources which are
    • near-real-time (such as IRC)
    • periodic (such as Email reports), or
    • request/response (such as HTTP).
  • You can then aggregate that information based on different keys, such as AS numbers or country codes
  • Send out reports in different formats, via different transports and using different timings

Abuse Helper features include:

  • Fully modular (you can utilize different readers, parsers, transports, splitters, combiners in a pipe-like manner)
  • Scalable: you can distribute the work to different machines and different geolocations
  • Observable: you can use your favourite XMPP client to observe the bots at work

 

Great overall presentation for the rationale to report incidents outside your org

Speaker: Chris Eng / Veracode

Every major infosec company publishes quarterly/yearly summary reports. Some based on survey, some based on real captured data.

Recognizing the Narrative

Every fancy-looking infosec metrics report is a marketing vehicle; each has a different perspective and there’s no consistency, but you can usually figure out the framing by looking at the exec summary or ToC; other times it may require real digging. You need to understand “what they are selling” – the text in the report is there to back up the narrative.

 

Veracode Report Narrative

  • More than half of all software failed to achieve an acceptable level of security
  • 3rd party apps had lowest security quality
  • No single method of testing is accurate

(goal: use Veracode to analyze third party apps :-)

 

Trustwave Report Narrative

  • 2010 incident response investigations
  • attack vector evolution
  • 11 strategic initiatives for 2011

(goal: “we can help…we are good at this stuff”)


WhiteHat Report

  • Which web programming languages are most secure

(differs in goal from previous WH reports)

 

Bottom line: try to understand the framing goal when reviewing the narrative

 

Using Stats Responsibly

Sample distribution review/discussion

normal distribution curves can still vary (std deviation & mean), but the overall shape remains the same

bimodal distribution (two peaks)…you may miss it if you report only on averages
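/me: a quick illustration of the point (made-up numbers): two very different samples can share the same average, so reporting only the mean hides the two peaks.

// Made-up data showing how an average hides a bimodal distribution.
var unimodal = [48, 49, 50, 50, 51, 52]; // one peak around 50
var bimodal  = [10, 11, 12, 88, 89, 90]; // two peaks, nothing near 50
function mean(xs) { return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length; }
console.log(mean(unimodal)); // 50
console.log(mean(bimodal));  // 50 – identical average, completely different story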

[game: Guess the Report Jeopardy! used primarily to show the pervasiveness of the use of averages]

[side-talk: discussion about different distributions by different sources]

 

(/me: this is very interesting)

Would a table of # of flaws per 1K lines of code per language be enough?

Would adding 1st quartile, median and 3rd quartile provide more insight?

Will this help understand the anomalies? Will it help prioritize?

How do we ensure normalized data for comparison?
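/me: a minimal sketch of adding quartiles to a flaws-per-KLOC table (made-up numbers; quartile conventions vary):

// Made-up flaw-density samples (flaws per 1K lines of code) per language.
var flawsPerKloc = { java: [0.2, 0.4, 0.5, 0.9, 4.1], php: [0.3, 0.6, 1.8, 2.0, 9.5] };
function quantile(xs, q) {
  var s = xs.slice().sort(function (a, b) { return a - b; });
  var pos = (s.length - 1) * q, lo = Math.floor(pos), frac = pos - lo;
  return lo + 1 < s.length ? s[lo] + frac * (s[lo + 1] - s[lo]) : s[lo];
}
Object.keys(flawsPerKloc).forEach(function (lang) {
  var xs = flawsPerKloc[lang];
  console.log(lang, "Q1:", quantile(xs, 0.25), "median:", quantile(xs, 0.5), "Q3:", quantile(xs, 0.75));
});
// The quartiles expose the long tail (the 4.1 and 9.5 outliers) that an average alone would hide.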

[side-talk: what’s a “line of code”…same problem in app bug analysis]

[side-talk: Truth in stats: “What’s the question? What matters?”]

Can you overdo it? Yes.

 

[game: continued]

Power analysis can be used to determine the statistically significant sample size required to ensure the probability of error is acceptable.
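/me: a hedged sketch of the arithmetic behind that (a textbook two-proportion sample-size calculation, not whatever any particular report actually used): given the difference you expect to detect and your target error rates, it gives the per-group sample size needed before the difference is statistically meaningful.

// Simplified two-proportion sample-size calculation (normal approximation).
// Textbook arithmetic only – not a specific vendor's methodology.
function sampleSizePerGroup(p1, p2, zAlpha, zBeta) {
  // zAlpha ≈ 1.96 for a 5% two-sided error, zBeta ≈ 0.84 for 80% power
  var variance = p1 * (1 - p1) + p2 * (1 - p2);
  var effect = p1 - p2;
  return Math.ceil(Math.pow(zAlpha + zBeta, 2) * variance / (effect * effect));
}
// e.g. detecting a drop in "apps with at least one flaw" from 60% to 50%:
console.log(sampleSizePerGroup(0.6, 0.5, 1.96, 0.84)); // ≈ 385 apps per group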

Should you really include non-statistically significant data? “To asterisk or not to asterisk?”

It’s hard to un-see something after you see it (/me: good point)

[side-talk: show cell counts as well as %-ages; don’t use a bar chart when a crosstab is more useful]

[side-talk: we should follow guidance from social services in terms of how to present data for action]

 

Storytelling Via Omission

[side-talk: no report provided raw data]

What unwanted assumptions might result if the “wrong” data is included?

 

We need to provide access to raw data even though the majority of the population of consumers don’t want it.

Veracode will open up its analytics platform to security researchers :: veracode.com/analytics

[side-talk: Every company that publishes a report needs to publish name and contact info of their stats person who will backup the processed & data used]

[side-talk: is “truth” really what infosec companies want to promote in their reports? @alexhutton: isn’t that ?]

Better management through better measurement
Speakers: Wade Baker, Alex Hutton, and Chris Porter

State of the industry: are we a science or pseudoscience?

  • random fact gathering
  • morass of interesting, trivial, and irrelevant observations
  • variety of theories that provide little guidance to data gathering

 

Sources of knowledge under “risk” aggregate:

  • asset landscape
  • impact landscape
  • threat landscape
  • controls landscape

 

Risk Management:

Need to move from evidence-based practices (state of nature) to state of knowledge (lists, simple derived models w/ad-hoc monitoring, formal modeling) to wisdom (accomplishment, outcomes, constructs for decision making)

 

[side-talk: different perspectives on risk at different levels of the company]

[side-talk: science as data vs science as method…should we have a systematic method? do methods just help acquire the state of nature?]

 

VERIS Framework

VZ A4 threat model

  • Agent: whose actions affected the asset
  • Action: what actions affected the asset
  • Asset: which assets were affected
  • Attribute: how asset was affected

A set of metrics designed to provide a common language for describing security incidents (or threats) in a structured/repeatable manner; overall goal: a foundation for risk mgmt…data-driven decisions!

reduce risk; reduce spending
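/me: a sketch of how one incident might be captured along the A4 axes (field names are my own illustration, not the actual VERIS schema):

// Illustrative only – field names are mine, not the actual VERIS schema.
var incident = {
  agent:     { type: "external", motive: "financial" },       // whose actions affected the asset
  action:    { type: "hacking", vector: "SQL injection" },    // what actions affected the asset
  asset:     { type: "server", role: "web application" },     // which assets were affected
  attribute: { type: "confidentiality", detail: "records disclosed" } // how the asset was affected
};
// Recording every incident in the same structure is what makes counts,
// comparisons and control mappings repeatable.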

 

VERIS Community

1921 submissions to the VERIS community since November; the majority from probes and attacks; ~60 genuine incident submissions

[side-talk: why is VZ a player? mainly due to cybertrust acquisition; interesting discussion of why/how VZ views security as so important/strategic; product of converging IT & security practices]

 

VERIS Detailed Analysis

Chris explained some of the intricacies and dug a bit deeper. Really need the slides. /me: this is why you should have been at Metricon and not at yet-another cloud preso

“why group servers with apps instead of network devices?” – natural grouping since apps run on servers; often folks use “app” when it was really “server” – i.e. “my app got attacked” is more likely your “server got hacked”.

[side-talk: scenarios impacting assets; discussion about nuances between avail & util]

Can use this detailed analysis to map back to controls that would be relevant to this scenario (and potentially which ones failed or were missing completely)

Enables mapping of action types to identified vulnerabilities which can help prioritize actions to mitigate

[side-talk: how VZ constructs event chains for each attack]

 

A vision of EBRM Metrics

@alexhutton – baseball metrics view for exec dashboard. sample: frequency of incidents; peer comparison & gauge of impact :: can learn much from Jack Jones’ threat descriptions (/me: and I would argue the impact $ banding)

at the very least this will give us the ability to mature how we estimate loss value;

awesome point how this is really not like baseball: we don’t have comprehensive data like batter stats.

I wanted to play with the AwesomeChartJS library and figured an interesting way to do that was to use it to track Microsoft Security Bulletins this year. While I was drawn in by just how simple it is to craft basic charts, that simplicity really only makes it useful for simple data sets. So, while I’ve produced three different views of Microsoft Security Bulletins for 2011 (to-date, and in advance of February’s Patch Tuesday), it would not be a good choice for a running per-month comparison between past years and 2011. The authors self-admit that there are [deliberate] limitations and point folks to the most excellent flot library for more sophisticated analytics (which I may feature in March).

The library itself only works within an HTML5 environment (one of the reasons I chose it) and uses a separate <canvas> element to house each chart. After loading up the library itself in a script tag:

<script src="/b/js/AwesomeChartJS/awesomechart.js" type="application/javascript"></script>

(which is ~32K un-minified) you then declare a canvas element:

<canvas id="canvas1" width="400" height="300"></canvas>


and use some pretty straightforward JavaScript (no dependency on jQuery or other large frameworks) to do the drawing:

var mychart = new AwesomeChart('canvas1');
mychart.title = "Microsoft Security Bulletins Raw Count By Month - 2011";
mychart.data = [2, 12];
mychart.colors = ["#0000FF","#0000FF"];
mychart.labels = ["January", "February"];
mychart.draw();

It’s definitely worth a look if you have simple charting needs.

Regrettably, it looks like February is going to be a busy month for Windows administrators.
