Speaker: Jennifer Bayuk

 

Based on work for Stevens Institute of Technology.

How do professional systems engineers work?

History:

  1. Mainframe
  2. physical security (punch cards)
  3. cables to terminals
  4. network to workstations (some data moves there & on floppies) *spike in misuse & abuse
  5. modems and dedicated links to external providers/partners
  6. added midrange servers (including e-mail)
  7. added dial-back procedures to modem
  8. e-mail & other issues begat firewalls
  9. firewalls begat the “port 80” problem
  10. modems expanded to the remote access issue
  11. remote access issue begat multi-factor auth
  12. then an explosion of midrange begat more malware
  13. internal infestation from web sites & more e-mail
  14. added proxy servers
  15. made anti-virus ubiquitous
  16. kicked in SSL on web servers that now host critical biz apps
  17. (VPN sneaks in for vendors & remote access)
  18. more customers begat identity management
  19. increasing attacks begat IDS
  20. formalized “policies” in technical security enforcement devices
  21. now we have data & access everywhere, begets log management
  22. data loss begat disk encryption on servers & workstations
  23. increasingly common app vulns begat WAFs

 

Reference: Stevens Inst. “systems thinking”

Uses a systemogram to show what systems are supposed to do (a very cool visualization for differing views of “security systems thinking”)

Applied that systemogram model to a real-world example: Stevens’ school computer lab

 

Shows the “Vee Model” (her diagram is more thorough – GET THE PRESENTATION)

 

Advantages of this approach include:

  • Manage complexity
  • Top-down requirements tracing
  • Black box modeling
  • Logical flow analysis
  • Documentation
  • Peer review
  • Detailed Communication

We must advance beyond the insidious threat->countermeasure cycle.

 

The traditional requirements process involves gathering functional requirements, interface definitions, and system-wide “ilities” – security needs to get in before the interface level (the high-level “black box”)

The major vulnerabilities are at the functional decomposition level

Many security vulns are introduced at the interface level as well

Unfortunately, security is usually put at the system-wide level (as is done with availability, etc.)

 

What Do Security Requirements Look Like Today?

  • Functional: what is necessary for mission assurance
  • Nonfunctional: what is necessary for system survival
  • V&V: what is necessary to ensure requirements are met

 

V&V: Verification: did we build it right? Validation: did we build the right thing? (akin to correctness & effectiveness)

There are more similarities than system architects really want to believe or understand.

 

Much of security metrics is really verification rather than validation

 

Validation Criteria

  • content
  • face
  • criterion
  • construct

Speaker: Jared Pfost (@JaredPfost)

Framing: IT Security Metrics in an Enterprise

 

If metrics are valuable, why aren’t we measuring them? There’s virtually no research on them.

 

The Chase

  • Measuring metric program maturity would be easy, but not valuable
  • Metric programs aren’t a priority for enough CISOs for a benchmark to matter
  • Additional proof needed: correlate maturity and losses

Bottom line: which metrics impact actual loss?

 

Make a Difference?

  • Metrics don’t matter to enough people
  • Results would not inspire action
  • We need benchmarking to communicate security posture and key attributes of effective controls, and to hold control owners accountable via visibility

 

When you provide visibility into the efficacy of your environment it drives behaviour. (Even if it’s not necessarily the behaviour you wanted…at least they make a risk-based decision)

 

Metrics are too hard to get now :: get vendors to improve metric reports

 

Actions

  • Reactive: anytime a loss occurs, measure metric maturity (relevant to root cause)
  • Proactive: ITPI-type measurements needed; require metrics to be defined before budget approval
  • Good example: @Tripwire’s “Cost of Compliance”?

 

It’s crucial to effective compliance initiatives to have solid, real metrics that define program success.

 

UPDATE – 2011-02-26: Alfonso has posted his slides and BeeWise is open!

Speaker: Alfonso De Gregorio

How do we build a future in software security?

 

/me: the slides that will be posted have a ton of detail that Alfonso sped through. you’ll get a very good feel from them

 

Metrics are the servants of risk management and RM is about making decisions

we have incomplete information about # & severity of vulns

software products are highly defective, and there is no accountability

 

Bugs & Carrots

discussion around what software vendors are incented to do/why

features > security

bug fix > vuln fix

time to market > test/verify

 

M&Ms

(Markets & Metrics)

we need to put a cost on software flaws via laws/regs & a change in liability models

create feedback mechanisms (/me: open group work on security architecture?)

 

investment metrics to date have challenges, especially around the severity and probability of events

market-based metrics would provide a different context (e.g. stock market pricing)

create an infosec market?

  • bug challenges
  • auctions
  • vuln brokers
  • infosec insurance
  • exploit derivatives

 

info function / incentive function / risk balancing function efficiency – all factors in creating a vulnerability market

/me: make a table with bullets above as rows and factors list as columns to do a comparison

suggests an Exploit Derivatives market (futures contracts for vulns)
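
/me: a minimal sketch of the binary-contract idea behind exploit derivatives; the contract terms and numbers here are my assumptions, not from the talk:

```python
# Hypothetical binary exploit derivative: pays the notional if a working
# exploit for product X is published before date T, else nothing. The
# traded price of the contract is then the market's implied probability.

def settle(exploit_published_before_T: bool, notional: float = 1.0) -> float:
    """Payoff at expiry to the holder of the 'exploit occurs' side."""
    return notional if exploit_published_before_T else 0.0

price = 0.07  # assumed market price for a 1.0-notional contract
print(f"implied probability of an exploit before T: {price:.0%}")  # 7%
print(settle(True), settle(False))  # 1.0 0.0
```

A vendor confident in its code could sell the “exploit occurs” side and pocket the premium; a dependent user could buy it as a hedge – which is where the airline/oil-futures analogy below comes from.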

[side-talk: discussion about derivatives vs. futures and how the profit incentives may be conflicting]

[side-talk: what will make software companies pay attention to what seems to be a market that only makes speculators rich?]

[side-talk: is this legal? can we get this baked into contracts?]

[side-talk: convo degraded into the responsibility of software companies]

[side-talk: interesting analogy from the airline industry needing to be in the oil futures market to software companies needing to be in this potential vuln/exploit market]

another example is weather derivatives

 

cites two examples of how prediction markets can incent change

cites tradesports.com and a FIFA prediction market

 

Speaker: Juhani Eronen

“The Autoreporter Project” – Background

Goal: make finland mostly harmless to the rest of the internet

(that’s actually in the law – Protection of Privacy in Electronic Comms/Finland)

 

/me: I’ll need to put some verbiage around this tonight to give you a good picture of what Juhani was conveying…really good description of their charter, goals, challenges, successes

 

What’s a “Finnish” system (a toy classifier sketch follows the list):

  • any autonomous system on Finnish soil, operated or owned by Finnish orgs
  • .fi and .ax domains
  • +358 telephone prefix
  • other networks owned by Finnish orgs
  • Finnish banks/brands/CC
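
/me: a hypothetical classifier for the scope rules above; the AS numbers and exact matching logic are made up for illustration, not CERT-FI’s real scoping data:

```python
# Illustrative scope check; the real ASN/bank/brand lists are not in the talk.
FINNISH_ASNS = {64500, 64501}      # placeholder AS numbers
FINNISH_TLDS = (".fi", ".ax")

def is_finnish_system(domain=None, asn=None, phone=None):
    """Apply the scope rules: .fi/.ax domains, Finnish ASNs, +358 numbers."""
    if domain and domain.lower().endswith(FINNISH_TLDS):
        return True
    if asn in FINNISH_ASNS:
        return True
    return bool(phone and phone.startswith("+358"))

print(is_finnish_system(domain="example.fi"))    # True
print(is_finnish_system(phone="+358401234567"))  # True
print(is_finnish_system(asn=3320))               # False
```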

 

Telcos are mandated to report infosec incidents as well as major faults affecting users, networks or the provider’s ability to operate

 

FICORA

Regulation for Finnish providers: basic security of facilities & processes, business continuity, spam blocking

  • Setup mandatory reporting for ISPs
  • Establish CERT-FI

 

Issues

Problem: Finland cleans up its own house, but they still end up getting attacked!

Problem: Most incidents are out of scope in mandated reporting

Problem: Establishing CERT-FI – no ownership or visibility of the network; 3 ppl who in theory are expected to be there 7×24!

Huge increase in incidents [reported] from 2002-2006. It’s a pretty graph, but it really shows that the CERT-FI workforce increased and that processes were honed

 

How many incidents affect finnish networks?

How do we compare to our neighbors? (Would love to take a data-driven jab at the Swedes.)

 

So: workforce, regulatory and other constraints & the need for actionable data == build an automated system.

 

2006: created automated system to capture incident reports (mostly malware) from various monitoring projects around the globe.

Daily reports, e-mailed in CSV format with pre-defined, agreed-upon subjects; digitally signed; reported incidents in the body.
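
/me: the exact report schema wasn’t shown; a minimal parsing sketch with assumed field names:

```python
import csv
import io

# Hypothetical daily report body (the real schema, subjects, and signature
# handling are not specified in the talk).
SAMPLE_REPORT = """\
timestamp,ip,asn,incident_type,source_feed
2011-02-25T14:03:00Z,192.0.2.10,64500,malware,feed-a
2011-02-25T15:41:00Z,198.51.100.7,64501,scanning,feed-b
"""

def parse_report(body: str) -> list[dict]:
    """Parse one e-mailed CSV report body into incident records."""
    return list(csv.DictReader(io.StringIO(body)))

for incident in parse_report(SAMPLE_REPORT):
    print(incident["ip"], incident["incident_type"])
```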

 

How CERT-FI handles abuse:

  • detection
  • reports (e-mail/phone/fax) – funny story: one woman printed out all the spam she received and sent it to CERT-FI, until asked not to anymore
  • Scraping feeds, normalizing/correlating data
  • Finding owners (see the netblock sketch after this list):
    • map bad events to netblocks
    • maintain contact list (& contact prefs!)
    • manage customer expectations
  • Report out stats, trends, chronic cases
  • Assist in incident response
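
/me: a sketch of the “map bad events to netblocks” step; the contact data is invented:

```python
import ipaddress

# Hypothetical contact registry: netblock -> abuse contact address.
CONTACTS = {
    ipaddress.ip_network("192.0.2.0/24"): "abuse@isp-a.example",
    ipaddress.ip_network("192.0.2.128/25"): "abuse@hosting-b.example",
}

def find_owner(ip_str):
    """Return the contact for the most specific netblock containing the IP."""
    ip = ipaddress.ip_address(ip_str)
    matches = [net for net in CONTACTS if ip in net]
    if not matches:
        return None  # unknown owner: a chronic-case candidate
    # Longest prefix wins when netblocks nest.
    return CONTACTS[max(matches, key=lambda net: net.prefixlen)]

print(find_owner("192.0.2.200"))  # abuse@hosting-b.example
print(find_owner("203.0.113.5"))  # None
```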

 

There are dozens of projects, data sources, blacklists, etc., but they vary in format (even timestamps), purpose, and channel (IRC, HTTP, FTP) – see the timestamp sketch after this list:

  • data is frequently missed due to downtime, system availability
  • info integrity is difficult to gauge
  • bugs in feeds data & reporting
  • wildly differing frequency of updates (realtime to monthly)
  • taxonomies are diverse
  • level of detail is inconsistent
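
/me: even just the “timestamps differ” point deserves a sketch – normalizing per-feed formats to UTC; the formats below are assumptions, not the actual feeds’ specs:

```python
from datetime import datetime, timezone

# Illustrative per-feed timestamp conventions.
FEED_FORMATS = {
    "feed-a": "%Y-%m-%dT%H:%M:%SZ",   # ISO 8601, UTC
    "feed-b": "%d/%m/%Y %H:%M",       # day-first, assumed UTC
    "feed-c": "%b %d %Y %H:%M:%S",    # e.g. "Feb 25 2011 14:03:00"
}

def normalize(feed: str, raw: str) -> datetime:
    """Parse a feed-specific timestamp into a timezone-aware UTC datetime."""
    return datetime.strptime(raw, FEED_FORMATS[feed]).replace(tzinfo=timezone.utc)

print(normalize("feed-b", "25/02/2011 14:03").isoformat())
# -> 2011-02-25T14:03:00+00:00
```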

 

Ensuring Focus of CERT-FI

  • What are we not seeing?
  • What should I prepare for?
  • Who is the target of damage & who is just collateral?
  • Can the data/sources be trusted?

 

[side-talk: CERT-FI manages intake and the privacy laws make it difficult to delegate collection to the ISPs]

[side-talk: Finland has a population of 5.5 million, a very high # of folks with internet access, and everyone has a cell phone. Internet is considered a basic human right]

 

CERT-FI shows ISP incident graphs in comparison to other ISPs. /me: the embarrassment factor is a good motivator

interesting: Conficker is still a problem

CERT-FI autoreporter can actually report out incidents per broadband customer (trending)

 

Abusehelper: http://code.google.com/p/abusehelper/wiki/README

Abuse Helper is a toolkit for CERT and abuse teams. It is a modular, (hopefully) scalable and robust framework to help you in your abuse handling.

With Abuse Helper you can:

  • Retrieve Internet Abuse Handling related information via several sources which are
    • near-real-time (such as IRC)
    • periodic (such as Email reports), or
    • request/response (such as HTTP).
  • You can then aggregate that information based on different keys, such as AS numbers or country codes
  • Send out reports in different formats, via different transports and using different timings

Abuse Helper features include (a toy pipeline sketch follows this list):

  • Fully modular (you can utilize different readers, parsers, transports, splitters, combiners in a pipe-like manner)
  • Scalable: you can distribute the work to different machines and different geolocations
  • Observable: you can use your favourite XMPP client to observe the bots at work
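
/me: this is not AbuseHelper’s actual API (per its README it’s built from XMPP bots); just a toy sketch of the pipe-like reader → parser → combiner → transport idea:

```python
def reader(lines):
    """Source stage: yield raw records (here an in-memory list; could be
    IRC, e-mail, or HTTP in the real toolkit)."""
    yield from lines

def parse(records):
    """Normalize raw 'asn|country|type' strings into event dicts."""
    for rec in records:
        asn, cc, kind = rec.split("|")
        yield {"asn": asn, "cc": cc, "type": kind}

def aggregate_by(events, key):
    """Combiner stage: count events per key (AS number, country code...)."""
    counts = {}
    for ev in events:
        counts[ev[key]] = counts.get(ev[key], 0) + 1
    return counts

def transport(report):
    """Sink stage: print here; different transports/timings in practice."""
    for key, n in sorted(report.items()):
        print(f"{key}: {n} incident(s)")

raw = ["64500|FI|malware", "64501|SE|scanning", "64500|FI|malware"]
transport(aggregate_by(parse(reader(raw)), "asn"))
```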

 

Great overall presentation on the rationale for reporting incidents outside your org

Speaker: Chris Eng / Veracode

Every major infosec company publishes quarterly/yearly summary reports. Some based on survey, some based on real captured data.

Recognizing the Narrative

Every fancy-looking infosec metrics report is a marketing vehicle. Each has a different perspective and there’s no consistency, but you can usually figure out the framing by looking at the exec summary or ToC; other times it takes real digging. You need to understand what they are selling – the text in the report is there to back up the narrative.

 

Veracode Report Narrative

  • More than half of all software failed to achieve an acceptable level of security
  • 3rd-party apps had the lowest security quality
  • No single method of testing is accurate

(goal: use Veracode to analyze third party apps :-)

 

Trustwave Report Narrative

  • 2010 incident response investigations
  • attack vector evolution
  • 11 strategic initiatives for 2011

(goal: “we can help…we are good at this stuff”)


WhiteHat Report

  • Which web programming languages are most secure

(differs in goal from previous WH reports)

 

Bottom line: try to understand the framing/goal when reviewing the narrative

 

Using Stats Responsibly

Sample distribution review/discussion

normal distribution curves can still vary (std deviation & mean), but the overall shape remains the same

bimodal distribution (two peaks)…you may miss the second peak if you report only on averages (sketch below)
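
/me: a quick sketch of why averages hide a bimodal distribution; the numbers are invented:

```python
import statistics

# Two invented clusters of per-app flaw counts: many nearly-clean apps
# plus a small group of very buggy ones.
flaws = [2, 3, 1, 2, 4, 3, 2, 1, 3, 2] + [38, 41, 45, 39, 42]

print(statistics.mean(flaws))            # 15.2 - describes no actual app
print(statistics.median(flaws))          # 3   - the larger mode
print(statistics.quantiles(flaws, n=4))  # [2.0, 3.0, 39.0] - the split shows
```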

[game: Guess the Report Jeopardy! used primarily to show the pervasiveness of the use of averages]

[side-talk: discussion about different distributions by different sources]

 

(/me: this is very interesting)

Would a table of # of flaws per 1K lines of code per language be enough?

Would adding 1st quartile, median and 3rd quartile provide more insight?

Will this help understand the anomalies? Will it help prioritize?

How do we ensure normalized data for comparison?
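
/me: to make the quartile question concrete – a sketch of a per-language summary table; all the flaw densities are invented, no report’s raw data is reproduced:

```python
import statistics

# Invented flaws-per-KLOC samples per (anonymized) language.
flaws_per_kloc = {
    "lang-a": [0.9, 1.1, 1.4, 2.0, 2.2, 3.8, 9.5],
    "lang-b": [1.0, 2.5, 3.1, 4.4, 6.0, 7.2, 30.0],
}

print("language  mean   q1 median    q3")
for lang, xs in flaws_per_kloc.items():
    q1, med, q3 = statistics.quantiles(xs, n=4)
    # The means alone rank the languages; the quartiles expose the
    # outliers driving that ranking.
    print(f"{lang:8} {statistics.mean(xs):5.1f} {q1:4.1f} {med:6.1f} {q3:5.1f}")
```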

[side-talk: what’s a “line of code”…same problem in app bug analysis]

[side-talk: Truth in stats: “What’s the question? What matters?”]

Can you overdo it? Yes.

 

[game: continued]

Power analysis can be used to determine the statistically significant sample size required to keep the probability of error acceptable.
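
/me: power analysis proper also needs an effect size and a target power; the simpler margin-of-error version below gives the flavor, using the textbook normal approximation for a proportion:

```python
import math

def sample_size(p=0.5, margin=0.05, z=1.96):
    """Normal-approximation sample size to estimate a proportion p
    within +/- margin at ~95% confidence (z = 1.96)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size())                 # 385 apps for a +/-5% claim
print(1.96 * math.sqrt(0.25 / 30))   # ~0.18: a 30-app subsample only
                                     # supports a +/-18% statement
```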

Should you really include non-statistically significant data? “To asterisk or not to asterisk?”

It’s hard to un-see something after you see it (/me: good point)

[side-talk: show cell counts as well as %-ages; don’t use a bar chart when a crosstab is more useful]

[side-talk: we should follow guidance from social services in terms of how to present data for action]

 

Storytelling Via Omission

[side-talk: no report provided raw data]

What unwanted assumptions might result if the “wrong” data is included?

 

We need to provide access to the raw data, even though the majority of consumers don’t want it.

Veracode will open up its analytics platform to security researchers :: veracode.com/analytics

[side-talk: every company that publishes a report should publish the name and contact info of its stats person, who can back up the processing & data used]

[side-talk: is “truth” really what infosec companies want to promote in their reports? @alexhutton: isn’t that ?]