

Sam Ransbotham
Sabyasachi Mitra

Presentation [PDF]

Twitter transcript

#weis2011 Does immediate disclosure of vulns affect exploitation attempts? Looking at impact on risk/diffusion/volume

#weis2011 speaker is presenting standard attack process & security processes timelines (slides will be in the blog post)

#weis2011 the fundamental question is when from the vulnerability discovery to patch development is disclosure appropriate

#weis2011 immediate disclosure places a significant amount of pressure on defenders while aiding attackers < yep.

#weis2011 penalty for MSSP, IDS/IDP/malware vendors for not doing nigh daily "software updates" is huge. a very high pressure industry.

#weis2011 IDS systems produce tons of records which needs to be analyzed and understood. results may or may not be actionable.

#weis2011 *Tons* of neat data on analysis of NVD data. Very data rich slides (some of them). Lots of math. #good #stuff

#weis2011 immediate disclosure has significant increase in acceleration of exploit devel only slight increase in penetration

#weis2011 the window may open faster, but defenders are reacting really quickly. this has effect of causing attackers to stop attacks sooner

#weis2011 vendors patch vulnerabilities that have been immediately disclosed faster than "traditional" ones.

#weiss answer to a q: "the data does not support immediate disclosure for all vulns. no way to extrapolate that information"

Dr Greer [cgreer at ostp.eop.gov] is Assistant Director, Information Technology R&D, Office of Science & Technology Policy, The White House

Opening: “The expertise of the attendees is greatly needed.”

He provided a broad overview of the goals & initiatives of the federal government as they relate to domestic & international cybersecurity. Greer went through the responsibilities of various agencies and made it clear that this is a highly distributed effort across all sectors of government.

He emphasized the need for a close partnership with the private sector to accomplish these goals, and the criticality of not just coming up with plans but actually implementing them.

It really was a high-level overview and – as I point out in the twitter transcript – it would have been cooler if Dr Greer had done a deep-dive on 2-3 items rather than a survey. He did set the tone pretty well – we are in challenging times that are changing rapidly. We’re still fighting the fights of 5-10 years ago but are working to provide a framework for keeping pace with cybercriminals. The government is “doing stuff”, but it’s all useless without translating thousands of pages of legal mumbo jumbo into practical, actionable activities.

The 10-minute post-talk Q&A was far better than the actual preso.

Twitter transcript:

#weis2011 Obama: "America's economic prosperity in 21st cent will depend on cybersecurity" :: sec begets growth but underscores threats, too

#weis2011 one time we never expected every individual to need an IP address, now even refrigerators have one.

#weis2011 IPv6 need exacerbated by mobile, mobile apps themselves have great benefit, but also introduces new threat vector.

#weis2011 OSTP runs phishing tests 3x year #spiffy

#weis2011 POTUS Strategy: Catalyze brkthrus for natnl priorities, promote mkt-based innov; invest in building blocks of american innovation

#weis2011 policy review (2009) themes: lead frm top;build cap for dig natin;share resp for cybersec;effective info sharing/irp; encrge innov

#weis2011 pimping the International Strategy For Cyberspace release recently http://1.usa.gov/jZXIdE

#weis2011 key "norms" in ISC report: upholding fundamental freedoms (esp speech), global interoperability & cybersecurity due diligence

#weis2011 Greer shifting to talking about legis; OSTP has been wrkng to promote good bills esp for natnl data breach rprting & penalties

#weis2011 computer fraud & abuse act is *25 years old*. We need new regulations to help fight 21st century crime < 25 years! yikes!

#weis2011 FISMA shifting from compliance-based to proactive protection-based; mentioned EINSTEIN IDS/ISP

#wes2011 pimping http://csrc.nist.gov/nice/ education & awareness efforts

#weis2011 pimping fed trusted ID initiative http://www.nist.gov/nstic/ ; password are $ & failing; multiple accts are real & problematic

#weis2011 (pers comment) the audience knows much of what Greer is saying, surprised he's giving such a broad overview vs 2/3 deep dives

#weis2011 (pers comment) the efforts for fed cybesec seem waaay to disjoint & distributed to truly be effective.

#weis2011 pimping http://www.nitrd.gov/ CSIA, SSG & SCORE < much alphabet soup in fed cybersec…the letters didn't help senate.gov today

#weis2011 results of many research efforts are both near & just over the horizon, but all useless if not put into effective practice

#weiss2011 impt to work with priv sector on economics of legis&policy choices (immunity/liability/safe hrbr/incentives/disclosure/audit)

#weis2011 need to understand market factors incentivizing hackers (valuation/cost-ben/risk-decision making/criminal markets)

#weis2011 (pers comment) another poke at Microsoft when talking about server security. Major hacks of late were linux/apache/solaris. #lame

#weis2011 Cyber insurance is a possibility if we can develop good quant-based risk assessment/management frameworks

#weis2011 cgreer@ostp.eop.gov

#weis2011 q:"where will cybersec be in 10yrs?" -cyberspace will be more resilient & trustworthy; hardening sys&nets useless w/o educatng ppl

#weis2011 by 2021 we will have solved all the cybersecurity issues of 2005 < wise man

#weis2011 q:"the US spends > than rest of wrld combined on cybersec but it's still just pennies. will this change?" :: it's in the proposals

NOTE: This is a re-post from a topic I started on the SecurityMetrics & SIRA mailing lists. Wanted to broaden the discussion to anyone not on those (and, why aren’t you on them?)

I had not heard the term micromort prior to listening to David Spiegelhalter’s Do Lecture and the concept of it really stuck in my (albeit thick) head all week.

I haven’t grabbed the paper yet, but the abstract for “Microrisks for Medical Decision Analysis” seems to map directly onto the risks we face in infosec:

“Many would agree on the need to inform patients about the risks of medical conditions or treatments and to consider those risks in making medical decisions. The question is how to describe the risks and how to balance them with other factors in arriving at a decision. In this article, we present the thesis that part of the answer lies in defining an appropriate scale for risks that are often quite small. We propose that a convenient unit in which to measure most medical risks is the microprobability, a probability of 1 in 1 million. When the risk consequence is death, we can define a micromort as one microprobability of death. Medical risks can be placed in perspective by noting that we live in a society where people face about 270 micromorts per year from interactions with motor vehicles.

Continuing risks or hazards, such as are posed by following unhealthful practices or by the side-effects of drugs, can be described in the same micromort framework. If the consequence is not death, but some other serious consequence like blindness or amputation, the microrisk structure can be used to characterize the probability of disability.

Once the risks are described in the microrisk form, they can be evaluated in terms of the patient’s willingness-to-pay to avoid them. The suggested procedure is illustrated in the case of a woman facing a cranial arteriogram of a suspected arterio-venous malformation. Generic curves allow such analyses to be performed approximately in terms of the patient’s sex, age, and economic situation. More detailed analyses can be performed if desired.

Microrisk analysis is based on the proposition that precision in language permits the soundness of thought that produces clarity of action and peace of mind.”

When my CC is handy and I feel like giving up some privacy I’ll grab the whole paper, but the correlations seem pretty clear from just that bit.

I must have missed Schneier’s blog post about it earlier this month where he links to understandinguncertainty.org/micromorts which links to plus.maths.org/content/os/issue55/features/risk/index (apologies for the link leapfrogging, but it provides background context that I did not have prior).

At a risk to my credibility, I’ll add another link to a Wikipedia article that lists some actual micromorts and include a small sample here:

Risks that increase the annual death risk by one micromort, and their associated cause of death (a quick conversion to plain probabilities follows the list):

  • smoking 1.4 cigarettes (cancer, heart disease)
  • drinking 0.5 liter of wine (cirrhosis of the liver)
  • spending 1 hour in a coal mine (black lung disease)
  • spending 3 hours in a coal mine (accident)
  • living 2 days in New York or Boston (air pollution)
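
To make the arithmetic concrete: a micromort is just a one-in-a-million probability, so figures like these convert to (and, while they stay tiny, simply sum into) ordinary probabilities. A quick illustrative sketch, reusing the 270-micromort motor-vehicle figure from the abstract above:

```python
MICRO = 1e-6  # a micromort is a 1-in-1,000,000 chance of death

# Example from the abstract above: ~270 micromorts/year from motor vehicles.
motor_vehicles = 270 * MICRO
print(f"driving, annual: {motor_vehicles:.5f}")   # ~0.00027

# Small independent risks can simply be summed while they stay tiny,
# so a day that racks up three of the 1-micromort items above adds:
print(f"three habits:    {3 * MICRO:.7f}")        # ~0.0000030
```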

I asked on Twitter if anyone thought we had an equivalent – a “micropwn”, say – for our discipline. Do we have enough high-level data to produce a generic micropwn for something like the following (see the sketch after the list):

  • 1 micropwn for every 3 consecutive days of missed DAT updates
  • 1 micropwn for every 10 Windows desktops with users with local Administrator privileges
  • 1 micropwn for every 5 consecutive days of missed IDS/IDP signature updates
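
If we ever had that data, turning it into an exposure number would be mechanical. Here is a minimal sketch of what I mean, using the made-up per-finding rates from the list above (illustrations only, not real measurements):

```python
# Hypothetical per-finding micropwn rates -- the made-up numbers from the
# list above, not real measurements.
MICROPWN_RATES = {
    "days_missed_dat_updates":    1 / 3,   # 1 micropwn per 3 consecutive days
    "desktops_with_local_admin":  1 / 10,  # 1 micropwn per 10 desktops
    "days_missed_ids_signatures": 1 / 5,   # 1 micropwn per 5 consecutive days
}

def micropwns(findings):
    """Sum micropwns (1-in-1,000,000 chance of compromise) across findings."""
    return sum(MICROPWN_RATES[name] * count for name, count in findings.items())

# e.g. 6 days of missed DAT updates plus 50 desktops with local admin:
print(micropwns({"days_missed_dat_updates": 6,
                 "desktops_with_local_admin": 50}))   # -> 7.0
```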

Just like with the medical side of things, the micropwn calculation can be refined depending on the level of detail. For example (these are all made up for medicine):

  • 1 micromort for smoking 0.5 cigarettes if you are an overweight man in his 50’s
  • 1 micromort for smoking 0.25 cigarettes if you are an overweight man in his 50’s with a family genetic history of lung cancer

(again, I don’t have the paper, but the abstract seems to suggest this is how medical micromorts work)

Similarly, the micropwn calculation could get more granular by factoring in type of industry, geographic location, breach history, etc.
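
Carrying that refinement through, the same base tally could be scaled by context in roughly the way the medical examples scale by age, weight and family history. The multipliers here are invented purely for illustration:

```python
# Hypothetical context multipliers -- industry, geography, breach history.
CONTEXT_MULTIPLIERS = {
    "high_risk_industry":  2.0,    # e.g. finance or healthcare
    "prior_breach":        1.5,
    "high_risk_geography": 1.25,
}

def adjusted_micropwns(base, context):
    """Scale a base micropwn tally by whichever context factors apply."""
    for factor in context:
        base *= CONTEXT_MULTIPLIERS.get(factor, 1.0)
    return base

# The 7.0 micropwns from the earlier sketch, for a previously breached bank:
print(adjusted_micropwns(7.0, ["high_risk_industry", "prior_breach"]))  # 21.0
```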

Also, a micropwn (just like a micromort) doesn’t necessarily mean a “catastrophic” breach (I dislike that word: I think of it as a broad term, while most folks associate it directly with sensitive record loss). It could just as easily mean a successful malware infection, in my view.

So, to further refine the question I originally posed on Twitter: Do we have enough broad data to provide input for micropwn calculations and can we define a starter-list of micropwns that would prove valuable in helping articulate risk within and outside our discipline?

I was trying to convey my backup workflow/setup to @joeday in 140 and it just wasn’t working very well. Twitter – as one might expect – is not exactly the place for detailed technical discussions, but it does provide fertile ground to spark ideas and dialogue. I told @geekshui that I’d blog my setup and that turned out to be just enough of a catalyst to force me to iron out my strategy for rud.is and future (if any) non-cooking/family blogging.

Background

I’m [still] a die-hard OS X user, despite the increasing gatekeeper motif Apple is sporting these days. My main computer is a MacBook Pro which I would stupidly run back into a burning building to rescue. Everything is on it. Everything. I digitize receipts, house our multimedia, spin out VMs like a DJ, create, compose, torrent, rip, zip and hack from it. Consequently, ensuring my data is available is kinda important to me.

I’ve been around computers long enough to have learned some painful lessons from four simple characters: MTBF. Drives break. Electronics fail. It’s an undeniable fact. The only way to recover from these failures is to have a good strategy for keeping your data available.

Strategy #1: Backups

While hard to digest on Twitter, my backup strategy is pretty straightforward. I use Time Machine for OS-managed full system backups. I rotate these between two large (1TB & 2TB) hard drives and I retire one large hard drive each year (MTBF…remember?). This gets me individual file recovery pretty quickly over a decent time period and a bit of hardware peace of mind.

I also have two 2.5″ IEEE 1394 drives that I clone images to (via SuperDuper/Carbon Copy Cloner) every time Apple issues a 10.x.y update. Again, I rotate between them since I really don’t trust drive manufacturers. I haven’t relied on TrueCrypt for a while (which would make for an ugly workflow) for system volumes, but it’s easy to clone disks that have FileVault-protected data as long as you do so from an account that does not use or rely on FileVault data.

Both Time Machine and the drive cloning can occur while I’m sleeping, so no workflow is impacted.
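
Nothing about the rotation itself is automated, but a small helper script could at least nag me about which drive is most overdue. A hypothetical sketch (the volume names and the marker-file convention are my own invention, not something Time Machine or the cloning tools provide):

```python
#!/usr/bin/env python3
"""Report which rotating backup drive was refreshed least recently."""
import os
import time

# Hypothetical mount points for the two rotating backup/clone drives.
DRIVES = ["/Volumes/Backup-1TB", "/Volumes/Backup-2TB"]
MAX_AGE_DAYS = 14  # nag threshold; pick whatever rotation cadence you like

for drive in DRIVES:
    marker = os.path.join(drive, ".last_refresh")  # touch this after each clone
    if not os.path.exists(marker):
        print(f"{drive}: not mounted (or never refreshed)")
        continue
    age_days = (time.time() - os.path.getmtime(marker)) / 86400
    status = "OVERDUE" if age_days > MAX_AGE_DAYS else "ok"
    print(f"{drive}: last refreshed {age_days:.1f} days ago [{status}]")
```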

Strategy #2: Dropbox

I have to start by sharing just how much I <3 Dropbox. I don’t use the free service as I grew weary of keeping within the paltry limits. Getting a paid sub to it provides more than just freedom from minutiae. I now get (as long as they have no hiccups) full recovery back as far as I want in the event I do actually lose a file or two. I have Dropbox configured on my MacBook Pro, a home Windows machine and a home Linux box. This means that even if I lose the drive on my Mac, I can get some of my non-sensitive data back from one of the other Dropbox-enabled systems (which is much faster than recovering from backups). It also means that I can get right back to work on a different system – as long as I have not used an OS X-specific program.

I could rant for quite a while about Dropbox, but it should be pretty obvious why this is part of my backup strategy.

Strategy #3: rsync.net

While Dropbox houses non-sensitive data offsite (again, assuming no service hiccups), there is a subset of my information that I do want housed off-site in the event there is a catastrophic issue with our abode. For that, I have been using rsync.net since its inception. They provide outstanding customer support, have a unique view and practices around warrants and fully understand the needs of technical users concerned about availability and privacy.
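
The off-site piece itself is nothing fancy: rsync over SSH on whatever schedule you like. Something along these lines is the whole job (the host, user and paths below are placeholders, not my actual setup):

```python
#!/usr/bin/env python3
"""Push the offsite-worthy subset of data to rsync.net over SSH."""
import subprocess

SOURCES = ["/Users/me/Documents/records/"]          # placeholder local paths
DEST = "user@your-host.rsync.net:offsite/records/"  # placeholder remote

for src in SOURCES:
    # -a preserves attributes, -z compresses, --delete mirrors removals
    subprocess.run(["rsync", "-az", "--delete", "-e", "ssh", src, DEST],
                   check=True)
```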

There are some other things we do to ensure a refresh of the content on media drives that get hooked up to our PS3 or displays, but the above three steps are how I ensure that I always have access to the data that enables my workflow.