
Everyone who can read this blog should remember the Deepwater Horizon spill that occurred in the Spring of 2010: a huge loss of life (any loss is huge from my perspective) and a still-unknown impact on the environment. This event was a wake-up call to BP execs and other companies in that industry sector.

You should also remember the “Sonage” of this Spring, where Sony lost millions of records across 12+ website breaches; that should have been a wake-up call to almost every sector.

BP committed to developing and implementing a new Safety & Operational Risk (S&OR) program (which is now active). Sony is planning on hiring a CISO and has started hiring security folks, but they really need to develop a comprehensive Security & Operational Information Risk Program (and I suspect your org does as well).

What can we in the info risk world glean (steal) from BP’s plan and new S&OR Organization? Well, to adapt their charter, a new S&OIR program charter might be:

  • Strengthen & clarify requirements for secure, compliant and reliable computing & networking operations
  • Have an appropriately staffed department of specialists that are integrated with the business
  • Provide deep technical expertise to the company’s operating business
  • Intervene where needed to stop operations and bring about corrective actions
  • Provide checks & balances independent of business & IT
  • Strengthen mandatory security & compliance standards & processes (including operational risk management)
  • Provide an independent view of operational risk
  • Assess and enhance the competency of the workforce in matters related to information security
    BP claims success from their current program (the link above has examples), and imagine – just imagine – if your org required – yes, required – that new systems & applications conform to core, reasonable standards.

    In their annual report, BP fully acknowledged that risks inherent in its operations include a number of hazards that, “although many may have a low probability of occurrence, they can have extremely serious consequences if they do occur, such as the Gulf of Mexico incident.” Imagine – just imagine – if you could get your org to think the same way about information risk (you have plenty of examples to work from).

    BP did not remove responsibility for managing operational risk and operational delivery from the business lines, but they integrated risk analysts into those teams and gave them the authority to intervene when necessary. It took a disaster to forge this new plan. You don’t need to wait for a disaster in your org to begin socializing this type of change.

    Imagine…just, imagine…

    UPDATE: I have intentionally cross-posted this to my SIRA blog since the combined wit & intelligence of the folks there trumps anything I could do alone here.

    All the following newly-minted risk assessment types have been inspired by actual situations. Hopefully you get to stick to just the proper OCTAVE/FAIR/NIST/etc. ones where you practice.

    • HARA :: Half-Assed Risk Assessment — When you are provided no semblance of potential impact data and only a woefully incomplete list of assets, but are still expected to return a valid risk rating.
    • CRA :: Cardassian Risk Assessment — When you are provided the resultant risk rating prior to beginning your risk assessment. (It’s a Star Trek reference for those with actual lives)

      We’re going to do x anyway because we don’t believe it’s a high risk, but go ahead and do your assessment since the Policy mandates that you do one.

    • IRA :: Immediate Risk Assessment — This one has been showcased well by our own Mr. DBIR himself on the SIRA podcasts. A risk assessment question by a senior executive who wants an answer *now* (dammit)! It is often phrased as “Which is more secure, x or y?” or “We need to do z. What’s the worst that can happen?” You literally have no time to research and – if you don’t know the answer – then “Security” must not be very smart.
    • IRAVA :: In Reality, A Vulnerability Assessment — When you’re asked to determine risk, but what they are *really* asking for is a list of the vulnerabilities in a particular system/app. Think Tenable/Qualys scan results vs FAIR or OCTAVE.
    • IOCAL :: I Only Care About Likelihood — This is when the requester is absolutely fixated on likelihood and believes wholeheartedly that a low likelihood immediately means low risk. Any answer you give is also followed up with “have we ever had anything like x happen in the past?” and/or “have our competitors been hit with y yet?”
    • AD3RA :: Architecture Design Document Disguised As A Risk Assessment — When you are given all the (decent) inputs necessary to complete a pretty comprehensive risk assessment, but are then asked to include a full architecture design document on how to mitigate everything you find. The sad truth is that the project team couldn’t get the enterprise architects (EA) to the table for the first project commit stage, but since you know enough about the technologies in play to fix the major problems, why not just make you do the EA department’s job while you’re cranking out the mandatory risk assessment anyway?
    • WDRA :: Wikipedia Deflected Risk Assessment — When you perform a risk assessment, but a manager or senior manager finds data on Wikipedia that they use to negate your findings. (Since – as we all know – Wikipedia is the sum of all correct human knowledge).

    If you have also been coerced into performing an insane risk assessment that doesn’t fit these models, feel free to share yours in the comments.

    Had to modify the latimes URL in the post due to a notice from Wordfence/Google

    I was reviewing the – er – highlights? – from the ninth ERM Symposium in Chicago over at Riskviews this morning and was intrigued by some of the parallels to the current situation in enterprise security risk management (the ERM symposium seemed to be laser-focused on financial risk, which is kinda sad since ERM should make security/IT compliance risk a first class citizen). Not all topics had a 1:1 parallel, but there were some interesting ones:

    • Compliance culture of risk management in banks contributed to the crisis :: While not necessarily at crisis levels yet, the compliance culture that is infecting information security is headed toward this same fate. Relying on semi-competent auditors to wade through volumes of compensating controls and point-in-time reviews to deliver ✓’s in the right boxes is not a recipe for a solid security program that will help mitigate and respond effectively to emerging threats.
    • Banks were supposed to have been sophisticated enough to control their risks :: I’ll focus on medium-to-large enterprises for this comparison, but I’m fairly confident that this is a prevalent attitude regarding information security in corporations across the globe (“We manage information risk well“). Budgets seem to be focused on three fundamental areas that non-security-folk can conceptually grasp: firewalls (stop), traditional anti-virus (block) and endpoint disk encryption (scramble). By now, with a decade of OWASP failures [PDF, pg 28], a multi-year debate about anti-virus efficacy and ample proof that vendors suck at building secure software as evidence, you’d think we’d be focusing on identifying the areas of greatest risk and designing & following roadmaps to mitigate them.

      This may be more of a failure on our part to effectively communicate the issues in a way that decision makers can understand. While not as bad as the outright lying committed by those who helped bake the financial meltdown cake, it is important to call out since I believe senior management and company boards would Do The Right Thing™ if we effectively communicated what that Right Thing is.

    • Regulators need to keep up with innovation and excessive leverage from innovation. :: The spirit of this is warning that financial regulators need to keep a sharp eye out for the tricky ways institutions come up with to get around regulations (that’s my concise summary of “innovation”). Think “residential mortgage-backed securities”. The “excessive leverage” bit is consumers borrowing way too much money for over-priced houses.

      I’m not going to try to make a raw parallel, but just focus on the first part: Regulators need to keep up with innovation. The bad guys are getting more sophisticated and clever all the time and keep up with hot trends faster than we can defend against them…due in part to our wasted time testing controls and responding to low-grade audit findings. When even the SOX compliant security giants can fall hard, you know there’s a fundamental problem in how we are managing information security risk. Regulators & legislators need to stop ( http:// articles. latimes. com /2011/feb/11/business/la-fi-0211-privacy-20110211 ) jerking knees and partner with the best and the brightest in our field to develop new approaches for prescribing and validating security programs.

    • ERM is not an EASY button from Staples :: I’m *so* using that quote in an infosec context this week
    • Many banks and insurers should be failing the use test for ERM regulation to be effective. :: More firms need to fail SOX and PCI and [insert devastating regulation acronym here] checks, or SOX & PCI requirements need to change so that we see more failing. Pick one SOX- and PCI-compliant company at random and I’ll bet they have at least one exploitable Internet-based exposure or that custom-crafted malware can get through. If we start making real, effective, and sane regulations, we’ll start contributing to the betterment of information security in organizations.
    • Stress testing is becoming a major tool for regulators. :: What if regulators did actual stress testing of our security controls versus relying on point-in-time checks? I know that the stress tests for banks end up being a paper exercise, but even those exercises have managed to find problems. Come in, pick three modern exploit vectors and walk-through how company defenses would hold up.
    • Regulators need to be able to pay competitive market salaries :: We need smarter rule-makers and examiners. There are good people doing good work in this space, just not enough of them.
    • Difficult for risk managers to operate under multiple constraints of multiple regulators, accounting systems. :: Just domestically, 42 states with separate privacy regulations, SOX for public companies, PCI compliance for those who process credit cards and independent infosec auditing standards across any third-party one needs to do business with make it almost impossible to stop spinning around low-level findings and focus on protecting critical information assets. We need to get to a small number of solid standards that we can effectively understand and design solutions to meet.
    • Nice tree/forest story: Small trees take resources from the forest. Large trees shade smaller trees making it harder for them to get sunlight. Old trees die and fall crashing through the forest taking out smaller trees. :: This made me think of the rampant consolidation of the security tech industry. Savvy, nimble & competent boutique vendors are being swallowed by giants. The smart people leave when they can and the solutions are diluted and become part of a leftover stew of offerings that don’t quite fit together well and are not nearly as effective as they once were.
    • Things that people say will never go wrong will go wrong. :: “We’ll never have a SQL injection. Our mobile devices will never get malware on them. Those users will never figure out how to do [that thing], why should we spend time and resources building it correctly?”
    • Compliance should be the easy part of ERM, not the whole thing :: So. True.
    • Asking dumb questions should be seen as good for firm. 10th dumb question might reveal something that no one else saw. :: This needs to be a requirement at everyone’s next architecture meeting or project initiation meeting. At the very least, do something similar before you let someone open up a firewall port.
    • There is a lack of imagination of adverse events. US has cultural optimism. Culture is risk seeking. :: Can be easily seen in our headstrong rush into consumerizing IT. I find that architects, engineers and application developers tend to see 1-2 “security moves” out. We need to do a better job training them to play Go or Chess in the enterprise.
    • People understand and prefer principles based regulation. But when trust is gone everything moves towards rules. :: If firms had been Doing The Right Thing™ in information security when they had the chance, we wouldn’t be in the state we are in now. I can’t see us getting [back] to principles-based regulation any time soon.
    • Supervisors need to learn to say no :: How many firewall port opens, disk-encryption exclusions, anti-virus disables and other policy exceptions have you processed just this past week? How many defenses have you had to give up during an architecture battle? Non-infosec leaders absolutely need to start learning how to say “no” when their best-and-brightest want to do the wrong thing.
    • Caveat Emptor :: Don’t believe your infosec vendors
    • A risk metric that makes you more effective makes you special. :: We have risk metrics? Seriously, tho, if we can measure and report risk effectively, our infosec programs will get better.

    I may have missed some or got some wrong. I’d be interested in any similarities or differences others saw in the list, or if you think I’m overly cynical about the state of affairs in infosec risk.

    NOTE: This is a re-post from a topic I started on the SecurityMetrics & SIRA mailing lists. Wanted to broaden the discussion to anyone not on those (and, why aren’t you on them?)

    I had not heard the term micromort prior to listening to David Spiegelhalter’s Do Lecture and the concept of it really stuck in my (albeit thick) head all week.

    I didn’t grab the paper yet, but the abstract for “Microrisks for Medical Decision Analysis” seems to be able to extrapolate directly to the risks we face in infosec:

    “Many would agree on the need to inform patients about the risks of medical conditions or treatments and to consider those risks in making medical decisions. The question is how to describe the risks and how to balance them with other factors in arriving at a decision. In this article, we present the thesis that part of the answer lies in defining an appropriate scale for risks that are often quite small. We propose that a convenient unit in which to measure most medical risks is the microprobability, a probability of 1 in 1 million. When the risk consequence is death, we can define a micromort as one microprobability of death. Medical risks can be placed in perspective by noting that we live in a society where people face about 270 micromorts per year from interactions with motor vehicles.

    Continuing risks or hazards, such as are posed by following unhealthful practices or by the side-effects of drugs, can be described in the same micromort framework. If the consequence is not death, but some other serious consequence like blindness or amputation, the microrisk structure can be used to characterize the probability of disability.

    Once the risks are described in the microrisk form, they can be evaluated in terms of the patient’s willingness-to-pay to avoid them. The suggested procedure is illustrated in the case of a woman facing a cranial arteriogram of a suspected arterio-venous malformation. Generic curves allow such analyses to be performed approximately in terms of the patient’s sex, age, and economic situation. More detailed analyses can be performed if desired.

    Microrisk analysis is based on the proposition that precision in language permits the soundness of thought that produces clarity of action and peace of mind.”

    When my CC is handy and I feel like giving up some privacy I’ll grab the whole paper, but the correlations seem pretty clear from just that bit.
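
    Just to make the unit concrete, here’s my own back-of-envelope arithmetic (a few lines of Python) on the 270-micromorts figure quoted in the abstract; this is purely my illustration, not anything from the paper itself:

        # A micromort is a one-in-a-million (1e-6) probability of death.
        MICROMORT = 1e-6

        # The abstract's example: roughly 270 micromorts per year from motor vehicles.
        motor_vehicle_micromorts = 270

        annual_probability = motor_vehicle_micromorts * MICROMORT
        print(f"{annual_probability:.5f}")     # 0.00027 -- about a 1-in-3,700 annual chance
        print(round(1 / annual_probability))   # ~3704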

    I must have missed Schneier’s blog post about it earlier this month where he links to understandinguncertainty.org/micromorts which links to plus.maths.org/content/os/issue55/features/risk/index (apologies for the link leapfrogging, but it provides background context that I did not have prior).

    At a risk to my credibility, I’ll add another link to a Wikipedia article that lists some actual micromorts and include a small sample here:

    Risks that increase the annual death risk by one micromort, and their associated cause of death:

    • smoking 1.4 cigarettes (cancer, heart disease)
    • drinking 0.5 liter of wine (cirrhosis of the liver)
    • spending 1 hour in a coal mine (black lung disease)
    • spending 3 hours in a coal mine (accident)
    • living 2 days in New York or Boston (air pollution)

    I asked on Twitter if anyone thought we had an equivalent – a “micropwn”, say – for our discipline. Do we have enough high-level data to produce a generic micropwn for something like the following (a rough tallying sketch follows the list):

    • 1 micropwn for every 3 consecutive days of missed DAT updates
    • 1 micropwn for every 10 Windows desktops with users with local Administrator privileges
    • 1 micropwn for every 5 consecutive days of missed IDS/IDP signature updates
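
    If we ever agreed on exchange rates like the (completely made-up) ones above, tallying an environment’s exposure would be simple arithmetic. A minimal Python sketch, where every rate and field name is invented purely for illustration:

        # Hypothetical micropwn "rate card" -- these mirror the made-up examples
        # above and carry no empirical weight whatsoever.
        RATES = {
            "missed_dat_update_days": 1 / 3,   # 1 micropwn per 3 consecutive days
            "local_admin_desktops":   1 / 10,  # 1 micropwn per 10 desktops
            "missed_ids_sig_days":    1 / 5,   # 1 micropwn per 5 consecutive days
        }

        def micropwns(observations):
            """Sum micropwns across the observed exposures (invented unit)."""
            return sum(RATES[name] * value for name, value in observations.items())

        # An equally invented environment snapshot:
        env = {"missed_dat_update_days": 6, "local_admin_desktops": 250, "missed_ids_sig_days": 0}
        print(f"{micropwns(env):.1f} micropwns")   # 27.0 micropwns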

    Just like on the medical side of things, the micropwn calculation can be refined depending on the level of detail available. For example (these are all made up for medicine):

    • 1 micromort for smoking 0.5 cigarettes if you are an overweight man in his 50’s
    • 1 micromort for smoking 0.25 cigarettes if you are an overweight man in his 50’s with a family genetic history of lung cancer

    (again, I don’t have the paper, but the abstract seems to suggest this is how medical micromorts work)

    Similarly, the micropwn calculation could get more granular by factoring in type of industry, geographic location, breach history, etc.
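
    One minimal way to express that kind of granularity would be multiplicative adjustment factors layered onto a base rate. Again, every number and category below is invented for illustration, not derived from any data set:

        # Invented base rate: 1 micropwn per 10 desktops with local admin rights.
        BASE_RATE = 1 / 10

        # Invented adjustment factors for industry, geography and breach history.
        ADJUSTMENTS = {
            "financial_sector": 1.5,
            "high_risk_geo":    1.2,
            "prior_breach":     1.3,
        }

        def adjusted_micropwns(admin_desktops, factors):
            """Scale the base score by each applicable (made-up) factor."""
            score = admin_desktops * BASE_RATE
            for factor in factors:
                score *= ADJUSTMENTS[factor]
            return score

        print(f"{adjusted_micropwns(100, ['financial_sector', 'prior_breach']):.1f}")   # 19.5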

    Also, a micropwn (just like a micromort) doesn’t necessarily mean a “catastrophic” breach (I dislike that word since I think of it as a broad term, while most folks associate it directly with sensitive record loss); in my view it could just as easily mean a successful malware infection.

    So, to further refine the question I originally posed on Twitter: Do we have enough broad data to provide input for micropwn calculations and can we define a starter-list of micropwns that would prove valuable in helping articulate risk within and outside our discipline?