
Category Archives: Risk

A while back I was engaged in a conversation on Twitter with @diami03 & @chriseng regarding (what I felt was) the need for someone to provide the perspective from within a medium-to-large enterprise, especially when there are so many folks in infosec who are fond of saying “why didn’t they just…?” in response to events like the Sony attack or the compromise of the web servers.

Between consulting and full-time employment I’ve been in over 20 enterprises ranging from manufacturing to health care to global finance. Some of these shops built their own software, others used/customized COTS. Some have outsourced (to various degrees) IT operations and others were determined to keep all activity in-house. Each of them has had challenges in what many would say should be “easy” activities, such as patching, vulnerability management or ensuring teams were using good coding practices.

It’s pretty easy for a solitary penetration tester or industry pundit to lay down some snark and mock large companies for how they manage their environments. It’s quite another experience to try to manage risk across tens (or hundreds) of thousands of employees/contractors and an equal (or larger) number of workstations, combined with thousands of servers and applications plus hundreds (or thousands) of suppliers/partners.

While I would not attempt to defend all enterprise inadequacies, I will cherry-pick some of the top snarks & off-hand statements for this series and try to explain the difficulties an enterprise might have along with some suggestions on how to overcome them.

If you have a “why didn’t they just…?” you’d like answered, drop me a note on Twitter or in the comments.

Everyone who can read this blog should remember the Deepwater Horizon spill that occurred in the Spring of 2010; huge loss of life (any loss is huge from my perspective) and still unknown impact to the environment. This event was a wake-up call to BP execs and other companies in that industry sector.

You should all also remember the “Sonage” of this Spring, when Sony lost millions of records across 12+ web site breaches; it should have been a wake-up call to almost every sector.

BP committed to developing and implementing a new Safety & Operational Risk (S&OR) program (which is now active). Sony is planning on hiring a CISO and has started hiring security folk, but they really need to develop a comprehensive Security & Operational Information Risk Program (and I suspect your org does as well).

What can we in the info risk world glean (steal) from BP’s plan and new S&OR Organization? Well, to adapt their charter, a new S&OIR program charter might be:

  • Strengthen & clarify requirements for secure, compliant and reliable computing & networking operations
  • Have an appropriately staffed department of specialists that are integrated with the business
  • Provide deep technical expertise to the company’s operating business
  • Intervene where needed to stop operations and bring about corrective actions
  • Provide checks & balances independent of business & IT
  • Strengthen mandatory security & compliance standards & processes (including operational risk management)
  • Provide an independent view of operational risk
  • Assess and enhance the competency of its workforce in matters related to information security

    BP claims success from their current program (the link above has examples), and imagine – just imagine – if your org required – yes, required – that new systems & applications conform to core, reasonable standards.

    In their annual report, BP fully acknowledged that risks inherent in its operations include a number of hazards that, “although many may have a low probability of occurrence, they can have extremely serious consequences if they do occur, such as the Gulf of Mexico incident.” Imagine – just imagine – if you could get your org to think the same way about information risk (you have plenty of examples to work from).

    BP did not remove responsibility for managing operational risk and operational delivery from the business lines, but they integrated risk analysts into those teams and gave them the authority to intervene when necessary. It took a disaster to forge this new plan. You don’t need to wait for a disaster in your org to begin socializing this type of change.

    Imagine…just, imagine…

    UPDATE: I have intentionally cross-posted this to my SIRA blog since the combined wit & intelligence of the folks there trumps anything I could do alone here.

    All the following newly-minted risk assessment types have been inspired by actual situations. Hopefully you get to stick to just the proper OCTAVE/FAIR/NIST/etc. ones where you practice.

    • HARA :: Half-Assed Risk Assessment — When you are provided no potential-impact data and only a woefully incomplete list of assets, but are still expected to return a valid risk rating.
    • CRA :: Cardassian Risk Assessment — When you are provided the resultant risk rating prior to beginning your risk assessment. (It’s a Star Trek reference for those with actual lives)

      We’re going to do x anyway because we don’t believe it’s a high risk, but go ahead and do your assessment since the Policy mandates that you do one.

    • IRA :: Immediate Risk Assessment — This one has been showcased well by our own Mr. DBIR himself on the SIRA podcasts. A risk assessment question by a senior executive who wants an answer *now* (dammit)! It is often phrased as “Which is more secure, x or y?” or “We need to do z. What’s the worst that can happen?“. You literally have no time to research and – if you don’t know the answer – then “Security” must not be very smart.
    • IRAVA :: In Reality, A Vulnerability Assessment — When you’re asked to determine risk, but what they are *really* asking for is a list of the vulnerabilities in a particular system/app. Think Tenable/Qualys scan results vs FAIR or OCTAVE.
    • IOCAL :: I Only Care About Likelihood — This is when the requester is absolutely fixated on likelihood and believes wholeheartedly that a low likelihood immediately means low risk. Any answer you give is also followed up with “have we ever had anything like x happen in the past?” and/or “have our competitors been hit with y yet?”
    • AD3RA :: Architecture Design Document Disguised As A Risk Assessment — When you are given all (and decent) inputs necessary to complete a pretty comprehensive risk assessment, but are then asked to include a full architecture design document on how to mitigate every finding. The sad truth is, the project team couldn’t get the enterprise architects (EA) to the table for the first project commit stage, but since you know enough about the technologies in play to fix the major problems, why not just make you do the EA dept’s job while you’re already cranking out the mandatory risk assessment?
    • WDRA :: Wikipedia Deflected Risk Assessment — When you perform a risk assessment, but a manager or senior manager finds data on Wikipedia that they use to negate your findings. (Since – as we all know – Wikipedia is the sum of all correct human knowledge).

    If you are also coerced into performing an insane risk assessment that doesn’t fit these models, feel free to share them in the comments.

    Had to modify the latimes URL in the post due to a notice from Wordfence/Google

    I was reviewing the – er – highlights? – from the ninth ERM Symposium in Chicago over at Riskviews this morning and was intrigued by some of the parallels to the current situation in enterprise security risk management (the ERM symposium seemed to be laser-focused on financial risk, which is kinda sad since ERM should make security/IT compliance risk a first class citizen). Not all topics had a 1:1 parallel, but there were some interesting ones:

    • Compliance culture of risk management in banks contributed to the crisis :: While not necessarily at crisis levels yet, the compliance culture that is infecting information security is headed toward this same fate. Relying on semi-competent auditors to wade through volumes of compensating controls and point-in-time reviews to deliver ✓’s in the right boxes is not a recipe for a solid security program that will help mitigate and respond effectively to emerging threats.
    • Banks were supposed to have been sophisticated enough to control their risks :: I’ll focus on medium-to-large enterprises for this comparison, but I’m fairly confident that this is a prevalent attitude regarding information security in corporations across the globe (“We manage information risk well“). Budgets seem to be focused on three fundamental areas that non-security-folk can conceptually grasp: firewalls (stop), traditional anti-virus (block) and endpoint disk encryption (scramble). By now, with a decade of OWASP failures [PDF, pg 28], a multi-year debate about anti-virus efficacy and ample proof that vendors suck at building secure software as evidence, you’d think we’d be focusing on identifying the areas of greatest risk and designing & following roadmaps to mitigate them.

      This may be more of a failure on our part to effectively communicate the issues in a way that decision makers can understand. While not as bad as the outright lying committed by those who helped bake the financial meltdown cake, it is important to call out since I believe senior management and company boards would Do The Right Thing™ if we effectively communicated what that Right Thing is.

    • Regulators need to keep up with innovation and excessive leverage from innovation. :: The spirit of this is warning that financial regulators need to keep a sharp eye out for the tricky ways institutions come up with to get around regulations (that’s my concise summary of “innovation”). Think “residential mortgage-backed securities”. The “excessive leverage” bit is consumers borrowing way too much money for over-priced houses.

      I’m not going to try to make a raw parallel, but just focus on the first part: Regulators need to keep up with innovation. The bad guys are getting more sophisticated and clever all the time and keep up with hot trends faster than we can defend against them…due in part to our wasted time testing controls and responding to low-grade audit findings. When even the SOX compliant security giants can fall hard, you know there’s a fundamental problem in how we are managing information security risk. Regulators & legislators need to stop ( http:// articles. latimes. com /2011/feb/11/business/la-fi-0211-privacy-20110211 ) jerking knees and partner with the best and the brightest in our field to develop new approaches for prescribing and validating security programs.

    • ERM is not an EASY button from Staples :: I’m *so* using that quote in an infosec context this week
    • Many banks and insurers should be failing the use test for ERM regulation to be effective. :: More firms need to fail SOX and PCI and [insert devastating regulation acronym here] checks, or SOX & PCI requirements need to change so that we see more failing. Pick one SOX and PCI compliant company at random and I’ll bet they have at least one exploitable Internet-based exposure or that custom-crafted malware can get through. If we start making real, effective, and sane regulations, we’ll start contributing to the betterment of information security in organizations.
    • Stress testing is becoming a major tool for regulators. :: What if regulators did actual stress testing of our security controls versus relying on point-in-time checks? I know that the stress tests for banks end up being a paper exercise, but even those exercises have managed to find problems. Come in, pick three modern exploit vectors and walk-through how company defenses would hold up.
    • Regulators need to be able to pay competitive market salaries :: we need smarter rule-makers and examiners. There are good people doing good work in this space, just not enough of them.
    • Difficult for risk managers to operate under multiple constraints of multiple regulators, accounting systems. :: Just domestically, 42 states with separate privacy regulations, SOX for public companies, PCI compliance for those who process credit cards and independent infosec auditing standards across any third-party one needs to do business with make it almost impossible to stop spinning around low-level findings and focus on protecting critical information assets. We need to get to a small number of solid standards that we can effectively understand and design solutions to meet.
    • Nice tree/forest story: Small trees take resources from the forest. Large trees shade smaller trees making it harder for them to get sunlight. Old trees die and fall crashing through the forest taking out smaller trees. :: This made me think of the rampant consolidation of the security tech industry. Savvy, nimble & competent boutique vendors are being swallowed by giants. The smart people leave when they can and the solutions are diluted and become part of a leftover stew of offerings that don’t quite fit together well and are not nearly as effective as they once were.
    • Things that people say will never go wrong will go wrong. :: “We’ll never have a SQL injection. Our mobile devices will never get malware on them. Those users will never figure out how to do [that thing], why should we spend time and resources building it correctly?”
    • Compliance should be the easy part of ERM, not the whole thing :: So. True.
    • Asking dumb questions should be seen as good for firm. 10th dumb question might reveal something that no one else saw. :: This needs to be a requirement at everyone’s next architecture meeting or project initiation meeting. At the very least, do something similar before you let someone open up a firewall port.
    • There is a lack of imagination of adverse events. US has cultural optimism. Culture is risk seeking. :: Can be easily seen in our headstrong rush into consumerizing IT. I find that architects, engineers and application developers tend to see 1-2 “security moves” out. We need to do a better job training them to play Go or Chess in the enterprise.
    • People understand and prefer principles based regulation. But when trust is gone everything moves towards rules. :: If firms had been Doing The Right Thing™ in information security when they had the chance, we wouldn’t be in the state we are in now. I can’t see us getting [back] to principled-based regulation any time soon.
    • Supervisors need to learn to say no :: How many firewall port opens, disk-encryption exclusions, anti-virus disables and other policy exceptions have you processed just this past week? How many defenses have you had to give up during an architecture battle? Non-infosec leaders absolutely need to start learning how to say “no” when their best-and-brightest want to do the wrong thing.
    • Caveat Emptor :: Don’t believe your infosec vendors
    • A risk metric that makes you more effective makes you special. :: We have risk metrics? Seriously, tho, if we can measure and report risk effectively, our infosec programs will get better.

    I may have missed some or gotten some wrong. I’d be interested in any similarities or differences others saw in the list, or if you think that I’m overly cynical about the state of affairs in infosec risk.

    This morning, @joshcorman linked to an article on the Harvard Business Review “The Conversation” blog that put forth the author’s view of The Four Personas of the Next-Generation CIO. The term persona is very Jungian and literally refers to “masks worn by a mime”. According to Jung, the persona “enables an individual to interrelate with the surrounding environment by reflecting the role in life that the individual is playing. In this way one can arrive at a compromise between one’s innate psychological constitution and society.”1

    So, the gist of the article is that there are four critical roles that the new CIO must play to successfully interrelate with and orchestrate the IT environment within their business. I believe this provides a context to dovetail information security & compliance components within personas (since none of them are overtly infosec or compliance), essentially facilitating a compromise between the innate desire to “do the right thing” – i.e. compliance/security – (which I do believe most CIOs possess) and the initiatives that stem from these personas which appear to be – on the surface – in direct conflict. As Josh pointed out, this gives us – the professionals that support our CIOs – an opportunity to help rather than obstruct. Let’s take a look at each of the four personas and what parts of information security & compliance are critical to the real success of each role.

    Chief “Infrastructure” Officer

    Key points:

    • cost reduction
    • accounts for 70% of IT budget
    • “lights on” focus
    • needs to maintain legacy environments while trying to integrate disruptive technologies
    • internal-facing

    This area is where most IT information security & compliance dollars are spent; the spend typically involves personnel & legacy security technology costs (e.g. firewalls and traditional anti-virus) and contributes to the overall budget impact of verifying the efficacy of established controls (i.e. audits).

    The best way to help this CIO persona is to ensure that your organization spends only what it needs to in order to safeguard the information at risk. That discipline can be facilitated through regular, repeated risk assessments which prioritize the systems, networks and applications that require the most protection and enable the design & implementation of automated controls in those environments.

    Government and industry regulations, third party business partner mandates and internal audit requirements are all factors in the risk assessment process, so there should be no surprises when the auditors come around. Furthermore, this risk assessment process will ultimately ensure that the controls are operating as efficiently as your organization can support with as little resource consumption as possible. It will also help shed light on controls that are missing or ineffective (both the first time through and as you perform regular validations). If you don’t believe compliance has a budget impact, take a look at the True Cost of Compliance report by Ponemon Institute & Tripwire, Inc.

    Solid and well-integrated risk assessment methodologies will speed up infrastructure and application deployment times, as there will be no last-minute security surprises that either hold up a rollout or cause compliance problems at a later date due to them not being properly considered. By driving manual control costs to automation, focusing on the right risks and keeping the compliance folks satisfied, you will provide your CIO the tools she needs to keep the lights on and the budget required to correctly integrate new technologies.
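The prioritization pass described above can be sketched as a deliberately simple likelihood × impact ranking. This is a toy illustration only: the asset names and 1-5 scores are invented, and a real program would use FAIR- or OCTAVE-style inputs rather than gut-feel integers.

```python
# Toy prioritization pass: rank assets by likelihood x impact so scarce
# control dollars go to the systems that need them most.
# Asset names and 1-5 scores are made up for illustration.
assets = [
    {"name": "payroll-db", "likelihood": 3, "impact": 5},
    {"name": "public-web", "likelihood": 5, "impact": 4},
    {"name": "test-lab", "likelihood": 4, "impact": 1},
]

# score each asset, then sort highest-risk first
for a in assets:
    a["risk"] = a["likelihood"] * a["impact"]

for a in sorted(assets, key=lambda a: a["risk"], reverse=True):
    print(f"{a['name']:<12} risk={a['risk']}")
```

Even something this crude, re-run on a regular cadence, beats a one-time spreadsheet that goes stale the day after the audit.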

    Chief “Integration” Officer

    Key points:

    • connect internal & external ecosystems
    • accounts for 10% of IT budget
    • connects disparate processes, data, systems, etc
    • M&A-centric
    • external- & internal-facing

    Of all the areas, I believe this one presents the best opportunity for our profession to shine and deliver the most value to IT & the business. The key is to weed out all the “Doctor No’s” in your organization. The “Doctor No” basically says “No” to every “can we do x?” question, which is at the heart of all the activities this CIO persona needs to perform. For example:

    Senior Business Analyst: “Can we connect these systems to this cloud service?”
    Security Analyst: “NO! Of course not, you fool!”

    Moving from “No” to enabling the business to operate in a secure fashion is an extension of the risk management practices that make the Infrastructure persona successful and needs to be combined with a strong Security Architecture program. For every connection, identify the non-negotiable compliance requirements as factors in a thoughtful risk assessment process. Communicate the “must-haves” and the risk finding to the business & IT stakeholders and enable them to make an informed decision with innovative and/or time-tested architecture options.

    Business owners are used to taking risks all the time (or just going out of business if they fail to take on risks regularly) and security/compliance risks are no different except that most senior executives are far more familiar with traditional risk management activities and need your help integrating security & compliance risk understanding into their existing knowledge-base.

    Natural by-products of your support in this area will be:

    • an understanding of the benefits of a data classification system (so your CIO will know what she really does need to protect),
    • an appreciation for the development of lightweight, repeatable process during the early stages of analysis & integration design, and
    • a comprehension of the inherent, legitimate risks in the environment

    which will all support more efficient acquisition and integration endeavours (and, I can’t think of one CIO that would be against increased efficiency in those areas).

    Chief “Intelligence” Officer

    Key points:

    • actionable insight
    • accounts for 10% of IT budget
    • improves business-user access to information
    • right data to right person at right time on right interface
    • internal-facing

    This area has a similar reliance on a robust data classification program and can be more easily facilitated via a robust identity and access management program. To make your CIO successful, you will need to help her work with the business to identify the information assets to be incorporated into each business intelligence (BI) initiative and ensure they are classified by their owners/stakeholders. There’s a good chance you’ll need to re-think your access control infrastructure architecture as well since you will be facing users armed with iPads, Macs and modern web client technologies that you’ve managed to avoid up until now.

    To ensure these BI activities will not just add to audit findings, it will be important to incorporate regular access rights reviews into the mix, since authentication and authorization will be the most robust control points. Perhaps this will also be a way to start the discussion on moving from archaic username and password credentials to multi-factor authentication, adaptive authentication or even a full-on migration to PKI. You will not have a better time than now to show how these solutions enable better access control and give even more options in the general application space.

    Finally, with all of this information flying through your network this may be the most opportune time to research what DLP solutions are available and how they might be deployed (one size and even one solution does not fit all) to ensure the business is retaining as much control over the data as it wants (risk management always seems to sneak its way in).

    Remember, the goal is to facilitate the business operations with as little disruption as possible. Getting in up-front with your ideas and solutions will make your CIO much more effective in orchestrating successful BI programs and projects.

    Chief “Innovation” Officer

    Key points:

    • pilots disruptive technologies
    • accounts for 10% of IT budget
    • move fast; fail fast; move on
    • external-facing

    I like shiny objects as much as the next tech-SQUIRREL!-and many CIOs do as well. Who wouldn’t want to arm their workforce with iPads connected to VDI sessions in the cloud (shameless SEO-inspired sentence)? Seriously, though, the modern CIO must regularly push IT & business users and management out of their comfort zones to avoid having their whole shop turn into a data center maze of twisty legacy deployments. Even if the mainframe is still king in many large shops, getting that data into the hands of consumers wielding iOS and Android devices will be crucial to the ongoing success of each enterprise (and, that’s just today).

    Moving beyond traditional development models and languages and embracing faster, lighter and domain-specific tools is also part of the equation. Rapid code updates across far more platforms than you are used to will eventually be the norm. And deployment models that involve traditional systems, internally dynamically provisioned application spaces and external content and hosting providers will be almost a necessity for businesses to succeed.

    You must be prepared to have your organization adapt with these changing models. Make sure your staff is part of initiatives like the Cloud Security Alliance, OWASP and Rugged. Keep up with disruptive innovators and embrace the challenge of working with these groups instead of fighting against them.

    You will need to help your CIO bake security and compliance checkpoints all throughout the exploratory and development phases of these risky endeavours. Identifying compliance pitfalls will be paramount as the regulatory bodies and auditors are even less apt to embrace change than your security teams are, never mind the monumental and drawn-out tasks of making any changes to established regulatory requirements. Working to help these efforts succeed is great, but you also need to take care to avoid being the reason they fail (especially if that’s not due solely to a compliance problem).

    I believe this new model of modern CIO will be very willing to work with an information security/risk/compliance group that exhibits even some of the qualities listed above. It won’t be easy (hey, it isn’t now) but it will give you the most opportunity to be successful in your program(s) and be one of the most critical components in enabling your CIO to respond to ever changing business needs and ventures.

    NOTE: This is a re-post from a topic I started on the SecurityMetrics & SIRA mailing lists. Wanted to broaden the discussion to anyone not on those (and, why aren’t you on them?)

    I had not heard the term micromort prior to listening to David Spiegelhalter’s Do Lecture and the concept of it really stuck in my (albeit thick) head all week.

    I didn’t grab the paper yet, but the abstract for “Microrisks for Medical Decision Analysis” seems to be able to extrapolate directly to the risks we face in infosec:

    “Many would agree on the need to inform patients about the risks of medical conditions or treatments and to consider those risks in making medical decisions. The question is how to describe the risks and how to balance them with other factors in arriving at a decision. In this article, we present the thesis that part of the answer lies in defining an appropriate scale for risks that are often quite small. We propose that a convenient unit in which to measure most medical risks is the microprobability, a probability of 1 in 1 million. When the risk consequence is death, we can define a micromort as one microprobability of death. Medical risks can be placed in perspective by noting that we live in a society where people face about 270 micromorts per year from interactions with motor vehicles.

    Continuing risks or hazards, such as are posed by following unhealthful practices or by the side-effects of drugs, can be described in the same micromort framework. If the consequence is not death, but some other serious consequence like blindness or amputation, the microrisk structure can be used to characterize the probability of disability.

    Once the risks are described in the microrisk form, they can be evaluated in terms of the patient’s willingness-to-pay to avoid them. The suggested procedure is illustrated in the case of a woman facing a cranial arteriogram of a suspected arterio-venous malformation. Generic curves allow such analyses to be performed approximately in terms of the patient’s sex, age, and economic situation. More detailed analyses can be performed if desired.

    Microrisk analysis is based on the proposition that precision in language permits the soundness of thought that produces clarity of action and peace of mind.”

    When my CC is handy and I feel like giving up some privacy I’ll grab the whole paper, but the correlations seem pretty clear from just that bit.

    I must have missed Schneier’s blog post about it earlier this month where he links to which links to (apologies for the link leapfrogging, but it provides background context that I did not have prior).

    At a risk to my credibility, I’ll add another link to a Wikipedia article that lists some actual micromorts and include a small sample here:

    Risks that increase the annual death risk by one micromort, and their associated cause of death:

    • smoking 1.4 cigarettes (cancer, heart disease)
    • drinking 0.5 liter of wine (cirrhosis of the liver)
    • spending 1 hour in a coal mine (black lung disease)
    • spending 3 hours in a coal mine (accident)
    • living 2 days in New York or Boston (air pollution)
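Since 1 micromort is a one-in-a-million chance of death, converting a year of exposures into an added annual death probability is simple arithmetic. A quick sketch using the Wikipedia figures above (the exposure amounts themselves are made up):

```python
# 1 micromort = a 1-in-1,000,000 chance of death
MICRO = 1e-6

# micromorts per unit of activity (figures from the list above)
rates = {
    "cigarettes_smoked": 1 / 1.4,  # 1 micromort per 1.4 cigarettes
    "liters_of_wine": 1 / 0.5,     # 1 micromort per 0.5 L of wine
    "days_in_nyc": 1 / 2,          # 1 micromort per 2 days in NYC/Boston
}

# a hypothetical year of exposure
exposure = {"cigarettes_smoked": 0, "liters_of_wine": 26, "days_in_nyc": 10}

micromorts = sum(rates[k] * v for k, v in exposure.items())
print(f"{micromorts:.0f} micromorts, i.e. {micromorts * MICRO:.4%} added annual death risk")
```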

    I asked on Twitter if anyone thought we had an equivalent – a “micropwn“, say – for our discipline. Do we have enough high level data to produce a generic micropwn for something like:

    • 1 micropwn for every 3 consecutive days of missed DAT updates
    • 1 micropwn for every 10 Windows desktops with users with local Administrator privileges
    • 1 micropwn for every 5 consecutive days of missed IDS/IDP signature updates

    Just like with the medical side of things, the micropwn calculation can be increased depending on the level of detail. For example (these are all made up for medicine):

    • 1 micromort for smoking 0.5 cigarettes if you are an overweight man in his 50’s
    • 1 micromort for smoking 0.25 cigarettes if you are an overweight man in his 50’s with a family genetic history of lung cancer

    (again, I don’t have the paper, but the abstract seems to suggest this is how medical micromorts work)

    Similarly, the micropwn calculation could get more granular by factoring in type of industry, geographic locations, breach history, etc.
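A micropwn tally along those lines might look like the sketch below. To be clear, every base rate and context multiplier here is invented purely to illustrate the shape of the calculation; the actual values are exactly the broad-data question this post is asking about.

```python
# Sketch of a micropwn tally: 1 micropwn = a one-in-a-million chance of
# compromise. Every rate and multiplier below is invented for illustration.
BASE_RATES = {
    "days_missed_dat_updates": 1 / 3,     # 1 micropwn per 3 days
    "desktops_with_local_admin": 1 / 10,  # 1 micropwn per 10 desktops
    "days_missed_ids_sigs": 1 / 5,        # 1 micropwn per 5 days
}

# refine by context, as the medical examples refine by age/history
CONTEXT_MULTIPLIERS = {"finance": 2.0, "retail": 1.5, "prior_breach": 1.25}

def micropwns(exposures, contexts=()):
    """Sum base exposures, then scale by each applicable context factor."""
    score = sum(BASE_RATES[k] * v for k, v in exposures.items())
    for c in contexts:
        score *= CONTEXT_MULTIPLIERS.get(c, 1.0)
    return score

print(round(micropwns({"days_missed_dat_updates": 6,
                       "desktops_with_local_admin": 200},
                      contexts=["finance", "prior_breach"]), 2))
```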

    Also, a micropwn (just like a micromort) doesn’t necessarily mean a “catastrophic” breach (I dislike that word, as I think of it as a broad term when most folks associate it directly with sensitive record loss). It could mean a successful malware infection, in my view.

    So, to further refine the question I originally posed on Twitter: Do we have enough broad data to provide input for micropwn calculations and can we define a starter-list of micropwns that would prove valuable in helping articulate risk within and outside our discipline?

    Last night, the kids left the garage open after sledding all afternoon and I failed to perform my usual rounds due to still being horrendously ill. At some point between 23:00 & 05:30, miscreants did a snatch & grab on some electronics and other items. Ugh. This was both a physical security failure and a risk management issue, but I’ll keep this post on-topic and expound on the other items at a later date.

    The first thing I did after noticing something was amiss was to take an inventory of what was missing from both the vehicles and then the garage in general. Once I had that list, it was time to start making the calls to the police department and insurance company. If you’ve ever been a victim of such a loss, you know the one question that comes up almost immediately: “Do you have the serial #’s and approximate value of the items taken?”. Most people, unfortunately, don’t.

    While you do not necessarily need anything more than a file folder and some note paper (plus receipts), using a tool like Evernote can really help. You first need to create a folder called (something like) “Home Inventory”. Then, on or very near the date of acquisition do the following:

    • Take a picture of the object (television, GPS, camera, road bike, guitar, etc) and put it in the Evernote home inventory folder as a new note, with the title being the actual, full brand/product name. This is made even easier if you use a smart phone that has an Evernote app on it. Take more than one picture if you want to include more of the location it was ultimately placed in. I would also suggest having yourself or another owner be in one of the pictures.
    • Add to the note the actual date of purchase. It also helps to either take a picture of or scan the printed receipt or paste in the e-mail with the receipt (if purchased online).
    • Locate the serial number and manufacturer id number and put those in. It helps to take an actual picture of this information as well.
    • Include the room or other physical location of where the object was ultimately placed (this makes sense only if it’s a fixed-asset like a TV, stereo or car GPS).
    • If you have the technical know-how, make an MD5 hash of all attachments (pictures/scans) and include the MD5 sums in the entry. This validates the integrity of the stored information as best as possible.
    • Do not forget to include any SD cards, docking stations, GPS mounts, cables, custom road bike wheelsets, etc. in your list. I would recommend associating them with the entry for the main object you are documenting
    • If you have any account, financial or other personal information (e.g. e-mail address, usernames, passwords) stored on the object – and even TVs hold this type of data these days – document those as well and include any remediation steps or contact info (such as a bank phone number)
    • If the object is something like a media center (e.g. Apple TV) or media hard drive, you will need to keep a separate inventory of the non-replaceable, paid content on it and include that in any loss document if you do not have backups
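For the MD5 step in the list above, a minimal sketch (the folder path is hypothetical; any similar layout works):

```python
# Minimal sketch: fingerprint inventory attachments with MD5 so you can
# later show a photo or scanned receipt hasn't been altered.
# The "home-inventory" folder path is hypothetical.
import hashlib
from pathlib import Path

def md5sum(path):
    """Hash a file in chunks so large photos don't need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# print name + digest for each photo; paste the digests into the Evernote note
for photo in Path("home-inventory/tv-serial-photos").glob("*.jpg"):
    print(photo.name, md5sum(photo))
```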

    Now, when you do become the victim of such a crime or incur damage as a result of a fire/flood/etc, you have all the data any agency or company will need to document the loss. This information may also be useful to help law enforcement find the stolen objects (if a theft), especially if there are unique markings.

    While this was not a fun experience, it did validate my time and effort building and maintaining this inventory and will hopefully be helpful to others, though I sincerely hope you never have to go through anything similar.
