
This morning, @joshcorman linked to an article on the Harvard Business Review “The Conversation” blog that put forth the author’s view of The Four Personas of the Next-Generation CIO. The term persona is very Jungian and literally refers to “masks worn by a mime”. According to Jung, the persona “enables an individual to interrelate with the surrounding environment by reflecting the role in life that the individual is playing. In this way one can arrive at a compromise between one’s innate psychological constitution and society.” [1]

So, the gist of the article is that there are four critical roles the new CIO must play to successfully interrelate with and orchestrate the IT environment within their business. I believe this provides a context to dovetail information security & compliance components within the personas (since none of them are overtly infosec or compliance), essentially facilitating a compromise between the innate desire to “do the right thing” – i.e. compliance/security – (which I do believe most CIOs possess) and the initiatives that stem from these personas, which appear – on the surface – to be in direct conflict. As Josh pointed out, this gives us – the professionals who support our CIOs – an opportunity to help rather than obstruct. Let’s take a look at each of the four personas and what parts of information security & compliance are critical to the real success of each role.

Chief “Infrastructure” Officer

Key points:

  • cost reduction
  • accounts for 70% of IT budget
  • “lights on” focus
  • needs to maintain legacy environments while trying to integrate disruptive technologies
  • internal-facing

This area is where most IT information security & compliance dollars are spent; the spending typically involves personnel & legacy security technology costs (e.g. firewalls and traditional anti-virus) and contributes to the overall budget impact of verifying the efficacy of established controls (i.e. audits).

The best way to help this CIO persona is to ensure that your organization spends only what it needs to safeguard the information at risk. That can be facilitated through regular, repeated risk assessments which prioritize the systems, networks and applications that require the most protection and enable the design & implementation of automated controls in those environments.
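To make that prioritization step concrete, here is a minimal sketch of ranking a risk register by a simple likelihood × impact score. Every asset name and score below is a made-up placeholder, and real assessment methodologies are considerably richer:

```python
# Toy risk-register prioritization: rank assets by likelihood x impact
# so remediation and control-automation dollars go to the top of the list.
# All assets and scores are hypothetical examples.

assets = [
    # (asset, likelihood 1-5, impact 1-5)
    ("cardholder-data database", 3, 5),
    ("public marketing site",    4, 2),
    ("internal wiki",            2, 2),
    ("payroll application",      2, 5),
]

# Sort so the highest-risk items surface first.
for name, likelihood, impact in sorted(
        assets, key=lambda a: a[1] * a[2], reverse=True):
    print(f"{name:25s} risk score: {likelihood * impact:2d}")
```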

Government and industry regulations, third-party business partner mandates and internal audit requirements are all factors in the risk assessment process, so there should be no surprises when the auditors come around. Furthermore, this risk assessment process will ultimately ensure that the controls operate as efficiently as your organization can support, with as little resource consumption as possible. It will also help shed light on controls that are missing or ineffective (both the first time through and as you perform regular validations). If you don’t believe compliance has a budget impact, take a look at the True Cost of Compliance report by the Ponemon Institute & Tripwire, Inc.

Solid, well-integrated risk assessment methodologies will speed up infrastructure and application deployment times, as there will be no last-minute security surprises that either hold up a rollout or cause compliance problems later because they were not properly considered. By driving manual control costs to automation, focusing on the right risks and keeping the compliance folks satisfied, you will provide your CIO the tools she needs to keep the lights on and the budget required to correctly integrate new technologies.

Chief “Integration” Officer

Key points:

  • connect internal & external ecosystems
  • accounts for 10% of IT budget
  • connects disparate processes, data, systems, etc.
  • M&A-centric
  • external- & internal-facing

Of all the areas, I believe this one presents the best opportunity for our profession to shine and deliver the most value to IT & the business. The key is to weed out all the “Doctor Nos” in your organization. A “Doctor No” reflexively answers “No” to every “can we do x?” question, and those questions are at the heart of all the activities this CIO persona needs to perform. For example:

Senior Business Analyst: “Can we connect these systems to this cloud service?”
Security Analyst: “NO! Of course not, you fool!”

Moving from “No” to enabling the business to operate in a secure fashion is an extension of the risk management practices that make the Infrastructure persona successful, and it needs to be combined with a strong Security Architecture program. For every connection, identify the non-negotiable compliance requirements as factors in a thoughtful risk assessment process. Communicate the “must-haves” and the risk findings to the business & IT stakeholders and enable them to make an informed decision with innovative and/or time-tested architecture options.

Business owners take risks all the time (or go out of business for failing to take them regularly), and security/compliance risks are no different, except that most senior executives are far more familiar with traditional risk management activities and need your help integrating security & compliance risk into their existing knowledge base.

Natural by-products of your support in this area will be:

  • an understanding of the benefits of a data classification system (so your CIO will know what she really does need to protect),
  • an appreciation for the development of lightweight, repeatable process during the early stages of analysis & integration design, and
  • a comprehension of the inherent, legitimate risks in the environment

which will all support more efficient acquisition and integration endeavours (and I can’t think of one CIO who would be against increased efficiency in those areas).

Chief “Intelligence” Officer

Key points:

  • actionable insight
  • accounts for 10% of IT budget
  • improves business-user access to information
  • right data to right person at right time on right interface
  • internal-facing

This area has a similar reliance on a robust data classification program and is more easily facilitated via a solid identity and access management program. To make your CIO successful, you will need to help her work with the business to identify the information assets to be incorporated into each business intelligence (BI) initiative and ensure they are classified by their owners/stakeholders. There’s a good chance you’ll need to re-think your access control infrastructure architecture as well, since you will be facing users armed with iPads, Macs and modern web client technologies that you’ve managed to avoid up until now.

To ensure these BI activities do not just add to audit findings, it will be important to incorporate regular access rights reviews into the mix, since authentication and authorization will be the most robust control points. Perhaps this will also be a way to start the discussion on moving from archaic username-and-password credentials to multi-factor authentication, adaptive authentication or even a full-on migration to PKI. There will be no better time than now to show how these solutions enable better access control and open up even more options in the general application space.
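One way to make those access rights reviews routine is to regularly diff actual entitlements against what each account’s current role justifies. Here is a minimal sketch of that idea; all accounts, roles and entitlement names are hypothetical:

```python
# Toy access-rights review: flag entitlements that an account's current
# role does not justify. All data below is invented for illustration.

current_roles = {"alice": "finance", "bob": "engineering", "carol": "terminated"}

# Entitlements actually present in the BI platform, per account.
entitlements = {
    "alice": {"finance_reports", "payroll_dashboards"},
    "bob":   {"finance_reports"},        # stale grant from an old project
    "carol": {"payroll_dashboards"},     # account should have been disabled
}

# Entitlements each role is permitted to hold.
allowed_by_role = {
    "finance":     {"finance_reports", "payroll_dashboards"},
    "engineering": {"eng_metrics"},
}

for account, grants in sorted(entitlements.items()):
    role = current_roles.get(account, "unknown")
    excess = grants - allowed_by_role.get(role, set())
    if excess:
        print(f"REVIEW {account} ({role}): unjustified access to {sorted(excess)}")
```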

Finally, with all of this information flying through your network, this may be the most opportune time to research what DLP solutions are available and how they might be deployed (one size – and even one solution – does not fit all) to ensure the business retains as much control over the data as it wants (risk management always seems to sneak its way in).

Remember, the goal is to facilitate the business operations with as little disruption as possible. Getting in up-front with your ideas and solutions will make your CIO much more effective in orchestrating successful BI programs and projects.

Chief “Innovation” Officer

Key points:

  • pilots disruptive technologies
  • accounts for 10% of IT budget
  • move fast; fail fast; move on
  • external-facing

I like shiny objects as much as the next tech – SQUIRREL! – and many CIOs do as well. Who wouldn’t want to arm their workforce with iPads connected to VDI sessions in the cloud (shameless SEO-inspired sentence)? Seriously, though, the modern CIO must regularly push IT & business users and management out of their comfort zones to avoid having their whole shop turn into a data center maze of twisty legacy deployments. Even if the mainframe is still king in many large shops, getting that data into the hands of consumers wielding iOS and Android devices will be crucial to the ongoing success of each enterprise (and that’s just today).

Moving beyond traditional development models and languages and embracing faster, lighter and domain-specific tools is also part of the equation. Rapid code updates across far more platforms than you are used to will eventually be the norm. And deployment models that involve traditional systems, dynamically provisioned internal application spaces, and external content and hosting providers will almost be a necessity for the business to succeed.

You must be prepared to have your organization adapt with these changing models. Make sure your staff is part of initiatives like the Cloud Security Alliance, OWASP and Rugged. Keep up with disruptive innovators and embrace the challenge of working with these groups instead of fighting against them.

You will need to help your CIO bake security and compliance checkpoints throughout the exploratory and development phases of these risky endeavours. Identifying compliance pitfalls will be paramount, as regulatory bodies and auditors are even less apt to embrace change than your security teams are, never mind the monumental, drawn-out task of making any changes to established regulatory requirements. Working to help these efforts succeed is great, but you also need to take care to avoid being the reason they fail (especially when the blocker is not a genuine compliance problem).

I believe this new model of modern CIO will be very willing to work with an information security/risk/compliance group that exhibits even some of the qualities listed above. It won’t be easy (hey, it isn’t now), but it will give you the most opportunity to be successful in your program(s) and to be one of the most critical components in enabling your CIO to respond to ever-changing business needs and ventures.

NOTE: This is a re-post from a topic I started on the SecurityMetrics & SIRA mailing lists. I wanted to broaden the discussion to anyone not on those (and, why aren’t you on them?).

I had not heard the term micromort prior to listening to David Spiegelhalter’s Do Lecture, and the concept really stuck in my (albeit thick) head all week.

I didn’t grab the paper yet, but the abstract for “Microrisks for Medical Decision Analysis” suggests the concept extrapolates directly to the risks we face in infosec:

“Many would agree on the need to inform patients about the risks of medical conditions or treatments and to consider those risks in making medical decisions. The question is how to describe the risks and how to balance them with other factors in arriving at a decision. In this article, we present the thesis that part of the answer lies in defining an appropriate scale for risks that are often quite small. We propose that a convenient unit in which to measure most medical risks is the microprobability, a probability of 1 in 1 million. When the risk consequence is death, we can define a micromort as one microprobability of death. Medical risks can be placed in perspective by noting that we live in a society where people face about 270 micromorts per year from interactions with motor vehicles.

Continuing risks or hazards, such as are posed by following unhealthful practices or by the side-effects of drugs, can be described in the same micromort framework. If the consequence is not death, but some other serious consequence like blindness or amputation, the microrisk structure can be used to characterize the probability of disability.

Once the risks are described in the microrisk form, they can be evaluated in terms of the patient’s willingness-to-pay to avoid them. The suggested procedure is illustrated in the case of a woman facing a cranial arteriogram of a suspected arterio-venous malformation. Generic curves allow such analyses to be performed approximately in terms of the patient’s sex, age, and economic situation. More detailed analyses can be performed if desired.

Microrisk analysis is based on the proposition that precision in language permits the soundness of thought that produces clarity of action and peace of mind.”

When my CC is handy and I feel like giving up some privacy I’ll grab the whole paper, but the correlations seem pretty clear from just that bit.

I must have missed Schneier’s blog post about it earlier this month where he links to understandinguncertainty.org/micromorts which links to plus.maths.org/content/os/issue55/features/risk/index (apologies for the link leapfrogging, but it provides background context that I did not have prior).

At a risk to my credibility, I’ll add another link to a Wikipedia article that lists some actual micromorts and include a small sample here:

Risks that increase the annual death risk by one micromort, and their associated cause of death:

  • smoking 1.4 cigarettes (cancer, heart disease)
  • drinking 0.5 liter of wine (cirrhosis of the liver)
  • spending 1 hour in a coal mine (black lung disease)
  • spending 3 hours in a coal mine (accident)
  • living 2 days in New York or Boston (air pollution)

I asked on Twitter if anyone thought we had an equivalent – a “micropwn”, say – for our discipline. Do we have enough high-level data to produce a generic micropwn for something like:

  • 1 micropwn for every 3 consecutive days of missed DAT updates
  • 1 micropwn for every 10 Windows desktops with users with local Administrator privileges
  • 1 micropwn for every 5 consecutive days of missed IDS/IDP signature updates

Just like with the medical side of things, the micropwn calculation can be increased depending on the level of detail. For example (these are all made up for medicine):

  • 1 micromort for smoking 0.5 cigarettes if you are an overweight man in his 50’s
  • 1 micromort for smoking 0.25 cigarettes if you are an overweight man in his 50’s with a family genetic history of lung cancer

(again, I don’t have the paper, but the abstract seems to suggest this is how medical micromorts work)

Similarly, the micropwn calculation could get more granular by factoring in type of industry, geographic location, breach history, etc.
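To make the unit concrete, here is a toy sketch of how such a calculation might work, combining the made-up rates above with equally made-up industry modifiers; none of these numbers come from real breach data:

```python
# Toy micropwn calculator. One micropwn = a 1-in-1,000,000 chance of
# compromise. Every rate and modifier below is invented for illustration.

BASE_RATES = {
    "days_missed_dat_updates": 1 / 3,   # 1 micropwn per 3 consecutive days
    "desktops_local_admin":    1 / 10,  # 1 micropwn per 10 such desktops
    "days_missed_ids_sigs":    1 / 5,   # 1 micropwn per 5 consecutive days
}

# Hypothetical context modifiers, analogous to the medical risk factors.
INDUSTRY_MODIFIER = {"retail": 1.5, "finance": 2.0, "other": 1.0}

def micropwns(observations, industry="other"):
    base = sum(BASE_RATES[key] * count for key, count in observations.items())
    return base * INDUSTRY_MODIFIER.get(industry, 1.0)

score = micropwns(
    {"days_missed_dat_updates": 6,   # 6 days without DAT updates
     "desktops_local_admin": 50,     # 50 desktops with local-admin users
     "days_missed_ids_sigs": 10},    # 10 days of stale IDS/IDP signatures
    industry="finance",
)
print(f"~{score:.0f} micropwns, i.e. a {score / 1e6:.6f} chance of compromise")
```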

Also, a micropwn (just like a micromort) doesn’t necessarily mean a “catastrophic” breach (I dislike that word, as I think of it as a broad term while most folks associate it directly with sensitive record loss). In my view it could just mean a successful malware infection.

So, to further refine the question I originally posed on Twitter: Do we have enough broad data to provide input for micropwn calculations and can we define a starter-list of micropwns that would prove valuable in helping articulate risk within and outside our discipline?

Last night, the kids left the garage open after sledding all afternoon and I failed to perform my usual rounds due to still being horrendously ill. At some point between 23:00 & 05:30, miscreants did a snatch & grab on some electronics and other items. Ugh. This was both a physical security failure and a risk management issue, but I’ll keep this post on-topic and expound on the other items at a later date.

The first thing I did after noticing something was amiss was to take an inventory of what was missing, first from the vehicles and then from the garage in general. Once I had that list, it was time to start making the calls to the police department and insurance company. If you’ve ever been a victim of such a loss, you know the one question that comes up almost immediately: “Do you have the serial #’s and approximate value of the items taken?”. Most people, unfortunately, don’t.

While you do not necessarily need anything more than a file folder and some note paper (plus receipts), using a tool like Evernote can really help. You first need to create a folder called (something like) “Home Inventory”. Then, on or very near the date of acquisition, do the following:

  • Take a picture of the object (television, GPS, camera, road bike, guitar, etc.) and put it in the Evernote home inventory folder as a new note, with the title being the actual, full brand/product name. This is made even easier if you use a smartphone that has an Evernote app on it. Take more than one picture if you want to include more of the location it was ultimately placed in. I would also suggest having yourself or another owner be in one of the pictures.
  • Add to the note the actual date of purchase. It also helps to either take a picture of or scan the printed receipt or paste in the e-mail with the receipt (if purchased online).
  • Locate the serial number and manufacturer id number and put those in. It helps to take an actual picture of this information as well.
  • Include the room or other physical location of where the object was ultimately placed (this makes sense only if it’s a fixed-asset like a TV, stereo or car GPS).
  • If you have the technical know-how, make an MD5 hash of all attachments (pictures/scans) and include the MD5 sums in the entry; this validates the integrity of the stored information as best as possible (see the short script after this list).
  • Do not forget to include any SD cards, docking stations, GPS mounts, cables, custom road bike wheelsets, etc. in your list. I would recommend associating them with the entry for the main object you are documenting.
  • If you have any account, financial or other personal information (e.g. e-mail address, usernames, passwords) stored on the object – and even TVs hold this type of data these days – document that as well and include any remediation steps or contact info (such as the bank’s phone number).
  • If the object is something like a media center (e.g. Apple TV) or media hard drive, you will need to keep a separate inventory of the non-replaceable, paid content on it and include that in any loss document if you do not have backups.
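For the MD5 step mentioned above, here is a minimal sketch using only Python’s standard library; the folder name is a placeholder for wherever you keep the inventory photos and scans before attaching them:

```python
# Print an MD5 sum for every photo/scan in an inventory folder so the
# digests can be pasted into the matching Evernote note.
# "inventory_photos" is a placeholder path.
import hashlib
from pathlib import Path

def md5sum(path, chunk_size=65536):
    digest = hashlib.md5()
    with path.open("rb") as f:
        # Read in chunks so large scans don't need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for item in sorted(Path("inventory_photos").iterdir()):
    if item.is_file():
        print(f"{md5sum(item)}  {item.name}")
```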

Now, when you do become the victim of such a crime or incur damage as a result of a fire/flood/etc, you have all the data any agency or company will need to document the loss. This information may also be useful to help law enforcement find the stolen objects (if a theft), especially if there are unique markings.

While this was not a fun experience, it did validate my time and effort building and maintaining this inventory and will hopefully be helpful to others, though I sincerely hope you never have to go through anything similar.

If you’re preparing to install Windows 7 or Windows Server 2008 R2 Service Pack 1, now would be a good time to give Microsoft’s Attack Surface Analyzer a spin. ASA takes a baseline snapshot of your system state, lets you take another snapshot after any configuration change or product installation, and then displays the changes to a number of key elements of the Windows attack surface, including analysis of changed or newly added files, registry keys, services, ActiveX controls, listening ports, access control lists and other parameters.

Ideally, you’d take your baseline after a fresh install of your workstation or server from known, good media/images and after your own base configuration changes.

This would also be a good thing to do when building your base VM images so you can then validate their state as you duplicate and modify VDIs.

The installation of a Service Pack is a pretty radical change to your environment. If you run ASA prior to the SP install, you can see whether there are any significant changes to your system’s security profile after the bundle of patches and hotfixes is put down. You could also use the SP1 event to baseline post-install, provided you’ve done as thorough a malware & rootkit sweep as can be done (you still cannot truly trust the results).

It may take some discipline to run ASA regularly on your personal systems every time you update software or drivers. IT shops should have an easier time scripting ASA during system deployments as well as application code updates. In either scenario, this free tool from Microsoft should help make you a more informed user and also aid you in building and maintaining more secure systems.
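ASA does the real work for you, but the underlying baseline-and-diff idea is simple enough to illustrate. Here is a conceptual sketch (not ASA’s actual mechanics, and covering only one slice of the attack surface) that compares listening TCP ports before and after a change; it assumes the third-party psutil package is installed:

```python
# Conceptual snapshot-and-diff in the spirit of Attack Surface Analyzer:
# record which TCP ports are listening before a change, re-run afterwards,
# and report the delta. Requires the third-party psutil package.
import json
import sys

import psutil

SNAPSHOT_FILE = "port_baseline.json"

def listening_ports():
    """Return the set of locally listening TCP ports."""
    return {conn.laddr.port
            for conn in psutil.net_connections(kind="tcp")
            if conn.status == psutil.CONN_LISTEN}

if __name__ == "__main__":
    if sys.argv[1:] == ["baseline"]:
        with open(SNAPSHOT_FILE, "w") as f:
            json.dump(sorted(listening_ports()), f)
        print("baseline saved")
    else:
        with open(SNAPSHOT_FILE) as f:
            before = set(json.load(f))
        after = listening_ports()
        print("newly listening:", sorted(after - before))
        print("no longer listening:", sorted(before - after))
```

Run it once with the baseline argument from known, good media, then again after the Service Pack (or any other change) goes down.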

See also: MSDN SDLC blog post on the new Attack Surface Analyzer

The following excerpt is from The Five Dysfunctions of a Team: A Leadership Fable by Patrick Lencioni.

I wonder how many of you recognize traits like these on your own team(s), past or present. I can certainly point to these as being core reasons various teams I’ve been on or led have been ineffective and unsuccessful. The book seems like a good, short read, too.

  • Dysfunction One – Absence of Trust

    When team members do not trust one another, they are unwilling to be vulnerable within the team. It is impossible for a team to build a foundation for trust when team members are not genuinely open about their mistakes and weaknesses.

  • Dysfunction Two – Fear of Conflict

    Failure to build trust sets the stage for the second dysfunction. Teams without trust are unable to engage in passionate debate about ideas. Instead, they are guarded in their comments and resort to discussions that mask their true feelings.

  • Dysfunction Three – Lack of Commitment

    Teams that do not engage in healthy conflict will suffer from the third dysfunction. Because they do not openly surface their true opinions or engage in open debate, team members will rarely commit to team decisions, though they may feign agreement in order to avoid controversy or conflict.

  • Dysfunction Four – Avoidance of Accountability

    A lack of commitment creates an atmosphere where team members do not hold one another accountable. Because there is no commitment to a clear action plan, team members hesitate to hold one another accountable on actions and behaviors that are contrary to the good of the team.

  • Dysfunction Five – Inattention to Results

    The lack of accountability makes it possible for people to put their own needs above the team’s goals. Team members will focus on their own career goals or recognition for their departments to the detriment of the team.

A weakness in any one area can cause teamwork to deteriorate. The model is easy to understand, and yet can be difficult to practice because it requires high levels of discipline and persistence.

By now, many non-IT and non-security folk have heard of Firesheep, a tool written by @codebutler which allows anyone using Firefox on unprotected networks to capture and hijack active sessions to popular social media sites (and other web sites). The sidebar/extension puts an attractive, easy-to-understand GUI over a process that “real” security people have been using for as long as there have been http-based sessions.

I’ve been using Firesheep quite a bit in non-echo-chamber demos to help illustrate some of the core issues facing enterprises and individual users. A big question that comes out of each demo is “what can I do to safeguard my access to Facebook?”. I provide quick guidance on-the-spot to interested individuals and wanted to share what I communicate to them here both to help a broader audience and get feedback on other steps users can take to safeguard their connections.

General Guidance

The first action I tell users to take is an anti-action: if at all possible, never use free/unsecured Wi-Fi connections. While there are ways of grabbing sessions and other data on wired or secure Wi-Fi networks, the means to do so are beyond the capabilities of most Firesheep users. The danger is still present and you should always consider how much you trust the network you are on when accessing anything on the Internet, but the risk is greatly diminished.

If users are unable or unwilling to follow that first action (and even if they do avoid insecure networks), I then instruct them to ensure that all services they access always use https (SSL/TLS), which encrypts the communication and prevents tools like Firesheep from working. Much like the first action, it still doesn’t stop determined & skilled attackers.
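Part of why https matters here is that Firesheep feeds on session cookies sent over plain http; cookies marked Secure are ones a browser will refuse to send in the clear. Here is a quick standard-library sketch that checks whether a site’s cookies carry that flag; the host is just a placeholder:

```python
# Fetch a page over https and report whether each cookie it sets is
# marked Secure (never sent over plain http) - a rough indicator of how
# exposed a session would be to Firesheep-style sniffing.
# The host below is a placeholder; substitute the site you care about.
import http.client

HOST = "www.example.com"

conn = http.client.HTTPSConnection(HOST, timeout=10)
conn.request("GET", "/")
resp = conn.getresponse()

cookies = [value for name, value in resp.getheaders()
           if name.lower() == "set-cookie"]
if not cookies:
    print("no cookies set on this response")
for cookie in cookies:
    cookie_name = cookie.split("=", 1)[0]
    # Crude attribute check; good enough for a quick look.
    flag = "Secure" if "secure" in cookie.lower() else "NOT marked Secure"
    print(f"{cookie_name}: {flag}")
```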

I then caution users on smartphones and tablets to also make sure any applications they use communicate over SSL. This is far too easy to overlook and can leak data just as easily as a web browser. Tablet & smartphone users can also switch to using only 3G connections to make it that much more difficult for others to eavesdrop.

Finally, I suggest using a virtual private networking (VPN) service such as PureVPN to secure all their connections – not just browser sessions – on public networks (secured or otherwise). SSL/TLS connections are potentially susceptible to what is called a man-in-the-middle (MITM) attack [SANS Reading Room (PDF)], and one way to mitigate that threat is to use a VPN to secure all network communication with a more robust/holistic solution. PureVPN (and other similarly good services) are not free, but $5.00–10.00 USD per month is not much to pay for personal data security on the go.

The Elephant In The Room

For some reason, even with that general guidance, the whole concept of someone hijacking their Facebook account really scares folks, and many end up asking specific questions on ensuring their Facebook access is protected. This usually involves walking them through how to check whether SSL is enabled for Facebook’s service and also how to monitor access to their Facebook account.

Unsurprisingly, Facebook does not make setting SSL as a default an easy task. Unintuitively, it’s not under any “privacy” settings. Instead, you need to navigate down to account settings and poke around to get to the right areas. The screen captures below show the navigation sequence. You’ll notice that this account does not have security enabled, since it’s the one I use for demos (I do not have a personal Facebook account).

Getting to Facebook Account Settings

Location of Facebook Account Security Settings

Facebook SSL Settings

You’ll also notice that you can have Facebook send you an e-mail when your account is accessed from an unknown device, and you can also review recent activity on your account. This gives you the ability to be in control as much or as little as you desire.

Homeward Bound

I usually close with guidance on securing your home Wi-Fi network. Many users still have an aging 802.11b/g router that barely does wired-equivalent-privacy (WEP) security. Even newer Wi-Fi equipment with Wi-Fi Protected Access (WPA/WPA2) may not be enough, as you or someone else in your house most likely hand out the access password to any guest you allow in the residence. Any malware on their systems now has the potential to infect other systems on your network, and you have also given the keys to your local security to someone you may not fully trust. Many of the newest Wi-Fi access points – such as Apple AirPort Extremes and Netgear N[3|6]00s – provide the ability to set up both a protected internal network and as open a guest network as you want. I still suggest securing the guest network as well, since you may be liable for any actions taken from your network (protected or otherwise).

Highway Safety

Being safe[r] on the Internet is much like being safe[r] when driving a car. You need to make sure the fluids are at the right levels, that the tire pressure is sufficient for the driving conditions and that you wear your seatbelt before leaving the driveway. If you don’t regularly perform those tasks, you run the risk of significant problems out on the road. You need to get in the habit of doing similar checks when navigating in potentially dangerous network territory as well. It doesn’t help that Facebook cares not a whit about your privacy or security and will seemingly randomly change your settings if it benefits them (or if they are just being their usual incompetent selves). Want proof? You have to be diligent in the maintenance of all Internet security settings to ensure your consistent, personal online safety.

If anyone is experiencing the “Are you sure…” problem when trying to upload media to your WordPress blog, it’s usually due to a wonky plugin. In my case it was “Google Analytics 3 codes for WordPress”. Hopefully this helps others who are searching for a resolution to the problem.