
Category Archives: Information Security

A while back I was engaged in a conversation on Twitter with @diami03 & @chriseng regarding (what I felt was) the need for someone to provide the perspective from within a medium-to-large enterprise, especially when there are so many folks in infosec who are fond of saying “why didn’t they just…?” in response to events like the Sony attack or the compromise of the senate.gov web servers.

Between consulting and full-time employment I’ve been in over 20 enterprises ranging from manufacturing to health care to global finance. Some of these shops built their own software, others used/customized COTS. Some have outsourced (to various degrees) IT operations and others were determined to keep all activity in-house. Each of them has had challenges in what many would say should be “easy” activities, such as patching, vulnerability management or ensuring teams were using good coding practices.

It’s pretty easy for a solitary penetration tester or industry pundit to lay down some snark and mock large companies for how they manage their environments. It’s quite another experience to try to manage risk across tens (or hundreds) of thousands of employees/contractors and an equal (or larger) number of workstations, combined with thousands of servers and applications plus hundreds (or thousands) of suppliers/partners.

While I would not attempt to defend all enterprise inadequacies, I will cherry-pick some of the top snarks & off-hand statements for this series and try to explain the difficulties an enterprise might have along with some suggestions on how to overcome them.

If you have a “why didn’t they just…?” you’d like answered drop me a note on Twitter or in the comments.

Eventbrite site: http://www.eventbrite.com/s/5cnV

It’s at Fort Foster. We thought about it a bit late in the season so no pavilion, but I’ll be there wicked-early and will have a gazebo-like covering set up over some tables. I highly suggest bringing folding chairs. There is a nominal entrance fee (cash). Link to Fort Foster is on the Eventbrite page.

I’ll have signs up showing where we’re set up.

Right now (08-16) there are over 30 ppl (including family members) signed up!

I’m making a boatload of pulled bbq (still not decided on chicken or pork) for everyone. It’s potluck, so bring as much or as little as you like. I’ll work on a massive bucket of drinks as well (’tis a busy week here, unfortunately) and will no doubt have spare plates & napkins.

We set up a Google spreadsheet if you wish to post what you plan to share (if anything…no worries).

I updated the Eventbrite posting with my Google Voice # if you need to reach me by phone for any reason.

Fort Foster has a beach area, scuba diving area, can handle kayaks and has a nice (but smallish) set of hiking trails. There are also grills if you plan on bringing raw items to cook.

If you have any questions, do not hesitate to ask!

Oh, yeah, it will come as a complete surprise, but I’ll have the “shield” on so you can recognize me :-)

Everyone who can read this blog should remember the Deepwater Horizon spill that occurred in the Spring of 2010; huge loss of life (any loss is huge from my perspective) and still unknown impact to the environment. This event was a wake-up call to BP execs and other companies in that industry sector.

You should all also remember the “Sonage” of this Spring, where Sony lost millions of records across 12+ web site breaches; it should have been a wake-up call to almost every sector.

BP committed to developing and implementing a new Safety & Operational Risk (S&OR) program (which is now active). Sony is planning on hiring a CISO and has started hiring security folk, but they really need to develop a comprehensive Security & Operational Information Risk Program (and I suspect your org does as well).

What can we in the info risk world glean (steal) from BP’s plan and new S&OR Organization? Well, to adapt their charter, a new S&OIR program charter might be:

• Strengthen & clarify requirements for secure, compliant and reliable computing & networking operations
• Have an appropriately staffed department of specialists that are integrated with the business
• Provide deep technical expertise to the company’s operating business
• Intervene where needed to stop operations and bring about corrective actions
• Provide checks & balances independent of business & IT
• Strengthen mandatory security & compliance standards & processes (including operational risk management)
• Provide an independent view of operational risk
• Assess and enhance the competency of the workforce in matters related to information security

BP claims success from their current program (the link above has examples), and imagine – just imagine – if your org required – yes, required – that new systems & applications conform to core, reasonable standards.

In their annual report, BP fully acknowledged that risks inherent in its operations include a number of hazards that, “although many may have a low probability of occurrence, they can have extremely serious consequences if they do occur, such as the Gulf of Mexico incident.” Imagine – just imagine – if you could get your org to think the same way about information risk (you have plenty of examples to work from).

    BP did not remove responsibility for managing operational risk and operational delivery from the business lines, but they integrated risk analysts into those teams and gave them the authority to intervene when necessary. It took a disaster to forge this new plan. You don’t need to wait for a disaster in your org to begin socializing this type of change.

    Imagine…just, imagine…

    What can the @lulzsec senate.gov dump tell us about how the admins maintained their system/site?

[code light="true"]SunOS a-ess-wwwi 5.10 Generic_139555-08 sun4u sparc SUNW,SPARC-Enterprise[/code]

    means they haven’t kept up with OS patches. [-1 patch management]
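The kernel patch level is sitting right there in the uname string. A quick sketch (using the line from the dump) of pulling it out, the sort of thing you'd feed into a comparison against Sun's current recommended patch cluster on a live Solaris 10 box (alongside `showrev -p` output):

```shell
# Extract the kernel patch ID from a `uname -a` line. The sample line is
# from the published dump; everything else here is illustrative.
uname_line='SunOS a-ess-wwwi 5.10 Generic_139555-08 sun4u sparc SUNW,SPARC-Enterprise'
kpatch=$(printf '%s\n' "$uname_line" | awk '{print $4}' | sed 's/^Generic_//')
echo "$kpatch"   # 139555-08
```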

[code light="true"]celerra:/wwwdata 985G 609G 376G 62% /net/celerra/wwwdata[/code]

    tells us they use EMC NAS kit for web content.

The ‘last’ dump shows they were good about using normal logins and (probably) ‘sudo’, and used ‘root’ only on the console. [+1 privileged id usage]

They didn’t show the running apache version (just the config file…I guess I could have tried to profile that to figure out a range of version numbers). There’s a decent likelihood that it was not at the latest patch version (based on them not patching the OS) or major vendor version.

[code light="true"]Alias /CFIDE /WEBAPPS/Apache/htdocs/CFIDE
    Alias /coldfusion /WEBAPPS/Apache/htdocs/coldfusion
    LoadModule jrun_module /WEBAPPS/coldfusionmx8/runtime/lib/wsconfig/1/mod_jrun22.so
    JRunConfig Bootstrap 127.0.0.1:51800[/code]

Those and other entries say they are running ColdFusion, an Adobe web application server/framework, on the same system. The “mx8” suggests an out-of-date, insecure version. [-1 layered product lifecycle management]

[code light="true"]SSLEngine on
    SSLCertificateFile /home/Apache/bin/senate.gov.crt
    SSLCertificateKeyFile /home/Apache/bin/senate.gov.key
    SSLCACertificateFile /home/Apache/bin/sslintermediate.crt[/code]

    (along with the file system listing) suggests the @lulzsec folks have everything they need to host fake SSL web sites impersonating senate.gov.
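There's no guesswork needed with both files in hand: the standard check that a private key matches a certificate is comparing RSA moduli. A sketch using a throwaway key pair generated on the spot (with the real leak, you'd point the same two commands at the exfiltrated crt/key files):

```shell
# Generate a throwaway key + self-signed cert purely for illustration
# (the CN and file names here are made up), then confirm the key matches
# the cert by comparing hashed RSA moduli.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
        -subj "/CN=example.test" -days 1 2>/dev/null
crt_mod=$(openssl x509 -noout -modulus -in demo.crt | openssl md5)
key_mod=$(openssl rsa  -noout -modulus -in demo.key | openssl md5)
[ "$crt_mod" = "$key_mod" ] && echo "key matches certificate"
```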

    Sadly,

[code light="true"]LoadModule security_module modules/mod_security.so

    <IfModule mod_security.c>
    # Turn the filtering engine On or Off
    SecFilterEngine On

    # Make sure that URL encoding is valid
    SecFilterCheckURLEncoding On
    
    # Unicode encoding check
    SecFilterCheckUnicodeEncoding Off
    
    # Only allow bytes from this range
    SecFilterForceByteRange 0 255
    
    # Only log suspicious requests
    SecAuditEngine RelevantOnly
    
    # The name of the audit log file
    SecAuditLog logs/audit_log
    
    # Debug level set to a minimum
    SecFilterDebugLog logs/modsec_debug_log    
    SecFilterDebugLevel 0
    
    # Should mod_security inspect POST payloads
    SecFilterScanPOST On
    
    # By default log and deny suspicious requests
    # with HTTP status 500
SecFilterDefaultAction "deny,log,status:500"
    

    </IfModule>[/code]

    shows they had a built-in WAF available, but either did not configure it well enough or did not view the logs from it. [-10 checkbox compliance vs security]
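That stanza is essentially the stock mod_security 1.x sample config, which is the heart of the problem: installed is not the same as tuned. A trivial sketch of the kind of config sanity check that would flag it (directive names are from the dump above; the snippet file name is made up for illustration):

```shell
# Hypothetical config fragment mirroring the directives in the dump.
cat > waf_snippet.conf <<'EOF'
SecFilterEngine On
SecFilterCheckUnicodeEncoding Off
SecAuditEngine RelevantOnly
SecAuditLog logs/audit_log
EOF

# The engine being On is necessary but nowhere near sufficient:
grep -q '^SecFilterEngine On' waf_snippet.conf && echo "engine on"

# Default sample settings left untouched are a smell worth flagging:
grep -q '^SecFilterCheckUnicodeEncoding Off' waf_snippet.conf && \
    echo "WARN: unicode encoding checks disabled"
```

The same idea extends to the other half of the failure: a cron job that summarizes logs/audit_log daily is cheap, and nobody here was reading it at all.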

[code light="true"]-rw-r--r-- 1 cfmx 102 590654 Feb 3 2006 66_00064d.jpg[/code]

    (many entries with ‘102’ instead of a group name) shows they did not do identity & access management configurations well. [-1 IDM]
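Spotting this in a listing is mechanical: if the group field is all digits, ls couldn't map the GID to a name. A quick sketch against the line from the dump (on a live box, `find / -nogroup` does the same hunt filesystem-wide):

```shell
# Flag listing entries whose group field is purely numeric, i.e. a GID
# with no matching entry in /etc/group (sample line is from the dump).
line='-rw-r--r-- 1 cfmx 102 590654 Feb 3 2006 66_00064d.jpg'
group=$(printf '%s\n' "$line" | awk '{print $4}')
case $group in
  *[!0-9]*) echo "named group: $group" ;;
  *)        echo "numeric group: $group (no name mapping)" ;;
esac
```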

    The apache config file discloses pseudo-trusted IP addresses & hosts (and we can assume @lulzsec has the passwords as well).

    As I tweeted in the wee hours of the morning, this was a failure on many levels since they did not:

    • Develop & use secure configuration of their servers & layered products + web applications
    • Patch their operating systems
    • Patch their layered products

They did have a WAF, but it wasn’t configured well, and they did not look at the WAF logs or – most likely – any system logs. Those “white noise port scans” everyone ignores were probably the intelligence probes that helped bring this box down.

    Is this a terrible breach of government security? No. It’s a public web server with public data. They may have gotten to a firewalled zone, but it’s pretty much a given that no sensitive systems were on that same segment. This is just an embarrassment with a bit of extra badness in that the miscreants have SSL certs. It does show just how important it is to make sure server admins maintain systems well (note, I did not say security admins) and that application teams keep current, too. It also shows that we should be looking at all that log content we collect.

    This wasn’t the first @lulzsec hack and it will not be the last. They are providing a good reminder to organizations to take their external network presence seriously.

    Laura Brandimarte
    Alessandro Acquisti
Joachim Vosgerau

    Twitter transcript

    #weis2011 How does information related to past events and retrieved today get discounted? Why does neg valence receive more weight?

#weis2011 how do we improve trustworthiness?

#weis2011 "designers of modern tech do not understand human fallibility and design systems w/o taking them into account" < true

#weis2011 the reason why bad sticks better than good is that the way it gets discounted may be different.

#weis2011 experiments were survey based & randomized. all were students < not sure that's random enough or broad enough selection

#weis2011 (me) I hope they make the slides avail. ton of good info I just can't capture (and I don't have an e-copy)

#weis2011 "good" information only matters if it's _recent_. "bad" information is not discounted at all. it "sticks" < huge e-implications

    Susan Landau
    Tyler Moore

    Presentation [PDF]

    Tyler presented really well and it’s a great data set and problem to investigate. He & Susan shed a ton of light on an area most folks never think about. Well done.

    Twitter transcript

    #weis2011 this looks to be a "must read" resource for anyone embarking on a federated identity management (FIM) system.

    #weis2011 Tussle #1: Who gets to collect transactional data? FIMs generate a TON of data. Diff FIMs benefit svc prvdrs, others id prvdrs

#weis2011 Facebook is a HUGE FIM, both id provider & service provider < and u thought it was just for congresscritters to show private parts

#weis2011 FIM platforms that share social graph data attract more service providers < so much for privacy

#weis2011 Tussle #2: who sets rules for authentication in FIMs? Time to market is primary concern. Users want "easy" < security loses

#weis2011 Tussle #3: What happens when things go wrong? svc unavail == no login; unauth users can be incorrectly authenticated; lots of finger pointing

    Catherine Tucker

    Presentation [PDF]

    Catherine’s talk was really good. She handled questions well and is a very dynamic speaker. I’m looking forward to the paper.

    Twitter transcript

    #weis2011 Premise of the study was to see what impact privacy controls enablement/usage have on advertising. It's an empirical study #data!

    #weis2011 click through rates DOUBLED for personalized ads after the fb privacy controls policy change

    #weis2011 it's been a "slightly augment the slides with humor" for the remaining slides. Good data. View the slides & paper (when avail)

    Nevena Vratonjic
    Julien Freudiger
    Vincent Bindschaedler
Jean-Pierre Hubaux

    Presentation [PDF]

    Twitter transcript

    #weis2011 Overview of basic ssl/tls/https concepts. Asking: how prevalent is https, what are problems with https?

    #weis2011 Out of their large sample, only 1/3 (34.7%) have support for https, login is worse! only 22.6% < #data!

    #weis2011 (me) just like Microsoft for patches/vulns, everyone uses Bank of America for https & identity examples. #sigh

    #weis2011 More Certificates 101, but a good venn diagram explaining what authentication success looks like w/%ages. rly good visualization.

    #weis2011 domain mismatch accounts for over 80% of certificate authentication failures. why? improper reuse. it has a simple solution (SNI)

    #weis2011 the team did a very thorough analysis that puts data behind what most folks have probably assumed. #dataisspiffy

    #weis2011 We've created a real mess for users with certs. EV certs help, but are expensive and not pervasive (***6%***!)

    #weis2011 economics don't back good cert issuance practices; 0 liability on issuers; too many subcontractors; we trained users to click "OK"

    #weis2011 great slide on CA success rates (hint: godaddy is #1) #sadtrombone

    #weis2011 sample: 1 million web sites; less than 6% do SSL/TLS right. cheap certs == cheap "security"; policies need to change incentives

    #weis2011 URL for the data is in the last slide. first question is challenging the approach for the analysis and went on for a while