
Speakers: Fruhwirth, Proschinger, Lendl, Savola

“On the use of name server log data as input for security measurements”

 

CERT.at ERT

  • coordinates security efforts & incident response for IT security problems at the national level in Austria
  • consists of IT company security teams and local CERTs

 

Why name server data?

  • CERT.at is mandated to inform and facilitate communication
  • DNS data is a rich data source, easily obtainable
  • the usefulness of DNS logs increases if you can get them from the largest possible number of users

 

DNS 101

  • gTLDs & ccTLDs
  • ccTLDs have local policies
  • Passive collection enables determination of IP address changes, NS changes & domains per IP, but it is impractical to have sensors everywhere

 

DNS Reporting View

The DNS reporting view is a matrix of stakeholders vs. the security chain/measurements:

Rows (security chain): Vuln, Exp, Threat, Risk, Countermeasure, Incident
Columns (stakeholders): CERT | Large Co | SME | User

Third dimension (field of view): your position in the DNS hierarchy changes the picture
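/me: to make the matrix concrete, here is a small, purely illustrative sketch of my own (not from the talk) of how findings could be filed into stage × stakeholder cells; all names and the example entry are hypothetical:

```python
# Hypothetical sketch of the DNS reporting matrix: security-chain stage x stakeholder.
# Cell values are free-form notes on what DNS log data contributed for that pair.
STAGES = ["vuln", "exploit", "threat", "risk", "countermeasure", "incident"]
STAKEHOLDERS = ["CERT", "large_co", "SME", "user"]

# Start with an empty matrix; each cell holds a list of observations.
matrix = {stage: {sh: [] for sh in STAKEHOLDERS} for stage in STAGES}

# Example entry (illustrative only): sinkhole data tells a CERT about incidents.
matrix["incident"]["CERT"].append("sinkhole hits for DGA domains -> notify network owners")

for stage in STAGES:
    for sh in STAKEHOLDERS:
        if matrix[stage][sh]:
            print(f"{stage:>14} / {sh}: {matrix[stage][sh]}")
```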

 

4 example use cases CERT.at worked on:

Aurora

  • CnC server was based on dynamic DNS

/me: their matrix analysis makes it easy to see where DNS logs provided insight into signs of vulnerability, the severity of the threat per stakeholder, and whether it was something an org could have identified on its own (data source)

 

Conficker

Pseudorandom domains: variant B: 250 regs/day; variant C: 450 .at domains/day

Aconet CERT runs nameservers and a sinkhole

CERT.at uses the DNS data to generate warnings

/me: the table view shows that you can both detect with DNS and deploy countermeasures with DNS (and which org can do what)
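/me: as an illustration of the detection side, a minimal sketch of my own (not CERT.at's or ACOnet's code) that flags clients querying known Conficker DGA domains in a name-server query log; the log format and file names are assumptions:

```python
# Minimal sketch: flag clients that query known Conficker DGA domains.
# Assumes a simple "timestamp client_ip queried_name" whitespace-separated log format.
from collections import defaultdict

def load_dga_domains(path):
    """Known pseudorandom domains for today, one per line (e.g. pre-generated .at names)."""
    with open(path) as f:
        return {line.strip().lower().rstrip(".") for line in f if line.strip()}

def suspicious_clients(log_path, dga_domains):
    hits = defaultdict(set)
    with open(log_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3:
                continue
            ts, client_ip, qname = parts[0], parts[1], parts[2].lower().rstrip(".")
            if qname in dga_domains:
                hits[client_ip].add(qname)
    return hits  # client IP -> set of DGA domains it asked for

# Usage (paths are placeholders):
# for ip, names in suspicious_clients("querylog.txt", load_dga_domains("conficker_at.txt")).items():
#     print(f"warn owner of {ip}: {len(names)} DGA lookups")
```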

 

Kaminsky DNS Bug

CERT.at used logs to score resolvers

score = port changes/queries & ports/min (higher score == better source-port randomization)

they were able to see how quickly servers were patched (very interesting view)
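/me: a rough sketch of how such a score could be computed from observed resolver queries; this is my reading of "port changes/queries & ports/min", the exact formula wasn't shown:

```python
# Rough sketch of a source-port randomization score per resolver (my interpretation).
# Input: list of (epoch_seconds, resolver_ip, source_port) observations.
from collections import defaultdict

def score_resolvers(observations):
    by_resolver = defaultdict(list)
    for ts, ip, port in sorted(observations):
        by_resolver[ip].append((ts, port))

    scores = {}
    for ip, events in by_resolver.items():
        queries = len(events)
        if queries < 2:
            continue
        port_changes = sum(1 for (_, p1), (_, p2) in zip(events, events[1:]) if p1 != p2)
        minutes = max((events[-1][0] - events[0][0]) / 60.0, 1.0)
        distinct_ports_per_min = len({p for _, p in events}) / minutes
        # Higher score == better randomization (patched resolver).
        scores[ip] = (port_changes / queries) * distinct_ports_per_min
    return scores

# Example: a resolver stuck on one source port scores 0; a patched one scores higher.
print(score_resolvers([(0, "192.0.2.1", 53000), (10, "192.0.2.1", 41022), (20, "192.0.2.1", 58811)]))
```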

/me: the chart is a bit hard to read, but it shows how hard it is to detect subtle issues like this one without a larger view of DNS

 

Stuxnet

CnC attempts visible in DNS logs

/me: the chart shows that if you knew the domains, you could have detected in your own network

 

There are blind spots: lack of visibility in the top-down view; DNS can’t really show severity

Information exchange on signs of exploited vulnerabilities; focus on information exchange about incidents

[side-talk: how do we incent folks to share data… “ask nicely!”…]

[side-talk: what % did not know who CERT.at was? 80% of crit infra knew; highly dependent on sector; CERT.at deliberately hired across sectors to help promote /me: good q]

[side-talk: good discussion on CERT practices; how they detect and then how they engage constituents]

Speaker: Juhani Eronen

“The Autoreporter Project” – Background

Goal: make Finland mostly harmless to the rest of the internet

(that’s actually in the law – Finland’s Protection of Privacy in Electronic Communications act)

 

/me: I’ll need to put some verbiage around this tonight to give you a good picture of what Juhani was conveying… really good description of their charter, goals, challenges, successes

 

What’s a “Finnish” system:

  • any autonomous system on Finnish soil, operated or owned by Finnish orgs
  • .fi and .ax domains
  • +358 telephone prefix
  • other networks owned by Finnish orgs
  • Finnish banks/brands/CC

 

Telcos are mandated to report infosec incidents as well as major faults affecting users, networks or the provider’s ability to operate

 

FICORA

Regulation for Finnish security providers: basic security of facilities & processes, business continuity, spam blocking

  • Set up mandatory reporting for ISPs
  • Establish CERT-FI

 

Issues

Problem: Finland cleans up its own house, but they still end up getting attacked!

Problem: Most incidents are out of scope in mandated reporting

Problem: Establishing CERT-FI – no ownership or visibility of the network; 3 people who in theory are expected to be there 7×24!

Huge increase in incidents [reported] from 2002-2006. It’s a pretty graph, but it really shows that the CERT-FI workforce increased and that processes were honed

 

How many incidents affect finnish networks?

How do we compare to neighbors? (Would love to take a data-driven jab at the Swedes.)

 

So: workforce, regulatory and other constraints & the need for actionable data == build an automated system.

 

2006: created an automated system to capture incident reports (mostly malware) from various monitoring projects around the globe.

Daily reports, e-mailed in CSV format with pre-defined, agreed-upon subjects; digitally signed; reported incidents in the message body.
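/me: for illustration, a tiny sketch of consuming such a daily CSV report; the column names are hypothetical since the exact layout wasn’t given:

```python
# Illustrative sketch only: the talk didn't give the exact column layout, so the
# field name "incident_type" is hypothetical.
import csv
from collections import Counter

def summarize_report(csv_path):
    """Count incidents per type in one daily Autoreporter-style CSV report."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row.get("incident_type", "unknown")] += 1
    return counts

# Usage (path is a placeholder):
# print(summarize_report("autoreporter-2011-06-14.csv"))
```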

 

How CERT-FI handles abuse:

  • detection
  • reports (e-mail/phone/fax) – Funny story: one woman printed out all the spam she received and sent it to CERT-FI, until asked not to anymore.
  • Scraping feeds, normalizing/correlating data
  • Finding owners
    • map bad events to netblocks (see the sketch after this list)
    • maintain contact list (& contact prefs!)
    • manage customer expectations
  • Report out stats, trends, chronic cases
  • Assist in incident response
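
/me: a small sketch of the "map bad events to netblocks" step above (assumed data and format, not CERT-FI’s actual tooling): pick the most specific netblock containing the event’s IP and return that owner’s abuse contact and preferences:

```python
# Sketch: map a bad event's IP to the netblock it falls in, then to the owner's
# abuse contact, preferring the most specific (longest-prefix) matching netblock.
import ipaddress

# Hypothetical contact list: netblock -> (owner, abuse address, preferred channel)
CONTACTS = {
    "192.0.2.0/24": ("ExampleNet",  "abuse@example.net", "email-daily-csv"),
    "192.0.2.0/28": ("ExampleHost", "abuse@example.org", "email-immediate"),
}

def find_contact(ip_str):
    ip = ipaddress.ip_address(ip_str)
    best = None
    for block, contact in CONTACTS.items():
        net = ipaddress.ip_network(block)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, contact)
    return best[1] if best else None

print(find_contact("192.0.2.5"))    # most specific match: ExampleHost
print(find_contact("192.0.2.200"))  # falls back to the /24: ExampleNet
```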

 

There are dozens of projects, data sources, blacklists etc., but they vary in format (even timestamps), purpose, and channel (IRC, http, ftp) – see the normalization sketch after this list

  • data is frequently missed due to downtime / system availability
  • info integrity is difficult to gauge
  • bugs in feed data & reporting
  • wildly differing frequency of updates (realtime to monthly)
  • taxonomies are diverse
  • detail level not discrete
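
/me: as a concrete illustration of the normalization pain, a sketch of my own that coerces a few of the timestamp styles feeds might use into UTC ISO 8601 before correlation; the format list is an example, not exhaustive:

```python
# Sketch of the normalization problem: feeds disagree even on timestamp formats,
# so everything gets coerced to UTC ISO 8601 before correlation.
from datetime import datetime, timezone

# Formats vary per feed; these are examples, not an exhaustive list.
KNOWN_FORMATS = ["%Y-%m-%dT%H:%M:%S%z", "%d/%m/%Y %H:%M", "%b %d %H:%M:%S %Y"]

def normalize_timestamp(raw):
    raw = raw.strip()
    if raw.isdigit():                       # epoch seconds
        return datetime.fromtimestamp(int(raw), tz=timezone.utc).isoformat()
    for fmt in KNOWN_FORMATS:
        try:
            dt = datetime.strptime(raw, fmt)
            if dt.tzinfo is None:           # assume UTC when the feed gives no zone
                dt = dt.replace(tzinfo=timezone.utc)
            return dt.astimezone(timezone.utc).isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {raw!r}")

print(normalize_timestamp("1308000000"))
print(normalize_timestamp("14/06/2011 09:30"))
```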

 

Ensuring Focus of CERT-FI

  • What are we not seeing?
  • What should I prepare for?
  • Who is the target of the damage & who is just collateral?
  • Can the data/sources be trusted?

 

[side-talk: CERT-FI manages intake and the privacy laws make it difficult to delegate collection to the ISPs]

[side-talk: 5.5 million population in Finland, very high # of folks with internet access, everyone has a cell phone. Internet considered a basic human right]

 

CERT-FI shows ISP incident graphs in comparison to other ISPs. /me: the embarrassment factor is a good motivator

interesting: Conficker is still a problem

CERT-FI autoreporter can actually report out incidents per broadband customer (trending)
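
/me: the per-customer idea is just a rate normalization; a tiny sketch with made-up numbers:

```python
# Tiny sketch of the normalization idea (numbers are made up): raw incident counts
# are unfair to big ISPs, so trend incidents per 1,000 broadband customers instead.
incidents = {"ISP-A": 420, "ISP-B": 95}
subscribers = {"ISP-A": 800_000, "ISP-B": 60_000}

for isp in incidents:
    rate = incidents[isp] / subscribers[isp] * 1000
    print(f"{isp}: {rate:.2f} incidents per 1,000 broadband customers")
```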

 

Abusehelper: http://code.google.com/p/abusehelper/wiki/README

Abuse Helper is a toolkit for CERT and Abuse teams. It is a modular, (hopefully) scalable and robust framework to help you in your abuse handling.

With Abuse Helper you can:

  • Retrieve Internet Abuse Handling related information via several sources which are
    • near-real-time (such as IRC)
    • periodic (such as Email reports), or
    • request/response (such as HTTP).
  • You can then aggregate that information based on different keys, such as AS numbers or country codes (see the sketch after the feature list below)
  • Send out reports in different formats, via different transports and using different timings

Abuse Helper features include:

  • Fully modular (you can utilize different readers, parsers, transports, splitters, combiners in a pipe-like manner)
  • Scalable: you can distribute the work to different machines and different geolocations
  • Observable: you can use your favourite XMPP client to observe the bots at work
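
/me: to illustrate the aggregate-by-key idea mentioned above, a conceptual sketch; this is NOT the abusehelper API, just made-up events grouped the same way:

```python
# Conceptual sketch of aggregating normalized abuse events by a key such as AS number
# or country code, so each network owner gets one consolidated report.
from collections import defaultdict

events = [  # hypothetical normalized events from several feeds
    {"ip": "192.0.2.1",    "asn": "AS64500", "cc": "FI", "type": "bot"},
    {"ip": "192.0.2.9",    "asn": "AS64500", "cc": "FI", "type": "bot"},
    {"ip": "198.51.100.7", "asn": "AS64501", "cc": "SE", "type": "phishing"},
]

def aggregate(events, key):
    buckets = defaultdict(list)
    for ev in events:
        buckets[ev[key]].append(ev)
    return buckets

for asn, evs in aggregate(events, "asn").items():
    print(f"{asn}: {len(evs)} events -> one report to that network's abuse contact")
```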

 

Great overall presentation on the rationale for reporting incidents outside your org