
Tag Archives: Computing

I usually take a peek at the Internet Traffic Report (ITR) a couple times a day as part of my routine and was a bit troubled by all of the red today:

I wanted to do some crunching on the data, and I deliberately do not have Word or Excel on my new MacBook Pro (for reasons I can detail if asked). A SELECT / CUT / PASTE into TextWrangler did not really thrill me and I knew there had to be a way to get non-marked-up, columnar data into a format I could mangle and share easily.

Enter Google Spreadsheets' importHTML function.

If you don't have the formula bar enabled in Google Spreadsheets, just go to View->Formula Bar to enable it. Once there, enter the following in the formula bar to get the data from the ITR into a set of columns that will auto-update every time you reference the sheet.

=importHTML("http://www.internettrafficreport.com/namerica.htm","table",0)

(as you can see, it’s not case sensitive, either)

Yes, I know Excel can do this. I could have written a quick script to whack the pasted data in TextWrangler. You can do something similar in R with htmlTreeParse + xpathApply, and Perl has HTML::TableContentParser (and other handy modules), but this was a fast, easy way to get to the point where I could do the basic analytics I wanted to perform (and, sometimes, all you need is quick & easy).
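For the curious, here's a rough Python equivalent of the same quick-and-dirty scrape (a sketch, not part of my original workflow): pandas' read_html() pulls every HTML table on the page into a data frame. It assumes pandas and lxml are installed, and the table index may need adjusting if ITR ever changes its page layout.

[code lang="python"]
# scrape the ITR North America table into a data frame and dump it to CSV
# (assumes pandas + lxml are installed; table index 0 is an assumption
#  about the current page layout)
import pandas as pd

URL = "http://www.internettrafficreport.com/namerica.htm"

tables = pd.read_html(URL)   # one DataFrame per <table> on the page
itr = tables[0]

itr.to_csv("itr_namerica.csv", index=False)   # easy to mangle & share
print(itr.head())
[/code]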

Official Google Help page on importHTML.

Feedburner has borked the old RSS feed for the site and has completely disassociated me from it (meaning it’s no longer in my Google Feedburner admin options and they won’t let me re-claim it).

So… the new feed link is http://rud.is/b/feed/atom/.

Apologies for any inconvenience.

Another #spiffy tip from @MetricsHulk:

Evan Applegate put together a great & simple infographic for Businessweek that illustrates the number and size of 2011 data breaches pretty well.



The summary data (below the timeline bubble chart) shows there was a 37.4% increase in reported incidents and over 260 million records exposed/stolen for the year. It will be interesting to see how this compares with the DBIR.

As you can probably tell from a previous post, I’m not a fan of paywalls—especially poorly implemented ones. Clicking on a link in an RSS feed post and having it land on a page, only to have it smothered in an HTML layer or — in the following case — promptly redirected to “Pay up, buddy!” sites is frustrating at best. I’ll gladly debate the efficacy of paywalls vs other means of generating revenue in another post (or even in the comments, if civil). I primarily wanted to write this post to both show the silliness of the implementation of Foster’s Daily Democrat’s paywall and point out a serious deficiency in Chrome.

First up, lame paywall. You get three free direct story link visits prior to being asked to register and eventually pay for content. NOTE: You could just be going to the same story three times (say, after a browser crash) and each counts as a visit. After those visits, you have to register and give up what little anonymity you have on the Internet to be able to view up to an additional ten free direct story links before being forced to pay up. If you are a print subscriber, you do get access for "free", but there's that tracking thing again. Foster's uses a service called Clickshare to handle the subscription and tracking. Just how many places do you need to have your data stored/tracked just to read a (most likely) mediocre piece of news?

The paywall setup is accomplished by a simple “Meta Refresh” tag. In its most basic form, it is an instruction that tells the browser to load a particular URL after a certain amount of time. In the case of Foster’s paywall, the HTML tag/directive looks like this:

[code lang="html"]<meta http-equiv="refresh" content="0;url=https://home.fosters.com/clickshare/authenticateUserSubscription.do?CSAuthReq=1&CSTargetURL=…"/>[/code]

It's telling your browser to double-check with their Clickshare code immediately after teasing you with the article content. And it's easy to circumvent. Mostly. The problem is that I'm a Chrome user 99% of the time, and Google has not seen fit to allow control over the meta refresh directive. To jump the paywall, you'll need to fire up Firefox and enter "about:config" in the location bar (clicking through the warning message).

Once there, filter for "refresh", find the "accessibility.blockautorefresh" setting, and set it to "true".

Now, every time you visit a web site that attempts to auto-refresh full browser pages, you’ll see a warning (with the option to allow the action):

Why Chrome has not implemented a way to control this is beyond me. Since Safari also has no ability to control this setting, it may have something to do with the WebKit core that both browsers are based on.

This doesn't stop the frustration with the RSS-click-to-read experience and it doesn't help iOS/Android users, but it does provide a means to help keep a bit of anonymity (if you also use other extensions and controls) and should force these sites to kick their paywall game up a notch.
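If you're stuck in Chrome (or just want to script around the problem entirely), a rough out-of-browser sketch is to fetch the story URL yourself and strip the meta refresh tag before it can fire. This assumes the requests and beautifulsoup4 packages; the URL below is only a placeholder.

[code lang="python"]
# fetch a page and remove any <meta http-equiv="refresh"> directives so the
# Clickshare redirect never fires (requests + bs4 assumed; placeholder URL)
import requests
from bs4 import BeautifulSoup

def fetch_without_refresh(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    is_refresh = lambda v: v and v.lower() == "refresh"
    for tag in soup.find_all("meta", attrs={"http-equiv": is_refresh}):
        tag.decompose()   # drop the redirect directive
    return str(soup)

if __name__ == "__main__":
    page = fetch_without_refresh("http://www.fosters.com/example-story")  # placeholder
    with open("story.html", "w") as f:
        f.write(page)
[/code]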

The FBI made a tool to help you determine if you were a victim of the DNSChanger malware.

If you’re like many casual Internet users, you have no idea how to get the information to plug into the input box.

Unfortunately, the security model of most modern browsers makes it impossible to easily retrieve this information. However, it is possible to grab the DNS entries if the user is willing to trust the requesting source.

To help make it easier to determine if you're infected, I wrote DNSChanger Detector. It's a small Java applet that requires the user to grant it privileged access to the DNS settings; it uses a call to sun.net.dns.ResolverConfiguration to get the nameservers. Once it has them, some jQuery glue is in place to let JavaScript access the results.

I understand why the FBI didn’t attempt to go this route, but it will hopefully be useful to folks who don’t wish to walk their friends and family through the process.
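For folks comfortable at a command line, the check itself is simple enough to sketch outside of an applet: grab the local nameservers and compare them against the rogue ranges from the FBI advisory. The sketch below is Unix/OS X-only (it parses /etc/resolv.conf), and the ranges should be verified against the official list before trusting the result.

[code lang="python"]
# compare local nameservers against the DNSChanger rogue ranges
# (Unix/OS X only; verify the ranges against the FBI advisory)
import ipaddress
import re

ROGUE_NETS = [ipaddress.ip_network(n) for n in (
    "85.255.112.0/20", "67.210.0.0/20", "93.188.160.0/21",
    "77.67.83.0/24", "213.109.64.0/20", "64.28.176.0/20",
)]

def nameservers(path="/etc/resolv.conf"):
    with open(path) as f:
        return re.findall(r"^nameserver\s+(\S+)", f.read(), re.MULTILINE)

for ns in nameservers():
    ip = ipaddress.ip_address(ns)
    flagged = any(ip in net for net in ROGUE_NETS)
    print(f"{ns}: {'POSSIBLY INFECTED' if flagged else 'looks ok'}")
[/code]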

What can the @lulzsec senate.gov dump tell us about how the admins maintained their system/site?

[code light="true"]SunOS a-ess-wwwi 5.10 Generic_139555-08 sun4u sparc SUNW,SPARC-Enterprise[/code]

means they haven’t kept up with OS patches. [-1 patch management]

[code light="true"]celerra:/wwwdata 985G 609G 376G 62% /net/celerra/wwwdata[/code]

tells us they use EMC NAS kit for web content.

The ‘last‘ dump shows they were good about using normal logins and (probably) ‘sudo‘, and used ‘root‘ only on the console. [+1 privileged id usage]

They didn't show the running Apache version (just the config file…I guess I could have tried to profile that to figure out a range of version numbers). There's a decent likelihood that it was not at the latest patch version (based on their not patching the OS) or major vendor version.

[code light="true"]Alias /CFIDE /WEBAPPS/Apache/htdocs/CFIDE
Alias /coldfusion /WEBAPPS/Apache/htdocs/coldfusion
LoadModule jrun_module /WEBAPPS/coldfusionmx8/runtime/lib/wsconfig/1/mod_jrun22.so
JRunConfig Bootstrap 127.0.0.1:51800[/code]

Those and other entries say they are running ColdFusion, an Adobe web application server/framework, on the same system. The "mx8" suggests an out-of-date, insecure version. [-1 layered product lifecycle management]

[code light="true"] SSLEngine on
SSLCertificateFile /home/Apache/bin/senate.gov.crt
SSLCertificateKeyFile /home/Apache/bin/senate.gov.key
SSLCACertificateFile /home/Apache/bin/sslintermediate.crt[/code]

(along with the file system listing) suggests the @lulzsec folks have everything they need to host fake SSL web sites impersonating senate.gov.

Sadly,

[code light="true"]LoadModule security_module modules/mod_security.so

<IfModule mod_security.c>
# Turn the filtering engine On or Off
SecFilterEngine On

# Make sure that URL encoding is valid
SecFilterCheckURLEncoding On

# Unicode encoding check
SecFilterCheckUnicodeEncoding Off

# Only allow bytes from this range
SecFilterForceByteRange 0 255

# Only log suspicious requests
SecAuditEngine RelevantOnly

# The name of the audit log file
SecAuditLog logs/audit_log

# Debug level set to a minimum
SecFilterDebugLog logs/modsec_debug_log    
SecFilterDebugLevel 0

# Should mod_security inspect POST payloads
SecFilterScanPOST On

# By default log and deny suspicious requests
# with HTTP status 500
SecFilterDefaultAction "deny,log,status:500"

</IfModule>[/code]

shows they had a built-in WAF available, but either did not configure it well enough or did not view the logs from it. [-10 checkbox compliance vs security]
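To be fair about what "look at the logs" means in practice, here's a sketch of the kind of quick triage that would have surfaced the probes: pull the mod_security hits out of the Apache error log and tally them by client IP. The file path and log line format vary by Apache/mod_security version, so both are assumptions here.

[code lang="python"]
# tally mod_security hits by client IP from the Apache error log
# (path and log line format are assumptions; adjust for your build)
import re
from collections import Counter

LOG = "logs/error_log"

hits = Counter()
with open(LOG) as f:
    for line in f:
        if "mod_security" not in line:
            continue
        m = re.search(r"\[client (\d{1,3}(?:\.\d{1,3}){3})\]", line)
        if m:
            hits[m.group(1)] += 1

for ip, count in hits.most_common(20):
    print(f"{count:6d}  {ip}")
[/code]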

[code light="true"]-rw-r--r-- 1 cfmx 102 590654 Feb 3 2006 66_00064d.jpg[/code]

(many entries with ‘102’ instead of a group name) shows they did not do identity & access management configurations well. [-1 IDM]
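Spotting that kind of orphaned ownership on your own systems is easy to script; a minimal Unix-only sketch (point it at a docroot) looks something like this:

[code lang="python"]
# flag files whose group id has no matching group name (the '102' situation)
import grp
import os
import sys

root = sys.argv[1] if len(sys.argv) > 1 else "."

for dirpath, _dirs, files in os.walk(root):
    for name in files:
        path = os.path.join(dirpath, name)
        try:
            gid = os.stat(path).st_gid
            grp.getgrgid(gid)      # raises KeyError if the gid is unmapped
        except KeyError:
            print(f"unmapped group {gid}: {path}")
        except OSError:
            pass                   # broken symlink, permissions, etc.
[/code]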

The apache config file discloses pseudo-trusted IP addresses & hosts (and we can assume @lulzsec has the passwords as well).

As I tweeted in the wee hours of the morning, this was a failure on many levels since they did not:

  • Develop & use secure configuration of their servers & layered products + web applications
  • Patch their operating systems
  • Patch their layered products

They did have a WAF, but it wasn't configured well, and they did not look at the WAF logs or (again, most likely) any system logs. This may well have been a case where the "white noise port scans" everyone ignores were the intelligence probes that helped bring this box down.

Is this a terrible breach of government security? No. It’s a public web server with public data. They may have gotten to a firewalled zone, but it’s pretty much a given that no sensitive systems were on that same segment. This is just an embarrassment with a bit of extra badness in that the miscreants have SSL certs. It does show just how important it is to make sure server admins maintain systems well (note, I did not say security admins) and that application teams keep current, too. It also shows that we should be looking at all that log content we collect.

This wasn’t the first @lulzsec hack and it will not be the last. They are providing a good reminder to organizations to take their external network presence seriously.

If you are concerned about the Dropbox design flaw exposed by the dbClone attack, then have we got a link for you!

The intrepid DB devs have tossed up a forum release that purports to fix all the thorny security issues. You can no longer just copy a config file to a separate machine to clone a filesystem, and the file itself is now also encrypted. (Forum builds do not automagically download like standard Dropbox updates.)

Given the fact that Dropbox did not prompt me for any credentials when I started the new version, I’m still a bit skeptical that it has truly fixed the problem. Given my schedule today, I doubt I’ll have time to poke at it before someone else does, but the thoroughness of this fix does need to be independently validated. The local Dropbox client has to be getting the encryption key/passphrase from *somewhere*, and if it’s not prompting me on start, then it’s stored online or locally and that’s a recipe for another hack.

There is nothing overt in the application bundle (looking on OS X) or quickly discernible from a dump of a few of the app's .pyc files. Granted, a bit of obfuscation will stop the current hack and dissuade some other sophomoric attempts, but I can almost guarantee that the passphrase (or the algorithm one needs to discern the passphrase) will be found by folks.

The new build replaces your local configuration file with a new, encrypted one (now named config.dbx). I didn't see signs of SQLiteEncrypt, SEE, SQLCipher, or SQLiteCrypt, but I haven't had time to dig more thoroughly. It's completely possible the Dropbox devs just built an encryption layer over the Dropbox calls themselves (which is not a difficult task).
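For anyone who wants to repeat the sniff test, a plain (unencrypted) SQLite database starts with the 16-byte magic string "SQLite format 3\0", so checking the new file's header is a quick job. The path below is the OS X/Linux location as an assumption; adjust for your platform.

[code lang="python"]
# check whether config.dbx still looks like a plain SQLite database
# (path is an assumption; adjust for your platform)
import os

CONFIG = os.path.expanduser("~/.dropbox/config.dbx")

with open(CONFIG, "rb") as f:
    header = f.read(16)

if header == b"SQLite format 3\x00":
    print("plain SQLite header -- not encrypted")
else:
    print("no SQLite header -- likely encrypted or wrapped")
    print("first bytes:", header.hex())
[/code]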

Please note that forum builds are not necessarily stable and that this is a pretty major architecture change. I had no issues on OS X, but I suspect that any micro-errors in your SQLite config.db may cause some heartache if you do attempt the upgrade. Best to wait for a full production release if you do not have your Dropbox backed up somewhere.

Spent some time today updating the missing bits of the OS X version of the Dropbox cloner I uploaded last night. You can just grab the executable or grab the whole project from the github repository.

The app can now back up/restore the local config, clone Dropbox configs to a URL/file, and impersonate a captured Dropbox config.

Use it all at your own risk. As stated in the original post, all comments, bugs, additions, fixes etc. are welcome either here or at github.