
Category Archives: AppSec

The 2018 IEEE Security & Privacy Conference is in May, but they've already posted the full proceedings, and it's better to grab them now than to wait for them to become part of a paid journal offering.

There are a lot of papers. Not all of them match my interests, but (fortunately?) many do, and I've filtered the set down to the ones I found most interesting. It's encouraging to see academic cybersecurity researchers branching out across a whole host of areas.

I can't promise a "the morning paper"-esque daily treatment of these on the blog, but I'll likely exposit a few of them over the coming weeks. I've emoji'd a few that stood out. The order is simply the order I read them in; it carries no other meaning.

I caught a mention of this project by Pete Warden on Four Short Links today. If his name sounds familiar, he’s the creator of the DSTK, an O’Reilly author, and now works at Google. A decidedly clever and decent chap.

The project goal is noble: crowdsource and make a repository of open speech data for researchers to make a better world. Said sourcing is done by asking folks to record themselves saying “Yes”, “No” and other short words.

As I meandered through the blog post I looked in horror at the URL for the application that did the recording: https://open-speech-commands.appspot.com/.

Why would the goal of the project combined with that URL give pause? Read on!

You’ve Got Scams!

Answering the phone and saying something as simple as 'Yes' has fueled a major scam this year. By recording your voice, attackers can replay it against phone prompts, and because it's your voice it's harder for you to refute the "evidence" and it can fool recognition systems that listen for your actual voice.

As the chart above shows, the Better Business Bureau has logged over 5,000 of these scams this year (searching for ‘phishing’ and ‘yes’). You can play with the data (a bit — the package needs work) in R with scamtracker.
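A rough sketch of that kind of poking around, assuming you have exported BBB Scam Tracker complaints to a local CSV (the file name and column names below are hypothetical, not scamtracker's actual schema):

library(dplyr)

# assumes a local CSV export of BBB Scam Tracker complaints;
# file name and column names are hypothetical placeholders
scams <- read.csv("bbb-scamtracker-export.csv", stringsAsFactors = FALSE)

scams %>%
  filter(grepl("phishing", scam_type, ignore.case = TRUE),
         grepl("\\byes\\b", description, ignore.case = TRUE)) %>%
  count(state, sort = TRUE)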

Now, these are “analog” attacks (i.e. a human spends time socially engineering a human). Bookmark this as you peruse section 2.

Integrity Challenges in 2017

I "trust" Pete's intentions, but I sure don't trust open-speech-commands.appspot.com (and you shouldn't either). Why? Go visit https://totally-harmless-app.appspot.com. It's a Google App Engine app I made for this post. Anyone can make an appspot app, and the https is meaningless as far as integrity & authenticity go, since I'm running on Google's infrastructure but I'm not Google.

You can’t really trust most SSL/TLS sessions as far as site integrity goes anyway. Let’s Encrypt put the final nail in the coffin with their Certs Gone Wild! initiative. With super-recent browser updates you can almost trust your eyes again when it comes to URLs, but you should be very wary of entering your info — especially uploading voice, prints or eye/face images — into any input box on any site if you aren’t 100% sure it’s a legit site that you trust.
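If you want to see this for yourself in R, here's a quick sketch using the openssl package (the exact subject and issuer strings will vary as Google rotates certificates; the totally-harmless-app host is the demo app from above):

library(openssl)

# grab the certificate chain the appspot app presents
chain <- download_ssl_cert("totally-harmless-app.appspot.com", 443)

# the leaf cert is Google's wildcard (*.appspot.com): it proves you have an
# encrypted pipe to Google's infrastructure, not that you know who actually
# runs the app on the other end
as.list(chain[[1]])$subject
as.list(chain[[1]])$issuer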

Tracking the Trackers

If you don’t know that you’re being tracked 100% of the time on the internet then you really need to read up on the modern internet.

In many cases your IP address can directly identify you. In most cases your device & browser profile — which most commercial sites log — can directly identify you. So, just visiting a web site means that it’s highly likely that web site can know that you are both not a dog and are in fact you.
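As a toy illustration of why that profile is so identifying, hash a handful of the attributes every request leaks and you get a reasonably stable identifier (the attribute values below are illustrative placeholders, not a real tracking schema):

library(digest)

# values below are illustrative placeholders, not a real tracking schema
ua       <- "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13)"  # user agent
lang     <- "en-US"                                          # accept-language
screen   <- "1440x900x24"                                    # screen geometry
timezone <- "America/New_York"                               # timezone

# a stable-ish fingerprint, without ever asking your name
digest(paste(ua, lang, screen, timezone, sep = "|"))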

Still Waiting for the “So, What?”

Many states and municipalities have engaged in awareness campaigns to warn citizens about the “Say ‘Yes'” scam. Asking someone to record themselves saying ‘Yes’ into a random web site pretty much negates that advice.

Folks like me regularly warn about trust on the internet. I could have cloned the functionality of the original site to open-speech-commmands.appspot.com. Did you even catch the 3rd ‘m’ there? Even without that, it’s an appspot.com domain. Anyone can set one up.
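If you want to quantify just how little separates the two, base R's edit-distance function makes the point:

# a single repeated character is one edit away from the real domain;
# trivial to register, easy to miss by eye
adist("open-speech-commands.appspot.com",
      "open-speech-commmands.appspot.com")
#> [1] 1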

Even if the site doesn’t ask for your name or other info and just asks for your ‘Yes’, it can know who you are. In fact, when you’re enabling the microphone to do the recording, it could even take a picture of you if it wanted to (and you’d likely not know or not object since it’s for SCIENCE!).

So, in the worst case scenario a malicious entity could be asking you for your ‘Yes’, tying it right to you and then executing the post-scam attacks that were being performed in the analog version.

But go ahead and assume this is a legit site with good intentions. Do you really know what's being logged when you commit your voice info? If the data were mishandled, it would be just as easy to tie the voice files back to you (assuming a certain level of data logging).

The "so what" is not really a warning to users but a message to researchers: you need to threat model your experiments and research initiatives, especially when innocent end users are potentially being put at risk. Data is the new gold, diamonds and other precious bits that attackers are after. You may think you're not putting folks at risk and aren't even a hacker target, but how you design data gathering can reinforce good or bad behaviour on the part of users. It can solidify sound security messages or tear them down. And you and your data may be more of a target than you really know.

Reach out to interdisciplinary colleagues to help threat model your data collection, storage and dissemination methods to ensure you aren’t putting yourself or others at risk.

FIN

Pete did the right thing, and I'm sure the site will be on a "proper" domain soon. When it is, I'll be one of the first in line to help make a much-needed open data set for research purposes.

It's usually a good thing when my R and infosec worlds collide. Unfortunately, this time the collision is a script that R folks running on OS X can use to see if they are using a version of XQuartz that has a nasty vulnerability in the framework it uses to auto-update. If the test comes back with a warning, try to refrain from using XQuartz on insecure networks until the developers fix the issue.
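A minimal sketch of that kind of check might look like the following; it assumes XQuartz lives in /Applications/Utilities and mirrors the Sparkle plist path and the 1.16.1 version threshold used by the fuller scanner in the update below:

library(XML)

# assumed install location for XQuartz and its bundled Sparkle framework plist
plist <- "/Applications/Utilities/XQuartz.app/Contents/Frameworks/Sparkle.framework/Versions/A/Resources/Info.plist"

if (file.exists(plist)) {
  v <- readKeyValueDB(plist)$CFBundleShortVersionString
  if (compareVersion(v, "1.16.1") < 0) {
    message("XQuartz bundles Sparkle ", v, "; avoid updating over untrusted networks")
  } else {
    message("XQuartz bundles Sparkle ", v, "; looks OK")
  }
} else {
  message("XQuartz (or its Sparkle framework) not found")
}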

UPDATE

Thanks to a gist prodding by @bearloga, here’s a script to scan all your applications for the vulnerability:

library(purrr)
library(dplyr)
library(XML)
 
read_plist <- safely(readKeyValueDB)
safe_compare <- safely(compareVersion)
 
apps <- list.dirs(c("/Applications", "/Applications/Utilities"), recursive=FALSE)
 
# only dig a few directory levels deep for .app bundles; if something
# vulnerable lives deeper than this, you're on your own
 
for (i in 1:4) {
  moar_dirs <- grep("app$", apps, value=TRUE, invert=TRUE)
  if (length(moar_dirs) > 0) { apps <- c(apps, list.dirs(moar_dirs, recursive=FALSE)) }
}
apps <- unique(grep("app$", apps, value=TRUE))
 
pb <- txtProgressBar(0, length(apps), style=3)
 
suppressWarnings(map_df(seq_along(apps), function(i) {
 
  x <- apps[i]
 
  setTxtProgressBar(pb, i)
 
  is_vuln <- FALSE
  version <- ""
 
  app_name <- sub("\\.app$", "", basename(x))
  app_loc <- sub("^/", "", dirname(x))
 
  to_look <- c(sprintf("%s/Contents/Frameworks/Autoupdate.app/Contents/Info.plist", x),
               sprintf("%s/Contents/Frameworks/Sparkle.framework/Versions/A/Resources/Info.plist", x),
               sprintf("%s/Contents/Frameworks/Sparkle.framework/Versions/A/Resources/Autoupdate.app/Contents/Info.plist", x))
 
  # check for the Sparkle framework directory plus each candidate Info.plist location
  is_there <- map_lgl(c(sprintf("%s/Contents/Frameworks/Sparkle.framework/", x), to_look), file.exists)

  has_sparkle <- any(is_there)

  # keep only the plist paths that actually exist (drop the framework-dir test result)
  to_look <- to_look[which(is_there[-1])]
 
  # pull the Sparkle bundle version(s) from whichever plists were readable
  discard(map_chr(to_look, function(x) {
    read_plist(x)$result$CFBundleShortVersionString %||% NA_character_
  }), is.na) -> vs
 
  # Sparkle releases older than 1.16.1 are treated as vulnerable
  if (any(map_dbl(vs, function(v) { safe_compare(v, "1.16.1")$result %||% -1 }) < 0)) {
    is_vuln <- TRUE
    version <- vs[1]
  }
 
  data_frame(app_loc, app_name, has_sparkle, is_vuln, version)
 
})) -> app_scan_results
 
close(pb)
 
select(arrange(filter(app_scan_results, has_sparkle), app_loc, app_name), -has_sparkle)
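Once the scan finishes, pulling out just the flagged apps is one more line:

# apps that bundle a Sparkle framework older than 1.16.1
filter(app_scan_results, is_vuln)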

This is an inaugural post for @MetricsHulk, on the condition that there are few – if any – “ALL CAPS” bits. Q3&4 tend to be “report season”, and @MetricsHulk usually has some critiques, praises, opines and suggestions (some smashes, too) to offer as we are inundated with a blitz of infographics.

The always #spiffy @WhiteHatSec released their 2011 Web Site Security stats report [direct link (PDF)] last week (they teased it with a few tweets beforehand).

With over 7,000 sites and hundreds of diverse organizations represented in the report, it is a great resource for folks to see how they stack up (more on that in a bit). Security folks should also take some encouragement from the report since:

  • Real vulnerabilities are down (significantly)
  • WAFs can help
  • Vulnerabilities are getting fixed faster (when found)

@WhiteHatSec does a fine job summarizing key & extended findings (hint: read the report), and they are awesomely up-front and honest with regard to the findings (see pages 4 & 5 for their analysis on why the ‘good stats’ might be so good).

The report is chock-full of data. Real. Data. The only way it could have been better data-wise is if they provided a Google Docs bundle of raw numbers. (NOTE: I put together a bundle myself; I didn't get all the data in there, but it has a decent amount from the report.)

I do think there is some room for improvement. Take, for example, the – sigh – donut chart on page 9. I might be inclined to refrain from comment if this were one of those hipster infographics that seem to be everywhere these days. A pie chart isn't much better, but at least we're able to process the relative sizes a bit better when the actual angles are present. Here's a before/after makeover for your comparison/opine (click for larger version):

We get an immediate sense of scale from the bars and it removes the need for the "Frosted Lucky Charms" color-wheel effect. The @WhiteHatSec folk use bars (very appropriately) almost everywhere else, so I'm not sure what drove the design decision to deviate in this part of the report.
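For the curious, the bar-chart version boils down to something like this in ggplot2 (the vulnerability classes and percentages below are placeholders for illustration, not the report's actual figures):

library(ggplot2)

# placeholder values for illustration only; substitute the report's
# actual vulnerability-class percentages
vuln_classes <- data.frame(
  class = c("Cross-Site Scripting", "Information Leakage",
            "Content Spoofing", "SQL Injection", "Other"),
  pct   = c(50, 16, 10, 8, 16)
)

ggplot(vuln_classes, aes(x = reorder(class, pct), y = pct)) +
  geom_col() +
  coord_flip() +
  labs(x = NULL, y = "% of sites affected")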

The next bit that confused me was Figure 18 (page 15). I'm having difficulty both figuring out where the "79" value comes from (I can't get to it by averaging the values presented) and grok'ing the magnitude of the differences from the bubbles. So, here's another before/after makeover for your comparison/opine (click for larger version):

Finally, I think Figures 23 & 24 could do with a bit of a slopegraph makeover, as the spirit of the visualization is to show year-over-year differences. The first two slopegraphs use the "Tufte binning technique", so you'll need to refer to the companion data tables if you want exact numbers for comparison (the trend is more important, IMO).

Average Days Open

Average Days to Close

Remediation Rates by Year

(You can also download easier-to-read PDFs of the slopegraphs.)
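For reference, a slopegraph like the ones above can be roughed out in ggplot2 along these lines (the industries and rates below are placeholders; the report's companion tables have the real numbers):

library(ggplot2)

# placeholder industries and rates for illustration only;
# use the report's companion-table values
remediation <- data.frame(
  industry = rep(c("Banking", "Healthcare", "Retail"), each = 2),
  year     = rep(c("2010", "2011"), times = 3),
  rate     = c(60, 71, 48, 55, 52, 66)
)

ggplot(remediation, aes(x = year, y = rate, group = industry)) +
  geom_line() +
  geom_point() +
  geom_text(data = subset(remediation, year == "2011"),
            aes(label = industry), hjust = -0.1, size = 3) +
  labs(x = NULL, y = "Remediation rate (%)")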

Absolutely no one should take the makeover suggestions as report slander. As stated at the beginning of the post, @WhiteHatSec is open about the efficacy of their data and analysis, plus they provide actual data. The presentation of stats & trending by industry and vulnerability type should help any organization with an appsec program figure out whether they are doing better or worse than others in their sector and see if they are smashing bugs with similar success. It also gives the general infosec community a view that we would otherwise not have. I would encourage other organizations to follow @WhiteHatSec's example, even if it means more donut charts (mmm…donuts).

What information did you glean from the WhiteHat report, or what makeovers would you encourage for the next one?