It’s Davos time again. Each year the World Economic Forum (WEF) gathers the global elite together to discuss how they’re going to shape our collective future. WEF also releases their annual Global Risks Report at the same time. I read it every year and have, in the past, borrowed some risk communication visualization idioms from it since — in theory — senior executives are supposed to be able to understand them (the report is designed for consumption by senior leaders across the globe).
I also read it to gauge what the general public groks about risks associated with cybersecurity (since that’s a core part of my day job). One way to do that is to see how many “cyber”-related things appear in the report and get rated in their top 30 risks section. They derive these risk ratings from surveys across samples of many populations (this is their visualization for the survey composition):
This post is less about “cyber” and more about bringing three “details” out of the report.
Detail #1: Methodology Matters
Page 65 of the PDF report indicates they modified the underlying foundations of both the Impact and Likelihood scales, and notes that:
It is worth noting that, as a consequence of the scale modification, the impact results cannot be compared with those of previous years.
One more time: you can’t compare this year-over-year. They’ve done this re-jiggering before, so unless you’re just looking at the relative positions of named big risk buckets, you really can’t glean anything by putting earlier reports next to later ones or comparing previous risks to current risks in any detailed way. Remember: always read the “methodology” section of any report. If it doesn’t have one, consider using it for kindling.
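As a toy illustration (mine, not the report's) of why a scale change breaks comparability: the same underlying opinions, mapped onto a hypothetical 1–5 scale one year and a 1–7 scale the next, produce different averages even though nothing about the risk changed:

```r
# Same underlying opinions, expressed as fractions of the scale range
opinion <- c(0.5, 0.7, 0.9)

# Scored on a hypothetical old 1-5 scale vs. a new 1-7 scale
old_scores <- 1 + opinion * (5 - 1)  # 3.0 3.8 4.6
new_scores <- 1 + opinion * (7 - 1)  # 4.0 5.2 6.4

mean(old_scores)  # 3.8
mean(new_scores)  # 5.2 -- a naive year-over-year read says "risk went up",
                  # but only the scale changed
```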
Detail #2: CYA (Check Your Axes)
I generally end up giving the production team behind the Global Risks Report minor-to-major kudos each year as they usually produce a slick PDF (I’m sure the printed version looks spiffy as well) with occasional usefully-innovative visualization choices. One of their signature charts is the global risks likelihood/impact scatterplot that I’m sure most folks familiar with risk-oriented visualizations have seen:
There’s a visible-area “mini-map” in both the online and print versions:
I believe there’s a very real tendency to overlook the minimap, read the risk chart without looking at the axis values, and interpret the risks in the lower left-hand corner as “low” and those in the upper right-hand corner as “high” (that’s how risk scatterplots are designed to be read).
The zoomed-in view will also likely cause readers to think that some risks are very widely separated from others. They’re not. They fall into roughly three buckets of pseudo-squishy, medium-yellow-ish risks (which can mean the risk estimators hedged their guesses). I realize they zoomed in to make the risk labels legible and possibly enable comparing the diamond sizes; while not as pristine, we can show them with their names on the full risk board (tap/click it to focus only on the chart):
Remember, these are the top risks and they are squishy. They could be squishier and it’s worth noting that they also look much closer together in the macro view. Perhaps the world isn’t so risky after all.
Detail #3 : Group Responses
The methodology section of the report provides equations that document how they aggregated the risk estimates across their sample groups. The team behind the report also made interactive versions. You can & should go there to see how each group assessed the pool of risks. Thankfully, they didn’t use a closed and expensive system like Tableau to make those interactive graphics, which means we can peek behind the scenes and grab the data ourselves (github link for the cleaned & combined data at the end of the post). Here are the zoomed-in/zoomed-out breakdowns between the groups:
WEF brought “opposite” groups to the table to make risk estimates, so we can also use this data to compare their overall risk scores (with a lazy impact x likelihood valuation) between opposite groups:
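That lazy valuation is just the product of the two mean ratings. Here’s a minimal base-R sketch with made-up numbers (the real per-group values are in the repo linked at the end of the post):

```r
# Illustrative mean impact/likelihood ratings for one risk from two
# "opposite" groups -- the values here are made up
risks <- data.frame(
  group      = c("Business", "Civil Society"),
  impact     = c(3.9, 4.3),
  likelihood = c(3.4, 3.8)
)

# Lazy overall valuation: impact x likelihood
risks$score <- risks$impact * risks$likelihood

risks[order(-risks$score), c("group", "score")]
```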
Environmental risks bubbled to the top, possibly as a result of fielding the survey at a time when drought and breaking ice shelves are top-of-mind. Keen observers will note that those are zoomed-in views. Here are the same slopegraphs on the full range of risk values:
That paints an even better picture of the squishiness of said risks.
FIN
One could deep-dive into many of those charts to pull out some interesting things, especially how the various opposite groups rated various risks (though there were only a few semi-stark contrasts). Actually, you can deep-dive into them, as the full data from the risk rating interactive visualizations and the R code that generated the above charts are all at https://github.com/hrbrmstr/devils_in_the_davos_details.
Even if there is little efficacy to this annual event, you now have some ++gd data to practice with in ggplot2, dplyr, and tidyr.
North Carolina’s Neighborhood
When I saw the bombastic headline “North Carolina is no longer classified as a democracy” pop up in my RSS feeds today (article link: http://www.newsobserver.com/opinion/op-ed/article122593759.html) I knew it’d help feed the polarization bear that’s been getting fat on ‘Murica for the past decade. Sure enough, others picked it up and ran with it. I can’t wait to see how the opposite extreme reacts (everybody’s gotta feed the bear).
As of this post, neither site linked to the actual data, so here’s an early Christmas present: The Electoral Integrity Project Data. I’m very happy this is public data since this is the new reality for “news” intake:
Data literacy is even more important than it has been.
Back to the title of the post: where exactly does North Carolina fall on the newly assessed electoral integrity spectrum in the U.S.? Right here (click to zoom in):
Focusing solely on North Carolina is pretty convenient (I know there’s quite a bit of political turmoil going on down there at the moment, but that’s no excuse for cherry picking) since — frankly — there isn’t much to be proud of on that entire chart. Here’s where the ‘States fit on the global rankings (we’re in the gray box):
You can page through the table to see where our ‘States fall (we’re between Guyana & Latvia…srsly). We don’t always have the nicest neighbors:
This post isn’t a commentary on North Carolina, it’s a cautionary note to be very wary of scary headlines that talk about data but don’t really show it. It’s worth pointing out that I’m taking the PEI data as it stands. I haven’t validated the efficacy of their process or checked on how “activist-y” the researchers are outside the report. It’s somewhat sad that this is a necessary next step since there’s going to be quite a bit of lying with data and even more lying about-and/or-without data over the next 4+ years on both sides (more than in the past eight combined, probably).
The PEI folks provide methodology information and data. Read/study it. They provide raw and imputed confidence intervals (note how large some of those are in the two graphs) – do the same for your research. If their practices are sound, the ‘States chart is pretty damning. I would hope that all the U.S. states would be well above 75 on the rating scale and the fact that we aren’t is a suggestion that we all have work to do right “here” at home, beginning with ceasing to feed the polarization bear.
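If you want to “do the same for your research,” here’s one quick way to put a confidence interval around a mean rating. This is a percentile bootstrap sketch of mine, not the PEI imputation method, and the ratings are hypothetical:

```r
set.seed(1)

# Hypothetical expert ratings for one state on a 0-100 scale
ratings <- c(62, 58, 71, 55, 66, 60, 64, 57, 69, 63)

# Percentile bootstrap: resample with replacement, take means,
# grab the middle 95% of the resampled means
boot_means <- replicate(10000, mean(sample(ratings, replace = TRUE)))
ci <- quantile(boot_means, c(0.025, 0.975))

mean(ratings)  # 62.5 -- the point estimate
ci             # the interval to report alongside it
```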
If you do download the data, you can use R to recreate (and improve upon) the charts.
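A minimal ggplot2 sketch of that kind of dot-and-interval chart follows. The state values below are made up for illustration, and the real PEI column names will differ (check their codebook before adapting this):

```r
library(ggplot2)

# Made-up illustrative values -- substitute the real PEI state-level
# index and confidence-interval columns after reading their codebook
pei <- data.frame(
  state = c("Vermont", "Wisconsin", "North Carolina"),
  index = c(75, 61, 58),
  lo    = c(70, 55, 52),
  hi    = c(80, 67, 64)
)

gg <- ggplot(pei, aes(x = index, y = reorder(state, index))) +
  geom_errorbarh(aes(xmin = lo, xmax = hi), height = 0.2, color = "gray60") +
  geom_point(size = 2) +
  xlim(0, 100) +  # show the full rating scale so gaps aren't exaggerated
  labs(x = "PEI index (imputed)", y = NULL) +
  theme_minimal()

gg
```

Plotting against the full 0–100 x-axis (rather than zooming in) is the same “check your axes” discipline argued for in Detail #2.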