Before digging into this post, I need to set some context.
Friday, May 13, 2022 was my last day at my now-former employer of nearly seven years. I’m not mentioning the company name because this post is not about them.
This post is about burnout and the importance of continuous monitoring and maintenance of yourself.
Occasionally, I mention that I’m one of those Peloton cult members. Each instructor has a pull-list of inspirational quotes that they interject into sessions, and I’ve worked pretty hard across many decades curating mental firewall rules for such things, as words can have real power and should not be consumed lightly.
Like any firewall, some unintended packets get through, and one of Jess King’s mantras kept coming back to me recently as I was post-processing my decision to quit.
My biggest fear is waking up tomorrow and repeating today.
Many events ensued, both over the years and very recently, prior to giving notice, which was three weeks before my last day. Anyone who has built a fire by hand (by which I mean using a technique such as a bow drill rather than striking a match) knows that it can take a while for the pile of kindling to finally go from docile carbon to roaring flame. For those more inclined to books than bivouacs, it’s also a bit like bankruptcy:
“How did you go bankrupt?” Bill asked.
“Two ways,” Mike said. “Gradually and then suddenly.”
That’s how I’d describe finally making the decision.
Personal Observability Failures
Observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs. I’m using that term as many folks reading this will have come from similar technical backgrounds and it has been my (heh) observation that technically inclined folks seem to have a harder time with emotional language than they do with technical language. I certainly do.
The day after officially giving notice, I went — as usual — to the DatCave to begin the day’s work after getting #4 and $SPOUSE ready for school(s). After about an hour, I looked down and noticed I wasn’t using my wrist braces.
I should probably describe why that was a Big Deal™.
For the past ~2.5 years I’ve had to wear wrist braces when doing any keyboard typing at all. I’ve had a specific RSI condition since high school that has, on occasion, required surgery to correct. Until this flare-up started, I had not needed any braces, or had any RSI pain, for ages.
But, ~2.5 years ago I started to have severe pain when typing to the point where, even with braces, there were days I really couldn’t type at all. Even with braces, this bout of RSI also impacted finger coordination to the extent that I had to reconfigure text editors to not do what they usually would for certain key combinations, and craft scripts to fix some of the more common errors said lack of coordination caused. I could tell surgery could have helped this flare-up, but there’s no way I was going for elective surgery during a pandemic.
Seeing full-speed, error-free, painless typing sans braces was a pretty emotional event. It was shortly thereafter that I realized I had pretty much stopped reading my logs (what normal folks might call “checking in with myself”) ~3 years ago.
Fans of observability know that a failing complex system may continue to regularly send critical event logs, but if nothing is reading and taking action on those logs, then the system will just continue to degrade or fail completely over time, often in unpredictable ways.
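That unread-logs failure mode can be sketched in a few lines. This is a toy illustration I'm adding here, not real monitoring code: the class names and numbers are invented for the analogy. A component degrades a little each "day" and dutifully emits critical events, but since nothing ever consumes the queue, the failure is only discovered when the system stops entirely.

```python
from collections import deque

class WristService:
    """Toy component: degrades a bit each 'day' and emits critical logs."""
    def __init__(self):
        self.health = 100
        self.log = deque()  # events pile up here; nothing ever reads them

    def run_day(self):
        self.health -= 5  # a little worse every day, never examined
        if self.health <= 50:
            self.log.append(f"CRITICAL: health at {self.health}")
        return self.health > 0

# Repeat today, every day, without ever reading the logs.
svc = WristService()
days = 0
while svc.run_day():
    days += 1

print(days, len(svc.log))  # → 19 11
```

Nineteen "good" days, eleven unread critical events, and then total failure. The point of the exercise: the signal was there the whole time; what was missing was a consumer acting on it.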
After a bit more reflection, I realized that, at some point, I became Bill Murray, waking up each day and just repeating the last day, at least when it came to work. I think I can safely say Jess’ (and Phil’s) biggest fear is now at least in my own top five.
Burnout, general stress, the Trump years, the rise of Christian nationalism, the pandemic, and the work situation all contributed to this personal, Academy Award-winning performance of Groundhog Day and I’m hoping a small peek into what I saw and what I’m doing now will help at least one other person out there.
Personal Failure Mode Effects And Mitigations
There’s a process in manufacturing called “failure mode and effects analysis” that can be applied to any complex system, including one’s self. It’s the structured act of reviewing as many components, assemblies, and subsystems as possible to identify potential failure modes and their causes and effects.
Normal folks would likely just call this “self-regulation, recovery, and stress management”.
My human complex system was literally injuring itself (my particular RSI is caused by ganglia sac growth; the one in my left wrist is now gone and the one in my right wrist is shrinking, both without medical intervention, ever since quitting), but rather than examine the causes, I just attributed it to “getting old” and kept on doing the same thing every day.
I’ll have some more time for self-reflection during this week of funemployment, but I’ve been assessing the failure modes, reading new recovery and management resources, and wanted to share a bit of what I learned.
Some new resources that I have found helpful in understanding and designing corrective systems for my personal failure modes come from Cornell; they are linked in the footnotes and excerpted, with annotations, below.
- Don’t be afraid of change: For someone who is always looking to the future and who groks “risk management”, I’m likely one of the most fundamentally risk-averse folks you’ve encountered.
I let myself get stuck in a pretty unhealthy situation mostly due to fear of change and being surface-level comfortable. If I may show my red cult colors once again, “allow yourself the opportunity to get uncomfortable” should apply equally to work as it does to watts.
Please do not let risk aversion and surface-level comfort keep you in a bad situation. My next adventure is bolder than any previous one, and is, in truth, a bit daunting. It is far from comfortable, and that’s O.K.
- Take care of your physical needs: Getting a good night’s rest, eating well, and exercising are all essential to being able to feel satisfaction in life. They’re also three things that have been in scarce supply for many folks during the pandemic.
I like to measure things, but I finally found the Apple Watch lacking in quantified self utility and dropped some coin on a Whoop band, and it was one of the better investments I’ve made. I started to double-down on working out when I learned I was going to be a pampa, as I really want to be around to see him grow up and keep up with him. I’ve read a ton about exercise, diet, etc. over the years, but the Whoop (and Peloton + Supernatural coaches) really made me understand the importance of recovery.
Please make daily time to check in with your mental and physical stress levels and build recovery paths into your daily routines. A good starting point is to regularly ask yourself something like “When I listen to my body, what does it need? A deep breath? Movement? Nourishment? Rest?”
- Engage in activities that build a sense of achievement: The RSI made it nigh impossible to engage with the R and data science communities, something I truly love doing, but I now realize I was also using that as a coping mechanism for the fact that a large chunk of my pay-the-bills daily work was offering almost no sense of achievement. I’m slowly getting back into engaging with the communities again, and I know for a fact that it will be 100% on me if I do not have a daily sense of achievement at the new pay-the-bills daily workplace.
It’d be easy for me to say “please be in a job that gives you this sense of daily achievement”, but that would be showing my privilege. As long as you can find something outside of an achievement-challenged job to give you that sense of achievement (without falling into the same trap I did), then that may be sufficient. The next bullet may also help for both kinds of work situations.
You can also be less hard on yourself outside of work/communities and let yourself feel a sense of achievement for working out, taking a walk, or even just doing the other things from the earlier bullets.
- Changing thoughts is easier than changing feelings: Thoughts play a critical role in how we experience a situation. When you notice yourself first becoming frustrated or upset, try to evaluate what you are thinking that is causing that emotion.
This is also known as cognitive re-framing/restructuring. That footnote goes to a paper series, but a less-heady read is Framers, which is fundamentally about the power of mental models to make better decisions. I’d note that you cannot just “stop caring” to dig yourself out of a bad situation. You will just continue to harm yourself.
Note that this last bullet can be super-hard for those of us who have a strong sense of “justice”, but hang in there and don’t stop working on re-framing.
FIN
I let myself get into a situation that I never should have.
Hindsight tells me that I should have made significant changes about four years ago, and I hope I can remember this lesson moving forward since there are fewer opportunities for “four year mistakes” ahead of me than there are behind me.
Burnout — which is an underlying component of all of the above — takes years to recover from. Not minutes. Not hours. Not days. Not weeks. Not months. Years.
I’m slowly back to trying to catch up to mikefc when it comes to crazy R packages. I have more mental space available than I did a few years ago, and I’m healthier and more fit than I have been in a long time. I am nowhere near recovered, though.
If you, too, lapsed when it comes to checking in with yourself, there’s no time like the present to restart that practice. The resources I posted here may not work for you, but there are plenty of good ones out there.
If you’ve been doing a good job on self-care, make sure to reach out to others you may sense aren’t in the same place you are. You could be a catalyst for great change.
AI-Proofing Your IT/Cyber Career: The Human-Only Capabilities That Matter
In the past ~4 weeks I have personally observed some irrefutable things in “AI” that are very likely going to cause massive shocks to employment models in IT, software development, systems administration, and cybersecurity. I know some have already seen minor shocks. They are nothing compared to what’s highly probable ahead.
Nobody likely wants to hear this, but you absolutely need to make or take time this year to identify what you can do that AI cannot do and create some of those items if your list is short or empty.
The weavers of the 1800s used violence to get a 20-year pseudo-reprieve before they were pushed into obsolescence. We’ve got maybe ~18 months. I’m as pushback-on-this-“AI”-thing as makes sense. I’d like for the bubble to burst. Even if it does, the rulers of our clicktatorship will just fuel a quick rebuild.
Four human-only capabilities in security
In my (broad) field, I think there are some things that make humans 110% necessary. Here’s my list — and it’d be great if folks in very subdomain-specific parts of cyber would provide similar ones. I try to stay in my lane.
1. Judgment under uncertainty with real consequences
These new “AI” systems can use tools to analyze a gazillion sessions and cluster payloads, but they do not (or absolutely should not) bear responsibility for the “we’re pulling the plug on production” decision at 3am. This “weight of consequence” shapes human expertise in ways that inform intuition, risk tolerance, and the ability to act decisively with incomplete information.
Organizations will continue needing people who can own outcomes, not just produce analysis.
2. Adversarial creativity and novel problem framing
The more recent “AI” systems are actually darn good at pattern matching against known patterns and recombining existing approaches. They absolutely suck at the “genuinely novel” — the attack vector nobody has documented, the defensive technique that requires understanding how a specific organization actually operates versus how it should operate.
The best security practitioners think like attackers in ways that go beyond “here are common TTPs.”
3. Institutional knowledge and relationship capital
A yuge one.
Understanding that the finance team always ignores security warnings — especially Dave — during quarter-close. That the legacy SCADA system can’t be patched because the vendor went bankrupt in 2019. That the CISO and CTO have a long-running disagreement about cloud migration.
This context shapes what recommendations are actually actionable. Many technically correct analyses are organizationally useless.
4. The ability to build and maintain trust
The biggest one.
When a breach happens, executives don’t want a report from an “AI”. They want someone who can look them in the eye, explain what happened, and take ownership of the path forward. The human element of security leadership is absolutely not going away.
How to develop these capabilities
Develop depth in areas that require your presence or legal accountability. Disciplines such as incident response, compliance attestation, or security architecture for air-gapped or classified environments. These have regulatory and practical barriers to full automation.
Build expertise in the seams between systems. Understanding how a given combination of legacy mainframe, cloud services, and OT environment actually interconnects requires the kind of institutional archaeology (or the powers of a sexton) that doesn’t exist in training data.
Get comfortable being the human in the loop. I know this will get me tapping mute or block a lot, but you’re going to need to get comfortable being the human in the loop for “AI”-augmented workflows. The analyst who can effectively direct tools, validate outputs (b/c these things will always make stuff up), and translate findings for different audiences has a different job than before but still a necessary one.
Learn to ask better questions. Bring your hypotheses, domain expertise, and knowing which threads are worth pulling to the table. That editorial judgment about what matters is undervalued, and is going to take a while to infuse into “AI” systems.
We’re all John Henry now
A year ago, even with long COVID brain fog, I could out-“John Henry” all of the commercial AI models at programming, cyber, and writing tasks, both in speed and quality.
Now, with the fog gone, I’m likely ~3 months away from being slower than “AI” on a substantial number of core tasks that it can absolutely do. I’ve seen it. I’ve validated the outputs. It sucks. It really really sucks. And it’s not because I’m feeble or have some other undisclosed brain condition (unlike 47). These systems are being curated to do exactly that: erase all of us John Henrys.
The folks who thrive will be those who can figure out what “AI” capabilities aren’t complete garbage and wield them with uniquely human judgment rather than competing on tasks where “AI” has clear advantages.
The pipeline problem
The very uncomfortable truth: there will be fewer entry-level positions that consist primarily of “look at alerts and escalate.” That pipeline into the field is narrowing at a frightening pace.
What concerns me most isn’t the senior practitioners. We’ll adapt and likely become that much more effective. It’s the junior folks who won’t get the years of pattern exposure that built our intuition in the first place.
That’s a pipeline problem the industry hasn’t seriously grappled with yet — and isn’t likely to b/c of the hot, thin air in the offices and boardrooms of myopic and greedy senior executives.