May 2026 dropped three critical Linux vulnerabilities on a near-weekly cadence, and the security discourse has mostly treated them as three separate bad days. They’re not. Together they form a reliable, race-free, forensically quiet kill chain from the public internet to root, and if you’re running nginx in front of anything that matters, you need to stop and read this.
CVE-2026-42945, dubbed NGINX Rift, landed May 13 courtesy of depthfirst. It’s a heap buffer overflow in ngx_http_rewrite_module that’s been sitting in every nginx build since 2008. An unauthenticated attacker sends a single crafted HTTP request and overwrites the heap, getting remote code execution in the worker process – no auth, no prior session, no prerequisites beyond a network path to port 80 or 443. The root cause is a mismatch between two passes over the rewrite directives: the length calculation runs with is_args=0 (raw byte count) while the copy pass runs with is_args=1 (URI-escaped), so the write overruns the allocation. The trigger is a configuration pattern that’s everywhere: a rewrite directive with an unnamed PCRE capture ($1, $2) and a question mark in the replacement string, followed by another rewrite, if, or set in the same block. CVSS 9.2, and it earns it.
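For concreteness, here is a hypothetical config fragment (paths and names invented for illustration) that matches the trigger condition described above: an unnamed PCRE capture, a literal question mark in the replacement, and a following directive in the same block:

```nginx
# Hypothetical illustration of the trigger pattern -- not a real site config.
location / {
    # Unnamed capture ($1) plus a literal "?" in the replacement string...
    rewrite ^/old/(.*)$ /new.php?page=$1 break;
    # ...followed by another rewrite, if, or set in the same block.
    set $cache_bypass 1;
}
```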
CVE-2026-31431, “Copy Fail,” came from Theori on April 29. It’s a logic bug in the authencesn cryptographic template that lets an unprivileged local user write 4 controlled bytes into the page cache of any readable file, then pivot to root. The exploit is 732 bytes of Python (no races, no disk writes, no forensic residue – the page cache corruption means file integrity checks pass because the underlying file on disk was never touched). It works on every distro shipped since 2017. CISA added it to the Known Exploited Vulnerabilities catalog with a May 15 remediation deadline.
Then there’s CVE-2026-43284 and CVE-2026-43500, “Dirty Frag,” disclosed May 7 by Hyunwoo Kim. It’s a two-bug chain that lands in the same place as Copy Fail – page-cache-to-root LPE – but routes around the Copy Fail mitigation entirely. If you blacklisted algif_aead thinking you were covered, Dirty Frag gets there through xfrm-ESP or rxrpc instead. Microsoft’s already seeing in-the-wild activity: SSH foothold, stage an ELF binary, escalate via su. Deterministic. No races. Same bug class, different sink.
Why does the combination matter more than any single bug? Exploit chains are usually academic exercises, published to demonstrate feasibility and then left to rot in a CTF writeup. This isn’t that. CVE-2026-42945 hands you a foothold from the internet. CVE-2026-31431 or CVE-2026-43284 hands you root once you’re on the box. Neither step requires races, user interaction, or authentication. Neither leaves obvious forensic traces on disk. Both have working, published proof-of-concept code as of this writing.
The surface area here is genuinely uncomfortable. NGINX is the most-deployed web server on the planet. WordPress – whose recommended NGINX configuration, along with that of scads of massively deployed plugins, contains the exact vulnerable rewrite pattern (I checked; it’s right there in the docs) – powers something north of 40% of the web. That means whitehouse.gov, NASA, the UK Government, the Australian Government, the State of California, and essentially every major US university are potentially in scope. Every federal agency required by the 21st Century IDEA Act to maintain a public web presence. Every municipality running WordPress on a LEMP stack. Every SaaS app behind an NGINX ingress controller. An attacker doesn’t need a zero-day chain for any of these; they need access to data from a public internet scanner, a grep for vulnerable version strings, and the ability to send one HTTP request.
I shipped a static configuration scanner for the NGINX Rift pattern. Single bash script, no dependencies beyond bash 4+ and grep, runs offline against config files without touching a live nginx process:
git clone https://git.sr.ht/~hrbrmstr/cve-2026-42945-scanner
cd cve-2026-42945-scanner
./scan-nginx-rift.sh /etc/nginx
Run it on every box running nginx. Add --json in CI. Point it at ingress controller configmaps. The output tells you the file, the line number, the vulnerable directive, and which following directive creates the exploitability condition:
[VULN] sites-enabled/wordpress.conf:8 – rewrite ^/([^/]+?)-sitemap([0-9]+)?.xml$
followed by "if" at line 12
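If you want a quick first pass before pulling the scanner, the core condition is greppable. This is my own rough approximation, not the scanner's actual logic: it flags `rewrite` directives that combine an unnamed capture reference with a literal `?` in the same directive:

```shell
#!/usr/bin/env bash
# Rough triage grep for the NGINX Rift trigger pattern (approximation only).
# Flags rewrite directives that contain both an unnamed capture reference
# ($1..$9) and a literal "?" -- in either order -- within one directive.
rift_grep() {
  grep -En 'rewrite[[:space:]]+[^;]*\([^;]*\?[^;]*\$[0-9]|rewrite[[:space:]]+[^;]*\$[0-9][^;]*\?' "$1"
}

# Demo against a synthetic config fragment: line 1 uses unnamed captures
# and a "?" (flagged); line 2 uses a named capture (not flagged).
conf="$(mktemp)"
cat > "$conf" <<'EOF'
rewrite ^/([^/]+?)-sitemap([0-9]+)?.xml$ /index.php?sitemap=$1 last;
rewrite ^/(?<term>[^/]+)$ /index.php?term=$term last;
EOF
rift_grep "$conf"
```

It will over- and under-match relative to a real config parser (it ignores multi-line directives, comments, and `include` files), so treat it as triage, not clearance.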
If you find a hit, you’ve got two options in order of preference:
- Upgrade nginx to 1.30.1 (stable) or 1.31.0 (mainline).
- Replace unnamed captures with named captures in every affected rewrite:
# Before (vulnerable)
rewrite ^/([^/]+?)-sitemap([0-9]+)?.xml$ /index.php?sitemap=$1&sitemap_n=$2 last;
# After (safe)
rewrite ^/(?<term>[^/]+?)-sitemap(?<num>[0-9]+)?.xml$ /index.php?sitemap=$term&sitemap_n=$num last;
For the kernel side, check your distro’s patch status now and don’t trust “we’ll get to it.” If you can’t patch immediately, blacklisting algif_aead blocks Copy Fail but does nothing for Dirty Frag. For Dirty Frag, unload xfrm_algo.ko and rxrpc.ko if your workload doesn’t need them, and make sure AppArmor or SELinux policy is blocking unprivileged user namespaces.
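If you go the module-blacklist route, doing it in modprobe config makes it survive reboots. A sketch, with the filename arbitrary and the Debian-specific sysctl key an assumption you should verify against your distro:

```conf
# /etc/modprobe.d/may-2026-mitigations.conf  (filename is arbitrary)
# "install <mod> /bin/false" blocks even explicit modprobe, not just
# alias autoloading (which is all a bare "blacklist" line stops).
install algif_aead /bin/false   # Copy Fail sink
install rxrpc /bin/false        # Dirty Frag sink, if your workload doesn't use it
# xfrm_algo is pulled in by IPsec; only block it if nothing needs ESP:
# install xfrm_algo /bin/false

# /etc/sysctl.d/99-no-userns.conf
# Debian/Ubuntu-specific key; on other distros the closest equivalent
# is user.max_user_namespaces = 0. An alternative to the MAC-policy
# approach mentioned above.
kernel.unprivileged_userns_clone = 0
```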
Three critical Linux CVEs in three weeks, all with published exploits, all in code that’s been shipping for years. The gap between disclosure and working exploit is now measured in hours, not months. The scanner above closes one piece of that gap for the nginx side. The rest depends on whether you check your configs today or wait until something in your logs looks wrong – at which point the forensic-residue-free LPE means “looking wrong” may be all you ever see.

AI-Proofing Your IT/Cyber Career: The Human-Only Capabilities That Matter
In the past ~4 weeks I have personally observed some irrefutable things in “AI” that are very likely going to cause massive shocks to employment models in IT, software development, systems administration, and cybersecurity. I know some have already seen minor shocks. They are nothing compared to what’s very likely ahead.
Nobody likely wants to hear this, but you absolutely need to make or take time this year to identify what you can do that AI cannot do and create some of those items if your list is short or empty.
The weavers in the 1800s used violence to get a 20-year pseudo-reprieve before they were pushed into obsolescence. We’ve got ~maybe 18 months. I’m as pushback-on-this-“AI”-thing as makes sense. I’d like for the bubble to burst. Even if it does, the rulers of our clicktatorship will just fuel a quick rebuild.
Four human-only capabilities in security
In my (broad) field, I think there are some things that make humans 110% necessary. Here’s my list — and it’d be great if folks in very subdomain-specific parts of cyber would provide similar ones. I try to stay in my lane.
1. Judgment under uncertainty with real consequences
These new “AI” systems can use tools to analyze a gazillion sessions and cluster payloads, but they do not (or absolutely should not) bear responsibility for the “we’re pulling the plug on production” decision at 3am. This “weight of consequence” shapes human expertise in ways that inform intuition, risk tolerance, and the ability to act decisively with incomplete information.
Organizations will continue needing people who can own outcomes, not just produce analysis.
2. Adversarial creativity and novel problem framing
The more recent “AI” systems are actually darn good at pattern matching against known patterns and recombining existing approaches. They absolutely suck at the “genuinely novel” — the attack vector nobody has documented, the defensive technique that requires understanding how a specific organization actually operates versus how it should operate.
The best security practitioners think like attackers in ways that go beyond “here are common TTPs.”
3. Institutional knowledge and relationship capital
A yuge one.
Understanding that the finance team always ignores security warnings — especially Dave — during quarter-close. That the legacy SCADA system can’t be patched because the vendor went bankrupt in 2019. That the CISO and CTO have a long-running disagreement about cloud migration.
This context shapes what recommendations are actually actionable. Many technically correct analyses are organizationally useless.
4. The ability to build and maintain trust
The biggest one.
When a breach happens, executives don’t want a report from an “AI”. They want someone who can look them in the eye, explain what happened, and take ownership of the path forward. The human element of security leadership is absolutely not going away.
How to develop these capabilities
Develop depth in areas that require your presence or legal accountability: incident response, compliance attestation, or security architecture for air-gapped or classified environments. These have regulatory and practical barriers to full automation.
Build expertise in the seams between systems. Understanding how a given combination of legacy mainframe, cloud services, and OT environment actually interconnects requires the kind of institutional archaeology (or the powers of a sexton) that doesn’t exist in training data.
Get comfortable being the human in the loop. I know this will get me tapping mute or block a lot, but you’re going to need to get comfortable being the human in the loop for “AI”-augmented workflows. The analyst who can effectively direct tools, validate outputs (b/c these things will always make stuff up), and translate findings for different audiences has a different job than before but still a necessary one.
Learn to ask better questions. Bring your hypotheses, domain expertise, and knowing which threads are worth pulling to the table. That editorial judgment about what matters is undervalued, and is going to take a while to infuse into “AI” systems.
We’re all John Henry now
A year ago, even with long covid brain fog, I could out-“John Henry” all of the commercial AI models at programming, cyber, and writing tasks. Both in speed and quality.
Now, with the fog gone, I’m likely ~3 months away from being slower than “AI” on a substantial number of core tasks that it can absolutely do. I’ve seen it. I’ve validated the outputs. It sucks. It really really sucks. And it’s not because I’m feeble or have some other undisclosed brain condition (unlike 47). These systems are being curated to do exactly that: erase all of us John Henrys.
The folks who thrive will be those who can figure out what “AI” capabilities aren’t complete garbage and wield them with uniquely human judgment rather than competing on tasks where “AI” has clear advantages.
The pipeline problem
The very uncomfortable truth: there will be fewer entry-level positions that consist primarily of “look at alerts and escalate.” That pipeline into the field is narrowing at a frightening pace.
What concerns me most isn’t the senior practitioners. We’ll adapt and likely become that much more effective. It’s the junior folks who won’t get the years of pattern exposure that built our intuition in the first place.
That’s a pipeline problem the industry hasn’t seriously grappled with yet — and isn’t likely to b/c of the hot, thin air in the offices and boardrooms of myopic and greedy senior executives.