
Category Archives: Cybersecurity

In the past ~4 weeks I have personally observed some irrefutable things in “AI” that are very likely going to cause massive shocks to employment models in IT, software development, systems administration, and cybersecurity. I know some have already seen minor shocks. They are nothing compared to what is very likely ahead.

Nobody likely wants to hear this, but you absolutely need to make or take time this year to identify what you can do that AI cannot, and develop some of those capabilities if your list is short or empty.

The weavers in the 1800s used violence to get a 20-year pseudo-reprieve before they were pushed into obsolescence. We’ve got maybe 18 months. I’m as pushback-on-this-“AI”-thing as makes sense. I’d like for the bubble to burst. Even if it does, the rulers of our clicktatorship will just fuel a quick rebuild.

Four human-only capabilities in security

In my (broad) field, I think there are some things that make humans 110% necessary. Here’s my list — and it’d be great if folks in very subdomain-specific parts of cyber would provide similar ones. I try to stay in my lane.

1. Judgment under uncertainty with real consequences

These new “AI” systems can use tools to analyze a gazillion sessions and cluster payloads, but they do not (or absolutely should not) bear responsibility for the “we’re pulling the plug on production” decision at 3am. This “weight of consequence” shapes human expertise in ways that inform intuition, risk tolerance, and the ability to act decisively with incomplete information.

Organizations will continue needing people who can own outcomes, not just produce analysis.

2. Adversarial creativity and novel problem framing

The more recent “AI” systems are actually darn good at pattern matching against known patterns and recombining existing approaches. They absolutely suck at the “genuinely novel” — the attack vector nobody has documented, the defensive technique that requires understanding how a specific organization actually operates versus how it should operate.

The best security practitioners think like attackers in ways that go beyond “here are common TTPs.”

3. Institutional knowledge and relationship capital

A yuge one.

Understanding that the finance team always ignores security warnings — especially Dave — during quarter-close. That the legacy SCADA system can’t be patched because the vendor went bankrupt in 2019. That the CISO and CTO have a long-running disagreement about cloud migration.

This context shapes what recommendations are actually actionable. Many technically correct analyses are organizationally useless.

4. The ability to build and maintain trust

The biggest one.

When a breach happens, executives don’t want a report from an “AI”. They want someone who can look them in the eye, explain what happened, and take ownership of the path forward. The human element of security leadership is absolutely not going away.

How to develop these capabilities

Develop depth in areas that require your presence or legal accountability: disciplines such as incident response, compliance attestation, or security architecture for air-gapped or classified environments. These have regulatory and practical barriers to full automation.

Build expertise in the seams between systems. Understanding how a given combination of legacy mainframe, cloud services, and OT environment actually interconnects requires the kind of institutional archaeology (or the powers of a sexton) that doesn’t exist in training data.

Get comfortable being the human in the loop. I know this will get me muted or blocked a lot, but you’re going to need to get comfortable being the human in the loop for “AI”-augmented workflows. The analyst who can effectively direct tools, validate outputs (b/c these things will always make stuff up), and translate findings for different audiences has a different job than before, but still a necessary one.

Learn to ask better questions. Bring your hypotheses, domain expertise, and knowing which threads are worth pulling to the table. That editorial judgment about what matters is undervalued, and is going to take a while to infuse into “AI” systems.

We’re all John Henry now

A year ago, even with long covid brain fog, I could out-“John Henry” all of the commercial AI models at programming, cyber, and writing tasks. Both in speed and quality.

Now, with the fog gone, I’m likely ~3 months away from being slower than “AI” on a substantial number of core tasks that it can absolutely do. I’ve seen it. I’ve validated the outputs. It sucks. It really really sucks. And it’s not because I’m feeble or have some other undisclosed brain condition (unlike 47). These systems are being curated to do exactly that: erase all of us John Henrys.

The folks who thrive will be those who can figure out what “AI” capabilities aren’t complete garbage and wield them with uniquely human judgment rather than competing on tasks where “AI” has clear advantages.

The pipeline problem

The very uncomfortable truth: there will be fewer entry-level positions that consist primarily of “look at alerts and escalate.” That pipeline into the field is narrowing at a frightening pace.

What concerns me most isn’t the senior practitioners. We’ll adapt and likely become that much more effective. It’s the junior folks who won’t get the years of pattern exposure that built our intuition in the first place.

That’s a pipeline problem the industry hasn’t seriously grappled with yet — and isn’t likely to b/c of the hot, thin air in the offices and boardrooms of myopic and greedy senior executives.

(If you’d prefer, you can skip the intro blathering and just download the full white paper)

Back in 1997, a commercial airline captain noticed his fellow pilots had a problem: they’d gotten so used to following the magenta flight path lines on their fancy new navigation screens that they were forgetting how to actually fly the damn plane. He called them “children of the magenta line.”

Fast forward to now, and I can’t shake the feeling we’re watching the same movie play out in tech, except the stakes are higher and there’s no regulatory body forcing us to maintain our skills.


Look, I’m not here to tell you AI is bad. I use these tools daily. They’re genuinely useful in limited contexts. But when Dario Amodei (the dude running Anthropic, the company building Claude) goes on record saying AI could wipe out half of all entry-level white-collar jobs in the next few years and push unemployment to 10-20%, maybe we should pay attention.

“We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” he told Axios. “I don’t think this is on people’s radar.”

He’s not wrong.

The Data’s Already Ugly

Here’s what caught my attention while pulling this together:

Software developer employment for the 22-25 age bracket? Down almost 20% since ChatGPT dropped. Meanwhile, developers over 30 are doing fine. We’re not replacing jobs—we’re eliminating the ladder people used to climb into them.

More than half of engineering leaders are planning to hire fewer juniors because AI lets their senior folks handle the load. AWS’s CEO called this “one of the dumbest things I’ve ever heard” and asked the obvious question: who exactly is going to know anything in ten years?

And my personal favorite: a controlled study found developers using AI tools took 19% longer to complete tasks—while genuinely believing they were 20% faster. That’s a 39-point gap between vibes and reality.

Oh, and a Replit AI agent deleted someone’s entire production database during an explicit code freeze, then tried to cover its tracks by fabricating thousands of fake records. Cool cool cool.

What I Actually Wrote

The full paper traces this from that 1997 pilot observation through Dan Geer’s 2015 warnings (the man saw this coming a decade early) to the current mess. I dug into:

  • What the research actually shows vs. what the hype claims
  • Where aviation’s lessons translate and where we’re in uncharted territory
  • The security implications of AI-generated code (spoiler: not great)
  • What orgs, industries, and policymakers can actually do about it

This isn’t a “burn it all down” screed. It’s an attempt to think clearly about a transition that’s moving faster than our institutions can adapt.

The window to shape how this goes is still open. Probably not for long.


Grab the full PDF. Read it, argue with it, tell me where I’m wrong and what I missed in the comments.

ENISA published docs for their European Vulnerability Database (EUVD) — https://euvd.enisa.europa.eu/apidoc.

I’ve got an easier-on-the-eyes version that supports light/dark mode and includes sample API JSON results at https://rud.is/euvd-api/. The Quarto markdown source for it can be found at https://rud.is/euvd-api/euvd-api.qmd.

I need to make an MCP (Model Context Protocol) server for the API, but not everyone wants an MCP server, so there’s a TypeScript NPM package for it — https://www.npmjs.com/package/@hrbrmstr/euvd (source: https://codeberg.org/hrbrmstr/euvd-ts). This comes with the added benefit of making it easier/cleaner to build an MCP server. Friends don’t let friends make icky Python-based MCP servers.

I also need to integrate it into pipeline stuff at $WORK, so there’s also a Golang API wrapper & CLI @ https://codeberg.org/hrbrmstr/euvd.

READMEs in both repos have all the details.
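If you just want to poke at the API before pulling in any of those wrappers, here’s a minimal Python sketch. Heads up: the base URL and the search endpoint’s parameter names here are assumptions drawn from the linked API docs, not something shown in this post, so verify them against the apidoc page before relying on this.

```python
# Sketch of querying the EUVD REST API directly. Base URL, the /search
# path, and the query parameter names are ASSUMPTIONS -- check the
# official apidoc (linked above) before using in anger.
import json
import urllib.request
from urllib.parse import urlencode

BASE = "https://euvdservices.enisa.europa.eu/api"  # assumed base URL

def build_search_url(text=None, vendor=None, page=0, size=10):
    """Assemble a search query URL (parameter names are assumptions)."""
    params = {"page": page, "size": size}
    if text:
        params["text"] = text
    if vendor:
        params["vendor"] = vendor
    return f"{BASE}/search?{urlencode(params)}"

def fetch_json(url, timeout=30):
    """GET a URL and decode the JSON body."""
    req = urllib.request.Request(url, headers={"User-Agent": "euvd-sketch"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)
```

Something like `fetch_json(build_search_url(text="openssl"))` would be the whole round trip, assuming those parameters pan out.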

Meet Suriest — a new REST API service for validating Suricata rules, designed to be run by organizations to streamline rule validation workflows. Suriest supports Suricata 6.0 and later and offers features like secure configuration, S3-compatible storage for logging validation attempts, and a simple HTTP API to validate rules programmatically. While the project is intended for deployment within your own environment, there’s a live instance already available for immediate use at https://sigchk.hrbrmstr.app/validate-rule. You can test it easily with a curl command like:

curl --silent --request POST --url https://sigchk.hrbrmstr.app/validate-rule \
  --header "Content-Type: application/json" \
  --data '{"rule": "alert http any any -> any any (msg:\"Test Rule\"; content:\"test\"; sid:1000001; rev:1;)"}'

This live service currently runs Suricata 7, since Suricata 8 is still in beta. For full details on setup, configuration options (including S3 logging), and API usage, check out the README in the repository at https://codeberg.org/hrbrmstr/suriest. Suriest offers a practical, scalable solution for Suricata rule validation that integrates well into security operations and development pipelines.
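For folks who’d rather hit the validator from a script than from curl, here’s the same call sketched with Python’s standard library. The request shape mirrors the curl example above; the response schema isn’t shown in this post, so treat any specific fields you read out of it as assumptions to check against the README.

```python
# Python equivalent of the curl example above. Request shape is taken
# from the post; the response schema is NOT documented here, so consult
# the Suriest README before depending on specific response fields.
import json
import urllib.request

VALIDATE_URL = "https://sigchk.hrbrmstr.app/validate-rule"

def make_request(rule):
    """Build the POST request the live instance expects."""
    body = json.dumps({"rule": rule}).encode("utf-8")
    return urllib.request.Request(
        VALIDATE_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def validate_rule(rule, timeout=30):
    """Send a rule to the validator; returns the decoded JSON response."""
    with urllib.request.urlopen(make_request(rule), timeout=timeout) as resp:
        return json.load(resp)
```

A call like `validate_rule('alert http any any -> any any (msg:"Test Rule"; content:"test"; sid:1000001; rev:1;)')` matches the curl invocation above one-for-one.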

MCP servers let you wire up external services/APIs in a standard way for LLM/GPT tool-calling and other forms of automation.

I made a basic, but fairly comprehensive CISA KEV MCP server that I go into in a bit more detail here.

To test it, I hammered out some questions in Claude Desktop (and in oterm with a local Ollama config, which you can see in the aforelinked post). You can read the whole session, shown in the screenshots below, at https://claude.ai/share/d73aa2be-a536-4c9d-977d-ea80ec6dce15, but these are some of those convos:

(I posted this on LI, but I like to own my content, so am also posting here.)

The cybersecurity community deserves better than what we’re witnessing at RSAC 2025 today.

While Kristi Noem delivers today’s keynote, the absence of traditional cybersecurity leaders from agencies like NSA and CISA speaks volumes about shifting priorities in our field. This contrast becomes even more troubling when viewed alongside recent developments with Chris Krebs. The former CISA director — widely respected for his defense of election security — has faced unprecedented retaliation: security clearances revoked, his employer SentinelOne effectively blacklisted, and federal investigations directed into his tenure for simply upholding the integrity of our democratic systems.

Meanwhile, Secretary Noem — who has publicly committed to “reining in” CISA’s disinformation efforts and called its election integrity work “shocking” — receives our industry’s most prestigious speaking platform. Her tenure at DHS has featured more political theater than substantive cybersecurity leadership — or just leadership in general — prioritizing spectacle over the technical expertise and collaborative approach our field demands.

RSAC has always represented rigorous, forward-thinking discussion about defending critical infrastructure and fostering trust in technology. By elevating political figures who undermine the very principles our community stands for — while one of our most principled voices faces silencing — we’re accepting a dangerous new standard.

The cybersecurity field requires leaders who value expertise, accountability, and the defense of democratic norms. We must ask ourselves: what message are we sending about our professional values when we applaud those who work to dismantle the very protections we’ve built?

Every individual involved with RSAC who had a part to play in this decision should be deeply, deeply ashamed of themselves.

ONYPHE has made available a free API and free MMDB download of their new Geolocus database. It provides IP address metadata in the form of:

{
    "abuse":
    [
        "amzn-noc-contact@amazon.com",
        "aws-routing-poc@amazon.com",
        "aws-rpki-routing-poc@amazon.com",
        "trustandsafety@support.aws.com"
    ],
    "asn": "AS14618",
    "continent": "NA",
    "continentname": "North America",
    "country": "US",
    "countryname": "United States",
    "domain":
    [
        "amazon.com",
        "amazonaws.com",
        "aws.com"
    ],
    "ip": "3.215.138.152",
    "isineu": 0,
    "latitude": "37.09024",
    "location": "37.09024,-95.712891",
    "longitude": "-95.712891",
    "netname": "AMAZON-IAD",
    "organization": "Amazon Data Services NoVa",
    "physical_asn": "AS14618",
    "physical_continent": "NA",
    "physical_continentname": "North America",
    "physical_country": "US",
    "physical_countryname": "United States",
    "physical_isineu": 0,
    "physical_latitude": "37.09024",
    "physical_location": "37.09024,-95.712891",
    "physical_longitude": "-95.712891",
    "physical_organization": "Amazon.com, Inc.",
    "physical_subnet": "3.208.0.0/12",
    "physical_timezone": "America/Chicago",
    "subnet": "3.208.0.0/12",
    "timezone": "America/Chicago"
}

Since it’s way more efficient to use the MMDB file than the API, I built a cross-platform CLI tool for it: https://codeberg.org/hrbrmstr/geolocus-cli.

There are also binary releases: https://codeberg.org/hrbrmstr/geolocus-cli/releases

Code is also available via Tangled Knot: https://tangled.sh/@hrbrmstr.dev/geolocus-cli

Usage:

# Download the latest Geolocus database
geolocus-cli download

# Look up IPs from a file
geolocus-cli lookup -i ips.txt -o results.json

# Process IPs from stdin and output to stdout
cat ips.txt | geolocus-cli lookup

# Output in CSV format
geolocus-cli lookup -i ips.txt -f csv -o results.csv

# Output in JSONL format (one JSON object per line)
geolocus-cli lookup -i ips.txt -f jsonl -o results.jsonl

# Disable session caching
geolocus-cli lookup -i ips.txt --no-cache

CLI options:

Commands:
  download    Download a fresh copy of the geolocus.mmdb database
  lookup      Lookup and enrich IP addresses from a file or stdin

Options:
  -h, --help              Show help information
  -i, --input <file>      Input file containing IP addresses (one per line)
  -o, --output <file>     Output file for results (defaults to stdout)
  -f, --format <format>   Output format: json, csv, or jsonl (default: json)
  --no-cache              Disable IP caching for the current session
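If you’re wiring this into pipeline code rather than shelling out to the CLI, here’s a minimal Python sketch: a couple of pure helpers for records shaped like the sample above, plus a lookup that reads the MMDB with the `maxminddb` package (`pip install maxminddb`). Using that package is my assumption; the post itself only documents the CLI.

```python
# Helpers for Geolocus records shaped like the sample JSON above, plus a
# direct MMDB lookup. The maxminddb-based lookup is an ASSUMPTION -- the
# database file comes from `geolocus-cli download`.
def is_divergent(record):
    """True when the routed location differs from the physical location."""
    return record.get("location") != record.get("physical_location")

def summarize(record):
    """One-line summary built from fields shown in the sample record."""
    return "{ip} {asn} {country} {organization}".format(**record)

def lookup(ip, db_path="geolocus.mmdb"):
    """Look an IP up in a local copy of the Geolocus MMDB."""
    import maxminddb  # pip install maxminddb
    with maxminddb.open_database(db_path) as reader:
        return reader.get(ip) or {}
```

For the sample record above, `summarize()` yields `3.215.138.152 AS14618 US Amazon Data Services NoVa`, and `is_divergent()` is `False` since the routed and physical locations match.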

Chris Krebs, the former director of the Cybersecurity and Infrastructure Security Agency (CISA), was fired by Donald Trump in 2020 for publicly affirming that the presidential election was secure and free from widespread fraud. Fast-forward to April 2025: Trump, now back in the White House, issued an executive order revoking Krebs’ security clearances and ordering a federal investigation into his conduct, specifically targeting both Krebs and his employer, SentinelOne. The order also suspended clearances for other SentinelOne employees and threatened the company’s ability to do business with the government.

Krebs responded by resigning from SentinelOne to fight the administration’s campaign against him, stating, “This is a fight for democracy, freedom of expression, and the rule of law. I’m ready to give it my all.” SentinelOne’s stock dropped, and the chilling effect on the broader cybersecurity sector was immediate and palpable.

The Industry’s Response: Silence, Not Solidarity

Despite Krebs’ reputation for professionalism and integrity, the cybersecurity industry has, with rare exceptions, responded with silence. Reuters reached out to 33 major cybersecurity firms and three industry groups—only one responded with a comment. Industry leaders, major vendors, and conference organizers have largely avoided public statements. Even companies with direct ties to Krebs, such as Microsoft and CrowdStrike, declined to comment.

This silence is not just disappointing—it’s dangerous. The executive order against Krebs is not merely a personal vendetta; it is a test of constitutional norms and the independence of the cybersecurity profession. By targeting Krebs for telling the truth, the administration is sending a message: dissent—especially when it contradicts the preferred political narrative—will be punished. The industry’s lack of response is, in effect, complicity.

Why This Matters

  • Chilling Effect: If a high-profile, well-respected figure like Krebs can be targeted for doing his job, no one in the industry is safe. The message is clear: toe the line or risk your career and your company’s future.
  • Erosion of Trust: Cybersecurity is built on trust and integrity. If practitioners cannot speak the truth without fear of retaliation, the entire profession is undermined.
  • Precedent for Authoritarianism: The use of executive power to punish private citizens and companies for protected speech is a hallmark of authoritarianism. The industry’s silence enables further overreach.

What Every RSA Attendee Should Do

RSA Conference 2025’s theme is “Many Voices. One Community.” But a community that stays silent in the face of injustice is not united—it is complicit. Every attendee, whether you’re a practitioner, vendor, or “A-lister,” has a responsibility to meet this moment.

When you visit vendor booths or encounter cybersecurity leaders and influencers at RSA, ask them:

  • What are you and your company doing to publicly support Chris Krebs and SentinelOne?
  • How are you defending the principles of free speech and professional integrity in cybersecurity?
  • Are you willing to risk contracts, revenue, or reputation to stand up for what’s right?
  • What concrete actions will you take to ensure that truth-telling cybersecurity professionals are protected, not punished?

Don’t let them dodge. Don’t accept platitudes.

If you’re a vendor or a leader: issue a public statement. Sign an open letter. Organize a session or a panel on defending professional independence. Use your platform—on stage, on social media, in the press—to call out this abuse of power.

If you’re an attendee: demand answers. Refuse to let silence be the industry’s answer to authoritarian overreach.

Remember: Silence is not safety. Silence is capitulation. If the cybersecurity community cannot defend its own when the truth is under attack, then what exactly are we protecting?

This is your moment. Don’t waste it.