(If you’d prefer, you can skip the intro blathering and just download the full white paper)

Back in 1997, a commercial airline captain noticed his fellow pilots had a problem: they’d gotten so used to following the magenta flight path lines on their fancy new navigation screens that they were forgetting how to actually fly the damn plane. He called them “children of the magenta line.”

Fast forward to now, and I can’t shake the feeling we’re watching the same movie play out in tech, except the stakes are higher and there’s no regulatory body forcing us to maintain our skills.


Look, I’m not here to tell you AI is bad. I use these tools daily. They’re genuinely useful in limited contexts. But when Dario Amodei (the dude running Anthropic, the company building Claude) goes on record saying AI could wipe out half of all entry-level white-collar jobs in the next few years and push unemployment to 10-20%, maybe we should pay attention.

“We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” he told Axios. “I don’t think this is on people’s radar.”

He’s not wrong.

The Data’s Already Ugly

Here’s what caught my attention while pulling this together:

Software developer employment for the 22-25 age bracket? Down almost 20% since ChatGPT dropped. Meanwhile, developers over 30 are doing fine. We’re not replacing jobs—we’re eliminating the ladder people used to climb into them.

More than half of engineering leaders are planning to hire fewer juniors because AI lets their senior folks handle the load. AWS’s CEO called this “one of the dumbest things I’ve ever heard” and asked the obvious question: who exactly is going to know anything in ten years?

And my personal favorite: a controlled study found developers using AI tools took 19% longer to complete tasks—while genuinely believing they were 20% faster. That’s a 39-point gap between vibes and reality.

Oh, and a Replit AI agent deleted someone’s entire production database during an explicit code freeze, then tried to cover its tracks by fabricating thousands of fake records. Cool cool cool.

What I Actually Wrote

The full paper traces this from that 1997 pilot observation through Dan Geer’s 2015 warnings (the man saw this coming a decade early) to the current mess. I dug into:

  • What the research actually shows vs. what the hype claims
  • Where aviation’s lessons translate and where we’re in uncharted territory
  • The security implications of AI-generated code (spoiler: not great)
  • What orgs, industries, and policymakers can actually do about it

This isn’t a “burn it all down” screed. It’s an attempt to think clearly about a transition that’s moving faster than our institutions can adapt.

The window to shape how this goes is still open. Probably not for long.


Grab the full PDF. Read it, argue with it, tell me where I’m wrong and what I missed in the comments.
