The Three Layers of Technical Credibility
AI Eats Software? Not This
The most-repeated phrase on CNBC in the last few weeks is probably “AI is eating software alive.” You’ve heard it. Maybe over coffee, maybe in passing while someone had Bloomberg on in the background. $285 billion disappeared from software stocks in 48 hours back in February. Traders at Jefferies started calling it the “SaaSpocalypse.” Atlassian dropped 35% in a week. The software ETF entered a bear market.
I watched those numbers and thought: that’s a credibility story disguised as a stock story.
Because if you open LinkedIn right now, your feed is telling you the same thing, just on a six-month delay. Scroll back. Six months ago it was full of engineers posting about weekend framework deep-dives. “Spent my Saturday with [shiny new tool]. Staying sharp.” Hundreds of likes. Dozens of “respect” comments. I liked a few of those posts myself.
Fast forward to today. The best AI coding models are solving nearly 80% of real-world software engineering tasks. AI coding tools are pulling in billions in revenue less than a year after launch. An AI agent can scaffold the whole setup those weekend posts were celebrating in about twenty minutes. My point being: that knowledge didn't just get old. It got replaced.
But buried in the same feed, around the same time, was a quieter post. An engineering director who caught a pipeline design flaw during a review. Saved her team weeks in production. Got maybe 40 likes. Nobody shared it.
Her knowledge is still worth exactly what it was six months ago. Probably more.
I keep thinking about those two posts. Because they represent two completely different kinds of technical knowledge. And the market, in its blunt way, is telling us one of them just went to zero while the other got more valuable. Most of us treat “staying technical” like it’s one thing. It’s not. It’s actually three different things, and they’re moving in opposite directions.
1. The stuff AI already knows
Frameworks, libraries, vendor configs, syntax. This is where most of us pour our “staying sharp” hours. Conference talks, tutorials, weekend side projects, that one Udemy course you bought at 2 AM. I’ve done all of it. It feels productive. Looks great on LinkedIn. And I’m not going to pretend I didn’t enjoy the dopamine of spinning up something new on a Saturday.
That knowledge was already expiring fast. Technical skills lose relevance in about two and a half years now. That was bad enough before AI showed up. AI made it worse. Way worse. Because AI doesn’t just change tools faster. It turns tool knowledge into a commodity the moment you learn it.
The head of Claude Code at Anthropic told Fortune that “pretty much 100%” of their code is now AI-generated. Over 25% of Google’s new code comes from AI. If the companies building these models have already replaced the tool layer internally, what do you think happens to the rest of us? Your team already has a platform engineer AND an AI assistant for Kubernetes. They don’t need you to know the configs.
Maybe we’re all studying for a test that got canceled?
2. The stuff your team can’t see because they’re too close
You’ve probably seen this already, or you will soon. An AI-generated service passes every unit test, looks clean in review, ships without a flag. Two months later it’s the reason another team’s migration is stalled. The coupling was invisible at the PR level. You had to zoom out to see it. And the engineer who prompted it was too close to the code to see the system.
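Here's a toy sketch of how that blindness happens at the PR level. The names (`summarize_order`, the `orders` row shape) are hypothetical, but the pattern is real: the unit test freezes an assumption about data another team owns, so it keeps passing even after that team's migration changes the contract.

```python
# Hypothetical illustration: a service whose unit tests pass while its
# hidden coupling to another team's schema goes unnoticed.

def summarize_order(row: dict) -> str:
    # Implicit contract: "row" comes from a table another team owns,
    # and we assume it will always expose a "status" column.
    return f"order {row['id']}: {row['status']}"

def test_summarize_order():
    # The test fabricates the row shape, so it passes forever --
    # even after the owning team renames status -> state in production.
    fake_row = {"id": 7, "status": "shipped"}  # frozen assumption
    assert summarize_order(fake_row) == "order 7: shipped"

test_summarize_order()
print("unit test passed; the coupling is invisible at this level")
```

Nothing in that test, or in the diff, points at the other team's migration. You only see the collision if you know both systems.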
That’s not a one-off. AI now writes over 40% of all production code. Throughput is up across the board. Everyone’s shipping faster. But nobody talks about what CodeRabbit found when they looked at the quality side: AI-generated code carries 1.7x more issues. Incidents per pull request climbed 23.5% last year. Change failure rates rose 30%.
More code. More failures. Same or fewer releases.
So where did the bottleneck go? It moved from writing the code to knowing whether the code should ship. Your team is generating more output than ever. But who’s looking at how it all fits together? Who notices that the service your team just shipped is going to collide with a migration two teams over? That’s not in any unit test. That’s the kind of thing you only catch if you’ve been living in the reviews and the debugging, not the feature work. And right now, most teams are so focused on shipping faster that nobody’s asking whether faster is actually better.
3. Scar tissue
Data pipelines will always be underestimated. Rewrites always take 3x longer than anyone promises. Distributed systems always have consistency trade-offs. These truths don’t change with the framework of the year. They don’t show up in a tutorial. You earn them by watching things break for a decade.
They’re exactly what AI gets wrong. AI generates code that works in isolation. But it doesn’t know what happened the last time someone tried this architecture at scale. The schema migration that locked the table for 20 minutes. The abstraction that looked clean in staging but collapsed under real load. The model can’t see those. You can. Because you were there when it broke.
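That table-locking migration is a good example of scar tissue you can actually write down. A minimal sketch of the lesson, with hypothetical names and an `execute()` stub standing in for a real database call: instead of one big ALTER plus backfill in a single long transaction, add the column as nullable, backfill in small batches, then enforce the constraint.

```python
# Hypothetical sketch of the lock-safe version of a schema migration.
# Table/column names and the execute() stub are illustrative only.

statements = []

def execute(sql: str) -> None:
    statements.append(sql)  # stand-in for a real DB call

ROWS = 10_000
BATCH = 2_500

# Step 1: metadata-only change; no long exclusive lock or table rewrite.
execute("ALTER TABLE orders ADD COLUMN region TEXT NULL")

# Step 2: backfill in short batches so readers and writers can interleave,
# instead of one UPDATE that holds locks for the whole table.
for start in range(0, ROWS, BATCH):
    execute(
        f"UPDATE orders SET region = 'unknown' "
        f"WHERE id >= {start} AND id < {start + BATCH} AND region IS NULL"
    )

# Step 3: only now enforce the constraint, once the data is in place.
execute("ALTER TABLE orders ALTER COLUMN region SET NOT NULL")

print(len(statements))  # 1 ALTER + 4 batched UPDATEs + 1 constraint = 6
```

The model will happily generate the one-shot version that "works" in staging. Knowing why you split it into three steps is the part you paid for with that 20-minute outage.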
That’s the competence your team actually cares about. Goodall’s research across thousands of U.S. and British workers found that a boss’s technical competence is the single strongest predictor of job satisfaction. Their judgment, not their output. Jeff Dean’s credibility at Google wasn’t built by knowing their current codebase. It was built over 25 years of watching what works and what breaks at planetary scale. That kind of knowledge doesn’t expire. It compounds.
This is the only layer getting more valuable. Everything else is getting cheaper.
Now connect those dots
AI gutted the first layer and made the third worth more at the same time. “I can write that” stopped being impressive. “I know why that will break at scale” is the scarce thing now. The data across hundreds of thousands of engineers backs this up: top AI adopters ship 2x the throughput, but most teams aren’t even measuring whether that throughput is producing better outcomes.
Boards went from “are you experimenting with AI?” to “show me results.” The engineering manager who can judge AI output before it ships, who knows what the model’s blind spots will cost in production: that’s what credibility looks like in 2026. Not “I learned the new framework.” That’s already automated.
Come back to the feed
Go look at your LinkedIn feed again. The posts with the most engagement are still about tools. Meanwhile, the engineering director who saved her team three weeks by catching a design flaw? She’s not posting about it. Nobody writes “I convinced my team NOT to build something today” and gets 500 likes. But that’s the work that compounds. And the market is starting to agree.
Years ago, when I moved from engineering into leadership, the hardest thing wasn’t learning to manage. It was letting go of the first layer. I’d built my identity on being the person who knew the stack, who could jump in and write the fix. Letting go of that felt like losing something. It took me a while to realize I wasn’t losing credibility. I was shifting where it lived. The moments my teams actually trusted me most had nothing to do with code I wrote. It was the architecture call I made before a system buckled. The migration risk I caught because I’d watched a similar one fail five years earlier at a different company. Pattern recognition, not syntax.
That transition used to take years. You’d grow into it gradually as you moved from IC to manager to director. AI just compressed it. Today, a first-time engineering manager is watching their team generate production code with an AI agent on day one. The first layer isn’t something they’ll slowly outgrow. It’s already gone. The question that used to be “will I stay technical enough?” is now “which layer of technical am I building?” And if you’re still answering that question with frameworks and tutorials, AI already answered it for you.
So here is my question to you: of your last 10 “staying technical” hours, how many went to stuff AI can already do for your team, and how many went to the judgment it never will?
#Leadership #EngineeringManagement #AI #SoftwareEngineering #TechnicalLeadership