AI is turning software development into a Kafkaesque fever dream
What that means for tech leaders

As an Engineering Manager, I was always under the impression that employee performance was to be measured in terms of team and company impact:
How effective is this employee as a functional member of the team?
How has this employee contributed to the success of the organization?
With lots of nuance, of course, but generally all boiling down to those two metrics.
The problem with measuring these things in 2026 is that “AI-first” strategies are making a mockery of it all.
Firstly, software engineering is becoming much less collaborative and more of a solo activity. For most of my career I tried to break down silos and championed communication and teamwork, only to witness the emergence of AI copilots that replace co-contributors and automated workflows that replace peer review. And vibe coding has more or less spelled the end of developers prioritizing quality and accuracy over speed and convenience: there is dwindling motivation for developers to plan, discuss, and share their progress with one another, or to maintain team standards and coding norms beyond what can be achieved with automated linters and agent spec files. Devs have always had a reputation for being solitary and aloof, but what was once merely a stereotype is now in danger of becoming ingrained. So how does a manager evaluate a team member when the team itself is barely a shell, an org chart of people who don’t really talk to each other or share any common ground? Don’t pretend this isn’t happening.
Secondly, there’s growing concern that excessive outsourcing to AI is impairing workers, eroding their skills over time, and possibly even dulling their cognition and confidence; the loss of comprehension in software engineering is becoming more and more apparent. How can one measure employee growth when they’re regressing into AI dependency and mental atrophy? What new skills are being developed when the only skill being exercised is writing a prompt and setting up a Ralph Wiggum loop? What impact can a siloed individual have on the rest of the organization? What obstacles are being overcome to drive the personal and professional growth of an individual who exerts practically no effort to achieve their goals? How does any one individual using AI stand out from any other individual using the same AI?
Thirdly: so much AI-generated code simply doesn’t contribute to the success of the company. So even the idea of focusing on outcomes rather than contributions becomes a non-starter. What do we measure when every team with an AI mandate is essentially failing, most of the time? When even the so-called AI productivity boost is merely a mirage? It sounds almost comical. The only way to guarantee a win with the current crop of AI tools is to pick very low-hanging fruit: automate rudimentary processes that are easy to predict and easy to automate, and avoid letting AI anywhere near critical systems. Some safe bets, duplicated by thousands of companies at this point, include customer service chatbots, automated data scraping and sentiment analysis, or (poor quality) content generation. Many AI projects are barely more sophisticated than what was achieved with regular natural language processing a decade ago. Most CIOs don’t even know the difference between a true AI agent and a basic automation workflow in N8N or Zapier. And most corporations are woefully unprepared for anything more advanced because they don’t have their house in order. At the end of the day, the normal reaction to so many AI feature announcements should be: so what?
Managers know this, even if a complete lack of job security means they’re too afraid to acknowledge it. 96% of corporate AI initiatives fail, and the customers we’re designing for are actively resistant to AI. But CEOs have become so authoritarian and misguided in their AI fantasies that they won’t stop pouring investment and resources into AI projects, no matter how many of them fail. Like something straight out of a Kafka novel, any manager in charge of a failing AI project is going to be held accountable, even when the work involved was entirely vibe coded and all evidence and common sense pointed to the inevitability of failure. So how is the success of AI work measured in the Twilight Zone between hype and reality, and how does one keep one’s head? Probably through masterful manipulation of the truth and crossing one’s fingers: become best friends with the CFO and make sure your numbers can be fudged enough to save your team from elimination.
We’re so confused and defeated by the onslaught of AI-first thinking, devoid of reason or common sense, that we’re going back to measuring developer progress in terms of lines of code, number of code commits, or number of open PRs. This is completely backwards and undoes decades of progress that helped software development evolve from an unglamorous factory process into a serious and respected craft. In 2026, instead of methodical, analytical, and thoughtful software design, we’re going back to performative productivity that leads to brittle, insecure applications and crippling tech debt. We’re prioritizing convenience and making business owners happy, rather than staying focused on security, uptime, and stable business operations. Mark my words: this won’t end well for anyone.

And even if you somehow manage to come out of all this with some kind of rubric for measuring a team member’s success, how do you reward them? The technical career ladder is fast losing all its rungs, ascending nowhere as AI adoption brings about the great flattening. Won’t future SWEs be managed by a supervisor algorithm, like Amazon warehouse employees? And won’t they in turn be supervisors of AI agents by default? If so, where is the advancement opportunity? Software Architects and Staff Engineers only make sense in a system that respects human judgement and expertise, which can hardly be expressed in caveman prompts, and middle managers seem to be heading towards extinction. As for the CTO, what role can they possibly play in this new AI-dominated world, except as a punchbag or sponge for the CEO?
We’re heading off a cliff if we continue to put AI first and humans last. Not only are the current trends ruining careers and sinking profits, we’re also cutting off a future pipeline of talented engineers by pretending that “AI skills” are more important than a solid foundation in software engineering principles and best practices. We’re being steamrolled by the C-suite into jumping on the AI train blindfolded with our hands tied. We’re not pushing back on the hype, and we’re not including the slide in our PowerPoint demo that shows how the project failed because AI was too incompetent.
Heck, we’re even being explicitly told to keep doing our jobs under extreme surveillance until such time as AI can replace us. We’re training our own replacements on the borrowed time that remains in our apparently soon-to-be-cut-short careers. Even Black Mirror didn’t predict this distinctly dystopian outcome.
The only way out of this surreal nightmare, as far as I can see, is to find the courage to start talking honestly about the lackluster nature of what we’re really dealing with. For some of us that means waking up from the fever dream of AI inevitability and documenting the failures as they happen, in real time. When we meet with our peers and leaders we need to be honest about the inadequacies of this technology and the negative impact it’s having on us and our direct reports. We have to sound some kind of alarm that work culture is being eroded, and talk about how we might be on a path that leads to mistakes we can never undo. And if all that sounds too human-centric and you think your superiors might not care, perhaps work on communicating how AI is a security nightmare, or how costs are rising exponentially as the bill comes due for the data centers and GPUs behind the models that are now married to all your systems. Show them the proof that there’s no ROI you can measure, only magical thinking and waste. Not in a dramatic way that would seed contempt and burn bridges: just lay it all out there and don’t dress up the truth to make leaders feel better about their complete lack of foresight or common sense.
I know all too well about the fear of being fired for speaking up, but if we’re all headed that way anyway, even if we acquiesce, isn’t there a reason to risk it? The alternative looks like slow decay and burnout, maybe even PTSD. And you deserve better than that.


