Yes, AI Is Going to Replace You. And I Know Because It Already Replaced Me.
The 10 things every developer tells themselves. And why none of them hold up.
Ten PM. I’m lying on the couch, scrolling. I’d been a guest on a tech podcast a few hours earlier, and my phone hadn’t stopped buzzing since. On the show I’d said that I don’t write a single line of code myself, that the AI knows my codebase better than I do, and that the word “programmer” is about to stop meaning what it means today.
The internet didn’t like that.
Engineers from Google, NVIDIA, Monday - everywhere - came at me. “You don’t write real code.” “You’re just riding the hype.” “We heard this during the dot-com bubble too, and nothing happened.”
So I did something most people don’t do when they’re under attack. I stopped and read everything. Every comment. Every argument. I sat down and made a list. These are the 10 most common claims - and why I think they all miss the point.
1. “AI doesn’t actually think”
Let’s start with the basics. AI doesn’t think like us. There are no neurons, no chemistry, no consciousness. There’s a neural network with billions of parameters trained on nearly every piece of text the internet has ever produced, and given an input - it computes the most statistically likely continuation. Is it considered thinking? Not really. That’s pattern matching at superhuman scale.
But does it matter?
What are we humans even doing when we “think”? When you’re debugging, what happens in your head? You recall a similar bug. You search for patterns. You try things that worked before. Sometimes you “know” what the problem is before you’ve even looked - we call that intuition. But it’s not divine inspiration. It’s your brain doing pattern matching on everything it’s ever seen. And compared to an LLM - that’s not a lot.
Story time. I got an alert the other day - something that had always worked suddenly broke in production. There were tons of code changes but nothing that looked related. I went through everything, tried to reproduce it - nothing.
I gave the problem to Claude. It pulled up previous commits, asked one question about a dependency that had been updated between versions - something I’d missed among all the changes. And boom - that was the cause. It got there in under five minutes.
Now, did it actually “think”? I don’t know. And I don’t care. It arrived at a solution that my “thinking” didn’t.
Let’s play a game. Take your entire life and imagine putting it in a GitHub repo. Folders by year, subfolders by month, day, hour. Every interaction is a commit. Every conversation tagged with labels - who was there, what was said, how you felt. Every decision you made, every moment you remembered and forgot - all there, organized, searchable.
And I’ll ask you: “Why do you love your elementary school teacher?”
What would you answer? Something vague. “She was warm.” “She believed in me.” Maybe a mental image surfaces - the smell of chalk, light from the window, a general feeling of warmth. But when exactly? What did she say? What day was it? You don’t know. You can’t grep your memory. You can’t git blame the feeling. Your brain kept the emotion and deleted the source.
Now give AI access to that repo. It’ll find the exact commit - November 15th, third period, she stopped the whole class because someone was laughing at you and said one sentence that stuck. It’ll cross-reference with subsequent commits and find that from that day on, you behaved differently. Not because it “understands” better. Because it searches better.
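The thought experiment above can be sketched as code. This is a toy, not an implementation - the commit records and dates are invented for illustration - but it shows the gap: a repo of your life is something you can grep, and a human memory isn’t.

```python
from dataclasses import dataclass, field

@dataclass
class Commit:
    date: str                        # ISO date of the moment
    tags: list[str] = field(default_factory=list)  # who was there, what it was
    message: str = ""                # what was said or done

# A toy "life repo" - entries are invented for illustration.
life = [
    Commit("1998-11-15", ["teacher", "classroom"],
           "She stopped the whole class and defended me."),
    Commit("1999-03-02", ["teacher", "report-card"],
           "She wrote that she believed in me."),
]

def grep(repo: list[Commit], keyword: str) -> list[Commit]:
    """The 'search' human memory can't do: exact recall by keyword."""
    return [c for c in repo if keyword in c.message or keyword in c.tags]

# Every tagged moment involving the teacher, with exact dates intact.
print([c.date for c in grep(life, "teacher")])
```

Nothing clever is happening here - it’s a linear scan over structured records. That’s the point: the advantage isn’t deep understanding, it’s total, lossless recall over data a brain would have compressed into a vague feeling.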
So the debate “does AI really think” is interesting for philosophers. But for anyone who needs to deliver results? It’s irrelevant. It’s like the joke about two people running from a bear. Mid-run, the first one stops to put on sneakers. The second one yells, “Why are you stopping?! Run!” And the first one answers, “I don’t need to outrun the bear. I just need to outrun you.”
To replace you, AI doesn’t need to “truly think.” It just needs to reach the right answer before you do.
2. “We’ve heard this before, they always say programmers are done”
Every few years some new tool is supposed to “eliminate developers.” And every few years there’s a crisis that’s supposed to “end tech.” And the people who’ve survived both look at AI and say, “Seen this movie. Know how it ends.”
And to be honest, history really is on their side. COBOL was supposed to let managers write software themselves. Low-code was supposed to replace 65% of development activity by 2024. Didn’t happen. And anyone who survived the dot-com bubble, 2008, the layoffs of 2022 - they have real resilience today. They know the sky doesn’t fall even when it looks like it.
But this time it’s different.
Because there’s a fundamental difference between what came before and what’s happening now. Every previous tool automated execution. COBOL automated syntax. Low-code automated interfaces. They all did what you told them to, just faster. And every previous crisis was cyclical - demand dropped, companies cut, but the work itself didn’t change. After the recession passed, if you knew how to write code - you knew how to write code. Your skills held their value. You just waited for the market to come back, and the market always came back.
But AI isn’t just automating the “how to write.” It’s automating the “what to write.” And it’s not creating a demand crisis - demand for software is exploding. What’s changing isn’t how much work there is, but who does it, and how. The skills the job demands are changing. When a tool automates the “what” and the “how,” it threatens everyone working at the decision layer - not just programmers. PMs writing specs, designers defining flows, everyone.
And there’s another difference: COBOL in 2025 is the same COBOL from 1960. Low-code in 2025 is nearly the same low-code from 2015. They were created, hit a ceiling, and got stuck there. AI from a year ago? It’s a joke compared to today. You can’t learn where it tops out, because it doesn’t top out - it moves, exponentially. You’re comparing tools with fixed inputs and outputs to a machine that changes itself. It’s not the same category.
3. “Garbage in, garbage out”
Forgive me, but this claim is a bit disconnected from reality. I remember this debate from two years ago. Back in the hallucination era of Copilot’s tab completion - Stack Overflow was already cratering. The platform every developer in the world lived inside. From 200,000 questions a month to nearly zero by December 2025. A 78% drop in traffic. Real garbage, huh?
But for the sake of argument, let’s dig in. Here’s a group exercise: open your AI conversation history. Not from today - from a year ago. And read the results you got out loud.
Embarrassing, right?
Back in the day the claim was “AI output is full of hallucinations, you can’t trust it.” And they were right. Then it improved. So the claim shifted to “okay, but you need very precise prompts.” Then chat arrived that writes entire functions. “Yeah, but it doesn’t have your project’s context.” Then Claude Code arrived, reads your entire codebase, and builds complete features.
See the pattern? At every stage the claim is “correct” - but only about the present. The problem is we look at today’s limitation as if it’s a law of nature. But it’s not. It’s a bug being fixed in the next release.
Consider the pace. On the SWE-Bench benchmark, which tests the ability to solve real bugs from GitHub, in 2023 AI solved less than 2% of issues. A year later? Over 50%. That’s not human-scale improvement.
And if you say “every technology follows an S-curve and plateaus” - you’re right. Maybe AI will plateau. But even if it gets stuck at today’s level, with zero improvement from here to eternity - what it already does right now has already changed the work. And I’m not talking about AGI. I’m talking about coding agents that we work with daily.
Or as Karpathy said, “The hottest programming language right now is English.”
4. “Automated tools didn’t eliminate carpenters”
True. Every technological revolution until now automated execution. Steam replaced muscles. Electricity replaced lanterns. Calculators replaced manual computation. In every case - the tool did what you told it, just faster. And humans “fled upward,” from physical work to technical work, from technical work to cognitive work. There was always an “upward” to escape to.
Take accounting for example. There used to be millions of clerks doing calculations by hand. Excel and QuickBooks wiped out most of them. Who survived? High-level accountants - advisors, not calculators. The human fled upward, to the judgment layer. And it worked - because the tool didn’t know how to judge.
But AI is fundamentally different. It automates cognition.
Imagine a tool that analyzes. And also judges. That proposes strategy and checks its own work. So where do you flee now?
Let me tell you a real story.
The first task I ever got as a developer. I’d just transitioned from QA to development, they gave me a shot and I had to prove myself. I sat on the ticket for about two straight days, almost finished - then stopped.
While working on it, I realized this was code that would be thrown away in a week, because most users were migrating to a different version. I went to my team lead and said quietly, “Look, I almost finished, but I don’t think we should ship this.”
I was bummed. I hadn’t merged a single line of code to master. I had nothing to say in tomorrow’s standup. I wanted to be one of the big kids already.
And my team lead said, “Excellent. That’s exactly why I gave you the opportunity. Because you think. Just next time, stop even earlier.”
Our value isn’t measured in lines of code. So maybe we shouldn’t flee to a faster “how.” Maybe we should flee to “whether.”
5. “At the end of the day, someone needs to be accountable”
Fair point. Someone needs to take responsibility when a system crashes. But why does that someone also need to be the one who writes the code? A CTO takes responsibility without writing a line. A pilot takes responsibility without building the engine.
And anyway - what does accountability actually look like in practice? In the average company it’s not “the developer who wrote it.” It’s an entire chain. The user is angry, chat support responds, a ticket is opened, there’s triage, calming down, gathering context - and only then does it reach the team that fixes and deploys.
And ironically, that’s also the first layer already being cut because of AI. Customer success and support are vanishing wholesale - nearly every site already has a chatbot up front. Klarna said their AI assistant handled 2.3 million conversations in its first month - two-thirds of all chats, equivalent to the work of 700 agents. And gradually it gets more permissions: first it just answered questions, then it opened tickets, then it issued refunds, then it fixed the bug itself.
The human layer isn’t disappearing - it’s moving further from the action itself. And with every step back, it gets thinner.
6. “AI doesn’t understand the business, the customer, the context”
Really? Let’s check.
Already today AI sits in your Zoom meetings, summarizes them, pulls out action items, and cross-references with previous meetings. It analyzes sales calls and can tell you exactly what the customer said three months ago that contradicts what they’re saying now. It does market research while you’re still typing your Google search. Every virtual interaction point - it’s already there. And this is just the beginning.
The next step is AI entering the places where business context actually lives.
It’ll sit on your Mixpanel and see in real-time what users actually do - not what they say, what they do. Where they get stuck, where they drop off, which flows they invent for themselves that you never even planned.
It’ll sit on your Jira and see the history of every product decision - why we ended that feature, what’s the pattern of recurring bugs.
It’ll sit on your Confluence and cross-reference what the tech spec demands with what the code actually does.
It’ll be your on-call. Plugged into GCP, Grafana, Sentry, Microsoft Clarity, Datadog - watching logs, traces, session replays, error spikes, all at once. A production incident at 3am? It’ll correlate the alert with the latest deploy, pull the relevant logs, trace the request path, check if the same user segment is affected, and hand you a root cause analysis before you finish rubbing your eyes. Not “here’s a dashboard.” Here’s what broke, why, and here’s the fix.
And the moment AI is plugged into user intent, product data, and the organization’s institutional knowledge - it won’t just “understand the business.” It’ll understand it better than any single person at the company, because no human can hold all those layers in their head simultaneously.
And it won’t stop at digital. Soon it’ll be on your collar, in physical meetings, hearing and remembering every word. The one who summarizes, remembers, and cross-references - that won’t be us. We’ll just direct the action.
So saying AI doesn’t understand the business? How many developers actually understand it? Most developers I know get a ticket and execute it. They don’t sit with the customer. They don’t see the strategic data. They don’t know why this feature is prioritized over that one. They execute. And execution - AI already does.
Lastly, consider this. Back in 2022, a Chinese gaming company called NetDragon appointed an AI named Tang Yu as CEO of their subsidiary. Not symbolically for PR - like, for real. She processed hundreds of thousands of approvals, tracked employee performance, managed projects. The stock rose 10% after the appointment. And in 2024 she received the award for “Best Virtual Employee in China.” True, she’s not sitting on strategy yet - she’s managing operations. But the line keeps blurring.
7. “Give AI a real system and watch it fall apart”
A claim that comes up a lot from senior developers, and from a real place. Systems built over years, code that nobody dares delete because someone wrote “DO NOT TOUCH!!!” above it, and architecture that someone designed five years ago and nobody’s touched since.
I would have agreed with this. A year ago. AI was dangerous in environments like these. It would fix one line and break three. It didn’t understand dependencies, didn’t know about side effects, and wrote code that looked correct but blew up the pipeline.
But here’s what’s changing. Agents like Claude Code, Codex, Devin, Cline, Jules, OpenCode, all of them, they already don’t just read a single file - they read entire repositories. Configs, tests, pipelines, schemas. They understand dependencies between modules and suggest refactoring that accounts for all downstream systems.
Place those coding agents in a monorepo and they’re a superpower. You change a database schema and they instantly trace the blast radius - which API endpoints break, which frontend components consume them, which shared types need updating, which tests fail, which CI pipeline configs need adjusting. End to end, backend to frontend to infrastructure, in one pass. They read your Dockerfile next to your Terraform next to your application code, and understand that the refactor they’re proposing won’t just pass tests - it’ll actually build and deploy.
No human holds all those layers in their head. The backend dev doesn’t think about the pipeline. The DevOps engineer doesn’t know the React components. The frontend dev has never opened the Terraform folder. The agents read all of it, every time. They run tests before proposing a change, open PRs with explanations for why they changed each line, and when there’s a regression - they catch and fix it.
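Mechanically, “tracing the blast radius” is just reverse reachability on a dependency graph. Here’s a minimal sketch of that idea - the module names and edges are invented for illustration, and real agents work on far richer graphs (types, configs, pipelines), but the traversal is the same:

```python
from collections import defaultdict, deque

# Invented dependency edges: "a: [b]" means module a depends on module b.
deps = {
    "api/users": ["db/schema"],
    "frontend/UserCard": ["api/users"],
    "frontend/Admin": ["api/users"],
    "ci/deploy": ["frontend/Admin"],
}

def blast_radius(changed: str, deps: dict[str, list[str]]) -> set[str]:
    """Everything that transitively depends on the changed module."""
    # Reverse the edges: for each module, who depends on it.
    rdeps = defaultdict(list)
    for mod, imports in deps.items():
        for imp in imports:
            rdeps[imp].append(mod)
    # Breadth-first search outward from the changed module.
    seen, queue = set(), deque([changed])
    while queue:
        mod = queue.popleft()
        for dependent in rdeps[mod]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(sorted(blast_radius("db/schema", deps)))
# → ['api/users', 'ci/deploy', 'frontend/Admin', 'frontend/UserCard']
```

Change the schema and the traversal surfaces the API, both frontend components, and the deploy config - exactly the cross-layer sweep no single human specialist performs.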
AI fails in real-world systems? Let’s look at a real-world example then:
Spotify.
Not a startup with three people, not a side project, but a massive codebase, and endless services.
In February 2026, their CTO casually said on the earnings call that their best developers hadn’t written a single line of code since December 2025. Zero. They built an internal system called Honc, connected to Claude Code, that lets a developer describe a task from Slack on their phone on the way to work - and the AI writes the code, opens a PR, and waits for review.
That’s how they shipped over 50 features in 2025, with more than 1,500 merged PRs generated by AI. On a complex codebase. In production. With hundreds of millions of users.
And while you’re worried AI will break your complex system - you know what actually breaks complex systems? Your senior dev putting in their two weeks’ notice.
A dev team is a hive mind - the collective knowledge of everyone who’s ever touched the code. The person who remembers why that service is built that way. The person who knows that if you move that table, everything blows up. The problem? The hive erodes constantly. Every developer who leaves takes a piece of collective memory that can’t be recovered. Every new developer who joins needs six months before they’re effective - and during that time they consume more from the team than they contribute. 15–20% annual turnover? At 20% a year, only about half the original team (0.8³ ≈ 51%) is still there after three years - and with the departed went half the hive mind.
AI is the exact opposite. It doesn’t leave. It doesn’t need onboarding. And more importantly - it didn’t just learn your code. It learned everyone’s code. Every architecture, every pattern, every mistake anyone made on a similar project. A senior developer with twenty years of experience has maybe seen fifteen large systems in their career. AI has seen millions.
And besides - this argument assumes complex systems will keep looking like they do today. But everything we know - TypeScript, Redis, Kubernetes, microservices - these are solutions humans built for humans. Once AI builds for AI, why would it be stuck with our tools? It’ll invent its own frameworks on the fly, tailored exactly to the problem.
Actually, scratch that - if code is generated anyway, why not generate it live, tailored directly to the hardware? Why would there even be source code? One canonical source - why? AI can generate different source code for different users, tailor-made. And why would code even need to be readable? Programming languages are a human invention, so humans can understand what the machine is doing. If there’s no human in the loop, there’s no need for an intermediate language. And if no code persists - there’s no legacy. No technical debt. No architecture everyone’s afraid to touch. Just rebuild from scratch when needed, because it’s cheaper than maintaining.
8. “Jevons Paradox - there will be more demand, not less”
The most justified claim that came up. And this really is what happened with every revolution. Ease of production goes up, and demand goes up even more. I agree completely. There will be more people building software, not fewer.
But that doesn’t mean there will be more demand for the technical work as we know it - the writing-code-by-hand kind.
Think about telephone operators. In the 1920s, it was one of the most common jobs for women in America. Every call went through a human who physically connected cables. “Number, please?” - and they connected you. It was a respectable job, with training, a career path, a pension. Then the rotary dial arrived.
Automation didn’t eliminate phone calls - it eliminated the people who connected them. Demand for calls increased a millionfold. Demand for operators dropped to zero.
At Y Combinator, 25% of startups in the latest batch wrote 95% of their code with models. 10 engineers now do the work of 50 to 100. Demand for software is exploding. Demand for the people who manually type it? In decline.
Entry-level developer job postings dropped 67% since 2022. Computer engineering graduates now face higher unemployment than art history majors. Meanwhile, AI skill requirements in job postings jumped 136% in a single year. Companies aren’t hiring fewer engineers - they’re hiring different ones. The job listing doesn’t say “5 years of React.” It says “show me a project you built with AI.”
Jevons Paradox, programming edition: Task failed successfully.
9. “AI is a tool that helps you, it doesn’t replace you”
Let’s first talk about the word “tool.” Psychologically, it’s a security blanket. A tool is a passive thing - it sits in a toolbox waiting for you to activate it. Calling AI a “tool” gives us a pleasant illusion of hierarchy: we’re the managers giving orders, it’s the worker typing fast. As long as we call it a “tool,” we feel like we’re in control.
But for a manager to actually manage, they need to be able to oversee the output. And when you examine that oversight through cognitive science, you find that human hardware simply can’t handle the load. We’re trying to manage superhuman processing power with a brain that got stuck in the Stone Age.
To understand how disconnected this idea is, look at what’s happening today in dev teams. Software on speed. Code is written and generated at a pace we’ve never seen. A developer finds themselves managing three tickets in parallel, jumping from reviewing 800 lines that were written in a second to debugging a different feature, and every action demands countless micro-decisions. The context-switching, the cognitive load, the volume of code to digest, the speed at which things change - all of it has spiked tenfold.
The human brain can hold a maximum of 3–4 things in working memory before it starts losing context. That’s not an opinion - it’s biological fact. You can’t read a thousand lines of code someone else wrote and keep all the variables and implications in your head.
But while we’re buckling under that kind of environment, AI actually flourishes and keeps leveling up - it’s doubling context size every quarter, scanning entire repositories in seconds, breaking through ceilings. To come with our 200,000-year-old hardware brain and think we’re the ones who’ll “manage” this rate of production? That’s like stepping into the ring with Mike Tyson after taking a boxing class.
The more reliable an automated system becomes, the more humans stop checking it. Pilots stop monitoring instruments that are always right. Doctors stop questioning diagnoses that never err. And developers? They approve PRs they didn’t really read. Because “it’s Claude, it’s usually right.” The “tool” is already managing you. You just don’t notice, because the feeling of control is still there.
And that shouldn’t surprise anyone. In 1983, the researcher Lisanne Bainbridge described a phenomenon she called the “irony of automation”: the more automated a system becomes, the less capable its human overseer is of actually overseeing it. Why? Because they’ve fallen out of the loop. They don’t practice, don’t build intuition, don’t develop the muscle. And then when the system fails and they need to intervene - they’re helpless. This is documented in aviation, nuclear reactors, medicine. And it’s exactly what’s happening now in software development. The more AI writes your code, the more you lose the ability to catch mistakes, understand what was written, and intervene when something breaks. Your “oversight” becomes theater. Tell me more about the “deep review” you did on that PR with 80 changed files.
So you think AI will need you for reviews? You’re the weakest link in the chain. In 1948, researchers studying World War II radar operators found that after 20–30 minutes of monitoring, the ability to detect anomalies drops dramatically. Not because of laziness - because of biology. The brain simply wasn’t built to sit and check output for hours on end. No serious researcher has disputed this since. So when someone says “the future is a developer doing code review for AI all day” - they’re describing a job the human brain is physiologically incapable of doing well. Eight hours of “check what the AI did”? That’s not a future. That’s a punishment.
The more you lean on a tool that does the work for you, the more your ability to do it yourself erodes. These phenomena have been documented since the Industrial Revolution. It’s not by choice - it’s atrophy. Decay. A developer who’s worked with AI for a year already feels it: opens a blank file and finds it hard to start. Forgets syntax they knew in their sleep. Feels “helpless” when they run out of tokens. And it’s completely rational. Why would you write it yourself if AI does it better? But rational or not, the result is the same: you stop practicing, the muscle disappears, and one day you discover you can’t function without it even if you wanted to. So you’re not really “using a tool.” You’re dependent on it.
So no. AI is not “a tool that helps you.” A tool doesn’t improve every week while you stay the same. A tool doesn’t learn from every mistake anyone has ever made. A tool doesn’t approach the point where it doesn’t need you to tell it what to do. The future isn’t going to look like “a human managing a machine.” It’s going to flip. The machine will manage the process, and if it needs human input for some reason - it’ll ask. And as long as it doesn’t need you - it won’t bother you. A human on call.
10. “There will always be things only humans can do”
To imagine what our future as programmers will be, you first need to imagine what AI’s future in the job market will be, and subtract one from the other. What’s left? That’s ours.
So take a few seconds with me. Imagine it... Go on... Done?... What’s your remainder?... Zero, right? Nothing’s left. Not writing code. Not review. Not architecture. Not debugging. Nothing.
Because if code and artificial cognition become cheap, fast, and abundant - what exactly is our value? Being the ones who press Enter? So in the long run, I have nothing to offer you. But we’re not in the long run yet. We’re in the transition phase - and in the transition phase, there’s a window.
There’s a saying floating around the tech world: “AI raises the floor, humans raise the ceiling.” And it explains this window exactly. Every time technology makes something easy, what was impressive yesterday becomes basic today. The floor rises. But humans don’t settle - they start demanding the next thing. The ceiling rises with it. And as long as there’s a gap between the floor and the ceiling - there’s room for humans.
Take Lovable as an example. A year ago, a site built there was impressive - “You built that in ten minutes? Wow.” Today, when you visit a site and recognize the Lovable look - the same gradients, the same animations, the same whiff of purple - the impression doesn’t just vanish, it turns negative. “Oh, that’s Lovable.”
That’s the floor.
And AI raises it faster than any tool before it.
Every previous revolution gave us time - years, sometimes decades - to adapt to the new floor, figure out where the ceiling was, and find our place in the gap. AI doesn’t give us that time. The floor rises every week. You barely manage to understand what the new baseline is - and it’s already risen again. The gap between the floor and the ceiling, the window where there’s room for humans - is shrinking.
On one hand, it’s a paradise of opportunity. The barrier to entry has never been lower. Anyone can build a product. But on the other hand? It’s hell. Because the same low barrier that lets you in - also lets the giants wipe you out with a single product announcement.
In February 2026, Anthropic released Claude Co-Work with industry plugins. Legal, financial, sales. Within three days, 285 billion dollars was wiped from the market cap of software companies. Thomson Reuters dropped 16%. LegalZoom dropped 20%. They called it the SaaSocalypse. One product announcement. Three days. 285 billion dollars.
A few weeks later, Anthropic released Claude Code Security - a tool that scans code and finds security vulnerabilities at the level of a security researcher. JFrog crashed 25% in a single day. CrowdStrike and Okta collapsed 10%. Cloudflare dropped 7%. One event wiped SaaS, a second event wiped cyber. And no sector is immune.
To close with a moment of self-criticism: Maybe I’m wrong. Maybe the models will plateau. Maybe there’s a physical energy ceiling we can’t see yet. It’s easy to be a prophet of doom. Maybe the bubble will simply burst and this whole project gets shelved.
And still, for the life of me, I can’t shake the feeling that the value of my cognition is heading toward zero, just like the cost of electricity, storage, and bandwidth before it. And when something goes to zero, everyone selling that thing earns less. Not because they’re worse at what they do, but because something else does it more cost-effectively.
So I look at myself and ask: what’s my floor? What are the things I do today that tomorrow will be free? And what’s my ceiling? Where are the cracks in the ceiling that I can patch? What are the weird combinations only I know how to connect - the creativity that comes from playing like a kid with things nobody asked for?
So for now, I keep walking with my hands in my pockets, thinking, looking up like an idiot. Searching for cracks in the ceiling. Before it falls again.
By Adir Duchan, AI Engineer at Elementor