JPMorgan Chase just told 65,000 engineers they must use AI coding tools by end of March 2026, with dashboards tracking whether you are a "light," "heavy," or "non" user. One anonymous developer described the mood as: "Those who don't use AI risk being seen as underperforming." That is a mandate for augmentation, not a memo announcing layoffs. The distinction matters more than most coverage admits.
I have been running Claude Code, GitHub Copilot, and Cursor on real projects for months. They are genuinely fast at the stuff that used to eat my afternoons: scaffolding a new Express route, writing the fifteenth variation of a SQL migration, generating test fixtures. Copilot autocompletes things I would have typed anyway. That is real productivity. I am not going to pretend otherwise.
But here is where the replacement argument falls apart in practice, not in theory. Last month I watched an AI agent confidently generate a database schema that technically compiled and completely ignored a business rule buried in a Confluence doc from 2022. The code was syntactically correct. It would have caused a billing discrepancy that nobody would have caught until a customer called. The AI had no idea the rule existed. It had no way to know. Asking it to "understand the business context" is like asking Webpack to understand your product roadmap.
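A minimal, hypothetical sketch of that failure mode (the table, column names, and the rule itself are invented for illustration, not the schema from that incident): the AI-shaped version compiles and accepts data fine, but silently permits rows that violate a business rule it never saw. The human fix is to push the rule into the schema so the database, not tribal knowledge, enforces it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Version an AI might generate: syntactically valid, happily accepts
# a credit larger than the invoice total -- the rule "credits never
# exceed the invoice" lived only in a doc the model never saw.
conn.execute("""
    CREATE TABLE invoices (
        id INTEGER PRIMARY KEY,
        total_cents INTEGER NOT NULL,
        credit_cents INTEGER NOT NULL DEFAULT 0
    )
""")
conn.execute(
    "INSERT INTO invoices (total_cents, credit_cents) VALUES (1000, 5000)"
)  # accepted without complaint -- this is the billing discrepancy

# Human fix: encode the rule as a constraint the database enforces.
conn.execute("""
    CREATE TABLE invoices_v2 (
        id INTEGER PRIMARY KEY,
        total_cents INTEGER NOT NULL,
        credit_cents INTEGER NOT NULL DEFAULT 0,
        CHECK (credit_cents <= total_cents)
    )
""")
try:
    conn.execute(
        "INSERT INTO invoices_v2 (total_cents, credit_cents) VALUES (1000, 5000)"
    )
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the constrained table refuses the bad row
```

The point is not that AI cannot write a CHECK constraint; it is that it will not know to, because the rule exists outside the code it can see.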
The Gap Between Generating and Deciding

Netlify grew from 6 million to 11 million developers in under a year, and Ivan Zarea at QCon London credited AI agents writing code for a lot of that growth. That number is real. What it actually measures is the floor dropping on who can ship a working prototype. Domain experts, product managers, and analysts can now build internal tools that used to require a sprint and a ticket. Good. That is genuinely useful.
What it does not measure is who owns the system when it breaks at 2am, who decides whether to use Postgres or DynamoDB given the read/write pattern, or who pushes back when a PM wants to store PII in a field that feeds a third-party analytics pipeline. Those decisions require context, accountability, and the ability to say no. AI tools do not say no. They generate plausible-looking answers to whatever you asked.
The fair point from the replacement camp: senior developers do spend a lot of time on work that AI handles well now, and that time compression will eventually affect headcount. I believe that. Some roles, particularly junior positions focused on pure implementation work, will shrink. Companies will hire fewer people to maintain the same output.
But "fewer" is not "none," and the roles that survive will skew toward the skills AI cannot fake: system design, incident ownership, security review, and the judgment call when two requirements contradict each other and someone has to pick. Those are not tasks you can vibe-code your way through.
What Builders Should Actually Do
If you are writing production software in 2026 and not using at least one AI coding assistant, you are leaving real speed on the table. Set up Copilot or Cursor, learn which prompts get useful output, and stop treating adoption as a political statement. The JPMorgan mandate is heavy-handed, but the underlying logic is sound: these tools accelerate the mechanical parts of the job.
What you should not do is let the tool make architectural decisions, approve its own pull requests, or write security-sensitive code you have not read line by line. The blast radius of a bad AI-generated commit is identical to the blast radius of a bad human-written one. You are still the one paged at 2am. Act like it.
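One mechanical guardrail for the "read it line by line" rule, assuming a GitHub-hosted repo (the paths and team names below are placeholders): a CODEOWNERS file that forces approval from a named human team before anything touching sensitive paths can merge, regardless of who or what opened the pull request.

```
# .github/CODEOWNERS -- an agent can open the PR, but these paths
# cannot merge without an approval from the listed humans.
/src/auth/      @your-org/security-team
/migrations/    @your-org/data-platform
```

This only bites if the branch-protection setting that requires review from code owners is enabled; without it, CODEOWNERS is advisory. But it turns "a human read this" from a norm into a gate.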