The 10x Trap: Why AI Made Me Work More, Not Less

1. Introduction

If you are reading this, you probably already know most of what’s going on with AI and follow the scene quite closely. We all thought AI would give us back our time and make life a bit easier, right? AI would do the boring parts, automate the repetitive work, let us focus on the fun stuff. The list goes on and on.

Well, yeah, not quite. After 2 years of using AI daily, I must admit I’ve actually been working more, not less. Keep in mind not everything is negative: I’ve been doing more projects, more open source, and learning more, so it’s not all bad. But the hours? The hours went up.

How about you? Do you feel like you’re working more with AI? I know it’s hard to keep up with the “San Francisco mindset” so you don’t feel left behind, but is there a better way? Let’s dig in.

ℹ️

Scientific American (March 2026) reported that out-of-hours code commits rose 19.6% among engineers using AI, and engineers scored 17% lower on comprehension quizzes compared to the control group. We are working more AND understanding less. Something is off.

So let’s talk about why this is happening, what I’ve been seeing on my own machine, and where I think we’re heading.

2. The Parallelism Trap 🪤

Well, if you are a developer, you’re probably using some agentic workflow tool like Cursor, Claude Code, Superset, Conductor, Codex, or similar. These tools let you multiplex your workflow across multiple worktrees, multiple projects, and multiple agents all running in parallel. Looks crazy, right? Are we in the future? Yes we are!

Developer juggling too many things at once

But there is a catch (there is always a catch, right?). While all this can be a massive productivity boost if you planned things perfectly beforehand, it also increases the amount of attention and context you need to hold in your valuable brain.

This often leads to a lot of context switching, and your mental bandwidth gets constantly drained. You will probably:

  • Feel like you have done more, when you actually have done less.
  • Stop paying attention to details, flow control, code quality, possible bugs, etc.
  • End up feeling way more tired at the end of the day, because there is a lot to ingest at all times.
  • Make poor architecture decisions without realising it.
ℹ️

The 2025 DORA Report backs this up: developers using AI interact with 67.4% more PR contexts per day, work restarts are up 13.8%, and 26% more in-progress tasks show no activity for 7+ days. Parallelism has a real cost.

Personally, I’ve been doing a maximum of 2 worktrees per project and 2 to 3 agents per worktree. The only exception where I scale up is repetitive work, like implementing a test suite for a bunch of modules at the same time.
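As a concrete sketch of that cap: git worktrees give each agent its own isolated checkout, so two agents never fight over the same index. The repo and branch names below are hypothetical, and the throwaway repo exists only so the snippet runs on its own; in practice you would run the `git worktree add` lines inside your real project.

```shell
set -e
# Hypothetical throwaway repo just for the demo; in a real project
# you would run the worktree commands inside your existing checkout.
repo="$(mktemp -d)/demo"
git init -q "$repo"
cd "$repo"
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"

# The cap: at most two extra worktrees, one isolated checkout per agent.
git branch agent-a
git worktree add ../demo-agent-a agent-a
git branch agent-b
git worktree add ../demo-agent-b agent-b

git worktree list  # main checkout plus the two agent worktrees
```

When a task is done, `git worktree remove ../demo-agent-a` cleans up the checkout, which keeps the number of live contexts (and the mental bandwidth they eat) bounded.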

BUT there is still a catch. Let’s check the next section!

3. The Review Bottleneck 🔍

So, given all the above, you’re producing code like crazy, and a lot of companies have been sold this dream of “10x productivity”. Well, not quite the 10x we were thinking of.

Because we produce more code now (and the GitHub Octoverse 2025 report shows this clearly: nearly 1 billion commits in 2025, up 25.1% year over year, with ~100 million commits in August alone), developers (mid-level and senior) are also now reviewing way more code. So not only are we writing more code, we are also reviewing more code. Yeah, MORE work for us!!! 😅

And if you want the scariest datapoint, there’s a public tracker showing that Claude Code alone is responsible for around 9.7% of all public GitHub commits as of mid-March 2026, up from 4% in February. AI agent PRs jumped from roughly 4M in September 2025 to 17M in March 2026. Someone still has to read all of that.

More code more problems

ℹ️

According to Faros AI research, developers are completing 21% more tasks and merging 98% more PRs, but PR review time is up 91% and PRs are 154% larger. The 2025 Stack Overflow Developer Survey also found that 45% of devs say debugging AI-generated code is the most time-consuming part of their day, and 66% are frustrated by AI output that is “almost right, but not quite”.

Companies are not helping here either: they are doing massive layoffs, thinking one developer can now do the work of three, when in fact, in my opinion, they are just putting more work on the shoulders of the remaining developers.

Eventually those developers will either:

  • Get burned out and leave the company.
  • Start letting sloppy code pass, because they are under pressure to deliver and can’t properly review all of it.

And this is simply because we don’t fully trust AI code yet. Let’s get into that in the next section!

4. The Trust Gap 🤖

We are getting to the point where models are becoming more and more capable of doing the work of a developer (or so we think). As time goes by, I believe that in about 2 years models will be able to produce genuinely good code that we can trust a bit more.

But let’s not forget that AI is still JUST a next-token predictor. There are a lot of things that aren’t there yet: models still have “tunnel vision” about your project, they create unnecessary helpers, and a bunch of other things we call slop.

AI slop generator

Developers currently use codebase indexers, semantic search, and greps to power up the AI workflow, but it’s still not enough. If you’re solo vibing with AI for coding, you already know you’re playing a gambling game every time you spin up a new session. There’s a good chance you get something NICE, and an equally good chance you get something BAD.
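To make the grep side of that concrete, here is a minimal sketch of the idea: surface just the relevant definition (plus a line of context) for the agent, instead of pasting whole files into the prompt. The file and function names below are made up for the demo, and a throwaway directory is created so the snippet runs on its own.

```shell
set -e
# Made-up mini project: the file and function names are hypothetical.
dir="$(mktemp -d)"
cd "$dir"
mkdir src
cat > src/orders.py <<'EOF'
def process_order(order):
    return order["total"] * (1 - order.get("discount", 0))
EOF

# Surface just the matching definition (with one trailing line of
# context) rather than the entire file; -r recurses, -n adds line
# numbers the agent can reference.
grep -rn -A 1 "def process_order" src/
```

Tools like ripgrep do the same job faster on large repos, but the principle is identical: narrow, line-addressed context beats dumping whole files.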

That’s why we humans still need to keep a close eye on the output, not just blindly trust it. The ones who skip that step are the so-called vibe coders, the same ones who will cry 3 days later that their Supabase credentials got leaked, or can’t figure out why the website is not working, burning a zillion hours and tokens trying to understand what went wrong.

And yes, you can use Replit, CodeRabbit, or pass your code to 10 different AI models, and all of them will still let slop code pass. As of today, at least.

According to the 2025 Stack Overflow Developer Survey (49,000+ devs across 177 countries), 84% of developers use or plan to use AI tools, but only 29% trust the output, down from 40% the year before. Even worse, just 3% say they “highly trust” it, and the more experienced the developer, the lower the trust (senior devs report the highest “highly distrust” rate at 20%). So yeah, we might not be writing as much code ourselves, but we still need to understand code, and how to validate it. That’s on us.

5. Wrap Up 🧘

AI is still one of the best things happening right now, no doubt. But let’s not forget (myself included) that there is still life outside the terminal. Don’t burn yourself out; it’s okay to take a break and do a bit less.

Let’s also try to spread the word about this. It’s all new, not only for us developers but also for companies and managers who think AI will magically solve every problem and that they can “tokenmaxx” their way to success. The tool doesn’t fix broken systems, it amplifies them. If your team ships fast and reviews slow, AI won’t save you, it will just expose the cracks.

We developers are here to stay for a few more years at least! Stay positive, keep learning, keep building, and keep sharing your knowledge with the community. Thank you! 🙏

Frequently Asked Questions

Are developers actually working more hours since adopting AI coding tools?
Recent data suggests yes. Scientific American reported in March 2026 that out-of-hours code commits rose 19.6% among engineers using AI, and 96% of frequent AI users now work evenings or weekends a few times a month or more. The 2025 DORA report also shows developers interacting with 67.4% more pull request contexts per day, which correlates with higher cognitive load even when individual tasks feel faster.
What is the AI code review bottleneck?
AI lets developers generate and merge pull requests much faster, but human review time has not scaled at the same rate. Faros AI research shows PR volume up 98% and PR size up 154%, while review time per PR is up 91%. According to the 2025 Stack Overflow Developer Survey, 45% of developers say debugging AI-generated code is now the most time-consuming part of their day.
How much do developers currently trust AI-generated code?
According to the 2025 Stack Overflow Developer Survey of 49,000+ developers, 84% use or plan to use AI tools but only 29% trust the output, down from 40% in 2024. Just 3% say they highly trust it, and 46% actively distrust AI output. Senior developers are the most skeptical group, with the highest highly-distrust rate at around 20%.
How does the DORA 2025 report describe AI's impact on teams?
The 2025 DORA report found that AI adoption among developers rose to 90%, and over 80% say AI improved their productivity. However, organizational delivery metrics remained flat or slightly dropped, suggesting AI amplifies existing team practices rather than fixing broken ones on its own.
What are practical ways to avoid burnout when working with AI agents?
Common approaches include limiting the number of parallel worktrees and agents per session, reserving parallel execution for repetitive work like test generation, treating AI output as a draft that requires review, and protecting dedicated deep-work time away from notifications and agent dashboards.
Like this post? Sharing it means a lot to me! ❤️