The timing problem no one is talking about — and what it means for how we work with AI
If you wrote software in the 1990s or early 2000s, you remember the build wait. Hit compile on a large C++ project and your machine was gone. CPU pegged, fan screaming, everything else sluggish or frozen. You’d get up, get coffee, check email, maybe start something else, but never fully, because part of your brain was still on the code.
Some developers bought a second computer just to keep working while the first one was tied up. Smart workaround. But it never fully solved the problem. The real issue wasn’t the hardware. It was the break in flow. Returning to where you were mentally was always harder than returning to your desk.
I’m watching the same problem resurface — wearing different clothes.
The Dead Zone
Most AI tasks I run are under a minute. Worth the wait. But then a research task starts, or a document generation, and suddenly I’m ten minutes in with no clear signal of when it’ll finish. When I’m working with Claude Code, most coding tasks take a couple of minutes. Database tasks can take 10, 20, even 60 minutes. Same tool, wildly different timing, no heads-up either way.
Two to ten minutes is too long to just sit and wait. So I switch to something else. But it’s also too short to justify a real context switch, the kind where you fully commit to another piece of work. What happens in practice? I open something else, half-engage, check back too early, find the AI still working, half-engage again, and eventually return 20 to 30 minutes after the task finished.
The AI’s timing falls into a dead zone: too slow to watch, too fast to walk away from properly. You lose focus going out and lose more coming back.
Here’s the thing that nags at me as a PM: the AI already knows. It just built the task list. It has some sense of whether this is a 30-second job or a 30-minute one. A simple heads-up — “this looks like a longer one, check back in about 10 minutes” — would change the whole experience. More on that fix below.
The Research Backs This Up
Gloria Mark at UC Irvine found that it takes roughly 23 minutes to fully recover focus after an interruption. Not 23 minutes of staring at the wall. It’s 23 minutes of being half-useful before your brain actually re-engages. The 2–10 minute AI window forces an interruption without giving you enough time to finish anything real. You end up interrupted but not recovered.
Here’s what caught my attention. A 2026 study by BCG and UC Riverside coined the term “AI brain fry” after surveying nearly 1,500 workers and finding that 14% experienced cognitive fatigue from overseeing AI tools. The part that got me: the most draining activity wasn’t using AI directly. It was monitoring AI outputs: the checking, the waiting, the evaluating.
The dead zone is where that drain lives. Every half-check, every glance back at a still-running task, every partial context switch. It’s all monitoring. And monitoring, it turns out, is more exhausting than doing the work yourself.
As a product manager, I’m focused on the design gap: the timing problem that AI tools aren’t solving yet. But the cognitive risks go deeper than timing. Jason Weeby’s recent LinkedIn article, “AI will make me dumber if I let it. So I made a ‘helmet,’” is a good companion read if you want to explore the full range, from skill atrophy to automation bias to what he calls agency decay. His central point resonates: just knowing about these risks isn’t enough. You need structure. And the dead zone? That’s a place where nobody’s built any structure yet.
Fast, Reliable, Invisible?
I’ve spent 25 years working on software products, and there’s a lens I keep returning to: great software should be fast, reliable, and invisible. Not invisible in the sense of hidden — invisible in the sense that the user stays the protagonist. The software shouldn’t ask for your attention. It shouldn’t make you wait and wonder. It’s not a universal rule, but it’s shaped how I evaluate the tools I use — including the ones I like.
When I put on my product manager hat and look at current AI agents through that lens, they struggle on all three. They’re not fast enough to feel instantaneous. Their outputs are good but not consistently reliable. And they’re decidedly not invisible — they make you wait, they make you check, and they never tell you when to come back.
Speed alone won’t fix this. Some tasks just take time: research, document creation, anything that requires the AI to actually think across multiple steps. The gap isn’t going away. The question is what we do with it.
A Simple Fix We’re Not Using
Here’s what I think would help: time commitment.
Before starting a long task, the AI should estimate completion time with a confidence interval, something like: “This will take approximately 8 minutes. I’m 90% confident it will be done by 3:47 PM. Come back then.”
That changes everything. Instead of anxious checking, you have a commitment. You can walk away, work on something real, and return at a defined time. Even if the AI finishes early, you don’t need to know. The gap is now a scheduled break, not a dead zone.
A surgeon tells you how long the procedure will take. A contractor gives you a completion date. These aren’t guarantees — they’re commitments that let you plan. We just forgot to build that into AI.
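What would that commitment look like in practice? Here’s a minimal sketch of the idea in Python. Everything here is hypothetical: `commit_to_eta` and its buffer multiplier are stand-ins for whatever calibration a real tool would do, not an actual API.

```python
from datetime import datetime, timedelta

def commit_to_eta(estimate_minutes: float, confidence: float = 0.9,
                  buffer: float = 1.25) -> datetime:
    """Turn a raw duration estimate into a user-facing time commitment.

    The buffer widens the raw estimate so the stated deadline holds at
    the given confidence level. The values here are illustrative, not
    calibrated against real task data.
    """
    eta = datetime.now() + timedelta(minutes=estimate_minutes * buffer)
    print(f"This will take approximately {estimate_minutes:.0f} minutes. "
          f"I'm {confidence:.0%} confident it will be done by "
          f"{eta:%I:%M %p}. Come back then.")
    return eta

# A hypothetical 8-minute research task:
deadline = commit_to_eta(8)
```

The design choice that matters is the buffer: a commitment you occasionally beat is useful, a commitment you routinely miss just reintroduces the anxious checking.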
The Bigger Pattern
Most people, when they think about AI productivity, imagine AI making them faster. More output, less time. And that’s largely true. The quality and volume of work are genuinely better.
What nobody tells you is how much you have to change the way you work to use these tools well. Right now most of us are figuring that out the hard way, by burning through focus in the dead zone between “I submitted the task” and “I’m back and ready to use the output.”
I think about how developers eventually adapted to long build times. The ones who figured it out made a decision before they hit compile: they already knew what they were doing next. The wait became a natural break between two defined pieces of work. The ones who didn’t figure it out just wandered, checked Slack, poked at something unrelated, drifted back when the build finished and spent ten minutes remembering where they were.
There’s a concept from cognitive science that helped me think about why this matters beyond just time management. Robert and Elizabeth Bjork have studied what they call “desirable difficulty” since the 1990s: the finding that challenges which slow you down in the short term — working through errors, struggling with material, grappling with hard problems — actually strengthen learning and retention in the long term. AI systematically removes this effort. That’s what makes it useful. It’s also what makes it risky.
The developers who pre-planned their next task before hitting compile were, in a sense, creating their own desirable difficulty. They were using the forced pause to do the harder cognitive work — deciding what mattered next, structuring their approach, thinking ahead — rather than passively waiting for the machine to finish. The wait became productive struggle, not dead time.
We’re in the early period with AI agents. The tools aren’t going to slow down or start explaining themselves better anytime soon. We’re the ones who have to change. The question is whether we do that deliberately or just keep drifting through the dead zone without ever calling it what it is.
What I’m Trying
I know that more experienced AI users have workarounds for some of this: scheduling tasks, batching work, using skills that re-evaluate prompts before replying. These certainly help. But the fact that we need workarounds at all is the product design gap I keep coming back to. The timing problem is real, and the tools themselves should be solving it.
In the meantime, here’s what I’m experimenting with:
- Ask for a time estimate before every long task. Even a rough one changes my behavior. Instead of anxious monitoring, I have a window. I can commit to something else and come back.
- Ask the AI to leave a “where we left off” summary when it finishes. Re-entry is the hidden cost of the dead zone. A short summary of what was done and what’s next cuts the 23-minute refocus penalty significantly.
- Batch AI tasks. Kick off several at once, then go do real focused work while they run. This turns multiple dead zones into one productive block.
- Use scheduled tasks for recurring work. Let the AI run in the background without requiring your attention at all. If you don’t need to monitor it, you can’t get brain fry from it.
- Pre-plan what you’ll do during the wait. This is the desirable difficulty move. Before you hand the task to the AI, decide what you’re doing next. Make the gap intentional. The developers who survived long build times figured this out. We need to figure it out too.
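The batching idea above can be as simple as launching several independent AI calls at once and collecting them together, so there is one return point instead of several scattered dead zones. A rough sketch, where `run_ai_task` is a placeholder for whatever client you actually use:

```python
from concurrent.futures import ThreadPoolExecutor

def run_ai_task(prompt: str) -> str:
    # Placeholder for a real AI call; swap in your client of choice.
    return f"result for: {prompt}"

prompts = [
    "summarize the research notes",
    "draft the migration plan",
    "generate release notes",
]

# Kick everything off at once, then go do focused work elsewhere.
# One collection point replaces three separate dead zones.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_ai_task, prompts))

for prompt, result in zip(prompts, results):
    print(f"{prompt} -> {result}")
```

The point isn’t the threading machinery; it’s that you check back once, at a time of your choosing, rather than once per task.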
I keep coming back to this idea: good software should adapt to humans, not train humans to adapt to it. The workarounds help. But every workaround is a small signal that the product hasn’t quite done its job.
None of this is a knock on AI. I like these tools — a lot. The experience of using them all day is just still being worked out, by the companies building them, and by the rest of us trying to fit them into real work. I’m figuring this out as I go, and I suspect most people are too. But I’m starting to think the people getting the most out of AI aren’t just the ones with access to the best models. They’re the ones who’ve thought about what to do with themselves during the eight minutes while those models are running.
Brad Hinkel is a product leader with 25+ years across Microsoft, Amazon, Disney, and Google, currently focused on AI product management and human-AI workflow design.
AI is evolving fast. I think our workflows need to evolve with it, in ways that protect human focus rather than erode it.
What’s your experience with AI wait times? Have you developed workarounds? I’d like to hear what’s working.
References
Gloria Mark — Research on interruptions and refocus time. See Attention Span: A Groundbreaking Way to Restore Balance, Happiness and Productivity (2023) and “The Cost of Interrupted Work: More Speed and Stress” (CHI 2008).
BCG & UC Riverside — “When Using AI Leads to ‘Brain Fry’” (March 2026). Published in Harvard Business Review and BCG.com.
Robert & Elizabeth Bjork — “Desirable Difficulties in Theory and Practice” (2020). Bjork Learning and Forgetting Lab, UCLA.
Jason Weeby — “AI will make me dumber if I let it. So I made a ‘helmet.’” LinkedIn (March 2026).