AI Automation · 2026-04-04 · 7 min read

The 35-Minute Rule — How to Decide What to Automate With AI in 2026

There is a number that most productivity frameworks ignore. Thirty-five minutes.

Toby Ord, Oxford philosopher and author of The Precipice, has a framework he calls the boredom threshold — the point at which a human working on a repetitive or shallow task loses concentration and starts making errors. His observation is that most cognitive work, when performed continuously beyond thirty-five minutes, degrades in quality. The errors are not dramatic. They are quiet — a spreadsheet formula that is slightly wrong, an email that is slightly off in tone, a data entry that is slightly misaligned. The work gets done. The quality is below what it should be.

AIMultiple's research translates this into a practical decision rule: any task that takes more than thirty-five minutes and follows a repeatable pattern should be evaluated for AI automation or delegation. The thirty-five-minute threshold is not a productivity hack. It is a cognitive limit. When you ask a human to push past it on work an AI agent could do, you are paying a human wage for degraded attention.

This changes the automation decision from "what can we automate?" to "what should we stop asking humans to do at all?" And that question, answered honestly, is the productivity leverage that most organizations are not capturing.


The Error Propagation Problem — Why the Thirty-Five-Minute Rule Matters

The Galileo research on error propagation cascades describes why the thirty-five-minute threshold is not just about efficiency — it is about error quality.

When a human makes an error on a task that has been running for forty-five minutes, the error does not stay at minute forty-five. It propagates forward into every subsequent step. A wrong cell reference in a spreadsheet model at minute thirty contaminates the analysis at minute fifty. An incorrectly coded entry at the start of a data migration contaminates the database at the end. The human is fatigued and making small errors, and each one compounds in the system they are building.

AI agents do not have a boredom threshold. They do not degrade after thirty-five minutes. They do not make more errors on the hundredth iteration than on the first. When a task is below the thirty-five-minute threshold and follows a repeatable pattern, the quality argument for AI over human execution is not marginal — it is structural.

The error propagation cascade is most damaging in tasks where the output feeds directly into another system. A CRM update that is wrong feeds bad data into the sales pipeline. A financial model that has a wrong assumption feeds bad data into the budget. A customer email that is off-tone creates a relationship problem that requires management time to resolve. The cost of the error is not the time to fix it. It is the downstream contamination.

The productivity leverage is therefore not in automating the task. It is in preventing the error cascade before it starts.
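The cascade is easy to see in a toy model. The sketch below compounds a single mistyped growth rate forward through a revenue projection; the function name and all figures are invented for illustration, not taken from any real model:

```python
# Illustrative sketch: one early error contaminates every downstream step.
# The numbers are invented for demonstration; the point is the cascade.

def project_revenue(base: float, monthly_growth: float, months: int) -> list[float]:
    """Compound a base figure forward month by month."""
    figures = []
    value = base
    for _ in range(months):
        value *= 1 + monthly_growth
        figures.append(round(value, 2))
    return figures

correct = project_revenue(100_000, 0.05, 12)   # intended growth rate: 5%
mistyped = project_revenue(100_000, 0.06, 12)  # fatigue error at minute 45: 6%

# The error is small at the start of the model and large at the end.
first_month_gap = mistyped[0] - correct[0]
last_month_gap = mistyped[-1] - correct[-1]
print(first_month_gap, last_month_gap)  # the gap grows every month
```

A one-point slip in the rate barely moves the first month's figure, but by month twelve the projection is off by tens of thousands. Nobody fixes "the error"; they inherit a contaminated model.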


AI-Native Task Decomposition — Breaking Work Below the Threshold

The practical application of the thirty-five-minute rule requires a discipline that most organizations have not developed: AI-native task decomposition.

Traditional task decomposition — from project management methodologies — breaks work into logical units for human execution. The unit of work is sized for human attention spans, human scheduling, human fatigue patterns.

AI-native task decomposition breaks work into units sized for AI agents. The question is not "how should a human execute this?" but "how should an AI agent execute this?" The answer requires thinking about three things simultaneously: what the AI agent can do reliably, what the human needs to review before it propagates further, and what the cost of an error at each step would be.

The decomposition framework has three questions that should be asked for every task before it is assigned to a human or an AI:

Is this task below the thirty-five-minute threshold? If yes, it is a candidate for AI execution. If the full job takes a human more than thirty-five minutes, decompose it first: the human's quality will degrade past the threshold, and the AI agent's will not. A thirty-five-minute task for a human might take an AI agent thirty seconds. That is a feature, not a constraint.

Is the error propagation risk acceptable? If the task output feeds into a downstream system — a CRM, a financial model, a database — the cost of an error is not the time to fix the task. It is the downstream contamination. High propagation risk tasks require human review gates. Low propagation risk tasks can run autonomously.

Can the output be verified before costly action? An AI agent that generates a report should have its conclusions spot-checked before the report goes to a client. An AI agent that schedules a meeting can execute without review. An AI agent that sends an email to a customer should probably have a human read it before it goes. The verification requirement is a function of the cost of an incorrect output.


The Three-Question Scoping Framework

Question one: Is this task repeatable and below thirty-five minutes for a human? If the answer is no, because it is a two-hour block that cannot be decomposed into smaller units, or because it is genuinely novel every time, it is not an AI automation candidate. It is a human task. Assign it to the human and do not try to automate it. The thirty-five-minute rule does not say "automate everything." It says "automate the things that humans are bad at because of how human attention works."

Question two: What is the cost of an error at each step? The thirty-five-minute rule is not a reason to remove human judgment from complex tasks. It is a reason to be honest about what human judgment adds and what it costs. If the cost of an error is low, the AI can run autonomously. If the cost of an error is high, the human must review before the output propagates.

Question three: Does the output require human judgment to be valuable? Some outputs are valuable as pure data — a list of qualified leads, a scheduled meeting, a populated CRM record. Some outputs require a human to read the context, apply judgment, and decide whether to act: a sensitive response to a customer complaint, a strategic recommendation. The AI can draft these. The human must decide.
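The three questions can be sketched as a triage function. Everything here, the `Task` fields, the threshold constant, and the routing labels, is a hypothetical illustration of the framework, not a published API:

```python
# Hypothetical sketch of the three-question triage as code.
from dataclasses import dataclass

THRESHOLD_MINUTES = 35

@dataclass
class Task:
    minutes_for_human: int  # honest estimate of continuous human effort
    repeatable: bool        # does it follow a pattern seen before?
    error_cost: str         # "low" or "high": downstream contamination risk
    needs_judgment: bool    # does acting on the output require human context?

def triage(task: Task) -> str:
    # Q1: repeatable? Novel work stays with a person.
    if not task.repeatable:
        return "human"
    # Q1 continued: above the threshold, break it into sub-threshold units first.
    if task.minutes_for_human > THRESHOLD_MINUTES:
        return "decompose"
    # Q3: output only valuable with human judgment attached?
    if task.needs_judgment:
        return "ai-draft, human-decides"
    # Q2: high error cost means a review gate before the output propagates.
    if task.error_cost == "high":
        return "ai-execute, human-review"
    return "ai-autonomous"

print(triage(Task(5, True, "low", False)))     # e.g. scheduling a meeting
print(triage(Task(20, True, "high", False)))   # e.g. a CRM batch update
print(triage(Task(60, False, "high", True)))   # e.g. a sensitive negotiation
```

The order of the checks is the point: repeatability and decomposition are gate questions, and autonomy is the residual case that remains only after judgment and error cost have both been ruled out.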


The Organizational Shift — From Task Assignment to System Design

The thirty-five-minute rule changes the productivity conversation from "how do we make humans more efficient?" to "how do we design systems where AI and humans each do what they are best at?"

This is a systems design question, not a task management question. It requires thinking about work as a flow rather than as a collection of tasks. The thirty-five-minute rule applied to individual tasks is a useful heuristic. Applied to a workflow — a series of tasks connected by data flows — it becomes a system architecture question.

The workflow where the thirty-five-minute rule creates the most leverage is the one where most of the tasks are below the threshold, most of the outputs feed into other steps, and the error propagation cost is understood at each step. The AI agents handle the low-threshold, high-frequency tasks. The humans handle the judgment calls, the exception processing, and the final approval on anything that propagates beyond the system.
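One way to sketch that architecture: a workflow as a chain of steps, where any step whose output propagates beyond the system passes through a human review gate. The step names, payload shape, and gate mechanism below are invented for illustration:

```python
# Minimal sketch of a review-gated workflow. Step names and the gate
# mechanism are assumptions for illustration, not a real framework.
from typing import Callable

Step = tuple[str, Callable[[dict], dict], bool]  # (name, fn, needs_review)

def run_workflow(steps: list[Step], payload: dict,
                 review: Callable[[str, dict], bool]) -> dict:
    for name, fn, needs_review in steps:
        payload = fn(payload)
        # High-propagation outputs stop here until a human approves them.
        if needs_review and not review(name, payload):
            raise RuntimeError(f"halted at {name}: reviewer rejected output")
    return payload

# Toy steps: AI handles the high-frequency work; the gate flag marks
# anything that propagates beyond the system.
steps: list[Step] = [
    ("extract_leads",  lambda p: {**p, "leads": ["a@x.com", "b@y.com"]}, False),
    ("update_crm",     lambda p: {**p, "crm_updated": True},             True),
    ("draft_outreach", lambda p: {**p, "draft": "Hi, quick question."},  True),
]

approve_all = lambda name, payload: True  # stand-in for a human reviewer
result = run_workflow(steps, {}, approve_all)
```

The design choice worth noticing is that the gate is declared per step, not bolted on afterward: the propagation cost is decided when the workflow is designed, which is exactly the system-level discipline the rule asks for.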


The Honest Calibration — When the Rule Does Not Apply

The thirty-five-minute rule is a decision framework, not a law of nature.

Complex creative work — strategy, design, narrative, negotiation — does not reduce to a thirty-five-minute threshold in any meaningful sense. It is not repetitive. It does not have a correct answer that can be mechanically verified. The thirty-five-minute rule does not apply to work where the value is the human's judgment.

Relationship-dependent work is in a different category. A performance review, a difficult customer conversation, a sensitive negotiation — these are tasks that a human must own, not because they are technically complex, but because the relationship context requires it.

Novel problem-solving is not a threshold task. A problem that has never been seen before, that requires original reasoning — this is not automatable at the task level, and the AI agent that attempts it will produce confident errors that are more damaging than silence.


The Productivity Question Worth Asking

The thirty-five-minute rule is ultimately a question about what you are paying for when you assign work to a human.

You are paying for attention. Human attention is finite, degrades after thirty-five minutes of continuous concentration on repetitive tasks, and costs the same whether it is fresh or fatigued. When you assign a task longer than thirty-five minutes to a human, you are paying for fresh attention and getting degraded attention after the threshold.

The productivity leverage is not in making humans faster. It is in stopping the allocation of expensive human attention to tasks that degrade it. The question worth asking at every meeting where work is assigned: is this a repeatable, sub-threshold task we should hand to an AI agent, or a judgment task that requires human attention?

The organizations that get this right are not the ones with the most AI tools. They are the ones who are most honest about the difference.

Ready to let AI handle your busywork?

Book a free 20-minute assessment. We'll review your workflows, identify automation opportunities, and show you exactly how your AI corps would work.

From $199/month ongoing, cancel anytime. Initial setup is quoted based on your requirements.