Psychology 2026-02-24

Breaking the Sunk Cost Spiral: Why the 2-Corrections Rule Saves Your Week

You've spent an hour correcting AI output. Walking away feels like admitting failure. But the psychology says: you lost this fight three prompts ago.

The Pattern Everyone Recognizes

Developer Samuel Faure identified a cycle that every AI-assisted coder has lived through:

  1. Prompt AI. Get output that’s close but not right.
  2. Re-prompt to fix. The fix introduces new bugs, or the AI gets “completely lost.”
  3. Realize you would have been done sooner without AI.
  4. Feel “in too deep” to stop. You’ve invested time, built context, refined the conversation.
  5. Keep prompting anyway.

Colin Cornaby captured the feeling precisely: after spending three hours with Claude Code on a single task, he questioned whether doing it manually would have been faster. But walking away at that point felt like admitting failure.

That feeling, the one that says “just one more prompt,” is not a productivity instinct. It’s a cognitive trap.


The Psychology: Why You Can’t Walk Away

The sunk cost fallacy is the tendency to continue investing in something because of already-invested resources, even when continuing is irrational (The Decision Lab). In traditional settings, it applies to money and time. In AI-assisted coding, it applies to something more insidious: cognitive effort and context.

Loss Aversion

Every correction you give the AI represents invested thought. You analyzed the output, identified the problem, crafted a precise instruction. That effort feels like it would be “lost” if you start over. Loss aversion, the well-documented tendency to weigh losses more heavily than equivalent gains, makes abandoning feel roughly twice as painful as it should.

Context Window Sunk Costs

AI conversations have a unique amplifier: the context window. Over multiple prompts, you’ve built up context: explained your architecture, clarified constraints, corrected misunderstandings. That accumulated context feels valuable. Abandoning the conversation means rebuilding it from scratch.

But here’s what your brain won’t tell you: if the AI is “completely lost” after three corrections, that context isn’t helping anymore. It may actually be hurting. The model is now working from a confused chain of contradictory instructions.
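The contamination is easy to see in miniature. Here is a hypothetical message log (the task and wording are invented for illustration): after a few corrections, the model is looking at contradictory instructions side by side, while a fresh restart carries one unambiguous spec.

```python
# Hypothetical thread after repeated corrections: the model still sees
# every earlier instruction, including the ones you later reversed.
contaminated = [
    {"role": "user", "content": "Write a retry helper with exponential backoff."},
    {"role": "assistant", "content": "def retry(...): ..."},
    {"role": "user", "content": "No, cap the backoff at 30 seconds."},
    {"role": "assistant", "content": "def retry(...): ..."},
    {"role": "user", "content": "Actually drop the cap and add jitter instead."},
]

# A restart consolidates everything you learned into a single instruction,
# with no contradictory history for the model to weigh.
fresh = [{
    "role": "user",
    "content": "Write a retry helper: exponential backoff with jitter, no cap.",
}]
```

The point is not the specific message format; it is that the contaminated thread asks the model to reconcile “cap at 30s” with “drop the cap,” while the fresh prompt never poses that conflict at all.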


The 43-Point Blindspot

This is where it gets worse. The METR randomized controlled trial found that developers predicted AI would speed them up by 24%, believed afterward that it had made them 20% faster, but were actually 19% slower. That’s a 43-point gap between prediction and reality.
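The arithmetic behind that gap is worth making explicit, since the measured result is a slowdown, not a smaller speedup:

```python
predicted = 24   # % speedup developers forecast before the trial
felt = 20        # % speedup developers believed afterward
measured = -19   # % change actually observed (a slowdown)

prediction_gap = predicted - measured  # 24 - (-19) = 43 points
hindsight_gap = felt - measured        # 20 - (-19) = 39 points
```

Even with the benefit of hindsight, the self-assessment was 39 points off; the headline 43 points compares the before-the-fact prediction with reality.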

The sunk cost spiral is one reason for that gap. Every hour spent in a failing correction loop is an hour the developer perceives as “almost working.” They feel close to a breakthrough, while the clock measures it as pure loss. The perception of progress masks the absence of it.

You don’t feel the spiral while you’re in it. You feel productive. You feel like the next prompt will fix it. That’s the trap.


The 2-Corrections Protocol

The fix is mechanical, not motivational. Willpower doesn’t beat cognitive bias. Protocol does.

After 2 failed corrections, STOP. Restart with a better prompt.

That’s the entire rule. Here’s why it works:

  1. It makes the decision automatic. You’re not deciding whether to abandon your investment. You’re following a protocol. The emotional weight disappears because the decision was already made before you started.

  2. It reframes the action. You’re not “admitting failure.” You’re executing a methodology step. The same way a pilot follows a checklist, you follow the 2-corrections rule. No ego involved.

  3. It breaks the loop at the right point. By the third correction, the AI’s context is likely contaminated with conflicting instructions. A fresh prompt with better structure will outperform a sixth attempt to fix a broken conversation.

  4. It captures what you learned. The two failed corrections taught you what the AI misunderstands. Your restart prompt can address those gaps directly. The time wasn’t wasted. It was reconnaissance.
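The four properties above can be sketched as a tiny mechanical helper. Everything here is hypothetical (the class and method names are invented, not part of any published tool): it counts failed corrections, records what each one taught you, and hands you a consolidated restart prompt once the limit is hit.

```python
class CorrectionTracker:
    """Hypothetical sketch of the 2-corrections rule as a mechanical check."""

    LIMIT = 2  # after 2 failed corrections, stop and restart

    def __init__(self, task: str):
        self.task = task
        self.lessons: list[str] = []  # what each failed correction taught you

    def record_failure(self, lesson: str) -> bool:
        """Log a failed correction. Returns True when it is time to restart."""
        self.lessons.append(lesson)
        return len(self.lessons) >= self.LIMIT

    def restart_prompt(self) -> str:
        """Fold the reconnaissance from failed attempts into a fresh prompt."""
        constraints = "\n".join(f"- {lesson}" for lesson in self.lessons)
        return f"{self.task}\n\nConstraints the previous attempts missed:\n{constraints}"


tracker = CorrectionTracker(
    "Refactor the payment module without changing its public API."
)
tracker.record_failure("Model renamed public methods; API must stay identical.")
if tracker.record_failure("Model dropped the currency-rounding edge case."):
    prompt = tracker.restart_prompt()  # start a NEW conversation with this
```

You would never ship something like this; the value is that the stopping decision is made by a counter, not by the part of your brain that feels “almost there.”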


When to Apply It

The 2-corrections rule applies to any AI interaction where the output requires fixing:

  • Code generation that produces incorrect logic
  • Refactoring that introduces regressions
  • Test generation that misunderstands requirements
  • Architecture suggestions that miss constraints

It does NOT mean “give up after two prompts.” It means: if you’ve given the AI two specific corrections and the output still isn’t right, the conversation is broken. Start a new one.

The difference between developers who use AI effectively and those who lose hours to it is not skill or experience. It’s knowing when to stop. The 2-corrections rule makes that knowledge a habit instead of a judgment call.


Sources: Samuel Faure, developer experience with AI correction loops · Colin Cornaby, Claude Code usage report (3-hour session) · METR Randomized Controlled Trial (2025, 16 devs, 246 tasks) · The Decision Lab, Sunk Cost Fallacy · Paranoid Verification Methodology, 2-Corrections Rule

Take the Diagnostic to find out where you stand, including whether sunk cost spirals are silently eating your productivity.
