
Debugging Your Decisions

Deploy a feature, watch it flop, fire up the debugger.

Here's the thing. By the time you're stepping through code, the real bug is already ancient history. It's sitting in a Slack thread from three weeks ago, or buried in a meeting note where someone said "let's just ship it and see what happens."

The bug wasn't in your code. It was in your thinking.

Most teams treat failed features like syntax errors: isolate the broken component, patch it, move on. But features don't fail because of bad implementations. They fail because someone, somewhere, made a decision based on incomplete information, cognitive bias, or, let's be honest, a hunch that felt right at the time.

The Scientific Method for Shipping Shit

Researchers Amos Tversky and Daniel Kahneman introduced the concept of cognitive bias back in 1972, and since then we've cataloged over a hundred ways our brains take shortcuts that lead us astray. These shortcuts pull our judgment away from rational objectivity, which is a polite way of saying we're all making half-assed assumptions and calling them strategy.

In product development, this plays out predictably. Anchoring bias makes us latch onto the first value we see and adjust from it, even when that starting point is arbitrary. Like that pricing model you picked because it's what the first competitor you looked at was charging ¯\_(ツ)_/¯

Confirmation bias has us seeking out the facts that support our existing beliefs while ignoring the ones that don't. You think your product needs a new feature, so you cherry-pick customer feedback that agrees with you and mentally dismiss everyone else.

The pattern is everywhere.

Cognitive bias can lead designers to build the wrong product, or to prioritize the features that matter most to them instead of what matters to the target audience. And here's the kicker: you won't catch it in code review. You won't catch it in QA. You'll catch it when your feature launches and nobody uses it.

So treat every product decision like a hypothesis. Write it down. Make it falsifiable. Set criteria for what success looks like. And when it fails (because it will eventually fail), you'll have a paper trail back to the moment where your thinking went sideways.

The Decision Log: Your Product's Black Box

When a plane crashes, investigators pull the black box. When your feature crashes, you should pull your decision log.

Except most teams don't have one, so they're left reconstructing decisions from memory, which is like debugging from a stack trace someone sketched by hand three weeks after the crash.

Here's a format that works. It's lightweight enough that you'll actually use it:

Decision Log Template

  • Date: [When the decision was made]
  • Decision: [One sentence: what you decided]
  • Context: [What was happening that prompted this? What problem were you solving?]
  • Hypothesis: [What did you believe would happen if you made this choice?]
  • Alternatives Considered: [What else was on the table? Why didn't you choose those?]
  • Success Criteria: [How will you know if this worked? Specific metrics, timelines, user behaviors]
  • Decision Maker(s): [Who had the final call?]
  • Key Assumptions: [What did you assume was true? What data were you missing?]

And here's an actual example:

  • Date: November 15, 2025
  • Decision: Add a free trial extension feature for users who complete 80% of onboarding
  • Context: 60% of free trial users weren't completing onboarding. Hypothesis was they needed more time.
  • Hypothesis: Giving users more time will increase conversion because time pressure is the blocker
  • Alternatives Considered: Simplify onboarding (too much engineering time), offer 1:1 demos (doesn't scale)
  • Success Criteria: 25% increase in conversions from users who receive the extension, within 60 days
  • Decision Makers: Product Lead, Engineering Manager
  • Key Assumptions: Time is the constraint (not value perception or product-market fit), users want to complete onboarding but can't, 80% completion indicates intent

Notice what this does: it makes your thinking explicit. When that feature inevitably doesn't move the needle, you can go back and audit not just what you decided, but why you thought it would work.

That's where the learning lives.
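
One lightweight way to make the log stick: keep it next to your code instead of in a doc nobody reopens. Below is a minimal sketch in Python of the same template as a version-controlled record; the decisions/ folder, field names, and file format are all assumptions, so adapt them to whatever you'll actually maintain.

```python
# decision_log.py -- a hypothetical sketch of the template above as a
# version-controlled record. Folder, fields, and naming are assumptions.
import json
from dataclasses import dataclass, asdict
from pathlib import Path


@dataclass
class Decision:
    decided_on: str             # when the decision was made, e.g. "2025-11-15"
    decision: str               # one sentence: what you decided
    context: str                # what prompted this / the problem being solved
    hypothesis: str             # what you believed would happen
    alternatives: list[str]     # what else was on the table, and why not
    success_criteria: str       # specific metrics, timelines, user behaviors
    decision_makers: list[str]  # who had the final call
    key_assumptions: list[str]  # what you assumed was true / data you were missing

    def save(self, folder: str = "decisions") -> Path:
        """Write the entry as a dated JSON file so it lives in git history."""
        Path(folder).mkdir(exist_ok=True)
        slug = self.decision[:40].lower().replace(" ", "-")
        out = Path(folder) / f"{self.decided_on}-{slug}.json"
        out.write_text(json.dumps(asdict(self), indent=2))
        return out


if __name__ == "__main__":
    Decision(
        decided_on="2025-11-15",
        decision="Add a free trial extension for users who complete 80% of onboarding",
        context="60% of free trial users weren't completing onboarding",
        hypothesis="More time will increase conversion because time pressure is the blocker",
        alternatives=["Simplify onboarding (too much engineering time)",
                      "Offer 1:1 demos (doesn't scale)"],
        success_criteria="25% increase in conversions from extended users within 60 days",
        decision_makers=["Product Lead", "Engineering Manager"],
        key_assumptions=["Time is the constraint, not value perception",
                         "80% completion indicates intent"],
    ).save()
```

The point isn't the tooling. It's that the entry ends up in git history, right next to the code it justified.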

The Post-Mortem Questions That Actually Matter

Google's Site Reliability Engineering teams popularized blameless post-mortems as a way to focus on identifying contributing causes without indicting individuals. The philosophy is simple: assume that every team and employee acted with the best intentions based on the information they had at the time.

But here's what blameless post-mortems get wrong when applied to product decisions: they're designed for system failures, not hypothesis failures.

When your database goes down, there's usually a clear timeline and a recoverable state. When your feature flops, you need to debug your mental model.

Root Cause Questions for Failed Features

Here's a framework adapted from incident post-mortems but rebuilt for product decisions:

  1. What did we believe was true?
    Not what you wished was true. What did you actually think when you made the call?
  2. What evidence supported that belief?
    Customer interviews? Data? A competitor doing it? Your gut? Be specific. "Everyone wanted this" is not evidence. Three customers in Slack asking for it is evidence.
  3. What evidence did we ignore or discount?
    This is the hard one. What contradictory data was available that you wrote off? Confirmation bias leads us to favor information that reinforces assumptions while overlooking contradictory details.
  4. What did we assume without verifying?
    "Users will figure it out." "Marketing can sell this." "This scales." Whatever. Write them down.
  5. What was the first domino?
    Tversky and Kahneman's original observation on anchoring showed that people start from an initial value and make adjustments from that point, and those adjustments are usually insufficient. What was your anchor? That first piece of information or initial assumption that everything else was built on?
  6. Would more data have changed the decision?
    If yes: why didn't you get it? If no: why not? If you would've made the same call regardless of the data, that's telling.
  7. What would we need to believe for this feature to succeed in the future?
    This forces you to articulate the worldview required for success. Often you'll realize it's unrealistic.
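
If you'd rather not recite these from memory every time something flops, stamp them into a doc and fill in the blanks. Here's a hypothetical little helper in the same spirit as the decision log sketch above; the folder and filename convention are made up.

```python
# postmortem.py -- hypothetical helper that stamps out a decision post-mortem
# doc pre-filled with the seven questions above. Paths and names are made up.
from datetime import date
from pathlib import Path

QUESTIONS = [
    "What did we believe was true?",
    "What evidence supported that belief?",
    "What evidence did we ignore or discount?",
    "What did we assume without verifying?",
    "What was the first domino?",
    "Would more data have changed the decision?",
    "What would we need to believe for this feature to succeed in the future?",
]


def create_postmortem(feature: str, folder: str = "postmortems") -> Path:
    """Write a markdown stub so the hard questions are staring back at you."""
    Path(folder).mkdir(exist_ok=True)
    out = Path(folder) / f"{date.today()}-{feature.lower().replace(' ', '-')}.md"
    sections = [f"# Post-mortem: {feature}\n"]
    sections += [f"## {i}. {q}\n" for i, q in enumerate(QUESTIONS, 1)]
    out.write_text("\n".join(sections))
    return out


if __name__ == "__main__":
    print(create_postmortem("trial extension feature"))
```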

Pre-Mortem: Debug Before You Ship

Post-mortems are great for learning. But you know what's better? Not shipping the damn bug in the first place.

A pre-mortem is the inverse of a post-mortem. Instead of asking "what went wrong?" after failure, you ask "what could go wrong?" before you commit. It's a forcing function that makes you interrogate your decision while you still have time to course-correct.

When you're solo or running a tiny team, you don't have the luxury of six people in a conference room stress-testing your ideas. You've got you, maybe a co-founder, and probably a Discord channel with three beta users. That's fine. The pre-mortem still works, you just run it differently.

The Solo/Small Team Pre-Mortem

Step 1: Write the Failure

Open a doc and write this at the top:

"It's six months from now. I shipped [feature]. It completely failed. Here's why:"

Then set a short timer and brain-dump every reason it could fail. Don't filter, don't organize, just write. You're looking for:

  • Technical disasters: "The Stripe integration was way harder than I thought and took 3 months"
  • Market misses: "Nobody wanted this because they're already solving it with a spreadsheet"
  • Execution gaps: "I built it but had no plan to get users to actually try it"
  • Resource traps: "This took so much time I couldn't work on the thing that actually makes money"
  • Assumption bombs: "I assumed users would understand the UI without onboarding"

Write until the timer stops or you run out of ideas. You want 10-15 failure scenarios minimum. If you're stuck, look at your decision log and ask "what if that assumption is wrong?"

Step 2: Talk to Someone Who Isn't You

If you've got a co-founder, send them the doc. If you're solo, grab a founder friend, a technical advisor, or even a user. Give them context in two sentences, then ask:

"I'm about to build this. It's six months later and it failed. Why?"

They'll see stuff you can't because they're not anchored to your original thinking. They haven't spent three weeks convincing themselves this is a good idea.

If you're completely solo with no one to ask, post it to an indie hacker forum or a Discord you trust. You'd be surprised how useful cold reads are.

Step 3: Highlight Your Top 3 "Oh Shit" Moments

Read through everything. Which failure modes make your stomach drop? Which ones feel inevitable if you're honest with yourself?

Those are your bugs. The ones where you think "yeah, that could actually happen" or "fuck, I haven't thought about that at all."

Star them. Those are what you're debugging.

Step 4: Decide: Fix, Monitor, or Accept

For each of your top 3 failure modes:

  • Fix it now: Can you change the plan to avoid this? Maybe that means descoping, maybe it means validating an assumption before you build, maybe it means a totally different approach.
  • Monitor it: Can you set up an early warning system? A metric that'll tell you this is happening so you can pivot fast? "If we don't have 10 signups in the first week, we kill it" is a monitor (sketched below).
  • Accept it: Sometimes you know the risk and you're taking it anyway. That's fine. Just be explicit. "This might take 3x longer than planned, but we're betting it's worth it" is different from "oh shit, this took 3x longer and now we're screwed."

Write down your choice for each one. Update your decision log.
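
For the "monitor it" option, the cheapest version is a check you can run on a schedule, so the kill decision is a threshold you committed to in advance, not a mood. A rough sketch follows; the check_kill_criteria function, the 10-signups threshold, and the one-week window are stand-ins for whatever your real metric and criteria are.

```python
# kill_criteria.py -- hypothetical sketch of a pre-committed "monitor it" check.
# The threshold, window, and metric source are placeholders for your own.
from datetime import datetime, timedelta


def check_kill_criteria(launched_at: datetime, signups: int,
                        minimum: int = 10, window_days: int = 7) -> str:
    """Compare the metric against the threshold you committed to before launch."""
    deadline = launched_at + timedelta(days=window_days)
    if datetime.now() < deadline:
        return "too early to judge -- keep watching"
    if signups < minimum:
        return f"kill criteria hit: {signups} signups < {minimum} -- pivot or cut it"
    return f"criteria met: {signups} signups -- keep going"


if __name__ == "__main__":
    # Pull the real number from your analytics; this one is made up.
    launched = datetime(2025, 11, 15)
    print(check_kill_criteria(launched, signups=4))
```

Run it from a cron job or by hand at the end of the window. What matters is that the number and the deadline were written down before launch.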

The Bootstrapper's Reality Check

Here's the thing about pre-mortems when you're bootstrapped: you're not trying to de-risk everything. You can't de-risk everything. You're always shipping with incomplete information, constrained resources, and too many things on your plate.

The pre-mortem is about making sure you're not walking into an obvious trap while your head's down in the code.

Because when you're solo or small, every decision has opportunity cost.

That feature you're building? That's three weeks you're not fixing bugs, talking to users, or working on distribution. If it flops, you don't just lose the feature, you lose the time.

The pre-mortem is your last chance to ask "is this the right bet?" Not "will this definitely work?" Just "is this where I should be spending my next month?"

When To Run It (AKA: When You're About To Do Something Stupid)

You don't need a pre-mortem for every little thing. If you're a solo founder/dev, you're already stretched thin. But run one when:

  • You're about to commit serious time: Anything that'll take more than a week of actual work.
  • You're excited but can't explain why: If you're pumped about an idea but struggle to articulate the value prop, that's a red flag.
  • You've been thinking about it for a while: Ideas that marinate get attached to your identity. Pre-mortem helps you kill your darlings.
  • Someone else thinks it's a bad idea: When your co-founder, a user, or a trusted friend says "I don't know about this," listen.
  • You're about to spend money: Paid tools, contractors, ads... anything with actual cash at stake deserves 30 minutes of paranoia.

What You'll Actually Find

Most pre-mortems surface one of three things:

  • The Fatal Flaw: "Oh fuck, this entire plan depends on an API that doesn't exist." You just saved yourself a month. Kill it or pivot.
  • The Missing Piece: "We have no idea how users will find this." Not fatal, but you now know you need a distribution plan, not just a feature.
  • The Clarifying Assumption: "We're betting that users will pay for speed, not features." Okay. Now you know what you're testing. Build the fast version, not the feature-rich version.

Sometimes (rarely) you run the pre-mortem and everything checks out. The failure modes are acceptable, the assumptions are solid, the risks are worth it. Great. Ship it with confidence. At least now you know why you're confident instead of just hoping it works.

Making This Stick (Because You Won't Otherwise)

Here's the uncomfortable truth: most teams won't do this. Decision logs feel like overhead, post-mortems feel like blame sessions in disguise, and examining your own cognitive biases feels like therapy you didn't sign up for.

But the alternative is worse. The alternative is shipping features based on hunches, watching them fail, and having no idea why beyond "market conditions" or "timing wasn't right." Right?

Design is a dynamic, decision-driven process guided by intuition and experience, which makes it susceptible to the same cognitive biases that distort expert judgment everywhere else.

You can't eliminate bias. But you can make it visible.

So start small:

  • Next feature decision: Write down your hypothesis before you build anything
  • Next launch: Set explicit success criteria that would prove you wrong
  • Next failure: Run a 30-minute post-mortem focused on the decision, not the execution

The goal is just to be less wrong next time. Bugs start in thinking. Fix the thinking, and the code takes care of itself.

The Bottom Line

Your codebase might not be the issue, but your decision-making process could very well be. Treat product choices like hypotheses, log your reasoning, and when stuff breaks, trace it back to the moment you decided to ship it.

That's where the bug lives. That's where you learn. Everything else is just moving deck chairs around.