Where Things Go Wrong
An honest friction log from 180 sessions — buggy code, planning paralysis, and deployment gotchas
Every post about AI coding tools tells you how great they are. This one tells you where they break.
After 180 sessions, I've accumulated a detailed friction log. Not theoretical concerns — actual things that went wrong, cost time, and sometimes killed entire sessions. Here's the unfiltered version.
Buggy Code Shipped Without Adequate Verification
Claude frequently delivers code with runtime bugs that only surface when I test or deploy, forcing me into multiple fix cycles. I could ask Claude to run builds, write quick smoke tests, or verify its own changes more thoroughly before declaring a task done.
Real examples from my sessions:
- Multiple bugs shipped in the embeddable chat demo — importing a client-only function in a server component, a missing env var for the widget URL, and assigning a string instead of a function — all requiring me to report each bug back individually across three rounds of fixes.
- The simulation run insert was not awaited, causing intermittent data loss on Vercel serverless that I had to discover and report myself, because Claude didn't consider async behavior in a serverless environment.
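The un-awaited insert is worth understanding mechanically, because it passes every local test. A toy model of the failure (all names are hypothetical; the real code used a database client inside a Vercel serverless function): serverless runtimes may freeze the instance as soon as the response is sent, so async work the handler didn't await can be silently dropped.

```typescript
type Row = string;
const db: Row[] = [];               // stands in for the real database
const pending: (() => void)[] = []; // async work still in flight at response time

function insertRun(id: Row, awaited: boolean): void {
  const write = () => { db.push(id); };
  if (awaited) {
    write();           // `await db.insert(...)`: completes within the request
  } else {
    pending.push(write); // fire-and-forget: still pending when the response goes out
  }
}

function handleRequest(id: Row, awaitInsert: boolean): string {
  insertRun(id, awaitInsert);
  pending.length = 0;  // response sent; the runtime freezes and drops pending work
  return "ok";
}

handleRequest("run-1", false); // buggy path: intermittent data loss
handleRequest("run-2", true);  // fixed path: insert awaited

console.log(db);               // only "run-2" survives; "run-1" vanished
```

The fix is one keyword, which is exactly why it's easy for Claude to omit and hard to catch in review.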
Excessive Planning and Exploration Without Producing Output
In multiple sessions, Claude spent the entire available time reading files and writing plans without generating any actual code, forcing me to interrupt. I could front-load context in my prompt (e.g., specifying key files and desired architecture) and explicitly instruct Claude to skip extended planning and start implementing immediately.
Real examples from my sessions:
- I asked for a Remotion animated explainer video, but Claude spent the entire 8-minute session reading files and planning without producing a single line of Remotion code before I interrupted.
- I asked for a Vercel-hosted dev log blog site, but Claude got stuck in extensive codebase exploration and plan-writing without delivering any code, leading me to interrupt a second time on what was essentially the same task.
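What "front-loading context" looks like in practice: a hypothetical prompt for the blog-site task (file names and structure are illustrative, not from my actual sessions):

```text
Build the dev log blog site. Skip extended planning — start implementing now.
- Stack: Next.js on Vercel (already configured in this repo)
- Key files: app/layout.tsx; create app/blog/[slug]/page.tsx; posts live in content/
- First deliverable: a rendered post list at /blog, then individual post pages
If something is ambiguous, pick a reasonable default and note it. Don't stop to plan.
```

The prompt answers the questions Claude would otherwise spend the session exploring, and it makes "start implementing" an explicit instruction rather than an implied one.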
Git and Deployment Workflow Missteps
Claude makes errors in git operations and deployment steps — pushing to wrong branches, failing to push changes, and struggling with environment configuration — which derails my shipping flow. I could establish explicit workflow instructions (e.g., always push to main after merge, always verify target branch) in my CLAUDE.md or session prompts.
Real examples from my sessions:
- A URL regex bug required a cherry-pick because Claude merged the PR before the fix was pushed, and I had to ask twice about pushing to main because Claude didn't initially push the cherry-pick commit.
- Claude pushed prompt improvements to a feature branch after it had already been merged instead of pushing to main, and separately struggled with Vercel CLI project linking (auto-matching to the wrong project), forcing me to set env vars manually.
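Those "explicit workflow instructions" can live in CLAUDE.md so they apply to every session. A sketch of what mine looks like now (the wording is my own; adapt it to your repo):

```markdown
## Git and deployment workflow
- Before pushing, run `git branch --show-current` and confirm the target branch.
- After a PR is merged, push follow-up fixes to main, never to the old feature branch.
- Do not merge a PR until every commit for the fix has actually been pushed.
- For Vercel: run `vercel link` and confirm the matched project before deploying
  or touching env vars.
```

None of these rules are sophisticated. They exist because each one maps to a specific session that went sideways.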
The Honest Numbers
53 buggy code incidents · 47 wrong approaches
These aren't edge cases. Across 180 sessions and 256 commits, roughly 1 in 4 sessions hit meaningful friction. The productive output still far outweighs the cost — but pretending friction doesn't exist makes you worse at managing it.
What I've Learned About Managing Friction
The single biggest improvement: make Claude verify its own work before declaring done. Adding `npx tsc --noEmit` after every implementation pass catches the majority of the bugs that would otherwise ship. It's a 5-second check that saves 15-minute debugging cycles.
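As a shell step, the gate is trivial (a sketch; it assumes a TypeScript project where `npx tsc` works, and the function name is mine):

```shell
# Run the type checker after every implementation pass; refuse to call the
# task "done" if it fails. A 5-second gate instead of a 15-minute debug loop.
verify_pass() {
  if npx tsc --noEmit; then
    echo "typecheck clean"
  else
    echo "typecheck failed: fix before shipping" >&2
    return 1
  fi
}
```

The same pattern extends naturally: chain in a smoke test or `npm run build` once the type check is clean.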
The counterintuitive lesson: The solution to buggy AI code isn't more careful prompting — it's faster verification loops. Don't try to prevent bugs; catch them immediately.
The second biggest improvement: interrupt early when Claude starts over-planning. If Claude is reading files and writing plans after 2 minutes without producing code, it's stuck. Kill it, restate the goal more concretely, and tell it to start implementing immediately.