11.03.2026 14:25 · Author: Viacheslav Vasipenok

Mastering AI-Assisted Coding: Boris Tane's Disciplined Workflow with Claude Code


In the rapidly evolving world of AI-driven development, where tools like Claude Code promise to revolutionize how we build software, many developers chase "magical prompts" or intricate systems to coax the best results from large language models. But Boris Tane, founder of Baselime and now Observability lead at Cloudflare, takes a starkly different path.

In his recent blog post, Tane shares a battle-tested workflow he's honed over nine months of using Claude Code as his primary coding tool. The spoiler? There's no sorcery involved — just rigorous discipline, architectural oversight, and a steadfast rule: never let the model write a single line of code until you've reviewed and approved a detailed written plan.

This approach flips the script on typical AI coding sessions, which often devolve into chaotic back-and-forths fixing half-baked outputs. Instead, Tane treats the developer as the architect and reviewer of logic, outsourcing the mechanical grunt work to the AI.

The result? Cleaner code, fewer errors, and massive savings in time and tokens. As Tane puts it, "if the research is wrong, the plan will be wrong, and the implementation will be wrong. Garbage in, garbage out." Let's dive into his structured pipeline, broken into four key phases.


Phase 1: Research – Building a Rock-Solid Foundation

Every project kicks off with a deep dive into the existing codebase. Tane instructs Claude to "read deeply" and document everything in a persistent Markdown file, like `research.md`. The key here is prompting for thoroughness — using words like "deeply," "in great details," "intricacies," and "go through everything" to prevent the model from skimming signatures and moving on prematurely.

For instance, a sample prompt might be: "read this folder in depth, understand how it works deeply, what it does and all its specificities. when that’s done, write a detailed report of your learnings and findings in research.md." 

Or, for a specific system: "study the notification system in great details, understand the intricacies of it and write a detailed research.md document with everything there is to know about how notifications work."

No casual chat conclusions are allowed; everything gets captured in writing. This creates a verifiable artifact that Tane reviews for accuracy, catching misunderstandings early. Without it, implementations risk breaking caches, duplicating logic, or ignoring critical specificities. It's the antithesis of "prompt and pray," ensuring the AI starts from a place of true comprehension.
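Tane's post doesn't reproduce an actual `research.md`, but a file produced this way might plausibly be structured like the sketch below. Every section name and detail here is illustrative, not taken from his codebase:

```
# research.md — Notification System (illustrative)

## Entry points
- `POST /notifications` creates a record and enqueues a delivery job
- A queue consumer picks up jobs and dispatches per-channel senders

## Key specificities
- Delivery results are cached per user; the cache is invalidated on read
- Email and push share a template layer; in-app messages do not

## Open questions
- Why does the consumer retry on 4xx responses?
```

The point is that each claim in the file is checkable against the source, so the developer can correct misunderstandings before any planning happens.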


Phase 2: Planning – Outlining the Blueprint

With research in hand, Tane shifts to generating a comprehensive implementation strategy in `plan.md`. This file details the approach, including which files to modify or create, code snippets, dependencies, and trade-offs. Prompts are straightforward but grounded: "I want to build a new feature <name and description> that extends the system to perform <business outcome>. write a detailed plan.md document outlining how to implement this. include code snippets." 

Or, for a targeted change: "the list endpoint should support cursor-based pagination instead of offset. write a detailed plan.md for how to achieve this. read source files before suggesting changes, base the plan on the actual codebase."
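For context on that example prompt: cursor-based pagination hands the client an opaque pointer to the last row served instead of a numeric offset, so pages stay stable while rows are inserted or deleted. A minimal sketch of the idea in TypeScript, with all names hypothetical rather than taken from Tane's codebase:

```typescript
type Item = { id: string; createdAt: number };

interface Page {
  items: Item[];
  nextCursor: string | null; // opaque pointer to resume from, or null at the end
}

// Serve `limit` items that come strictly after the cursor.
// Requires a stable total order (here: ascending id).
function listItems(all: Item[], limit: number, cursor: string | null): Page {
  const sorted = [...all].sort((a, b) => a.id.localeCompare(b.id));
  const start = cursor === null ? 0 : sorted.findIndex((i) => i.id > cursor);
  const slice = start === -1 ? [] : sorted.slice(start, start + limit);
  const hasMore = start !== -1 && start + limit < sorted.length;
  return { items: slice, nextCursor: hasMore ? slice[slice.length - 1].id : null };
}
```

In a real database the same idea becomes a keyset `WHERE` clause plus an index, which is exactly the kind of detail a good `plan.md` would spell out file by file.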

Tane opts for custom Markdown files over Claude's built-in plan mode for better control and editability. A pro tip: Reference open-source examples to guide the AI, like "this is how they do sortable IDs, write a plan.md explaining how we can adopt a similar approach." Before proceeding, he often adds a granular todo list: "add a detailed todo list to the plan, with all the phases and individual tasks necessary to complete the plan - don’t implement yet." This checklist becomes a progress tracker later.
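The post doesn't include a full plan, but a `plan.md` carrying the requested todo list might look roughly like this. The phases and tasks are invented for illustration:

```
# plan.md — Cursor-based pagination for the list endpoint

## Approach
Replace the `offset` query parameter with an opaque `cursor` derived
from the last row of the previous page. Snippets and trade-offs follow.

## Todo
### Phase 1: Schema and queries
- [ ] Add a composite index on (created_at, id)
- [ ] Rewrite the list query to use keyset conditions

### Phase 2: API surface
- [ ] Accept `cursor` and `limit` query parameters
- [ ] Return `nextCursor` in the response body

### Phase 3: Cleanup
- [ ] Remove offset handling and update API docs
```

The checkbox list is what Claude later ticks off during implementation, turning the plan into a live progress tracker.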


The Annotation Cycle: Iterating on Text, Not Code

Here's where Tane's method shines brightest — and what sets it apart. After Claude drafts `plan.md`, Tane opens it in his editor and inserts inline notes directly: corrections, constraints, scope cuts, or architectural refinements. Examples abound: "use drizzle:generate for migrations, not raw SQL," "no — this should be a PATCH, not a PUT," "remove this section entirely, we don’t need caching here," or "this is wrong, the visibility field needs to be on the list itself, not on individual items. when a list is public, all items are public. restructure the schema section accordingly."

He then feeds it back: "I added a few notes to the document, address all the notes and update the document accordingly. don’t implement yet." This loop, typically one to six iterations, refines the plan without generating code. Why? Editing text is faster, cheaper on tokens, and avoids the mess of unpicking flawed implementations. The "don’t implement yet" guardrail is crucial, keeping Claude focused on planning. Markdown acts as shared state, injecting Tane's expertise on priorities, user needs, and trade-offs that AI lacks.
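In practice, an annotation pass might look like notes dropped straight into `plan.md`. The note text below is quoted from the post; the surrounding plan prose is invented for illustration:

```
## Migrations
We will write a raw SQL migration to add the `visibility` column.

> NOTE: use drizzle:generate for migrations, not raw SQL

## Schema
Each item row gains a `visibility` field.

> NOTE: this is wrong, the visibility field needs to be on the list
> itself, not on individual items. when a list is public, all items
> are public. restructure the schema section accordingly.
```

Because the notes live in the same file Claude is editing, "address all the notes" is unambiguous: the model can see exactly which section each correction targets.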


Phase 3: Implementation – Execution with Guardrails

Only when the plan is flawless does Tane greenlight coding. His go-to prompt is disciplined and directive: "implement it all. when you’re done with a task or phase, mark it as completed in the plan document. do not stop until all tasks and phases are completed. do not add unnecessary comments or jsdocs, do not use any or unknown types. continuously run typecheck to make sure you’re not introducing new issues."
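The "no `any` or `unknown`" and typecheck rules map onto ordinary TypeScript hygiene. As a trivial sketch of the kind of output this prompt steers toward (the task-tracker names are hypothetical):

```typescript
// Instead of `function markCompleted(tasks: any)`, give the data a real shape.
interface Task {
  id: string;
  title: string;
  done: boolean;
}

// Generic helper: mark one task completed without widening types to `any`,
// returning a new array rather than mutating the input.
function markCompleted<T extends Task>(tasks: T[], id: string): T[] {
  return tasks.map((t) => (t.id === id ? { ...t, done: true } : t));
}
```

Running `npx tsc --noEmit` after each phase is one common way to "continuously run typecheck" without producing build output.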

This ensures comprehensive execution, real-time progress marking, clean output, strict typing, and proactive bug-catching. Implementation becomes "boring" — the creative heavy lifting happened earlier.

If deviations occur, Tane doesn't over-prompt; he references existing patterns: "this table should look exactly like the users table, same header, same pagination, same row density." For frontend tweaks, feedback is rapid and visual, often with screenshots: "wider," "still cropped," or "there’s a 2px gap."

If things go off-rails, revert and rescope: "I reverted everything. Now all I want is to make the list view more minimal — nothing else." Sessions are long and continuous, leveraging Claude's context from prior phases.


Staying in Control: The Developer's Role

Throughout, Tane emphasizes human oversight. He evaluates proposals, cherry-picks (e.g., "for the first one, just use Promise.all"), trims scope ("remove the download feature"), safeguards interfaces ("the signatures of these three functions should not change"), and overrides choices ("use this model instead"). This keeps the developer "in the driver’s seat," delegating mechanics while owning architecture.


Is It Overkill for Small Tasks?

Tane's workflow resonates for complex features in mature systems, where preventing cascades of errors justifies the upfront investment. But for quick bugfixes? It might feel like overkill. As Tane notes, the overhead shines in preventing "downstream errors," but simpler tasks could skip phases. Still, even there, a quick research.md and plan can add discipline without much cost.



Wrapping Up: Discipline Over Magic

In essence, Tane's method boils down to: "Read deeply, write a plan, annotate the plan until it’s right, then let Claude execute the whole thing without stopping, checking types along the way." By rejecting hacks and embracing a pipeline that separates thinking from typing, developers gain efficiency and quality. As AI coding tools mature, this architectural mindset could become the gold standard — proving that in software engineering, discipline always trumps magic.

For the full details, check out Tane's blog post at boristane.com/blog/how-i-use-claude-code/.

