Vibe-Driven Development (VDD): Practical Guide

Welcome
Welcome to the comprehensive guide on Vibe-Driven Development (VDD). This emerging methodology transforms the intuitive art of vibe coding into a structured approach with repeatable patterns for AI collaboration. It bridges the gap between coding by feel and engineering discipline.
Unlike pure vibe coding, VDD offers systematic, repeatable foundations and fine-grained control at every stage of development.
This book doesn’t introduce new concepts. Instead, it codifies what many developers already do instinctively, turning scattered practices into a cohesive methodology.
Core Principles
VDD operates on four pillars of explicitness:
- Radical Transparency: Make everything visible - tests, data flows, transformations
- Verbose Communication: Chatty tests that explain what’s happening
- Explicit Over Implicit: No hidden behaviors or assumptions
- Human-in-the-Loop: Maintain control while leveraging AI power
Why VDD?
As software developers, we navigate real requirements, team dynamics, and technical constraints while balancing timelines, quality standards, and accountability.
Recently, we have also faced immense pressure in our craft to harness the power of AI-assisted coding. Yet many treat AI as just a syntax generator, missing out on 10x productivity gains. Others hand over control entirely, inheriting brittle code and hidden vulnerabilities.
VDD bridges this gap: an evolving framework for true developer-AI partnership without the compromises.
What You’ll Learn
This book provides both philosophical foundation and practical implementation:
- Part I: Foundations of Vibe Coding - History, philosophy, and core concepts
- Part II: Understanding AI Collaboration - How AI thinks about code and effective collaboration
- Part III: Vibe Coding Patterns - Essential patterns like DevDocs, Anchor, Smoke Tests, and Fuzzy Architecture
- Part IV: The Vibe Coding Method - Step-by-step implementation guide
- Appendices: Ready-to-use prompts and templates
Quick Start
If you prefer learning by doing:
- Jump to Appendix 1: New Project Prompts for step-by-step implementation
- Use Appendix 2: From Existing Codebase Prompts to apply VDD to existing projects
Who Should Read This
This book is for you if you’ve ever:
- Watched AI generate code that looked perfect but didn’t work
- Spent hours debugging despite all tests passing
- Felt disconnected from a “clean” and “well-tested” codebase
- Wondered why simple changes break in unexpected ways
How to Navigate
- New to VDD? Start with the Preface and follow the chapters sequentially
- Experienced Developer? Chapters 0-4 provide a 15-minute introduction to essential concepts
- Ready to Implement? Jump directly to the appendices for practical prompts
Preface
Why This Book Exists
In 2025, we find ourselves at a peculiar crossroads in software development. AI assistants can generate thousands of lines of code in seconds, yet developers spend hours debugging implementations they don’t fully understand. Tests pass with green checkmarks while core functionality silently fails. Code reviews have become exercises in faith rather than understanding.
This disconnect troubled me deeply. The promise of AI-accelerated development was turning into a crisis of opacity. We were building faster but understanding less. Our codebases were becoming black boxes, even to their creators.
Vibe-Driven Development emerged from a simple observation: when humans and AI collaborate on code, the biggest failures occur not from bad algorithms or syntax errors, but from miscommunication and hidden assumptions. The AI assumes one thing, the developer another, and neither realizes the disconnect until production fails.
The Core Insight
Traditional development methodologies were designed for human-to-human collaboration. Test-Driven Development helps developers communicate intent through tests. Domain-Driven Design creates a shared language between developers and stakeholders. But what methodology addresses human-AI collaboration?
VDD’s answer is radical transparency. Make everything visible. Make tests verbose. Make data flows explicit. Make transformations observable. When your code can’t hide behind abstractions and mocks, both humans and AI must confront the actual behavior.
A Personal Note
VDD isn’t about achieving the perfect codebase. It’s about building a continuous, structured, and measurable connection between your vision and AI’s understanding. AI complements your knowledge, speed, and creativity while you serve as the guiding lighthouse.
Enes/karaposu
10.09.2025
Terminology
This chapter defines key terms and concepts used throughout the Vibe-Driven Development methodology.
Vibe Coding
The practice of collaborative development with AI assistants, emphasizing natural communication and iterative refinement over rigid specifications.
Vibe-Driven Development (VDD)
An emerging methodology that brings structure and repeatable patterns to vibe coding, providing systematic foundations and granular control throughout the development process.
DevDocs Pattern
Living documentation that represents AI’s understanding of the project, continuously updated throughout development.
Smoke Tests Pattern
Comprehensive, verbose tests designed both for build verification and as specifications that AI can understand and implement against.
The Butterfly Defect
Small semantic imprecisions that cascade into large architectural changes. Named after the butterfly effect, where using “modal” instead of “dialog” might shift the entire UI philosophy.
Semantic Precision
Using exact, unambiguous terminology to prevent misinterpretation by AI.
Anchor Pattern
The practice of continuously ensuring existing functionality still works after AI makes changes, preventing regression through regular verification.
HILing (Human-In-the-Loop-ing)
The act of actively intervening as the human in the loop to review, correct, and steer AI output.
AI Drift
The gradual divergence of AI’s implementation from the original project vision, occurring when AI makes accumulated small decisions without human guidance.
Fuzzy Architecture
Starting with vague architectural guidelines and allowing structure to emerge through development, rather than over-specifying upfront.
Offload Pattern
Structuring code, documentation, and prompts to make AI’s job easier and more reliable.
Testing is Dialogue
The concept that tests serve as a communication medium with AI, teaching it requirements through examples.
Data Dump
The initial transfer of all available project information to AI, often messy and unstructured.
Retrofitting
Introducing VDD methodologies to projects that were originally developed without them.
Latent Persona
An emergent character or voice the AI adopts unintentionally, often due to biases in training data or weights. This can manifest as unexpected personality traits, communication styles, or decision-making patterns that weren’t explicitly prompted.
Ghost Tokening
Phantom influence from hidden system prompts, latent weights, or unseen context that alters style or tone without user intent. These invisible influences can cause AI to behave differently than expected.
Lo-Fi Coding (Low Fidelity)
Vague, exploratory collaboration where AI has maximum creative freedom.
Mid-Fi Coding (Medium Fidelity)
Balanced collaboration where human sets boundaries and AI fills in details.
Hi-Fi Coding (High Fidelity)
Enhanced human-in-the-loop intervention with focused vision alignment.
Progressive Fidelity
Starting with Lo-Fi for exploration, moving to Mid-Fi for implementation, and applying Hi-Fi for critical sections.
From Waterfall to Agile to AI-Assisted
The Linear Era: Waterfall (1970s-1990s)
Waterfall dominated when software was simpler. Requirements → Design → Implementation → Testing → Deployment. Each phase completed before the next began. It worked because:
- Software scope was limited
- Teams were smaller
- Change was expensive
The Breaking Point: Internet boom. Software complexity exploded. Six-month requirement phases became obsolete before implementation started.
The Iterative Revolution: Agile (2000s-2020s)
Agile flipped the script. Two-week sprints. Constant feedback. Embrace change. Key principles:
- Working software over documentation
- Customer collaboration over contracts
- Responding to change over plans
Why It Worked: Matched the pace of business. Developers could adapt quickly. Users saw progress regularly.
The AI Transformation (2023-Present)
AI assistants changed everything again. A developer + AI can now output what entire teams produced. But traditional methodologies break down:
Waterfall + AI = Chaos: AI needs constant steering. Six-month plans are meaningless when AI can prototype in hours.
Agile + AI = Friction: Sprint planning becomes a bottleneck. AI works at near conversation speed, not sprint speed.
Era of Vibe Coding
Vibe coding emerged from developers actually using AI. It’s built on three realizations:
- AI amplifies both good and bad patterns - Structure matters more than ever
- Documentation becomes executable - What you write shapes what AI builds
- Testing is dialogue - Tests become conversations where you show AI what “correct” means through examples
The shift: From managing people to managing intelligence. From writing all code to directing code generation. From preventing bugs to rapidly fixing them.
The Birth of Vibe Coding
The Accidental Discovery
Vibe coding wasn’t designed - it was discovered. Early AI adopters noticed patterns:
- Successful projects followed similar rhythms
- Failed projects made similar mistakes
- Traditional methods consistently underperformed
The name came from developers saying “you need to get into the vibe” with AI. Not just prompting - creating a flow state where human intention and AI capability merged.
The Principles
1. Documentation as Control Surface
Early adopters realized: AI treats documentation as truth. Write “this system handles millions of users” and AI codes for scale. Write “simple prototype” and AI keeps it minimal.
Discovery: Documentation isn’t describing code - it’s controlling code generation.
2. Smoke Tests as Build Verification
Problem: Without continuous testing, you don’t know if AI is building correctly until it’s too late.
Solution: Create smoke tests that verify each step of development is working as intended.
Discovery: You can’t wait until the end to test - smoke tests let you catch problems immediately as AI builds.
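To make this concrete, here is a minimal sketch of what a verbose smoke test can look like in Python; the User model and create_user function are hypothetical stand-ins for your real application code:
# smoke_test_user_creation.py
import uuid
from dataclasses import dataclass, field

@dataclass
class User:
    email: str
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

def create_user(email: str, password: str) -> User:
    # A real implementation would hash the password and persist the user.
    return User(email=email)

def smoke_test_user_creation():
    # Verbose by design: narrate each step so humans and AI can follow along.
    print("SMOKE TEST: creating a user with a valid email...")
    user = create_user(email="test@example.com", password="hunter2!")
    print(f"  -> created user id={user.id}, email={user.email}")
    assert user.id, "user was not assigned an id"
    assert user.email == "test@example.com", "email was not stored correctly"
    print("SMOKE TEST PASSED: user creation works end to end")

if __name__ == "__main__":
    smoke_test_user_creation()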
3. Fuzzy Architecture
Problem: Detailed upfront design leads to overengineering with AI.
Solution: Start with intentionally vague architecture that crystallizes through implementation.
Discovery: AI helps discover the right architecture by building, not by planning.
4. Continuous Anchoring
Problem: As AI works on new features, it forgets earlier requirements.
Solution: Constantly run tests to verify old functionality still works.
Discovery: AI’s limited context means you must actively maintain working state.
5. Offloading the Load
Problem: AI gets confused by inconsistent patterns and unclear structure.
Solution: Make everything explicit, consistent, and well-organized for AI.
Discovery: The easier you make AI’s job, the better results you get.
The Breakthrough Moment
The key insight: AI is not a developer, it’s an amplifier.
Traditional methods treat AI like a faster developer. Vibe coding recognizes AI as something new:
- Incredible speed, but needs direction
- Limited memory due to context windows, but vast knowledge from training
- Pattern recognition beyond human capability, but can confidently produce errors
- Surprising insights mixed with obvious mistakes
The Method Emerges
Developers started sharing patterns that would become core to vibe coding:
- “DevDocs” Pattern: Excessive documentation as source of truth that guides AI
- “Smoke Tests” Pattern: Verbose tests that validate AI understands your intent
- “Fuzzy Architecture” Pattern: Start intentionally vague, let structure emerge through building
- “Anchor” Pattern: Force AI to verify old functionality still works after changes
- “Offload” Pattern: Structure everything to make AI’s job as easy as possible
These weren’t arbitrary - they solved real problems:
- DevDocs gave AI persistent context across sessions
- Smoke tests caught AI misunderstandings early
- Fuzzy architecture prevented overengineering
- Anchors prevented silent breakage as context drifted
- Offloading reduced AI confusion and errors
Why “Vibe”?
Because it captures something traditional terms miss:
- Not “engineering” - too rigid
- Not “crafting” - too slow
- Not “hacking” - too chaotic
Vibe coding is like jazz. Structure + improvisation. Rules + intuition. Human creativity + AI capability.
The Community Forms
Discord servers. GitHub repos. Blog posts. Developers sharing what worked, warning what didn’t. Patterns refined through thousands of projects.
Common realization: This isn’t a fad. It’s how development works now.
The Present
Today, vibe coding is:
- Documented methodology, not just intuition
- Proven patterns, not just preferences
- Growing community, not just early adopters
The future isn’t AI replacing developers. It’s developers wielding AI through vibe coding.
Why Traditional Methods Fall Short with AI
The Fundamental Mismatch
Traditional methodologies assume human limitations:
- Developers write ~100 lines/day
- Context switching is expensive
- Knowledge transfer takes time
AI breaks these assumptions:
- Generates thousands of lines/minute
- No context switching cost
- Instant “knowledge” of any framework
Where Waterfall Breaks
Problem 1: Upfront Design Becomes Guesswork
- AI can prototype five architectures while you’re writing requirements
- Detailed specs become prison bars, not guidelines
- By implementation phase, better patterns have emerged
Problem 2: Phase Gates Block Learning
- AI reveals design flaws immediately through code
- Waiting for “testing phase” wastes AI’s rapid feedback
- Sequential phases ignore AI’s iterative nature
Where Agile Stumbles
Problem 1: Sprint Velocity Becomes Meaningless
- AI completes “8 story points” in 10 minutes
- Planning poker is absurd when AI codes at conversation speed
- Team velocity metrics don’t capture AI amplification
Problem 2: Ceremonies Become Bottlenecks
- Daily standups slower than AI implementation
- Sprint reviews can’t keep pace with AI output
- Retrospectives happen after AI has moved on
The Core Issues
1. Trust vs Verification Gap
Traditional: Trust developers to write correct code
Reality: AI needs constant verification, not trust
2. Planning vs Discovery
Traditional: Plan then execute
Reality: AI enables discovery through execution
3. Documentation Timing
Traditional: Document after stability
Reality: Documentation drives AI behavior
4. Error Philosophy
Traditional: Prevent errors through process
Reality: Fix errors faster than preventing them
The New Reality
With AI, you’re not managing code creation - you’re managing code generation. This requires:
- Real-time steering, not upfront planning
- Continuous verification, not phase gates
- Living documentation, not post-facto specs
Traditional methods optimize for human constraints that no longer exist.
Code is the Documentation >>> Documentation is the Code
The Old Truth: Code Never Lies
For decades, developers preached “code is the documentation.” Why? Because:
- Documentation drifts from reality
- Comments become lies over time
- Only code executes, only code is truth
The semantic gap was real. No documentation could fully capture runtime behavior, edge cases, actual implementation details. Reading code was the only way to truly understand a system.
The Etymology of “Code”
“Code” comes from Latin “codex” - a book of laws, a systematic collection of statutes.
Originally, a code was:
- A system of rules
- A way to encode meaning
- A formal specification of behavior
Not the implementation - the specification.
The Paradigm Flip
With AI and VDD, we’re returning to the original meaning. Documentation doesn’t describe code anymore - it generates it, and the two are therefore tightly coupled.
Old world: Write code → Extract documentation
New world: Write documentation with AI while generating code
The documentation becomes the “codex” - the law that governs what gets built.
Why This Works Now
AI bridges the semantic gap:
- Natural language → Working implementation
- Intent → Execution
- Specification → System
What changed? The compiler. AI is a compiler for human intent.
The New Reality
When you write:
This service handles authentication with rate limiting of 100 requests per minute
AI doesn’t just read this - it implements it. The documentation isn’t describing code that exists. It’s creating code that will exist.
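As a minimal sketch (assuming an in-memory limiter; every name here is illustrative, not prescribed by VDD), that one sentence might compile into:
# auth_service.py - the documentation line above, compiled into code
import time
from collections import deque

class AuthService:
    MAX_REQUESTS_PER_MINUTE = 100  # taken directly from the documentation

    def __init__(self):
        self._request_times = deque()

    def _enforce_rate_limit(self):
        now = time.monotonic()
        # Forget requests older than the 60-second window.
        while self._request_times and now - self._request_times[0] > 60:
            self._request_times.popleft()
        if len(self._request_times) >= self.MAX_REQUESTS_PER_MINUTE:
            raise RuntimeError("rate limit exceeded: 100 requests/minute")
        self._request_times.append(now)

    def authenticate(self, email, password):
        self._enforce_rate_limit()
        # Real credential verification would go here.
        return bool(email and password)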
Documentation as Source Code
In vibe coding:
- project_description.md compiles to architecture
- interface.md compiles to APIs
- known_requirements.md compiles to features
Your documentation is source code for an AI compiler.
Semantic Precision and “The Butterfly Defect”
The Weight of Words
In human communication, we tolerate ambiguity. “Fast” could mean milliseconds or minutes. “User-friendly” means different things to different people.
AI doesn’t tolerate - it interprets. And propagates.
The Butterfly Defect
Traditional butterfly effect: A butterfly flaps wings in Brazil, causes tornado in Texas.
The Butterfly Defect: You type “modal” instead of “dialog”, your entire UI philosophy shifts.
Real example:
Human: "Create a service for handling user data"
AI: *builds microservice architecture*
vs
Human: "Create a class for handling user data"
AI: *builds single class with methods*
One word. Completely different architecture.
The Propagation Problem
Wrong terminology compounds:
- Say “component” → AI assumes React
- React assumption → hooks everywhere
- Hooks everywhere → state management complexity
- Complexity → performance issues
The defect spreads through every decision.
Finding the Right Words
Two strategies:
1. Use AI as Terminology Guide
Human: "What's the proper term for a popup window that blocks interaction?"
AI: "That's called a 'modal dialog' or simply 'modal'"
2. Define Terms Explicitly
For this project:
- "Service" means a class with business logic
- "Module" means a Python file
- "Component" means a logical grouping of features
Common Defects to Avoid
- “Simple” → AI removes error handling
- “Fast” → AI ignores correctness
- “Modern” → AI adds every new feature
- “Clean” → AI over-abstracts
The Precision Principle
Be specific about:
- Technical terms (service vs class vs module)
- Scope words (simple vs minimal vs basic)
- Quality attributes (fast vs optimized vs efficient)
Remember: AI amplifies ambiguity into architecture.
The Fix
When you catch a Butterfly Defect:
- Stop immediately
- Correct the terminology
- Explicitly state what you mean
- Let AI adjust course
“Fix your terminology now, or refactor your entire architecture later.”
Use Clearly Named Variables
Since AI also understands variables through their names, longer descriptive names are better than short cryptic ones.
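A small illustration (the values are arbitrary and the names invented for this example):
# Cryptic names force AI to infer meaning from usage alone
t = 1_700_000_000
d = 86400
ts = t + d

# Descriptive names carry intent directly into AI's attention
signup_timestamp = 1_700_000_000
seconds_per_day = 86400
subscription_expiry_timestamp = signup_timestamp + seconds_per_day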
AI Taming: Keeping Control While Leveraging Power
The Director’s Chair
In vibe coding, you’re not a programmer anymore - you’re a director.
Traditional coding is like walking: you control every step, every muscle movement. AI-assisted coding should be like driving: you control direction and speed while the machine handles mechanics.
But there’s a problem. Cars don’t randomly drive to unexpected destinations. AI does.
The Self-Driving Paradox
AI can literally code by itself. Give it a vague goal and it will build something. Maybe not what you wanted, but something functional.
This is both AI’s greatest strength and biggest danger.
Evolution of Intent
Traditional Development:
Human: "Build user authentication with email/password, bcrypt hashing, JWT tokens"
AI: *builds exactly that specification*
Vibe Coding:
Human: "Users need secure access to their personal data"
AI: *suggests OAuth, biometrics, passwordless, evaluates tradeoffs*
Human: "Let's go passwordless for better UX"
AI: *implements magic links with rate limiting*
You’re not micromanaging implementation. You’re directing intent and making strategic decisions.
The Shifting Boundary
As AI evolves, the boundary shifts upward:
You Focus On:
- What problems need solving
- What success looks like
- What constraints are non-negotiable
- Which tradeoffs to accept
AI Handles:
- How to implement solutions
- Which patterns fit best
- What optimizations to apply
- How to structure code
Maintaining Control
The key insight: You must remain the director, not become a passenger.
The New Core Skills
Traditional skills (syntax, algorithms) become secondary. Primary skills now:
Intent Articulation
- Express goals clearly
- Communicate context effectively
- Define success criteria
Constraint Specification
- Set boundaries explicitly
- Define non-negotiables
- Specify quality requirements
Quality Recognition
- Identify good solutions and overengineered ones quickly
- Spot potential issues
- Recognize when “good enough” is reached
Vision Preservation
- Maintain project coherence
- Prevent scope creep
- Keep long-term goals in focus
Emergence of New Design Patterns
What are Design Patterns?
Design patterns are reusable solutions to common problems:
- Singleton: One instance only
- Factory: Create objects without specifying class
- Observer: Notify multiple objects of changes
Why they mattered:
- Shared vocabulary between developers
- Proven solutions to recurring problems
- Faster development through reuse
The AI Shift
Traditional patterns remain essential - AI still uses Singleton, Factory, Observer, and other proven solutions. What’s new is that AI changes HOW we work:
- Refactoring is faster (but patterns still guide structure)
- Architecture can evolve more easily (but patterns provide stability)
- Iteration is cheaper (but patterns prevent chaos)
AI doesn’t replace design patterns. It adds a new layer: patterns for human-AI collaboration. These are vibe-patterns.
There is no unified consensus or standard for vibe-patterns. However, we have identified and clarified the most widely used ones (even though they may go by different names).
Vibe coding patterns weren’t designed. They were discovered by developers actually using AI.
Devdocs pattern: Documentation-Driven Architecture
Problem: AI needs context to generate correct code
Solution: Make AI write docs first, then generate the implementation from those docs
Smoke-Test-Driven Specification
Problem: AI confidently generates incorrect code
Solution: Tests become executable specifications
Fuzzy-First Development
Problem: Premature optimization with AI leads to overengineering
Solution: Start intentionally vague, let clarity emerge
Anchor Pattern
The Anchor Pattern preserves working functionality throughout development by establishing a tested baseline before changes and re-validating that baseline after each modification, ensuring AI doesn’t break existing features while adding new ones.
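A minimal sketch of anchoring in practice, assuming a pytest suite whose baseline tests are tagged with a custom anchor marker (the marker name is our convention, not a pytest built-in):
# anchor.py - re-validate the baseline after every AI change
import subprocess
import sys

def run_anchor_tests() -> bool:
    # Runs only tests marked @pytest.mark.anchor - the protected baseline.
    result = subprocess.run([sys.executable, "-m", "pytest", "-m", "anchor", "-v"])
    return result.returncode == 0

if __name__ == "__main__":
    if run_anchor_tests():
        print("ANCHOR HOLDS: existing functionality still works")
    else:
        print("ANCHOR BROKEN: fix regressions before adding anything new")
        sys.exit(1)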
Offload pattern
Makes AI’s job easier by enforcing limitations.
Why These Patterns Matter
1. Predictable AI Behavior
When you follow patterns, AI responds consistently across multiple projects and across different models. Random approaches yield random results.
2. Reduced Defect Propagation
Patterns contain the errors. Without patterns, one mistake spreads everywhere.
3. Faster Development
Not coding faster - arriving at correct solution faster. Less backtracking.
4. Team Alignment
When everyone uses the same patterns, AI outputs become consistent across the team.
5. Learning Acceleration
Patterns are teachable. New developers can learn “the vibe” quickly.
The Compound Effect
Individual patterns are useful. Combined, they’re transformative: flexible yet directed development, confident incremental progress, and in the end a robust, intended outcome.
The Human-in-the-Loop Principle
Redefining the Loop
Traditional definition: Human reviews and corrects AI outputs.
VDD definition: Human and AI alternate control dynamically, each contributing their unique strengths.
The loop isn’t about supervision - it’s about synergy.
Why Humans Stay Essential
The Reality Gap
AI understands the world through text and code, not lived experience. It knows about user frustration but hasn’t felt it. It can optimize algorithms but doesn’t know when a UI feels “off.” This experiential gap means AI needs human judgment for real-world applicability.
The Halting Problem
AI doesn’t know when to stop improving. Ask it to optimize code and it will keep optimizing regardless of whether the code is already good enough. Ask it to add features and it will add them endlessly. It lacks the human sense of “good enough” - that crucial judgment of when additional work yields diminishing returns.
Current AI systems are trained to follow instructions, not question them. They won’t push back when you’re overengineering. They won’t tell you to stop when the solution is sufficient unless you specifically ask them to question it.
Contextual Blindness
AI lacks persistent memory across sessions and can’t see your broader ecosystem. It doesn’t know your team’s skill level, your deadline pressures. These invisible constraints shape every development decision, yet AI operates without them.
The Ego Trap
Human Ego as Bottleneck
The “I could do it better” syndrome kills AI productivity:
Symptoms:
- Micromanaging every implementation detail
- Dismissing AI suggestions without consideration
- Using AI as glorified autocomplete instead of collaborator
- Insisting on personal coding style over functionality
Reality check: Your “perfect” code that takes hours might be less valuable than AI’s “good enough” code in minutes.
AI’s Hidden Ego
AI has its own form of ego - overconfidence without awareness:
Manifestations:
- Confidently presenting broken solutions
- Over-engineering simple problems to appear sophisticated
- Insisting on “best practices” regardless of context
- Adding unrequested features to seem helpful
The danger: AI’s confidence is uncorrelated with correctness. It presents wrong answers with the same certainty as right ones.
Finding the Balance
Human Contributions
- Strategic Vision: Where we’re going and why
- Quality Judgment: What’s good enough vs. what needs improvement
- Context Awareness: Understanding constraints AI can’t see
- Creative Direction: Novel approaches and breakthrough thinking
- Ethical Boundaries: What should and shouldn’t be built
AI Contributions
- Rapid Implementation: Turning ideas into code at superhuman speed
- Pattern Recognition: Identifying solutions from vast training data
- Tireless Iteration: Refining without fatigue or frustration
- Syntax Perfection: Eliminating typos and formatting issues
- Parallel Exploration: Trying multiple approaches simultaneously
The Collaboration Dance
Effective human-in-the-loop follows a rhythm:
- Human sets intent - Clear goal without overspecification
- AI explores solutions - Multiple approaches generated quickly
- Human evaluates direction - Course correction, not micromanagement
- AI refines implementation - Detailed work within boundaries
- Human validates results - Ensuring alignment with vision
- Cycle repeats - Each iteration improves understanding
This isn’t command-and-control. It’s jazz improvisation with structure.
Practical Calibration
Over-Control (Micromanagement)
Human: "Create a function named calculateTotal with parameters a and b,
add them using the + operator, store in variable named sum,
return sum with explicit return statement"
Result: You’re just typing through AI. No leverage gained.
Under-Control (Abdication)
Human: "Make the form better"
AI: *adds 15 validation rules, 3 step wizard, progress indicators, auto-save, keyboard shortcuts, and animated transitions*
Human: "I just wanted better error messages..."
Result: Chaos and wasted cycles.
Optimal Control (Collaboration)
Human: "I need to calculate invoice totals including tax and discounts"
AI: *proposes implementation approach*
Human: "Good structure, but discounts should apply before tax"
AI: *adjusts logic while maintaining approach*
Human: "Perfect. Now add support for multiple tax rates"
AI: *extends cleanly within established pattern*
Result: Rapid, aligned development.
The Learning Loop
Human-in-the-loop creates a feedback system that improves over time:
You learn:
- AI’s strengths and blind spots
- How to communicate intent effectively
- When to intervene vs. when to let AI run
AI learns (within session):
- Your coding preferences
- Project patterns and conventions
- Domain-specific requirements
Together you develop:
- Efficient communication shortcuts
- Productive rhythm
- Shared understanding
Signs of Healthy Collaboration
✅ You’re regularly surprised by elegant AI solutions
✅ You catch issues early before they cascade
✅ Development feels like pair programming, not dictation
✅ You understand everything being built
✅ AI stays within intended boundaries
✅ Progress is rapid but controlled
Signs of Unhealthy Patterns
❌ Every AI suggestion needs major rework
❌ You’re constantly fighting AI’s approach
❌ Code is being generated you don’t understand
❌ More time correcting than progressing
❌ Feeling like you’re battling or babysitting
The Core Truth
Human-in-the-loop isn’t a temporary limitation waiting for better AI. It’s the optimal model for creative collaboration. Humans provide meaning and judgment. AI provides speed and capability.
Together, we build what neither could alone.
The loop isn’t a constraint - it’s the key to amplification.
Attention Mechanism
What Attention Actually Means
When AI reads your code, it doesn’t read like humans do. It uses “attention” - weighted focus across all tokens simultaneously.
Think of it like this:
- Human reading: Sequential, left-to-right, building understanding
- AI reading: Parallel, everything at once, finding patterns and connections
Why it matters
Knowing how attention works can help you offload complexity from the LLM.
The Spotlight Metaphor
Imagine a dark stage with 1000 actors. AI has 1000 spotlights it can dim or brighten. When you mention “user authentication”, suddenly:
- Spotlights on “password” brighten
- Spotlights on “login” brighten
- Spotlights on “security” brighten
- Spotlights on “recipe ingredients” dim
This happens for every token, instantaneously.
Why This Matters for Vibe Coding
1. Context and Relevance Matter Most
While AI processes everything in parallel, what matters most is relevance to the task at hand.
# If you're asking about authentication, this gets attention
def authenticate_user():
    pass

# This gets less attention regardless of position
def calculate_recipe_calories():
    pass
Put related information together, make intent clear.
2. Proximity Is Power
Related concepts near each other strengthen attention bonds.
# Weak attention bond
class User:
    pass

# ... 500 lines later ...

def authenticate_user():
    pass
vs
# Strong attention bond
class User:
    pass

def authenticate_user():
    pass
Keep related concepts close.
3. Repetition Reinforces
Each mention strengthens attention pathways.
# Single mention - weak signal
# This handles user data

# Multiple mentions - strong signal
# This UserService handles user authentication
# It manages user sessions and user preferences
class UserService:
    ...
But don’t overdo it - that’s keyword stuffing.
Context Window Limits
AI has finite context. When you approach the context limit:
- Relevant content to your question gets prioritized
- AI focuses on what’s needed for the current task
- Very long contexts can sometimes cause confusion
- Organization and clarity matter more than position
This is why grouping related files matters when sharing multiple files.
Practical Implications
1. Lead with Intent
# BAD: Burying the lead
# This function does various things with data
# It processes some stuff and returns results
# Oh, by the way, it's for authentication
# GOOD: Clear intent upfront
# Authenticates users against the database
# Returns JWT token on success, throws on failure
2. Context Clustering
When sharing multiple files, group by relevance:
1. Core business logic files
2. Related utility files
3. Configuration files
4. Test files
Not alphabetically or randomly.
3. Distinctive Names for better attention
# Weak anchor
def process():
    pass

# Strong anchor
def process_payment_webhook():
    pass
Specific names help AI maintain focus.
Attention Hijacking
Some patterns grab too much attention:
TODO/FIXME Comments
# TODO: Fix this security vulnerability
def safe_function():
    return data  # AI obsesses over the TODO
AI might focus on the TODO instead of your actual request.
Strong Keywords
Words like “deprecated”, “legacy”, “broken”, “hack” trigger strong attention. Use carefully.
Working with Attention
The Priming Pattern
Start conversations by setting attention focus:
"I'm working on the authentication module.
Please focus on security and user experience."
The Context Window Strategy
Share files in order of importance:
- The file you want to modify
- Direct dependencies
- Related interfaces
- Supporting utilities
The Attention Reset
When AI gets fixated on wrong things:
"Let's refocus. Ignore the previous discussion about X.
I need help specifically with Y."
Attention vs Memory
Attention ≠ Memory
- Attention: What AI focuses on right now
- Memory: What AI remembers from the codebase
You can direct attention. You can’t change memory.
The Compound Effect
Good attention management:
- Faster correct responses
- Less confusion
- Better architectural coherence
- Fewer tangential suggestions
Poor attention management:
- AI suggests fixing non-issues
- Focuses on wrong patterns
- Misses critical requirements
- Generates irrelevant code
Quick Tips
- Name precisely - auth_service, not service1
- Comment intentions - Why, not what (see the sketch after this list)
- Order thoughtfully - Important stuff first
- Cluster related - Group by concept
- Reset when needed - Don’t fight confused attention
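For the comment-intentions tip, a small sketch (the retry rationale is invented for illustration):
# What-comment: restates the code, gives AI nothing extra to work with
retries = 3  # set retries to 3

# Why-comment: carries intent that AI can respect when changing this code
retries = 3  # the payment gateway is flaky; three retries keeps checkout
             # failures within our error budget without stalling the UI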
Understanding attention helps you communicate effectively with AI. It’s not about tricking the system - it’s about clarity.
Context Windows and Memory
The Fundamental Limitation
Every AI has a context window - the total amount of text it can “see” at once. Think of it as RAM, not hard drive. Even though we have seen huge gains in context length, it still matters to use it efficiently.
Context Window !== Memory
Context Window: What AI can see right now
Memory: What AI learned during training
You can fill the context window. You can’t add to memory.
The Sliding Window Problem
As conversation grows, early content falls out:
[Start of conversation]
"Here are my ground rules..." <- Eventually falls out
[... many messages ...]
"Why aren't you following the ground rules?" <- AI: "What ground rules?"
[Current message]
The Anchor pattern is a good fix for this issue.
Token Economics
Everything counts against the window:
- Your messages
- AI responses
- Code snippets
- Error messages
- File contents
Long error stacktraces can eat 1000+ tokens instantly.
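A rough budgeting heuristic (the four-characters-per-token ratio is a common approximation for English text, not a rule - real tokenizers vary by model):
def estimate_tokens(text: str) -> int:
    # ~4 characters per token is crude but fine for budgeting intuition.
    return max(1, len(text) // 4)

stacktrace = "Traceback (most recent call last):\n" + '  File "app.py", line 1\n' * 100
print(estimate_tokens(stacktrace))  # long traces burn tokens fast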
Strategies for Large Codebases
1. The Core + Context Pattern
Don’t share everything. Share:
- Core file being modified
- Direct dependencies only
- Relevant interfaces
- Specific test files
2. The Summarization Ladder
For huge codebases:
Level 1: Full implementation files (for current work)
Level 2: Interface files (for dependencies)
Level 3: Summary docs (for distant modules)
Level 4: Architecture overview (for system context)
3. The Refresh Pattern
Periodically refresh context:
"Let me remind you of the key requirements:
- [Critical point 1]
- [Critical point 2]
We're currently working on [specific task]"
4. The Checkpoint Strategy
After major milestones:
"We've completed the authentication module.
Here's what we built: [summary]
Now let's move to the authorization module."
Window Management Techniques
Compression via Abstraction
Instead of:
# Sharing full implementation
class UserService:
    def __init__(self, db, cache, logger):
        self.db = db
        self.cache = cache
        self.logger = logger

    def create_user(self, email, password):
        ...  # 50 lines of implementation

    def authenticate(self, email, password):
        ...  # 30 lines of implementation

    # ... 10 more methods
Share:
# UserService interface
class UserService:
    """Handles user CRUD and authentication"""
    def create_user(self, email: str, password: str) -> User: ...
    def authenticate(self, email: str, password: str) -> str: ...  # JWT token
    def get_user(self, user_id: str) -> User: ...
    # ... just signatures
The DevDocs pattern explicitly creates a document called interfaces_and_endpoints.md, which can be used for this exact purpose as well.
Selective Inclusion
Use AI to identify what to include:
"I need to modify the payment processing.
What files should I share with you?"
AI often knows what it needs to see.
The Layered Approach
- Start with high-level overview
- Drill into specific modules
- Zoom into exact functions
- Back out to integration level
Like Google Maps for code.
The Context Budget
Treat tokens like money:
- Budget for each conversation
- Spend on valuable context
- Cut unnecessary verbosity
- Save for complex operations
Signs You’re Out of Context
- AI forgets earlier requirements
- Suggestions contradict prior work
- Generic responses increase
- AI asks for clarification repeatedly
Time to refresh or start new conversation.
Strengths and Limitations
AI Strengths: The Superpowers
1. Syntax Perfection
AI never forgets a semicolon, bracket, or quote. Perfect syntax, every time.
2. Pattern Recognition
Sees patterns across thousands of files instantly.
"This looks like a Factory pattern but with Singleton
characteristics. Consider using Dependency Injection instead."
Human might miss this. AI spots it immediately.
3. Boilerplate Generation
CRUD operations, test scaffolding, API endpoints - AI excels at repetitive patterns.
"Create REST endpoints for User model"
*Instantly generates all 5 endpoints with proper error handling*
4. Language Translation
Moving between languages/frameworks:
"Convert this Python Flask app to Node Express"
*Accurately translates idioms and patterns*
5. Documentation Generation
Turns code into docs effortlessly:
- API documentation
- README files
- Code comments
- Architecture diagrams (as text)
6. Refactoring Speed
Rename across files, extract methods, restructure - seconds not hours.
7. Best Practices Knowledge
Knows every style guide, security practice, performance optimization.
AI Limitations:
1. Runtime Blindness
While AI can write code for interactive features, it can’t always test them, because UI interactions, voice inputs, visual feedback, and the like require human senses and actions. We become AI’s eyes and hands for runtime validation.
2. Hallucination Tendency
Makes up plausible-sounding APIs that don’t exist.
import tensorflow as tf
tf.quantum.entangle() # Sounds cool, doesn't exist
3. Context Conflation
Mixes patterns from different contexts.
// React + jQuery mixed incorrectly
$('#root').setState({ value: 'confused' })
4. Over-Engineering Bias
AI loves adding unnecessary complexity.
Human: "Store user preferences"
AI: *Creates distributed cache with Redis, event sourcing, and CQRS*
Working with Strengths
Leverage Pattern Recognition
"This code smells like [pattern]. Suggest refactoring."
Use for Exploration
"Show me 3 different ways to implement this"
For Runtime Blindness
Always verify with actual execution:
"Generate the code, I'll run it and share results"
For Over-Engineering
Add constraints:
"Simplest solution possible, no external dependencies"
The Golden Rules
Trust AI For:
- Syntax and structure
- Common patterns
- Refactoring mechanics
- Documentation
- Exploration
Don’t Trust AI For:
- Business logic
- Runtime behavior
- Performance assumptions
- Security without verification
- Architecture without thought
The Collaboration Sweet Spot
Best results when:
- Human provides vision and verification
- AI provides implementation and iteration
- Human catches logical errors
- AI catches syntax errors
- Both challenge each other
Think partnership, not replacement.
When to Guide vs When to Follow
The Fundamental Question
Every AI interaction presents a choice: Do I lead or do I follow? Wrong choice = wasted time.
When to Guide (You Lead)
1. Business Requirements
AI doesn’t know your users, market, or constraints.
GUIDE: "Users need to export data for tax purposes in specific format"
NOT: "How should users export data?"
2. Architecture Decisions
AI suggests patterns. You choose based on reality.
GUIDE: "Use REST APIs since our mobile team knows it well"
NOT: "Should we use REST or GraphQL?"
3. Performance Constraints
AI doesn’t know your scale or bottlenecks.
GUIDE: "This needs to handle 10k requests/second"
NOT: "Make it fast"
4. Security Requirements
AI knows general security. You know your threat model.
GUIDE: "We need SOC2 compliance with audit logging"
NOT: "Make it secure"
5. Integration Points
AI doesn’t know your existing systems.
GUIDE: "This must integrate with our legacy SOAP API"
NOT: "How should this communicate with other services?"
When to Follow (AI Leads)
1. Implementation Details
Once direction is set, let AI handle specifics.
GUIDE: "Users need to stay logged in for 30 days"
FOLLOW AI's answer to: "What's the best way to implement JWT refresh tokens?"
2. Best Practices
AI knows current standards better than most developers.
GUIDE: "It needs to display user profiles"
FOLLOW AI's answer to: "How should I structure this React component?"
3. Error Handling
AI excels at comprehensive error cases.
GUIDE: "It processes payment webhooks"
FOLLOW AI's answer to: "What errors should this API endpoint handle?"
4. Refactoring Suggestions
AI sees patterns you might miss.
GUIDE: "It must remain backwards compatible"
FOLLOW AI's suggestion when asking: "This code feels complex. Suggest improvements."
5. Technology Selection
For well-understood problems, AI knows tool tradeoffs.
GUIDE: "We're on AWS with Redis experience"
FOLLOW AI's recommendation for: "What caching solution for session data?"
The Grey Zone
Some decisions need negotiation:
API Design
Human: "I need user management endpoints"
AI: "Here's a RESTful design..."
Human: "Actually, we prefer GraphQL"
AI: "Here's the GraphQL schema..."
Start by following, then guide corrections.
Database Schema
AI: "Suggests normalized schema"
Human: "We optimize for reads, denormalize"
AI: "Here's denormalized version"
Let AI propose, you dispose.
Testing Strategy
Human: "We need tests"
AI: "Unit, integration, and E2E tests..."
Human: "We only do integration tests"
AI: "Focused integration test suite..."
Pattern Recognition
Guide When:
- Domain-specific knowledge required
- Business logic involved
- External constraints exist
- Past decisions affect current
- User experience matters
Follow When:
- Technical implementation unclear
- Multiple valid approaches exist
- Best practices needed
- Common patterns apply
- You’re learning something new
Communication Strategies
When Guiding
Be specific and constraining:
"Build auth with these requirements:
- Email/password only
- 2FA via SMS
- Session length 24 hours
- PostgreSQL storage"
When Following
Be open and exploratory:
"I need authentication. What approach would you recommend given modern security practices?"
The Hybrid Approach
Most effective:
"I need auth for a B2B SaaS (context).
What's the current best practice (follow)?
Must integrate with Okta (guide)."
Managing AI Drift
What Is AI Drift?
AI drift happens when AI gradually moves away from your original intent. Like a boat without an anchor, small currents compound into major course changes.
Types of Drift
1. Complexity Drift
Starts simple, becomes enterprise.
Request 1: "Add user login"
AI: Simple email/password
Request 2: "Add password reset"
AI: Adds email service, queues, tokens
Request 3: "Add remember me"
AI: Adds Redis, session management, device tracking
Request 4: "Add social login"
AI: OAuth, SAML, OpenID, full identity provider
Suddenly you’re building Auth0.
2. Style Drift
Inconsistent patterns across codebase.
# Monday - functional style
def process_user(user_data):
    return transform(validate(user_data))

# Tuesday - OOP style
class UserProcessor:
    def process(self, user_data):
        self.validate()
        self.transform()

# Wednesday - procedural style
user_data = get_user()
validate_user(user_data)
transform_user(user_data)
3. Architecture Drift
Original design morphs unrecognizably.
Start: "Simple REST API"
Drift 1: Add GraphQL "for flexibility"
Drift 2: Add WebSockets "for real-time"
Drift 3: Add gRPC "for performance"
End: Four different API protocols
4. Scope Drift
Features multiply beyond intent.
Original: "Todo app"
+ "Add categories"
+ "Add sharing"
+ "Add comments"
+ "Add notifications"
= Project management suite
5. Technology Drift
Stack grows unnecessarily.
Start: React + Node
+ Redux "for state"
+ MongoDB "for flexibility"
+ Docker "for deployment"
+ Kubernetes "for scaling"
+ Kafka "for events"
= Overengineered todo app
Why Drift Happens
Drift can happen for numerous reasons:
1. AI Lacks Project Context
Each request seems isolated; AI doesn’t see the big picture. The most common cause is poor context management.
2. AI Optimizes Locally
Makes each piece “better” without considering whole.
3. Pattern Matching Gone Wild
AI applies patterns even when inappropriate.
4. No Persistent Memory
Can’t remember “we decided to keep it simple.”
Drift Recovery
When You’ve Already Drifted
- Acknowledge the drift
"We've drifted from simple to complex.
Let's identify what we can remove."
- Document current state
"List all technologies and patterns we're now using"
- Identify core requirements
"What are the actual must-haves vs nice-to-haves?"
- Plan simplification
"How can we meet core requirements with minimal stack?"
- Incremental rollback: Don’t rewrite. Gradually simplify.
The Drift Dialogue
Catching Early Drift
AI: "We should add caching for performance"
You: "Is performance actually a problem?"
AI: "Not yet, but..."
You: "Let's wait until it is"
Redirecting Architecture Drift
AI: "This would be cleaner with microservices"
You: "We're keeping monolith for simplicity"
AI: "Understood, here's monolith version"
Preventing Scope Drift
AI: "While we're at it, should we add..."
You: "No, let's finish current feature first"
Positive Drift
Not all drift is bad. Sometimes AI suggests genuinely better approaches.
Evaluating Positive Drift
Ask:
- Does this simplify or complicate?
- Does this align with project goals?
- Is this solving real or imagined problems?
- What’s the maintenance cost?
Accepting Good Ideas
AI: "Instead of custom auth, use NextAuth"
You: "That actually simplifies things. Let's do it."
The Meta-Strategy
Best drift management is prevention:
- Clear vision from start
- Explicit constraints
- Regular reality checks
- Simplicity as core value
Remember: AI will always suggest more. Your job is knowing when to say no.
Essential Ground Rules
The Friction Problem
AI brings its own biases to every project - preferred frameworks, design patterns, coding styles. Without explicit guidance, AI defaults to what it “thinks” is best, not what your project needs.
This creates friction. You spend more time correcting AI’s assumptions than building features.
Ground rules eliminate this friction by establishing clear expectations upfront.
Two Types of Rules
Universal Rules: Apply to every project, every time Project Preferences: Specific to your tech stack, team, or domain
Both should be defined at the start of each AI session.
Universal Ground Rules
1. No Unnecessary Mocking
"Only mock external dependencies (APIs, databases). Use real implementations for everything else."
Why it matters:
- AI tends to mock everything, making tests meaningless
- Real implementations reveal real bugs
- Integration issues surface immediately, not in production
Valid exceptions:
- Third-party API calls
- Database connections in unit tests
- Time-dependent operations
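A minimal pytest sketch of this rule (the exchange-rate functions are hypothetical); only the external call is replaced, while the real conversion logic runs:
# test_conversion.py - mock only the external dependency
import sys

def fetch_usd_rate(currency: str) -> float:
    # External dependency: in real code this calls a third-party API.
    raise NotImplementedError("network call - mock me in tests")

def convert_to_usd(amount: float, currency: str) -> float:
    # Real business logic under test - runs for real, never mocked.
    return round(amount * fetch_usd_rate(currency), 2)

def test_convert_to_usd(monkeypatch):
    # Replace ONLY the external API call; the conversion logic executes.
    monkeypatch.setattr(sys.modules[__name__], "fetch_usd_rate",
                        lambda currency: 1.10)
    assert convert_to_usd(100.0, "EUR") == 110.0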
2. Filepath Documentation
"Start every file with a comment showing its path relative to project root"
Example:
# src/services/auth_service.py
from typing import Optional
import jwt
Benefits:
- Clear navigation in large codebases
- Context preserved across AI sessions
- Easier refactoring and file moves
3. Modern Stack Only
"Target modern environments only. No legacy browser support or deprecated language features."
Why this rule exists:
- AI adds unnecessary compatibility layers
- Legacy support triples complexity
- Modern features improve developer experience
Add compatibility only when explicitly needed.
4. Meaningful Commits
"Make atomic commits with descriptive messages following conventional format (feat:/fix:/docs:)"
Good commit examples:
feat: add user authentication flow
fix: resolve race condition in data fetcher
docs: update API endpoint documentation
Benefits:
- Git history becomes documentation
- Easy rollbacks and bisecting
- Clear progression tracking
5. Modular Development
"Build features as isolated modules with clear interfaces"
Structure example:
feature/
├── index.ts # Public interface
├── types.ts # Type definitions
├── logic.ts # Core business logic
├── tests.ts # Feature tests
└── README.md # Feature documentation
6. Explicit Naming
"Use descriptive variable and function names. Clarity over brevity."
Examples:
// Bad
const d = new Date();
const u = users.filter(x => x.a > 18);
// Good
const currentDate = new Date();
const adultUsers = users.filter(user => user.age > 18);
Setting Up Ground Rules
Create a GROUND_RULES.md file in your project root:
# Project Ground Rules
## Universal Rules
1. Only mock external dependencies
2. Include filepath comment at top of each file
3. Target modern environments only (Node 18+, ES2022+)
4. Commit atomically with conventional messages
5. Build features as isolated modules
6. Use explicit, descriptive naming
## Project Preferences
- Framework: [Your choice]
- Testing: [Your approach]
- State Management: [Your pattern]
- Error Handling: [Your strategy]
## Code Style
- Max line length: 100 characters
- Indent: 2 spaces
- Quotes: Single for strings
- Semicolons: [Yes/No]
## AI Instructions
Read and follow these rules for all code generation.
Ask for clarification if rules conflict with requirements.
Starting Each Session
Begin every AI conversation with:
"Please read GROUND_RULES.md and apply these consistently throughout our session"
Domain-Specific Rules
Web Applications
## Web Development Rules
- Semantic HTML5 elements required
- Mobile-first responsive design
- WCAG 2.1 AA accessibility minimum
- CSS modules or styled-components (no inline styles)
- Lazy load images and components
- SEO meta tags on all pages
APIs and Services
## API Development Rules
- RESTful conventions unless GraphQL specified
- Consistent error response format
- Input validation on all endpoints
- Rate limiting from day one
- OpenAPI documentation required
- CORS configuration explicit
Data Processing
## Data Pipeline Rules
- Schema validation before processing
- Idempotent operations
- Clear ETL stage separation
- Audit logging for all transformations
- Error recovery strategies defined
- Sample data for all pipelines
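As a sketch of the schema-validation rule above (the field names are invented for illustration):
REQUIRED_FIELDS = {"user_id": str, "amount": float, "currency": str}

def validate_record(record: dict) -> dict:
    # Reject malformed records before they enter the pipeline.
    for name, expected_type in REQUIRED_FIELDS.items():
        if name not in record:
            raise ValueError(f"missing required field: {name}")
        if not isinstance(record[name], expected_type):
            raise ValueError(f"{name} must be {expected_type.__name__}")
    return record

validate_record({"user_id": "u42", "amount": 9.99, "currency": "EUR"})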
Language-Specific Rules
Python
## Python Standards
- Python 3.10+ features allowed
- Type hints required for all functions
- Docstrings for public APIs
- Black formatting (line length 100)
- pytest for testing
- Poetry for dependency management
Evolution Strategy
Ground rules aren’t static. Track patterns:
When you correct AI repeatedly → Add a rule
When rules cause friction → Refine or remove
When context changes → Update accordingly
Measuring Success
Good ground rules result in:
- ✅ Less time correcting AI output
- ✅ Consistent code across sessions
- ✅ Fewer “why did AI do that?” moments
- ✅ Smoother development flow
- ✅ Higher quality first attempts
Bad ground rules cause:
- ❌ Constant rule conflicts
- ❌ AI confusion and errors
- ❌ Overly restrictive development
- ❌ More explanation than benefit
The Compound Effect
Well-defined ground rules create compound benefits:
Session 1: Save 10 minutes not explaining preferences
Session 10: Save hours from consistent patterns
Session 100: Entire codebase follows your standards automatically
Ground rules are an investment. Define them once, benefit forever.
Remember: Ground rules are your contract with AI. Make them clear, keep them current, and enforce them consistently.
The Data Dump
Starting with Complete Context
The Data Dump is the foundation of vibe coding. It’s where you transfer all project knowledge from your head to AI’s context. Without a thorough data dump, AI fills gaps with assumptions - usually wrong ones.
Think of it like briefing a new team member, except this team member has no implicit understanding of your industry, company, or goals. Everything must be explicit.
Why Data Dump First?
Both traditional development and vibe coding start with understanding, not with code.
- AI has no context - It doesn’t know your constraints, users, or goals
- Assumptions compound - Early misunderstandings cascade into architectural disasters
- Words shape code - How you describe the project determines what gets built
- Clarity forces clarity - Explaining to AI reveals your own fuzzy thinking
The Data Dump is an investment. Spend an hour here to save days later. The quality of your entire project depends on the quality of initial understanding.
The Complete Data Dump Process
Step 1: Gather Everything
Before talking to AI, collect:
- Existing project descriptions (formal and informal)
- Existing code or prototypes
- Unstructured notes
- User stories
- Technical constraints
- Business constraints
- Team capabilities
Don’t filter yet. Dump everything.
Step 2: Share Raw Information
Start with unprocessed information:
"I'm building a personal finance tracker. Here's all the context:
From my notes:
- Users complain existing apps are too complex
- Need to track expenses across multiple currencies
- I travel frequently for work
- Privacy is critical - no cloud storage
- Must work offline
- Export for tax purposes
- Simple is better than feature-rich
From user research:
- Most people abandon finance apps after 2 weeks
- Main complaint: too much data entry
- Want to see spending patterns quickly
- Don't care about investment tracking
Technical context:
- I know React and Node
- Prefer not to learn new frameworks
- Have 3 months to build MVP
- Will be a solo developer"
Step 3: Request Comprehensive Understanding
Don’t let AI start coding. Force understanding first:
Based on all the information I've shared, please provide a comprehensive explanation of:
1. What this project is and what it aims to achieve
2. Who the target users are and their main pain points
3. The core problems we're solving
4. The key constraints and requirements
5. What success looks like for this project
Do NOT provide technical solutions yet. Focus only on demonstrating
deep understanding of the project context and goals.
Step 4: Verify Understanding
AI will respond with its interpretation. This is critical - read carefully and correct any misunderstandings:
"Your understanding is mostly correct, but let me clarify:
- When I said 'simple', I mean 'minimal features', not 'easy to implement'
- Multi-currency isn't for investment - it's for travel expenses
- Offline-first is non-negotiable, not a nice-to-have"
Once you feel that AI understands the project from multiple angles, you can again use AI to create structured documentation.
The DevDocs Pattern
Why Documentation Drives Development
In traditional development, code comes first and documentation follows. In VDD, this relationship inverts: documentation becomes the source code that AI compiles into implementation. When AI understands not just what to build but why and how it fits together, it transforms from a code generator into a thoughtful collaborator.
The DevDocs pattern solves three critical problems:
- Vision Drift - Without explicit documentation, AI gradually diverges from your intent
- Context Loss - AI has no persistent memory between sessions
- Cognitive Overload - Mixing planning and implementation causes errors and hallucinations
By separating planning (documentation) from execution (code), we give AI clear, focused tasks it can excel at.
The Core Principle
DevDocs is real-time projection of AI’s understanding in markdown format. It’s not passive documentation - it’s active development logic that drives implementation.
The workflow:
- AI documents what it plans to implement
- Human reviews and adjusts to match vision
- AI implements based on agreed documentation
- Documentation evolves with the project
This creates a feedback loop where documentation and code stay synchronized, and it allows HILing at every step to guide AI in the intended direction.
DevDocs isn’t about perfect documentation. It’s about maintaining a shared mental model between human vision and AI implementation. The documentation becomes the meeting ground where human intent transforms into working code.
Example DevDocs Structure
A complete DevDocs folder provides layered context from high-level vision to specific implementation details:
project-root/
└── devdocs/
    ├── foundation/
    │   ├── project_description.md              # What we're building
    │   ├── philosophy.md                       # Core principles and values
    │   └── known_requirements.md               # Known requirements
    │
    ├── concepts/
    │   ├── concepts.md                         # Extracted technical concepts
    │   ├── simplified_concepts.md              # Prototype-ready versions
    │   ├── concept_clarifications/             # Detailed concept specs
    │   └── simplified_concept_clarifications/  # Simplified specs
    │
    ├── enhancements/                           # Future improvements
    ├── explorations/                           # Freestyle exploration and analysis documents
    │
    └── modules/
        └── [module_name]/
            ├── what_is_this_for.md
            ├── interfaces_and_endpoints.md
            ├── integration_points.md
            ├── integration_requirements.md
            ├── limitations.md
            ├── possible_use_cases.md
            ├── edge_cases_covered.md
            ├── example_usage.md
            └── summary.md
Besides these, other side-product documents may exist depending on the project. Now we will go through each of them and explain what makes them powerful for VDD.
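If you want to bootstrap this structure mechanically before the first AI session, a few lines of Python are enough. This is a minimal sketch, assuming the folder names from the tree above (adjust to your project):

```python
# scaffold_devdocs.py - a minimal sketch that bootstraps the DevDocs
# structure shown above. Paths mirror the tree; rename to taste.
from pathlib import Path

FOLDERS = [
    "devdocs/foundation",
    "devdocs/concepts/concept_clarifications",
    "devdocs/concepts/simplified_concept_clarifications",
    "devdocs/enhancements",
    "devdocs/explorations",
    "devdocs/modules",
]
SEED_FILES = [
    "devdocs/foundation/project_description.md",
    "devdocs/foundation/philosophy.md",
    "devdocs/foundation/known_requirements.md",
]

for folder in FOLDERS:
    Path(folder).mkdir(parents=True, exist_ok=True)
for seed in SEED_FILES:
    Path(seed).touch(exist_ok=True)  # empty placeholders for AI to fill
```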
Root Documentation: The Foundation
Why Foundation Documents Matter
Foundation documents are the bedrock of VDD. They capture your original vision in a structured format, establishing the core truths that guide every decision throughout the project.
These three documents form a hierarchy of intent:
- project_description.md - What we’re building and why
- philosophy.md - Core principles and beliefs driving decisions
- known_requirements.md - Specific requirements and constraints
AI reads these first in every session. They become the lens through which AI interprets all subsequent project development.
The Three Foundation Documents
1. project_description.md - The North Star
This document transforms scattered project information into structured understanding. It’s the single source of truth about what you’re building.
Core Questions to Answer:
- What are we building?
- What are the various scopes of this project?
- What problem are we solving?
- What does success look like?
- Who will use it and why?
2. philosophy.md - The Soul
Philosophy captures the intangible aspects that shape every decision. It's about values, not features. This document prevents implementations that are technically correct but wrong in vision or spirit.
Core Question to Answer:
- What is the philosophy of this project?
3. known_requirements.md - The Contract
Requirements translate vision into concrete constraints. They define the boundaries within which AI must operate.
Core Questions to Answer:
- What technical constraints exist?
- What business rules apply?
- What user needs must be met?
- What regulations must we follow?
Concept Documents
Identifying concepts is critical during development. Poor concept understanding evolves into flawed architecture and toxic development cycles.
Concepts are meta-abstractions encompassing any development aspect - from essential requirements to payment verification modules to unique architectural patterns. Concept documentation articulates what needs building at a high level, enabling truly modular development practices.
concepts.md
This document extracts and lists all key technical concepts from the foundation documentation (project_description.md, philosophy.md, known_requirements.md). We instruct AI to focus solely on essential technical concepts to prevent bloat.
concept_clarifications folder
For each concept in concepts.md, we have AI generate detailed clarification documents. This folder typically contains around 10 documents, each thoroughly explaining a single concept. As humans in the loop, reading these reveals exactly how AI interprets our concepts.
Each clarification document systematically addresses:
- What the concept is and why it matters
- How it benefits the overall project
- How it constrains the overall project
- Required input information
- Core processes involved
- Output information or relay points
- Expected positive outcomes when realized
- Potential negative outcomes to avoid
This multi-perspective analysis ensures nothing gets overlooked.
simplified_concepts.md
New development typically begins with prototyping, then iterative enhancements transform prototypes into MVPs. Reading all full-scope concepts in concepts.md can feel overwhelming - for both humans and AI. Our responsibility as humans in the loop is orchestrating AI’s work through modular, gradual, controllable increments.
AI lacks this self-regulation. While you can request simplified prototype implementations, AI’s simplification intuition remains poorly calibrated and requires human oversight.
This is precisely why we have AI create simplified_concepts.md - so we can HIL it effectively.
When creating simplified concepts, we instruct AI to:
- Preserve essential architecture - never oversimplify to the point where the foundation cannot support the full concept
- For multi-faceted concepts - avoid binarizing (reducing to just one or two options), instead reduce the number of supported subconcepts by prioritizing the most important ones
simplified_concept_clarifications folder
Mirrors the concept_clarifications folder structure. We have AI generate clarification documents for each simplified concept, allowing us to understand AI's interpretation of the streamlined versions.
Each document addresses the same systematic questions:
- What the concept is and why it matters
- How it benefits the overall project
- How it constrains the overall project
- Required input information
- Core processes involved
- Output information or relay points
- Expected positive outcomes when realized
- Potential negative outcomes to avoid
The Dual-Concept Strategy
Having both simplified_concepts.md and concepts.md in the codebase is essential. These paired documents define the expansion trajectory for AI. When introducing intermediate concepts, AI can position them appropriately within the current-to-future scope continuum.
Enhancement Documentation
Enhancement docs capture future possibilities without cluttering current implementation. They’re parking spaces for good ideas that aren’t ready yet.
This is a freestyle documentation space, meaning there is no fixed structure to it. It usually includes two types of documents:
- Feature enhancements
- Architecture enhancements
Feature enhancements
Let's say that during development you realize a new feature - optional debug output - would be valuable, but you need to structure your thoughts on it first. You would describe the feature, then ask AI to answer these questions:
- What is this feature and how does it work?
- How does it help the project's goals and requirements?
- How would it impact the existing architecture?
- How should the codebase change to support it (at a high level)?
Capture the answers in feat_debug.md. Then, using this document and HILing, you can ask AI to do the implementation.
Architecture enhancements
Architectural enhancements shouldn't be taken lightly. They are playing with fire: a careless change can break everything or entangle everything. But if a change is certain to be needed in the future, this is where it belongs.
Why This Matters
Development reveals opportunities. Without a dedicated place to capture them, you either:
- Lose valuable insights
- Derail current work with scope creep
- Implement half-baked features
Enhancement docs let AI understand the project’s growth trajectory. When you’re ready to expand, AI already knows the roadmap.
Best Practices
- Add enhancements as you discover them, not retrospectively
- Keep descriptions concrete enough to action later
- Link related enhancements that should be implemented together
- Review periodically and promote ready items to actual concepts
- Delete obsolete enhancements to prevent confusion
This systematic approach turns “wouldn’t it be nice if…” moments into actionable future work.
Exploration Documentation
A freestyle workspace for capturing AI-generated insights and clarifications discovered during development.
What Goes Here
Any investigation or explanation worth preserving:
- how_authentication_works.md
- why_not_use_microservices.md
- which_database_should_we_use.md
- performance_bottleneck_analysis.md
Why It Matters
During development, you’ll frequently ask AI to explain, investigate, or analyze aspects of your codebase. Some responses are too valuable to lose in conversation history. The explorations folder preserves these insights for future reference.
Best Practices
- Name files as questions or topics for easy scanning
- Keep the original AI response if it was particularly clear
- Update when understanding evolves
- Link from main docs when relevant
This folder becomes your project’s knowledge base - a living FAQ built from actual development questions.
Module Documentation
Software development thrives on modularity. During active development, multiple submodules evolve in parallel, each maintaining its own lifecycle while integrating with the whole.
This approach adds overhead but delivers significant benefits: truly decoupled components, isolated testing, and clear expansion paths for future enhancements.
For vibe coding, modularity isn’t optional - it’s essential. AI’s context limitations demand it, clean code principles require it, and testability depends on it.
During this parallel development we encounter two problems:
- Acting as a human in the loop (HILing) is harder than usual
- A module's code is spread across multiple files, so keeping it all in AI's context is harder, which results in errors
Module documentation solves both problems.
Modules constantly evolve through internal refactoring, interface updates, and architectural shifts. Without synchronized documentation, these changes cascade unpredictably, breaking integrations across the system. Module docs serve as living contracts between components, ensuring changes propagate cleanly.
Structure: devdocs/modules/[module_name]/
Each module folder contains these standardized documents:
what_is_this_for.md
Explains the module’s core purpose and reason for existence:
- Primary problem it solves
- Why it’s a separate module
- What breaks without it
- Who depends on it
interfaces_and_endpoints.md
Defines all public interfaces other modules can use:
- Public methods with signatures
- REST/GraphQL endpoints
- Event emissions and subscriptions
- Database tables accessed
integration_points.md
Describes how to integrate this module:
- Connection methods
- Configuration options
- Alternative integration patterns
- Setup requirements
integration_requirements.md
Lists prerequisites for integration:
- Required middleware
- Environment variables
- External dependencies
- Permission requirements
limitations.md
Documents current constraints:
- Performance limits (requests/second, data size)
- Feature boundaries (what it explicitly won’t do)
- Technical constraints (compatibility, versions)
- Known issues
possible_use_cases.md
Provides integration examples:
- Recommended usage patterns
- Common scenarios
- Anti-patterns to avoid
- Performance tips
edge_cases_covered.md
Lists intentionally handled edge cases:
- Boundary conditions addressed
- Error scenarios managed
- Race conditions prevented
example_usage.md
Shows concrete code examples:
- Common operations
- Initialization patterns
- Error handling
- Testing approaches
summary.md
Quick reference when full docs exceed context:
- Three-sentence purpose
- Main interfaces list
- Critical limitations
- Integration checklist
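Once several modules exist, it's easy for one of them to silently miss a document. A hypothetical completeness check, assuming the standardized file list above, might look like this:

```python
# check_module_docs.py - a hypothetical lint verifying that every module
# folder carries the standardized documents listed above.
from pathlib import Path

REQUIRED = [
    "what_is_this_for.md", "interfaces_and_endpoints.md",
    "integration_points.md", "integration_requirements.md",
    "limitations.md", "possible_use_cases.md",
    "edge_cases_covered.md", "example_usage.md", "summary.md",
]

modules_root = Path("devdocs/modules")
if modules_root.exists():
    for module in sorted(modules_root.iterdir()):
        if module.is_dir():
            missing = [doc for doc in REQUIRED if not (module / doc).exists()]
            status = "OK" if not missing else f"missing {missing}"
            print(f"{module.name}: {status}")
```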
Other Devdoc Files
decisions.md
This document logs architectural decisions with their rationale. It’s crucial for anchoring development progress across sessions. Sometimes we encounter bugs or edge cases that require hours of careful design to solve elegantly. The resulting solution might seem unnecessary when viewed without context. If AI examines this design later without understanding the original problem, it may mistakenly remove or break the delicate solution we crafted.
Each entry in this document explains:
- What the current bottleneck or issue is
- What options were considered
- Which option was selected and why
- At a high level, what consequences the change might cause
The Archaeology Pattern: DevDocs for Existing Projects
Creating DevDocs for an existing codebase is archaeological work - you’re excavating layers of decisions, uncovering patterns, and reconstructing the evolution story. Without understanding why code evolved certain ways, AI might simplify architecture back to already-invalidated states.
This pattern systematically extracts DevDocs from mature codebases using AI assistance.
Step 0: Data Dump (Optional)
If your codebase is not mature enough to define project aspects very well, then start by dumping all available project information as if you are starting from scratch.
This raw material feeds the excavation process.
Step 1: Foundation Documents
Ask AI to generate the three foundation documents:
- project_description.md
- philosophy.md
- known_requirements.md
Then have AI extract concepts.md listing all concepts.
Step 2: Codebase Inspection
Direct AI to analyze each source file and create codebase_summary.md documenting:
- What each file/module does
- How components interact
- Data flow patterns
- Architectural decisions evident in code
This creates a current-state snapshot before any changes.
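If you want a mechanical head start before handing this work to AI, a rough helper can draft the skeleton of codebase_summary.md for AI to fill in. This is a sketch only; the file extensions and output path are assumptions:

```python
# survey_codebase.py - a rough helper that drafts a codebase_summary.md
# skeleton for AI to fill in. Extensions and output path are assumptions.
from pathlib import Path

SOURCE_EXTS = {".py", ".js", ".ts"}
SKIP_PARTS = {"node_modules", ".git", "venv"}

lines = ["# Codebase Summary (auto-generated skeleton)", ""]
for path in sorted(Path(".").rglob("*")):
    if path.suffix in SOURCE_EXTS and not SKIP_PARTS & set(path.parts):
        lines += [f"## {path}", "- What it does: TODO", "- Interacts with: TODO", ""]

out = Path("devdocs/archaeology/codebase_summary.md")
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text("\n".join(lines))
print(f"Drafted skeleton for {(len(lines) - 2) // 4} source files")
```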
Step 3: Concept Mapping
We ask AI to generate concept_mappings.md linking each concept to its existing implementation:
For each concept, document:
- Which modules/services/classes implement it
- Coverage percentage (fully realized, partial, missing)
- Why the implementation chose this specific approach
- What alternatives were likely considered and rejected
- Edge cases the current design handles
This critical step captures the “why” behind existing architecture.
Step 4: Project State Assessment
We then use concept_mappings.md to synthesize findings into project_state.md, where the project state is explained and summed up:
- Missing concept implementations
- Integration points needing attention
- Overall architecture maturity
- Technical debt locations
- Recommended next steps
Why This Works
Manual archaeology would take weeks. AI can traverse entire codebases in minutes, recognizing patterns humans might miss. The key is asking the right questions to extract not just what exists, but why it exists that way.
Common Discoveries
The archaeology pattern often reveals:
- Undocumented workarounds for specific edge cases
- Implicit architectural decisions
- Hidden dependencies between modules
- Performance optimizations that look like complexity
- Security measures that seem redundant
Smoke Tests Pattern
AI can generate perfectly valid code that solves the wrong problem. Smoke tests reveal this immediately.
What Are Smoke Tests?
The term originates from hardware testing - checking if a device literally “smokes” when powered on. In software, smoke tests serve as quick sanity checks, typically used for:
- Exploring and understanding unfamiliar modules or APIs or algorithms
- Rapid prototyping when formal Test-Driven Development (TDD) would slow progress
In short, they let you quickly confirm that core functionality works under minimal conditions, without the overhead of formal testing frameworks.
Smoke Tests in Vibe Coding
The Smoke Tests pattern provides:
- Validation that AI understands your intent and that you understand AI's intent
- Evidence that AI actually did what it intended to do
- An early warning system that prevents misunderstandings from propagating
- A comprehensive summary of your project/module
Smoke tests serve not just to verify pass/fail status, but to expose actual data flow, data transformations, and intermediate states and values, allowing both AI and developers to understand exactly what's happening during execution.
How to Implement the Smoke Tests Pattern Correctly
0. Verbose Over Clever
We design smoke tests to generate detailed output that exposes data flow and critical transformations, including both raw data samples and summarized results. These tests serve dual audiences: human developers who need to understand system behavior, and AI assistants that require explicit context to provide accurate assistance.
1. Comprehensiveness
Smoke tests must encompass diverse testing scenarios. Traditional software development achieves this through specialized test categories:
- Unit tests - Test individual functions/methods in isolation
- Integration tests - Test how components work together
- End-to-end (E2E) tests - Test complete user workflows
- Acceptance tests - Verify business requirements are met
- Performance/Load tests - Test speed and scalability
Within the vibe coding paradigm, these traditional testing approaches are consolidated and adapted into our comprehensive Smoke Tests pattern.
2. Progressive Complexity
Smoke tests should follow a progression from simple to complex. Begin with isolated method verification and gradually advance to testing broader conceptual behaviors. Each successive test file should demonstrate deeper system understanding and provide better debugging opportunities for future issues. A sketch of such a first file follows the examples below.
Example progression: test_login_returns_token → … → test_basic_auth_flow_works
It is common to have the first file as:
test_01_initialization.py
- Can we import the modules?
- Do classes instantiate?
- Are constants defined?
and the second file as:
test_02_connectivity.py
- Can we connect to the database?
- Do API endpoints respond?
- Are external services reachable?
- Do authentication mechanisms work?
- Can we read/write to external resources?
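To make the progression concrete, here is a hypothetical first smoke test in the verbose, framework-free style this pattern calls for. It uses only the standard library; the real version would import your own modules:

```python
# test_01_initialization.py - a hypothetical first smoke test in the
# verbose, framework-free style described above. Run manually with:
#   python smoke_tests/test_01_initialization.py
import sys

def check(label, fn):
    """Run one check, printing the outcome and any intermediate value."""
    try:
        result = fn()
        print(f"[PASS] {label} -> {result!r}")
        return True
    except Exception as exc:
        print(f"[FAIL] {label}: {exc}")
        return False

results = [
    check("module imports", lambda: __import__("json").__name__),
    check("class instantiates", lambda: dict(expenses=[])),
    check("constants defined", lambda: len("USD,EUR,JPY".split(","))),
]
print(f"{sum(results)}/{len(results)} checks passed")
sys.exit(0 if all(results) else 1)
```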
3. Clear Intent
Smoke tests should read like documentation so we can intervene. Naming tests clearly and readably is just as important.
4. Full Implementation Coverage
The cost of comprehensive smoke tests is negligible compared to the cost of incorrect implementation. Generate as many tests as necessary to ensure confidence.
A practical guideline: Request AI to create 5 smoke test files, each targeting distinct logical components, with a minimum of 5 individual test cases per file.
smoke_tests/
├── test_01_initialization.py
├── test_02_core_functionality.py
├── test_03_user_workflows.py
├── test_04_edge_cases.py
└── test_05_integration.py
5. No Mocking, and Provide Real Data
AI-generated tests frequently default to mocks or minimal outputs that mask actual system behavior, creating false positives where tests pass despite broken functionality - phantom successes.
It is your job to provide realistic data for smoke tests (a fixture-loading sketch follows below). When developing complex systems like video processing engines or voice detection algorithms, AI cannot generate realistic data to test the code. Without authentic test data, you risk phantom successes where mocked tests pass while the real implementation fails.
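As one illustration, a smoke test can pull its input from a real exported sample rather than constructing fake records inline. The fixture path and record shape below are assumptions:

```python
# Sketch: feeding real captured data into a smoke test instead of mocks.
# The fixture path and record shape are illustrative assumptions - the
# point is that the data comes from a real export, not from the test.
import json
from pathlib import Path

fixture = Path("smoke_tests/fixtures/real_expenses_sample.json")
expenses = json.loads(fixture.read_text())  # e.g. exported from a real device

print(f"Loaded {len(expenses)} real expense records")
for record in expenses[:3]:
    print("sample record:", record)  # expose raw data, per Verbose Over Clever
```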
6. Always Rerun Your Smoke Tests Yourself
Never rely solely on AI’s test execution or interpretation. A critical pitfall occurs when AI reports overall success despite partial failures - even one failing test invalidates the entire suite.
Additionally, manually reviewing smoke test output provides immediate insight into the AI’s implementation approach, revealing how it handles data transformations and input/output operations. This direct observation is invaluable for understanding the actual code behavior.
Fuzzy Architecture Pattern
What Is Fuzzy Architecture?
Traditional architecture: Detailed upfront design, rigid boundaries, specific technologies.
Traditional: “Because iteration is expensive while specification/conceptualization is cheap”
- You can write detailed docs, diagrams, interfaces relatively cheaply
- But changing the actual implementation is costly
Fuzzy architecture: Intentionally vague starting point that crystallizes through implementation.
Like sculpture - you start with a rough shape and refine by removing what doesn’t belong.
Fuzzy/Vibe: "Because iteration is cheap while specification is expensive"
- Writing precise specs is hard and time-consuming when you don't know what you need
- You are probably not experienced enough with architecture to pick one without knowing what works well
- There is also a good chance you don't know the full requirements yet, and missing one core requirement means the architecture must be recreated, which is costly
- Better to build something rough and refine it
The core trade-off seems to be about when you pay the cost - upfront planning/specification vs. during implementation/discovery. With AI/vibe coding, you’re betting that discovering the right architecture through building is more efficient than trying to specify it perfectly beforehand.
Why Fuzzy Works with AI
The Overspecification Trap
Detailed architecture upfront:
"Use PostgreSQL with Redis cache, implement Repository pattern
with Unit of Work, deploy on Kubernetes with Istio service mesh..."
Result: Overengineered before you write a line of code.
The Underspecification Trap
No architecture:
"Build me an app"
Result: AI makes random choices, inconsistent patterns.
The Fuzzy Sweet Spot
Just enough structure:
"Web app with database persistence and API.
Start simple, we'll refine as we build."
Result: Room to discover the right architecture.
Core Principles of Fuzzy Architecture
1. Boundaries Not Implementations
Define what, not how:
# Initial Architecture
## Components
- User Interface (how users interact)
- Business Logic (what the app does)
- Data Storage (where information lives)
- External Services (what we integrate with)
## Rules
- UI never talks directly to database
- Business logic owns all rules
- Data layer handles persistence only
No specifics about React vs Vue, SQL vs NoSQL, REST vs GraphQL.
2. Principles Over Patterns
State principles that guide decisions:
## Architectural Principles
1. **Simplicity First**: Choose boring technology
2. **Local First**: Everything works offline
3. **Privacy First**: Data stays on device
4. **Performance**: Sub-second response times
5. **Maintainability**: New developer productive in 1 hour
These principles shape choices naturally.
3. Evolution Points
Mark where architecture will likely change:
## Evolution Points
- Storage: Start with JSON files → SQLite → PostgreSQL
- API: Start with function calls → REST → GraphQL if needed
- Auth: Start with none → Basic → OAuth when required
- Deploy: Start local → Single server → Scale when needed
This prevents AI from optimizing prematurely.
Implementing Fuzzy Architecture
Phase 1: Sketch the Shape
Create an initial architecture.md:
# Architecture Overview
## High-Level Structure
[User Interface]
       ↓
[Application Logic]
       ↓
[Data Layer]
## Component Responsibilities
**User Interface**
- Display information
- Collect user input
- Handle user interactions
**Application Logic**
- Process business rules
- Coordinate between layers
- Maintain application state
**Data Layer**
- Store and retrieve data
- Ensure data integrity
- Handle persistence
## Key Decisions Deferred
- Specific UI framework
- Database technology
- API protocol
- Deployment method
Phase 2: First Implementation
Let AI propose concrete choices:
"Based on this architecture and our requirements,
what would be good technology choices for a first implementation?
Keep it simple."
AI might suggest:
- UI: Plain HTML + Alpine.js
- Logic: Python Flask
- Data: SQLite
Phase 3: Refinement Cycles
As you build, architecture solidifies:
# Architecture Overview (Updated)
## Technology Stack
- Frontend: Alpine.js for reactivity (chose for simplicity)
- Backend: Flask (lightweight, perfect for our needs)
- Database: SQLite (portable, no setup required)
## Patterns Discovered
- Service layer pattern emerged naturally
- Event system for loose coupling
- Command pattern for user actions
The Architecture Dialogue
Starting Fuzzy
Human: "I need an expense tracker"
AI: "What architecture should we use?"
Human: "Let's start fuzzy - separate UI, logic, and data.
We'll refine as we build."
Guided Evolution
AI: "Should we add caching?"
Human: "Is performance a problem?"
AI: "Not yet"
Human: "Then no caching. Keep it simple."
Natural Boundaries
AI: "This function is getting complex"
Human: "What pattern is emerging?"
AI: "Looks like command processing"
Human: "Let's extract a command handler pattern"
Fuzzy Architecture Artifacts
The Living Architecture Document
Update architecture.md as patterns emerge:
# Architecture (Living Document)
## Current State (Week 3)
- Clear MVC separation has emerged
- Service layer handles business logic
- Repository pattern for data access
- Event bus for loose coupling
## Decisions Made
- SQLite over PostgreSQL (simplicity won)
- Server-side rendering over SPA (speed won)
- Monolith over microservices (maintainability won)
## Future Considerations
- May need job queue for reports
- Might add caching if user base grows
- Could extract analytics into service
Decision Records
Document why architecture evolved:
# ADR-001: Use SQLite instead of PostgreSQL
## Status: Accepted
## Context
Initially kept database choice fuzzy. Now need to decide.
## Decision
Use SQLite for local-first architecture.
## Consequences
- ✓ Zero configuration
- ✓ Portable data files
- ✓ Perfect for single-user app
- ✗ Limited concurrent writes
- ✗ No advanced SQL features
Can migrate to PostgreSQL later if needed.
Anti-Patterns to Avoid
Premature Crystallization
Don’t lock in too early:
Week 1: "We'll definitely need microservices"
Week 4: "Actually, a monolith is perfect"
Fuzzy Forever
Eventually commit:
Month 6: "We still haven't decided on a database"
Fuzzy is for discovery, not procrastination.
Architecture Astronauting
Don’t add complexity for future scenarios:
"We might need to scale to millions of users"
"Let's solve that when we have thousands"
Benefits of Fuzzy Architecture
1. Faster Start
Less upfront planning = quicker first version
2. Better Fit
Architecture matches actual needs, not imagined ones
3. Less Waste
Don’t build infrastructure you don’t need
4. Natural Patterns
Right abstractions emerge from real use
5. AI Alignment
AI proposes solutions that fit current state
The Fuzzy Lifecycle
Fuzzy (Week 1-2)
↓
Emerging (Week 3-4)
↓
Solidifying (Week 5-8)
↓
Stable (Week 9+)
Each phase has different flexibility.
Practical Fuzzy Techniques
The Proxy Pattern
Start with simplest version:
import json

class DataStore:
    """Fuzzy data layer - might be JSON, SQLite, or PostgreSQL later"""
    def save(self, data):
        # Start with a plain JSON file - the simplest thing that works
        with open("data.json", "w") as f:
            json.dump(data, f)
    def load(self):
        # Easy to swap for another backend later - callers won't change
        with open("data.json") as f:
            return json.load(f)
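To see why this proxy keeps evolution cheap, here is a sketch of the SQLite stage of that evolution point: same save/load interface, different backing store. The single-blob table is an assumption, not a recommended schema:

```python
# Sketch: when the storage evolution point triggers, the same interface
# can be re-backed by SQLite without touching callers.
import json
import sqlite3

class SqliteDataStore:
    """Drop-in replacement for DataStore once JSON files become limiting."""

    def __init__(self, path="data.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS blobs (id INTEGER PRIMARY KEY, data TEXT)"
        )

    def save(self, data):
        self.conn.execute("DELETE FROM blobs")  # keep one-blob semantics
        self.conn.execute("INSERT INTO blobs (data) VALUES (?)", (json.dumps(data),))
        self.conn.commit()

    def load(self):
        row = self.conn.execute("SELECT data FROM blobs").fetchone()
        return json.loads(row[0]) if row else None
```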
The Feature Flag Evolution
# Week 1: Direct implementation
if user.is_premium:
show_advanced_features()
# Week 4: Pattern emerges
if feature_enabled("advanced_features", user):
show_advanced_features()
# Week 8: Full feature system
if feature_flag.check("advanced_features", context=user):
show_advanced_features()
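For the week-4 stage, feature_enabled() can start as little more than a dictionary of rules. A minimal sketch (the flag registry and premium rule are illustrative assumptions):

```python
# A minimal feature_enabled() that fits the week-4 stage above.
FLAGS = {
    "advanced_features": lambda user: getattr(user, "is_premium", False),
}

def feature_enabled(name, user):
    """Return True when the named flag's rule passes for this user."""
    rule = FLAGS.get(name)
    return bool(rule and rule(user))
```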
The Gradual Extraction
Week 1: Everything in app.py
Week 2: Extract models.py
Week 3: Extract services.py
Week 4: Extract repositories.py
Week 5: Clear architecture emerged
When to Stop Being Fuzzy
Signs It’s Time to Solidify
- Same patterns appearing repeatedly
- Team needs clear structure
- Performance requires specific choices
- Integration demands concrete interfaces
The Crystallization Moment
"Our fuzzy architecture has revealed these patterns:
- Service layer for business logic
- Repository pattern for data access
- Event system for integrations
Let's formalize these as our architecture."
Fuzzy Architecture with AI
Setting Context
"We're using fuzzy architecture. Start simple,
we'll refine based on what we learn."
Guiding Evolution
"Given what we've built so far,
what architectural pattern is emerging?"
Preventing Premature Optimization
"That's a good idea for later.
For now, what's the simplest thing that works?"
The Meta-Pattern
Fuzzy architecture is itself fuzzy. Don’t over-formalize the informality. Let it guide you naturally toward the right structure.
Start fuzzy. Build. Learn. Solidify. This is the way.
Next: Part IV - The Vibe Coding Method →
The Anchor Pattern
What Is the Anchor Pattern?
In the scientific literature this is known as "Stable Intermediate Forms", a methodology for de-risking the process of change.
The Anchor Pattern is about ensuring that new development doesn’t break existing functionality. As AI works on new features, it tends to forget earlier requirements due to limited context. Anchoring actions force AI to regularly verify that old logic still works.
Think of it like construction - you don’t just check if the new floor is level, you verify the foundation hasn’t shifted.
Why Anchoring Is Critical
AI’s context window is limited. When you spend time focusing on feature B, C, and D, the AI gradually loses sight of feature A’s requirements. Without anchors, you get:
Hour 1: "Build user authentication with email/password"
AI: ✓ Implements perfect auth system
Hour 2: "Add password reset"
AI: ✓ Adds password reset
Hour 3: "Add social login"
AI: ✓ Adds OAuth... but breaks email login
Hour 4: "Add 2FA"
AI: ✓ Adds 2FA... but breaks password reset
Each new feature works, but previous features break silently.
The Core Anchor Mechanism
Anchoring means regularly forcing AI to:
- Run existing smoke tests
- Verify core functionality still works
- Check that new code doesn’t violate established patterns
- Ensure integration points remain intact
If the Smoke Tests pattern is used correctly, anchoring your development is as simple as running a prompt like this:
"We've implemented the new feature. Now let's run our smoke tests
to ensure all existing functionality still works correctly. If something is broken fix it"
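You can also mechanize this. A hypothetical anchor runner, assuming the smoke_tests/ layout from the Smoke Tests pattern, runs every file and refuses to call the suite green if even one fails:

```python
# run_anchors.py - a hypothetical anchor runner: executes every smoke
# test file and reports failure if even one of them fails.
import subprocess
import sys
from pathlib import Path

failures = []
for test_file in sorted(Path("smoke_tests").glob("test_*.py")):
    print(f"=== anchoring: {test_file.name} ===")
    result = subprocess.run([sys.executable, str(test_file)])
    if result.returncode != 0:
        failures.append(test_file.name)

if failures:
    print(f"ANCHOR BROKEN by {failures} - fix before building further")
    sys.exit(1)
print("All anchors hold: existing functionality intact")
```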
Anchoring with DevDocs Pattern
DevDocs serve as persistent memory that AI can reference:
"Before implementing social login, please review:
- devdocs/modules/auth/requirements.md
- devdocs/modules/auth/existing_flows.md
- devdocs/simplified_concepts.md section on authentication
Ensure the new feature doesn't break existing requirements."
The Meta-Anchor
The ultimate anchor is asking:
"What existing functionality could this change break?
Let's test those specific areas."
This makes AI think about impact before problems occur.
Anchor Best Practices
- Run Anchors Frequently: Not just at the end
- Fix Immediately: Don’t let anchors stay red
- Add New Anchors: When you find bugs
- Remove Obsolete Anchors: When features are removed
- Document Anchor Purpose: Why does this test exist?
Anchoring isn’t about perfection - it’s about detection. You will break things. Anchors ensure you know immediately, not days later.
Offload Pattern
Making AI’s Job Easier
Traditional development assumes human limitations. Vibe coding flips this - we optimize for AI capabilities. The Offload Pattern is about structuring everything to make AI’s job as easy as possible.
Think of it like organizing a kitchen for a chef who’s blindfolded but has perfect memory. Everything needs to be exactly where they expect it.
Why Offload Matters
AI works best when:
- Context is clear and unambiguous
- Patterns are consistent and predictable
- Information is structured and accessible
- Intentions are explicit, not implicit
- Examples demonstrate expectations
When you make AI’s job easier, you get:
- Faster, more accurate code generation
- Less back-and-forth clarification
- Fewer misunderstandings and revisions
- More time for creative problem-solving
Core Principles of Offloading
- Enforce the creation of a clean, consistent, explicit codebase (see the sketch after this list)
- Clearly state the focus and priorities in your prompts
- Provide examples whenever you can
- Enforce progressive development and break down combined complexities
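To make the first principle tangible, compare two versions of the same function. Both snippets are hypothetical; the point is that explicit, boring code gives AI (and humans) named anchor points to extend:

```python
# Harder to offload: clever, implicit, no names for AI to anchor on
def tally(rows):
    return {c: sum(r["amount"] for r in rows if r["category"] == c)
            for c in {r["category"] for r in rows}}

# Easier to offload: explicit steps, descriptive names, obvious extension points
def total_spending_by_category(expense_rows):
    """Sum expense amounts per category (amounts assumed pre-converted)."""
    totals = {}
    for row in expense_rows:
        category = row["category"]
        amount = row["amount"]
        totals[category] = totals.get(category, 0) + amount
    return totals
```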
Deployment and Documentation
From Development to Production
You’ve built it, tested it, refactored it. Now comes the final phase: making it real. This chapter covers the journey from working code to deployed application with comprehensive documentation.
Creating the Documentation Corpus
Transitioning from DevDocs to User Docs
DevDocs served development. Now create user-facing documentation:
Ask AI to create a docs folder and generate documentation for each module.
Documentation should include edge cases and interface specifications.
This step indicates that our codebase is stabilizing and maturing.
Structure your docs:
docs/
├── getting-started/
│ ├── installation.md
│ ├── quick-start.md
│ └── first-expense.md
├── user-guide/
│ ├── expense-management.md
│ ├── categories.md
│ ├── multi-currency.md
│ └── data-export.md
├── api-reference/
│ ├── storage-api.md
│ ├── currency-api.md
│ └── export-api.md
├── troubleshooting/
│ ├── common-issues.md
│ ├── data-recovery.md
│ └── performance.md
└── contributing/
├── development-setup.md
├── architecture.md
└── testing.md
Documentation Generation
Let AI help create comprehensive docs:
Please generate user documentation for the Storage module:
1. Read the module's interface.md and implementation
2. Create docs/api-reference/storage-api.md with:
- Overview of what the module does
- API reference with all public methods
- Code examples for common use cases
- Error handling information
- Performance considerations
Make it readable for developers who'll use this module.
Example output:
# Storage API Reference
## Overview
The Storage module provides local-first data persistence for the expense tracker. All data is stored on the user's device using browser localStorage with automatic serialization and versioning support.
## Basic Usage
```javascript
import { StorageService } from '@modules/storage'
const storage = new StorageService()
// Save an expense
await storage.save('expenses', expenseData)
// Load all expenses
const expenses = await storage.load('expenses', [])
// Delete specific item
await storage.delete('expenses', expenseId)
```
## API Reference
### Constructor
`new StorageService(options?: StorageOptions)`
Creates a new storage instance.
Options:
- `namespace` (string): Prefix for all keys (default: 'app')
- `version` (number): Storage version for migrations (default: 1)
### Methods
#### save(key: string, data: any): Promise<void>
Saves data to storage.
Parameters:
- `key`: Storage key identifier
- `data`: Data to store (will be JSON serialized)
Example:
```javascript
await storage.save('user-preferences', {
  currency: 'USD',
  theme: 'dark'
})
```
Errors:
- `StorageQuotaError`: When storage limit exceeded
- `SerializationError`: When data cannot be serialized
[… continues with all methods …]
Creating the Production Test Suite
Converting Smoke Tests to Production Tests
Transform verbose smoke tests into CI-ready tests:
Please analyze all smoke test files across all modules and create a comprehensive test suite in the root directory.
Requirements:
- Convert smoke tests into properly structured test files using Jest
- Organize tests logically for CI/CD pipeline execution
- Include appropriate test utilities and helpers
- Generate test documentation that explains the testing strategy
This will serve as our production-ready test suite for continuous integration.
Test Suite Organization
Create a professional test structure:
```javascript
// tests/setup.js
// Global test configuration
beforeEach(() => {
// Clear all mocks
jest.clearAllMocks()
// Reset localStorage
localStorage.clear()
// Reset test data
global.testData = createTestData()
})
// Suppress console during tests unless debugging
if (!process.env.DEBUG) {
global.console = {
...console,
log: jest.fn(),
error: jest.fn(),
warn: jest.fn()
}
}
// tests/unit/modules/storage.test.js
// Production-ready unit tests
import { StorageService } from '@/modules/storage'
import { StorageQuotaError } from '@/modules/storage/errors'
describe('StorageService', () => {
let storage
beforeEach(() => {
storage = new StorageService()
})
describe('save', () => {
it('should persist data to localStorage', async () => {
const data = { id: 1, amount: 100 }
await storage.save('test-key', data)
const stored = localStorage.getItem('app:test-key')
expect(JSON.parse(stored)).toEqual(data)
})
it('should handle storage quota exceeded', async () => {
// Fill storage to near capacity
const largeData = 'x'.repeat(5 * 1024 * 1024) // 5MB
await expect(
storage.save('large', largeData)
).rejects.toThrow(StorageQuotaError)
})
})
// ... more focused tests
})
```
Test Documentation
Create comprehensive test documentation:
# Testing Strategy
## Overview
Our test suite ensures reliability across three levels:
1. Unit tests - Individual modules in isolation
2. Integration tests - Modules working together
3. E2E tests - Complete user workflows
## Running Tests
```bash
# All tests
npm test
# Unit tests only
npm run test:unit
# Integration tests
npm run test:integration
# E2E tests
npm run test:e2e
# With coverage
npm run test:coverage
```
Test Structure
Unit Tests
- Located in tests/unit/
- One file per module
- Mock external dependencies
- Fast execution (< 10s total)
Integration Tests
- Located in tests/integration/
- Test module interactions
- Use real implementations
- Moderate execution (< 30s total)
E2E Tests
- Located in tests/e2e/
- Test complete workflows
- Simulate user actions
- Slower execution (< 2min total)
Coverage Requirements
- Overall: 80% minimum
- Critical paths: 95% minimum
- New code: 90% minimum
Writing Tests
Follow these patterns:
- Arrange-Act-Assert
it('should calculate total correctly', () => {
// Arrange
const expenses = [{ amount: 10 }, { amount: 20 }]
// Act
const total = calculateTotal(expenses)
// Assert
expect(total).toBe(30)
})
- Descriptive Names
// Bad
it('test1', () => {})
// Good
it('should return zero for empty expense list', () => {})
- Test One Thing
Each test should verify a single behavior.
Deployment Preparation
Environment Configuration
Create deployment configurations:
```javascript
// config/environments.js
const environments = {
development: {
API_URL: 'http://localhost:3000',
DEBUG: true,
STORAGE_TYPE: 'localStorage'
},
production: {
API_URL: 'https://api.expenses.app',
DEBUG: false,
STORAGE_TYPE: 'indexedDB',
SENTRY_DSN: process.env.SENTRY_DSN
},
test: {
API_URL: 'http://mock.api',
DEBUG: false,
STORAGE_TYPE: 'memory'
}
}
export default environments[process.env.NODE_ENV || 'development']
```
Build Configuration
Set up production builds:
// webpack.prod.js
module.exports = {
mode: 'production',
optimization: {
minimize: true,
splitChunks: {
chunks: 'all',
cacheGroups: {
vendor: {
test: /[\\/]node_modules[\\/]/,
name: 'vendors',
priority: 10
}
}
}
},
performance: {
maxAssetSize: 250000, // 250KB
maxEntrypointSize: 250000,
hints: 'error'
}
}
Deployment Checklist
Create a deployment checklist:
# Deployment Checklist
## Pre-Deployment
- [ ] All tests passing
- [ ] No console.log statements in production code
- [ ] Environment variables configured
- [ ] Security audit completed
- [ ] Performance benchmarks met
- [ ] Documentation updated
- [ ] Version number bumped
## Build
- [ ] Production build created
- [ ] Bundle size within limits
- [ ] Source maps generated
- [ ] Assets optimized
- [ ] Service worker updated
## Deployment
- [ ] Database migrations run
- [ ] Static assets deployed to CDN
- [ ] Application deployed
- [ ] Health checks passing
- [ ] Rollback plan ready
## Post-Deployment
- [ ] Smoke tests in production
- [ ] Monitor error rates
- [ ] Check performance metrics
- [ ] Verify analytics tracking
- [ ] Update status page
Creating the Comprehensive README
The Final Documentation
The README is your project’s front door:
Please create a comprehensive README.md file for the project root by:
1. Reading and synthesizing all documentation in the docs folder
2. Reviewing all .md files throughout the project
3. Creating a professional README that includes:
- Project overview and purpose
- Installation and setup instructions
- Architecture overview with key components
- Usage examples and API documentation
- Testing instructions
- Contributing guidelines
Ensure the README reflects the current state of the codebase and
serves as the primary entry point for new developers.
README Structure
A complete README:
# Expense Tracker
A privacy-focused, offline-first expense tracking application that respects your data.
## Features
- 📱 Works completely offline
- 🌍 Multi-currency support with live rates
- 📊 Visual spending insights
- 🔒 Your data never leaves your device
- 📤 Export to CSV for taxes
- ⚡ Lightning fast performance
## Quick Start
```bash
# Clone the repository
git clone https://github.com/username/expense-tracker.git
# Install dependencies
npm install
# Start development server
npm run dev
# Open http://localhost:3000
```
Installation
Requirements
- Node.js 16+
- npm 7+
- Modern browser (Chrome 90+, Firefox 88+, Safari 14+)
Development Setup
- Clone the repository
- Install dependencies: npm install
- Copy environment file: cp .env.example .env
- Start development server: npm run dev
Production Build
# Create optimized build
npm run build
# Serve production build
npm run serve
Architecture
The application follows a modular architecture:
┌─────────────┐ ┌──────────────┐ ┌─────────────┐
│ UI Layer │────▶│ Services │────▶│ Storage │
└─────────────┘ └──────────────┘ └─────────────┘
│
┌───────▼────────┐
│ Currency │
│ Converter │
└────────────────┘
Key Modules
- Storage: Local-first data persistence
- Currency: Multi-currency conversion
- Export: Data export functionality
- Analytics: Spending insights
Usage
Basic Expense Tracking
// Add an expense
await expenseTracker.add({
amount: 25.50,
category: 'food',
description: 'Lunch',
currency: 'USD'
})
// View monthly summary
const summary = await expenseTracker.getMonthlySummary()
console.log(summary)
// { total: 425.50, byCategory: { food: 225.50, transport: 200 } }
Multi-Currency Support
// Add expense in different currency
await expenseTracker.add({
amount: 1000,
category: 'food',
currency: 'JPY' // Japanese Yen
})
// View in preferred currency
const totalUSD = await expenseTracker.getTotal('USD')
Testing
# Run all tests
npm test
# Run with coverage
npm run test:coverage
# Run specific suite
npm run test:unit
npm run test:integration
npm run test:e2e
Contributing
We welcome contributions! Please see our Contributing Guide for details.
Development Workflow
- Fork the repository
- Create feature branch: git checkout -b feature/amazing-feature
- Commit changes: git commit -m 'Add amazing feature'
- Push to branch: git push origin feature/amazing-feature
- Open Pull Request
Code Style
- ESLint configuration: .eslintrc.js
- Prettier configuration: .prettierrc
- Run npm run lint before committing
Deployment
Vercel
Netlify
Self-Hosting
- Build the application: npm run build
- Serve the dist folder with any static file server - no backend required!
License
MIT License - see LICENSE for details
Acknowledgments
Support
- 📧 Email: support@expensetracker.app
- 🐛 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
Deployment Strategies
Static Hosting
For offline-first apps:
```yaml
# netlify.toml
[build]
command = "npm run build"
publish = "dist"
[[redirects]]
from = "/*"
to = "/index.html"
status = 200
[[headers]]
for = "/*"
[headers.values]
X-Frame-Options = "DENY"
X-XSS-Protection = "1; mode=block"
X-Content-Type-Options = "nosniff"
```
Progressive Web App
Add PWA capabilities:
// sw.js - Service Worker
const CACHE_NAME = 'expense-tracker-v1'
const urlsToCache = [
'/',
'/styles/main.css',
'/scripts/main.js',
'/offline.html'
]
self.addEventListener('install', event => {
event.waitUntil(
caches.open(CACHE_NAME)
.then(cache => cache.addAll(urlsToCache))
)
})
self.addEventListener('fetch', event => {
event.respondWith(
caches.match(event.request)
.then(response => response || fetch(event.request))
)
})
Container Deployment
For server-based apps:
# Dockerfile
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
RUN npm run build
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Post-Deployment
Monitoring Setup
// monitoring.js
import * as Sentry from '@sentry/browser'
if (process.env.NODE_ENV === 'production') {
Sentry.init({
dsn: process.env.SENTRY_DSN,
integrations: [
new Sentry.BrowserTracing(),
],
tracesSampleRate: 0.1,
environment: process.env.NODE_ENV
})
}
// Track custom metrics
export function trackMetric(name, value) {
if (window.analytics) {
window.analytics.track(name, { value })
}
}
User Analytics
Respect privacy while gathering insights:
// Privacy-focused analytics
const analytics = {
pageView: (page) => {
// No personal data, just page views
fetch('/api/analytics', {
method: 'POST',
body: JSON.stringify({
event: 'pageview',
page,
timestamp: Date.now()
})
})
},
event: (action, category) => {
// Anonymous events only
fetch('/api/analytics', {
method: 'POST',
body: JSON.stringify({
event: 'action',
action,
category,
timestamp: Date.now()
})
})
}
}
Maintenance Documentation
Runbook
Create operational documentation:
# Expense Tracker Runbook
## Common Issues
### Issue: Storage Quota Exceeded
**Symptoms**: App shows "Storage full" error
**Solution**:
1. Export existing data
2. Clear old data
3. Consider IndexedDB migration
### Issue: Currency Rates Outdated
**Symptoms**: Conversions seem wrong
**Solution**:
1. Check rates.js last update
2. Update rates from reliable source
3. Deploy new version
## Deployment Process
1. Run tests: `npm test`
2. Build: `npm run build`
3. Deploy: `npm run deploy`
4. Verify: Check health endpoint
5. Monitor: Watch error rates for 30min
## Rollback Process
1. Identify issue in monitoring
2. Revert to previous version: `npm run rollback`
3. Verify functionality restored
4. Investigate issue in staging
Final Checklist
Before calling it done:
- All tests passing
- Documentation complete
- README is welcoming
- Deployment automated
- Monitoring active
- Backup plan ready
- Users can get help
- You’re proud of it
Conclusion
Deployment and documentation complete the vibe coding journey. You’ve taken an idea, built it systematically with AI assistance, and made it real. The comprehensive documentation ensures others (including future you) can understand, use, and improve what you’ve built.
Remember: Shipping is just the beginning. Real applications evolve with user needs.
🎉 Congratulations! You’ve completed the Vibe Coding Method! 🎉
Next: Part V - Advanced Techniques →
VIBE CODING A NEW PROJECT
This appendix contains ready-to-use prompts for going from IDEA to PROTOTYPE. These prompts embody established vibe coding patterns; detailed explanations can be found in the related chapters.
Note that phases 7-12 should be repeated as many times as modules are needed:
0 → 1 → 2 → 3 → 4 → 5 → 6 → ┌─[7 → 8 → 9 → 10 → 11 → 12]─┐ → 13 → 14
└─────────────←───────────────┘
(loop for each module)
IMPORTANT NOTES:
- Do not forget to act as a human in the loop. Read each generated document and fix any misunderstanding ASAP; otherwise errors can propagate and grow.
- Remind AI to keep the core documents and smoke tests up to date after any change.
- Understand the flow so you can intervene. Create as many freestyle documents as you need.
- It is okay if your understanding of the project changes and you feel like starting over, especially with the documentation. Do it. But instead of deleting the old devdocs folder, rename it to deprecated - you may refer to it later.
- Don't forget to git push your code. AI is not almighty and it can mess things up.
COMPLETE PHASES AND PROMPTS
Phase 0: Data Dump
Provide the AI with all project-relevant information. This is essential. Don’t rush to over-define your project at this stage - fuzzy, incomplete details are fine. You’ll have opportunities to refine and clarify later. During Phase 1, this information will be processed into three structured documents.
If you have multiple information sources, consolidate them into a data_dump.txt file. Include these four aspects (even if vaguely defined):
- Project purpose: Your goals from a non-technical perspective
- Tech stack preferences: Any preferred technologies or frameworks
- Target platforms: Mobile, web, embedded, etc.
- Priorities: Relative importance of platforms, features, and objectives
Phase 1: Foundation Documents
This phase is explained in detail in chapter_6/01_devdocs_pattern.md and chapter_6/02_root_docs.md
Based on the provided project information, I need you to establish the DevDocs pattern foundation.
First, analyze the project requirements and create a clear understanding of what we're building.
Create the following foundation documents:
1. devdocs/foundation/project_description.md
What are we building?
What problem are we solving?
What are the various scopes of this project?
Who are the targeted users?
2. devdocs/foundation/philosophy.md
- What is the philosophy of this project? (don't go into technical details)
3. devdocs/foundation/known_requirements.md
- Technical requirements
- Business requirements
- User requirements
Focus on clarity and mutual understanding before any implementation.
Phase 2: Concept Extraction and Clarification
This phase is explained in detail in chapter_6/03_concept_docs.md
Using the foundation documents you just created (project_description.md, philosophy.md, known_requirements.md),
extract and document the key core concepts (only needed and core ones):
1. Create devdocs/concepts/concepts.md
- List only essential technical concepts
- Order by importance/dependency
- Brief one-line description for each
2. For each concept, create detailed clarification in devdocs/concepts/concept_clarifications/
Name files with ordering prefixes (01_[concept].md, 02_[concept].md, etc.)
Each clarification must answer:
- Clear short explanation what it is and why it matters
- How this concept helps the overall project
- How this concept limits the overall project
- What kind of information this concept needs as input
- What kind of process this concept should use
- What kind of information this concept outputs or relays
- Explain the good expected outcome of realizing this concept
- Explain the bad unwanted outcome of realizing this concept
Phase 3: Simplification for MVP
This phase is explained in detail in chapter_6/03_concept_docs.md
I want you to create devdocs/concepts/simplified_concepts.md using concepts.md. The goal is to trim each concept's features down to the core ones, so we still keep these concepts but at prototype scope.
Make sure you follow these rules during simplification:
- Do not oversimplify a concept to the point where the underlying architecture is oversimplified and no longer supports the original concept
- If a concept supports multiple subconcepts, do not binarize it; instead reduce the number of supported subconcepts by prioritizing the most important ones
Then, for each concept in simplified_concepts.md, create a clarification markdown file
which includes answers to these questions:
For each concept write
- clear short explanation what it is and why it matters
- How this concept helps the overall project
- How this concept limits the overall project
- What kind of information this concept needs as input
- What kind of process this concept should use
- What kind of information this concept outputs or relays
- explain the good expected outcome of realizing this concept
- explain the bad unwanted outcome of realizing this concept
Prefix each file with 1_, 2_, 3_ to order them, and make sure to prioritize the core concepts when ordering. Do this in devdocs/concepts/simplified_concept_clarifications/
Critical: The simplification should reduce features, not break architecture, and should remain flexible for future scope expansion.
Phase 4: Identifying Modules
We want development to be done in a modular way. This keeps the project at a debuggable, testable level of complexity. This step identifies all possible modules.
Review the simplified concepts and identify logical abstractions that can be organized into modules. Focus on creating a modular architecture that will facilitate cleaner, more maintainable development. Ensure the proposed modularity doesn't compromise performance or security requirements. Avoid over-engineering - only modularize major conceptual boundaries. Document your module structure proposal in module_proposal.md
Phase 5: Architecture and Structure
Based on the simplified concepts and the module proposal, please propose the most suitable architecture for this project. Keep in mind that we'll be developing iteratively, evolving from a simple prototype to more complex versions over time. Our priority is a solid foundation which can be expanded over time, not an immediate full-scope solution.
Please use the info from module_proposal.md intelligently (you may choose not to accept a module if it is unnecessary or overengineered for the MVP logic) and avoid a tightly coupled architecture.
Key considerations:
- Avoid over-engineering, excessive complexity, or excessive explicitness at this stage
- The architecture will evolve as we progress, so maintain flexibility
- Focus on high-level concepts that can accommodate the project's core requirements
- Present a design that balances simplicity with extensibility
- Make sure to use proper design patterns in order to keep the codebase modular (e.g., repository pattern for DB-related operations)
Phase 7: Selecting One Module and Preparing for Implementation
One way to enforce modular development is to have AI implement modules one by one.
With the project information, architecture overview, and module structure defined, select the first module for implementation.
Choose a module that:
1. Encapsulates the most messy distractive existing code
2. Is peripheral to the core system (minimal dependencies)
3. Has stable requirements unlikely to change
4. Can be implemented independently without requiring other modules to be fully defined first
Then create [module_name]_module_implementation_proposal.md which explains, in order:
1. What components would this module have?
2. What interfaces/endpoints would this module expose?
3. How would it be used (with pseudo code)?
4. How will it be used by other/core modules (at a high level, without definitive definitions)?
Phase 8: Module implementation
Please implement this module with required files. Based on our previous discussions and documentation,
please provide:
- Complete file list with their respective directory paths
- Full implementation for each file
- All necessary code to make this module functional
Phase 9: Smoke Tests
Details are explained in chapter_7/smoke_tests_pattern.md
Let's design comprehensive smoke tests to validate our implementation.
Please create a smoke_tests folder if it doesn't exist, and create a test plan with the following structure:
1. 5 test files, each containing 5 focused test cases
2. Avoid mocking; use real components with real calls and real data
3. File naming convention: test_01_[test_focus_area].py, test_02_[next_focus_area].py, etc.
4. Make sure these tests cover:
- Individual functions/methods in isolation
- How components work together
- Whether solidly defined requirements are met
5. For each test file, provide:
- Clear description of what aspect it tests
- Why this testing area is critical
- Brief outline of each test case within the file
6. These tests shouldn't use any testing frameworks. Make the outputs verbose enough that you can see exactly what does not work.
7. Start with the initialization test file.
8. Each test file's top comment should include how to run it manually.
Phase 10: Running the Smoke Tests and Fixing Errors
It is important to run the smoke tests ourselves. First we let AI do this, and then we run them by hand.
Let's run each smoke test one by one and fix all errors. If you find errors and can't fix them after 3 attempts, break the smoke test down into smaller, more isolated forms with more verbose outputs (in the smoke test files) and rerun them.
Phase 11: Fuzzy Module Documentation
Now that our module is ready and working, we should create documentation for it.
Create comprehensive documentation:
Create devdocs/modules/[module_name]/ containing:
1. what_is_this_for.md
- Core purpose
- Why it's a separate module
- What breaks without it
- Who depends on it
2. interfaces_and_endpoints.md
- Public methods/functions
- API endpoints
- Event emissions
- Data structures
3. integration_points.md
- How to integrate this module (only if what to integrate is already defined in the codebase)
- Multiple integration approaches if flexible
- Best practices for usage
4. integration_requirements.md
- Environment variables needed (only if really needed)
- Database schemas required (only if really needed)
- External services (only if really needed)
- Configuration files (only if really needed)
5. limitations.md
- Performance boundaries
- Feature boundaries
- Technical constraints
- Security limitations
6. possible_use_cases.md
- Common usage patterns
- Anti-patterns to avoid
- Performance tips
7. edge_cases_covered.md
- Handled edge cases
- Error scenarios
- Recovery mechanisms
8. example_usage.md
- Code examples
- Setup instructions
- Common operations
- Testing approach
9. summary.md
- Quick reference
- Main interfaces
- Critical limitations
- Integration checklist
Phase 12: Integration & Smoke Tests
Now that the module is complete and documented, integrate it with the core system:
1. Create integration tests between this module and any existing core components under smoke_tests
2. Verify the module works within the larger system context
3. Test data flow between modules
4. Validate that the module's interfaces are being used correctly
5. Check for any unexpected side effects or performance impacts
6. Update smoke tests to include integration scenarios
Run all tests together to ensure the module doesn't break existing functionality.
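An integration-level smoke test in the same style might look like this sketch; the `Inventory` and `OrderService` modules and their calls are invented for illustration:

```python
# smoke_tests/test_06_integration_orders_inventory.py
# Run manually with: python smoke_tests/test_06_integration_orders_inventory.py
# Exercises the real data flow between two modules; names are placeholders.
from myapp.inventory import Inventory   # placeholder import
from myapp.orders import OrderService   # placeholder import

inventory = Inventory()
orders = OrderService(inventory)

print("stock before:", inventory.stock("sku-1"))
order = orders.place("sku-1", quantity=2)
print("order placed:", order)
print("stock after:", inventory.stock("sku-1"))

assert order.status == "confirmed", f"unexpected status: {order.status}"
assert inventory.stock("sku-1") >= 0, "stock went negative (side effect?)"
print("[PASS] order flow decrements inventory without side effects")
```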
VIBE CODING AN EXISTING PROJECT
This appendix contains ready-to-use prompts for applying Vibe Coding patterns to existing codebases. The archaeology pattern (Chapter 6.7) helps extract DevDocs from mature projects without losing architectural wisdom.
IMPORTANT NOTES
- Existing code contains hidden reasons - don’t simplify without understanding why
- Technical debt often exists for valid historical reasons
- Start documentation before making changes
- Keep old architectural decisions visible to prevent regression
APPLY THE ARCHAEOLOGY PATTERN
Phase 1: Initial Survey
Analyze this existing codebase and create initial DevDocs documentation:
1. Scan the project structure (see the survey sketch at the end of this phase):
- Map directory organization
- Identify tech stack from package files
- Locate existing documentation
- Find configuration files
2. Create devdocs/archaeology/initial_survey.md documenting:
- Project size and scope
- Technologies discovered
- Documentation gaps
- First impressions
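The survey pass from step 1 can be partly scripted. This rough sketch only counts files by extension and flags well-known package manifests; extend the manifest list for your stack:

```python
# survey.py - rough initial survey helper (heuristic sketch, not exhaustive)
from collections import Counter
from pathlib import Path

MANIFESTS = {"package.json", "pyproject.toml", "requirements.txt",
             "Cargo.toml", "go.mod", "pom.xml", "Gemfile"}

ext_counts: Counter[str] = Counter()
found_manifests: list[Path] = []

for path in Path(".").rglob("*"):
    if path.is_file() and ".git" not in path.parts:
        ext_counts[path.suffix or "(none)"] += 1
        if path.name in MANIFESTS:
            found_manifests.append(path)

print("files by extension:", ext_counts.most_common(10))
print("package manifests found:")
for m in found_manifests:
    print(" -", m)
```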
Phase 2: Foundation Extraction
Based on the codebase analysis, extract foundation documents:
1. Create devdocs/foundation/project_description.md
- What the system actually does (not what docs claim)
- Current user base and use cases
- Actual problems being solved
2. Create devdocs/foundation/philosophy.md
- Implicit design principles found in code
- Coding patterns consistently used
- Architectural decisions evident in structure
3. Create devdocs/foundation/known_requirements.md
- Requirements inferred from implementations
- Constraints visible in code
- Compliance/security measures present
Phase 3: Concept Discovery
Extract concepts from the existing implementation:
1. Create devdocs/concepts/discovered_concepts.md
- List all technical concepts found in code
- Note which are fully/partially/poorly implemented
- Identify missing expected concepts
2. For each concept, document in devdocs/concepts/concept_analysis/:
- Current implementation approach
- Why this specific approach was chosen
- Edge cases handled
- Known limitations
- Integration points
Phase 4: Architecture Archaeology
Reconstruct the architecture from code:
1. Create devdocs/archaeology/architecture_analysis.md:
- Trace main entry points and flows
- Map data models and schemas
- Document API endpoints and contracts
- Identify architectural patterns used
2. Create devdocs/archaeology/module_discovery.md:
- Natural module boundaries in code
- Coupling and cohesion analysis
- Dependency relationships
- Shared utilities and libraries
Phase 5: Concept Mapping
Map discovered concepts to actual implementation:
Create devdocs/archaeology/concept_mappings.md documenting:
- Which files/modules implement each concept
- Coverage percentage (fully realized, partial, missing)
- Why implementation diverged from ideal
- What alternatives were likely considered
- Edge cases that shaped the current design
This captures the "why" behind existing architecture.
Phase 6: State Assessment
Synthesize findings into current state assessment:
Create devdocs/archaeology/project_state.md:
- Overall architecture maturity level
- Technical debt impact assessment
- Missing concept implementations
- Integration points needing attention
- Risk areas for future development
- Recommended refactoring priorities
TOWARDS FIRST AI IMPLEMENTATION
Phase 7: Gap Analysis
Compare current state to desired future state:
Create devdocs/evolution/gap_analysis.md:
1. What concepts need implementation
2. What architecture changes are required
3. What technical debt blocks progress
4. What can be incrementally improved
5. What requires complete rewrite
Phase 8: Gap Closure Strategy
Using gap_analysis.md, create a phased evolution plan:
Create devdocs/evolution/gap_closure_plan.md:
1. Quick wins (can do immediately)
2. Incremental improvements (module by module)
3. Major refactoring (requires planning)
4. Complete rewrites (if necessary)
For each phase:
- Dependencies and prerequisites
- Risk assessment
- Testing strategy
Phase 9: Baseline Smoke Tests
Establish a working baseline with smoke tests: see what is running and what is not. Your job is to create smoke_tests/check_what_is_working/ and, under this folder, create smoke test files for each implementation that seems to be working.
Let's design comprehensive smoke tests to validate the implementation.
Please create smoke_tests folder if it doesn't exist and create a test plan with the following structure:
1. 5 test files, each containing 5 focused test cases
2. Avoid mocking and use real components with real calls with real data
3. File naming convention: test_01_[test_focus_area].py, test_02_[next_focus_area].py, etc.
4. Make sure these tests:
- Exercise individual functions/methods in isolation
- Test how components work together
- Verify whether solidly defined requirements are met
5. For each test file, provide:
- Clear description of what aspect it tests
- Why this testing area is critical
- Brief outline of each test case within the file
6. These tests shouldn't use any testing frameworks. Make the outputs verbose enough that you can see exactly what does not work
7. Start with initialization test file
8. Each test file's top comment should include how to run it manually
Make sure not to mock things, and try to run the tests with minimal changes. Make sure you cover the entire codebase.
Document current behavior in smoke_tests/check_what_is_working/report.md:
- What works as expected
- What's broken but acceptable
Phase 10: Codebase Cleanup Inventory
Identify unused code and organize for safe removal:
1. Scan for unused elements (see the heuristic sketch at the end of this phase):
- Unreferenced files and modules
- Dead code paths (unreachable functions)
- Commented-out code blocks
- Duplicate implementations
- Abandoned features
- Test files for non-existent code
- Orphaned configuration files
2. Create devdocs/archaeology/cleanup_inventory.md:
- List all candidates for removal
- Note why each appears unused
- Mark any that might be used dynamically
- Identify potential hidden dependencies
Never delete immediately - code that looks unused might be:
- Loaded dynamically
- Referenced in configuration
- Used by external systems
- Kept for compliance/audit reasons
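As a first-pass heuristic for Python projects, this sketch lists modules whose names never appear in an import line elsewhere. For exactly the reasons above, treat every hit as a candidate only:

```python
# cleanup_scan.py - heuristic: python files whose module name is never
# mentioned in an import line elsewhere. Candidates only; dynamic imports,
# config references, and external callers will NOT show up here.
import re
from pathlib import Path

py_files = [p for p in Path(".").rglob("*.py") if ".git" not in p.parts]
sources = {p: p.read_text(errors="ignore") for p in py_files}

for candidate in py_files:
    module = candidate.stem
    if module in {"__init__", "__main__", "setup", "conftest"}:
        continue
    pattern = re.compile(
        rf"\bimport\s+.*\b{re.escape(module)}\b|\bfrom\s+\S*\b{re.escape(module)}\b"
    )
    referenced = any(
        pattern.search(text) for p, text in sources.items() if p != candidate
    )
    if not referenced:
        print("possibly unused:", candidate)
```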
Phase 11: Strategic Refactoring Opportunities
Identify high-value refactoring opportunities that will significantly improve the codebase:
1. Scan for refactoring candidates:
- Database operations scattered across the codebase → Repository Pattern
- Business logic mixed with infrastructure → Service Pattern
- Repeated API call patterns → Gateway/Client abstraction
- Global state management → Dependency Injection
- Hard-coded configurations → Configuration Pattern
- Complex conditionals → Strategy Pattern (see the sketch at the end of this phase)
- Direct file system access → Storage abstraction
2. Create devdocs/evolution/refactoring_opportunities.md. For each opportunity, document:
- Current problematic pattern (with file locations)
- Proposed abstraction/pattern
- Immediate benefits (testability, maintainability)
- Implementation effort estimate
- Risk assessment
- ROI justification
3. Prioritize by value/effort ratio:
- Critical: Blocks testing or development
- High: Significant maintenance reduction
- Medium: Nice to have, clear benefits
- Low: Cosmetic, can wait
Focus only on refactoring that:
- Enables better testing
- Reduces coupling between modules
- Makes AI-driven development easier
- Solves actual pain points
Avoid refactoring for its own sake - each change must deliver measurable value.
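As one concrete example from the candidate list above, "complex conditionals → Strategy Pattern" can be this small; the pricing rules here are invented for illustration:

```python
# Before: a growing if/elif chain. After: one strategy per rule.
from typing import Callable

# Each strategy is a plain callable: price -> discounted price.
Strategy = Callable[[float], float]

STRATEGIES: dict[str, Strategy] = {
    "regular":  lambda price: price,
    "member":   lambda price: price * 0.90,
    "employee": lambda price: price * 0.70,
}


def checkout_total(price: float, customer_kind: str) -> float:
    """Dispatch to the matching strategy; adding a rule is one dict entry."""
    strategy = STRATEGIES.get(customer_kind, STRATEGIES["regular"])
    return strategy(price)


print(checkout_total(100.0, "member"))  # prints 90.0
```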
Phase 12: Implementation Roadmap
Create comprehensive roadmap combining all improvement activities:
Create devdocs/evolution/implementation_roadmap.md organizing work into phases:
Phase A - Foundation (Week 1-2):
- Critical cleanup from cleanup_inventory.md
- Essential refactoring that unblocks other work
- Fix broken core functionality from smoke test report
- Establish CI/CD if missing
Phase B - Core Refactoring (Week 3-4):
- Implement highest-ROI refactoring from refactoring_opportunities.md
- Add abstraction layers (Repository, Service patterns)
- Decouple tightly coupled modules
- Run smoke tests after each refactor
Phase C - Gap Filling (Week 5-8):
- Implement missing concepts from gap_analysis.md
- Start with quick wins
- Progress to module-by-module improvements
- Add new features identified in gaps
Phase D - Integration & Polish (Week 9-10):
- Ensure all modules work together
- Performance optimization
- Documentation updates
- Comprehensive testing
For each item include:
- Specific files/modules affected
- Dependencies (what must be done first)
- Success criteria
- Time estimate
- Risk level
This roadmap becomes your execution guide for transforming the codebase systematically.
This was an interesting piece to write. I also used AI extensively to process and refine some of these thoughts. I knew some things by intuition without really understanding why, and I did not even know that I did not understand them. I suspect one of the greatest benefits of AI is that humanity will finally come to understand itself. Whether or not it dooms us, AI is cool, and building with AI feels like magic.