Vibe-Driven Development (VDD): Practical Guide


Welcome

Welcome to the comprehensive guide on Vibe-Driven Development (VDD). This emerging methodology transforms the intuitive art of vibe coding into a structured approach with repeatable patterns for AI collaboration. It bridges the gap between coding by feel and engineering discipline.

Unlike pure vibe coding, VDD offers systematic, repeatable foundations and fine-grained control at every stage of development.

This book doesn’t introduce new concepts. Instead, it codifies what many developers already do instinctively, turning scattered practices into a cohesive methodology.

Core Principles

VDD operates on four pillars of explicitness:

  • Radical Transparency: Make everything visible - tests, data flows, transformations
  • Verbose Communication: Chatty tests that explain what’s happening
  • Explicit Over Implicit: No hidden behaviors or assumptions
  • Human-in-the-Loop: Maintain control while leveraging AI power

Why VDD?

As software developers, we navigate real requirements, team dynamics, and technical constraints while balancing timelines, quality standards, and accountability.

Recently, we have also faced immense pressure in our craft to harness the power of AI-assisted coding. Yet many treat AI as just a syntax generator, missing out on 10x productivity gains. Others hand over control entirely, inheriting brittle code and hidden vulnerabilities.

VDD bridges this gap: an evolving framework for true developer-AI partnership without the compromises.

What You’ll Learn

This book provides both philosophical foundation and practical implementation:

  • Part I: Foundations of Vibe Coding - History, philosophy, and core concepts
  • Part II: Understanding AI Collaboration - How AI thinks about code and effective collaboration
  • Part III: Vibe Coding Patterns - Essential patterns like DevDocs, Anchor, Smoke Tests, and Fuzzy Architecture
  • Part IV: The Vibe Coding Method - Step-by-step implementation guide
  • Appendices: Ready-to-use prompts and templates

Quick Start

If you prefer learning by doing:

  1. Jump to Appendix 1: New Project Prompts for step-by-step implementation
  2. Use Appendix 2: From Existing Codebase Prompts to apply VDD to existing projects

Who Should Read This

This book is for you if you’ve ever:

  • Watched AI generate code that looked perfect but didn’t work
  • Spent hours debugging despite all tests passing
  • Felt disconnected from a “clean” and “well-tested” codebase
  • Wondered why simple changes break in unexpected ways

How to Navigate

  • New to VDD? Start with the Preface and follow the chapters sequentially
  • Experienced Developer? Chapters 0-4 provide a 15-minute introduction to essential concepts
  • Ready to Implement? Jump directly to the appendices for practical prompts

Join our communities for more tips and interesting discussions

LinkedIn VDD Community

Preface

Why This Book Exists

In 2025, we find ourselves at a peculiar crossroads in software development. AI assistants can generate thousands of lines of code in seconds, yet developers spend hours debugging implementations they don’t fully understand. Tests pass with green checkmarks while core functionality silently fails. Code reviews have become exercises in faith rather than understanding.

This disconnect troubled me deeply. The promise of AI-accelerated development was turning into a crisis of opacity. We were building faster but understanding less. Our codebases were becoming black boxes, even to their creators.

Vibe-Driven Development emerged from a simple observation: when humans and AI collaborate on code, the biggest failures occur not from bad algorithms or syntax errors, but from miscommunication and hidden assumptions. The AI assumes one thing, the developer another, and neither realizes the disconnect until production fails.

The Core Insight

Traditional development methodologies were designed for human-to-human collaboration. Test-Driven Development helps developers communicate intent through tests. Domain-Driven Design creates a shared language between developers and stakeholders. But what methodology addresses human-AI collaboration?

VDD’s answer is radical transparency. Make everything visible. Make tests verbose. Make data flows explicit. Make transformations observable. When your code can’t hide behind abstractions and mocks, both humans and AI must confront the actual behavior.

A Personal Note

VDD isn’t about achieving the perfect codebase. It’s about building a continuous, structured, and measurable connection between your vision and AI’s understanding. AI complements your knowledge, speed, and creativity while you serve as the guiding lighthouse.

Enes/karaposu
10.09.2025

Join us at: https://www.linkedin.com/groups/15074007/

Terminology

This chapter defines key terms and concepts used throughout the Vibe-Driven Development methodology.

Vibe Coding

The practice of collaborative development with AI assistants, emphasizing natural communication and iterative refinement over rigid specifications.

Vibe-Driven Development (VDD)

An emerging methodology that brings structure and repeatable patterns to vibe coding, providing systematic foundations and granular control throughout the development process.

DevDocs Pattern

Living documentation that represents AI’s understanding of the project, continuously updated throughout development.

Smoke Tests Pattern

Comprehensive, verbose tests designed both for build verification and as specifications that AI can understand and implement against.

The Butterfly Defect

Small semantic imprecisions that cascade into large architectural changes. Named after the butterfly effect, where using “modal” instead of “dialog” might shift the entire UI philosophy.

Semantic Precision

Using exact, unambiguous terminology to prevent misinterpretation by AI.

Anchor Pattern

The practice of continuously ensuring existing functionality still works after AI makes changes, preventing regression through regular verification.

HILing (Human-In-the-Loop-ing)

The act of actively intervening as the human in the loop to review, correct, and steer AI output.

AI Drift

The gradual divergence of AI’s implementation from the original project vision, occurring when AI makes accumulated small decisions without human guidance.

Fuzzy Architecture

Starting with vague architectural guidelines and allowing structure to emerge through development, rather than over-specifying upfront.

Offload Pattern

Structuring code, documentation, and prompts to make AI’s job easier and more reliable.

Testing is Dialogue

The concept that tests serve as a communication medium with AI, teaching it requirements through examples.

Data Dump

The initial transfer of all available project information to AI, often messy and unstructured.

Retrofitting

Introducing VDD methodologies to projects that were originally developed without them.

Latent Persona

An emergent character or voice the AI adopts unintentionally, often due to biases in training data or weights. This can manifest as unexpected personality traits, communication styles, or decision-making patterns that weren’t explicitly prompted.

Ghost Tokens

Phantom influence from hidden system prompts, latent weights, or unseen context that alters style or tone without user intent. These invisible influences can cause AI to behave differently than expected.

Context Pollution

The buildup of irrelevant, stale, or contradictory information in the context window, which degrades the quality of AI’s subsequent output.

Lo-Fi Coding (Low Fidelity)

Vague, exploratory collaboration where AI has maximum creative freedom.

Mid-Fi Coding (Medium Fidelity)

Balanced collaboration where human sets boundaries and AI fills in details.

Hi-Fi Coding (High Fidelity)

Enhanced human-in-the-loop intervention with focused vision alignment.

Progressive Fidelity

Starting with Lo-Fi for exploration, moving to Mid-Fi for implementation, and applying Hi-Fi for critical sections.

From Waterfall to Agile to AI-Assisted

The Linear Era: Waterfall (1970s-1990s)

Waterfall dominated when software was simpler. Requirements → Design → Implementation → Testing → Deployment. Each phase completed before the next began. It worked because:

  • Software scope was limited
  • Teams were smaller
  • Change was expensive

The Breaking Point: Internet boom. Software complexity exploded. Six-month requirement phases became obsolete before implementation started.

The Iterative Revolution: Agile (2000s-2020s)

Agile flipped the script. Two-week sprints. Constant feedback. Embrace change. Key principles:

  • Working software over documentation
  • Customer collaboration over contracts
  • Responding to change over plans

Why It Worked: Matched the pace of business. Developers could adapt quickly. Users saw progress regularly.

The AI Transformation (2023-Present)

AI assistants changed everything again. A developer + AI can now output what entire teams produced. But traditional methodologies break down:

Waterfall + AI = Chaos: AI needs constant steering. Six-month plans are meaningless when AI can prototype in hours.

Agile + AI = Friction: Sprint planning becomes a bottleneck. AI works at near conversation speed, not sprint speed.

Era of Vibe Coding

Vibe coding emerged from developers actually using AI. It’s built on three realizations:

  1. AI amplifies both good and bad patterns - Structure matters more than ever
  2. Documentation becomes executable - What you write shapes what AI builds
  3. Testing is dialogue - Tests become conversations where you show AI what “correct” means through examples

The shift: From managing people to managing intelligence. From writing all code to directing code generation. From preventing bugs to rapidly fixing them.

The Birth of Vibe Coding

The Accidental Discovery

Vibe coding wasn’t designed - it was discovered. Early AI adopters noticed patterns:

  • Successful projects followed similar rhythms
  • Failed projects made similar mistakes
  • Traditional methods consistently underperformed

The name came from developers saying “you need to get into the vibe” with AI. Not just prompting - creating a flow state where human intention and AI capability merged.

The Principles

1. Documentation as Control Surface

Early adopters realized: AI treats documentation as truth. Write “this system handles millions of users” and AI codes for scale. Write “simple prototype” and AI keeps it minimal.

Discovery: Documentation isn’t describing code - it’s controlling code generation.

2. Smoke Tests as Build Verification

Problem: Without continuous testing, you don’t know if AI is building correctly until it’s too late.

Solution: Create smoke tests that verify each step of development is working as intended.

Discovery: You can’t wait until the end to test - smoke tests let you catch problems immediately as AI builds.
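In practice, such a smoke test might look like the sketch below (the module and function names are hypothetical):

# smoke_test_signup.py - a chatty smoke test (hypothetical names)
from auth_service import create_user, authenticate

def smoke_test_signup_and_login():
    print("STEP 1: Creating user alice@example.com ...")
    user = create_user("alice@example.com", "s3cret")
    print(f"  -> created user with id {user.id}")

    print("STEP 2: Authenticating with the correct password ...")
    token = authenticate("alice@example.com", "s3cret")
    assert token, "expected a token on valid login"
    print(f"  -> got token: {token[:8]}...")

    print("SMOKE TEST PASSED: signup and login work end to end")

if __name__ == "__main__":
    smoke_test_signup_and_login()

The test narrates every step it checks, so both you and AI can see exactly where the build breaks.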

3. Fuzzy Architecture

Problem: Detailed upfront design leads to overengineering with AI.

Solution: Start with intentionally vague architecture that crystallizes through implementation.

Discovery: AI helps discover the right architecture by building, not by planning.

4. Continuous Anchoring

Problem: As AI works on new features, it forgets earlier requirements.

Solution: Constantly run tests to verify old functionality still works.

Discovery: AI’s limited context means you must actively maintain working state.

5. Offloading the Load

Problem: AI gets confused by inconsistent patterns and unclear structure.

Solution: Make everything explicit, consistent, and well-organized for AI.

Discovery: The easier you make AI’s job, the better results you get.

The Breakthrough Moment

The key insight: AI is not a developer, it’s an amplifier.

Traditional methods treat AI like a faster developer. Vibe coding recognizes AI as something new:

  • Incredible speed, but needs direction
  • Limited memory due to context windows, but vast knowledge from training
  • Pattern recognition beyond human capability, but can confidently produce errors
  • Surprising insights mixed with obvious mistakes

The Method Emerges

Developers started sharing patterns that would become core to vibe coding:

“DevDocs” Pattern: Excessive documentation as source of truth that guides AI
“Smoke Tests” Pattern: Verbose tests that validate AI understands your intent
“Fuzzy Architecture” Pattern: Start intentionally vague, let structure emerge through building
“Anchor” Pattern: Force AI to verify old functionality still works after changes
“Offload” Pattern: Structure everything to make AI’s job as easy as possible

These weren’t arbitrary - they solved real problems:

  • DevDocs gave AI persistent context across sessions
  • Smoke tests caught AI misunderstandings early
  • Fuzzy architecture prevented overengineering
  • Anchors prevented silent breakage as context drifted
  • Offloading reduced AI confusion and errors

Why “Vibe”?

Because it captures something traditional terms miss:

  • Not “engineering” - too rigid
  • Not “crafting” - too slow
  • Not “hacking” - too chaotic

Vibe coding is like jazz. Structure + improvisation. Rules + intuition. Human creativity + AI capability.

The Community Forms

Discord servers. GitHub repos. Blog posts. Developers sharing what worked, warning what didn’t. Patterns refined through thousands of projects.

Common realization: This isn’t a fad. It’s how development works now.

The Present

Today, vibe coding is:

  • Documented methodology, not just intuition
  • Proven patterns, not just preferences
  • Growing community, not just early adopters

The future isn’t AI replacing developers. It’s developers wielding AI through vibe coding.

After two years of AI-assisted development, I documented the patterns that emerged: Vibe-Driven Development

This is an online book that introduces vibe coding patterns such as DevDocs, smoke tests, anchor pattern, and more. For a quick overview, check out Appendix 1, where I provide ready-to-use prompts for starting a new AI-driven project.

You can find it here: https://karaposu.github.io/vibe-driven-development/

Since GPT-3.5’s release, I’ve been deep in AI-assisted coding. I began noticing I was unconsciously following certain patterns. Through many projects, I refined these patterns into a fairly reliable methodology.

When I explained these ideas to developer friends who knew about my AI coding work, they found the logic compelling. That motivated me to document everything properly.

I don’t claim this is definitive - I know many of you are probably following similar approaches, even if unnamed or unpublished. I’d love to hear your thoughts, whether you think I’m onto something or completely off base.

Why Traditional Methods Fall Short with AI

The Fundamental Mismatch

Traditional methodologies assume human limitations:

  • Developers write ~100 lines/day
  • Context switching is expensive
  • Knowledge transfer takes time

AI breaks these assumptions:

  • Generates thousands of lines/minute
  • No context switching cost
  • Instant “knowledge” of any framework

Where Waterfall Breaks

Problem 1: Upfront Design Becomes Guesswork

  • AI can prototype five architectures while you’re writing requirements
  • Detailed specs become prison bars, not guidelines
  • By implementation phase, better patterns have emerged

Problem 2: Phase Gates Block Learning

  • AI reveals design flaws immediately through code
  • Waiting for “testing phase” wastes AI’s rapid feedback
  • Sequential phases ignore AI’s iterative nature

Where Agile Stumbles

Problem 1: Sprint Velocity Becomes Meaningless

  • AI completes “8 story points” in 10 minutes
  • Planning poker is absurd when AI codes at conversation speed
  • Team velocity metrics don’t capture AI amplification

Problem 2: Ceremonies Become Bottlenecks

  • Daily standups slower than AI implementation
  • Sprint reviews can’t keep pace with AI output
  • Retrospectives happen after AI has moved on

The Core Issues

1. Trust vs Verification Gap

Traditional: Trust developers to write correct code
Reality: AI needs constant verification, not trust

2. Planning vs Discovery

Traditional: Plan then execute
Reality: AI enables discovery through execution

3. Documentation Timing

Traditional: Document after stability
Reality: Documentation drives AI behavior

4. Error Philosophy

Traditional: Prevent errors through process
Reality: Fix errors faster than preventing them

The New Reality

With AI, you’re not managing code creation - you’re managing code generation. This requires:

  • Real-time steering, not upfront planning
  • Continuous verification, not phase gates
  • Living documentation, not post-facto specs

Traditional methods optimize for human constraints that no longer exist.

Code is the Documentation >>> Documentation is the Code

The Old Truth: Code Never Lies

For decades, developers preached “code is the documentation.” Why? Because:

  • Documentation drifts from reality
  • Comments become lies over time
  • Only code executes, only code is truth

The semantic gap was real. No documentation could fully capture runtime behavior, edge cases, actual implementation details. Reading code was the only way to truly understand a system.

The Etymology of “Code”

“Code” comes from Latin “codex” - a book of laws, a systematic collection of statutes.

Originally, a code was:

  • A system of rules
  • A way to encode meaning
  • A formal specification of behavior

Not the implementation - the specification.

The Paradigm Flip

With AI and VDD, we’re returning to the original meaning. Documentation doesn’t describe code anymore - it generates it, and therefore the two are tightly coupled.

Old world: Write code → Extract documentation
New world: Write documentation with AI while generating code

The documentation becomes the “codex” - the law that governs what gets built.

Why This Works Now

AI bridges the semantic gap:

  • Natural language → Working implementation
  • Intent → Execution
  • Specification → System

What changed? The compiler. AI is a compiler for human intent.

The New Reality

When you write:

This service handles authentication with rate limiting of 100 requests per minute

AI doesn’t just read this - it implements it. The documentation isn’t describing code that exists. It’s creating code that will exist.
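As a sketch, that single sentence might compile into something like this (an illustrative implementation, not the only valid one):

# Sliding-window rate limiter derived from the documentation:
# "rate limiting of 100 requests per minute"
import time
from collections import defaultdict, deque

RATE_LIMIT = 100        # requests
WINDOW_SECONDS = 60     # per minute

_request_log = defaultdict(deque)  # client_id -> recent request timestamps

def allow_request(client_id: str) -> bool:
    """Return True if this client is still under 100 requests/minute."""
    now = time.time()
    log = _request_log[client_id]
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()              # drop timestamps outside the window
    if len(log) >= RATE_LIMIT:
        return False               # limit reached, reject
    log.append(now)
    return True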

Documentation as Source Code

In vibe coding:

  • project_description.md compiles to architecture
  • interface.md compiles to APIs
  • known_requirements.md compiles to features

Your documentation is source code for an AI compiler.

Semantic Precision and “The Butterfly Defect”

The Weight of Words

In human communication, we tolerate ambiguity. “Fast” could mean milliseconds or minutes. “User-friendly” means different things to different people.

AI doesn’t tolerate - it interprets. And propagates.

The Butterfly Defect

Traditional butterfly effect: A butterfly flaps wings in Brazil, causes tornado in Texas.

The Butterfly Defect: You type “modal” instead of “dialog”, your entire UI philosophy shifts.

Real example:

Human: "Create a service for handling user data"
AI: *builds microservice architecture*

vs

Human: "Create a class for handling user data"
AI: *builds single class with methods*

One word. Completely different architecture.

The Propagation Problem

Wrong terminology compounds:

  • Say “component” → AI assumes React
  • React assumption → hooks everywhere
  • Hooks everywhere → state management complexity
  • Complexity → performance issues

The defect spreads through every decision.

Finding the Right Words

Two strategies:

1. Use AI as Terminology Guide

Human: "What's the proper term for a popup window that blocks interaction?"
AI: "That's called a 'modal dialog' or simply 'modal'"

2. Define Terms Explicitly

For this project:
- "Service" means a class with business logic
- "Module" means a Python file
- "Component" means a logical grouping of features

Common Defects to Avoid

  • “Simple” → AI removes error handling
  • “Fast” → AI ignores correctness
  • “Modern” → AI adds every new feature
  • “Clean” → AI over-abstracts

The Precision Principle

Be specific about:

  • Technical terms (service vs class vs module)
  • Scope words (simple vs minimal vs basic)
  • Quality attributes (fast vs optimized vs efficient)

Remember: AI amplifies ambiguity into architecture.

The Fix

When you catch a Butterfly Defect:

  1. Stop immediately
  2. Correct the terminology
  3. Explicitly state what you mean
  4. Let AI adjust course

“Fix your terminology now, or refactor your entire architecture later.”

Using clearly named variables

Since AI also understands variables through their names, longer descriptive names are better than short cryptic ones.
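A minimal illustration (the names are hypothetical):

# Cryptic: AI must guess what these mean
d = 86400
def chk(u, t): ...

# Descriptive: the names themselves give AI context
seconds_per_day = 86400
def is_session_token_valid(user_id: str, session_token: str) -> bool: ...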

AI Taming: Keeping Control While Leveraging Power

The Director’s Chair

In vibe coding, you’re not a programmer anymore - you’re a director.

Traditional coding is like walking: you control every step, every muscle movement. AI-assisted coding should be like driving: you control direction and speed while the machine handles mechanics.

But there’s a problem. Cars don’t randomly drive to unexpected destinations. AI does.

The Self-Driving Paradox

AI can literally code by itself. Give it a vague goal and it will build something. Maybe not what you wanted, but something functional.

This is both AI’s greatest strength and biggest danger.

Evolution of Intent

Traditional Development:

Human: "Build user authentication with email/password, bcrypt hashing, JWT tokens"
AI: *builds exactly that specification*

Vibe Coding:

Human: "Users need secure access to their personal data"
AI: *suggests OAuth, biometrics, passwordless, evaluates tradeoffs*
Human: "Let's go passwordless for better UX"
AI: *implements magic links with rate limiting*

You’re not micromanaging implementation. You’re directing intent and making strategic decisions.

The Shifting Boundary

As AI evolves, the boundary shifts upward:

You Focus On:

  • What problems need solving
  • What success looks like
  • What constraints are non-negotiable
  • Which tradeoffs to accept

AI Handles:

  • How to implement solutions
  • Which patterns fit best
  • What optimizations to apply
  • How to structure code

Maintaining Control

The key insight: You must remain the director, not become a passenger.

The New Core Skills

Traditional skills (syntax, algorithms) become secondary. Primary skills now:

Intent Articulation

  • Express goals clearly
  • Communicate context effectively
  • Define success criteria

Constraint Specification

  • Set boundaries explicitly
  • Define non-negotiables
  • Specify quality requirements

Quality Recognition

  • Identify good solutions and overengineered ones quickly
  • Spot potential issues
  • Recognize when “good enough” is reached

Vision Preservation

  • Maintain project coherence
  • Prevent scope creep
  • Keep long-term goals in focus

Emergence of New Design Patterns

What are Design Patterns?

Design patterns are reusable solutions to common problems:

  • Singleton: One instance only
  • Factory: Create objects without specifying class
  • Observer: Notify multiple objects of changes

Why they mattered:

  • Shared vocabulary between developers
  • Proven solutions to recurring problems
  • Faster development through reuse

The AI Shift

Traditional patterns remain essential - AI still uses Singleton, Factory, Observer, and other proven solutions. What’s new is that AI changes HOW we work:

  • Refactoring is faster (but patterns still guide structure)
  • Architecture can evolve more easily (but patterns provide stability)
  • Iteration is cheaper (but patterns prevent chaos)

AI doesn’t replace design patterns. It adds a new layer: patterns for human-AI collaboration. These are vibe-patterns.

There is no unified consensus or standard for vibe-patterns, but we have identified and clarified the most commonly used ones (even though they may go by different names).

Vibe coding patterns weren’t designed. They were discovered by developers actually using AI.

Devdocs pattern: Documentation-Driven Architecture

Problem: AI needs context to generate correct code
Solution: Make AI write docs first, then generate the implementation from those docs

Smoke-Test-Driven Specification

Problem: AI confidently generates incorrect code
Solution: Tests become executable specifications

Fuzzy-First Development

Problem: Premature optimization with AI leads to overengineering
Solution: Start intentionally vague, let clarity emerge

Anchor Pattern

The Anchor Pattern preserves working functionality throughout development by establishing a tested baseline before changes and re-validating that baseline after each modification, ensuring AI doesn’t break existing features while adding new ones.
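A minimal sketch of the pattern, assuming smoke tests are kept as runnable scripts (the paths and names are hypothetical):

# run_anchors.py - re-run the tested baseline after every AI change
import subprocess
import sys

ANCHOR_TESTS = [
    "smoke_tests/test_signup.py",
    "smoke_tests/test_login.py",
    "smoke_tests/test_export.py",
]

def run_anchors() -> bool:
    for test in ANCHOR_TESTS:
        print(f"ANCHOR: running {test} ...")
        result = subprocess.run([sys.executable, test])
        if result.returncode != 0:
            print(f"ANCHOR FAILED: {test} - an existing feature broke")
            return False
    print("ALL ANCHORS PASSED: baseline intact")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_anchors() else 1)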

Offload pattern

Makes AI’s job easier by enforcing explicit limitations.
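One way this looks in practice is an explicit constraints block at the top of a prompt (an illustrative example):

"Constraints for this task:
- Python 3.10+, standard library only
- Single file, under 200 lines
- No new dependencies, no new config files
- Follow the existing naming conventions in src/"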

Why These Patterns Matter

1. Predictable AI Behavior

When you follow patterns, AI responds consistently across multiple projects and across different models. Random approaches yield random results.

2. Reduced Defect Propagation

Patterns contain the errors. Without patterns, one mistake spreads everywhere.

3. Faster Development

Not coding faster - arriving at correct solution faster. Less backtracking.

4. Team Alignment

When everyone uses same patterns, AI outputs become consistent across team.

5. Learning Acceleration

Patterns are teachable. New developers can learn “the vibe” quickly.

The Compound Effect

Individual patterns are useful. Combined, they’re transformative. Flexible yet directed development. Confident incremental progress. And, in the end, a robust outcome that matches your intent.

The Human-in-the-Loop Principle

Redefining the Loop

Traditional definition: Human reviews and corrects AI outputs.

VDD definition: Human and AI alternate control dynamically, each contributing their unique strengths.

The loop isn’t about supervision - it’s about synergy.

Why Humans Stay Essential

The Reality Gap

AI understands the world through text and code, not lived experience. It knows about user frustration but hasn’t felt it. It can optimize algorithms but doesn’t know when a UI feels “off.” This experiential gap means AI needs human judgment for real-world applicability.

The Halting Problem

AI doesn’t know when to stop improving. Ask it to optimize code and it will keep optimizing regardless of whether the code is already good enough. Ask it to add features and it will add them endlessly. It lacks the human sense of “good enough” - that crucial judgment of when additional work yields diminishing returns.

Current AI systems are trained to follow instructions, not question them. They won’t push back when you’re overengineering. They won’t tell you to stop when the solution is sufficient unless you specifically ask them to question it.

Contextual Blindness

AI lacks persistent memory across sessions and can’t see your broader ecosystem. It doesn’t know your team’s skill level, your deadline pressures. These invisible constraints shape every development decision, yet AI operates without them.

The Ego Trap

Human Ego as Bottleneck

The “I could do it better” syndrome kills AI productivity:

Symptoms:

  • Micromanaging every implementation detail
  • Dismissing AI suggestions without consideration
  • Using AI as glorified autocomplete instead of collaborator
  • Insisting on personal coding style over functionality

Reality check: Your “perfect” code that takes hours might be less valuable than AI’s “good enough” code in minutes.

AI’s Hidden Ego

AI has its own form of ego - overconfidence without awareness:

Manifestations:

  • Confidently presenting broken solutions
  • Over-engineering simple problems to appear sophisticated
  • Insisting on “best practices” regardless of context
  • Adding unrequested features to seem helpful

The danger: AI’s confidence is uncorrelated with correctness. It presents wrong answers with the same certainty as right ones.

Finding the Balance

Human Contributions

  • Strategic Vision: Where we’re going and why
  • Quality Judgment: What’s good enough vs. what needs improvement
  • Context Awareness: Understanding constraints AI can’t see
  • Creative Direction: Novel approaches and breakthrough thinking
  • Ethical Boundaries: What should and shouldn’t be built

AI Contributions

  • Rapid Implementation: Turning ideas into code at superhuman speed
  • Pattern Recognition: Identifying solutions from vast training data
  • Tireless Iteration: Refining without fatigue or frustration
  • Syntax Perfection: Eliminating typos and formatting issues
  • Parallel Exploration: Trying multiple approaches simultaneously

The Collaboration Dance

Effective human-in-the-loop follows a rhythm:

  1. Human sets intent - Clear goal without overspecification
  2. AI explores solutions - Multiple approaches generated quickly
  3. Human evaluates direction - Course correction, not micromanagement
  4. AI refines implementation - Detailed work within boundaries
  5. Human validates results - Ensuring alignment with vision
  6. Cycle repeats - Each iteration improves understanding

This isn’t command-and-control. It’s jazz improvisation with structure.

Practical Calibration

Over-Control (Micromanagement)

Human: "Create a function named calculateTotal with parameters a and b, 
        add them using the + operator, store in variable named sum, 
        return sum with explicit return statement"

Result: You’re just typing through AI. No leverage gained.

Under-Control (Abdication)

Human: "Make the form better"
AI: *adds 15 validation rules, 3 step wizard, progress indicators, auto-save, keyboard shortcuts, and animated transitions*
Human: "I just wanted better error messages..."

Result: Chaos and wasted cycles.

Optimal Control (Collaboration)

Human: "I need to calculate invoice totals including tax and discounts"
AI: *proposes implementation approach*
Human: "Good structure, but discounts should apply before tax"
AI: *adjusts logic while maintaining approach*
Human: "Perfect. Now add support for multiple tax rates"
AI: *extends cleanly within established pattern*

Result: Rapid, aligned development.

The Learning Loop

Human-in-the-loop creates a feedback system that improves over time:

You learn:

  • AI’s strengths and blind spots
  • How to communicate intent effectively
  • When to intervene vs. when to let AI run

AI learns (within session):

  • Your coding preferences
  • Project patterns and conventions
  • Domain-specific requirements

Together you develop:

  • Efficient communication shortcuts
  • Productive rhythm
  • Shared understanding

Signs of Healthy Collaboration

✅ You’re regularly surprised by elegant AI solutions
✅ You catch issues early before they cascade
✅ Development feels like pair programming, not dictation
✅ You understand everything being built
✅ AI stays within intended boundaries
✅ Progress is rapid but controlled

Signs of Unhealthy Patterns

❌ Every AI suggestion needs major rework
❌ You’re constantly fighting AI’s approach
❌ Code is being generated you don’t understand
❌ More time correcting than progressing
❌ Feeling like you’re battling or babysitting

The Core Truth

Human-in-the-loop isn’t a temporary limitation waiting for better AI. It’s the optimal model for creative collaboration. Humans provide meaning and judgment. AI provides speed and capability.

Together, we build what neither could alone.

The loop isn’t a constraint - it’s the key to amplification.

Attention Mechanism

What Attention Actually Means

When AI reads your code, it doesn’t read like humans do. It uses “attention” - weighted focus across all tokens simultaneously.

Think of it like this:

  • Human reading: Sequential, left-to-right, building understanding
  • AI reading: Parallel, everything at once, finding patterns and connections

Why it matters

Knowing how attention works can help you offload complexity from the LLM.

The Spotlight Metaphor

Imagine a dark stage with 1000 actors. AI has 1000 spotlights it can dim or brighten. When you mention “user authentication”, suddenly:

  • Spotlights on “password” brighten
  • Spotlights on “login” brighten
  • Spotlights on “security” brighten
  • Spotlights on “recipe ingredients” dim

This happens for every token, instantaneously.

Why This Matters for Vibe Coding

1. Context and Relevance Matter Most

While AI processes everything in parallel, what matters most is relevance to the task at hand.

# If you're asking about authentication, this gets attention
def authenticate_user():
    pass
    
# This gets less attention regardless of position
def calculate_recipe_calories():
    pass

Put related information together, make intent clear.

2. Proximity Is Power

Related concepts near each other strengthen attention bonds.

# Weak attention bond
class User:
    pass

# ... 500 lines later ...

def authenticate_user():
    pass

vs

# Strong attention bond  
class User:
    pass
    
def authenticate_user():
    pass

Keep related concepts close.

3. Repetition Reinforces

Each mention strengthens attention pathways.

# Single mention - weak signal
# This handles user data

# Multiple mentions - strong signal
# This UserService handles user authentication
# It manages user sessions and user preferences
class UserService:

But don’t overdo it - that’s keyword stuffing.

Context Window Limits

AI has finite context. When you approach the context limit:

  • Relevant content to your question gets prioritized
  • AI focuses on what’s needed for the current task
  • Very long contexts can sometimes cause confusion
  • Organization and clarity matter more than position

This is why grouping related files matters when sharing multiple files.

Practical Implications

1. Lead with Intent

# BAD: Burying the lead
# This function does various things with data
# It processes some stuff and returns results
# Oh, by the way, it's for authentication

# GOOD: Clear intent upfront
# Authenticates users against the database
# Returns JWT token on success, throws on failure

2. Context Clustering

When sharing multiple files, group by relevance:

1. Core business logic files
2. Related utility files
3. Configuration files
4. Test files

Not alphabetically or randomly.

3. Distinctive Names for better attention

# Weak anchor
def process():
    pass

# Strong anchor
def process_payment_webhook():
    pass

Specific names help AI maintain focus.

Attention Hijacking

Some patterns grab too much attention:

TODO/FIXME Comments

# TODO: Fix this security vulnerability
def safe_function():
    return data  # AI obsesses over the TODO

AI might focus on the TODO instead of your actual request.

Strong Keywords

Words like “deprecated”, “legacy”, “broken”, “hack” trigger strong attention. Use carefully.

Working with Attention

The Priming Pattern

Start conversations by setting attention focus:

"I'm working on the authentication module. 
Please focus on security and user experience."

The Context Window Strategy

Share files in order of importance:

  1. The file you want to modify
  2. Direct dependencies
  3. Related interfaces
  4. Supporting utilities

The Attention Reset

When AI gets fixated on wrong things:

"Let's refocus. Ignore the previous discussion about X.
I need help specifically with Y."

Attention vs Memory

Attention ≠ Memory

  • Attention: What AI focuses on right now
  • Memory: What AI remembers from the codebase

You can direct attention. You can’t change memory.

The Compound Effect

Good attention management:

  • Faster correct responses
  • Less confusion
  • Better architectural coherence
  • Fewer tangential suggestions

Poor attention management:

  • AI suggests fixing non-issues
  • Focuses on wrong patterns
  • Misses critical requirements
  • Generates irrelevant code

Quick Tips

  1. Name precisely - auth_service not service1
  2. Comment intentions - Why, not what
  3. Order thoughtfully - Important stuff first
  4. Cluster related - Group by concept
  5. Reset when needed - Don’t fight confused attention

Understanding attention helps you communicate effectively with AI. It’s not about tricking the system - it’s about clarity.

Context Windows and Memory

The Fundamental Limitation

Every AI has a context window - the total amount of text it can “see” at once. Think of it as RAM, not hard drive. Even though we have seen huge gains in context length, it still matters to use the window efficiently.

Context Window !== Memory

Context Window: What AI can see right now
Memory: What AI learned during training

You can fill the context window. You can’t add to memory.

The Sliding Window Problem

As conversation grows, early content falls out:

[Start of conversation]
"Here are my ground rules..." <- Eventually falls out
[... many messages ...]
"Why aren't you following the ground rules?" <- AI: "What ground rules?"
[Current message]

The Anchor pattern is a good fix for this issue.

Token Economics

Everything counts against the window:

  • Your messages
  • AI responses
  • Code snippets
  • Error messages
  • File contents

Long error stacktraces can eat 1000+ tokens instantly.
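A common rule of thumb is roughly four characters per token for English text, so you can estimate cost before pasting. A rough heuristic, not a real tokenizer:

def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text."""
    return len(text) // 4

# A 40-frame stacktrace adds up quickly
stacktrace = "Traceback (most recent call last):\n" + '  File "app.py", line 1, in handler\n' * 40
print(f"Pasting this costs roughly {estimate_tokens(stacktrace)} tokens")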

Strategies for Large Codebases

1. The Core + Context Pattern

Don’t share everything. Share:

  • Core file being modified
  • Direct dependencies only
  • Relevant interfaces
  • Specific test files

2. The Summarization Ladder

For huge codebases:

Level 1: Full implementation files (for current work)
Level 2: Interface files (for dependencies)
Level 3: Summary docs (for distant modules)
Level 4: Architecture overview (for system context)

3. The Refresh Pattern

Periodically refresh context:

"Let me remind you of the key requirements:
- [Critical point 1]
- [Critical point 2]
We're currently working on [specific task]"

4. The Checkpoint Strategy

After major milestones:

"We've completed the authentication module.
Here's what we built: [summary]
Now let's move to the authorization module."

Window Management Techniques

Compression via Abstraction

Instead of:

# Sharing full implementation
class UserService:
    def __init__(self, db, cache, logger):
        self.db = db
        self.cache = cache
        self.logger = logger
    
    def create_user(self, email, password):
        # 50 lines of implementation
        
    def authenticate(self, email, password):
        # 30 lines of implementation
        
    # ... 10 more methods

Share:

# UserService interface
class UserService:
    """Handles user CRUD and authentication"""
    def create_user(self, email: str, password: str) -> User
    def authenticate(self, email: str, password: str) -> str  # JWT token
    def get_user(self, user_id: str) -> User
    # ... just signatures

The DevDocs pattern explicitly creates a doc called interfaces_and_endpoints.md, which can be used for this exact purpose as well.

Selective Inclusion

Use AI to identify what to include:

"I need to modify the payment processing.
What files should I share with you?"

AI often knows what it needs to see.

The Layered Approach

  1. Start with high-level overview
  2. Drill into specific modules
  3. Zoom into exact functions
  4. Back out to integration level

Like Google Maps for code.

The Context Budget

Treat tokens like money:

  • Budget for each conversation
  • Spend on valuable context
  • Cut unnecessary verbosity
  • Save for complex operations

Signs You’re Out of Context

  • AI forgets earlier requirements
  • Suggestions contradict prior work
  • Generic responses increase
  • AI asks for clarification repeatedly

Time to refresh or start new conversation.

Strengths and Limitations

AI Strengths: The Superpowers

1. Syntax Perfection

AI never forgets a semicolon, bracket, or quote. Perfect syntax, every time.

2. Pattern Recognition

Sees patterns across thousands of files instantly.

"This looks like a Factory pattern but with Singleton 
characteristics. Consider using Dependency Injection instead."

Human might miss this. AI spots it immediately.

3. Boilerplate Generation

CRUD operations, test scaffolding, API endpoints - AI excels at repetitive patterns.

"Create REST endpoints for User model"
*Instantly generates all 5 endpoints with proper error handling*

4. Language Translation

Moving between languages/frameworks:

"Convert this Python Flask app to Node Express"
*Accurately translates idioms and patterns*

5. Documentation Generation

Turns code into docs effortlessly:

  • API documentation
  • README files
  • Code comments
  • Architecture diagrams (as text)

6. Refactoring Speed

Rename across files, extract methods, restructure - seconds not hours.

7. Best Practices Knowledge

Knows every style guide, security practice, performance optimization.

AI Limitations:

1. Runtime Blindness

While AI can write code for interactive features, it can’t always test them, because UI interactions, voice inputs, visual feedback, and the like require human senses and actions. We become AI’s eyes and hands for runtime validation.

2. Hallucination Tendency

Makes up plausible-sounding APIs that don’t exist.

import tensorflow as tf
tf.quantum.entangle()  # Sounds cool, doesn't exist

3. Context Conflation

Mixes patterns from different contexts.

// React + jQuery mixed incorrectly
$('#root').setState({ value: 'confused' })

4. Over-Engineering Bias

AI loves adding unnecessary complexity.

Human: "Store user preferences"
AI: *Creates distributed cache with Redis, event sourcing, and CQRS*

Working with Strengths

Leverage Pattern Recognition

"This code smells like [pattern]. Suggest refactoring."

Use for Exploration

"Show me 3 different ways to implement this"

For Runtime Blindness

Always verify with actual execution:

"Generate the code, I'll run it and share results"

For Over-Engineering

Add constraints:

"Simplest solution possible, no external dependencies"

The Golden Rules

Trust AI For:

  • Syntax and structure
  • Common patterns
  • Refactoring mechanics
  • Documentation
  • Exploration

Don’t Trust AI For:

  • Business logic
  • Runtime behavior
  • Performance assumptions
  • Security without verification
  • Architecture without thought

The Collaboration Sweet Spot

Best results when:

  • Human provides vision and verification
  • AI provides implementation and iteration
  • Human catches logical errors
  • AI catches syntax errors
  • Both challenge each other

Think partnership, not replacement.

Next: Chapter 5 - The Human-AI Development Loop →

When to Guide vs When to Follow

The Fundamental Question

Every AI interaction presents a choice: Do I lead or do I follow? Wrong choice = wasted time.

When to Guide (You Lead)

1. Business Requirements

AI doesn’t know your users, market, or constraints.

GUIDE: "Users need to export data for tax purposes in specific format"
NOT: "How should users export data?"

2. Architecture Decisions

AI suggests patterns. You choose based on reality.

GUIDE: "Use REST APIs since our mobile team knows it well"
NOT: "Should we use REST or GraphQL?"

3. Performance Constraints

AI doesn’t know your scale or bottlenecks.

GUIDE: "This needs to handle 10k requests/second"
NOT: "Make it fast"

4. Security Requirements

AI knows general security. You know your threat model.

GUIDE: "We need SOC2 compliance with audit logging"
NOT: "Make it secure"

5. Integration Points

AI doesn’t know your existing systems.

GUIDE: "This must integrate with our legacy SOAP API"
NOT: "How should this communicate with other services?"

When to Follow (AI Leads)

1. Implementation Details

Once direction is set, let AI handle specifics.

GUIDE: "Users need to stay logged in for 30 days"
FOLLOW AI's answer to: "What's the best way to implement JWT refresh tokens?"

2. Best Practices

AI knows current standards better than most developers.

GUIDE: "It needs to display user profiles"
FOLLOW AI's answer to: "How should I structure this React component?"

3. Error Handling

AI excels at comprehensive error cases.

GUIDE: "It processes payment webhooks"
FOLLOW AI's answer to: "What errors should this API endpoint handle?"

4. Refactoring Suggestions

AI sees patterns you might miss.

GUIDE: "It must remain backwards compatible"
FOLLOW AI's suggestion when asking: "This code feels complex. Suggest improvements."

5. Technology Selection

For well-understood problems, AI knows tool tradeoffs.

GUIDE: "We're on AWS with Redis experience"
FOLLOW AI's recommendation for: "What caching solution for session data?"

The Grey Zone

Some decisions need negotiation:

API Design

Human: "I need user management endpoints"
AI: "Here's a RESTful design..."
Human: "Actually, we prefer GraphQL"
AI: "Here's the GraphQL schema..."

Start by following, then guide corrections.

Database Schema

AI: "Suggests normalized schema"
Human: "We optimize for reads, denormalize"
AI: "Here's denormalized version"

Let AI propose, you dispose.

Testing Strategy

Human: "We need tests"
AI: "Unit, integration, and E2E tests..."
Human: "We only do integration tests"
AI: "Focused integration test suite..."

Pattern Recognition

Guide When:

  • Domain-specific knowledge required
  • Business logic involved
  • External constraints exist
  • Past decisions affect current
  • User experience matters

Follow When:

  • Technical implementation unclear
  • Multiple valid approaches exist
  • Best practices needed
  • Common patterns apply
  • You’re learning something new

Communication Strategies

When Guiding

Be specific and constraining:

"Build auth with these requirements:
- Email/password only
- 2FA via SMS
- Session length 24 hours
- PostgreSQL storage"

When Following

Be open and exploratory:

"I need authentication. What approach would you recommend given modern security practices?"

The Hybrid Approach

Most effective:

"I need auth for a B2B SaaS (context).
What's the current best practice (follow)?
Must integrate with Okta (guide)."

Managing AI Drift

What Is AI Drift?

AI drift happens when AI gradually moves away from your original intent. Like a boat without an anchor, small currents compound into major course changes.

Types of Drift

1. Complexity Drift

Starts simple, becomes enterprise.

Request 1: "Add user login"
AI: Simple email/password

Request 2: "Add password reset"
AI: Adds email service, queues, tokens

Request 3: "Add remember me"
AI: Adds Redis, session management, device tracking

Request 4: "Add social login"
AI: OAuth, SAML, OpenID, full identity provider

Suddenly you’re building Auth0.

2. Style Drift

Inconsistent patterns across codebase.

# Monday - functional style
def process_user(user_data):
    return transform(validate(user_data))

# Tuesday - OOP style
class UserProcessor:
    def process(self, user_data):
        self.validate()
        self.transform()

# Wednesday - procedural style
user_data = get_user()
validate_user(user_data)
transform_user(user_data)

3. Architecture Drift

Original design morphs unrecognizably.

Start: "Simple REST API"
Drift 1: Add GraphQL "for flexibility"
Drift 2: Add WebSockets "for real-time"
Drift 3: Add gRPC "for performance"
End: Four different API protocols

4. Scope Drift

Features multiply beyond intent.

Original: "Todo app"
+ "Add categories" 
+ "Add sharing"
+ "Add comments"
+ "Add notifications"
= Project management suite

5. Technology Drift

Stack grows unnecessarily.

Start: React + Node
+ Redux "for state"
+ MongoDB "for flexibility"  
+ Docker "for deployment"
+ Kubernetes "for scaling"
+ Kafka "for events"
= Overengineered todo app

Why Drift Happens

Drift can occur for numerous reasons:

1. AI Lacks Project Context

Each request seems isolated; AI doesn’t see the big picture. The most common cause is poor context management.

2. AI Optimizes Locally

Makes each piece “better” without considering whole.

3. Pattern Matching Gone Wild

AI applies patterns even when inappropriate.

4. No Persistent Memory

Can’t remember “we decided to keep it simple.”

Drift Recovery

When You’ve Already Drifted

  1. Acknowledge the drift
     "We've drifted from simple to complex. Let's identify what we can remove."
  2. Document current state
     "List all technologies and patterns we're now using"
  3. Identify core requirements
     "What are the actual must-haves vs nice-to-haves?"
  4. Plan simplification
     "How can we meet core requirements with minimal stack?"
  5. Incremental rollback - Don't rewrite. Gradually simplify.

The Drift Dialogue

Catching Early Drift

AI: "We should add caching for performance"
You: "Is performance actually a problem?"
AI: "Not yet, but..."
You: "Let's wait until it is"

Redirecting Architecture Drift

AI: "This would be cleaner with microservices"
You: "We're keeping monolith for simplicity"
AI: "Understood, here's monolith version"

Preventing Scope Drift

AI: "While we're at it, should we add..."
You: "No, let's finish current feature first"

Positive Drift

Not all drift is bad. Sometimes AI suggests genuinely better approaches.

Evaluating Positive Drift

Ask:

  1. Does this simplify or complicate?
  2. Does this align with project goals?
  3. Is this solving real or imagined problems?
  4. What’s the maintenance cost?

Accepting Good Ideas

AI: "Instead of custom auth, use NextAuth"
You: "That actually simplifies things. Let's do it."

The Meta-Strategy

Best drift management is prevention:

  • Clear vision from start
  • Explicit constraints
  • Regular reality checks
  • Simplicity as core value

Remember: AI will always suggest more. Your job is knowing when to say no.

Essential Ground Rules

The Friction Problem

AI brings its own biases to every project - preferred frameworks, design patterns, coding styles. Without explicit guidance, AI defaults to what it “thinks” is best, not what your project needs.

This creates friction. You spend more time correcting AI’s assumptions than building features.

Ground rules eliminate this friction by establishing clear expectations upfront.

Two Types of Rules

Universal Rules: Apply to every project, every time
Project Preferences: Specific to your tech stack, team, or domain

Both should be defined at the start of each AI session.

Universal Ground Rules

1. No Unnecessary Mocking

"Only mock external dependencies (APIs, databases). Use real implementations for everything else."

Why it matters:

  • AI tends to mock everything, making tests meaningless
  • Real implementations reveal real bugs
  • Integration issues surface immediately, not in production

Valid exceptions:

  • Third-party API calls
  • Database connections in unit tests
  • Time-dependent operations
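A sketch of this rule with pytest-style tests and unittest.mock, where only the external payment gateway is mocked (the service names are hypothetical):

# test_order_service.py - mock only the external dependency
from unittest.mock import patch
from order_service import OrderService  # real implementation, not a mock

def test_checkout_uses_real_logic_but_mocked_gateway():
    service = OrderService()  # real cart logic, real price calculation
    with patch("order_service.payment_gateway.charge") as mock_charge:
        mock_charge.return_value = {"status": "ok"}
        receipt = service.checkout(cart_id="cart-42")
    mock_charge.assert_called_once()   # the external call is mocked...
    assert receipt.total > 0           # ...but the total came from real code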

2. Filepath Documentation

"Start every file with a comment showing its path relative to project root"

Example:

# src/services/auth_service.py

from typing import Optional
import jwt

Benefits:

  • Clear navigation in large codebases
  • Context preserved across AI sessions
  • Easier refactoring and file moves

3. Modern Stack Only

"Target modern environments only. No legacy browser support or deprecated language features."

Why this rule exists:

  • AI adds unnecessary compatibility layers
  • Legacy support triples complexity
  • Modern features improve developer experience

Add compatibility only when explicitly needed.

4. Meaningful Commits

"Make atomic commits with descriptive messages following conventional format (feat:/fix:/docs:)"

Good commit examples:

feat: add user authentication flow
fix: resolve race condition in data fetcher
docs: update API endpoint documentation

Benefits:

  • Git history becomes documentation
  • Easy rollbacks and bisecting
  • Clear progression tracking

5. Modular Development

"Build features as isolated modules with clear interfaces"

Structure example:

feature/
├── index.ts        # Public interface
├── types.ts        # Type definitions
├── logic.ts        # Core business logic
├── tests.ts        # Feature tests
└── README.md       # Feature documentation

6. Explicit Naming

"Use descriptive variable and function names. Clarity over brevity."

Examples:

// Bad
const d = new Date();
const u = users.filter(x => x.a > 18);

// Good
const currentDate = new Date();
const adultUsers = users.filter(user => user.age > 18);

Setting Up Ground Rules

Create a GROUND_RULES.md file in your project root:

# Project Ground Rules

## Universal Rules
1. Only mock external dependencies
2. Include filepath comment at top of each file
3. Target modern environments only (Node 18+, ES2022+)
4. Commit atomically with conventional messages
5. Build features as isolated modules
6. Use explicit, descriptive naming

## Project Preferences
- Framework: [Your choice]
- Testing: [Your approach]
- State Management: [Your pattern]
- Error Handling: [Your strategy]

## Code Style
- Max line length: 100 characters
- Indent: 2 spaces
- Quotes: Single for strings
- Semicolons: [Yes/No]

## AI Instructions
Read and follow these rules for all code generation.
Ask for clarification if rules conflict with requirements.

Starting Each Session

Begin every AI conversation with:

"Please read GROUND_RULES.md and apply these consistently throughout our session"

Domain-Specific Rules

Web Applications

## Web Development Rules
- Semantic HTML5 elements required
- Mobile-first responsive design
- WCAG 2.1 AA accessibility minimum
- CSS modules or styled-components (no inline styles)
- Lazy load images and components
- SEO meta tags on all pages

APIs and Services

## API Development Rules
- RESTful conventions unless GraphQL specified
- Consistent error response format
- Input validation on all endpoints
- Rate limiting from day one
- OpenAPI documentation required
- CORS configuration explicit

Data Processing

## Data Pipeline Rules
- Schema validation before processing
- Idempotent operations
- Clear ETL stage separation
- Audit logging for all transformations
- Error recovery strategies defined
- Sample data for all pipelines

Language-Specific Rules

Python

## Python Standards
- Python 3.10+ features allowed
- Type hints required for all functions
- Docstrings for public APIs
- Black formatting (line length 100)
- pytest for testing
- Poetry for dependency management

Evolution Strategy

Ground rules aren’t static. Track patterns:

When you correct AI repeatedly → Add a rule
When rules cause friction → Refine or remove
When context changes → Update accordingly

Measuring Success

Good ground rules result in:

  • ✅ Less time correcting AI output
  • ✅ Consistent code across sessions
  • ✅ Fewer “why did AI do that?” moments
  • ✅ Smoother development flow
  • ✅ Higher quality first attempts

Bad ground rules cause:

  • ❌ Constant rule conflicts
  • ❌ AI confusion and errors
  • ❌ Overly restrictive development
  • ❌ More explanation than benefit

The Compound Effect

Well-defined ground rules create compound benefits:

Session 1: Save 10 minutes not explaining preferences
Session 10: Save hours from consistent patterns
Session 100: Entire codebase follows your standards automatically

Ground rules are an investment. Define them once, benefit forever.

Remember: Ground rules are your contract with AI. Make them clear, keep them current, and enforce them consistently.

The Data Dump

Starting with Complete Context

The Data Dump is the foundation of vibe coding. It’s where you transfer all project knowledge from your head to AI’s context. Without a thorough data dump, AI fills gaps with assumptions - usually wrong ones.

Think of it like briefing a new team member, except this team member has no implicit understanding of your industry, company, or goals. Everything must be explicit.

Why Data Dump First?

Both traditional development and vibe coding start with understanding, not with code.

  1. AI has no context - It doesn’t know your constraints, users, or goals
  2. Assumptions compound - Early misunderstandings cascade into architectural disasters
  3. Words shape code - How you describe the project determines what gets built
  4. Clarity forces clarity - Explaining to AI reveals your own fuzzy thinking

The Data Dump is an investment. Spend an hour here to save days later. The quality of your entire project depends on the quality of initial understanding.

The Complete Data Dump Process

Step 1: Gather Everything

Before talking to AI, collect:

  • Existing project descriptions (formal and informal)
  • Existing code or prototypes
  • Unstructured Notes
  • User Stories
  • Technical constraints
  • Business constraints
  • Team capabilities

Don’t filter yet. Dump everything.

Step 2: Share Raw Information

Start with unprocessed information:

"I'm building a personal finance tracker. Here's all the context:

From my notes:
- Users complain existing apps are too complex
- Need to track expenses across multiple currencies  
- I travel frequently for work
- Privacy is critical - no cloud storage
- Must work offline
- Export for tax purposes
- Simple is better than feature-rich

From user research:
- Most people abandon finance apps after 2 weeks
- Main complaint: too much data entry
- Want to see spending patterns quickly
- Don't care about investment tracking

Technical context:
- I know React and Node
- Prefer not to learn new frameworks
- Have 3 months to build MVP
- Will be solo developer"

Step 3: Request Comprehensive Understanding

Don’t let AI start coding. Force understanding first:

Based on all the information I've shared, please provide a comprehensive explanation of:

1. What this project is and what it aims to achieve
2. Who the target users are and their main pain points  
3. The core problems we're solving
4. The key constraints and requirements
5. What success looks like for this project

Do NOT provide technical solutions yet. Focus only on demonstrating 
deep understanding of the project context and goals.

Step 4: Verify Understanding

AI will respond with its interpretation. This is critical - read carefully and correct any misunderstandings:

"Your understanding is mostly correct, but let me clarify:
- When I said 'simple', I mean 'minimal features', not 'easy to implement'
- Multi-currency isn't for investment - it's for travel expenses
- Offline-first is non-negotiable, not a nice-to-have"

Once you feel that AI understands the project from multiple angles, you can again use AI to create structured documentation.

The DevDocs Pattern

Why Documentation Drives Development

In traditional development, code comes first and documentation follows. In VDD, this relationship inverts: documentation becomes the source code that AI compiles into implementation. When AI understands not just what to build but why and how it fits together, it transforms from a code generator into a thoughtful collaborator.

The DevDocs pattern solves three critical problems:

  1. Vision Drift - Without explicit documentation, AI gradually diverges from your intent
  2. Context Loss - AI has no persistent memory between sessions
  3. Cognitive Overload - Mixing planning and implementation causes errors and hallucinations

By separating planning (documentation) from execution (code), we give AI clear, focused tasks it can excel at.

The Core Principle

DevDocs is real-time projection of AI’s understanding in markdown format. It’s not passive documentation - it’s active development logic that drives implementation.

The workflow:

  1. AI documents what it plans to implement
  2. Human reviews and adjusts to match vision
  3. AI implements based on agreed documentation
  4. Documentation evolves with the project

This creates a feedback loop where documentation and code stay synchronized, and it lets you HIL every step to guide AI in the intended direction.

DevDocs isn’t about perfect documentation. It’s about maintaining a shared mental model between human vision and AI implementation. The documentation becomes the meeting ground where human intent transforms into working code.

Example DevDocs Structure

A complete DevDocs folder provides layered context from high-level vision to specific implementation details:

project-root/
└── devdocs/
    ├── foundation/
    │   ├── project_description.md      # What we're building
    │   ├── philosophy.md               # Core principles and values
    │   └── known_requirements.md       # Known requirements
    │
    ├── concepts/
    │   ├── concepts_to_implement.md    # Extracted technical concepts
    │   ├── simplified_concepts.md      # Prototype-ready versions
    │   ├── concept_clarifications/     # Detailed concept specs
    │   └── simplified_clarifications/  # Simplified specs
    │
    ├── enhancements/                   # Future improvements
    ├── explorations/                   # Freestyle exploration and analysis documents
    │
    └── modules/
        └── [module_name]/
            ├── what_is_this_for.md
            ├── interfaces_and_endpoints.md
            ├── integration_points.md
            ├── integration_requirements.md
            ├── limitations.md
            ├── possible_use_cases.md
            ├── edge_cases_covered.md
            ├── example_usage.md
            └── summary.md

Besides these, other side-product documents may exist depending on the project. We will now go through each of them and explain what makes them powerful for VDD.

Root Documentation: The Foundation

Why Foundation Documents Matter

Foundation documents are the bedrock of VDD. They capture your original vision in a structured format, establishing the core truths that guide every decision throughout the project.

These three documents form a hierarchy of intent:

  1. project_description.md - What we’re building and why
  2. philosophy.md - Core principles and beliefs driving decisions
  3. known_requirements.md - Specific requirements and constraints

AI reads these first in every session. They become the lens through which AI interprets all subsequent project development.

The Three Foundation Documents

1. project_description.md - The North Star

This document transforms scattered project information into structured understanding. It’s the single source of truth about what you’re building.

Core Questions to Answer:

  • What are we building?
  • What are the various scopes of this project?
  • What problem are we solving?
  • What does success look like?
  • Who will use it and why?

2. philosophy.md - The Soul

Philosophy captures the intangible aspects that shape every decision. It’s about values, not features. This document prevents implementations that are technically correct but wrong in vision or spirit.

Core Question to Answer:

  • What is the philosophy of this project?

3. known_requirements.md - The Contract

Requirements translate vision into concrete constraints. They define the boundaries within which AI must operate.

Core Questions to Answer:

  • What technical constraints exist?
  • What business rules apply?
  • What user needs must be met?
  • What regulations must we follow?

Concept Docs

Concept Documents

Identifying concepts is critical during development. Poor concept understanding evolves into flawed architecture and toxic development cycles.

Concepts are meta-abstractions encompassing any development aspect - from essential requirements to payment verification modules to unique architectural patterns. Concept documentation articulates what needs building at a high level, enabling truly modular development practices.

concepts.md

This document extracts and lists all key technical concepts from the foundation documentation (project_description.md, philosophy.md, known_requirements.md). We instruct AI to focus solely on essential technical concepts to prevent bloat.

concept_clarifications folder

For each concept in concepts.md, we have AI generate detailed clarification documents. This folder typically contains around 10 documents, each thoroughly explaining a single concept. As humans in the loop, reading these reveals exactly how AI interprets our concepts.

Each clarification document systematically addresses:

  • What the concept is and why it matters
  • How it benefits the overall project
  • How it constrains the overall project
  • Required input information
  • Core processes involved
  • Output information or relay points
  • Expected positive outcomes when realized
  • Potential negative outcomes to avoid

This multi-perspective analysis ensures nothing gets overlooked.

simplified_concepts.md

New development typically begins with prototyping, then iterative enhancements transform prototypes into MVPs. Reading all full-scope concepts in concepts.md can feel overwhelming - for both humans and AI. Our responsibility as humans in the loop is orchestrating AI’s work through modular, gradual, controllable increments.

AI lacks this self-regulation. While you can request simplified prototype implementations, AI’s simplification intuition remains poorly calibrated and requires human oversight.

This is precisely why we have AI create simplified_concepts.md: so we can HIL it effectively.

When creating simplified concepts, we instruct AI to:

  • Preserve essential architecture - never oversimplify to the point where the foundation cannot support the full concept
  • For multi-faceted concepts - avoid binarizing (reducing to just one or two options), instead reduce the number of supported subconcepts by prioritizing the most important ones

simplified_concepts folder

Mirrors the concept_clarifications folder structure. We have AI generate clarification documents for each simplified concept, allowing us to understand AI’s interpretation of the streamlined versions.

Each document addresses the same systematic questions:

  • What the concept is and why it matters
  • How it benefits the overall project
  • How it constrains the overall project
  • Required input information
  • Core processes involved
  • Output information or relay points
  • Expected positive outcomes when realized
  • Potential negative outcomes to avoid

The Dual-Concept Strategy

Having both simplified_concepts.md and concepts.md in the codebase is essential. These paired documents define the expansion trajectory for AI. When introducing intermediate concepts, AI can position them appropriately within the current-to-future scope continuum.

Feature Documentation

Beyond Freestyle Enhancement Docs

While enhancement docs capture future possibilities, feature documentation provides a systematic approach to planning, validating, and implementing new functionality via AI. This structured method prevents scope creep, feature collisions, and half-baked implementations.

The Feature Lifecycle Structure

devdocs/features/
    planned/
      feat_1_user_authentication/
        desc.md
        implementation_plan.md
        compatibility_check.md
        test_scenarios.md

    finished/
          feat_0_debug_output/
            desc.md
            implementation_plan.md
            compatibility_check.md
            test_scenarios.md
            implementation_notes.md
  

Core Documentation Files

1. desc.md - Feature Description

Defines WHAT and WHY:

  • Clear problem statement
  • User value proposition
  • Success criteria
  • Scope boundaries

2. implementation_plan.md - Technical Approach

Details HOW to build it. It consists of:

  1. High-level plan summary in bullet points
  2. Full implementation plan

3. compatibility_check.md - Impact Analysis

Identifies compatibility ISSUES, RISKS, and CONFLICTS with respect to the whole codebase’s working logic:

  • Existing features that might break
  • Performance implications
  • API contract changes
  • Database schema impacts
  • Security considerations

4. test_scenarios.md - Verification Strategy

Defines how we will KNOW it works (mid-level technicality). This document describes the testing logic, not the tests themselves.

It answers:

   1. How the feature can be triggered in various ways and at various codebase levels (from a minimal run to broader scope)
   2. How the feature should behave in each scenario

5. implementation_notes.md

No feature is implemented 100% correctly the first time. Implementing it also uncovers new, unknown issues and requires various fixes. This document stores that knowledge in minimal form.

Step 1: Initial Capture

When you discover a feature need, create the basic structure:

devdocs/features/planned/feat_X_feature_name/
 desc.md              # Write this first
 implementation_plan.md   # Draft initial approach

Step 2: Validation

Before implementation, complete the analysis:

 compatibility_check.md   # What might break?
 dependencies.md          # What's needed first?
 test_scenarios.md        # How do we verify?

Step 3: Safety Planning

For production-ready features, add:

 rollback_plan.md         # How to undo?
 impacts.md               # What else changes?

Step 4: Implementation

When ready to build:

  1. Move to active development
  2. Use docs as AI context
  3. Update plans as you learn

Step 5: Completion

After implementation:

  1. Add implementation_notes.md documenting what actually happened
  2. Move entire folder to finished/
  3. Keep for future reference

AI Prompt Template for Feature Planning

I want to add a new feature: [FEATURE NAME]

Please help me create comprehensive feature documentation:

1. Read the current codebase structure
2. Create devdocs/features/planned/[FEATURE_NAME]/desc.md
   - Problem statement
   - User value
   - Success criteria
   - Scope boundaries

3. Create implementation_plan.md
   - Required architecture changes
   - New components needed
   - Integration approach
   - Implementation steps

4. Create compatibility_check.md
   - Analyze impact on existing features
   - Check for conflicts
   - Performance implications
   - Security considerations

5. Create test_scenarios.md
   - Happy path tests
   - Edge cases
   - Error conditions
   - Acceptance criteria

6. Create dependencies.md
   - Technical prerequisites
   - Feature dependencies
   - External requirements

7. Create rollback_plan.md
   - Feature flag approach
   - Database rollback strategy
   - Safe deployment plan

Benefits of Structured Feature Documentation

For Developers

  • Clear implementation path
  • No forgotten edge cases
  • Confidence in rollback ability
  • Prevents rework from poor planning

For AI Collaboration

  • Complete context for implementation
  • Understands constraints and risks
  • Can generate comprehensive tests
  • Avoids breaking existing features

For Project Management

  • Visual progress tracking (planned → finished)
  • Clear feature history
  • Dependency management
  • Risk assessment before starting

When to Use Feature Docs vs Enhancement Docs

Use Feature Documentation When:

  • Feature has clear requirements
  • Implementation will take > 1 day
  • Multiple components affected
  • Production deployment planned
  • Other features depend on it

Use Enhancement Docs When:

  • Idea is still forming
  • Nice-to-have improvements
  • Experimental features
  • Future possibilities
  • Architecture musings

Best Practices

  1. Start lightweight - Don’t over-document features that might not happen
  2. Evolve as needed - Add detail as features move toward implementation
  3. Keep it current - Update plans when you learn new information
  4. Learn from finished - Review completed features to improve planning
  5. Prune regularly - Delete abandoned feature plans to reduce noise

Example: Complete Feature Documentation

devdocs/features/planned/feat_3_api_rate_limiting/
 desc.md
   "Prevent API abuse by limiting requests per user per minute"
 implementation_plan.md
   "Add middleware, use Redis for counters, configurable limits"
 compatibility_check.md
   "All API endpoints affected, 10ms latency added per request"
 test_scenarios.md
   "Test limits, resets, exemptions, distributed scenarios"
 dependencies.md
   "Requires Redis, user identification system"
 rollback_plan.md
   "Feature flag ENABLE_RATE_LIMIT, graceful degradation"
 impacts.md
    "Monitoring needed, customer support docs, API docs update"

This structured approach transforms vague feature ideas into implementation-ready specifications, reducing risk and improving delivery speed.

Exploration Docs

Exploration Documentation

A freestyle workspace for capturing AI-generated insights and clarifications discovered during development.

What Goes Here

Any investigation or explanation worth preserving:

  • how_authentication_works.md
  • why_not_use_microservices.md
  • which_database_should_we_use.md
  • performance_bottleneck_analysis.md

Why It Matters

During development, you’ll frequently ask AI to explain, investigate, or analyze aspects of your codebase. Some responses are too valuable to lose in conversation history. The explorations folder preserves these insights for future reference.

Best Practices

  • Name files as questions or topics for easy scanning
  • Keep the original AI response if it was particularly clear
  • Update when understanding evolves
  • Link from main docs when relevant

This folder becomes your project’s knowledge base - a living FAQ built from actual development questions.

Module Docs

Module Documentation

Software development thrives on modularity. During active development, multiple submodules evolve in parallel, each maintaining its own lifecycle while integrating with the whole.

This approach adds overhead but delivers significant benefits: truly decoupled components, isolated testing, and clear expansion paths for future enhancements.

For vibe coding, modularity isn’t optional - it’s essential. AI’s context limitations demand it, clean code principles require it, and testability depends on it.

During this parallel development we encounter two problems:

  • Acting as a human in the loop (HILing) is harder than usual
  • A module’s code is spread across multiple files, so keeping it all in context is harder for AI, which results in errors

Module documentation solves both problems.

Modules constantly evolve through internal refactoring, interface updates, and architectural shifts. Without synchronized documentation, these changes cascade unpredictably, breaking integrations across the system. Module docs serve as living contracts between components, ensuring changes propagate cleanly.

Structure: devdocs/modules/[module_name]/

Each module folder contains these standardized documents:

what_is_this_for.md

Explains the module’s core purpose and reason for existence:

  • Primary problem it solves
  • Why it’s a separate module
  • What breaks without it
  • Who depends on it

interfaces_and_endpoints.md

Defines all public interfaces other modules can use:

  • Public methods with signatures
  • REST/GraphQL endpoints
  • Event emissions and subscriptions
  • Database tables accessed

integration_points.md

Describes how to integrate this module:

  • Connection methods
  • Configuration options
  • Alternative integration patterns
  • Setup requirements

integration_requirements.md

Lists prerequisites for integration:

  • Required middleware
  • Environment variables
  • External dependencies
  • Permission requirements

limitations.md

Documents current constraints:

  • Performance limits (requests/second, data size)
  • Feature boundaries (what it explicitly won’t do)
  • Technical constraints (compatibility, versions)
  • Known issues

possible_use_cases.md

Provides integration examples:

  • Recommended usage patterns
  • Common scenarios
  • Anti-patterns to avoid
  • Performance tips

edge_cases_covered.md

Lists intentionally handled edge cases:

  • Boundary conditions addressed
  • Error scenarios managed
  • Race conditions prevented

example_usage.md

Shows concrete code examples:

  • Common operations
  • Initialization patterns
  • Error handling
  • Testing approaches

summary.md

Quick reference when full docs exceed context:

  • Three-sentence purpose
  • Main interfaces list
  • Critical limitations
  • Integration checklist
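For instance, a hypothetical summary.md for an imagined auth module might read:

# auth module - summary

Handles registration, login, and session tokens. It is a separate module so
that no other component touches credentials directly; nothing else may read
or write the users table.

## Main Interfaces
- register_user(email, password)
- login(email, password) -> token
- verify_token(token) -> user_id

## Critical Limitations
- Single region only, no SSO yet

## Integration Checklist
- AUTH_SECRET environment variable set
- users table migrated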

Other Docs

Other Devdoc Files

decisions.md

This document logs architectural decisions with their rationale. It’s crucial for anchoring development progress across sessions. Sometimes we encounter bugs or edge cases that require hours of careful design to solve elegantly. The resulting solution might seem unnecessary when viewed without context. If AI examines this design later without understanding the original problem, it may mistakenly remove or break the delicate solution we crafted.

Each entry in this document explains:

  • What the current bottleneck/issue is
  • What options were considered
  • Which option was selected and why
  • At a high level, what consequences the change might cause
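A hypothetical entry following this structure:

## 2025-03-04: Debounce file-watcher events

- Current bottleneck/issue: the watcher fired dozens of rebuilds per save on macOS
- Options considered: ignore duplicate events, debounce with a timer, switch watcher library
- Selected: a 200ms debounce - the smallest change, no new dependency
- Consequences: rebuilds lag saves by up to 200ms; tests asserting an immediate rebuild were updated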

The Probe Tests Pattern

Core Concept

All testing serves three purposes: Discovery (how it behaves), Diagnostic (what’s broken), and Validation (meets requirements).

Probe Tests combine all three into a single development loop that mirrors how developers naturally work, running code to verify it works while learning how it behaves.

Probe Tests = Smoke + Discovery + Validation in one test

Why It Matters

In AI-assisted development, syntactically perfect code can solve the wrong problem. Probe tests reveal misunderstandings immediately through:

  1. ✓ Smoke - Does it run?
  2. 🔍 Discovery - How does it actually behave?
  3. ⚠️ Validation - Are critical requirements met?

This isn’t about pass/fail—it’s about understanding data flow, transformations, and the gap between design and implementation.

Important Distinction

Probe testing complements formal CI/CD frameworks but focuses on the developer experience during active development. It’s for running code to inspect and understand, not automated regression testing.

Implementation Example

from myapp.auth import register_user  # hypothetical import for the module under test

def probe_user_registration():
    """
    PROBING: User registration system
    ✓ Verify it runs
    🔍 Discover actual behavior
    ⚠️ Validate critical rules
    """
    print("\n🔬 PROBING: User Registration")

    # SMOKE - Basic operation
    try:
        result = register_user("test@example.com", "password123")
        print(f"  ✓ Runs: returns {type(result)}")
    except Exception as e:
        print(f"  ✗ FAILED: {e}")
        return False

    # DISCOVER - Actual behavior
    print(f"  → Result: {result}")
    if hasattr(result, '__dict__'):
        print(f"  → Attributes: {list(result.__dict__.keys())}")

    # Test variations
    duplicate = register_user("test@example.com", "password123")
    weak_pass = register_user("test2@example.com", "123")
    print(f"  → Duplicate handling: {duplicate}")
    print(f"  → Weak password: {weak_pass}")

    # VALIDATE - Critical requirements
    if hasattr(result, 'password'):
        assert result.password != "password123", "❌ Password in plaintext!"
    assert hasattr(result, 'id') or 'id' in result, "❌ No user ID!"
    assert duplicate is None or duplicate.error, "❌ Duplicates allowed!"

    print("  ✓ OPERATIONAL + UNDERSTOOD + VALIDATED")
    return True

Benefits in AI Development

  • Fast Feedback - Three insights from one test
  • Self-Documenting - Output shows actual behavior
  • Educational - AI learns from discoveries
  • Safety Net - Critical validations prevent dangerous bugs

How to Implement the Probe Tests Pattern Correctly

0. Verbose Over Clever

We design Probe tests to generate detailed output that exposes data flow and critical transformations, including both raw data samples and summarized results. These tests serve dual audiences: human developers who need to understand system behavior, and AI assistants that require explicit context to provide accurate assistance.

1. Comprehensiveness

Probe tests must encompass diverse testing scenarios. Traditional software development achieves this through specialized test categories:

  • Unit tests - Test individual functions/methods in isolation
  • Integration tests - Test how components work together
  • End-to-end (E2E) tests - Test complete user workflows
  • Acceptance tests - Verify business requirements are met
  • Performance/Load tests - Test speed and scalability

Within the vibe coding paradigm, these traditional testing approaches are consolidated and adapted into our comprehensive Probe Tests pattern.

2. Progressive Complexity

Probe tests should follow a progression from simple to complex. Begin with isolated method verification and gradually advance to testing broader conceptual behaviors. Each successive test file should demonstrate deeper system understanding and provide better debugging entry points for future issues.

Example progression: test_login_returns_token → … → test_basic_auth_flow_works

It is common to have the first file as

test_01_initialization.py

  • Can we import the modules?
  • Do classes instantiate?
  • Are constants defined?
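A minimal sketch of such a first file, in the verbose framework-free style this pattern uses (module, class, and constant names are placeholders):

"""test_01_initialization.py - run manually with: python test_01_initialization.py"""

def test_imports():
    print("\n🔬 PROBING: imports")
    import app.models, app.services  # placeholder module names
    print("  ✓ modules import cleanly")

def test_instantiation():
    print("\n🔬 PROBING: instantiation")
    from app.models import User  # placeholder class
    user = User(email="probe@example.com")
    print(f"  ✓ User instantiates: {user}")

def test_constants():
    print("\n🔬 PROBING: constants")
    from app import config  # placeholder config module
    assert config.MAX_UPLOAD_MB > 0, "❌ MAX_UPLOAD_MB missing or invalid"
    print(f"  ✓ constants defined: MAX_UPLOAD_MB={config.MAX_UPLOAD_MB}")

def run_all():
    failed = False
    for test in (test_imports, test_instantiation, test_constants):
        try:
            test()
        except Exception as e:
            print(f"  ✗ FAILED: {e}")
            failed = True
    raise SystemExit(1 if failed else 0)

if __name__ == "__main__":
    run_all()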

and the second file as:

test_02_connectivity.py

  • Can we connect to the database?
  • Do API endpoints respond?
  • Are external services reachable?
  • Do authentication mechanisms work?
  • Can we read/write to external resources?

3. Clear Intent

Probe tests should read like documentation so we can intervene when needed. Test names should be clear and readable, describing exactly what behavior is being verified.

4. Full Implementation Coverage

The cost of comprehensive Probe tests is negligible compared to the cost of incorrect implementation. Generate as many tests as necessary to ensure confidence.

A practical guideline: Request AI to create 5 Probe test files, each targeting distinct logical components, with a minimum of 5 individual test cases per file.

Probe_tests/
├── test_01_initialization.py
├── test_02_core_functionality.py  
├── test_03_user_workflows.py
├── test_04_edge_cases.py
└── test_05_integration.py

5. No Mocking, and Provide Real Data

AI-generated tests frequently default to mocks or minimal outputs that mask actual system behavior, creating false positives (phantom successes) where tests pass despite broken functionality.

It is your job to provide realistic data for probe tests. When developing complex systems like video processing engines or voice detection algorithms, AI cannot generate realistic test data on its own. Without authentic test data, you risk phantom successes where mocked tests pass while the real implementation fails.

6. Always Rerun Probe Tests Yourself

Never rely solely on AI’s test execution or interpretation. A critical pitfall occurs when AI reports overall success despite partial failures - even one failing test invalidates the entire suite.

Additionally, manually reviewing Probe test output provides immediate insight into the AI’s implementation approach, revealing how it handles data transformations and input/output operations. This direct observation is invaluable for understanding the actual code behavior.
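A small runner makes the manual rerun cheap while refusing to report overall success if any file fails. A sketch, assuming each probe file lives in probe_tests/ and exits non-zero on failure:

# run_probes.py - rerun every probe test yourself; one failure fails the suite
import glob
import subprocess
import sys

failures = []
for path in sorted(glob.glob("probe_tests/test_*.py")):
    print(f"\n=== {path} ===")
    if subprocess.run([sys.executable, path]).returncode != 0:
        failures.append(path)

if failures:
    print(f"\n✗ {len(failures)} probe file(s) failed: {failures}")
    sys.exit(1)
print("\n✓ All probe files passed")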

Fuzzy Architecture Pattern

What Is Fuzzy Architecture?

Traditional architecture: Detailed upfront design, rigid boundaries, specific technologies.

Traditional: “Because iteration is expensive while specification/conceptualization is cheap”

  • You can write detailed docs, diagrams, interfaces relatively cheaply
  • But changing the actual implementation is costly

Fuzzy architecture: Intentionally vague starting point that crystallizes through implementation.

Like sculpture - you start with a rough shape and refine by removing what doesn’t belong.

Fuzzy/Vibe: “Because iteration is cheap while specification is expensive”

  • Writing precise specs is hard/time-consuming when you don’t know what you need
  • You are probably not experienced with architecture, so picking one without knowing what works well is problematic.
  • Also, there is a good chance you don’t know the full requirements yet, and missing one core requirement means the architecture must be recreated, which is costly.
  • Better to build something rough and refine it.

The core trade-off seems to be about when you pay the cost - upfront planning/specification vs. during implementation/discovery. With AI/vibe coding, you’re betting that discovering the right architecture through building is more efficient than trying to specify it perfectly beforehand.

Why Fuzzy Works with AI

The Overspecification Trap

Detailed architecture upfront:

"Use PostgreSQL with Redis cache, implement Repository pattern 
with Unit of Work, deploy on Kubernetes with Istio service mesh..."

Result: Overengineered before you write a line of code.

The Underspecification Trap

No architecture:

"Build me an app"

Result: AI makes random choices, inconsistent patterns.

The Fuzzy Sweet Spot

Just enough structure:

"Web app with database persistence and API.
Start simple, we'll refine as we build."

Result: Room to discover the right architecture.

Core Principles of Fuzzy Architecture

1. Boundaries Not Implementations

Define what, not how:

# Initial Architecture

## Components
- User Interface (how users interact)
- Business Logic (what the app does)
- Data Storage (where information lives)
- External Services (what we integrate with)

## Rules
- UI never talks directly to database
- Business logic owns all rules
- Data layer handles persistence only

No specifics about React vs Vue, SQL vs NoSQL, REST vs GraphQL.

2. Principles Over Patterns

State principles that guide decisions:

## Architectural Principles

1. **Simplicity First**: Choose boring technology
2. **Local First**: Everything works offline
3. **Privacy First**: Data stays on device
4. **Performance**: Sub-second response times
5. **Maintainability**: New developer productive in 1 hour

These principles shape choices naturally.

3. Evolution Points

Mark where architecture will likely change:

## Evolution Points

- Storage: Start with JSON files → SQLite → PostgreSQL
- API: Start with function calls → REST → GraphQL if needed
- Auth: Start with none → Basic → OAuth when required
- Deploy: Start local → Single server → Scale when needed

This prevents AI from optimizing prematurely.

Implementing Fuzzy Architecture

Phase 1: Sketch the Shape

Create initial architecture.md:

# Architecture Overview

## High-Level Structure

[User Interface]
       ↓
[Application Logic]
       ↓
[Data Layer]


## Component Responsibilities

**User Interface**
- Display information
- Collect user input
- Handle user interactions

**Application Logic**  
- Process business rules
- Coordinate between layers
- Maintain application state

**Data Layer**
- Store and retrieve data
- Ensure data integrity
- Handle persistence

## Key Decisions Deferred
- Specific UI framework
- Database technology
- API protocol
- Deployment method

Phase 2: First Implementation

Let AI propose concrete choices:

"Based on this architecture and our requirements,
what would be good technology choices for a first implementation?
Keep it simple."

AI might suggest:

  • UI: Plain HTML + Alpine.js
  • Logic: Python Flask
  • Data: SQLite

Phase 3: Refinement Cycles

As you build, architecture solidifies:

# Architecture Overview (Updated)

## Technology Stack
- Frontend: Alpine.js for reactivity (chose for simplicity)
- Backend: Flask (lightweight, perfect for our needs)
- Database: SQLite (portable, no setup required)

## Patterns Discovered
- Service layer pattern emerged naturally
- Event system for loose coupling
- Command pattern for user actions

The Architecture Dialogue

Starting Fuzzy

Human: "I need an expense tracker"
AI: "What architecture should we use?"
Human: "Let's start fuzzy - separate UI, logic, and data. 
        We'll refine as we build."

Guided Evolution

AI: "Should we add caching?"
Human: "Is performance a problem?"
AI: "Not yet"
Human: "Then no caching. Keep it simple."

Natural Boundaries

AI: "This function is getting complex"
Human: "What pattern is emerging?"
AI: "Looks like command processing"
Human: "Let's extract a command handler pattern"

Fuzzy Architecture Artifacts

The Living Architecture Document

Update architecture.md as patterns emerge:

# Architecture (Living Document)

## Current State (Week 3)
- Clear MVC separation has emerged
- Service layer handles business logic
- Repository pattern for data access
- Event bus for loose coupling

## Decisions Made
- SQLite over PostgreSQL (simplicity won)
- Server-side rendering over SPA (speed won)
- Monolith over microservices (maintainability won)

## Future Considerations
- May need job queue for reports
- Might add caching if user base grows
- Could extract analytics into service

Decision Records

Document why architecture evolved:

# ADR-001: Use SQLite instead of PostgreSQL

## Status: Accepted

## Context
Initially kept database choice fuzzy. Now need to decide.

## Decision  
Use SQLite for local-first architecture.

## Consequences
- ✓ Zero configuration
- ✓ Portable data files
- ✓ Perfect for single-user app
- ✗ Limited concurrent writes
- ✗ No advanced SQL features

Can migrate to PostgreSQL later if needed.

Anti-Patterns to Avoid

Premature Crystallization

Don’t lock in too early:

Week 1: "We'll definitely need microservices"
Week 4: "Actually, a monolith is perfect"

Fuzzy Forever

Eventually commit:

Month 6: "We still haven't decided on a database"

Fuzzy is for discovery, not procrastination.

Architecture Astronauting

Don’t add complexity for future scenarios:

"We might need to scale to millions of users"
"Let's solve that when we have thousands"

Benefits of Fuzzy Architecture

1. Faster Start

Less upfront planning = quicker first version

2. Better Fit

Architecture matches actual needs, not imagined ones

3. Less Waste

Don’t build infrastructure you don’t need

4. Natural Patterns

Right abstractions emerge from real use

5. AI Alignment

AI proposes solutions that fit current state

The Fuzzy Lifecycle

Fuzzy (Week 1-2)
    ↓
Emerging (Week 3-4)
    ↓
Solidifying (Week 5-8)
    ↓
Stable (Week 9+)

Each phase has different flexibility.

Practical Fuzzy Techniques

The Proxy Pattern

Start with simplest version:

import json

class DataStore:
    """Fuzzy data layer - might be JSON, SQLite, or PostgreSQL"""
    
    def save(self, data):
        # Start with JSON
        with open("data.json", "w") as f:
            json.dump(data, f)
    
    def load(self):
        # Easy to swap later
        with open("data.json") as f:
            return json.load(f)
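When the fuzzy storage choice later crystallizes, only the inside of the class changes; callers keep the same save/load interface. A sketch of the same proxy backed by SQLite (single-record storage kept deliberately naive):

import json
import sqlite3

class DataStore:
    """Same interface as the JSON version - callers don't change."""

    def __init__(self, path="data.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS store (id INTEGER PRIMARY KEY, body TEXT)"
        )

    def save(self, data):
        # Keep exactly one record, mirroring the single JSON file
        self.conn.execute("DELETE FROM store")
        self.conn.execute("INSERT INTO store (body) VALUES (?)", (json.dumps(data),))
        self.conn.commit()

    def load(self):
        row = self.conn.execute("SELECT body FROM store").fetchone()
        return json.loads(row[0]) if row else None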

The Feature Flag Evolution

# Week 1: Direct implementation
if user.is_premium:
    show_advanced_features()

# Week 4: Pattern emerges
if feature_enabled("advanced_features", user):
    show_advanced_features()

# Week 8: Full feature system
if feature_flag.check("advanced_features", context=user):
    show_advanced_features()
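The week-4 feature_enabled helper doesn’t need to be a full system yet. A sketch, with the flag table hard-coded until real requirements arrive (names are illustrative):

# Illustrative week-4 helper: flags are a plain dict until more is needed
FLAGS = {
    "advanced_features": lambda user: user.is_premium,
}

def feature_enabled(name, user):
    check = FLAGS.get(name)
    return bool(check and check(user))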

The Gradual Extraction

Week 1: Everything in app.py
Week 2: Extract models.py
Week 3: Extract services.py
Week 4: Extract repositories.py
Week 5: Clear architecture emerged

When to Stop Being Fuzzy

Signs It’s Time to Solidify

  1. Same patterns appearing repeatedly
  2. Team needs clear structure
  3. Performance requires specific choices
  4. Integration demands concrete interfaces

The Crystallization Moment

"Our fuzzy architecture has revealed these patterns:
- Service layer for business logic
- Repository pattern for data access  
- Event system for integrations

Let's formalize these as our architecture."

Fuzzy Architecture with AI

Setting Context

"We're using fuzzy architecture. Start simple,
we'll refine based on what we learn."

Guiding Evolution

"Given what we've built so far,
what architectural pattern is emerging?"

Preventing Premature Optimization

"That's a good idea for later.
For now, what's the simplest thing that works?"

The Meta-Pattern

Fuzzy architecture is itself fuzzy. Don’t over-formalize the informality. Let it guide you naturally toward the right structure.

Start fuzzy. Build. Learn. Solidify. This is the way.


The Anchor Pattern

What Is the Anchor Pattern?

In the scientific literature this is known as “Stable Intermediate Forms”, a methodology that helps de-risk the process of change.

The Anchor Pattern is about ensuring that new development doesn’t break existing functionality. As AI works on new features, it tends to forget earlier requirements due to limited context. Anchoring actions force AI to regularly verify that old logic still works.

Think of it like construction - you don’t just check if the new floor is level, you verify the foundation hasn’t shifted.

Why Anchoring Is Critical

AI’s context window is limited. When you spend time focusing on feature B, C, and D, the AI gradually loses sight of feature A’s requirements. Without anchors, you get:

Hour 1: "Build user authentication with email/password"
AI: ✓ Implements perfect auth system

Hour 2: "Add password reset"
AI: ✓ Adds password reset

Hour 3: "Add social login"
AI: ✓ Adds OAuth... but breaks email login

Hour 4: "Add 2FA"
AI: ✓ Adds 2FA... but breaks password reset

Each new feature works, but previous features break silently.

The Core Anchor Mechanism

Anchoring means regularly forcing AI to:

  1. Run existing smoke tests
  2. Verify core functionality still works
  3. Check that new code doesn’t violate established patterns
  4. Ensure integration points remain intact

If the smoke tests pattern is used correctly, then anchoring your development is as simple as running a prompt like this:

"We've implemented the new feature. Now let's run our smoke tests 
to ensure all existing functionality still works correctly. If something is broken fix it"

Anchoring with DevDocs Pattern

DevDocs serve as persistent memory that AI can reference:

"Before implementing social login, please review:
- devdocs/modules/auth/requirements.md
- devdocs/modules/auth/existing_flows.md
- devdocs/simplified_concepts.md section on authentication

Ensure the new feature doesn't break existing requirements."

The Meta-Anchor

The ultimate anchor is asking:

"What existing functionality could this change break? 
Let's test those specific areas."

This makes AI think about impact before problems occur.

Anchor Best Practices

  1. Run Anchors Frequently: Not just at the end
  2. Fix Immediately: Don’t let anchors stay red
  3. Add New Anchors: When you find bugs
  4. Remove Obsolete Anchors: When features are removed
  5. Document Anchor Purpose: Why does this test exist?

Anchoring isn’t about perfection - it’s about detection. You will break things. Anchors ensure you know immediately, not days later.

The Archaeology Pattern

Digging into things is a completely different paradigm from building them. This pattern specializes in using AI to understand an existing codebase. It might sound simple at first, but the reality is a bit different.

The archaeology pattern often reveals:

  • Undocumented workarounds for specific edge cases
  • Implicit architectural decisions
  • Hidden dependencies between modules
  • Performance optimizations that look like complexity
  • Security measures that seem redundant

Excavating Wisdom from Existing Codebases

When applying Vibe Coding to existing projects, you become an archaeologist who is excavating layers of decisions, uncovering hidden patterns, and reconstructing the evolution story. Every line of code contains history: why certain paths were chosen, what alternatives were rejected, and what edge cases shaped the architecture.

Without this archaeological understanding, AI might simplify architecture back to already-invalidated states, removing critical workarounds or “fixing” intentional complexity that handles important edge cases.

Core Principle

Existing code contains hidden reasons—don’t simplify without understanding why.

Technical debt often exists for valid historical reasons. A seemingly redundant check might prevent a race condition discovered in production. That “ugly” workaround might handle a third-party API’s undocumented behavior. The archaeology pattern ensures we preserve this hard-won wisdom.

The Five-Phase Archaeological Process

Phase 1: Initial Context Gathering

Start by having AI read the entire codebase to understand what you’re working with:

This is an ongoing project currently under heavy development.
Read all files and tell me about this project in non-technical terms.

Focus on:
- Code files (not existing documentation)
- Read code files fully
- What the system actually does

Create: devdocs/archaeology/small_summary.md

This gives you a baseline understanding before diving deeper.

Phase 2: Technical Architecture Discovery

Next, understand the technical design:

Analyze how this codebase is designed in terms of:
- Data flow paths
- Main abstractions
- Top-level design patterns

Explain at a high level, as if introducing the architecture to a new engineer.

Create: devdocs/archaeology/intro2codebase.md

Phase 3: Deep Execution Tracing

This is where the real archaeology happens: following execution paths to understand actual behavior.

Identify every internal interface and submodule interaction.
For each interface, trace the execution path end-to-end:
- What calls it
- What it triggers
- How data/state moves through layers
- What outcomes it produces

Create trace files under devdocs/archaeology/traces/
Each trace should include:
- Entry Point
- Execution Path
- Resource Management
- Error Path
- Performance Characteristics
- Observable Effects
- Why This Design
- What feels incomplete/vulnerable/poorly designed

These traces reveal the true architecture, not what documentation claims or what naming suggests.

Phase 4: Insight Synthesis

After tracing, identify improvement opportunities while respecting existing decisions. This part isn’t actually meant to improve the codebase right away; rather, it gives AI context to see the codebase from a different angle.

Now look at all traces and what are the 5 things that would improve the codebase a lot?

Put these in devdocs/archaeology/5_things_or_not.md

After each of them, think of a possible reason this thing is not implemented/fixed in the current code
(there might be undocumented decisions, and these 5 things might refer to them)

Phase 5: Concept Inventory

Map all concepts found in the code:

Create concept inventories:
1. devdocs/archaeology/concepts/technical_concepts_list.md
2. devdocs/archaeology/concepts/design_concepts_list.md
3. devdocs/archaeology/concepts/business_lvl_concepts_list.md
4. devdocs/archaeology/concepts/missing_concepts_list.md

For each concept include:
- High-level explanation (max 3 sentences)
- Current implementation status
- Hidden assumptions or edge-case handling
- Future-oriented design considerations

Surface implicit concepts that exist only in code.

Moving from Archaeology to Evolution

Phase 6: Foundation Extraction

Extract the true foundations from code analysis:

Based solely on codebase analysis, extract:

1. devdocs/archaeology/foundations/project_description.md
   - What the system actually does
   - Current use cases evident in code
   - Problems being solved

2. devdocs/archaeology/foundations/philosophy.md
   - Implicit design principles
   - Coding patterns consistently used
   - Architectural decisions evident in structure

3. devdocs/archaeology/foundations/known_requirements.md
   - Requirements inferred from implementations
   - Constraints visible in code
   - Compliance/security measures present

Phase 7: Gap Analysis

Compare current state to desired future:

Create devdocs/evolution/gap_analysis.md:
1. What concepts need implementation
2. What architecture changes are required
3. What technical debt blocks progress
4. What can be incrementally improved
5. What requires complete rewrite

Phase 8: Strategic Planning

Cleanup Inventory

Before adding new code, identify what can be removed:

Scan for unused elements:
- Unreferenced files and modules
- Dead code paths
- Commented-out code blocks
- Duplicate implementations
- Abandoned features

Create: devdocs/archaeology/cleanup_inventory.md

WARNING: Never delete immediately—code that looks unused might be:
- Loaded dynamically
- Referenced in configuration
- Used by external systems
- Kept for compliance/audit reasons

Refactoring Opportunities

Identify high-value improvements:

Find refactoring candidates:
- Scattered database operations → Repository Pattern
- Mixed business/infrastructure logic → Service Pattern
- Repeated API patterns → Gateway abstraction
- Global state → Dependency Injection
- Hard-coded configs → Configuration Pattern

Create: devdocs/evolution/refactoring_opportunities.md

For each opportunity document:
- Current problematic pattern
- Proposed solution
- Benefits (testability, maintainability)
- Implementation effort
- Risk assessment
- ROI justification

Phase 9: Baseline Testing

Establish what currently works:

Create comprehensive smoke tests to validate implementation.

Create: smoke_tests/check_what_is_working/

Requirements:
- 5 test files with 5 test cases each
- No mocking—use real components
- Test actual behavior, not assumptions
- Verbose output showing exactly what fails
- Cover entire codebase systematically

Document results in report.md:
- What works as expected
- What's broken but acceptable
- Critical failures needing immediate attention

Phase 10: Implementation Roadmap

Create a phased execution plan:

Create devdocs/evolution/implementation_roadmap.md:

Phase A - Foundation (Week 1-2):
- Critical cleanup
- Essential refactoring
- Fix broken core functionality
- Establish CI/CD

Phase B - Core Refactoring (Week 3-4):
- Highest-ROI improvements
- Add abstraction layers
- Decouple modules
- Run smoke tests after each change

Phase C - Gap Filling (Week 5-8):
- Implement missing concepts
- Module-by-module improvements
- Add identified features

Phase D - Integration (Week 9-10):
- Ensure modules work together
- Performance optimization
- Documentation updates
- Comprehensive testing

Common Archaeological Discoveries

The archaeology pattern often reveals:

  • Undocumented workarounds for specific edge cases that took weeks to discover
  • Implicit architectural decisions that seem arbitrary but prevent subtle bugs
  • Hidden dependencies between modules that look independent
  • Performance optimizations disguised as unnecessary complexity
  • Security measures that appear redundant but handle attack vectors
  • Data migrations embedded in code to handle legacy formats
  • Feature flags for abandoned experiments that still affect behavior

Why This Works

Manual archaeology would take weeks of code reading and interviews with original developers (if available). AI can traverse entire codebases in minutes, recognizing patterns humans might miss. The key is asking the right questions to extract not just what exists, but why it exists that way.

Critical Success Factors

  1. Never assume code is wrong—assume you don’t understand its purpose yet
  2. Document before changing—capture existing behavior first
  3. Test current state—establish baseline before modifications
  4. Preserve edge cases—they represent real-world lessons
  5. Respect technical debt—it often represents conscious trade-offs

The Archaeology Mindset

Approach existing code like an archaeologist approaches an ancient site:

  • Every artifact has meaning
  • Context is everything
  • Preserve before modifying
  • Document all findings
  • Respect what came before

This pattern transforms the daunting task of understanding legacy code into a systematic exploration that preserves hard-won wisdom while identifying genuine improvement opportunities.

Offload Pattern

Making AI’s Job Easier

Traditional development assumes human limitations. Vibe coding flips this - we optimize for AI capabilities. The Offload Pattern is about structuring everything to make AI’s job as easy as possible.

Think of it like organizing a kitchen for a chef who’s blindfolded but has perfect memory. Everything needs to be exactly where they expect it.

Why Offload Matters

AI works best when:

  • Context is clear and unambiguous
  • Patterns are consistent and predictable
  • Information is structured and accessible
  • Intentions are explicit, not implicit
  • Examples demonstrate expectations

When you make AI’s job easier, you get:

  • Faster, more accurate code generation
  • Less back-and-forth clarification
  • Fewer misunderstandings and revisions
  • More time for creative problem-solving

Core Principles of Offloading

  • Enforce creation of clean consistent explicit codebase
  • Clearly state the focus and priorities in your prompts
  • Provide examples whenever you can
  • Enforce progressive development and break combined complexities.

VIBE CODING A NEW PROJECT

This appendix contains ready-to-use prompts for going from IDEA -> PROTOTYPE. These prompts contain established vibe coding patterns. Their detailed explanations can be found in the related chapters.

Note that the phases in the loop (7-12) should be repeated once per module, as many times as modules are needed.

  0 → 1 → 2 → 3 → 4 → 5 → 6 → ┌─[7 → 8 → 9 → 10 → 11 → 12]─┐ → 13 → 14
                                └─────────────←───────────────┘
                                      (loop for each module)

IMPORTANT NOTES:

  • Do not forget to act as a human in the loop. Read each generated document and fix any misunderstanding ASAP; otherwise errors can propagate and grow.
  • Remind AI to keep the core documents and smoke tests up to date after any change.
  • Understand the flow so you can intervene. Create as many freestyle documents as you need.
  • It is okay that your understanding of the project will change, and you may feel like starting over, especially with documentation. Do it. But instead of deleting the old devdocs folder, just rename it to deprecated; you may refer to it later.
  • Don’t forget to git push your code. AI is not almighty, and it can mess things up.

COMPLETE PHASES AND PROMPTS

Phase 0: Data Dump

Provide the AI with all project-relevant information. This is essential. Don’t rush to over-define your project at this stage - fuzzy, incomplete details are fine. You’ll have opportunities to refine and clarify later. During Phase 1, this information will be processed into three structured documents.

If you have multiple information sources, consolidate them into a data_dump.txt file. Include these four aspects (even if vaguely defined):

  • Project purpose: Your goals from a non-technical perspective
  • Tech stack preferences: Any preferred technologies or frameworks
  • Target platforms: Mobile, web, embedded, etc.
  • Priorities: Relative importance of platforms, features, and objectives

Phase 1: Foundation Documents

This phase is explained in detail in chapter_6/01_devdocs_pattern.md and chapter_6/02_root_docs.md

Based on the provided project information, I need you to establish the DevDocs pattern foundation.

First, analyze the project requirements and create a clear understanding of what we're building.
Create the following foundation documents:

1. devdocs/foundations/project_description.md

   What are we building?
   What problem are we solving?
   What are the various scopes of this project?
   Who are the targeted users?


2. devdocs/foundations/philosophy.md
   - What is the philosophy of this project? (don't go into technical details)

   
3. devdocs/foundations/known_requirements.md
   - Technical requirements
   - Business requirements 
   - User requirements 

Focus on clarity and mutual understanding before any implementation.

Phase 2: Concept Extraction and Clarification

This phase is explained in detail in chapter_6/03_concept_docs.md

Using the foundation documents you just created (project_description.md, philosophy.md, known_requirements.md), 
extract and document the key core concepts (only needed and core ones):

1. Create devdocs/concepts/concepts.md
   - List only essential technical concepts
   - Order by importance/dependency
   - Brief one-line description for each

2. For each concept, create detailed clarification in devdocs/concepts/concept_clarifications/
   Name files with ordering prefixes (01_[concept].md, 02_[concept].md, etc.)
   
   Each clarification must answer:
    - Clear short explanation what it is and why it matters
    - How this concept helps the overall project
    - How this concept limits the overall project
    - What kind of information this concept needs as input
    - What kind of process this concept should use
    - What kind of information this concept outputs or relays
    - Explain the good expected outcome of realizing this concept
    - Explain the bad unwanted outcome of realizing this concept


Phase 3: Simplification for MVP

This phase is explained in detail in chapter_6/03_concept_docs.md

I want you to create devdocs/concepts/simplified_concepts.md using concepts.md. The goal is to trim all features except the core ones for each concept, so we still have these concepts but at prototype scope.

And make sure you follow these rules during simplification
        - do not oversimplify the concept to the point where the underlying architecture is oversimplified and does not support the original concept
        - if a concept supports multiple subconcepts, do not binarize it, but diminish the number of supported subconcepts by prioritizing the most important ones.

And then for each concept in simplified_concepts.md create me a clarification markdown file
which includes answer to these questions:

 For each concept write
    - clear short explanation what it is and why it matters
    - How this concept helps the overall project
    - How this concept limits the overall project
    - What kind of information this concept needs as input
    - What kind of process this concept should use
    - What kind of information this concept outputs or relays
    - explain the good expected outcome of realizing this concept
    - explain the bad unwanted outcome of realizing this concept

Put 1_ 2_ 3_ like prefixes on each file to order them, and make sure to prioritize the core concepts when ordering them. Do this in devdocs/concepts/simplified_concept_clarifications/

Critical: The simplification should reduce features, not break architecture and be flexible for future scope expansion

Phase 4: Identifying Modules

We want development to be done in modular way. This ensures project stays in debuggable/testable complexity. This step identifies all possible modules.


Review the simplified concepts and identify logical abstractions that can be organized into modules. Focus on creating a modular architecture that will facilitate cleaner, more maintainable development. Ensure the proposed modularity doesn't compromise performance or security requirements. Avoid over-engineering - only modularize major conceptual boundaries. Document your module structure proposal in module_proposal.md

Phase 5: Architecture and Structure

Based on the simplified concepts and module proposal, please propose the most suitable architecture for this project. Keep in mind that we'll be developing iteratively, evolving from a simple prototype to more complex versions over time. Our priority is a solid foundation which can be expanded over time, not an immediate full-scope solution.

Please use the info from module_proposal.md intelligently (you may choose not to accept a module if it is not necessary or is overengineered for the MVP logic) and avoid tightly coupled architecture.

Key considerations:
- Avoid over-engineering, excessive complexity, or excessive explicitness at this stage
- The architecture will evolve as we progress, so maintain flexibility
- Focus on high-level concepts that can accommodate the project's core requirements
- Present a design that balances simplicity with extensibility
- Make sure to use proper design patterns in order to keep modularity of codebase(e.g., repository pattern for DB related operations)

Phase 7: Selecting One Module and Preparing for Implementation

One way to enforce modular development is to have AI implement modules one by one.

 
With the project information, architecture overview, and module structure defined, select the first module for implementation.
   Choose a module that:
  1. Encapsulates the messiest, most distracting existing code
  2. Is peripheral to the core system (minimal dependencies)
  3. Has stable requirements unlikely to change
  4. Can be implemented independently without requiring other modules to be fully defined first


  And then create devdocs/proposal/[module_name]_module_implementation_proposal.md which explains, in order:
  
  1.  what components this module would have
  2.  what interfaces/endpoints this module would expose
  3.  how it would be used, with pseudo code
  4.  how it will be used by other/core modules (at a high level, without definitive definitions)
  5.  make sure the interface does not mix concerns and stays modular
  6.  if the selected module is peripheral, it should know nothing about the business domain, just how to persist and retrieve objects. This makes it:
         - More reusable
         - Easier to test (mock repositories)
         - Clear separation of concerns
         - No mixed responsibilities

Phase 8: Module implementation


Please implement this module with the required files. Based on our previous discussions and documentation, 
please provide:

- Complete file list with their respective directory paths
- Full implementation for each file
- All necessary code to make this module functional

Phase 9: Probe test

Details are explained in chapter_7/probe_tests_pattern.md

Let's design comprehensive probe tests to validate our implementation.
Please create the probe_tests folder if it doesn't exist, and create a test plan with the following structure:

1. 5 test files, each containing 5 focused test cases
2. Avoid mocking; use real components with real calls and real data
3. File naming convention: test_01_[test_focus_area].py, test_02_[next_focus_area].py, etc.
4. Make sure these tests:
      - Test individual functions/methods in isolation
      - Test how components work together
      - Verify whether solidly defined requirements are met

5. For each test file, provide:
  - Clear description of what aspect it tests
  - Why this testing area is critical
  - Brief outline of each test case within the file
6. These tests shouldn't use any testing frameworks. Make the outputs verbose enough that you can see exactly what does not work.
7. Start with an initialization test file.
8. Each test file's top comment should include how to run it manually.


(Write the files in such a way that each subtest inside is a separate function, and at the end there is one function that orchestrates running all of them)

(Tests should test what actually exists; do not create fallbacks, etc.)
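
For orientation, a minimal sketch of what one such framework-free test file might look like. KeyValueStore is a stand-in class invented so the sketch runs on its own; replace it with a real import from the module under test:

# test_01_initialization.py
# Run manually with: python test_01_initialization.py

class KeyValueStore:
    """Stand-in for the real module under test."""
    def __init__(self):
        self.data = {}
    def save(self, key, value):
        self.data[key] = value
    def load(self, key):
        return self.data[key]

def test_initializes_empty():
    print("[test_initializes_empty] creating store...")
    store = KeyValueStore()
    assert store.data == {}, "expected an empty store on init"
    print("  OK: store starts empty")

def test_save_then_load():
    print("[test_save_then_load] round-tripping a value...")
    store = KeyValueStore()
    store.save("answer", 42)
    value = store.load("answer")
    print("  loaded:", repr(value))
    assert value == 42, "expected 42 back, got %r" % (value,)
    print("  OK: value survived the round trip")

def run_all():
    # One orchestrator runs every subtest, as the prompt requires.
    failures = 0
    for test in (test_initializes_empty, test_save_then_load):
        try:
            test()
        except AssertionError as exc:
            failures += 1
            print("  FAIL:", exc)
    print("Done:", failures, "failure(s)")

if __name__ == "__main__":
    run_all()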

Phase 9: Running the Probe Tests and Fixing the Errors

It is important to run the probe tests ourselves: first we let AI do this, and then we run them by hand.

Let's run each probe test one by one and fix all errors. If you find errors and can't fix them after 3 changes, break the probe test down into smaller, more isolated forms with more verbose outputs (in the probe test files) and rerun them.

Phase 10: Fuzzy Module Documentation

Now that our module is ready and working, we should create comprehensive documentation for it:

Create devdocs/modules/[module_name]/ containing:

1. what_is_this_for.md
   - Core purpose
   - Why it's a separate module
   - What breaks without it
   - Who depends on it

2. interfaces_and_endpoints.md
   - Public methods/functions
   - API endpoints
   - Event emissions
   - Data structures

3. integration_points.md
   - How to integrate this module (only if the thing to integrate with is already defined in the codebase)
   - Multiple integration approaches if flexible
   - Best practices for usage

4. integration_requirements.md
   - Environment variables needed (only if really needed)
   - Database schemas required (only if really needed)
   - External services (only if really needed)
   - Configuration files (only if really needed)

5. limitations.md
   - Performance boundaries
   - Feature boundaries
   - Technical constraints
   - Security limitations

6. possible_use_cases.md
   - Common usage patterns
   - Anti-patterns to avoid
   - Performance tips

7. edge_cases_covered.md
   - Handled edge cases
   - Error scenarios
   - Recovery mechanisms

8. example_usage.md
   - Code examples
   - Setup instructions
   - Common operations
   - Testing approach

9. summary.md
   - Quick reference
   - Main interfaces
   - Critical limitations
   - Integration checklist

Phase 11: Integration and Smoke Tests

Now that the module is complete and documented, integrate it with the core system:

  1. Create integration tests between this module and any existing core components under smoke_tests 
  2. Verify the module works within the larger system context
  3. Test data flow between modules
  4. Validate that the module's interfaces are being used correctly
  5. Check for any unexpected side effects or performance impacts
  6. Update smoke tests to include integration scenarios

  Run all tests together to ensure the module doesn't break existing functionality.
  

APPENDIX 2: APPLYING VIBE CODING TO AN EXISTING CODEBASE

This appendix contains ready-to-use prompts for applying Vibe Coding patterns to existing codebases. The archaeology pattern (Chapter 6.7) helps extract DevDocs from mature projects without losing architectural wisdom.

IMPORTANT NOTES

  • Existing code contains hidden reasons - don’t simplify without understanding why
  • Technical debt often exists for valid historical reasons
  • Start documentation before making changes
  • Keep old architectural decisions visible to prevent regression

APPLY THE ARCHAEOLOGY PATTERN

Phase 1: Filling the Context and Summarizing

"""

This is an ongoing project. It is currently partially working but under heavy development. I would like you to read all the files and tell me about this project in non-technical terms.

  • Make sure you focus on code files and not existing documentation
  • Make sure you read code files fully

And create devdocs/archaeology/small_summary.md

"""

Phase 2: Enhancing Generic Technical Understanding

""" Now understand how this codebase is designed in terms of:

  • data flow paths
  • main abstractions
  • top-level design patterns

But talk about these at a high level, as if you are introducing the architecture/project to a new engineer.

Put this in devdocs/archaeology/intro2codebase.md

"""

Phase 3: Deeper Look into the Codebase

""" Identify every internal interface and submodule-level interaction defined within the codebase (excluding external packages). For each interface, follow its execution path end-to-end: what calls it, what it triggers, how data or state moves through the layers, and what outcomes it produces.

Document these flows at a high level, explaining what each interaction corresponds to, what it affects, and why it behaves that way.

Create one file per interaction trace under devdocs/archaeology/traces/ (e.g., trace_1.md, trace_2.md). Base all analysis strictly on actual code behavior rather than names or assumptions.

Each trace should have the following sections:

  • Entry Point
  • Execution Path
  • Resource Management
  • Error Path
  • Performance Characteristics
  • Observable Effects
  • Why This Design
  • What feels incomplete
  • What feels vulnerable
  • What feels like bad design

"""

Phase 4: Using Fresh Trace Analysis to Identify “Things”

"""

Now look at all the traces: what are the 5 things that would improve the codebase the most?

Put these in devdocs/archaeology/5_things_or_not.md

After each item, think of a possible reason why it has not been implemented or fixed in the current code (there may be undocumented decisions, and these 5 things might touch on them).

"""

Phase 5: Concept Inventory

"""

Looking at the codebase, create:

  1. devdocs/archaeology/concepts/technical_concepts_list.md
  2. devdocs/archaeology/concepts/design_concepts_list.md
  3. devdocs/archaeology/concepts/business_lvl_concepts_list.md

where each concept has a high-level explanation (max 3 sentences) as well as its current implementation status. Make sure the listing starts from the most central concepts and branches out as needed.

Recognize that the codebase may contain architectural intentions, hidden assumptions, edge-case handling, or future-oriented design considerations that exist only in the code and are not explicitly documented. Surface these implicit concepts clearly alongside the others, and explain their possible purpose, rationale, and potential impact on scalability, security, and maintainability.

Then identify missing but required/expected concepts and put them in:

devdocs/archaeology/concepts/missing_concepts_list.md

"""

Phase 6: Codebase-Inferred Foundation Extraction

Now, based solely on the codebase analysis and the devdocs/archaeology documents, please extract these foundation documents:

1. Create devdocs/archaeology/foundations/project_description.md
   - What the system actually does (not what docs claim)
   - Current user base and use cases
   - Actual problems being solved

2. Create devdocs/archaeology/foundations/philosophy.md
   - Implicit design principles found in code
   - Coding patterns consistently used
   - Architectural decisions evident in structure

3. Create devdocs/archaeology/foundations/known_requirements.md
   - Requirements inferred from implementations
   - Constraints visible in code
   - Compliance/security measures present

STANDARD ANALYSIS

Phase 7: Data Dump and Foundation Extraction

This is where we start moving away from the current implementation and focus on the project itself. At this stage, make sure to provide all non-code documentation regarding the project idea.

Based on the given documentation (not the prior codebase knowledge), please extract these foundation documents:

1. Create devdocs/foundations/project_description.md
   - What the system is meant to do (as the documentation describes it)
   - Intended user base and use cases
   - Problems it is meant to solve

2. Create devdocs/foundations/philosophy.md
   - Design principles stated or implied in the documentation
   - Conventions the project commits to
   - Architectural decisions described in the documentation

3. Create devdocs/foundations/known_requirements.md
   - Requirements stated in the documentation
   - Constraints and assumptions documented
   - Compliance/security measures required

Phase 8: Architecture Archaeology

Reconstruct the architecture from code:

1. Create devdocs/archaeology/architecture_analysis.md:
   - Trace main entry points and flows
   - Map data models and schemas
   - Document API endpoints and contracts
   - Identify architectural patterns used

2. Create devdocs/archaeology/module_discovery.md:
   - Natural module boundaries in code
   - Coupling and cohesion analysis
   - Dependency relationships
   - Shared utilities and libraries

Phase 9: Concept Mapping

Map discovered concepts to actual implementation:

Create devdocs/archaeology/concept_mappings.md documenting:
- Which files/modules implement each concept
- Coverage percentage (fully realized, partial, missing)
- Why implementation diverged from ideal
- What alternatives were likely considered
- Edge cases that shaped the current design

This captures the "why" behind existing architecture.

TOWARDS FIRST AI IMPLEMENTATION

Phase 10: Gap Analysis

Compare current state to desired future state:

Create devdocs/evolution/gap_analysis.md:
1. What concepts need implementation
2. What architecture changes are required
3. What technical debt blocks progress
4. What can be incrementally improved
5. What requires complete rewrite

Phase 11: Gap Closure Strategy

Using gap_analysis.md, create a phased evolution plan:

Create devdocs/evolution/gap_closure_plan.md:
1. Quick wins (can do immediately)
2. Incremental improvements (module by module)
3. Major refactoring (requires planning) 
4. Complete rewrites (if necessary)

For each phase:
- Dependencies and prerequisites
- Risk assessment
- Testing strategy

Phase 12: Baseline Smoke Tests

Establish a working baseline with smoke tests: see what is running and what is not.
Your job is to create smoke_tests/check_what_is_working/
and under this folder create smoke test files for each implementation that seems to be working.

Let's design comprehensive smoke tests to validate the implementation.
Please create the smoke_tests folder if it doesn't exist and create a test plan with the following structure:

1. 5 test files, each containing 5 focused test cases
2. Avoid mocking; use real components with real calls and real data
3. File naming convention: test_01_[test_focus_area].py, test_02_[next_focus_area].py, etc.
4. Make sure these tests:
   - Test individual functions/methods in isolation
   - Test how components work together
   - Verify whether solidly defined requirements are met

5. For each test file, provide:
   - Clear description of what aspect it tests
   - Why this testing area is critical
   - Brief outline of each test case within the file

6. These tests shouldn't use any testing frameworks. Make the outputs verbose enough that you can see exactly what does not work
7. Start with an initialization test file
8. Each test file's top comment should include how to run it manually

Make sure not to mock things, and try to run them with minimal changes. Make sure you cover the whole codebase.
Document current behavior in smoke_tests/check_what_is_working/report.md:
- What works as expected
- What's broken but acceptable

Phase 13: Codebase Cleanup Inventory

Identify unused code and organize for safe removal:

1. Scan for unused elements:
   - Unreferenced files and modules
   - Dead code paths (unreachable functions)
   - Commented-out code blocks
   - Duplicate implementations
   - Abandoned features
   - Test files for non-existent code
   - Orphaned configuration files

2. Create devdocs/archaeology/cleanup_inventory.md:
   - List all candidates for removal
   - Note why each appears unused
   - Mark any that might be used dynamically
   - Identify potential hidden dependencies

Never delete immediately - code that looks unused might be:
- Loaded dynamically
- Referenced in configuration
- Used by external systems
- Kept for compliance/audit reasons

Phase 14: Strategic Refactoring Opportunities

Identify high-value refactoring opportunities that will significantly improve the codebase:

  1. Scan for refactoring candidates:

    • Database operations scattered across codebase → Repository Pattern
    • Business logic mixed with infrastructure → Service Pattern
    • Repeated API call patterns → Gateway/Client abstraction
    • Global state management → Dependency Injection
    • Hard-coded configurations → Configuration Pattern
    • Complex conditionals → Strategy Pattern
    • Direct file system access → Storage abstraction
  2. Create devdocs/evolution/refactoring_opportunities.md. For each opportunity, document:

    • Current problematic pattern (with file locations)
    • Proposed abstraction/pattern
    • Immediate benefits (testability, maintainability)
    • Implementation effort estimate
    • Risk assessment
    • ROI justification
  3. Prioritize by value/effort ratio:

    • Critical: Blocks testing or development
    • High: Significant maintenance reduction
    • Medium: Nice to have, clear benefits
    • Low: Cosmetic, can wait

Focus only on refactoring that:

  • Enables better testing
  • Reduces coupling between modules
  • Makes AI-driven development easier
  • Solves actual pain points

Avoid refactoring for its own sake - each change must deliver measurable value.

Phase 15: Implementation Roadmap

Create comprehensive roadmap combining all improvement activities:

Create devdocs/evolution/implementation_roadmap.md organizing work into phases:

Phase A - Foundation (Week 1-2):

  1. Critical cleanup from cleanup_inventory.md
  2. Essential refactoring that unblocks other work
  3. Fix broken core functionality from smoke test report
  4. Establish CI/CD if missing

Phase B - Core Refactoring (Week 3-4):

  1. Implement highest-ROI refactoring from refactoring_opportunities.md
  2. Add abstraction layers (Repository, Service patterns)
  3. Decouple tightly coupled modules
  4. Run smoke tests after each refactor

Phase C - Gap Filling (Week 5-8):

  1. Implement missing concepts from gap_analysis.md
  2. Start with quick wins
  3. Progress to module-by-module improvements
  4. Add new features identified in gaps

Phase D - Integration & Polish (Week 9-10):

  1. Ensure all modules work together
  2. Performance optimization
  3. Documentation updates
  4. Comprehensive testing

For each item include:

  • Specific files/modules affected
  • Dependencies (what must be done first)
  • Success criteria
  • Time estimate
  • Risk level

This roadmap becomes your execution guide for transforming the codebase systematically.


APPENDIX 3: REFACTORING WITH VDD

The best quote about refactoring code is:

“Refactoring is like brushing your teeth. You should do it daily, not monthly.”

Sometimes requirements change and we might need bigger refactors, but most of the time a refactor is something the code begs for, almost as if the code is pregnant with a new abstraction or a new design pattern.

THE VDD REFACTORING PHILOSOPHY

In Vibe-Driven Development, refactoring isn’t about making code “clean” according to some abstract ideal. It’s about recognizing when code is telling you it wants to evolve, and guiding that evolution with AI assistance while maintaining system stability.

RECOGNIZING REFACTORING OPPORTUNITIES

Signs Code is Ready to Evolve

Look at the codebase and identify these patterns:

1. Repeated code blocks with slight variations
   → Wants to become: Abstracted function/class

2. Long parameter lists being passed around
   → Wants to become: Configuration object or context

3. Multiple if/else checking the same condition
   → Wants to become: Strategy pattern or polymorphism

4. Data and functions always traveling together
   → Wants to become: Class or module

5. Comments explaining what code does (not why)
   → Wants to become: Self-documenting code

6. Test setup code duplicated across files
   → Wants to become: Test utilities or fixtures

Document findings in devdocs/refactoring/opportunities.md
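
As a small, hypothetical illustration of sign 3, a before/after sketch (the shipping example and all names are invented):

# Before: the same condition is re-checked wherever shipping is computed
def shipping_cost(order):
    if order["region"] == "EU":
        return 5.0
    elif order["region"] == "US":
        return 8.0
    else:
        return 12.0

# After: a lookup table (a lightweight strategy) holds the variation;
# adding a region becomes a one-line change instead of another branch
SHIPPING_RATES = {"EU": 5.0, "US": 8.0}
DEFAULT_RATE = 12.0

def shipping_cost(order):
    return SHIPPING_RATES.get(order["region"], DEFAULT_RATE)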

PHASE 1: REFACTORING DISCOVERY

Identifying Natural Abstractions

Analyze the codebase for emerging patterns:

1. Find all instances where similar code appears 3+ times
2. Identify data structures that always appear together
3. Look for functions that always call each other in sequence
4. Find switch statements or if/else chains on the same variable

Create devdocs/refactoring/emerging_patterns.md documenting:
- Pattern name
- Current locations (file:line)
- Proposed abstraction
- Estimated impact (files affected)
- Risk level (low/medium/high)

Code Smell Detection

Scan for these specific code smells:

Technical Debt Smells:
- God functions (>50 lines)
- Deep nesting (>3 levels)
- Magic numbers without constants
- Dead code (unreachable/unused)
- Circular dependencies

Architectural Smells:
- Business logic in UI/controllers
- Data access scattered everywhere
- No clear module boundaries
- Inconsistent error handling
- Mixed abstraction levels

Document in devdocs/refactoring/code_smells.md with:
- Smell type and severity
- Specific locations
- Suggested remedy
- Priority for fixing

PHASE 2: SAFE REFACTORING PLANNING

Creating Refactoring Anchor Points

Before refactoring, establish safety nets:

1. Create probe tests for current behavior:
   - Capture existing functionality
   - Document edge cases
   - Record performance baselines

2. Generate comprehensive tests:
   Create probe_tests/pre_refactor/
   - test_current_behavior.py
   - test_edge_cases.py
   - test_integration_points.py

3. Document current architecture:
   Create devdocs/refactoring/current_state.md
   - How it works now
   - Why it works this way
   - Known limitations

These anchor points ensure refactoring doesn't break existing functionality.

Incremental Refactoring Plan

Create a step-by-step refactoring plan:

devdocs/refactoring/refactor_plan.md:

Phase 1: Preparation
□ Add tests for current behavior
□ Create feature flags for gradual rollout
□ Set up parallel implementations

Phase 2: Extract (Don't Change)
□ Extract methods without changing logic
□ Create new files/modules
□ Keep old code working

Phase 3: Abstract (Don't Break)
□ Introduce abstractions
□ Route through new code paths
□ Maintain backward compatibility

Phase 4: Migrate (Gradual Switch)
□ Switch features one by one
□ Monitor for issues
□ Keep rollback ready

Phase 5: Cleanup (Remove Old)
□ Delete old implementations
□ Remove feature flags
□ Update documentation
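
A minimal sketch of the feature-flag idea from Phases 1 and 4 above; the flag name, environment variable, and pricing functions are invented for illustration:

import os

# Hypothetical flag, read once from the environment.
USE_NEW_PRICING = os.getenv("USE_NEW_PRICING", "false").lower() == "true"

def compute_price_v1(order):
    return order["quantity"] * order["unit_price"]

def compute_price_v2(order):
    # Invented "new" rule, standing in for the refactored implementation.
    return order["quantity"] * order["unit_price"] * 0.95

def compute_price(order):
    # Both implementations live side by side; the flag routes traffic,
    # so rollback means flipping an environment variable, not reverting code.
    return compute_price_v2(order) if USE_NEW_PRICING else compute_price_v1(order)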

PHASE 3: REFACTORING EXECUTION

Extract Method Refactoring

For long functions, extract logical chunks:

1. Identify a cohesive block of code (5-15 lines)
2. Check what variables it needs (parameters)
3. Check what it produces (return values)
4. Extract to new method with descriptive name

Apply this extraction:
- Original: [file:lines]
- Extract lines [X-Y] to method [method_name]
- Parameters needed: [list]
- Returns: [type]

After extraction:
- Run tests to verify behavior unchanged
- Commit with message: "Extract [method_name] from [original_function]"
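
A small, hypothetical before/after showing the shape of this extraction (the registration example is invented):

# Before: one function mixing validation and formatting
def register(raw):
    if "@" not in raw["email"]:
        raise ValueError("invalid email")
    if len(raw["name"]) == 0:
        raise ValueError("empty name")
    return {"name": raw["name"].strip().title(), "email": raw["email"].lower()}

# After: each cohesive block extracted into a descriptively named function
def validate_registration(raw):
    if "@" not in raw["email"]:
        raise ValueError("invalid email")
    if len(raw["name"]) == 0:
        raise ValueError("empty name")

def normalize_registration(raw):
    return {"name": raw["name"].strip().title(), "email": raw["email"].lower()}

def register(raw):
    validate_registration(raw)
    return normalize_registration(raw)

Behavior is unchanged; only the structure moved, which is what the post-extraction test run should confirm.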

Introduce Abstraction Refactoring

When you find repeated patterns:

1. Identify the common structure
2. Find what varies between instances
3. Create abstraction that captures commonality
4. Make variations into parameters/strategies

Create new abstraction:
File: [new_file_path]
Class/Interface: [name]
Common behavior: [what stays same]
Variable behavior: [what changes]
Implementation strategy: [how to handle variations]

Refactor each instance to use new abstraction
Test after each instance is refactored

Data Structure Consolidation

When data always travels together:

Identify coupled data:
- Fields: [field1, field2, field3]
- Always appear in functions: [func1, func2]
- Current locations: [scattered across X files]

Create consolidated structure:
- New class/type: [StructureName]
- Properties: [organized fields]
- Methods: [related operations]
- File location: [where it lives]

Migration approach:
1. Create new structure alongside old
2. Update one function at a time
3. Run tests after each update
4. Remove old scattered fields
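
A hypothetical before/after sketch of such a consolidation using a Python dataclass (the Address example is invented):

from dataclasses import dataclass

# Before: street, city, and postal_code travel together through every call
def format_label(name, street, city, postal_code): ...

# After: the coupled fields become one structure with its own behavior
@dataclass
class Address:
    street: str
    city: str
    postal_code: str

    def as_label(self) -> str:
        return f"{self.street}\n{self.postal_code} {self.city}"

def format_label(name: str, address: Address) -> str:
    return f"{name}\n{address.as_label()}"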

PHASE 4: ARCHITECTURAL REFACTORING

Repository Pattern Introduction

When database access is scattered:

Current state analysis:
- Find all database queries in codebase
- Group by entity type (User, Order, Product)
- Identify common query patterns

Create repository structure:
repositories/
├── base_repository.py      # Common CRUD operations
├── user_repository.py      # User-specific queries
├── order_repository.py     # Order-specific queries
└── product_repository.py   # Product-specific queries

Migration steps:
1. Create repository with existing queries (copy, don't move)
2. Update one module to use repository
3. Test thoroughly
4. Repeat for each module
5. Remove direct database access
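
A minimal, hypothetical sketch of step 1 and the eventual call-site switch; the table, column, and names are invented, and the placeholders assume a sqlite3-style connection:

# Before: a raw query buried in business code
def deactivate_user(conn, user_id):
    conn.execute("UPDATE users SET active = 0 WHERE id = ?", (user_id,))

# Step 1 - copy the query into a repository without moving anything yet
class UserRepository:
    def __init__(self, conn):
        self._conn = conn

    def deactivate(self, user_id):
        self._conn.execute("UPDATE users SET active = 0 WHERE id = ?", (user_id,))

# Steps 2-5 - call sites switch to the repository one module at a time
def deactivate_user(repo, user_id):
    repo.deactivate(user_id)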

Service Layer Extraction

When business logic is mixed with infrastructure:

Identify business logic:
- Find code that implements business rules
- Separate from HTTP handling, database access
- Group related business operations

Create service layer:
services/
├── user_service.py         # User business logic
├── order_service.py        # Order processing
├── payment_service.py      # Payment rules
└── notification_service.py # Notification logic

Refactoring approach:
1. Extract business logic to service methods
2. Keep controllers thin (just HTTP handling)
3. Services call repositories (not direct DB)
4. Test services independently
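
A minimal sketch of the resulting shape; OrderService, its repository, and the handler are invented placeholders:

class OrderService:
    def __init__(self, order_repository):
        self._orders = order_repository

    def cancel(self, order_id):
        order = self._orders.get(order_id)
        if order["status"] == "shipped":  # business rule lives here
            raise ValueError("shipped orders cannot be cancelled")
        order["status"] = "cancelled"
        self._orders.save(order)

def cancel_order_handler(request, service):
    # Controller stays thin: parse input, call service, shape the response.
    try:
        service.cancel(request["order_id"])
        return {"status": 200}
    except ValueError as exc:
        return {"status": 409, "error": str(exc)}

The service can now be tested with a fake repository, with no HTTP machinery involved.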

Dependency Injection Implementation

When testing is hard due to tight coupling:

Identify hard dependencies:
- Direct instantiation in constructors
- Global imports used directly
- Hard-coded configuration values

Introduce dependency injection:

# Before:
class OrderService:
    def __init__(self):
        self.db = Database()  # Hard dependency
        self.email = EmailSender()  # Hard dependency

# After:
class OrderService:
    def __init__(self, db, email_sender):
        self.db = db  # Injected
        self.email = email_sender  # Injected

Create dependency container:
- Central place for wiring
- Configuration-driven
- Easy to swap implementations
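
Continuing the sketch above, one possible shape for such a container; the config keys and the Database/EmailSender constructor arguments are assumptions:

class Container:
    # Central wiring point: production code builds objects here once;
    # tests construct OrderService directly with fakes instead.
    def __init__(self, config):
        self.config = config

    def database(self):
        return Database(self.config["db_url"])

    def email_sender(self):
        return EmailSender(self.config["smtp_host"])

    def order_service(self):
        return OrderService(self.database(), self.email_sender())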

PHASE 5: REFACTORING VALIDATION

Performance Comparison

Compare before/after performance:

Create performance tests:
1. Measure current performance
   - Response times
   - Memory usage
   - Database queries

2. Run same tests after refactoring
   - Should be same or better
   - Document any degradation
   - Optimize if needed

Document in devdocs/refactoring/performance_impact.md:
- Metrics before/after
- Explanation of changes
- Optimization opportunities

Behavior Verification

Ensure refactoring didn't change behavior:

1. Run all existing tests
   - Should pass without changes
   - If test needs updating, document why

2. Run probe tests
   - Compare output before/after
   - Check edge cases still work
   - Verify error handling unchanged

3. Manual testing checklist
   - Critical user paths
   - Integration points
   - Error scenarios

Document any behavior changes (there shouldn't be any!)

PHASE 6: CONTINUOUS REFACTORING

Daily Refactoring Habits

Make these part of your daily workflow:

When adding new code:
□ Extract if function > 30 lines
□ Name magic numbers as constants
□ Group related parameters into objects
□ Add types/interfaces where missing

When modifying existing code:
□ Leave it better than you found it
□ Extract repeated code you encounter
□ Improve names for clarity
□ Add missing tests

When reviewing code:
□ Identify emerging patterns
□ Note refactoring opportunities
□ Create tickets for larger refactors

Refactoring During Feature Development

Integrate refactoring into feature work:

Before starting feature:
1. Refactor the area you'll be working in
2. Clean up technical debt that would slow you down
3. Add tests for code you'll modify

During feature development:
1. Extract new abstractions as they emerge
2. Don't copy-paste, extract shared code
3. Keep consistent patterns

After feature completion:
1. Refactor based on new understanding
2. Consolidate similar code introduced
3. Update module documentation

COMMON REFACTORING PATTERNS WITH AI

Pattern 1: Extract and Generalize

When you have similar code with variations:

Prompt: "I have these 3 similar functions [paste code].
Extract the common logic into a base function and handle
variations through parameters or callbacks."

AI will:
- Identify commonality
- Create generalized version
- Show how to use for each case

Pattern 2: Simplify Conditional Logic

When you have complex nested conditions:

Prompt: "Simplify this complex conditional logic [paste code].
Use early returns, extract methods, or strategy pattern as appropriate."

AI will:
- Flatten nested conditions
- Extract boolean methods
- Suggest clearer structure
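
A small, hypothetical before/after of the early-return style (the permission example is invented):

# Before: nested conditions hide the actual rule
def can_edit(user, doc):
    if user is not None:
        if user["active"]:
            if doc["owner"] == user["id"] or user["is_admin"]:
                return True
    return False

# After: early returns flatten the nesting; each guard reads as a sentence
def can_edit(user, doc):
    if user is None:
        return False
    if not user["active"]:
        return False
    return doc["owner"] == user["id"] or user["is_admin"]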

Pattern 3: Introduce Design Pattern

When code is begging for a pattern:

Prompt: "This code [paste code] seems to need a design pattern.
Identify which pattern would help and refactor to use it."

AI will:
- Recognize applicable patterns
- Implement the pattern
- Show migration path

REFACTORING SAFETY RULES

The Golden Rules

  1. One Thing at a Time - Never refactor multiple things simultaneously
  2. Test Continuously - Run tests after every small change
  3. Commit Frequently - Each successful refactor gets its own commit
  4. Keep It Working - Code should work at every step
  5. Document Why - Commit messages explain the refactoring reason

When NOT to Refactor

  • Before a deadline - Refactoring can wait
  • Without tests - Add tests first
  • During debugging - Fix the bug first
  • Unfamiliar code - Understand it first
  • If it’s working - Don’t fix what ain’t broken (unless it’s blocking progress)

REFACTORING CHECKLIST

Before starting any refactoring:

  • Current behavior is tested
  • Performance baseline captured
  • Rollback plan exists
  • Team is informed

During refactoring:

  • One change at a time
  • Tests pass after each change
  • Commits are atomic
  • Code always compiles

After refactoring:

  • All tests still pass
  • Performance unchanged or better
  • Documentation updated
  • Team code review completed
  • Old code removed

TIPS FOR AI-ASSISTED REFACTORING

  1. Show Context - Give AI the full picture, not just the code to refactor
  2. Specify Constraints - Tell AI what must not change
  3. Request Steps - Ask for incremental refactoring steps
  4. Verify Understanding - Have AI explain the refactoring before doing it
  5. Test Generation - Ask AI to create tests before and after

Remember: Refactoring is about making code better without changing what it does. It’s evolution, not revolution. The best refactoring is invisible to users but obvious to developers.

APPENDIX 4: ADDING NEW FEATURES WITH VDD

This appendix provides ready-to-use prompts for systematically planning and implementing new features using the Vibe-Driven Development methodology. These prompts ensure thorough planning, risk assessment, and safe implementation.

OVERVIEW

Adding features in VDD follows a structured documentation-first approach that prevents scope creep, identifies risks early, and provides clear implementation paths for AI collaboration.

PHASE 1: FEATURE DISCOVERY & DOCUMENTATION

In this phase, make sure that after each step you check the generated documentation and apply fixes.

Step-1: Seeding the Feature

Start by describing the feature. Based on its complexity, provide extra information. End your description with this:

“Please describe this feature back to me in better phrasing.”

Having AI rephrase the feature back ensures mutual understanding before proceeding. This catches miscommunication early.

Once you read back what the AI generated about the feature you asked for, make sure to correct anything that is wrong.

Step-2: Generating Small Description and Plan

Okay, now that you understand this feature in general:

Create the initial feature documentation structure in the devdocs folder:

1. Create devdocs/features/planned/feat_X_[FEATURE_NAME]/desc.md with:
   - Problem statement (what problem does this solve?)
   - User value proposition (why do users need this?)
   - Success criteria (how do we know it's working?)
   - Scope boundaries (what this feature will NOT do)
   - Priority level (critical/high/medium/low)

2. Create devdocs/features/planned/feat_X_[FEATURE_NAME]/implementation_plan.md with:
   2.1. High-level plan summary in bullet points
   2.2. Full implementation plan

  (Keep it lightweight at this stage - we'll add detail as we validate the feature.)

Step-3: Compatibility Check of the Plan (Optional)

Now, based on the feature implementation plan (reread it - I made changes):

I want you to check for and identify compatibility ISSUES, RISKS, and CONFLICTS with respect to the whole codebase's logic. Check whether this implementation plan causes:
- Existing features that might break
- Performance implications
- API contract changes
- Database schema impacts
- Security considerations

Step-4: Compatibility Analysis

Analyze how this feature will interact with the existing codebase:

1. Read all relevant module interfaces and implementations
2. Create devdocs/features/planned/feat_X_[FEATURE_NAME]/compatibility_check.md

Document:
- Which existing features might be affected
- Potential conflicts or breaking changes
- Performance implications (latency, memory, storage)
- API contract changes required
- Database schema impacts
- Security considerations
- Required refactoring before implementation

Rate each risk as: Low/Medium/High
Suggest mitigation strategies for Medium/High risks.

PHASE 2: BLUEPRINT PLANNING (Optional)

Expand the implementation plan for this feature:

Update devdocs/features/planned/feat_X_[FEATURE_NAME]/implementation_plan.md

Structure:
1. High-level plan summary (5-10 bullet points)

2. Detailed implementation steps:
   Phase A: Foundation
   - [ ] Step 1: [Specific task]
   - [ ] Step 2: [Specific task]

   Phase B: Core Implementation
   - [ ] Step 3: [Specific task]
   - [ ] Step 4: [Specific task]

   Phase C: Integration
   - [ ] Step 5: [Specific task]
   - [ ] Step 6: [Specific task]

3. For each phase specify:
   - Files to be created/modified
   - New components/modules needed
   - Integration points
   - Testing approach

4. Architecture decisions:
   - Design patterns to use
   - Data flow design
   - Error handling strategy
   - Performance optimization approach

5. Code organization:
   - Where new code will live
   - Module boundaries
   - Interface definitions

PHASE 3: EXECUTION LOOP

Step-5: Executing

This is where the implementation is executed by AI.

Now, using all the files we created regarding this feature, please start the execution. Be loyal to the files.

Step-6: Test Scenario Planning

Once execution is finished, we want to test it.

Define comprehensive test scenarios for this feature:

Create devdocs/features/planned/feat_X_[FEATURE_NAME]/test_scenarios.md

Document:

1. Feature Trigger Points
   - How can this feature be activated/used?
   - What are all the entry points?
   - User actions that trigger it
   - System events that trigger it

2. Happy Path Scenarios
   - Scenario A: [Description]
     * Input: [Specific data]
     * Expected: [Specific outcome]
   - Scenario B: [Description]
     * Input: [Specific data]
     * Expected: [Specific outcome]

3. Edge Cases
   - Boundary conditions
   - Empty/null inputs
   - Maximum load scenarios
   - Concurrent access cases

4. Error Scenarios
   - Invalid inputs
   - Missing dependencies
   - Network failures
   - Permission issues

5. Integration Tests
   - How it works with Feature X
   - How it works with Feature Y
   - Data flow through the system

6. Acceptance Criteria Checklist
   - [ ] Criterion 1 (how to verify)
   - [ ] Criterion 2 (how to verify)
   - [ ] Criterion 3 (how to verify)

7. Performance Benchmarks
   - Response time targets
   - Throughput requirements
   - Resource usage limits

Step-7: Probe Test Creation

Using the test scenario documentation we created, let's design comprehensive probe tests to validate our implementation of this feature.

Please create the probe_tests folder if it doesn't exist.


Use the feature name as a folder, and inside it create:

1. 5 test files, each containing 5 focused test cases
2. Avoid mocking; use real components with real calls and real data
3. File naming convention: test_01_[test_focus_area].py, test_02_[next_focus_area].py, etc.
4. Make sure these tests:
      - Test individual functions/methods in isolation
      - Test how feature-related components work together
      - Verify whether solidly defined requirements are met

5. For each test file, provide:
  - Clear description of what aspect it tests
  - Why this testing area is critical
  - Brief outline of each test case within the file, as top comments
6. These tests shouldn't use any testing frameworks. Make the outputs verbose enough that you can see exactly what does not work.
7. Start with an initialization test file.
8. Each test file's topmost comment should include how to run it manually.


(Write the files in such a way that each subtest inside is a separate function, and at the end there is one function that orchestrates running all of them)

(Tests should test what actually exists; do not create fallbacks, etc.)


Step-8: Stable Intermediate Form (Anchor Check)

Run all the other probe tests to verify that this implementation did not break anything.

PHASE 4: COMPLETION & DOCUMENTATION

Step-9: Implementation Notes

Document the implementation journey for this feature:

- What was unexpected?
- Deviations from the original plan?
- Technical obstacles faced and how they were resolved?
- Undocumented behaviors found (or not)?
- Integration complexities?
- Technical debt created?
- What to avoid next time?

Document these in devdocs/features/planned/feat_X_[FEATURE_NAME]/implementation_notes.md

Feature Migration

Move this feature to finished status: move it from the planned folder to the finished folder.

End Notes

Writing this book was a journey of discovery. I used AI extensively to process and refine ideas that I’d been following intuitively but couldn’t fully articulate. It revealed gaps in my understanding I didn’t even know existed.

Perhaps one of AI’s greatest gifts to humanity will be helping us understand ourselves better. Whether AI ultimately elevates or challenges us, there’s no denying the magic of building with it.

Join us at: Linkedin VDD Community