
DevAI.md - Context as Infrastructure

From Ad-Hoc Prompting to Systematic AI Development

AI-assisted development is most powerful when you move beyond ad-hoc prompting into repeatable, structured workflows. DevAI.md documents your AI interaction patterns, context engineering strategies, and prompt templates so your entire team benefits from collective learning.

Capture which context structures produce the best AI output, how to chain AI interactions for complex tasks, and which prompt patterns consistently outperform. Version these patterns in markdown so they evolve with your team's AI capabilities.

The teams that win with AI are not writing better prompts - they are building better context infrastructure. Start documenting your AI development patterns and transform experimentation into methodology.

AI Development Best Practices

Proven patterns for integrating AI into your development workflow - from effective prompting to systematic context engineering.

Layer Your Context Strategically

Structure context in layers - project overview, architectural constraints, specific task details. AI assistants perform best when context flows from broad to narrow, with the most relevant details closest to the instruction.

Minimize Context, Maximize Signal

More context is not always better. Identify the minimum effective context for each task type. Irrelevant context dilutes attention and increases token costs. Measure output quality against context size to find the sweet spot.

A/B Test Your Prompts

Document prompt variations and their results side by side. When you find a pattern that consistently produces better output, capture it as a team template. Prompt engineering is empirical - treat it like an experiment.

Design Agentic Workflows

Break complex tasks into sequential AI interactions where each step's output feeds the next step's context. Document the chain - task decomposition, intermediate outputs, and assembly patterns.
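Such a chain can be sketched as a small pipeline where each step builds its prompt from the outputs of earlier steps. This is a minimal illustration, not a prescribed API: `callModel` is a stand-in for whatever AI client you use, and the step names are hypothetical.

```typescript
// A step-by-step AI chain: each step's output is stored and can be
// embedded in later steps' prompts. All names here are illustrative.
type ModelCall = (prompt: string) => Promise<string>;

interface ChainStep {
  name: string;
  // Builds this step's prompt from the outputs of all earlier steps.
  buildPrompt: (previousOutputs: Record<string, string>) => string;
}

async function runChain(
  steps: ChainStep[],
  callModel: ModelCall
): Promise<Record<string, string>> {
  const outputs: Record<string, string> = {};
  for (const step of steps) {
    // Each prompt can reference earlier outputs as context.
    outputs[step.name] = await callModel(step.buildPrompt(outputs));
  }
  return outputs;
}
```

A chain for a feature might run steps named "decompose", "implement", and "assemble", documenting each prompt builder alongside the task breakdown.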

Build Validation Checkpoints

Insert human review points in AI workflows where errors are costly. Document which outputs need verification, what to check for, and common failure modes. Trust but verify - and document the verification criteria.
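A checkpoint can be as simple as a gate that flags output for human review when the task type or the output itself matches your documented high-risk criteria. The task names and patterns below are examples, not a fixed taxonomy; encode whatever your team's verification criteria actually are.

```typescript
// Illustrative review gate: flag AI output for human review when the
// task type or the output matches documented high-risk criteria.
const reviewRequiredTasks = new Set(["auth", "payments", "migration"]);
const riskyPatterns = [/eval\(/, /dangerouslySetInnerHTML/, /DROP TABLE/i];

function needsHumanReview(taskType: string, output: string): boolean {
  // Some task types always require review, regardless of output.
  if (reviewRequiredTasks.has(taskType)) return true;
  // Otherwise, scan the output for known-dangerous constructs.
  return riskyPatterns.some((pattern) => pattern.test(output));
}
```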

Create Reusable Prompt Templates

Identify recurring AI tasks - code review, test generation, refactoring - and create templated prompts with slots for project-specific context. Templates ensure consistency and save composition time across the team.
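One lightweight way to implement slotted templates is plain string substitution. This is a sketch under the assumption of `{{slot}}` placeholders; the template text and slot names are illustrative.

```typescript
// Fill {{slot}} placeholders in a prompt template. Unknown slots are
// left visible so missing context is obvious rather than silently blank.
function fillTemplate(template: string, slots: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in slots ? slots[key] : match
  );
}

// Example team template for code review requests (illustrative).
const codeReviewTemplate = [
  "Review the following {{language}} code for {{focus}}.",
  "Project conventions: {{conventions}}",
  "{{code}}",
].join("\n");
```

Templates like this can live in a shared directory and be filled with project-specific context at prompt time.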

Track AI Output Quality

Log how often AI output is used as-is versus requires editing. Track by task type to identify where AI excels and where context needs improvement. Quality metrics drive targeted context refinements.
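A minimal version of this log is a tally of "used as-is" versus "edited" per task type. The record shape below is an assumption for illustration; adapt the fields to whatever you actually track.

```typescript
// One log entry per AI output you accepted or edited (illustrative shape).
interface OutputLog {
  taskType: string;
  usedAsIs: boolean;
}

// Compute the as-is acceptance rate per task type.
function asIsRateByTask(logs: OutputLog[]): Record<string, number> {
  const totals: Record<string, { asIs: number; total: number }> = {};
  for (const { taskType, usedAsIs } of logs) {
    totals[taskType] ??= { asIs: 0, total: 0 };
    totals[taskType].total += 1;
    if (usedAsIs) totals[taskType].asIs += 1;
  }
  const rates: Record<string, number> = {};
  for (const [task, { asIs, total }] of Object.entries(totals)) {
    rates[task] = asIs / total;
  }
  return rates;
}
```

Low rates for a task type point at the context that most needs refinement.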

Document Human-AI Handoff Points

Clearly define which tasks are AI-suitable, which need human judgment, and which require collaboration. Well-defined boundaries prevent both over-reliance and under-utilization of AI capabilities.

Context Engineering Is the New Core Competency

The developers who get the most from AI are not prompt wizards - they are context engineers who understand how to structure information for AI consumption. This is a learnable, documentable, improvable skill. Capture your context engineering patterns in markdown, share them across the team, and iterate based on results. The quality of your AI output is directly proportional to the quality of your context infrastructure.

The DevAI Template

DevAI.md
# DevAI.md - AI-Assisted Development Patterns
<!-- Configuration and best practices for AI coding assistants -->
<!-- Covers context files, prompt patterns, model tips, and workflow integration -->
<!-- Last updated: YYYY-MM-DD -->

## AI Assistant Configuration

### Claude Code (CLAUDE.md)

This project uses Claude Code as the primary AI coding assistant. The `CLAUDE.md` file in the project root provides persistent context that Claude reads at the start of every session.

**Project CLAUDE.md Structure**:
```markdown
# Project Context - Atlas Platform

## Quick Reference
- Language: TypeScript (strict mode)
- Framework: Next.js 14 (App Router)
- Database: PostgreSQL 16 via Prisma ORM
- Testing: Vitest (unit), Playwright (E2E)
- Package Manager: pnpm (not npm or yarn)

## Critical Rules
- NEVER use `any` type - use `unknown` and narrow with type guards
- ALWAYS check permissions before returning data in API routes
- ALWAYS run `pnpm type-check` after making changes
- Database schema changes require a migration file, never modify schema.prisma directly

## Architecture Patterns
- Server Components by default, 'use client' only when necessary
- Business logic in src/server/services/, never in API routes
- Zod schemas for all API input validation
- Error handling: Result<T> type in services, TRPCError in routers

## File Conventions
- Components: src/components/{feature}/{ComponentName}.tsx
- Services: src/server/services/{entity}.service.ts
- Tests: tests/{unit|integration|e2e}/{feature}.test.ts

## Known Gotchas
- Prisma client must be regenerated after schema changes: `pnpm prisma generate`
- The `user` table uses `id` (UUID), not `userId`
- Date fields are stored as UTC, displayed in user's timezone via formatDate()
- NEVER import from @prisma/client directly - use the re-export from src/server/db
```

### Cursor (.cursorrules)

For teams using Cursor, create a `.cursorrules` file in the project root:
```markdown
You are an expert TypeScript developer working on a Next.js 14 application.

Rules:
1. Use TypeScript strict mode - no `any`, no `as` casts without justification
2. React Server Components by default. Only add 'use client' for interactivity.
3. Use Prisma ORM for database queries. Never write raw SQL.
4. All API inputs validated with Zod schemas.
5. Follow existing patterns - check similar files before writing new code.
6. Include error handling for all async operations.
7. Write Vitest unit tests for service functions.

Project structure:
[paste abbreviated tree here]

Key patterns:
[paste 2-3 code examples of your most common patterns]
```

### GitHub Copilot

Copilot uses comment-driven prompting. Write descriptive comments before functions to guide suggestions:
```typescript
// Validate that the user has permission to access the document.
// Check direct grants first, then walk up the folder hierarchy.
// Return the highest permission level found, or null if no access.
async function resolvePermission(userId: string, documentId: string): Promise<Permission | null> {
  // Copilot will suggest implementation based on the comments above
}
```
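For reference, here is one shape the suggested implementation might take, simplified to synchronous in-memory lookups. The maps stand in for real database queries, and all names beyond the function signature are illustrative assumptions.

```typescript
// Sketch: direct grants first, then walk up the folder hierarchy,
// keeping the highest permission found. In-memory maps stand in for
// the database; a real version would be async.
type Permission = "read" | "write" | "admin";
const rank: Record<Permission, number> = { read: 1, write: 2, admin: 3 };

// documentId -> containing folderId; folderId -> parent folderId (null at root)
const documentFolder = new Map<string, string | null>();
const folderParent = new Map<string, string | null>();
// "userId:targetId" -> directly granted permission
const grants = new Map<string, Permission>();

function resolvePermission(userId: string, documentId: string): Permission | null {
  let best: Permission | null = null;
  const consider = (p?: Permission) => {
    if (p && (best === null || rank[p] > rank[best])) best = p;
  };
  // Direct grant on the document itself.
  consider(grants.get(`${userId}:${documentId}`));
  // Walk up the folder hierarchy, keeping the highest grant found.
  let folder = documentFolder.get(documentId) ?? null;
  while (folder !== null) {
    consider(grants.get(`${userId}:${folder}`));
    folder = folderParent.get(folder) ?? null;
  }
  return best;
}
```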

## Context File Architecture

### The Context Layer Stack

AI assistants work best when context is structured in layers, from most stable to most volatile:

```
Layer 1: Global Rules (rarely changes)
  ~/.claude/CLAUDE.md
  - Personal coding preferences
  - Default language/framework
  - Universal rules (no smart quotes, straight apostrophes)

Layer 2: Project Context (changes per project)
  /project/CLAUDE.md or .cursorrules
  - Tech stack and architecture
  - Coding patterns and conventions
  - File structure and naming
  - Critical rules and gotchas

Layer 3: Feature Context (changes per task)
  /project/.planning/feature-name.md
  - Current feature requirements
  - Design decisions for this feature
  - Related files and dependencies

Layer 4: Session Context (changes per conversation)
  - Verbal instructions in the chat
  - Code you paste into the conversation
  - Files you reference or open
```

### What Makes Good Context

**Include** (high signal, helps AI write better code):
- Concrete code examples of your patterns (not descriptions - actual code)
- Explicit constraints ("never do X", "always do Y")
- File naming and location conventions
- Database schema and key relationships
- Error handling patterns with examples
- Known gotchas specific to your codebase

**Exclude** (noise that dilutes context):
- Generic programming advice the AI already knows
- Obvious language features ("TypeScript supports interfaces")
- Long changelogs or version histories
- Full file contents when a summary would suffice
- Information that changes frequently (put this in session context instead)

### Context File Sizing Guide

| File | Ideal Size | Purpose |
|------|-----------|---------|
| CLAUDE.md | 100-300 lines | Project-level rules and patterns |
| .cursorrules | 50-150 lines | Editor-specific prompt instructions |
| .planning/*.md | 50-200 lines each | Feature-specific context |
| In-chat context | As needed | Volatile, session-specific information |

## Prompt Patterns for Development

### Pattern 1: Show, Don't Tell

Instead of describing what you want, show an example and ask for more like it:

```markdown
Here is how we define a tRPC router in this project:

[paste actual router code from your codebase]

Create a new router for the `notification` entity following this exact pattern.
The notification has these fields: id, userId, message, type (info|warning|error),
read (boolean), createdAt.

Include: list (paginated), markAsRead, markAllAsRead, getUnreadCount.
```

### Pattern 2: Constrained Generation

Tell the AI what NOT to do, which is often more useful than describing what to do:

```markdown
Add a file upload feature to the document editor.

Constraints:
- Do NOT use multer - we use presigned S3 URLs via storage.service.ts
- Do NOT add a new API route - add a tRPC procedure to the existing document router
- Do NOT store file content in the database - only store the S3 key
- Maximum file size: 50MB, allowed types: PDF, PNG, JPEG
- Follow the same error handling pattern as document.service.ts
```

### Pattern 3: Incremental Implementation

Break complex features into steps and validate each one:

```markdown
I need to add real-time notifications. Let's do this step by step.

Step 1: First, create the database schema for notifications.
Show me the Prisma schema addition and migration.
[Wait for response, review, then continue]

Step 2: Now create the notification service with these methods:
create, listForUser, markAsRead, getUnreadCount.
[Wait for response, review, then continue]

Step 3: Now add the tRPC router that exposes these as API procedures.
[And so on]
```

### Pattern 4: Bug Investigation

Structure bug reports for AI with all relevant context:

```markdown
Bug: Users see stale data after updating their profile.

Symptoms:
- User changes their name in settings
- After saving, the sidebar still shows the old name
- Hard refresh (Ctrl+Shift+R) fixes it
- Only affects the sidebar component, the settings page shows the new name

Relevant code:
- Settings page: src/app/(dashboard)/settings/page.tsx
- Sidebar: src/components/shared/sidebar.tsx
- User query: src/server/trpc/routers/user.ts (getProfile procedure)

Hypothesis: The sidebar is using cached data from a server component
and not invalidating after the mutation.

What is the correct fix? Show me the code changes needed.
```

### Pattern 5: Code Review Request

Ask AI to review code before submitting a PR:

```markdown
Review this code for:
1. Security issues (SQL injection, XSS, auth bypass)
2. Performance problems (N+1 queries, unnecessary re-renders)
3. Error handling gaps
4. Deviations from our codebase patterns

[paste the code diff or file contents]

For each issue found, explain why it is a problem and suggest a specific fix.
```

## Model-Specific Tips

### Claude (Opus / Sonnet)
- Excels at understanding full project context - provide your CLAUDE.md
- Strong at multi-file edits - ask it to update all affected files at once
- Give explicit file paths: "edit src/server/services/user.service.ts" not "edit the user service"
- For complex tasks, ask it to explain its plan before implementing

### GPT-4o / o1
- Works best with concrete examples in the prompt
- Chain-of-thought prompting helps with complex logic
- For debugging, provide the full error message and stack trace
- Less reliable with multi-file context - focus on one file at a time

### Copilot
- Works from comments and function signatures - write descriptive comments first
- Tab completion is most effective when you start typing the pattern you want
- Ghost text is better for boilerplate; Chat is better for design decisions
- The @workspace command searches your full codebase for relevant context

## Workflow Integration

### AI in the Development Cycle

```
1. Plan    -> Use AI to break down requirements into tasks
2. Design  -> Use AI to evaluate architecture options (ADR format)
3. Implement -> Use AI to write code following your patterns
4. Test    -> Use AI to generate test cases and edge cases
5. Review  -> Use AI to pre-review code before human review
6. Debug   -> Use AI to investigate bugs with structured context
7. Document -> Use AI to generate documentation from code
```

### When to Use AI vs. When to Think First

**Good for AI** (let it do the heavy lifting):
- Boilerplate code that follows established patterns
- Writing tests for existing functions
- Debugging with clear error messages
- Refactoring to match a new pattern you have defined
- Generating documentation from code
- Database schema design and migration scripts

**Think first, then use AI** (your judgment matters more):
- Architecture decisions (AI can evaluate options, you choose)
- Security-sensitive code (AI can draft, you must review carefully)
- Performance optimization (AI can suggest, you must measure)
- API design (AI can implement, you define the contract)
- User-facing copy and error messages (AI can draft, you humanize)

### Context Maintenance

Keep your context files current as your project evolves:

```markdown
## Monthly Context Review Checklist
- [ ] Update CLAUDE.md if tech stack or patterns have changed
- [ ] Remove gotchas that have been fixed
- [ ] Add new gotchas discovered this month
- [ ] Update code examples if patterns have evolved
- [ ] Review .planning/ docs - archive completed features
- [ ] Check that file structure documentation matches reality
```

## Measuring AI Effectiveness

### Track These Metrics (Informally)
- Time from task start to first working implementation
- Number of back-and-forth messages to get correct code
- Frequency of AI-generated bugs caught in review
- Test coverage of AI-generated code vs. hand-written code

### Continuous Improvement
- Keep a personal log of effective prompts - reuse them as templates
- Share useful patterns with your team in #ai-tips or similar channel
- When AI gets something wrong repeatedly, add it to the "gotchas" in CLAUDE.md
- Review AI-generated code with the same rigor as human-written code

Why Markdown Matters for AI-Native Development

AI-First Development

AI coding assistants are most effective with structured context. DevAI.md documents your prompt engineering patterns, context optimization strategies, and AI workflow integrations. Best practices for AI-assisted development become versioned knowledge. Every developer benefits from collective learning.

Context Engineering

The quality of AI output depends on the quality of context you provide. DevAI.md captures effective prompting patterns, context structuring techniques, and AI interaction workflows. Transform ad-hoc experimentation into repeatable methodology. Context becomes your competitive advantage.

Agentic Workflows

AI agents need clear instructions and proper context to be effective. DevAI.md documents agent capabilities, integration patterns, and workflow automations in markdown. Your development environment becomes AI-native. Repetitive tasks get automated systematically.

"AI-augmented development requires a new kind of infrastructure - one that captures not just code and requirements, but the patterns and practices for effective AI collaboration. DevAI.md provides that foundation."


About DevAI.md

Our Mission

Built by AI engineers who understand that prompt engineering is just the beginning.

We believe the next phase of AI-assisted development requires structured context engineering. It's not enough to write good prompts - you need to architect how context flows through your development environment. DevAI.md helps teams document their AI integration patterns, prompt templates, and context optimization strategies in markdown that evolves with their AI workflows.

Our goal is to help development teams transition from ad-hoc AI experimentation to systematic AI integration. When your AI collaboration patterns are captured in versioned .md files, best practices spread across your team, AI assistants become more effective, and your development environment becomes truly AI-native.

Why Markdown Matters

AI-Native

LLMs parse markdown better than any other format. Fewer tokens, cleaner structure, better results.

Version Control

Context evolves with code. Git tracks changes, PRs enable review, history preserves decisions.

Human Readable

No special tools needed. Plain text that works everywhere. Documentation humans actually read.

Experimenting with AI-assisted development? Want to share your AI integration patterns? Let's learn together.