A Claude Code skill that enforces research-validated development across the full product lifecycle.
This skill adds mandatory research and validation gates to every phase of development. No code gets written without first researching best practices, validating findings against multiple sources, and creating research-informed specifications.
RESEARCH → VALIDATE → PLAN → IMPLEMENT → REVIEW → VERIFY → DELIVER
Each phase produces an artifact. Each artifact is validated before the next phase begins. Phases cannot be skipped.
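The gating rule above can be sketched in a few lines. This is a hypothetical illustration, not part of the skill itself: phase names come from the pipeline, but the `Lifecycle` class and its artifact bookkeeping are invented for this example.

```python
from dataclasses import dataclass, field

# Phase order from the pipeline above; each phase must leave an artifact behind.
PHASES = ["RESEARCH", "VALIDATE", "PLAN", "IMPLEMENT", "REVIEW", "VERIFY", "DELIVER"]

@dataclass
class Lifecycle:
    artifacts: dict = field(default_factory=dict)  # phase name -> artifact path

    def complete(self, phase: str, artifact: str) -> None:
        idx = PHASES.index(phase)
        # Quality gate: every earlier phase must already have produced its artifact.
        missing = [p for p in PHASES[:idx] if p not in self.artifacts]
        if missing:
            raise RuntimeError(f"Cannot enter {phase}: missing artifacts for {missing}")
        self.artifacts[phase] = artifact

run = Lifecycle()
run.complete("RESEARCH", "research/findings.md")
run.complete("VALIDATE", "research/validation.md")
# run.complete("IMPLEMENT", "src/")  # would raise: PLAN has no artifact yet
```

The point is structural: skipping a phase is not a policy violation to be caught in review, it is an error the workflow cannot express.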
Most AI coding workflows jump straight from idea to implementation. This leads to:
- Wrong technology choices discovered too late
- Patterns that contradict community best practices
- Architectures that don't fit the existing codebase
- Costly rework when better approaches existed all along
Research-Driven Development fixes this by making research the foundation of every decision.
This skill orchestrates existing skills (superpowers TDD, code review, verification, etc.) while adding the missing pieces:
- Mandatory research before any planning or implementation
- Multi-source validation against community best practices
- Research-informed specs (plans reference findings, not assumptions)
- Model routing based on task complexity (ACR scoring)
- Full lifecycle coverage including delivery and infrastructure
- Explicit quality gates between every phase
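Model routing on ACR scores could look roughly like the sketch below. The score bands and model tier names here are illustrative assumptions, not the skill's actual thresholds; see the estimating-agent-tasks skill for how ACR scores are produced.

```python
def route_model(acr_score: float) -> str:
    """Map an ACR (complexity) score in [0, 1] to a model tier.

    Thresholds and tier names are hypothetical examples.
    """
    if not 0.0 <= acr_score <= 1.0:
        raise ValueError(f"ACR score out of range: {acr_score}")
    if acr_score < 0.3:
        return "fast-tier"      # small fixes, mechanical edits
    if acr_score < 0.7:
        return "balanced-tier"  # typical feature work
    return "frontier-tier"      # architectural or ambiguous tasks
```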
```bash
# Copy the skill to your Claude Code skills directory
mkdir -p ~/.claude/skills/research-driven-development
cp SKILL.md ~/.claude/skills/research-driven-development/
```

This skill follows the open SKILL.md standard and is compatible with:
- Claude Code
- OpenAI Codex CLI
- Gemini CLI
- Cursor
- Any tool supporting SKILL.md format
This skill works best with:
- superpowers — Core development skills (TDD, debugging, planning, code review)
- estimating-agent-tasks — ACR complexity scoring for model selection
This skill was designed based on extensive research of community patterns and industry best practices:
- Anthropic 2026 Agentic Coding Trends Report
- Spec-Driven Development (GitHub)
- How to Write a Good Spec for AI Agents (Addy Osmani)
- Multi-Agent Workflows That Don't Fail (GitHub Blog)
- Agentic Workflows for Software Development (McKinsey/QuantumBlack)
- Quality Gates for AI Code Reviews (CodeRabbit)
Community frameworks studied: Metaswarm, Spec-Flow, cc-sdd, sudocode, gbFinch/agentic-orchestration, Conductor-Orchestrator, Claude-Octopus, wshobson/agents, and more than ten others.
MIT