Documentation
Make your codebase AI-ready with our suite of analysis tools
🚀 Quick Start
Get started in seconds with zero configuration:
Installation
You can use AIReady tools without installation via npx, or install globally for faster runs:
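For example, using the package names that appear throughout these docs:

```shell
# One-off run with no install
npx @aiready/cli scan ./src

# Or install globally for faster repeat runs
npm install -g @aiready/cli
```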
Use with AI Agent
Prefer using AI agents like Cline, Cursor, GitHub Copilot Chat, or ChatGPT? Copy these ready-to-use prompts and paste them into your agent to run AIReady analysis.
💡 These prompts include step-by-step instructions for the AI agent to run the analysis and provide actionable recommendations.
🔍 Basic Scan
Quick analysis to identify top issues and get your AI Readiness Score.
📊 Detailed Analysis
Comprehensive analysis with prioritized recommendations and impact assessment.
🔧 Fix Issues
Have your AI agent automatically fix the top 3 critical issues and verify improvements.
💡 Pro Tips
- These prompts work with any AI agent that can execute terminal commands
- The agent will run the commands locally and analyze the results
- All analysis happens on your machine; no code is uploaded
- Customize the prompts to focus on specific tools or issues
Tools
Pattern Detection
`@aiready/pattern-detect`: Find semantic duplicates that look different but do the same thing
✨ Features
- ✓ Semantic detection using Jaccard similarity on AST tokens
- ✓ Pattern classification (API handlers, validators, utilities)
- ✓ Token cost analysis showing wasted AI context budget
- ✓ Auto-excludes tests and build outputs
- ✓ Adaptive threshold based on codebase size
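A minimal sketch of the semantic-detection idea, assuming Jaccard similarity over token sets. The real tool tokenizes ASTs; the naive `tokenize` below is a stand-in for illustration only:

```typescript
// Naive stand-in for AST tokenization: split code into lowercase
// identifier-like tokens. The real tool walks the AST instead.
function tokenize(code: string): Set<string> {
  return new Set(code.toLowerCase().match(/[a-z_$][a-z0-9_$]*/g) ?? []);
}

// Jaccard similarity: |A ∩ B| / |A ∪ B|, in [0, 1].
function jaccard(a: Set<string>, b: Set<string>): number {
  const intersection = Array.from(a).filter((t) => b.has(t)).length;
  const union = new Set(Array.from(a).concat(Array.from(b))).size;
  return union === 0 ? 0 : intersection / union;
}

const fnA = "function getUser(id) { return db.find(id); }";
const fnB = "function fetchUser(id) { return db.find(id); }";
console.log(jaccard(tokenize(fnA), tokenize(fnB)) > 0.7); // prints: true
```

Two functions that differ only in naming share most tokens, so their similarity lands above the default 0.7 threshold even though their text differs.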
🚀 Quick Start
```shell
# Run without installation
npx @aiready/pattern-detect ./src

# Or use unified CLI
npx @aiready/cli scan ./src
```
📊 Example Output
```
📊 Duplicate Pattern Analysis
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📁 Files analyzed: 47
⚠️ Duplicate patterns: 12 files with 23 issues
💰 Wasted tokens: 8,450

CRITICAL (6 files)
  src/handlers/users.ts - 4 duplicates (1,200 tokens)
  src/handlers/posts.ts - 3 duplicates (950 tokens)
```
AI Readiness Scoring
📊 One Number, Complete Picture
Get a unified 0-100 score combining all three tools with proven default weights:
Default Weights
- Pattern Detection: 40%
- Context Analysis: 35%
- Consistency: 25%
Rating Scale
- 90-100 Excellent
- 75-89 Good
- 60-74 Fair
- 40-59 Needs Work
- 0-39 Critical
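A sketch of how the default weights and rating scale combine, assuming each tool reports a normalized 0-100 score. The function names are illustrative, not the CLI's actual API:

```typescript
// Default weights from the docs; per-tool scores assumed normalized to 0-100.
const DEFAULT_WEIGHTS = { patterns: 0.4, context: 0.35, consistency: 0.25 };

function overallScore(scores: { patterns: number; context: number; consistency: number }): number {
  return Math.round(
    scores.patterns * DEFAULT_WEIGHTS.patterns +
      scores.context * DEFAULT_WEIGHTS.context +
      scores.consistency * DEFAULT_WEIGHTS.consistency
  );
}

// Maps a 0-100 score onto the rating scale above.
function rating(score: number): string {
  if (score >= 90) return "Excellent";
  if (score >= 75) return "Good";
  if (score >= 60) return "Fair";
  if (score >= 40) return "Needs Work";
  return "Critical";
}

const score = overallScore({ patterns: 82, context: 70, consistency: 90 });
console.log(score, rating(score)); // prints: 80 Good
```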
🎯 Customizable Weights
Adjust weights to match your team's priorities:
💡 Tip: Use --threshold 75 to enforce minimum scores in CI/CD pipelines.
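If the CLI exits non-zero when the score falls below `--threshold` (worth verifying against your version), the tip above translates into a CI step such as this hypothetical GitHub Actions fragment:

```yaml
# Hypothetical CI step; flag taken from the tip above
- name: Enforce AI Readiness score
  run: npx @aiready/cli scan ./src --threshold 75
```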
🚀 Forward-Compatible & Flexible
Forward-Compatible
- ✓ Scores remain comparable as new tools are added
- ✓ New tools are opt-in via the `--tools` flag
- ✓ Existing scores unchanged when new tools launch
- ✓ Historical trends stay valid for tracking progress
Fully Customizable
- ✓ Run any tool combination you need
- ✓ Adjust weights for your team's priorities
- ✓ Override defaults via config files
- ✓ Scoring is optional (backward compatible)
Understanding Metrics
📊 Fragmentation
Measures how scattered related code is across directories. Impacts AI's ability to load context efficiently.
`fragmentation = (unique_directories - 1) / (total_files - 1)`

🔄 Duplication Density
Ratio of files with semantic duplicates. High density indicates systematic copy-paste patterns.
`density = files_with_duplicates / total_files_analyzed`

🪙 Token Waste
Estimated tokens consumed by duplicate code when loaded into AI context windows.
Example: 24 duplicates consuming 20,300 tokens = ~25% of a typical 80K context budget wasted
📏 Context Budget
Total tokens (file + all dependencies) needed to provide full context for AI edits.
`budget = file_tokens + sum(dependency_tokens)`

🔗 Import Depth
Maximum levels of transitive imports. Deep chains make it harder for AI to understand full context.
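The formulas above can be sketched directly. Names are illustrative, not the tools' actual API:

```typescript
// fragmentation = (unique_directories - 1) / (total_files - 1)
function fragmentation(uniqueDirectories: number, totalFiles: number): number {
  if (totalFiles <= 1) return 0; // a single file cannot be fragmented
  return (uniqueDirectories - 1) / (totalFiles - 1);
}

// density = files_with_duplicates / total_files_analyzed
function duplicationDensity(filesWithDuplicates: number, totalFilesAnalyzed: number): number {
  return totalFilesAnalyzed === 0 ? 0 : filesWithDuplicates / totalFilesAnalyzed;
}

// budget = file_tokens + sum(dependency_tokens)
function contextBudget(fileTokens: number, dependencyTokens: number[]): number {
  return fileTokens + dependencyTokens.reduce((sum, t) => sum + t, 0);
}

// 12 of 47 analyzed files had duplicates, as in the example output above:
console.log(duplicationDensity(12, 47).toFixed(2)); // prints: 0.26
console.log(contextBudget(1200, [400, 250, 150])); // prints: 2000
```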
Unified CLI
The `@aiready/cli` package provides a unified interface to run all tools:
The CLI automatically formats results, handles errors, and provides a consistent experience across all tools.
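For example, the single scan entry point shown in the Quick Start above:

```shell
# One command runs all tools against a directory
npx @aiready/cli scan ./src
```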
Visualize
Generate an interactive HTML visualization from an AIReady JSON report. The repo includes a convenience script exposed as the `visualize` npm script.
Options
- `--report <file>`: Path to an existing report JSON. Auto-detects the latest report in the `.aiready/` directory (pattern: `aiready-report-*.json`)
- `--output <file>`: Output HTML path (default: `packages/visualizer/visualization.html`)
- `--open`: Open the generated visualization in the default browser
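For example (the specific report filename below is hypothetical, matching the `aiready-report-*.json` pattern):

```shell
# Auto-detect the latest report and open the result in a browser
npm run visualize -- --open

# Or point at a specific report and output path
npm run visualize -- --report .aiready/aiready-report-example.json --output out.html
```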
Consulting Audit
Are you an AI Consultant or Architect auditing codebases for readiness? Use this prompt to generate a professional, data-backed report for your clients.
📊 Professional Audit
This workflow produces a structured report and interactive visualization to identify systemic issues and token ROI.
White-label Reports
Export scan results to JSON to feed your own custom templates or AI synthesis engines.
Readiness ROI
Translate token waste into real dollar savings for your clients by optimizing context windows.
CLI Options
- `--output <path>`: Save results to a JSON file (default: `.aiready/<tool>-results.json`)
- `--exclude <patterns>`: Glob patterns to exclude (comma-separated)
- `--include-tests`: Include test files in analysis (default: excluded)
- `--threshold <number>`: Similarity threshold for pattern detection (0-1, default: 0.7)
Contributing
We welcome contributions! AIReady is open source and available on GitHub. Star us on GitHub or report issues for any of our tools.