The 9 Metrics that Matter: Moving Beyond the Linter
Part 2 of our series: "The Self-Correcting Roadmap: From Readiness to Evolution."

For decades, we've used linters and static analysis to tell us if our code is "good." We measured cyclomatic complexity, test coverage, and line length. But these metrics were designed for human eyes.
When an autonomous agent (an "Agentic Dev") enters your repo, it doesn't care about your pretty indentation or your 80-character line limits. It cares about Signal-to-Noise Ratio and Contextual Density.
At AIReady, we've identified the 9 Metrics of Agency. These are the indicators that determine whether an agent will finish a task in 5 minutes or fail after 5 hours of hallucinations.
1. Semantic Scannability
How quickly can an LLM identify the purpose of a module without reading the implementation? This is driven by naming conventions that act as "GPS coordinates" for context. If your filenames are `util.ts` or `helper.ts`, your scannability is near zero.
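As a rough sketch of the idea (the name list and scoring formula here are illustrative, not AIReady's actual heuristic), a scannability check can flag generic filenames that carry no routing signal:

```typescript
// Hypothetical scannability score. Generic basenames force an agent to open
// the file to learn anything; descriptive, multi-word names act as signposts.
const GENERIC_NAMES = new Set(["util", "utils", "helper", "helpers", "misc", "common"]);

function semanticScannability(filename: string): number {
  const base = filename.replace(/\.[^.]+$/, "").toLowerCase(); // strip extension
  if (GENERIC_NAMES.has(base)) return 0; // pure noise: no signal at all
  const words = base.split(/[-_.]/).filter(Boolean);
  // More domain words => more "GPS coordinates" for the agent, capped at 1.
  return Math.min(words.length / 3, 1);
}

console.log(semanticScannability("util.ts"));                   // 0
console.log(semanticScannability("invoice-tax-calculator.ts")); // 1
```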
2. Navigation Tax (The Jump Ratio)
The number of files an agent must read to understand a single function. If changing a button color requires reading 5 different CSS/TS files, you are charging a 500% Navigation Tax.
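The button-color scenario above can be made concrete with a small sketch (the dependency graph and the percentage framing are illustrative): the tax is simply the number of files an agent must open, expressed against a one-file baseline.

```typescript
// Illustrative jump-ratio calculator: how many files an agent must read
// to understand a single entry point. The graph below is hypothetical.
type DepGraph = Record<string, string[]>;

function navigationTax(graph: DepGraph, entry: string): number {
  const seen = new Set<string>();
  const stack = [entry];
  while (stack.length) {
    const file = stack.pop()!;
    if (seen.has(file)) continue;
    seen.add(file);
    stack.push(...(graph[file] ?? [])); // follow transitive dependencies
  }
  // 1 file for 1 function is the 100% floor; 5 files is a 500% tax.
  return seen.size * 100;
}

const graph: DepGraph = {
  "Button.tsx": ["button.css", "theme.ts", "tokens.ts", "colors.ts"],
};
console.log(navigationTax(graph, "Button.tsx")); // 500 (% tax)
```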
3. Token ROI (Information Density)
The ratio of "Logic per Token." Boilerplate-heavy code (like legacy Redux or verbose Java-style patterns) forces the agent to waste its limited context window on non-functional noise.
4. Explicit Signal Clarity
Are side effects documented in the type signatures? An agent shouldn't have to guess if a function writes to a database or triggers a webhook.
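One way to surface side effects in the signature itself (a sketch, not a prescribed API; the `Effect` type and function names are hypothetical) is to declare effects as data in the return type:

```typescript
// The return type names every effect the call can trigger, so an agent
// reads the type instead of spelunking through the implementation.
type Effect =
  | { kind: "db-write"; table: string }
  | { kind: "webhook"; url: string };

interface OrderResult {
  orderId: string;
  effects: Effect[]; // explicit signal: no guessing about side effects
}

function placeOrder(sku: string): OrderResult {
  // Effects are declared as data; a real system would also execute them.
  return {
    orderId: `ord-${sku}`,
    effects: [
      { kind: "db-write", table: "orders" },
      { kind: "webhook", url: "https://example.com/order-placed" },
    ],
  };
}

console.log(placeOrder("sku-1").effects.map(e => e.kind)); // ["db-write", "webhook"]
```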
5. Dependency Fragmentation
How scattered are the dependencies for a single feature? High fragmentation leads to the "Context Window Crisis" we discussed in Part 1.
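A crude proxy for fragmentation (the scoring rule and file layout here are illustrative) is the number of distinct directories a single feature's files span:

```typescript
// Hypothetical fragmentation score: distinct directories one feature touches.
// More directories means more scattered context the agent must gather.
function fragmentation(featureFiles: string[]): number {
  const dirs = new Set(featureFiles.map(f => f.split("/").slice(0, -1).join("/")));
  return dirs.size;
}

console.log(fragmentation([
  "src/checkout/cart.ts",
  "src/checkout/cart.test.ts",
])); // 1 -- cohesive: everything lives together

console.log(fragmentation([
  "src/components/Cart.tsx",
  "src/store/cartSlice.ts",
  "src/api/cart.ts",
  "src/styles/cart.css",
])); // 4 -- fragmented: four directories for one feature
```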
6-9: The Hidden Infrastructure
- Standard Divergence: How much a specific file deviates from the repo's established patterns.
- Aesthetic Consistency: Yes, agents are sensitive to visual structure. Messy formatting leads to attention decay in the model.
- Reasoning Breadth: The horizontal scope of a change, i.e. how many modules an agent must reason across at once to make it safely.
- Validation Readiness: Are there existing tests that an agent can run and interpret without human help?
In our next entries, we'll look at how AIReady uses these metrics to generate a "Readiness Score" that is objectively measurable and enforceable.
In Part 3, we'll explore The Living & Lawful Documentation: How to turn these metrics into a self-enforcing standard.
How do your metrics stack up?

```shell
npx @aiready/cli scan --metrics
```
Join the Discussion
Have questions or want to share your AI code quality story? Drop them below. I read every comment.