Agent rules as an adoption metric?
This is more intuition than anything, but I think the kind of rules your teams are deploying for their AI agents (e.g., Claude, Cursor, etc.) might be a decent indicator of how well they're actually integrating the tools into their workflow.
Maybe even better than purely looking at metrics like token usage - because at first you burn through tokens quickly and may not actually be productive. And not to get sidetracked, but tab-completion usage (à la Cursor's metrics) probably decreases as you get more advanced with your AI usage. It certainly has for me.
But I think agent rules correlate pretty closely with actual AI integration.
The more specific, tactical, and context-aware your rules become, the more you're actually shipping with AI rather than just playing with it. Let me explain.
The vague beginnings
Early on, I think most teams start with aspirational or generic instructions:
- "You're the best Go developer; you must write perfectly idiomatic Go blah blah"
- "Always follow best practices"
- "Write clean, maintainable code"
- "Be helpful and accurate"
These sound good on paper, but they're pointless. They're the AI equivalent of telling a new hire to "just do good work" without any context about your systems, conventions, or business logic.
Teams start out like this because at the beginning they're just experimenting, and maybe still skeptical. They might be in the "I gave it a vague prompt and it wrote shit code - AI is hype" phase.
They haven't yet realized that AI is basically a super smart - but super junior - developer. And just like you coach junior devs, you have to coach your AI tools.
The shift to tactical specificity
I think teams that actually ship with AI tools - and get shit done safely - develop increasingly specific, domain-aware rules:
Business logic specificity:
- "Always use
useCustomerDataHook
to fetch customer information because it handles our auth correctly" - "For payment processing, use the
PaymentService.processTransaction()
method, which includes our fraud detection pipeline."
Technical context awareness:
- "Ensure that you handle null columns using
sql.NullString
For example: blah blah" - "Never use Go's default http client. Always use an HTTP client with proper configuration, including timeouts and connection limits. For example: blah blah
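To make that concrete, here's a rough, hypothetical sketch of the kind of example those rules might embed - the table, query, and function names are invented, but the `sql.NullString` scan and the explicitly configured `http.Client` are the patterns the rules are pointing at:

```go
// Hypothetical example of the kind a rules file might embed.
// Table, column, and function names are made up; the patterns are the point.
package example

import (
	"database/sql"
	"net"
	"net/http"
	"time"
)

// Scan a nullable column into sql.NullString instead of a plain string,
// so a NULL value doesn't blow up the query.
func customerNickname(db *sql.DB, id int64) (string, error) {
	var nickname sql.NullString
	err := db.QueryRow("SELECT nickname FROM customers WHERE id = ?", id).Scan(&nickname)
	if err != nil {
		return "", err
	}
	if !nickname.Valid {
		return "", nil // column was NULL
	}
	return nickname.String, nil
}

// Build an HTTP client with explicit timeouts and connection limits
// instead of relying on http.DefaultClient, which has no timeout at all.
func newHTTPClient() *http.Client {
	return &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			DialContext: (&net.Dialer{
				Timeout:   5 * time.Second,
				KeepAlive: 30 * time.Second,
			}).DialContext,
			MaxIdleConns:        100,
			MaxIdleConnsPerHost: 10,
			IdleConnTimeout:     90 * time.Second,
			TLSHandshakeTimeout: 5 * time.Second,
		},
	}
}
```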
Organizational knowledge:
- "When creating API endpoints, follow the pattern in
handlers/*.go
and always include rate limiting middleware" - "Legacy model handlers in models-old/*.go use an outdated access pattern, DO NOT reuse this package for net new code".
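Here's a minimal, hypothetical sketch of what that rate limiting middleware could look like, using `golang.org/x/time/rate`; the limits and names are placeholders, not a prescription:

```go
// Hypothetical rate limiting middleware of the kind such a rule might point at.
package middleware

import (
	"net/http"

	"golang.org/x/time/rate"
)

// RateLimit wraps a handler with a single global token-bucket limiter:
// at most rps requests per second, with a burst of `burst`.
func RateLimit(next http.Handler, rps float64, burst int) http.Handler {
	limiter := rate.NewLimiter(rate.Limit(rps), burst)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

// Usage, e.g.: mux.Handle("/api/customers", RateLimit(customersHandler, 50, 100))
```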
Why this matters
The specificity of your rules reveals a bunch about your org:
Codified tribal knowledge: Congrats, you're starting to encode the unwritten knowledge that usually lives in developers' heads - the stuff that pops up frequently in reviews for new hires but never makes it into docs.
Real production usage: Specific rules only emerge when you're actually building real features, hitting real edge cases, getting and giving reviews of generated code and solving real problems with AI assistance.
Systematic thinking: Advanced rules show that you've moved beyond ad-hoc AI usage to systematic integration. Folks are not just asking ChatGPT for help; they're building AI into their actual development workflow and guiding it to produce code that conforms to your eng culture.
Building better rules
Start documenting the specific patterns and gotchas that come up in your codebase.
This is all shit you'd probably be telling a new hire in pairing sessions, or that's already coming up in reviews. In fact, that's likely where you should start looking for your rules: in PR reviews.
Go back and look at recent reviews. Anything you see frequently in PR comments or in feedback to new hires - drop that in a rule.
- Things your new hires have trouble with? Drop that in a rule, because the thinking rocks will have problems with it too.
- Some gotcha gets uncovered in a PR? Put it in a rule.
- Hear someone say "the AI keeps getting error handling wrong"? Put an example in a rule (like the sketch below).
The best rules are born from real pain points, real debugging sessions, and real production issues. They're your code base's sharp edges, encoded in a way that AI can understand and apply.
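For instance, if "the AI keeps getting error handling wrong" is the complaint, the rule might embed a tiny do/don't snippet like this - a hypothetical sketch where the names don't matter, only the `%w` wrapping pattern does:

```go
// Hypothetical snippet a team might paste into an error handling rule.
// Order, fetchOrder, and the error text are stand-ins; %w wrapping is the point.
package orders

import (
	"errors"
	"fmt"
)

type Order struct{ ID string }

var errNotFound = errors.New("order not found")

// Stand-in for a real datastore call.
func fetchOrder(id string) (*Order, error) {
	return nil, errNotFound
}

// Don't: return err bare and lose all context.
// Do: wrap with fmt.Errorf and %w so callers can still use errors.Is/errors.As
// and logs say where the failure happened.
func loadOrder(id string) (*Order, error) {
	o, err := fetchOrder(id)
	if err != nil {
		return nil, fmt.Errorf("loading order %s: %w", id, err)
	}
	return o, nil
}
```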
If your rules are getting more specific and tactical over time, congratulations - you're probably actually adopting AI tools rather than just burning tokens fucking around with Codex/Claude/Cursor.
...and then?
Agents and /slash commands. They won't be successful until you have good rules, and I think they follow a similar maturity curve. But that's probably a post for another day.