Symposia packages the instructions that teach AI agents how to work in your project — so Claude, Cursor, Copilot, and every other tool your team uses just gets it.
+ adx-types@1.2.0
+ adx-context@1.0.3
2 packages installed across 7 agents
Without shared instructions, every AI agent starts from scratch in every project. Your team deserves better.
New repo? New AI config. Same conventions get rewritten, differently, every time. Your team wastes hours recreating what already exists.
That great CLAUDE.md someone wrote? It's been forked into 12 repos. Each one drifted. Nobody knows which version is current.
One engineer gets great AI output, another fights it all day. The difference isn't skill — it's configuration. But nobody's measuring it.
Diagnose your AI readiness, install versioned instruction packages, and watch your scores climb.
Score your codebase across 6 dimensions of AI readiness — types, testing, context, structure, readability, and guardrails.
Add versioned instruction packages that teach every AI agent your team's conventions. One install, all 7+ agents configured.
Track your team's AI-native score over time. See adoption, catch drift, and prove impact with real data.
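Here's roughly what that loop could look like from the terminal. Treat it as a sketch: `sym install` appears in the onboarding flow below, but `sym score`, its output format, and the numbers shown are illustrative assumptions, not documented behavior.

$ sym score                            # hypothetical subcommand: print the current ADX score
ADX score: 41/100 (types, testing, context, structure, readability, guardrails)

$ sym install adx-types adx-context
+ adx-types@1.2.0
+ adx-context@1.0.3
2 packages installed across 7 agents

$ sym score                            # re-run later to track the climb
ADX score: 68/100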
Most engineering teams are at Level 1 or 2 — everyone configures AI their own way, with wildly different results. Symposia takes you to Level 4+ by making AI instructions a shared, versioned, measurable concern.
Find your level:
Level 1: No AI instructions. Every developer configures their own AI randomly.
Level 2: Some CLAUDE.md / .cursorrules files exist. Hand-written, inconsistent.
Level 3: Symposia packages installed. Team conventions codified and versioned.
Level 4: Analytics tracking adoption. Drift detection in CI. Impact data.
Level 5: Instructions are a first-class concern — versioned, reviewed, tested, governed.
When one engineer gets great AI output, the whole team should too. Symposia makes that the default, not the exception.
Share internal conventions without making them public. Your proprietary patterns stay in your org.
See which packages are installed where, track adoption across repos, and measure AI effectiveness over time.
A GitHub Action that catches when repos drift from your team's AI standards. Fix it before it ships.
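A minimal sketch of what that check could do, assuming a hypothetical `sym check` subcommand that exits non-zero when a repo's installed instructions no longer match their pinned versions; the subcommand name and output are assumptions:

$ sym check                            # hypothetical: diff local files against pinned packages
✗ CLAUDE.md drifted from adx-types@1.2.0 (local edits not in any published version)
# non-zero exit fails the workflow, so drift blocks the merge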
Pin your team to specific versions. Roll out updates deliberately. Review instruction changes in PRs like code.
New team member? sym install. They get every convention, every pattern, configured for every AI tool they use.
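Taken literally, that means a bare `sym install` restores everything from a manifest committed to the repo. A sketch under that assumption, reusing the demo output above:

$ sym install                          # no arguments: assumes it reads the repo's pinned manifest
+ adx-types@1.2.0
+ adx-context@1.0.3
2 packages installed across 7 agents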
Before-and-after ADX scores give your team real numbers. Show leadership that AI investment is paying off.
We're building the package manager for AI coding instructions. Drop your email and we'll let you know when it's ready.