Package manager for AI coding instructions

Make your codebase
& team AI-native

Symposia packages the instructions that teach AI agents how to work in your project — so Claude, Cursor, Copilot, and every other tool your team uses just gets it.

~/your-project
$ sym audit
  types     4/10
  context   2/10
  testing   7/10
$ sym install adx-types adx-context
+ adx-types@1.2.0
+ adx-context@1.0.3
2 packages installed across 7 agents
$ sym audit --score
8.1/10  (+3.4 from baseline)
$ _

Works with Claude Code, Cursor, Windsurf, GitHub Copilot, Aider, Roo Code, Amplify

Your AI tools are flying blind

Without shared instructions, every AI agent starts from scratch in every project. Your team deserves better.

Every project starts from zero

New repo? New AI config. The same conventions get rewritten, differently, every time. Your team wastes hours recreating what already exists.

Copy-paste drift

That great CLAUDE.md someone wrote? It's been forked into 12 repos. Each one drifted. Nobody knows which version is current.

Wildly inconsistent results

One engineer gets great AI output, another fights it all day. The difference isn't skill — it's configuration. But nobody's measuring it.

Three commands. Every agent.

Diagnose your AI readiness, install versioned instruction packages, and watch your scores climb.

01
sym audit

Diagnose

Score your codebase across 6 dimensions of AI-readiness — types, testing, context, structure, readability, and guardrails.

02
sym install

Install

Add versioned instruction packages that teach every AI agent your team's conventions. One install, all 7+ agents configured.

03
sym audit --score

Measure

Track your team's AI-native score over time. See adoption, catch drift, and prove impact with real data.

AI-Native Maturity Model

Where does your team fall?

Most engineering teams are at Level 1 or 2 — everyone configures AI their own way, with wildly different results. Symposia takes you to Level 4+ by making AI instructions a shared, versioned, measurable concern.

Find your level
1

Ad Hoc

No AI instructions. Every developer configures their own AI randomly.

2

Aware

Some CLAUDE.md / .cursorrules files exist. Hand-written, inconsistent.

3

Standardized

Symposia packages installed. Team conventions codified and versioned.

4

Measured

Analytics tracking adoption. Drift detection in CI. Impact data.

5

AI-Native

Instructions are a first-class concern — versioned, reviewed, tested, governed.

Built for teams, not just individuals

When one engineer gets great AI output, the whole team should too. Symposia makes that the default, not the exception.

Private registries

Share internal conventions without making them public. Your proprietary patterns stay in your org.

Team analytics

See which packages are installed where, track adoption across repos, and measure AI effectiveness over time.

CI/CD drift detection

A GitHub Action that catches when repos drift from your team's AI standards. Fix it before it ships.
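As a sketch, the drift check above could gate pull requests in CI. Everything below is an assumption — Symposia is unreleased, so the action name (symposia/audit-action), the min-score input, and the workflow shape are illustrative, not a published API:

```yaml
# Hypothetical workflow — the action and its inputs are assumptions, not a real API.
name: AI instruction drift check
on: [pull_request]

jobs:
  drift:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical action: runs `sym audit --score` and fails the build
      # when the repo falls below the team's pinned baseline.
      - uses: symposia/audit-action@v1
        with:
          min-score: "7.5"
```

Gating merges on the audit score is the point: drift gets caught in review, not after it ships.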

Versioned conventions

Pin your team to specific versions. Roll out updates deliberately. Review instruction changes in PRs like code.
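Pinning could look like any other manifest-backed dependency file; the file name and fields below are purely illustrative (Symposia has not published a manifest format), though the package versions match the install demo above:

```json
{
  "packages": {
    "adx-types": "1.2.0",
    "adx-context": "1.0.3"
  },
  "agents": ["claude-code", "cursor", "copilot"]
}
```

A checked-in manifest like this is what makes instruction changes reviewable in PRs like code.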

Onboard in one command

New team member? sym install. They get every convention, every pattern, configured for every AI tool they use.

Prove the ROI

Before-and-after ADX scores give your team real numbers. Show leadership that AI investment is paying off.

Coming soon — join the waitlist

Be first to try Symposia

We're building the package manager for AI coding instructions. Drop your email and we'll let you know when it's ready.