Every AI coding CLI has its own config format. Claude Code reads .claude/settings.json and CLAUDE.md. Codex reads .codex/config.yaml. OpenCode reads opencode.json. Gemini reads .gemini/settings.json. If you use two or more of these tools, you're probably maintaining multiple config files that say roughly the same thing — and slowly drifting apart.
The Problem: Config Drift
Config drift is subtle. You add a new rule to your CLAUDE.md (no mocking the database in tests) but forget to update the equivalent in AGENTS.md for Codex. Three weeks later, a Codex-generated test file mocks the database and breaks in production. You spend an hour tracking it down.
Multiple tools, multiple formats, zero single source of truth. Config drift is not a question of if — it's a question of when.
Or you configure Claude Code's delegation model but haven't touched the OpenCode config. Your Claude sessions delegate expensive tasks to Opus, but when you hand off to a teammate using OpenCode, they get a generic config with no routing logic.
The Solution: One ai/ Directory
0dai treats the ai/ directory as the canonical source of truth for your project's AI configuration. Every native config file for every tool is generated from it. Change one thing in ai/, run 0dai sync, and every generated config updates.
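An illustrative layout (only the two files discussed below are shown; the exact tree 0dai generates may differ):

```
ai/
├── personas/
│   └── default.md            # shared conventions and forbidden patterns
└── docs/
    └── delegation-policy.md  # which model tier handles which task class
```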
Running 0dai sync against this directory generates:
- .claude/settings.json + CLAUDE.md
- .codex/config.yaml + AGENTS.md
- .gemini/settings.json
- opencode.json
- .aider/config.yml
- .mcp.json (shared MCP server config)
Getting Started in 2 Minutes
0dai init analyzes your project (detects stack, existing CLIs, folder structure) and generates the entire ai/ layer plus all native configs in one API call. It works for Next.js, FastAPI, Go, Flutter, Rust, monorepos — any stack.
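In practice that is a single command from the project root (output omitted here):

```shell
# Analyze the project and generate ai/ plus all native configs
0dai init

# Commit the canonical layer so teammates get the same setup
git add ai/
```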
What's in the Generated Config
Delegation Policy
ai/docs/delegation-policy.md defines which model to use for which task class: Fast tier (Mimo v2 Omni, Haiku) for search and exploration; Balanced tier (Sonnet 4.6, Mimo v2 Pro) for implementation and review; Deep tier (Opus 4.6, GPT-5.3-codex) for architecture and security analysis. Fast tasks go to fast models; expensive models run only when a task needs their depth.
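A sketch of what ai/docs/delegation-policy.md might contain — the exact schema is an assumption; 0dai generates the real file:

```markdown
# Delegation Policy (illustrative)

## Fast tier: search and exploration
- Mimo v2 Omni
- Haiku

## Balanced tier: implementation and review
- Sonnet 4.6
- Mimo v2 Pro

## Deep tier: architecture and security analysis
- Opus 4.6
- GPT-5.3-codex
```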
Shared Persona
ai/personas/default.md — your project's conventions, forbidden patterns, preferred libraries. One edit propagates to Claude Code, Codex, Gemini, and OpenCode simultaneously.
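For example, ai/personas/default.md might read (contents are illustrative, reusing the database-mocking rule from earlier):

```markdown
# Project Persona (illustrative)

## Conventions
- Prefer the repository's existing error-handling helpers

## Forbidden patterns
- Never mock the database in tests
```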
MCP Tools
.mcp.json includes 0dai's shared MCP server. Any MCP-compatible agent (Claude Code, OpenCode) can call tools like get_project_health, get_swarm_status, and get_model_ratings without leaving their session.
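The generated .mcp.json follows the standard mcpServers shape; the server command below is a placeholder, not 0dai's actual entry point:

```json
{
  "mcpServers": {
    "0dai": {
      "command": "zero-dai-mcp-server",
      "args": []
    }
  }
}
```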
Keeping Configs in Sync
After 0dai init, run 0dai sync whenever you update files in ai/.
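A minimal round trip (the persona edit is illustrative):

```shell
# Tighten a rule in the shared persona, then regenerate every native config
echo "- Never mock the database in tests" >> ai/personas/default.md
0dai sync
```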
0dai sync is idempotent — it only writes files that have changed, and it records what version generated each config so you can always tell whether a native config is current.
Team Workflows
The ai/ directory belongs in version control. Your team commits shared conventions once, and every member gets the same agent behavior regardless of which CLI they prefer.
Native config files (.claude/, .codex/, etc.) can be committed or gitignored — your choice. The ai/ directory is always the canonical source, so even if you gitignore the generated files, any team member can run 0dai sync to regenerate them locally.
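If you do gitignore the generated files, the ignore list would look something like this (trim it to the tools you actually use):

```gitignore
# Generated by 0dai sync; regenerate locally, keep ai/ committed
.claude/
.codex/
.gemini/
.aider/
opencode.json
.mcp.json
```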
In a monorepo, each package can have its own ai/ directory, or you can share one at the root and use --target flags to generate package-specific configs. Run 0dai init --help for options.
Check Model Ratings
Once installed, you can also check which models are available on your machine and how they rank:
0dai models
0dai models --available   # only installed CLIs

This shows a ranked table of all supported models with tier, speed, and the exact CLI flag to use — useful when writing delegation policies or debugging which model a swarm task ran on.
Summary
If you use more than one AI coding CLI — or plan to — a shared config layer is the lowest-friction way to keep them consistent. 0dai init generates everything in one shot; 0dai sync keeps it current. The alternative is maintaining half a dozen config files by hand and hoping they stay aligned.