AI agent config files are a new attack surface most teams aren't thinking about. Your CLAUDE.md, ai/ directory, and swarm configs live in your repo — and developers routinely paste API keys into them "just to test." Here's how to audit and harden your AI config layer before it becomes a breach.
The Problem: AI Configs Are Proliferating Fast
A year ago, most repos had one AI config file at most. Today, a typical monorepo using Claude Code, Codex, and OpenCode might have:
- CLAUDE.md — agent instructions for Claude Code
- AGENTS.md — for Codex and OpenCode
- .mcp.json — MCP server config with URLs and auth
- opencode.json — model and provider config
- ai/docs/ — delegation policies, playbooks, knowledge base
- ai/swarm/ — task queue with agent context
Each of these files is written by developers in the flow of shipping. They're edited frequently, often committed quickly, and almost never reviewed for sensitive content.
What Gets Leaked
The most common secrets we find in AI config files are not random — they follow a pattern:
API Keys Pasted During Debugging
A developer pastes an ANTHROPIC_API_KEY into CLAUDE.md to test a specific behavior. The test works, they move on. The key stays in the file for six months.

MCP Server Credentials

.mcp.json connects Claude Code to external services — GitHub, Linear, Slack, databases. These configs often include Bearer tokens, API keys, or connection strings passed directly as environment variables.

Swarm Task Context

Task files in ai/swarm/ sometimes include reproduction steps that contain real environment values, database URLs, or internal service addresses copied from logs.

"AI config files are the new .env — except developers haven't learned to treat them that way yet."
Auditing with 0dai audit
As of v2.5.0, 0dai ships a built-in secret scanner for AI config files. It focuses specifically on the files that AI agents read — not your entire codebase.
The scanner uses prefix-based pattern matching — not keyword matching — which means near-zero false positives. sk-ant-api03-... is unambiguously an Anthropic key. ghp_ is unambiguously a GitHub Personal Access Token.
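The idea behind prefix matching can be sketched in a few lines of Python. This is an illustration of the approach, not 0dai's actual rule set — the pattern names and length bounds here are assumptions:

```python
import re

# Illustrative prefix patterns: each secret format starts with an
# unambiguous vendor prefix, so a match is almost never a false positive.
PATTERNS = {
    "anthropic_api_key": re.compile(r"sk-ant-api[\w-]{10,}"),
    "github_pat": re.compile(r"(?:ghp_|gho_|github_pat_)[A-Za-z0-9_]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_-]{35}"),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_value) pairs found in the text."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((name, match))
    return findings
```

Contrast this with keyword matching ("password", "secret"), which flags documentation and variable names constantly; a vendor prefix like AKIA followed by exactly 16 uppercase alphanumerics is almost never anything but an AWS key.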
What it scans
- CLAUDE.md, AGENTS.md, GEMINI.md
- All files in ai/ (markdown, JSON, YAML)
- .mcp.json, opencode.json
- .codex/config.md and .codex/instructions.md
What it detects
- Anthropic API keys (sk-ant-api...)
- OpenAI API keys (sk-...)
- GitHub tokens (ghp_, gho_, github_pat_)
- AWS access keys (AKIA...) and secret keys
- Google API keys (AIza...)
- Bearer tokens, PEM private keys, generic secret variables
Run 0dai audit in your CI pipeline. It exits with code 1 on critical findings, failing the build before a leaked key reaches a PR merge.

The Right Way to Handle Credentials in AI Configs
Use environment variable references, not values
MCP server configs support environment variable interpolation. Instead of pasting a key directly:
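A minimal sketch of what that looks like in .mcp.json — the server name, command, and variable name here are illustrative; check your MCP client's documentation for the exact interpolation syntax it supports:

```json
{
  "mcpServers": {
    "github": {
      "command": "github-mcp-server",
      "env": {
        "GITHUB_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}
```

The ${GITHUB_TOKEN} reference is resolved from the developer's shell environment at runtime, so the committed file never contains the actual token.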
Keep .mcp.json out of version control if it has secrets
Add a .mcp.json.local pattern to .gitignore and commit only a .mcp.json.example with placeholder values. The 0dai sync command generates .mcp.json fresh from your ai/ layer — it never needs to be committed.
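For example (filenames here follow the pattern described above; adjust to your repo's conventions):

```gitignore
# Local MCP config with real credentials — never committed
.mcp.json.local
```

Alongside this, commit a .mcp.json.example containing only placeholder values so new contributors know which fields to fill in locally.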
Rotate immediately when audit finds something
If 0dai audit flags a key, assume it's compromised — even if the repo is private. Revoke and rotate immediately, then rewrite git history to remove the commit if needed (git filter-repo).
Adding audit to CI
A single line in your GitHub Actions workflow:
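A sketch of the workflow — how 0dai is installed in CI depends on your setup, so the invocation below is an assumption to adapt:

```yaml
# .github/workflows/audit.yml
name: AI config audit
on: [push, pull_request]

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumes 0dai is available on the runner; install it however
      # your project distributes it before this step.
      - run: 0dai audit
```

Because 0dai audit exits with code 1 on critical findings, no extra scripting is needed — a non-zero exit fails the job automatically.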
This runs on every push and pull request. If a developer accidentally commits a key in a swarm task file or agent instruction, the build fails before the PR is merged.
The best time to catch a leaked key is in CI, not in a breach notification email three months later.
Summary
AI agent configs are a new and largely unmonitored attack surface. As your ai/ layer grows — more agents, more tasks, more MCP servers — the risk of accidentally committing a sensitive value grows with it.
- Run 0dai audit locally before each commit
- Add it to CI to catch leaks before they merge
- Use env var references in .mcp.json, never literal values
- Gitignore any config file that must contain real credentials