security · ai-agents · devops · claude-code

Securing Your AI Agent Configs: How to Audit CLAUDE.md and ai/ for Leaked Secrets

AI config files are a new attack surface most teams ignore. In our scan of 50 public repos with AI agent configs, 18% contained a leaked key. Here's how to audit and harden your setup with 0dai audit.

6 min read

AI agent config files are a new attack surface most teams aren't thinking about. Your CLAUDE.md, ai/ directory, and swarm configs live in your repo — and developers routinely paste API keys into them "just to test." Here's how to audit and harden your AI config layer before it becomes a breach.

The Problem: AI Configs Are Proliferating Fast

A year ago, most repos had one AI config file at most. Today, a typical monorepo using Claude Code, Codex, and OpenCode might have:

- CLAUDE.md and other agent instruction files at the repo root
- an .mcp.json wiring agents to external services
- an ai/ directory of shared docs and policies
- swarm task files under ai/swarm/

Each of these files is written by developers in the flow of shipping. They're edited frequently, often committed quickly, and almost never reviewed for sensitive content.

Warning
We scanned 50 public repos with AI agent configs. 18% had at least one leaked token, API key, or hardcoded credential in a config file committed to version control.

What Gets Leaked

The most common secrets we find in AI config files are not random — they follow a pattern:

1. API Keys Pasted During Debugging

A developer adds a real ANTHROPIC_API_KEY to CLAUDE.md to test a specific behavior. The test works, they move on. The key stays in the file for six months.

2. MCP Server Credentials

.mcp.json connects Claude Code to external services — GitHub, Linear, Slack, databases. These configs often include Bearer tokens, API keys, or connection strings passed directly as environment variables.

3. Swarm Task Context

Swarm task files in ai/swarm/ sometimes include reproduction steps that contain real environment values, database URLs, or internal service addresses copied from logs.

“AI config files are the new .env — except developers haven't learned to treat them that way yet.”

Auditing with 0dai audit

As of v2.5.0, 0dai ships a built-in secret scanner for AI config files. It focuses specifically on the files that AI agents read — not your entire codebase.

terminal
$ 0dai audit

0dai audit — scanning for leaked secrets
target: /projects/myapp
files: 23 scanned

CRITICAL  ai/docs/delegation-policy.md:47  Anthropic API key: sk-ant...r4wq
CRITICAL  .mcp.json:12                     GitHub PAT (ghp): ghp_Ab...x9Yz
HIGH      CLAUDE.md:89                     Bearer token: Bearer eyJ...8kQp

3 critical · 1 high · 0 medium

Tip: add secrets to .gitignore or use env vars, not plaintext files

The scanner uses prefix-based pattern matching — not keyword matching — which means near-zero false positives. sk-ant-api03-... is unambiguously an Anthropic key. ghp_ is unambiguously a GitHub Personal Access Token.
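You can approximate this style of scan with plain grep while evaluating. A rough sketch (the regexes below are illustrative, not 0dai's actual patterns, and the fake key is planted in a scratch directory):

```shell
# Scratch demo: plant a fake Anthropic-style key, then catch it with a
# prefix-anchored regex. The prefix (sk-ant-api03-) is what keeps false
# positives near zero: ordinary prose almost never contains it.
mkdir -p /tmp/scan-demo/ai
printf 'key: sk-ant-api03-FAKEFAKEFAKEFAKEFAKEFAKE\n' > /tmp/scan-demo/CLAUDE.md
grep -rEn \
  -e 'sk-ant-api03-[A-Za-z0-9_-]{16,}' \
  -e 'ghp_[A-Za-z0-9]{36}' \
  /tmp/scan-demo
```

Keyword-based scanners match words like "token" or "key" and drown you in noise; prefix anchoring is why a dedicated scanner can afford to fail the build on every finding.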

What it scans

CLAUDE.md and other agent instruction files, .mcp.json, and everything under your ai/ directory: the files AI agents read, not your entire codebase.

What it detects

Credentials with unambiguous prefixes: Anthropic API keys (sk-ant-), GitHub personal access tokens (ghp_), Bearer tokens, and similar hardcoded secrets.

Tip
Run 0dai audit in your CI pipeline. It exits with code 1 on critical findings, which fails the build before a leaked key reaches a merged PR.

The Right Way to Handle Credentials in AI Configs

Use environment variable references, not values

MCP server configs support environment variable interpolation. Instead of pasting a key directly:

.mcp.json (wrong)
{
  "mcpServers": {
    "github": {
      "env": {
        "GITHUB_TOKEN": "ghp_AbcDefGhi..."
      }
    }
  }
}
.mcp.json (correct)
{
  "mcpServers": {
    "github": {
      "env": {
        "GITHUB_TOKEN": "$GITHUB_TOKEN"
      }
    }
  }
}
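With the reference form, the real value lives only in the shell environment (or a secret manager), never in a tracked file. A minimal sketch, with the token file path and value purely illustrative:

```shell
# Simulate a token stored outside the repo: read it into the environment
# and confirm it is set without ever printing the value itself.
printf 'ghp_EXAMPLEEXAMPLEEXAMPLEEXAMPLEEXAMPLE0' > /tmp/github-token
export GITHUB_TOKEN="$(cat /tmp/github-token)"
echo "GITHUB_TOKEN set, length ${#GITHUB_TOKEN}"
```

The agent's process inherits the variable at launch, so the config can be committed safely: it names the secret without containing it.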

Keep .mcp.json out of version control if it has secrets

Add a .mcp.json.local pattern to .gitignore and commit only a .mcp.json.example with placeholder values. The 0dai sync command generates .mcp.json fresh from your ai/ layer — it never needs to be committed.
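Concretely, the ignore rules might look like this (a sketch; the filenames follow the conventions above):

```
# .gitignore — keep real MCP configs out of the repo
.mcp.json
.mcp.json.local
```

Only the placeholder .mcp.json.example stays under version control.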

Rotate immediately when audit finds something

If 0dai audit flags a key, assume it's compromised — even if the repo is private. Revoke and rotate immediately, then rewrite git history to remove the commit if needed (git filter-repo).
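Before rewriting history, find every commit that touched the leaked value; stock git's pickaxe search (git log -S) does this without extra tooling. A scratch-repo sketch with a fake key:

```shell
# Scratch repo: commit a fake key, then locate the offending commit with
# git's pickaxe search. It reports commits whose diffs add or remove the string.
rm -rf /tmp/leak-demo
git init -q /tmp/leak-demo
cd /tmp/leak-demo
echo 'key: sk-ant-api03-FAKEFAKEFAKEFAKEFAKE' > CLAUDE.md
git add CLAUDE.md
git -c user.email=dev@example.com -c user.name=dev commit -qm 'add agent notes'
git log -S 'sk-ant-api03-' --oneline
```

The commits it lists are the ones your git filter-repo rewrite needs to cover.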

Adding audit to CI

A single line in your GitHub Actions workflow:

.github/workflows/security.yml
- name: Audit AI configs for secrets
  run: npx @0dai-dev/cli@latest audit

This runs on every push and pull request. If a developer accidentally commits a key in a swarm task file or agent instruction, the build fails before the PR is merged.

The best time to catch a leaked key is in CI, not in a breach notification email three months later.

Summary

AI agent configs are a new and largely unmonitored attack surface. As your ai/ layer grows — more agents, more tasks, more MCP servers — the risk of accidentally committing a sensitive value grows with it.

Try 0dai

AI agents that know your project

Shared context, session roaming, and multi-agent swarm for Claude Code, Codex, Gemini, Aider, and OpenCode — from a single ai/ directory. Install in seconds.

npm install -g @0dai-dev/cli && 0dai init