One Source of Truth for AI (by Android engineer)

Context is king. Getting useful results from your AI assistant often hinges on providing the right instructions. Yet most developers take this step casually, then wonder why their AI outputs are mediocre.

You provide context through two channels:

  1. Your immediate prompt — the prompt you type into the chat
  2. Master instructions — persistent rules that shape every interaction (this post’s focus)

If you use more than one tool, you’ll quickly discover that the current landscape for providing that master context is a mess.

Every AI tool wants its own special instruction file: CLAUDE.md for Claude Code; AGENTS.md for OpenAI, Gemini CLI, and OpenCode; .cursorrules for Cursor; .windsurfrules for Windsurf; AGENT.md for Amp.

I use all of these tools daily (Claude, Cursor, Gemini, OpenAI, Firebender), and I’m constantly refining my instructions as the AI landscape evolves. Updating the same rules in five different files? That becomes untenable quickly.

So until the big AI companies figure out a common standard, here’s my fix: a single .ai directory that holds the important instructions and rules, symlinked everywhere so each tool can pick it up.

# Create the master directory
mkdir -p .ai
touch .ai/instructions.md                    # master AI instructions file

# Symlink all the different tool configs
ln -s .ai/instructions.md CLAUDE.md          # Claude Code
ln -s .ai/instructions.md AGENTS.md          # OpenAI, Gemini CLI, OpenCode
ln -s .ai/instructions.md .cursorrules       # Cursor (project level only)
# ln -s .ai/instructions.md .windsurfrules   # Windsurf
# ln -s .ai/instructions.md AGENT.md         # Amp

# Optional: organize related AI assets
mkdir -p .ai/commands   # *.md prompt templates for frequent tasks (called directly)
mkdir -p .ai/rules      # *.mdc rules for specific workflows (picked up automatically)
mkdir -p .ai/plans      # technical execution plans to help the AI execute large changes
mkdir -p .ai/docs       # project documentation that helps the AI understand features

# Claude-specific setup
mkdir -p .claude
ln -nsf ../.ai/commands .claude/commands

# Cursor-specific setup
mkdir -p .cursor
ln -nsf ../.ai/rules .cursor/rules

I centralize my instructions in .ai/instructions.md, and every tool reads the same file. The main benefit? You edit in one place and everything stays synchronized; no more hunting through multiple files when you refine your instructions.
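
To sanity-check the setup, you can list where each link points (assuming the commands above were run from the repository root):

ls -l CLAUDE.md AGENTS.md .cursorrules   # each entry should show "-> .ai/instructions.md"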

Generating your first .ai/instructions.md file

Claude Code’s /init command generates an excellent starting template for your project. It analyzes your project structure and creates sensible defaults, which you can refine later.
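
Since /init writes its template to CLAUDE.md at the project root, a minimal way to fold it into this layout (assuming the .ai directory already exists and CLAUDE.md isn’t yet a symlink) is:

mv CLAUDE.md .ai/instructions.md      # adopt the generated template as the master copy
ln -s .ai/instructions.md CLAUDE.md   # recreate CLAUDE.md as a symlink so Claude Code still finds it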

User vs. project level instructions

I keep two levels of instructions:

  • User level: ~/.ai/instructions.md — personal preferences that apply across all projects
  • Project level: ~/myproject/.ai/instructions.md — shared team instructions specific to a codebase

Cursor only reads .cursorrules at the project level. If you want your home directory instructions to apply, you’ll need to add them directly in Cursor’s settings window.
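
For the user level, the same symlink trick works from your home directory. A minimal sketch, assuming Claude Code reads ~/.claude/CLAUDE.md as user-level memory; other tools vary in whether they support global instructions at all:

mkdir -p ~/.ai ~/.claude
touch ~/.ai/instructions.md                        # personal preferences shared across projects
ln -sf ~/.ai/instructions.md ~/.claude/CLAUDE.md   # Claude Code treats this as user-level memory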

Is this worth it?

The short answer: yes. While I haven’t done rigorous A/B testing on each rule or instruction, the improvements are noticeable. Without instructions, AI tools default to generic patterns, and I find myself repeating basic guidance more often. Folks online seem to agree.

Fine-tuning your rules

Want to see what context your AI is actually using? Try asking:

What custom instructions or context files are you currently using for this project? List any .ai/, AGENTS.md, or tool-specific rule files you can see.

Different tools have varying levels of transparency here. Claude and Cursor are generally forthcoming about loaded context, while others may be more opaque. Use this feedback to refine your instructions, removing redundancy and clarifying ambiguous rules.

Until we get a proper standard, this symlink approach keeps things manageable. Edit once, benefit everywhere. It’s not perfect, but it beats copying and pasting the same instructions across a dozen different files every time you want to make a change.

I’ll keep this post updated as new tools emerge and standards (hopefully) converge. In the meantime, may your context be consistent and your tokens be few.
