Show HN: I got tired of syncing Claude/Gemini/AGENTS.md and .cursorrules

I use Claude, Codex, Cursor, and Gemini on different projects. Each one wants its own md file in its own format: CLAUDE.md, AGENTS.md, .cursorrules, GEMINI.md. Four files saying roughly the same thing, four chances to get out of sync!

I kept forgetting to update one, then wondering why Cursor was hallucinating my project structure while Claude had it right.

So I built an MCP server that reads a single YAML file (project.faf) and generates all four formats. Seven bundled parsers handle the differences between them. You edit one file, and bi-sync keeps everything current.
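To make the "one source, four dialects" idea concrete, here's a minimal sketch in Python. The field names, templates, and output shapes are hypothetical stand-ins; the real .faf schema and the bundled parsers differ.

```python
# Stand-in for a parsed project.faf (YAML -> dict). Field names are
# hypothetical, not the real .faf schema.
project = {
    "name": "my-app",
    "stack": "TypeScript + Node 20",
    "rules": ["prefer pure functions", "no default exports"],
}

def to_markdown(p):
    # CLAUDE.md / AGENTS.md / GEMINI.md share one markdown shape in this
    # sketch; real generators would follow each tool's own conventions.
    lines = [f"# {p['name']}", "", f"Stack: {p['stack']}", "", "## Rules"]
    lines += [f"- {r}" for r in p["rules"]]
    return "\n".join(lines) + "\n"

def to_cursorrules(p):
    # .cursorrules is plain text: a header line, then one rule per line.
    header = f"Project: {p['name']} ({p['stack']})"
    return "\n".join([header, *p["rules"]]) + "\n"

# One source dict, four native outputs.
outputs = {
    "CLAUDE.md": to_markdown(project),
    "AGENTS.md": to_markdown(project),
    "GEMINI.md": to_markdown(project),
    ".cursorrules": to_cursorrules(project),
}

for path, text in outputs.items():
    print(f"--- {path} ---")
    print(text)
```

The point of the sketch is only the shape of the pipeline: parse once, then run a per-tool renderer, so each file comes out in its native format rather than as a copy of one master file.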

It's an MCP server, so Claude Desktop can use it directly. 61 tools, 351 tests, no CLI dependency.

Try it: npx claude-faf-mcp

Source: https://github.com/Wolfe-Jam/claude-faf-mcp

The .faf format itself is IANA-registered (application/vnd.faf+yaml).

Curious if others are dealing with this multi-AI config problem, or if there's a simpler approach I'm not seeing.

2 points | by wolfejam 2 hours ago

1 comment

  • verdverm 1 hour ago
    use git with a repo, like so many do for their dotfiles

    if your agent cannot be populated from there as well, you are using the wrong framework / setup

    • wolfejam 1 hour ago
      Totally — git handles syncing files. The problem is these four files have different formats and conventions. Same project context, four dialects. That's why I wrote bi-sync --all: one YAML source, four native outputs.
      • verdverm 59 minutes ago
        that's not my experience, LLMs are flexible enough, ln -s is sufficient
        • wolfejam 44 minutes ago
          ln -s makes all four files identical. Whichever format you write it in, the other three get the wrong structure. This generates each in its native format.
          • verdverm 43 minutes ago
            show me good evals that it actually makes a difference

            that is the opposite of what I see

            • wolfejam 35 minutes ago
              ETH Zurich tested this: LLM-generated prose context gave -3% performance at +20% cost, and even human-written context only gained +4% at +19% cost. The problem is prose bloat. Structured formats avoid that by design. https://arxiv.org/abs/2602.11988