AI Workflows

Organize your AI prompts on disk: a flat library, version-controlled, reusable

Stop pasting the same instructions into every chat. Here's a battle-tested layout for a personal prompt library that you can grep, version-control, and share between Claude Code, Cursor, and Codex.

Honam Kang · 4 min read

Three weeks ago you wrote the perfect code-review prompt. Today you can't find it. You wrote it in a Claude chat that has since scrolled out of reach, copy-pasted it into Cursor where it lives in the inline-edit history, and saved a tweaked version in a Notion page titled "ai stuff (wip)". You'll write the prompt again from scratch tonight, slightly worse. Repeat 50 times this year.

This is the cost of not having a prompt library. The fix is small, durable, and tool-agnostic.

The simplest layout that works

~/dev/_shared/prompts/
├── README.md
├── review/
│   ├── code-review-strict.md
│   ├── code-review-style.md
│   └── pr-description.md
├── generate/
│   ├── boilerplate-react-component.md
│   ├── jest-test-from-fn.md
│   └── changelog-from-commits.md
├── debug/
│   ├── stack-trace-explain.md
│   ├── flaky-test-triage.md
│   └── perf-regression-bisect.md
└── transform/
    ├── markdown-to-blog.md
    ├── feature-spec-to-tickets.md
    └── api-error-to-user-message.md

Four observations from a year of use:

  • Verb-first folders: review/, generate/, debug/, transform/. You always know which one to open. Domain-first (react/, python/) gets cluttered fast as cross-cutting prompts pile up.
  • Flat inside folders: don't nest below the verb. review/typescript/strict.md always loses to review/code-review-strict.md because the latter shows up in a single ⌘P search.
  • Lowercase, hyphenated, intent-named: pr-description.md, not PR Description (final).md. Filenames are part of grep — make them grep-friendly.
  • One prompt per file: never bundle "five variations" into one file. The unit of reuse is the file path.
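The grep-friendliness claim is easy to demonstrate. A quick sketch — it builds a throwaway copy of the layout so the paths are illustrative, not your real library:

```shell
# Build a disposable copy of the layout, then search it two ways.
lib=$(mktemp -d)
mkdir -p "$lib/review" "$lib/generate"
echo "You are reviewing a pull request." > "$lib/review/code-review-strict.md"
echo "Generate a Jest test for the given function." > "$lib/generate/jest-test-from-fn.md"

# 1. Search prompt bodies:
grep -rl "pull request" "$lib"

# 2. Search filenames — lowercase, hyphenated, intent-named files match directly:
find "$lib" -name '*review*.md'
```

Both searches land on review/code-review-strict.md: the body matches because the prompt states its intent, and the filename matches because the intent is in the name.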

What goes inside each file

---
intent: Strict code review with defect detection
tool: claude-code
inputs: diff, file paths
outputs: bullet list of severity-rated findings
last-updated: 2026-03-12
---

You are reviewing a pull request. Read the diff carefully, then produce a
severity-rated list (P0/P1/P2) of findings. For each finding:

1. Quote the offending line(s) in a fenced block
2. Explain the defect in one sentence
3. Suggest the smallest fix that addresses it
4. If the finding is a stylistic preference (not a defect), prefix with [STYLE]

Do not list things that are correct. Do not summarize the PR.

…

The frontmatter is for you: it's how you remember what this prompt does without re-reading the body. The body is for the agent.

last-updated matters more than you'd think. When a prompt regresses, you check whether it was edited recently. If yes, look at git blame on the file. If no, the regression is upstream — model change, tool update, codebase context drift.
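That triage step is mechanical if the library is a git repo. A sketch — the temp repo below just makes the example self-contained; in practice you'd run the two git commands inside your real prompts directory:

```shell
# Self-contained demo: a one-file git repo standing in for the library.
repo=$(mktemp -d)
cd "$repo" && git init -q
mkdir -p review
printf -- '---\nlast-updated: 2026-03-12\n---\n' > review/code-review-strict.md
git add -A
git -c user.name=me -c user.email=me@example.com commit -qm "tighten severity rubric"

# Was this prompt edited recently? Date and subject of its last change:
git log -1 --format='%ad %s' --date=short -- review/code-review-strict.md

# If yes, inspect the edit line by line:
git blame -- review/code-review-strict.md | head -3
```

If the last change predates the regression, stop blaming the prompt and look upstream.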

Naming: rules that scale to 100 prompts

Three patterns that hold up:

  1. <verb>-<noun>[-modifier].md: review-code-strict.md, generate-test-jest.md. Easy to scan.
  2. No version numbers in filenames — git handles versions. code-review-v3.md becomes hostile when v6 lands.
  3. No tool name in the filename if the prompt is portable: code-review-strict.md runs on Claude, Cursor, and GPT-4. If a prompt is genuinely tool-specific, prefix it: claudecode-orchestrator.md.

How to actually use the library

Three integration points:

1. Per-session reference

In your session's CLAUDE.md:

## Reusable prompts
- Review pass: ~/dev/_shared/prompts/review/code-review-strict.md
- PR description: ~/dev/_shared/prompts/review/pr-description.md

The agent can read those files directly. You stop pasting prompts; you point at them.

2. Shell function

# ~/.zshrc
prompt() {
  local p="$HOME/dev/_shared/prompts/$1"
  if [[ ! -f "$p" ]]; then
    echo "Not found: $p" >&2
    return 1
  fi
  pbcopy < "$p"
  echo "Copied: $p"
}

Now prompt review/code-review-strict.md puts the prompt on your clipboard. Paste into any tool.
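A companion function helps when you can't remember the exact path (a sketch, not from the shell function above; PROMPT_LIB is a hypothetical override so the root isn't hard-coded, which also makes the function testable):

```shell
# List every prompt, relative to the library root, so each printed
# path is exactly the argument the prompt() function expects.
prompts() {
  local root="${PROMPT_LIB:-$HOME/dev/_shared/prompts}"
  find "$root" -type f -name '*.md' | sed "s|^$root/||" | sort
}
```

prompts | grep review narrows to one folder, and the output pipes cleanly into a fuzzy picker if you use one.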

3. File-manager pane

Pin a pane to ~/dev/_shared/prompts/. Markdown preview means you can re-read a prompt before sending it. ⌘-click in mq-dir opens any matched file in a new tab. The friction of "which prompt did I want?" drops to zero.

When to extract a new prompt

Heuristic: the third time you write the same instruction, extract it.

The first time, you're prototyping the prompt — keep it inline. The second time, you've discovered there's a pattern. The third time, you've confirmed it's worth a file. Don't pre-optimize: a graveyard of unused prompts is worse than a missing one because every read is a distraction.

Refactoring prompts

When a prompt is wrong:

  1. Find the failure case: what input did it produce a bad output for?
  2. Add a counter-example to the prompt body: "Don't do X. Example of X: ___."
  3. Bump last-updated so you remember when this happened.
  4. Don't rewrite from scratch — the existing prompt has years of buried correctness fixes. Edit, don't replace.
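Step 3 is scriptable. A sketch of bumping the frontmatter date (the temp file is a stand-in for a real prompt; writing through a temp file sidesteps the BSD/GNU sed -i incompatibility):

```shell
# Stand-in prompt file with frontmatter.
f=$(mktemp)
printf -- '---\nintent: Strict code review\nlast-updated: 2026-03-12\n---\nbody\n' > "$f"

# Rewrite the last-updated line to today's date.
today=$(date +%F)
sed "s/^last-updated: .*/last-updated: $today/" "$f" > "$f.tmp" && mv "$f.tmp" "$f"

grep '^last-updated:' "$f"
```

Run it after committing the counter-example edit, so git blame ties the date bump and the fix together.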

Sharing the library

If you work on a team, the library can be a public-ish git repo (team/prompts/). Two things matter:

  • PR review of prompt changes. Treat them like code. A bad prompt change ships across the team in minutes and degrades reviews for weeks.
  • Author the README in the library, not in Notion. The library is the source of truth — README, naming conventions, and "how to add a prompt" all live there.

Why this beats every prompt-management SaaS

There are services that manage prompts as a backend with versioning, A/B testing, eval suites. They're great if you're shipping a product that calls an API. They're overkill — and a portability liability — if you're a developer running prompts during your own workday.

A flat directory of Markdown files:

  • Works in every editor.
  • Works on every OS.
  • Survives every tool deprecation.
  • Has zero monthly cost.
  • grep -r "code review" finds your prompts in 50 ms.

You don't need more than this until you cross 200 prompts and three teammates. Then revisit. For now: make the directory, populate three prompts, see if it sticks.

mq-dir's per-tab Markdown preview was specifically built for navigating libraries like this one. Open the prompts pane, scroll, ⌘-click to peek, paste. The library compounds and your typing-the-same-thing habit dies.

Try mq-dir

A native quad-pane macOS file manager — free, no telemetry.

v0.1.0-beta.12 · MIT · Universal Binary · 5.3 MB · macOS 14.0+

Download for Mac

Frequently asked questions

Why plain files instead of Notion?

Files win for three reasons: grep, git, and tool-agnosticism. Notion is great for narrative documents but terrible for the "find the prompt I used three weeks ago" workflow. Files give you ripgrep, blame history, and the same library works in Claude Code, Cursor, ChatGPT desktop, and a curl-based script.

