Building The Best CLAUDE.md File

Your CLAUDE.md file is the brain of your AI-assisted project — neglect it and your AI starts working against you.

Terrance MacGregor
March 25, 2026
6 min read

Paying close attention to your CLAUDE.md file is one of the highest-leverage habits you can build as an AI-assisted developer. Keep it under 200 lines — anything beyond that bloats your context window and degrades performance. You can also offload repeatable commands into skills, which act as pointers your CLAUDE.md references without consuming context, giving you more power with less overhead. And as your project evolves, your CLAUDE.md needs to evolve with it — stale instructions are quietly costly.

This audit takes about 15 minutes. Do it regularly and you'll stay sharp.

The Anatomy of a CLAUDE.md Review

Before we get into the steps, it helps to understand what a healthy review actually looks like in practice — and the best way to show that is to build it as a skill.

At Periscoped, we don't just tell our AI what to do once and forget about it. We codify repeatable processes into skills that the AI can execute consistently across projects. A skill is a structured markdown file that Claude knows how to read and follow — think of it as a standing operating procedure your AI can invoke on demand, or that your toolchain can trigger automatically.

The Four Steps

1. Audit your CLAUDE.md — go through it line by line and flag anything that's outdated or no longer relevant to the current project. Dead instructions aren't harmless — they consume context and quietly push your AI in the wrong direction.

2. Ask your LLM directly — invoke the review-claude-md skill below, or prompt Claude with "Looking at my CLAUDE.md, what's old, what's missing, and what could make this project run more efficiently?" You'll often get insights you'd miss on your own because the model sees patterns in your instructions that aren't obvious from the outside.

3. Do a skill review — check which skills exist in your current project and which exist at the system level, then identify opportunities to add or update skills at either layer. A well-maintained skill library keeps your CLAUDE.md lean while your AI gets smarter over time.

4. Add a git hook — wire a pre-push hook to trigger a CLAUDE.md review so hygiene becomes automatic, not optional. The best maintenance habit is one you don't have to remember to do.
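One way to wire this up, assuming the Claude Code CLI (`claude`) is on your PATH and its `-p` flag runs a one-off prompt — adjust the command to however your setup invokes skills:

```shell
# Append a CLAUDE.md review step to the repo's pre-push hook, creating it if needed
mkdir -p .git/hooks && echo 'claude -p "Use the review-claude-md skill to audit CLAUDE.md"' >> .git/hooks/pre-push && chmod +x .git/hooks/pre-push
```

Git only runs hooks with the executable bit set, hence the `chmod`.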

CLAUDE Review Skill

Here's a skill we built called review-claude-md. Drop this into your skills folder and your AI will know exactly how to audit your CLAUDE.md every time you invoke it — whether that's manually, through a prompt, or automatically via a git hook. Pay special attention to Step 6: the skill actually checks whether it's been wired into your pre-push hook, which means it's self-aware about its own automation. That's the kind of thing that separates a thoughtful AI workflow from a sloppy one.

---
name: review-claude-md
description: Audit the current project's CLAUDE.md file for health, efficiency, 
and relevance. Use when asked to "review CLAUDE.md", "audit CLAUDE.md", 
or when a pre-push hook calls it.
---

# CLAUDE.md Review
Audit the current project's CLAUDE.md for health, efficiency, and relevance. 
Report what's healthy, what's stale, and what should change.

## Instructions

### Step 1: Read the CLAUDE.md
Read the `CLAUDE.md` file at the project root. If it does not exist, report 
that and stop — recommend running the `project-health` skill instead.

### Step 2: Line Count Check
Count the total lines in the file. Flag if over 200 lines — CLAUDE.md should 
be concise and scannable. If over 200, note which sections are the longest 
and could be trimmed or extracted.
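This check is simple enough to sketch in shell (a sketch only — the skill performs the count itself, and `check_lines` is a name invented here):

```shell
# check_lines: flag a CLAUDE.md-style file that exceeds the 200-line budget
check_lines() {
  file="${1:-CLAUDE.md}"
  n=$(wc -l < "$file")
  if [ "$n" -gt 200 ]; then
    echo "OVER LIMIT: $file is $n lines"
  else
    echo "OK: $file is $n lines"
  fi
}
```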

### Step 3: Stale Reference Detection
For every instruction that references a specific file, folder, function, route, 
model, template, or pattern:
1. Verify the referenced item still exists in the project (use Glob/Grep/Read as needed).
2. If the item no longer exists, flag the instruction as **STALE** and quote the line.
3. If the item exists but has been renamed or moved, flag as **OUTDATED** 
   with the current location.
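A rough shell equivalent of this pass — one heuristic among many, since the skill itself uses Glob/Grep/Read, and `find_stale_refs` is a name made up for this sketch:

```shell
# find_stale_refs: list backtick-quoted, path-like references in a file
# that no longer exist on disk (a crude stand-in for the skill's Glob/Grep pass)
find_stale_refs() {
  grep -o '`[^`]*`' "$1" | tr -d '`' | while read -r ref; do
    case "$ref" in
      */*|*.*) [ -e "$ref" ] || echo "STALE: $ref" ;;
    esac
  done
}
```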

### Step 4: Redundancy Check
Identify instructions that are:
1. **Duplicated** — the same rule stated more than once, possibly in different words.
2. **Obvious** — things any experienced developer or Claude would do by default 
   (e.g., "write clean code").
3. **Superseded** — rules that an existing skill already handles 
   (cross-reference the user's installed skills list).

For each redundant instruction, explain why it's redundant and recommend 
removal or consolidation.

### Step 5: Skill Extraction Candidates
Look for instructions that are:
1. Multi-step procedures (3+ steps described inline).
2. Workflow-specific rules that only apply in certain contexts.
3. Lengthy sections that could be self-contained.

For each candidate, suggest:
- A skill name (kebab-case)
- A one-line description
- Which lines from CLAUDE.md it would replace

### Step 6: Pre-Push Hook Check
Verify that a git pre-push hook exists that runs this CLAUDE.md review before pushing:
1. Check if `.git/hooks/pre-push` exists in the project.
2. If it exists, check whether it includes a call to the `review-claude-md` 
   skill or a CLAUDE.md validation step.
3. If the hook is missing or doesn't include the check, flag it and offer 
   to create or update the hook.

Report as:
- **CONFIGURED** — pre-push hook exists and includes CLAUDE.md review
- **HOOK EXISTS, NO REVIEW** — pre-push hook exists but doesn't include CLAUDE.md review
- **MISSING** — no pre-push hook at all
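The classification above maps to a small shell check (a sketch; `hook_status` is an invented name):

```shell
# hook_status: classify a pre-push hook file using Step 6's report labels
hook_status() {
  hook="${1:-.git/hooks/pre-push}"
  if [ ! -f "$hook" ]; then
    echo "MISSING"
  elif grep -q "review-claude-md" "$hook"; then
    echo "CONFIGURED"
  else
    echo "HOOK EXISTS, NO REVIEW"
  fi
}
```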

### Step 7: Generate Report
Present findings in this format:

CLAUDE.MD REVIEW
================
Project: <name>
Path: <root path>
Lines: <count> (<OK / OVER LIMIT>)
Date: <current date>

STALE REFERENCES: <count>
- <line number>: <quoted instruction> — <what's missing/moved>

REDUNDANCIES: <count>
- <line number>: <quoted instruction> — <why it's redundant>

SKILL CANDIDATES: <count>
- "<suggested-skill-name>": <description> — replaces lines <N-M>

HEALTHY INSTRUCTIONS: <count>
- Instructions that are current, unique, and valuable.

SUMMARY:
- Healthy: <N>/<total instructions>
- Stale: <N>
- Redundant: <N>
- Extractable: <N>

PRE-PUSH HOOK: <CONFIGURED / HOOK EXISTS, NO REVIEW / MISSING>

RECOMMENDATIONS:
1. <first action>
2. <second action>
...

### Step 8: Offer Fixes
After presenting the report, offer to:
- Remove stale references
- Consolidate redundant instructions
- Create suggested skill files and trim CLAUDE.md accordingly
- Create or update a pre-push hook to include CLAUDE.md review
- Reformat or reorganize sections for clarity

Ask the user which fixes they want applied. Do not make changes without confirmation.

## Usage
Run from any project that has a CLAUDE.md file.
Typical invocation: "Review my CLAUDE.md" or "Audit CLAUDE.md"

The Bigger Picture

This isn't just a solo developer habit — it matters even more when you're running a team. On a fast-moving project, your CLAUDE.md can drift in days, not weeks, especially when multiple people are contributing. Building a review cadence and automating it with a hook means everyone on the team is working with an AI that has clean, current instructions — and that consistency shows up directly in the quality of what gets shipped.

Fifteen minutes of hygiene. A lot less pain down the road.

Enjoyed this? Explore more on claudeai or get in touch.