We Built an AI Skill That Reviews Our Case Studies in Seconds
Case studies are high-value but slow to produce. We built an AI-powered review skill that checks completeness, flags confidentiality concerns, validates SEO, and gives authors a clear checklist — turning weeks of review into seconds.

The Problem
Every case study we publish goes through the same gauntlet: missing images, inconsistent formatting, confidential details that shouldn't be there, and an approval loop with clients that stalls for weeks. The bottleneck isn't the writing — it's the inconsistency. Without a standard process, you end up with half-finished drafts that never see the light of day.
The approval loop is where things stall the hardest. If your case study isn't buttoned up when you send it for client review — vague metrics, no testimonial, proprietary architecture details exposed — the client pushes back, and you're back to square one. Getting it right the first time isn't just efficient, it's respectful of your client's time.
What We Learned
We needed a system that could enforce a standard automatically, not rely on someone remembering every detail. The format, the checks, the SEO requirements — all of it needed to be codified once and run every time.
What You Can Do About It
We built a review skill that runs inside Claude Code. It's a structured set of rules that evaluates a case study against everything we care about — automatically, in seconds.
Here's what it checks:
Completeness. Are the hero image, thumbnail, client logo, and challenge image uploaded? Is there a headline, tagline, testimonial with full attribution, and at least four key metrics? Are all four content sections present — Challenge, Solution, Approach, and Results?
Confidentiality. This is the big one. The Solution section is where IP leaks happen. The skill flags exact counts, scoring formulas, architecture patterns, and named integration targets that would let a competitor replicate the work. If there's a confidentiality concern, the section gets flagged regardless of how well it's written.
SEO. Image file names are checked against a naming convention that search engines can actually parse. The skill runs a Google Lighthouse audit against the published page and reports the score. It also scans for technologies, standards, and formal names that should be hyperlinked to their official documentation.
Section quality. Word counts are checked against target ranges. Each section gets a substance review from a customer's perspective — does the Challenge make you feel the pain? Does the Solution show expertise without giving away the recipe? Do the Results end strong?
Client approval. The skill checks whether the client has signed off. No case study should go live without it.
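Most of these checks are judgment calls for the model, but the completeness portion reduces to a pure function over the case study record. Here's a sketch — the field names below are illustrative, not our actual schema:

```typescript
// Sketch of the completeness check. Field and section names are
// illustrative assumptions, not our actual schema.
interface CaseStudy {
  headline?: string;
  tagline?: string;
  heroImage?: string;
  thumbnailImage?: string;
  clientLogo?: string;
  challengeImage?: string;
  testimonialQuote?: string;
  testimonialName?: string;
  testimonialTitle?: string;
  keyMetrics: string[];
  sections: Record<string, string>; // keyed "challenge" | "solution" | ...
}

const REQUIRED_SECTIONS = ["challenge", "solution", "approach", "results"];

function blockingIssues(cs: CaseStudy): string[] {
  const issues: string[] = [];
  const required: [string, unknown][] = [
    ["hero image", cs.heroImage],
    ["thumbnail image", cs.thumbnailImage],
    ["client logo", cs.clientLogo],
    ["challenge image", cs.challengeImage],
    ["headline", cs.headline],
    ["tagline", cs.tagline],
    ["testimonial quote", cs.testimonialQuote],
    ["testimonial name", cs.testimonialName],
    ["testimonial title", cs.testimonialTitle],
  ];
  for (const [label, value] of required) {
    if (!value) issues.push(`MISSING (BLOCKING): ${label}`);
  }
  if (cs.keyMetrics.length < 4) {
    issues.push(`NEEDS MORE: key metrics (${cs.keyMetrics.length}/4)`);
  }
  for (const s of REQUIRED_SECTIONS) {
    if (!cs.sections[s]) issues.push(`MISSING (BLOCKING): ${s} section`);
  }
  return issues;
}
```

The deterministic checks live in code-shaped logic like this; only the substance and confidentiality reviews need the model's judgment.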
The output is a single report you can paste into Slack and hand to whoever's responsible. It lists every blocking issue, every warning, and a numbered set of recommendations. No ambiguity about what needs to happen next.
How It Works Under the Hood
The skill is a Claude Code skill file — a structured markdown document that lives in the project repository at .claude/skills/case-study-review.md. When invoked, Claude reads the skill instructions and executes a 12-step review pipeline against the case study data.
Data extraction. The skill queries the PostgreSQL database directly via Prisma to pull the full case study record — all fields, sections, images, and technology tags. This means it's evaluating real data, not scraping a rendered page.
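That extraction is a single Prisma read with its relations included. As a sketch of the query arguments (the model and relation names here are assumptions for illustration, not our actual schema), resolving a numeric ID or a slug to a `where` clause:

```typescript
// Build the arguments for a prisma.caseStudy.findFirst() call.
// Model and relation names are illustrative assumptions.
function caseStudyQuery(idOrSlug: string) {
  const asId = Number(idOrSlug);
  return {
    where: Number.isInteger(asId)
      ? { id: asId }            // "/case-study-review 10"
      : { slug: idOrSlug },     // "/case-study-review twilight-care"
    include: {
      sections: true,      // challenge, solution, approach, results
      images: true,        // hero, thumbnail, client logo, section images
      technologies: true,
      services: true,
    },
  };
}
```

Because every relation comes back in one query, the rest of the pipeline works from a complete, typed record rather than whatever happens to render on the page.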
Content analysis. Claude's LLM capabilities handle the qualitative checks — evaluating whether the Challenge section articulates a relatable pain point, whether the Solution reveals too much IP, whether the Results tie back to the original problem. These aren't keyword matches; the model reads the content the way a prospect would and flags substantive issues.
Lighthouse integration. The skill shells out to Google Lighthouse CLI (npx lighthouse --only-categories=seo --output=json) against the live published URL. The JSON output is parsed programmatically to extract the SEO score and any failed audits. This catches technical SEO issues — missing meta descriptions, non-crawlable links, invalid hreflang — that content review alone would miss.
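Parsing the Lighthouse output is mechanical. A sketch, assuming the standard Lighthouse result format (`categories.seo.score` on a 0-1 scale, each audit's `score` being 0, 1, or null):

```typescript
// Summarize the SEO category of a Lighthouse JSON report.
// Assumes the standard Lighthouse result format: categories.seo.score
// is 0-1, and audits[id].score is 0 (failed), 1 (passed), or null (n/a).
interface SeoSummary {
  score: number;    // 0-100
  failed: string[]; // titles of audits that scored 0
}

function summarizeSeo(report: any): SeoSummary {
  const category = report.categories.seo;
  const failed = category.auditRefs
    .map((ref: any) => report.audits[ref.id])
    .filter((audit: any) => audit && audit.score === 0)
    .map((audit: any) => audit.title);
  return { score: Math.round(category.score * 100), failed };
}
```

The report only needs the rolled-up score and the list of failures; everything else in the Lighthouse JSON gets discarded.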
Image SEO validation. File names are extracted from S3 URLs and compared against a naming convention ({slug}-{purpose}.{ext}). Search engines use image file names as ranking signals, so acme-cloud-migration-hero.png outranks Screenshot_2026-03-23_at_3.31.36_PM.png in image search results.
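The check itself is a small function: pull the file name off the URL path and match it against the convention. A sketch (assuming slugs stay in the usual lowercase-and-hyphens form, so no regex escaping is needed):

```typescript
// Check an image URL's file name against the {slug}-{purpose}.{ext}
// convention. Returns null on a match, otherwise a description of the
// expected name. Assumes slugs contain only [a-z0-9-].
function imageNameIssue(url: string, slug: string, purpose: string): string | null {
  const fileName = decodeURIComponent(new URL(url).pathname.split("/").pop() ?? "");
  const pattern = new RegExp(`^${slug}-${purpose}\\.[a-z0-9]+$`, "i");
  return pattern.test(fileName)
    ? null
    : `expected ${slug}-${purpose}.{ext}, got ${fileName}`;
}
```

A null result means the name passes; anything else becomes a **SEO: BAD IMAGE NAME** line in the report with the rename spelled out.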
Structured output. The review generates a deterministic report format with clear status indicators per check. The report is saved to docs/case-study-reviews/{slug}-review.md and committed to git, giving you a version-controlled audit trail of every review.
Implementation Lessons
Start with the output format, not the checks. We designed the report template first — what does a useful review look like when you paste it into Slack? — then worked backward to figure out what checks produce those outputs. This kept the skill focused on actionable results instead of exhaustive analysis.
Confidentiality detection is harder than completeness. Checking if an image exists is trivial. Checking if the Solution section gives away your architecture is a judgment call. We learned to give the LLM specific signals to look for: exact counts (e.g., "82 triggers"), named integration targets, scoring formulas, and architecture patterns. Without these concrete anchors, the confidentiality check was too vague to be useful.
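The judgment call still belongs to the model, but one of those signals — exact counts — can be pre-filtered mechanically to give the LLM candidates to confirm or dismiss. A crude sketch, not our production heuristic:

```typescript
// Crude pre-filter for "exact count" leaks like "82 triggers".
// It only surfaces candidates; the LLM still judges whether each
// one is actually sensitive. This is a sketch, not an exhaustive rule.
function exactCountCandidates(text: string): string[] {
  // A 1-3 digit number (so years are excluded) followed by a plural noun.
  const pattern = /\b(\d{1,3})\s+([a-z]+s)\b/gi;
  const hits: string[] = [];
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(text)) !== null) {
    hits.push(match[0]);
  }
  return hits;
}
```

Feeding these candidates into the prompt is what made the confidentiality check concrete: instead of "look for sensitive details," the model gets "is '82 triggers' something a competitor could use?"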
Lighthouse adds 10 seconds but catches things humans don't. We debated whether to include it. The SEO score was consistently 100 on our pages, so it felt redundant. Then we found a case study where a missing meta description dropped the score to 82. Ten seconds is worth it.
The pre-push hook was a mistake at first. We wired an automated review into our git pre-push hook so every push ran a full CLAUDE.md review. It worked, but it added significant time to every push. For the case study review, we kept it as a manual invocation (/case-study-review) rather than automating it on every push. Not every check belongs in a gate.
Word count ranges need context, not just numbers. Our first version flagged anything under 150 words as "too short." But a tight, well-written 130-word Approach section is better than a padded 160-word one. We kept the ranges as guidelines and let the substance review handle quality — the word count flags are warnings, not blockers.
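The resulting rule is simple enough to state as code — under the minimum is a warning, more than 50% over the maximum is a warning, and neither blocks publication. A sketch:

```typescript
type LengthFlag = "OK" | "TOO SHORT" | "TOO LONG";

// Word-count check matching the skill's rules: under the minimum is
// TOO SHORT, more than 50% over the maximum is TOO LONG. Both are
// warnings, not blockers; the substance review judges quality.
function lengthFlag(text: string, min: number, max: number): LengthFlag {
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  if (words < min) return "TOO SHORT";
  if (words > max * 1.5) return "TOO LONG";
  return "OK";
}
```

Keeping the thresholds in one place also means tightening a range later is a one-line change, not a prompt rewrite.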
Why It Matters
When you take the grunt work out of the review process, people focus on what actually matters — telling the story of what the team built and the value it created for the client.
The skill handles the checklist. The author focuses on substance. The owner knows it's complete because a standardized check ran against it. The client gets a polished document on the first pass, which means faster approval and less back-and-forth.
For us, this means case studies actually get published. They don't sit in a queue. The gap between project completion and a live case study on our website shrinks from weeks to days. And every case study that goes out meets the same standard — because the standard is enforced automatically, not by memory.
If you're producing any kind of recurring content — case studies, blog posts, proposals — and you're relying on people to remember every detail every time, you're going to get inconsistent results. A simple automated review that codifies what "done" looks like changes the game.
At Periscoped, we build tools that let teams move fast without cutting corners. Sometimes that means building the tool for ourselves first.
Try It Yourself
Want to use this skill in your own Claude Code setup? Copy the file below and save it to .claude/skills/case-study-review.md in your project. Customize the checks to match your own case study format.
# Case Study Review
Review a case study for completeness, quality, and publishing readiness. Accepts a case study ID or slug.
## Instructions
### Step 1: Load the Case Study
The user will provide a case study ID or slug. Use the database or API to fetch the full case study including:
- All top-level fields (title, slug, headline, tagline, keyMetrics, testimonial fields, industry, etc.)
- All sections (challenge, solution, approach, results)
- All images (hero, thumbnail, client logo, section images)
- Technology tags and services provided
If fetching from the database directly, query using Prisma via a script or read the relevant admin page data. If the user provides a URL, fetch the public page and work from there.
### Step 2: Required Fields Check
Every published case study MUST have all of the following. Flag any that are missing as **MISSING (BLOCKING)**.
**Images:**
- Hero image
- Thumbnail image
- Client logo image
- Challenge section image (the reader's first visual look into the actual project)
**Content:**
- Headline or title
- Tagline
- Industry
- Testimonial quote (required, not optional)
- Testimonial person name
- Testimonial person title
- Testimonial company name
- Minimum 4 key metrics
- Technologies used (at least 1)
- Services provided (at least 1)
**Sections (all 4 required):**
- Challenge
- Solution
- Approach
- Results (Results & Impact / Results & Outcomes)
### Step 3: Fields That Should NOT Be Populated
The following fields reveal effort/timeline details that give prospects pricing anchors before a sales conversation. Flag if any are populated as **SHOULD BE EMPTY**.
- projectDurationMonths
- teamSize
- projectStartDate
- projectEndDate
These should not be filled in for published case studies.
### Step 4: Image Naming (SEO)
Check image file names/URLs against the SEO naming convention. For each image, the file name should follow:
- Hero: `{slug}-hero.{ext}`
- Thumbnail: `{slug}-thumbnail.{ext}`
- Client logo: `{slug}-client-logo.{ext}`
- Challenge image: `{slug}-challenge.{ext}`
Flag any images that don't follow this pattern as **SEO: BAD IMAGE NAME** with the current name and the expected name.
### Step 5: Section Length Check
Each section should have substantive content. Check word counts:
- **Challenge**: 150-300 words (2-3 paragraphs)
- **Solution**: 150-300 words (2-3 paragraphs)
- **Approach**: 150-300 words (2-3 paragraphs)
- **Results & Impact**: 150-400 words (2-4 paragraphs, can be longer)
Flag sections as:
- **TOO SHORT** if under the minimum
- **TOO LONG** if over the maximum by more than 50%
- Otherwise, report the word count as healthy
### Step 6: URL References Check
Scan all section content for mentions of technologies, frameworks, standards, methodologies, or formal product names. These should be hyperlinked to their official documentation or website.
Examples:
- "React" should link to https://react.dev
- "AWS Lambda" should link to the AWS Lambda product page
- "HIPAA" should link to the HHS HIPAA page
- "Agile" or "Scrum" should link to their respective reference pages
Flag unlinked references as **NEEDS URL: "{term}"** with a suggested link.
Only flag well-known technologies, standards, and formal names. Do not flag generic terms like "database" or "API".
### Step 7: Content Substance Review
Review each section from a prospective customer's perspective. Only flag **major** issues — not minor wording preferences. Look for:
**Challenge section:**
- Does it clearly articulate the business problem?
- Would a prospect relate to this pain point?
- Does it set stakes (what happens if the problem isn't solved)?
**Solution section (most sensitive — this is where IP leaks happen):**
- Does it explain what was built and why this approach?
- Does it demonstrate expertise without giving away the technical recipe?
- Is it clear what Periscoped's unique contribution was?
- CRITICAL: Does it reveal proprietary implementation details (exact counts, scoring formulas, architecture patterns, integration targets) that would let a competitor scope or replicate the work?
- Does it expose anything that could embarrass or compromise the client?
If the Solution section has confidentiality concerns, it MUST be rated **NEEDS WORK** regardless of writing quality. The rating should lead with the IP/confidentiality issue.
**Approach section:**
- Does it show how the team worked (methodology, collaboration)?
- Does it build confidence in Periscoped's process?
**Results & Impact section:**
- Are results tied back to the original challenge?
- Are the key metrics substantiated in the narrative?
- Does it end strong — would a prospect want to talk to Periscoped after reading this?
- Does it avoid restating specific numbers from the Solution section? (Focus on business outcomes, not feature counts)
Rate each section: **STRONG**, **GOOD**, or **NEEDS WORK** (with a brief note on what's off).
### Step 8: Lighthouse SEO Audit
Run a Google Lighthouse SEO audit against the published page:
```bash
npx lighthouse https://periscoped.io/case-studies/{slug} --only-categories=seo --output=json --chrome-flags="--headless --no-sandbox"
```
If the case study is not published, skip this step and note "SKIPPED (not published)".
Parse the JSON output and report:
- **SEO Score**: 0-100
- **Passed audits**: list them briefly
- **Failed audits**: list each with the Lighthouse description and recommendation
Include the score and any failures in the report. A score below 90 should be flagged as **NEEDS ATTENTION**.
### Step 9: Client Approval Check
Check if the case study has client approval tracking:
- If a `client_approved` field exists, check its value
- If the field doesn't exist yet (pending PW-69), note that client approval cannot be verified and remind the author to get client sign-off before publishing
### Step 10: Generate Report
```
CASE STUDY REVIEW
=================
Title: <title>
Slug: <slug>
ID: <id>
Published: <yes/no>
Public URL: https://periscoped.io/case-studies/<slug>
Admin URL: https://periscoped.io/admin/dashboard/case-studies/<id>
Date: <current date>
REQUIRED FIELDS: <pass/fail>
- Hero image: <present/MISSING>
- Thumbnail image: <present/MISSING>
- Client logo: <present/MISSING>
- Challenge image: <present/MISSING>
- Headline: <present/MISSING>
- Tagline: <present/MISSING>
- Industry: <present/MISSING>
- Key metrics: <count> (<pass if 4+/NEEDS MORE>)
- Testimonial: <complete/MISSING fields>
- Technologies: <count>
- Services: <count>
- Sections: <list present sections, flag missing>
FIELDS THAT SHOULD BE EMPTY: <pass/issues found>
- Duration: <empty/SHOULD BE EMPTY (current value)>
- Team size: <empty/SHOULD BE EMPTY (current value)>
- Start date: <empty/SHOULD BE EMPTY (current value)>
- End date: <empty/SHOULD BE EMPTY (current value)>
IMAGE SEO: <all good/issues found>
(Search engines use image file names as ranking signals — descriptive, keyword-rich
names improve discoverability in image search and reinforce page relevance.)
- Hero: <filename> — <OK/RENAME, expected: ...>
- Thumbnail: <filename> — <OK/RENAME, expected: ...>
- Client logo: <filename> — <OK/RENAME, expected: ...>
- Challenge: <filename> — <OK/RENAME, expected: ...>
SECTION LENGTH:
- Challenge: <word count> — <OK/TOO SHORT/TOO LONG>
- Solution: <word count> — <OK/TOO SHORT/TOO LONG>
- Approach: <word count> — <OK/TOO SHORT/TOO LONG>
- Results: <word count> — <OK/TOO SHORT/TOO LONG>
URL REFERENCES NEEDED: <count>
- "<term>" in <section> — suggested: <url>
SUBSTANCE REVIEW:
- Challenge: <STRONG/GOOD/NEEDS WORK> — <brief note>
- Solution: <STRONG/GOOD/NEEDS WORK> — <brief note>
- Approach: <STRONG/GOOD/NEEDS WORK> — <brief note>
- Results: <STRONG/GOOD/NEEDS WORK> — <brief note>
- Confidentiality: <PASS/CONCERN — brief note>
LIGHTHOUSE SEO: <score>/100 — <PASS/NEEDS ATTENTION/SKIPPED>
- Passed: <count> audits
- Failed: <list any failures with brief description>
CLIENT APPROVAL: <APPROVED/NOT APPROVED/CANNOT VERIFY (PW-69 pending)>
SUMMARY:
- Blocking issues: <count>
- Warnings: <count>
- Publishing ready: <YES/NO>
RECOMMENDATIONS:
1. <first action>
2. <second action>
...
```
### Step 11: Save the Review
Save the full report as a markdown file at:
```
docs/case-study-reviews/{slug}-review.md
```
For example: `docs/case-study-reviews/fintegral-mortgage-pipeline-management-review.md`
These files are committed to git so they can be shared and tracked over time. If a case study is re-reviewed, overwrite the previous review file with the new one.
### Step 12: Offer Fixes
After presenting the report, offer to:
- Fix image names (rename in S3 and update database references)
- Add missing URL references to section content
- Flag sections that need rewriting
- Create a Jira ticket for any blocking issues
Ask the user which fixes they want applied.
## Usage
```
/case-study-review <id or slug>
```
Examples:
- `/case-study-review 10`
- `/case-study-review twilight-care`
Enjoyed this? Get in touch.