Claude Code /insights: What My AI Performance Review Taught Me
Most people who use AI never stop to ask a simple question: is this actually working?
They open Claude, ChatGPT, or whatever tool they prefer, type a prompt, get a response, and move on. Maybe the output was good. Maybe it was average. They rarely look back. I was the same until I ran Claude Code /insights for the first time and got what felt like a performance review. Not of my code. Of me.
If you haven’t heard of it, Claude Code is Anthropic’s command-line tool that lets you work with Claude directly inside your terminal. You can build apps, edit files, run commands, and manage entire projects through conversation. The /insights command is a recent addition. Type it, wait a few minutes, and it generates an interactive HTML report analysing every session you’ve had over the past 30 days.
It tells you what you worked on, how you interact with Claude, where things went well, and where they didn’t. Think of it as a monthly review from a manager who has been watching every single thing you do. Quietly. Without blinking.
What Claude Code Insights Actually Shows You
The report is surprisingly detailed. Mine covered 84 sessions across 30 days and broke everything down into clear sections.
There’s an ‘at a glance’ summary at the top that reads like a coaching note. Then it goes deeper: what you spent your time on, which tools you used most, your satisfaction levels (yes, it infers how happy you were based on your responses), where friction occurred, and what you could try differently.
It even tracks your response times. Mine showed that most of my replies came within 10 to 30 seconds. That told me something I already suspected but hadn’t confirmed: I tend to fire off quick instructions rather than taking a moment to give Claude more context upfront.
The whole thing is generated locally. Your session data stays on your machine. Claude analyses the patterns, renders an HTML file, and you open it in your browser. No dashboards. No third-party tools. Just a single command.
The Part That Stung
Here’s where it gets interesting. The report doesn’t just pat you on the back. It tells you what’s going wrong.
Mine flagged three friction patterns:
- Context limits on large tasks. I kept trying to do too much in a single session instead of breaking work into smaller batches. Claude would run out of context halfway through, and I’d lose momentum picking up where it left off.
- Inconsistent design and styling. When I asked Claude to handle visual changes, it would fix one page and let something else drift. Font sizes, spacing, alignment. Always something slightly off on a page I hadn’t asked it to touch.
- Mismatched expectations. Claude sometimes misjudged the format or complexity level I wanted, which wasted entire rounds of back and forth before we got on the same page.
None of this was news, exactly. But seeing it laid out with specific examples made it harder to ignore. I’d been vaguely aware of these patterns. The report made them concrete.
One of the friction examples was brutal. It referenced a session where I’d asked Claude to update the branding across a project. The result was so poor I reverted every change. The report described it as one of only two sessions that month where the goal was ‘not achieved at all.’ Reading that felt like getting a note from a teacher. A fair one.
What I Actually Changed
The useful part of Claude Code insights isn’t the diagnosis. It’s the prescription.
The report suggested specific things to add to my CLAUDE.md file (a configuration file that tells Claude how to behave in your projects). Things like: always generate a continuation prompt before running out of context, always check consistency across every page before committing style changes, and always use the project’s established file conventions instead of guessing.
These are small instructions. But they address patterns that had been costing me time across dozens of sessions.
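Translated into a memory file, those suggestions might look something like this. This is a hedged sketch of my own takeaways, not Anthropic’s recommended format; the headings and exact wording are up to you:

```markdown
# CLAUDE.md

## Session management
- Before the context window fills up, write a continuation prompt
  summarising the current state and the remaining steps.

## Styling changes
- After any visual change, check fonts, spacing, and alignment
  across every affected page before committing.

## Conventions
- Follow this project's established file and naming conventions.
  Ask rather than guess when a convention is unclear.
```

Claude reads this file at the start of each session, so rules like these apply without being repeated in every prompt.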
It also recommended features I hadn’t tried. Custom skills (reusable prompt templates you trigger with a slash command) and hooks (automated checks that run after every edit). Both of these directly target my most common friction points. I’ve since set up a skill for my planning workflow and it’s already saved me from repeating the same instructions at the start of every session.
The suggestions weren’t generic. They were drawn from my actual usage data. That’s the difference between reading a ‘top tips for Claude Code’ article and getting feedback based on how you specifically work.
Beyond adding instructions, pruning that same CLAUDE.md memory file is another lever: a lean file leaves more of the context window for the actual work, which can dramatically improve session quality.
Why Most People Skip This Step
There’s a reason most AI users never review their own habits. It’s the same reason most people don’t track how they spend their time or review their own meeting notes. Reflection takes effort, and the feedback isn’t always comfortable.
But here’s what I’ve noticed after a month of heavy AI usage: the people who get the most from these tools aren’t the ones with the cleverest prompts. They’re the ones who pay attention to what’s working and what isn’t. They iterate on their process, not just their output.
If you’re still building your prompt engineering foundations, this matters even more. The habits you form early tend to stick. Running /insights regularly is one way to make sure those habits are actually good ones.
Nate Meyvis made a sharp observation after running his own report: using AI feedback to get better at working with AI is going to be an essential skill. I think he’s right. And I’d add that learning to push back on that feedback (knowing which suggestions fit your workflow and which don’t) is equally important.
How to Run Claude Code /insights Yourself
If you already have Claude Code installed, it’s one command. Open your terminal, start a Claude Code session, type /insights at the prompt, and wait. It takes a few minutes to process your sessions. When it’s done, you’ll find the report at ~/.claude/usage-data/report.html. Open it in your browser.
If you don’t have Claude Code yet, you’ll need a Claude Pro or Max subscription and Node.js installed on your machine. The official documentation walks you through setup.
A few things worth knowing before you run it. You need at least a couple of weeks of regular usage for the report to be meaningful. Two or three sessions won’t give it enough data to spot patterns. It also analyses a rolling 30-day window, so running it monthly gives you a fresh view of how your habits are evolving.
Stop Using AI on Autopilot
The biggest lesson from my Claude Code insights report wasn’t any single suggestion. It was the reminder that working with AI is a skill that improves with attention, not just repetition.
Most people treat AI like a vending machine. Put in a prompt, get out a result. But the people who get genuinely good at this treat it more like a working relationship. They notice patterns. They adjust. They get better.
Run /insights. Read the report. Change one thing. That’s all it takes to stop running on autopilot and start actually improving.