I've been using Claude Code as my primary development tool for a few months now and have shipped multiple projects to production with it.
These aren't best practices. Just what I've noticed works.
Planning
Don't one-shot projects. Build vertical slices.
Describing an entire app and expecting Claude to generate it doesn't work. It loses the thread on large generations, makes inconsistent choices across files, and you end up with code that doesn't fit together.
Build one complete feature at a time. Model, views, tests, end to end, before moving to the next thing. Each slice small enough that Claude can hold the whole thing in context.
Spend the time on specs first.
For one project, I created a docs/spec/ folder with numbered files: 01-PRD.md, 02-Architecture.md, 03-Data-Model.md, through 12-Roadmap.md. Plus a DECISIONS.md for architectural choices in ADR format.
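Concretely, the layout was something like this (the files between 03 and 12 varied by project):

```
docs/spec/
├── 01-PRD.md
├── 02-Architecture.md
├── 03-Data-Model.md
├── ...
├── 12-Roadmap.md
└── DECISIONS.md
```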
When Claude drifted (and it drifts), I could point it back. "Check the spec, we're using Postgres, not SQLite." The spec becomes shared memory.
Start with the hardest part.
If the core complexity doesn't work, better to know before scaffolding everything around it. Don't build auth, user management, and settings pages before proving the hard thing is even possible.
Ask Claude to explain before implementing.
"How would you approach X?" catches misunderstandings before they become code. It's cheaper to fix a wrong explanation than a wrong implementation.
Context Management
Use /compact before you lose context.
Context fills up. Claude starts forgetting earlier decisions, contradicting itself, re-asking questions you already answered. Run /compact proactively, before things get fuzzy, not after.
One conversation per task.
Don't mix unrelated work in the same session. Context gets muddied. Start fresh for distinct pieces of work.
Show, don't explain.
When I wanted a specific code pattern, pointing Claude to an existing example beat explaining it. "Follow the pattern in apps/checks/views.py" works better than a paragraph describing what I want.
CLAUDE.md
Keep it small. Link, don't dump.
Early mistake: cramming everything into CLAUDE.md. Context bloats, and Claude starts ignoring parts of it.
What works: one-line project description, non-negotiables, where to find detailed specs, key commands, code conventions. That's it. Details live in linked docs.
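As a sketch, a CLAUDE.md following that shape might look like this (the project name, stack, and commands are invented for illustration):

```markdown
# Acme Checks

Django app for monitoring scheduled jobs and alerting on missed runs.

## Non-negotiables
- Postgres, not SQLite.
- Every generated report must include citations.
- Don't add abstraction layers. Don't refactor unrelated code.
- Ask if anything is unclear instead of guessing.

## Where things live
- Specs: docs/spec/ (numbered, start with 01-PRD.md)
- Architecture decisions: docs/spec/DECISIONS.md

## Commands
- Tests: `python manage.py test`
- Dev server: `python manage.py runserver`
```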
It's not a static file.
When Claude made the same mistake twice (forgetting citations in output), I added it to the non-negotiables. When it kept getting plan limits wrong, I added those. The file evolves with the project.
Negative constraints work.
"Don't add abstraction layers", "no ORMs", "don't refactor unrelated code" are often clearer than describing what you want. Claude defaults to over-engineering; explicit constraints prevent that.
Tell Claude to ask questions.
Add "ask if anything is unclear" to your CLAUDE.md. Claude defaults to guessing rather than asking. Explicit permission to ask reduces confident mistakes.
Document what Claude can't infer.
Business logic, plan limits, external API quirks, deployment specifics. Anything that isn't obvious from reading the code but affects how code should be written.
Working with Claude
Commit working state frequently.
Claude can go sideways fast. You need rollback points. Small commits after each working change, not one big commit at the end.
Don't let Claude refactor unprompted.
It will "improve" working code into broken code if you're not specific about scope. "Fix the bug in this function" should not become "fix the bug and also restructure the module."
Review what it produces.
Claude writes plausible code. Plausible isn't correct. I've caught off-by-one errors, missing edge cases, HMAC validation that wasn't actually validating. The code looks right. Read it anyway.
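The HMAC case is worth a sketch. This isn't the actual code, just the shape of the bug: a check that correctly computes a digest but never compares it to anything, so it reads fine at a glance and accepts every request.

```python
import hashlib
import hmac

SECRET = b"illustrative-secret"  # real secrets come from config, not source


def verify_plausible(payload: bytes, signature: str) -> bool:
    # Looks like validation: computes the expected digest correctly...
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    # ...but never compares it to the incoming signature. Any payload passes.
    return bool(expected)


def verify_correct(payload: bytes, signature: str) -> bool:
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison against the signature the caller actually sent.
    return hmac.compare_digest(expected, signature)
```

A test that sends a bad signature catches the first version immediately; reading the diff line by line is the only other way.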
Skills and MCPs
If you repeat it, make it a skill.
I kept giving Claude the same Playwright testing workflow: start test environment, log in as test user, navigate to dashboard, verify stats. Same instructions, multiple times.
That's a skill. If you're copy-pasting instruction blocks or saying "like we did before" repeatedly, extract it.
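A skill is just a markdown file with a short frontmatter block followed by the instructions you'd otherwise paste. A sketch of the workflow above as a skill (the name, commands, and steps are illustrative, not the real files):

```markdown
---
name: dashboard-smoke-test
description: Run the Playwright smoke test against the local test environment
---

1. Start the test environment: `docker compose -f compose.test.yml up -d`
2. Log in as the seeded test user (credentials in .env.test).
3. Navigate to /dashboard and wait for the stats cards to render.
4. Verify the stats match the seeded fixtures. Report any mismatch.
```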
Start with community skills.
I use Superpowers by Jesse Vincent, a collection of skills for planning, brainstorming, and code review. For debugging, the systematic-debugging skill from Anthropic's skills repo forces Claude to actually diagnose before jumping to fixes. Their TDD skill works well for bug fixes too: write a failing test first, then fix.
You don't need to build everything from scratch. Steal what works, then customize.
MCPs extend what Claude can see.
I added Playwright MCP to one project. Instead of describing what the dashboard showed, Claude could see it directly. It found bugs I missed: a stats counter not updating, a button that didn't disable at plan limits.
Write scenarios Claude can run. Claude follows the script and reports what it observes. It's not unit testing; it's a second pair of eyes on the running app.
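A scenario is just a script of steps and expected observations. Something like this (details invented, but it mirrors the plan-limit bug above):

```
Scenario: plan limit enforcement
1. Log in as a user on the free plan with 5/5 checks used.
2. Open the dashboard.
3. Observe: the "New check" button should be disabled.
4. Observe: the usage counter should read "5 of 5".
Report anything that doesn't match, with a screenshot.
```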
This is what's working for me right now. It'll probably change as the tooling evolves. The one constant: treat Claude as a capable but fallible collaborator, not a magic box that produces correct code.