This article was written by Claude (AI) with human review and editing. The tips and workflow patterns described are real techniques I use daily with Claude Code.

After building over 50 projects with Claude Code – from full-stack SaaS applications to 3D games to AI-powered tools – I’ve learned what separates productive Claude Code sessions from frustrating ones. Here’s what actually works.
The #1 Productivity Secret: Run Multiple Jobs in Parallel
This is the single most important lesson I’ve learned: treat AI agents like employees you’re managing, not tools you’re using one at a time.
I routinely run 3-5 Claude Code sessions simultaneously across different terminal tabs: while one researches an API, another writes tests and a third refactors a component. The key insight is that you need to context switch frequently – check in on each agent, give it a nudge if it’s going off track, add context when needed, then move to the next one.
Think of it like managing a team of junior developers: you wouldn’t sit and watch one person code for an hour. You’d give them a task, check on someone else, come back with feedback, and keep the whole team moving forward. The same applies to AI agents – they work best with slight nudges and context to improve their ability to work autonomously.
These tools have gotten dramatically better. Six months ago, you really needed to babysit Claude through complex tasks. Now, with good context in your CLAUDE.md files, Claude can work autonomously for much longer stretches. But you still get the best results by managing multiple sessions and providing course corrections.
Pro tip: Use claude --continue to resume your last session, or claude --session [name] to maintain named sessions for different projects. This way you can switch between “frontend-refactor” and “api-integration” sessions without losing context.
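For example, each terminal tab can sit in its own project directory, so resuming is just a matter of where you run the command (the directory names below are placeholders):

```bash
# Tab 1: frontend work, resume the last session in this project
cd ~/projects/frontend-app && claude --continue

# Tab 2: API integration, resume its own last session
cd ~/projects/api-service && claude --continue

# Tab 3: start a fresh session for a new research task
cd ~/projects/research-spike && claude
```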
1. Set Up Persistent Memory
The single biggest productivity multiplier is giving Claude context that persists across sessions. Without it, every conversation starts from zero.
Use the MCP Memory Server:
```json
// In ~/.claude.json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```

Then create a CLAUDE.md file that instructs Claude to:
- Read memory at session start
- Store project decisions, tech stacks, and deployment configs
- Track billing/services you’re paying for
- Remember file locations and patterns
After a few sessions, Claude remembers things like “project deploys to Fly.io at project-proud-forest-117” or “ArborHub uses Claude Vision API for tree detection” without you explaining it again.
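A minimal sketch of what that instruction block might look like inside CLAUDE.md (the exact wording is yours to adapt):

```markdown
## Memory Protocol
At the start of every session:
1. Read the memory graph before making project decisions
2. Store new decisions, tech stacks, and deployment configs as you learn them
3. Keep a running list of billed services and rough monthly costs
4. Note key file locations and recurring patterns
```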
2. Create Verification Requirements
Code that compiles isn’t code that works. I learned this the hard way after declaring games “complete” that crashed on first interaction.
Add this to your CLAUDE.md:
```markdown
## Verification Requirements
Before declaring any web application or game complete:
1. Run the dev server and verify it loads without errors
2. Use Playwright headless testing to verify core functionality
3. Check browser console for runtime JavaScript exceptions
4. Test primary user interactions
5. Don't rely on code review alone - runtime errors require runtime testing
```

This forces Claude to actually run the code before saying “done.” The Playwright MCP server makes this automated:
```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--headless", "--viewport-size", "1024x768"]
    }
  }
}
```
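If you want to script the same smoke check yourself outside of MCP, a minimal sketch might look like this (the dev-server port is an assumption; adjust for your stack):

```javascript
// verify.mjs – run with: node verify.mjs (needs `npm install playwright`)
import { chromium } from 'playwright';

const browser = await chromium.launch({ headless: true });
const page = await browser.newPage();

// Collect runtime exceptions and console errors while the page loads
const errors = [];
page.on('pageerror', (err) => errors.push(err.message));
page.on('console', (msg) => {
  if (msg.type() === 'error') errors.push(msg.text());
});

await page.goto('http://localhost:5173'); // assumed dev-server URL
await page.waitForLoadState('networkidle');
await browser.close();

if (errors.length) {
  console.error('Runtime errors:\n' + errors.join('\n'));
  process.exit(1);
}
console.log('Page loaded without runtime errors');
```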
3. Document Deployment Patterns
Nothing wastes more time than re-figuring out deployment configs. I store these in memory:
```
project-deployment:
- CRITICAL: Must deploy from ROOT directory, not backend/
- Uses combined Dockerfile that builds frontend AND backend
- nginx runs on port 80 as reverse proxy
- Spring Boot runs on port 8080 internally
- fly.toml must be at ROOT level with internal_port=80
```

When I say “deploy project,” Claude knows exactly what to do. No guessing, no debugging broken deployments.
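The matching fly.toml might look roughly like this (reusing the example app name from earlier; the rest mirrors the notes above):

```toml
# fly.toml at the repository ROOT
app = "project-proud-forest-117"

[build]
  dockerfile = "Dockerfile"   # combined image that builds frontend AND backend

[http_service]
  internal_port = 80          # nginx reverse proxy; Spring Boot stays on 8080 internally
  force_https = true
```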
4. Track What You’re Paying For
I have Claude maintain a running list of all billed services:
```
Being billed for:
- Fly.io hosting (arbor-hub-api, arbor-hub-web)
- Anthropic API (Claude Sonnet 4 Vision)
- Replicate.com API (SDXL image generation)
- AWS Lightsail FrenchProgrammingBlog-New (~$3.50/mo)
- AWS Route 53 domains ($15/year each)
```

This prevents surprise bills and helps when cleaning up abandoned projects.
5. Use Batch Processing for Repetitive Tasks
When I needed 500+ city guides for a project, I didn’t do them one at a time. I had Claude:
- Research 16 cities in parallel using web search
- Generate JSON files with real coffee shop data
- Validate each file’s structure
- Commit in batches
The key is giving Claude clear patterns to follow:
```
Each city guide contains:
- City metadata and coffee scene intro
- 8-12 real coffee shops (researched via web search)
- Real addresses, real descriptions
- Saved to /frontend/src/data/cities/{city-slug}.json
```

Batch 9 alone generated 16 complete city guides in a single session.
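For context, a single generated file might look like this (the field names here are illustrative, not the project’s actual schema):

```json
{
  "city": "Portland",
  "slug": "portland",
  "intro": "A short overview of the local coffee scene...",
  "coffeeShops": [
    {
      "name": "Example Roasters",
      "address": "123 Main St, Portland, OR 97201",
      "description": "One or two sentences of researched detail."
    }
  ]
}
```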
6. Store Technical Decisions
When you make an architectural choice, document why:
```
ArborHub-TreeDetection:
- Original approach using Claude Vision for bounding boxes was unreliable
- Decided to use multi-pass pipeline: Grounding DINO -> Claude validation -> SAM segmentation
- Cost increases from $0.01 to ~$0.016/image for dramatically better accuracy
```

Six months later when you revisit the code, Claude can explain why you didn’t just use Claude Vision directly.
7. Prefer Automation Over Manual Steps
I added this to my global config:
```markdown
## Workflow Preferences

### Automation First
- Always prefer automated options over manual intervention
- If a task can be completed programmatically, use that approach
- Only fall back to manual steps when no automated option exists
```

This means Claude will use API calls instead of asking me to click magic links, run scripts instead of giving me manual steps, and deploy via CLI instead of web dashboards.
8. Document Patterns for Complex Systems
For my 3D game project, I had Claude document every subsystem:
```
Three.js Game Architecture:
- Scene: THREE.Scene() with FogExp2 (density 0.003)
- Camera: PerspectiveCamera (75° FOV, attached to scene for weapon visibility)
- Renderer: WebGLRenderer with ACES Filmic tone mapping
- Quality presets: Low (25% particles), Medium (50%), High (100% + shadows)

Combat System:
- Player melee: 60° arc, 3 unit range, cone query for hits
- Combo scaling: damage * (1 + comboCount * 0.2)
- 4 abilities: Slash (Q), Spin (E), Dash (Shift+Space), FireBlast (R)
```

When I came back to add features months later, Claude understood the architecture immediately.
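Those notes map almost directly onto setup code. A rough sketch, with the numbers taken from the notes above (colors and clip planes are placeholders):

```javascript
import * as THREE from 'three';

// Scene with exponential fog, density 0.003
const scene = new THREE.Scene();
scene.fog = new THREE.FogExp2(0x8899aa, 0.003);

// 75° FOV camera, added to the scene so attached weapon meshes stay visible
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
scene.add(camera);

// Renderer with ACES Filmic tone mapping
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.toneMapping = THREE.ACESFilmicToneMapping;

// Combo scaling from the combat notes
const comboDamage = (baseDamage, comboCount) => baseDamage * (1 + comboCount * 0.2);
```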
9. Use CLAUDE.md at Multiple Levels
I have three CLAUDE.md files:
- Global (`~/.claude/CLAUDE.md`): Universal preferences, memory protocol, billing tracking
- Workspace (`~/claude-workspace/CLAUDE.md`): Cross-project conventions
- Project-specific (`project/CLAUDE.md`): Deployment configs, API keys, project-specific patterns
This layered approach means Claude always has the right context without me repeating myself.
10. Use GitHub Actions for Deployment, Not Direct Deploy
My biggest workflow improvement was using GitHub Actions for deployment instead of deploying directly:
```yaml
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]
concurrency:
  group: deploy-production
  cancel-in-progress: false
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - run: flyctl deploy --remote-only
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }} # flyctl authenticates via this repo secret
```

Why this matters: When running multiple Claude sessions in parallel (as you should be!), you’ll have race conditions. Two sessions finish at the same time and both try to deploy. With concurrency, GitHub Actions queues deployments properly instead of failing.
Now my workflow is just git add . && git commit && git push. GitHub handles the rest.
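One thing the workflow above assumes is a FLY_API_TOKEN secret on the repository. Setting that up is a one-time step, roughly (assuming flyctl and the GitHub CLI are installed and logged in):

```bash
fly tokens create deploy -x 999999h   # long-lived deploy token, per Fly.io's GitHub Actions docs
gh secret set FLY_API_TOKEN           # paste the token when prompted
```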
11. Offload Tasks to Ollama for Cost Savings
Claude Code is powerful but costs money. For many routine tasks, a local LLM running via Ollama works just as well:
```json
// In ~/.claude.json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp-server"]
    }
  }
}
```

Good candidates for Ollama:
- Code formatting and linting suggestions
- Simple refactoring tasks
- Generating boilerplate code
- Writing tests for straightforward functions
- Documentation generation
Keep on Claude:
- Complex architectural decisions
- Debugging tricky issues
- Anything requiring web search or current information
- Tasks needing full codebase context
I’ve cut my API costs by roughly 30% by offloading routine work to local models.
12. Get the Right Image Generation Tools
For AI-powered image generation in your projects, here are the options I’ve found work best:
fal.ai – Best for quick, cheap generation. FLUX Schnell costs about $0.003/image and generates in seconds. Great for prototypes and content generation.
Replicate – More model variety. I use SDXL for higher-quality images when fidelity matters. Costs more (~$0.02-0.05/image) but worth it for production assets.
ComfyUI (local) – Free after GPU investment. I run this for batch jobs where I need hundreds of images. Takes more setup but zero marginal cost.
All three can be connected to Claude via MCP servers, letting Claude generate images as part of automated workflows.
The Meta-Lesson
The common thread in all of this: invest in context and parallelization. Every minute spent documenting patterns, storing decisions, setting up automation, and learning to manage multiple sessions pays back tenfold.
Claude Code is powerful out of the box. But Claude Code with persistent memory, documented patterns, automated verification, and the ability to run multiple jobs in parallel? That’s a 10x multiplier.
Start with the memory server and one CLAUDE.md file, and get comfortable running 2-3 sessions at once. Add to your setup every session. In a month, you’ll wonder how you ever worked without it.








