How to Get the Most From Claude Code: Lessons From Building 50+ Projects

This article was written by Claude (AI) with human review and editing. The tips and workflow patterns described are real techniques I use daily with Claude Code.

[Illustration: a programmer juggling multiple AI assistants – the reality of running multiple Claude sessions in parallel]

After building over 50 projects with Claude Code – from full-stack SaaS applications to 3D games to AI-powered tools – I’ve learned what separates productive Claude Code sessions from frustrating ones. Here’s what actually works.

The #1 Productivity Secret: Run Multiple Jobs in Parallel

This is the single most important lesson I’ve learned: treat AI agents like employees you’re managing, not tools you’re using one at a time.

I routinely run 3-5 Claude Code sessions simultaneously across different terminal tabs. While one is researching an API, another is writing tests, and a third is refactoring a component. The key insight is that you need to context switch frequently – check in on each agent, give it a nudge if it’s going off track, add context when needed, then move to the next one.

Think of it like managing a team of junior developers: you wouldn’t sit and watch one person code for an hour. You’d give them a task, check on someone else, come back with feedback, and keep the whole team moving forward. The same applies to AI agents – they work best with slight nudges and context to improve their ability to work autonomously.

These tools have gotten dramatically better in the last 6 months. Six months ago, you really needed to babysit Claude through complex tasks. Now, with good context in your CLAUDE.md files, Claude can work autonomously for much longer stretches. But you still get the best results by managing multiple sessions and providing course corrections.

Pro tip: Use claude --continue to resume your most recent session, or claude --resume to pick a specific earlier session. This way you can switch between a "frontend-refactor" conversation and an "api-integration" conversation without losing context.

1. Set Up Persistent Memory

The single biggest productivity multiplier is giving Claude context that persists across sessions. Without it, every conversation starts from zero.

Use the MCP Memory Server:

In ~/.claude.json:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```

Then create a CLAUDE.md file that instructs Claude to:

  • Read memory at session start
  • Store project decisions, tech stacks, and deployment configs
  • Track billing/services you’re paying for
  • Remember file locations and patterns
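
Putting those instructions together, a minimal memory protocol section in CLAUDE.md might look like this (the wording is illustrative, not the exact file from my setup):

```markdown
## Memory Protocol
At the start of every session:
1. Read all stored entities from the memory server
2. Summarize any remembered context relevant to the current project

During the session, store:
- Project decisions (tech stack choices and why)
- Deployment configs (platform, app names, ports)
- Services being billed and approximate monthly cost
- Key file locations and naming patterns
```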

After a few sessions, Claude remembers things like “coffee-explorer deploys to Fly.io at coffee-explorer-proud-forest-117” or “ArborHub uses Claude Vision API for tree detection” without you explaining it again.

2. Create Verification Requirements

Code that compiles isn’t code that works. I learned this the hard way after declaring games “complete” that crashed on first interaction.

Add this to your CLAUDE.md:

```markdown
## Verification Requirements
Before declaring any web application or game complete:
1. Run the dev server and verify it loads without errors
2. Use Playwright headless testing to verify core functionality
3. Check browser console for runtime JavaScript exceptions
4. Test primary user interactions
5. Don't rely on code review alone - runtime errors require runtime testing
```

This forces Claude to actually run the code before saying “done.” The Playwright MCP server makes this automated:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--headless", "--viewport-size", "1024x768"]
    }
  }
}
```
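
Step 3 of the checklist – checking the browser console – boils down to a simple gate: collect the console messages emitted during the Playwright run and fail if any are errors. A minimal sketch of that gate (the message shape mirrors what Playwright's console events expose; names are illustrative):

```python
# Decide pass/fail from browser console messages collected during a test run.
# Each message dict has a "type" ("log", "warning", "error") and a "text".

def gate_on_console(messages):
    """Return (passed, errors), where errors lists console error texts."""
    errors = [m["text"] for m in messages if m["type"] == "error"]
    return (len(errors) == 0, errors)

# Example: a run that logged one uncaught exception should fail the gate.
run_log = [
    {"type": "log", "text": "app started"},
    {"type": "error", "text": "Uncaught TypeError: player is undefined"},
]
passed, errors = gate_on_console(run_log)
```

A game that "compiles" but throws on first click fails this gate immediately, which is exactly the point.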

3. Document Deployment Patterns

Nothing wastes more time than re-figuring out deployment configs. I store these in memory:

```
coffee-explorer-deployment:
- CRITICAL: Must deploy from ROOT directory, not backend/
- Uses combined Dockerfile that builds frontend AND backend
- nginx runs on port 80 as reverse proxy
- Spring Boot runs on port 8080 internally
- fly.toml must be at ROOT level with internal_port=80
```

When I say “deploy coffee-explorer,” Claude knows exactly what to do. No guessing, no debugging broken deployments.

4. Track What You’re Paying For

I have Claude maintain a running list of all billed services:

```
Being billed for:
- Fly.io hosting (arbor-hub-api, arbor-hub-web)
- Anthropic API (Claude Sonnet 4 Vision)
- Replicate.com API (SDXL image generation)
- AWS Lightsail FrenchProgrammingBlog-New (~$3.50/mo)
- AWS Route 53 domains ($15/year each)
```

This prevents surprise bills and helps when cleaning up abandoned projects.

5. Use Batch Processing for Repetitive Tasks

When I needed 500+ city guides for Coffee Explorer, I didn’t do them one at a time. I had Claude:

  1. Research 16 cities in parallel using web search
  2. Generate JSON files with real coffee shop data
  3. Validate each file’s structure
  4. Commit in batches

The key is giving Claude clear patterns to follow:

```
Each city guide contains:
- City metadata and coffee scene intro
- 8-12 real coffee shops (researched via web search)
- Real addresses, real descriptions
- Saved to /frontend/src/data/cities/{city-slug}.json
```

Batch 9 alone generated 16 complete city guides in a single session.
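
Step 3 of that pipeline – validating each generated file's structure – is worth automating so a bad batch never gets committed. A sketch of the kind of validator I mean (the shop count follows the pattern above; the exact field names are assumptions):

```python
# Validate a generated city guide dict against the expected structure.
# Field names are illustrative; adjust them to your actual schema.

def validate_city_guide(guide):
    """Return a list of problems; an empty list means the guide passes."""
    problems = []
    for field in ("city", "intro", "shops"):
        if field not in guide:
            problems.append(f"missing field: {field}")
    shops = guide.get("shops", [])
    if not 8 <= len(shops) <= 12:
        problems.append(f"expected 8-12 shops, got {len(shops)}")
    for i, shop in enumerate(shops):
        if not shop.get("address"):
            problems.append(f"shop {i} has no address")
    return problems
```

In the real pipeline this runs over every generated JSON file before the batch commit, and Claude fixes anything the validator flags.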

6. Store Technical Decisions

When you make an architectural choice, document why:

```
ArborHub-TreeDetection:
- Original approach using Claude Vision for bounding boxes was unreliable
- Decided to use multi-pass pipeline: Grounding DINO -> Claude validation -> SAM segmentation
- Cost increases from $0.01 to ~$0.016/image for dramatically better accuracy
```

Six months later when you revisit the code, Claude can explain why you didn’t just use Claude Vision directly.

7. Prefer Automation Over Manual Steps

I added this to my global config:

```markdown
## Workflow Preferences

### Automation First
- Always prefer automated options over manual intervention
- If a task can be completed programmatically, use that approach
- Only fall back to manual steps when no automated option exists
```

This means Claude will use API calls instead of asking me to click magic links, run scripts instead of giving me manual steps, and deploy via CLI instead of web dashboards.

8. Document Patterns for Complex Systems

For my 3D game project, I had Claude document every subsystem:

```
Three.js Game Architecture:
- Scene: THREE.Scene() with FogExp2 (density 0.003)
- Camera: PerspectiveCamera (75 FOV, attached to scene for weapon visibility)
- Renderer: WebGLRenderer with ACES Filmic tone mapping
- Quality presets: Low (25% particles), Medium (50%), High (100% + shadows)

Combat System:
- Player melee: 60° arc, 3 unit range, cone query for hits
- Combo scaling: damage * (1 + comboCount * 0.2)
- 4 abilities: Slash (Q), Spin (E), Dash (Shift+Space), FireBlast (R)
```
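
A side benefit of documenting concrete formulas like the combo scaling is that they can be sanity-checked outside the game. A quick check of the same math (the base damage value here is made up):

```python
def combo_damage(base, combo_count):
    """Combo scaling from the notes: damage * (1 + comboCount * 0.2)."""
    return base * (1 + combo_count * 0.2)

# A 10-damage hit grows 20% per combo step: 10.0 at combo 0, 12.0 at 1, 20.0 at 5.
curve = [combo_damage(10, c) for c in (0, 1, 5)]
```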

When I came back to add features months later, Claude understood the architecture immediately.

9. Use CLAUDE.md at Multiple Levels

I have three CLAUDE.md files:

  1. Global (~/.claude/CLAUDE.md): Universal preferences, memory protocol, billing tracking
  2. Workspace (~/claude-workspace/CLAUDE.md): Cross-project conventions
  3. Project-specific (project/CLAUDE.md): Deployment configs, API keys, project-specific patterns

This layered approach means Claude always has the right context without me repeating myself.

10. Use GitHub Actions for Deployment, Not Direct Deploy

My original advice was “always commit, push, and deploy.” But I’ve learned a better pattern: always commit and push, then let GitHub Actions handle the deploy.

Why? When you’re running multiple Claude sessions in parallel (and you should be), you’ll inevitably have two sessions try to deploy at the same time. Direct deploys from multiple terminals cause conflicts, failed builds, and wasted time debugging “why did my deploy break?”

Instead, set up a GitHub Actions workflow with concurrency controls:

```yaml
# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches: [main]

concurrency:
  group: deploy
  cancel-in-progress: false  # Queue deploys, don't cancel

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - run: flyctl deploy --remote-only
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
```

Now multiple Claude sessions can push commits freely, and GitHub Actions queues the deploys automatically. No conflicts, no babysitting.

11. Offload Tasks to Ollama for Cost Savings

Claude Code is powerful but not cheap. For repetitive, low-complexity tasks, I offload work to local LLMs running via Ollama.

Tasks I run locally:

  • Generating boilerplate content (city descriptions, product descriptions)
  • Simple code transformations and formatting
  • Bulk data validation and cleanup
  • First-pass content generation before Claude refinement

I set up an Ollama MCP server that Claude can call for these tasks:

```json
{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["scripts/ollama-mcp-server/index.js"]
    }
  }
}
```

This hybrid approach lets me use Claude’s intelligence for complex reasoning while keeping costs down on bulk operations. When I generated 500 city guides, the content drafts came from Llama running locally – Claude only handled the research and final polish.
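
Under the hood, a server like this just forwards prompts to Ollama's local HTTP API. A stripped-down sketch of that call in Python (the model name and prompt wording are assumptions; running it requires a local Ollama instance on the default port):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="llama3"):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def draft_city_intro(city, model="llama3"):
    """Ask a local model for a first-pass draft; Claude polishes it later."""
    payload = build_request(
        f"Write a two-paragraph intro to the coffee scene in {city}.", model
    )
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses put the full completion in "response"
        return json.loads(resp.read())["response"]
```

Because the draft comes back from the local model for free, Claude only spends tokens on research and final polish.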

12. Get the Right Image Generation Tools

Code is only half the product. Modern web apps need images, and AI image generation has become essential to my workflow.

My current stack:

  • fal.ai – Fast and cheap for bulk generation (~$0.003/image with FLUX Schnell). I use this for hero images, product shots, and blog illustrations.
  • Replicate – Good for SDXL when I need more control over the generation process
  • ComfyUI – Local generation for iterating on complex prompts without API costs

I store the API keys in my environment and have Claude generate images as part of the development workflow. For Coffee Explorer, I generated 100+ city hero images in a single batch – something that would have cost thousands in stock photography.

The key is treating image generation as a first-class part of development, not an afterthought. When I’m building a new feature, I have Claude generate the supporting images alongside the code.
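
At roughly $0.003/image, batch scale is where this pays off, so I like to build the prompts and estimate the cost before touching the API. A small helper along those lines (the prompt template is made up, the per-image price comes from the fal.ai bullet above, and the actual generation call would go through your provider's client library):

```python
FLUX_SCHNELL_COST_PER_IMAGE = 0.003  # approximate, from the pricing above

def build_hero_batch(cities):
    """Return (prompts, estimated_cost) for a batch of city hero images."""
    prompts = [
        f"Cozy specialty coffee shop interior in {city}, warm morning light"
        for city in cities
    ]
    return prompts, len(prompts) * FLUX_SCHNELL_COST_PER_IMAGE

prompts, cost = build_hero_batch(["Lisbon", "Oslo", "Melbourne"])
# 3 images at ~$0.003 each is about $0.009 - a 100-city batch is still under $1
```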


The Meta-Lesson

The common thread in all of this: invest in context and run in parallel. Every minute spent documenting patterns, storing decisions, and setting up automation pays back tenfold across dozens of sessions.

But the real multiplier is parallelization. Stop thinking of Claude as a single tool you interact with sequentially. Think of it as a team you’re managing – give each instance clear context, check in regularly, provide nudges when needed, and let them work autonomously.

Claude Code is powerful out of the box. But Claude Code with persistent memory, documented patterns, automated verification, cost-optimized task routing, and parallel execution? That’s a 10x multiplier.

Start with the memory server and one CLAUDE.md file. Add to it every session. In a month, you’ll wonder how you ever worked without it.
