Programming · 11 min read

I Spent 40 Hours Mining r/ClaudeAI for the Techniques That Actually Work

Most people use Claude Code like a fancy autocomplete. They type a request, get a response, and repeat until something works. I did the same thing for months.

Then I spent a week reading through hundreds of posts on r/ClaudeAI, filtering out the complaints and feature requests to find what power users actually do differently. What I found changed how I work with this tool.

This is everything I learned, organized into the techniques that produced measurable differences in my output.

Claude Code is not only for developers

Almost all of the tweaks and customisations below apply to any work with Claude Code, not just coding. If you want a personal assistant that knows your product, ICP, pain points, and tone of voice, and helps you research competitors or brainstorm content ideas - this still applies!

Extended Thinking: What Actually Works Now

If you've read older guides about Claude Code, you'll see references to "ultrathink" and a hierarchy of thinking triggers. That's outdated. As of January 16, 2026, Anthropic deprecated those keywords.

Here's what actually works now:

Extended thinking is on by default. Claude Code automatically allocates thinking budget based on task complexity. You don't need to trigger it manually for most tasks.

The Tab key toggles thinking per-message. Press Tab before sending to turn extended thinking off for simple questions where you don't need deep reasoning. Press it again to turn it back on.

There's a hidden environment variable for power users. If you want more thinking budget than the default, you can set:

MAX_THINKING_TOKENS=63999 claude

That's roughly 2x the old "ultrathink" budget. You can also make it permanent by adding it to your shell config:

# Add to ~/.zshrc or ~/.bashrc
export MAX_THINKING_TOKENS=63999

When to increase the thinking budget:

I've found the higher budget helps with:

  • Complex debugging, where Claude keeps proposing the same failed solution
  • Architectural decisions spanning multiple files
  • Migration planning, where there are many interdependencies to track

For routine coding, the default automatic thinking works fine. The extra budget costs more tokens and takes longer, so use it when you're genuinely stuck, not as a default.

The Explore-Plan-Code-Commit Workflow

Anthropic's own engineering team published their internal workflow, and it's the single most useful thing I found. The pattern prevents Claude's worst tendency: jumping straight into code without understanding what already exists.

Step 1: Explore

Tell Claude to read files and learn about the specific problem you want to work on, but explicitly forbid it from writing anything:

Read the files that handle user authentication. Don't write any code yet.

This forces Claude to understand existing patterns before proposing changes. I've watched it catch naming conventions, error handling styles, and architectural decisions that it would have violated otherwise.

Step 2: Plan

Now ask Claude to think the problem through and plan before it touches code:

Think hard about how to add OAuth support to this authentication system. 
Write a detailed plan to plan.md before any implementation.

Writing the plan to a file creates a checkpoint. When implementation goes sideways (and it will), you can return to the plan and try a different approach without losing your analysis.

Step 3: Code

Only now do you let Claude write code. Reference the plan:

Implement the OAuth changes according to plan.md. 
Run the test suite after each file change.

Step 4: Commit

Have Claude create the commit and PR:

Create a commit with a descriptive message explaining the OAuth implementation.
Push and open a PR.

This workflow adds maybe two minutes to each task. I stopped seeing the "Claude rewrote my entire file in a different style" problem almost entirely.

CLAUDE.md: Your Project's Persistent Memory

Every Claude Code session starts by reading a file called CLAUDE.md from your project root. This is where you put instructions that should apply to every conversation.

Keep it under 300 lines. Every instruction competes for attention with your actual task, and Claude starts ignoring rules once you pack in too many.

Make commands copy-paste ready. Don't write "run the tests." Write the actual command:

## Commands
- `npm run test` - Run Jest tests (use -t "pattern" for specific tests)
- `npm run typecheck` - TypeScript strict mode checking
- `npm run build` - Production build with optimizations

State what NOT to do, with alternatives:

## Guardrails
- Don't commit directly to main. Use feature branches.
- Don't edit .env files. Use .env.example as a template.
- Don't install new dependencies without asking first.

Include a project map:

## Project Structure
- src/api/ - REST endpoints and middleware
- src/components/ui/ - Shared React components
- src/lib/ - Utility functions and helpers
- prisma/ - Database schema and migrations

Here's an example template you can use:

# Project: [Name]

## Commands
- `npm run dev` - Start development server on port 3000
- `npm run test` - Run Jest (use -t "pattern" for specific tests)
- `npm run typecheck` - TypeScript checking
- `npm run lint` - ESLint with auto-fix

## Code Style
- TypeScript strict mode, never use `any`
- Named exports only, no default exports
- Functional components with hooks
- Error messages must be user-readable

## Workflow
1. Read relevant files before making changes
2. Write plan to plan.md for complex tasks
3. Run typecheck after every change
4. Use single test runs when iterating, full suite before commit

## Guardrails
- Don't commit to main
- Don't modify package.json without asking
- Don't delete files without confirmation
- Don't use console.log for debugging (use the logger)

## Project Map
- src/api/ - API routes and handlers
- src/components/ - React components
- src/hooks/ - Custom React hooks
- src/lib/ - Shared utilities
- prisma/schema.prisma - Database schema

## Current Focus
[I update this section with what I'm working on]

File hierarchy matters. Claude loads CLAUDE.md files from multiple locations:

  1. ~/.claude/CLAUDE.md - Global preferences (your personal defaults)
  2. ./CLAUDE.md - Project root (shared with team via git)
  3. ./CLAUDE.local.md - Personal overrides (add to .gitignore)
  4. ./src/CLAUDE.md - Subdirectory-specific (loads when Claude works there)

I keep my global file minimal, just formatting preferences and a few commands I use everywhere. Project-specific stuff goes in the project file.
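
For reference, a stripped-down global file (the contents here are illustrative - adapt them to your own habits) looks something like this:

# Global preferences

## Formatting
- Keep explanations concise; lead with the answer, not a preamble
- Prefer bullet lists over long paragraphs

## Commands
- `gh pr view --web` - Open the current PR in the browser
- `npm run typecheck` - Most of my projects expose TypeScript checking under this name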

MCP Servers: Shit That Actually Matters

MCP (Model Context Protocol) servers extend Claude's capabilities by connecting it to external tools. The ecosystem has exploded, and there are hundreds of servers now, but most aren't worth the context window cost. Make sure to only add what you actually use (you can switch them on and off).

After testing hundreds of servers, I narrowed my daily setup to four:

1. Filesystem MCP (Official)

Gives Claude proper file operations instead of shell commands. Sounds minor, but it handles permissions and paths more reliably.

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}

2. GitHub MCP (Official)

Repository management without leaving Claude. I use it for creating PRs, reviewing diffs, and managing issues.

{
  "github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": {
      "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
    }
  }
}

3. Context7 MCP

This one solves a real problem. Claude's training data is months old, so it doesn't know recent framework changes. Context7 fetches current documentation on demand.

When Claude suggests deprecated Next.js patterns, I can say "use context7 to check the current Next.js 15 documentation" and it pulls fresh docs.

GitHub: https://github.com/upstash/context7
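
Configuration follows the same shape as the other servers. Based on the Context7 README the package name should be @upstash/context7-mcp, but double-check the repo before copying this:

{
  "context7": {
    "command": "npx",
    "args": ["-y", "@upstash/context7-mcp"]
  }
}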

4. Sequential Thinking MCP

Forces Claude into a structured problem-solving mode. Useful for debugging sessions where it keeps proposing the same failed solution.

{
  "sequential-thinking": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
  }
}

The tradeoff nobody mentions: Each MCP server loads its tool definitions into your context window before you start working. Five servers with twenty tools each means thousands of tokens gone before you type anything.

Start with two servers. Add more only when you have a specific need.

For configuration, use `claude mcp add --scope user` for servers you want in every project, or create a .mcp.json in your project root for project-specific ones (that file can be committed and shared with your team).
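
The exact flags can drift between versions, so treat this as a sketch and check claude mcp add --help:

# available everywhere (user scope)
claude mcp add --scope user context7 -- npx -y @upstash/context7-mcp

# written to this project's .mcp.json (project scope)
claude mcp add --scope project github -- npx -y @modelcontextprotocol/server-github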

Other servers worth knowing about:

  • Vista Social MCP - Social media management on steroids
  • Supabase MCP - Direct database queries and schema inspection
  • Playwright MCP - Browser automation for testing web apps
  • Slack MCP - Read channels and threads for context
  • Linear MCP - Issue tracking integration
  • Rube MCP - Single server connecting to 500+ services (good for reducing context bloat)

Full list: https://github.com/modelcontextprotocol/servers

Context Management: The Skill That Separates Good Sessions From Bad Ones

Claude has a 200K token context window. This sounds enormous until you're three hours into a complex project and Claude starts suggesting code you already rejected an hour ago.

I've heard this described as "AI context is like milk - best served fresh and condensed." It's accurate.

The handoff document pattern

When I'm approaching context limits or Claude's responses start degrading, I ask for a handoff:

Write a handoff document explaining:
- What we tried
- What worked
- What failed and why
- Specific next steps

Save it to experiments/HANDOFF-[date].md

Then I start a fresh session and paste the file path. Claude gets full context with clean conversation history.
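
My first message in the new session is usually just:

Read experiments/HANDOFF-[date].md before doing anything else.
Pick up from the "next steps" section - don't write any code until you've read it.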

This consistently beats the /compact command, which summarizes in ways that sometimes lose critical details.

Signs your context is degrading:

  • Claude suggests approaches you already tried and rejected
  • It violates patterns established earlier in the session
  • Responses feel generic instead of project-specific
  • It asks questions you already answered

When I see these signs, I stop immediately. Commit what works, write a handoff document, start fresh. Fighting degraded context wastes more time than restarting.

Monitor with /context

This command shows current usage. I try to hand off at around 60% usage rather than waiting to hit the limit.

Git worktrees for parallel sessions

For large features, I run multiple Claude instances in separate worktrees:

git worktree add ../project-feature-auth feature-auth
git worktree add ../project-feature-billing feature-billing

Launch Claude in each directory. Each instance has its own context, its own conversation history, and its own set of changes. No conflicts.
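
Concretely, that's one terminal per worktree:

cd ../project-feature-auth && claude

# in a second terminal
cd ../project-feature-billing && claude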

I cycle through checking progress, and when one instance hits context limits, I hand off and continue. The other instances keep working.

Custom Commands: Automate Your Repetitive Workflows

Every time I found myself typing the same multi-step instructions, I turned it into a custom command.

Commands live in .claude/commands/ (project-specific) or ~/.claude/commands/ (global). Each command is a markdown file.

Example: Fix a GitHub issue end-to-end

File: .claude/commands/fix-issue.md

---
description: Analyze and fix a GitHub issue
allowed-tools: Bash(git:*), Bash(gh:*), Read, Edit
---

Analyze and fix GitHub issue: $ARGUMENTS

Steps:
1. Run `gh issue view $ARGUMENTS` to get full issue details
2. Search codebase for relevant files
3. Implement the fix
4. Write tests that verify the fix
5. Run the full test suite
6. Create a commit with message "Fix #$ARGUMENTS: [description]"
7. Push and create a PR linking to the issue

Usage: /project:fix-issue 1234

Example: Code review with specific criteria

File: ~/.claude/commands/review.md

---
description: Review code against team standards
allowed-tools: Read, Bash(git diff:*)
---

Review the changes in this branch against these criteria:

1. Type safety: Are there any `any` types or type assertions that could be avoided?
2. Error handling: Are errors caught and handled appropriately?
3. Testing: Are there tests for new functionality?
4. Performance: Any obvious N+1 queries or unnecessary re-renders?
5. Security: Any user input that isn't validated?

For each issue found, show the file, line, and a suggested fix.

Usage: /user:review

Example: Generate a component with tests

File: .claude/commands/component.md

---
description: Create a new React component with tests
allowed-tools: Write, Read
---

Create a new React component: $ARGUMENTS

Requirements:
- TypeScript with proper prop types
- Functional component with hooks
- Place in src/components/$ARGUMENTS/
- Include unit tests in $ARGUMENTS.test.tsx
- Include Storybook story in $ARGUMENTS.stories.tsx
- Follow existing component patterns in src/components/Button/

Usage: /project:component UserAvatar

The $ARGUMENTS placeholder captures everything after the command name. You can also use $1, $2 for positional arguments.
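
Positional arguments are useful when a command takes two distinct inputs. A hypothetical .claude/commands/rename.md (the command name and steps are made up for illustration):

---
description: Rename a symbol across the codebase
allowed-tools: Read, Edit, Bash(git grep:*)
---

Rename $1 to $2 everywhere it appears.

Steps:
1. Run `git grep -n "$1"` to find every usage
2. Update code, tests, and documentation
3. Run the typecheck command from CLAUDE.md before finishing

Usage: /project:rename OldName NewName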

Hooks: Automatic Quality Checks

Hooks run before or after Claude uses specific tools. I use them to catch mistakes before they accumulate.

Configuration goes in .claude/settings.json:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx prettier --write \"$CLAUDE_FILE_PATHS\""
          }
        ]
      },
      {
        "matcher": "Edit",
        "hooks": [
          {
            "type": "command", 
            "command": "npx tsc --noEmit --skipLibCheck 2>&1 | head -20"
          }
        ]
      }
    ]
  }
}

This runs Prettier after every file write and TypeScript checking after every edit. Claude sees the type errors immediately and can fix them in the same turn.
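
Hooks can also run before a tool call. A PreToolUse hook that exits with the blocking status (exit code 2, per the hooks docs) rejects the call and feeds its error output back to Claude. A sketch, assuming a small script of your own (scripts/check-protected.sh) that reads the tool input JSON from stdin and exits 2 when the path is something you never want touched, like a lockfile or a frozen migration:

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/check-protected.sh"
          }
        ]
      }
    ]
  }
}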

Block dangerous commands:

{
  "permissions": {
    "deny": [
      "rm -rf",
      "DROP TABLE",
      "DELETE FROM * WHERE",
      "git push --force"
    ]
  }
}

Claude can't execute these even in auto-accept mode. I added git push --force after an incident I'd rather not describe.

Prompting Patterns That Made a Measurable Difference

I tested dozens of prompting techniques. Most made no difference. A few made a consistent difference.

XML tags for structure

Claude parses XML reliably. For complex requests, I structure them:

<context>
This is a Next.js 15 app using the App Router. 
Authentication is handled by NextAuth.js.
The database is PostgreSQL via Prisma.
</context>

<task>
Add a "forgot password" flow that:
- Sends a reset email with a time-limited token
- Validates the token before showing the reset form
- Enforces password strength requirements
</task>

<constraints>
- Don't add new dependencies
- Follow the existing error handling pattern in src/lib/errors.ts
- Write tests for the happy path and token expiration
</constraints>

Put data before instructions

This ordering matters more than I expected. Content first, instructions second:

<codebase>
[paste relevant code here]
</codebase>

<error>
[paste error message here]
</error>

Given the code and error above, identify the root cause and fix it.

Not:

Fix this error:
[error]

Here's the code:
[code]

I don't know why this works better, but it consistently does.

Explain the "why" behind rules

Claude follows instructions better when it understands the reason:

Instead of:

Never use ellipses in output.

Write:

This output will be read aloud by text-to-speech software. 
Avoid ellipses because the TTS engine won't know how to pronounce them.

Claude applies the underlying principle to cases you didn't anticipate.

Separate prompts by mode

I stopped mixing different types of requests in single prompts:

  • Build mode: Concrete implementation requests. No meta-instructions.
  • Debug mode: Share inputs, errors, and environment first. Ask for diagnosis.
  • Review mode: Ask for critique OR a rewrite. Not both.
  • Learn mode: One concept at a time with examples.

Mixing modes in one prompt degrades output. "Implement this feature and also explain your design decisions" gets worse results than asking separately.
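
A debug-mode prompt, for example, combines the data-first ordering with a single ask (the environment details are placeholders):

<environment>
Next.js 15, Node 20, running locally with npm run dev
</environment>

<error>
[paste the stack trace here]
</error>

<attempts>
[what you already tried]
</attempts>

Given the above, diagnose the most likely root cause. Don't propose a fix yet.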

IDE Integration: CLI vs Cursor vs Extensions

I use both the Claude Code CLI and the VS Code extension (not a big fan of Cursor, and GitHub Copilot got much better while everyone keeps shitting on it). They're good at different things.

Claude Code CLI handles complex, multi-file work. A true (well... most of the time) 200K context window, direct filesystem access, and the full MCP ecosystem. It's claude.ai on steroids with automated agentic features. Seriously - if you think this command line crap is for tech geeks and developers only, just give it a go.

The VS Code extension gives you a graphical interface for Claude Code with inline diffs and accept/reject buttons. The conversation history syncs with the CLI, so you can start in VS Code and continue in the terminal with claude --resume.

Cline (formerly Claude Dev) is worth knowing about. It's an autonomous coding agent that runs in VS Code with checkpoint/restore functionality. Different philosophy than Claude Code - more autonomous, less conversational. Honestly, I stopped using it once Copilot was good enough at smaller tasks.

Project Structure That Helps Claude Help You

How you organize your project affects how well Claude understands it.

Recommended layout:

project/
β”œβ”€β”€ CLAUDE.md              # Project instructions
β”œβ”€β”€ .mcp.json              # MCP server configuration
β”œβ”€β”€ .claude/
β”‚   β”œβ”€β”€ settings.json      # Hooks, permissions
β”‚   └── commands/          # Custom slash commands
β”‚       β”œβ”€β”€ fix-issue.md
β”‚       └── review.md
β”œβ”€β”€ docs/
β”‚   β”œβ”€β”€ ARCHITECTURE.md    # System design (Claude reads this)
β”‚   └── API.md             # API documentation
β”œβ”€β”€ experiments/
β”‚   └── HANDOFF-*.md       # Session handoff documents
└── src/
    └── CLAUDE.md          # Subdirectory-specific instructions

For large codebases, start Claude in specific subdirectories:

cd packages/api && claude

This naturally limits scope. Claude won't try to understand your entire monorepo when you're fixing one API endpoint.

Use .gitignore patterns for files Claude shouldn't read. There's no official .claudeignore yet, but .gitignore patterns are respected.

For sensitive files, configure read restrictions:

{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)",
      "Read(./**/*.secret)",
      "Read(./secrets/**)",
      "Read(./**/*.pem)"
    ]
  }
}

Tools and Resources

Documentation:

  • Official Claude Code docs: https://docs.anthropic.com/en/docs/claude-code
  • MCP protocol spec: https://modelcontextprotocol.io/
  • MCP server directory: https://github.com/modelcontextprotocol/servers

Community resources:

  • r/ClaudeAI subreddit (filter by "Tips" and "Guide" flairs)
  • Claude Code showcase repo: https://github.com/ChrisWiles/claude-code-showcase
  • Community tips collection: https://github.com/ykdojo/claude-code-tips

MCP servers I mentioned:

  • Context7: https://github.com/upstash/context7
  • Playwright (official): https://github.com/microsoft/playwright-mcp
  • Rube (multi-service): https://github.com/rube-mcp/rube

What I'm Still Figuring Out

Multi-agent workflows where Claude instances collaborate. Running code review as a separate Claude instance that checks the first instance's work. Automated handoffs between specialized agents.

The patterns exist in the community, but I haven't found a setup that consistently outperforms careful single-agent work. When I do, I'll write about it.

The gap between casual Claude Code usage and power usage is mostly configuration and discipline. Set up your CLAUDE.md properly, use the explore-plan-code-commit workflow, manage your context actively, and you'll see 10x better results from the same tool everyone else is using.
