You've learned the individual pieces: CLAUDE.md for context, hooks for automation, slash commands for reusable prompts, skills for structured workflows, MCP servers for external integrations, parallel tasks for speed, and settings for security. Now let's see how they work together in the actual workflows developers use day-to-day.
These aren't simplified examples. They're the patterns that show up repeatedly in teams that have made Claude Code part of how they ship software.
Workflow 1: Building a new feature from spec to PR
This is the flagship workflow. You have a feature spec or ticket — you want to go from that description to a merged PR with minimal context-switching.
What's in place before you start:
- CLAUDE.md at the repo root with your stack, conventions, test commands, and "before you write code, read these files" instructions
- A /new-feature slash command that templates the approach
- A PostToolUse hook that runs prettier after every file write
- The GitHub MCP server configured globally
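For reference, the hook lives in `.claude/settings.json`. A minimal sketch of what that PostToolUse entry might look like — the matcher style and the stdin JSON field are based on the hooks documentation, but treat the exact schema as an assumption and check the docs for your version:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs npx prettier --write"
          }
        ]
      }
    ]
  }
}
```

The hook command receives the tool call as JSON on stdin, so the sketch pulls the written file's path out with `jq` before handing it to prettier.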
The session:
/new-feature user-authentication-oauth
The slash command expands into something like: "We're adding OAuth authentication. Before writing any code, read src/auth/, src/middleware/auth.ts, and src/config/index.ts to understand the existing auth patterns. Then implement OAuth following those patterns."
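Custom slash commands are just Markdown files in `.claude/commands/`. A hypothetical sketch of what `/new-feature` could contain — the body is illustrative, but `$ARGUMENTS` is the real placeholder that receives everything typed after the command name:

```markdown
<!-- .claude/commands/new-feature.md -->
We're adding a feature: $ARGUMENTS

Before writing any code:
1. Read the relevant existing modules and note the patterns they use.
2. List the files you plan to create or change, and wait for confirmation.

Then implement the feature following those patterns, with tests.
```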
Claude reads the existing auth code first. It finds that the project uses a custom AuthProvider class, stores sessions in Redis, and has a specific error response format. It doesn't have to infer these patterns from CLAUDE.md alone — it read the actual code.
Claude writes the OAuth provider implementation following the existing patterns, then the middleware that uses it, then updates the config schema. After each file write, the PostToolUse hook runs prettier automatically — no manual formatting step.
Now write tests for the OAuth flow. Cover:
- Successful authorization code exchange
- Invalid/expired token handling
- Missing required scopes
- Provider API being unavailable (circuit breaker behavior)
Claude writes the tests, runs them:
npm run test src/auth/oauth.test.ts
All pass.
/commit
The /commit slash command triggers Claude to run git diff --staged, write a conventional commit message summarizing the changes, and commit. Output:
feat(auth): add OAuth 2.0 provider with Redis session storage
Implements authorization code flow for OAuth authentication.
Adds OAuthProvider class extending AuthProvider base, middleware
integration, config schema update, and comprehensive test coverage.
Open a PR for this feature targeting main. Title it "feat: OAuth authentication"
and include a summary of the changes and testing approach.
Claude uses the GitHub MCP server to create the PR. You get a link to the open PR without leaving your terminal.
What made this work: CLAUDE.md loaded context automatically. The slash command guided the approach. Hooks kept formatting clean throughout. The GitHub MCP server handled the PR creation. Each piece did its job invisibly, and the total time from prompt to open PR was one focused session.
Workflow 2: Debugging a production bug
You've got a Sentry alert, a stack trace, and a user complaining. You need the root cause and a fix fast.
We have a production bug. Here's the stack trace:
TypeError: Cannot read properties of undefined (reading 'userId')
at OrderService.createOrder (src/services/OrderService.ts:142:28)
at async POST /api/orders (src/app/api/orders/route.ts:31:18)
This is happening when users try to place orders. It started appearing
about 2 hours ago. What changed recently and what's the bug?
Claude reads src/services/OrderService.ts around line 142, reads the route handler, then runs:
git log --oneline src/services/OrderService.ts -10
It finds a commit from 3 hours ago that refactored how the session user is retrieved. The old code used req.session.user.userId. The new code uses req.user?.userId — but the middleware that sets req.user wasn't updated to match, so req.user is undefined when the old session format is still in Redis.
Found it. The middleware update in commit a3f2c9b changed how the user
object is attached to the request, but OrderService.createOrder still
uses the old path. There are two options:
1. Update OrderService to use req.user?.userId (matches new pattern)
2. Update the middleware to maintain backward compatibility during rollout
Option 1 is cleaner. Want me to implement it?
You confirm. Claude makes the targeted change, runs the affected tests, writes a regression test that would have caught this, then commits with a message referencing the issue number.
What made this work: Claude had the codebase context to trace the stack, git history to find the relevant commit, and enough authority to implement a targeted fix. The session stayed focused on one problem, one fix, one commit.
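The git-history step generalizes beyond `git log` on one file. Git's pickaxe search (`-S`) lists the commits where a given string appeared or disappeared — exactly the "when did `req.session.user` stop being the code path?" question. A self-contained sketch in a scratch repo (file names and commit messages are stand-ins):

```shell
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name dev

# The old code path, as it existed before the refactor
printf 'const id = req.session.user.userId;\n' > OrderService.ts
git add . && git commit -qm 'feat: order service'

# The refactor that introduced the bug
printf 'const id = req.user?.userId;\n' > OrderService.ts
git commit -qam 'refactor: change session user retrieval'

# -S lists every commit where the match count for the string changed:
# here, the commit that introduced it and the commit that removed it
git log -S 'req.session.user' --oneline
```

Two commits come back: the one that added the string and the one that deleted it — the second is your suspect.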
Workflow 3: Refactoring a large codebase in parallel
You've introduced a new BaseRepository class with shared connection management, query logging, and retry logic. You need to migrate 12 existing repository files to extend it.
This is exactly the scenario where parallel tasks pay off.
We need to migrate all repository files to extend BaseRepository.
Here's the BaseRepository implementation:
[paste BaseRepository source]
The files to migrate are in src/repositories/. First, list all the
files there so we know the scope.
Claude lists the 12 files. You confirm the plan:
Run the migration in parallel. For each of the 12 files:
1. Read the current implementation
2. Identify what needs to change to extend BaseRepository correctly
3. Make the changes
4. Run TypeScript type-checking on just that file: npx tsc --noEmit --isolatedModules src/repositories/[filename].ts
5. Return the refactored code and a list of what you changed
Run all 12 simultaneously and report results.
Claude spawns 12 background tasks. Each subagent gets the BaseRepository code, its specific file, and the instructions. They run concurrently.
Results come back: 10 pass type-checking cleanly. 2 have issues — one uses a custom connection option that BaseRepository doesn't support yet, one has a method with a conflicting signature.
For the 2 files with issues, show me the specific problems and propose fixes.
Claude synthesizes the 2 edge cases, proposes targeted fixes for each, implements them. Full migration done. Total wall-clock time: roughly as long as the slowest single file, not 12 × that.
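The shape of what the subagents do is classic fan-out/fan-in, and the wall-clock claim follows directly from it. A self-contained shell sketch, with a stub `check` function standing in for `npx tsc --noEmit` and hypothetical file names:

```shell
results=$(mktemp -d)

# Stand-in for the real per-file type-check; the sleeps simulate uneven runtimes
check() {
  sleep "$2"
  echo "OK: $1" > "$results/$1.txt"
}

# Fan out: launch every check in the background
check user.repo 1 &
check order.repo 2 &
check product.repo 1 &

# Fan in: block until all of them finish, then collect results
wait
cat "$results"/*.txt
```

Total runtime is the slowest check (about 2 seconds here), not the sum — the same reason the 12-file migration takes roughly as long as its slowest file.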
Daily workflow habits from production use
These are the habits that separate teams who get real value from Claude Code from teams who use it occasionally and find it inconsistent.
Keep CLAUDE.md lean and honest
The instinct when you first set up CLAUDE.md is to document everything. Resist it. After a few weeks, go through it ruthlessly and cut anything that:
- Claude would infer correctly without being told (don't document "use TypeScript for all new files" in a TypeScript repo)
- Is obvious from the codebase structure (don't document the directory layout that Claude can read directly)
- Is aspirational rather than actual (don't document conventions you want to follow if the codebase doesn't follow them yet — Claude will be confused when reality contradicts the docs)
A good CLAUDE.md is under 200 lines and every line earns its place. The best signal: if removing a line would cause Claude to make a mistake in this repo, keep it. If removing it wouldn't change anything, cut it.
End-of-session CLAUDE.md updates
At the end of any session where you discovered something new about the codebase or established a new pattern, ask:
Based on what we worked on today, what should I add to CLAUDE.md to help
future Claude sessions in this codebase? Keep it concise — only things that
aren't obvious from the code itself.
Claude surfaces the non-obvious things: the quirky reason the auth middleware runs before the rate limiter, the specific way the test fixtures need to be structured, the external service that has an undocumented 10-second delay in staging.
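The additions that come out of this prompt tend to be terse, load-bearing notes. A hypothetical excerpt of what might land in CLAUDE.md after such a session (the specifics below are illustrative, echoing the examples above):

```markdown
## Gotchas
- Auth middleware MUST run before the rate limiter — the limiter keys on
  the authenticated user ID, not the IP.
- Test fixtures in src/__tests__/fixtures/ load in filename order; seed
  data lives in 00-seed.ts for that reason.
- The external service sandbox in staging has an undocumented ~10-second
  delay; tests against it need generous timeouts.
```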
This is one of the highest-leverage uses of the tool: Claude builds its own documentation.
Session continuity with --resume
Claude Code saves sessions. If you left a long task mid-stream or want to continue work from yesterday:
# Pick from recent sessions (interactive picker)
claude --resume
# Resume a specific session by ID
claude --resume session-id-here
# Or jump straight back into the most recent session
claude --continue
The resumed session has the full conversation history from before. You can pick up exactly where you left off — Claude remembers what files it read, what decisions it made, what's left to do.
Parallel terminal setup with tmux
For long-running tasks, a two-pane tmux setup works well:
- Left pane: Claude Code session
- Right pane: test runner in watch mode (npm run test:watch)
As Claude writes code, you see the test results in real time on the right. When tests fail, you can immediately give Claude the failure output. This tightens the feedback loop significantly compared to running tests only after Claude finishes a chunk of work.
Background tasks for switching context
When Claude starts a task that'll take more than a few minutes (large refactor, writing comprehensive tests for a module, generating documentation), use background mode and switch to something else:
Run a background task to write comprehensive unit tests for all 8 functions
in src/utils/dateHelpers.ts. Use the same test patterns as the existing tests
in src/__tests__/. I'll check back when it's done.
You go work on something else. Come back in 10 minutes. Claude has the tests ready for review.
Trusting the permission prompts
When Claude Code asks for confirmation before running a command, that's the system working as intended. The instinct is to auto-approve everything to speed up the workflow. Don't.
Each confirmation is Claude recognizing that it's about to do something with non-trivial consequences: modifying a file, running a command that changes state, calling an external API. A 2-second "yes" is cheap. Debugging an accidental rm -rf dist or an unintended database mutation is not.
If you find yourself approving the same command repeatedly, add it to permissions.allow in your settings. That's the right way to streamline: be intentional about what you're pre-approving, and let everything else surface for review.
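A sketch of what that looks like in `.claude/settings.json` — the rule shape (tool name plus an argument pattern) follows the permissions documentation, but verify the exact matcher syntax for your version, and treat the specific rules below as examples:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Bash(npx prettier:*)",
      "Bash(git diff:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)"
    ]
  }
}
```

Allow rules silence the prompt for commands you've decided are safe; everything not matched still surfaces for review.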
What to build next
Claude Code is the interactive layer — you at a terminal, working with Claude in real time. If you want to go deeper on building systems where agents orchestrate agents programmatically (pipelines, scheduled jobs, product features), the AI Agents track covers the full architecture: what agents are, how they're built, how tool use works at the SDK level, multi-agent coordination patterns, and how to evaluate whether your agents are actually working.
For copy-paste prompts to use as system prompts for agents you build, the prompt library has a collection organized by use case — including agent system prompts optimized for specific tasks like code review, data extraction, and customer support.
The skills from this track carry directly into programmatic agent development. CLAUDE.md is just a system prompt. Hooks are just post-processing logic. MCP servers are the same tool-use model that underlies all Claude agent systems. Once you've internalized how Claude Code works, the context engineering lesson will make sense at a deeper level — because you've seen firsthand how context shapes everything Claude does.