If you searched "how to build an AI agent" recently, you've encountered all three: Google ADK (Agent Development Kit), LangGraph, and n8n. They're all described as "agent frameworks." They're also quite different in their underlying mental models, target users, and deployment contexts.
This post is for someone overwhelmed by options who needs to understand the real differences and pick one. We'll cover what each framework actually is, who it's for, and when to choose it.
The quick pick guide
If you want the short answer:
| Profile | Start with |
|---|---|
| Non-developer, want visual builder | n8n |
| Python developer, want production-grade control | LangGraph |
| Google Cloud shop, want GCP-native agent | Google ADK |
| Want the simplest thing that works | n8n |
| Need complex state management | LangGraph |
| Enterprise deployment on GCP | Google ADK |
The rest of this post explains the reasoning behind these recommendations.
Google ADK: cloud-native, modular orchestration
Google released the Agent Development Kit (ADK) in 2025 as their answer to the proliferating agent framework landscape. It's a Python framework for building agents with first-class Google Cloud integration.
The core concept: ADK is built around composable, modular agent types that you assemble into larger workflows.
- LlmAgent: a single LLM-powered agent with tools, the basic building block
- SequentialAgent: runs a list of sub-agents in order, passing context between them
- LoopAgent: iterates over a sub-agent until a stopping condition is met (useful for refinement loops)
- ParallelAgent: runs multiple sub-agents simultaneously and merges results
- Agent-as-a-Tool: wraps any agent as a tool that another agent can call
The agent-as-a-tool pattern is particularly powerful. An orchestrator agent can delegate to specialist sub-agents the same way it calls API tools, enabling hierarchical agent architectures without complex custom code.
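The pattern is easier to see stripped of framework details. This plain-Python sketch (deliberately not ADK's actual API — the agent names and routing are invented for illustration) shows an orchestrator treating specialist sub-agents as ordinary callable tools:

```python
# Conceptual sketch of the agent-as-a-tool pattern (plain Python, not ADK code).
# Each "agent" is just a callable; the orchestrator selects a specialist the
# same way it would select an API tool.

def billing_agent(query: str) -> str:
    return f"[billing] resolved: {query}"

def tech_support_agent(query: str) -> str:
    return f"[tech] resolved: {query}"

# The orchestrator's tool registry can mix plain functions and wrapped agents.
tools = {
    "billing": billing_agent,
    "tech_support": tech_support_agent,
}

def orchestrator(query: str) -> str:
    # A real orchestrator would let the LLM choose the tool; this sketch
    # routes on a keyword to stay self-contained.
    name = "billing" if "invoice" in query.lower() else "tech_support"
    return tools[name](query)

print(orchestrator("Where is my invoice?"))  # delegates to billing_agent
```

In ADK, the wrapping is handled for you: the orchestrator's LLM sees each sub-agent as just another tool it can call, which is what makes hierarchies cheap to build.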
```python
from google.adk.agents import LlmAgent, SequentialAgent
from google.adk.tools import google_search, google_docs

# Each specialist is a single LlmAgent with its own model, instruction, and tools.
research_agent = LlmAgent(
    name="researcher",
    model="gemini-2.0-flash",
    instruction="Research the given topic thoroughly using web search",
    tools=[google_search],
)

summary_agent = LlmAgent(
    name="summarizer",
    model="gemini-2.0-flash",
    instruction="Synthesize research into a clear, structured summary",
    tools=[google_docs],
)

review_agent = LlmAgent(
    name="reviewer",
    model="gemini-2.0-pro",
    instruction="Review the summary for accuracy, completeness, and clarity",
    tools=[],
)

# SequentialAgent runs the sub-agents in order, passing context between them.
pipeline = SequentialAgent(
    name="research_pipeline",
    sub_agents=[research_agent, summary_agent, review_agent],
)
```
The GCP integration is the differentiator: ADK agents deploy natively to Cloud Run with a single command. You get Vertex AI model access, Cloud Storage as a built-in tool, Cloud Trace for observability, and IAM for access control — all without custom integration work. For teams already running infrastructure on GCP, this removes significant friction.
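As a sketch of what that single command looks like (the project, region, and directory here are placeholders — check `adk deploy --help` for the exact flags in your ADK version):

```shell
# Deploy an ADK agent directory to Cloud Run (illustrative values).
adk deploy cloud_run \
  --project=my-gcp-project \
  --region=us-central1 \
  ./research_pipeline
```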
Best use cases: production agent systems on GCP, enterprise workflows needing security/compliance that GCP provides, teams with existing Vertex AI investment, sequential and parallel multi-agent orchestration.
Tradeoffs: GCP lock-in is real. If you ever want to deploy elsewhere, ADK's native integration becomes a migration burden rather than an advantage. The framework is newer (2025), which means less community content, fewer examples, and a smaller ecosystem of third-party integrations compared to LangGraph or n8n. Python-only.
Install: `pip install google-adk`
LangGraph: stateful graphs for production
LangGraph is covered in detail in LangGraph: building stateful agents, so this section focuses on what distinguishes it from ADK and n8n rather than repeating the fundamentals.
The core concept: agents as explicit state machines. You define a typed state schema, nodes (Python functions that read and modify state), and edges (transitions between nodes, including conditional branching). The graph compiles and executes, with state persisting and evolving across every step.
The explicitness is both the learning curve and the value. You always know exactly what state the agent is in. Debugging means inspecting a Python dict. Conditional branching (retry if quality is low, escalate to human if confidence is below threshold) is first-class. Checkpointing lets you pause a long-running agent and resume it later.
```python
from typing import TypedDict

from langgraph.graph import StateGraph, END

class WorkflowState(TypedDict):
    topic: str
    research: str
    draft: str
    quality_score: float
    revision_count: int

# run_search, generate_draft, and score_draft are placeholders for your own
# search, generation, and evaluation calls.
def research(state: WorkflowState) -> WorkflowState:
    return {**state, "research": run_search(state["topic"])}

def write(state: WorkflowState) -> WorkflowState:
    return {**state, "draft": generate_draft(state["research"])}

def evaluate(state: WorkflowState) -> WorkflowState:
    score = score_draft(state["draft"])
    return {**state, "quality_score": score, "revision_count": state["revision_count"] + 1}

def route(state: WorkflowState) -> str:
    # Finish when quality is high enough or the revision budget is spent;
    # otherwise loop back for another revision pass.
    if state["quality_score"] >= 0.85 or state["revision_count"] >= 3:
        return END
    return "write"

graph = StateGraph(WorkflowState)
graph.add_node("research", research)
graph.add_node("write", write)
graph.add_node("evaluate", evaluate)
graph.add_edge("research", "write")
graph.add_edge("write", "evaluate")
graph.add_conditional_edges("evaluate", route)
graph.set_entry_point("research")
app = graph.compile()
```
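Checkpointing deserves a concrete picture. LangGraph provides it through its checkpointer API; the stdlib-only sketch below (none of it is LangGraph code) just illustrates the underlying idea — persist state after every node so a long-running workflow can stop and later resume where it left off:

```python
import json
import os
import tempfile

# Toy two-step workflow; each step returns an updated state dict.
def step_research(state):
    return {**state, "research": f"notes on {state['topic']}"}

def step_write(state):
    return {**state, "draft": f"draft based on {state['research']}"}

STEPS = [("research", step_research), ("write", step_write)]

def run(state, checkpoint_path):
    # Resume from a checkpoint if one exists.
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            saved = json.load(f)
        state, done = saved["state"], set(saved["done"])
    else:
        done = set()
    for name, fn in STEPS:
        if name in done:
            continue  # already completed in a previous run
        state = fn(state)
        done.add(name)
        # Persist state after each step so a crash loses at most one step.
        with open(checkpoint_path, "w") as f:
            json.dump({"state": state, "done": sorted(done)}, f)
    return state

path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
final = run({"topic": "agent frameworks"}, path)
```

Calling `run` a second time with the same `checkpoint_path` skips the completed steps — the same property that lets a checkpointed LangGraph agent pause for human approval and resume days later.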
What LangGraph does better than ADK: framework-agnostic deployment (any cloud, any infrastructure), LangSmith integration for production observability and debugging, and more mature support for complex branching logic and human-in-the-loop patterns. The community is larger, the documentation is more extensive, and the ecosystem of examples is richer.
Best use cases: complex stateful workflows with branching, human-in-the-loop approval flows, long-running agents that need checkpointing, production systems where observability is non-negotiable.
Tradeoffs: steeper learning curve than ADK or n8n. No native cloud deployment shortcut — you're responsible for infrastructure. More boilerplate per agent.
n8n: visual workflows for everyone
n8n is the outlier in this comparison. Where ADK and LangGraph are code-first Python frameworks for developers, n8n is a visual workflow automation platform with first-class AI agent support.
The core concept: nodes connected by wires in a visual canvas. An AI Agent node wraps an LLM with tools and memory. You connect it to trigger nodes (webhooks, schedules, Gmail), action nodes (HTTP requests, database writes, Slack messages), and other AI nodes in a canvas without writing code.
```
[Schedule Trigger] → [AI Agent: Research Agent] → [HTTP Request: Save to Notion]
                              ↓
              [Tools: Brave Search, Calculator, Code Executor]
```
That's a complete automated research pipeline, built in 15 minutes in the n8n canvas.
The breadth of integrations is n8n's unique advantage: 400+ pre-built integrations with third-party services. Slack, Gmail, Google Calendar, Notion, Airtable, Salesforce, HubSpot, GitHub, Stripe — connecting your agent to any of these is drag-and-drop. In a code-first framework, each integration requires finding and wiring up a library. In n8n, it's a click.
For non-developers, n8n is the right starting point, full stop. The visual representation also makes it easier to communicate workflows to non-technical stakeholders — you can show them the canvas and they understand what the agent does.
Best use cases: non-developer teams, rapid prototyping, integration-heavy workflows, agents that primarily connect existing services, customer support automation, content operations with many tool calls.
Tradeoffs: less precise control over agent behavior compared to code. Complex conditional logic gets unwieldy visually — deeply nested branches are hard to follow in a canvas. Debugging is harder than stepping through Python. For sophisticated state management or custom reasoning patterns, you'll hit n8n's limits.
n8n can be self-hosted (Docker) or used via their cloud offering.
Side-by-side comparison
| Dimension | Google ADK | LangGraph | n8n |
|---|---|---|---|
| Required skill | Python intermediate | Python advanced | Low/no-code |
| State management | Good | Excellent | Basic |
| Cloud deployment | GCP native | Any (manual) | Cloud or self-hosted |
| Observability | GCP Monitoring + Trace | LangSmith | Built-in dashboard |
| Third-party integrations | Manual | Manual | 400+ pre-built |
| Best agent type | Sequential, parallel | Complex stateful | Integration-heavy |
| Community maturity | Growing (2025) | Mature | Very mature |
| Lock-in | GCP | None | n8n |
| Time to first agent | ~1 hour | ~2-3 hours | ~15 minutes |
Choosing between them
The honest answer is that these tools are not purely competitive — they serve different user profiles and use cases.
Choose n8n if: your team isn't primarily Python developers, you're in the "does this even work?" validation phase, or your workflow is primarily connecting existing services together. n8n's 400+ integrations are hard to match with a code-first framework.
Choose LangGraph if: you need complex stateful logic, branching, or human-in-the-loop patterns; you're deploying to production and need serious observability; or you're not tied to GCP and want maximum flexibility. LangGraph has the most mature production track record.
Choose Google ADK if: you're already running on GCP and want native cloud integration; you need enterprise security/compliance that GCP provides; or your use case maps cleanly to sequential or parallel agent pipelines.
These tools work together
An underappreciated point: these frameworks aren't mutually exclusive.
n8n can make HTTP requests to LangGraph-powered API endpoints. Your n8n automation can trigger a sophisticated stateful agent built in LangGraph, get the result, and route it to Slack or Notion.
Google ADK agents can be exposed as REST APIs and called from n8n as HTTP tool calls. You can build the complex orchestration in ADK and trigger it from an n8n workflow that handles all the integrations.
If you're starting from scratch: begin with n8n to validate that the automation is valuable. Once you've validated the concept and hit n8n's limits, wrap the complex logic in a LangGraph or ADK service and call it from n8n.
The simple version
Pick the simplest tool that meets your actual requirements. Don't start with LangGraph because it's "the right way to do agents" if n8n would have you running in 15 minutes. Don't add GCP lock-in if you're not on GCP.
Validate first. Optimize later. The best framework is the one that helps you ship a working agent today, not the one that's theoretically most powerful.
For a broader view of the agent framework landscape including smolagents and CrewAI, see smolagents, CrewAI, or LangGraph — which agent framework should you use?.