Both LangChain and LangGraph are from the same team. They're designed to work together. But they solve different problems, and reaching for the wrong one makes your code either over-engineered or impossible to maintain.
Here's the honest breakdown.
The Core Difference in One Sentence
LangChain is for pipelines. LangGraph is for agents.
A pipeline runs left to right, once. An agent loops, branches, and makes decisions along the way.
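Stripped of any framework, the two shapes look like this (toy stand-ins with integers, no LLM involved):

```python
# Toy stand-ins for the two control-flow shapes. A pipeline runs a fixed
# sequence of steps exactly once; an agent keeps acting until a runtime
# condition says it is done.
def pipeline(x: int) -> int:
    x = x + 1      # step A
    x = x * 2      # step B
    return x - 3   # step C: always exactly three steps, whatever the input

def agent(x: int) -> int:
    while x < 100:  # the "decision": keep going or stop?
        x = x * 2   # act, observe the new state, decide again
    return x

print(pipeline(5))  # 9 (same path every time)
print(agent(5))     # 160 (number of iterations depends on the input)
```

The number of steps a pipeline takes is known before it runs; the number of steps an agent takes is not.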
LangChain: What It's Good At
LangChain's LCEL (LangChain Expression Language) shines for DAG-shaped workflows — directed acyclic graphs, where you move forward through steps without ever going backwards.
Example: Multi-step summarisation pipeline
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
parser = StrOutputParser()

# Step 1: extract key points
extract_chain = (
    ChatPromptTemplate.from_template("List the 5 most important points from: {text}")
    | llm | parser
)

# Step 2: summarise key points
summarise_chain = (
    ChatPromptTemplate.from_template("Summarise these points in 2 sentences: {points}")
    | llm | parser
)

# Step 3: generate a title
title_chain = (
    ChatPromptTemplate.from_template("Write a compelling title for an article with this summary: {summary}")
    | llm | parser
)

# Compose the full pipeline. The first element must be a Runnable, not a
# plain dict: in Python 3.9+, dict | dict is a dict merge, so chaining two
# dict literals would never reach LCEL's composition operator.
full_pipeline = (
    RunnableParallel(points=extract_chain)
    | {"summary": summarise_chain, "points": lambda x: x["points"]}
    | {"title": title_chain, "summary": lambda x: x["summary"]}
)

result = full_pipeline.invoke({"text": "Your long article text here..."})
print(result["title"])
print(result["summary"])
```
This is clean, readable, and exactly what LCEL is designed for. No LangGraph needed.
When LangChain is the right choice
- Fixed number of steps — the workflow always does the same sequence
- No retry logic — if step 3 fails, you don't loop back to step 1
- No branching on LLM output — you don't route to different nodes based on what the LLM said
- RAG pipelines — retrieve → augment → generate, runs once
- Batch processing — process 1000 documents through the same pipeline
LangGraph: What It's Good At
LangGraph is a graph execution engine with state. The key things it unlocks that LCEL can't do:
- Cycles — go back to a previous node
- Conditional edges — inspect state and choose which node to go to next
- Shared state — one state object that every node reads from and writes to
- Checkpointing — save state to disk; resume interrupted runs
The canonical LangGraph pattern: ReAct agent
```python
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage
from typing import TypedDict, Annotated
import operator

@tool
def get_word_count(text: str) -> int:
    """Count the number of words in a text."""
    return len(text.split())

@tool
def get_reading_time(word_count: int) -> str:
    """Estimate reading time given word count (200 wpm average)."""
    minutes = round(word_count / 200, 1)
    return f"{minutes} minutes"

tools = [get_word_count, get_reading_time]
llm = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)

class State(TypedDict):
    messages: Annotated[list, operator.add]

def agent(state: State) -> dict:
    return {"messages": [llm.invoke(state["messages"])]}

def route(state: State) -> str:
    last = state["messages"][-1]
    return "tools" if getattr(last, "tool_calls", None) else END

tool_node = ToolNode(tools)

graph = StateGraph(State)
graph.add_node("agent", agent)
graph.add_node("tools", tool_node)
graph.set_entry_point("agent")
graph.add_conditional_edges("agent", route, {"tools": "tools", END: END})
graph.add_edge("tools", "agent")  # loop back after tool use

app = graph.compile()
result = app.invoke({
    "messages": [HumanMessage(content="How long would it take to read a 1500-word article?")]
})
print(result["messages"][-1].content)
```
The agent loops — calls tools as needed, reasons about results, loops again — until it has a final answer. That cycle is impossible in a pure LCEL pipeline, which is acyclic by construction.
Side-by-Side Comparison
| Scenario | LangChain | LangGraph |
|---|---|---|
| Linear A → B → C pipeline | ✅ LCEL | Overkill |
| Parallel execution | ✅ RunnableParallel | ✅ Both work |
| Conditional routing | ⚠️ RunnableBranch (forward-only) | ✅ Conditional edges |
| Retry loop (try until good) | ✗ Requires workarounds | ✅ Cycles |
| Tool-using agent | ✗ Fragile without cycles | ✅ Native pattern |
| Persistent conversation memory | ✅ RunnableWithMessageHistory | ✅ Checkpointer (better) |
| Multi-agent coordination | ✗ Not supported | ✅ Subgraphs |
| Human-in-the-loop approval | ✗ Not supported | ✅ interrupt_before |
| Stream step-by-step events | ✅ Both | ✅ Both |
| Complexity | Low | Medium |
Where LangChain Breaks Down
Trying to implement retry logic in LCEL
This is what developers attempt when they don't reach for LangGraph:
```python
# ❌ Anti-pattern — don't do this (assumes `llm` from earlier)
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

def validate_output(text: str) -> bool:
    return len(text) > 100 and "summary" in text.lower()

def run_with_retry(chain, inputs, max_retries=3):
    for i in range(max_retries):
        result = chain.invoke(inputs)
        if validate_output(result):
            return result
        print(f"Attempt {i+1} failed, retrying...")
    return result  # return last attempt regardless

prompt = ChatPromptTemplate.from_template("Summarise: {text}")
chain = prompt | llm | StrOutputParser()
result = run_with_retry(chain, {"text": "..."})
```
This works but it's stateless — every retry starts from scratch, with no memory of what went wrong or why. The LLM doesn't know why it's being called again. You can't progressively refine the output.
The LangGraph version carries the failure reason forward:
```python
# ✅ LangGraph version — each retry sees the previous failure
# (assumes `llm` from earlier)
from typing import TypedDict
from langgraph.graph import StateGraph, END

class RetryState(TypedDict, total=False):
    task: str
    draft: str
    feedback: str
    approved: bool
    attempts: int

def write(state):
    prompt = state["task"]
    if state.get("feedback"):
        prompt = f"Previous attempt failed because: {state['feedback']}\n\nTask: {state['task']}"
    result = llm.invoke(prompt)
    return {"draft": result.content, "attempts": state.get("attempts", 0) + 1}

def validate(state):
    if len(state["draft"]) > 100:
        return {"approved": True}
    return {"approved": False, "feedback": "Response too short — need at least 100 characters"}

def route(state):
    if state.get("approved") or state["attempts"] >= 3:
        return END
    return "write"

graph = StateGraph(RetryState)
graph.add_node("write", write)
graph.add_node("validate", validate)
graph.set_entry_point("write")
graph.add_edge("write", "validate")
graph.add_conditional_edges("validate", route, {"write": "write", END: END})
app = graph.compile()
```
Where LangGraph Is Overkill
A 3-step summarisation pipeline does not need a state graph:
```python
# ❌ Unnecessary LangGraph for a simple pipeline
class State(TypedDict):
    text: str
    summary: str
    title: str

def summarise(state): ...
def generate_title(state): ...

graph = StateGraph(State)
graph.add_node("summarise", summarise)
graph.add_node("generate_title", generate_title)
graph.set_entry_point("summarise")
graph.add_edge("summarise", "generate_title")
graph.add_edge("generate_title", END)
```
This is over a dozen lines to express what LCEL does in three. If there's no branching, no looping, and no need for persistent state, use LCEL.
The Decision Framework
```
Does your workflow need any of these?
├── Loops / retry logic
├── Conditional routing based on LLM output
├── Tools with multi-turn reasoning
├── Persistent state across many steps
└── Human-in-the-loop checkpoints

YES → Use LangGraph
NO  → Use LangChain LCEL
```
In practice: if you're building a chatbot that just answers questions, LangChain. If you're building an agent that searches the web, writes code, runs tests, and iterates — LangGraph.
Using Both Together
The most common production setup is LangGraph for the outer control flow, with LangChain components inside each node:
```python
from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from typing import TypedDict

llm = ChatOpenAI(model="gpt-4o", temperature=0)

# LangChain LCEL chain — lives inside a LangGraph node
summarise_chain = (
    ChatPromptTemplate.from_template("Summarise: {text}")
    | llm
    | StrOutputParser()
)

class State(TypedDict):
    text: str
    summary: str
    quality_score: int
    attempts: int

# LangGraph node wraps the LangChain chain
def summarise_node(state: State) -> dict:
    summary = summarise_chain.invoke({"text": state["text"]})
    return {"summary": summary, "attempts": state["attempts"] + 1}

def score_node(state: State) -> dict:
    score_chain = (
        ChatPromptTemplate.from_template(
            "Score this summary 1-10 for clarity: {summary}. Reply with just the number."
        )
        | llm
        | StrOutputParser()
    )
    score = int(score_chain.invoke({"summary": state["summary"]}).strip())
    return {"quality_score": score}

def route(state: State) -> str:
    # Cap retries: with temperature=0 the model is deterministic, so
    # unbounded looping would just repeat the same summary.
    if state["quality_score"] >= 7 or state["attempts"] >= 3:
        return END
    return "summarise"

graph = StateGraph(State)
graph.add_node("summarise", summarise_node)
graph.add_node("score", score_node)
graph.set_entry_point("summarise")
graph.add_edge("summarise", "score")
graph.add_conditional_edges("score", route, {"summarise": "summarise", END: END})
app = graph.compile()

result = app.invoke({"text": "Your article here...", "summary": "", "quality_score": 0, "attempts": 0})
print(result["summary"])
```
LangGraph manages the loop. LangChain manages the individual operations. Each does what it's best at.
Summary
Start with LangChain LCEL. It's simpler, easier to test, and covers most use cases. Add LangGraph when you hit a wall: you need to loop, you need to branch, your agent uses tools and needs to reason across multiple turns.
The upgrade path is natural — most LangGraph nodes are just LangChain chains wrapped in a function. You're not replacing anything, just adding a layer of control flow.
For the practical details: see LangChain deep dive and LangGraph agent patterns.
