Function calling is one of those features that sounds technical but dramatically changes what you can build with AI. Once you understand it, a whole category of "useful but limited chatbot" problems becomes solvable.
Here's the clearest way I've found to explain it, with real examples.
The Core Problem It Solves
Without function calling, LLMs have a fundamental limitation: they only generate text. Ask "what's the weather in London?" and the model either makes something up, tells you to go check a weather website, or confidently states weather from its training data (which is months or years old).
Function calling solves this by letting the model say "I need real data to answer this. Please call this specific function and give me the result."
Your code then actually calls the function, returns the real data, and the model incorporates it into a grounded response.
How It Works: Step by Step
1. You define available tools
Tell the model what functions exist and what they do:
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a location. Use this when asked about weather conditions.",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and country, e.g. 'London, UK' or 'Tokyo, Japan'"
                },
                "units": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature units"
                }
            },
            "required": ["location"]
        }
    }
]
2. User asks a question
"What's the weather like in Tokyo right now?"
3. Model decides to use a tool
Instead of generating a text response, the model returns a tool call:
{
    "type": "tool_use",
    "name": "get_weather",
    "input": {
        "location": "Tokyo, Japan",
        "units": "celsius"
    }
}
4. Your code executes the function
# You call the actual weather API
weather_data = weather_api.get(location="Tokyo, Japan", units="celsius")
# Returns: {"temp": 8, "condition": "Partly cloudy", "humidity": 65}
5. You return the result to the model
messages.append({
    "role": "user",
    "content": [{
        "type": "tool_result",
        "tool_use_id": tool_call_id,
        "content": json.dumps(weather_data)
    }]
})
6. Model generates the final response
It's currently 8°C and partly cloudy in Tokyo. The humidity is at 65%.
A light jacket should be enough if you're heading out.
A Complete Working Example
Here's the full pattern with Claude:
import anthropic
import json

client = anthropic.Anthropic()

# Define tools
tools = [
    {
        "name": "get_current_price",
        "description": "Get the current price of a product by its ID",
        "input_schema": {
            "type": "object",
            "properties": {
                "product_id": {"type": "string", "description": "The product ID"},
            },
            "required": ["product_id"]
        }
    },
    {
        "name": "check_inventory",
        "description": "Check if a product is in stock",
        "input_schema": {
            "type": "object",
            "properties": {
                "product_id": {"type": "string"},
                "quantity": {"type": "integer", "description": "Desired quantity"}
            },
            "required": ["product_id", "quantity"]
        }
    }
]
def handle_tool_call(tool_name: str, tool_input: dict) -> str:
    """Execute the actual tool logic and return the result as a JSON string."""
    if tool_name == "get_current_price":
        # In reality, this would query your database
        prices = {"PROD-001": 29.99, "PROD-002": 49.99}
        price = prices.get(tool_input["product_id"], "unknown")
        return json.dumps({"price": price, "currency": "USD"})
    elif tool_name == "check_inventory":
        # In reality, check your inventory system
        in_stock = tool_input["product_id"] in ["PROD-001"]
        return json.dumps({"in_stock": in_stock, "available_quantity": 50 if in_stock else 0})
    # Always return something, even for an unrecognized tool name
    return json.dumps({"error": f"Unknown tool: {tool_name}"})
def ask_with_tools(question: str) -> str:
    """Run a conversation with tool use enabled."""
    messages = [{"role": "user", "content": question}]
    while True:
        response = client.messages.create(
            model="claude-opus-4-6",
            max_tokens=1024,
            tools=tools,
            messages=messages
        )
        # If the model is done (no tool calls), return the text response
        if response.stop_reason == "end_turn":
            return response.content[0].text
        # Process tool calls
        messages.append({"role": "assistant", "content": response.content})
        tool_results = []
        for block in response.content:
            if block.type == "tool_use":
                result = handle_tool_call(block.name, block.input)
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": result
                })
        messages.append({"role": "user", "content": tool_results})
# Usage
answer = ask_with_tools("What does PROD-001 cost and is it in stock? I need 10 units.")
print(answer)
# Output: "PROD-001 is currently priced at $29.99 USD. Good news — it is in stock
# and 50 units are available, so your order of 10 units can be fulfilled."
Writing Good Tool Descriptions
The description field matters enormously. The model uses it to decide when to call the tool. Vague descriptions lead to wrong calls or missed calls.
# Bad description — too vague
{
    "name": "search",
    "description": "Search for information"
}

# Good description — tells the model exactly when to use it
{
    "name": "search_products",
    "description": "Search the product catalog by name, category, or features. Use this when the user asks about available products, product specifications, or wants to find items matching specific criteria. Do NOT use for pricing or inventory — use get_current_price and check_inventory for those."
}
Key elements of a good tool description:
- When to use it — what kind of question or task triggers this tool
- What it returns — what kind of data the model will get back
- When NOT to use it — helps the model distinguish between similar tools
Common Patterns
Multi-tool workflows: The model can call multiple tools in a single response, or chain calls across turns. A question like "Which of our products is cheapest and in stock?" might trigger a price lookup, then an inventory check, then a synthesis.
Parallel tool calls: Modern APIs support returning multiple tool calls at once, which your code can execute in parallel. Claude can request get_weather("London") and get_weather("Paris") simultaneously.
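A minimal sketch of running several tool calls concurrently with a thread pool. The `ToolUse` dataclass and `get_weather` stub below are stand-ins for illustration (in real code you would iterate over the SDK's `tool_use` content blocks and dispatch to your actual tool handlers):

```python
import json
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class ToolUse:
    """Stand-in for a tool_use content block from the model."""
    id: str
    name: str
    input: dict

def get_weather(location: str) -> dict:
    # Stand-in for a real weather API call
    return {"location": location, "temp": 10}

def execute_tool(block: ToolUse) -> dict:
    """Run one tool call and wrap it as a tool_result block."""
    data = get_weather(**block.input)
    return {"type": "tool_result", "tool_use_id": block.id, "content": json.dumps(data)}

# Two tool calls requested in a single model response
calls = [
    ToolUse(id="t1", name="get_weather", input={"location": "London"}),
    ToolUse(id="t2", name="get_weather", input={"location": "Paris"}),
]

# Execute both concurrently; pool.map preserves input order,
# so results line up with their tool_use_ids
with ThreadPoolExecutor() as pool:
    tool_results = list(pool.map(execute_tool, calls))
```

Threads are usually enough here because tool calls are typically I/O-bound (API requests, database queries), so the work overlaps even under the GIL.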
Tool failure handling: Always return something useful in the tool result, even on error:
try:
    result = actual_api_call()
    return json.dumps({"success": True, "data": result})
except Exception as e:
    return json.dumps({"success": False, "error": str(e)})
Let the model know when a tool fails so it can handle the failure gracefully — explain the problem or try another approach — rather than being left with nothing.
When to Use Function Calling
Use it when:
- The model needs real, current data (weather, prices, live inventory)
- The model needs to take actions (send email, create record, run query)
- You want the model to perform calculations it might get wrong (delegate to a calculator tool)
- You're building agents that loop and act over multiple steps
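The calculator case can be sketched as a tool whose handler does the arithmetic deterministically. The schema and the `safe_eval` helper below are illustrative, not part of any SDK — the point is that the model hands off the expression and your code computes the exact answer:

```python
import ast
import json
import operator

# Hypothetical tool definition for delegating arithmetic
calculator_tool = {
    "name": "calculate",
    "description": "Evaluate an arithmetic expression exactly. Use this for any math instead of computing it yourself.",
    "input_schema": {
        "type": "object",
        "properties": {
            "expression": {"type": "string", "description": "e.g. '29.99 * 10 * 1.08'"}
        },
        "required": ["expression"]
    }
}

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    """Evaluate +, -, *, / over numbers by walking the AST (no eval())."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def handle_calculate(tool_input: dict) -> str:
    """Tool handler: returns a JSON result, or a JSON error on bad input."""
    try:
        return json.dumps({"result": safe_eval(tool_input["expression"])})
    except Exception as e:
        return json.dumps({"error": str(e)})
```

The AST walk is deliberately restrictive: anything beyond basic arithmetic (names, calls, subscripts) raises, which keeps model-supplied strings from executing arbitrary code.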
Don't need it when:
- All the information the model needs is in the context already
- You're doing pure text generation or transformation
- Latency is critical and the extra round-trip is too costly
Function calling is what turns a chatbot into an agent — the ability to act in the world rather than just generate text.



