| Learn how to implement OpenAI-style tool calling — from raw REST to elegant Spring AI annotations. Includes real code, diagrams, and end-to-end flow.
LLMs are great at chatting — but they’re blind to your data.
They don’t know your inventory, your APIs, or how your systems work. But what if they could call your functions, run live SQL, or trigger internal services?
That’s what tool calling enables. This blog shows you:
- 🔧 How LLM tool calling works (in raw REST, step-by-step)
- 📊 How it handles multi-step reasoning
- ✅ And how Spring AI simplifies it all
Tool calling (previously “function calling”) allows an LLM to say:
“I can’t answer directly — but I want to call this function using these arguments.”
You run the function, send back the result, and the model incorporates it into the final response.
Let’s say the user asks:
“Do you have AirPods Pro in stock?”
This is a POST to your LLM provider's chat/completions endpoint:

```json
{
  "model": "your-model-id",
  "messages": [
    { "role": "user", "content": "Do you have AirPods Pro in stock?" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "findProductByName",
        "description": "Finds a product by name or description",
        "parameters": {
          "type": "object",
          "properties": {
            "name": {
              "type": "string",
              "description": "Product name or keyword"
            }
          },
          "required": ["name"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
```

The model responds not with an answer, but with a request to call your function:

```json
{
  "choices": [
    {
      "message": {
        "tool_calls": [
          {
            "id": "abc123",
            "function": {
              "name": "findProductByName",
              "arguments": "{\"name\":\"AirPods Pro\"}"
            },
            "type": "function"
          }
        ]
      }
    }
  ]
}
```
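Since this stage is plain REST, you can build the request with nothing but the JDK's `java.net.http` API. A minimal sketch — the endpoint URL is OpenAI's; substitute your provider's, and note that `buildRequest` and its parameters are illustrative names, not part of any SDK:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RawToolCall {
    // Builds the POST shown above; the endpoint URL and header names follow
    // the OpenAI-style chat/completions convention.
    public static HttpRequest buildRequest(String apiKey, String jsonBody) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
    }
}
```

Sending it with `HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())` returns the JSON response shown above.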
✅ The model wants you to run your findProductByName function.
Example Java logic:

```java
List<Product> result = productService.findByName("AirPods Pro");
```

You serialize it to JSON:

```json
{
  "name": "AirPods Pro",
  "price": 249,
  "stock": 5
}
```
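Before you can run the function, you have to route the `tool_calls` entry to the right Java method and extract its arguments yourself. A self-contained sketch of that dispatch step — the regex extraction stands in for a real JSON library like Jackson, and `findByName` here is a hard-coded stand-in for your product service:

```java
import java.util.Map;
import java.util.function.Function;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ToolDispatcher {
    // Registry mapping tool names to handlers; arguments arrive as a raw JSON string.
    static final Map<String, Function<String, String>> TOOLS = Map.of(
        "findProductByName", args -> findByName(extract(args, "name"))
    );

    // Stand-in for productService.findByName(...), returning the serialized result.
    static String findByName(String name) {
        return "{\"name\":\"" + name + "\",\"price\":249,\"stock\":5}";
    }

    // Naive field extraction; real code would parse the JSON with a library like Jackson.
    static String extract(String json, String field) {
        Matcher m = Pattern.compile("\"" + field + "\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        return m.find() ? m.group(1) : "";
    }

    // Dispatch one tool call by name; the return value becomes the "tool" message content.
    public static String dispatch(String toolName, String argumentsJson) {
        return TOOLS.get(toolName).apply(argumentsJson);
    }
}
```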
You then send a second request, appending both the model's tool call and your tool result to the message history:

```json
{
  "model": "your-model-id",
  "messages": [
    { "role": "user", "content": "Do you have AirPods Pro in stock?" },
    {
      "role": "assistant",
      "tool_calls": [
        {
          "id": "abc123",
          "function": {
            "name": "findProductByName",
            "arguments": "{\"name\":\"AirPods Pro\"}"
          }
        }
      ]
    },
    {
      "role": "tool",
      "tool_call_id": "abc123",
      "content": "{\"name\":\"AirPods Pro\",\"price\":249,\"stock\":5}"
    }
  ],
  "tool_choice": "auto",
  "tools": [ /* same tool definitions */ ]
}
```
This time the model has what it needs and answers in plain language:

```json
{
  "choices": [
    {
      "message": {
        "content": "Yes, we have AirPods Pro in stock at $249, with 5 units available."
      }
    }
  ]
}
```
🎉 That’s a complete tool calling loop.
In the simplest case, the LLM selects a single tool and calls it with arguments. When the model identifies multiple independent needs, it returns several tool calls in parallel within one response.
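A parallel response reuses the same `tool_calls` array, just with more entries. The shape below follows the OpenAI-style schema; the ids, arguments, and the second tool (`getShippingEstimate`) are illustrative, not from the example above:

```json
{
  "choices": [
    {
      "message": {
        "tool_calls": [
          {
            "id": "call_1",
            "type": "function",
            "function": { "name": "findProductByName", "arguments": "{\"name\":\"AirPods Pro\"}" }
          },
          {
            "id": "call_2",
            "type": "function",
            "function": { "name": "getShippingEstimate", "arguments": "{\"zip\":\"94103\"}" }
          }
        ]
      }
    }
  ]
}
```

You answer each call with its own `role: "tool"` message, matched by `tool_call_id`.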
For multi-step tasks, the model reasons step by step: list the tables, fetch the schema, then run the SQL, feeding each tool result into the next call.
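That multi-step flow is really a loop: call the model, execute whatever tools it asks for, append the results, and repeat until it returns a plain answer. A sketch with a stubbed model and stubbed tools — every name here (`Message`, `callModel`, `runTool`) is hypothetical scaffolding, not a real API:

```java
import java.util.ArrayList;
import java.util.List;

public class ToolLoop {
    // Minimal message record standing in for the provider's chat schema.
    record Message(String role, String content) {}

    // Stubbed "model": asks for two tools in sequence, then answers.
    static Message callModel(List<Message> history) {
        long toolResults = history.stream().filter(m -> m.role().equals("tool")).count();
        if (toolResults == 0) return new Message("tool_call", "listTables");
        if (toolResults == 1) return new Message("tool_call", "runSql");
        return new Message("assistant", "Final answer based on the SQL result.");
    }

    // Stubbed tool execution keyed by tool name.
    static String runTool(String name) {
        return name.equals("listTables") ? "[products]" : "[{\"stock\":5}]";
    }

    // The orchestration loop: execute tool calls until the model returns plain content.
    public static String chat(String question) {
        List<Message> history = new ArrayList<>(List.of(new Message("user", question)));
        while (true) {
            Message reply = callModel(history);
            if (!reply.role().equals("tool_call")) return reply.content();
            history.add(reply);
            history.add(new Message("tool", runTool(reply.content())));
        }
    }
}
```

This loop is exactly what you maintain by hand in the raw REST approach, and what Spring AI runs for you below.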
Doing all of this by hand over raw REST means you must:
- Write JSON schemas for every tool
- Track the tool_call_id for each call
- Parse and bind every argument
- Serialize responses back
- Handle multi-tool orchestration
- Maintain conversation history
- Inject system prompts for routing
- Handle errors and retries cleanly
It’s flexible — but also heavy and brittle.
Spring AI speaks the same REST protocol — but you don’t write any of it manually.
```java
@Tool(description = "Find a product by name or description")
public String findProductByName(
        @ToolParam(description = "product name", required = true) String name
) throws JsonProcessingException {  // writeValueAsString throws a checked exception
    List<Product> products = productService.findByName(name);
    return new ObjectMapper().writeValueAsString(products);
}
```
In your controller:
```java
@PostMapping("/chat")
public ChatBotResponse chatWithInventory(@RequestBody ChatBotRequest req) {
    Prompt prompt = new Prompt(List.of(new UserMessage(req.question())));
    String answer = ChatClient.builder(llmModel)
            .defaultTools(productTools)
            .build()
            .prompt(prompt)
            .call()
            .content();
    return new ChatBotResponse(req.question(), answer);
}
```
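The controller's request and response types can be plain records. The shapes below are assumptions inferred from the snippet above (`req.question()` and the two-argument `ChatBotResponse` constructor), not code from the book:

```java
// Assumed DTO shapes; field names follow the controller snippet above.
public class ChatDtos {
    public record ChatBotRequest(String question) {}
    public record ChatBotResponse(String question, String answer) {}
}
```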
Under the hood, Spring AI handles:
- ✅ Tool schema generation
- ✅ Argument binding
- ✅ tool_call_id mapping
- ✅ Message state management
- ✅ Parallel tool orchestration
- ✅ Sequential tool routing
- ✅ Spring Boot DI, validation, and observability
- ✅ Compatible with OpenAI, Mistral, Gemini, and others
You keep writing business logic. Spring AI wires up everything else.
Need your tools to work beyond just chat — like inside other agents or frontend clients?
Spring AI makes it effortless. Just add the MCP server starter:
```xml
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-mcp-server-spring-boot-starter</artifactId>
</dependency>
```
✅ Every @Tool method you define becomes an MCP-compliant endpoint — with zero extra code. Just add a small config (like type: stdio or type: sse) in your MCP server setup.
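That config typically lives in `application.properties`. The property names below are an assumption — verify them against the Spring AI MCP server starter documentation for your version:

```properties
# Assumed property keys; check the Spring AI MCP server starter docs for your version.
spring.ai.mcp.server.name=inventory-tools
spring.ai.mcp.server.version=1.0.0
spring.ai.mcp.server.stdio=true
```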
No YAML. No codegen. No new annotations.
This gives you instant interoperability with any client or platform that speaks the Model Context Protocol (MCP).
🔁 In most frameworks (including the popular Python ones), MCP tools and LLM tools are defined separately.
✅ In Spring AI, your @Tool method is both: no duplication, no extra wiring. Spring AI bridges chat-time tool calling and MCP interoperability natively.
This blog is based on Chapter 5 — Tool Calling in:
📘 Spring AI for Your Organization — GCP Vertex Edition
What you’ll find:
- 🔍 Full REST-first tool calling implementations
- 🧱 Raw request/response handling
- 🧠 SQL chaining with system prompt routing
- ✅ Spring AI abstraction with clean @Tool logic
- 🧪 Three full examples with source code