When rolling your own agent, tool calling comes with annoying plumbing: packaging up arguments from the LLM, unpacking return values, and so on. People like frameworks in part because this plumbing is dealt with for them. But if you solved this yourself, it’d be fairly straightforward to roll a generic agentic loop that interacted with your arbitrary Python functions “directly,” without the tool-management headaches.
It turns out, with a few dozen lines of Python introspection and metaprogramming, you can make life quite a bit easier for yourself.
Let me explain.
An LLM, by itself, would struggle to answer the prompt:
Which location in the US currently has the highest temperature?
You’ll get a response:
I don’t have live access to real‑time weather data, so I can’t tell you the current hottest city right now.
Instead, the LLM asks YOU the question. (In Soviet LLM, LLM asks YOU question!) It asks you in the form of a tool call.
So, maybe you have, at your beck and call, a function like the following:
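The post’s running example is a weather lookup named `get_current_weather`. Here’s a hypothetical version; the body is invented (a real one would hit a weather API), and only the signature and docstring matter for what follows:

```python
def get_current_weather(city: str, state: str) -> dict:
    """Get the current weather for a US city."""
    # A real implementation would call a weather API here;
    # hard-coded data keeps this sketch self-contained.
    return {"city": city, "state": state, "temp_f": 97.0}
```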
The agent needs to know:
(a) That it has a tool it can call to get the weather
(b) What arguments that tool takes
(c) What return value it gets back
So you add a tools parameter to your call (as per the spec here):
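A sketch of what that `tools` entry might look like. The field names follow the Responses API’s flat function-tool shape; treat the exact details as an assumption and defer to the spec linked above:

```python
# One function tool, described as a JSON schema over its arguments.
tools = [{
    "type": "function",
    "name": "get_current_weather",
    "description": "Get the current weather for a US city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "state": {"type": "string"},
        },
        "required": ["city", "state"],
    },
}]

# Then pass it along with the prompt, roughly:
# resp = client.responses.create(model=..., input=prompt, tools=tools)
```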
Within resp.output list, you’ll then have some instances that look like this:
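Roughly like this, shown as a plain dict for readability (the SDK actually returns typed objects; the `call_id` is invented):

```python
import json

# A tool-call item in resp.output, approximately:
tool_call = {
    "type": "function_call",
    "call_id": "call_abc123",   # made-up id; you echo it back with the result
    "name": "get_current_weather",
    "arguments": '{"city": "Phoenix", "state": "AZ"}',  # arrives as a JSON string
}

args = json.loads(tool_call["arguments"])
```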
In other words, a request that you
(a) Call the tool (with the params)
(b) Package up the response
(c) Call OpenAI again with the results
And that’s the agentic loop. Call, fulfill tools, continue, until there are no more tool requests to fulfill.
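That loop can be sketched in a few lines. Here `create` stands in for `client.responses.create`, `fns` maps tool names to plain Python functions, and the output-item shapes are my assumptions about the Responses API:

```python
import json

def run_agent(create, prompt, tools, fns):
    """Minimal agentic loop sketch: call, fulfill tools, continue."""
    items = [{"role": "user", "content": prompt}]
    while True:
        resp = create(input=items, tools=tools)
        calls = [o for o in resp.output if getattr(o, "type", None) == "function_call"]
        if not calls:
            return resp.output_text       # no more tool requests: done
        items += resp.output              # keep the model's tool requests in context
        for call in calls:
            # Unpack the JSON arguments, call the function, send back the result.
            result = fns[call.name](**json.loads(call.arguments))
            items.append({
                "type": "function_call_output",
                "call_id": call.call_id,
                "output": json.dumps(result),
            })
```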
Convenience function for tool calling
People reach for frameworks - or even an MCP server! - because the framework makes it easy to package up a function and treat it as a tool without any plumbing. But you can just write that code yourself, then have a generic agentic loop that delegates tool calls to any Python function you pass it.
And that’s what I’ve done in the code snippet I want to demonstrate here:
Assuming your Python function has decent type annotations, you could just do this:
The code for make_tool_adapter iterates over the function’s arguments, grabs their type annotations, and builds a pydantic model that wraps them. From the same information, it also builds a tool spec.
Finally, it gives you a function, call_from_tool, that wraps get_current_weather: it unpacks arguments from ArgsModel, calls the underlying function, and packages up the return value.
But it also does a few other convenient things:
- The tool’s description is the docstring
- The tool’s name is the function name
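To make the idea concrete, here is a rough stdlib-only sketch of such an adapter. The real pydantize.py builds a pydantic ArgsModel; this version maps annotations straight to JSON-schema types with no validation, so treat it as an illustration of the technique, not the actual code:

```python
import inspect
import json

# Minimal mapping from Python annotations to JSON-schema types.
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def make_tool_adapter(fn):
    """Sketch: derive an args schema, a tool spec, and a wrapper from fn."""
    sig = inspect.signature(fn)
    # Stand-in for the pydantic ArgsModel: a JSON-schema properties dict.
    properties = {
        name: {"type": _JSON_TYPES.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    tool_spec = {
        "type": "function",
        "name": fn.__name__,                      # tool name = function name
        "description": inspect.getdoc(fn) or "",  # description = docstring
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }
    def call_from_tool(arguments: str) -> str:
        # Unpack the JSON arguments, call fn, JSON-encode the return value.
        return json.dumps(fn(**json.loads(arguments)))
    return properties, tool_spec, call_from_tool
```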
What does this look like in practice?
Assuming a helper that builds a dictionary mapping function names → tools (the 3-tuple of args model, tool spec, and tool wrapper function):
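A hypothetical version of that helper, assuming each adapter is the 3-tuple the post describes and that the tool spec carries the name:

```python
def build_registry(adapters):
    """Key adapter 3-tuples (args model, tool spec, wrapper) by tool name.

    `adapters` is assumed to be what make_tool_adapter returns for each
    function; the spec's "name" field is assumed to hold the tool name.
    """
    registry = {spec["name"]: (args, spec, call) for (args, spec, call) in adapters}
    tools = [spec for (_args, spec, _call) in registry.values()]  # for the API call
    return registry, tools
```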
Then when processing any outputs from an LLM, we can simply loop as follows:
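A sketch of that dispatch step; the output-item attribute names (`type`, `name`, `call_id`, `arguments`) are my assumptions about the Responses API:

```python
def fulfill_tool_calls(output_items, registry):
    """Route each function_call item to its wrapper via the registry,
    collecting the outputs to send back to the model."""
    results = []
    for item in output_items:
        if getattr(item, "type", None) != "function_call":
            continue  # ignore messages and other non-tool items
        _args_model, _tool_spec, call_from_tool = registry[item.name]
        results.append({
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": call_from_tool(item.arguments),  # wrapper returns a JSON string
        })
    return results
```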
Now you have a general-purpose agentic loop that will work for experimenting with any use case. I won’t pretend it scales the way a production system would need to, but it’s been an invaluable tool for experimentation.
If you want to use the make_tool_adapter helper, just grab pydantize.py from my course code and go to town.
I hope you join me at Cheat at Search with LLMs to learn how to apply LLMs to search applications. Check out this post for a sneak preview.