Show HN: Use any LLM in Go with stable, minimal API
A powerful and flexible Go library for interacting with Large Language Models (LLMs), with built-in support for function calling and streaming responses. Currently supports Anthropic, Google, and OpenAI, plus any OpenAI-compatible provider.
Supports Anthropic, Google (Gemini + Vertex), and OpenAI (Chat Completions + Responses)
Also supports using custom endpoints compatible with any of the APIs above
Streaming responses (including thinking) for real-time interaction
Built-in tool calling with Go generics to generate JSON schemas automatically
Structured output (JSON schema based) for model responses
OpenAI only: Custom grammars like Lark / Regex for structured output
Here’s a simple example that creates an LLM instance and has a conversation with it:
package main

import (
    "fmt"
    "os"

    "github.com/flitsinc/go-llms/content"
    "github.com/flitsinc/go-llms/llms"
    "github.com/flitsinc/go-llms/openai"
)

func main() {
    // Create a new LLM instance with OpenAI's o4-mini model.
    llm := llms.New(
        openai.NewResponsesAPI(os.Getenv("OPENAI_API_KEY"), "o4-mini"),
    )

    // Optional: Set a system prompt.
    llm.SystemPrompt = func() content.Content {
        return content.FromText("You are a helpful assistant.")
    }

    // Start a chat conversation.
    for update := range llm.Chat("What's the capital of France?") {
        switch update := update.(type) {
        case llms.TextUpdate:
            fmt.Print(update.Text)
        }
    }

    // Check for errors after the chat completes.
    if err := llm.Err(); err != nil {
        panic(err)
    }
}
Generating images with Gemini 2.5 Flash Image (aka Nano Banana)
You must specify modalities for this model to work (and you cannot use WithThinking):
package main

import (
    "encoding/base64"
    "fmt"
    "os"
    "strings"
    "time"

    "github.com/flitsinc/go-llms/google"
    "github.com/flitsinc/go-llms/llms"
)

func main() {
    provider := google.New("gemini-2.5-flash-image-preview").
        WithGeminiAPI(os.Getenv("GEMINI_API_KEY")).
        WithModalities("TEXT", "IMAGE")
    llm := llms.New(provider)

    for update := range llm.Chat("Draw a sketch of a happy banana") {
        switch update := update.(type) {
        case llms.TextUpdate:
            fmt.Print(update.Text)
        case llms.ImageUpdate:
            path := fmt.Sprintf("image-%d.png", time.Now().Unix())
            // We assume we always get a base64-encoded PNG image -- don't do this in prod!
            b64, _ := strings.CutPrefix(update.URL, "data:image/png;base64,")
            data, _ := base64.StdEncoding.DecodeString(b64)
            os.WriteFile(path, data, 0644)
            fmt.Printf("\nImage written to: %s\n", path)
        }
    }
}
Advanced Usage with Tools
Here’s an example showing how to use tools (function calling):
package main

import (
    "fmt"
    "os"

    "github.com/flitsinc/go-llms/anthropic"
    "github.com/flitsinc/go-llms/llms"
    "github.com/flitsinc/go-llms/tools"
)

// Define tool parameters.
type CommandParams struct {
    Command string `json:"command" description:"The shell command to run"`
}

// Create a shell command tool.
var RunCommand = tools.Func(
    "Run Command",
    "Run a shell command and return the output",
    "run_command",
    func(r tools.Runner, p CommandParams) tools.Result {
        return tools.SuccessWithLabel(p.Command, map[string]any{
            "output": fmt.Sprintf("Simulated output for: %s", p.Command),
        })
    },
)

func main() {
    // Create a new LLM instance using Anthropic's Claude with tools.
    llm := llms.New(
        anthropic.New(os.Getenv("ANTHROPIC_API_KEY"), "claude-sonnet-4-20250514"),
        RunCommand,
    )

    // Chat with tool usage.
    for update := range llm.Chat("List files in the current directory") {
        switch update := update.(type) {
        case llms.TextUpdate:
            fmt.Print(update.Text)
        case llms.ToolStartUpdate:
            fmt.Printf("(Using tool: %s)\n", update.Tool.Label()) // Shows "Run Command"
        case llms.ToolDoneUpdate:
            // Shows the label specific to this execution, e.g., "ls -l"
            fmt.Printf("(Tool result: %s)\n", update.Result.Label())
        }
    }

    // Check for errors after the chat completes.
    if err := llm.Err(); err != nil {
        panic(err)
    }
}
Sometimes, you might have a set of predefined tool schemas (perhaps from an external source or another system) that you want the LLM to be able to use. AddExternalTools allows you to provide these schemas along with a single handler function.
This is useful when the logic for handling multiple tools is centralized, or when you need to dynamically add tools based on external definitions.
The handler function receives the tools.Runner and the raw JSON parameters for the called tool. You can use llms.GetToolCall(r.Context()) within the handler to retrieve the specific ToolCall instance, which includes the function name (tc.Name) and unique call ID (tc.ID), allowing you to dispatch to the correct logic.
// Example external tool schemas (could come from a config file, API, etc.)
var externalToolSchemas = []tools.FunctionSchema{
    {
        Name: "get_stock_price",
        /* ... */
    },
    {
        Name: "get_weather",
        /* ... */
    },
}

func main() {
    llm := llms.New(anthropic.New(os.Getenv("ANTHROPIC_API_KEY"), "claude-sonnet-4-20250514"))

    // Add external tools and their handler.
    llm.AddExternalTools(externalToolSchemas, handleExternalTool)

    // Now the LLM can use "get_stock_price" and "get_weather".
    for update := range llm.Chat("What's the weather in London?") {
        switch update := update.(type) {
        case llms.TextUpdate:
            fmt.Print(update.Text)
        case llms.ToolStartUpdate:
            fmt.Printf("(Using tool: %s)\n", update.Tool.Label())
        case llms.ToolDoneUpdate:
            fmt.Printf("(Tool result: %s - %s)\n", update.Tool.Label(), update.Result.Label())
        }
    }

    if err := llm.Err(); err != nil {
        panic(err)
    }
}

// Single handler that forwards external tool calls.
func handleExternalTool(r tools.Runner, params json.RawMessage) tools.Result {
    // Get the specific tool call details from the context.
    toolCall, ok := llms.GetToolCall(r.Context())
    if !ok {
        return tools.Errorf("Could not get tool call details from context")
    }
    // Typically, you would now:
    // 1. Construct a request to your external API endpoint (or send it to a browser client).
    targetURL := fmt.Sprintf("https://api.example.com/tool?name=%s", toolCall.Name)
    req, err := http.NewRequestWithContext(r.Context(), "POST", targetURL, bytes.NewReader(params))
    // ... set headers, handle error ...
    // 2. Execute the request.
    resp, err := httpClient.Do(req)
    // ... handle error ...
    // 3. Process the response.
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        // Use Errorf or ErrorWithLabel for tool errors.
        bodyBytes, _ := io.ReadAll(resp.Body)
        return tools.Errorf("External tool API failed (%s): %s", resp.Status, string(bodyBytes))
    }
    bodyBytes, err := io.ReadAll(resp.Body)
    // ... handle read error ...
    // 4. Return the result based on the response body.
    return tools.Success(json.RawMessage(bodyBytes))
}
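If the tool logic lives in your own process rather than behind an HTTP API, the same context lookup supports dispatching on the tool name instead, as described above. Here's a minimal sketch of that pattern; handleStockPrice and handleWeather are hypothetical helpers, not part of the library:

// Alternative handler: dispatch locally on the tool name.
func dispatchExternalTool(r tools.Runner, params json.RawMessage) tools.Result {
    toolCall, ok := llms.GetToolCall(r.Context())
    if !ok {
        return tools.Errorf("Could not get tool call details from context")
    }
    // Route each external tool to its own implementation by name.
    switch toolCall.Name {
    case "get_stock_price":
        return handleStockPrice(r, params) // hypothetical helper
    case "get_weather":
        return handleWeather(r, params) // hypothetical helper
    default:
        return tools.Errorf("Unknown external tool: %s", toolCall.Name)
    }
}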
Grammar-Based Tools (OpenAI Only)
OpenAI supports custom tools that can enforce specific input formats using grammars. This allows you to constrain the model's output to follow precise patterns, which is useful for structured data extraction, validation, or parsing tasks.
Lark Grammar: For constraining input with a Lark grammar definition
Regex Grammar: For pattern matching using regular expressions
Text Grammar: For free-form text input
Here are examples of grammar-based tools:
package main

import (
    "fmt"
    "os"
    "regexp"
    "strconv"
    "strings"

    "github.com/flitsinc/go-llms/llms"
    "github.com/flitsinc/go-llms/openai"
    "github.com/flitsinc/go-llms/tools"
)

// Example 1: Lark grammar for math expressions
var mathGrammar = tools.Lark(`start: expr
expr: term (SP ADD SP term)* -> add | term
term: factor (SP MUL SP factor)* -> mul | factor
factor: INT
SP: " "
ADD: "+"
MUL: "*"
%import common.INT`)

var MathTool = tools.FuncGrammar(
    mathGrammar,
    "Math Calculator",
    "Evaluate simple math expressions with addition and multiplication",
    "calculate_math",
    func(r tools.Runner, expression string) tools.Result {
        // Simple evaluation for demo (in practice, use a proper parser).
        expression = strings.ReplaceAll(expression, " ", "")
        // Handle simple cases for demonstration.
        if strings.Contains(expression, "+") {
            parts := strings.Split(expression, "+")
            if len(parts) == 2 {
                a, err1 := strconv.Atoi(parts[0])
                b, err2 := strconv.Atoi(parts[1])
                if err1 == nil && err2 == nil {
                    result := a + b
                    return tools.SuccessWithLabel(expression, map[string]any{
                        "result":     result,
                        "expression": expression,
                    })
                }
            }
        }
        // Fallback for single numbers.
        if num, err := strconv.Atoi(expression); err == nil {
            return tools.SuccessWithLabel(expression, map[string]any{
                "result":     num,
                "expression": expression,
            })
        }
        return tools.ErrorWithLabel("Invalid expression", fmt.Errorf("could not parse: %s", expression))
    },
)

// Example 2: Regex grammar for email validation
var emailGrammar = tools.Regex(`^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$`)

var EmailValidator = tools.FuncGrammar(
    emailGrammar,
    "Email Validator",
    "Validate and extract information from email addresses",
    "validate_email",
    func(r tools.Runner, email string) tools.Result {
        emailRegex := regexp.MustCompile(`^([a-zA-Z0-9._%+-]+)@([a-zA-Z0-9.-]+\.[a-zA-Z]{2,})$`)
        matches := emailRegex.FindStringSubmatch(email)
        if len(matches) != 3 {
            return tools.ErrorWithLabel("Invalid email", fmt.Errorf("email format is invalid: %s", email))
        }
        username := matches[1]
        domain := matches[2]
        return tools.SuccessWithLabel(email, map[string]any{
            "email":    email,
            "username": username,
            "domain":   domain,
            "is_valid": true,
        })
    },
)

func main() {
    // Create LLM with grammar-based tools (OpenAI only).
    llm := llms.New(
        openai.New(os.Getenv("OPENAI_API_KEY"), "gpt-5"),
        MathTool,
        EmailValidator,
    )

    // The model can now use structured tools with grammar constraints.
    for update := range llm.Chat("Calculate 15 + 27 and validate the email user@example.com") {
        switch update := update.(type) {
        case llms.TextUpdate:
            fmt.Print(update.Text)
        case llms.ToolStartUpdate:
            fmt.Printf("(Using %s: ", update.Tool.Label())
        case llms.ToolDoneUpdate:
            fmt.Printf("%s)\n", update.Result.Label())
        }
    }

    if err := llm.Err(); err != nil {
        panic(err)
    }
}
Note: Grammar-based tools are currently only supported by OpenAI's API; other providers do not support them yet.
The library currently supports:
Anthropic
Google (Gemini API and Vertex AI)
OpenAI and all compatible providers (you can customize the endpoint)
OpenAI’s newer Responses API
Each provider can be initialized with their respective configuration:
// Anthropic
llm := llms.New(anthropic.New(os.Getenv("ANTHROPIC_API_KEY"), "claude-sonnet-4-20250514"))

// Google Gemini
llm := llms.New(google.New("gemini-2.5-flash").WithGeminiAPI(os.Getenv("GEMINI_API_KEY")))

// Google Vertex AI
ts, err := googleoauth.DefaultTokenSource(ctx, "https://www.googleapis.com/auth/cloud-platform")
llm := llms.New(google.New("gemini-2.5-flash").WithVertexAI(ts, projectID, "global"))

// OpenAI (Responses API)
llm := llms.New(openai.NewResponsesAPI(os.Getenv("OPENAI_API_KEY"), "o4-mini"))

// OpenAI (Chat Completions API)
llm := llms.New(openai.New(os.Getenv("OPENAI_API_KEY"), "gpt-4.1"))

// OpenAI-compatible endpoint (e.g., xAI)
// You can use the OpenAI provider with compatible APIs by configuring the endpoint.
llm := llms.New(
    openai.New(os.Getenv("XAI_API_KEY"), "grok-3-latest").
        WithEndpoint("https://api.x.ai/v1/chat/completions", "xAI"),
)
You can easily implement new providers by implementing the Provider interface:
type Provider interface {
    Company() string
    Model() string
    SetDebugger(d Debugger)
    // Generate takes a system prompt, message history, and optional toolbox,
    // returning a stream for the LLM's response. The provided context should
    // be respected for cancellation.
    Generate(
        ctx context.Context,
        systemPrompt content.Content,
        messages []Message,
        toolbox *tools.Toolbox,
        jsonOutputSchema *tools.ValueSchema,
    ) ProviderStream
}

type ProviderStream interface {
    Err() error
    Iter() func(yield func(StreamStatus) bool)
    Message() Message
    Text() string
    Image() (string, string)
    Thought() content.Thought
    ToolCall() ToolCall
    Usage() Usage
}
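To make the shape concrete, here's a minimal skeleton of a custom provider. It's a sketch, not a working integration: only the two interfaces above are real, the myProvider/myStream names are hypothetical, the stream yields nothing, and the code is written as if it lived alongside the llms package types (in your own package you'd qualify them as llms.Message, llms.ProviderStream, and so on):

// Hypothetical custom provider skeleton (sketch only).
type myProvider struct {
    apiKey string
    model  string
}

func (p *myProvider) Company() string        { return "MyCompany" }
func (p *myProvider) Model() string          { return p.model }
func (p *myProvider) SetDebugger(d Debugger) {}

func (p *myProvider) Generate(
    ctx context.Context,
    systemPrompt content.Content,
    messages []Message,
    toolbox *tools.Toolbox,
    jsonOutputSchema *tools.ValueSchema,
) ProviderStream {
    // Start the upstream request here, honoring ctx for cancellation, and
    // return a stream that yields StreamStatus values as data arrives.
    return &myStream{}
}

// myStream satisfies ProviderStream with zero values; a real implementation
// would surface the accumulated message, text, tool calls, and usage.
type myStream struct{ err error }

func (s *myStream) Err() error { return s.err }
func (s *myStream) Iter() func(yield func(StreamStatus) bool) {
    return func(yield func(StreamStatus) bool) {} // yields no updates
}
func (s *myStream) Message() Message         { var m Message; return m }
func (s *myStream) Text() string             { return "" }
func (s *myStream) Image() (string, string)  { return "", "" }
func (s *myStream) Thought() content.Thought { var t content.Thought; return t }
func (s *myStream) ToolCall() ToolCall       { var tc ToolCall; return tc }
func (s *myStream) Usage() Usage             { var u Usage; return u }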
Track the usage of your LLM interactions:
usage := llm.TotalUsage
fmt.Printf("Cached Input Tokens: %d, Tokens Written to Input Cache: %d, Input Tokens: %d, Output Tokens: %d\n",
    usage.CachedInputTokens, usage.CacheCreationInputTokens, usage.InputTokens, usage.OutputTokens)
As patterns emerge across providers with regard to cache tokens, speculative tokens, etc., those will be added too.
Use this library when you want to make providers easily swappable, with a simplified API that focuses on helping you implement the most common types of agentic flows.
Since each LLM provider has its own quirks, especially around reasoning, streaming, and tool calling, we’ve done our best to smooth those over, but expect some differences still.
additionalProperties is forbidden or required depending on the provider
Google doesn’t allow the additionalProperties field for JSON schemas (probably a bug), while OpenAI’s new Responses API requires it for tool calls! It’s also commonly required for models with strict JSON outputs since it helps with speculative decoding.
Because of this, we strip the additionalProperties field before sending schemas to Google, so it shouldn't be a problem for you; just keep it in mind.
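For reference, additionalProperties is the standard JSON Schema field that controls whether keys outside of properties are allowed. A tool schema generated from the CommandParams struct above would conceptually look like this (the exact output shape is the library's; the field in question is standard JSON Schema):

{
  "type": "object",
  "properties": {
    "command": {
      "type": "string",
      "description": "The shell command to run"
    }
  },
  "required": ["command"],
  "additionalProperties": false
}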
Anthropic doesn’t stream partial property values by default
Anthropic's streaming API only sends complete string values when streaming tool calls, so if you have a tool call like edit_file that produces very long fields, nothing will update until that field has completely finished generating.
To fix this, enable fine-grained tool streaming (currently in beta) by calling .WithBeta("fine-grained-tool-streaming-2025-05-14") on the Anthropic provider instance.
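For example (assuming WithBeta chains off the constructor like the other provider options shown above):

llm := llms.New(
    anthropic.New(os.Getenv("ANTHROPIC_API_KEY"), "claude-sonnet-4-20250514").
        WithBeta("fine-grained-tool-streaming-2025-05-14"),
)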