AI-SDK-cpp: Modern C++ AI SDK


The AI SDK CPP is a modern C++ toolkit designed to help you build AI-powered applications with popular model providers like OpenAI and Anthropic. It provides a unified, easy-to-use API that abstracts away the complexity of different provider implementations.

C++ developers have long lacked a first-class, convenient way to interact with modern AI services like OpenAI, Anthropic, and others. AI SDK CPP bridges this gap by providing:

  • Unified API: Work with multiple AI providers through a single, consistent interface
  • Modern C++: Built with C++20 features for clean, expressive code
  • Minimal Dependencies: Few external dependencies for easy integration

You will need a C++20 compatible compiler and CMake 3.16+ installed on your development machine.
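Given those prerequisites, a minimal consuming CMake project might look like the sketch below. This is an assumption about the build setup: the directory name and the `ai-sdk-cpp` target name are illustrative, so check the project's own CMakeLists for the exported target.

```cmake
# Sketch only: "ai-sdk-cpp" as a directory and target name is an
# assumption, not confirmed by the project's documentation.
cmake_minimum_required(VERSION 3.16)
project(my_ai_app LANGUAGES CXX)

set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# e.g. vendored as a subdirectory; FetchContent would work similarly
add_subdirectory(ai-sdk-cpp)

add_executable(my_ai_app main.cpp)
target_link_libraries(my_ai_app PRIVATE ai-sdk-cpp)
```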

The AI SDK CPP Core module provides a unified API to interact with model providers like OpenAI and Anthropic.

Generating text with OpenAI:

```cpp
#include <ai/openai.h>
#include <ai/generate.h>

#include <iostream>

int main() {
    // Ensure the OPENAI_API_KEY environment variable is set
    auto client = ai::openai::create_client();

    auto result = client.generate_text({
        .model = ai::openai::models::kGpt4o,  // can also be a string like "gpt-4o"
        .system = "You are a friendly assistant!",
        .prompt = "Why is the sky blue?"
    });

    if (result) {
        std::cout << result->text << std::endl;
    }
    return 0;
}
```
Generating text with Anthropic:

```cpp
#include <ai/anthropic.h>
#include <ai/generate.h>

#include <iostream>

int main() {
    // Ensure the ANTHROPIC_API_KEY environment variable is set
    auto client = ai::anthropic::create_client();

    auto result = client.generate_text({
        .model = ai::anthropic::models::kClaude35Sonnet,
        .system = "You are a helpful assistant.",
        .prompt = "Explain quantum computing in simple terms."
    });

    if (result) {
        std::cout << result->text << std::endl;
    }
    return 0;
}
```
Streaming responses in real time:

```cpp
#include <ai/openai.h>
#include <ai/stream.h>

#include <iostream>

int main() {
    auto client = ai::openai::create_client();

    auto stream = client.stream_text({
        .model = ai::openai::models::kGpt4o,  // can also be a string like "gpt-4o"
        .system = "You are a helpful assistant.",
        .prompt = "Write a short story about a robot."
    });

    for (const auto& chunk : stream) {
        if (chunk.text) {
            std::cout << chunk.text.value() << std::flush;
        }
    }
    return 0;
}
```
Multi-turn conversations with message history:

```cpp
#include <ai/openai.h>
#include <ai/generate.h>

#include <iostream>

int main() {
    auto client = ai::openai::create_client();

    ai::Messages messages = {
        {"system", "You are a helpful math tutor."},
        {"user", "What is 2 + 2?"},
        {"assistant", "2 + 2 equals 4."},
        {"user", "Now what is 4 + 4?"}
    };

    auto result = client.generate_text({
        .model = ai::openai::models::kGpt4o,  // can also be a string like "gpt-4o"
        .messages = messages
    });

    if (result) {
        std::cout << result->text << std::endl;
    }
    return 0;
}
```

The AI SDK CPP supports function calling, allowing models to interact with external systems and APIs.

```cpp
#include <ai/openai.h>
#include <ai/generate.h>
#include <ai/tools.h>

#include <iostream>

// Define a tool function
ai::JsonValue get_weather(const ai::JsonValue& args,
                          const ai::ToolExecutionContext& context) {
    std::string location = args["location"].get<std::string>();
    // Your weather API logic here
    return ai::JsonValue{
        {"location", location},
        {"temperature", 72},
        {"condition", "Sunny"}
    };
}

int main() {
    auto client = ai::openai::create_client();

    // Create tools
    ai::ToolSet tools = {
        {"weather", ai::create_simple_tool(
            "weather",
            "Get current weather for a location",
            {{"location", "string"}},
            get_weather)}
    };

    auto result = client.generate_text({
        .model = ai::openai::models::kGpt4o,
        .prompt = "What's the weather like in San Francisco?",
        .tools = tools,
        .max_steps = 3  // Enable multi-step tool calling
    });

    if (result) {
        std::cout << result->text << std::endl;

        // Inspect tool calls and results
        for (const auto& call : result->tool_calls) {
            std::cout << "Tool: " << call.tool_name
                      << ", Args: " << call.arguments.dump() << std::endl;
        }
    }
    return 0;
}
```

For long-running operations, you can define asynchronous tools:

```cpp
#include <ai/openai.h>
#include <ai/generate.h>
#include <ai/tools.h>

#include <chrono>
#include <ctime>
#include <future>
#include <thread>

// Async tool that returns a future
std::future<ai::JsonValue> fetch_data_async(const ai::JsonValue& args,
                                            const ai::ToolExecutionContext& context) {
    return std::async(std::launch::async, [args]() {
        // Simulate an asynchronous operation
        std::this_thread::sleep_for(std::chrono::seconds(1));
        return ai::JsonValue{
            {"data", "Fetched from API"},
            {"timestamp", std::time(nullptr)}
        };
    });
}

int main() {
    auto client = ai::openai::create_client();

    ai::ToolSet tools = {
        {"fetch_data", ai::create_simple_async_tool(
            "fetch_data",
            "Fetch data from external API",
            {{"endpoint", "string"}},
            fetch_data_async)}
    };

    // Multiple async tools will execute in parallel
    auto result = client.generate_text({
        .model = ai::openai::models::kGpt4o,
        .prompt = "Fetch data from the user and product APIs",
        .tools = tools
    });
    return 0;
}
```
  • Text Generation: Generate text completions with OpenAI and Anthropic models
  • Streaming: Real-time streaming of generated content
  • Multi-turn Conversations: Support for conversation history
  • Error Handling: Comprehensive error handling with optional types
  • Tool Calling: Function calling and tool integration with multi-step support
  • Async Tools: Asynchronous tool execution with parallel processing
  • 🚧 Additional Providers: Google, Cohere, and other providers
  • 🚧 Embeddings: Text embedding support
  • 🚧 Image Generation: Support for image generation models

Check out our examples directory for more comprehensive usage examples.

Requirements:

  • C++ Standard: C++20 or higher
  • CMake: 3.16 or higher

Inspired by the excellent Vercel AI SDK for TypeScript/JavaScript developers.
