Comparing Vercel AI SDK, Firebase Genkit, and Langchain.js through code


How can a developer choose between several similar frameworks? By popularity? Popularity doesn't tell you much about how pleasant a framework is to work with.

I decided to battle-test Vercel AI SDK, Firebase Genkit, and Langchain.js by building a few code examples with each of them.

Here I'll show you the examples and share my observations.


Example 1: Simple question

Vercel AI SDK

import 'dotenv/config'
import { anthropic } from '@ai-sdk/anthropic'
import { generateText } from 'ai'

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-latest'),
  system: "You are Amy, a friendly assistant, who you can chat with about everyday stuff.",
  prompt: "What's your name?",
  maxSteps: 1,
  temperature: 0,
})

console.log(result.text)

Firebase Genkit

import 'dotenv/config'
import { genkit } from 'genkit'
import { anthropic, claude35Sonnet } from 'genkitx-anthropic'

const ai = genkit({
  plugins: [anthropic({ apiKey: process.env.ANTHROPIC_API_KEY })],
  model: claude35Sonnet,
})

const result = await ai.generate({
  system: "You are Amy, a friendly assistant, who you can chat with about everyday stuff.",
  prompt: "What's your name?",
  maxTurns: 1,
  temperature: 0,
})

console.log(result.text)

Langchain.js

import 'dotenv/config'
import { ChatAnthropic } from '@langchain/anthropic'
import {
  BaseMessage,
  HumanMessage,
  SystemMessage,
} from '@langchain/core/messages'

const ai = new ChatAnthropic({
  model: 'claude-3-5-sonnet-latest',
  temperature: 0,
})

const messages: BaseMessage[] = [
  new SystemMessage("You are Amy, a friendly assistant, who you can chat with about everyday stuff."),
  new HumanMessage("What's your name?"),
]

const response = await ai.invoke(messages)

console.log(response.content)

Observations

  • The Vercel AI SDK call looks the simplest: it's a single function call, whereas Genkit and Langchain require you to instantiate the main object first.

  • I like Langchain's SystemMessage and HumanMessage OOP message wrappers, which are useful for more complex cases. For simple cases like the one above, though, the system and prompt strings work better for me.

  • Genkit's parameter and return types struck me as overly complex.

Example 2: Tool definition

Vercel AI SDK

import { tool } from 'ai'
import { z } from 'zod'

export const vercelTemperatureTool = tool({
  description: 'Gets current temperature in the given city',
  parameters: z.object({
    city: z.string().describe('The city to get the current temperature for'),
  }),
  execute: async ({ city }) => {
    try {
      // Mock implementation: return a random temperature between min and max
      const min = -10
      const max = 40
      const temperature = (Math.random() * (max - min) + min).toFixed(0)
      return `${temperature}°C`
    } catch (error: any) {
      return { error: error.message }
    }
  },
})

Firebase Genkit

import { Genkit, z } from 'genkit'

export const createGenkitTemperatureTool = (ai: Genkit) => {
  return ai.defineTool(
    {
      name: 'temperature',
      description: 'Gets current temperature in the given city',
      inputSchema: z.object({
        city: z.string().describe('The city to get the current temperature for'),
      }),
      outputSchema: z.string(),
    },
    async ({ city }) => {
      try {
        // Mock implementation: return a random temperature between min and max
        const min = -10
        const max = 40
        const temperature = (Math.random() * (max - min) + min).toFixed(0)
        return `${temperature}°C`
      } catch (error: any) {
        return error.message
      }
    }
  )
}

Langchain.js

import { tool } from '@langchain/core/tools'
import { z } from 'zod'

export const langchainTemperatureTool = tool(
  async ({ city }) => {
    try {
      // Mock implementation: return a random temperature between min and max
      const min = -10
      const max = 40
      const temperature = (Math.random() * (max - min) + min).toFixed(0)
      return `${temperature}°C`
    } catch (error: any) {
      return error.message
    }
  },
  {
    name: 'temperature',
    description: 'Gets current temperature in the given city',
    schema: z.object({
      city: z.string().describe('The city to get the current temperature for'),
    }),
    responseFormat: 'content',
  }
)

Observations

  • Unlike Vercel AI SDK and Langchain, the Firebase Genkit package re-exports the Zod library, so you don't need to install and import it separately.

  • Unlike Firebase Genkit and Langchain, Vercel AI SDK doesn't let you declare the tool's output format. I don't think I need it, though.

  • Unfortunately, Genkit tools require the main Genkit object, so I wrapped the tool definition in a factory function that takes the Genkit instance. The good news is that the Firebase team listens to feedback on X, so I hope defineTool can become a standalone function soon.

Example 3: Simple question with a tool call

Vercel AI SDK

import 'dotenv/config'
import { anthropic } from '@ai-sdk/anthropic'
import { generateText } from 'ai'
import { vercelTemperatureTool } from './tools/vercelTemperatureTool'

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-latest'),
  system: "You are Amy, a friendly assistant, who you can chat with about everyday stuff.",
  prompt: "What's the temperature in New York?",
  tools: { temperature: vercelTemperatureTool },
  maxSteps: 2,
  temperature: 0,
})

console.log(result.text)

Firebase Genkit

import 'dotenv/config'
import { genkit } from 'genkit'
import { anthropic, claude35Sonnet } from 'genkitx-anthropic'
import { createGenkitTemperatureTool } from './tools/genkitTemperatureTool'

const ai = genkit({
  plugins: [anthropic({ apiKey: process.env.ANTHROPIC_API_KEY })],
  model: claude35Sonnet,
})

const genkitTemperatureTool = createGenkitTemperatureTool(ai)

const result = await ai.generate({
  tools: [genkitTemperatureTool],
  system: "You are Amy, a friendly assistant, who you can chat with about everyday stuff.",
  prompt: "What's the temperature in New York?",
  maxTurns: 2,
  temperature: 0,
})

console.log(result.text)

Langchain.js

import 'dotenv/config'
import { ChatAnthropic } from '@langchain/anthropic'
import { BaseMessage, HumanMessage } from '@langchain/core/messages'
import { createReactAgent } from '@langchain/langgraph/prebuilt'
import { langchainTemperatureTool } from './tools/langchainTemperatureTool'

const ai = new ChatAnthropic({
  model: 'claude-3-5-sonnet-latest',
  temperature: 0,
})

const agent = createReactAgent({
  llm: ai,
  tools: [langchainTemperatureTool],
  prompt: "You are Amy, a friendly assistant, who you can chat with about everyday stuff.",
})

const messages: BaseMessage[] = [
  new HumanMessage("What's the temperature in New York?"),
]

const result = await agent.invoke({
  messages,
})

// Print only the new text messages the agent produced,
// skipping tool messages and non-string content
for (let i = messages.length; i < result.messages.length; i++) {
  if (
    result.messages[i].getType() === 'tool' ||
    typeof result.messages[i].content !== 'string'
  ) {
    continue
  }
  console.log(result.messages[i].content)
}

Observations

  • For Vercel AI SDK and Firebase Genkit, I had to allow at least two steps (maxSteps / maxTurns) so the LLM could call the tool and then use its result. I didn't find a way to set the number of iterations in Langchain.js, but fortunately its defaults allow tool calls.

  • For the Langchain.js example, I had to use a LangGraph prebuilt called createReactAgent, because manual tool calling without it was a pain.

  • As you can see, processing the results in the Langchain.js example is more verbose; one possible simplification is sketched right after this list. Let me know if you know a better way.
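
If the goal is just to print the agent's final answer, a shorter version may work. Here's a minimal sketch, assuming createReactAgent's last message is the assistant's final text reply (which holds in simple cases like the one above):

// Minimal sketch: with createReactAgent, the final message in result.messages
// is normally the agent's answer, so printing just that one may be enough.
const last = result.messages.at(-1)
if (last && typeof last.content === 'string') {
  console.log(last.content)
}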

More examples

You can find an interactive chat with memory and a tool enabled, along with other examples, in the kometolabs/ai-sdk-comparison repo. Feel free to submit a PR if you know how to improve the examples.
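
To give an idea of the memory part without sending you straight to the repo, here is a minimal sketch of such a chat loop with the Vercel AI SDK. This is not the repo's actual code; it assumes a recent AI SDK version where result.response.messages returns the generated assistant and tool messages:

import 'dotenv/config'
import * as readline from 'node:readline/promises'
import { anthropic } from '@ai-sdk/anthropic'
import { generateText, type CoreMessage } from 'ai'
import { vercelTemperatureTool } from './tools/vercelTemperatureTool'

const rl = readline.createInterface({ input: process.stdin, output: process.stdout })
const messages: CoreMessage[] = [] // conversation memory

while (true) {
  const input = await rl.question('You: ')
  messages.push({ role: 'user', content: input })

  const result = await generateText({
    model: anthropic('claude-3-5-sonnet-latest'),
    system: "You are Amy, a friendly assistant, who you can chat with about everyday stuff.",
    messages,
    tools: { temperature: vercelTemperatureTool },
    maxSteps: 2,
    temperature: 0,
  })

  // Keep the assistant (and tool) turns so the next request has full context
  messages.push(...result.response.messages)
  console.log(`Amy: ${result.text}`)
}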

If you need to stream LLM responses, here is my example for Vercel AI SDK. You can implement something similar for your framework of choice based on the code samples above.
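
For reference, a minimal streaming sketch with the Vercel AI SDK might look like this (assuming a version where streamText exposes an async-iterable textStream):

import 'dotenv/config'
import { anthropic } from '@ai-sdk/anthropic'
import { streamText } from 'ai'

const result = streamText({
  model: anthropic('claude-3-5-sonnet-latest'),
  system: "You are Amy, a friendly assistant, who you can chat with about everyday stuff.",
  prompt: "What's your name?",
  temperature: 0,
})

// Print tokens as they arrive instead of waiting for the full response
for await (const chunk of result.textStream) {
  process.stdout.write(chunk)
}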

Conclusions

To sum up, I like the functional simplicity and the docs of the Vercel AI SDK. Firebase Genkit follows it closely. Langchain, however, requires more boilerplate and isn't very intuitive for me personally.

As one wise person said, "healthy competition keeps everybody sharp and gives the community more choice". I don't agree with the "more choice" part: with more choice, developers have to spend time learning the differences, time they could have spent building features for their users. So I'd like to encourage framework builders to collaborate more and compete less.
