Gemini and Vercel AI SDK Cheatsheet


Gemini models are available through the AI SDK by Vercel. This guide helps you get started with the AI SDK and Gemini, and provides code snippets for the most important features.


Install the AI SDK and the Google Generative AI integration:

npm install ai @ai-sdk/google
# pnpm: pnpm add ai @ai-sdk/google
# yarn: yarn add ai @ai-sdk/google

Set the GOOGLE_GENERATIVE_AI_API_KEY environment variable with your API key. A free API key can be obtained at Google AI Studio.


On macOS/Linux:

export GOOGLE_GENERATIVE_AI_API_KEY="YOUR_API_KEY_HERE"


On Windows:

setx GOOGLE_GENERATIVE_AI_API_KEY "YOUR_API_KEY_HERE"
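The SDK reads the key from this environment variable automatically. As a minimal sketch, you can fail fast at startup if it is missing (requireApiKey is a hypothetical helper, not part of the SDK):

```javascript
// Hypothetical helper: fail fast if the key the SDK expects is missing.
function requireApiKey(env = process.env) {
  const key = env.GOOGLE_GENERATIVE_AI_API_KEY;
  if (!key) {
    throw new Error('GOOGLE_GENERATIVE_AI_API_KEY is not set');
  }
  return key;
}
```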

Here's a basic example that takes a single text input:

import { generateText } from 'ai';
import { google } from '@ai-sdk/google';

const model = google('gemini-2.0-flash');

const { text } = await generateText({
  model: model,
  prompt: 'Why is the sky blue?',
});

console.log(text);

Here's a basic streaming example:

import { streamText } from 'ai';
import { google } from '@ai-sdk/google';

const model = google('gemini-2.0-flash');

const { textStream } = streamText({
  model: model,
  prompt: 'Why is the sky blue?',
});

for await (const textPart of textStream) {
  console.log(textPart);
}
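Each chunk of the stream is a plain string, so you can print it incrementally and also accumulate the full response. A small sketch, assuming only that the stream is an async iterable of strings (the collectStream helper is ours, not part of the SDK):

```javascript
// Hypothetical helper: consume an async iterable of text chunks,
// echo each chunk as it arrives, and return the concatenated text.
async function collectStream(textStream) {
  let fullText = '';
  for await (const part of textStream) {
    process.stdout.write(part); // no extra newline between chunks
    fullText += part;
  }
  return fullText;
}
```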

You can use thinking models with support for thinking budgets and thought summaries:

import { generateText } from 'ai';
import { google, GoogleGenerativeAIProviderOptions } from '@ai-sdk/google';

const model = google('gemini-2.5-flash-preview-05-20');

const response = await generateText({
  model: model,
  prompt: 'What is the sum of the first 10 prime numbers?',
  providerOptions: {
    google: {
      thinkingConfig: {
        thinkingBudget: 2024,
        includeThoughts: true,
      },
    } satisfies GoogleGenerativeAIProviderOptions,
  },
});

console.log(response.text);

console.log('Reasoning:');
console.log(response.reasoning);

You can configure Search grounding with Google Search:

import { generateText } from 'ai';
import { google } from '@ai-sdk/google';

const model = google('gemini-2.5-flash-preview-05-20', { useSearchGrounding: true });

const { text, sources, providerMetadata } = await generateText({
  model: model,
  prompt: 'Who won the Super Bowl in 2025?',
});

console.log(text);

console.log('Sources:');
console.log(sources);

console.log('Metadata:');
console.log(providerMetadata?.google?.groundingMetadata);

The AI SDK supports function calling:

import { z } from 'zod';
import { generateText, tool } from 'ai';
import { google } from '@ai-sdk/google';

const model = google('gemini-2.5-flash-preview-05-20');

const result = await generateText({
  model: model,
  prompt: 'What is the weather in San Francisco?',
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      parameters: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  maxSteps: 5,
});

console.log(result.text);

for (const message of result.response.messages) {
  console.log(message.content);
}
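The execute handler in this example returns mock data. Isolated from the SDK, it is just a function from the validated tool arguments to a result object (a sketch for illustration; the getWeather name is ours):

```javascript
// Stand-alone version of the tool's execute handler: returns the location
// plus a pseudo-random temperature between 62 and 82 (inclusive).
function getWeather({ location }) {
  return {
    location,
    temperature: 72 + Math.floor(Math.random() * 21) - 10,
  };
}
```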

See the AI SDK tool calling guide for further resources.

Document / PDF understanding

The AI SDK supports file inputs, e.g. PDF files:

import { generateText } from 'ai';
import { google } from '@ai-sdk/google';
import { readFile } from 'fs/promises';

const model = google('gemini-2.0-flash');

const { text } = await generateText({
  model: model,
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Extract the date and price from the invoice',
        },
        {
          type: 'file',
          data: await readFile('./invoice.pdf'),
          mimeType: 'application/pdf',
        },
      ],
    },
  ],
});

console.log(text);

The AI SDK supports image inputs:

import { generateText } from 'ai';
import { google } from '@ai-sdk/google';
import { readFile } from 'fs/promises';

const model = google('gemini-2.0-flash');

const { text } = await generateText({
  model: model,
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'List all items from the picture',
        },
        {
          type: 'image',
          image: await readFile('./veggies.jpeg'),
          mimeType: 'image/jpeg',
        },
      ],
    },
  ],
});

console.log(text);

The AI SDK supports structured outputs:

import { generateObject } from 'ai';
import { z } from 'zod';
import { google } from '@ai-sdk/google';
import { readFile } from 'fs/promises';

const model = google('gemini-2.0-flash');

const { object } = await generateObject({
  model: model,
  schema: z.object({
    date: z.string(),
    total_gross_worth: z.number(),
    invoice_number: z.string(),
  }),
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Extract the structured data from the following PDF file',
        },
        {
          type: 'file',
          data: await readFile('./invoice.pdf'),
          mimeType: 'application/pdf',
        },
      ],
    },
  ],
});

console.log(object);

See the AI SDK structured data guide for further resources.

For further resources about building with the Gemini API, see the Gemini API documentation.
