Improving MCP tool call performance through LLM code generation



A Model Context Protocol (MCP) proxy server that translates tool calls into TypeScript code generation. Instead of making multiple tool calls back and forth, LLMs can write TypeScript code that calls multiple tools naturally, reducing token overhead and leveraging the LLM's superior code generation capabilities.

The problem with traditional tool calling: LLMs make multiple sequential tool calls, burning tokens and struggling with complex workflows:

  • ❌ Multiple round-trips between LLM and tools
  • ❌ Complex tool calling sequences are error-prone
  • ❌ Data cannot easily be passed between tools
  • ❌ Limited error handling and control flow

The code generation approach: LLMs write TypeScript code that calls multiple tools naturally:

  • ✅ Write code to call multiple tools in sequence
  • ✅ Use variables, loops, and conditionals naturally
  • ✅ Better error handling with try/catch
  • ✅ Reduce token usage by combining operations
  • ✅ Leverage LLM's strong code generation capabilities

Quick start:

  1. Install mcpcodeserver in your MCP client (see installation section below)
  2. Create an mcp.json configuration file with your child MCP servers
  3. Start using it - your LLM can now generate and execute TypeScript code that calls your tools

```typescript
// Instead of multiple tool calls, write code like this:
const files = await filesystem.list_directory({ path: "/tmp" });
const results = await Promise.all(
  files.map(file => filesystem.read_file({ path: file.path }))
);
return results.filter(content => content.includes("important"));
```

mcpcodeserver is a unique MCP server that:

  • Acts as an MCP client to connect to one or more child MCP servers
  • Discovers all tools from child servers
  • Exposes three powerful tools to parent LLM clients:
    1. list_servers - Lists all available sub-servers connected to this MCP server
    2. get_tool_definitions - Returns TypeScript type definitions for discovered tools (optionally filtered by server)
    3. generate_and_execute_code - Generates and executes TypeScript code that calls those tools in a sandbox

This architecture allows LLMs to orchestrate complex multi-tool workflows by writing code instead of making sequential tool calls, which is often more efficient and natural for modern language models.

This approach is inspired by recent research showing that LLMs perform better when generating executable code rather than making direct tool calls:

  • CodeAct: Your LLM Agent Acts Better when Generating Code (Apple, ICML 2024) - Demonstrates that LLM agents achieve up to 20% higher success rates when using executable Python code as a unified action space instead of pre-defined tool calling formats.

  • Cloudflare Code Mode - A similar implementation that converts MCP tools into TypeScript APIs, showing that "LLMs are better at writing code to call MCP, than at calling MCP directly."

The key insight from this research is that LLMs have extensive training on real-world code but limited exposure to synthetic tool-calling formats, making code generation a more natural and effective approach for complex agent workflows.

Traditional Tool Calling Problems

  • Multiple round-trips between LLM and tools burn tokens
  • LLMs often struggle with complex tool calling sequences
  • Each tool call requires JSON schema understanding and formatting
  • Data cannot easily be passed between tools without going through the LLM

Code Generation Benefits

  • Write TypeScript code to call multiple tools in sequence
  • Use variables, loops, and conditionals naturally
  • Better error handling with try/catch
  • Reduce token usage by combining operations
  • Leverage LLM's strong code generation capabilities

mcpcodeserver automatically monitors child MCP servers for tool changes and notifies parent clients when tools are added, removed, or modified:

  • Automatic Refresh: Checks for tool changes every 30 seconds
  • Real-time Notifications: Sends notifications/tools/list_changed to parent clients
  • Dynamic Updates: Tool definitions and summaries update automatically
  • No Manual Refresh: Parent LLMs receive notifications to refresh their tool knowledge

This ensures that parent LLMs always have the most current tool definitions without requiring manual intervention.
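On the wire, this notification is a standard MCP JSON-RPC message; it carries no payload, so parent clients re-fetch the tool list themselves after receiving it:

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}
```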

To reduce context window usage and improve focus, mcpcodeserver supports filtering tool definitions by specific servers:

  • List Available Servers: Use list_servers to see all connected sub-servers
  • Filtered Tool Definitions: Use get_tool_definitions with server_names parameter to get tools from specific servers only
  • Reduced Verbosity: Get focused TypeScript definitions without overwhelming the LLM's context window
  • Method Namespacing: All generated functions are prefixed with server names (e.g., pizzashop_create_pizza, filesystem_read_file)

Example usage:

```typescript
// List available servers
const servers = await list_servers({});
// Returns: ["pizzashop", "filesystem", "memory"]

// Get all tool definitions
const allTools = await get_tool_definitions({});

// Get only pizzashop tools
const pizzashopTools = await get_tool_definitions({ server_names: ["pizzashop"] });
```

mcpcodeserver supports pass-through of advanced MCP protocol features when both parent and child servers support them:

  • Elicitation: Child servers can request user input during tool execution, which is passed through to parent clients
  • Roots: Lists and aggregates roots from all child servers, providing a unified view of available resources
  • Sampling: Enables LLM sampling requests to be passed through to child servers for advanced AI capabilities

These features are automatically advertised to parent clients and work seamlessly when supported by the underlying child MCP servers.

Try it immediately with npx (no installation required):

```shell
# From GitHub
npx github:zbowling/mcpcodeserver --help

# Or when published to npm
npx mcpcodeserver --help
```

Prerequisites:

  • Node.js >= v18.0.0
  • Cursor, Claude Code, VSCode, Windsurf or another MCP Client

To install mcpcodeserver for any client automatically via Smithery:

```shell
npx -y @smithery/cli@latest install mcpcodeserver --client <client-name> --key <smithery-key>
```

Go to: Settings -> Cursor Settings -> MCP -> Add new global MCP server

Pasting the following configuration into your Cursor ~/.cursor/mcp.json file is the recommended approach. You may also install in a specific project by creating .cursor/mcp.json in your project folder.

Cursor Local Server Connection

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "npx",
      "args": ["-y", "mcpcodeserver", "--config", "/path/to/your/mcp.json"]
    }
  }
}
```

Cursor Remote Server Connection (if you set up HTTP transport)

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "url": "http://localhost:3000/mcp"
    }
  }
}
```

Run this command. See Claude Code MCP docs for more info.

Claude Code Local Server Connection

```shell
claude mcp add mcpcodeserver -- npx -y mcpcodeserver --config /path/to/your/mcp.json
```

Claude Code Remote Server Connection

```shell
claude mcp add --transport http mcpcodeserver http://localhost:3000/mcp
```

VSCode Manual Configuration

Add to your VSCode MCP settings:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "npx",
      "args": ["-y", "mcpcodeserver", "--config", "/path/to/your/mcp.json"]
    }
  }
}
```

Install in AI Coding Assistants

For Continue, Cline, and RooCode, add to your configuration:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "npx",
      "args": ["-y", "mcpcodeserver", "--config", "/path/to/your/mcp.json"]
    }
  }
}
```

Run this command in your terminal. See Amp MCP docs for more info.

```shell
amp mcp add mcpcodeserver -- npx -y mcpcodeserver --config /path/to/your/mcp.json
```

For Aider, Codium, Zed, Nova, and Sublime Text, add to your configuration:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "npx",
      "args": ["-y", "mcpcodeserver", "--config", "/path/to/your/mcp.json"]
    }
  }
}
```

Add to your Neovim MCP configuration:

```lua
{
  mcpServers = {
    mcpcodeserver = {
      command = "npx",
      args = {"-y", "mcpcodeserver", "--config", "/path/to/your/mcp.json"}
    }
  }
}
```

Add to your Emacs MCP configuration:

```elisp
(setq mcp-servers
      '((mcpcodeserver
         :command "npx"
         :args ("-y" "mcpcodeserver" "--config" "/path/to/your/mcp.json"))))
```

Install in JetBrains IDEs

For IntelliJ IDEA, WebStorm, PyCharm, and Android Studio, add to your MCP settings:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "npx",
      "args": ["-y", "mcpcodeserver", "--config", "/path/to/your/mcp.json"]
    }
  }
}
```

For Codeium, Tabnine, GitHub Copilot, and Amazon CodeWhisperer, add to your MCP settings:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "npx",
      "args": ["-y", "mcpcodeserver", "--config", "/path/to/your/mcp.json"]
    }
  }
}
```

For Replit, CodeSandbox, StackBlitz, GitPod, GitHub Codespaces, GitLab Web IDE, and Bitbucket Cloud, add to your MCP settings:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "npx",
      "args": ["-y", "mcpcodeserver", "--config", "/path/to/your/mcp.json"]
    }
  }
}
```

For Xcode, Fleet, Sourcegraph, and JetBrains Gateway, add to your MCP configuration:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "npx",
      "args": ["-y", "mcpcodeserver", "--config", "/path/to/your/mcp.json"]
    }
  }
}
```

Install in Remote Development

For remote development environments, you can also use HTTP transport:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "url": "http://your-server:3000/mcp"
    }
  }
}
```

Create an mcp.json configuration file to define your child MCP servers:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      "env": { "DEBUG": "false" }
    },
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "your-api-key" }
    }
  }
}
```

Installation for Development

```shell
# Install dependencies (using Bun for faster performance)
bun install

# Or with npm
npm install

# Build the project
bun run build

# Test the built server
bun dist/index.js --help
```

Note: This project uses Bun for better performance, but npm/node also work fine.

If you encounter ERR_MODULE_NOT_FOUND, try using bunx instead of npx:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "bunx",
      "args": ["-y", "mcpcodeserver", "--config", "/path/to/your/mcp.json"]
    }
  }
}
```

For errors like Error: Cannot find module, try the --experimental-vm-modules flag:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "npx",
      "args": ["-y", "--node-options=--experimental-vm-modules", "mcpcodeserver", "--config", "/path/to/your/mcp.json"]
    }
  }
}
```

Use the --experimental-fetch flag to bypass TLS-related problems:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "npx",
      "args": ["-y", "--node-options=--experimental-fetch", "mcpcodeserver", "--config", "/path/to/your/mcp.json"]
    }
  }
}
```

General MCP Client Errors

  1. Try adding @latest to the package name
  2. Use bunx as an alternative to npx
  3. Consider using deno as another alternative
  4. Ensure you're using Node.js v18 or higher for native fetch support

Configuration checks:

  • Make sure your mcp.json file is valid JSON
  • Check that all child server commands are available in your PATH
  • Verify that child servers can start independently
  • Check file permissions for the config file path
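As a quick sanity check, the structural requirements above can be verified with a few lines of code before wiring the config into a client (an illustrative sketch, not part of mcpcodeserver):

```typescript
// Illustrative mcp.json sanity check: the file must be valid JSON, and every
// server entry needs either a "command" (stdio) or a "url" (HTTP/SSE).
type ServerEntry = { command?: string; url?: string };

function checkConfig(json: string): string[] {
  const config = JSON.parse(json) as { mcpServers?: Record<string, ServerEntry> };
  const problems: string[] = [];
  for (const [name, server] of Object.entries(config.mcpServers ?? {})) {
    if (!server.command && !server.url) {
      problems.push(`${name}: needs "command" or "url"`);
    }
  }
  return problems;
}
```

Run it against the file contents (e.g. `checkConfig(fs.readFileSync("./mcp.json", "utf8"))`); an empty array means the basic shape is fine, and a malformed file throws from `JSON.parse` with a parse error.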

Testing with MCP Inspector

```shell
npx -y @modelcontextprotocol/inspector npx mcpcodeserver --config /path/to/your/mcp.json
```

mcpcodeserver accepts the following CLI flags:

  • --config <path> – Path to the MCP configuration file (default: ./mcp.json)
  • --transport <stdio|http> – Transport to use (stdio by default). Note that HTTP transport automatically provides both HTTP and SSE endpoints
  • --port <number> – Port to listen on when using http transport (default 3000)
  • --help – Show help message

Example with HTTP transport and port 8080:

```shell
npx mcpcodeserver --config /path/to/mcp.json --transport http --port 8080
```

Example with stdio transport:

```shell
npx mcpcodeserver --config /path/to/mcp.json --transport stdio
```

You can use environment variables for configuration:

  • MCP_CONFIG_PATH – Path to the MCP configuration file (alternative to --config)
  • MCP_TRANSPORT – Transport type (alternative to --transport)
  • MCP_PORT – Port number for HTTP transport (alternative to --port)

Example with environment variables:

```shell
# .env
MCP_CONFIG_PATH=/path/to/your/mcp.json
MCP_TRANSPORT=stdio
```

Example MCP configuration using environment variables:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "npx",
      "args": ["-y", "mcpcodeserver"],
      "env": { "MCP_CONFIG_PATH": "/path/to/your/mcp.json" }
    }
  }
}
```

Note: CLI flags take precedence over environment variables when both are provided.
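The precedence rule amounts to a simple fallback chain, sketched here for the config path (illustrative only; not the project's actual resolution code):

```typescript
// CLI flag wins, then the environment variable, then the built-in default.
function resolveConfigPath(
  cliFlag: string | undefined,
  env: Record<string, string | undefined>,
): string {
  return cliFlag ?? env.MCP_CONFIG_PATH ?? "./mcp.json";
}
```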

Local Development Configuration

For local development, you can run the TypeScript source directly:

```json
{
  "mcpServers": {
    "mcpcodeserver": {
      "command": "npx",
      "args": ["tsx", "/path/to/mcpcodeserver/src/index.ts", "--config", "/path/to/your/mcp.json"]
    }
  }
}
```

The server runs in stdio mode by default, which is perfect for integration with MCP clients like Claude Desktop:

```shell
# Run in stdio mode
npx mcpcodeserver --config mcp.json

# Or with custom config path
npx mcpcodeserver --config /path/to/your/mcp.json
```

For debugging, testing, or integration with web-based MCP clients, you can run the server in HTTP mode:

```shell
# Run in HTTP mode on default port 3000
npx mcpcodeserver --http --config mcp.json

# Run on custom port and host
npx mcpcodeserver --http --port 8080 --host 0.0.0.0 --config mcp.json
```

When running in HTTP mode, the server will be available at:

  • Server URL: http://localhost:3000/mcp (or your custom host:port)
  • MCP Inspector: Use npx @modelcontextprotocol/inspector http://localhost:3000/mcp to debug and test

MCP Inspector Integration

The MCP Inspector is a powerful tool for debugging and testing MCP servers. When running in HTTP mode, you can use it to:

  • Inspect available tools and their schemas
  • Test tool calls interactively
  • Debug resource access and prompts
  • Monitor real-time notifications

```shell
# Start the server in HTTP mode
npx mcpcodeserver --http --config mcp.json

# In another terminal, start the MCP Inspector
npx @modelcontextprotocol/inspector http://localhost:3000/mcp

# Or use the shorthand script (includes all example servers)
npm run inspector
```

The inspector will open in your browser and provide a full interface for exploring and testing your MCP server.

Note: The npm run inspector command uses mcp-test.json which includes 8 MCP servers (67 tools total) from the official examples, including both TypeScript (npx) and Python (uvx) based servers.

Create an mcp.json file that defines which child MCP servers to connect to. This follows the standard MCP client configuration format:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      "env": { "DEBUG": "false" }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token-here" }
    },
    "weather": {
      "url": "http://localhost:3000/mcp",
      "transport": "sse"
    }
  }
}
```

Each server entry supports:

For stdio transport:

  • command (required) - The command to execute (e.g., "node", "python", "npx")
  • args (optional) - Array of arguments to pass to the command
  • env (optional) - Environment variables for the child process

For HTTP/SSE transport:

  • url (required) - The HTTP endpoint URL
  • transport - Set to "sse" for Server-Sent Events
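The two entry shapes can be summarized as TypeScript types (illustrative only; these are not the project's actual type definitions):

```typescript
// Illustrative shape of an mcp.json server entry.
type StdioServer = {
  command: string;                 // e.g. "node", "python", "npx"
  args?: string[];                 // arguments passed to the command
  env?: Record<string, string>;    // environment for the child process
};

type HttpServer = {
  url: string;                     // HTTP endpoint URL
  transport?: "sse";               // "sse" selects Server-Sent Events
};

type McpConfig = {
  mcpServers: Record<string, StdioServer | HttpServer>;
};
```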

```shell
# Use default config (./mcp.json)
mcpcodeserver

# Use custom config location
mcpcodeserver --config /path/to/custom-mcp.json

# Show help
mcpcodeserver --help
```

Configure mcpcodeserver in your MCP client (like Claude Desktop, Claude Code, Cline, etc.):

With npx (recommended - no installation needed):

```json
{
  "mcpServers": {
    "codeserver": {
      "command": "npx",
      "args": ["-y", "mcpcodeserver", "--config", "/path/to/mcp.json"]
    }
  }
}
```

From GitHub (works immediately):

```json
{
  "mcpServers": {
    "codeserver": {
      "command": "npx",
      "args": ["-y", "github:zbowling/mcpcodeserver", "--config", "/path/to/mcp.json"]
    }
  }
}
```

With other package managers:

```jsonc
// yarn
{ "command": "yarn", "args": ["dlx", "mcpcodeserver", "--config", "/path/to/mcp.json"] }

// pnpm
{ "command": "pnpm", "args": ["dlx", "mcpcodeserver", "--config", "/path/to/mcp.json"] }

// bun
{ "command": "bunx", "args": ["mcpcodeserver", "--config", "/path/to/mcp.json"] }
```

See examples/ for more configuration examples and MCP client-specific setups.

Tool 1: list_servers

Lists the names of all connected sub-servers. Call it first to discover which servers are available, then request filtered tool definitions for just the ones you need.

Input: none

Tool 2: get_tool_definitions

This tool returns TypeScript type definitions for all discovered tools from child servers.

Input:

  • include_examples (optional boolean) - Whether to include usage examples
  • server_names (optional string[]) - Return definitions only for the named child servers

Example:

```typescript
// Call the tool (in your MCP client)
get_tool_definitions({ include_examples: true })
```

Output: Returns TypeScript code with interfaces and function declarations:

```typescript
/**
 * Auto-generated TypeScript definitions for MCP tools
 */
interface ToolResult {
  content: Array<{
    type: string;
    text?: string;
    // ...
  }>;
  isError?: boolean;
}

/**
 * Read contents of a file
 * Server: filesystem
 * Tool: read_file
 */
interface ReadFileParams {
  path: string;
}

declare function filesystem_read_file(params: ReadFileParams): Promise<ToolResult>;

// ... more tool definitions
```

Tool 3: generate_and_execute_code

This tool executes TypeScript code in a sandbox with access to all discovered tool functions.

Input:

  • code (required string) - TypeScript/JavaScript code to execute
  • timeout (optional number) - Max execution time in milliseconds (default: 30000, max: 300000)

Example:

```typescript
// Call the tool with TypeScript code
generate_and_execute_code({
  code: `
    // Read multiple files and combine them
    const file1 = await filesystem_read_file({ path: "/tmp/file1.txt" });
    const file2 = await filesystem_read_file({ path: "/tmp/file2.txt" });

    const text1 = file1.content[0].text;
    const text2 = file2.content[0].text;

    console.log("File 1 length:", text1.length);
    console.log("File 2 length:", text2.length);

    return {
      combined: text1 + text2,
      totalLength: text1.length + text2.length
    };
  `
})
```

Output:

```text
=== Console Output ===
File 1 length: 42
File 2 length: 38

=== Result ===
{
  "combined": "...",
  "totalLength": 80
}
```

The TypeScript execution sandbox provides:

Available:

  • All discovered tool functions (as async functions)
  • Console methods: console.log(), console.error(), console.warn(), console.info()
  • Basic JavaScript globals: Math, JSON, Date, Array, Object, String, Number, Boolean
  • Promise and async/await support
  • Error handling with try/catch
  • Timers: setTimeout, setInterval, clearTimeout, clearInterval

Not Available:

  • Node.js modules (fs, http, child_process, etc.)
  • File system access (except via MCP tools)
  • Network access (except via MCP tools)
  • Process information

Security Note: This is not a fully secure sandbox. The VM context provides isolation but is not bulletproof. Only execute trusted code.
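For example, the globals listed above (Promise plus setTimeout) are enough to pace a batch of tool calls inside the sandbox. The tool function is taken as a parameter here so the sketch is self-contained; inside the sandbox you would call a generated function such as `filesystem_read_file` directly:

```typescript
const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

// Read a list of paths one at a time with a small delay between calls,
// using only sandbox-available globals (Promise, setTimeout).
async function readAllPaced(
  readFile: (params: { path: string }) => Promise<string>,
  paths: string[],
  delayMs = 100,
): Promise<string[]> {
  const results: string[] = [];
  for (const path of paths) {
    results.push(await readFile({ path }));
    await sleep(delayMs);
  }
  return results;
}
```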

Errors in the sandbox are caught and returned with stack traces:

```typescript
generate_and_execute_code({
  code: `
    try {
      const result = await filesystem_read_file({ path: "/nonexistent" });
      return result;
    } catch (error) {
      console.error("Failed to read file:", error.message);
      throw error; // Re-throw to surface to parent
    }
  `
})
```

Want to try mcpcodeserver with Claude Code? Use the one-command setup:

```shell
./setup-claude-code-test.sh
```

This will build the project, install test dependencies, and show you exactly what to add to your Claude Code configuration. See TESTING_WITH_CLAUDE.md for detailed instructions.

```shell
# Install dependencies
bun install

# Build the project
bun run build

# Watch mode for development
bun run dev

# Run the server
bun start

# Run tests
bun test                   # All tests
bun run test:unit          # Unit tests only
bun run test:integration   # Integration tests (requires Python)

# Code quality
bun run lint               # Check linting
bun run format             # Format code
bun run typecheck          # Type checking
```

See AGENTS.md for detailed project structure and component documentation.

Instead of making multiple tool calls through the LLM, write code:

```typescript
const files = ["/tmp/a.txt", "/tmp/b.txt", "/tmp/c.txt"];
const contents = await Promise.all(
  files.map(path => filesystem_read_file({ path }))
);
return contents.map(r => r.content[0].text);
```

Process data between tool calls without LLM intervention:

```typescript
const data = await api_fetch({ url: "https://api.example.com/data" });
const json = JSON.parse(data.content[0].text);
const filtered = json.items.filter(item => item.active);
return filtered.length;
```

Make decisions based on tool results:

```typescript
const exists = await filesystem_read_file({ path: "/tmp/config.json" });
if (exists.isError) {
  console.log("Config doesn't exist, using defaults");
  return { source: "defaults" };
} else {
  return { source: "file", config: JSON.parse(exists.content[0].text) };
}
```

Handle errors gracefully without aborting the entire workflow:

```typescript
const results = [];
for (const path of ["/tmp/a.txt", "/tmp/b.txt", "/tmp/c.txt"]) {
  try {
    const content = await filesystem_read_file({ path });
    results.push({ path, success: true, data: content });
  } catch (error) {
    results.push({ path, success: false, error: error.message });
  }
}
return results;
```

Upstream MCP Servers Integration

mcpcodeserver can integrate with official upstream MCP servers from the Model Context Protocol servers repository. This allows you to use real, production-ready MCP servers alongside your custom tools.

Supported Upstream Servers

  • filesystem: File system operations (read, write, list directories)
  • memory: In-memory key-value storage
  • sqlite: SQLite database operations
  • github: GitHub API integration
  • brave-search: Web search capabilities
  • fetch: HTTP request capabilities

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    },
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    },
    "sqlite": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sqlite", "--db-path", "/tmp/test.db"]
    }
  }
}
```

Testing Upstream Integration

The project includes comprehensive tests for upstream server integration:

```shell
# Run upstream servers integration tests
bun tests/integration/run-upstream-tests.ts

# Or manually test with upstream config
npx mcpcodeserver --config tests/integration/upstream-test-config.json
```

With upstream servers, you can create powerful cross-server workflows:

```typescript
// Store database query results in memory and write to file
const queryResult = await sqlite_execute_sql({ sql: "SELECT COUNT(*) as count FROM users" });
const count = queryResult.content[0].text;

await memory_create({ key: "user-count", value: count });

await filesystem_write_file({
  path: "/tmp/user-count.txt",
  content: `Total users: ${count}`
});
```

Limitations:

  • Execution timeout: Maximum 5 minutes (configurable, default 30 seconds)
  • Memory: Limited by Node.js VM context
  • No persistent state between executions
  • Cannot require/import external modules
  • Not a security sandbox - don't run untrusted code

Contributions welcome! This project is built with:

  • TypeScript 5.7+
  • Node.js 18+
  • MCP TypeScript SDK 1.20+
  • Zod for validation

See CONTRIBUTING.md for detailed contribution guidelines.

If you find this project helpful, consider buying me a coffee!

MIT
