A comprehensive evaluation library for Model Context Protocol (MCP) servers. Test and validate your MCP servers across the full MCP specification (Tools, Resources, Prompts, and Sampling) using deterministic metrics, security validation, and optional LLM-based evaluation.
Status: MVP – API stable, minor breaking changes possible before 1.0.0
Create a config file (e.g., mcp-eval.config.ts):
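A minimal starting point is sketched below; the `server` key and all example values are illustrative assumptions, and the field names follow the configuration reference later in this document.

```ts
// mcp-eval.config.ts — illustrative sketch; adjust the command, args, and tests to your server.
export default {
  server: {
    transport: "stdio",
    command: "node",
    args: ["./dist/my-mcp-server.js"],
    env: { LOG_LEVEL: "info" },
  },
  toolHealthSuites: [
    {
      name: "basic-tools",
      tests: [{ name: "add", args: { a: 1, b: 2 }, expectedResult: 3, maxLatency: 1000 }],
    },
  ],
  workflows: [
    {
      name: "simple-math",
      steps: [{ user: "Add 1 and 2 and tell me the result.", expectedState: "3" }],
      expectTools: ["add"],
    },
  ],
};
```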
MCPVals provides comprehensive testing for all MCP specification primitives:
- Tool Health Testing: Directly calls individual tools with specific arguments to verify their correctness, performance, and error handling. This is ideal for unit testing and regression checking.
- Resource Evaluation: Tests MCP resources (data and context) including discovery, access validation, URI template testing, content validation, and subscription capabilities.
- Prompt Evaluation: Validates MCP prompts (templates and workflows) with argument testing, template generation, security validation, and dynamic content evaluation.
- Sampling Evaluation: Tests MCP sampling capabilities (server-initiated LLM requests) including capability negotiation, human-in-the-loop workflows, model preferences, and security controls.
- Workflow Evaluation: Uses a large language model (LLM) to interpret natural language prompts and execute a series of tool calls to achieve a goal. This tests the integration of your MCP primitives from an LLM's perspective.
- OAuth 2.1 Testing: Comprehensive OAuth 2.1 authentication flow testing with PKCE, resource indicators (RFC 8707), multi-tenant support, and security validation. Covers authorization code flows, client credentials, device flows, token management, and security controls.
- Node.js ≥ 18 – we rely on native fetch, EventSource, and fs/promises.
- pnpm / npm / yarn – whichever you prefer; MCPVals is published as an ESM-only package.
- MCP Server – a local stdio binary or a remote Streaming-HTTP or SSE endpoint.
- Anthropic API Key – Required for workflow execution (uses Claude to drive tool calls). Set via ANTHROPIC_API_KEY environment variable.
- (Optional) OpenAI key – Only required if using the LLM judge feature. Set via OPENAI_API_KEY.
ESM-only: You cannot require("mcpvals") from a CommonJS project. Either enable "type": "module" in your package.json or use dynamic import().
Runs tests specified in the config file. It will run all configured test types (toolHealthSuites, resourceSuites, promptSuites, samplingSuites, and workflows) by default. Use flags to run only specific types. Exits 0 on success or 1 on any failure – perfect for CI.
Static inspection – prints workflows without starting the server. Handy when iterating on test coverage.
MCPVals loads either a .json file or a .ts/.js module with a default export. Any string value in the config supports Bash-style environment variable interpolation (${VAR}).
Defines how to connect to your MCP server.
- transport: stdio, shttp (Streaming HTTP), or sse (Server-Sent Events).
- command/args: (for stdio) The command to execute your server.
- env: (for stdio) Environment variables to set for the child process.
- url/headers: (for shttp and sse) The endpoint and headers for a remote server.
- reconnect/reconnectInterval/maxReconnectAttempts: (for sse) Reconnection settings for SSE connections.
Example shttp with Authentication:
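A sketch, assuming the connection settings live under a top-level `server` key; the URL and header values are placeholders:

```ts
export default {
  server: {
    transport: "shttp",
    url: "https://api.example.com/mcp",
    headers: {
      Authorization: "Bearer ${API_TOKEN}", // interpolated from the environment at load time
    },
  },
  // ...suites and workflows
};
```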
Example sse with Reconnection:
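And a corresponding sketch for SSE (same `server`-key assumption; the interval values are illustrative):

```ts
export default {
  server: {
    transport: "sse",
    url: "https://api.example.com/mcp/sse",
    headers: { Authorization: "Bearer ${API_TOKEN}" },
    reconnect: true,
    reconnectInterval: 2000,   // ms between reconnection attempts (illustrative value)
    maxReconnectAttempts: 5,
  },
  // ...suites and workflows
};
```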
An array of suites for testing tools directly. Each suite contains:
- name: Identifier for the test suite.
- tests: An array of individual tool tests.
- parallel: (boolean) Whether to run tests in the suite in parallel (default: false).
- timeout: (number) Override the global timeout for this suite.
| Field | Type | Description |
| --- | --- | --- |
| name | string | Tool name to test (must match an available MCP tool). |
| description | string? | What this test validates. |
| args | object | Arguments to pass to the tool. |
| expectedResult | any? | Expected result. Uses deep equality for objects and substring matching for strings. |
| expectedError | string? | Expected error message if the tool should fail. |
| maxLatency | number? | Maximum acceptable latency in milliseconds. |
| retries | number? | Retries on failure (0-5, default: 0). |
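A sketch of a tool health suite using the fields above; the tool names, arguments, and error message are illustrative and must match what your server actually exposes:

```ts
// Excerpt from mcp-eval.config.ts, alongside the server block.
toolHealthSuites: [
  {
    name: "calculator-health",
    parallel: true,
    timeout: 10000,
    tests: [
      {
        name: "add",
        description: "adds two integers",
        args: { a: 2, b: 3 },
        expectedResult: 5,
        maxLatency: 500,
      },
      {
        name: "divide",
        description: "dividing by zero should fail",
        args: { a: 1, b: 0 },
        expectedError: "division by zero",
        retries: 1,
      },
    ],
  },
],
```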
An array of suites for testing MCP resources. Each suite contains:
- name: Identifier for the resource test suite.
- discoveryTests: Tests for resource discovery and enumeration.
- tests: Resource access and content validation tests.
- templateTests: URI template instantiation tests.
- subscriptionTests: Resource subscription and update tests.
- parallel: (boolean) Whether to run tests in parallel (default: false).
- timeout: (number) Override the global timeout for this suite.
Discovery Tests: Validate listResources functionality
Resource Access Tests: Validate readResource operations
Template Tests: Validate URI template instantiation
Subscription Tests: Validate resource update subscriptions
An array of suites for testing MCP prompts. Each suite contains:
- name: Identifier for the prompt test suite.
- discoveryTests: Tests for prompt discovery and enumeration.
- tests: Prompt execution and content validation tests.
- argumentTests: Argument validation tests (required vs optional).
- templateTests: Template generation and content tests.
- securityTests: Security validation including injection prevention.
- parallel: (boolean) Whether to run tests in parallel (default: false).
Discovery Tests: Validate listPrompts functionality
Prompt Execution Tests: Validate getPrompt operations
Argument Tests: Validate prompt argument handling
Security Tests: Test prompt injection prevention
An array of suites for testing MCP sampling capabilities. Each suite contains:
- name: Identifier for the sampling test suite.
- capabilityTests: Tests for sampling capability negotiation.
- requestTests: Tests for sampling request/response handling.
- securityTests: Security validation for sampling operations.
- performanceTests: Performance and rate limiting tests.
- contentTests: Content type validation (text, image, mixed).
- workflowTests: End-to-end sampling workflow tests.
Capability Tests: Validate sampling capability negotiation
Request Tests: Validate sampling message creation
Security Tests: Validate security controls
Performance Tests: Validate performance and limits
An array of suites for testing OAuth 2.1 authentication flows. Each suite contains comprehensive tests for modern OAuth 2.1 security practices including PKCE, resource indicators, and multi-tenant support.
- name: Identifier for the OAuth test suite.
- description: Optional description of the test suite purpose.
- authorizationCodeTests: Authorization code flow tests with PKCE.
- clientCredentialsTests: Machine-to-machine authentication tests.
- deviceCodeTests: Device authorization flow tests for input-limited devices.
- tokenManagementTests: Token refresh, revocation, and expiration tests.
- pkceValidationTests: PKCE (Proof Key for Code Exchange) security validation.
- resourceIndicatorTests: RFC 8707 resource indicators for audience restriction.
- multiTenantTests: Multi-tenant isolation and access control tests.
- parallel: (boolean) Whether to run tests in parallel (default: false).
- timeout: (number) Override the global timeout for this suite.
Authorization Code Flow Tests: Complete OAuth 2.1 authorization code flow with PKCE
Token Management Tests: Refresh, revocation, and expiration validation
PKCE Validation Tests: Security validation for Proof Key for Code Exchange
Resource Indicator Tests: RFC 8707 audience restriction validation
Multi-Tenant Tests: Tenant isolation and cross-tenant access control
An array of LLM-driven test workflows. Each workflow contains:
- name: Identifier for the workflow.
- steps: An array of user interactions (usually just one for a high-level goal).
- expectTools: An array of tool names expected to be called during the workflow.
| Field | Type | Description |
| --- | --- | --- |
| user | string | High-level user intent. The LLM will plan how to accomplish this. |
| expectedState | string? | A substring the evaluator looks for in the final assistant message or tool result. |
- Write natural prompts: Instead of micro-managing tool calls, give the LLM a complete task (e.g., "Book a flight from SF to NY for next Tuesday and then find a hotel near the airport.").
- Use workflow-level expectTools: List all tools you expect to be used across the entire workflow to verify the LLM's plan.
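A workflow sketch following these practices (the tool names are illustrative and must match your server's tools):

```ts
// Excerpt from mcp-eval.config.ts.
workflows: [
  {
    name: "travel-booking",
    steps: [
      {
        user: "Book a flight from SF to NY for next Tuesday and then find a hotel near the airport.",
        expectedState: "confirmation",
      },
    ],
    expectTools: ["search_flights", "book_flight", "search_hotels"],
  },
],
```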
- timeout: (number) Global timeout in ms for server startup and individual tool calls. Default: 30000.
- llmJudge: (boolean) Enables the LLM Judge feature. Default: false.
- openaiKey: (string) OpenAI API key for the LLM Judge.
- judgeModel: (string) The model to use for judging. Default: "gpt-4o".
- passThreshold: (number) The minimum score (0-1) from the LLM Judge to pass. Default: 0.8.
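Assuming these options sit at the top level of the config, enabling the judge might look like this sketch:

```ts
export default {
  // ...server, suites, and workflows
  timeout: 45000,            // ms for server startup and individual tool calls
  llmJudge: true,
  openaiKey: "${OPENAI_API_KEY}",
  judgeModel: "gpt-4o",
  passThreshold: 0.8,
};
```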
When running tool health tests, the following is assessed for each test:
- Result Correctness: Does the output match expectedResult?
- Error Correctness: If expectedError is set, did the tool fail with a matching error?
- Latency: Did the tool respond within maxLatency?
- Success: Did the tool call complete without unexpected errors?
For resource tests, the following is assessed:
- Discovery Metrics: Resource count validation, expected resource presence
- Access Metrics: Successful resource reading, MIME type validation, content correctness
- Template Metrics: URI template instantiation, parameter substitution accuracy
- Subscription Metrics: Update notification handling, subscription lifecycle management
- Performance Metrics: Response latency, retry success rates
For prompt tests, the following is assessed:
- Discovery Metrics: Prompt availability and enumeration
- Execution Metrics: Prompt generation success, content validation, message structure
- Argument Metrics: Required/optional parameter handling, validation correctness
- Template Metrics: Dynamic content generation, parameter substitution
- Security Metrics: Injection prevention, input sanitization effectiveness
For sampling tests, the following is assessed:
- Capability Metrics: Sampling support detection and negotiation
- Request Metrics: Message creation, model preference handling, approval workflows
- Security Metrics: Unauthorized request blocking, sensitive data filtering, privacy protection
- Performance Metrics: Concurrent request handling, rate limiting, latency management
- Content Metrics: Text/image/mixed content validation, format handling
For OAuth 2.1 authentication tests, the following is assessed:
- Flow Completion Metrics: Successful completion of authorization code, client credentials, and device code flows
- PKCE Security Metrics: Code challenge/verifier validation, S256 method enforcement, replay attack prevention
- Token Management Metrics: Token refresh success, revocation effectiveness, expiration validation
- Security Validation Metrics: State parameter validation, nonce verification, audience restriction compliance
- Multi-Tenant Metrics: Tenant isolation enforcement, cross-tenant access blocking, tenant switching validation
- Resource Indicator Metrics: RFC 8707 compliance, audience restriction, scope validation
- Performance Metrics: Token endpoint latency, authorization flow completion time, concurrent request handling
For each workflow, a trace of the LLM interaction is recorded and evaluated against 3 metrics:
| # | Metric | Pass criterion |
| --- | --- | --- |
| 1 | End-to-End Success | expectedState is found in the final response. |
| 2 | Tool Invocation Order | The tools listed in expectTools were called in the exact order specified. |
| 3 | Tool Call Health | All tool calls completed successfully (no errors, HTTP 2xx, etc.). |
The overall score is the arithmetic mean of the three metric scores. The evaluation fails if any individual metric fails.
Add subjective grading when deterministic checks are not enough (e.g., checking tone or conversational quality).
- Set "llmJudge": true in the config and provide an OpenAI key.
- Use the --llm-judge CLI flag.
The judge asks the specified judgeModel for a score and a reason. A 4th metric, LLM Judge, is added to the workflow results, which passes if score >= passThreshold.
You can run evaluations programmatically.
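A hypothetical sketch is below; the entry-point name `evaluate` and the shape of the returned report are assumptions to check against the package's actual exports:

```ts
import { evaluate } from "mcpvals"; // hypothetical entry-point name

const report = await evaluate("./mcp-eval.config.ts", { llmJudge: false });

// Inspect the report and fail the process on any failure
// (field names here are illustrative, not the library's confirmed API).
if (!report.passed) {
  process.exitCode = 1;
}
```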
The library exports all configuration and result types for use in TypeScript projects:
Configuration Types:
- Config, Workflow, WorkflowStep, ToolTest, ToolHealthSuite
- ResourceSuite, ResourceTest, ResourceDiscoveryTest, ResourceTemplateTest, ResourceSubscriptionTest
- PromptSuite, PromptTest, PromptArgumentTest, PromptTemplateTest, PromptSecurityTest
- SamplingSuite, SamplingCapabilityTest, SamplingRequestTest, SamplingSecurityTest, SamplingPerformanceTest, SamplingContentTest, SamplingWorkflowTest
- OAuth2TestSuite, AuthorizationCodeTest, ClientCredentialsTest, DeviceCodeTest, TokenManagementTest, PKCEValidationTest, ResourceIndicatorTest, MultiTenantTest
Result Types:
- EvaluationReport, WorkflowEvaluation, EvaluationResult
- ToolHealthResult, ToolTestResult
- ResourceSuiteResult, ResourceDiscoveryResult, ResourceTestResult, ResourceTemplateResult, ResourceSubscriptionResult
- PromptSuiteResult, PromptDiscoveryResult, PromptTestResult, PromptArgumentResult, PromptTemplateResult, PromptSecurityResult
- SamplingSuiteResult, SamplingCapabilityResult, SamplingRequestResult, SamplingSecurityResult, SamplingPerformanceResult, SamplingContentResult, SamplingWorkflowResult
- OAuth2SuiteResult, OAuth2TestResult, TokenManager, PKCEUtils, SecurityUtils
- runLlmJudge, LlmJudgeResult
- Custom Reporters: Import ConsoleReporter for reference and implement your own .report() method.
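A minimal custom-reporter sketch is shown below. The only documented contract is a `.report()` method; the argument type and how the reporter gets registered are assumptions to verify against the ConsoleReporter source.

```ts
import type { EvaluationReport } from "mcpvals";
import { writeFile } from "node:fs/promises";

// Writes the full evaluation report to disk as JSON instead of printing to the console.
class JsonFileReporter {
  async report(report: EvaluationReport): Promise<void> {
    await writeFile("eval-report.json", JSON.stringify(report, null, 2));
  }
}
```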
- Server Hangs: Increase the timeout value in your config. Ensure your server writes MCP messages to stdout.
- LLM Judge Fails: Use --debug to inspect the raw model output for malformed JSON.
✅ Completed (v0.1.0):
- Complete MCP specification coverage (Tools, Resources, Prompts, Sampling)
- Resource evaluation with discovery, access, templates, and subscriptions
- Prompt evaluation with execution, arguments, templates, and security testing
- Sampling evaluation with capability, requests, security, and performance testing
- OAuth 2.1 authentication testing with PKCE, resource indicators, and multi-tenant support
- Comprehensive security validation framework
- Enhanced console reporting for all evaluation types
- Server-Sent Events (SSE) transport support with automatic reconnection
🚧 In Progress:
- JUnit XML reporter for CI integration
- Advanced security testing extensions
- Performance benchmarking and comparison tools
📋 Planned (v0.2.0):
- Fluent API alongside configuration files
- Interactive CLI for test generation
- Output-schema validation for tool calls
- Parallel workflow execution
- Web dashboard for replaying traces
- Configurable expectTools strictness (e.g., allow extra or unordered calls)
- MCP protocol compliance validator
- Real-time resource subscription testing
Here's a comprehensive example showcasing all evaluation types:
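A combined sketch is shown below. The suite-level keys follow the configuration reference above; the inner test entries for resources, prompts, sampling, and OAuth are abbreviated, and the `oauth2Suites` key name in particular is an assumption.

```ts
// mcp-eval.config.ts — combined sketch; fill in the abbreviated suites from the
// sections above and the exported *Test types.
export default {
  server: { transport: "stdio", command: "node", args: ["./dist/server.js"] },
  toolHealthSuites: [
    { name: "tool-health", tests: [{ name: "add", args: { a: 1, b: 2 }, expectedResult: 3 }] },
  ],
  resourceSuites: [{ name: "resource-checks", discoveryTests: [], tests: [] }],
  promptSuites: [{ name: "prompt-checks", tests: [], securityTests: [] }],
  samplingSuites: [{ name: "sampling-checks", capabilityTests: [], requestTests: [] }],
  oauth2Suites: [], // key name assumed; see the OAuth 2.1 section above
  workflows: [
    {
      name: "end-to-end",
      steps: [{ user: "Add 1 and 2 and report the result.", expectedState: "3" }],
      expectTools: ["add"],
    },
  ],
  timeout: 30000,
  llmJudge: false,
};
```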
- Model Context Protocol – for the SDK
- Vercel AI SDK – for LLM integration
- chalk – for terminal colors
Enjoy testing your MCP servers – PRs, issues & feedback welcome! ✨

