MCPHub
MCPHub is an embeddable Model Context Protocol (MCP) solution for AI services. It enables seamless integration of MCP servers into any AI framework, allowing developers to easily configure, set up, and manage MCP servers within their applications. Whether you're using OpenAI Agents, LangChain, or Autogen, MCPHub provides a unified way to connect your AI services with MCP tools and resources.
Manual Configuration: Add the server configuration directly to your .mcphub.json file (a sketch of such an entry appears after this list).
Automatic Configuration from GitHub: Use the add_server_from_repo method to automatically configure a server from its GitHub repository:
from mcphub import MCPHub

# Initialize MCPHub
hub = MCPHub()

# Add a new server from GitHub
hub.servers_params.add_server_from_repo(
    server_name="my-server",
    repo_url="https://github.com/username/repo"
)
The automatic configuration:
Fetches the README from the GitHub repository
Uses OpenAI to analyze the README and extract the server configuration
Adds the configuration to your .mcphub.json file
Requires an OpenAI API key (set via OPENAI_API_KEY environment variable)
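For reference, here is a minimal sketch of what a .mcphub.json entry might look like. The top-level key, server name, and package are illustrative assumptions based on the server parameters described later in this README (commands, arguments, and environment variables); consult the repository for the exact schema:

{
  "mcpServers": {
    "sequential-thinking-mcp": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"],
      "env": {}
    }
  }
}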
import asyncio
import json

from agents import Agent, Runner
from mcphub import MCPHub

async def main():
    """
    Example of using MCPHub to integrate MCP servers with OpenAI Agents.

    This example demonstrates:
    1. Initializing MCPHub
    2. Fetching and using an MCP server
    3. Listing available tools
    4. Creating and running an agent with MCP tools
    """
    # Step 1: Initialize MCPHub
    # MCPHub will automatically:
    # - Find .mcphub.json in your project
    # - Load server configurations
    # - Set up servers (clone repos, run setup scripts if needed)
    hub = MCPHub()

    # Step 2: Create an MCP server instance using an async context manager
    # Parameters:
    # - mcp_name: The name of the server from your .mcphub.json
    # - cache_tools_list: Cache the tools list for better performance
    async with hub.fetch_openai_mcp_server(
        mcp_name="sequential-thinking-mcp",
        cache_tools_list=True
    ) as server:
        # Step 3: List available tools from the MCP server
        # This shows what capabilities are available to your agent
        tools = await server.list_tools()

        # Pretty-print the tools for better readability
        tools_dict = [
            dict(tool) if hasattr(tool, "__dict__") else tool
            for tool in tools
        ]
        print("Available MCP Tools:")
        print(json.dumps(tools_dict, indent=2))

        # Step 4: Create an OpenAI Agent with the MCP server
        # The agent can now use all tools provided by the MCP server
        agent = Agent(
            name="Assistant",
            instructions="Use the available tools to accomplish the given task",
            mcp_servers=[server]  # Provide the MCP server to the agent
        )

        # Step 5: Run your agent with a complex task
        # The agent will automatically have access to all MCP tools
        complex_task = """
        Please help me analyze the following complex problem:
        We need to design a new feature for our product that balances user
        privacy with data collection for improving the service. Consider the
        ethical implications, technical feasibility, and business impact.
        Break down your thinking process step by step, and provide a detailed
        recommendation with clear justification for each decision point.
        """

        # Execute the task and get the result
        result = await Runner.run(agent, complex_task)
        print("\nAgent Response:")
        print(result)

if __name__ == "__main__":
    # Run the async main function
    asyncio.run(main())
from agents import Agent
from mcphub import MCPHub

async def framework_quick_examples():
    hub = MCPHub()

    # 1. OpenAI Agents integration
    async with hub.fetch_openai_mcp_server(
        mcp_name="sequential-thinking-mcp",
        cache_tools_list=True
    ) as server:
        # Use the server with OpenAI Agents
        agent = Agent(
            name="Assistant",
            mcp_servers=[server]
        )

    # 2. LangChain tools integration
    langchain_tools = await hub.fetch_langchain_mcp_tools(
        mcp_name="sequential-thinking-mcp",
        cache_tools_list=True
    )
    # Use tools with LangChain

    # 3. Autogen adapters integration
    autogen_adapters = await hub.fetch_autogen_mcp_adapters(
        mcp_name="sequential-thinking-mcp"
    )
    # Use adapters with Autogen
Tool Discovery: Automatically list and manage available tools from MCP servers
Tool Caching: Optional caching of tool lists for improved performance
Framework-specific Adapters: Convert MCP tools to framework-specific formats
Discover and manage MCP server tools:
from mcphub import MCPHub

async def tool_management():
    hub = MCPHub()

    # List all servers
    servers = hub.list_servers()

    # List all tools from a specific MCP server
    tools = await hub.list_tools(mcp_name="sequential-thinking-mcp")

    # Print tool information
    for tool in tools:
        print(f"Tool Name: {tool.name}")
        print(f"Description: {tool.description}")
        print(f"Parameters: {tool.parameters}")
        print("---")

    # Tools can be:
    # - Cached for better performance using cache_tools_list=True
    # - Converted to framework-specific formats automatically
    # - Used directly with AI frameworks through adapters
MCPHub: High-Level Overview
MCPHub simplifies the integration of Model Context Protocol (MCP) servers into AI applications through four main components:
Params Hub
Manages configurations from .mcphub.json
Defines which MCP servers to use and how to set them up
Stores server parameters like commands, arguments, and environment variables
MCP Servers Manager
Handles server installation and setup
Supports two types of servers:
TypeScript-based servers (installed via npx)
Python-based servers (installed via uv from GitHub)
Manages server lifecycle and environment
MCP Client
Establishes communication with MCP servers
Uses stdio transport for server interaction
Handles two main operations:
list_tools: Discovers available server tools
call_tool: Executes server tools
Framework Adapters
Convert MCP tools to framework-specific formats
Support multiple AI frameworks:
OpenAI Agents
LangChain
Autogen
Configuration & Setup
Params Hub reads configuration
Servers Manager sets up required servers
Servers start and become available
Communication
MCP Client connects to servers via stdio
Tools are discovered and made available
Requests and responses flow between client and server
Integration
Framework adapters convert MCP tools
AI applications use adapted tools through their preferred framework
Tools are executed through the established communication channel
This architecture provides a seamless way to integrate MCP capabilities into any AI application while maintaining clean separation of concerns and framework flexibility.
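To make this concrete, the sketch below walks all four components using only the calls already shown in the examples above:

import asyncio
from mcphub import MCPHub

async def end_to_end():
    # Params Hub: MCPHub reads .mcphub.json on initialization
    hub = MCPHub()

    # Servers Manager: inspect the servers it has configured and set up
    print(hub.list_servers())

    # MCP Client: discover tools over the stdio transport
    tools = await hub.list_tools(mcp_name="sequential-thinking-mcp")
    print([tool.name for tool in tools])

    # Framework Adapters: convert the same tools for a specific framework
    langchain_tools = await hub.fetch_langchain_mcp_tools(
        mcp_name="sequential-thinking-mcp",
        cache_tools_list=True
    )

asyncio.run(end_to_end())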
Run the unit tests with pytest:
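For example, from the repository root (assuming the test dependencies are installed):

pytest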
This project uses GitHub Actions for continuous integration and deployment:
Automated Testing: Tests run on Python 3.10, 3.11, and 3.12 on every push to the main and release branches and on every pull request.
Automatic Version Bumping and Tagging: When code is pushed to the release branch:
The patch version is automatically incremented in pyproject.toml (see the sketch below)
A new Git tag (e.g., v0.1.2) is created for the release
Changes are committed back to the repository
PyPI Publishing: When code is pushed to the release branch and tests pass, the package is automatically built and published to PyPI.
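For illustration, the automatic patch bump amounts to something like the sketch below; this is a hypothetical reimplementation, not the project's actual release script:

# Hypothetical sketch: increment the patch version in pyproject.toml.
# The real bump is performed by the GitHub Actions workflow.
import re
from pathlib import Path

pyproject = Path("pyproject.toml")
text = pyproject.read_text()

def bump_patch(match: re.Match) -> str:
    major, minor, patch = match.groups()
    return f'version = "{major}.{minor}.{int(patch) + 1}"'

# e.g. version = "0.1.1" becomes version = "0.1.2"
pyproject.write_text(
    re.sub(r'version\s*=\s*"(\d+)\.(\d+)\.(\d+)"', bump_patch, text, count=1)
)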
Setting Up PyPI Deployment
To enable automatic PyPI deployment, you need to add a PyPI API token as a GitHub Secret: