Show HN: RunAgent – Multi-Framework Agent Deployment and Rust, Go, JS SDKs (+others)



Universal AI Agent Platform

Write an agent once, use it everywhere

Read the Docs


Write AI agents once in Python, access them natively from any language.
Deploy locally today, or on our serverless cloud platform (coming soon).

Features · Quick Start · CLI · Documentation · Contributing


RunAgent is a platform that solves the language-barrier problem in AI development. Write your AI agents once in Python using any framework, then access them natively from Rust, JavaScript, Go, or any other supported language (including Python itself) through our SDKs. No more REST API complexity, no more language barriers - just seamless AI agent integration across your entire technology stack.

With RunAgent, you get both a powerful local development environment and a clear path to production-scale serverless deployment. Whether you're building a prototype on your laptop or scaling to millions of users, RunAgent provides the infrastructure and tools to deploy your agents seamlessly.

RunAgent architecture diagram


  • 🌐 Universal Access: Write agents in Python, use them natively in any language - no API complexity
  • 🚀 Deploy in Minutes: From local development to production with a single command
  • 🔧 Framework Agnostic: Works with any AI agent framework - LangChain, LangGraph, CrewAI, or custom
  • 🌊 Native Streaming: Real-time agent responses that feel natural in every programming language
  • 📦 Smart Architecture: Local development today, serverless cloud deployment tomorrow
  • 🛡️ Production Ready: Built-in monitoring, logging, and debugging tools included
  • 📊 Developer Experience: Intuitive CLI, comprehensive SDKs, and extensive documentation

Ideal use cases:

  • Polyglot Teams: Python AI experts collaborating with frontend (JS/TS) and backend (Rust/Go) developers
  • Microservices Architecture: Centralized AI logic with distributed access across your service mesh
  • Legacy Integration: Add cutting-edge AI capabilities to existing applications without rewrites
  • Performance Critical Apps: Write AI logic in Python, consume from high-performance languages
  • Rapid Prototyping: Quick local development with seamless production deployment path

Shared Code: Native-feeling SDKs for Python, JavaScript, Rust, and Go with full type safety, IntelliSense, automatic error handling, and seamless authentication - no more wrestling with REST APIs

Universal Deploy: Local development server available now with full debugging capabilities, serverless cloud deployment with auto-scaling and global edge distribution coming soon, plus future enterprise options with private cloud and advanced security

Streaming Native: Synchronous and asynchronous clients with language-specific iterators, comprehensive retry logic, and built-in monitoring for complex multi-step agent workflows

Cloud Scale: Complete agent lifecycle management with smart project initialization, local FastAPI server with hot reload, environment management, and one-command deployment

Framework Agnostic: Framework-specific templates, real-time monitoring with logging and metrics, extensive documentation and examples, plus webhook support for event-driven architectures


🚀 Quick Start (Deploying an Agent)

1. Install the RunAgent CLI

RunAgent provides a comprehensive toolset for both deploying agents from any framework and consuming those agents in your applications. The Python SDK is bundled with the CLI, while other language SDKs are available from their respective package repositories.
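A minimal install sketch, assuming the CLI (which bundles the Python SDK) is published on PyPI under the package name runagent:

# Install the RunAgent CLI and Python SDK
pip install runagent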

Learn more about installation options in our Installation Guide.

2. Initialize from a template

We've created comprehensive templates covering most supported frameworks. These serve as both examples and starting points, and you can initialize them with the runagent init command.

# Framework-specific templates:
runagent init my_agent --langgraph   # LangGraph template
runagent init my_agent --crewai      # CrewAI template
runagent init my_agent --letta       # Letta template
runagent init my_agent --agno        # Agno template

# Create a minimal project:
runagent init my_agent

See all available framework templates in our Frameworks Overview.

If we examine the directory structure:

my-agent/
├── __init__.py
├── email_agent.py
├── main.py
└── runagent.config.json

The most important file in any RunAgent project is runagent.config.json, which contains project metadata and the list of entrypoints - the functions/methods from your codebase that will be exposed through the RunAgent server.

Key Concept: entrypoints are the single most important concept in the RunAgent ecosystem. They define which functions in your code become accessible through our SDKs. Learn more in Core Concepts.

3. Configure your agent (runagent.config.json)

Let's examine the main.py content (partial):

from .email_agent import MockOpenAIClient
from typing import Iterator


def mock_response(message, role="user"):
    """Test the mock agent with non-streaming responses"""
    client = MockOpenAIClient()
    prompt = [
        {
            "role": role,
            "content": message
        }
    ]
    response = client.create(model="gpt-4", messages=prompt)

    print(response.content)
    print(f"\nTokens used: {response.usage_tokens}")
    print(f"Response time: {response.response_time:.2f}s")

    return response.content


def mock_response_stream(message, role="user") -> Iterator[str]:
    """Test the mock agent with streaming responses"""
    client = MockOpenAIClient()
    prompt = [
        {
            "role": role,
            "content": message
        }
    ]
    for chunk in client.create(
        model="gpt-4",
        messages=prompt,
        stream=True
    ):
        if not chunk.finished:
            yield chunk.delta
        else:
            yield "\n[STREAM COMPLETE]"

The functions mock_response and mock_response_stream will serve as agent entrypoints. We specify these in the runagent.config.json file:

{ "agent_name": "my-agent", "description": "A simple placeholder agent", "framework": "default", "template": "default", "version": "1.0.0", "created_at": "2025-07-11 15:08:18", "template_source": { "repo_url": "https://github.com/runagent-dev/runagent.git", "author": "sawradip", "path": "templates/default" }, "agent_architecture": { "entrypoints": [ { "file": "main.py", "module": "mock_response", "tag": "minimal" }, { "file": "main.py", "module": "mock_response_stream", "tag": "minimal_stream" } ] }, "env_vars": {} }

Each entrypoint record specifies:

  • file: Relative (or absolute) path to the file containing the entrypoint function
  • module: The function name to expose
  • tag: Unique identifier for this entrypoint within the project

Important: For streaming entrypoints (like mock_response_stream), the tag should include a _stream suffix to indicate streaming capability.

Learn more about configuration in our Configuration File Guide.

4. Serve your agent (built-in runagent server)

# Start the local development server
runagent serve <project_dir>

# If you're inside the project directory:
runagent serve .

# Using project name also works:
runagent serve my_agent

This command starts a local FastAPI server and provides you with:

  • An agent_id for programmatic access
  • A host:port URL for HTTP access
  • WebSocket endpoints for streaming
  • A development dashboard for monitoring

You can use either the agent_id or host:port combination with RunAgentClient from any RunAgent SDK.

See more details in our Local Development Guide.

5. Deploy agent on RunAgent Cloud - Serverless (coming soon)

# Deploy to production cloud infrastructure
runagent deploy <project_dir>

Cloud deployment will provide:

  • Automatic scaling based on demand
  • Global edge distribution
  • Built-in monitoring and analytics
  • Webhook integrations
  • Production-grade security

Learn more about our upcoming Cloud Deployment features.

🚀 Quick Start (Use a Deployed Agent)

RunAgent provides SDKs for multiple languages, with more being added continuously. If you need support for a specific language or want to contribute, join our Discord or raise an issue.

Remember the agent_id, URL (host:port), and tag from the deployment step. With our SDKs, you can access the entrypoints (defined in runagent.config.json) like native functions - even for streaming responses. We handle all the complex communication in the background.

Python SDK

The Python SDK is installed automatically with the CLI. Let's connect to our agent using the minimal tag, which corresponds to def mock_response(message, role="user").

from runagent import RunAgentClient

# Connect using agent_id
ra = RunAgentClient(
    agent_id="<agent_id>",
    entrypoint_tag="<tag>",
    local=True  # You're running a local server
)

# Or connect using host:port
ra = RunAgentClient(
    host="<host>",
    port=<port>,
    entrypoint_tag="<tag>",
    local=True
)

The magic of RunAgent SDKs is that you can invoke the RunAgentClient.run method as if it were the target entrypoint function. Since our function signature is def mock_response(message, role="user"):, you can call:

agent_results = ra.run(
    role="user",
    message="Analyze the benefits of remote work for software teams"
)
print(agent_results)

Complete example:

from runagent import RunAgentClient

ra = RunAgentClient(
    agent_id="<agent_id>",
    entrypoint_tag="<tag>",
    local=True
)

agent_results = ra.run(
    role="user",
    message="Analyze the benefits of remote work for software teams"
)
print(agent_results)

Streaming Support: For streaming functions, you can iterate over the returned object naturally:

from runagent import RunAgentClient

ra = RunAgentClient(
    agent_id="<agent_id>",
    entrypoint_tag="<tag>_stream",  # Note the _stream suffix
    local=True
)

for chunk in ra.run(
    role="user",
    message="Analyze the benefits of remote work for software teams"
):
    print(chunk, end='')

Learn more in our Python SDK Documentation.

Rust SDK

Non-streaming Example:

use runagent::client::RunAgentClient;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to agent
    let client = RunAgentClient::new("<agent_id>", "<tag>", true).await?;

    // Simple invocation with native Rust types
    let result = client.run(&[
        ("message", json!("Help me plan a trip to Japan")),
        ("role", json!("user"))
    ]).await?;

    println!("Result: {}", result);
    Ok(())
}

Streaming Example:

use runagent::client::RunAgentClient;
use serde_json::json;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Stream responses for real-time interaction
    let client = RunAgentClient::new("<agent_id>", "<tag>_stream", true).await?;

    let mut stream = client.run_stream(&[
        ("message", json!("Explain quantum computing step by step")),
        ("role", json!("user"))
    ]).await?;

    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?);
    }
    Ok(())
}

JavaScript / TypeScript SDK

Create an .mjs file or add "type": "module" to your package.json.
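A minimal package.json sketch for ESM usage - the "runagent" dependency name matches the import used below, and the version specifier is a placeholder:

{
  "type": "module",
  "dependencies": {
    "runagent": "latest"
  }
}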

Non-streaming Example:

import { RunAgentClient } from 'runagent';

const ra = new RunAgentClient({
  agentId: "<agent_id>",
  host: "<host>",
  port: <port>,
  entrypointTag: "minimal",
  local: true
});

await ra.initialize();

const solutionResult = await ra.run({
  role: 'user',
  message: 'Analyze the benefits of remote work for software teams',
});
console.log(solutionResult);

Streaming Example:

import { RunAgentClient } from 'runagent';

const ra = new RunAgentClient({
  agentId: "<agent_id>",
  host: "<host>",
  port: <port>,
  entrypointTag: "minimal_stream",
  local: true
});

await ra.initialize();

const stream = await ra.run({
  role: 'user',
  message: 'Write a detailed analysis of remote work benefits',
});

for await (const chunk of stream) {
  process.stdout.write(chunk);
}

Go SDK

Non-streaming Example:

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/runagent-dev/runagent/runagent-go/runagent"
)

func main() {
    fmt.Println("=== RunAgent Go SDK Example ===")

    config := runagent.Config{
        AgentID:       "<agent_id>",
        EntrypointTag: "minimal",
        Host:          "<host>",
        Port:          <port>,
        Local:         true,
    }

    client := runagent.NewRunAgentClient(config)

    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    defer cancel()

    if err := client.Initialize(ctx); err != nil {
        log.Fatalf("Failed to initialize: %v", err)
    }

    result, err := client.Run(ctx, map[string]interface{}{
        "role":    "user",
        "message": "Analyze the benefits of remote work for software teams",
    })
    if err != nil {
        log.Fatalf("Failed to run agent: %v", err)
    }

    fmt.Printf("Result: %v\n", result)
}

Streaming Example:

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/runagent-dev/runagent/runagent-go/runagent"
)

func main() {
    fmt.Println("=== Streaming Agent Example ===")

    client := runagent.NewRunAgentClient(runagent.Config{
        AgentID:       "<agent_id>",
        EntrypointTag: "minimal_stream",
        Host:          "<host>",
        Port:          <port>,
        Local:         true,
    })

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    defer cancel()

    if err := client.Initialize(ctx); err != nil {
        log.Fatalf("Failed to initialize: %v", err)
    }

    result, err := client.Run(ctx, map[string]interface{}{
        "role":    "user",
        "message": "Write a detailed analysis of remote work benefits",
    })
    if err != nil {
        log.Fatalf("Failed to start streaming: %v", err)
    }

    stream := result.(*runagent.StreamIterator)
    defer stream.Close()

    fmt.Println("📡 Streaming response:")
    fmt.Println("----------------------------------------")

    for {
        chunk, hasMore, err := stream.Next(ctx)
        if err != nil {
            log.Printf("Stream error: %v", err)
            break
        }
        if !hasMore {
            break
        }
        if chunk != nil {
            fmt.Print(chunk)
        }
    }

    fmt.Println("\n✅ Stream completed!")
}

🌊 Advanced: Streaming Responses

One of RunAgent's most powerful features is native streaming support across all languages. Streaming responses feel natural and idiomatic in each programming language:

Python - Uses Python iterators:

for chunk in client.run(message="Generate a long story"):
    print(chunk, end='')

JavaScript - Uses async iterators:

for await (const chunk of client.run({message: "Generate a long story"})) {
  process.stdout.write(chunk);
}

Rust - Uses futures streams:

while let Some(chunk) = stream.next().await {
    print!("{}", chunk?);
}

Go - Uses context-aware iterators:

for {
    chunk, hasMore, err := stream.Next(ctx)
    if err != nil || !hasMore {
        break
    }
    fmt.Print(chunk)
}

Learn more about streaming in our Streaming Responses Guide.


The RunAgent CLI is your command center for agent operations.
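For quick orientation, the core commands used throughout this README are sketched below (flags and additional subcommands are covered in the CLI docs):

# Create a new agent project from a template
runagent init my_agent --langgraph

# Serve the agent locally for development
runagent serve my_agent

# Deploy to the serverless cloud (coming soon)
runagent deploy my_agent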

See our CLI Overview and CLI Examples for comprehensive usage patterns.

RunAgent works with any Python AI framework. We provide pre-built templates and specific guides for popular frameworks:

  • LangGraph - Deploy LangGraph agents with full graph support
  • CrewAI - Deploy multi-agent CrewAI systems
  • Letta - Deploy Letta agents with long-term memory
  • Agno - Deploy Agno agents with built-in reasoning
  • Custom Frameworks - Deploy agents built with any Python framework

runagent init my-agent --langgraph   # LangGraph template
runagent init my-agent --crewai      # CrewAI multi-agent template
runagent init my-agent --letta       # Letta memory-enabled template
runagent init my-agent --agno        # Agno reasoning template
runagent init my-agent --custom      # Custom framework template

See our Frameworks Overview for detailed guides on each framework.

🏢 Enterprise Features (Coming Soon)

RunAgent's enterprise tier will provide production-grade features for organizations:

  • Private Cloud Deployment: Run RunAgent in your own infrastructure
  • Advanced Security: SOC2 compliance, encryption at rest and in transit
  • Team Collaboration: Shared agents, role-based access control, audit logs
  • Custom Domains: Deploy agents to your own domains with SSL
  • SLA Support: 99.9% uptime guarantee with dedicated support
  • Advanced Monitoring: Custom dashboards, alerts, and analytics
  • Compliance: GDPR, HIPAA, and industry-specific compliance features

Our comprehensive documentation covers every aspect of RunAgent development and deployment, from configuration to cloud deployment.

Configuration & Deployment

🗺️ Platform Evolution Roadmap

✅ Foundation (Available Now)

Our current open-source foundation provides everything you need for local development and self-hosting:

  • Cross-Language SDKs: Python, JavaScript, Rust, and Go SDKs with native streaming
  • Local Development Server: Full-featured FastAPI server with hot reload
  • Framework-Agnostic Support: Works with LangChain, LangGraph, CrewAI, Letta, Agno, or any Python framework
  • Pre-Built Templates: Quick-start templates for popular AI frameworks
  • Comprehensive CLI: Complete command-line interface for project management
  • Real-Time Debugging: Built-in logging, monitoring, and debugging tools
  • Production-Ready: Suitable for self-hosting and enterprise deployments

🚧 Cloud Platform (Coming Q2 2025)

Our serverless cloud platform will provide effortless scaling and production deployment:

  • One-Command Deployment: Deploy with runagent deploy to auto-scaling infrastructure
  • Global Edge Network: Deploy agents across multiple regions for minimal latency
  • Auto-Scaling: Scale from zero to thousands of concurrent requests automatically
  • Built-in Monitoring: Real-time dashboards, metrics, and alerting
  • Webhook Integrations: Event-driven architectures with HTTP callbacks
  • Team Collaboration: Shared agents, environment management, and access controls
  • Usage Analytics: Detailed insights into agent performance and usage patterns
  • API Gateway: Advanced routing, rate limiting, and authentication features

🔮 Enterprise (Coming 2025)

Enterprise features for organizations requiring advanced security and control:

  • Private Cloud Deployment: Run RunAgent in your own infrastructure (AWS, GCP, Azure)
  • Advanced Security: SOC2 Type II compliance, encryption at rest and in transit
  • Compliance Features: GDPR, HIPAA, and industry-specific compliance tools
  • Role-Based Access Control: Fine-grained permissions and audit logging
  • Custom SLAs: 99.9% uptime guarantees with dedicated support
  • Custom Runtime Environments: Specialized containers and security sandboxes
  • Enterprise Integrations: SSO, LDAP, and enterprise directory services
  • Advanced Analytics: Custom reporting, data export, and business intelligence

We welcome contributions from the community! RunAgent is built by developers, for developers, and we're excited to have you join us.

  • Bug Reports: Found an issue? Report it on GitHub Issues
  • Feature Requests: Have an idea? Share it in GitHub Discussions
  • Code Contributions: Submit pull requests for bug fixes or new features
  • Documentation: Help improve our guides, examples, and API documentation
  • Community Support: Help other developers in Discord and GitHub discussions
  • SDK Development: Contribute to existing SDKs or help build new language support
To set up a local development environment:

# Clone the repository
git clone https://github.com/runagent-dev/runagent.git
cd runagent

# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black runagent tests

# Run linting
flake8 runagent tests

# Type checking
mypy runagent

Contribution guidelines:

  • Follow our Code of Conduct
  • Write tests for new features and bug fixes
  • Follow existing code style and conventions
  • Update documentation for any user-facing changes
  • Add appropriate type hints for Python code

See our detailed Contributing Guide for more information.

Join our growing community of AI developers and get support from both the team and fellow users:

  • Discord Community - Real-time chat with developers, get help, and share projects
  • GitHub Discussions - Ask questions, share ideas, and discuss features
  • Twitter - Follow for updates, announcements, and AI development tips
  • Blog - In-depth tutorials, best practices, and case studies
  • Technical Issues: Check our Troubleshooting Guide first
  • General Questions: Browse our FAQ
  • Community Support: Ask in Discord or GitHub Discussions
  • Bug Reports: Create an issue on GitHub with detailed reproduction steps
  • GitHub Releases: Watch our repository for new releases and features
  • Newsletter: Subscribe on our website for monthly updates and tutorials
  • Social Media: Follow us on Twitter for daily updates and community highlights

RunAgent is licensed under the Elastic License 2.0 (ELv2). What this means:

✅ Free to use for development, testing, and production
✅ Modify and distribute as needed
✅ Commercial use in your applications
❌ Cannot offer RunAgent as a managed service without permission

This license ensures RunAgent remains freely available while protecting against unfair competition from cloud providers. See the LICENSE file for complete terms.

RunAgent stands on the shoulders of giants. We're grateful to the entire AI and open-source community, especially:

  • FastAPI - For the blazingly fast API framework that powers our local server
  • Pydantic - For data validation and serialization
  • Uvicorn - For the lightning-fast ASGI server
  • LangChain and LangGraph - For pioneering AI agent frameworks
  • LlamaIndex - For advancing RAG and knowledge-based AI systems
  • CrewAI - For multi-agent system innovations
  • Letta - For advancing memory-enabled conversational AI
  • Vercel - For inspiration in developer experience and deployment simplicity
  • Supabase - For demonstrating the power of open-source alternatives
  • Hugging Face - For democratizing AI and fostering community
  • Contributors - Everyone who has submitted code, documentation, or bug reports
  • Early Adopters - Beta users who provided feedback and shaped our roadmap
  • Discord Community - Active members who help each other and improve RunAgent
  • Framework Maintainers - Teams behind the AI frameworks we support
  • All the developers building amazing AI applications with RunAgent
  • The broader Python and AI communities for continuous innovation
  • Everyone who believes in making AI development more accessible

Ready to build universal AI agents?

Get Started Locally · Join the Cloud Waitlist

🌟 Star us on GitHub · 💬 Join Discord · 📚 Read the Docs


Made with ❤️ by the RunAgent Team
