An open-source RAG (Retrieval-Augmented Generation) SDK for canvas-based knowledge graphs. It ingests canvas data, mirrors it into the Helix graph-vector database for semantic, relational, and spatial clustering, and provides LLM-powered analysis of the canvas.
A hosted version of the Treyspace SDK with a full user interface and an Excalidraw canvas is available:
Try it here: https://treyspace.app/
Figure: End-to-end canvas-to-Helix-to-LLM pipeline
Figure: Semantic, relational, and spatial cluster visualization
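The canvas-to-graph-to-LLM pipeline can be sketched conceptually as follows. The function names and canvas element shape are purely illustrative, not the SDK's actual API, and the keyword filter stands in for the real semantic/graph retrieval:

```javascript
// Conceptual outline of the canvas → graph → LLM-context pipeline.
// Names and element shapes are illustrative, not the SDK's API.
function ingestCanvas(canvas) {
  // Split canvas elements into graph nodes (shapes, notes) and edges (arrows).
  return {
    nodes: canvas.elements.filter((el) => el.type !== "arrow"),
    edges: canvas.elements.filter((el) => el.type === "arrow"),
  };
}

function buildRetrievalContext(graph, query) {
  // Naive keyword match as a stand-in for semantic/graph retrieval.
  return graph.nodes.filter((n) =>
    (n.text ?? "").toLowerCase().includes(query.toLowerCase())
  );
}

const canvas = {
  elements: [
    { type: "note", text: "GraphQL migration plan" },
    { type: "note", text: "Auth service rewrite" },
    { type: "arrow" },
  ],
};
const graph = ingestCanvas(canvas);
const context = buildRetrievalContext(graph, "graphql");
// context → [{ type: "note", text: "GraphQL migration plan" }]
```

The retrieved context would then be passed to the LLM alongside the user's prompt.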
- Responses API – Wrapper around the OpenAI Responses API with canvas-specific context handling
- Canvas AI engine – SSE endpoints under /api/ai/* orchestrate the full RAG pipeline with graph-based retrieval
- Helix bridge – /api/clusters and /api/mcp-bridge proxy the Helix graph database for semantic clustering
- SDK & Server – Use as a library or run as a standalone server
- Development-focused – No auth, rate limiting, or billing; easily adaptable to your needs
- Features
- Quick Start
- Architecture
- Usage
- Documentation
- Examples
- Environment Variables
- Development
- Contributing
- License
- Node.js >= 18.0.0
- npm >= 9.0.0
- OpenAI API key
- Helix DB instance (optional - can run with in-memory mode)
1. Clone and install

2. Copy the default config

   - Only OPENAI_API_KEY needs to be set to run locally.
   - Helix is optional – by default, the SDK uses an in-memory graph store. To use HelixDB instead, pass --enable_helix when starting the server or follow the Helix install guide.

3. Start the SDK façade

   ```bash
   cd sdk
   npm install
   node server.js
   ```

   Leave this process running (or use your favourite watcher).

4. Start the backend (new terminal)

5. Verify the pipeline

   ```bash
   npm run smoke                           # health check (requires servers from steps 3 & 4)
   npm run test:smoke                      # requires OPENAI_API_KEY
   npm run test:integration                # full pipeline (OPENAI_API_KEY required)
   npx tsx tests/runFullPipeline.spec.mjs  # full pipeline using in-memory mode (default)
   ```

   The test:* commands start (and stop) both the SDK façade and the AI proxy automatically, and skip if OPENAI_API_KEY is not set.
By default, the SDK runs with an in-memory graph store – perfect for development, testing, or lightweight deployments. To enable the full HelixDB backend for production use, follow the Configuration & Deployment guide below.
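The default in-memory mode can be pictured as a simple node/edge store. The following is a conceptual JavaScript sketch, purely illustrative and not the SDK's actual implementation:

```javascript
// Conceptual sketch of an in-memory graph store; the SDK's real
// internals may differ entirely.
class InMemoryGraphStore {
  constructor() {
    this.nodes = new Map(); // id → { id, ...attributes }
    this.edges = [];        // { from, to, label }
  }
  addNode(id, attrs = {}) {
    this.nodes.set(id, { id, ...attrs });
  }
  addEdge(from, to, label) {
    this.edges.push({ from, to, label });
  }
  neighbors(id) {
    // Outgoing neighbors only, for simplicity.
    return this.edges
      .filter((e) => e.from === id)
      .map((e) => this.nodes.get(e.to));
  }
}

const store = new InMemoryGraphStore();
store.addNode("a", { text: "Canvas note A" });
store.addNode("b", { text: "Canvas note B" });
store.addEdge("a", "b", "links-to");
// store.neighbors("a") → [{ id: "b", text: "Canvas note B" }]
```

Because everything lives in process memory, this mode needs no external services but loses state on restart – hence the HelixDB backend for production.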
- API Reference – endpoints and payloads
- Configuration & Deployment – Environment variables, Docker, and Helix setup
- Pipeline Guide – Step-by-step canvas processing workflow
- examples/responses.mjs – POST a one-off prompt to /v1/responses
- examples/stream-ai-engine.mjs – Stream /api/ai/engine SSE output for a board
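Consuming the /api/ai/engine stream means parsing Server-Sent Events. A minimal client-side parser sketch follows; the JSON payload inside the data: fields is an assumption for illustration, not the SDK's documented schema:

```javascript
// Minimal SSE chunk parser. Events are separated by blank lines and
// carry their payload on "data:" lines; the payload shape shown in
// the sample below is an assumption, not the SDK's schema.
function parseSseChunk(chunk) {
  const events = [];
  for (const block of chunk.split("\n\n")) {
    const dataLines = block
      .split("\n")
      .filter((line) => line.startsWith("data:"))
      .map((line) => line.slice(5).trim());
    if (dataLines.length > 0) {
      events.push(dataLines.join("\n"));
    }
  }
  return events;
}

// Example: two SSE events in one network chunk.
const sample = 'data: {"delta":"Hel"}\n\ndata: {"delta":"lo"}\n\n';
const events = parseSseChunk(sample);
// events → ['{"delta":"Hel"}', '{"delta":"lo"}']
```

In a real client you would feed each chunk of the fetch response body through a parser like this (buffering partial events across chunks), as examples/stream-ai-engine.mjs does against the live endpoint.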
Set AI_BACKEND_URL to override the backend URL for proxied deployments.
| Variable | Description | Default |
| --- | --- | --- |
| NODE_ENV | development, production, or test. | development |
| PORT / HOST | HTTP binding for the backend. | 8787 / 0.0.0.0 |
| LOG_LEVEL | Pino log level. | info |
| DEBUG / TREYSPACE_DEBUG | Set to 1 to enable verbose logging. | 0 |
| ALLOWED_ORIGINS | Comma-separated CORS allow-list. | http://localhost:3000,http://localhost:5173 |
| AI_BACKEND_URL / AI_BACKEND_PUBLIC_BASE | Optional overrides for outbound URLs. | – |
| OPENAI_API_KEY (+ OPENAI_DEFAULT_MODEL) | Enables OpenAI routes. | – / gpt-5-mini |
| HELIX_RAG_URL | Base URL for the SDK façade. | http://localhost:3001 |
| HELIX_INTERNAL_ORIGIN | Origin header forwarded to the façade. | http://localhost:3000 |
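Resolving these variables with their documented defaults might look like the following sketch; the SDK's actual config loader may differ:

```javascript
// Sketch of resolving the environment variables above with the
// defaults from the table; the SDK's real config loader may differ.
function loadConfig(env = process.env) {
  return {
    nodeEnv: env.NODE_ENV ?? "development",
    port: Number(env.PORT ?? 8787),
    host: env.HOST ?? "0.0.0.0",
    logLevel: env.LOG_LEVEL ?? "info",
    allowedOrigins: (env.ALLOWED_ORIGINS ??
      "http://localhost:3000,http://localhost:5173").split(","),
    helixRagUrl: env.HELIX_RAG_URL ?? "http://localhost:3001",
    openaiModel: env.OPENAI_DEFAULT_MODEL ?? "gpt-5-mini",
  };
}

const cfg = loadConfig({});
// cfg.port → 8787; cfg.allowedOrigins has 2 entries
```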
- GET /healthz
- POST /v1/responses
- POST /api/ai/engine
- POST /api/clusters
- POST /api/mcp-bridge
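A call to POST /v1/responses can be assembled as in the sketch below. The payload shape ({ input: ... }) is an assumption for illustration – see the API Reference for the actual schema:

```javascript
// Builds a fetch request for POST /v1/responses. The payload shape
// ({ input: ... }) is assumed for illustration; consult the API
// Reference for the real schema.
function buildResponsesRequest(baseUrl, prompt) {
  return {
    url: new URL("/v1/responses", baseUrl).toString(),
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ input: prompt }),
    },
  };
}

const req = buildResponsesRequest("http://localhost:8787", "Summarize this board");
// req.url → "http://localhost:8787/v1/responses"
// To send: const res = await fetch(req.url, req.options);
```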
Connect with us on social media:
Released under the MIT License.
Contributions are welcome—fork this repo, create a feature branch, and submit a pull request. See CONTRIBUTING.md for the full guidelines.
Special thanks to the Helix DB and Excalidraw teams, whose tools enable Treyspace’s graph and canvas features.