Show HN: LiteChat or "How Hard Can It Be to" Build an AI Chat App


LiteChat is a modular, extensible, and privacy-focused AI chat application designed for power users, developers, and teams. It supports multiple AI providers, advanced prompt engineering, project-based organization, and powerful developer features like virtual file systems, Git integration, and a comprehensive modding system.

🔒 Privacy-First Architecture

  • 100% Client-Side: All data stored locally in your browser using IndexedDB
  • No Server Dependencies: Core functionality requires no backend services
  • Full Data Control: Export/import your entire configuration or specific data types (conversations, projects, settings, API keys, providers, rules, tags, mods, sync repos, MCP servers, prompt templates, and agents).
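
To make "100% client-side" concrete, here is a minimal sketch of persisting conversations with Dexie.js (the IndexedDB wrapper listed in the architecture section below); the table name and fields are illustrative, not LiteChat's actual schema:

import Dexie, { type Table } from 'dexie';

// Illustrative sketch only -- LiteChat's real schema lives in its source.
interface Conversation {
  id?: number;
  title: string;
  projectId?: number;
  createdAt: number;
}

class ChatDB extends Dexie {
  conversations!: Table<Conversation, number>;

  constructor() {
    super('litechat-demo'); // everything lives in the browser's IndexedDB
    this.version(1).stores({
      conversations: '++id, projectId, createdAt', // primary key + indexes
    });
  }
}

const db = new ChatDB();
await db.conversations.add({ title: 'Hello', createdAt: Date.now() }); // persists locally only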

🤖 Multi-Provider AI Support

  • OpenRouter: Access to 300+ models through a unified API
  • OpenAI: GPT-4x, o3-mini, o4-mini, o3, o3-pro, with reasoning and tool support
  • Google Gemini: Gemini Pro models with multimodal capabilities
  • Anthropic Claude: Sonnet, Opus, ...
  • Local Providers: Ollama, LMStudio, and other OpenAI-compatible APIs
  • Advanced Features: Streaming, reasoning, tool execution, ...
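
As an illustration of the provider layer, here is a minimal sketch of wiring a local OpenAI-compatible endpoint through the Vercel AI SDK (which LiteChat uses, per the architecture section); the exact calls vary by SDK version, and the model name and URL are just examples:

import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Point the SDK at a local OpenAI-compatible endpoint (Ollama here).
const local = createOpenAI({
  baseURL: 'http://localhost:11434/v1', // Ollama's OpenAI-compatible API
  apiKey: 'ollama',                     // Ollama ignores the key, but one is required
});

const result = streamText({
  model: local('llama3.1'),             // any model name your server exposes
  prompt: 'Say hello in one short sentence.',
});

// Stream tokens as they arrive.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}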

🌐 Everyone's Favorite Features

  • Send text files to any LLM: Works even with models that do not natively support file uploads
  • Multimodal support: If a model supports a file type, you can send it that file
  • Auto title generation: The AI will generate a title for your conversation
  • Conversation export: Export your conversation to a file
  • Message regeneration: When the model falls on its face, you can regenerate the message
  • Conversation Sync: Sync conversations with Git repositories for a "poor man's", no-frills sync solution
  • Prompt Library: Create, manage, and use reusable prompt templates
  • Workflow Automation: Create, save, and execute multi-step AI workflows with automated sequences, variable mapping, and intelligent orchestration (a sketch of what such a workflow could look like follows this list)
  • Agents: Create, manage, and use powerful AI agents and their associated tasks.
  • Tool System: AI can read/write files, execute Git commands, and more, including tools from MCP servers.
  • Race: Send the same prompt to multiple models at once and compare the results
  • Mermaid Diagrams: Real-time diagram rendering with full Mermaid.js support
  • Response editor: edit the response after it has been generated to remove the fluff and save on tokens
  • Rules: Add rules to guide the AI's behavior; tags are here to bundle rules together
  • Regenerate with: regenerate the message with a different model
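
The workflow format is not documented here, so the following is only a guess at what a multi-step workflow with variable mapping could look like; every name in it is hypothetical:

// Hypothetical workflow definition (illustrative; not LiteChat's actual format).
// Each step feeds its output into later steps via variable mapping.
interface WorkflowStep {
  id: string;
  modelId: string;         // which provider/model runs this step
  promptTemplate: string;  // may reference earlier outputs as {{vars}}
  outputVariable: string;  // name under which this step's result is stored
}

const summarizeThenTranslate: WorkflowStep[] = [
  {
    id: 'summarize',
    modelId: 'openrouter/anthropic/claude-sonnet',
    promptTemplate: 'Summarize the following text:\n\n{{input}}',
    outputVariable: 'summary',
  },
  {
    id: 'translate',
    modelId: 'ollama/llama3.1',
    promptTemplate: 'Translate into French:\n\n{{summary}}',
    outputVariable: 'translation',
  },
];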

🛠️ Developer-Focused Features

  • Code Block Enhancements: Filepath syntax, individual downloads, ZIP exports
  • Codeblock editor: Edit code block content directly in the browser and reuse it in follow-up chats!
  • Virtual File System: Browser-based filesystem with full CRUD operations
  • Git Integration: Clone, commit, push, pull directly in the browser (see the sketch after this list)
  • Structured Output: Ask the AI to return structured output, such as JSON, a table, a list, etc. (untested ^^')
  • Formedible codeblock: LLMs can use the formedible code block to create a form that interacts with the user in a deterministic manner, via the Formedible library.
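
LiteChat's in-browser Git support builds on isomorphic-git (see the architecture section below). A minimal browser-side clone might look like this; LiteChat itself backs this with ZenFS + IndexedDB, so LightningFS here is only a stand-in to keep the sketch self-contained:

import git from 'isomorphic-git';
import http from 'isomorphic-git/http/web';
import LightningFS from '@isomorphic-git/lightning-fs';

// An in-browser filesystem backed by IndexedDB.
const fs = new LightningFS('demo');

await git.clone({
  fs,
  http,
  dir: '/repo',
  url: 'https://github.com/isomorphic-git/isomorphic-git',
  corsProxy: 'https://cors.isomorphic-git.org', // browsers usually need a CORS proxy for Git hosts
  singleBranch: true,
  depth: 1,
});

console.log(await fs.promises.readdir('/repo')); // the clone now lives in IndexedDB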

If you have a thousand LoC to spare, you can create your own custom code block renderer; see FormedibleBlockRendererModule for an example.
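
For a flavor of what such a renderer involves, here is a purely hypothetical sketch; the actual module contract is defined by FormedibleBlockRendererModule in the source:

import type { ReactNode } from 'react';

// Hypothetical renderer contract -- invented for illustration only.
interface BlockRendererModule {
  language: string;                // fence language this module claims
  render(code: string): ReactNode; // produce UI for the block
}

// Example: render csv code blocks as an HTML table.
const csvTableRenderer: BlockRendererModule = {
  language: 'csv',
  render(code) {
    const rows = code.trim().split('\n').map((line) => line.split(','));
    return (
      <table>
        <tbody>
          {rows.map((cells, i) => (
            <tr key={i}>{cells.map((c, j) => <td key={j}>{c}</td>)}</tr>
          ))}
        </tbody>
      </table>
    );
  },
};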

📁 Project Organization

  • Hierarchical Projects: Organize conversations in nested project structures
  • Per-Project Settings: Custom models, prompts, and configurations
  • Rules & Tags: Reusable prompt engineering with organization

🔌 MCP (Model Context Protocol) Integration

  • HTTP and Stdio MCP Servers: Connect to external MCP servers via HTTP Server-Sent Events, HTTP Stream Transport and Stdio (via node ./bin/mcp-bridge.js)
  • Automatic Tool Discovery: Tools from MCP servers are automatically available to the AI
  • Graceful Error Handling: Configurable retry logic with exponential backoff (sketched after this list)
  • Connection Management: Real-time status monitoring and manual retry capabilities
  • Secure Authentication: Support for custom headers and API key authentication
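
The retry behavior described above is, in spirit, a classic exponential backoff; here is a generic sketch (the actual parameters are configurable in the app):

// Generic exponential-backoff retry, similar in spirit to the MCP
// connection handling described above.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err;
      // 500ms, 1s, 2s, 4s, ... plus jitter to avoid thundering herds
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}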

⚙️ Extensibility & Customization

  • Modding System: Safe, sandboxed extension API for custom functionality
  • Control Modules: Modular UI components with clean separation of concerns
  • Event-Driven Architecture: Decoupled communication for maintainability
  • Build-Time Configuration: Ship with pre-configured setups for teams/demos
  • Custom Themes: Full visual customization with CSS variables
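
The modding API surface is documented separately; purely as a flavor of what a sandboxed mod could look like, here is a hypothetical sketch in which every name is invented:

// Hypothetical mod API -- consult the modding documentation for the real contract.
interface ModApi {
  on(event: string, handler: (payload: unknown) => void): void;
  registerPromptAction(label: string, run: () => void): void;
}

// A mod exposes a register function and only touches the API it is handed.
export function register(api: ModApi): void {
  api.registerPromptAction('Shout', () => {
    console.log('Mod action triggered');
  });
  api.on('conversation.created', (payload) => {
    console.log('New conversation', payload);
  });
}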

Public Version: https://litechat.dbuild.dev (hosted on GitHub Pages)

For comprehensive documentation, see the docs/ directory. To get started, either grab a release or run from source:

# Download and extract the latest release
curl -L https://litechat.dbuild.dev/release/latest.zip -o litechat.zip
unzip litechat.zip -d litechat
cd litechat

# Start a local server (choose one)
python3 -m http.server 8080    # Python
npx http-server -p 8080 .      # Node.js
php -S localhost:8080          # PHP

# Open http://localhost:8080 in your browser

# Or, for development: clone and set up
git clone https://github.com/user/litechat.git
cd litechat
npm install

# Start the development server
npm run dev

# Build for production
npm run build

Note: AI assistance is highly recommended for development; see the development documentation for detailed setup instructions. An llm.txt file is provided to help with your development.

A sample docker/nginx.conf is provided. You can use the following Dockerfile (ensure paths are correct for your build output, typically dist):

FROM nginx:alpine
# Assuming your build output is in 'dist'
COPY ./dist /usr/share/nginx/html
COPY ./docker/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80

Build and run:

docker build -t litechat .
docker run -d -p 8080:80 litechat

If using local models (Ollama, LMStudio, etc.) or custom API endpoints, you might need to configure CORS on your AI backend server. LiteChat makes direct requests from the browser.

  • Ollama: Start Ollama with the OLLAMA_ORIGINS environment variable set to '*' (or a more specific origin like http://localhost:8080). Example: OLLAMA_ORIGINS='*' ollama serve.
  • OpenAI-Compatible APIs (e.g., LMStudio): Check your server's documentation for enabling CORS headers.

No server-side CORS is needed for LiteChat's internal VFS operations as they happen entirely in the browser via IndexedDB.

Gemini says no, for now. And if you are using LiteChat from the web over HTTPS, the browser cannot talk to plain-HTTP endpoints... (so local providers will probably not work there).

LiteChat follows a modular, event-driven architecture designed for extensibility and maintainability:

  • 100% Client-Side: All data stored locally using IndexedDB
  • Control Module System: UI features encapsulated as pluggable modules
  • Event-Driven Communication: Decoupled components using mitt event emitter
  • Zustand State Management: Domain-specific stores with immutable updates
  • Virtual File System: Browser-based filesystem using ZenFS + IndexedDB
  • Modding API: Safe, controlled interface for external extensions
  • Tech Stack: React 19, TypeScript, Zustand, Vite, Tailwind CSS, shadcn/ui
  • Data Storage: Dexie.js (IndexedDB), ZenFS (VFS backend)
  • AI Integration: Vercel AI SDK with multiple provider support
  • Version Control: isomorphic-git for browser-based Git operations
  • Extensibility: Event-driven architecture with controlled modding API
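
To illustrate the event-driven pattern, here is a small sketch combining mitt and Zustand the way the architecture above describes; the event names and store shape are illustrative, not LiteChat's actual ones:

import mitt from 'mitt';
import { create } from 'zustand';

// Typed event map for the mitt emitter (illustrative).
type Events = {
  'message.appended': { conversationId: string; text: string };
};

export const emitter = mitt<Events>();

// Domain-specific Zustand store with immutable updates.
interface ConversationState {
  messages: string[];
  append: (text: string) => void;
}

export const useConversationStore = create<ConversationState>((set) => ({
  messages: [],
  append: (text) => set((s) => ({ messages: [...s.messages, text] })),
}));

// A decoupled module reacts to the event without importing the UI that emits it.
emitter.on('message.appended', ({ text }) => {
  useConversationStore.getState().append(text);
});

emitter.emit('message.appended', { conversationId: 'c1', text: 'hello' });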

Development & Contributing

  • Linting & Formatting: ESLint and Prettier are used
  • Testing: Vitest for unit/integration tests
  • Contributions: Pull Requests and GitHub Issues are welcome!
  • Architecture: See Control Module System documentation to understand LiteChat's core architecture

For detailed development setup, contribution guidelines, and architectural information, see the documentation.

If you have made it through the whole AI-slop (but still relevant) part: first of all, congratulations, you are deserveful (I am sure that is a word!) of these human-written words! And you might be asking yourself that one question: WHY?

I am a happy t3.chat user, but I was (and, well, since I only added them to my own chat, I still AM) missing a few features there, like the ability to chain AI interactions into automated workflows (because who doesn't want their AI to do the work while they make coffee?). So I did what every sane person on the internet nowadays does: whine at length to support in an (oh so thoughtfully crafted) email.

I had already toyed a bit with my Bash AI chat (yes, Bash, because, I mean, why not?), and the features I asked for were exactly what I was missing from it (plus a UI, but how hard can a UI be? I have done that before!). And after receiving a very fast (within the hour for the real support problem, and two more for complete feedback on my lengthy boat of an email/wishlist), insightful, detailed, and thoughtful (best support exchange I have ever had with a company!) fat "nope", my hubris took over!

How hard can it be? Right? You've built this Bash AI chat (did I tell you it was in Bash? Oh right, sorry...) in less than a week, you've done a big fat frontend project before, you just have to, you know... 🤝! Easy!

SUUURE buddy, sure! (Spoiler alert: no!) So sure, in fact, that I was going to throw fat rocks at myself: I wanted it local "only" (no server whatsoever), AND I was only going to use t3.chat to ENTIRELY "vibecode" the thing (several of my arm joints thank me very much!), because I was going to do it on a budget, aaand... why not? 'Tis supposed to be the Future! Right!? ... Right??!

I caved in after a few weeks and went back to Cursor once the complete project was around 250k tokens in total (feeding it all to Gemini was possible, but the results were crap) and targeted file feeding was becoming a real chore... Plus, at some point, things are so interdependent that you end up pasting significant portions of your code base anyway... (Sorry, t3.chat team ^^')

I am very much on the "function over form" team, so you may find some... meeeh, let's call them debatable choices, especially in the UI department! Tabbed dialogs? Button placement from hell? The "so close, therefore so infuriating" vibe? Blame Gemini! (Or Theo; his chat did that!)

Plus, I am almost out of Cursor requests, and there is no way in hell I am refactoring this madness manually! It all has to be split anyway, sssoo, you know... (a spoiler for the astute readers? Mmmaaayyybeeee!)

It was fun, though! And now I have my own chat app! And so can you :D!

If you would like to know what "the AI" has to say about this project (and what I have to say about that :P), check out the AI-says page.

MIT License. See LICENSE file for details.


LiteChat is an open-source project. Feedback, bug reports, and contributions are highly encouraged!
