LLM-powered translation system for React/Next.js - translate dynamic content with smart caching.
ALPHA VERSION (0.1.0-alpha.0) - This is an early extraction from a production app. APIs may change. Contributions welcome!
Audarma (from Kazakh "audar": translate) is an LLM-powered translation system for dynamic content. It uses Large Language Models (OpenAI, Claude, Gemini, etc.) to translate user-generated content, product descriptions, and messages - with intelligent caching to minimize API costs.
Traditional i18n libraries (next-intl, react-i18next) are designed for static UI labels that you translate once.
Audarma is designed for dynamic content that changes frequently:
- Product catalogs with thousands of items
- User-generated messages and reviews
- Real-time marketplace listings
- Blog posts and articles
With smart caching, you only pay for each translation once - subsequent requests are instant and free.
- View-level translation tracking - Translate entire views at once, not individual strings
- Smart caching - Content hash tracking prevents unnecessary re-translations
- Progressive loading - Shows original text immediately, translates in background
- Adapter pattern - Bring your own database, LLM provider, and i18n library
- Batch translation - Groups multiple items into single LLM calls
- React hooks - Simple, composable API with loading states
- Dual-mode operation - Lazy (on-demand) + CLI (batch pre-translation)
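The batch-translation idea above can be sketched as follows. The `TranslationItem` shape and the numbered-prompt format are assumptions for illustration, not Audarma's actual wire format:

```typescript
// Hypothetical sketch: group many items into one LLM prompt and
// parse the numbered response back out. The item shape is an assumption.
interface TranslationItem {
  id: string;
  text: string;
}

function buildBatchPrompt(items: TranslationItem[], targetLocale: string): string {
  const numbered = items.map((item, i) => `${i + 1}. ${item.text}`).join("\n");
  return (
    `Translate the following items to ${targetLocale}. ` +
    `Reply with one numbered line per item, in the same order:\n${numbered}`
  );
}

function parseBatchResponse(response: string, items: TranslationItem[]): Map<string, string> {
  const out = new Map<string, string>();
  for (const line of response.split("\n")) {
    const match = line.match(/^(\d+)\.\s*(.*)$/);
    if (!match) continue;
    const index = Number(match[1]) - 1; // numbered lines are 1-based
    if (items[index]) out.set(items[index].id, match[2]);
  }
  return out;
}
```

One call per view instead of one call per string is where most of the cost savings come from.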
Audarma supports two complementary modes that share the same cache:
- Translates content on first user visit
- Best for: User-generated content, long-tail pages, rapid development
- Setup: Just wrap views with ViewTranslationProvider
- Pre-translates content before deployment
- Best for: SEO, high-traffic pages, new locale launches
- Setup: Create audarma.config.ts and run npm run translate
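A CLI config might look roughly like this; every field name below is an assumption for illustration, so check the repository for the canonical schema:

```typescript
// audarma.config.ts - illustrative only; the real schema may differ.
export default {
  locales: ["kk", "ru", "de"],   // target languages to pre-translate
  views: ["products", "reviews"], // views whose cache the CLI pre-fills
  database: databaseAdapter,      // your database adapter instance
  llm: llmAdapter,                // your LLM provider adapter instance
};
```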
Use both together for optimal performance: CLI pre-fills cache, lazy mode catches gaps.
Read full dual-mode documentation
Create adapters for your database, LLM provider, and i18n system:
Audarma uses three adapter interfaces to remain database and LLM agnostic:
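The three interfaces might look roughly like this (names and signatures are illustrative assumptions, not the published API), along with an in-memory database adapter that is handy for tests:

```typescript
// Illustrative adapter interfaces; the published API may differ.
interface DatabaseAdapter {
  getTranslations(view: string, locale: string): Promise<Map<string, string>>;
  saveTranslations(view: string, locale: string, entries: Map<string, string>): Promise<void>;
}

interface LLMAdapter {
  translateBatch(texts: string[], targetLocale: string): Promise<string[]>;
}

interface I18nAdapter {
  getCurrentLocale(): string;
}

// A minimal in-memory DatabaseAdapter, useful for tests.
class InMemoryDatabase implements DatabaseAdapter {
  private store = new Map<string, Map<string, string>>();

  async getTranslations(view: string, locale: string) {
    return this.store.get(`${view}:${locale}`) ?? new Map<string, string>();
  }

  async saveTranslations(view: string, locale: string, entries: Map<string, string>) {
    const key = `${view}:${locale}`;
    const existing = this.store.get(key) ?? new Map<string, string>();
    for (const [id, text] of entries) existing.set(id, text);
    this.store.set(key, existing);
  }
}
```

Because the core only talks to these interfaces, swapping Supabase for Prisma (or OpenAI for Claude) is a matter of writing one small class.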
- View Mounting - ViewTranslationProvider calculates content hash from all items
- Cache Check - Checks localStorage metadata to see if view was translated before
- Database Query - Fetches cached translations from database (if any)
- LLM Translation - Translates only missing items via LLM provider
- Cache Update - Saves new translations to database and updates metadata
- Re-render - Components get translated text via useViewTranslation hook
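In component terms, the flow above might look like this. This is a pseudocode-level sketch: the prop and hook signatures are assumptions based on the names mentioned in this README, not the exact API:

```tsx
// Pseudocode-level sketch; exact signatures may differ.
import { ViewTranslationProvider, useViewTranslation } from "audarma";

function ProductList({ products }) {
  // The provider computes the view hash from all items and runs
  // cache check -> DB query -> LLM translation in the background.
  const items = products.map((p) => ({ id: p.id, text: p.description }));
  return (
    <ViewTranslationProvider view="products" items={items}>
      {products.map((p) => (
        <Product key={p.id} product={p} />
      ))}
    </ViewTranslationProvider>
  );
}

function Product({ product }) {
  // Returns the original text immediately; re-renders with the
  // translation once it lands in the cache.
  const { text, isTranslating } = useViewTranslation(product.id);
  return <p className={isTranslating ? "opacity-60" : ""}>{text}</p>;
}
```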
Audarma tracks whether English content has changed using SHA-256 hashes:
- View Hash - Hash of all content IDs + texts in a view
- Item Hash - Hash of individual item text (stored with translation)
When content changes, only the changed items are re-translated.
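The two-level hashing scheme can be sketched with Node's built-in crypto module; the exact canonical field order is an assumption:

```typescript
import { createHash } from "node:crypto";

// Item hash: fingerprint of one item's source text.
function itemHash(text: string): string {
  return createHash("sha256").update(text, "utf8").digest("hex");
}

// View hash: fingerprint of all ids + texts in a view.
// Sorting by id makes the hash independent of insertion order.
function viewHash(items: { id: string; text: string }[]): string {
  const canonical = [...items]
    .sort((a, b) => a.id.localeCompare(b.id))
    .map((i) => `${i.id}:${i.text}`)
    .join("\n");
  return createHash("sha256").update(canonical, "utf8").digest("hex");
}
```

If the view hash matches the stored one, the whole view is served from cache; otherwise per-item hashes pinpoint exactly which items need re-translation.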
Audarma requires a content_translations table:
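A plausible shape for that table is sketched below; the column names are assumptions, so use the canonical migration from the repository instead of this:

```sql
-- Illustrative schema only; the canonical migration lives in the repo.
CREATE TABLE content_translations (
  id              BIGSERIAL PRIMARY KEY,
  content_type    TEXT NOT NULL,        -- e.g. 'product', 'review'
  content_id      TEXT NOT NULL,        -- id of the source row
  locale          TEXT NOT NULL,        -- target language code
  source_hash     TEXT NOT NULL,        -- SHA-256 of the source text
  translated_text TEXT NOT NULL,
  created_at      TIMESTAMPTZ NOT NULL DEFAULT now(),
  UNIQUE (content_type, content_id, locale)
);
```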
See /src/adapters/examples/ for reference implementations:
Database Adapters:
- Supabase - PostgreSQL database adapter
LLM Providers:
- OpenAI - GPT-5, GPT-4.1, o4-mini
- Anthropic - Claude Sonnet 4.5, Claude Haiku 4.5, Claude Opus 4.1
- Cerebras - Qwen3-235B, DeepSeek R1; very fast inference with free daily credits
- Nebius - OpenAI-compatible API
I18n Adapters:
- next-intl - I18n adapter for Next.js
You can implement these interfaces for any backend:
- Databases: Supabase, Prisma, Drizzle, raw SQL, MongoDB, Redis
- LLMs: OpenAI, Anthropic (Claude), Google Gemini, Cerebras, Nebius, local Llama models
- I18n: next-intl, react-i18next, FormatJS, custom
This is an alpha release extracted from a production app. Here are known limitations:
- Hard-coded English source - Currently assumes English as source language
- No cache invalidation API - Must manually delete translations when source text changes
- No error boundaries - Translation errors can crash views
- No retry logic - Failed translations aren't automatically retried
- No cost tracking - No built-in token counting or cost estimation
- Client-side only - Server component support needs work
- No streaming - All translations must complete before returning
- No partial updates - Can't update cache incrementally
These bugs were found and fixed in production; the fixes are documented here so you can recognize the symptoms:
- Bug 1: LLM included [content_type] tags in output
- Bug 2: Duplicate insert errors with batch upserts (need deduplication)
- Bug 3: next-intl language switching requires full page reload
- Bug 4: Old translations had artifact prefixes
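Bug 2 is worth illustrating: PostgreSQL rejects an upsert batch that contains the same conflict key twice, so rows must be deduplicated before the insert. A minimal sketch, with an assumed row shape:

```typescript
// Row shape is an assumption for illustration.
interface TranslationRow {
  contentType: string;
  contentId: string;
  locale: string;
  translatedText: string;
}

// Keep only the last row per (contentType, contentId, locale) so a
// single upsert batch never contains duplicate conflict keys.
function dedupeRows(rows: TranslationRow[]): TranslationRow[] {
  const byKey = new Map<string, TranslationRow>();
  for (const row of rows) {
    byKey.set(`${row.contentType}:${row.contentId}:${row.locale}`, row);
  }
  return [...byKey.values()];
}
```

"Last write wins" matches upsert semantics: the most recent translation for a key is the one that should be persisted.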
Help us prioritize! Open an issue to vote or propose features.
Short-term (Community contributions welcome)
- Add retry logic with exponential backoff
- Add error boundaries and fallback UI
- Add cache invalidation utilities
- Add OpenAI adapter example
- Add Prisma adapter example
- Add cost estimation helpers
- Add TypeScript strict mode for examples
Medium-term
- Server component support (RSC)
- Streaming translations (show partial results)
- Multiple source languages
- Translation quality scoring
- A/B testing framework
- Admin UI for managing translations
Long-term
- Automatic context detection (use surrounding text)
- Multi-LLM routing (cheap for simple, expensive for complex)
- Real-time collaborative translation
- Translation memory (suggest similar translations)
This is an early alpha release - we need your help!
Most valuable contributions:
- Adapter implementations - Add examples for popular databases/LLMs
- Bug fixes - Fix the known limitations above
- Documentation - Improve examples and guides
- Testing - Add unit/integration tests
See CONTRIBUTING.md for detailed guidelines.
Audarma is for dynamic content (products, messages, user-generated content) that can't be pre-translated. For static UI labels, use traditional i18n libraries like next-intl or react-i18next.
Depends on your LLM provider and content volume. With smart caching, you only pay once per content item per language. Example: 1,000 products × 5 languages × $0.001/item = $5 total (one-time).
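That arithmetic generalizes to a one-line estimate; the per-item price is whatever your provider charges for an average item's tokens:

```typescript
// One-time cost: with caching, every item is translated exactly once
// per target language.
function estimateCost(items: number, languages: number, pricePerItem: number): number {
  return items * languages * pricePerItem;
}
```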
Not yet. Currently designed for client components. Server component support is on the roadmap.
Yes! Audarma is designed to complement existing i18n libraries. Use next-intl/react-i18next for UI labels, and Audarma for dynamic content.
- Try a stronger LLM model (e.g., GPT-4 instead of Llama 3.3)
- Improve your prompts in the LLM adapter
- Add context to translation items
- Use translation quality scoring (roadmap feature)
Currently, you must manually delete old translations from database. Cache invalidation API is on the roadmap.
If Audarma saves you time and money, consider supporting development:
MIT © Eldar Syzdykov
Built by @eldarski to solve real translation challenges in a production marketplace app.