Maya1: Open-source 3B Voice Model

Maya1 is a speech model built for expressive voice generation with rich human emotion and precise voice design.

Try it: Playground

What it does:

  • Voice design through natural language descriptions
  • 20+ emotions: laugh, cry, whisper, angry, sigh, gasp, and more
  • Real-time streaming with SNAC neural codec
  • 3B parameters, runs on single GPU
  • Apache 2.0 license

Developed by Maya Research.


Demos

Example 1: Energetic Female Event Host

Voice Description:

Female, in her 30s, with an American accent; an energetic event host with clear diction.

Text:

Wow. This place looks even better than I imagined. How did they set all this up so perfectly? The lights, the music, everything feels magical. I can't stop smiling right now.

Audio Output:


Example 2: Dark Villain with Anger

Voice Description:

Dark villain character. Male voice in their 40s with a British accent: low pitch, gravelly timbre, slow pacing, angry tone at high intensity.

Text:

Welcome back to another episode of our podcast! <laugh_harder> Today we are diving into an absolutely fascinating topic

Audio Output:


Example 3: Demon Character (Screaming Emotion)

Voice Description:

Demon character. Male voice in their 30s with a Middle Eastern accent; screaming tone at high intensity.

Text:

You dare challenge me, mortal <snort> how amusing. Your kind always thinks they can win

Audio Output:


Example 4: Mythical Goddess with Crying Emotion

Voice Description:

Mythical, godlike magical character. Female voice in their 30s; slow pacing, curious tone at medium intensity.

Text:

After all we went through to pull him out of that mess <cry> I can't believe he was the traitor

Audio Output:


Why Maya1 is Different: Voice Design Features That Matter

1. Natural Language Voice Control

Describe voices like you would brief a voice actor:

<description="40-year-old, warm, low pitch, conversational">

No complex parameters. No reference audio or cloning data. Just describe and generate.

2. Inline Emotion Tags for Expressive Speech

Add emotions exactly where they belong in your text:

Our new update <laugh> finally ships with the feature you asked for.

Supported Emotions: <laugh> <sigh> <whisper> <angry> <giggle> <chuckle> <gasp> <cry> and 12+ more.
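
Putting the two ideas together, here is a minimal sketch of prompt assembly (the build_prompt helper is just illustrative glue, not part of the model's API):

# Hypothetical helper: combine a voice description and emotion-tagged
# text into the <description="..."> prompt format Maya1 expects.
def build_prompt(description: str, text: str) -> str:
    return f'<description="{description}"> {text}'

prompt = build_prompt(
    "Female, early 30s, American accent, energetic event host, clear diction",
    "Our new update <laugh> finally ships with the feature you asked for.",
)
print(prompt)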

3. Streaming Audio Generation

Real-time voice synthesis with SNAC neural codec (~0.98 kbps). Perfect for:

  • Voice assistants
  • Interactive AI agents
  • Live content generation
  • Game characters
  • Podcasts and audiobooks
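
A rough sketch of how streaming consumption can work, assuming an iterator of generated token IDs (the frames_to_codes and stream_audio helpers and the chunk size are illustrative; the 7-token frame layout and token ID range match the quick-start script later in this post):

import torch

SNAC_MIN_ID = 128266   # first SNAC audio token ID in Maya1's vocabulary
SNAC_MAX_ID = 156937   # last SNAC audio token ID
CODEBOOK_SIZE = 4096

def frames_to_codes(tokens, device):
    """Unpack flat 7-token SNAC frames into the codec's 3 hierarchical levels."""
    codes = [[], [], []]
    for i in range(len(tokens) // 7):
        s = [(t - SNAC_MIN_ID) % CODEBOOK_SIZE for t in tokens[i*7:(i+1)*7]]
        codes[0].append(s[0])                      # coarse level: 1 code per frame
        codes[1].extend([s[1], s[4]])              # mid level: 2 codes per frame
        codes[2].extend([s[2], s[3], s[5], s[6]])  # fine level: 4 codes per frame
    return [torch.tensor(c, dtype=torch.long, device=device).unsqueeze(0)
            for c in codes]

def stream_audio(token_iter, snac_model, chunk_frames=28):
    """Yield waveform chunks as soon as enough complete frames have arrived."""
    device = next(snac_model.parameters()).device
    buf = []
    for tok in token_iter:
        if SNAC_MIN_ID <= tok <= SNAC_MAX_ID:
            buf.append(tok)
        if len(buf) >= chunk_frames * 7:
            codes = frames_to_codes(buf[:chunk_frames * 7], device)
            del buf[:chunk_frames * 7]
            with torch.inference_mode():
                # NB: decoding disjoint chunks can click at chunk boundaries;
                # production code would overlap chunks (or use the vLLM script below).
                yield snac_model.decoder(snac_model.quantizer.from_codes(codes))[0, 0]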

4. Production-Ready Infrastructure

  • Runs on single GPU
  • vLLM integration for scale
  • Automatic prefix caching for efficiency
  • 24 kHz audio output
  • WebAudio compatible for browser playback

How to Use Maya1: Download and Run in Minutes

Quick Start: Generate Voice with Emotions

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from snac import SNAC
import soundfile as sf

# Load the 3B Maya1 model and its tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "maya-research/maya1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("maya-research/maya1")

# Load the SNAC 24 kHz codec used to turn generated tokens into audio
snac_model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz").eval().to("cuda")

# Design the voice in natural language; add emotion tags inline in the text
description = "Realistic male voice in the 30s age with american accent. Normal pitch, warm timbre, conversational pacing."
text = "Hello! This is Maya1 <laugh> the best open source voice AI model with emotions."

prompt = f'<description="{description}"> {text}'
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

with torch.inference_mode():
    outputs = model.generate(
        **inputs,
        max_new_tokens=500,
        temperature=0.4,
        top_p=0.9,
        do_sample=True,
    )

# Keep only the newly generated tokens, then filter to the SNAC code range
generated_ids = outputs[0, inputs["input_ids"].shape[1]:]
snac_tokens = [t.item() for t in generated_ids if 128266 <= t <= 156937]

# Unpack 7-token frames into SNAC's three hierarchical codebooks
frames = len(snac_tokens) // 7
codes = [[], [], []]
for i in range(frames):
    s = snac_tokens[i * 7:(i + 1) * 7]
    codes[0].append((s[0] - 128266) % 4096)     # coarse level: 1 code/frame
    codes[1].extend([(s[1] - 128266) % 4096,    # mid level: 2 codes/frame
                     (s[4] - 128266) % 4096])
    codes[2].extend([(s[2] - 128266) % 4096,    # fine level: 4 codes/frame
                     (s[3] - 128266) % 4096,
                     (s[5] - 128266) % 4096,
                     (s[6] - 128266) % 4096])

codes_tensor = [
    torch.tensor(c, dtype=torch.long, device="cuda").unsqueeze(0)
    for c in codes
]

# Decode to a 24 kHz waveform and save it
with torch.inference_mode():
    audio = snac_model.decoder(
        snac_model.quantizer.from_codes(codes_tensor)
    )[0, 0].cpu().numpy()

sf.write("output.wav", audio, 24000)
print("Voice generated successfully! Play output.wav")

Advanced: Production Streaming with vLLM

For production deployments with real-time streaming, use our vLLM script:

Download: vllm_streaming_inference.py

Key Features:

  • Automatic Prefix Caching (APC) for repeated voice descriptions
  • WebAudio ring buffer integration
  • Multi-GPU scaling support
  • Sub-100ms latency for real-time applications
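
If you prefer to wire it up yourself, a minimal offline sketch with vLLM's prefix caching might look like this (assuming vLLM can load the checkpoint directly; the generated token IDs still need SNAC decoding as in the quick-start script above):

from vllm import LLM, SamplingParams

# Enable Automatic Prefix Caching so requests that share the same
# voice-description prefix reuse its KV cache.
llm = LLM(model="maya-research/maya1", enable_prefix_caching=True)

description = "Realistic male voice in the 30s age with american accent."
texts = [
    "Hello! This is Maya1 <laugh> speaking.",
    "And the same voice can <whisper> say something else.",
]

params = SamplingParams(temperature=0.4, top_p=0.9, max_tokens=500)
outputs = llm.generate(
    [f'<description="{description}"> {t}' for t in texts], params
)
# outputs[i].outputs[0].token_ids are SNAC codes; decode them to audio
# exactly as in the quick-start script above.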

Technical Excellence: What Makes Maya1 the Best

Architecture: 3B-Parameter Llama Backbone for Voice

We pretrained a 3B-parameter decoder-only transformer (Llama-style) to predict SNAC neural codec tokens instead of raw waveforms.

The Flow:

<description="..."> text → tokenize → generate SNAC codes (7 tokens/frame) → decode → 24 kHz audio

Why SNAC? Multi-scale hierarchical structure (≈12/23/47 Hz) keeps autoregressive sequences compact for real-time streaming at ~0.98 kbps.
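
The bitrate figure is easy to sanity-check from those rates (assuming 12-bit codes, i.e. the 4096-entry codebooks the decode script above indexes into):

# Three hierarchical codebooks at ~12, 23, and 47 codes/s, 12 bits each
rates_hz = [12, 23, 47]
bits_per_code = 12  # log2(4096)
print(sum(rates_hz) * bits_per_code, "bps")  # 984 bps ≈ 0.98 kbps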

Training Data: What Makes Our Voice AI the Best

Pretraining: Internet-scale English speech corpus for broad acoustic coverage and natural coarticulation.

Supervised Fine-Tuning: Proprietary curated dataset of studio recordings with:

  • Human-verified voice descriptions
  • 20+ emotion tags per sample
  • Multi-accent English coverage
  • Character and role variations

Data Pipeline Excellence:

  1. 24 kHz mono resampling with -23 LUFS normalization
  2. VAD silence trimming with duration bounds (1-14s)
  3. Forced alignment (MFA) for clean phrase boundaries
  4. MinHash-LSH text deduplication
  5. Chromaprint audio deduplication
  6. SNAC encoding with 7-token frame packing
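
As a sketch of what steps 1 and 2 look like in practice (librosa and pyloudnorm are assumptions here; Maya Research hasn't published its pipeline tooling):

import librosa
import pyloudnorm as pyln
import soundfile as sf

# Step 1: resample to 24 kHz mono, then normalize loudness to -23 LUFS
audio, _ = librosa.load("raw_clip.wav", sr=24000, mono=True)
meter = pyln.Meter(24000)
audio = pyln.normalize.loudness(audio, meter.integrated_loudness(audio), -23.0)

# Step 2: trim silence (a crude stand-in for VAD) and enforce 1-14 s bounds
trimmed, _ = librosa.effects.trim(audio, top_db=30)
if 1.0 <= len(trimmed) / 24000 <= 14.0:
    sf.write("clean_clip.wav", trimmed, 24000)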

Voice Design Experiments: Why Natural Language Won

We tested 4 conditioning formats. Only one delivered production-quality results:

❌ Colon format: {description}: {text} - Format drift, model spoke descriptions

❌ Angle-list attributes: <{age}, {pitch}, {character}> - Too rigid, poor generalization

❌ Key-value tags: <age=40><pitch=low> - Token bloat, brittle to mistakes

✅ XML-attribute (WINNER): <description="40-yr old, low-pitch, warm"> - Natural language, robust, scalable
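
For concreteness, here are the four formats as prompt strings (illustrative reconstructions, not verbatim training prompts):

text = "Hello there <laugh> welcome back."

colon_fmt  = f"40-year-old male, low pitch, warm: {text}"                 # drifted
angle_list = f"<40, low pitch, villain> {text}"                           # too rigid
key_value  = f"<age=40><pitch=low><timbre=warm> {text}"                   # token bloat
xml_attr   = f'<description="40-year-old male, low pitch, warm"> {text}'  # winner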


Use Cases

Game Character Voices

Generate unique character voices with emotions on-the-fly. No voice actor recording sessions.

Podcast & Audiobook Production

Narrate content with emotional range and consistent personas across hours of audio.

AI Voice Assistants

Build conversational agents with natural emotional responses in real-time.

Video Content Creation

Create voiceovers for YouTube, TikTok, and social media with expressive delivery.

Customer Service AI

Deploy empathetic voice bots that understand context and respond with appropriate emotions.

Accessibility Tools

Build screen readers and assistive technologies with natural, engaging voices.


Frequently Asked Questions

Q: What makes Maya1 different?
A: We're the only open source model offering 20+ emotions, zero-shot voice design, production-ready streaming, and 3B parameters, all in one package.

Q: Can I use this commercially?
A: Absolutely. Apache 2.0 license. Build products, deploy services, monetize freely.

Q: What languages does it support?
A: Currently English with multi-accent support. Future models will expand to languages and accents underserved by mainstream voice AI.

Q: How does it compare to ElevenLabs, Murf.ai, or other closed-source tools?
A: Feature parity with emotions and voice design. Advantage: you own the deployment, pay no per-second fees, and can customize the model.

Q: Can I fine-tune on my own voices?
A: Yes. The model architecture supports fine-tuning on custom datasets for specialized voices.

Q: What GPU do I need?
A: Single GPU with 16GB+ VRAM (A100, H100, or consumer RTX 4090).

Q: Is streaming really real-time?
A: Yes. SNAC codec enables sub-100ms latency with vLLM deployment.


Comparison

Feature          Maya1              ElevenLabs     OpenAI TTS     Coqui TTS
Open Source      Yes                No             No             Yes
Emotions         20+                Limited        No             No
Voice Design     Natural language   Voice library  Fixed          Complex
Streaming        Real-time          Yes            Yes            No
Cost             Free               Pay-per-use    Pay-per-use    Free
Customization    Full               Limited        None           Moderate
Parameters       3B                 Unknown        Unknown        <1B

Model Metadata

Developed by: Maya Research
Website: mayaresearch.ai
Backed by: South Park Commons
Model Type: Text-to-Speech, Emotional Voice Synthesis, Voice Design AI
Language: English (Multi-accent)
Architecture: 3B-parameter Llama-style transformer with SNAC codec
License: Apache 2.0 (Fully Open Source)
Training Data: Proprietary curated + Internet-scale pretraining
Audio Quality: 24 kHz, mono, ~0.98 kbps streaming
Inference: vLLM compatible, single GPU deployment
Status: Production-ready (November 2025)


Getting Started

Hugging Face Model Hub

git lfs install
git clone https://huggingface.co/maya-research/maya1

Or load directly from the Hub:

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("maya-research/maya1")

Requirements

pip install torch transformers snac soundfile


Citations & References

If you use Maya1 in your research or product, please cite:

@misc{maya1voice2025,
  title={Maya1: Open Source Voice AI with Emotional Intelligence},
  author={Maya Research},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/maya-research/maya1}},
}


Why We Build Open Source Voice AI

Voice AI will be everywhere, but it's fundamentally broken for 90% of the world. Current voice models only work well for a narrow slice of English speakers because training data for most accents, languages, and speaking styles simply doesn't exist.

Maya Research builds emotionally intelligent, native voice models that finally let the rest of the world speak. We're open source because we believe voice intelligence should not be a privilege reserved for the few.

Technology should be open - The best voice AI tools should not be locked behind proprietary APIs charging per-second fees.

Community drives innovation - Open source accelerates research. When developers worldwide can build on our work, everyone wins.

Voice intelligence for everyone - We're building for the 90% of the world ignored by mainstream voice AI. That requires open models, not closed platforms.


Maya Research - Building voice intelligence for the 90% of the world left behind by mainstream AI.

Website: mayaresearch.ai
Twitter/X: @mayaresearch_ai
Hugging Face: maya-research
Backed by: South Park Commons

License: Apache 2.0
Mission: Emotionally intelligent voice models that finally let everyone speak
