HuggingFace 🤗: Model | Q8 GGUF | Q4 GGUF | Spaces
Click the image above to watch NeuTTS Air in action on YouTube!
Created by Neuphonic - building faster, smaller, on-device voice AI
State-of-the-art Voice AI has been locked behind web APIs for too long. NeuTTS Air is the world’s first super-realistic, on-device TTS speech language model with instant voice cloning. Built off a 0.5B LLM backbone, NeuTTS Air brings natural-sounding speech, real-time performance, built-in security, and speaker cloning to your local device - unlocking a new category of embedded voice agents, assistants, toys, and compliance-safe apps.
- 🗣 Best-in-class realism for its size - produces natural, ultra-realistic voices that sound human
- 📱 Optimised for on-device deployment - provided in GGML format, ready to run on phones, laptops, or even Raspberry Pis
- 👫 Instant voice cloning - create your own speaker with as little as 3 seconds of audio
- 🚄 Simple LM + codec architecture built off a 0.5B backbone - the sweet spot between speed, size, and quality for real-world applications
NeuTTS Air is built off Qwen 0.5B - a lightweight yet capable language model optimised for text understanding and generation - as well as a powerful combination of technologies designed for efficiency and quality:
- Audio Codec: NeuCodec - our proprietary neural audio codec that achieves exceptional audio quality at low bitrates using a single codebook
- Format: Available in GGML format for efficient on-device inference
- Responsibility: Watermarked outputs
- Inference Speed: Real-time generation on mid-range devices
- Power Consumption: Optimised for mobile and embedded devices
Clone the Git repo

```bash
git clone https://github.com/neuphonic/neutts-air.git
```
Install espeak (required dependency)

Please refer to the following link for instructions on how to install espeak:
https://github.com/espeak-ng/espeak-ng/blob/master/docs/guide.md

```bash
# macOS
brew install espeak
# Ubuntu/Debian
sudo apt install espeak
```

Mac users may need to put the following lines at the top of the neutts.py file:

```python
from phonemizer.backend.espeak.wrapper import EspeakWrapper

_ESPEAK_LIBRARY = '/opt/homebrew/Cellar/espeak/1.48.04_1/lib/libespeak.1.1.48.dylib'  # use the path to the library
EspeakWrapper.set_library(_ESPEAK_LIBRARY)
```
Install Python dependencies

The requirements file includes the dependencies needed to run the model with PyTorch. When using an ONNX decoder or a GGML model, some dependencies (such as PyTorch) are no longer required.

Inference is compatible with, and tested on, Python >= 3.11.

```bash
pip install -r requirements.txt
```
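If you want to verify the interpreter version programmatically before installing, a minimal check might look like the following (an illustrative helper, not part of the repo):

```python
import sys

def python_version_ok(version=None):
    """Return True when the interpreter meets the Python >= 3.11 requirement."""
    if version is None:
        version = sys.version_info
    return tuple(version[:2]) >= (3, 11)
```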
(Optional) Install llama-cpp-python to use the GGUF models.

```bash
pip install llama-cpp-python
```

To run llama-cpp with GPU (CUDA, MPS) support, please refer to:
https://pypi.org/project/llama-cpp-python/
(Optional) Install onnxruntime if you want to run the ONNX decoder.
Run the basic example script to synthesize speech:
To specify a particular model repo for the backbone or codec, add the --backbone argument. Available backbones are listed in the NeuTTS-Air Hugging Face collection.
Several examples are available, including a Jupyter notebook in the examples folder.
Make sure you have installed onnxruntime.
To run the model with the ONNX decoder, you first need to encode the reference sample. Please refer to the encode_reference example.
You only need to provide a reference audio file for the reference encoding.
NeuTTS Air requires two inputs:
- A reference audio sample (.wav file)
- A text string
The model then synthesises the text as speech in the style of the reference audio. This is what enables NeuTTS Air’s instant voice cloning capability.
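The two-input flow can be sketched as a small wrapper function. The method names `encode_reference` and `infer` are assumptions inferred from the encode_reference example mentioned above - check the examples folder for the actual API:

```python
def clone_and_speak(tts, input_text, ref_audio_path, ref_text):
    """Sketch of the two-input flow: reference audio + text -> speech.

    Assumes `tts` exposes `encode_reference` and `infer` methods; treat
    these names as illustrative rather than the definitive API.
    """
    # Encode the reference audio once to capture the speaker's style
    ref_codes = tts.encode_reference(ref_audio_path)
    # Synthesize the text in that style
    return tts.infer(input_text, ref_codes, ref_text)
```

Encoding the reference once and reusing `ref_codes` across calls avoids re-processing the same .wav for every utterance.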
You can find some ready-to-use samples in the examples folder:
- samples/dave.wav
- samples/jo.wav
For optimal performance, reference audio samples should be:
- Mono channel
- 16-44 kHz sample rate
- 3–15 seconds in length
- Saved as a .wav file
- Clean — minimal to no background noise
- Natural, continuous speech — like a monologue or conversation, with few pauses, so the model can capture tone effectively
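The measurable items on the checklist above (channels, sample rate, duration) can be verified with Python's standard wave module. A small helper, illustrative only and not part of NeuTTS Air:

```python
import wave

def check_reference(path):
    """Check a candidate reference .wav against the recommendations above.

    Returns a list of warning strings; an empty list means the file looks
    suitable on the measurable criteria (mono, 16-44 kHz, 3-15 s).
    """
    warnings = []
    with wave.open(path, "rb") as wav:
        channels = wav.getnchannels()
        rate = wav.getframerate()
        duration = wav.getnframes() / rate

    if channels != 1:
        warnings.append(f"expected mono, got {channels} channels")
    if not 16_000 <= rate <= 44_100:
        warnings.append(f"sample rate {rate} Hz outside 16-44 kHz")
    if not 3.0 <= duration <= 15.0:
        warnings.append(f"duration {duration:.1f}s outside 3-15 s")
    return warnings
```

Cleanliness and naturalness of the speech still need a human ear; this only catches format mistakes early.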
Every audio file generated by NeuTTS Air includes the Perth (Perceptual Threshold) Watermarker.
Don't use this model to do bad things… please.
To run the pre-commit hooks when contributing to this project, run:
Then: