中文 | English

Hugging Face | 🖥️ Official Website | 🕖 HunyuanAPI | 🕹️ Demo | ModelScope

Technical Report | GITHUB | cnb.cool | LICENSE
Welcome to the official repository of Hunyuan-A13B, an innovative and open-source large language model (LLM) built on a fine-grained Mixture-of-Experts (MoE) architecture. Designed for efficiency and scalability, Hunyuan-A13B delivers cutting-edge performance with minimal computational overhead, making it an ideal choice for advanced reasoning and general-purpose applications, especially in resource-constrained environments.
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have achieved remarkable progress in natural language processing, computer vision, and scientific tasks. However, as model scales continue to expand, optimizing resource consumption while maintaining high performance has become a critical challenge. To address this, we have explored Mixture of Experts (MoE) architectures. The newly introduced Hunyuan-A13B model features a total of 80 billion parameters with 13 billion active parameters. It not only delivers high-performance results but also achieves optimal resource efficiency, successfully balancing computational power and resource utilization.
- Compact yet Powerful: With only 13 billion active parameters (out of a total of 80 billion), the model delivers competitive performance on a wide range of benchmark tasks, rivaling much larger models.
- Hybrid Reasoning Support: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- Ultra-Long Context Understanding: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- Enhanced Agent Capabilities: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- Efficient Inference: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
As a powerful yet computationally efficient large model, Hunyuan-A13B is an ideal choice for researchers and developers seeking high performance under resource constraints. Whether for academic research, cost-effective AI solution development, or innovative application exploration, this model provides a robust foundation for advancement.
- 2025.6.27 We have open-sourced Hunyuan-A13B-Pretrain, Hunyuan-A13B-Instruct, Hunyuan-A13B-Instruct-FP8, and Hunyuan-A13B-Instruct-GPTQ-Int4 on Hugging Face. In addition, we have released a technical report and a training and inference operation manual, which provide detailed information about the model's capabilities as well as the procedures for training and inference.
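If you want a local copy of the weights before training or deployment, one way to fetch them is with the Hugging Face CLI (repo ID taken from the release above; the local directory is a placeholder):

```bash
huggingface-cli download tencent/Hunyuan-A13B-Instruct --local-dir ./Hunyuan-A13B-Instruct
```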
Note: The following benchmarks were evaluated with the TRT-LLM backend on several base models.
| Model | Hunyuan-Large | Qwen2.5-72B | Qwen3-A22B | Hunyuan-A13B |
|------------|---------|---------|---------|---------|
| MMLU | 88.40 | 86.10 | 87.81 | 88.17 |
| MMLU-Pro | 60.20 | 58.10 | 68.18 | 67.23 |
| MMLU-Redux | 87.47 | 83.90 | 87.40 | 87.67 |
| BBH | 86.30 | 85.80 | 88.87 | 87.56 |
| SuperGPQA | 38.90 | 36.20 | 44.06 | 41.32 |
| EvalPlus | 75.69 | 65.93 | 77.60 | 78.64 |
| MultiPL-E | 59.13 | 60.50 | 65.94 | 69.33 |
| MBPP | 72.60 | 76.00 | 81.40 | 83.86 |
| CRUX-I | 57.00 | 57.63 | - | 70.13 |
| CRUX-O | 60.63 | 66.20 | 79.00 | 77.00 |
| MATH | 69.80 | 62.12 | 71.84 | 72.35 |
| CMATH | 91.30 | 84.80 | - | 91.17 |
| GSM8k | 92.80 | 91.50 | 94.39 | 91.83 |
| GPQA | 25.18 | 45.90 | 47.47 | 49.12 |
Hunyuan-A13B-Instruct achieves highly competitive performance across multiple benchmarks, particularly in mathematics, science, and agent tasks. We compared it with several strong models; the results are shown below.
| Topic | Bench | OpenAI-o1-1217 | DeepSeek R1 | Qwen3-A22B | Hunyuan-A13B-Instruct |
|-------|-------|----------------|-------------|------------|------------------------|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 74.3<br>79.2<br>96.4 | 79.8<br>70<br>94.9 | 85.7<br>81.5<br>94.0 | 87.3<br>76.8<br>94.3 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 78<br>83.1 | 71.5<br>82.4 | 71.1<br>85.7 | 71.2<br>82.7 |
| **Coding** | LiveCodeBench<br>FullStackBench<br>ArtifactsBench | 63.9<br>64.6<br>38.6 | 65.9<br>71.6<br>44.6 | 70.7<br>65.6<br>44.6 | 63.9<br>67.8<br>43 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 80.4<br>90.2<br>81 | 83.7<br>92.2<br>78.7 | 88.9<br>90.3<br>80.3 | 89.1<br>91.1<br>84.7 |
| **Instruction Following** | IF-Eval<br>SysBench | 91.8<br>82.5 | 88.3<br>77.7 | 83.4<br>74.2 | 84.7<br>76.1 |
| **Text Creation** | LengthCtrl<br>InsCtrl | 60.1<br>74.8 | 55.9<br>69 | 53.3<br>73.7 | 55.4<br>71.9 |
| **NLU** | ComplexNLU<br>Word-Task | 64.7<br>67.1 | 64.5<br>76.3 | 59.8<br>56.4 | 61.2<br>62.9 |
| **Agent** | BFCL v3<br>τ-Bench<br>ComplexFuncBench<br>C3-Bench | 67.8<br>60.4<br>47.6<br>58.8 | 56.9<br>43.8<br>41.1<br>55.3 | 70.8<br>44.6<br>40.6<br>51.7 | 78.3<br>54.7<br>61.2<br>63.5 |
Our model defaults to slow-thinking (CoT) reasoning. There are two ways to disable it:
- Pass "enable_thinking=False" when calling apply_chat_template.
- Add "/no_think" before the prompt to force the model to skip CoT reasoning. Similarly, add "/think" before the prompt to force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load the model and run inference. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
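A minimal sketch, assuming the Hugging Face checkpoint ID `tencent/Hunyuan-A13B-Instruct` and that the chat template wraps the reasoning in `<think>` tags and the final reply in `<answer>` tags (the tag names are an assumption about the template's output format):

```python
import re

from transformers import AutoModelForCausalLM, AutoTokenizer

# Checkpoint ID from the Hugging Face release noted above.
model_id = "tencent/Hunyuan-A13B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Explain why the sky is blue."}]

# enable_thinking toggles slow-thinking (CoT) mode; pass False for fast mode.
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True,
)
outputs = model.generate(inputs.to(model.device), max_new_tokens=2048)
text = tokenizer.decode(outputs[0])

# Separate the reasoning from the final reply. The <think>/<answer> tag
# names are an assumption; adjust the patterns to the template's actual output.
think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
answer = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
print("reasoning:", think.group(1).strip() if think else "(none)")
print("answer:", answer.group(1).strip() if answer else text)
```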
Hunyuan-A13B ships with the workflows needed for model training. Please refer to Training for details.
We used our in-house AngleSlim compression tool to produce the FP8 and INT4 quantized models. AngleSlim is expected to be open-sourced in early July and will support one-click quantization and compression of large models. In the meantime, you can download our quantized models directly for deployment and testing.
We use FP8 static quantization: model weights and activations are converted to an 8-bit floating-point format, with the quantization scales pre-determined from a small amount of calibration data (no training required). This improves inference efficiency and lowers the deployment barrier. You can quantize the model yourself with AngleSlim, or directly download our open-source quantized model, Hunyuan-A13B-Instruct-FP8.
This subsection presents benchmark results for the Hunyuan-80B-A13B-Instruct-FP8 quantized model.
| Bench | Hunyuan-A13B-Instruct | Hunyuan-A13B-Instruct-FP8 |
|-------|------------------------|----------------------------|
| AIME 2024 | 87.3 | 86.7 |
| GSM8k | 94.39 | 94.01 |
| BBH | 89.1 | 88.34 |
| DROP | 91.1 | 91.1 |
We use the GPTQ algorithm to achieve W4A16 quantization: model weights are processed layer by layer, with a small amount of calibration data used to minimize the reconstruction error of the quantized weights, which are adjusted through an optimization procedure that approximates the inverse Hessian. The process requires no retraining and only a small amount of calibration data, improving inference efficiency and lowering the deployment barrier. You can quantize the model yourself with AngleSlim, or directly download our open-source quantized model, Hunyuan-A13B-Instruct-Int4.
This subsection presents benchmark results for the Hunyuan-80B-A13B-Instruct-GPTQ-Int4 quantized model.
| Bench | Hunyuan-A13B-Instruct | Hunyuan-A13B-Instruct-GPTQ-Int4 |
|-------|------------------------|----------------------------------|
| OlympiadBench | 82.7 | 84.0 |
| AIME 2024 | 87.3 | 86.7 |
| GSM8k | 94.39 | 94.24 |
| BBH | 89.1 | 87.91 |
| DROP | 91.1 | 91.05 |
For deployment, you can use frameworks such as TensorRT-LLM, vLLM, or SGLang to serve the model and create an OpenAI-compatible API endpoint.
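Whichever backend you choose, the resulting endpoint speaks the OpenAI chat-completions protocol. A minimal request sketch, assuming the server listens on localhost:8000:

```bash
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "model": "tencent/Hunyuan-A13B-Instruct",
          "messages": [{"role": "user", "content": "Hello!"}]
        }'
```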
We provide a pre-built Docker image based on the latest version of TensorRT-LLM (available tags: https://hub.docker.com/r/hunyuaninfer/hunyuan-a13b/tags).
- To get started, pull the Docker image:
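A sketch of the pull step; the tag name is an assumption, so check the Docker Hub page above for the current one:

```bash
docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
```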
- Prepare the configuration file (optional performance tuning for TensorRT-LLM):
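A minimal sketch of such a file, passed to the server via --extra_llm_api_options below; the field names follow TensorRT-LLM's LLM API options and may vary across versions:

```bash
cat > /path/to/extra-llm-api-config.yml <<EOF
# Enable CUDA graphs for decoding (assumed option name; check your
# TensorRT-LLM version's LLM API reference).
use_cuda_graph: true
# Print per-iteration runtime logs.
print_iter_log: true
EOF
```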
- Start the API server:
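A sketch of launching the OpenAI-compatible server with trtllm-serve; the model path, parallelism, and config path are placeholders to adjust for your setup:

```bash
trtllm-serve /path/to/Hunyuan-A13B-Instruct \
    --host 0.0.0.0 --port 8000 \
    --backend pytorch \
    --tp_size 2 \
    --extra_llm_api_options /path/to/extra-llm-api-config.yml
```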
We provide a pre-built Docker image containing vLLM 0.8.5 with full support for this model. Official vLLM support is currently under development. Note: CUDA 12.8 is required for this Docker image.
- To get started, pull the Docker image.
- Download the model files:
  - Hugging Face: downloaded automatically by vLLM.
  - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-A13B-Instruct`
- Start the API server (sketches for both download paths follow):
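A sketch of launching the server for each download path; the tensor-parallel size and port are placeholders to adjust for your setup:

```bash
# Model pulled automatically from Hugging Face by vLLM.
python3 -m vllm.entrypoints.openai.api_server \
    --model tencent/Hunyuan-A13B-Instruct \
    --trust-remote-code \
    --tensor-parallel-size 4 \
    --host 0.0.0.0 --port 8000

# Model resolved via ModelScope instead (VLLM_USE_MODELSCOPE is vLLM's
# switch for downloading from ModelScope rather than Hugging Face).
VLLM_USE_MODELSCOPE=True python3 -m vllm.entrypoints.openai.api_server \
    --model Tencent-Hunyuan/Hunyuan-A13B-Instruct \
    --trust-remote-code \
    --tensor-parallel-size 4 \
    --host 0.0.0.0 --port 8000
```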
This section describes the process of deploying quantized versions of the model using vLLM.
Image: use the same Docker image as for the BF16 deployment.
Deploying the Int8-weight-only version of the Hunyuan-A13B model only requires setting the environment variables in run_server_int8.sh.
Next, start the Int8 service by running run_server_int8.sh.
After run_server_int8.sh starts successfully, run the request script to test the endpoint.
Deploying the Int4-weight-only version of the Hunyuan-A13B model (quantized with the GPTQ method) only requires setting the environment variables in run_server_int4.sh.
Next, start the Int4 service by running run_server_int4.sh.
After run_server_int4.sh starts successfully, run the request script to test the endpoint.
Deploying the W8A8C8 (FP8) version of the Hunyuan-A13B model only requires setting the environment variables in run_server_fp8.sh.
Next, start the FP8 service by running run_server_fp8.sh.
After run_server_fp8.sh starts successfully, run the request script to test the endpoint.
We also provide a pre-built Docker image based on the latest version of SGLang.
To get started:
- Pull the Docker image
- Start the API server:
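A sketch of both steps, assuming an image tag on the hunyuaninfer Docker Hub page and SGLang's standard launcher; adjust --tp to your GPU count:

```bash
# Tag name is an assumption; check the Docker Hub page for the current one.
docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-sglang

# Launch an OpenAI-compatible server with SGLang.
python3 -m sglang.launch_server \
    --model-path tencent/Hunyuan-A13B-Instruct \
    --trust-remote-code \
    --tp 4 \
    --host 0.0.0.0 --port 30000
```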
If you would like to leave a message for our R&D and product teams, feel free to contact our open-source team. You can also reach us via email ([email protected]).