High-Performance MLX Implementation for GPT-OSS Models on Apple Silicon
Original GPT-OSS · Guides · Model Card
Download gpt-oss-120b and gpt-oss-20b on Hugging Face
OpenHarmony-MLX is an optimized Apple Silicon implementation of OpenAI's GPT-OSS series models, featuring native MLX acceleration for exceptional performance on Mac hardware.
- 🍎 Apple Silicon Optimized: Native MLX acceleration for M1/M2/M3/M4 chips
- ⚡ Blazing Fast: Up to 40 tokens/sec on Apple Silicon (vs 5-15 on CPU)
- 🧠 Memory Efficient: Run GPT-OSS-120b in 30GB with quantization
- 🛠️ Developer Friendly: Drop-in replacement with familiar APIs
- 📦 Complete Package: Includes all inference backends and tools
- gpt-oss-120b — 117B parameters, 5.1B active per token
- gpt-oss-20b — 21B parameters, 3.6B active per token
Both models were trained using our harmony response format and should only be used with this format; otherwise, they will not work correctly.
- Permissive Apache 2.0 license: Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
- Configurable reasoning effort: Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
- Full chain-of-thought: Provides complete access to the model's reasoning process, facilitating easier debugging and greater trust in outputs. This information is not intended to be shown to end users.
- Fine-tunable: Fully customize models to your specific use case through parameter fine-tuning.
- Agentic capabilities: Use the models' native capabilities for function calling, web browsing, Python code execution, and Structured Outputs.
- Native MXFP4 quantization: The models are trained with native MXFP4 precision for the MoE layer, allowing gpt-oss-120b to run on a single H100 GPU and gpt-oss-20b to run within 16GB of memory.
You can use gpt-oss-120b and gpt-oss-20b with Transformers. If you use the Transformers chat template, it will automatically apply the harmony response format. If you use model.generate directly, you need to apply the harmony format manually using the chat template or with our openai-harmony package.
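For example, a minimal sketch using the Transformers pipeline API (the model ID and generation settings are illustrative):

```python
from transformers import pipeline

# The chat template applies the harmony response format automatically.
generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain MXFP4 quantization in one paragraph."},
]

result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1])
```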
Learn more about how to use gpt-oss with Transformers.
vLLM recommends using uv for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
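Once the server is up, any OpenAI-compatible client can talk to it. Here is a minimal sketch using the openai Python package; the base URL assumes vLLM's default port 8000, so adjust it to your setup:

```python
from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "What is MXFP4 quantization?"}],
)
print(response.choices[0].message.content)
```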
Learn more about how to use gpt-oss with vLLM.
These implementations are largely reference implementations for educational purposes and are not expected to be run in production.
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after installing Ollama.
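Once the model is pulled, you can also call it from Python. A minimal sketch using the ollama Python package (the gpt-oss:20b tag is an assumption — check your local model list):

```python
import ollama

# Assumes the Ollama daemon is running locally and the model has been pulled.
response = ollama.chat(
    model="gpt-oss:20b",
    messages=[{"role": "user", "content": "Summarize the harmony response format."}],
)
print(response["message"]["content"])
```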
Learn more about how to use gpt-oss with Ollama.
If you are using LM Studio, you can use the following commands to download the models.
Check out our awesome list for a broader collection of gpt-oss resources and inference partners.
This repository provides a collection of reference implementations:
- Inference:
- torch — a non-optimized PyTorch implementation for educational purposes only. It requires at least 4x H100 GPUs because it is not optimized.
- triton — a more optimized implementation using PyTorch & Triton, including CUDA graphs and basic caching
- metal — a Metal-specific implementation for running the models on Apple Silicon hardware
- Tools:
- browser — the browsing tool the model was trained to use (see gpt_oss/tools)
- python — a stateless Python execution tool that runs in a Docker container
- Client examples:
- chat — a basic terminal chat application that uses the PyTorch or Triton implementations for inference along with the python and browser tools
- responses_api — an example Responses API compatible server that implements the browser tool along with other Responses-compatible functionality
- Python 3.12
- On macOS: install the Xcode command line tools: `xcode-select --install`
- On Linux: These reference implementations require CUDA
- On Windows: These reference implementations have not been tested on Windows. Try using solutions like Ollama if you are trying to run the model locally.
If you want to try any of the code, you can install it directly from PyPI:
If you want to modify the code or try the metal implementation, set the project up locally:
You can download the model weights from the Hugging Face Hub using the Hugging Face CLI:
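If you prefer to stay in Python, huggingface_hub can perform the same download; a minimal sketch (the include pattern targets the original/ checkpoint layout used by the torch, triton, and metal implementations):

```python
from huggingface_hub import snapshot_download

# Download the original (non-converted) checkpoint into gpt-oss-20b/original/.
snapshot_download(
    repo_id="openai/gpt-oss-20b",
    allow_patterns=["original/*"],
    local_dir="gpt-oss-20b/",
)
```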
We include an inefficient reference PyTorch implementation in gpt_oss/torch/model.py. This code uses basic PyTorch operators to show the exact model architecture, with a small addition of supporting tensor parallelism in MoE so that the larger model can run with this code (e.g., on 4xH100 or 2xH200). In this implementation, we upcast all weights to BF16 and run the model in BF16.
To run the reference implementation, install the dependencies:
And then run:
We also include an optimized reference implementation that uses a Triton MoE kernel with MXFP4 support. It also includes optimizations in the attention code to reduce memory cost. Running this implementation requires nightly versions of Triton and PyTorch, which will be installed. This version can run gpt-oss-120b on a single 80GB GPU.
To install the reference Triton implementation, run:
And then run:
If you encounter torch.OutOfMemoryError, make sure to turn on the expandable allocator to avoid crashes when loading weights from the checkpoint.
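A minimal sketch of enabling it from Python; the same effect can be achieved by exporting the environment variable in your shell:

```python
import os

# The CUDA caching allocator reads this setting when it initializes,
# so set it before the first CUDA allocation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # noqa: E402
```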
Additionally, we provide a reference implementation for Metal to run on Apple Silicon. This implementation is not production-ready but is accurate to the PyTorch implementation.
The implementation is compiled automatically when you run the .[metal] installation on an Apple Silicon device:
To perform inference, you'll first need to convert the SafeTensors weights from Hugging Face into the right format using:
Or download the pre-converted weights:
To test it, you can run:
We also provide a high-performance MLX implementation for Apple Silicon in gpt_oss/mlx_gpt_oss. Install with:
You can use it via the CLI:
Or the Python API:
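As a point of comparison, generation with the community mlx-lm package looks like the sketch below; this is not the gpt_oss.mlx_gpt_oss API, and the model path is a placeholder for whichever MLX-converted checkpoint you have locally:

```python
from mlx_lm import load, generate

# Illustrative: community mlx-lm, not this repo's gpt_oss.mlx_gpt_oss module.
model, tokenizer = load("path/to/mlx-converted-gpt-oss-20b")

prompt = "Explain the difference between gpt-oss-20b and gpt-oss-120b."
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```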
Along with the model, we are also releasing a new chat format library harmony to interact with the model. Check this guide for more info about harmony.
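A minimal sketch of rendering a conversation with openai-harmony; the class and method names follow the harmony guide, so treat them as assumptions if your installed version differs:

```python
from openai_harmony import (
    Conversation,
    HarmonyEncodingName,
    Message,
    Role,
    load_harmony_encoding,
)

# Load the gpt-oss harmony encoding and render a prompt for completion.
encoding = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)
conversation = Conversation.from_messages(
    [Message.from_role_and_content(Role.USER, "Hello, what can you do?")]
)
tokens = encoding.render_conversation_for_completion(conversation, Role.ASSISTANT)
# `tokens` is the token sequence you feed to the model for sampling.
```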
We also include two system tools for the model: browsing and a Python container. Check gpt_oss/tools for the tool implementations.
The terminal chat application is a basic example of how to use the harmony format together with the PyTorch, Triton, and vLLM implementations. It also exposes both the python and browser tools as optional tools that can be used.
Note
The torch and triton implementations require the original checkpoint under gpt-oss-120b/original/ and gpt-oss-20b/original/ respectively, while vLLM uses the Hugging Face converted checkpoint under the gpt-oss-120b/ and gpt-oss-20b/ root directories respectively.
We also include an example Responses API server. This server does not implement every feature and event of the Responses API but should be compatible with most of the basic use cases and serve as inspiration for anyone building their own server. Some of our inference partners are also offering their own Responses API.
You can start this server with the following inference backends:
- triton — uses the triton implementation
- metal — uses the metal implementation on Apple Silicon only
- ollama — uses the Ollama /api/generate API as an inference solution
- vllm — uses your installed vllm version to perform inference
- transformers — uses your installed transformers version to perform local inference
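Once the server is running, you can exercise it with the openai client's Responses API. A minimal sketch; the base URL, port, and model name are assumptions, so match them to how you started the server:

```python
from openai import OpenAI

# Point the client at the local example Responses API server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.responses.create(
    model="gpt-oss-20b",
    input="Write a haiku about Apple Silicon.",
)
print(response.output_text)
```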
We support codex as a client for gpt-oss. To run the 20b version, add this to ~/.codex/config.toml:
This will work with any Chat Completions API-compatible server listening on port 11434, such as Ollama. Start the server and point codex to the oss model:
Warning
This implementation is purely for educational purposes and should not be used in production. You should implement your own equivalent of the ExaBackend class with your own browsing environment.
Both gpt-oss models were trained with the capability to browse using the browser tool that exposes the following three methods:
- search to search for key phrases
- open to open a particular page
- find to look for contents on a page
To enable the browser tool, you'll have to place the definition into the system message of your harmony formatted prompt. You can either use the with_browser() method if your tool implements the full interface or modify the definition using with_tools(). For example:
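A hedged sketch of what that can look like, using this repo's SimpleBrowserTool with the ExaBackend mentioned above and openai-harmony's SystemContent; treat the exact module paths and method names as assumptions against your installed versions:

```python
from gpt_oss.tools.simple_browser import SimpleBrowserTool
from gpt_oss.tools.simple_browser.backend import ExaBackend
from openai_harmony import Message, Role, SystemContent

# Build the browser tool with the example Exa-based backend.
backend = ExaBackend(source="web")
browser_tool = SimpleBrowserTool(backend=backend)

# Attach the tool definition to the system message of the harmony prompt.
system_content = SystemContent.new().with_tools(browser_tool.tool_config)
system_message = Message.from_role_and_content(Role.SYSTEM, system_content)
```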
To control the context window size, this tool uses a scrollable window of text that the model can interact with. For example, it might fetch the first 50 lines of a page and then scroll to the next 20 lines after that. The model has also been trained to use citations from this tool in its answers.
To improve performance, the tool caches requests so that the model can revisit a different part of a page without having to reload it. For that reason, you should create a new browser instance for every request.
The model was trained to use a python tool to perform calculations and other actions as part of its chain-of-thought. During training, the model used a stateful tool, which makes running tools between CoT loops easier. This reference implementation, however, uses a stateless mode. As a result, the PythonTool defines its own tool description to override the definition in openai-harmony.
Warning
This implementation runs in a permissive Docker container, which could be problematic in cases like prompt injection. It serves as an example; you should consider implementing your own container restrictions in production.
To enable the python tool, you'll have to place the definition into the system message of your harmony formatted prompt. You can either use the with_python() method if your tool implements the full interface or modify the definition using with_tools(). For example:
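A similar hedged sketch for the python tool; again, the module path and attribute names are assumptions:

```python
from gpt_oss.tools.python_docker.docker_tool import PythonTool
from openai_harmony import Message, Role, SystemContent

# The stateless PythonTool ships its own tool description (see note above),
# so attaching its tool_config overrides the default openai-harmony definition.
python_tool = PythonTool()
system_content = SystemContent.new().with_tools(python_tool.tool_config)
system_message = Message.from_role_and_content(Role.SYSTEM, system_content)
```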
apply_patch can be used to create, update or delete files locally.
We released the models with native quantization support. Specifically, we use MXFP4 for the linear projection weights in the MoE layer. We store the MoE tensor in two parts:
- tensor.blocks stores the actual FP4 values. We pack every two values into one uint8.
- tensor.scales stores the block scales. Block scaling is applied along the last dimension for all MXFP4 tensors.
All other tensors are in BF16. We also recommend using BF16 as the activation precision for the model.
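To make the layout concrete, here is a hedged NumPy sketch of dequantizing such a tensor. The FP4 (E2M1) value table is standard, but the nibble packing order and the exponent handling of the scales are assumptions about this particular checkpoint format:

```python
import numpy as np

# The 16 possible FP4 (E2M1) values, indexed by the 4-bit code (sign bit is the MSB).
FP4_VALUES = np.array(
    [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
     -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0],
    dtype=np.float32,
)

def dequantize_mxfp4(blocks: np.ndarray, scales: np.ndarray, block_size: int = 32) -> np.ndarray:
    """Unpack uint8-packed FP4 values and apply per-block scales."""
    # Each uint8 holds two FP4 codes; low-nibble-first order is an assumption.
    lo = blocks & 0x0F
    hi = blocks >> 4
    codes = np.stack([lo, hi], axis=-1).reshape(*blocks.shape[:-1], -1)
    values = FP4_VALUES[codes]
    # Scales are assumed to be E8M0 power-of-two exponents with a bias of 127,
    # one per block of `block_size` values along the last dimension.
    scale = np.exp2(scales.astype(np.float32) - 127.0)
    values = values.reshape(*scales.shape, block_size) * scale[..., None]
    return values.reshape(*values.shape[:-2], -1)
```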
We recommend sampling with temperature=1.0 and top_p=1.0.
The reference implementations in this repository are meant as a starting point and inspiration. Outside of bug fixes, we do not intend to accept new feature contributions. If you build implementations based on this code, such as new tool implementations, you are welcome to contribute them to the awesome-gpt-oss.md file.
