Un-LOCC (Universal Lossy Optical Context Compression) is a Python library that wraps the OpenAI SDK to enable optical compression of text inputs. By rendering text into images, it leverages Vision-Language Models (VLMs) for more efficient token usage, especially when dealing with large text contexts.
- Optical Compression: Converts text into images for VLM-compatible input.
- Seamless Integration: Drop-in replacement for OpenAI client with compression support.
- Synchronous and Asynchronous: Supports both sync and async OpenAI operations.
- Flexible Compression: Customize font, size, dimensions, and more.
- Efficient Rendering: Uses fast libraries such as ReportLab and pypdfium2 when available, falling back to PIL otherwise.
- openai
- Pillow (PIL)
- Optional: reportlab, pypdfium2, aggdraw for enhanced performance
- UnLOCC: Synchronous wrapper for OpenAI client.
- AsyncUnLOCC: Asynchronous wrapper for OpenAI client.
Both classes initialize like the OpenAI client: UnLOCC(api_key="...").
Default compression settings (uses built-in Atkinson Hyperlegible Regular font):
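The exact default values aren't documented here, so the following is only an illustrative sketch: the key names and numbers are assumptions, not the library's confirmed schema; only the font name comes from the line above.

```python
# Hypothetical shape of the default settings dict.
# Key names and numeric values are illustrative assumptions;
# only the bundled font is stated in the docs above.
DEFAULT_COMPRESSION = {
    "font": "AtkinsonHyperlegible-Regular.ttf",  # bundled default font
    "font_size": 12,    # assumed
    "width": 1024,      # assumed image width in pixels
    "height": 1024,     # assumed image height in pixels
}
```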
Customize by passing a dict to compressed:
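For example, a message could carry its own settings under the "compressed" key. The key itself comes from this README; the inner field names are assumptions for illustration and may differ from the real schema.

```python
# Sketch: attach custom compression settings to a single message.
# The "compressed" key is from the README; the inner fields
# (font_size, width, height) are assumed names, not confirmed API.
message = {
    "role": "user",
    "content": "A very long document ...",
    "compressed": {
        "font_size": 14,
        "width": 1280,
        "height": 1280,
    },
}
```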
For responses.create, pass compression as a dict or True for defaults.
- client.chat.completions.create(messages, **kwargs): Messages that include a "compressed" key are rendered to images before sending; all other messages pass through as in standard OpenAI usage.
- client.responses.create(input, compression=None, **kwargs): Compresses input if compression is provided.
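A minimal sketch of the responses.create path. The `compression` parameter is taken from the API list above; the model name and input text are placeholders, and the actual call (commented out) would require an UnLOCC client and API key.

```python
# Sketch of a responses.create call with optical compression enabled.
request = {
    "model": "gpt-4o",                        # placeholder model name
    "input": "Large context to compress ...",
    "compression": True,                      # True = defaults; pass a dict to customize
}
# client.responses.create(**request)          # requires an UnLOCC client
```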
- String Content: Directly compressed into images.
- List Content: Processes parts; text parts are compressed, others remain unchanged.
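Concretely, both content shapes can be marked for compression. In this sketch the "compressed" key is from the README and the part shapes follow the standard OpenAI message format; the image part would pass through untouched.

```python
messages = [
    # String content: the whole string is rendered into images.
    {"role": "user", "content": "Long plain-text context ...", "compressed": True},
    # List content: only the text part is compressed; the image part is unchanged.
    {
        "role": "user",
        "compressed": True,
        "content": [
            {"type": "text", "text": "Long plain-text context ..."},
            {"type": "image_url", "image_url": {"url": "https://example.com/figure.png"}},
        ],
    },
]
```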
The library selects the fastest available rendering method:
- ReportLab + pypdfium2 (fastest, recommended).
- ReportLab only.
- PIL fallback (ultra-fast bitmap).
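The selection presumably amounts to an import cascade; here is a minimal sketch mirroring the order above (the function name is ours, not the library's).

```python
def pick_renderer():
    """Return the fastest available rendering backend, in the order listed above."""
    try:
        import reportlab  # noqa: F401
    except ImportError:
        return "pil"  # bitmap fallback via Pillow
    try:
        import pypdfium2  # noqa: F401
    except ImportError:
        return "reportlab"
    return "reportlab+pypdfium2"
```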
Ensure the desired fonts are installed; the library falls back to system fonts if a font is not found.
Through several trials, I've found that it works much better to keep instructions as plain text and compress only the large context, like this:
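For instance (message layout only; the "compressed" flag follows the README's key, and all the text is placeholder):

```python
messages = [
    # Instructions stay as plain, uncompressed text so the model reads them verbatim.
    {"role": "system", "content": "Answer questions using only the attached report."},
    # Only the bulky context is optically compressed.
    {"role": "user", "content": "<tens of thousands of tokens of report text>", "compressed": True},
    {"role": "user", "content": "What were Q3 revenues?"},
]
```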
This approach keeps instructions clear and readable while compressing only the bulky content. Alternatively, use it to compress prior chat history for efficient context management.
MIT License; see LICENSE for details.
Contributions welcome! Please submit issues and pull requests.
For more details on the library and optimal per-model configurations, check out github.com/MaxDevv/UN-LOCC.
Based on UN-LOCC research for optical context compression in VLMs.


