Your friendly guide to building LLM chat apps in Python with less effort and more clarity.
Quick start
Get started in 3 simple steps:
- Choose a model provider, such as ChatOpenAI or ChatAnthropic.
- Visit the provider’s reference page to get set up with the necessary credentials.
- Create the relevant Chat client and start chatting!
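For example, here's a minimal sketch of all three steps using ChatOpenAI (it assumes an `OPENAI_API_KEY` environment variable is set; the `get_current_weather` tool is a hypothetical stub):

```python
from chatlas import ChatOpenAI

def get_current_weather(city: str) -> str:
    """Get the current weather for a given city."""
    # Hypothetical stub -- a real implementation would query a weather API
    return "sunny"

chat = ChatOpenAI()  # reads OPENAI_API_KEY from your environment
chat.register_tool(get_current_weather)
chat.chat("What's the weather like in San Francisco?")
```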
Sample output:

> The current weather in San Francisco is sunny.
Install
Install the latest stable release from PyPI:
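```bash
pip install -U chatlas
```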
Why chatlas?
🚀 Opinionated design: most problems just need the right model, system prompt, and tool calls. Spend more time mastering the fundamentals and less time navigating needless complexity.
🧩 Model agnostic: try different models with minimal code changes.
🌊 Stream output: automatically in notebooks, at the console, and in your favorite IDE. You can also stream responses into bespoke applications such as chatbots (see the streaming sketch after this list).
🛠️ Tool calling: give the LLM “agentic” capabilities by simply writing Python function(s).
🔄 Multi-turn chat: history is retained by default, making the common case easy.
🖼️ Multi-modal input: submit input like images, PDFs, and more.
📂 Structured output: easily extract structured data from unstructured input (see the extraction sketch after this list).
⏱️ Async: supports async operations for efficiency and scale.
✏️ Autocomplete: easily discover and use provider-specific parameters like temperature, max_tokens, and more.
🔍 Inspectable: tools for debugging and monitoring in production.
🔌 Extensible: add new model providers, content types, and more.
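To give a feel for streaming, here's a minimal sketch (it assumes a configured ChatOpenAI client, as in the quick start, and that `stream()` yields text chunks as they arrive):

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()  # assumes OPENAI_API_KEY is set

# stream() returns a generator of text chunks, handy for bespoke UIs
for chunk in chat.stream("Write a haiku about Python."):
    print(chunk, end="")
```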
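And a sketch of structured data extraction with a Pydantic model (the `extract_data()` method name reflects chatlas at the time of writing; check the reference for your installed version):

```python
from pydantic import BaseModel
from chatlas import ChatOpenAI

class Person(BaseModel):
    name: str
    age: int

chat = ChatOpenAI()

# Ask the model to map free-form text onto the Person schema
data = chat.extract_data(
    "My name is Susan and I'm 13 years old.",
    data_model=Person,
)
print(data)  # e.g. {'name': 'Susan', 'age': 13}
```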
Next steps
Next, we’ll learn more about which model providers are available and how to approach picking a particular model. If you already have a model in mind, or just want to see what chatlas can do, skip ahead to hello chat.