Chatlas: Guide to building LLM apps with less effort and more clarity



Your friendly guide to building LLM chat apps in Python with less effort and more clarity.

Quick start

Get started in 3 simple steps:

  1. Choose a model provider, such as ChatOpenAI or ChatAnthropic.
  2. Visit the provider’s reference page to get set up with the necessary credentials.
  3. Create the relevant Chat client and start chatting!
```python
from chatlas import ChatOpenAI

# Optional (but recommended) model and system_prompt
chat = ChatOpenAI(
    model="gpt-4.1-mini",
    system_prompt="You are a helpful assistant.",
)

# Optional tool registration
def get_current_weather(lat: float, lng: float):
    "Get the current weather for a given location."
    return "sunny"

chat.register_tool(get_current_weather)

# Send user prompt to the model for a response.
chat.chat("How's the weather in San Francisco?")
```

```
# 🛠️ tool request
get_current_weather(37.7749, -122.4194)

# ✅ tool result
sunny

The current weather in San Francisco is sunny.
```

Install

Install the latest stable release from PyPI:
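The package is published on PyPI under the name `chatlas`, so a standard pip workflow should work:

```shell
pip install -U chatlas
```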

Why chatlas?

🚀 Opinionated design: most problems just need the right model, system prompt, and tool calls. Spend more time mastering the fundamentals and less time navigating needless complexity.

🧩 Model agnostic: try different models with minimal code changes.

🌊 Stream output: rendered automatically in notebooks, at the console, and in your favorite IDE. You can also stream responses into bespoke applications (e.g., chatbots).

🛠️ Tool calling: give the LLM “agentic” capabilities simply by writing Python functions.

🔄 Multi-turn chat: history is retained by default, making the common case easy.

🖼️ Multi-modal input: submit input such as images, PDFs, and more.

📂 Structured output: easily extract structure from unstructured input.

⏱️ Async: supports async operations for efficiency and scale.

✏️ Autocomplete: easily discover and use provider-specific parameters like temperature, max_tokens, and more.

🔍 Inspectable: tools for debugging and monitoring in production.

🔌 Extensible: add new model providers, content types, and more.
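To illustrate the tool-calling point above: chatlas can derive a tool's schema from an ordinary Python function's type hints and docstring. As a rough standard-library sketch of what gets extracted (not chatlas internals), consider:

```python
import inspect

def get_current_weather(lat: float, lng: float):
    "Get the current weather for a given location."
    return "sunny"

# The pieces a tool schema is typically built from:
# parameter names and types from the signature...
sig = inspect.signature(get_current_weather)
params = {name: p.annotation.__name__ for name, p in sig.parameters.items()}
print(params)  # {'lat': 'float', 'lng': 'float'}

# ...and the tool description from the docstring.
print(inspect.getdoc(get_current_weather))  # Get the current weather for a given location.
```

This is why the quick start only needs `chat.register_tool(get_current_weather)`: the annotations and docstring already carry everything the model needs to decide when and how to call the function.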

Next steps

Next we’ll learn more about what model providers are available and how to approach picking a particular model. If you already have a model in mind, or just want to see what chatlas can do, skip ahead to hello chat.
