I have built an LLM API logger with rate limits and request scoping


Powerful, centralized tools for monitoring, controlling and understanding your LLM API requests.

Understand your LLM usage


Monitor your LLM API requests with powerful logging and insights to understand what is really happening in your app.

Monitor your LLM costs


All providers, all models, and all costs aggregated and visualized in one place. Zero chance of missing anything.

Control your LLM costs


Limit the cost of your LLM requests with versatile rules. Derisk your business and deploy fine-grained, per-user rate limits.


Understand your profitability

Every LLM request you make is ultimately meant to be sold in some form. AI Gateway helps you attribute each request to a specific sale or customer.

How do you use AI Gateway?

Integrating AI Gateway couldn't be easier. All you have to do is prepend our domain to the URL of your existing LLM requests. Everything else stays exactly the same as the original provider's API:

# Usually you would call the provider directly:
#
#   curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent" \
#     ...

# With AI Gateway, prepend our domain. The "_meta" field is optional
# and lets you tag the request, e.g. per product or customer:
curl "https://ai-gateway.app/generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H 'Content-Type: application/json' \
  -X POST \
  -d '{
    "contents": [
      { "parts": [ { "text": "How does AI work?" } ] }
    ],
    "_meta": { "tags": ["my-cool-product"] }
  }'
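The same prefixing should work for any provider the gateway proxies (per the "all providers" claim above). A minimal shell sketch of the URL rewrite — the OpenAI endpoint below is a hypothetical illustration, not a documented list of supported providers:

```shell
# Build the gateway URL by prepending the gateway host to the
# provider's own host and path, with the scheme stripped.
# api.openai.com is an assumed example of "any provider".
PROVIDER_URL="api.openai.com/v1/chat/completions"
GATEWAY_URL="https://ai-gateway.app/${PROVIDER_URL}"
echo "$GATEWAY_URL"
# → https://ai-gateway.app/api.openai.com/v1/chat/completions
```

Headers, request body, and authentication stay exactly as the original provider expects; only the URL changes.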