# Quick Start
Five minutes from install to first response.
## 1. Set your API key
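The library presumably picks up provider credentials from environment variables. The exact variable names below follow each provider's own conventions and are an assumption, not something this guide confirms — see the Configuration page for the authoritative list:

```shell
# Assumed: standard per-provider environment variables.
# Set only the key(s) for the provider(s) you plan to call.
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."
export GROQ_API_KEY="gsk_..."
```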
## 2. Make your first call
```python
from llmgate import completion

resp = completion(
    "gpt-4o-mini",
    [{"role": "user", "content": "What is the capital of France?"}],
)
print(resp.text)
# → "The capital of France is Paris."
```
## 3. Switch providers — literally one word
```python
from llmgate import completion

messages = [{"role": "user", "content": "Explain recursion in one sentence."}]

# OpenAI
resp = completion("gpt-4o-mini", messages)

# Anthropic
resp = completion("claude-3-5-haiku-20241022", messages)

# Gemini
resp = completion("gemini-2.5-flash-lite", messages)

# Groq (fastest)
resp = completion("groq/llama-3.1-8b-instant", messages)

print(resp.text)      # same interface regardless of provider
print(resp.provider)  # "openai" | "anthropic" | "gemini" | "groq"
```
## 4. Async
```python
import asyncio

from llmgate import acompletion

async def main():
    resp = await acompletion(
        "groq/llama-3.3-70b-versatile",
        [{"role": "user", "content": "Hello!"}],
    )
    print(resp.text)

asyncio.run(main())
```
## What's next?
- Configuration — API keys, env vars, per-call overrides
- Completions guide — parameters, responses, provider-specific options
- Vision guide — image inputs across all providers
- Providers — per-provider notes and model lists