Installation
Install only the providers you use to keep dependencies minimal.

Basic Usage
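The original usage snippet did not survive extraction. Below is a minimal sketch: only `costrace.init()` is confirmed by this document (in the Troubleshooting section); the assumption that `init()` must run before the provider client is created comes from the same section, and the OpenAI calls use that SDK's standard API.

```python
# Minimal usage sketch -- only costrace.init() is confirmed by this
# document; the rest assumes init() instruments clients created after it.
import costrace
from openai import OpenAI

costrace.init()  # must be called before creating LLM clients

client = OpenAI()  # created after init(), so its calls are traced
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```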
Configuration
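The configuration snippet was lost in extraction. As a sketch, configuration presumably happens through `init()`; the `api_key` parameter name is hypothetical, inferred only from the Troubleshooting advice to verify your API key.

```python
import costrace

# Hypothetical configuration sketch -- the api_key parameter name is
# assumed, not confirmed by this document.
costrace.init(
    api_key="YOUR_API_KEY",  # key for the trace dashboard
)
```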
Custom Endpoint
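The endpoint-override snippet was lost in extraction. A sketch of pointing traces at a self-hosted or local backend, assuming a hypothetical `endpoint` keyword on `init()`:

```python
import costrace

# Hypothetical endpoint override for self-hosted deployments.
# The parameter name and URL are illustrative, not confirmed here.
costrace.init(endpoint="http://localhost:8080")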
Point to a self-hosted or local backend.

Supported Providers
OpenAI
- GPT-5 family: gpt-5.2, gpt-5, gpt-5-mini, gpt-5-nano
- GPT-4 family: gpt-4o, gpt-4o-mini, gpt-4.1, gpt-4-turbo, gpt-4
- GPT-3.5: gpt-3.5-turbo
- Other: o3, o4-mini
Anthropic
- Opus: claude-opus-4-6, claude-opus-4-1-20250805, claude-opus-4-20250514
- Sonnet: claude-sonnet-4-6, claude-sonnet-4-5-20250929, claude-sonnet-4-20250514, claude-3-7-sonnet-20250219
- Haiku: claude-haiku-4-5-20251001, claude-3-haiku-20240307
Google Gemini
- Gemini 2.0: gemini-2.0-flash, gemini-2.0-flash-lite
- Gemini 1.5: gemini-1.5-pro, gemini-1.5-flash, gemini-1.5-flash-8b
What Gets Tracked
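The list of tracked fields was lost in extraction. As an illustration only, a trace record for a single call might resemble the following; every field name here is hypothetical, not costrace's actual schema:

```python
# Hypothetical trace payload -- field names are illustrative only,
# not the library's actual schema.
trace = {
    "provider": "openai",      # which backend handled the call
    "model": "gpt-4o-mini",    # model identifier from the request
    "input_tokens": 412,       # prompt size
    "output_tokens": 128,      # completion size
    "latency_ms": 850,         # wall-clock duration of the call
    "cost_usd": 0.00014,       # estimated cost of the call
}

# Derive the total token count from the record.
total_tokens = trace["input_tokens"] + trace["output_tokens"]
print(total_tokens)  # → 540
```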
Every LLM API call made through an instrumented client sends a trace.

Requirements
- Python: 3.13 or higher
- Dependencies: only requests is required. Provider SDKs are optional.
Troubleshooting
No traces appearing in dashboard
- Check that costrace.init() is called before creating LLM clients
- Verify your API key is correct
- Check for error messages in console output