
1. Get Your API Key

Sign up at costrace.dev and grab your API key from the dashboard.

2. Install the SDK

pip install "costrace-sdk[openai]"       # OpenAI only
pip install "costrace-sdk[anthropic]"    # Anthropic only
pip install "costrace-sdk[gemini]"       # Gemini only
pip install "costrace-sdk[all]"          # All providers

The quotes keep shells like zsh from treating the square brackets as glob patterns.

3. Initialize Costrace

Initialize the SDK once, near the top of your application:
import costrace

costrace.init(api_key="ct_your_api_key")
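If you'd rather not hardcode the key, read it from the environment at startup. A minimal sketch, assuming an environment variable named COSTRACE_API_KEY (the variable name is our choice for this example, not something the SDK requires):

```python
import os

# COSTRACE_API_KEY is an assumed variable name for this example,
# not documented SDK behavior. Fall back to a placeholder so the
# snippet still runs when the variable is unset.
api_key = os.environ.get("COSTRACE_API_KEY", "ct_your_api_key")

# costrace.init(api_key=api_key)  # same call as above, key sourced from env
```

Keeping the key out of source control also makes it easy to use separate keys per environment.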

4. Use Your LLM SDKs Normally

That’s it. Calls made through the supported provider SDKs are now traced automatically.
import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

5. View Your Traces

Head to costrace.dev/dashboard/logs to see:
  • Cost per call in USD
  • Token counts (input/output)
  • Latency in milliseconds
  • Success/error status
  • Geographic distribution

Local Development

If you’re testing locally and want traces to go to your local backend instead of production:
costrace.init(
    api_key="ct_your_api_key",
    endpoint="http://localhost:8080/v1/traces"
)
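One way to switch between the two without editing code is a small environment toggle. A sketch, assuming a COSTRACE_LOCAL flag of our own invention, and assuming https://api.costrace.dev/v1/traces as the production endpoint (check your dashboard for the actual value):

```python
import os

def trace_endpoint() -> str:
    # COSTRACE_LOCAL is a flag invented for this example, and the
    # production URL below is an assumption, not documented behavior.
    if os.environ.get("COSTRACE_LOCAL"):
        return "http://localhost:8080/v1/traces"
    return "https://api.costrace.dev/v1/traces"

# costrace.init(api_key="ct_your_api_key", endpoint=trace_endpoint())
```

Set COSTRACE_LOCAL=1 in your shell before starting the app to route traces to your local backend.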

Next Steps