Installation

Install only the providers you use to keep dependencies minimal:
pip install "costrace-sdk[openai]"     # OpenAI only
pip install "costrace-sdk[anthropic]"  # Anthropic only
pip install "costrace-sdk[gemini]"     # Gemini only
pip install "costrace-sdk[all]"        # All providers

(The quotes prevent shells like zsh from treating the brackets as glob patterns.)

Basic Usage

import costrace
import openai

# Initialize Costrace once at startup
costrace.init(api_key="ct_your_api_key")

# Use OpenAI normally — all calls are tracked
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

Configuration

Custom Endpoint

Point to a self-hosted or local backend:
costrace.init(
    api_key="ct_your_api_key",
    endpoint="https://your-backend.com/v1/traces"
)

Supported Providers

OpenAI

import openai

client = openai.OpenAI(api_key="sk-...")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
Supported Models:
  • GPT-5 family: gpt-5.2, gpt-5, gpt-5-mini, gpt-5-nano
  • GPT-4 family: gpt-4o, gpt-4o-mini, gpt-4.1, gpt-4-turbo, gpt-4
  • GPT-3.5: gpt-3.5-turbo
  • Other: o3, o4-mini

Anthropic

import anthropic

client = anthropic.Anthropic(api_key="sk-ant-...")
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
)
Supported Models:
  • Opus: claude-opus-4-6, claude-opus-4-1-20250805, claude-opus-4-20250514
  • Sonnet: claude-sonnet-4-6, claude-sonnet-4-5-20250929, claude-sonnet-4-20250514, claude-3-7-sonnet-20250219
  • Haiku: claude-haiku-4-5-20251001, claude-3-haiku-20240307

Google Gemini

from google import genai

client = genai.Client(api_key="AIza...")
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Hello",
)
Supported Models:
  • Gemini 2.0: gemini-2.0-flash, gemini-2.0-flash-lite
  • Gemini 1.5: gemini-1.5-pro, gemini-1.5-flash, gemini-1.5-flash-8b

What Gets Tracked

Every LLM API call sends a trace containing:
{
    "provider": "openai",           # openai | anthropic | gemini
    "model": "gpt-4o",
    "tokens_in": 100,               # Prompt tokens
    "tokens_out": 50,               # Completion tokens
    "latency_ms": 1234,             # Time in milliseconds
    "cost_usd": 0.005,              # Calculated cost
    "status": "success",            # success | error
    "api_key": "ct_...",            # Your Costrace API key
    "error": "..."                  # Error message (if status=error)
}
Traces are sent from a background thread, so they never block your application.
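To make the schema and the non-blocking delivery concrete, here is a minimal sketch of how such a payload could be assembled and handed off to a background worker. This is illustrative only, not the SDK's internals: `build_trace`, `_worker`, and the queue are hypothetical names, and the `sink` stands in for whatever HTTP POST the real SDK performs.

```python
import queue
import threading

def build_trace(provider, model, tokens_in, tokens_out,
                latency_ms, cost_usd, status, api_key, error=None):
    """Assemble a payload matching the trace schema above (illustrative)."""
    trace = {
        "provider": provider,
        "model": model,
        "tokens_in": tokens_in,
        "tokens_out": tokens_out,
        "latency_ms": latency_ms,
        "cost_usd": cost_usd,
        "status": status,
        "api_key": api_key,
    }
    if error is not None:
        trace["error"] = error  # only present when status == "error"
    return trace

# Fire-and-forget delivery: a daemon worker drains a queue, so the
# caller enqueues a trace and returns immediately.
_queue: "queue.Queue[dict]" = queue.Queue()

def _worker(sink):
    while True:
        trace = _queue.get()
        try:
            sink(trace)      # in the real SDK, an HTTP POST to the endpoint
        except Exception:
            pass             # a lost trace must never break the app
        finally:
            _queue.task_done()

sent = []  # stand-in sink: collect traces instead of POSTing them
threading.Thread(target=_worker, args=(sent.append,), daemon=True).start()

_queue.put(build_trace("openai", "gpt-4o", 100, 50, 1234, 0.005,
                       "success", "ct_example"))
_queue.join()  # wait here only so the demo can inspect `sent`
```

A real application would never call `join()`; the daemon thread simply drains the queue as traces arrive.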

Requirements

  • Python: 3.13 or higher
  • Dependencies: Only requests is required. Provider SDKs are optional.

Troubleshooting

No traces appearing in dashboard

  1. Check that costrace.init() is called before creating LLM clients
  2. Verify your API key is correct
  3. Check for error messages in console output

Traces not being sent

The SDK silently catches network errors when sending traces. This is intentional: a failed trace should never break your application.
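The swallow-errors pattern described above can be sketched in a few lines. This is a minimal illustration of the behavior, not the SDK's actual code; `send_trace` and `flaky_post` are hypothetical names.

```python
def send_trace(trace, post):
    """Send one trace, swallowing any delivery failure (illustrative)."""
    try:
        post(trace)
        return True
    except Exception:
        return False  # a dropped trace is better than a crashed app

def flaky_post(trace):
    raise ConnectionError("network down")

ok = send_trace({"model": "gpt-4o"}, flaky_post)
# ok is False, and no exception propagates to the caller
```

The trade-off is that delivery problems are invisible by default, which is exactly why the dashboard checks in the previous section matter.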

Source Code

  • GitHub: github.com/ikotun-dev/costrace
  • PyPI: pypi.org/project/costrace-sdk