
Installation

Install the SDK from PyPI using pip:
pip install costrace-sdk
This installs the core SDK with no provider dependencies. To include the provider libraries you need, use extras:
pip install costrace-sdk[openai]
If you already have a provider SDK installed (e.g. openai, anthropic, or google-genai), the base pip install costrace-sdk is all you need — Costrace will detect and patch any installed providers automatically.
Requires Python 3.8 or higher.

How It Works

Costrace uses monkey-patching to wrap your existing LLM client methods. When you call costrace.init(), it automatically patches the clients for any installed providers — no code changes needed on your end. Every LLM call is intercepted to capture token usage, latency, and cost, then a trace is sent to the Costrace backend in a background thread so your application is never blocked.
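The wrapping described above can be sketched in a few lines. This is a conceptual illustration only, not the SDK's actual internals: `patch_method`, `_send_trace`, and `FakeClient` are hypothetical names made up for this example.

```python
import threading
import time

def _send_trace(trace):
    # Stand-in for the HTTP POST the real SDK makes in the background.
    pass

def patch_method(cls, method_name):
    """Wrap cls.method_name so every call records latency and status,
    then ships a trace from a daemon thread (conceptual sketch only)."""
    original = getattr(cls, method_name)

    def wrapper(self, *args, **kwargs):
        start = time.perf_counter()
        status = "error"
        try:
            result = original(self, *args, **kwargs)
            status = "success"
            return result
        finally:
            trace = {
                "latency_ms": int((time.perf_counter() - start) * 1000),
                "status": status,
            }
            # Send off the calling thread so the caller is never blocked.
            threading.Thread(target=_send_trace, args=(trace,), daemon=True).start()

    setattr(cls, method_name, wrapper)

# Demo: patch a toy client; calls behave exactly as before, but each
# one now emits a trace as a side effect.
class FakeClient:
    def create(self, prompt):
        return f"echo: {prompt}"

patch_method(FakeClient, "create")
```

Because the wrapper returns the original result unchanged (and re-raises any exception), patched code behaves identically from the caller's point of view.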

Quick Start

import costrace
import openai

# Initialize Costrace once at startup — before creating any LLM clients
costrace.init(api_key="ct_your_api_key")

# Use OpenAI as you normally would — all calls are automatically tracked
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
You must call costrace.init() before creating any LLM client instances. The SDK patches client constructors, so clients created before initialization won’t be tracked.
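To see why initialization order matters, here is a toy model of constructor patching (again, illustrative names only — this is not how the SDK is implemented internally):

```python
class Client:
    def ping(self):
        return "pong"

def init():
    """Stand-in for costrace.init(): wrap the constructor so every *new*
    client gets a traced method. Instances created earlier are untouched."""
    original_init = Client.__init__

    def patched_init(self, *args, **kwargs):
        original_init(self, *args, **kwargs)
        orig_ping = self.ping
        # Shadow the method on the instance with a traced version.
        self.ping = lambda: "traced:" + orig_ping()

    Client.__init__ = patched_init

early = Client()   # created before init() — never traced
init()
late = Client()    # created after init() — traced
```

Here `early.ping()` still returns the plain `"pong"`, while `late.ping()` returns the traced result — which is exactly why clients built before `costrace.init()` go untracked.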

Configuration

Custom Endpoint

Point to a self-hosted or local backend:
costrace.init(
    api_key="ct_your_api_key",
    endpoint="https://your-backend.com/v1/traces"
)

Supported Providers

OpenAI

import costrace
import openai

costrace.init(api_key="ct_your_api_key")

client = openai.OpenAI(api_key="sk-...")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
Supported Models:
  • GPT-5 family: gpt-5.2, gpt-5, gpt-5-mini, gpt-5-nano
  • GPT-4 family: gpt-4o, gpt-4o-mini, gpt-4.1, gpt-4-turbo, gpt-4
  • GPT-3.5: gpt-3.5-turbo
  • Other: o3, o4-mini

Anthropic

import costrace
import anthropic

costrace.init(api_key="ct_your_api_key")

client = anthropic.Anthropic(api_key="sk-ant-...")
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
)
Supported Models:
  • Opus: claude-opus-4-6, claude-opus-4-1-20250805, claude-opus-4-20250514
  • Sonnet: claude-sonnet-4-6, claude-sonnet-4-5-20250929, claude-sonnet-4-20250514, claude-3-7-sonnet-20250219
  • Haiku: claude-haiku-4-5-20251001, claude-3-haiku-20240307

Google Gemini

import costrace
from google import genai

costrace.init(api_key="ct_your_api_key")

client = genai.Client(api_key="AIza...")
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Hello",
)
Supported Models:
  • Gemini 2.0: gemini-2.0-flash, gemini-2.0-flash-lite
  • Gemini 1.5: gemini-1.5-pro, gemini-1.5-flash, gemini-1.5-flash-8b

What Gets Tracked

Every LLM API call sends a trace containing:
{
    "provider": "openai",           # openai | anthropic | gemini
    "model": "gpt-4o",
    "tokens_in": 100,               # Prompt tokens
    "tokens_out": 50,               # Completion tokens
    "latency_ms": 1234,             # Time in milliseconds
    "cost_usd": 0.005,              # Calculated cost
    "status": "success",            # success | error
    "api_key": "ct_...",            # Your Costrace API key
    "error": "..."                  # Error message (if status=error)
}
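The `cost_usd` field is derived from token counts and per-model pricing. A rough sketch of such a calculation is below; the price table here is purely illustrative and is not the SDK's actual pricing data.

```python
# Illustrative per-million-token prices in USD (NOT the SDK's real table).
PRICES = {
    "gpt-4o": {"in": 2.50, "out": 10.00},
}

def estimate_cost_usd(model, tokens_in, tokens_out):
    """Cost = prompt tokens * input price + completion tokens * output price,
    with prices quoted per million tokens."""
    p = PRICES[model]
    return tokens_in / 1_000_000 * p["in"] + tokens_out / 1_000_000 * p["out"]
```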
Traces are sent in a background thread — they don’t block your application. The SDK also registers an atexit handler to wait up to 10 seconds for any pending traces before your process exits.

Requirements

  • Python: 3.8 or higher
  • Core dependency: requests — installed automatically
  • Provider SDKs: Optional, install only the ones you use
  Extra       Installs           Minimum Version
  openai      openai             >=2.21.0
  anthropic   anthropic          >=0.82.0
  gemini      google-genai       >=1.64.0
  all         All of the above   —

Troubleshooting

No traces appearing in dashboard

  1. Check that costrace.init() is called before creating LLM clients
  2. Verify your API key is correct
  3. Check for error messages in console output

Traces not being sent

The SDK silently catches network errors when sending traces. This is intentional — trace failures should never break your application. Check your network connectivity and ensure the Costrace API endpoint is reachable.
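That fire-and-forget behavior looks roughly like the sketch below. It uses only the standard library for illustration (the SDK itself depends on requests), and `post_trace` is a hypothetical name, not a real SDK function:

```python
import json
import urllib.error
import urllib.request

def post_trace(endpoint, trace, timeout=5):
    """Attempt to POST a trace; swallow any network failure so tracing
    can never break the host application (sketch of the behavior
    described above)."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(trace).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=timeout)
        return True
    except (urllib.error.URLError, OSError):
        return False
```

Returning False instead of raising is the key design choice: a dead backend degrades to lost traces, never to a crashed application.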

Provider not being tracked

If a provider isn’t being tracked, make sure:
  1. The provider SDK is installed (e.g. pip install openai)
  2. costrace.init() is called before creating the client instance
  3. You’re using a supported model from the lists above

Source Code

GitHub: github.com/ikotun-dev/costrace
PyPI: pypi.org/project/costrace-sdk