Open source · Self-hostable · No vendor lock-in

LLM observability that stays out of your way

Replace one line in your code. Get request logging, cost tracking, and agent tracing across OpenAI, Anthropic, and Gemini — instantly.

Before

base_url = "https://api.openai.com"

After

base_url = "https://proxy.spanlens.io/openai"

That's it. Every request is now tracked.
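
Here's the swap in context: a minimal sketch using the official OpenAI Python SDK. The environment variable name and the auth wiring (your Spanlens key as the client's api_key, with your provider key held server-side) are illustrative; your dashboard shows the exact values.

import os
from openai import OpenAI

# Point the SDK at the Spanlens proxy instead of api.openai.com.
# Illustrative: the Spanlens key rides in api_key; your provider
# key is already stored, encrypted, on the Spanlens side.
client = OpenAI(
    base_url="https://proxy.spanlens.io/openai",
    api_key=os.environ["SPANLENS_API_KEY"],
)

# This call is now logged with cost, latency, and token counts.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)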

Up in 3 minutes

No SDK installation, no code changes beyond the base URL.

1. Add your provider key

Paste your OpenAI, Anthropic, or Gemini API key. We encrypt it at rest.

2. Get your Spanlens key

Copy your Spanlens API key and the proxy base URL for your provider (sketched for Anthropic after these steps).

3. Watch requests flow in

See each request's cost, latency, and token count in your dashboard.
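
The same one-line swap works for every provider. Below is an illustrative sketch for the Anthropic SDK; the /anthropic path segment and the model alias are assumptions, so copy the exact base URL from your dashboard.

import os
import anthropic

# Same idea, different provider: swap the base URL, keep the SDK.
# The /anthropic path and the model alias below are illustrative.
client = anthropic.Anthropic(
    base_url="https://proxy.spanlens.io/anthropic",
    api_key=os.environ["SPANLENS_API_KEY"],
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=64,
    messages=[{"role": "user", "content": "Say hello."}],
)
print(message.content[0].text)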

Everything you need

Cost tracking

Per-request cost breakdown across all providers and models.

Latency monitoring

p50 / p95 latency per model so you can spot regressions instantly.

Agent tracing

Visualize multi-step agent flows as Gantt/waterfall span trees.

Self-hostable

Run on your own infra with a single Docker command. Your data stays yours.

Start observing your LLM calls today

Free plan includes 10,000 requests/month.