# Self-hosting

Run the Spanlens proxy, API, and dashboard on your own infra. Keeps all request bodies, traces, and encrypted provider keys inside your network.
## Who should self-host
- Compliance requirements (SOC 2, HIPAA, data residency) forbid sending LLM bodies through a third-party SaaS
- You already run Supabase in-house
- You expect traffic volumes where per-request pricing on the hosted plan exceeds the cost of running your own infra
## What you need

- A Supabase project. The free tier on supabase.com is enough to start. Plain Postgres is not supported — the server uses `@supabase/supabase-js` directly.
- A 32-byte encryption key. Used for AES-256-GCM encryption of provider keys at rest. Generate with `openssl rand -base64 32`. Back this up. Losing it makes every stored provider key unrecoverable.
- Docker, or anywhere that can run a Node 22 container (Fly.io, Railway, ECS, Cloud Run, plain VPS).
- A reverse proxy with HTTPS in front (Caddy, nginx, Cloudflare Tunnel). The containers speak HTTP on ports 3000 (web) and 3001 (server).
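Before committing the encryption key to your secret manager, it is worth confirming it really decodes to 32 bytes. A quick sketch using standard OpenSSL and coreutils:

```shell
# Generate a candidate ENCRYPTION_KEY and verify it decodes to exactly 32 bytes
KEY=$(openssl rand -base64 32)
printf '%s' "$KEY" | base64 -d | wc -c   # should print 32
```

AES-256 requires exactly 32 bytes of key material; a truncated or re-encoded key fails at decrypt time rather than merely weakening security.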
## Walkthrough

### Option A — docker-compose (recommended)
The easiest way to self-host. Pulls pre-built images from GHCR and runs both the dashboard (web) and the proxy / API server together. No source code needed.
#### 1. Apply the database schema

Open your Supabase project → SQL Editor → New query, paste the contents of `supabase/init.sql`, and click Run. No CLI needed.
Prefer the terminal? Use psql instead:
```bash
curl -o init.sql https://raw.githubusercontent.com/spanlens/Spanlens/main/supabase/init.sql
psql "postgresql://postgres:<password>@db.<ref>.supabase.co:5432/postgres" -f init.sql
```

#### 2. Create a `.env` file

```bash
# Required
NEXT_PUBLIC_SUPABASE_URL=https://<ref>.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJ...
SUPABASE_URL=https://<ref>.supabase.co
SUPABASE_ANON_KEY=eyJ...
SUPABASE_SERVICE_ROLE_KEY=eyJ... # keep server-side only
ENCRYPTION_KEY=$(openssl rand -base64 32) # back this up — see below
CRON_SECRET=$(openssl rand -hex 16)

# ClickHouse — request logs are stored here, NOT Supabase
# The bundled docker-compose ships a clickhouse container; these defaults
# match it. Point at ClickHouse Cloud (or any managed ClickHouse) for prod.
CLICKHOUSE_URL=http://clickhouse:8123
CLICKHOUSE_USER=spanlens
CLICKHOUSE_PASSWORD=$(openssl rand -hex 16)
CLICKHOUSE_DB=spanlens

# Optional — for invite emails
# WEB_URL=https://your-domain.com
# RESEND_API_KEY=re_...
# RESEND_FROM=Spanlens <no-reply@your-domain.com>
```

#### 3. Start
```bash
curl -o docker-compose.yml https://raw.githubusercontent.com/spanlens/Spanlens/main/docker-compose.yml
docker compose up -d
```

- Dashboard: http://localhost:3000
- API / proxy: http://localhost:3001
- ClickHouse (analytics, internal): http://localhost:8123
Three containers come up: `web`, `server`, and `clickhouse`. The server waits for ClickHouse's healthcheck before accepting traffic. The web container reads `NEXT_PUBLIC_*` values from the environment at startup and patches them into the pre-built bundle automatically — no rebuild needed.
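Once the containers report healthy, a quick smoke test from the host can confirm everything is listening (ports as listed above; `/ping` is ClickHouse's built-in HTTP probe, `/health` is the server endpoint shown later in this guide):

```shell
# Server API — expect {"status":"ok"}
curl -fsS http://localhost:3001/health
# Dashboard — expect HTTP 200 once Next.js is up
curl -fsS -o /dev/null -w '%{http_code}\n' http://localhost:3000
# ClickHouse — expect "Ok."
curl -fsS http://localhost:8123/ping
```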
#### 4. Apply the ClickHouse schema

The `requests` table needs to exist before the server can write logs. Run the migration script once after the ClickHouse container is healthy:
```bash
# Clone or fetch the migrations folder
curl -L https://github.com/spanlens/Spanlens/archive/main.tar.gz | tar xz --strip-components=1 spanlens-main/clickhouse

# Apply (idempotent — re-running is safe)
CLICKHOUSE_URL=http://localhost:8123 \
CLICKHOUSE_USER=spanlens CLICKHOUSE_PASSWORD=<password> CLICKHOUSE_DB=spanlens \
npx -y tsx clickhouse/apply.ts
```

### Option B — server only
If you run the dashboard separately (at spanlens.io or your own Next.js deployment), you can run just the API server.
#### 1. Create a Supabase project

Sign in at supabase.com, create a project, wait for it to provision (~1 minute). From Project Settings → API, copy:

- Project URL → `SUPABASE_URL`
- anon public key → `SUPABASE_ANON_KEY`
- service_role secret key → `SUPABASE_SERVICE_ROLE_KEY` (server-side only)
#### 2. Apply the schema

Same as Option A step 1 — open SQL Editor → New query, paste `init.sql`, run.
#### 3. Provision ClickHouse
Request logs live in ClickHouse, not Supabase. Two options:
- ClickHouse Cloud (recommended for production) — sign up at clickhouse.cloud, create a service, copy the HTTPS endpoint + credentials.
- Self-hosted ClickHouse — run `clickhouse/clickhouse-server:24.10-alpine` with persistent volumes (see the bundled `docker-compose.yml` for the canonical setup).
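For the self-hosted route, a minimal single-node invocation might look like this (the container name and volume name are illustrative, `<password>` is yours to fill in; the `CLICKHOUSE_*` variables are first-boot initialization settings of the official image):

```shell
# Single-node ClickHouse with a named volume so data survives restarts
docker run -d --name spanlens-clickhouse \
  -p 8123:8123 \
  -v clickhouse-data:/var/lib/clickhouse \
  -e CLICKHOUSE_USER=spanlens \
  -e CLICKHOUSE_PASSWORD=<password> \
  -e CLICKHOUSE_DB=spanlens \
  clickhouse/clickhouse-server:24.10-alpine
```

This is a sketch of a deployment command, not a hardened setup; the bundled `docker-compose.yml` remains the canonical reference.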
Apply the schema before starting the server:
```bash
curl -L https://github.com/spanlens/Spanlens/archive/main.tar.gz | tar xz --strip-components=1 spanlens-main/clickhouse

CLICKHOUSE_URL=https://<host>:8443 \
CLICKHOUSE_USER=default CLICKHOUSE_PASSWORD=<password> CLICKHOUSE_DB=spanlens \
npx -y tsx clickhouse/apply.ts
```

#### 4. Run the server

```bash
docker run -d --name spanlens-server \
  -p 3001:3001 \
  -e SUPABASE_URL="https://<ref>.supabase.co" \
  -e SUPABASE_ANON_KEY="eyJ..." \
  -e SUPABASE_SERVICE_ROLE_KEY="eyJ..." \
  -e CLICKHOUSE_URL="https://<host>:8443" \
  -e CLICKHOUSE_USER="default" \
  -e CLICKHOUSE_PASSWORD="<password>" \
  -e CLICKHOUSE_DB="spanlens" \
  -e ENCRYPTION_KEY="$(openssl rand -base64 32)" \
  -e CRON_SECRET="$(openssl rand -hex 16)" \
  ghcr.io/spanlens/spanlens-server:latest
```

```bash
curl http://localhost:3001/health
# {"status":"ok"}
```

#### 5. Point your SDK at the self-hosted proxy
Option 1 — CLI wizard (automates the step below):

```bash
npx @spanlens/cli@latest init --server-url https://spanlens.yourcompany.com
```

Validates your key against your server, patches existing `new OpenAI()` / `new Anthropic()` calls, and writes `SPANLENS_BASE_URL` to `.env.local` automatically.
Option 2 — manual:
import { createOpenAI } from '@spanlens/sdk/openai'
const openai = createOpenAI({
baseURL: 'https://spanlens.yourcompany.com/proxy/openai/v1',
})tsEnvironment variables
| Variable | Required | Description |
|---|---|---|
| `SUPABASE_URL` | Yes | Your Supabase project URL (`https://<ref>.supabase.co`) |
| `SUPABASE_SERVICE_ROLE_KEY` | Yes | Service role key — used by the server to write to Supabase past RLS (orgs, projects, traces, etc.) |
| `SUPABASE_ANON_KEY` | Yes | Anon key — used for RLS-protected reads from dashboard queries |
| `CLICKHOUSE_URL` | Yes | HTTP(S) endpoint of your ClickHouse cluster (e.g. `https://<host>:8443` for Cloud, or `http://clickhouse:8123` for the bundled container) |
| `CLICKHOUSE_USER` | Yes | ClickHouse user (`default` for Cloud, `spanlens` for the bundled container) |
| `CLICKHOUSE_PASSWORD` | Yes | ClickHouse password |
| `CLICKHOUSE_DB` | Yes | Database name. Default `spanlens`. The `requests` table lives here |
| `ENCRYPTION_KEY` | Yes | 32-byte base64 key for AES-256-GCM provider-key encryption at rest |
| `NEXT_PUBLIC_SUPABASE_URL` | Yes (web only) | Same as `SUPABASE_URL` — exposed to the browser for Supabase Auth |
| `NEXT_PUBLIC_SUPABASE_ANON_KEY` | Yes (web only) | Same as `SUPABASE_ANON_KEY` — exposed to the browser for Supabase Auth |
| `WEB_URL` | Yes (multi-user) | Base URL of your dashboard (e.g. `https://spanlens.example.com`). Used to build the accept link in invitation emails. Falls back to `http://localhost:3000` if unset — fine for local dev, broken in production |
| `RESEND_API_KEY` | No | Resend API token for outbound email (invitations). When unset, emails are skipped silently and the invite endpoint returns the accept link as `devAcceptUrl` so an admin can hand-deliver it |
| `RESEND_FROM` | No | Sender header. Default `Spanlens <notifications@spanlens.io>`. Override with a verified sender on your own domain to avoid spam filters |
| `PORT` | No | HTTP port for the server (default 3001) |
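Before starting the server it can be worth failing fast on missing configuration. A small POSIX-sh sketch (the variable list mirrors the table above; `require` is a helper defined here, not part of Spanlens):

```shell
# require VAR... : report any variables that are unset or empty
require() {
  missing=""
  for v in "$@"; do
    eval "val=\${$v:-}"            # indirect lookup, POSIX sh compatible
    [ -n "$val" ] || missing="$missing $v"
  done
  [ -z "$missing" ] || { echo "missing:$missing"; return 1; }
}

require SUPABASE_URL SUPABASE_ANON_KEY SUPABASE_SERVICE_ROLE_KEY \
        CLICKHOUSE_URL CLICKHOUSE_USER CLICKHOUSE_PASSWORD CLICKHOUSE_DB \
        ENCRYPTION_KEY \
  && echo "env ok" || echo "fill in the variables above before starting"
```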
## Upgrading

```bash
# Pull the latest images and restart
docker compose pull && docker compose up -d

# If a new release added migrations, re-run init.sql in SQL Editor
# (all statements use CREATE IF NOT EXISTS / ALTER IF NOT EXISTS — safe to re-run)
```

We ship semver tags (`ghcr.io/spanlens/spanlens-server:0.3.0`, `ghcr.io/spanlens/spanlens-web:0.3.0`). Pin a tag in production and upgrade deliberately.
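A low-touch way to pin is a Compose override file, which Compose merges over the base `docker-compose.yml` automatically (the service names `server` and `web` are an assumption here; check they match the bundled file):

```shell
# docker-compose.override.yml takes precedence over docker-compose.yml
cat > docker-compose.override.yml <<'EOF'
services:
  server:
    image: ghcr.io/spanlens/spanlens-server:0.3.0
  web:
    image: ghcr.io/spanlens/spanlens-web:0.3.0
EOF
```

After writing the override, `docker compose up -d` uses the pinned tags; delete the file to go back to `latest`.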
## Dashboard options

- docker-compose (recommended) — pulls `ghcr.io/spanlens/spanlens-web:latest` alongside the server. Full self-hosting with no source required. See Option A above.
- Use the hosted dashboard at spanlens.io pointed at your self-hosted backend. Log in, then override the API base URL in Settings.
- Build from source — clone the repo and `docker compose up -d --build` to build both images locally.
## Backups

Two data stores, two backup strategies:

- Supabase Postgres — holds the transactional crown jewels (orgs, projects, provider keys, subscriptions, prompts, evals, traces). Standard `pg_dump` against your Supabase DB covers you. Catastrophic if lost.
- ClickHouse — holds request logs only. Append-only telemetry. Options in order of effort:
  - ClickHouse Cloud automatic backups (1-day RPO, same region) — set and forget.
  - `BACKUP TO S3` on a schedule — `BACKUP TABLE requests TO S3('s3://bucket/path')`.
  - Accept the loss — historical logs are observability, not source-of-truth. Loss costs you the past N days of dashboards, not customer trust.
- `ENCRYPTION_KEY` (outside any DB) — back this up in your secret manager (AWS Secrets Manager, GCP Secret Manager, HashiCorp Vault). Without it, encrypted provider keys are unrecoverable.
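For the Postgres side, a dump can be as simple as the following sketch (connection string format as used earlier in this guide; scheduling it, e.g. via cron, is up to you):

```shell
# Custom-format dump; pg_restore can later restore individual tables from it
pg_dump "postgresql://postgres:<password>@db.<ref>.supabase.co:5432/postgres" \
  --format=custom --file="spanlens-$(date +%F).dump"
```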
## Known limitations

- Plain Postgres isn't supported. The server imports `@supabase/supabase-js` directly. Moving to a thin abstraction layer is on the roadmap but not a launch blocker.
- ClickHouse is required. The server's logger and analytics helpers all assume a reachable ClickHouse instance. A Postgres-only mode is not provided — the dual-store architecture is intentional (OLAP workload, columnar storage, faster aggregates).
- Operational tooling is minimal. No built-in monitoring, no migration rollback tool, no backup cron. DIY for now.
Found a problem? Open an issue on GitHub.