# Documentation
Complete guide to install, configure, and use AI Observer to monitor your AI coding assistants.
## Installation
AI Observer can be installed using Docker, Homebrew (macOS), or as a standalone binary.
### Docker (Recommended)
```shell
docker run -d \
  --name ai-observer \
  -p 4318:4318 \
  -p 8080:8080 \
  -v ai-observer-data:/app/data \
  tobilg/ai-observer:latest
```
Dashboard: http://localhost:8080 | OTLP endpoint: http://localhost:4318
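To confirm the container came up, a quick shell check against both ports works. This is a sketch assuming the default ports from the `docker run` command above:

```shell
# Check whether the OTLP ingestion port answers its health endpoint.
if curl -fsS http://localhost:4318/health > /dev/null 2>&1; then
  OTLP_STATUS="up"
else
  OTLP_STATUS="not reachable"
fi

# Check whether the dashboard port serves anything.
if curl -fsS -o /dev/null http://localhost:8080 2>/dev/null; then
  DASHBOARD_STATUS="up"
else
  DASHBOARD_STATUS="not reachable"
fi

echo "OTLP endpoint: $OTLP_STATUS"
echo "Dashboard:     $DASHBOARD_STATUS"
```

If either port reports "not reachable", check `docker ps` and the container logs before going further.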
#### Using a Local Directory for Data
```shell
# Create a local data directory
mkdir -p ./ai-observer-data

# Run with local volume mount
docker run -d \
  -p 8080:8080 \
  -p 4318:4318 \
  -v $(pwd)/ai-observer-data:/app/data \
  -e AI_OBSERVER_DATABASE_PATH=/app/data/ai-observer.duckdb \
  --name ai-observer \
  tobilg/ai-observer:latest
```

### Homebrew (macOS Apple Silicon)
```shell
brew tap tobilg/ai-observer
brew install ai-observer
ai-observer
```

### Binary Download
Download the latest release from GitHub Releases, then run:

```shell
./ai-observer
```

### Building from Source
```shell
git clone https://github.com/tobilg/ai-observer.git
cd ai-observer
make setup   # Install dependencies
make all     # Build single binary with embedded frontend
./bin/ai-observer
```

## Configuration
AI Observer uses environment variables for configuration.
### Environment Variables
| Variable | Default | Description |
|---|---|---|
| AI_OBSERVER_API_PORT | 8080 | HTTP server port (dashboard + API) |
| AI_OBSERVER_OTLP_PORT | 4318 | OTLP ingestion port |
| AI_OBSERVER_DATABASE_PATH | ./data/ai-observer.duckdb | DuckDB database file path |
| AI_OBSERVER_FRONTEND_URL | http://localhost:5173 | Allowed CORS origin (dev mode) |
| AI_OBSERVER_LOG_LEVEL | INFO | Log level: DEBUG, INFO, WARN, ERROR |
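These variables can be combined into a small launcher script. A sketch — the port, path, and log-level values below are illustrative overrides, not recommendations:

```shell
# Example overrides -- adjust to your environment.
export AI_OBSERVER_API_PORT=9090
export AI_OBSERVER_OTLP_PORT=4319
export AI_OBSERVER_DATABASE_PATH="$HOME/ai-observer/data/ai-observer.duckdb"
export AI_OBSERVER_LOG_LEVEL=DEBUG

# Make sure the database directory exists before starting.
mkdir -p "$(dirname "$AI_OBSERVER_DATABASE_PATH")"

# Start the server with the overrides applied (uncomment to run):
# ./ai-observer serve
```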
## CLI Commands
AI Observer provides several CLI commands for managing your telemetry data.
```shell
ai-observer [command] [options]
```

### Available Commands
| Command | Description |
|---|---|
| serve | Start the OTLP server (default if no command) |
| import | Import local sessions from AI tool files |
| export | Export telemetry data to Parquet files |
| delete | Delete telemetry data from database |
| setup | Show setup instructions for AI tools |
### Import Command
Import historical session data from local AI coding tool files.
```shell
ai-observer import [claude-code|codex|gemini|all] [options]
```

| Option | Description |
|---|---|
| --from DATE | Only import sessions from DATE (YYYY-MM-DD) |
| --to DATE | Only import sessions up to DATE (YYYY-MM-DD) |
| --force | Re-import already imported files |
| --dry-run | Show what would be imported without making changes |
| --purge | Delete existing data in time range before importing |
| --pricing-mode | Cost calculation mode: auto (default), calculate, display |
#### File Locations
| Tool | Default Location |
|---|---|
| Claude Code | ~/.claude/projects/**/*.jsonl |
| Codex CLI | ~/.codex/sessions/*.jsonl |
| Gemini CLI | ~/.gemini/tmp/**/session-*.json |
Override these with the environment variables `AI_OBSERVER_CLAUDE_PATH`, `AI_OBSERVER_CODEX_PATH`, and `AI_OBSERVER_GEMINI_PATH`.
#### Examples
```shell
# Import from all tools
ai-observer import all

# Import Claude data from a specific date range
ai-observer import claude-code --from 2025-01-01 --to 2025-12-31

# Dry run to see what would be imported
ai-observer import all --dry-run

# Force re-import and recalculate costs
ai-observer import claude-code --force --pricing-mode calculate
```

### Export Command
Export telemetry data to portable Parquet files with an optional DuckDB views database.
```shell
ai-observer export [claude-code|codex|gemini|all] --output <directory> [options]
```

| Option | Description |
|---|---|
| --output DIR | Output directory (required) |
| --from DATE | Start date filter (YYYY-MM-DD) |
| --to DATE | End date filter (YYYY-MM-DD) |
| --from-files | Read from raw JSON/JSONL files instead of database |
| --zip | Create single ZIP archive of exported files |
#### Examples
```shell
# Export all data from the database
ai-observer export all --output ./export

# Export Claude data with a date filter
ai-observer export claude-code --output ./export --from 2025-01-01 --to 2025-01-15

# Export to a ZIP archive
ai-observer export all --output ./export --zip
```

### Delete Command
Delete telemetry data from the database by time range.
```shell
ai-observer delete [logs|metrics|traces|all] --from DATE --to DATE [options]
```

#### Examples
```shell
# Delete all data in a date range
ai-observer delete all --from 2025-01-01 --to 2025-01-31

# Delete only logs in a date range
ai-observer delete logs --from 2025-01-01 --to 2025-01-31

# Delete only Claude Code data
ai-observer delete all --from 2025-01-01 --to 2025-01-31 --service claude-code
```

## AI Tool Setup
Configure your AI coding assistants to send telemetry to AI Observer.
### Claude Code
Add the following environment variables to your shell profile (`~/.bashrc` or `~/.zshrc`):
```shell
# Enable telemetry (required)
export CLAUDE_CODE_ENABLE_TELEMETRY=1

# Configure exporters
export OTEL_METRICS_EXPORTER=otlp
export OTEL_LOGS_EXPORTER=otlp

# Set OTLP endpoint (HTTP)
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318

# Set shorter export intervals
export OTEL_METRIC_EXPORT_INTERVAL=10000  # 10 seconds
export OTEL_LOGS_EXPORT_INTERVAL=5000     # 5 seconds
```

### Gemini CLI
Add to `~/.gemini/settings.json`:
```json
{
  "telemetry": {
    "enabled": true,
    "target": "local",
    "useCollector": true,
    "otlpEndpoint": "http://localhost:4318",
    "otlpProtocol": "http",
    "logPrompts": true
  }
}
```

Also add these environment variables (a workaround for Gemini CLI timing issues):
```shell
export OTEL_METRIC_EXPORT_TIMEOUT=10000
export OTEL_LOGS_EXPORT_TIMEOUT=5000
```

### OpenAI Codex CLI
Add to `~/.codex/config.toml`:
```toml
[otel]
log_user_prompt = true  # set to false to redact prompts
exporter = { otlp-http = { endpoint = "http://localhost:4318/v1/logs", protocol = "binary" } }
trace_exporter = { otlp-http = { endpoint = "http://localhost:4318/v1/traces", protocol = "binary" } }
```

Note: Codex CLI exports logs and traces (no metrics).
## API Reference
AI Observer exposes two HTTP servers for ingestion and querying.
### OTLP Ingestion (Port 4318)
Standard OpenTelemetry Protocol endpoints for receiving telemetry data.
| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/traces | Ingest trace spans (protobuf or JSON) |
| POST | /v1/metrics | Ingest metrics (protobuf or JSON) |
| POST | /v1/logs | Ingest logs (protobuf or JSON) |
| POST | / | Auto-detect signal type (Gemini CLI compatibility) |
| GET | /health | Health check |
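Because the ingestion endpoints accept OTLP JSON as well as protobuf, a log record can be sent with plain curl for testing. A sketch — the payload field names follow the OTLP/HTTP JSON encoding, and `manual-test` is just a placeholder service name:

```shell
# Minimal OTLP/JSON log payload: one resource, one log record.
PAYLOAD='{
  "resourceLogs": [{
    "resource": {
      "attributes": [{
        "key": "service.name",
        "value": { "stringValue": "manual-test" }
      }]
    },
    "scopeLogs": [{
      "logRecords": [{
        "severityText": "INFO",
        "body": { "stringValue": "hello from curl" }
      }]
    }]
  }]
}'

# Validate the JSON locally before sending.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload is valid JSON"

# Send it to the ingestion endpoint (assumes AI Observer is running):
# curl -X POST http://localhost:4318/v1/logs \
#   -H 'Content-Type: application/json' -d "$PAYLOAD"
```

After sending, the record should show up under the `manual-test` service in the logs view.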
### Query API (Port 8080)
REST API for querying stored telemetry data.
#### Traces
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/traces | List traces with filtering and pagination |
| GET | /api/traces/recent | Get most recent traces |
| GET | /api/traces/{traceId} | Get a specific trace |
| GET | /api/traces/{traceId}/spans | Get all spans for a trace |
#### Metrics
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/metrics | List metrics with filtering |
| GET | /api/metrics/names | List all metric names |
| GET | /api/metrics/series | Get time series data for a metric |
| POST | /api/metrics/batch-series | Get multiple time series in one request |
#### Logs
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/logs | List logs with filtering and pagination |
| GET | /api/logs/levels | Get log counts by severity level |
#### Dashboards
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/dashboards | List all dashboards |
| POST | /api/dashboards | Create a new dashboard |
| GET | /api/dashboards/default | Get the default dashboard with widgets |
| GET | /api/dashboards/{id} | Get a dashboard by ID |
| PUT | /api/dashboards/{id} | Update a dashboard |
| DELETE | /api/dashboards/{id} | Delete a dashboard |
#### Other Endpoints
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/services | List all services sending telemetry |
| GET | /api/stats | Get aggregate statistics |
| GET | /ws | WebSocket for real-time updates |
| GET | /health | Health check |
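The query endpoints return JSON, so they are easy to script. A sketch that pulls the aggregate stats and falls back to an empty object when the server is not running (the endpoint path comes from the table above; no particular response shape is assumed):

```shell
# Fetch aggregate statistics from the query API (default port 8080).
STATS=$(curl -fsS http://localhost:8080/api/stats 2>/dev/null) || STATS='{}'

# Pretty-print whatever came back.
echo "$STATS" | python3 -m json.tool
```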
## Troubleshooting
### Port Already in Use
Change the ports using environment variables:
```shell
AI_OBSERVER_API_PORT=9090 AI_OBSERVER_OTLP_PORT=4319 ./ai-observer
```

### No Data Appearing in Dashboard
- Verify your AI tool is configured correctly.
- Check that the OTLP endpoint is reachable: `curl http://localhost:4318/health`
- Look for errors in the AI Observer logs.
### CORS Errors in Browser Console
Set the `AI_OBSERVER_FRONTEND_URL` environment variable to match your frontend origin:
```shell
AI_OBSERVER_FRONTEND_URL=http://localhost:3000 ./ai-observer
```

### Need More Help?
Check out the GitHub repository for additional documentation, examples, and community support.