A looking glass for the AT Proto Firehose
A collection of looking-glass tools for the AT Proto Network, featuring a high-performance firehose consumer backed by DuckDB.
The Looking Glass Consumer is a Go service that connects to an AT Proto Firehose and processes events in real time.
Key Features:

The consumer replaces the previous multi-backend approach (SQLite + Parquet + BigQuery) with a single, powerful DuckDB backend.
```shell
# Start the consumer
just lg-up

# Rebuild and start
just lg-rebuild

# Stop the consumer
just lg-down

# View logs
just lg-logs

# Run locally (without Docker)
just run-lg
```
```shell
# Start the consumer
docker compose -f cmd/stream/docker-compose.yml up -d

# Stop the consumer
docker compose -f cmd/stream/docker-compose.yml down
```
```shell
go run cmd/stream/main.go --help
```
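Configuration is supplied through environment variables (documented below). As a sketch, a local debug session with a shorter retention window might look like the following; values that differ from the documented defaults are illustrative:

```shell
# Sketch: local debug run. Non-default values here are illustrative choices.
export LG_WS_URL="wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos"
export LG_DEBUG=true                       # verbose logging
export LG_DUCKDB_PATH="./data/looking-glass.db"
export LG_EVT_RECORD_TTL=24h               # keep events/records for one day
# then start the consumer, e.g.: just run-lg
```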
The consumer supports the following environment variables:
- `LG_WS_URL`: WebSocket URL for the firehose (default: `wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos`)
- `LG_PORT`: HTTP server port (default: `8080`)
- `LG_DEBUG`: Enable debug logging (default: `false`)
- `LG_DUCKDB_PATH`: Path to DuckDB database file (default: `/data/looking-glass.db`)
- `LG_MIGRATE_DB`: Run database migrations on startup (default: `true`)
- `LG_EVT_RECORD_TTL`: Time-to-live for events and records (default: `72h`)
- `LG_PLC_RATE_LIMIT`: Rate limit for PLC lookups in requests/second (default: `100`)
- `LG_LOOKUP_ON_COMMIT`: Look up DID docs on commit events (default: `false`)

The consumer exposes the following HTTP endpoints:
`GET /records` - Query records with filters:

- `?did=<DID>` - Filter by repository DID
- `?collection=<NSID>` - Filter by collection
- `?rkey=<RecordKey>` - Filter by record key
- `?seq=<number>` - Filter by firehose sequence
- `?limit=<number>` - Limit results (max 1000)

`GET /events` - Query events:

- `?did=<DID>` - Filter by repository DID
- `?event_type=<type>` - Filter by event type
- `?seq=<number>` - Filter by firehose sequence
- `?limit=<number>` - Limit results (max 1000)

`GET /identities` - Query identities:

- `?did=<DID>` - Filter by DID
- `?handle=<handle>` - Filter by handle
- `?pds=<PDS>` - Filter by PDS endpoint
- `?limit=<number>` - Limit results (max 1000)

`GET /metrics` - Prometheus metrics
`GET /debug/pprof/*` - Go pprof profiling endpoints
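As a sketch of exercising these endpoints with `curl`, assuming a local deployment on the default port; the DID, collection, and handle values below are placeholders, not data from this repo:

```shell
BASE="http://localhost:8080"   # assumed local deployment on the default port

# Ten most recent post records (collection value is illustrative)
curl -s "$BASE/records?collection=app.bsky.feed.post&limit=10"

# Events for a single repository (placeholder DID)
curl -s "$BASE/events?did=did:plc:example&limit=5"

# Resolve an identity by handle (placeholder handle)
curl -s "$BASE/identities?handle=example.bsky.social"

# Prometheus metrics
curl -s "$BASE/metrics"
```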
The consumer stores data in a DuckDB database. By default, data is stored in `./data/looking-glass.db` when running locally, or in a Docker volume when using Docker Compose.
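The database file can also be inspected directly with the DuckDB CLI, assuming it is installed; stop the consumer first, since DuckDB takes an exclusive lock on the file:

```shell
DB="./data/looking-glass.db"   # default local path

# List the tables the consumer has created
duckdb "$DB" "SHOW TABLES;"
```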
The Checkout tool lets you download your AT Proto repository as a directory of JSON files (one per record).
It supports fetching from a configurable host (default: bsky.network).

Usage:
```shell
go run cmd/checkout/main.go <repo-DID>

# With options
go run cmd/checkout/main.go --help
```
Exports and monitors the PLC directory.
Usage:
```shell
# Start with Docker Compose
just plc-up

# Stop
just plc-down
```
```shell
# Build all binaries
just build-all

# Build specific service
just build-lg

# Install dependencies
just deps

# Run tests
just test

# Run tests with coverage
just test-coverage

# Format code
just fmt

# Lint code
just lint

# Tidy dependencies
just tidy
```
If you're upgrading from the previous SQLite/Parquet/BigQuery version:
Data Migration: The DuckDB schema is similar to the old SQLite schema. You can export data from SQLite and import into DuckDB if needed.
Configuration Changes:
- Renamed: `LG_SQLITE_PATH` → `LG_DUCKDB_PATH`
- Removed: `LG_SQLITE_PERSIST`, `LG_PARQUET_DIR`, and the `LG_BIGQUERY_*` variables

API Compatibility: All HTTP endpoints remain the same, ensuring backward compatibility.
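As a sketch of one possible data migration path, DuckDB can attach a SQLite file directly via its `sqlite` extension; the table name `records` and both file paths below are assumptions for illustration, not names taken from this repo:

```shell
# Hypothetical one-table migration from the old SQLite file into DuckDB.
# Adjust paths and table names to match your actual schema.
duckdb ./data/looking-glass.db <<'SQL'
INSTALL sqlite;
LOAD sqlite;
ATTACH './data/looking-glass.sqlite' AS old (TYPE sqlite);
CREATE TABLE records AS SELECT * FROM old.records;
SQL
```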
See LICENSE for details.