Merged
5 changes: 5 additions & 0 deletions .env.example
@@ -15,3 +15,8 @@ FETCH_WORKERS=10

# Number of blocks to fetch per RPC batch request (reduces HTTP round-trips)
RPC_BATCH_SIZE=20

# API settings
# API_HOST=127.0.0.1
# API_PORT=3000
# API_DB_MAX_CONNECTIONS=20
3 changes: 1 addition & 2 deletions .github/workflows/ci.yml
@@ -110,7 +110,6 @@ jobs:
image-tag: main
apps: |
[
{"name": "atlas-oss-indexer", "context": "backend", "dockerfile": "backend/Dockerfile", "target": "indexer"},
{"name": "atlas-oss-api", "context": "backend", "dockerfile": "backend/Dockerfile", "target": "api"},
{"name": "atlas-oss-server", "context": "backend", "dockerfile": "backend/Dockerfile", "target": "server"},
{"name": "atlas-oss-frontend", "context": "frontend", "dockerfile": "frontend/Dockerfile", "target": ""}
]
28 changes: 4 additions & 24 deletions .github/workflows/docker.yml
@@ -11,8 +11,8 @@ concurrency:
cancel-in-progress: true

jobs:
build-indexer:
name: Indexer Docker (linux/amd64)
build-backend:
name: Backend Docker (linux/amd64)
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -21,31 +21,11 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3

- name: Build indexer image
- name: Build backend image
uses: docker/build-push-action@v6
with:
context: backend
target: indexer
platforms: linux/amd64
cache-from: type=gha,scope=backend
cache-to: type=gha,scope=backend,mode=max
outputs: type=cacheonly

build-api:
name: API Docker (linux/amd64)
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4

- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3

- name: Build api image
uses: docker/build-push-action@v6
with:
context: backend
target: api
target: server
platforms: linux/amd64
cache-from: type=gha,scope=backend
cache-to: type=gha,scope=backend,mode=max
58 changes: 34 additions & 24 deletions CLAUDE.md
@@ -6,8 +6,7 @@ Atlas is an EVM blockchain explorer (indexer + API + frontend) for ev-node based

| Layer | Tech |
|---|---|
| Indexer | Rust, tokio, sqlx, alloy, tokio-postgres (binary COPY) |
| API | Rust, Axum, sqlx, tower-http |
| Server | Rust, tokio, Axum, sqlx, alloy, tokio-postgres (binary COPY), tower-http |
| Database | PostgreSQL (partitioned tables) |
| Frontend | React, TypeScript, Vite, Tailwind CSS, Bun |
| Deployment | Docker Compose, nginx (unprivileged, port 8080→80) |
@@ -20,9 +19,13 @@ atlas/
│ ├── Cargo.toml # Workspace — all dep versions live here
│ ├── crates/
│ │ ├── atlas-common/ # Shared types, DB pool, error handling, Pagination
│ │ ├── atlas-indexer/ # Block fetcher, batch writer, metadata fetcher
│ │ └── atlas-api/ # Axum REST API
│ └── migrations/ # sqlx migrations (run at startup by both crates)
│ │ └── atlas-server/ # Unified server: indexer + API in a single binary
│ │ └── src/
│ │ ├── main.rs # Startup: migrations, pools, spawn indexer, serve API
│ │ ├── config.rs # Unified config from env vars
│ │ ├── indexer/ # Block fetcher, batch writer, metadata fetcher
│ │ └── api/ # Axum REST API + SSE handlers
│ └── migrations/ # sqlx migrations (run once at startup)
├── frontend/
│ ├── src/
│ │ ├── api/ # Typed API clients (axios)
@@ -31,18 +34,24 @@ atlas/
│ │ ├── pages/ # One file per page/route
│ │ └── types/ # Shared TypeScript types
│ ├── Dockerfile # Multi-stage: oven/bun:1 → nginx-unprivileged:alpine
│ └── nginx.conf # SPA routing + /api/ reverse proxy to atlas-api:3000
│ └── nginx.conf # SPA routing + /api/ reverse proxy to atlas-server:3000
├── docker-compose.yml
└── .env.example
```

## Key Architectural Decisions

### Single binary
The indexer and API run as concurrent tokio tasks in a single `atlas-server` binary. The indexer pushes block events directly to SSE subscribers via an in-process `broadcast::Sender<()>`. If the indexer task fails, the API keeps running (graceful degradation); the indexer retries with exponential backoff.

### Database connection pools
- **API pool**: 20 connections, `statement_timeout = '10s'` set via `after_connect` hook
- **Indexer pool**: 20 connections (configurable via `DB_MAX_CONNECTIONS`), same timeout
- **API pool**: 20 connections (configurable via `API_DB_MAX_CONNECTIONS`), `statement_timeout = '10s'`
- **Indexer pool**: 20 connections (configurable via `DB_MAX_CONNECTIONS`), same timeout — kept separate so API load can't starve the indexer
- **Binary COPY client**: separate `tokio-postgres` direct connection (bypasses sqlx pool), conditional TLS based on `sslmode` in DATABASE_URL
- **Migrations**: run with a dedicated 1-connection pool with **no** statement_timeout (index builds can take longer than 10s)
- **Migrations**: run once with a dedicated 1-connection pool with **no** statement_timeout (index builds can take longer than 10s)

### SSE live updates
The indexer publishes block updates through `broadcast::Sender<()>`. SSE handler (`GET /api/events`) subscribes to this broadcast channel and refreshes independently of the database write path.

### Pagination — blocks table
The blocks table can have 80M+ rows. `OFFSET` on large pages causes 30s+ full index scans. Instead:
@@ -57,29 +66,28 @@ let cursor = (total_count - 1) - (pagination.page.saturating_sub(1) as i64) * li
### Row count estimation
For large tables (transactions, addresses), use `pg_class.reltuples` instead of `COUNT(*)`:
```rust
// handlers/mod.rs — get_table_count(pool, "table_name")
// handlers/mod.rs — get_table_count(pool)
// Partition-aware: sums child reltuples, falls back to parent
// For tables < 100k rows: falls back to exact COUNT(*)
```

### HTTP timeout
`TimeoutLayer::with_status_code(StatusCode::REQUEST_TIMEOUT, Duration::from_secs(10))` wraps all routes — returns 408 if any handler exceeds 10s.
`TimeoutLayer::with_status_code(StatusCode::REQUEST_TIMEOUT, Duration::from_secs(10))` wraps all routes except SSE — returns 408 if any handler exceeds 10s.

### AppState (API)
```rust
pub struct AppState {
pub pool: PgPool,
pub pool: PgPool, // API pool only
pub block_events_tx: broadcast::Sender<()>, // shared with indexer
pub rpc_url: String,
pub solc_path: String,
pub admin_api_key: Option<String>,
pub chain_id: u64, // fetched from RPC once at startup via eth_chainId
pub chain_name: String, // from CHAIN_NAME env var, defaults to "Unknown"
}
```

### Frontend API client
- Base URL: `/api` (proxied by nginx to `atlas-api:3000`)
- `GET /api/status` → `{ block_height, indexed_at }` — single key-value lookup from `indexer_state`, sub-ms. This is the **only** chain status endpoint; there is no separate "full chain info" endpoint. Used by the navbar as a polling fallback when SSE is disconnected.
- Base URL: `/api` (proxied by nginx to `atlas-server:3000`)
- `GET /api/status` → `{ block_height, indexed_at }` — single key-value lookup from `indexer_state`, sub-ms. Used by the navbar as a polling fallback when SSE is disconnected.
- `GET /api/events` → SSE stream of `new_block` events, one per block in order. Primary live-update path for navbar counter and blocks page. Falls back to `/api/status` polling on disconnect.

## Important Conventions
@@ -88,7 +96,7 @@ pub struct AppState {
- **SQL**: never use `OFFSET` for large tables — use keyset/cursor pagination
- **Migrations**: use `run_migrations(&database_url)` (not `&pool`) to get a timeout-free connection
- **Frontend**: uses Bun (not npm/yarn). Lockfile is `bun.lock` (text, Bun ≥ 1.2). Build with `bunx vite build` (skips tsc type check).
- **Docker**: frontend image uses `nginxinc/nginx-unprivileged:alpine` (non-root, port 8080). API/indexer use `alpine` with `ca-certificates`.
- **Docker**: frontend image uses `nginxinc/nginx-unprivileged:alpine` (non-root, port 8080). Server uses `alpine` with `ca-certificates`.
- **Tests**: add unit tests for new logic in a `#[cfg(test)] mod tests` block in the same file. Run with `cargo test --workspace`.
- **Commits**: authored by the user only — no Claude co-author lines.

@@ -99,29 +107,31 @@
| Var | Used by | Default |
|---|---|---|
| `DATABASE_URL` | all | required |
| `RPC_URL` | indexer, api | required |
| `CHAIN_NAME` | api | `"Unknown"` |
| `DB_MAX_CONNECTIONS` | indexer | `20` |
| `RPC_URL` | server | required |
| `DB_MAX_CONNECTIONS` | indexer pool | `20` |
| `API_DB_MAX_CONNECTIONS` | API pool | `20` |
| `BATCH_SIZE` | indexer | `100` |
| `FETCH_WORKERS` | indexer | `10` |
| `ADMIN_API_KEY` | api | none |
| `ADMIN_API_KEY` | API | none |
| `API_HOST` | API | `127.0.0.1` |
| `API_PORT` | API | `3000` |

## Running Locally

```bash
# Start full stack
docker compose up -d

# Rebuild a single service after code changes
docker compose build atlas-api && docker compose up -d atlas-api
# Rebuild after code changes
docker compose build atlas-server && docker compose up -d atlas-server

# Backend only (no Docker)
cd backend && cargo build --workspace
```

## Common Gotchas

- `get_table_count(pool)` — no longer takes a table-name argument (see the updated comment in handlers/mod.rs)
- `run_migrations` takes `&str` (database URL), not `&PgPool`
- The blocks cursor uses `pagination.limit()` (clamped), not `pagination.offset()` — they diverge when client sends `limit > 100`
- `bun.lock` not `bun.lockb` — Bun ≥ 1.2 uses text format
- SSE uses in-process broadcast, not PG NOTIFY — no PgListener needed
7 changes: 2 additions & 5 deletions Justfile
@@ -26,11 +26,8 @@ backend-clippy:
backend-test:
cd backend && cargo test --workspace --all-targets

backend-api:
cd backend && cargo run --bin atlas-api

backend-indexer:
cd backend && cargo run --bin atlas-indexer
backend-server:
cd backend && cargo run --bin atlas-server

# Combined checks
ci: backend-fmt backend-clippy backend-test frontend-install frontend-lint frontend-build
8 changes: 2 additions & 6 deletions README.md
@@ -28,14 +28,10 @@ docker-compose up -d postgres
just frontend-install
```

Start backend services (each in its own terminal):
Start the backend:

```bash
just backend-indexer
```

```bash
just backend-api
just backend-server
```

Start frontend:
3 changes: 1 addition & 2 deletions backend/Cargo.toml
@@ -2,8 +2,7 @@
resolver = "2"
members = [
"crates/atlas-common",
"crates/atlas-indexer",
"crates/atlas-api",
"crates/atlas-server",
]

[workspace.package]
20 changes: 8 additions & 12 deletions backend/Dockerfile
@@ -13,21 +13,17 @@ RUN cargo update wasip2 --precise 0.1.0 || true

RUN cargo build --release

# Indexer image
FROM alpine:3.21 AS indexer
# Server image
FROM alpine:3.21 AS server

RUN apk add --no-cache ca-certificates

COPY --from=builder /app/target/release/atlas-indexer /usr/local/bin/
COPY --from=builder /app/target/release/atlas-server /usr/local/bin/

CMD ["atlas-indexer"]

# API image
FROM alpine:3.21 AS api

RUN apk add --no-cache ca-certificates
EXPOSE 3000
CMD ["atlas-server"]

COPY --from=builder /app/target/release/atlas-api /usr/local/bin/
# Backward-compatible target names for CI jobs that still build the old images.
FROM server AS api

EXPOSE 3000
CMD ["atlas-api"]
FROM server AS indexer
23 changes: 0 additions & 23 deletions backend/crates/atlas-api/src/handlers/auth.rs

This file was deleted.
