
Merge atlas-api and atlas-indexer into atlas-server#25

Merged
pthmas merged 6 commits into main from pthmas/merge-indexer-api
Mar 16, 2026

Conversation


@pthmas pthmas commented Mar 16, 2026

Merge the API and indexer into a single atlas-server binary and update Docker Compose/nginx to target the unified service.

Because the API and indexer now run in the same process, SSE was also refactored to use an in-process broadcast channel fed directly by the indexer instead of the previous Postgres LISTEN/NOTIFY path.
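The PR wires this up with `tokio::sync::broadcast` (the indexer calls `block_events_tx.send(())` after each commit). As a rough, dependency-free sketch of the same fanout idea — `Fanout` and its methods are illustrative names, not taken from the PR — the pattern looks like this:

```rust
use std::sync::{mpsc, Arc, Mutex};

// Minimal std-only stand-in for a broadcast channel: the indexer holds one
// `Fanout`, and each SSE subscriber gets its own receiver.
#[derive(Clone, Default)]
struct Fanout {
    subscribers: Arc<Mutex<Vec<mpsc::Sender<()>>>>,
}

impl Fanout {
    fn subscribe(&self) -> mpsc::Receiver<()> {
        let (tx, rx) = mpsc::channel();
        self.subscribers.lock().unwrap().push(tx);
        rx
    }

    // Called by the indexer after a block batch commits: deliver a unit
    // wake-up signal to every live subscriber, dropping dead ones.
    fn notify(&self) {
        self.subscribers
            .lock()
            .unwrap()
            .retain(|tx| tx.send(()).is_ok());
    }
}

fn main() {
    let fanout = Fanout::default();
    let rx_a = fanout.subscribe();
    let rx_b = fanout.subscribe();

    fanout.notify(); // indexer committed a block

    assert!(rx_a.try_recv().is_ok());
    assert!(rx_b.try_recv().is_ok());
}
```

Unlike Postgres LISTEN/NOTIFY, this signal never leaves the process, so there is no extra DB connection to hold open and no notification lost to a dropped connection; the trade-off is that it only works because both halves now share one binary.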

Summary by CodeRabbit

  • New Features
  • Paginated responses for transaction logs; a DB migration adds block DA status tracking.
  • Bug Fixes
    • Event notifications moved to an in-process broadcast for more reliable live updates.
  • Removed Features
    • Contract verification, ABI/source retrieval, address labeling, proxy detection, and admin-key verification endpoints removed.
  • Chores
    • Backend consolidated into a single server binary; CI/workflows, Docker, compose, nginx, and startup flow updated.
  • Documentation
    • .env example extended with API host/port and DB connection placeholders.


coderabbitai bot commented Mar 16, 2026

📝 Walkthrough

Consolidates indexer and API into a single atlas-server binary, replaces Postgres LISTEN/NOTIFY with an in-process broadcast for block events, removes contract verification and labeling endpoints/types, and updates CI/Docker/config for separate DB pools and API host/port.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Config & Env**<br>`.env.example`, `backend/crates/atlas-server/src/config.rs` | Added `API_HOST`/`API_PORT`/`API_DB_MAX_CONNECTIONS` and split DB pool settings into indexer- and API-specific fields; removed the unified `db_max_connections`. |
| **Workspace & Manifests**<br>`backend/Cargo.toml`, `backend/crates/atlas-server/Cargo.toml`, `backend/crates/atlas-indexer/Cargo.toml` | Removed the separate atlas-indexer/api workspace members, added `crates/atlas-server`, and moved indexer dependencies into the atlas-server manifest (indexer crate removed). |
| **Build & CI**<br>`.github/workflows/ci.yml`, `.github/workflows/docker.yml`, `Justfile` | Consolidated CI/Docker targets to atlas-server (server build target); renamed the backend build job, removed the build-api job, added a frontend build job, and updated just targets. |
| **Docker & Deployment**<br>`backend/Dockerfile`, `docker-compose.yml`, `frontend/nginx.conf` | Dockerfile/compose updated to use the server target and atlas-server service; atlas-api removed; nginx proxy targets now point to `http://atlas-server:3000`. |
| **Server Bootstrap**<br>`backend/crates/atlas-server/src/main.rs`, `backend/crates/atlas-server/src/api/mod.rs` | New unified main: separate indexer/API pools, shared SSE broadcast channel, migrations, `run_with_retry`, and a Router builder with a slimmer `AppState` (removed `solc_path`/`admin_api_key`). |
| **Indexer Integration**<br>`backend/crates/atlas-server/src/indexer/...`, `backend/crates/atlas-server/src/indexer/indexer.rs` | Indexer moved into atlas-server; accepts `broadcast::Sender<()>` and emits in-process broadcasts after commits instead of PG NOTIFY; `Indexer` and `MetadataFetcher` re-exported. |
| **SSE / Fanout**<br>`backend/crates/atlas-server/src/api/handlers/sse.rs`, `backend/crates/atlas-server/src/api/mod.rs` | Removed the Postgres LISTEN/NOTIFY fanout function; SSE subscribers now rely on the in-process broadcast channel wired via `AppState`. |
| **API Handler Removals**<br>`backend/crates/atlas-api/src/handlers/contracts.rs`, `backend/crates/atlas-api/src/handlers/labels.rs`, `backend/crates/atlas-api/src/handlers/auth.rs`, `backend/crates/atlas-server/src/api/handlers/etherscan.rs`, `backend/crates/atlas-server/src/api/handlers/proxy.rs`, `backend/crates/atlas-server/src/api/handlers/mod.rs` | Removed contract verification endpoints/logic, labels CRUD endpoints/types, admin `require_admin`, Etherscan POST verification, and the proxy-detection endpoint; corresponding module declarations removed. |
| **Common Types**<br>`backend/crates/atlas-common/src/types.rs` | Removed public types related to contract verification and address labels (e.g., `VerifiedContract`, `VerifyContractRequest`/`Response`, `AddressLabel`). |
| **Logs & Pagination**<br>`backend/crates/atlas-server/src/api/handlers/logs.rs` | Added `TransactionLogsQuery` with page/limit clamping, switched to LIMIT/OFFSET queries, and added a `PaginatedResponse` wrapper plus unit tests. |
| **Migrations & Docs**<br>`backend/migrations/20240108000001_block_da_status.sql`, `CLAUDE.md`, `README.md` | Added the block_da_status migration with a partial index; updated docs to describe the unified server, SSE/broadcast behavior, and config changes. |
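The page/limit clamping mentioned for `logs.rs` can be sketched in isolation. This is a rough reconstruction from the review discussion, not the PR's actual code; the field names and the limit of 100 are assumptions:

```rust
const MAX_LIMIT: u64 = 100; // assumed cap, per the review discussion

// Hypothetical shape of the pagination query parameters.
struct TransactionLogsQuery {
    page: u64,
    limit: u64,
}

impl TransactionLogsQuery {
    // Clamp once and reuse the value everywhere (SQL LIMIT, OFFSET,
    // response metadata) so they always agree.
    fn clamped_limit(&self) -> u64 {
        self.limit.clamp(1, MAX_LIMIT)
    }

    fn offset(&self) -> u64 {
        self.page.saturating_sub(1) * self.clamped_limit()
    }
}

fn main() {
    // page=2&limit=1000: the limit is clamped to 100, and the offset is
    // computed from the clamped value, not the raw request.
    let q = TransactionLogsQuery { page: 2, limit: 1000 };
    assert_eq!(q.clamped_limit(), 100);
    assert_eq!(q.offset(), 100);
}
```

The key design point is computing the clamped value in one place: if `offset()` or the response metadata uses the raw `limit` while the SQL query uses the clamped one, page numbers and row counts silently drift apart.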

Sequence Diagram

```mermaid
sequenceDiagram
    participant Indexer
    participant DB as Database
    participant Broadcast as Broadcast Channel
    participant API
    participant SSEClient as SSE Client

    Indexer->>DB: persist new block
    activate Indexer
    Indexer->>Broadcast: send notification (block_events_tx.send(()))
    deactivate Indexer

    Broadcast->>API: notify subscribers
    activate API
    API->>DB: query latest block(s)
    DB-->>API: return block data
    API->>SSEClient: stream block via SSE
    deactivate API

    SSEClient->>SSEClient: update UI
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • tac0turtle

Poem

🐰
Two binaries hopped close and then one,
A broadcast bell tolled for blocks to run.
Inside we chatter, no more NOTIFY din,
API and indexer, kin beneath the skin.
One server hops — let the streaming begin!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 45.45%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title accurately describes the primary change: consolidation of two separate binaries (atlas-api and atlas-indexer) into a unified atlas-server binary. |


@pthmas pthmas marked this pull request as ready for review March 16, 2026 14:31

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/crates/atlas-server/src/indexer/indexer.rs (1)

779-793: ⚠️ Potential issue | 🟠 Major

Send the SSE wake-up only after the transaction commits.

backend/crates/atlas-server/src/api/handlers/sse.rs re-queries blocks as soon as it receives this broadcast. Because the send happens before pg_tx.commit(), subscribers can wake up, see no new rows yet, and then miss the batch entirely when the indexer is sitting at head.

Proposed fix:

```diff
         if update_watermark {
             let last_value = last_block.to_string();
             pg_tx
                 .execute(
                     "INSERT INTO indexer_state (key, value, updated_at)
                  VALUES ('last_indexed_block', $1, NOW())
                  ON CONFLICT (key) DO UPDATE SET value = $1, updated_at = NOW()",
                     &[&last_value],
                 )
                 .await?;
-            // Notify SSE subscribers directly via in-process broadcast
-            let _ = self.block_events_tx.send(());
         }

         pg_tx.commit().await?;
+        if update_watermark {
+            let _ = self.block_events_tx.send(());
+        }
         Ok(())
```

Based on learnings: "GET /api/events must return an SSE stream of new_block events, one per block in order, as the primary live-update path for navbar counter and blocks page. Client must fall back to /api/status polling on disconnect."

🧹 Nitpick comments (3)
backend/crates/atlas-server/src/api/handlers/addresses.rs (1)

340-348: Consider cursor-based pagination for future improvement.

This handler (and others in this file like list_addresses, get_address_nfts, get_address_transfers) uses OFFSET-based pagination. For large tables, cursor-based pagination would be more performant. This is existing behavior and not blocking for this merge PR, but worth tracking for a future optimization pass. Based on learnings: "Never use OFFSET for pagination on large tables — use keyset/cursor pagination instead".

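The keyset approach suggested above can be illustrated in memory. This is a sketch of the tuple-comparison idea only — `Transfer`, `page_after`, and the column names mirror the review comment's hypothetical query, not real code from the PR:

```rust
// In-memory sketch of keyset (cursor) pagination over
// (block_number, block_index), mirroring the SQL shape:
//   WHERE (block_number, block_index) < ($cursor_block, $cursor_index)
//   ORDER BY block_number DESC, block_index DESC LIMIT $n
#[derive(Debug, Clone)]
struct Transfer {
    block_number: u64,
    block_index: u32,
}

fn page_after(rows: &[Transfer], cursor: Option<(u64, u32)>, limit: usize) -> Vec<Transfer> {
    let mut page: Vec<Transfer> = rows
        .iter()
        .filter(|t| match cursor {
            // Tuple comparison, like SQL row-value comparison: strictly
            // "before" the last row the client already saw.
            Some(c) => (t.block_number, t.block_index) < c,
            None => true,
        })
        .cloned()
        .collect();
    page.sort_by(|a, b| (b.block_number, b.block_index).cmp(&(a.block_number, a.block_index)));
    page.truncate(limit);
    page
}

fn main() {
    let rows = vec![
        Transfer { block_number: 10, block_index: 0 },
        Transfer { block_number: 10, block_index: 1 },
        Transfer { block_number: 11, block_index: 0 },
    ];
    let first = page_after(&rows, None, 2);
    assert_eq!(first[0].block_number, 11);

    // The next page's cursor is the last row of the previous page, so the
    // query cost stays constant no matter how deep the client paginates.
    let last = &first[first.len() - 1];
    let next = page_after(&rows, Some((last.block_number, last.block_index)), 2);
    assert_eq!(next.len(), 1);
}
```

The advantage over OFFSET is that the database can seek directly to the cursor position via an index on `(block_number, block_index)` instead of scanning and discarding all skipped rows.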
backend/migrations/20240108000001_block_da_status.sql (1)

1-21: Migration creates schema for unimplemented DA worker functionality.

The table structure and partial index are well-designed but document infrastructure that has no corresponding implementation. A search confirms:

  • No DA worker code exists in the Rust codebase
  • No evnode_url config field
  • No BlockDaStatus struct in types.rs
  • No DA worker task spawned in lib.rs

Either defer this migration until the DA worker implementation is ready, or add a tracking issue linking this migration to the pending DA worker work.

backend/crates/atlas-server/Cargo.toml (1)

31-38: Consider updating tokio-postgres-rustls to version 0.13.0.

The specified versions are compatible: tokio-postgres-rustls 0.12 works correctly with rustls 0.23. However, version 0.13.0 (released October 2024) is now available and should be considered given the critical nature of COPY operations in the indexer infrastructure. Additionally, promote tokio-postgres, rustls, and related TLS dependencies to workspace-level for consistency.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/Cargo.toml`:
- Around line 3-6: The Justfile still defines recipes backend-api and
backend-indexer that reference removed binaries atlas-api and atlas-indexer;
update these recipes to either remove them or change their commands to run the
new atlas-server binary (e.g., replace cargo run --bin atlas-api and cargo run
--bin atlas-indexer with cargo run --bin atlas-server), and ensure recipe names
and any help text reflect the unified atlas-server target.

In `@backend/crates/atlas-server/src/api/handlers/logs.rs`:
- Around line 21-28: The TransactionLogsQuery currently only clamps the SQL
LIMIT, causing offset() and response metadata to use the raw limit; change the
implementation to compute and reuse a single clamped value (e.g., add a method
effective_limit(&self) or make limit(&self) return the clamped value and rename
the old method if needed) and use that clamped value both in offset() (compute
((page.saturating_sub(1)) * effective_limit) as i64) and when populating the
PaginatedResponse page_size/limit metadata so they match the actual fetched
rows; also add a #[cfg(test)] mod tests unit test that constructs a
TransactionLogsQuery with page=2&limit=1000 and asserts the offset and reported
page size use the clamped value (100) to prevent regressions.

In `@backend/crates/atlas-server/src/api/mod.rs`:
- Around line 19-23: The SSE broadcast is sent before the DB transaction commits
in indexer::indexer.rs (the call to block_events_tx.send(()) occurs before
pg_tx.commit().await), causing subscribers to miss events; fix this by moving
the block_events_tx.send(()) to after pg_tx.commit().await? succeeds (i.e., only
send when commit returns Ok), ensure any early returns or error paths do not
send, and preserve ordering guarantees so the new_block notification is emitted
only after pg_tx.commit().await completes successfully.

In `@backend/crates/atlas-server/src/main.rs`:
- Around line 89-94: Update the shutdown_signal() implementation to listen for
SIGTERM on Unix in addition to Ctrl-C: on Unix use
tokio::signal::unix::signal(SignalKind::terminate()) and await either that or
ctrl_c(), and on non-Unix platforms keep the existing tokio::signal::ctrl_c()
fallback (use cfg(unix) / cfg(not(unix)) or a combined async select). Also add
unit tests under a #[cfg(test)] mod tests block that exercise the new signal
handling logic (mock or spawn tasks to send/trigger the signals and assert
shutdown_signal() completes), referencing shutdown_signal() and the tests module
so CI verifies the new behavior.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d7c0bf47-0a12-47f0-8280-5f40c1875278

📥 Commits

Reviewing files that changed from the base of the PR and between 47323e5 and 40768a7.

📒 Files selected for processing (38)
  • .env.example
  • .github/workflows/ci.yml
  • .github/workflows/docker.yml
  • CLAUDE.md
  • backend/Cargo.toml
  • backend/Dockerfile
  • backend/crates/atlas-api/src/handlers/auth.rs
  • backend/crates/atlas-api/src/handlers/contracts.rs
  • backend/crates/atlas-api/src/handlers/labels.rs
  • backend/crates/atlas-common/src/types.rs
  • backend/crates/atlas-indexer/Cargo.toml
  • backend/crates/atlas-indexer/src/main.rs
  • backend/crates/atlas-server/Cargo.toml
  • backend/crates/atlas-server/src/api/error.rs
  • backend/crates/atlas-server/src/api/handlers/addresses.rs
  • backend/crates/atlas-server/src/api/handlers/blocks.rs
  • backend/crates/atlas-server/src/api/handlers/etherscan.rs
  • backend/crates/atlas-server/src/api/handlers/logs.rs
  • backend/crates/atlas-server/src/api/handlers/mod.rs
  • backend/crates/atlas-server/src/api/handlers/nfts.rs
  • backend/crates/atlas-server/src/api/handlers/proxy.rs
  • backend/crates/atlas-server/src/api/handlers/search.rs
  • backend/crates/atlas-server/src/api/handlers/sse.rs
  • backend/crates/atlas-server/src/api/handlers/status.rs
  • backend/crates/atlas-server/src/api/handlers/tokens.rs
  • backend/crates/atlas-server/src/api/handlers/transactions.rs
  • backend/crates/atlas-server/src/api/mod.rs
  • backend/crates/atlas-server/src/config.rs
  • backend/crates/atlas-server/src/indexer/batch.rs
  • backend/crates/atlas-server/src/indexer/copy.rs
  • backend/crates/atlas-server/src/indexer/fetcher.rs
  • backend/crates/atlas-server/src/indexer/indexer.rs
  • backend/crates/atlas-server/src/indexer/metadata.rs
  • backend/crates/atlas-server/src/indexer/mod.rs
  • backend/crates/atlas-server/src/main.rs
  • backend/migrations/20240108000001_block_da_status.sql
  • docker-compose.yml
  • frontend/nginx.conf
💤 Files with no reviewable changes (7)
  • backend/crates/atlas-api/src/handlers/auth.rs
  • backend/crates/atlas-server/src/api/handlers/mod.rs
  • backend/crates/atlas-indexer/src/main.rs
  • backend/crates/atlas-api/src/handlers/contracts.rs
  • backend/crates/atlas-api/src/handlers/labels.rs
  • backend/crates/atlas-indexer/Cargo.toml
  • backend/crates/atlas-common/src/types.rs


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
backend/crates/atlas-server/src/api/handlers/logs.rs (2)

256-277: Test addresses the past review feedback; consider adding coverage for LogsQuery.

The test correctly verifies TransactionLogsQuery clamping behavior for page=2&limit=1000. For completeness, consider adding a parallel test for LogsQuery since it has its own clamping implementation accessing nested pagination fields.

Optional test addition:

```rust
#[test]
fn logs_query_clamps_limit_for_offset_and_metadata() {
    use super::LogsQuery;
    use atlas_common::Pagination;

    let query = LogsQuery {
        topic0: None,
        pagination: Pagination { page: 2, limit: 1000 },
    };

    assert_eq!(query.clamped_limit(), 100);
    assert_eq!(query.offset(), 100);
    assert_eq!(query.limit(), 100);
}
```

As per coding guidelines, "Add unit tests for new logic in a #[cfg(test)] mod tests block in the same file."


94-155: Consider cursor-based pagination for address logs.

For high-activity contracts, the event_logs table filtered by address can contain millions of rows. The current COUNT(*) and OFFSET approach will degrade as offset grows. Based on learnings, cursor-based pagination using WHERE block_number < $cursor ORDER BY block_number DESC, log_index DESC LIMIT $1 would scale better.

This is a recommended improvement rather than a blocker for this PR.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7b93f8c8-829f-4ac1-b94e-c65c114c5635

📥 Commits

Reviewing files that changed from the base of the PR and between 40768a7 and 0573ec6.

📒 Files selected for processing (3)
  • Justfile
  • README.md
  • backend/crates/atlas-server/src/api/handlers/logs.rs

@pthmas pthmas requested a review from tac0turtle March 16, 2026 15:10

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
backend/crates/atlas-server/src/main.rs (1)

136-167: Clarify return type and use saturating arithmetic.

Two observations:

  1. Misleading return type: The function returns Result<()> but the loop never exits, so Ok(()) is unreachable. Consider changing to -> ! (never type) or documenting this is intentional for long-running tasks.

  2. Potential overflow: While unlikely in practice, retry_count += 1 could theoretically overflow. Use idiomatic Rust with saturating_add.

♻️ Suggested improvements

```diff
 /// Run an async function with exponential backoff retry
-async fn run_with_retry<F, Fut>(f: F) -> Result<()>
+/// Note: This function runs indefinitely, restarting the task on failure.
+async fn run_with_retry<F, Fut>(f: F) -> !
 where
     F: Fn() -> Fut,
     Fut: std::future::Future<Output = Result<()>>,
 {
     let mut retry_count = 0;

     loop {
         match f().await {
             Ok(()) => {
                 retry_count = 0;
             }
             Err(e) => {
                 let delay = RETRY_DELAYS
                     .get(retry_count)
                     .copied()
                     .unwrap_or(MAX_RETRY_DELAY);

                 tracing::error!(
                     "Fatal error (internal retries exhausted): {}. Restarting in {}s (attempt {})...",
                     e,
                     delay,
                     retry_count + 1
                 );

                 tokio::time::sleep(Duration::from_secs(delay)).await;
-                retry_count += 1;
+                retry_count = retry_count.saturating_add(1);
             }
         }
     }
 }
```

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 140e51cc-dcb8-4eb3-89d5-3222b46f1e1b

📥 Commits

Reviewing files that changed from the base of the PR and between 0573ec6 and fc45b62.

📒 Files selected for processing (2)
  • backend/crates/atlas-server/src/indexer/indexer.rs
  • backend/crates/atlas-server/src/main.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/crates/atlas-server/src/indexer/indexer.rs

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🧹 Nitpick comments (1)
backend/crates/atlas-server/src/config.rs (1)

33-88: Consider adding unit tests for config parsing.

The new environment variable parsing logic (especially for api_port which requires valid u16) would benefit from tests to catch parsing edge cases. As per coding guidelines, new logic should include tests in a #[cfg(test)] mod tests block.

🧪 Example test structure

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use std::env;

    #[test]
    fn test_api_port_default() {
        env::remove_var("API_PORT");
        env::set_var("DATABASE_URL", "postgres://localhost/test");
        env::set_var("RPC_URL", "http://localhost:8545");

        let config = Config::from_env().unwrap();
        assert_eq!(config.api_port, 3000);

        // cleanup
        env::remove_var("DATABASE_URL");
        env::remove_var("RPC_URL");
    }

    #[test]
    fn test_api_host_default() {
        env::remove_var("API_HOST");
        env::set_var("DATABASE_URL", "postgres://localhost/test");
        env::set_var("RPC_URL", "http://localhost:8545");

        let config = Config::from_env().unwrap();
        assert_eq!(config.api_host, "127.0.0.1");

        env::remove_var("DATABASE_URL");
        env::remove_var("RPC_URL");
    }
}
```
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@CLAUDE.md`:
- Around line 77-85: Update the API docs to match the actual AppState shape by
adding the missing chain_id: u64 and chain_name: String fields to the AppState
struct example (the same fields present in the runtime AppState used by
functions in lib.rs), and add the CHAIN_NAME environment variable to the
env-vars table with a default of "Unknown" so the documented runtime contract
matches the code (ensure references to chain_id/chain_name align with the
AppState example and mention CHAIN_NAME as the source of chain_name).


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 35ea945b-fa75-4cba-b5cd-f7e0d3c88c53

📥 Commits

Reviewing files that changed from the base of the PR and between fc45b62 and b62f70a.

📒 Files selected for processing (3)
  • .env.example
  • CLAUDE.md
  • backend/crates/atlas-server/src/config.rs

@pthmas pthmas merged commit f6d18cb into main Mar 16, 2026
8 checks passed
@pthmas pthmas deleted the pthmas/merge-indexer-api branch March 16, 2026 15:51