Merge atlas-api and atlas-indexer into atlas-server #25
Conversation
📝 Walkthrough

Consolidates the indexer and API into a single atlas-server binary, replaces Postgres LISTEN/NOTIFY with an in-process broadcast for block events, removes contract verification and labeling endpoints/types, and updates CI/Docker/config for separate DB pools and API host/port.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Indexer
    participant DB as Database
    participant Broadcast as "Broadcast Channel"
    participant API
    participant SSEClient as "SSE Client"
    Indexer->>DB: persist new block
    activate Indexer
    Indexer->>Broadcast: send notification (block_events_tx.send(()))
    deactivate Indexer
    Broadcast->>API: notify subscribers
    activate API
    API->>DB: query latest block(s)
    DB-->>API: return block data
    API->>SSEClient: stream block via SSE
    deactivate API
    SSEClient->>SSEClient: update UI
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/crates/atlas-server/src/indexer/indexer.rs (1)
Lines 779-793: ⚠️ Potential issue | 🟠 Major

Send the SSE wake-up only after the transaction commits.

`backend/crates/atlas-server/src/api/handlers/sse.rs` re-queries `blocks` as soon as it receives this broadcast. Because the send happens before `pg_tx.commit()`, subscribers can wake up, see no new rows yet, and then miss the batch entirely when the indexer is sitting at head.

Proposed fix:

```diff
     if update_watermark {
         let last_value = last_block.to_string();
         pg_tx
             .execute(
                 "INSERT INTO indexer_state (key, value, updated_at) VALUES ('last_indexed_block', $1, NOW()) ON CONFLICT (key) DO UPDATE SET value = $1, updated_at = NOW()",
                 &[&last_value],
             )
             .await?;
-        // Notify SSE subscribers directly via in-process broadcast
-        let _ = self.block_events_tx.send(());
     }
     pg_tx.commit().await?;
+    if update_watermark {
+        let _ = self.block_events_tx.send(());
+    }
     Ok(())
```

Based on learnings: "`GET /api/events` must return an SSE stream of `new_block` events, one per block in order, as the primary live-update path for navbar counter and blocks page. Client must fall back to `/api/status` polling on disconnect."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/indexer/indexer.rs` around lines 779 - 793, The SSE broadcast is being sent before the DB transaction commits, which can cause subscribers to re-query blocks and miss rows that aren't committed yet; move the in-process notify call (self.block_events_tx.send(())) so it happens only after pg_tx.commit().await? is successful (i.e., after the commit call in indexer.rs within the update_watermark branch), ensuring the notify runs post-commit and still guarded by the update_watermark condition and using the same last_block/last_value context.
🧹 Nitpick comments (3)
backend/crates/atlas-server/src/api/handlers/addresses.rs (1)
Lines 340-348: Consider cursor-based pagination for future improvement.

This handler (and others in this file like `list_addresses`, `get_address_nfts`, and `get_address_transfers`) uses OFFSET-based pagination. For large tables, cursor-based pagination would be more performant. This is existing behavior and not blocking for this merge PR, but worth tracking for a future optimization pass. Based on learnings: "Never use `OFFSET` for pagination on large tables — use keyset/cursor pagination instead".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/api/handlers/addresses.rs` around lines 340 - 348, This uses OFFSET-based pagination (pagination.limit() / pagination.offset()) in handlers like get_address_transfers (and similarly list_addresses, get_address_nfts); replace with keyset/cursor pagination by accepting a cursor (e.g., last_seen_block_number and last_seen_block_index or an encoded cursor) and change the query to filter with a tuple comparison like WHERE (block_number, block_index) < ($cursor_block_number, $cursor_block_index) AND (from_address = $1 OR to_address = $1) ORDER BY block_number DESC, block_index DESC LIMIT $2, update binding usage to bind the address, cursor parts, and limit, and adjust the Pagination type and handler inputs to produce/consume the cursor instead of offset.

backend/migrations/20240108000001_block_da_status.sql (1)
Lines 1-21: Migration creates schema for unimplemented DA worker functionality.

The table structure and partial index are well-designed but document infrastructure that has no corresponding implementation. A search confirms:

- No DA worker code exists in the Rust codebase
- No `evnode_url` config field
- No `BlockDaStatus` struct in `types.rs`
- No DA worker task spawned in `lib.rs`

Either defer this migration until the DA worker implementation is ready, or add a tracking issue linking this migration to the pending DA worker work.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/migrations/20240108000001_block_da_status.sql` around lines 1 - 21, This migration creates the block_da_status table and idx_block_da_status_pending index for a DA worker that doesn’t exist yet (no evnode_url config, no BlockDaStatus in types.rs, no DA task in lib.rs); either remove/revert this migration until the DA worker is implemented, or keep it but add a clear tracking issue link and TODOs: update the migration file header comment to reference the tracking issue number, and add corresponding TODO stubs/notes in types.rs (BlockDaStatus), in the config for evnode_url, and in lib.rs where the DA worker will be spawned so the migration is traceable to the pending implementation.

backend/crates/atlas-server/Cargo.toml (1)
Lines 31-38: Consider updating `tokio-postgres-rustls` to version 0.13.0.

The specified versions are compatible: `tokio-postgres-rustls 0.12` works correctly with `rustls 0.23`. However, version 0.13.0 (released October 2024) is now available and should be considered given the critical nature of COPY operations in the indexer infrastructure. Additionally, promote `tokio-postgres`, `rustls`, and related TLS dependencies to workspace-level for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/Cargo.toml` around lines 31 - 38, Update the tokio-postgres-rustls dependency from "0.12" to "0.13.0" in Cargo.toml (replace the tokio-postgres-rustls entry) and ensure compatibility by validating tokio-postgres and rustls versions; then move/promote tokio-postgres, rustls, webpki-roots (and any other TLS-related deps such as tokio-postgres-rustls) to the workspace-level Cargo.toml so these versions are centrally managed and consistent across crates, and run cargo update/cargo check to confirm no breakages.
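The workspace promotion the prompt describes might look roughly like this; the version numbers are illustrative and should be checked against the actual lockfile before applying:

```toml
# backend/Cargo.toml (workspace root) — versions are assumptions, not the project's pins
[workspace.dependencies]
tokio-postgres = "0.7"
tokio-postgres-rustls = "0.13.0"
rustls = "0.23"
webpki-roots = "0.26"

# backend/crates/atlas-server/Cargo.toml — member crates then inherit
[dependencies]
tokio-postgres = { workspace = true }
tokio-postgres-rustls = { workspace = true }
rustls = { workspace = true }
webpki-roots = { workspace = true }
```

With this in place, bumping the TLS stack is a single edit at the workspace root followed by `cargo update` and `cargo check`.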
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/Cargo.toml`:
- Around line 3-6: The Justfile still defines recipes backend-api and
backend-indexer that reference removed binaries atlas-api and atlas-indexer;
update these recipes to either remove them or change their commands to run the
new atlas-server binary (e.g., replace cargo run --bin atlas-api and cargo run
--bin atlas-indexer with cargo run --bin atlas-server), and ensure recipe names
and any help text reflect the unified atlas-server target.
In `@backend/crates/atlas-server/src/api/handlers/logs.rs`:
- Around line 21-28: The TransactionLogsQuery currently only clamps the SQL
LIMIT, causing offset() and response metadata to use the raw limit; change the
implementation to compute and reuse a single clamped value (e.g., add a method
effective_limit(&self) or make limit(&self) return the clamped value and rename
the old method if needed) and use that clamped value both in offset() (compute
((page.saturating_sub(1)) * effective_limit) as i64) and when populating the
PaginatedResponse page_size/limit metadata so they match the actual fetched
rows; also add a #[cfg(test)] mod tests unit test that constructs a
TransactionLogsQuery with page=2&limit=1000 and asserts the offset and reported
page size use the clamped value (100) to prevent regressions.
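The fix described above reduces to one pure helper that the SQL LIMIT, the OFFSET computation, and the response metadata all share. A minimal sketch — `MAX_PAGE_SIZE`, `effective_limit`, and the struct fields here are illustrative assumptions, not the handler's actual definitions:

```rust
// Illustrative sketch only: names and the clamp bound are assumptions.
const MAX_PAGE_SIZE: u64 = 100;

struct TransactionLogsQuery {
    page: u64,
    limit: u64,
}

impl TransactionLogsQuery {
    /// The single clamped value reused by the SQL LIMIT, the OFFSET
    /// computation, and the PaginatedResponse metadata.
    fn effective_limit(&self) -> u64 {
        self.limit.clamp(1, MAX_PAGE_SIZE)
    }

    /// Offset derived from the clamped limit, so page boundaries match
    /// the rows actually fetched.
    fn offset(&self) -> i64 {
        (self.page.saturating_sub(1) * self.effective_limit()) as i64
    }
}
```

Because both values come from `effective_limit`, a request like `page=2&limit=1000` yields an offset of 100 and a reported page size of 100, matching the fetched rows.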
In `@backend/crates/atlas-server/src/api/mod.rs`:
- Around line 19-23: The SSE broadcast is sent before the DB transaction commits
in indexer::indexer.rs (the call to block_events_tx.send(()) occurs before
pg_tx.commit().await), causing subscribers to miss events; fix this by moving
the block_events_tx.send(()) to after pg_tx.commit().await? succeeds (i.e., only
send when commit returns Ok), ensure any early returns or error paths do not
send, and preserve ordering guarantees so the new_block notification is emitted
only after pg_tx.commit().await completes successfully.
In `@backend/crates/atlas-server/src/main.rs`:
- Around line 89-94: Update the shutdown_signal() implementation to listen for
SIGTERM on Unix in addition to Ctrl-C: on Unix use
tokio::signal::unix::signal(SignalKind::terminate()) and await either that or
ctrl_c(), and on non-Unix platforms keep the existing tokio::signal::ctrl_c()
fallback (use cfg(unix) / cfg(not(unix)) or a combined async select). Also add
unit tests under a #[cfg(test)] mod tests block that exercise the new signal
handling logic (mock or spawn tasks to send/trigger the signals and assert
shutdown_signal() completes), referencing shutdown_signal() and the tests module
so CI verifies the new behavior.
---
Outside diff comments:
In `@backend/crates/atlas-server/src/indexer/indexer.rs`:
- Around line 779-793: The SSE broadcast is being sent before the DB transaction
commits, which can cause subscribers to re-query blocks and miss rows that
aren't committed yet; move the in-process notify call
(self.block_events_tx.send(())) so it happens only after pg_tx.commit().await?
is successful (i.e., after the commit call in indexer.rs within the
update_watermark branch), ensuring the notify runs post-commit and still guarded
by the update_watermark condition and using the same last_block/last_value
context.
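The invariant behind this fix — rows become visible at commit, so the wake-up must come after — can be shown with a std-only toy model. Everything here (`FakeTx`, `persist_block`, the notification `Vec`) is a hypothetical stand-in for `pg_tx` and the tokio broadcast channel, not the indexer's real code:

```rust
use std::sync::{Arc, Mutex};

/// Toy transaction: staged rows become visible to readers only at commit.
struct FakeTx {
    table: Arc<Mutex<Vec<u64>>>,
    staged: Vec<u64>,
}

impl FakeTx {
    fn commit(self) -> Result<(), ()> {
        self.table.lock().unwrap().extend(self.staged);
        Ok(())
    }
}

/// Persist a block, then notify. Because the send happens only after
/// commit() succeeds, a subscriber that re-queries on wake-up is
/// guaranteed to see the new row.
fn persist_block(
    table: Arc<Mutex<Vec<u64>>>,
    block: u64,
    notifications: &mut Vec<u64>,
) -> Result<(), ()> {
    let tx = FakeTx { table: table.clone(), staged: vec![block] };
    tx.commit()?;              // 1. rows become durable/visible
    notifications.push(block); // 2. only then wake SSE subscribers
    Ok(())
}
```

If the two steps were swapped, a subscriber could observe the notification while the table still lacks the row — exactly the missed-batch race the comment describes.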
---
Nitpick comments:
In `@backend/crates/atlas-server/Cargo.toml`:
- Around line 31-38: Update the tokio-postgres-rustls dependency from "0.12" to
"0.13.0" in Cargo.toml (replace the tokio-postgres-rustls entry) and ensure
compatibility by validating tokio-postgres and rustls versions; then
move/promote tokio-postgres, rustls, webpki-roots (and any other TLS-related
deps such as tokio-postgres-rustls) to the workspace-level Cargo.toml so these
versions are centrally managed and consistent across crates, and run cargo
update/cargo check to confirm no breakages.
In `@backend/crates/atlas-server/src/api/handlers/addresses.rs`:
- Around line 340-348: This uses OFFSET-based pagination (pagination.limit() /
pagination.offset()) in handlers like get_address_transfers (and similarly
list_addresses, get_address_nfts); replace with keyset/cursor pagination by
accepting a cursor (e.g., last_seen_block_number and last_seen_block_index or an
encoded cursor) and change the query to filter with a tuple comparison like
WHERE (block_number, block_index) < ($cursor_block_number, $cursor_block_index)
AND (from_address = $1 OR to_address = $1) ORDER BY block_number DESC,
block_index DESC LIMIT $2, update binding usage to bind the address, cursor
parts, and limit, and adjust the Pagination type and handler inputs to
produce/consume the cursor instead of offset.
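The tuple-comparison keyset approach in the prompt can be illustrated in memory: rather than skipping `offset` rows, the cursor records the last `(block_number, block_index)` returned, and the next page filters strictly below it. A std-only sketch (the real handler would push this predicate into SQL; all names here are illustrative):

```rust
/// Cursor: the (block_number, block_index) of the last row already returned.
type Cursor = Option<(u64, u32)>;

/// Next `limit` rows strictly "before" the cursor, newest first, mirroring
/// SQL `WHERE (block_number, block_index) < ($1, $2)
/// ORDER BY block_number DESC, block_index DESC LIMIT $3`.
fn next_page(rows: &[(u64, u32)], cursor: Cursor, limit: usize) -> Vec<(u64, u32)> {
    let mut page: Vec<(u64, u32)> = rows
        .iter()
        .copied()
        .filter(|&(bn, bi)| match cursor {
            Some((cbn, cbi)) => (bn, bi) < (cbn, cbi), // tuple comparison
            None => true,                              // first page
        })
        .collect();
    page.sort_by(|a, b| b.cmp(a)); // block_number DESC, block_index DESC
    page.truncate(limit);
    page
}
```

Unlike OFFSET, the cost of fetching a page stays constant no matter how deep the client paginates, because the database can seek directly to the cursor position via an index on `(block_number, block_index)`.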
In `@backend/migrations/20240108000001_block_da_status.sql`:
- Around line 1-21: This migration creates the block_da_status table and
idx_block_da_status_pending index for a DA worker that doesn’t exist yet (no
evnode_url config, no BlockDaStatus in types.rs, no DA task in lib.rs); either
remove/revert this migration until the DA worker is implemented, or keep it but
add a clear tracking issue link and TODOs: update the migration file header
comment to reference the tracking issue number, and add corresponding TODO
stubs/notes in types.rs (BlockDaStatus), in the config for evnode_url, and in
lib.rs where the DA worker will be spawned so the migration is traceable to the
pending implementation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d7c0bf47-0a12-47f0-8280-5f40c1875278
📒 Files selected for processing (38)
- .env.example
- .github/workflows/ci.yml
- .github/workflows/docker.yml
- CLAUDE.md
- backend/Cargo.toml
- backend/Dockerfile
- backend/crates/atlas-api/src/handlers/auth.rs
- backend/crates/atlas-api/src/handlers/contracts.rs
- backend/crates/atlas-api/src/handlers/labels.rs
- backend/crates/atlas-common/src/types.rs
- backend/crates/atlas-indexer/Cargo.toml
- backend/crates/atlas-indexer/src/main.rs
- backend/crates/atlas-server/Cargo.toml
- backend/crates/atlas-server/src/api/error.rs
- backend/crates/atlas-server/src/api/handlers/addresses.rs
- backend/crates/atlas-server/src/api/handlers/blocks.rs
- backend/crates/atlas-server/src/api/handlers/etherscan.rs
- backend/crates/atlas-server/src/api/handlers/logs.rs
- backend/crates/atlas-server/src/api/handlers/mod.rs
- backend/crates/atlas-server/src/api/handlers/nfts.rs
- backend/crates/atlas-server/src/api/handlers/proxy.rs
- backend/crates/atlas-server/src/api/handlers/search.rs
- backend/crates/atlas-server/src/api/handlers/sse.rs
- backend/crates/atlas-server/src/api/handlers/status.rs
- backend/crates/atlas-server/src/api/handlers/tokens.rs
- backend/crates/atlas-server/src/api/handlers/transactions.rs
- backend/crates/atlas-server/src/api/mod.rs
- backend/crates/atlas-server/src/config.rs
- backend/crates/atlas-server/src/indexer/batch.rs
- backend/crates/atlas-server/src/indexer/copy.rs
- backend/crates/atlas-server/src/indexer/fetcher.rs
- backend/crates/atlas-server/src/indexer/indexer.rs
- backend/crates/atlas-server/src/indexer/metadata.rs
- backend/crates/atlas-server/src/indexer/mod.rs
- backend/crates/atlas-server/src/main.rs
- backend/migrations/20240108000001_block_da_status.sql
- docker-compose.yml
- frontend/nginx.conf
💤 Files with no reviewable changes (7)
- backend/crates/atlas-api/src/handlers/auth.rs
- backend/crates/atlas-server/src/api/handlers/mod.rs
- backend/crates/atlas-indexer/src/main.rs
- backend/crates/atlas-api/src/handlers/contracts.rs
- backend/crates/atlas-api/src/handlers/labels.rs
- backend/crates/atlas-indexer/Cargo.toml
- backend/crates/atlas-common/src/types.rs
🧹 Nitpick comments (2)
backend/crates/atlas-server/src/api/handlers/logs.rs (2)
Lines 256-277: Test addresses the past review feedback; consider adding coverage for `LogsQuery`.

The test correctly verifies `TransactionLogsQuery` clamping behavior for `page=2&limit=1000`. For completeness, consider adding a parallel test for `LogsQuery` since it has its own clamping implementation accessing nested `pagination` fields.

Optional test addition:

```rust
#[test]
fn logs_query_clamps_limit_for_offset_and_metadata() {
    use super::LogsQuery;
    use atlas_common::Pagination;

    let query = LogsQuery {
        topic0: None,
        pagination: Pagination { page: 2, limit: 1000 },
    };

    assert_eq!(query.clamped_limit(), 100);
    assert_eq!(query.offset(), 100);
    assert_eq!(query.limit(), 100);
}
```

As per coding guidelines, "Add unit tests for new logic in a `#[cfg(test)] mod tests` block in the same file."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/api/handlers/logs.rs` around lines 256 - 277, Add a unit test mirroring the TransactionLogsQuery test that verifies LogsQuery clamping: create a LogsQuery with pagination: Pagination { page: 2, limit: 1000 }, then assert that query.clamped_limit() == 100, query.offset() == 100, and query.limit() == 100; place the test inside the existing #[cfg(test)] mod tests alongside the TransactionLogsQuery test so LogsQuery’s clamping logic (accessing pagination) is covered.
Lines 94-155: Consider cursor-based pagination for address logs.

For high-activity contracts, the `event_logs` table filtered by address can contain millions of rows. The current `COUNT(*)` and `OFFSET` approach will degrade as the offset grows. Based on learnings, cursor-based pagination using `WHERE block_number < $cursor ORDER BY block_number DESC, log_index DESC LIMIT $1` would scale better.

This is a recommended improvement rather than a blocker for this PR.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/api/handlers/logs.rs` around lines 94 - 155, The current get_address_logs handler (function get_address_logs) and its use of LogsQuery relies on COUNT(*) and OFFSET which will not scale; switch to cursor-based pagination by extending LogsQuery with a cursor (e.g., last_block_number and last_log_index or a single encoded cursor), update the SELECT queries on event_logs (both the topic0 and non-topic0 branches) to filter using the cursor like "WHERE address = $1 AND (block_number, log_index) < ($cursor_block, $cursor_index) ORDER BY block_number DESC, log_index DESC LIMIT $N" instead of OFFSET, remove/avoid expensive COUNT(*) (or replace it with a cheaper existence/has_next check), and return the next cursor (based on the last row's block_number/log_index) in the PaginatedResponse so clients can request subsequent pages; adjust references to topic0, block_number, log_index, LogsQuery, get_address_logs, and PaginatedResponse accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@backend/crates/atlas-server/src/api/handlers/logs.rs`:
- Around line 256-277: Add a unit test mirroring the TransactionLogsQuery test
that verifies LogsQuery clamping: create a LogsQuery with pagination: Pagination
{ page: 2, limit: 1000 }, then assert that query.clamped_limit() == 100,
query.offset() == 100, and query.limit() == 100; place the test inside the
existing #[cfg(test)] mod tests alongside the TransactionLogsQuery test so
LogsQuery’s clamping logic (accessing pagination) is covered.
- Around line 94-155: The current get_address_logs handler (function
get_address_logs) and its use of LogsQuery relies on COUNT(*) and OFFSET which
will not scale; switch to cursor-based pagination by extending LogsQuery with a
cursor (e.g., last_block_number and last_log_index or a single encoded cursor),
update the SELECT queries on event_logs (both the topic0 and non-topic0
branches) to filter using the cursor like "WHERE address = $1 AND (block_number,
log_index) < ($cursor_block, $cursor_index) ORDER BY block_number DESC,
log_index DESC LIMIT $N" instead of OFFSET, remove/avoid expensive COUNT(*) (or
replace it with a cheaper existence/has_next check), and return the next cursor
(based on the last row's block_number/log_index) in the PaginatedResponse so
clients can request subsequent pages; adjust references to topic0, block_number,
log_index, LogsQuery, get_address_logs, and PaginatedResponse accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7b93f8c8-829f-4ac1-b94e-c65c114c5635
📒 Files selected for processing (3)
- Justfile
- README.md
- backend/crates/atlas-server/src/api/handlers/logs.rs
🧹 Nitpick comments (1)
backend/crates/atlas-server/src/main.rs (1)
Lines 136-167: Clarify return type and use saturating arithmetic.

Two observations:

1. Misleading return type: The function returns `Result<()>` but the loop never exits, so `Ok(())` is unreachable. Consider changing to `-> !` (never type) or documenting that this is intentional for long-running tasks.
2. Potential overflow: While unlikely in practice, `retry_count += 1` could theoretically overflow. Use idiomatic Rust with `saturating_add`.

♻️ Suggested improvements:

```diff
 /// Run an async function with exponential backoff retry
-async fn run_with_retry<F, Fut>(f: F) -> Result<()>
+/// Note: This function runs indefinitely, restarting the task on failure.
+async fn run_with_retry<F, Fut>(f: F) -> !
 where
     F: Fn() -> Fut,
     Fut: std::future::Future<Output = Result<()>>,
 {
     let mut retry_count = 0;
     loop {
         match f().await {
             Ok(()) => {
                 retry_count = 0;
             }
             Err(e) => {
                 let delay = RETRY_DELAYS
                     .get(retry_count)
                     .copied()
                     .unwrap_or(MAX_RETRY_DELAY);
                 tracing::error!(
                     "Fatal error (internal retries exhausted): {}. Restarting in {}s (attempt {})...",
                     e,
                     delay,
                     retry_count + 1
                 );
                 tokio::time::sleep(Duration::from_secs(delay)).await;
-                retry_count += 1;
+                retry_count = retry_count.saturating_add(1);
             }
         }
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/main.rs` around lines 136 - 167, The function run_with_retry has a misleading Result<()> return because its loop never returns and also increments retry_count with plain +1; change the signature to return the never type `-> !` (or explicitly document why Result<()> is intentional) and replace the increment `retry_count += 1` with idiomatic saturating arithmetic, e.g. `retry_count = retry_count.saturating_add(1)`, while keeping the logic that looks up delays from RETRY_DELAYS and falls back to MAX_RETRY_DELAY.
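The retry bookkeeping this prompt describes is pure and easy to test in isolation. A sketch of the delay lookup with saturating arithmetic — the `RETRY_DELAYS` and `MAX_RETRY_DELAY` values here are illustrative placeholders, not the project's actual constants:

```rust
// Illustrative values; the real constants live in main.rs.
const RETRY_DELAYS: [u64; 4] = [1, 5, 15, 30]; // seconds
const MAX_RETRY_DELAY: u64 = 60;

/// Delay before the next restart attempt; once past the table, stay at the cap.
fn backoff_delay(retry_count: usize) -> u64 {
    RETRY_DELAYS.get(retry_count).copied().unwrap_or(MAX_RETRY_DELAY)
}

/// Advance the counter without risking overflow during very long outages.
fn next_retry_count(retry_count: usize) -> usize {
    retry_count.saturating_add(1)
}
```

Because `get` returns `None` past the end of the table, the delay plateaus at the cap instead of panicking, and `saturating_add` pins the counter at `usize::MAX` rather than wrapping back to zero (which would silently reset the backoff).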
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@backend/crates/atlas-server/src/main.rs`:
- Around line 136-167: The function run_with_retry has a misleading Result<()>
return because its loop never returns and also increments retry_count with plain
+1; change the signature to return the never type `-> !` (or explicitly document
why Result<()> is intentional) and replace the increment `retry_count += 1` with
idiomatic saturating arithmetic, e.g. `retry_count =
retry_count.saturating_add(1)`, while keeping the logic that looks up delays
from RETRY_DELAYS and falls back to MAX_RETRY_DELAY.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 140e51cc-dcb8-4eb3-89d5-3222b46f1e1b
📒 Files selected for processing (2)
- backend/crates/atlas-server/src/indexer/indexer.rs
- backend/crates/atlas-server/src/main.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/crates/atlas-server/src/indexer/indexer.rs
Actionable comments posted: 1
🧹 Nitpick comments (1)
backend/crates/atlas-server/src/config.rs (1)
Lines 33-88: Consider adding unit tests for config parsing.

The new environment variable parsing logic (especially for `api_port`, which requires a valid u16) would benefit from tests to catch parsing edge cases. As per coding guidelines, new logic should include tests in a `#[cfg(test)] mod tests` block.

🧪 Example test structure:

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use std::env;

    #[test]
    fn test_api_port_default() {
        env::remove_var("API_PORT");
        env::set_var("DATABASE_URL", "postgres://localhost/test");
        env::set_var("RPC_URL", "http://localhost:8545");
        let config = Config::from_env().unwrap();
        assert_eq!(config.api_port, 3000);
        // cleanup
        env::remove_var("DATABASE_URL");
        env::remove_var("RPC_URL");
    }

    #[test]
    fn test_api_host_default() {
        env::remove_var("API_HOST");
        env::set_var("DATABASE_URL", "postgres://localhost/test");
        env::set_var("RPC_URL", "http://localhost:8545");
        let config = Config::from_env().unwrap();
        assert_eq!(config.api_host, "127.0.0.1");
        env::remove_var("DATABASE_URL");
        env::remove_var("RPC_URL");
    }
}
```
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/config.rs` around lines 33 - 88, Add unit tests for Config::from_env to validate environment parsing (especially api_port and api_host) by creating a #[cfg(test)] mod tests that sets/clears environment variables (setting DATABASE_URL and RPC_URL as required), calls Config::from_env(), asserts default values (e.g., api_port == 3000, api_host == "127.0.0.1") and also adds cases for invalid/parsing edge inputs (e.g., non-numeric API_PORT) to ensure error propagation; remember to clean up or restore env vars between tests to avoid cross-test pollution and reference the Config::from_env, api_port, api_host symbols when locating where to add these tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CLAUDE.md`:
- Around line 77-85: Update the API docs to match the actual AppState shape by
adding the missing chain_id: u64 and chain_name: String fields to the AppState
struct example (the same fields present in the runtime AppState used by
functions in lib.rs), and add the CHAIN_NAME environment variable to the
env-vars table with a default of "Unknown" so the documented runtime contract
matches the code (ensure references to chain_id/chain_name align with the
AppState example and mention CHAIN_NAME as the source of chain_name).
---
Nitpick comments:
In `@backend/crates/atlas-server/src/config.rs`:
- Around line 33-88: Add unit tests for Config::from_env to validate environment
parsing (especially api_port and api_host) by creating a #[cfg(test)] mod tests
that sets/clears environment variables (setting DATABASE_URL and RPC_URL as
required), calls Config::from_env(), asserts default values (e.g., api_port ==
3000, api_host == "127.0.0.1") and also adds cases for invalid/parsing edge
inputs (e.g., non-numeric API_PORT) to ensure error propagation; remember to
clean up or restore env vars between tests to avoid cross-test pollution and
reference the Config::from_env, api_port, api_host symbols when locating where
to add these tests.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 35ea945b-fa75-4cba-b5cd-f7e0d3c88c53
📒 Files selected for processing (3)
- .env.example
- CLAUDE.md
- backend/crates/atlas-server/src/config.rs
Merge the API and indexer into a single `atlas-server` binary and update Docker Compose/nginx to target the unified service.

Because the API and indexer now run in the same process, SSE was also refactored to use an in-process broadcast channel fed directly by the indexer instead of the previous Postgres LISTEN/NOTIFY path.
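With both halves in one process, the indexer can hand a broadcast sender to the API's SSE handlers so each persisted block wakes every subscriber without a Postgres round-trip. The fan-out pattern itself can be sketched with the standard library alone — this is a simplified stand-in for `tokio::sync::broadcast`, not the actual atlas-server code:

```rust
use std::sync::mpsc;

/// Minimal multi-subscriber broadcast: every send is delivered to all
/// live receivers, mirroring how a broadcast channel fans out block events.
struct Broadcast<T: Clone> {
    subscribers: Vec<mpsc::Sender<T>>,
}

impl<T: Clone> Broadcast<T> {
    fn new() -> Self {
        Broadcast { subscribers: Vec::new() }
    }

    /// Each SSE handler would call this once, then wait on its receiver.
    fn subscribe(&mut self) -> mpsc::Receiver<T> {
        let (tx, rx) = mpsc::channel();
        self.subscribers.push(tx);
        rx
    }

    /// The indexer calls this after committing a block; disconnected
    /// subscribers are dropped rather than treated as errors.
    fn send(&mut self, event: T) {
        self.subscribers.retain(|tx| tx.send(event.clone()).is_ok());
    }
}
```

The real `tokio::sync::broadcast` channel adds bounded buffering and lag handling on top of this shape, which matters when a slow SSE client falls behind the indexer.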