Chapter 8: Event Indexing

The indexer is the bridge between on-chain events and Convex state. When a Safe module executes a payment, when an employee claims funds, when tokens move in or out of a treasury — the indexer detects these events and calls Convex mutations to keep the platform’s data current.

Without the indexer, invoices stay stuck in payment_pending, claim receipts are never generated, treasury balance on the dashboard is stale, and the bidirectional link between on-chain and off-chain never closes.

Architecture Decision: Why a Custom Indexer

Four approaches were evaluated against a rubric weighted by Capxul’s priorities: cost first, then latency, then reliability, then operational simplicity.

| Criterion | Convex-Native Polling | The Graph | Ponder + Sync | Custom Indexer |
|---|---|---|---|---|
| Cost | High — burns millions of empty calls/month | Medium — hosted infra | Medium — $7-12/mo | Lowest — $5-7/mo |
| Latency | 2-3 seconds | 5-15 seconds | 3-4 seconds | 3-4 seconds |
| Reliability | Self-healing but no reorg handling | Reorg built in, but sync layer adds failures | Reorg built in | Cursor in Convex survives crashes |
| Operational simplicity | Best — zero services | Worst — AssemblyScript, Graph Node | Medium — Ponder + Postgres | Good — one small process |

Decision: Custom indexer. Lowest cost, meets sub-5-second latency requirement, keeps Convex as the single source of truth with no intermediate databases. The tradeoff is hand-built cursor management and reorg handling. For Base (centralized sequencer, no practical reorg risk) and current volume (under 1,000 txs/day), this tradeoff is correct.

Upgrade path: If volume grows past 10,000/day or the team expands, Ponder becomes the natural upgrade. The Convex HTTP action interface remains identical — only the caller changes.

A small Node.js/TypeScript process using viem for chain interaction. It polls eth_getLogs on a 2-second interval per chain, decodes events using contract ABIs, and calls Convex HTTP actions to process each event. The block cursor (last processed block per chain) lives in a Convex table. If the process crashes, it reads the cursor from Convex and resumes exactly where it stopped.
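
The control flow of one polling cycle can be sketched as below. This is a minimal sketch with the chain client and Convex caller injected as plain functions; in the real service `latestBlock` and `getLogs` would be viem `PublicClient` calls and `processEvent` an HTTP POST to a Convex action. All names here (`pollOnce`, `ChainDeps`, `RawLog`) are illustrative, not the actual implementation.

```typescript
// One polling cycle for a single chain. Dependencies are injected so the
// cursor discipline is visible: the cursor only advances after every event
// in the fetched range has been processed successfully.
interface RawLog {
  transactionHash: string;
  logIndex: number;
  data: string;
}

interface ChainDeps {
  readCursor(): Promise<bigint>;              // last processed block, from Convex
  writeCursor(block: bigint): Promise<void>;  // advance only after success
  latestBlock(): Promise<bigint>;             // chain head via RPC
  getLogs(from: bigint, to: bigint): Promise<RawLog[]>;
  processEvent(log: RawLog): Promise<void>;   // Convex HTTP action call
}

async function pollOnce(deps: ChainDeps): Promise<bigint> {
  const cursor = await deps.readCursor();
  const head = await deps.latestBlock();
  if (head <= cursor) return cursor; // nothing new this cycle

  const logs = await deps.getLogs(cursor + 1n, head);
  for (const log of logs) {
    // Any throw here aborts the cycle before the cursor moves, so the next
    // cycle re-fetches the same range. Mutations are idempotent, so
    // redelivery is harmless.
    await deps.processEvent(log);
  }
  await deps.writeCursor(head);
  return head;
}
```

One such loop runs per active chain on its 2-second interval; a crash at any point simply means the next cycle resumes from the stored cursor.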

No intermediate database. No Postgres. No subgraph. Convex is the only state store. The indexer process is stateless.

The indexer watches four categories of on-chain events.

Category 1: Discrete Payments

Event: PaymentExecuted — emitted when a discrete payment executes.

Indexer action: Match-and-update. Find the financialDocument by document hash, transition to paid, record the tx hash.

Category 2: Payment Streams

Events:

  • StreamCreated — new payment stream established
  • StreamModified — rate or parameters changed
  • StreamCancelled — stream terminated early

Indexer action: Create or update stream records in Convex. Feeds payslip generation and dashboard metrics.

Category 3: Claims

Event: FundsClaimed (or equivalent) — employee claims accrued funds.

Indexer action: Create. No prior document exists; the event itself births the claim receipt as a new record in financialDocuments with status available.

Category 4: Safe-Level Events and ERC20 Transfers

ERC20 Transfer events to/from Safe addresses: every token transfer changes treasury balance. The indexer watches Transfer events on all token contracts where either from or to is a registered Safe.

Safe native events: Module enabled/disabled, ownership changes. Lower priority, useful for audit.

Indexer action for transfers: Update the treasuryBalances table. Decrement or increment per-token balances. Store each transfer as a treasuryActivity record for the payment activity feed.

Why Index Balances Instead of Reading balanceOf

  1. The dashboard needs historical balance data for charts and trends, not just the current number
  2. A single balanceOf call requires an RPC round trip on every dashboard load
  3. Tracking transfers gives raw data for payment activity feeds

The tradeoff: indexed balances can drift from actual on-chain balances if a Transfer event is missed. The system includes periodic reconciliation (see Failure Handling).

New tokens enter tracking in three ways.

Reactive: When a Transfer event is detected with a Safe address as recipient for an untracked token, register it and begin tracking.

Registry-based: Org admin explicitly registers token addresses through the UI.

Initial setup: Backfill by querying Transfer events from the Safe’s deployment block.

A Convex table chainConfigs stores per-chain configuration: chain ID, name, RPC endpoint, block time, confirmation depth (0 for Base), and active flag. Adding a chain is a data operation — no code changes.
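
The shape of one chainConfigs row, built from the fields named above, might look like the following. The exact Convex schema (defineSchema/validators) is omitted; this is just the TypeScript view of the data, and the field names beyond those listed in the text are assumptions.

```typescript
// One row of the chainConfigs table, as seen from TypeScript.
interface ChainConfig {
  chainId: number;
  name: string;
  rpcEndpoint: string;
  blockTimeMs: number;       // drives the per-chain polling interval
  confirmationDepth: number; // 0 = process at the head block
  active: boolean;
}

// Illustrative entry for Base (chain ID 8453). The public RPC URL is a
// placeholder; production would use a provider endpoint from config.
const base: ChainConfig = {
  chainId: 8453,
  name: "Base",
  rpcEndpoint: "https://mainnet.base.org",
  blockTimeMs: 2_000,
  confirmationDepth: 0, // centralized sequencer, no practical reorg risk
  active: true,
};
```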

A Convex table safeRegistry maps Safes to organizations: Safe address, chain ID, org ID, role (primary, child, standalone), module addresses, and active flag.

One independent polling loop per active chain. Each reads its cursor from Convex, calls eth_getLogs, decodes events, calls Convex HTTP actions, and advances the cursor. Loops run concurrently. A failure on one chain does not affect others. Each chain’s cursor is a separate row in indexerCursors.

The polling interval is 2 seconds per chain (matching Base’s block time). Worst-case detection latency: 2 seconds plus the HTTP action round trip (typically under 500ms). Total: well under the 5-second budget.

The indexer calls Convex HTTP actions to push event data. Each event type maps to a dedicated endpoint. The HTTP action validates the payload, authorizes via shared secret, and delegates to an internal mutation with full ACID guarantees.
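
The validation and shared-secret check can be isolated as a pure function, sketched below. In Convex this logic would live inside an `httpAction` handler before delegating to the internal mutation; the payload field names here follow the deduplication and matching keys described in this chapter, but the function and type names are illustrative.

```typescript
// Guard logic for an indexer endpoint: check the shared secret, then
// validate the payload shape before any mutation runs.
interface PaymentExecutedPayload {
  chainId: number;
  transactionHash: string;
  logIndex: number;
  documentHash: string;
}

type AuthResult =
  | { ok: true; payload: PaymentExecutedPayload }
  | { ok: false; status: 401 | 400; error: string };

function authorize(
  authHeader: string | null,
  sharedSecret: string,
  body: unknown,
): AuthResult {
  if (authHeader !== `Bearer ${sharedSecret}`) {
    return { ok: false, status: 401, error: "bad or missing secret" };
  }
  const p = body as Partial<PaymentExecutedPayload>;
  if (
    typeof p?.chainId !== "number" ||
    typeof p?.transactionHash !== "string" ||
    typeof p?.logIndex !== "number" ||
    typeof p?.documentHash !== "string"
  ) {
    return { ok: false, status: 400, error: "malformed payload" };
  }
  return { ok: true, payload: p as PaymentExecutedPayload };
}
```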

processPaymentExecuted

  1. Look up financialDocument by documentHash (indexed field)
  2. Not found: log warning, store in unmatchedEvents table
  3. Found but already paid: skip idempotently
  4. Transition to paid, write tx hash/block/timestamp
  5. If invoice: trigger receipt generation
  6. Update treasury balance
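
The branching in steps 2-4 can be captured as a small decision function, separate from Convex's database API. The status strings follow the document lifecycle described in the text; the function and type names are illustrative.

```typescript
// Decision logic of processPaymentExecuted: given the lookup result for
// the document hash, decide what the mutation should do.
interface FinancialDocument {
  status: "payment_pending" | "paid" | string;
}

type PaymentAction =
  | { kind: "unmatched" }               // step 2: log and store in unmatchedEvents
  | { kind: "skip" }                    // step 3: already paid, idempotent no-op
  | { kind: "transition"; to: "paid" }; // step 4: write tx hash/block/timestamp

function decidePaymentAction(doc: FinancialDocument | null): PaymentAction {
  if (doc === null) return { kind: "unmatched" };
  if (doc.status === "paid") return { kind: "skip" };
  return { kind: "transition", to: "paid" };
}
```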

processClaimEvent

  1. Look up org by claimer’s address
  2. Create receipt document: amount, token, address, tx hash, block timestamp, status available
  3. Set sourceDocumentReference if stream ID is determinable

processStreamEvent

  1. StreamCreated: insert/update stream record
  2. StreamModified: update rate and effective timestamp
  3. StreamCancelled: mark cancelled, record final accrued amount, trigger partial-period payslip if applicable

processTransferEvent

  1. Determine direction: inbound (Safe is to) or outbound (Safe is from)
  2. Update treasury balance for org/chain/token
  3. Register new tokens if first seen
  4. Store as treasuryActivity record
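
Steps 1 and 2 reduce to a signed balance delta per transfer. A sketch, with amounts in bigint base units and illustrative names; note that a self-transfer (Safe is both from and to) correctly nets to zero:

```typescript
// Direction and balance delta for processTransferEvent.
interface TransferLog {
  from: string;
  to: string;
  value: bigint;
}

function balanceDelta(safe: string, t: TransferLog): bigint {
  const s = safe.toLowerCase(); // addresses compare case-insensitively
  let delta = 0n;
  if (t.to.toLowerCase() === s) delta += t.value;   // inbound
  if (t.from.toLowerCase() === s) delta -= t.value; // outbound
  return delta;
}
```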

Every mutation is idempotent. The deduplication key is chainId + transactionHash + logIndex. Before processing, check the indexedEvents table. If the key exists, return early. If not, process and insert the key atomically within the same transaction.
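
The key construction and check-before-process pattern look like this. The `Set` stands in for the indexedEvents table; in Convex, the lookup and insert would happen inside the same mutation, which is what makes the check-then-insert atomic. Names are illustrative.

```typescript
// Deduplication key: chainId + transactionHash + logIndex uniquely
// identifies one emitted log across all chains.
function dedupKey(chainId: number, txHash: string, logIndex: number): string {
  return `${chainId}:${txHash.toLowerCase()}:${logIndex}`;
}

// Idempotent wrapper: skip if seen, otherwise do the work and record the
// key. Returns true only when the work actually ran.
async function runOnce(
  seen: Set<string>, // stands in for the indexedEvents table
  key: string,
  work: () => Promise<void>,
): Promise<boolean> {
  if (seen.has(key)) return false; // already processed: return early
  await work();
  seen.add(key); // in Convex: same transaction as the work above
  return true;
}
```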

Match-and-update: For events tied to existing documents (invoice payments, stream modifications). Find by hash or stream ID. If the document does not exist, something is wrong — the event goes to unmatchedEvents.

Create: For events that birth new documents (claim receipts, stream creation records). No pre-existing document expected. The mode is determined by event type, not runtime detection.

A Convex table indexerCursors stores one row per chain: chain ID, last processed block number, last processed timestamp, last updated at.

The cursor advances only after all events in a block range have been processed:

  1. Fetch logs for [cursor+1, latestBlock]
  2. Process each event via Convex HTTP action
  3. All succeed: advance cursor to latestBlock
  4. Any fail: do not advance. Next cycle retries from the same position.

Events may be delivered more than once after a failure. This is fine because mutations are idempotent.

On start: read cursors from Convex. If the gap is small (under 1,000 blocks), process normally. If large (extended outage), enter backfill mode in batches of 500-1,000 blocks.
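
Backfill-mode batching is simple range arithmetic. A sketch using the 500-block lower bound from above; the function name is illustrative:

```typescript
// Split a large catch-up gap into bounded, inclusive block ranges so a
// single eth_getLogs call never covers too many blocks.
function backfillRanges(
  cursor: bigint,
  head: bigint,
  batch: bigint = 500n,
): Array<[bigint, bigint]> {
  const ranges: Array<[bigint, bigint]> = [];
  for (let from = cursor + 1n; from <= head; from += batch) {
    const to = from + batch - 1n < head ? from + batch - 1n : head;
    ranges.push([from, to]);
  }
  return ranges;
}
```

Each range is processed and its cursor advanced before the next begins, so a crash mid-backfill loses at most one batch of work.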

For new orgs: cursor starts at the Safe’s deployment block (captures all events from inception). For existing orgs onboarding: cursor starts at the current block with a separate backfill batch job.

Process crash: Restarts automatically (Railway), reads cursors from Convex, and resumes. Maximum data staleness equals the outage duration.

RPC endpoint failure: Retry with exponential backoff (2s, 4s, 8s, 16s, capped at 60s). Other chains are unaffected. Alert after 5 minutes of consecutive failures. Consider configuring a fallback RPC endpoint.
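
The backoff schedule above is a doubling series with a cap, one line of arithmetic:

```typescript
// Exponential backoff for RPC failures: 2s, 4s, 8s, 16s, ... capped at 60s.
// attempt is 1-based (first retry waits 2s).
function backoffMs(attempt: number, baseMs = 2_000, capMs = 60_000): number {
  return Math.min(baseMs * 2 ** (attempt - 1), capMs);
}
```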

Convex mutation failure: Retry up to 3 times per event with exponential backoff. If all retries fail: log to a local error queue and do not advance the cursor; the next cycle re-fetches and re-processes.

Circuit breaker: If a specific event consistently fails (mutation bug), skip after N failures (default 10). Log prominently. Advance cursor. Skipped event goes to unmatchedEvents for manual resolution. This prevents a single bad event from permanently stalling the indexer.
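
The breaker only needs a per-key failure counter. A minimal sketch, keyed by the same chainId + txHash + logIndex deduplication key; the class name is illustrative:

```typescript
// Per-event circuit breaker: after `threshold` consecutive failures of the
// same event key, report that the event should be skipped (sent to
// unmatchedEvents) so the cursor can advance past it.
class EventCircuitBreaker {
  private failures = new Map<string, number>();
  constructor(private readonly threshold = 10) {}

  // Returns true when the event should be skipped instead of retried.
  recordFailure(key: string): boolean {
    const n = (this.failures.get(key) ?? 0) + 1;
    this.failures.set(key, n);
    return n >= this.threshold;
  }

  recordSuccess(key: string): void {
    this.failures.delete(key); // success resets the count
  }
}
```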

A Convex scheduled function runs every hour:

  1. For each org’s Safe, call balanceOf for each tracked token and compare to indexed balance. Flag discrepancies beyond dust threshold.
  2. Check for financialDocuments stuck in payment_pending longer than 10 minutes. Query the chain for tx status.
  3. Surface unmatchedEvents entries in admin alerts.
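
The comparison in step 1 can be sketched as a pure function over (indexed, on-chain) balance pairs, with amounts in bigint base units and a per-call dust threshold. Names and input shapes are illustrative; in the scheduled function, the on-chain values would come from balanceOf reads.

```typescript
// Flag tokens whose indexed balance has drifted from the live balanceOf
// value by more than the dust threshold.
interface BalancePair {
  token: string;
  indexed: bigint;  // value in treasuryBalances
  onChain: bigint;  // value from balanceOf
}

function flagDiscrepancies(pairs: BalancePair[], dust: bigint): BalancePair[] {
  return pairs.filter((p) => {
    const diff =
      p.indexed > p.onChain ? p.indexed - p.onChain : p.onChain - p.indexed;
    return diff > dust; // absolute drift beyond dust is flagged
  });
}
```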

Base uses a centralized sequencer with near-instant soft finality. Reorgs are not a practical concern today. However, the architecture supports recovery:

  • Every event is keyed by chainId + txHash + logIndex. A reorg produces a different hash.
  • Financial document transitions are auditable. A recovery process can identify documents transitioned by orphaned transactions.
  • The chainConfigs table includes confirmationDepth per chain. For Base: 0 (process immediately). For chains with reorg risk: 3-12 blocks.

Active reorg detection should be built when Capxul expands to chains where reorgs occur in practice.

Metrics worth tracking:

  • Cursor lag per chain (current head minus last processed block, should be 0-2)
  • Events processed per minute per chain
  • Convex HTTP action success rate (should be >99.9%)
  • RPC call latency per chain
  • Process uptime (heartbeat)

Alert conditions:

  • Cursor lag > 50 blocks for more than 2 minutes
  • HTTP action error rate > 5% over a 1-minute window
  • RPC failures for a chain for more than 5 minutes
  • Heartbeat missing for more than 30 seconds
  • Reconciliation balance discrepancy > $1
  • unmatchedEvents entries older than 15 minutes

The org admin dashboard shows a “data may be delayed” badge when the last cursor update for the org’s chain is more than 30 seconds old. A Convex query on indexerCursors — trivial to implement, important for user trust.
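
The predicate behind the badge is a single comparison; the Convex query only needs to return the cursor's last-updated timestamp. A sketch with illustrative names:

```typescript
// True when the chain's cursor has not advanced within the threshold,
// meaning dashboard data for that chain may be delayed.
function isDataDelayed(
  lastCursorUpdateMs: number,
  nowMs: number,
  thresholdMs = 30_000,
): boolean {
  return nowMs - lastCursorUpdateMs > thresholdMs;
}
```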

  • Runtime: Node.js / TypeScript
  • Dependencies: viem, standard HTTP client for Convex
  • Hosting: Railway (always-on, auto-restart, $5-7/month)
  • Configuration: Env vars for RPC endpoints, Convex deployment URL, HTTP action auth secret
  • RPC: Single eth_getLogs call per polling cycle per chain (most efficient pattern)
  • Deployment: Separate repo or package from the Convex backend. Deploys independently. Version coordination with Convex via shared TypeScript types.

| Scale | Railway | RPC | Convex | Total |
|---|---|---|---|---|
| Pre-launch | ~$5/mo | $0 (free tier) | $0 (within limits) | ~$5/mo |
| 100-1,000 txs/day, 3 chains | ~$7/mo | $0-49 | Included in Pro | ~$7-56/mo |

The indexer is one of the cheapest components in the stack. The dominant cost at scale is the RPC provider.