
Data Flow

Tracera processes data through multiple pipelines, each optimized for its specific workload.

Price Ingestion Pipeline

┌────────────┐     ┌────────────┐     ┌────────────────┐
│   Steam    │     │PriceEmpire │     │  Future APIs   │
│ Market API │     │    API     │     │ (CSFloat, etc) │
└─────┬──────┘     └─────┬──────┘     └───────┬────────┘
      │                  │                     │
      └─────────┬────────┴─────────────────────┘

      ┌─────────────────────┐
      │   Ingestion Layer   │
      │  • Concurrent fetch │
      │  • Normalization    │
      │  • Deduplication    │
      └────────┬────────────┘

      ┌────────┴─────────┐
      ▼                  ▼
┌───────────┐     ┌──────────┐
│TimescaleDB│     │  Redis   │
│ COPY batch│     │  Cache   │
│  insert   │     │ + PubSub │
└───────────┘     └──────────┘

How it works

  1. Scheduler triggers a price fetch cycle at a configurable interval
  2. Providers are queried concurrently using goroutine fan-out with sync.WaitGroup
  3. Each provider returns normalized RawPrice structs
  4. Prices are batch-inserted into TimescaleDB using pgx.CopyFrom for minimal DB round-trips
  5. Latest prices are cached in Redis for instant reads
  6. Price updates are published to Redis Pub/Sub for real-time broadcasting
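Steps 2 and 3 can be sketched as a goroutine fan-out that merges normalized results. The `Provider` interface and `RawPrice` fields below are illustrative assumptions, not Tracera's actual types:

```go
package main

import (
	"fmt"
	"sync"
)

// RawPrice mirrors the normalized struct each provider returns
// (field names here are illustrative, not Tracera's real schema).
type RawPrice struct {
	ItemName string
	Source   string
	PriceUSD float64
}

// Provider is a hypothetical interface for a price source.
type Provider interface {
	Name() string
	Fetch() ([]RawPrice, error)
}

// fetchAll queries every provider concurrently and merges the results.
// A mutex guards the shared slice; a failed provider is logged and
// skipped rather than aborting the cycle.
func fetchAll(providers []Provider) []RawPrice {
	var (
		wg  sync.WaitGroup
		mu  sync.Mutex
		out []RawPrice
	)
	for _, p := range providers {
		wg.Add(1)
		go func(p Provider) {
			defer wg.Done()
			prices, err := p.Fetch()
			if err != nil {
				fmt.Printf("provider %s failed: %v\n", p.Name(), err)
				return // other providers keep going
			}
			mu.Lock()
			out = append(out, prices...)
			mu.Unlock()
		}(p)
	}
	wg.Wait() // step 4 (CopyFrom batch insert) would run after this
	return out
}

type fakeProvider struct{ name string }

func (f fakeProvider) Name() string { return f.name }
func (f fakeProvider) Fetch() ([]RawPrice, error) {
	return []RawPrice{{ItemName: "AK-47 | Redline", Source: f.name, PriceUSD: 20.5}}, nil
}

func main() {
	prices := fetchAll([]Provider{fakeProvider{"steam"}, fakeProvider{"priceempire"}})
	fmt.Println(len(prices)) // 2
}
```

The buffered append under a mutex keeps the example simple; a channel-based collector works equally well here.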

Provider Isolation

Each provider runs in its own goroutine. If one provider fails (network timeout, API error), other providers still complete their cycle. Failures are logged but don’t block the pipeline.

Volatility Computation Pipeline

┌──────────────────────┐
│ TimescaleDB          │
│ Continuous Aggregates│
│ (1h, 24h, 7d)        │
└──────────┬───────────┘

┌─────────────────────┐
│  Volatility Engine  │
│  • Std Dev          │
│  • Bollinger Bands  │
│  • % Change         │
│  • Trend Score      │
│  • CoV              │
└────────┬────────────┘

    ┌────┴────┐
    ▼         ▼
┌───────┐ ┌───────┐
│ Redis │ │PubSub │
│ Cache │ │Publish│
└───────┘ └───────┘

The volatility worker runs periodically, reading pre-aggregated data from TimescaleDB continuous aggregates and computing metrics in Go for speed.

Real-Time Delivery Pipeline

┌──────────────┐
│ Redis PubSub │
│  Subscriber  │
└──────┬───────┘

┌──────────────┐
│ WebSocket Hub│
│  (Go)        │
└──────┬───────┘

  ┌────┼────┐
  ▼    ▼    ▼
┌──┐ ┌──┐ ┌──┐
│C1│ │C2│ │C3│  Connected clients
└──┘ └──┘ └──┘

  1. The WebSocket hub subscribes to Redis Pub/Sub channels
  2. When a price or volatility update is published, the hub receives it
  3. The hub fans out the update to all connected clients subscribed to that item
  4. Clients receive updates over persistent WebSocket connections — no polling
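The hub's fan-out (step 3) can be sketched with channels standing in for WebSocket connections; the item-keyed subscription map and the drop-on-slow-client policy are assumptions, not necessarily Tracera's behavior:

```go
package main

import (
	"fmt"
	"sync"
)

// Hub fans one published update out to every client subscribed to that
// item. Real clients would hold WebSocket connections; here each client
// is a buffered channel, which is enough to show the pattern.
type Hub struct {
	mu      sync.RWMutex
	clients map[string]map[chan string]bool // item -> subscriber set
}

func NewHub() *Hub {
	return &Hub{clients: make(map[string]map[chan string]bool)}
}

// Subscribe registers a new client channel for an item.
func (h *Hub) Subscribe(item string) chan string {
	ch := make(chan string, 8)
	h.mu.Lock()
	if h.clients[item] == nil {
		h.clients[item] = make(map[chan string]bool)
	}
	h.clients[item][ch] = true
	h.mu.Unlock()
	return ch
}

// Broadcast is what the Redis Pub/Sub subscriber goroutine would call
// for each message it receives.
func (h *Hub) Broadcast(item, msg string) {
	h.mu.RLock()
	defer h.mu.RUnlock()
	for ch := range h.clients[item] {
		select {
		case ch <- msg: // deliver
		default: // drop for a slow client rather than block the hub
		}
	}
}

func main() {
	hub := NewHub()
	c1 := hub.Subscribe("ak47-redline")
	c2 := hub.Subscribe("ak47-redline")
	hub.Broadcast("ak47-redline", `{"price": 20.50}`)
	fmt.Println(<-c1, <-c2)
}
```

The non-blocking send in `Broadcast` is the key design choice: one stalled connection never delays delivery to everyone else.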

Authentication Flow

User ──▶ OAuth Provider (Google/GitHub/Steam)
  │              │
  │         callback
  │              │
  │              ▼
  │     ┌─────────────────┐
  │     │  Auth Handler   │
  │     │ • Validate      │
  │     │ • Upsert user   │
  │     │ • Create session│
  └────▶└────────┬────────┘

          ┌──────┴──────┐
          ▼             ▼
    ┌──────────┐  ┌──────────┐
    │TimescaleDB│  │  Redis   │
    │  (users)  │  │(sessions)│
    └──────────┘  └──────────┘