Architecture

Orphnet Logging is built on Cloudflare Workers with a layered architecture that separates routing, business logic, and infrastructure concerns. This page explains the major components and how they interact.

Request Flow

Every API request passes through a consistent pipeline: CORS handling, request ID assignment, authentication, rate limiting, handler execution, and service/repository delegation.
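The pipeline order above can be sketched as composed middleware. This is a minimal illustration, not the actual Orphnet Logging source; all stage and function names here are assumptions.

```typescript
// Illustrative sketch of the documented pipeline order, assuming a
// compose-style middleware chain. Stage names are hypothetical.
type Ctx = { trace: string[] };
type Middleware = (ctx: Ctx, next: () => void) => void;

// Run stages left to right; each stage decides whether to call next().
const compose = (...stages: Middleware[]) => (ctx: Ctx) => {
  const run = (i: number): void => {
    if (i < stages.length) stages[i](ctx, () => run(i + 1));
  };
  run(0);
};

// One no-op stage per documented step, recording the order it ran in.
const stage = (name: string): Middleware => (ctx, next) => {
  ctx.trace.push(name);
  next();
};

const pipeline = compose(
  stage("cors"),
  stage("requestId"),
  stage("auth"),
  stage("rateLimit"),
  stage("handler"), // delegates to services and repositories
);
```

A stage that does not call `next()` (for example, rate limiting on an exhausted quota) short-circuits the rest of the chain.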

Layered Architecture

| Layer          | Responsibility                                    | Location            |
|----------------|---------------------------------------------------|---------------------|
| Routes         | HTTP method + path mapping, middleware composition | src/endpoints/      |
| Handlers       | Request parsing, validation, response formatting  | src/handlers/       |
| Services       | Business logic, orchestration, error handling     | src/services/       |
| Repositories   | Data access, query building, storage abstraction  | src/repositories/   |
| Infrastructure | D1 (SQL), KV (cache), R2 (archive), Queue         | Cloudflare bindings |

No business logic lives in route files or handlers. Handlers validate input and format responses. Services contain all domain rules.
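The handler/service split can be sketched as follows. `ProjectService` and `createProjectHandler` are hypothetical names for illustration, not the project's actual code.

```typescript
// Sketch of the documented separation: handlers validate shape and format
// responses; services own the domain rules. All names are illustrative.
interface CreateProjectInput { workspaceId: string; name: string }

class ProjectService {
  // Domain rule lives here, not in the handler.
  create(input: CreateProjectInput) {
    if (input.name.trim().length === 0) {
      throw new Error("project name must not be empty");
    }
    return { id: `proj_${input.name.toLowerCase()}`, ...input };
  }
}

function createProjectHandler(body: unknown, service: ProjectService) {
  // Handler only checks the payload shape and maps outcomes to responses.
  const input = body as CreateProjectInput;
  if (typeof input?.name !== "string" || typeof input?.workspaceId !== "string") {
    return { status: 400, body: { error: "invalid payload" } };
  }
  try {
    return { status: 201, body: service.create(input) };
  } catch (e) {
    return { status: 422, body: { error: (e as Error).message } };
  }
}
```

Keeping the empty-name rule in the service means any other entry point (queue consumer, CLI, test) enforces it too.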

Data Model

The core data model centers on workspaces. Every resource (project, API key, log) is scoped to a workspace.

Key Relationships

  • A user can be a member of multiple workspaces with different roles
  • Each workspace has exactly one owner
  • API keys can be workspace-wide or project-scoped
  • Logs always belong to a project (identified by projectId)
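The relationships above can be expressed as TypeScript shapes. Apart from `projectId`, the field and type names are assumptions about the schema, not the actual definitions.

```typescript
// Illustrative shapes for the documented relationships; field names other
// than projectId are assumptions.
type Role = "owner" | "admin" | "member";

interface Workspace { id: string; ownerId: string }            // exactly one owner
interface Membership { userId: string; workspaceId: string; role: Role } // user ↔ workspace, per-workspace role
interface Project { id: string; workspaceId: string }          // scoped to a workspace
interface ApiKey { id: string; workspaceId: string; projectId?: string } // projectId absent = workspace-wide
interface LogEntry { id: string; projectId: string; level: string; message: string } // always project-bound
```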

Authentication Flow

Orphnet Logging issues four token types. JWT access/refresh tokens handle user sessions, API keys handle programmatic access, and verification tokens back email-based flows.

Token Types

| Token              | Prefix    | Lifetime      | Purpose                                        |
|--------------------|-----------|---------------|------------------------------------------------|
| Access Token       | eyJ (JWT) | 15 minutes    | Authenticate user requests                     |
| Refresh Token      | rt_       | 7 days        | Obtain new access tokens                       |
| API Key            | sk_       | Until revoked | Programmatic access (client SDK)               |
| Verification Token | vt_       | 24 hours      | Email verification, password reset, magic links |
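The prefixes in the table make a token's type recognizable from its first characters. A minimal sketch, with a hypothetical helper name:

```typescript
// Classify a token by the documented prefixes. tokenType is illustrative,
// not an actual Orphnet Logging function.
function tokenType(token: string): string {
  if (token.startsWith("eyJ")) return "access";       // JWTs base64url-encode '{"', giving "eyJ"
  if (token.startsWith("rt_")) return "refresh";
  if (token.startsWith("sk_")) return "api-key";
  if (token.startsWith("vt_")) return "verification";
  return "unknown";
}
```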

Scope System

API keys carry scopes that control what operations they can perform:

| Scope      | Grants                             |
|------------|------------------------------------|
| logs:read  | Query logs                         |
| logs:write | Ingest logs (implies logs:read)    |
| admin      | Full workspace management          |
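A scope check honoring the documented implication (logs:write grants logs:read) could look like this; `hasScope` is a hypothetical helper, not the actual implementation.

```typescript
// Sketch of scope checking with the documented implication that
// logs:write also grants logs:read. Names are illustrative.
const IMPLIES: Record<string, string[]> = {
  "logs:write": ["logs:read"],
};

function hasScope(keyScopes: string[], required: string): boolean {
  const granted = new Set(keyScopes);
  for (const s of keyScopes) {
    for (const implied of IMPLIES[s] ?? []) granted.add(implied);
  }
  return granted.has(required);
}
```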

Queue-Based Ingestion

Log entries are never written synchronously during request handling. Instead, they are sent to a Cloudflare Queue for async processing:

  1. Handler validates the log payload
  2. Payload is sent to QUEUE_LOGGING via c.env.QUEUE_LOGGING.send(payload)
  3. Response returns immediately with 202 Accepted
  4. Queue consumer writes to KV (for fast recent access) and D1 (for queryable storage)
  5. Periodic archival exports to R2 as NDJSON files

This decoupling ensures log ingestion never blocks application responses.
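Steps 2 and 3 can be sketched as follows. `QUEUE_LOGGING` matches the binding named above; the `Env` and payload shapes, and the handler name, are assumptions.

```typescript
// Sketch of enqueue-then-202, assuming a queue binding with a send() method
// like Cloudflare Queues exposes. Shapes and names are illustrative.
interface Env {
  QUEUE_LOGGING: { send(message: unknown): Promise<void> };
}

async function ingestLog(
  payload: { projectId: string; message: string },
  env: Env,
): Promise<{ status: number }> {
  // Step 1 (payload validation) elided for brevity.
  await env.QUEUE_LOGGING.send(payload); // step 2: hand off to the queue
  return { status: 202 };                // step 3: respond before any storage write
}
```

The D1/KV writes and R2 archival (steps 4-5) happen later in the queue consumer, outside the request's critical path.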

Smart Query Routing

When querying logs, the LogQueryService automatically selects the best backend:

  • KV -- Used when the query window is within the last 24 hours. Fastest reads, but limited filtering.
  • D1 -- Used for queries beyond 24 hours or when complex filtering (by type, category, level) is needed. Full SQL query capabilities.
  • R2 -- Used for archive exports. Returns NDJSON streams for bulk data retrieval.

You can override automatic routing by passing "source": "d1" or "source": "kv" in query requests.
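The selection logic above can be sketched as a pure function. The thresholds mirror the text; the function name, query shape, and `archiveExport` flag are assumptions.

```typescript
// Sketch of backend selection per the documented rules: explicit override
// first, R2 for archive exports, KV inside 24h without filters, else D1.
type Source = "kv" | "d1" | "r2";

interface LogQuery {
  from: Date;                                              // start of query window
  filters?: { type?: string; category?: string; level?: string };
  source?: Source;                                         // explicit override, per the docs
  archiveExport?: boolean;                                 // assumed flag for bulk R2 exports
}

function selectBackend(q: LogQuery, now = new Date()): Source {
  if (q.source) return q.source;                           // explicit override wins
  if (q.archiveExport) return "r2";                        // NDJSON bulk retrieval
  const DAY_MS = 24 * 60 * 60 * 1000;
  const withinDay = now.getTime() - q.from.getTime() <= DAY_MS;
  const hasFilters = Object.values(q.filters ?? {}).some(Boolean);
  return withinDay && !hasFilters ? "kv" : "d1";           // KV is fast but filter-limited
}
```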
