Architecture
Orphnet Logging is built on Cloudflare Workers with a layered architecture that separates routing, business logic, and infrastructure concerns. This page explains the major components and how they interact.
Request Flow
Every API request passes through a consistent pipeline: CORS handling, request ID assignment, authentication, rate limiting, handler execution, and service/repository delegation.
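The pipeline above can be sketched as composed middleware. This is an illustrative sketch only, with hypothetical names; it is not the actual Orphnet routing code.

```typescript
// Minimal middleware-composition sketch (illustrative names, not the
// real Orphnet implementation).
type Ctx = { headers: Record<string, string>; requestId?: string };
type Handler = (ctx: Ctx) => string;
type Middleware = (next: Handler) => Handler;

let counter = 0;
const assignRequestId: Middleware = (next) => (ctx) => {
  // Tag every request with an ID before anything else runs.
  ctx.requestId = ctx.requestId ?? `req_${++counter}`;
  return next(ctx);
};

const authenticate: Middleware = (next) => (ctx) => {
  // Reject unauthenticated requests before the handler executes.
  if (!ctx.headers["authorization"]) throw new Error("401 Unauthorized");
  return next(ctx);
};

// Compose right-to-left so the first middleware listed runs first.
const compose = (...mws: Middleware[]): Middleware =>
  (next) => mws.reduceRight((acc, mw) => mw(acc), next);

const pipeline = compose(assignRequestId, authenticate);
const handler = pipeline((ctx) => `handled ${ctx.requestId}`);
```

The same ordering applies to the full pipeline: CORS and request ID assignment wrap authentication, which wraps rate limiting, which wraps the handler.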
Layered Architecture
| Layer | Responsibility | Location |
|---|---|---|
| Routes | HTTP method + path mapping, middleware composition | src/endpoints/ |
| Handlers | Request parsing, validation, response formatting | src/handlers/ |
| Services | Business logic, orchestration, error handling | src/services/ |
| Repositories | Data access, query building, storage abstraction | src/repositories/ |
| Infrastructure | D1 (SQL), KV (cache), R2 (archive), Queue | Cloudflare bindings |
No business logic lives in route files or handlers. Handlers validate input and format responses. Services contain all domain rules.
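The split can be illustrated with a minimal sketch. The names (`ProjectService`, `createProjectHandler`) are hypothetical; the point is where the domain rule lives.

```typescript
// Illustrative handler/service split (hypothetical names).
type CreateProjectInput = { name: string };

class ProjectService {
  create(input: CreateProjectInput): { id: string; name: string } {
    // Domain rule lives in the service, not the handler.
    if (input.name.trim().length === 0) throw new Error("name required");
    return { id: "proj_" + input.name.toLowerCase(), name: input.name };
  }
}

function createProjectHandler(body: unknown): { status: number; body: unknown } {
  // Handler: parse input, delegate to the service, format the response.
  const input = body as CreateProjectInput;
  try {
    return { status: 201, body: new ProjectService().create(input) };
  } catch (e) {
    return { status: 400, body: { error: (e as Error).message } };
  }
}
```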
Data Model
The core data model centers on workspaces. Every resource (project, API key, log) is scoped to a workspace.
Key Relationships
- A user can be a member of multiple workspaces with different roles
- Each workspace has exactly one owner
- API keys can be workspace-wide or project-scoped
- Logs always belong to a project (identified by `projectId`)
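The relationships above can be expressed as a type sketch. These interfaces are illustrative, not the actual schema.

```typescript
// Hypothetical type sketch of the workspace-centric data model.
type Role = "owner" | "admin" | "member";

interface Workspace { id: string; ownerId: string }            // exactly one owner
interface Membership { userId: string; workspaceId: string; role: Role }
interface ApiKey { id: string; workspaceId: string; projectId?: string } // project-scoped when set
interface LogEntry { id: string; projectId: string }           // always belongs to a project

// One user, different roles in different workspaces:
const memberships: Membership[] = [
  { userId: "u1", workspaceId: "w1", role: "owner" },
  { userId: "u1", workspaceId: "w2", role: "member" },
];
```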
Authentication Flow
Orphnet Logging supports four authentication methods. JWT access/refresh tokens handle user sessions, while API keys handle programmatic access.
Token Types
| Token | Prefix | Lifetime | Purpose |
|---|---|---|---|
| Access Token | eyJ (JWT) | 15 minutes | Authenticate user requests |
| Refresh Token | rt_ | 7 days | Obtain new access tokens |
| API Key | sk_ | Until revoked | Programmatic access (client SDK) |
| Verification Token | vt_ | 24 hours | Email verification, password reset, magic links |
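The prefixes in the table make token types distinguishable by inspection. A minimal sketch (the function name is illustrative):

```typescript
// Sketch: classify a token by its prefix, per the table above.
function tokenType(token: string): string {
  if (token.startsWith("eyJ")) return "access";       // JWTs start with base64 of '{"'
  if (token.startsWith("rt_")) return "refresh";
  if (token.startsWith("sk_")) return "api-key";
  if (token.startsWith("vt_")) return "verification";
  return "unknown";
}
```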
Scope System
API keys carry scopes that control what operations they can perform:
| Scope | Grants |
|---|---|
| logs:read | Query logs |
| logs:write | Ingest logs (implies logs:read) |
| admin | Full workspace management |
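A scope check honoring the implication rules in the table might look like this sketch (the helper and implication map are assumptions, not the actual implementation):

```typescript
// Sketch of scope checking with implications: logs:write implies
// logs:read, and admin grants everything. Hypothetical helper.
const IMPLIES: Record<string, string[]> = {
  "logs:write": ["logs:read"],
  "admin": ["logs:read", "logs:write"],
};

function hasScope(granted: string[], required: string): boolean {
  // A scope matches directly or via the implication table.
  return granted.some(
    (s) => s === required || (IMPLIES[s] ?? []).includes(required)
  );
}
```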
Queue-Based Ingestion
Log entries are never written synchronously during request handling. Instead, they are sent to a Cloudflare Queue for async processing:
- Handler validates the log payload
- Payload is sent to `QUEUE_LOGGING` via `c.env.QUEUE_LOGGING.send(payload)`
- Response returns immediately with `202 Accepted`
- Queue consumer writes to KV (for fast recent access) and D1 (for queryable storage)
- Periodic archival exports to R2 as NDJSON files
This decoupling ensures log ingestion never blocks application responses.
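The producer side of this flow can be sketched with the queue mocked out, so the shape is visible without the Workers runtime. The `QUEUE_LOGGING` binding name comes from the text; the handler signature and payload shape are illustrative.

```typescript
// Sketch of the ingest producer: validate, enqueue, acknowledge.
// (Queue interface mocked; not the actual Orphnet handler.)
interface Queue<T> { send(msg: T): Promise<void> }

async function ingestLog(
  env: { QUEUE_LOGGING: Queue<unknown> },
  payload: { projectId?: string; message?: string }
): Promise<{ status: number }> {
  // 1. Validate the log payload.
  if (!payload.projectId || !payload.message) return { status: 400 };
  // 2. Enqueue for async processing; the storage write never blocks this response.
  await env.QUEUE_LOGGING.send(payload);
  // 3. Acknowledge immediately with 202 Accepted.
  return { status: 202 };
}
```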
Smart Query Routing
When querying logs, the `LogQueryService` automatically selects the best backend:
- KV -- Used when the query window is within the last 24 hours. Fastest reads, but limited filtering.
- D1 -- Used for queries beyond 24 hours or when complex filtering (by type, category, level) is needed. Full SQL query capabilities.
- R2 -- Used for archive exports. Returns NDJSON streams for bulk data retrieval.
You can override automatic routing by passing `"source": "d1"` or `"source": "kv"` in query requests.
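The selection rule described above can be sketched as a small function. The 24-hour threshold and filter condition come from the text; the function shape and option names are assumptions, and the real `LogQueryService` may differ.

```typescript
// Sketch of the smart routing rule (illustrative, not the actual service).
type Backend = "kv" | "d1" | "r2";

function chooseBackend(opts: {
  fromMs: number;              // start of the query window (epoch ms)
  nowMs: number;
  hasComplexFilter: boolean;   // filtering by type, category, or level
  override?: Backend;          // explicit "source" in the request
}): Backend {
  if (opts.override) return opts.override;
  const DAY = 24 * 60 * 60 * 1000;
  // KV only serves recent, lightly filtered queries; everything else hits D1.
  if (!opts.hasComplexFilter && opts.nowMs - opts.fromMs <= DAY) return "kv";
  return "d1";
}
```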
Next Steps
- Authentication -- Detailed guide for all four auth methods
- Backends -- Storage backend reference
- Configuration -- Environment variables and bindings