# Phase 3: PostgreSQL Support - Context
**Gathered:** 2026-03-24
**Status:** Ready for planning
## Phase Boundary
Add PostgreSQL as an alternative database backend alongside SQLite. Users with PostgreSQL infrastructure can point DiunDashboard at a Postgres database via `DATABASE_URL` and the dashboard works identically to the SQLite deployment. Existing SQLite users upgrade without data loss.
## Implementation Decisions
### PostgreSQL driver interface
- **D-01:** Use `pgx/v5/stdlib` as the database/sql adapter — matches SQLiteStore's `*sql.DB` pattern, so PostgresStore gets the same constructor shape (`*sql.DB` in, a concrete type satisfying `Store` out)
- **D-02:** Do NOT use pgx native interface directly — keeping both stores on `database/sql` means the Store interface stays unchanged and `NewServer(store Store, ...)` works identically
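A minimal sketch of the constructor symmetry D-01/D-02 aim for (the `Store` interface is abbreviated to one method here; the real one in `store.go` has nine):

```go
package main

import (
	"database/sql"
	"fmt"
)

// Store is abbreviated to a single method for this sketch.
type Store interface {
	Close() error
}

// Both stores wrap *sql.DB, so NewServer(store Store, ...) is unchanged.
type SQLiteStore struct{ db *sql.DB }
type PostgresStore struct{ db *sql.DB }

func NewSQLiteStore(db *sql.DB) *SQLiteStore     { return &SQLiteStore{db: db} }
func NewPostgresStore(db *sql.DB) *PostgresStore { return &PostgresStore{db: db} }

func (s *SQLiteStore) Close() error   { return s.db.Close() }
func (s *PostgresStore) Close() error { return s.db.Close() }

func main() {
	// Compile-time confirmation that both constructors yield a Store.
	var _ Store = NewSQLiteStore(nil)
	var _ Store = NewPostgresStore(nil)
	fmt.Println("both constructors satisfy Store")
}
```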
### SQL dialect handling
- **D-03:** Each store implementation has its own raw SQL — no runtime dialect switching, no query builder, no shared SQL templates
- **D-04:** PostgreSQL-specific syntax differences handled in PostgresStore methods:
- `SERIAL` instead of `INTEGER PRIMARY KEY AUTOINCREMENT` for tags.id
- `$1, $2, $3` positional params instead of `?` placeholders
- `NOW()` or `CURRENT_TIMESTAMP` instead of `datetime('now')` for acknowledged_at
- `ON CONFLICT ... DO UPDATE SET` for UPSERTs: the clause itself is identical in both dialects (PostgreSQL 9.5+), so only the parameter style changes
- `INSERT ... ON CONFLICT` for tag assignments instead of SQLite's `INSERT OR REPLACE`
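To make the D-04 differences concrete, here is how two hypothetical queries might look side by side. Table and column names are illustrative only; the actual SQL to port lives in `sqlite_store.go`:

```go
package main

import "fmt"

// Acknowledge-update in both dialects: datetime('now') vs NOW(),
// and ? vs $1 placeholders.
const (
	ackSQLite   = `UPDATE images SET acknowledged_at = datetime('now') WHERE id = ?`
	ackPostgres = `UPDATE images SET acknowledged_at = NOW() WHERE id = $1`

	// UPSERT: the ON CONFLICT clause is the same in both dialects;
	// only the placeholder style differs.
	upsertSQLite   = `INSERT INTO tags (name) VALUES (?) ON CONFLICT (name) DO UPDATE SET name = excluded.name`
	upsertPostgres = `INSERT INTO tags (name) VALUES ($1) ON CONFLICT (name) DO UPDATE SET name = excluded.name`
)

func main() {
	fmt.Println(ackPostgres)
	fmt.Println(upsertPostgres)
}
```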
### Connection pooling
- **D-05:** PostgresStore does NOT use a mutex — PostgreSQL handles concurrent writes natively
- **D-06:** Override the `database/sql` pool defaults with values sized for a low-traffic self-hosted dashboard: `MaxOpenConns(25)`, `MaxIdleConns(5)`, `ConnMaxLifetime(5 * time.Minute)`
### Database selection logic (main.go)
- **D-07:** `DATABASE_URL` env var present → PostgreSQL; absent → SQLite with `DB_PATH` (already decided in STATE.md)
- **D-08:** No separate `DB_DRIVER` variable — the presence of `DATABASE_URL` is the switch
- **D-09:** Startup log clearly indicates which backend is active: `"Using PostgreSQL database"` vs `"Using SQLite database at {path}"`
### Migration structure
- **D-10:** Separate migration directories: `migrations/sqlite/` (exists) and `migrations/postgres/` (new)
- **D-11:** PostgreSQL baseline migration `0001_initial_schema.up.sql` creates the same 3 tables with PostgreSQL-native types
- **D-12:** `RunMigrations` becomes dialect-aware or split into `RunSQLiteMigrations`/`RunPostgresMigrations` — researcher should determine best approach
- **D-13:** PostgreSQL migrations embedded via separate `//go:embed migrations/postgres` directive
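If the dialect-parameter variant of D-12 is chosen, the core of it is a small mapping from dialect to embedded directory; the `//go:embed` directives and the `iofs`/`golang-migrate` wiring from `migrate.go` are elided in this sketch:

```go
package main

import "fmt"

// Dialect selects which embedded migration set to run (D-10/D-12).
type Dialect string

const (
	DialectSQLite   Dialect = "sqlite"
	DialectPostgres Dialect = "postgres"
)

// migrationSource maps a dialect to its embedded migration directory,
// i.e. the path that would be passed to iofs.New alongside the embed.FS.
func migrationSource(d Dialect) (string, error) {
	switch d {
	case DialectSQLite:
		return "migrations/sqlite", nil
	case DialectPostgres:
		return "migrations/postgres", nil
	default:
		return "", fmt.Errorf("unknown dialect %q", d)
	}
}

func main() {
	src, err := migrationSource(DialectPostgres)
	if err != nil {
		panic(err)
	}
	fmt.Println(src) // prints migrations/postgres
}
```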
### Docker Compose integration
- **D-14:** Use Docker Compose profiles — `docker compose --profile postgres up` activates the postgres service
- **D-15:** Default compose (no profile) remains SQLite-only for simple deploys
- **D-16:** Compose file includes a `postgres` service with health check, and the app service gets `DATABASE_URL` when the profile is active
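A compose fragment for D-14 through D-16 might look like the following. Service names, image tag, and credentials are illustrative, not taken from the real `compose.yml`:

```yaml
# Sketch only — names, image tag, and credentials are placeholders.
services:
  postgres:
    image: postgres:16-alpine
    profiles: ["postgres"]   # D-14: started only with --profile postgres
    environment:
      POSTGRES_USER: diun
      POSTGRES_PASSWORD: diun
      POSTGRES_DB: diun
    healthcheck:             # D-16: lets the app depend_on service_healthy
      test: ["CMD-SHELL", "pg_isready -U diun -d diun"]
      interval: 5s
      timeout: 3s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

The fiddly part of D-16 is injecting `DATABASE_URL` into the app only when the profile is active, since profiles gate services, not individual environment entries. Two common options: a profile-scoped app variant, or an override file (e.g. a hypothetical `compose.postgres.yml` applied with `docker compose -f compose.yml -f compose.postgres.yml up`).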
### Testing strategy
- **D-17:** PostgresStore integration tests use a `//go:build postgres` build tag — they only run when a PostgreSQL instance is available
- **D-18:** CI can optionally run `-tags postgres` with a postgres service container; SQLite tests always run
- **D-19:** Test helper `NewTestPostgresServer()` creates a test database and runs migrations, similar to `NewTestServer()` for SQLite
### Claude's Discretion
- Exact PostgreSQL connection pool tuning beyond the defaults in D-06
- Whether to split RunMigrations into two functions or use a dialect parameter
- Error message formatting for PostgreSQL connection failures
- Whether to add a health check endpoint that verifies database connectivity
## Canonical References
**Downstream agents MUST read these before planning or implementing.**
### Store interface and patterns
- `pkg/diunwebhook/store.go` — Store interface definition (9 methods that PostgresStore must implement)
- `pkg/diunwebhook/sqlite_store.go` — Reference implementation with exact SQL operations to port
- `pkg/diunwebhook/migrate.go` — Current migration runner (SQLite-only, needs PostgreSQL support)
### Schema
- `pkg/diunwebhook/migrations/sqlite/0001_initial_schema.up.sql` — Baseline schema to translate to PostgreSQL dialect
### Wiring
- `cmd/diunwebhook/main.go` — Current startup wiring (SQLite-only, needs DATABASE_URL branching)
- `pkg/diunwebhook/export_test.go` — Test server helpers (pattern for NewTestPostgresServer)
### Deployment
- `Dockerfile` — Current build (may need postgres client libs or build tag)
- `compose.yml` — Production compose (needs postgres profile)
- `compose.dev.yml` — Dev compose (needs postgres profile for local dev)
## Existing Code Insights
### Reusable Assets
- `Store` interface in `store.go`: PostgresStore implements the same 9 methods — no handler changes needed
- `SQLiteStore` in `sqlite_store.go`: Reference for all SQL operations — port each method to PostgreSQL dialect
- `RunMigrations` in `migrate.go`: Pattern for migration runner with `embed.FS` + `iofs` + `golang-migrate`
- `NewTestServer()` in `export_test.go`: Pattern for test helper — clone for PostgreSQL variant
### Established Patterns
- `database/sql` as the DB abstraction layer — PostgresStore follows the same pattern
- `sync.Mutex` for SQLite write serialization — NOT needed for PostgreSQL (native concurrent writes)
- `//go:embed` for migration files — same pattern for `migrations/postgres/`
- Constructor returns concrete type implementing Store: `NewSQLiteStore(*sql.DB) *SQLiteStore` → `NewPostgresStore(*sql.DB) *PostgresStore`
### Integration Points
- `main.go` line 24: `sql.Open("sqlite", dbPath)` — add conditional for `sql.Open("pgx", databaseURL)`
- `main.go` line 29: `diun.RunMigrations(db)` — needs to call the right migration runner
- `main.go` line 33: `diun.NewSQLiteStore(db)` — needs to call `diun.NewPostgresStore(db)` when using PostgreSQL
- `Dockerfile` Stage 2: `pgx/v5/stdlib` is pure Go, so `CGO_ENABLED=0` can remain — verify no new dependency reintroduces CGO
## Specific Ideas
No specific requirements — open to standard approaches. The core constraint is functional parity: every operation that works on SQLite must work identically on PostgreSQL.
## Deferred Ideas
None — discussion stayed within phase scope.
---
*Phase: 03-postgresql-support*
*Context gathered: 2026-03-24 via auto mode*