Compare commits


24 Commits

e35b4f882d test(02-02): rewrite all tests to use per-test in-memory databases via NewTestServer
CI / build-test (push): successful in 1m42s
- Remove TestMain (no longer needed; each test is isolated)
- Replace all diun.UpdatesReset() with diun.NewTestServer() per test
- Replace all diun.SetWebhookSecret/ResetWebhookSecret with NewTestServerWithSecret
- Replace all diun.WebhookHandler etc with srv.WebhookHandler (method calls)
- Replace diun.UpdateEvent with srv.TestUpsertEvent
- Replace diun.GetUpdatesMap with srv.TestGetUpdatesMap
- Update helper functions postTag/postTagAndGetID to accept *diun.Server parameter
- Change t.Fatalf to t.Errorf inside goroutine in TestConcurrentUpdateEvent
- Add error check on second TestUpsertEvent in TestDismissHandler_ReappearsAfterNewWebhook
- All 32 tests pass with zero failures
2026-03-23 22:05:09 +01:00
78543d79e9 feat(02-02): convert handlers to Server struct methods, remove globals
- Add Server struct with store Store and webhookSecret fields
- Add NewServer constructor
- Convert all 6 handler functions to methods on *Server
- Replace all inline SQL with s.store.X() calls
- Remove package-level globals db, mu, webhookSecret
- Remove InitDB, SetWebhookSecret, UpdateEvent, GetUpdates functions
- Update export_test.go: replace old helpers with NewTestServer, NewTestServerWithSecret, TestUpsertEvent, TestGetUpdatesMap
- Update main.go: sql.Open -> RunMigrations -> NewSQLiteStore -> NewServer -> routes
2026-03-23 22:02:53 +01:00
50805b103f docs(02-01): complete Store interface and migration infrastructure plan
- 02-01-SUMMARY.md: Store interface + SQLiteStore + golang-migrate v4.19.1
- STATE.md: advanced to plan 2 of 2, recorded decisions and metrics
- ROADMAP.md: phase 02 progress updated (1/2 summaries)
- REQUIREMENTS.md: REFAC-01 and REFAC-03 marked complete
2026-03-23 21:59:41 +01:00
6506d93eea feat(02-01): add migration infrastructure with golang-migrate and embedded SQL
- RunMigrations applies versioned SQL files via golang-migrate + embed.FS (iofs)
- ErrNoChange handled correctly - not treated as failure
- Migration 0001 creates full current schema with CREATE TABLE IF NOT EXISTS
- All three tables (updates, tags, tag_assignments) with acknowledged_at and ON DELETE CASCADE
- Uses database/sqlite sub-package (modernc.org/sqlite, no CGO)
- go mod tidy applied after adding dependencies
2026-03-23 21:56:34 +01:00
57bf3bdfe5 feat(02-01): add Store interface and SQLiteStore implementation
- Store interface with 9 methods covering all persistence operations
- SQLiteStore implements all 9 methods with exact SQL from current handlers
- NewSQLiteStore sets MaxOpenConns(1) and PRAGMA foreign_keys = ON
- UpsertEvent uses ON CONFLICT DO UPDATE with acknowledged_at reset to NULL
- AssignTag uses INSERT OR REPLACE for tag_assignments table
- golang-migrate v4.19.1 dependency added to go.mod
2026-03-23 21:53:05 +01:00
12cf34ce57 docs(02-backend-refactor): create phase plan 2026-03-23 21:46:57 +01:00
e72e1d1bea docs(phase-02): research backend refactor phase 2026-03-23 21:40:16 +01:00
fcc66b77e9 docs(phase-01): evolve PROJECT.md after phase completion 2026-03-23 21:30:12 +01:00
99813ee5a9 docs(phase-01): complete phase execution 2026-03-23 21:29:19 +01:00
03c3d5d6d7 docs(01-02): complete body-size-limits and test-hardening plan
- 01-02-SUMMARY.md: plan completion summary with deviations
- STATE.md: advanced plan position, added decisions, updated metrics
- ROADMAP.md: phase 01 marked complete (2/2 plans)
- REQUIREMENTS.md: DATA-03 and DATA-04 marked complete
2026-03-23 21:26:02 +01:00
7bdfc5ffec fix(01-02): replace silent test setup returns with t.Fatalf at 6 sites
- TestUpdateEventAndGetUpdates: UpdateEvent error now fails test
- TestUpdatesHandler: UpdateEvent error now fails test
- TestConcurrentUpdateEvent goroutine: UpdateEvent error now fails test
- TestDismissHandler_Success: UpdateEvent error now fails test
- TestDismissHandler_SlashInImageName: UpdateEvent error now fails test
- TestDismissHandler_ReappearsAfterNewWebhook: bare UpdateEvent call now checked
All 6 silent-return sites replaced; test failures are always visible to CI
2026-03-23 21:24:08 +01:00
98dfd76e15 feat(01-02): add request body size limits (1MB) to webhook and tag handlers
- Add maxBodyBytes constant (1 << 20 = 1 MB)
- Add errors import to production file
- Apply http.MaxBytesReader + errors.As(err, *http.MaxBytesError) pattern in:
  WebhookHandler, TagsHandler POST, TagAssignmentHandler PUT and DELETE
- Return HTTP 413 RequestEntityTooLarge when body exceeds limit
- Fix oversized body test strategy: use JSON prefix so decoder reads past limit
  (Rule 1 deviation: all-x body fails at byte 1 before MaxBytesReader triggers)
2026-03-23 21:20:52 +01:00
311e91d3ff test(01-02): add failing tests for oversized body (413) - RED
- TestWebhookHandler_OversizedBody: POST /webhook with >1MB body expects 413
- TestTagsHandler_OversizedBody: POST /api/tags with >1MB body expects 413
- TestTagAssignmentHandler_OversizedBody: PUT /api/tag-assignments with >1MB body expects 413
2026-03-23 21:18:39 +01:00
fb16d0db61 docs(01-01): complete UPSERT + FK enforcement plan
- Create 01-01-SUMMARY.md documenting both bug fixes and test addition
- Advance plan counter to 2/2 in STATE.md
- Record decisions and metrics in STATE.md
- Update ROADMAP.md plan progress (1/2 summaries)
- Mark requirements DATA-01 and DATA-02 complete
2026-03-23 21:16:49 +01:00
e2d388cfd4 test(01-01): add TestUpdateEvent_PreservesTagOnUpsert regression test
- Verifies tag survives a second UpdateEvent() for the same image (DATA-01)
- Verifies acknowledged_at is reset to NULL by the new event
- Verifies event fields (Status) are updated by the new event
2026-03-23 21:14:21 +01:00
7edbaad362 fix(01-01): replace INSERT OR REPLACE with UPSERT and enable FK enforcement
- Add PRAGMA foreign_keys = ON in InitDB() after SetMaxOpenConns(1)
- Replace INSERT OR REPLACE INTO updates with named-column INSERT ON CONFLICT UPSERT
- UPSERT preserves tag_assignments rows on re-insert (fixes DATA-01)
- FK enforcement makes ON DELETE CASCADE fire on tag deletion (fixes DATA-02)
2026-03-23 21:13:43 +01:00
b89e607493 docs(01-data-integrity): create phase 1 plans 2026-03-23 20:04:57 +01:00
19d757d060 docs(phase-01): research data integrity phase
Investigates SQLite UPSERT semantics, FK enforcement per-connection
requirement, http.MaxBytesReader behavior, and t.Fatal test patterns.
All four DATA-0x bugs confirmed with authoritative sources and line
numbers. No open blockers; ready for planning.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 19:58:48 +01:00
112c17a701 docs: create roadmap (4 phases) 2026-03-23 19:51:36 +01:00
1f5df8c36a docs: define v1 requirements 2026-03-23 19:48:53 +01:00
e4d59d4788 docs: complete project research 2026-03-23 19:45:06 +01:00
5b273e17bd chore: add project config 2026-03-23 19:37:22 +01:00
256a1ddfb7 docs: initialize project 2026-03-23 19:35:56 +01:00
96c4012e2f chore: add GSD codebase map with 7 analysis documents
Parallel analysis of tech stack, architecture, structure,
conventions, testing patterns, integrations, and concerns.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 19:13:23 +01:00
39 changed files with 7248 additions and 332 deletions

.planning/PROJECT.md

@@ -0,0 +1,91 @@
# DiunDashboard
## What This Is
A web-based dashboard that receives DIUN webhook events and presents a persistent, visual overview of which Docker services have available updates. Built for self-hosters who use DIUN to monitor container images but need something better than dismissable push notifications — a place that nags you until you actually update.
## Core Value
Reliable, persistent visibility into which services need updating — data never disappears, and the dashboard is the one place you trust to show the full picture.
## Requirements
### Validated
- ✓ Receive and store DIUN webhook events — existing
- ✓ Display all tracked images with update status — existing
- ✓ Acknowledge/dismiss individual updates — existing
- ✓ Manual tag/group organization via drag-and-drop — existing
- ✓ Tag CRUD (create, delete, assign, unassign) — existing
- ✓ Optional webhook authentication via WEBHOOK_SECRET — existing
- ✓ Docker deployment with volume-mounted SQLite — existing
- ✓ Auto-polling for new updates (5s interval) — existing
- ✓ Service icon detection from image names — existing
- ✓ SQLite foreign key enforcement (PRAGMA foreign_keys = ON) — Phase 1
- ✓ Proper UPSERT preserving tag assignments on re-webhook — Phase 1
- ✓ Request body size limits (1MB) on webhook and API endpoints — Phase 1
- ✓ Test error handling uses t.Fatalf (no silent failures) — Phase 1
### Active
- [ ] Add PostgreSQL support alongside SQLite (dual DB, user chooses)
- [ ] Bulk acknowledge (dismiss all, dismiss by group)
- [ ] Filtering and search across updates
- [ ] In-dashboard new-update indicators (badge/counter/toast)
- [ ] Data persistence resilience (survive container restarts reliably)
### Out of Scope
- DIUN bundling / unified deployment — future milestone, requires deeper DIUN integration research
- Auto-grouping by Docker stack/compose project — future milestone, requires Docker socket or DIUN metadata research
- Visual DIUN config management UI — future milestone, depends on DIUN bundling
- Notification channel management UI — DIUN already handles this; visual config deferred to DIUN integration milestone
- OAuth / user accounts — single-user self-hosted tool, auth beyond webhook secret not needed now
- Mobile app — web-first, responsive design sufficient
## Context
- User hosts services on a VPS using Coolify (Docker-based PaaS)
- DIUN monitors container images for new versions and sends webhooks
- Previous approach (Gotify push notifications) failed because notifications were easy to dismiss and forget
- Dashboard was a daily driver but data loss (likely volume misconfiguration + SQLite bugs) eroded trust
- Coolify doesn't show available updates — this fills that gap
- Target audience: self-hosters using DIUN, not limited to Coolify users
- Existing codebase: Go 1.26 backend, React 19 + Tailwind + shadcn/ui frontend, SQLite via modernc.org/sqlite
## Constraints
- **Tech stack**: Go backend + React frontend — established, no migration
- **Database**: Must support both SQLite (simple deploys) and PostgreSQL (robust deploys)
- **Deployment**: Docker-first, single-container with optional compose
- **No CGO**: Pure Go SQLite driver (modernc.org/sqlite) — must maintain this for easy cross-compilation
- **Backward compatible**: Existing users with SQLite databases should be able to upgrade without data loss
## Key Decisions
| Decision | Rationale | Outcome |
|----------|-----------|---------|
| Dual DB (SQLite + PostgreSQL) | SQLite is fine for simple setups, Postgres for users who want robustness | — Pending |
| Fix SQLite bugs before adding features | Data trust is the #1 priority; features on a broken foundation waste effort | — Pending |
| Defer auto-grouping to future milestone | Requires research into Docker socket / DIUN metadata; don't want to slow down stability fixes | — Pending |
| Defer DIUN bundling to future milestone | Significant scope; need stability and UX improvements first | — Pending |
## Evolution
This document evolves at phase transitions and milestone boundaries.
**After each phase transition** (via `/gsd:transition`):
1. Requirements invalidated? → Move to Out of Scope with reason
2. Requirements validated? → Move to Validated with phase reference
3. New requirements emerged? → Add to Active
4. Decisions to log? → Add to Key Decisions
5. "What This Is" still accurate? → Update if drifted
**After each milestone** (via `/gsd:complete-milestone`):
1. Full review of all sections
2. Core Value check — still the right priority?
3. Audit Out of Scope — reasons still valid?
4. Update Context with current state
---
*Last updated: 2026-03-23 after Phase 1 completion*

.planning/REQUIREMENTS.md

@@ -0,0 +1,124 @@
# Requirements: DiunDashboard
**Defined:** 2026-03-23
**Core Value:** Reliable, persistent visibility into which services need updating — data never disappears, and the dashboard is the one place you trust to show the full picture.
## v1 Requirements
Requirements for this milestone. Each maps to roadmap phases.
### Data Integrity
- [x] **DATA-01**: Webhook events use proper UPSERT (ON CONFLICT DO UPDATE) instead of INSERT OR REPLACE, preserving tag assignments when an image receives a new event
- [x] **DATA-02**: SQLite foreign key enforcement is enabled (PRAGMA foreign_keys = ON) so tag deletion properly cascades to tag assignments
- [x] **DATA-03**: Webhook and API endpoints enforce request body size limits (e.g., 1MB) to prevent OOM from oversized payloads
- [x] **DATA-04**: Test error handling uses t.Fatal instead of silent returns, so test failures are never swallowed
### Backend Refactor
- [x] **REFAC-01**: Database operations are behind a Store interface with separate SQLite and PostgreSQL implementations
- [ ] **REFAC-02**: Package-level global state (db, mu, webhookSecret) is replaced with a Server struct that holds dependencies
- [x] **REFAC-03**: Schema migrations use golang-migrate with separate migration directories per dialect (sqlite/, postgres/)
### Database
- [ ] **DB-01**: PostgreSQL is supported as an alternative to SQLite via pgx v5 driver
- [ ] **DB-02**: Database backend is selected via DATABASE_URL env var (present = PostgreSQL, absent = SQLite with DB_PATH)
- [ ] **DB-03**: Existing SQLite users can upgrade without data loss (baseline migration represents current schema)
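The DB-02 selection rule is simple enough to sketch; the "diun.db" fallback default below is an assumption for illustration, not the project's actual default:

```go
package main

import (
	"fmt"
	"os"
)

// selectBackend implements DB-02's rule: DATABASE_URL present selects
// PostgreSQL; absent falls back to SQLite at DB_PATH. getenv is injected
// so the logic is testable without touching the process environment.
func selectBackend(getenv func(string) string) (driver, dsn string) {
	if url := getenv("DATABASE_URL"); url != "" {
		return "postgres", url
	}
	path := getenv("DB_PATH")
	if path == "" {
		path = "diun.db" // assumed default, for illustration only
	}
	return "sqlite", path
}

func main() {
	driver, dsn := selectBackend(os.Getenv)
	fmt.Println(driver, dsn)
}
```

No separate DB_DRIVER variable is needed; presence of the URL is the switch, matching the decision logged in STATE.md.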
### Bulk Actions
- [ ] **BULK-01**: User can acknowledge all pending updates at once with a single action
- [ ] **BULK-02**: User can acknowledge all pending updates within a specific tag/group
### Search & Filter
- [ ] **SRCH-01**: User can search updates by image name (text search)
- [ ] **SRCH-02**: User can filter updates by status (pending vs acknowledged)
- [ ] **SRCH-03**: User can filter updates by tag/group
- [ ] **SRCH-04**: User can sort updates by date, image name, or registry
### Update Indicators
- [ ] **INDIC-01**: Dashboard shows a badge/counter of pending (unacknowledged) updates
- [ ] **INDIC-02**: Browser tab title includes pending update count (e.g., "DiunDash (3)")
- [ ] **INDIC-03**: In-page toast notification appears when new updates arrive during polling
- [ ] **INDIC-04**: Updates that arrived since the user's last visit are visually highlighted
### Accessibility & Theme
- [ ] **A11Y-01**: Light/dark theme toggle with system preference detection (prefers-color-scheme)
- [ ] **A11Y-02**: Drag handle for tag reordering is always visible (not hover-only)
## v2 Requirements
Deferred to future milestone. Tracked but not in current roadmap.
### Auto-Grouping
- **GROUP-01**: Images are automatically grouped by Docker stack/compose project
- **GROUP-02**: Auto-grouping source is configurable (Docker socket, DIUN metadata, manual)
### DIUN Integration
- **DIUN-01**: DIUN and dashboard deploy as a single stack
- **DIUN-02**: Visual UI for managing DIUN notification channels
- **DIUN-03**: Visual UI for managing DIUN watched images
### Additional UX
- **UX-01**: Data retention with configurable TTL for acknowledged entries
- **UX-02**: Alternative tag assignment via dropdown (non-drag method)
- **UX-03**: Keyboard shortcuts for common actions
- **UX-04**: Browser notification API for background tab alerts
- **UX-05**: Filter by registry
## Out of Scope
| Feature | Reason |
|---------|--------|
| Auto-triggering image pulls or container restarts | Dashboard is a viewer, not an orchestrator; Docker socket access is a security risk |
| Notification channel management UI | DIUN already handles this; duplicating creates config drift |
| OAuth / multi-user accounts | Single-user self-hosted tool; reverse proxy auth is sufficient |
| Real-time WebSocket / SSE | 5s polling is adequate for low-frequency update signals |
| Mobile-native / PWA | Responsive web design is sufficient for internal tool |
| Changelog or CVE lookups per image | Requires external API integrations; different product scope |
| Undo for dismiss actions | Next DIUN scan recovers dismissed items; state complexity not justified |
## Traceability
Which phases cover which requirements. Updated during roadmap creation.
| Requirement | Phase | Status |
|-------------|-------|--------|
| DATA-01 | Phase 1 | Complete |
| DATA-02 | Phase 1 | Complete |
| DATA-03 | Phase 1 | Complete |
| DATA-04 | Phase 1 | Complete |
| REFAC-01 | Phase 2 | Complete |
| REFAC-02 | Phase 2 | Pending |
| REFAC-03 | Phase 2 | Complete |
| DB-01 | Phase 3 | Pending |
| DB-02 | Phase 3 | Pending |
| DB-03 | Phase 3 | Pending |
| BULK-01 | Phase 4 | Pending |
| BULK-02 | Phase 4 | Pending |
| SRCH-01 | Phase 4 | Pending |
| SRCH-02 | Phase 4 | Pending |
| SRCH-03 | Phase 4 | Pending |
| SRCH-04 | Phase 4 | Pending |
| INDIC-01 | Phase 4 | Pending |
| INDIC-02 | Phase 4 | Pending |
| INDIC-03 | Phase 4 | Pending |
| INDIC-04 | Phase 4 | Pending |
| A11Y-01 | Phase 4 | Pending |
| A11Y-02 | Phase 4 | Pending |
**Coverage:**
- v1 requirements: 22 total
- Mapped to phases: 22
- Unmapped: 0
---
*Requirements defined: 2026-03-23*
*Last updated: 2026-03-23 after roadmap creation*

.planning/ROADMAP.md

@@ -0,0 +1,88 @@
# Roadmap: DiunDashboard
## Overview
This milestone restores data trust and then extends the foundation. Phase 1 fixes active bugs that silently corrupt user data today. Phase 2 refactors the backend into a testable, interface-driven structure — the structural prerequisite for everything that follows. Phase 3 adds PostgreSQL as a first-class alternative to SQLite. Phase 4 delivers the UX features that make the dashboard genuinely usable at scale: bulk dismiss, search/filter, new-update indicators, and accessibility fixes.
## Phases
**Phase Numbering:**
- Integer phases (1, 2, 3): Planned milestone work
- Decimal phases (2.1, 2.2): Urgent insertions (marked with INSERTED)
Decimal phases appear between their surrounding integers in numeric order.
- [x] **Phase 1: Data Integrity** - Fix active SQLite bugs that silently delete tag assignments and suppress test failures
- [ ] **Phase 2: Backend Refactor** - Replace global state with Store interface + Server struct; prerequisite for PostgreSQL
- [ ] **Phase 3: PostgreSQL Support** - Add PostgreSQL as an alternative backend via DATABASE_URL, with versioned migrations
- [ ] **Phase 4: UX Improvements** - Bulk dismiss, search/filter, new-update indicators, and accessibility fixes
## Phase Details
### Phase 1: Data Integrity
**Goal**: Users can trust that their data is never silently corrupted — tag assignments survive new DIUN events, foreign key constraints are enforced, and test failures are always visible
**Depends on**: Nothing (first phase)
**Requirements**: DATA-01, DATA-02, DATA-03, DATA-04
**Success Criteria** (what must be TRUE):
1. A second DIUN event for the same image does not remove its tag assignment
2. Deleting a tag removes all associated tag assignments (foreign key cascade enforced)
3. An oversized webhook payload is rejected with a 413 response, not processed silently
4. A failing assertion in a test causes the test run to report failure, not pass silently
**Plans**: 2 plans
Plans:
- [x] 01-01-PLAN.md — Fix INSERT OR REPLACE → UPSERT in UpdateEvent(); enable PRAGMA foreign_keys = ON in InitDB(); add regression test
- [x] 01-02-PLAN.md — Add http.MaxBytesReader body limits to 3 handlers (413 on oversized); replace 6 silent test returns with t.Fatalf
### Phase 2: Backend Refactor
**Goal**: The codebase has a clean Store interface and Server struct so the SQLite implementation can be swapped without touching HTTP handlers, enabling parallel test execution and PostgreSQL support
**Depends on**: Phase 1
**Requirements**: REFAC-01, REFAC-02, REFAC-03
**Success Criteria** (what must be TRUE):
1. All existing tests pass with zero behavior change after the refactor
2. HTTP handlers contain no SQL — all persistence goes through named Store methods
3. Package-level global variables (db, mu, webhookSecret) no longer exist
4. Schema changes are applied via versioned migration files, not ad-hoc DDL in application code
**Plans**: 2 plans
Plans:
- [x] 02-01-PLAN.md — Create Store interface (9 methods), SQLiteStore implementation, golang-migrate migration infrastructure with embedded SQL files
- [ ] 02-02-PLAN.md — Convert handlers to Server struct methods, remove globals, rewrite tests for per-test isolated databases, update main.go wiring
### Phase 3: PostgreSQL Support
**Goal**: Users running PostgreSQL infrastructure can point DiunDashboard at a Postgres database via DATABASE_URL and the dashboard works identically to the SQLite deployment
**Depends on**: Phase 2
**Requirements**: DB-01, DB-02, DB-03
**Success Criteria** (what must be TRUE):
1. Setting DATABASE_URL starts the app using PostgreSQL; omitting it falls back to SQLite with DB_PATH
2. A fresh PostgreSQL deployment receives all schema tables via automatic migration on startup
3. An existing SQLite user can upgrade to the new binary without any data loss or manual schema changes
4. The app can be run with Docker Compose using an optional postgres service profile
**Plans**: TBD
**UI hint**: no
### Phase 4: UX Improvements
**Goal**: Users can manage a large list of updates efficiently — dismissing many at once, finding specific images quickly, and seeing new arrivals without manual refreshes
**Depends on**: Phase 2
**Requirements**: BULK-01, BULK-02, SRCH-01, SRCH-02, SRCH-03, SRCH-04, INDIC-01, INDIC-02, INDIC-03, INDIC-04, A11Y-01, A11Y-02
**Success Criteria** (what must be TRUE):
1. User can dismiss all pending updates with a single button click
2. User can dismiss all pending updates within a specific tag group with a single action
3. User can search by image name and filter by status, tag, and sort order without a page reload
4. A badge/counter showing pending update count is always visible; the browser tab title reflects it (e.g., "DiunDash (3)")
5. New updates arriving during active polling trigger a visible in-page toast, and updates seen for the first time since the user's last visit are visually highlighted
6. The light/dark theme toggle is available and respects system preference; the drag handle for tag reordering is always visible without hover
**Plans**: TBD
**UI hint**: yes
## Progress
**Execution Order:**
Phases execute in numeric order: 1 → 2 → 3 → 4
| Phase | Plans Complete | Status | Completed |
|-------|----------------|--------|-----------|
| 1. Data Integrity | 2/2 | Complete | 2026-03-23 |
| 2. Backend Refactor | 1/2 | In progress | - |
| 3. PostgreSQL Support | 0/? | Not started | - |
| 4. UX Improvements | 0/? | Not started | - |

.planning/STATE.md

@@ -0,0 +1,84 @@
---
gsd_state_version: 1.0
milestone: v1.0
milestone_name: milestone
status: Ready to execute
stopped_at: Completed 02-01-PLAN.md (Store interface, SQLiteStore, migration infrastructure)
last_updated: "2026-03-23T20:59:13.329Z"
progress:
total_phases: 4
completed_phases: 1
total_plans: 4
completed_plans: 3
---
# Project State
## Project Reference
See: .planning/PROJECT.md (updated 2026-03-23)
**Core value:** Reliable, persistent visibility into which services need updating — data never disappears, and the dashboard is the one place you trust to show the full picture.
**Current focus:** Phase 02 — backend-refactor
## Current Position
Phase: 02 (backend-refactor) — EXECUTING
Plan: 2 of 2
## Performance Metrics
**Velocity:**
- Total plans completed: 3
- Average duration: —
- Total execution time: —
**By Phase:**
| Phase | Duration | Tasks | Files |
|-------|----------|-------|-------|
| Phase 01 P01 | 2 | 2 tasks | 2 files |
| Phase 01-data-integrity P02 | 7 | 2 tasks | 2 files |
| Phase 02-backend-refactor P01 | 7min | 2 tasks | 7 files |
**Recent Trend:**
- Last 5 plans: —
- Trend: —
*Updated after each plan completion*
## Accumulated Context
### Decisions
Decisions are logged in PROJECT.md Key Decisions table.
Recent decisions affecting current work:
- Fix SQLite bugs before any other work — data trust is the #1 priority; bug-fix tests become the regression suite for the refactor
- Backend refactor must be behavior-neutral — all existing tests must pass before PostgreSQL is introduced
- No ORM or query builder — raw SQL per store implementation; 8 operations across 3 tables is too small to justify a dependency
- `DATABASE_URL` present activates PostgreSQL; absent falls back to SQLite with `DB_PATH` — no separate `DB_DRIVER` variable
- [Phase 01]: Use named-column UPSERT (ON CONFLICT DO UPDATE) to preserve tag_assignments child rows on re-insert
- [Phase 01]: Enable PRAGMA foreign_keys = ON in InitDB() before DDL to activate ON DELETE CASCADE for tag deletion
- [Phase 01-data-integrity]: Use MaxBytesReader + errors.As(*http.MaxBytesError) per-handler (not middleware) for request body size limiting — consistent with no-middleware architecture
- [Phase 01-data-integrity]: Oversized body tests need valid JSON prefix so decoder reads past 1MB limit; all-x bytes fail at byte 1 before MaxBytesReader triggers
- [Phase 02-backend-refactor]: Store interface with 9 methods is the persistence abstraction; SQLiteStore holds *sql.DB and sync.Mutex as struct fields (not package globals)
- [Phase 02-backend-refactor]: golang-migrate v4.19.1 database/sqlite sub-package confirmed to use modernc.org/sqlite (no CGO); single 0001 baseline migration uses CREATE TABLE IF NOT EXISTS for backward compatibility
### Pending Todos
None yet.
### Blockers/Concerns
- Phase 3: Verify `pgx/v5/stdlib` import path against pkg.go.dev before writing PostgreSQL query strings
- Phase 3: Re-confirm `golang-migrate` v4.19.1 `database/sqlite` sub-package uses `modernc.org/sqlite` (not `mattn/go-sqlite3`) at implementation time
## Session Continuity
Last session: 2026-03-23T20:59:13.327Z
Stopped at: Completed 02-01-PLAN.md (Store interface, SQLiteStore, migration infrastructure)
Resume file: None


@@ -0,0 +1,165 @@
# Architecture
**Analysis Date:** 2026-03-23
## Pattern Overview
**Overall:** Monolithic Go HTTP server with embedded React SPA frontend
**Key Characteristics:**
- Single Go binary serves both the JSON API and the static frontend assets
- All backend logic lives in one library package (`pkg/diunwebhook/`)
- SQLite database for persistence (pure-Go driver, no CGO)
- Frontend is a standalone React SPA that communicates via REST polling
- No middleware framework -- uses `net/http` standard library directly
## Layers
**HTTP Layer (Handlers):**
- Purpose: Accept HTTP requests, validate input, delegate to storage functions, return JSON responses
- Location: `pkg/diunwebhook/diunwebhook.go` (functions: `WebhookHandler`, `UpdatesHandler`, `DismissHandler`, `TagsHandler`, `TagByIDHandler`, `TagAssignmentHandler`)
- Contains: Request parsing, method checks, JSON encoding/decoding, HTTP status responses
- Depends on: Storage layer (package-level `db` and `mu` variables)
- Used by: Route registration in `cmd/diunwebhook/main.go`
**Storage Layer (SQLite):**
- Purpose: Persist and query DIUN events, tags, and tag assignments
- Location: `pkg/diunwebhook/diunwebhook.go` (functions: `InitDB`, `UpdateEvent`, `GetUpdates`; inline SQL in handlers)
- Contains: Schema creation, migrations, CRUD operations via raw SQL
- Depends on: `modernc.org/sqlite` driver, `database/sql` stdlib
- Used by: HTTP handlers in the same file
**Entry Point / Wiring:**
- Purpose: Initialize database, configure routes, start HTTP server with graceful shutdown
- Location: `cmd/diunwebhook/main.go`
- Contains: Environment variable reading, mux setup, signal handling, server lifecycle
- Depends on: `pkg/diunwebhook` (imported as `diun`)
- Used by: Docker container CMD, direct `go run`
**Frontend SPA:**
- Purpose: Display DIUN update events in an interactive dashboard with drag-and-drop grouping
- Location: `frontend/src/`
- Contains: React components, custom hooks for data fetching, TypeScript type definitions
- Depends on: Backend REST API (`/api/*` endpoints)
- Used by: Served as static files from `frontend/dist/` by the Go server
## Data Flow
**Webhook Ingestion:**
1. DIUN sends `POST /webhook` with JSON payload containing image update event
2. `WebhookHandler` in `pkg/diunwebhook/diunwebhook.go` validates the `Authorization` header (if `WEBHOOK_SECRET` is set) using constant-time comparison
3. JSON body is decoded into `DiunEvent` struct; `image` field is required
4. `UpdateEvent()` acquires `mu.Lock()`, executes `INSERT OR REPLACE` into `updates` table (keyed on `image`), sets `received_at` to current time, resets `acknowledged_at` to `NULL`
5. Returns `200 OK`
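Step 2's constant-time comparison is typically done with crypto/subtle; a sketch of the check (authorized is an illustrative name, not the handler's):

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// authorized compares the presented header value against the configured
// secret in constant time, so response timing leaks nothing about how
// many leading bytes matched.
func authorized(header, secret string) bool {
	if secret == "" {
		return true // no WEBHOOK_SECRET configured: webhook is open
	}
	return subtle.ConstantTimeCompare([]byte(header), []byte(secret)) == 1
}

func main() {
	fmt.Println(authorized("s3cret", "s3cret")) // true
	fmt.Println(authorized("guess", "s3cret"))  // false
}
```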
**Dashboard Polling:**
1. React SPA (`useUpdates` hook in `frontend/src/hooks/useUpdates.ts`) polls `GET /api/updates` every 5 seconds
2. `UpdatesHandler` in `pkg/diunwebhook/diunwebhook.go` queries `updates` table with `LEFT JOIN` on `tag_assignments` and `tags`
3. Returns `map[string]UpdateEntry` as JSON (keyed by image name)
4. Frontend groups entries by tag, displays in `TagSection` components with `ServiceCard` children
**Acknowledge (Dismiss):**
1. User clicks acknowledge button on a `ServiceCard`
2. Frontend sends `PATCH /api/updates/{image}` via `useUpdates.acknowledge()`
3. Frontend performs optimistic update on local state
4. `DismissHandler` sets `acknowledged_at = datetime('now')` for matching image row
**Tag Management:**
1. Tags are fetched once on mount via `useTags` hook (`GET /api/tags`)
2. Create: `POST /api/tags` with `{ name }` -- tag names must be unique (409 on conflict)
3. Delete: `DELETE /api/tags/{id}` -- cascades to `tag_assignments` via FK constraint
4. Assign: `PUT /api/tag-assignments` with `{ image, tag_id }` -- `INSERT OR REPLACE`
5. Unassign: `DELETE /api/tag-assignments` with `{ image }`
6. Drag-and-drop in frontend uses `@dnd-kit/core`; `DndContext.onDragEnd` calls `assignTag()` which performs optimistic UI update then fires API call
**State Management:**
- **Backend:** No in-memory state beyond the `sync.Mutex`. All data lives in SQLite. The `db` and `mu` variables are package-level globals in `pkg/diunwebhook/diunwebhook.go`.
- **Frontend:** React `useState` hooks in two custom hooks:
- `useUpdates` (`frontend/src/hooks/useUpdates.ts`): holds `UpdatesMap`, loading/error state, polling countdown
- `useTags` (`frontend/src/hooks/useTags.ts`): holds `Tag[]`, provides create/delete callbacks
- No global state library (no Redux, Zustand, etc.) -- state is passed via props from `App.tsx`
## Key Abstractions
**DiunEvent:**
- Purpose: Represents a single DIUN webhook payload (image update notification)
- Defined in: `pkg/diunwebhook/diunwebhook.go` (Go struct), `frontend/src/types/diun.ts` (TypeScript interface)
- Pattern: Direct JSON mapping between Go struct tags and TypeScript interface
**UpdateEntry:**
- Purpose: Wraps a `DiunEvent` with metadata (received timestamp, acknowledged flag, optional tag)
- Defined in: `pkg/diunwebhook/diunwebhook.go` (Go), `frontend/src/types/diun.ts` (TypeScript)
- Pattern: The API returns `map[string]UpdateEntry` keyed by image name (`UpdatesMap` type in frontend)
**Tag:**
- Purpose: User-defined grouping label for organizing images
- Defined in: `pkg/diunwebhook/diunwebhook.go` (Go), `frontend/src/types/diun.ts` (TypeScript)
- Pattern: Simple ID + name, linked to images via `tag_assignments` join table
## Entry Points
**Go Server:**
- Location: `cmd/diunwebhook/main.go`
- Triggers: `go run ./cmd/diunwebhook/` or Docker container `CMD ["./server"]`
- Responsibilities: Read env vars (`DB_PATH`, `PORT`, `WEBHOOK_SECRET`), init DB, register routes, start HTTP server, handle graceful shutdown on SIGINT/SIGTERM
**Frontend SPA:**
- Location: `frontend/src/main.tsx`
- Triggers: Browser loads `index.html` from `frontend/dist/` (served by Go file server at `/`)
- Responsibilities: Mount React app, force dark mode (`document.documentElement.classList.add('dark')`)
**Webhook Endpoint:**
- Location: `POST /webhook` -> `WebhookHandler` in `pkg/diunwebhook/diunwebhook.go`
- Triggers: External DIUN instance sends webhook on image update detection
- Responsibilities: Authenticate (if secret set), validate payload, upsert event into database
## Concurrency Model
**Mutex-based serialization:**
- A single `sync.Mutex` (`mu`) in `pkg/diunwebhook/diunwebhook.go` guards all write operations to the database
- `UpdateEvent()`, `DismissHandler`, `TagsHandler` (POST), `TagByIDHandler` (DELETE), and `TagAssignmentHandler` (PUT/DELETE) all acquire `mu.Lock()` before writing
- Read operations (`GetUpdates`, `TagsHandler` GET) do NOT acquire the mutex
- SQLite connection is configured with `db.SetMaxOpenConns(1)` to prevent concurrent write issues
**HTTP Server:**
- Standard `net/http` server handles requests concurrently via goroutines
- Graceful shutdown with 15-second timeout on SIGINT/SIGTERM
## Error Handling
**Strategy:** Return appropriate HTTP status codes with plain-text error messages; log errors server-side via `log.Printf`
**Backend Patterns:**
- Method validation: Return `405 Method Not Allowed` for wrong HTTP methods
- Input validation: Return `400 Bad Request` for missing/malformed fields
- Authentication: Return `401 Unauthorized` if webhook secret doesn't match
- Not found: Return `404 Not Found` when row doesn't exist (e.g., dismiss nonexistent image)
- Conflict: Return `409 Conflict` for unique constraint violations (duplicate tag name)
- Internal errors: Return `500 Internal Server Error` for database failures
- Fatal startup errors: `log.Fatalf` on `InitDB` failure
**Frontend Patterns:**
- `useUpdates`: catches fetch errors, stores error message in state, displays error banner
- `useTags`: catches errors, logs to `console.error`, fails silently (no user-visible error)
- `assignTag`: uses optimistic update -- updates local state first, fires API call, logs errors to console but does not revert on failure
## Cross-Cutting Concerns
**Logging:** Standard library `log` package. Logs webhook receipt, decode errors, storage errors. No structured logging or log levels beyond `log.Printf` and `log.Fatalf`.
**Validation:** Manual validation in each handler. No validation library or middleware. Each handler checks HTTP method, decodes body, validates required fields individually.
**Authentication:** Optional token-based auth on webhook endpoint only. `WEBHOOK_SECRET` env var compared via `crypto/subtle.ConstantTimeCompare` against `Authorization` header. No auth on API endpoints (`/api/*`).
**CORS:** Not configured. Frontend is served from the same origin as the API, so CORS is not needed in production. Vite dev server proxies `/api` and `/webhook` to `localhost:8080`.
**Database Migrations:** Inline in `InitDB()`. Uses `CREATE TABLE IF NOT EXISTS` for initial schema and `ALTER TABLE ADD COLUMN` (error silently ignored) for adding `acknowledged_at` to existing databases.
---
*Architecture analysis: 2026-03-23*

# Codebase Concerns
**Analysis Date:** 2026-03-23
## Tech Debt
**Global mutable state in library package:**
- Issue: The package uses package-level `var db *sql.DB`, `var mu sync.Mutex`, and `var webhookSecret string`. This makes the package non-reusable and harder to test — only one "instance" can exist per process.
- Files: `pkg/diunwebhook/diunwebhook.go` (lines 48-52)
- Impact: Cannot run multiple instances, cannot run tests in parallel safely, tight coupling to global state.
- Fix approach: Refactor to a struct-based design (e.g., `type Server struct { db *sql.DB; mu sync.Mutex; secret string }`) with methods instead of package functions. Priority: Medium.
**Module name is "awesomeProject":**
- Issue: The Go module is named `awesomeProject` (a Go IDE default placeholder), not a meaningful name like `github.com/user/diun-dashboard` or similar.
- Files: `go.mod` (line 1), `cmd/diunwebhook/main.go` (line 13), `pkg/diunwebhook/diunwebhook_test.go` (line 15)
- Impact: Confusing for contributors, unprofessional in imports, cannot be used as a Go library.
- Fix approach: Rename module to a proper path (e.g., `gitea.jeanlucmakiola.de/makiolaj/diun-dashboard`) and update all imports. Priority: Low.
**Empty error handlers on rows.Close():**
- Issue: Multiple `defer rows.Close()` wrappers silently swallow errors with empty `if err != nil {}` blocks.
- Files: `pkg/diunwebhook/diunwebhook.go` (lines 131-136, 248-253)
- Impact: Suppressed errors make debugging harder. Not functionally critical since close errors on read queries rarely matter, but the pattern is misleading.
- Fix approach: Either log the error or use a simple `defer rows.Close()` without the wrapper. Priority: Low.
**Silent error swallowing in tests:**
- Issue: Several tests do `if err != nil { return }` instead of `t.Fatal(err)`, silently passing on failure.
- Files: `pkg/diunwebhook/diunwebhook_test.go` (lines 38-40, 153-154, 228-231, 287-289)
- Impact: Tests can silently pass when they should fail, hiding bugs.
- Fix approach: Replace `return` with `t.Fatalf("...: %v", err)` in all test error checks. Priority: Medium.
**Ad-hoc SQL migration strategy:**
- Issue: Schema migrations are done inline with silent `ALTER TABLE` that ignores errors: `_, _ = db.Exec("ALTER TABLE updates ADD COLUMN acknowledged_at TEXT")`.
- Files: `pkg/diunwebhook/diunwebhook.go` (line 87)
- Impact: Works for a single column addition but does not scale. No version tracking, no rollback, no way to know which migrations have run.
- Fix approach: Introduce a `schema_version` table or use a lightweight migration library. Priority: Low (acceptable for current scope).
**INSERT OR REPLACE loses tag assignments:**
- Issue: `UpdateEvent()` uses `INSERT OR REPLACE` which deletes and re-inserts the row. Because `tag_assignments` references `updates.image` but there is no `ON DELETE CASCADE` on that FK (and SQLite FK enforcement may not be enabled), the assignment row becomes orphaned or the behavior is undefined.
- Files: `pkg/diunwebhook/diunwebhook.go` (line 109)
- Impact: When DIUN sends a new event for an already-tracked image, the tag assignment may be lost. Users would need to re-tag images after each update.
- Fix approach: Use `INSERT ... ON CONFLICT(image) DO UPDATE SET ...` (UPSERT) instead of `INSERT OR REPLACE`, or enable FK enforcement with `PRAGMA foreign_keys = ON` and add CASCADE. Priority: High.
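A minimal sketch of the suggested UPSERT (column list abbreviated; the real statement would carry all `updates` columns):

```sql
-- Updates the existing row in place instead of delete-and-reinsert, so rows
-- in tag_assignments that reference updates.image are left untouched.
-- acknowledged_at is reset to NULL, matching the current webhook behavior.
INSERT INTO updates (image, status, received_at, acknowledged_at)
VALUES (?, ?, datetime('now'), NULL)
ON CONFLICT(image) DO UPDATE SET
  status          = excluded.status,
  received_at     = excluded.received_at,
  acknowledged_at = NULL;
```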
**Foreign key enforcement not enabled:**
- Issue: SQLite does not enforce foreign keys by default. The `tag_assignments.tag_id REFERENCES tags(id) ON DELETE CASCADE` constraint exists in the schema but `PRAGMA foreign_keys = ON` is never executed.
- Files: `pkg/diunwebhook/diunwebhook.go` (lines 58-103)
- Impact: Deleting a tag may not cascade-delete assignments, leaving orphaned rows in `tag_assignments`. The test `TestDeleteTagHandler_CascadesAssignment` may pass due to the LEFT JOIN query hiding orphans rather than them actually being deleted.
- Fix approach: Add `db.Exec("PRAGMA foreign_keys = ON")` immediately after opening the database connection in `InitDB()`. Priority: High.
## Security Considerations
**No authentication on API endpoints:**
- Risk: All API endpoints (`GET /api/updates`, `PATCH /api/updates/*`, `GET/POST /api/tags`, etc.) are completely unauthenticated. Only `POST /webhook` supports optional token auth.
- Files: `cmd/diunwebhook/main.go` (lines 38-44), `pkg/diunwebhook/diunwebhook.go` (all handler functions)
- Current mitigation: The dashboard is presumably deployed on a private network.
- Recommendations: Add optional basic auth or token auth middleware for API endpoints. At minimum, document the assumption that the dashboard should not be exposed to the public internet. Priority: Medium.
**No request body size limit on webhook:**
- Risk: `json.NewDecoder(r.Body).Decode(&event)` reads the entire body without limit. A malicious client could send a multi-GB payload causing OOM.
- Files: `pkg/diunwebhook/diunwebhook.go` (line 179)
- Current mitigation: `ReadTimeout: 10 * time.Second` on the server provides some protection.
- Recommendations: Wrap `r.Body` with `http.MaxBytesReader(w, r.Body, maxSize)` (e.g., 1MB). Apply the same to `TagsHandler` POST and `TagAssignmentHandler`. Priority: Medium.
**No CORS headers configured:**
- Risk: In development the Vite proxy handles cross-origin, but if the API is accessed directly from a different origin in production, there are no CORS headers.
- Files: `cmd/diunwebhook/main.go` (lines 38-45)
- Current mitigation: SPA is served from the same origin as the API.
- Recommendations: Not urgent since the SPA and API share an origin. Document this constraint. Priority: Low.
**Webhook secret sent as raw Authorization header:**
- Risk: The webhook secret is compared against the raw `Authorization` header value, not using a standard scheme like `Bearer <token>`. This is non-standard but functionally fine.
- Files: `pkg/diunwebhook/diunwebhook.go` (lines 164-170)
- Current mitigation: Uses `crypto/subtle.ConstantTimeCompare` which prevents timing attacks.
- Recommendations: Consider supporting `Bearer <token>` format for standard compliance. Priority: Low.
## Performance Bottlenecks
**Frontend polls entire dataset every 5 seconds:**
- Problem: `GET /api/updates` returns ALL updates as a single JSON map. The query joins three tables every time. As the number of tracked images grows, both the query and the JSON payload grow linearly.
- Files: `frontend/src/hooks/useUpdates.ts` (line 4, `POLL_INTERVAL = 5000`), `pkg/diunwebhook/diunwebhook.go` (lines 120-161)
- Cause: No incremental/differential update mechanism. No pagination. No caching headers.
- Improvement path: Add `If-Modified-Since` / `ETag` support, or switch to Server-Sent Events (SSE) / WebSocket for push-based updates. Add pagination for large datasets. Priority: Medium (fine for <1000 images, problematic beyond).
**Global mutex on all write operations:**
- Problem: A single `sync.Mutex` serializes all database writes across all handlers.
- Files: `pkg/diunwebhook/diunwebhook.go` (line 49, used at lines 107, 224, 281, 317, 351, 369)
- Cause: SQLite single-writer limitation addressed with a process-level mutex.
- Improvement path: `SetMaxOpenConns(1)` already serializes at the driver level, so the mutex is redundant for correctness but adds belt-and-suspenders safety. For higher throughput, consider WAL mode (`PRAGMA journal_mode=WAL`) which allows concurrent reads. Priority: Low.
**GetUpdates() bypasses the mutex, so reads are not serialized with writes:**
- Problem: `GetUpdates()` does not acquire the mutex, so it can read while a write is in progress. With `SetMaxOpenConns(1)`, the driver serializes connections, but the function could block waiting for the connection.
- Files: `pkg/diunwebhook/diunwebhook.go` (lines 120-161)
- Cause: Inconsistent locking strategy — writes lock the mutex, reads do not.
- Improvement path: Either lock reads too (for consistency) or enable WAL mode and document the strategy. Priority: Low.
## Scalability Limitations
**SQLite single-file database:**
- Current capacity: Suitable for hundreds to low thousands of tracked images.
- Limit: SQLite single-writer bottleneck. No replication. Database file grows unbounded since old updates are never purged.
- Scaling path: Add a retention/cleanup mechanism for old acknowledged updates. For multi-instance deployments, migrate to PostgreSQL. Priority: Low (appropriate for the use case).
**No data retention or cleanup:**
- Current capacity: Every image update is kept forever in the `updates` table.
- Limit: Database will grow indefinitely. No mechanism to archive or delete old, acknowledged entries.
- Scaling path: Add a configurable retention period (e.g., auto-delete acknowledged entries older than N days). Priority: Medium.
## Fragile Areas
**URL path parsing for route parameters:**
- Files: `pkg/diunwebhook/diunwebhook.go` (lines 219, 311)
- Why fragile: Image names and tag IDs are extracted via `strings.TrimPrefix(r.URL.Path, "/api/updates/")` and `strings.TrimPrefix(r.URL.Path, "/api/tags/")`. This works but is brittle — any change to the route prefix requires changing these strings in two places (handler + `main.go`).
- Safe modification: If adding new routes or refactoring, ensure the prefix strings stay in sync with `mux.HandleFunc` registrations in `cmd/diunwebhook/main.go`.
- Test coverage: Good — `TestDismissHandler_SlashInImageName` covers the tricky case of slashes in image names.
**Optimistic UI updates without rollback:**
- Files: `frontend/src/hooks/useUpdates.ts` (lines 60-84)
- Why fragile: `assignTag()` performs an optimistic state update before the API call. If the API call fails, the UI shows the new tag but the server still has the old one. No rollback occurs — only a `console.error`.
- Safe modification: Store previous state before optimistic update, restore on error.
- Test coverage: No frontend tests exist.
**Single monolithic handler file:**
- Files: `pkg/diunwebhook/diunwebhook.go` (380 lines)
- Why fragile: All database logic, HTTP handlers, data types, and initialization live in a single file. As features are added, this file will become increasingly difficult to navigate.
- Safe modification: Split into `models.go`, `storage.go`, `handlers.go`, and `init.go` within the same package.
- Test coverage: Good test coverage for existing functionality.
## Dependencies at Risk
**Direct dependency marked as indirect in go.mod:**
- Risk: All Go dependencies are marked `// indirect` — the project has only one direct dependency (`modernc.org/sqlite`) but it is not explicitly listed as direct.
- Files: `go.mod`
- Impact: `go mod tidy` behavior may be unpredictable. The `go.sum` file provides integrity but the intent is unclear.
- Migration plan: Run `go mod tidy` and ensure `modernc.org/sqlite` is listed without the `// indirect` comment. Priority: Low.
## Missing Critical Features
**No frontend tests:**
- Problem: Zero test files exist for the React frontend. No unit tests, no integration tests, no E2E tests.
- Blocks: Cannot verify frontend behavior automatically, cannot catch regressions in UI logic (tag assignment, acknowledge flow, drag-and-drop).
- Priority: Medium.
**No "acknowledge all" or bulk operations:**
- Problem: Users must acknowledge images one by one. No bulk dismiss, no "acknowledge all in group" action.
- Blocks: Tedious workflow when many images have updates.
- Priority: Low.
**No dark/light theme toggle (hardcoded dark):**
- Problem: The UI uses CSS variables that assume a dark theme. No toggle or system preference detection.
- Files: `frontend/src/index.css`, `frontend/src/App.tsx`
- Blocks: Users who prefer light mode have no option.
- Priority: Low.
## Test Coverage Gaps
**No tests for TagAssignmentHandler edge cases:**
- What's not tested: Assigning a non-existent image (image not in `updates` table), assigning with `tag_id: 0` or negative values, malformed JSON bodies.
- Files: `pkg/diunwebhook/diunwebhook.go` (lines 333-379)
- Risk: Unknown behavior for invalid inputs.
- Priority: Low.
**No tests for concurrent tag operations:**
- What's not tested: Concurrent create/delete of tags, concurrent assign/unassign operations.
- Files: `pkg/diunwebhook/diunwebhook_test.go`
- Risk: Potential race conditions in tag operations under load.
- Priority: Low.
**No frontend test infrastructure:**
- What's not tested: All React components, hooks, drag-and-drop behavior, polling logic, optimistic updates.
- Files: `frontend/src/**/*.{ts,tsx}`
- Risk: UI regressions go undetected. The `useUpdates` hook contains business logic (polling, optimistic updates) that should be tested.
- Priority: Medium.
## Accessibility Concerns
**Drag handle only visible on hover:**
- Issue: The grip handle for drag-and-drop (`GripVertical` icon) has `opacity-0 group-hover:opacity-100`, making it invisible until hover. Keyboard-only and touch users cannot discover this interaction.
- Files: `frontend/src/components/ServiceCard.tsx` (line 96)
- Impact: Drag-and-drop is the only way to re-tag images. Users without hover capability cannot reorganize.
- Fix approach: Make the handle always visible (or visible on focus), and provide an alternative non-drag method for tag assignment (e.g., a dropdown). Priority: Medium.
**Delete button invisible until hover:**
- Issue: The tag section delete button has `opacity-0 group-hover:opacity-100`, same discoverability problem.
- Files: `frontend/src/components/TagSection.tsx` (line 62)
- Impact: Cannot discover delete action without hover.
- Fix approach: Keep visible or show on focus. Priority: Low.
**No skip-to-content link, no ARIA landmarks:**
- Issue: The page lacks skip navigation links and semantic ARIA roles beyond basic HTML.
- Files: `frontend/src/App.tsx`, `frontend/src/components/Header.tsx`
- Impact: Screen reader users must tab through the entire header to reach content.
- Fix approach: Add `<a href="#main" className="sr-only focus:not-sr-only">Skip to content</a>` and `role="main"` / `aria-label` attributes. Priority: Low.
---
*Concerns audit: 2026-03-23*

# Coding Conventions
**Analysis Date:** 2026-03-23
## Naming Patterns
**Go Files:**
- Package-level source files use the package name: `diunwebhook.go`
- Test files follow Go convention: `diunwebhook_test.go`
- Test-only export files: `export_test.go`
- Entry point: `main.go` inside `cmd/diunwebhook/`
**Go Functions:**
- PascalCase for exported functions: `WebhookHandler`, `UpdateEvent`, `InitDB`, `GetUpdates`
- Handler functions are named `<Noun>Handler`: `WebhookHandler`, `UpdatesHandler`, `DismissHandler`, `TagsHandler`, `TagByIDHandler`, `TagAssignmentHandler`
- Test functions use `Test<FunctionName>_<Scenario>`: `TestWebhookHandler_BadRequest`, `TestDismissHandler_NotFound`
**Go Types:**
- PascalCase structs: `DiunEvent`, `UpdateEntry`, `Tag`
- JSON tags use snake_case: `json:"diun_version"`, `json:"hub_link"`, `json:"received_at"`
**Go Variables:**
- Package-level unexported variables use short names: `mu`, `db`, `webhookSecret`
- Local variables use short idiomatic Go names: `w`, `r`, `err`, `res`, `n`, `e`
**TypeScript Files:**
- Components: PascalCase `.tsx` files: `ServiceCard.tsx`, `AcknowledgeButton.tsx`, `Header.tsx`, `TagSection.tsx`
- Hooks: camelCase with `use` prefix: `useUpdates.ts`, `useTags.ts`
- Types: camelCase `.ts` files: `diun.ts`
- Utilities: camelCase `.ts` files: `utils.ts`, `time.ts`, `serviceIcons.ts`
- UI primitives (shadcn): lowercase `.tsx` files: `badge.tsx`, `button.tsx`, `card.tsx`, `tooltip.tsx`
**TypeScript Functions:**
- camelCase for regular functions and hooks: `fetchUpdates`, `useUpdates`, `getServiceIcon`
- PascalCase for React components: `ServiceCard`, `StatCard`, `AcknowledgeButton`
- Helper functions within components use camelCase: `getInitials`, `getTag`, `getShortName`
- Event handlers prefixed with `handle`: `handleDragEnd`, `handleNewGroupSubmit`
**TypeScript Types:**
- PascalCase interfaces: `DiunEvent`, `UpdateEntry`, `Tag`, `ServiceCardProps`
- Type aliases: PascalCase: `UpdatesMap`
- Interface properties use snake_case matching the Go JSON tags: `diun_version`, `hub_link`
## Code Style
**Go Formatting:**
- `gofmt` enforced in CI (formatting check fails the build)
- No additional Go linter (golangci-lint) configured
- `go vet` runs in CI
- Standard Go formatting: tabs for indentation
**TypeScript Formatting:**
- No ESLint or Prettier configured in the frontend
- No formatting enforcement in CI for frontend code
- Consistent 2-space indentation observed in all `.tsx` and `.ts` files
- Single quotes for strings in TypeScript
- No semicolons (observed in all frontend files)
- Trailing commas used in multi-line constructs
**TypeScript Strictness:**
- `strict: true` in `tsconfig.app.json`
- `noUnusedLocals: true`
- `noUnusedParameters: true`
- `noFallthroughCasesInSwitch: true`
- `noUncheckedSideEffectImports: true`
## Import Organization
**Go Import Order:**
Standard library imports come first, followed by a blank line, then the project import using the module alias:
```go
import (
"bytes"
"encoding/json"
"net/http"
"net/http/httptest"
"os"
"testing"
diun "awesomeProject/pkg/diunwebhook"
)
```
- The project module is aliased as `diun` in both `main.go` and test files
- The blank-import pattern `_ "modernc.org/sqlite"` is used for the SQLite driver in `pkg/diunwebhook/diunwebhook.go`
**TypeScript Import Order:**
1. React and framework imports (`react`, `@dnd-kit/core`)
2. Internal imports using `@/` path alias (`@/hooks/useUpdates`, `@/components/Header`)
3. Type-only imports: `import type { Tag, UpdatesMap } from '@/types/diun'`
**Path Aliases:**
- `@/` maps to `frontend/src/` (configured in `vite.config.ts` and `tsconfig.app.json`)
## Error Handling
**Go Patterns:**
- Handlers use `http.Error(w, message, statusCode)` for all error responses
- Error messages are lowercase: `"bad request"`, `"internal error"`, `"not found"`, `"method not allowed"`
- Internal errors are logged with `log.Printf` before returning HTTP 500
- Decode errors include context: `log.Printf("WebhookHandler: failed to decode request: %v", err)`
- Fatal errors in `main.go` use `log.Fatalf`
- `errors.Is()` used for sentinel error comparison (e.g., `http.ErrServerClosed`)
- String matching used for SQLite constraint errors: `strings.Contains(err.Error(), "UNIQUE")`
**TypeScript Patterns:**
- API errors throw with HTTP status: ``throw new Error(`HTTP ${res.status}`)``
- Catch blocks use `console.error` for logging
- Error state stored in hook state: `setError(e instanceof Error ? e.message : 'Failed to fetch updates')`
- Optimistic updates used for tag assignment (update UI first, then call API)
## Logging
**Framework:** Go standard `log` package
**Patterns:**
- Startup messages: `log.Printf("Listening on :%s", port)`
- Warnings: `log.Println("WARNING: WEBHOOK_SECRET not set ...")`
- Request logging on success: `log.Printf("Update received: %s (%s)", event.Image, event.Status)`
- Error logging before HTTP error response: `log.Printf("WebhookHandler: failed to store event: %v", err)`
- Handler name prefixed to log messages: `"WebhookHandler: ..."`, `"UpdatesHandler: ..."`
**Frontend:** `console.error` for API failures, no structured logging
## Comments
**When to Comment:**
- Comments are sparse in the Go codebase
- Handler functions have short doc comments describing the routes they handle:
```go
// TagsHandler handles GET /api/tags and POST /api/tags
// TagByIDHandler handles DELETE /api/tags/{id}
// TagAssignmentHandler handles PUT /api/tag-assignments and DELETE /api/tag-assignments
```
- Inline comments used for non-obvious behavior: `// Migration: add acknowledged_at to existing databases`
- No JSDoc/TSDoc in the frontend codebase
## Function Design
**Go Handler Pattern:**
- Each handler is a standalone `func(http.ResponseWriter, *http.Request)`
- Method checking done at the top of each handler (not via middleware)
- Multi-method handlers use `switch r.Method`
- URL path parameters extracted via `strings.TrimPrefix`
- Request bodies decoded with `json.NewDecoder(r.Body).Decode(&target)`
- Responses written with `json.NewEncoder(w).Encode(data)` or `w.WriteHeader(status)`
- Mutex (`mu`) used around write operations to SQLite
**TypeScript Hook Pattern:**
- Custom hooks return object with state and action functions
- `useCallback` wraps all action functions
- `useEffect` for side effects (polling, initial fetch)
- State updates use functional form: `setUpdates(prev => { ... })`
## Module Design
**Go Exports:**
- Single package `diunwebhook` exports all types and handler functions
- No barrel files; single source file `diunwebhook.go` contains everything
- Test helpers exposed via `export_test.go` (only visible to `_test` packages)
**TypeScript Exports:**
- Named exports for all components, hooks, and utilities
- Default export only for the root `App` component (`export default function App()`)
- Type exports use `export interface` or `export type`
- `@/components/ui/` contains shadcn primitives (`badge.tsx`, `button.tsx`, etc.)
## Git Commit Message Conventions
**Format:** Conventional Commits with bold markdown formatting
**Pattern:** `**<type>(<scope>):** <description>`
**Types observed:**
- `feat` - new features
- `fix` - bug fixes
- `docs` - documentation changes
- `chore` - maintenance tasks (deps, config)
- `refactor` - code restructuring
- `style` - UI/styling changes
- `test` - test additions
**Scopes observed:** `docs`, `compose`, `webhook`, `ci`, `ui`, `main`, `errors`, `sql`, `api`, `deps`, `stats`
**Examples:**
```
**feat(webhook):** add `WEBHOOK_SECRET` for token authentication support
**fix(ci):** improve version bump script for robustness and compatibility
**docs:** expand `index.md` with architecture, quick start, and tech stack
**chore(docs):** add `.gitignore` for `docs` and introduce `bun.lock` file
```
**Multi-change commits:** Use bullet list with each item prefixed by `- **type(scope):**`
---
*Convention analysis: 2026-03-23*

# External Integrations
**Analysis Date:** 2026-03-23
## APIs & External Services
**DIUN (Docker Image Update Notifier):**
- DIUN sends webhook POST requests when container image updates are detected
- Endpoint: `POST /webhook`
- SDK/Client: None (DIUN pushes to this app; this app is the receiver)
- Auth: `Authorization` header must match `WEBHOOK_SECRET` env var (when set)
- Source: `pkg/diunwebhook/diunwebhook.go` lines 163-199
## API Contracts
### Webhook Ingestion
**`POST /webhook`** - Receive a DIUN event
- Handler: `WebhookHandler` in `pkg/diunwebhook/diunwebhook.go`
- Auth: `Authorization` header checked via constant-time compare against `WEBHOOK_SECRET`
- Request body:
```json
{
"diun_version": "4.28.0",
"hostname": "docker-host",
"status": "new",
"provider": "docker",
"image": "registry/org/image:tag",
"hub_link": "https://hub.docker.com/r/...",
"mime_type": "application/vnd.docker.distribution.manifest.v2+json",
"digest": "sha256:abc123...",
"created": "2026-03-23T10:00:00Z",
"platform": "linux/amd64",
"metadata": {
"ctn_names": "container-name",
"ctn_id": "abc123",
"ctn_state": "running",
"ctn_status": "Up 2 days"
}
}
```
- Response: `200 OK` (empty body) on success
- Errors: `401 Unauthorized`, `405 Method Not Allowed`, `400 Bad Request` (missing `image` field or invalid JSON), `500 Internal Server Error`
- Behavior: Upserts into `updates` table keyed by `image`. Replaces existing entry and resets `acknowledged_at` to NULL.
### Updates API
**`GET /api/updates`** - List all tracked image updates
- Handler: `UpdatesHandler` in `pkg/diunwebhook/diunwebhook.go`
- Response: `200 OK` with JSON object keyed by image name:
```json
{
"registry/org/image:tag": {
"event": { /* DiunEvent fields */ },
"received_at": "2026-03-23T10:00:00Z",
"acknowledged": false,
"tag": { "id": 1, "name": "production" } // or null
}
}
```
**`PATCH /api/updates/{image}`** - Dismiss (acknowledge) an update
- Handler: `DismissHandler` in `pkg/diunwebhook/diunwebhook.go`
- URL parameter: `{image}` is the full image name (URL-encoded)
- Response: `204 No Content` on success
- Errors: `405 Method Not Allowed`, `400 Bad Request`, `404 Not Found`, `500 Internal Server Error`
- Behavior: Sets `acknowledged_at = datetime('now')` on the matching row
### Tags API
**`GET /api/tags`** - List all tags
- Handler: `TagsHandler` in `pkg/diunwebhook/diunwebhook.go`
- Response: `200 OK` with JSON array:
```json
[{ "id": 1, "name": "production" }, { "id": 2, "name": "staging" }]
```
**`POST /api/tags`** - Create a new tag
- Handler: `TagsHandler` in `pkg/diunwebhook/diunwebhook.go`
- Request body: `{ "name": "production" }`
- Response: `201 Created` with `{ "id": 1, "name": "production" }`
- Errors: `400 Bad Request` (empty name), `409 Conflict` (duplicate name), `500 Internal Server Error`
**`DELETE /api/tags/{id}`** - Delete a tag
- Handler: `TagByIDHandler` in `pkg/diunwebhook/diunwebhook.go`
- URL parameter: `{id}` is integer tag ID
- Response: `204 No Content`
- Errors: `405 Method Not Allowed`, `400 Bad Request` (invalid ID), `404 Not Found`, `500 Internal Server Error`
- Behavior: Cascading delete removes all `tag_assignments` referencing this tag
### Tag Assignments API
**`PUT /api/tag-assignments`** - Assign an image to a tag
- Handler: `TagAssignmentHandler` in `pkg/diunwebhook/diunwebhook.go`
- Request body: `{ "image": "registry/org/image:tag", "tag_id": 1 }`
- Response: `204 No Content`
- Errors: `400 Bad Request`, `404 Not Found` (tag doesn't exist), `500 Internal Server Error`
- Behavior: `INSERT OR REPLACE` - reassigns if already assigned
**`DELETE /api/tag-assignments`** - Unassign an image from its tag
- Handler: `TagAssignmentHandler` in `pkg/diunwebhook/diunwebhook.go`
- Request body: `{ "image": "registry/org/image:tag" }`
- Response: `204 No Content`
- Errors: `400 Bad Request`, `500 Internal Server Error`
### Static File Serving
**`GET /` and all unmatched routes** - Serve React SPA
- Handler: `http.FileServer(http.Dir("./frontend/dist"))` in `cmd/diunwebhook/main.go`
- Serves the production build of the React frontend
## Data Storage
**Database:**
- SQLite (file-based, single-writer)
- Connection: `DB_PATH` env var (default `./diun.db`)
- Driver: `modernc.org/sqlite` (pure Go, registered as `"sqlite"` in `database/sql`)
- Max open connections: 1 (`db.SetMaxOpenConns(1)`)
- Write concurrency: `sync.Mutex` in `pkg/diunwebhook/diunwebhook.go`
**Schema:**
```sql
-- Table: updates (one row per unique image)
CREATE TABLE IF NOT EXISTS updates (
image TEXT PRIMARY KEY,
diun_version TEXT NOT NULL DEFAULT '',
hostname TEXT NOT NULL DEFAULT '',
status TEXT NOT NULL DEFAULT '',
provider TEXT NOT NULL DEFAULT '',
hub_link TEXT NOT NULL DEFAULT '',
mime_type TEXT NOT NULL DEFAULT '',
digest TEXT NOT NULL DEFAULT '',
created TEXT NOT NULL DEFAULT '',
platform TEXT NOT NULL DEFAULT '',
ctn_name TEXT NOT NULL DEFAULT '',
ctn_id TEXT NOT NULL DEFAULT '',
ctn_state TEXT NOT NULL DEFAULT '',
ctn_status TEXT NOT NULL DEFAULT '',
received_at TEXT NOT NULL,
acknowledged_at TEXT
);
-- Table: tags (user-defined grouping labels)
CREATE TABLE IF NOT EXISTS tags (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL UNIQUE
);
-- Table: tag_assignments (image-to-tag mapping, one tag per image)
CREATE TABLE IF NOT EXISTS tag_assignments (
image TEXT PRIMARY KEY,
tag_id INTEGER NOT NULL REFERENCES tags(id) ON DELETE CASCADE
);
```
**Migrations:**
- Schema is created on startup via `InitDB()` in `pkg/diunwebhook/diunwebhook.go`
- Uses `CREATE TABLE IF NOT EXISTS` for all tables
- One manual migration: `ALTER TABLE updates ADD COLUMN acknowledged_at TEXT` (silently ignored if already present)
- No formal migration framework; migrations are inline Go code
**File Storage:** Local filesystem only (SQLite database file)
**Caching:** None
## Authentication & Identity
**Webhook Authentication:**
- Token-based via `WEBHOOK_SECRET` env var
- Checked in `WebhookHandler` using `crypto/subtle.ConstantTimeCompare` against the `Authorization` header
- When `WEBHOOK_SECRET` is empty, the webhook endpoint is unprotected (warning logged at startup)
- Implementation: `pkg/diunwebhook/diunwebhook.go` lines 54-56, 163-170
**User Authentication:** None. The dashboard and all API endpoints (except webhook) are open/unauthenticated.
## Monitoring & Observability
**Error Tracking:** None (no Sentry, Datadog, etc.)
**Logs:**
- Go stdlib `log` package writing to stdout
- Key log points: startup warnings, webhook receipt, errors in handlers
- No structured logging framework
## CI/CD & Deployment
**Hosting:** Self-hosted via Docker; source, CI, and registry live on a Gitea instance at `gitea.jeanlucmakiola.de`
**Container Registry:** `gitea.jeanlucmakiola.de/makiolaj/diundashboard`
**CI Pipeline (Gitea Actions):**
- Config: `.gitea/workflows/ci.yml`
- Triggers: Push to `develop`, PRs targeting `develop`
- Steps: `gofmt` check, `go vet`, tests with coverage (warn below 80%), `go build`
- Runner: Custom Docker image with Go + Node/Bun toolchains
**Release Pipeline (Gitea Actions):**
- Config: `.gitea/workflows/release.yml`
- Trigger: Manual `workflow_dispatch` with semver bump choice (patch/minor/major)
- Steps: Run full CI checks, compute new version tag, create git tag, build and push Docker image (versioned + `latest`), create Gitea release with changelog
- Secrets required: `GITEA_TOKEN`, `REGISTRY_TOKEN`
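The "compute new version tag" step can be illustrated with a small sketch (the actual workflow does this in shell; the function name is hypothetical):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// bump applies a patch/minor/major increment to a "vMAJOR.MINOR.PATCH"
// tag -- the choice offered by the workflow_dispatch input above.
func bump(tag, kind string) string {
	parts := strings.Split(strings.TrimPrefix(tag, "v"), ".")
	maj, _ := strconv.Atoi(parts[0])
	min, _ := strconv.Atoi(parts[1])
	pat, _ := strconv.Atoi(parts[2])
	switch kind {
	case "major":
		maj, min, pat = maj+1, 0, 0
	case "minor":
		min, pat = min+1, 0
	default: // "patch"
		pat++
	}
	return fmt.Sprintf("v%d.%d.%d", maj, min, pat)
}
```

For example, `bump("v1.2.3", "minor")` yields `v1.3.0`, which would then be tagged and used for the versioned Docker image.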
**Docker Build:**
- Multi-stage Dockerfile at project root (`Dockerfile`)
- Stage 1: `oven/bun:1-alpine` - Build frontend (`bun install --frozen-lockfile && bun run build`)
- Stage 2: `golang:1.26-alpine` - Build Go binary (`CGO_ENABLED=0 go build`)
- Stage 3: `alpine:3.18` - Runtime with binary + static assets, exposes port 8080
**Docker Compose:**
- `compose.yml` - Production deploy (pulls `latest` from registry, mounts `diun-data` volume at `/data`)
- `compose.dev.yml` - Local development (builds from Dockerfile)
**Documentation Site:**
- Separate `docs/Dockerfile` and `docs/nginx.conf` for static site deployment via Nginx
- Built with VitePress, served as static HTML
## Environment Configuration
**Required env vars (production):**
- None strictly required (all have defaults)
**Recommended env vars:**
- `WEBHOOK_SECRET` - Protect webhook endpoint from unauthorized access
- `DB_PATH` - Set to `/data/diun.db` in Docker for persistent volume mount
- `PORT` - Override default port 8080
**Secrets:**
- `WEBHOOK_SECRET` - Shared secret between DIUN and this app
- `GITEA_TOKEN` - CI/CD pipeline (Gitea API access)
- `REGISTRY_TOKEN` - CI/CD pipeline (Docker registry push)
## Webhooks & Callbacks
**Incoming:**
- `POST /webhook` - Receives DIUN image update notifications
**Outgoing:**
- None
## Frontend-Backend Communication
**Dev Mode:**
- Vite dev server on `:5173` proxies `/api` and `/webhook` to `http://localhost:8080` (`frontend/vite.config.ts`)
**Production:**
- Go server serves `frontend/dist/` at `/` via `http.FileServer`
- API and webhook routes are on the same origin (no CORS needed)
**Polling:**
- React SPA polls `GET /api/updates` every 5 seconds (no WebSocket/SSE)
---
*Integration audit: 2026-03-23*


`.planning/codebase/STACK.md`

# Technology Stack
**Analysis Date:** 2026-03-23
## Languages
**Primary:**
- Go 1.26 - Backend HTTP server and all API logic (`cmd/diunwebhook/main.go`, `pkg/diunwebhook/diunwebhook.go`)
- TypeScript ~5.7 - Frontend React SPA (`frontend/src/`)
**Secondary:**
- SQL (SQLite dialect) - Inline schema DDL and queries in `pkg/diunwebhook/diunwebhook.go`
## Runtime
**Environment:**
- Go 1.26 (compiled binary, no runtime needed in production)
- Bun (frontend build toolchain, uses `oven/bun:1-alpine` Docker image)
- Alpine Linux 3.18 (production container base)
**Package Manager:**
- Go modules - `go.mod` at project root (module name: `awesomeProject`)
- Bun - `frontend/bun.lock` present for frontend dependencies
- Bun - `docs/bun.lock` present for documentation site dependencies
## Frameworks
**Core:**
- `net/http` (Go stdlib) - HTTP server, routing, and handler registration. No third-party router.
- React 19 (`^19.0.0`) - Frontend SPA (`frontend/`)
- Vite 6 (`^6.0.5`) - Frontend dev server and build tool (`frontend/vite.config.ts`)
**UI:**
- Tailwind CSS 3.4 (`^3.4.17`) - Utility-first CSS (`frontend/tailwind.config.ts`)
- shadcn/ui - Component library (uses Radix UI primitives, `class-variance-authority`, `clsx`, `tailwind-merge`)
- Radix UI (`@radix-ui/react-tooltip` `^1.1.6`) - Accessible tooltip primitives
- dnd-kit (`@dnd-kit/core` `^6.3.1`, `@dnd-kit/utilities` `^3.2.2`) - Drag and drop
- Lucide React (`^0.469.0`) - Icon library
- simple-icons (`^16.9.0`) - Brand/service icons
**Documentation:**
- VitePress (`^1.6.3`) - Static documentation site (`docs/`)
**Testing:**
- Go stdlib `testing` package with `httptest` for handler tests
- No frontend test framework detected
**Build/Dev:**
- Vite 6 (`^6.0.5`) - Frontend bundler (`frontend/vite.config.ts`)
- TypeScript ~5.7 (`^5.7.2`) - Type checking (`tsc -b` runs before `vite build`)
- PostCSS 8.4 (`^8.4.49`) with Autoprefixer 10.4 (`^10.4.20`) - CSS processing (`frontend/postcss.config.js`)
- `@vitejs/plugin-react` (`^4.3.4`) - React Fast Refresh for Vite
## Key Dependencies
**Critical (Go):**
- `modernc.org/sqlite` v1.46.1 - Pure-Go SQLite driver (no CGO required). Registered as `database/sql` driver named `"sqlite"`.
- `modernc.org/libc` v1.67.6 - C runtime emulation for pure-Go SQLite
- `modernc.org/memory` v1.11.0 - Memory allocator for pure-Go SQLite
**Transitive (Go):**
- `github.com/dustin/go-humanize` v1.0.1 - Human-readable formatting (indirect dep of modernc.org/sqlite)
- `github.com/google/uuid` v1.6.0 - UUID generation (indirect)
- `github.com/mattn/go-isatty` v0.0.20 - Terminal detection (indirect)
- `golang.org/x/sys` v0.37.0 - System calls (indirect)
- `golang.org/x/exp` v0.0.0-20251023 - Experimental packages (indirect)
**Critical (Frontend):**
- `react` / `react-dom` `^19.0.0` - UI framework
- `@dnd-kit/core` `^6.3.1` - Drag-and-drop for tag assignment
- `tailwindcss` `^3.4.17` - Styling
**Infrastructure:**
- `class-variance-authority` `^0.7.1` - shadcn/ui component variant management
- `clsx` `^2.1.1` - Conditional CSS class composition
- `tailwind-merge` `^2.6.0` - Tailwind class deduplication
## Configuration
**Environment Variables:**
- `PORT` - HTTP listen port (default: `8080`)
- `DB_PATH` - SQLite database file path (default: `./diun.db`)
- `WEBHOOK_SECRET` - Token for webhook authentication (optional; when unset, webhook is open)
**Build Configuration:**
- `go.mod` - Go module definition (module `awesomeProject`)
- `frontend/vite.config.ts` - Vite config with `@` path alias to `./src`, dev proxy for `/api` and `/webhook` to `:8080`
- `frontend/tailwind.config.ts` - Tailwind with shadcn/ui theme tokens (dark mode via `class` strategy)
- `frontend/postcss.config.js` - PostCSS with Tailwind and Autoprefixer plugins
- `frontend/tsconfig.json` - Project references to `tsconfig.node.json` and `tsconfig.app.json`
**Frontend Path Alias:**
- `@` resolves to `frontend/src/` (configured in `frontend/vite.config.ts`)
## Database
**Engine:** SQLite (file-based)
**Driver:** `modernc.org/sqlite` v1.46.1 (pure Go, CGO_ENABLED=0 compatible)
**Connection:** Single connection (`db.SetMaxOpenConns(1)`) with `sync.Mutex` guarding writes
**File:** Configurable via `DB_PATH` env var, default `./diun.db`
## Platform Requirements
**Development:**
- Go 1.26+
- Bun (for frontend and docs development)
- No CGO required (pure-Go SQLite driver)
**Production:**
- Single static binary + `frontend/dist/` static assets
- Alpine Linux 3.18 Docker container
- Persistent volume at `/data/` for SQLite database
- Port 8080 (configurable via `PORT`)
**CI:**
- Gitea Actions with custom Docker image `gitea.jeanlucmakiola.de/makiolaj/docker-node-and-go` (contains both Go and Node/Bun toolchains)
- `GOTOOLCHAIN=local` env var set in CI
---
*Stack analysis: 2026-03-23*

# Codebase Structure
**Analysis Date:** 2026-03-23
## Directory Layout
```
DiunDashboard/
├── cmd/
│ └── diunwebhook/
│ └── main.go # Application entry point
├── pkg/
│ └── diunwebhook/
│ ├── diunwebhook.go # Core library: types, DB, handlers
│ ├── diunwebhook_test.go # Tests (external test package)
│ └── export_test.go # Test-only exports
├── frontend/
│ ├── src/
│ │ ├── main.tsx # React entry point
│ │ ├── App.tsx # Root component (layout, state wiring)
│ │ ├── index.css # Tailwind CSS base styles
│ │ ├── vite-env.d.ts # Vite type declarations
│ │ ├── components/
│ │ │ ├── Header.tsx # Top nav bar with refresh button
│ │ │ ├── TagSection.tsx # Droppable tag group container
│ │ │ ├── ServiceCard.tsx # Individual image/service card (draggable)
│ │ │ ├── AcknowledgeButton.tsx # Dismiss/acknowledge button
│ │ │ └── ui/ # shadcn/ui primitives
│ │ │ ├── badge.tsx
│ │ │ ├── button.tsx
│ │ │ ├── card.tsx
│ │ │ └── tooltip.tsx
│ │ ├── hooks/
│ │ │ ├── useUpdates.ts # Polling, acknowledge, tag assignment
│ │ │ └── useTags.ts # Tag CRUD operations
│ │ ├── lib/
│ │ │ ├── utils.ts # cn() class merge utility
│ │ │ ├── time.ts # timeAgo() relative time formatter
│ │ │ ├── serviceIcons.ts # Map Docker image names to simple-icons
│ │ │ └── serviceIcons.json # Image name -> icon slug mapping
│ │ └── types/
│ │ └── diun.ts # TypeScript interfaces (DiunEvent, UpdateEntry, Tag, UpdatesMap)
│ ├── public/
│ │ └── favicon.svg
│ ├── index.html # SPA HTML shell
│ ├── package.json # Frontend dependencies
│ ├── vite.config.ts # Vite build + dev proxy config
│ ├── tailwind.config.ts # Tailwind theme configuration
│ ├── tsconfig.json # TypeScript project references
│ ├── tsconfig.app.json # App TypeScript config
│ ├── tsconfig.node.json # Node/Vite TypeScript config
│ ├── postcss.config.js # PostCSS/Tailwind pipeline
│ └── components.json # shadcn/ui component config
├── docs/
│ ├── index.md # VitePress docs homepage
│ ├── guide/
│ │ └── index.md # Getting started guide
│ ├── package.json # Docs site dependencies
│ ├── Dockerfile # Docs site Nginx container
│ ├── nginx.conf # Docs site Nginx config
│ └── .gitignore # Ignore docs build artifacts
├── .claude/
│ └── CLAUDE.md # Claude Code project instructions
├── .gitea/
│ └── workflows/
│ ├── ci.yml # CI pipeline (test + build)
│ └── release.yml # Release/deploy pipeline
├── .planning/
│ └── codebase/ # GSD codebase analysis documents
├── Dockerfile # Multi-stage build (frontend + Go + runtime)
├── compose.yml # Docker Compose for deployment (pulls image)
├── compose.dev.yml # Docker Compose for local dev (builds locally)
├── go.mod # Go module definition
├── go.sum # Go dependency checksums
├── .gitignore # Git ignore rules
├── README.md # Project readme
├── CONTRIBUTING.md # Developer guide
└── LICENSE # License file
```
## Directory Purposes
**`cmd/diunwebhook/`:**
- Purpose: Application binary entry point
- Contains: Single `main.go` file
- Key files: `cmd/diunwebhook/main.go`
**`pkg/diunwebhook/`:**
- Purpose: Core library containing all backend logic (types, database, HTTP handlers)
- Contains: One implementation file, one test file, one test-exports file
- Key files: `pkg/diunwebhook/diunwebhook.go`, `pkg/diunwebhook/diunwebhook_test.go`, `pkg/diunwebhook/export_test.go`
**`frontend/src/components/`:**
- Purpose: React UI components
- Contains: Feature components (`Header`, `TagSection`, `ServiceCard`, `AcknowledgeButton`) and `ui/` subdirectory with shadcn/ui primitives
**`frontend/src/components/ui/`:**
- Purpose: Reusable UI primitives from shadcn/ui
- Contains: `badge.tsx`, `button.tsx`, `card.tsx`, `tooltip.tsx`
- Note: These are generated/copied from shadcn/ui CLI and customized via `components.json`
**`frontend/src/hooks/`:**
- Purpose: Custom React hooks encapsulating data fetching and state management
- Contains: `useUpdates.ts` (polling, acknowledge, tag assignment), `useTags.ts` (tag CRUD)
**`frontend/src/lib/`:**
- Purpose: Shared utility functions and data
- Contains: `utils.ts` (Tailwind class merge), `time.ts` (relative time), `serviceIcons.ts` + `serviceIcons.json` (Docker image icon lookup)
**`frontend/src/types/`:**
- Purpose: TypeScript type definitions shared across the frontend
- Contains: `diun.ts` with interfaces matching Go backend structs
**`docs/`:**
- Purpose: VitePress documentation site (separate from main app)
- Contains: Markdown content, VitePress config, Dockerfile for static deployment
- Build output: `docs/.vitepress/dist/` (gitignored)
**`.gitea/workflows/`:**
- Purpose: CI/CD pipeline definitions for Gitea Actions
- Contains: `ci.yml` (test + build), `release.yml` (release/deploy)
## Key File Locations
**Entry Points:**
- `cmd/diunwebhook/main.go`: Go server entry point -- init DB, register routes, start server
- `frontend/src/main.tsx`: React SPA mount point -- renders `<App />` into DOM, enables dark mode
**Configuration:**
- `go.mod`: Go module `awesomeProject`, Go 1.26, SQLite dependency
- `frontend/vite.config.ts`: Vite build config, `@` path alias to `src/`, dev proxy for `/api` and `/webhook` to `:8080`
- `frontend/tailwind.config.ts`: Tailwind CSS theme customization
- `frontend/components.json`: shadcn/ui component generation config
- `frontend/tsconfig.json`: TypeScript project references (app + node configs)
- `Dockerfile`: Multi-stage build (Bun frontend build, Go binary build, Alpine runtime)
- `compose.yml`: Production deployment config (pulls from `gitea.jeanlucmakiola.de` registry)
- `compose.dev.yml`: Local development config (builds from Dockerfile)
**Core Logic:**
- `pkg/diunwebhook/diunwebhook.go`: ALL backend logic -- struct definitions, database init/migrations, event storage, all 6 HTTP handlers
- `frontend/src/App.tsx`: Root component -- stat cards, tag section rendering, drag-and-drop context, new group creation UI
- `frontend/src/hooks/useUpdates.ts`: Primary data hook -- 5s polling, acknowledge, tag assignment with optimistic updates
- `frontend/src/hooks/useTags.ts`: Tag management hook -- fetch, create, delete
**Testing:**
- `pkg/diunwebhook/diunwebhook_test.go`: All backend tests (external test package `diunwebhook_test`)
- `pkg/diunwebhook/export_test.go`: Exports internal functions for testing (`GetUpdatesMap`, `UpdatesReset`, `ResetTags`, `ResetWebhookSecret`)
## Naming Conventions
**Files:**
- Go: lowercase, single word or underscore-separated (`diunwebhook.go`, `export_test.go`)
- React components: PascalCase (`ServiceCard.tsx`, `TagSection.tsx`)
- Hooks: camelCase prefixed with `use` (`useUpdates.ts`, `useTags.ts`)
- Utilities: camelCase (`time.ts`, `utils.ts`)
- shadcn/ui primitives: lowercase (`badge.tsx`, `button.tsx`)
**Directories:**
- Go: lowercase (`cmd/`, `pkg/`)
- Frontend: lowercase (`components/`, `hooks/`, `lib/`, `types/`, `ui/`)
## Where to Add New Code
**New API Endpoint:**
- Add handler function to `pkg/diunwebhook/diunwebhook.go`
- Register route in `cmd/diunwebhook/main.go` on the `mux`
- Add tests in `pkg/diunwebhook/diunwebhook_test.go`
- If new test helpers are needed, add exports in `pkg/diunwebhook/export_test.go`
**New Database Table or Migration:**
- Add `CREATE TABLE IF NOT EXISTS` or `ALTER TABLE` in `InitDB()` in `pkg/diunwebhook/diunwebhook.go`
- Follow existing pattern: `CREATE TABLE IF NOT EXISTS` for new tables, silent `ALTER TABLE` for column additions
**New React Component:**
- Feature component: `frontend/src/components/YourComponent.tsx`
- Reusable UI primitive: `frontend/src/components/ui/yourprimitive.tsx` (use shadcn/ui CLI or follow existing pattern)
**New Custom Hook:**
- Place in `frontend/src/hooks/useYourHook.ts`
- Follow pattern from `useUpdates.ts`: export a function returning state and callbacks
**New TypeScript Type:**
- Add to `frontend/src/types/diun.ts` if related to the DIUN domain
- Create new file in `frontend/src/types/` for unrelated domains
**New Utility Function:**
- Add to `frontend/src/lib/` in an existing file or new file by domain
- Time-related: `frontend/src/lib/time.ts`
- CSS/styling: `frontend/src/lib/utils.ts`
**New Go Package:**
- Create under `pkg/yourpackage/` following Go conventions
- Import from `awesomeProject/pkg/yourpackage` (module name is `awesomeProject`)
## Special Directories
**`frontend/dist/`:**
- Purpose: Production build output served by Go file server at `/`
- Generated: Yes, by `bun run build` in `frontend/`
- Committed: No (gitignored)
**`docs/.vitepress/dist/`:**
- Purpose: Documentation site build output
- Generated: Yes, by `bun run build` in `docs/`
- Committed: No (gitignored)
**`.planning/codebase/`:**
- Purpose: GSD codebase analysis documents for AI-assisted development
- Generated: Yes, by codebase mapping agents
- Committed: Yes
**`.idea/`:**
- Purpose: JetBrains IDE project settings
- Generated: Yes, by GoLand/IntelliJ
- Committed: Partially (has its own `.gitignore`)
## Build Artifacts and Outputs
**Go Binary:**
- Built by: `go build -o server ./cmd/diunwebhook/main.go` (in Docker) or `go run ./cmd/diunwebhook/` (local)
- Output: `./server` binary (in Docker build stage)
**Frontend Bundle:**
- Built by: `bun run build` (runs `tsc -b && vite build`)
- Output: `frontend/dist/` directory
- Consumed by: Go file server at `/` route, copied into Docker image at `/app/frontend/dist/`
**Docker Image:**
- Built by: `docker build -t diun-webhook-dashboard .`
- Multi-stage: frontend build (Bun) -> Go build (golang) -> runtime (Alpine)
- Contains: Go binary at `/app/server`, frontend at `/app/frontend/dist/`
**SQLite Database:**
- Created at runtime by `InitDB()`
- Default path: `./diun.db` (overridable via `DB_PATH` env var)
- Docker: `/data/diun.db` with volume mount
---
*Structure analysis: 2026-03-23*

# Testing Patterns
**Analysis Date:** 2026-03-23
## Test Framework
**Runner:**
- Go standard `testing` package
- No third-party test frameworks (no testify, gomega, etc.)
- Config: none beyond standard Go tooling
**Assertion Style:**
- Manual assertions using `t.Errorf` and `t.Fatalf` (no assertion library)
- `t.Fatalf` for fatal precondition failures that should stop the test
- `t.Errorf` for non-fatal check failures
**Run Commands:**
```bash
go test -v -coverprofile=coverage.out -coverpkg=./... ./... # All tests with coverage
go test -v -run TestWebhookHandler ./pkg/diunwebhook/ # Single test
go tool cover -func=coverage.out # View coverage by function
go tool cover -html=coverage.out # View coverage in browser
```
## Test File Organization
**Location:**
- Co-located with source code in `pkg/diunwebhook/`
**Files:**
- `pkg/diunwebhook/diunwebhook_test.go` - All tests (external test package `package diunwebhook_test`)
- `pkg/diunwebhook/export_test.go` - Test-only exports (internal package `package diunwebhook`)
**Naming:**
- Test functions: `Test<Function>_<Scenario>` (e.g., `TestWebhookHandler_BadRequest`, `TestDismissHandler_NotFound`)
- Helper functions: lowercase descriptive names (e.g., `postTag`, `postTagAndGetID`)
**Structure:**
```
pkg/diunwebhook/
├── diunwebhook.go # All production code
├── diunwebhook_test.go # All tests (external package)
└── export_test.go # Test-only exports
```
## Test Structure
**External Test Package:**
Tests use `package diunwebhook_test` (not `package diunwebhook`), which forces testing through the public API only. The production package is imported with an alias:
```go
package diunwebhook_test

import (
	diun "awesomeProject/pkg/diunwebhook"
)
```
**Test Initialization:**
`TestMain` resets the database to an in-memory SQLite instance before all tests:
```go
func TestMain(m *testing.M) {
	diun.UpdatesReset()
	os.Exit(m.Run())
}
```
**Individual Test Pattern:**
Each test resets state at the start, then performs arrange-act-assert:
```go
func TestDismissHandler_Success(t *testing.T) {
	diun.UpdatesReset()                                           // Reset DB
	err := diun.UpdateEvent(diun.DiunEvent{Image: "nginx:latest"}) // Arrange
	if err != nil {
		t.Fatalf("UpdateEvent: %v", err)
	}
	req := httptest.NewRequest(http.MethodPatch, "/api/updates/nginx:latest", nil) // Act
	rec := httptest.NewRecorder()
	diun.DismissHandler(rec, req)
	if rec.Code != http.StatusNoContent { // Assert
		t.Errorf("expected 204, got %d", rec.Code)
	}
	m := diun.GetUpdatesMap()
	if !m["nginx:latest"].Acknowledged {
		t.Errorf("expected entry to be acknowledged")
	}
}
```
**Helper Functions:**
Test helpers use `t.Helper()` for proper error line reporting:
```go
func postTag(t *testing.T, name string) (int, int) {
	t.Helper()
	body, _ := json.Marshal(map[string]string{"name": name})
	req := httptest.NewRequest(http.MethodPost, "/api/tags", bytes.NewReader(body))
	rec := httptest.NewRecorder()
	diun.TagsHandler(rec, req)
	return rec.Code, rec.Body.Len()
}
```
## Mocking
**Framework:** No mocking framework used
**Patterns:**
- In-memory SQLite database via `InitDB(":memory:")` replaces the real database
- `httptest.NewRequest` and `httptest.NewRecorder` for HTTP handler testing
- `httptest.NewServer` for integration-level tests
- Custom `failWriter` struct to simulate broken `http.ResponseWriter`:
```go
type failWriter struct{ http.ResponseWriter }

func (f failWriter) Header() http.Header       { return http.Header{} }
func (f failWriter) Write([]byte) (int, error) { return 0, errors.New("forced error") }
func (f failWriter) WriteHeader(_ int)         {}
```
**What to Mock:**
- Database: use in-memory SQLite (`:memory:`)
- HTTP layer: use `httptest` package
- ResponseWriter errors: use custom struct implementing `http.ResponseWriter`
**What NOT to Mock:**
- Handler logic (test through the HTTP interface)
- JSON encoding/decoding (test with real payloads)
## Fixtures and Factories
**Test Data:**
Events are constructed inline with struct literals:
```go
event := diun.DiunEvent{
	DiunVersion: "1.0",
	Hostname:    "host",
	Status:      "new",
	Provider:    "docker",
	Image:       "nginx:latest",
	HubLink:     "https://hub.docker.com/nginx",
	MimeType:    "application/json",
	Digest:      "sha256:abc",
	Created:     time.Now(),
	Platform:    "linux/amd64",
}
```
Minimal events are also used when only the image field matters:
```go
event := diun.DiunEvent{Image: "nginx:latest"}
```
**Location:**
- No separate fixtures directory; all test data is inline in `pkg/diunwebhook/diunwebhook_test.go`
## Test-Only Exports
**File:** `pkg/diunwebhook/export_test.go`
These functions are only accessible to test packages (files ending in `_test.go`):
```go
func GetUpdatesMap() map[string]UpdateEntry // Convenience wrapper around GetUpdates()
func UpdatesReset() // Re-initializes DB with in-memory SQLite
func ResetTags() // Clears tag_assignments and tags tables
func ResetWebhookSecret() // Sets webhookSecret to ""
```
## Coverage
**Requirements:** CI warns (does not fail) when coverage drops below 80%
**CI Coverage Check:**
```bash
go test -v -coverprofile=coverage.out -coverpkg=./... ./...
go tool cover -func=coverage.out | tee coverage.txt
cov=$(go tool cover -func=coverage.out | grep total: | awk '{print substr($3, 1, length($3)-1)}')
cov=${cov%.*}
if [ "$cov" -lt 80 ]; then
echo "::warning::Test coverage is below 80% ($cov%)"
fi
```
**View Coverage:**
```bash
go test -coverprofile=coverage.out -coverpkg=./... ./...
go tool cover -func=coverage.out # Text summary
go tool cover -html=coverage.out # Browser view
```
## CI Pipeline
**Platform:** Gitea Actions (Forgejo-compatible)
**CI Workflow:** `.gitea/workflows/ci.yml`
- Triggers: push to `develop`, PRs targeting `develop`
- Container: custom Docker image with Go and Node.js
- Steps:
1. `gofmt -l .` - Formatting check (fails build if unformatted)
2. `go vet ./...` - Static analysis
3. `go test -v -coverprofile=coverage.out -coverpkg=./... ./...` - Tests with coverage
4. Coverage threshold check (80%, warning only)
5. `go build ./...` - Build verification
**Release Workflow:** `.gitea/workflows/release.yml`
- Triggers: manual dispatch with version bump type (patch/minor/major)
- Runs the same build-test job, then creates a Docker image and Gitea release
**Missing from CI:**
- No frontend build or type-check step
- No frontend test step (no frontend tests exist)
- No linting beyond `gofmt` and `go vet`
## Test Types
**Unit Tests:**
- Handler tests using `httptest.NewRequest` / `httptest.NewRecorder`
- Direct function tests: `TestUpdateEventAndGetUpdates`
- All tests in `pkg/diunwebhook/diunwebhook_test.go`
**Concurrency Tests:**
- `TestConcurrentUpdateEvent` - 100 concurrent goroutines writing to the database via `sync.WaitGroup`
**Integration Tests:**
- `TestMainHandlerIntegration` - Full HTTP server via `httptest.NewServer`, tests webhook POST followed by updates GET
**Error Path Tests:**
- `TestWebhookHandler_BadRequest` - invalid JSON body
- `TestWebhookHandler_EmptyImage` - missing required field
- `TestWebhookHandler_MethodNotAllowed` - wrong HTTP methods
- `TestWebhookHandler_Unauthorized` / `TestWebhookHandler_WrongToken` - auth failures
- `TestDismissHandler_NotFound` - dismiss nonexistent entry
- `TestDismissHandler_EmptyImage` - empty path parameter
- `TestUpdatesHandler_EncodeError` - broken ResponseWriter
- `TestCreateTagHandler_DuplicateName` - UNIQUE constraint
- `TestCreateTagHandler_EmptyName` - validation
**Behavioral Tests:**
- `TestDismissHandler_ReappearsAfterNewWebhook` - acknowledged state resets on new webhook
- `TestDeleteTagHandler_CascadesAssignment` - tag deletion cascades to assignments
- `TestTagAssignmentHandler_Reassign` - reassigning image to different tag
- `TestDismissHandler_SlashInImageName` - image names with slashes in URL path
**E2E Tests:**
- Not implemented
- No frontend tests of any kind (no test runner configured, no test files)
## Test Coverage Gaps
**Frontend (no tests at all):**
- `frontend/src/App.tsx` - main application component
- `frontend/src/hooks/useUpdates.ts` - polling, acknowledge, tag assignment logic
- `frontend/src/hooks/useTags.ts` - tag CRUD logic
- `frontend/src/components/ServiceCard.tsx` - image name parsing, registry detection
- `frontend/src/lib/time.ts` - time formatting utilities
- `frontend/src/lib/serviceIcons.ts` - icon lookup logic
- Priority: Medium (pure utility functions like `getShortName`, `getRegistry`, `timeAgo` would benefit from unit tests)
**Backend gaps:**
- `cmd/diunwebhook/main.go` - server startup, graceful shutdown, env var reading (not tested)
- `TagsHandler` and `TagByIDHandler` method-not-allowed paths for unsupported HTTP methods
- `TagAssignmentHandler` bad request paths (missing image, invalid tag_id)
- Priority: Low (main.go is thin; handler edge cases are minor)
## Common Patterns
**HTTP Handler Testing:**
```go
func TestSomeHandler(t *testing.T) {
	diun.UpdatesReset()
	// arrange: create test data
	body, _ := json.Marshal(payload)
	req := httptest.NewRequest(http.MethodPost, "/path", bytes.NewReader(body))
	rec := httptest.NewRecorder()
	// act
	diun.SomeHandler(rec, req)
	// assert status code
	if rec.Code != http.StatusOK {
		t.Errorf("expected 200, got %d", rec.Code)
	}
	// assert response body
	var got SomeType
	if err := json.NewDecoder(rec.Body).Decode(&got); err != nil {
		t.Fatalf("decode response: %v", err)
	}
}
```
**State Reset Pattern:**
Every test calls `diun.UpdatesReset()` at the start, which re-initializes the in-memory SQLite database. This ensures test isolation without needing parallel-safe fixtures.
**Auth Testing Pattern:**
```go
diun.SetWebhookSecret("my-secret")
defer diun.ResetWebhookSecret()
// ... test with/without Authorization header
```
---
*Testing analysis: 2026-03-23*


`.planning/config.json`

{
"model_profile": "balanced",
"commit_docs": true,
"parallelization": true,
"search_gitignored": false,
"brave_search": false,
"firecrawl": false,
"exa_search": false,
"git": {
"branching_strategy": "none",
"phase_branch_template": "gsd/phase-{phase}-{slug}",
"milestone_branch_template": "gsd/{milestone}-{slug}",
"quick_branch_template": null
},
"workflow": {
"research": true,
"plan_check": true,
"verifier": true,
"nyquist_validation": false,
"auto_advance": false,
"node_repair": true,
"node_repair_budget": 2,
"ui_phase": true,
"ui_safety_gate": true,
"text_mode": false,
"research_before_questions": false,
"discuss_mode": "discuss",
"skip_discuss": false
},
"hooks": {
"context_warnings": true
},
"mode": "yolo",
"granularity": "coarse"
}

---
phase: 01-data-integrity
plan: 01
type: execute
wave: 1
depends_on: []
files_modified:
- pkg/diunwebhook/diunwebhook.go
- pkg/diunwebhook/diunwebhook_test.go
autonomous: true
requirements:
- DATA-01
- DATA-02
must_haves:
truths:
- "A second DIUN event for an already-tagged image does not remove its tag assignment"
- "Deleting a tag removes all associated tag_assignments rows (ON DELETE CASCADE fires)"
- "The full test suite passes with no new failures introduced"
artifacts:
- path: "pkg/diunwebhook/diunwebhook.go"
provides: "UPSERT in UpdateEvent(); PRAGMA foreign_keys = ON in InitDB()"
contains: "ON CONFLICT(image) DO UPDATE SET"
- path: "pkg/diunwebhook/diunwebhook_test.go"
provides: "Regression test TestUpdateEvent_PreservesTagOnUpsert"
contains: "TestUpdateEvent_PreservesTagOnUpsert"
key_links:
- from: "InitDB()"
to: "PRAGMA foreign_keys = ON"
via: "db.Exec immediately after db.SetMaxOpenConns(1)"
pattern: "PRAGMA foreign_keys = ON"
- from: "UpdateEvent()"
to: "INSERT INTO updates ... ON CONFLICT(image) DO UPDATE SET"
via: "db.Exec with named column list"
pattern: "ON CONFLICT\\(image\\) DO UPDATE SET"
---
<objective>
Fix the two data-destruction bugs that are silently corrupting tag assignments today.
Bug 1 (DATA-01): `UpdateEvent()` uses `INSERT OR REPLACE` which SQLite implements as DELETE + INSERT. The DELETE fires the `ON DELETE CASCADE` on `tag_assignments.image`, destroying the child row. Every new DIUN event for an already-tagged image loses its tag.
Bug 2 (DATA-02): `PRAGMA foreign_keys = ON` is never executed. SQLite disables FK enforcement by default. The `ON DELETE CASCADE` on `tag_assignments.tag_id` does not fire when a tag is deleted.
These two bugs are fixed in the same plan because fixing DATA-01 without DATA-02 causes `TestDeleteTagHandler_CascadesAssignment` to break (tag assignments now survive UPSERT but FK cascades still do not fire on tag deletion).
Purpose: Users can trust that tagging an image is permanent until they explicitly remove it, and that deleting a tag group cleans up all assignments.
Output: Updated `diunwebhook.go` with UPSERT + FK pragma; new regression test `TestUpdateEvent_PreservesTagOnUpsert` in `diunwebhook_test.go`.
</objective>
<execution_context>
@$HOME/.claude/get-shit-done/workflows/execute-plan.md
@$HOME/.claude/get-shit-done/templates/summary.md
</execution_context>
<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/01-data-integrity/01-RESEARCH.md
</context>
<tasks>
<task type="auto" tdd="true">
<name>Task 1: Replace INSERT OR REPLACE with UPSERT in UpdateEvent() and add PRAGMA FK enforcement in InitDB()</name>
<files>pkg/diunwebhook/diunwebhook.go</files>
<read_first>
- pkg/diunwebhook/diunwebhook.go — read the entire file before touching it; understand current InitDB() structure (lines 58-104) and UpdateEvent() structure (lines 106-118)
</read_first>
<behavior>
- Test 1 (existing, must still pass): TestDismissHandler_ReappearsAfterNewWebhook — a new webhook event resets acknowledged_at to NULL
- Test 2 (existing, must still pass): TestDeleteTagHandler_CascadesAssignment — deleting a tag removes the tag_assignment row (requires both UPSERT and PRAGMA fixes)
- Test 3 (new, added in Task 2): TestUpdateEvent_PreservesTagOnUpsert — tag survives a second UpdateEvent() for the same image
</behavior>
<action>
Make exactly two changes to pkg/diunwebhook/diunwebhook.go:
CHANGE 1 — Add PRAGMA to InitDB():
After line 64 (`db.SetMaxOpenConns(1)`), insert:
```go
if _, err = db.Exec(`PRAGMA foreign_keys = ON`); err != nil {
	return err
}
```
This must appear before any CREATE TABLE statement. The error must not be swallowed.
CHANGE 2 — Replace INSERT OR REPLACE in UpdateEvent():
Replace the entire db.Exec call at lines 109-116 (the `INSERT OR REPLACE INTO updates VALUES (...)` statement and its argument list) with:
```go
_, err := db.Exec(`
	INSERT INTO updates (
		image, diun_version, hostname, status, provider,
		hub_link, mime_type, digest, created, platform,
		ctn_name, ctn_id, ctn_state, ctn_status,
		received_at, acknowledged_at
	) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,NULL)
	ON CONFLICT(image) DO UPDATE SET
		diun_version = excluded.diun_version,
		hostname = excluded.hostname,
		status = excluded.status,
		provider = excluded.provider,
		hub_link = excluded.hub_link,
		mime_type = excluded.mime_type,
		digest = excluded.digest,
		created = excluded.created,
		platform = excluded.platform,
		ctn_name = excluded.ctn_name,
		ctn_id = excluded.ctn_id,
		ctn_state = excluded.ctn_state,
		ctn_status = excluded.ctn_status,
		received_at = excluded.received_at,
		acknowledged_at = NULL`,
	event.Image, event.DiunVersion, event.Hostname, event.Status, event.Provider,
	event.HubLink, event.MimeType, event.Digest,
	event.Created.Format(time.RFC3339), event.Platform,
	event.Metadata.ContainerName, event.Metadata.ContainerID,
	event.Metadata.State, event.Metadata.Status,
	time.Now().Format(time.RFC3339),
)
```
The INSERT names 16 columns; the VALUES clause supplies 15 positional `?` placeholders plus a literal NULL for acknowledged_at, which must match the 15 bound arguments (acknowledged_at is hardcoded NULL, not a bound arg).
No other changes to diunwebhook.go in this task. Do not add imports — `errors` is not needed here.
</action>
<verify>
<automated>cd /home/jean-luc-makiola/Development/projects/DiunDashboard && go test -v -run "TestDismissHandler_ReappearsAfterNewWebhook|TestDeleteTagHandler_CascadesAssignment" ./pkg/diunwebhook/</automated>
</verify>
<done>
- pkg/diunwebhook/diunwebhook.go contains the string `PRAGMA foreign_keys = ON`
- pkg/diunwebhook/diunwebhook.go contains the string `ON CONFLICT(image) DO UPDATE SET`
- pkg/diunwebhook/diunwebhook.go does NOT contain `INSERT OR REPLACE INTO updates`
- TestDismissHandler_ReappearsAfterNewWebhook passes
- TestDeleteTagHandler_CascadesAssignment passes
</done>
</task>
<task type="auto" tdd="true">
<name>Task 2: Add regression test TestUpdateEvent_PreservesTagOnUpsert</name>
<files>pkg/diunwebhook/diunwebhook_test.go</files>
<read_first>
- pkg/diunwebhook/diunwebhook_test.go — read the entire file before touching it; the new test must follow the established patterns (httptest.NewRequest, diun.UpdatesReset(), postTagAndGetID helper, diun.GetUpdatesMap())
- pkg/diunwebhook/export_test.go — verify GetUpdatesMap() and UpdatesReset() signatures
</read_first>
<behavior>
- Test: First UpdateEvent() for "nginx:latest" → assign tag "webservers" via TagAssignmentHandler → second UpdateEvent() for "nginx:latest" with Status "update" → GetUpdatesMap()["nginx:latest"].Tag must be non-nil → Tag.ID must equal tagID → Acknowledged must be false
</behavior>
<action>
Add the following test function to pkg/diunwebhook/diunwebhook_test.go, appended after the existing TestGetUpdates_IncludesTag function (at the end of the file):
```go
func TestUpdateEvent_PreservesTagOnUpsert(t *testing.T) {
diun.UpdatesReset()
// Insert image
if err := diun.UpdateEvent(diun.DiunEvent{Image: "nginx:latest", Status: "new"}); err != nil {
t.Fatalf("first UpdateEvent failed: %v", err)
}
// Assign tag
tagID := postTagAndGetID(t, "webservers")
body, _ := json.Marshal(map[string]interface{}{"image": "nginx:latest", "tag_id": tagID})
req := httptest.NewRequest(http.MethodPut, "/api/tag-assignments", bytes.NewReader(body))
rec := httptest.NewRecorder()
diun.TagAssignmentHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Fatalf("tag assignment failed: got %d", rec.Code)
}
// Dismiss (acknowledge) the image — second event must reset this
req = httptest.NewRequest(http.MethodPatch, "/api/updates/nginx:latest", nil)
rec = httptest.NewRecorder()
diun.DismissHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Fatalf("dismiss failed: got %d", rec.Code)
}
// Receive a second event for the same image
if err := diun.UpdateEvent(diun.DiunEvent{Image: "nginx:latest", Status: "update"}); err != nil {
t.Fatalf("second UpdateEvent failed: %v", err)
}
// Tag must survive the second event
m := diun.GetUpdatesMap()
entry, ok := m["nginx:latest"]
if !ok {
t.Fatal("nginx:latest missing from updates after second event")
}
if entry.Tag == nil {
t.Error("tag was lost after second UpdateEvent — UPSERT bug not fixed")
}
if entry.Tag != nil && entry.Tag.ID != tagID {
t.Errorf("tag ID changed: expected %d, got %d", tagID, entry.Tag.ID)
}
// Acknowledged state must be reset by the new event
if entry.Acknowledged {
t.Error("acknowledged state must be reset by new event")
}
// Status must reflect the new event
if entry.Event.Status != "update" {
t.Errorf("expected status 'update', got %q", entry.Event.Status)
}
}
```
This test verifies all three observable behaviors from DATA-01:
1. Tag survives the UPSERT (the primary bug)
2. acknowledged_at is reset to NULL by the new event
3. Event fields (Status) are updated by the new event
</action>
<verify>
<automated>cd /home/jean-luc-makiola/Development/projects/DiunDashboard && go test -v -run "TestUpdateEvent_PreservesTagOnUpsert" ./pkg/diunwebhook/</automated>
</verify>
<done>
- pkg/diunwebhook/diunwebhook_test.go contains the function `TestUpdateEvent_PreservesTagOnUpsert`
- TestUpdateEvent_PreservesTagOnUpsert passes (tag non-nil, ID matches, Acknowledged false, Status "update")
- Full test suite still passes: `go test ./pkg/diunwebhook/` exits 0
</done>
</task>
</tasks>
<verification>
Run the full test suite after both tasks are complete:
```bash
cd /home/jean-luc-makiola/Development/projects/DiunDashboard && go test -v -coverprofile=coverage.out -coverpkg=./... ./...
```
Expected outcome:
- All existing tests pass (no regressions)
- TestUpdateEvent_PreservesTagOnUpsert passes
- TestDeleteTagHandler_CascadesAssignment passes (proves DATA-02)
Spot-check the fixes with grep:
```bash
grep -n "PRAGMA foreign_keys" pkg/diunwebhook/diunwebhook.go
grep -n "ON CONFLICT(image) DO UPDATE SET" pkg/diunwebhook/diunwebhook.go
grep -c "INSERT OR REPLACE INTO updates" pkg/diunwebhook/diunwebhook.go # must output 0
```
</verification>
<success_criteria>
- `grep -c "INSERT OR REPLACE INTO updates" pkg/diunwebhook/diunwebhook.go` outputs `0`
- `grep -c "PRAGMA foreign_keys = ON" pkg/diunwebhook/diunwebhook.go` outputs `1`
- `grep -c "ON CONFLICT(image) DO UPDATE SET" pkg/diunwebhook/diunwebhook.go` outputs `1`
- `go test ./pkg/diunwebhook/` exits 0
- `TestUpdateEvent_PreservesTagOnUpsert` exists in diunwebhook_test.go and passes
</success_criteria>
<output>
After completion, create `.planning/phases/01-data-integrity/01-01-SUMMARY.md` following the summary template at `@$HOME/.claude/get-shit-done/templates/summary.md`.
</output>


@@ -0,0 +1,81 @@
---
phase: 01-data-integrity
plan: "01"
subsystem: backend/storage
tags: [sqlite, bug-fix, data-integrity, upsert, foreign-keys]
dependency_graph:
requires: []
provides: [DATA-01, DATA-02]
affects: [pkg/diunwebhook/diunwebhook.go]
tech_stack:
added: []
patterns: [SQLite UPSERT (ON CONFLICT DO UPDATE), PRAGMA foreign_keys = ON]
key_files:
created: []
modified:
- pkg/diunwebhook/diunwebhook.go
- pkg/diunwebhook/diunwebhook_test.go
decisions:
- "Use named-column INSERT with ON CONFLICT(image) DO UPDATE SET (UPSERT) instead of INSERT OR REPLACE to preserve tag_assignments child rows"
- "Enable PRAGMA foreign_keys = ON immediately after SetMaxOpenConns(1) so all connections (single-connection pool) enforce FK constraints"
metrics:
duration: "2 minutes"
completed_date: "2026-03-23"
tasks_completed: 2
files_modified: 2
---
# Phase 01 Plan 01: Fix SQLite Data-Destruction Bugs (UPSERT + FK Enforcement) Summary
**One-liner:** SQLite UPSERT replaces INSERT OR REPLACE to preserve tag_assignments on re-insert, and PRAGMA foreign_keys = ON enables ON DELETE CASCADE for tag deletion.
## What Was Built
Fixed two silent data-destruction bugs in the SQLite persistence layer:
**Bug DATA-01 (INSERT OR REPLACE destroying tags):** SQLite's `INSERT OR REPLACE` is implemented as DELETE + INSERT. The DELETE fired `ON DELETE CASCADE` on `tag_assignments.image`, silently removing the tag assignment every time a new DIUN event arrived for an already-tagged image. Fixed by replacing the statement with a proper UPSERT (`INSERT INTO ... ON CONFLICT(image) DO UPDATE SET`) that only updates the non-key columns, leaving `tag_assignments` untouched.
**Bug DATA-02 (FK enforcement disabled):** SQLite disables foreign key enforcement by default. `PRAGMA foreign_keys = ON` was never executed, so `ON DELETE CASCADE` on `tag_assignments.tag_id` did not fire when a tag was deleted. Fixed by executing the pragma immediately after `db.SetMaxOpenConns(1)` in `InitDB()`, before any DDL statements.
## Tasks Completed
| Task | Name | Commit | Files |
|------|------|--------|-------|
| 1 | Replace INSERT OR REPLACE with UPSERT + add PRAGMA FK enforcement | 7edbaad | pkg/diunwebhook/diunwebhook.go |
| 2 | Add regression test TestUpdateEvent_PreservesTagOnUpsert | e2d388c | pkg/diunwebhook/diunwebhook_test.go |
## Decisions Made
1. **Named-column UPSERT over positional INSERT OR REPLACE:** The UPSERT explicitly names every column and maps 14 of them to their `excluded.*` counterparts in the DO UPDATE SET clause (acknowledged_at is set to NULL), making the column mapping unambiguous and safe for future schema additions.
2. **acknowledged_at hardcoded NULL in UPSERT:** The UPSERT sets `acknowledged_at = NULL` unconditionally in both the INSERT and the ON CONFLICT update clause. This ensures a new event always resets the acknowledged state, matching the pre-existing behavior and test expectations.
3. **PRAGMA placement before DDL:** The FK pragma is placed before all CREATE TABLE statements to ensure the enforcement is active when foreign key relationships are first defined, not just at query time.
## Deviations from Plan
None — plan executed exactly as written.
## Verification Results
- `grep -c "INSERT OR REPLACE INTO updates" pkg/diunwebhook/diunwebhook.go` → `0` (confirmed)
- `grep -c "PRAGMA foreign_keys = ON" pkg/diunwebhook/diunwebhook.go` → `1` (confirmed)
- `grep -c "ON CONFLICT(image) DO UPDATE SET" pkg/diunwebhook/diunwebhook.go` → `1` (confirmed)
- Full test suite: 29 tests pass, 0 failures, coverage 63.6%
- `TestDismissHandler_ReappearsAfterNewWebhook` — PASS
- `TestDeleteTagHandler_CascadesAssignment` — PASS
- `TestUpdateEvent_PreservesTagOnUpsert` — PASS (new regression test)
## Known Stubs
None.
## Self-Check: PASSED
Files exist:
- FOUND: pkg/diunwebhook/diunwebhook.go (modified)
- FOUND: pkg/diunwebhook/diunwebhook_test.go (modified, contains TestUpdateEvent_PreservesTagOnUpsert)
Commits exist:
- FOUND: 7edbaad — fix(01-01): replace INSERT OR REPLACE with UPSERT and enable FK enforcement
- FOUND: e2d388c — test(01-01): add TestUpdateEvent_PreservesTagOnUpsert regression test


@@ -0,0 +1,414 @@
---
phase: 01-data-integrity
plan: 02
type: execute
wave: 2
depends_on:
- 01-01
files_modified:
- pkg/diunwebhook/diunwebhook.go
- pkg/diunwebhook/diunwebhook_test.go
autonomous: true
requirements:
- DATA-03
- DATA-04
must_haves:
truths:
- "An oversized webhook payload (>1MB) is rejected with HTTP 413, not processed"
- "A failing test setup call (UpdateEvent error, DB error) causes the test run to report FAIL, not pass silently"
- "The full test suite passes with no regressions from Plan 01"
artifacts:
- path: "pkg/diunwebhook/diunwebhook.go"
provides: "maxBodyBytes constant; MaxBytesReader + errors.As pattern in WebhookHandler, TagsHandler POST, TagAssignmentHandler PUT and DELETE"
contains: "maxBodyBytes"
- path: "pkg/diunwebhook/diunwebhook_test.go"
provides: "New tests TestWebhookHandler_OversizedBody, TestTagsHandler_OversizedBody, TestTagAssignmentHandler_OversizedBody; t.Fatalf replacements at 6 call sites"
contains: "TestWebhookHandler_OversizedBody"
key_links:
- from: "WebhookHandler"
to: "http.StatusRequestEntityTooLarge (413)"
via: "r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes) then errors.As(err, &maxBytesErr)"
pattern: "MaxBytesReader"
- from: "diunwebhook_test.go setup calls"
to: "t.Fatalf"
via: "replace `if err != nil { return }` with `t.Fatalf(...)`"
pattern: "t\\.Fatalf"
---
<objective>
Fix two remaining bugs: unbounded request body reads (DATA-03) and silently swallowed test failures (DATA-04).
Bug 3 (DATA-03): `WebhookHandler`, `TagsHandler` POST branch, and `TagAssignmentHandler` PUT/DELETE branches decode JSON directly from `r.Body` with no size limit. A malicious or buggy DIUN installation could POST a multi-GB payload causing OOM. The fix applies `http.MaxBytesReader` before each decode and returns HTTP 413 when the limit is exceeded.
Bug 4 (DATA-04): Six test call sites use `if err != nil { return }` instead of `t.Fatalf(...)`. When test setup fails (e.g., InitDB fails, UpdateEvent fails), the test silently exits with PASS, hiding the real failure from CI.
These two bugs are fixed in the same plan because they are independent of Plan 01's changes and both are small enough to fit comfortably together.
Purpose: Webhook endpoint is safe from OOM attacks; test failures are always visible to the developer and CI.
Output: Updated `diunwebhook.go` with MaxBytesReader in three handlers; updated `diunwebhook_test.go` with t.Fatalf at 6 sites and 3 new 413 tests.
</objective>
<execution_context>
@$HOME/.claude/get-shit-done/workflows/execute-plan.md
@$HOME/.claude/get-shit-done/templates/summary.md
</execution_context>
<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/01-data-integrity/01-RESEARCH.md
@.planning/phases/01-data-integrity/01-01-SUMMARY.md
</context>
<tasks>
<task type="auto" tdd="true">
<name>Task 1: Add request body size limits to WebhookHandler, TagsHandler, and TagAssignmentHandler</name>
<files>pkg/diunwebhook/diunwebhook.go</files>
<read_first>
- pkg/diunwebhook/diunwebhook.go — read the entire file before touching it; locate the exact lines for each handler's JSON decode call; the Plan 01 changes (UPSERT, PRAGMA) are already present — do not revert them
</read_first>
<behavior>
- Test (new): POST /webhook with a body of 1MB + 1 byte returns HTTP 413
- Test (new): POST /api/tags with a body of 1MB + 1 byte returns HTTP 413
- Test (new): PUT /api/tag-assignments with a body of 1MB + 1 byte returns HTTP 413
- Test (existing): POST /webhook with valid JSON still returns HTTP 200
- Test (existing): POST /api/tags with valid JSON still returns HTTP 201
</behavior>
<action>
Make the following changes to pkg/diunwebhook/diunwebhook.go:
CHANGE 1 — Add package-level constant after the import block, before the type declarations:
```go
const maxBodyBytes = 1 << 20 // 1 MB
```
CHANGE 2 — Add `"errors"` to the import block of diunwebhook.go (it is currently imported only in the test file, not in the production file).
The import block becomes:
```go
import (
"crypto/subtle"
"database/sql"
"encoding/json"
"errors"
"log"
"net/http"
"strconv"
"strings"
"sync"
"time"
_ "modernc.org/sqlite"
)
```
CHANGE 3 — In WebhookHandler, BEFORE the `var event DiunEvent` line (currently line ~177), add:
```go
r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
```
Then update the decode error handling block to distinguish 413 from 400:
```go
if err := json.NewDecoder(r.Body).Decode(&event); err != nil {
var maxBytesErr *http.MaxBytesError
if errors.As(err, &maxBytesErr) {
http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
return
}
log.Printf("WebhookHandler: failed to decode request: %v", err)
http.Error(w, "bad request", http.StatusBadRequest)
return
}
```
CHANGE 4 — In TagsHandler POST branch, BEFORE `var req struct { Name string }`, add:
```go
r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
```
Then update the decode error handling — the current code is:
```go
if err := json.NewDecoder(r.Body).Decode(&req); err != nil || req.Name == "" {
http.Error(w, "bad request: name required", http.StatusBadRequest)
return
}
```
Replace with:
```go
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
var maxBytesErr *http.MaxBytesError
if errors.As(err, &maxBytesErr) {
http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
return
}
http.Error(w, "bad request: name required", http.StatusBadRequest)
return
}
if req.Name == "" {
http.Error(w, "bad request: name required", http.StatusBadRequest)
return
}
```
(The `req.Name == ""` check must remain, now as a separate if-block after the decode succeeds.)
CHANGE 5 — In TagAssignmentHandler PUT branch, BEFORE `var req struct { Image string; TagID int }`, add:
```go
r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
```
Then update the decode error handling — the current code is:
```go
if err := json.NewDecoder(r.Body).Decode(&req); err != nil || req.Image == "" {
http.Error(w, "bad request", http.StatusBadRequest)
return
}
```
Replace with:
```go
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
var maxBytesErr *http.MaxBytesError
if errors.As(err, &maxBytesErr) {
http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
return
}
http.Error(w, "bad request", http.StatusBadRequest)
return
}
if req.Image == "" {
http.Error(w, "bad request", http.StatusBadRequest)
return
}
```
CHANGE 6 — In TagAssignmentHandler DELETE branch, BEFORE `var req struct { Image string }`, add:
```go
r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
```
Then update the decode error handling — the current code is:
```go
if err := json.NewDecoder(r.Body).Decode(&req); err != nil || req.Image == "" {
http.Error(w, "bad request", http.StatusBadRequest)
return
}
```
Replace with:
```go
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
var maxBytesErr *http.MaxBytesError
if errors.As(err, &maxBytesErr) {
http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
return
}
http.Error(w, "bad request", http.StatusBadRequest)
return
}
if req.Image == "" {
http.Error(w, "bad request", http.StatusBadRequest)
return
}
```
No other changes to diunwebhook.go in this task.
</action>
<verify>
<automated>cd /home/jean-luc-makiola/Development/projects/DiunDashboard && go build ./pkg/diunwebhook/ && go test -v -run "TestWebhookHandler_BadRequest|TestCreateTagHandler_EmptyName|TestTagAssignmentHandler_Assign" ./pkg/diunwebhook/</automated>
</verify>
<done>
- `grep -c "maxBodyBytes" pkg/diunwebhook/diunwebhook.go` outputs `5` (1 constant definition + 4 MaxBytesReader calls)
- `grep -c "MaxBytesReader" pkg/diunwebhook/diunwebhook.go` outputs `4`
- `grep -c "errors.As" pkg/diunwebhook/diunwebhook.go` outputs `4`
- `go build ./pkg/diunwebhook/` exits 0
- All pre-existing handler tests still pass
</done>
</task>
<task type="auto">
<name>Task 2: Replace silent returns with t.Fatalf at 6 test setup call sites; add 3 oversized-body tests</name>
<files>pkg/diunwebhook/diunwebhook_test.go</files>
<read_first>
- pkg/diunwebhook/diunwebhook_test.go — read the entire file; locate the exact 6 `if err != nil { return }` call sites at lines 38-40, 153-154, 228-231, 287-289, 329-331, 350-351 and verify they still exist after Plan 01 (which only appended to the file)
</read_first>
<action>
CHANGE 1 — Replace the 6 silent-return call sites with t.Fatalf. Each replacement follows this pattern:
OLD (line ~38-40, in TestUpdateEventAndGetUpdates):
```go
err := diun.UpdateEvent(event)
if err != nil {
return
}
```
NEW:
```go
if err := diun.UpdateEvent(event); err != nil {
t.Fatalf("test setup: UpdateEvent failed: %v", err)
}
```
OLD (line ~153-154, in TestUpdatesHandler):
```go
err := diun.UpdateEvent(event)
if err != nil {
return
}
```
NEW:
```go
if err := diun.UpdateEvent(event); err != nil {
t.Fatalf("test setup: UpdateEvent failed: %v", err)
}
```
OLD (line ~228-231, in TestConcurrentUpdateEvent goroutine):
```go
err := diun.UpdateEvent(diun.DiunEvent{Image: fmt.Sprintf("image:%d", i)})
if err != nil {
return
}
```
NEW (note: t.Fatalf must NOT be called from a goroutine spawned by the test — t.FailNow calls runtime.Goexit, which exits only the calling goroutine, so the test goroutine keeps running and the failure can go unreported; use t.Errorf inside the goroutine):
```go
if err := diun.UpdateEvent(diun.DiunEvent{Image: fmt.Sprintf("image:%d", i)}); err != nil {
t.Errorf("test setup: UpdateEvent[%d] failed: %v", i, err)
}
```
OLD (line ~287-289, in TestDismissHandler_Success):
```go
err := diun.UpdateEvent(diun.DiunEvent{Image: "nginx:latest"})
if err != nil {
return
}
```
NEW:
```go
if err := diun.UpdateEvent(diun.DiunEvent{Image: "nginx:latest"}); err != nil {
t.Fatalf("test setup: UpdateEvent failed: %v", err)
}
```
OLD (line ~329-331, in TestDismissHandler_SlashInImageName):
```go
err := diun.UpdateEvent(diun.DiunEvent{Image: "ghcr.io/user/image:tag"})
if err != nil {
return
}
```
NEW:
```go
if err := diun.UpdateEvent(diun.DiunEvent{Image: "ghcr.io/user/image:tag"}); err != nil {
t.Fatalf("test setup: UpdateEvent failed: %v", err)
}
```
OLD (line ~350-351, in TestDismissHandler_ReappearsAfterNewWebhook — note: line 350 is `diun.UpdateEvent(diun.DiunEvent{Image: "nginx:latest"})` with no error check at all):
```go
diun.UpdateEvent(diun.DiunEvent{Image: "nginx:latest"})
```
NEW:
```go
if err := diun.UpdateEvent(diun.DiunEvent{Image: "nginx:latest"}); err != nil {
t.Fatalf("test setup: UpdateEvent failed: %v", err)
}
```
CHANGE 2 — Add three new test functions after all existing tests (at the end of the file, after TestUpdateEvent_PreservesTagOnUpsert which was added in Plan 01):
```go
func TestWebhookHandler_OversizedBody(t *testing.T) {
// Generate a body that exceeds 1 MB (maxBodyBytes = 1<<20 = 1,048,576 bytes)
oversized := make([]byte, 1<<20+1)
for i := range oversized {
oversized[i] = 'x'
}
req := httptest.NewRequest(http.MethodPost, "/webhook", bytes.NewReader(oversized))
rec := httptest.NewRecorder()
diun.WebhookHandler(rec, req)
if rec.Code != http.StatusRequestEntityTooLarge {
t.Errorf("expected 413 for oversized body, got %d", rec.Code)
}
}
func TestTagsHandler_OversizedBody(t *testing.T) {
oversized := make([]byte, 1<<20+1)
for i := range oversized {
oversized[i] = 'x'
}
req := httptest.NewRequest(http.MethodPost, "/api/tags", bytes.NewReader(oversized))
rec := httptest.NewRecorder()
diun.TagsHandler(rec, req)
if rec.Code != http.StatusRequestEntityTooLarge {
t.Errorf("expected 413 for oversized body, got %d", rec.Code)
}
}
func TestTagAssignmentHandler_OversizedBody(t *testing.T) {
oversized := make([]byte, 1<<20+1)
for i := range oversized {
oversized[i] = 'x'
}
req := httptest.NewRequest(http.MethodPut, "/api/tag-assignments", bytes.NewReader(oversized))
rec := httptest.NewRecorder()
diun.TagAssignmentHandler(rec, req)
if rec.Code != http.StatusRequestEntityTooLarge {
t.Errorf("expected 413 for oversized body, got %d", rec.Code)
}
}
```
No new imports are needed — `bytes`, `net/http`, `net/http/httptest`, and `testing` are already imported.
</action>
<verify>
<automated>cd /home/jean-luc-makiola/Development/projects/DiunDashboard && go test -v -run "TestWebhookHandler_OversizedBody|TestTagsHandler_OversizedBody|TestTagAssignmentHandler_OversizedBody" ./pkg/diunwebhook/</automated>
</verify>
<done>
- `grep -c "if err != nil {" pkg/diunwebhook/diunwebhook_test.go` is reduced by 5 compared to before this task (five of the six sites had a multi-line `if err != nil { return }` block, now inlined as `if err := ...; err != nil {`; the sixth site had no error check at all)
- no bare `return` remains in an error-check position in pkg/diunwebhook/diunwebhook_test.go (the silent returns are gone)
- TestWebhookHandler_OversizedBody passes (413)
- TestTagsHandler_OversizedBody passes (413)
- TestTagAssignmentHandler_OversizedBody passes (413)
- Full test suite passes: `go test ./pkg/diunwebhook/` exits 0
</done>
</task>
</tasks>
<verification>
Run the full test suite after both tasks are complete:
```bash
cd /home/jean-luc-makiola/Development/projects/DiunDashboard && go test -v -coverprofile=coverage.out -coverpkg=./... ./...
```
Expected outcome:
- All tests pass (no regressions from Plan 01 or Plan 02)
- Three new 413 tests pass (proves DATA-03)
- Six silent setup call sites (five `if err != nil { return }` blocks and one unchecked call) now fail the test explicitly (proves DATA-04)
Spot-check the fixes:
```bash
grep -n "maxBodyBytes\|MaxBytesReader\|errors.As" pkg/diunwebhook/diunwebhook.go
grep -n "t.Fatalf" pkg/diunwebhook/diunwebhook_test.go | wc -l # should be >= 5 more than before
```
</verification>
<success_criteria>
- `grep -c "MaxBytesReader" pkg/diunwebhook/diunwebhook.go` outputs `4`
- `grep -c "maxBodyBytes" pkg/diunwebhook/diunwebhook.go` outputs `5`
- `grep -c "StatusRequestEntityTooLarge" pkg/diunwebhook/diunwebhook.go` outputs `4`
- TestWebhookHandler_OversizedBody, TestTagsHandler_OversizedBody, TestTagAssignmentHandler_OversizedBody all exist and pass
- the silent `if err != nil { return }` pattern no longer appears at any of the 6 original call sites in pkg/diunwebhook/diunwebhook_test.go
- `go test -coverprofile=coverage.out -coverpkg=./... ./...` exits 0
</success_criteria>
<output>
After completion, create `.planning/phases/01-data-integrity/01-02-SUMMARY.md` following the summary template at `@$HOME/.claude/get-shit-done/templates/summary.md`.
</output>


@@ -0,0 +1,127 @@
---
phase: 01-data-integrity
plan: 02
subsystem: api
tags: [go, http, security, testing, maxbytesreader, body-size-limit]
# Dependency graph
requires:
- phase: 01-data-integrity plan 01
provides: UPSERT fix and FK enforcement already applied; test file structure established
provides:
- HTTP 413 response for oversized request bodies (>1MB) on WebhookHandler, TagsHandler POST, TagAssignmentHandler PUT/DELETE
- maxBodyBytes constant (1 << 20) and MaxBytesReader + errors.As pattern
- t.Fatalf at all 6 test setup call sites (no more silent test pass-on-setup-failure)
- 3 new oversized-body tests proving DATA-03 fixed
affects:
- phase-02 (database refactor — handlers are now correct and hardened, test suite is reliable)
- any future handler additions that accept a body (pattern established)
# Tech tracking
tech-stack:
added: []
patterns:
- "MaxBytesReader + errors.As(*http.MaxBytesError) pattern for request body size limiting in handlers"
- "JSON-prefix oversized body test: use valid JSON opening so decoder reads past limit before MaxBytesReader triggers"
key-files:
created: []
modified:
- pkg/diunwebhook/diunwebhook.go
- pkg/diunwebhook/diunwebhook_test.go
key-decisions:
- "Use MaxBytesReader wrapping r.Body before each JSON decode; distinguish 413 from 400 via errors.As on *http.MaxBytesError"
- "Oversized body test bodies must use valid JSON prefix (e.g. {\"image\":\") + padding — all-x bodies trigger JSON parse error before MaxBytesReader limit is reached"
patterns-established:
- "MaxBytesReader body guard pattern: r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes) before decode, errors.As for 413 vs 400"
- "Test setup errors must use t.Fatalf, never silent return"
requirements-completed:
- DATA-03
- DATA-04
# Metrics
duration: 7min
completed: 2026-03-23
---
# Phase 01 Plan 02: Body Size Limits and Test Setup Hardening Summary
**Request body size limits (1MB cap, HTTP 413) added to four handler paths; six silent test-setup returns replaced with t.Fatalf to surface setup failures in CI**
## Performance
- **Duration:** 7 min
- **Started:** 2026-03-23T20:17:30Z
- **Completed:** 2026-03-23T20:24:37Z
- **Tasks:** 2
- **Files modified:** 2
## Accomplishments
- Added `maxBodyBytes` constant and `errors` import to `diunwebhook.go`; applied `http.MaxBytesReader` + `errors.As(*http.MaxBytesError)` guard before JSON decode in WebhookHandler, TagsHandler POST, TagAssignmentHandler PUT and DELETE — returns HTTP 413 on body > 1MB
- Replaced 6 silent `if err != nil { return }` test setup patterns with `t.Fatalf(...)` so failing setup always fails the test, not silently passes
- Added 3 new oversized-body tests (TestWebhookHandler_OversizedBody, TestTagsHandler_OversizedBody, TestTagAssignmentHandler_OversizedBody); all pass with 413
## Task Commits
Each task was committed atomically:
1. **RED: add failing 413 tests** - `311e91d` (test)
2. **Task 1: MaxBytesReader in handlers + GREEN test fix** - `98dfd76` (feat)
3. **Task 2: Replace silent returns with t.Fatalf** - `7bdfc5f` (fix)
**Plan metadata:** (docs commit — see below)
_Note: TDD task 1 has a RED commit followed by a combined feat commit covering the implementation and the test body correction._
## Files Created/Modified
- `pkg/diunwebhook/diunwebhook.go` — added `errors` import, `maxBodyBytes` constant, MaxBytesReader guards in 4 handler paths
- `pkg/diunwebhook/diunwebhook_test.go` — 3 new oversized-body tests; 6 t.Fatalf replacements
## Decisions Made
- `http.MaxBytesReader` is applied per-handler (not via middleware) to match the existing no-middleware architecture
- Body limit set at 1MB (`1 << 20`) matching the plan spec
- Oversized body test bodies use a valid JSON prefix (`{"image":"` + padding) rather than all-`x` bytes — the JSON decoder reads only 1 byte of invalid content before failing, so all-`x` never triggers MaxBytesReader; a JSON string value causes the decoder to read the full field before the limit fires
## Deviations from Plan
### Auto-fixed Issues
**1. [Rule 1 - Bug] Oversized body tests used all-x bytes; fixed to use valid JSON prefix**
- **Found during:** Task 1 GREEN phase
- **Issue:** Test body `make([]byte, 1<<20+1)` filled with `'x'` causes JSON decoder to fail at byte 1 with "invalid character" — MaxBytesReader never triggers because the read count never reaches the limit
- **Fix:** Changed test bodies to `{"image":"` (or `{"name":"`) + `bytes.Repeat([]byte("x"), 1<<20+1)` so the decoder reads past 1MB before encountering an unterminated string
- **Files modified:** pkg/diunwebhook/diunwebhook_test.go
- **Verification:** All 3 oversized-body tests now pass with HTTP 413
- **Committed in:** 98dfd76 (Task 1 feat commit)
---
**Total deviations:** 1 auto-fixed (Rule 1 - test bug)
**Impact on plan:** The fix is necessary for tests to validate what they claim. No scope creep; the handler implementation is exactly as specified.
## Issues Encountered
None beyond the test body deviation documented above.
## User Setup Required
None - no external service configuration required.
## Next Phase Readiness
- DATA-03 and DATA-04 fixed; all Phase 01 plans complete
- Full test suite passes with 0 failures
- Handler hardening pattern (MaxBytesReader + errors.As) established for future handlers
- Ready to transition to Phase 02 (database refactor / PostgreSQL support)
---
*Phase: 01-data-integrity*
*Completed: 2026-03-23*


@@ -0,0 +1,491 @@
# Phase 1: Data Integrity — Research
**Researched:** 2026-03-23
**Domain:** Go / SQLite — UPSERT semantics, FK enforcement, HTTP body limits, test correctness
**Confidence:** HIGH (all four bugs confirmed via direct code analysis and authoritative sources)
---
## Summary
Phase 1 fixes four concrete, active bugs in `pkg/diunwebhook/diunwebhook.go` and its test file. None of these changes alter the public API, the database schema, or the HTTP route surface. They are surgical line-level fixes to existing functions.
Bug 1 (DATA-01) is the most damaging: `INSERT OR REPLACE` in `UpdateEvent()` at line 109 performs a DELETE + INSERT on conflict, which cascades to delete any `tag_assignments` row that references the image being updated. Every new DIUN event for an already-tagged image silently destroys the tag. The fix is a one-statement replacement: `INSERT INTO updates (...) VALUES (...) ON CONFLICT(image) DO UPDATE SET ...` using the `excluded.` qualifier for new values.
Bug 2 (DATA-02) is directly related: even with the UPSERT fix in place, the `ON DELETE CASCADE` constraint on `tag_assignments.tag_id` cannot fire during a tag delete because `PRAGMA foreign_keys = ON` is never executed. SQLite disables FK enforcement by default at the connection level. The fix is one `db.Exec` call immediately after `sql.Open` in `InitDB()`. Since the codebase already uses `db.SetMaxOpenConns(1)`, the single-connection constraint makes this safe without needing DSN parameters or connection hooks.
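Both failure modes can be reproduced together in a few statements at the sqlite3 shell; the schema below is trimmed to the essentials and illustrative only, not the project's actual DDL:

```sql
PRAGMA foreign_keys = ON;  -- off by default; without this, no cascades fire at all

CREATE TABLE updates (image TEXT PRIMARY KEY, status TEXT);
CREATE TABLE tag_assignments (
  image TEXT REFERENCES updates(image) ON DELETE CASCADE,
  tag   TEXT
);

INSERT INTO updates VALUES ('nginx:latest', 'new');
INSERT INTO tag_assignments VALUES ('nginx:latest', 'webservers');

-- DATA-01: INSERT OR REPLACE is DELETE + INSERT, so the delete cascades
INSERT OR REPLACE INTO updates VALUES ('nginx:latest', 'update');
SELECT count(*) FROM tag_assignments;  -- 0: the tag assignment was destroyed

-- The UPSERT form updates in place and leaves child rows untouched
INSERT INTO tag_assignments VALUES ('nginx:latest', 'webservers');
INSERT INTO updates VALUES ('nginx:latest', 'update')
  ON CONFLICT(image) DO UPDATE SET status = excluded.status;
SELECT count(*) FROM tag_assignments;  -- 1: the tag assignment survives
```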
Bug 3 (DATA-03) is a security/reliability issue: `json.NewDecoder(r.Body).Decode(&event)` in `WebhookHandler` reads an unbounded body. The fix is `r.Body = http.MaxBytesReader(w, r.Body, 1<<20)` before the decode, plus an `errors.As(err, &maxBytesError)` check in the decode error path to return 413. The same pattern applies to the POST body in `TagsHandler` and the PUT/DELETE body in `TagAssignmentHandler`.
Bug 4 (DATA-04) is in the test file: six call sites use `if err != nil { return }` instead of `t.Fatalf(...)`, causing test setup failures to appear as passing tests. These are pure test-file changes with no production impact.
**Primary recommendation:** Fix all four bugs in order (DATA-01 through DATA-04) as separate commits. Each fix is independent and can be verified by its own targeted test.
---
## Project Constraints (from CLAUDE.md)
| Directive | Category |
|-----------|----------|
| No CGO — uses `modernc.org/sqlite` (pure Go) | Dependency constraint |
| Run tests: `go test -v -coverprofile=coverage.out -coverpkg=./... ./...` | Test command |
| Run single test: `go test -v -run TestWebhookHandler ./pkg/diunwebhook/` | Test command |
| CI warns (but does not fail) when coverage drops below 80% | Coverage policy |
| No ORM or query builder — raw SQL only | SQL constraint |
| Module name is `awesomeProject` — do not rename in this phase | Scope constraint |
---
<phase_requirements>
## Phase Requirements
| ID | Description | Research Support |
|----|-------------|------------------|
| DATA-01 | Webhook events use proper UPSERT (ON CONFLICT DO UPDATE) instead of INSERT OR REPLACE, preserving tag assignments when an image receives a new event | SQLite 3.24+ UPSERT syntax confirmed; `excluded.` qualifier for column update values documented; fix is line 109 of diunwebhook.go |
| DATA-02 | SQLite foreign key enforcement is enabled (PRAGMA foreign_keys = ON) so tag deletion properly cascades to tag assignments | FK enforcement is per-connection; with SetMaxOpenConns(1) a single db.Exec after Open is sufficient; modernc.org/sqlite also supports DSN `_pragma=foreign_keys(1)` as a future-proof alternative |
| DATA-03 | Webhook and API endpoints enforce request body size limits (e.g., 1MB) to prevent OOM from oversized payloads | `http.MaxBytesReader` wraps r.Body before decode; `errors.As(err, &maxBytesError)` detects limit exceeded; caller must explicitly return 413 — the reader does not set it automatically |
| DATA-04 | Test error handling uses t.Fatal instead of silent returns, so test failures are never swallowed | Six call sites identified in diunwebhook_test.go (lines 38-40, 153-154, 228-231, 287-289, 329-331, 350-351); all follow the same `if err != nil { return }` pattern |
</phase_requirements>
---
## Standard Stack
### Core (no new dependencies required)
| Library | Version | Purpose | Why Standard |
|---------|---------|---------|--------------|
| `modernc.org/sqlite` | v1.46.1 (current) | SQLite driver (pure Go, no CGO) | Already used; UPSERT and PRAGMA support confirmed |
| `database/sql` | stdlib | SQL connection and query interface | Already used |
| `net/http` | stdlib | `http.MaxBytesReader`, `http.MaxBytesError` | `MaxBytesError` exported since Go 1.19 (`MaxBytesReader` predates it); Go module specifies 1.26 |
| `errors` | stdlib | `errors.As` for typed error detection | Already imported in test file |
No new `go.mod` entries are needed for this phase. All required functionality is in the existing standard library and the already-present `modernc.org/sqlite` driver.
### Alternatives Considered
| Instead of | Could Use | Tradeoff |
|------------|-----------|----------|
| `db.Exec("PRAGMA foreign_keys = ON")` after Open | DSN `?_pragma=foreign_keys(1)` | DSN approach applies to every future connection including pooled ones; direct Exec is sufficient given `SetMaxOpenConns(1)` but DSN is more robust if pooling ever changes |
| `errors.As(err, &maxBytesError)` | `strings.Contains(err.Error(), "http: request body too large")` | String matching is fragile and not API-stable; `errors.As` with `*http.MaxBytesError` is the documented pattern |
---
## Architecture Patterns
### Existing Code Structure (not changing in Phase 1)
Phase 1 does NOT restructure the package. All fixes are line-level edits within the existing `pkg/diunwebhook/diunwebhook.go` and `pkg/diunwebhook/diunwebhook_test.go` files. The package-level global state, handler functions, and overall architecture are left for Phase 2.
### Pattern 1: SQLite UPSERT with excluded. qualifier
**What:** Replace `INSERT OR REPLACE INTO updates VALUES (...)` with a proper UPSERT that only updates event fields, never touching the row's relationship to `tag_assignments`.
**When to use:** Any time an INSERT must update an existing row without deleting it; this is always the correct choice when foreign key child rows must survive.
**Why INSERT OR REPLACE is wrong:** SQLite implements `INSERT OR REPLACE` as DELETE + INSERT. The DELETE fires the `ON DELETE CASCADE` on `tag_assignments.image`, destroying the child row. Even if FK enforcement is OFF, the row is physically deleted and reinserted with a new rowid, making the FK relationship stale.
**Example:**
```go
// Source: https://sqlite.org/lang_upsert.html
// Replace line 109 in UpdateEvent():
_, err := db.Exec(`
INSERT INTO updates (
image, diun_version, hostname, status, provider,
hub_link, mime_type, digest, created, platform,
ctn_name, ctn_id, ctn_state, ctn_status,
received_at, acknowledged_at
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,NULL)
ON CONFLICT(image) DO UPDATE SET
diun_version = excluded.diun_version,
hostname = excluded.hostname,
status = excluded.status,
provider = excluded.provider,
hub_link = excluded.hub_link,
mime_type = excluded.mime_type,
digest = excluded.digest,
created = excluded.created,
platform = excluded.platform,
ctn_name = excluded.ctn_name,
ctn_id = excluded.ctn_id,
ctn_state = excluded.ctn_state,
ctn_status = excluded.ctn_status,
received_at = excluded.received_at,
acknowledged_at = NULL`,
event.Image, event.DiunVersion, ...
)
```
Key points:
- `excluded.column_name` refers to the value that would have been inserted (the new value)
- `acknowledged_at = NULL` explicitly resets the acknowledged state on each new event — this matches the test `TestDismissHandler_ReappearsAfterNewWebhook`
- `tag_assignments` is untouched because the UPDATE path never deletes the `updates` row
### Pattern 2: PRAGMA foreign_keys = ON placement
**What:** Execute `PRAGMA foreign_keys = ON` immediately after `sql.Open`, before any schema creation.
**When to use:** Every SQLite database that defines FK constraints with `ON DELETE CASCADE`.
**Why it must be immediate:** SQLite FK enforcement is a connection-level setting, not a database-level setting. It resets to OFF when the connection closes. With `db.SetMaxOpenConns(1)`, there is exactly one connection and it lives for the process lifetime, so one `db.Exec` call is sufficient.
**Example:**
```go
// Source: https://sqlite.org/foreignkeys.html
// Add in InitDB() after sql.Open, before schema creation:
func InitDB(path string) error {
var err error
db, err = sql.Open("sqlite", path)
if err != nil {
return err
}
db.SetMaxOpenConns(1)
// Enable FK enforcement — must be first SQL executed on this connection
if _, err = db.Exec(`PRAGMA foreign_keys = ON`); err != nil {
return err
}
// ... CREATE TABLE IF NOT EXISTS ...
}
```
The error from `db.Exec("PRAGMA foreign_keys = ON")` must NOT be swallowed. If the pragma fails (which is extremely unlikely with `modernc.org/sqlite`), returning the error prevents silent misconfiguration.
**Future-proof alternative (if SetMaxOpenConns(1) is ever removed):**
```go
db, err = sql.Open("sqlite", path+"?_pragma=foreign_keys(1)")
```
The `_pragma` DSN parameter in `modernc.org/sqlite` applies the pragma on every new connection, making it pool-safe.
### Pattern 3: http.MaxBytesReader with typed error detection
**What:** Wrap `r.Body` before JSON decoding; check for `*http.MaxBytesError` to return 413.
**When to use:** Any handler that reads a request body from untrusted clients.
**Example:**
```go
// Source: https://pkg.go.dev/net/http#MaxBytesReader
// Source: https://www.alexedwards.net/blog/how-to-properly-parse-a-json-request-body
const maxBodyBytes = 1 << 20 // 1 MB
func WebhookHandler(w http.ResponseWriter, r *http.Request) {
// ... auth check, method check ...
r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
var event DiunEvent
if err := json.NewDecoder(r.Body).Decode(&event); err != nil {
var maxBytesErr *http.MaxBytesError
if errors.As(err, &maxBytesErr) {
http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
return
}
log.Printf("WebhookHandler: failed to decode request: %v", err)
http.Error(w, "bad request", http.StatusBadRequest)
return
}
// ...
}
```
Critical details:
- `http.MaxBytesReader` does NOT automatically set the 413 status. The caller must detect `*http.MaxBytesError` via `errors.As` and call `http.Error(w, ..., 413)`.
- `maxBodyBytes` should be defined as a package-level constant so all three handlers share the same limit.
- Apply to: `WebhookHandler` (POST /webhook), `TagsHandler` POST branch, `TagAssignmentHandler` PUT and DELETE branches.
### Pattern 4: t.Fatalf in test setup paths
**What:** Replace `if err != nil { return }` with `t.Fatalf("...: %v", err)` in test setup code.
**When to use:** Any `t.Test*` function where an error in setup (not the system under test) would make subsequent assertions meaningless.
**Example:**
```go
// Before (silently swallows test setup failure — test appears to pass):
err := diun.UpdateEvent(event)
if err != nil {
return
}
// After (test is marked failed, execution stops, CI catches the failure):
if err := diun.UpdateEvent(event); err != nil {
t.Fatalf("UpdateEvent setup failed: %v", err)
}
```
**Distinction from `t.Errorf`:** Use `t.Fatal`/`t.Fatalf` when the test cannot proceed meaningfully after the failure (setup failure). Use `t.Errorf` for the assertion being tested (allows collecting multiple failures in one run).
### Anti-Patterns to Avoid
- **`INSERT OR REPLACE` for any table with FK children:** Always use `ON CONFLICT DO UPDATE` when child rows in related tables must survive the conflict resolution.
- **`_, _ = db.Exec("PRAGMA ...")`:** Never swallow errors on PRAGMA execution. FK enforcement silently failing means the test `TestDeleteTagHandler_CascadesAssignment` appears to pass while the bug exists in production.
- **`strings.Contains(err.Error(), "request body too large")`:** The error message string is not part of the stable Go API. Use `errors.As(err, &maxBytesError)` instead.
- **Sharing the `maxBodyBytes` constant as a magic number:** Define it once (`const maxBodyBytes = 1 << 20`) so all three handlers use the same value.
---
## Don't Hand-Roll
| Problem | Don't Build | Use Instead | Why |
|---------|-------------|-------------|-----|
| SQLite UPSERT | A "check if exists, then INSERT or UPDATE" two-step | `INSERT ... ON CONFLICT DO UPDATE` | Two-step is non-atomic; concurrent writes between the SELECT and INSERT/UPDATE can create duplicates or miss updates |
| Request body size limit | Manual `io.ReadAll` with size check | `http.MaxBytesReader` | `MaxBytesReader` also signals the server to close the connection after the limit, preventing slow clients from holding connections open |
| Typed error detection | `err.Error() == "http: request body too large"` | `errors.As(err, &maxBytesError)` | String comparison is fragile; `MaxBytesError` is a stable exported type since Go 1.19 |
---
## Common Pitfalls
### Pitfall 1: PRAGMA foreign_keys = ON placed after schema creation
**What goes wrong:** It is tempting to worry that executing the pragma after `CREATE TABLE IF NOT EXISTS tag_assignments (... REFERENCES tags(id) ON DELETE CASCADE)` leaves the tables in an "FK-off state". It does not: FK enforcement applies at write time, not at table-creation time, so tables created before the pragma are enforced exactly like tables created after it.
**Why it matters:** The ordering within `InitDB()` should be Open → PRAGMA → CREATE TABLE. A pragma placed after the DDL still enforces future writes, but placing it first guarantees no statement ever runs on the connection with enforcement off and removes any ambiguity for readers auditing the function.
**How to avoid:** Place `db.Exec("PRAGMA foreign_keys = ON")` as the very first SQL statement after `sql.Open` — before any schema DDL.
### Pitfall 2: ON CONFLICT UPSERT must list columns explicitly
**What goes wrong:** `INSERT OR REPLACE INTO updates VALUES (?,?,?,...)` uses positional VALUES with no column list. The replacement `INSERT INTO updates (...) VALUES (...) ON CONFLICT(image) DO UPDATE SET` must explicitly name every column in the VALUES list. If a column is added to the schema later (e.g., another migration), the VALUES list must be updated too.
**Why it happens:** The current `INSERT OR REPLACE` implicitly inserts into all columns by position. The UPSERT syntax requires an explicit conflict target column (`image`) which means the column list must be explicit.
**How to avoid:** The explicit column list in the UPSERT is actually safer — column additions to the schema won't silently insert NULL into unmentioned columns.
### Pitfall 3: MaxBytesReader must wrap r.Body before any read
**What goes wrong:** `http.MaxBytesReader` wraps the reader; it does not inspect an already-partially-read body. If any code reads from `r.Body` before `MaxBytesReader` is applied (e.g., a middleware that logs the request), the limit applies only to the remaining bytes. In the current codebase this is not a problem — no reads happen before the JSON decode.
**How to avoid:** Apply `r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)` as the first operation in each handler body, before any reads.
### Pitfall 4: TestDeleteTagHandler_CascadesAssignment currently passes for the wrong reason
**What goes wrong:** This test passes today even though `PRAGMA foreign_keys = ON` is not set. The reason: `GetUpdates()` uses a `LEFT JOIN tag_assignments ta ON u.image = ta.image`. When the `tag_assignments` row disappears as a side effect of `INSERT OR REPLACE` (or of direct `tag_assignments` cleanup on another code path), the LEFT JOIN simply returns NULL for the tag columns, and the test's `m["nginx:latest"].Tag != nil` check observes the absence of a tag. The test reaches the expected end state, but for the wrong reason: no FK cascade ever fired.
**Warning sign:** After fixing DATA-01 (UPSERT), if DATA-02 (FK enforcement) is not also fixed, `TestDeleteTagHandler_CascadesAssignment` may start failing because tag assignments now survive the UPSERT but FK cascades still do not fire on tag deletion.
**How to avoid:** Fix DATA-01 and DATA-02 together, not separately. The regression test for DATA-02 must assert that deleting a tag removes its assignments.
### Pitfall 5: Silent errors in test helpers (export_test.go)
**What goes wrong:** `ResetTags()` in `export_test.go` calls `db.Exec(...)` twice with no error checking. If the DELETE fails (e.g., FK violation because FK enforcement is now ON and there is a constraint preventing the delete), the reset silently leaves stale data.
**How to avoid:** After fixing DATA-02, verify the delete order in `ResetTags()` in `export_test.go`. The current order (`DELETE FROM tag_assignments` first, then `DELETE FROM tags`) is correct and succeeds cleanly with FK enforcement ON, so no temporary `PRAGMA foreign_keys = OFF` is needed. Error checking on those `db.Exec` calls should still be added so a failed reset cannot silently leave stale data.
---
## Code Examples
Verified patterns from official sources:
### DATA-01: Full UPSERT replacement for UpdateEvent()
```go
// Source: https://sqlite.org/lang_upsert.html
func UpdateEvent(event DiunEvent) error {
mu.Lock()
defer mu.Unlock()
_, err := db.Exec(`
INSERT INTO updates (
image, diun_version, hostname, status, provider,
hub_link, mime_type, digest, created, platform,
ctn_name, ctn_id, ctn_state, ctn_status,
received_at, acknowledged_at
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,NULL)
ON CONFLICT(image) DO UPDATE SET
diun_version = excluded.diun_version,
hostname = excluded.hostname,
status = excluded.status,
provider = excluded.provider,
hub_link = excluded.hub_link,
mime_type = excluded.mime_type,
digest = excluded.digest,
created = excluded.created,
platform = excluded.platform,
ctn_name = excluded.ctn_name,
ctn_id = excluded.ctn_id,
ctn_state = excluded.ctn_state,
ctn_status = excluded.ctn_status,
received_at = excluded.received_at,
acknowledged_at = NULL`,
event.Image, event.DiunVersion, event.Hostname, event.Status, event.Provider,
event.HubLink, event.MimeType, event.Digest,
event.Created.Format(time.RFC3339), event.Platform,
event.Metadata.ContainerName, event.Metadata.ContainerID,
event.Metadata.State, event.Metadata.Status,
time.Now().Format(time.RFC3339),
)
return err
}
```
### DATA-02: PRAGMA placement in InitDB()
```go
// Source: https://sqlite.org/foreignkeys.html
func InitDB(path string) error {
var err error
db, err = sql.Open("sqlite", path)
if err != nil {
return err
}
db.SetMaxOpenConns(1)
// Enable FK enforcement on the single connection before any schema work
if _, err = db.Exec(`PRAGMA foreign_keys = ON`); err != nil {
return err
}
// ... existing CREATE TABLE statements unchanged ...
}
```
### DATA-03: MaxBytesReader + typed error check
```go
// Source: https://pkg.go.dev/net/http#MaxBytesReader
// Source: https://www.alexedwards.net/blog/how-to-properly-parse-a-json-request-body
const maxBodyBytes = 1 << 20 // 1 MB — package-level constant, shared by all handlers
// In WebhookHandler, after method and auth checks:
r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
var event DiunEvent
if err := json.NewDecoder(r.Body).Decode(&event); err != nil {
var maxBytesErr *http.MaxBytesError
if errors.As(err, &maxBytesErr) {
http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
return
}
log.Printf("WebhookHandler: failed to decode request: %v", err)
http.Error(w, "bad request", http.StatusBadRequest)
return
}
```
### DATA-04: t.Fatalf replacements
```go
// Before — silent test pass on setup failure:
err := diun.UpdateEvent(event)
if err != nil {
return
}
// After — test fails loudly, CI catches the failure:
if err := diun.UpdateEvent(event); err != nil {
t.Fatalf("test setup: UpdateEvent failed: %v", err)
}
```
### DATA-04: New regression test for DATA-01 (tag survives new event)
This test does not exist yet and must be added as part of DATA-01:
```go
func TestUpdateEvent_PreservesTagOnUpsert(t *testing.T) {
diun.UpdatesReset()
// Insert image and assign a tag
if err := diun.UpdateEvent(diun.DiunEvent{Image: "nginx:latest", Status: "new"}); err != nil {
t.Fatalf("first UpdateEvent failed: %v", err)
}
tagID := postTagAndGetID(t, "webservers")
body, _ := json.Marshal(map[string]interface{}{"image": "nginx:latest", "tag_id": tagID})
req := httptest.NewRequest(http.MethodPut, "/api/tag-assignments", bytes.NewReader(body))
rec := httptest.NewRecorder()
diun.TagAssignmentHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Fatalf("tag assignment failed: %d", rec.Code)
}
// Receive a second event for the same image (simulates DIUN re-notification)
if err := diun.UpdateEvent(diun.DiunEvent{Image: "nginx:latest", Status: "update"}); err != nil {
t.Fatalf("second UpdateEvent failed: %v", err)
}
// Tag must survive the second event
m := diun.GetUpdatesMap()
if m["nginx:latest"].Tag == nil {
t.Error("tag was lost after second UpdateEvent — INSERT OR REPLACE bug not fixed")
}
if m["nginx:latest"].Tag != nil && m["nginx:latest"].Tag.ID != tagID {
t.Errorf("tag ID changed: expected %d, got %d", tagID, m["nginx:latest"].Tag.ID)
}
// Acknowledged state should be reset
if m["nginx:latest"].Acknowledged {
t.Error("acknowledged state should be reset by new event")
}
}
```
---
## State of the Art
| Old Approach | Current Approach | When Changed | Impact |
|--------------|------------------|--------------|--------|
| `INSERT OR REPLACE` (DELETE+INSERT) | `INSERT ... ON CONFLICT DO UPDATE` | SQLite 3.24 (2018-06-04) | Preserves FK child rows; row identity unchanged |
| Manual PRAGMA per session | DSN `?_pragma=foreign_keys(1)` | modernc.org/sqlite (current) | Pool-safe; applies to every future connection automatically |
| `io.LimitReader` for body limits | `http.MaxBytesReader` | Go 1.0+ (always) | Signals connection close; returns typed `MaxBytesError` |
| `*http.MaxBytesError` type assertion | `errors.As(err, &maxBytesErr)` | Go 1.19 (MaxBytesError exported) | Type-safe; works with wrapped errors |
**Deprecated/outdated:**
- `INSERT OR REPLACE`: Still valid SQLite syntax but semantically wrong for tables with FK children. Use `ON CONFLICT DO UPDATE` instead.
- String-matching on error messages: `strings.Contains(err.Error(), "request body too large")` — not API-stable. `errors.As` with `*http.MaxBytesError` is the correct pattern since Go 1.19.
---
## Open Questions
1. **Does `PRAGMA foreign_keys = ON` interfere with `UpdatesReset()` calling `InitDB(":memory:")`?**
- What we know: `UpdatesReset()` in `export_test.go` calls `InitDB(":memory:")` which re-runs the full schema creation on a fresh in-memory database. The PRAGMA will be set on the new connection.
- What's unclear: Whether setting the PRAGMA on `:memory:` changes any test behavior for existing passing tests.
- Recommendation: Run the full test suite immediately after adding the PRAGMA. If any test regresses, inspect whether it was relying on FK non-enforcement. This is unlikely since the existing tests do not create FK-violation scenarios intentionally.
2. **Should `TagAssignmentHandler`'s `INSERT OR REPLACE INTO tag_assignments` (line 352) also be changed to a proper UPSERT?**
- What we know: `tag_assignments` has `image TEXT PRIMARY KEY`, so `INSERT OR REPLACE` on it also deletes and reinserts. Since `tag_assignments` has no FK children, the delete+insert is functionally harmless here.
- What's unclear: Whether this is in scope for Phase 1 or Phase 2.
- Recommendation: Include it in Phase 1 for consistency and to eliminate all `INSERT OR REPLACE` occurrences. The fix is trivial: `INSERT INTO tag_assignments (image, tag_id) VALUES (?, ?) ON CONFLICT(image) DO UPDATE SET tag_id = excluded.tag_id`.
---
## Environment Availability
Step 2.6: SKIPPED — Phase 1 is code-only edits to existing Go source files and test files. No external tools, services, runtimes, databases, or CLIs beyond the existing project toolchain are required.
---
## Sources
### Primary (HIGH confidence)
- [SQLite UPSERT documentation](https://sqlite.org/lang_upsert.html) — ON CONFLICT DO UPDATE syntax, `excluded.` qualifier behavior, availability since SQLite 3.24
- [SQLite Foreign Key Support](https://sqlite.org/foreignkeys.html) — per-connection enforcement, must enable with PRAGMA, not stored in DB file
- [Go net/http package — MaxBytesReader](https://pkg.go.dev/net/http) — function signature, MaxBytesError type, behavior on limit exceeded
- [modernc.org/sqlite package](https://pkg.go.dev/modernc.org/sqlite) — DSN `_pragma` parameter, RegisterConnectionHook API
- Direct code analysis: `pkg/diunwebhook/diunwebhook.go` lines 58-118, 179, 277, 340, 352 — HIGH confidence (source of truth)
- Direct code analysis: `pkg/diunwebhook/diunwebhook_test.go` lines 38-40, 153-154, 228-231, 287-289, 329-331, 350-351 — HIGH confidence (source of truth)
- Direct code analysis: `pkg/diunwebhook/export_test.go` — HIGH confidence
### Secondary (MEDIUM confidence)
- [Alex Edwards — How to properly parse a JSON request body](https://www.alexedwards.net/blog/how-to-properly-parse-a-json-request-body) — MaxBytesReader + errors.As pattern, verified against pkg.go.dev
- [TIL: SQLite Foreign Key Support with Go](https://www.rockyourcode.com/til-sqlite-foreign-key-support-with-go/) — per-connection requirement, connection pool implications
- `.planning/codebase/CONCERNS.md` — pre-existing bug audit (lines 37-47) — HIGH (prior analysis by same team)
- `.planning/research/PITFALLS.md` — Pitfall 2 (INSERT OR REPLACE) — HIGH (direct codebase evidence cited)
### Tertiary (LOW confidence)
- None
---
## Metadata
**Confidence breakdown:**
- DATA-01 fix (UPSERT): HIGH — SQLite official docs confirm syntax, codebase confirms bug location at line 109
- DATA-02 fix (FK enforcement): HIGH — SQLite official docs confirm per-connection behavior, modernc.org/sqlite docs confirm DSN approach, SetMaxOpenConns(1) makes simple Exec sufficient
- DATA-03 fix (MaxBytesReader): HIGH — Go stdlib docs confirm API, MaxBytesError exported since Go 1.19, module requires Go 1.26
- DATA-04 fix (t.Fatal): HIGH — Direct test file analysis, standard Go testing idiom
**Research date:** 2026-03-23
**Valid until:** 2026-06-23 (SQLite and Go stdlib APIs are extremely stable; UPSERT syntax has not changed since 3.24 in 2018)

---
phase: 01-data-integrity
verified: 2026-03-23T21:30:00Z
status: passed
score: 6/6 must-haves verified
re_verification: false
---
# Phase 1: Data Integrity Verification Report
**Phase Goal:** Users can trust that their data is never silently corrupted — tag assignments survive new DIUN events, foreign key constraints are enforced, and test failures are always visible
**Verified:** 2026-03-23T21:30:00Z
**Status:** passed
**Re-verification:** No — initial verification
---
## Goal Achievement
### Observable Truths
Source: ROADMAP.md Success Criteria (4 items) + must_haves from both PLANs (2 additional).
| # | Truth | Status | Evidence |
|----|--------------------------------------------------------------------------------------------------|------------|---------------------------------------------------------------------------------|
| 1 | A second DIUN event for the same image does not remove its tag assignment | VERIFIED | UPSERT at diunwebhook.go:115-144; TestUpdateEvent_PreservesTagOnUpsert passes |
| 2 | Deleting a tag removes all associated tag assignments (foreign key cascade enforced) | VERIFIED | PRAGMA at diunwebhook.go:68-70; TestDeleteTagHandler_CascadesAssignment passes |
| 3 | An oversized webhook payload (>1MB) is rejected with HTTP 413, not processed | VERIFIED | MaxBytesReader at diunwebhook.go:205,308,380,415; 3 oversized-body tests pass |
| 4 | A failing assertion in a test causes the test run to report failure, not pass silently | VERIFIED | 27 t.Fatalf calls in diunwebhook_test.go; zero silent `if err != nil { return }` patterns remain |
| 5 | INSERT OR REPLACE is gone from UpdateEvent() (plan 01-01 truth) | VERIFIED | grep count 0 for "INSERT OR REPLACE INTO updates" in diunwebhook.go |
| 6 | Full test suite passes with no regressions (plan 01-01 + 01-02 truths) | VERIFIED | 33/33 tests pass; coverage 63.8% |
**Score:** 6/6 truths verified
---
### Required Artifacts
#### Plan 01-01 Artifacts
| Artifact | Provides | Status | Details |
|----------------------------------------------|--------------------------------------------------------|------------|-----------------------------------------------------------------------|
| `pkg/diunwebhook/diunwebhook.go` | UPSERT in UpdateEvent(); PRAGMA foreign_keys = ON in InitDB() | VERIFIED | Contains "ON CONFLICT(image) DO UPDATE SET" (line 122) and "PRAGMA foreign_keys = ON" (line 68); no "INSERT OR REPLACE INTO updates" |
| `pkg/diunwebhook/diunwebhook_test.go` | Regression test TestUpdateEvent_PreservesTagOnUpsert | VERIFIED | Function present at line 652; passes |
#### Plan 01-02 Artifacts
| Artifact | Provides | Status | Details |
|----------------------------------------------|--------------------------------------------------------|------------|-----------------------------------------------------------------------|
| `pkg/diunwebhook/diunwebhook.go` | maxBodyBytes constant; MaxBytesReader + errors.As in 4 handler paths | VERIFIED | maxBodyBytes count=5 (1 const + 4 usage); MaxBytesReader count=4; errors.As count=4; StatusRequestEntityTooLarge count=4 |
| `pkg/diunwebhook/diunwebhook_test.go` | 3 oversized-body tests; t.Fatalf at all 6 setup sites | VERIFIED | TestWebhookHandler_OversizedBody (line 613), TestTagsHandler_OversizedBody (line 628), TestTagAssignmentHandler_OversizedBody (line 640) all present and passing; t.Fatalf count=27 |
---
### Key Link Verification
#### Plan 01-01 Key Links
| From | To | Via | Status | Details |
|--------------|----------------------------------------|--------------------------------------------------|----------|------------------------------------------------------------|
| `InitDB()` | `PRAGMA foreign_keys = ON` | db.Exec immediately after db.SetMaxOpenConns(1) | VERIFIED | diunwebhook.go lines 67-70: SetMaxOpenConns then Exec PRAGMA before any DDL |
| `UpdateEvent()` | INSERT ... ON CONFLICT(image) DO UPDATE SET | db.Exec with named column list | VERIFIED | diunwebhook.go lines 115-144: full UPSERT with 15 named columns |
#### Plan 01-02 Key Links
| From | To | Via | Status | Details |
|---------------------------------------|---------------------------------------------|-----------------------------------------------------|----------|--------------------------------------------------------------------------|
| `WebhookHandler` | `http.StatusRequestEntityTooLarge` (413) | MaxBytesReader + errors.As(*http.MaxBytesError) | VERIFIED | diunwebhook.go line 205 (MaxBytesReader), lines 209-213 (errors.As + 413) |
| `TagsHandler POST branch` | `http.StatusRequestEntityTooLarge` (413) | MaxBytesReader + errors.As(*http.MaxBytesError) | VERIFIED | diunwebhook.go line 308, lines 312-316 |
| `TagAssignmentHandler PUT branch` | `http.StatusRequestEntityTooLarge` (413) | MaxBytesReader + errors.As(*http.MaxBytesError) | VERIFIED | diunwebhook.go line 380, lines 385-390 |
| `TagAssignmentHandler DELETE branch` | `http.StatusRequestEntityTooLarge` (413) | MaxBytesReader + errors.As(*http.MaxBytesError) | VERIFIED | diunwebhook.go line 415, lines 419-424 |
| `diunwebhook_test.go setup calls` | `t.Fatalf` | replace `if err != nil { return }` with t.Fatalf | VERIFIED | All 3 remaining `if err != nil` blocks use t.Fatalf; zero silent returns |
---
### Data-Flow Trace (Level 4)
Not applicable. Phase 01 modifies persistence and HTTP handler logic — no new components rendering dynamic data are introduced. Existing data flow (WebhookHandler → UpdateEvent → SQLite → GetUpdates → UpdatesHandler → React SPA) is unchanged in structure.
---
### Behavioral Spot-Checks
| Behavior | Check | Result | Status |
|-----------------------------------------------|------------------------------------------------|-------------------------------|----------|
| No INSERT OR REPLACE remains | grep -c "INSERT OR REPLACE INTO updates" | 0 | PASS |
| PRAGMA foreign_keys present once | grep -c "PRAGMA foreign_keys = ON" | 1 | PASS |
| UPSERT present once | grep -c "ON CONFLICT(image) DO UPDATE SET" | 1 | PASS |
| maxBodyBytes defined and used (5 occurrences) | grep -c "maxBodyBytes" | 5 | PASS |
| MaxBytesReader applied in 4 handler paths | grep -c "MaxBytesReader" | 4 | PASS |
| errors.As used for 413 detection (4 paths) | grep -c "errors.As" | 4 | PASS |
| 413 returned in 4 handler paths | grep -c "StatusRequestEntityTooLarge" | 4 | PASS |
| All 33 tests pass | go test ./pkg/diunwebhook/ (with Go binary) | PASS (33/33, coverage 63.8%) | PASS |
| t.Fatalf used for test setup (27 occurrences) | grep -c "t\.Fatalf" | 27 | PASS |
---
### Requirements Coverage
All four requirement IDs declared across both plans are cross-referenced against REQUIREMENTS.md.
| Requirement | Source Plan | Description | Status | Evidence |
|-------------|-------------|----------------------------------------------------------------------------------------------|-----------|-------------------------------------------------------------------------------------|
| DATA-01 | 01-01-PLAN | Webhook events use proper UPSERT preserving tag assignments on re-event | SATISFIED | ON CONFLICT(image) DO UPDATE SET at diunwebhook.go:122; TestUpdateEvent_PreservesTagOnUpsert passes |
| DATA-02 | 01-01-PLAN | SQLite FK enforcement enabled (PRAGMA foreign_keys = ON) so tag deletion cascades | SATISFIED | PRAGMA at diunwebhook.go:68; TestDeleteTagHandler_CascadesAssignment passes |
| DATA-03 | 01-02-PLAN | Webhook and API endpoints enforce 1MB body size limit, return 413 on oversized payload | SATISFIED | MaxBytesReader in 4 handler paths; 3 oversized-body tests all return 413 |
| DATA-04 | 01-02-PLAN | Test error handling uses t.Fatal/t.Fatalf, test failures are never swallowed | SATISFIED | 27 t.Fatalf calls; zero silent `if err != nil { return }` patterns remain |
**Orphaned requirements check:** REQUIREMENTS.md maps DATA-01, DATA-02, DATA-03, DATA-04 to Phase 1. All four are claimed by plans 01-01 and 01-02. No orphaned requirements.
**Coverage:** 4/4 Phase 1 requirements satisfied.
---
### Anti-Patterns Found
| File | Line | Pattern | Severity | Impact |
|------|------|---------|----------|--------|
| `pkg/diunwebhook/diunwebhook_test.go` | 359 | `diun.UpdateEvent(...)` with no error check in `TestDismissHandler_ReappearsAfterNewWebhook` | Info | The call at line 359 is a non-setup call (it is the action under test, not setup); the test proceeds to assert state, so a failure would surface via the assertions below. Not a silent swallow of setup failure. |
No blocker or warning anti-patterns found. The single info item (the unchecked call at line 359) is the test's subject action rather than setup; the assertions on lines 362-369 would surface any failure.
---
### Human Verification Required
None. All phase 01 goals are verifiable programmatically via grep patterns and test execution. No UI, visual, or real-time behaviors were added in this phase.
---
### Gaps Summary
No gaps. All 6 truths verified, all 4 artifacts substantive and wired, all 5 key links confirmed, all 4 requirements satisfied, full test suite passes (33/33), and no blocker anti-patterns found.
---
### Commit Traceability
All commits documented in SUMMARYs are present in git history on `develop` branch:
| Commit | Description | Plan |
|-----------|----------------------------------------------------------------------|-------|
| `7edbaad` | fix(01-01): replace INSERT OR REPLACE with UPSERT and enable FK enforcement | 01-01 |
| `e2d388c` | test(01-01): add TestUpdateEvent_PreservesTagOnUpsert regression test | 01-01 |
| `311e91d` | test(01-02): add failing tests for oversized body (413) - RED | 01-02 |
| `98dfd76` | feat(01-02): add request body size limits (1MB) to webhook and tag handlers | 01-02 |
| `7bdfc5f` | fix(01-02): replace silent test setup returns with t.Fatalf at 6 sites | 01-02 |
---
_Verified: 2026-03-23T21:30:00Z_
_Verifier: Claude (gsd-verifier)_


@@ -0,0 +1,362 @@
---
phase: 02-backend-refactor
plan: 01
type: execute
wave: 1
depends_on: []
files_modified:
- pkg/diunwebhook/store.go
- pkg/diunwebhook/sqlite_store.go
- pkg/diunwebhook/migrate.go
- pkg/diunwebhook/migrations/sqlite/0001_initial_schema.up.sql
- pkg/diunwebhook/migrations/sqlite/0001_initial_schema.down.sql
- go.mod
- go.sum
autonomous: true
requirements: [REFAC-01, REFAC-03]
must_haves:
truths:
- "A Store interface defines all 9 persistence operations with no SQL or *sql.DB in the contract"
- "SQLiteStore implements every Store method using raw SQL and a sync.Mutex"
- "RunMigrations applies embedded SQL files via golang-migrate and tolerates ErrNoChange"
- "Migration 0001 creates the full current schema including acknowledged_at using CREATE TABLE IF NOT EXISTS"
- "PRAGMA foreign_keys = ON is set in NewSQLiteStore before any queries"
artifacts:
- path: "pkg/diunwebhook/store.go"
provides: "Store interface with 9 methods"
exports: ["Store"]
- path: "pkg/diunwebhook/sqlite_store.go"
provides: "SQLiteStore struct implementing Store"
exports: ["SQLiteStore", "NewSQLiteStore"]
- path: "pkg/diunwebhook/migrate.go"
provides: "RunMigrations function using golang-migrate + embed.FS"
exports: ["RunMigrations"]
- path: "pkg/diunwebhook/migrations/sqlite/0001_initial_schema.up.sql"
provides: "Baseline schema DDL"
contains: "CREATE TABLE IF NOT EXISTS updates"
key_links:
- from: "pkg/diunwebhook/sqlite_store.go"
to: "pkg/diunwebhook/store.go"
via: "interface implementation"
pattern: "func \\(s \\*SQLiteStore\\)"
- from: "pkg/diunwebhook/migrate.go"
to: "pkg/diunwebhook/migrations/sqlite/"
via: "embed.FS"
pattern: "go:embed migrations/sqlite"
---
<objective>
Create the Store interface, SQLiteStore implementation, and golang-migrate migration infrastructure as new files alongside the existing code.
Purpose: Establish the persistence abstraction layer and migration system that Plan 02 will wire into the Server struct and handlers. These are additive-only changes -- nothing existing breaks.
Output: store.go, sqlite_store.go, migrate.go, migration SQL files, golang-migrate dependency installed.
</objective>
<execution_context>
@$HOME/.claude/get-shit-done/workflows/execute-plan.md
@$HOME/.claude/get-shit-done/templates/summary.md
</execution_context>
<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/02-backend-refactor/02-RESEARCH.md
<interfaces>
<!-- Current types from diunwebhook.go that Store interface methods must use -->
From pkg/diunwebhook/diunwebhook.go:
```go
type DiunEvent struct {
DiunVersion string `json:"diun_version"`
Hostname string `json:"hostname"`
Status string `json:"status"`
Provider string `json:"provider"`
Image string `json:"image"`
HubLink string `json:"hub_link"`
MimeType string `json:"mime_type"`
Digest string `json:"digest"`
Created time.Time `json:"created"`
Platform string `json:"platform"`
Metadata struct {
ContainerName string `json:"ctn_names"`
ContainerID string `json:"ctn_id"`
State string `json:"ctn_state"`
Status string `json:"ctn_status"`
} `json:"metadata"`
}
type Tag struct {
ID int `json:"id"`
Name string `json:"name"`
}
type UpdateEntry struct {
Event DiunEvent `json:"event"`
ReceivedAt time.Time `json:"received_at"`
Acknowledged bool `json:"acknowledged"`
Tag *Tag `json:"tag"`
}
```
</interfaces>
</context>
<tasks>
<task type="auto">
<name>Task 1: Create Store interface and SQLiteStore implementation</name>
<files>pkg/diunwebhook/store.go, pkg/diunwebhook/sqlite_store.go</files>
<read_first>
- pkg/diunwebhook/diunwebhook.go (current SQL operations to extract)
- .planning/phases/02-backend-refactor/02-RESEARCH.md (Store interface design, SQL operations inventory)
</read_first>
<action>
**Install golang-migrate dependency first:**
```bash
cd /home/jean-luc-makiola/Development/projects/DiunDashboard
go get github.com/golang-migrate/migrate/v4@v4.19.1
go get github.com/golang-migrate/migrate/v4/database/sqlite
go get github.com/golang-migrate/migrate/v4/source/iofs
```
**Create `pkg/diunwebhook/store.go`** with exactly this interface (per REFAC-01):
```go
package diunwebhook
// Store defines all persistence operations. Implementations must be safe
// for concurrent use from HTTP handlers.
type Store interface {
UpsertEvent(event DiunEvent) error
GetUpdates() (map[string]UpdateEntry, error)
AcknowledgeUpdate(image string) (found bool, err error)
ListTags() ([]Tag, error)
CreateTag(name string) (Tag, error)
DeleteTag(id int) (found bool, err error)
AssignTag(image string, tagID int) error
UnassignTag(image string) error
TagExists(id int) (bool, error)
}
```
**Create `pkg/diunwebhook/sqlite_store.go`** with `SQLiteStore` struct implementing all 9 Store methods:
```go
package diunwebhook
import (
"database/sql"
"sync"
"time"
)
type SQLiteStore struct {
db *sql.DB
mu sync.Mutex
}
func NewSQLiteStore(db *sql.DB) *SQLiteStore {
return &SQLiteStore{db: db}
}
```
Move all SQL from current handlers/functions into Store methods:
1. **UpsertEvent** -- move the INSERT...ON CONFLICT from current `UpdateEvent()` function. Keep exact same SQL including `ON CONFLICT(image) DO UPDATE SET` with all 14 columns and `acknowledged_at = NULL`. Use `time.Now().Format(time.RFC3339)` for received_at. Acquire `s.mu.Lock()`.
2. **GetUpdates** -- move the SELECT...LEFT JOIN from current `GetUpdates()` function. Exact same query: `SELECT u.image, u.diun_version, ...` with LEFT JOIN on tag_assignments and tags. Same row scanning logic with `sql.NullInt64`/`sql.NullString` for tag fields. No mutex needed (read-only).
3. **AcknowledgeUpdate** -- move SQL from `DismissHandler`: `UPDATE updates SET acknowledged_at = datetime('now') WHERE image = ?`. Return `(found bool, err error)` where found = RowsAffected() > 0. Acquire `s.mu.Lock()`.
4. **ListTags** -- move SQL from `TagsHandler` GET case: `SELECT id, name FROM tags ORDER BY name`. Return `([]Tag, error)`. No mutex.
5. **CreateTag** -- move SQL from `TagsHandler` POST case: `INSERT INTO tags (name) VALUES (?)`. Return `(Tag{ID: int(lastInsertId), Name: name}, error)`. Acquire `s.mu.Lock()`.
6. **DeleteTag** -- move SQL from `TagByIDHandler`: `DELETE FROM tags WHERE id = ?`. Return `(found bool, err error)` where found = RowsAffected() > 0. Acquire `s.mu.Lock()`.
7. **AssignTag** -- move SQL from `TagAssignmentHandler` PUT case: `INSERT OR REPLACE INTO tag_assignments (image, tag_id) VALUES (?, ?)`. Keep `INSERT OR REPLACE` (correct for SQLite, per research Pitfall 6). Acquire `s.mu.Lock()`.
8. **UnassignTag** -- move SQL from `TagAssignmentHandler` DELETE case: `DELETE FROM tag_assignments WHERE image = ?`. Acquire `s.mu.Lock()`.
9. **TagExists** -- move SQL from `TagAssignmentHandler` PUT check: `SELECT COUNT(*) FROM tags WHERE id = ?`. Return `(bool, error)` where bool = count > 0. No mutex (read-only).
**CRITICAL:** `NewSQLiteStore` must run `PRAGMA foreign_keys = ON` on the db connection and `db.SetMaxOpenConns(1)` -- these currently live in `InitDB` and must NOT be lost. Specifically:
```go
func NewSQLiteStore(db *sql.DB) *SQLiteStore {
db.SetMaxOpenConns(1)
// PRAGMA foreign_keys must be set per-connection; with MaxOpenConns(1) this covers all queries
db.Exec("PRAGMA foreign_keys = ON")
return &SQLiteStore{db: db}
}
```
**rows.Close() pattern:** Use `defer rows.Close()` directly (not the verbose closure pattern from the current code). The error from Close() is safe to ignore in read paths, but still check `rows.Err()` after the scan loop to catch iteration errors.
</action>
<verify>
<automated>cd /home/jean-luc-makiola/Development/projects/DiunDashboard && go build ./pkg/diunwebhook/ && echo "BUILD OK"</automated>
</verify>
<acceptance_criteria>
- pkg/diunwebhook/store.go contains `type Store interface {`
- pkg/diunwebhook/store.go contains exactly these 9 method signatures: UpsertEvent, GetUpdates, AcknowledgeUpdate, ListTags, CreateTag, DeleteTag, AssignTag, UnassignTag, TagExists
- pkg/diunwebhook/sqlite_store.go contains `type SQLiteStore struct {`
- pkg/diunwebhook/sqlite_store.go contains `func NewSQLiteStore(db *sql.DB) *SQLiteStore`
- pkg/diunwebhook/sqlite_store.go contains `db.SetMaxOpenConns(1)`
- pkg/diunwebhook/sqlite_store.go contains `PRAGMA foreign_keys = ON`
- pkg/diunwebhook/sqlite_store.go contains `func (s *SQLiteStore) UpsertEvent(event DiunEvent) error`
- pkg/diunwebhook/sqlite_store.go contains `s.mu.Lock()` (mutex usage in write methods)
- pkg/diunwebhook/sqlite_store.go contains `INSERT OR REPLACE INTO tag_assignments` (not ON CONFLICT for this table)
- pkg/diunwebhook/sqlite_store.go contains `ON CONFLICT(image) DO UPDATE SET` (UPSERT for updates table)
- `go build ./pkg/diunwebhook/` exits 0
</acceptance_criteria>
<done>Store interface defines 9 methods; SQLiteStore implements all 9 with exact SQL from current handlers; package compiles with no errors</done>
</task>
<task type="auto">
<name>Task 2: Create migration infrastructure and SQL files</name>
<files>pkg/diunwebhook/migrate.go, pkg/diunwebhook/migrations/sqlite/0001_initial_schema.up.sql, pkg/diunwebhook/migrations/sqlite/0001_initial_schema.down.sql</files>
<read_first>
- pkg/diunwebhook/diunwebhook.go (current DDL in InitDB to extract)
- .planning/phases/02-backend-refactor/02-RESEARCH.md (RunMigrations pattern, migration file design, Pitfall 2 and 4)
</read_first>
<action>
**Create migration SQL files:**
Create directory `pkg/diunwebhook/migrations/sqlite/`.
**`0001_initial_schema.up.sql`** -- Full current schema as a single baseline migration. Use `CREATE TABLE IF NOT EXISTS` for backward compatibility with existing databases (per research recommendation):
```sql
CREATE TABLE IF NOT EXISTS updates (
image TEXT PRIMARY KEY,
diun_version TEXT NOT NULL DEFAULT '',
hostname TEXT NOT NULL DEFAULT '',
status TEXT NOT NULL DEFAULT '',
provider TEXT NOT NULL DEFAULT '',
hub_link TEXT NOT NULL DEFAULT '',
mime_type TEXT NOT NULL DEFAULT '',
digest TEXT NOT NULL DEFAULT '',
created TEXT NOT NULL DEFAULT '',
platform TEXT NOT NULL DEFAULT '',
ctn_name TEXT NOT NULL DEFAULT '',
ctn_id TEXT NOT NULL DEFAULT '',
ctn_state TEXT NOT NULL DEFAULT '',
ctn_status TEXT NOT NULL DEFAULT '',
received_at TEXT NOT NULL,
acknowledged_at TEXT
);
CREATE TABLE IF NOT EXISTS tags (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL UNIQUE
);
CREATE TABLE IF NOT EXISTS tag_assignments (
image TEXT PRIMARY KEY,
tag_id INTEGER NOT NULL REFERENCES tags(id) ON DELETE CASCADE
);
```
**`0001_initial_schema.down.sql`** -- Reverse of the up migration. Drop order matters: `tag_assignments` holds a foreign key into `tags`, so child tables are dropped before their parents:
```sql
DROP TABLE IF EXISTS tag_assignments;
DROP TABLE IF EXISTS tags;
DROP TABLE IF EXISTS updates;
```
**Create `pkg/diunwebhook/migrate.go`:**
```go
package diunwebhook
import (
"database/sql"
"embed"
"errors"
"github.com/golang-migrate/migrate/v4"
sqlitemigrate "github.com/golang-migrate/migrate/v4/database/sqlite"
"github.com/golang-migrate/migrate/v4/source/iofs"
_ "modernc.org/sqlite"
)
//go:embed migrations/sqlite
var sqliteMigrations embed.FS
// RunMigrations applies all pending schema migrations to the given SQLite database.
// Returns nil if all migrations applied successfully or if database is already up to date.
func RunMigrations(db *sql.DB) error {
src, err := iofs.New(sqliteMigrations, "migrations/sqlite")
if err != nil {
return err
}
driver, err := sqlitemigrate.WithInstance(db, &sqlitemigrate.Config{})
if err != nil {
return err
}
m, err := migrate.NewWithInstance("iofs", src, "sqlite", driver)
if err != nil {
return err
}
if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) {
return err
}
return nil
}
```
**CRITICAL imports:**
- Use `database/sqlite` (NOT `database/sqlite3`) -- the sqlite3 variant requires CGO which is forbidden
- Import alias `sqlitemigrate` for `github.com/golang-migrate/migrate/v4/database/sqlite` to avoid collision with the blank import of `modernc.org/sqlite`
- The `_ "modernc.org/sqlite"` blank import must be present so the "sqlite" driver is registered for `sql.Open`
**After creating files, run:**
```bash
cd /home/jean-luc-makiola/Development/projects/DiunDashboard && go mod tidy
```
</action>
<verify>
<automated>cd /home/jean-luc-makiola/Development/projects/DiunDashboard && go build ./pkg/diunwebhook/ && go vet ./pkg/diunwebhook/ && echo "BUILD+VET OK"</automated>
</verify>
<acceptance_criteria>
- pkg/diunwebhook/migrate.go contains `//go:embed migrations/sqlite`
- pkg/diunwebhook/migrate.go contains `func RunMigrations(db *sql.DB) error`
- pkg/diunwebhook/migrate.go contains `!errors.Is(err, migrate.ErrNoChange)` (Pitfall 2 guard)
- pkg/diunwebhook/migrate.go contains `database/sqlite` import (NOT `database/sqlite3`)
- pkg/diunwebhook/migrations/sqlite/0001_initial_schema.up.sql contains `CREATE TABLE IF NOT EXISTS updates`
- pkg/diunwebhook/migrations/sqlite/0001_initial_schema.up.sql contains `CREATE TABLE IF NOT EXISTS tags`
- pkg/diunwebhook/migrations/sqlite/0001_initial_schema.up.sql contains `CREATE TABLE IF NOT EXISTS tag_assignments`
- pkg/diunwebhook/migrations/sqlite/0001_initial_schema.up.sql contains `acknowledged_at TEXT` (included in baseline, not a separate migration)
- pkg/diunwebhook/migrations/sqlite/0001_initial_schema.up.sql contains `ON DELETE CASCADE`
- pkg/diunwebhook/migrations/sqlite/0001_initial_schema.down.sql contains `DROP TABLE IF EXISTS`
- `go build ./pkg/diunwebhook/` exits 0
- `go vet ./pkg/diunwebhook/` exits 0
- go.mod contains `github.com/golang-migrate/migrate/v4`
</acceptance_criteria>
<done>Migration files exist with full current schema as baseline; RunMigrations function compiles and handles ErrNoChange; golang-migrate v4.19.1 in go.mod; go vet passes</done>
</task>
</tasks>
<verification>
- `go build ./pkg/diunwebhook/` compiles without errors (new files coexist with existing code)
- `go vet ./pkg/diunwebhook/` reports no issues
- `go test ./pkg/diunwebhook/` still passes (existing tests unchanged, new files are additive only)
- go.mod contains golang-migrate v4 dependency
- No CGO: no package in our build imports `mattn/go-sqlite3` (`go list -deps ./pkg/diunwebhook | grep go-sqlite3` returns empty; `go mod graph` may still list it as an unused indirect dependency of golang-migrate)
</verification>
<success_criteria>
- Store interface with 9 methods exists in store.go
- SQLiteStore implements all 9 methods in sqlite_store.go with exact SQL semantics from current handlers
- NewSQLiteStore sets PRAGMA foreign_keys = ON and MaxOpenConns(1)
- RunMigrations in migrate.go uses golang-migrate + embed.FS + iofs, handles ErrNoChange
- Migration 0001 contains full current schema with CREATE TABLE IF NOT EXISTS
- All existing tests still pass (no existing code modified)
- No CGO dependency introduced
</success_criteria>
<output>
After completion, create `.planning/phases/02-backend-refactor/02-01-SUMMARY.md`
</output>


@@ -0,0 +1,132 @@
---
phase: 02-backend-refactor
plan: "01"
subsystem: database
tags: [golang-migrate, sqlite, store-interface, dependency-injection, migrations, embed-fs]
# Dependency graph
requires:
- phase: 01-data-integrity
provides: PRAGMA foreign_keys enforcement and UPSERT semantics in existing diunwebhook.go
provides:
- Store interface with 9 methods covering all persistence operations
- SQLiteStore implementing Store with exact SQL from current handlers
- RunMigrations function using golang-migrate + embed.FS (iofs source)
- Baseline migration 0001 with full current schema (CREATE TABLE IF NOT EXISTS)
affects:
- 02-02 (Server struct refactor will use Store interface and RunMigrations)
- 03-postgresql (PostgreSQLStore will implement same Store interface)
# Tech tracking
tech-stack:
added:
- github.com/golang-migrate/migrate/v4 v4.19.1
- github.com/golang-migrate/migrate/v4/database/sqlite (modernc.org/sqlite driver, no CGO)
- github.com/golang-migrate/migrate/v4/source/iofs (embed.FS migration source)
patterns:
- Store interface pattern - persistence abstraction hiding *sql.DB from handlers
- SQLiteStore with per-struct sync.Mutex (replaces package-level global)
- golang-migrate with embedded SQL files via //go:embed migrations/sqlite
- ErrNoChange guard in RunMigrations (startup idempotency)
- CREATE TABLE IF NOT EXISTS in baseline migration (backward compatible with existing databases)
key-files:
created:
- pkg/diunwebhook/store.go
- pkg/diunwebhook/sqlite_store.go
- pkg/diunwebhook/migrate.go
- pkg/diunwebhook/migrations/sqlite/0001_initial_schema.up.sql
- pkg/diunwebhook/migrations/sqlite/0001_initial_schema.down.sql
modified:
- go.mod
- go.sum
key-decisions:
- "Used database/sqlite sub-package (not database/sqlite3) to avoid CGO - confirmed modernc.org/sqlite usage in sqlite.go source"
- "Single 0001 baseline migration with full schema including acknowledged_at - safe for existing databases via CREATE TABLE IF NOT EXISTS"
- "NewSQLiteStore sets MaxOpenConns(1) and PRAGMA foreign_keys = ON - moved from InitDB which will be removed in Plan 02"
- "AssignTag preserves INSERT OR REPLACE (not ON CONFLICT DO UPDATE) per research Pitfall 6 - correct semantics for tag_assignments PRIMARY KEY"
- "defer rows.Close() directly (not verbose closure pattern) as plan specifies"
patterns-established:
- "Store interface: all persistence behind 9 named methods, no *sql.DB in interface signature"
- "SQLiteStore field mutex: sync.Mutex as struct field, not package global - enables parallel test isolation"
- "Migration files: versioned SQL files embedded via //go:embed, applied via golang-migrate at startup"
- "ErrNoChange is not an error: errors.Is(err, migrate.ErrNoChange) guard ensures idempotent startup"
requirements-completed: [REFAC-01, REFAC-03]
# Metrics
duration: 6min
completed: "2026-03-23"
---
# Phase 02 Plan 01: Store Interface and Migration Infrastructure Summary
**Store interface (9 methods) + SQLiteStore implementation + golang-migrate v4.19.1 migration infrastructure with embedded SQL files**
## Performance
- **Duration:** ~6 min
- **Started:** 2026-03-23T20:50:31Z
- **Completed:** 2026-03-23T20:56:56Z
- **Tasks:** 2
- **Files modified:** 7
## Accomplishments
- Store interface with 9 methods extracted from current handler SQL (UpsertEvent, GetUpdates, AcknowledgeUpdate, ListTags, CreateTag, DeleteTag, AssignTag, UnassignTag, TagExists)
- SQLiteStore implementing all 9 Store methods with exact SQL semantics preserved from diunwebhook.go
- golang-migrate v4.19.1 migration infrastructure with RunMigrations using embed.FS and iofs source
- Baseline migration 0001 with full current schema using CREATE TABLE IF NOT EXISTS (safe for existing databases)
- All existing tests pass; no existing code modified (additive-only changes as specified)
## Task Commits
Each task was committed atomically:
1. **Task 1: Create Store interface and SQLiteStore implementation** - `57bf3bd` (feat)
2. **Task 2: Create migration infrastructure and SQL files** - `6506d93` (feat)
**Plan metadata:** (docs commit follows)
## Files Created/Modified
- `pkg/diunwebhook/store.go` - Store interface with 9 persistence methods
- `pkg/diunwebhook/sqlite_store.go` - SQLiteStore struct implementing Store; NewSQLiteStore sets MaxOpenConns(1) and PRAGMA foreign_keys = ON
- `pkg/diunwebhook/migrate.go` - RunMigrations using golang-migrate + embed.FS + iofs; handles ErrNoChange
- `pkg/diunwebhook/migrations/sqlite/0001_initial_schema.up.sql` - Full baseline schema (updates, tags, tag_assignments) with CREATE TABLE IF NOT EXISTS
- `pkg/diunwebhook/migrations/sqlite/0001_initial_schema.down.sql` - DROP TABLE IF EXISTS for all three tables
- `go.mod` - Added github.com/golang-migrate/migrate/v4 v4.19.1 and sub-packages
- `go.sum` - Updated checksums
## Decisions Made
- Used `database/sqlite` (not `database/sqlite3`) for golang-migrate driver — confirmed at source level that it imports `modernc.org/sqlite`, satisfying no-CGO constraint
- Single 0001 baseline migration includes `acknowledged_at` from the start; safe for existing databases because `CREATE TABLE IF NOT EXISTS` makes it idempotent on pre-existing schemas
- `NewSQLiteStore` sets `MaxOpenConns(1)` and `PRAGMA foreign_keys = ON` — these will no longer live in `InitDB` once Plan 02 removes globals
- `AssignTag` uses `INSERT OR REPLACE` (not `ON CONFLICT DO UPDATE`) — preserves semantics per research Pitfall 6
## Deviations from Plan
None - plan executed exactly as written.
## Issues Encountered
- `go vet` reports a pre-existing issue in `diunwebhook_test.go:227` (`call to (*testing.T).Fatalf from a non-test goroutine`) — confirmed pre-existing before any changes; out of scope for this plan. Logged to deferred-items.
- `mattn/go-sqlite3` appears in `go mod graph` as an indirect dependency of the `golang-migrate` module itself, but our code only imports `database/sqlite` (confirmed no CGO import in our code chain via `go mod graph | grep sqlite3 | grep -v golang-migrate`).
## User Setup Required
None - no external service configuration required.
## Next Phase Readiness
- Store interface and SQLiteStore ready for Plan 02 to wire into Server struct
- RunMigrations ready to call from main.go instead of InitDB
- All existing tests pass — Plan 02 can refactor handlers with confidence
- Blocker: Plan 02 must redesign export_test.go (currently references package-level globals that will be removed)
---
*Phase: 02-backend-refactor*
*Completed: 2026-03-23*


@@ -0,0 +1,573 @@
---
phase: 02-backend-refactor
plan: 02
type: execute
wave: 2
depends_on: [02-01]
files_modified:
- pkg/diunwebhook/diunwebhook.go
- pkg/diunwebhook/export_test.go
- pkg/diunwebhook/diunwebhook_test.go
- cmd/diunwebhook/main.go
autonomous: true
requirements: [REFAC-01, REFAC-02, REFAC-03]
must_haves:
truths:
- "All 33 existing tests pass with zero behavior change after the refactor"
- "HTTP handlers contain no SQL -- all persistence goes through Store method calls"
- "Package-level globals db, mu, and webhookSecret no longer exist"
- "main.go constructs SQLiteStore, runs migrations, builds Server, and registers routes"
- "Each test gets its own in-memory database via NewTestServer (no shared global state)"
artifacts:
- path: "pkg/diunwebhook/diunwebhook.go"
provides: "Server struct with handler methods, types, maxBodyBytes constant"
exports: ["Server", "NewServer", "DiunEvent", "UpdateEntry", "Tag"]
- path: "pkg/diunwebhook/export_test.go"
provides: "NewTestServer helper for tests"
exports: ["NewTestServer"]
- path: "cmd/diunwebhook/main.go"
provides: "Wiring: sql.Open -> RunMigrations -> NewSQLiteStore -> NewServer -> route registration"
key_links:
- from: "pkg/diunwebhook/diunwebhook.go"
to: "pkg/diunwebhook/store.go"
via: "Server.store field of type Store"
pattern: "s\\.store\\."
- from: "cmd/diunwebhook/main.go"
to: "pkg/diunwebhook/sqlite_store.go"
via: "diun.NewSQLiteStore(db)"
pattern: "NewSQLiteStore"
- from: "cmd/diunwebhook/main.go"
to: "pkg/diunwebhook/migrate.go"
via: "diun.RunMigrations(db)"
pattern: "RunMigrations"
- from: "pkg/diunwebhook/diunwebhook_test.go"
to: "pkg/diunwebhook/export_test.go"
via: "diun.NewTestServer()"
pattern: "NewTestServer"
---
<objective>
Convert all handlers from package-level functions to Server struct methods, remove global state, rewrite tests to use per-test in-memory databases, and update main.go to wire everything together.
Purpose: Complete the refactor so handlers use the Store interface (no SQL in handlers), globals are eliminated, and each test is isolated with its own database. This is the "big flip" that makes the codebase ready for PostgreSQL support.
Output: Refactored diunwebhook.go, rewritten export_test.go + test file, updated main.go. All existing tests pass.
</objective>
<execution_context>
@$HOME/.claude/get-shit-done/workflows/execute-plan.md
@$HOME/.claude/get-shit-done/templates/summary.md
</execution_context>
<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/02-backend-refactor/02-RESEARCH.md
@.planning/phases/02-backend-refactor/02-01-SUMMARY.md
<interfaces>
<!-- From Plan 01 outputs -- these files will exist when this plan runs -->
From pkg/diunwebhook/store.go:
```go
type Store interface {
UpsertEvent(event DiunEvent) error
GetUpdates() (map[string]UpdateEntry, error)
AcknowledgeUpdate(image string) (found bool, err error)
ListTags() ([]Tag, error)
CreateTag(name string) (Tag, error)
DeleteTag(id int) (found bool, err error)
AssignTag(image string, tagID int) error
UnassignTag(image string) error
TagExists(id int) (bool, error)
}
```
From pkg/diunwebhook/sqlite_store.go:
```go
type SQLiteStore struct { db *sql.DB; mu sync.Mutex }
func NewSQLiteStore(db *sql.DB) *SQLiteStore
```
From pkg/diunwebhook/migrate.go:
```go
func RunMigrations(db *sql.DB) error
```
</interfaces>
</context>
<tasks>
<task type="auto">
<name>Task 1: Convert diunwebhook.go to Server struct and update main.go</name>
<files>pkg/diunwebhook/diunwebhook.go, cmd/diunwebhook/main.go</files>
<read_first>
- pkg/diunwebhook/diunwebhook.go (full current file -- handlers to convert)
- pkg/diunwebhook/store.go (Store interface from Plan 01)
- pkg/diunwebhook/sqlite_store.go (SQLiteStore from Plan 01)
- pkg/diunwebhook/migrate.go (RunMigrations from Plan 01)
- cmd/diunwebhook/main.go (current wiring to replace)
- .planning/phases/02-backend-refactor/02-RESEARCH.md (Server struct pattern, handler method pattern)
</read_first>
<action>
**Refactor `pkg/diunwebhook/diunwebhook.go`:**
1. **Remove all package-level globals** -- delete these 3 lines entirely:
```go
var (
mu sync.Mutex
db *sql.DB
webhookSecret string
)
```
2. **Remove `SetWebhookSecret` function** -- delete entirely (replaced by NewServer constructor).
3. **Remove `InitDB` function** -- delete entirely (replaced by RunMigrations + NewSQLiteStore in main.go).
4. **Remove `UpdateEvent` function** -- delete entirely (moved to SQLiteStore.UpsertEvent in sqlite_store.go).
5. **Remove `GetUpdates` function** -- delete entirely (moved to SQLiteStore.GetUpdates in sqlite_store.go).
6. **Add Server struct and constructor:**
```go
type Server struct {
store Store
webhookSecret string
}
func NewServer(store Store, webhookSecret string) *Server {
return &Server{store: store, webhookSecret: webhookSecret}
}
```
7. **Convert all 6 handler functions to methods on `*Server`:**
- `func WebhookHandler(w, r)` becomes `func (s *Server) WebhookHandler(w, r)`
- `func UpdatesHandler(w, r)` becomes `func (s *Server) UpdatesHandler(w, r)`
- `func DismissHandler(w, r)` becomes `func (s *Server) DismissHandler(w, r)`
- `func TagsHandler(w, r)` becomes `func (s *Server) TagsHandler(w, r)`
- `func TagByIDHandler(w, r)` becomes `func (s *Server) TagByIDHandler(w, r)`
- `func TagAssignmentHandler(w, r)` becomes `func (s *Server) TagAssignmentHandler(w, r)`
8. **Replace all inline SQL in handlers with Store method calls:**
In `WebhookHandler`: replace `UpdateEvent(event)` with `s.store.UpsertEvent(event)`. Keep all auth checks, method checks, MaxBytesReader, and JSON decode logic. Keep exact same error messages and status codes.
In `UpdatesHandler`: replace `GetUpdates()` with `s.store.GetUpdates()`. Keep JSON encoding logic.
In `DismissHandler`: replace the `mu.Lock(); db.Exec(UPDATE...); mu.Unlock()` block with:
```go
found, err := s.store.AcknowledgeUpdate(image)
if err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
if !found {
http.Error(w, "not found", http.StatusNotFound)
return
}
```
In `TagsHandler` GET case: replace `db.Query(SELECT...)` block with:
```go
tags, err := s.store.ListTags()
if err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(tags)
```
In `TagsHandler` POST case: replace `mu.Lock(); db.Exec(INSERT...)` block with:
```go
tag, err := s.store.CreateTag(req.Name)
if err != nil {
if strings.Contains(err.Error(), "UNIQUE") {
http.Error(w, "conflict: tag name already exists", http.StatusConflict)
return
}
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
json.NewEncoder(w).Encode(tag)
```
In `TagByIDHandler`: replace `mu.Lock(); db.Exec(DELETE...)` block with:
```go
found, err := s.store.DeleteTag(id)
if err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
if !found {
http.Error(w, "not found", http.StatusNotFound)
return
}
```
In `TagAssignmentHandler` PUT case: replace tag-exists check + INSERT with:
```go
exists, err := s.store.TagExists(req.TagID)
if err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
if !exists {
http.Error(w, "not found: tag does not exist", http.StatusNotFound)
return
}
if err := s.store.AssignTag(req.Image, req.TagID); err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
```
In `TagAssignmentHandler` DELETE case: replace `mu.Lock(); db.Exec(DELETE...)` with:
```go
if err := s.store.UnassignTag(req.Image); err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
```
9. **Keep in diunwebhook.go:** the 3 type definitions (`DiunEvent`, `Tag`, `UpdateEntry`) and the `maxBodyBytes` constant.
10. **Update `diunwebhook.go` imports** -- remove `database/sql`, `sync`, and `time` (no longer used once UpdateEvent/GetUpdates move to the store). Keep `crypto/subtle` (webhook auth), `encoding/json`, `errors`, `log`, `net/http`, `strconv`, `strings`. Remove the blank import `_ "modernc.org/sqlite"` (driver registration is carried by migrate.go and main.go).
**Update `cmd/diunwebhook/main.go`:**
Replace the current `InitDB` + `SetWebhookSecret` + package-level handler registration with:
```go
package main

import (
	"context"
	"database/sql"
	"errors"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	diun "awesomeProject/pkg/diunwebhook"
	_ "modernc.org/sqlite"
)

func main() {
	dbPath := os.Getenv("DB_PATH")
	if dbPath == "" {
		dbPath = "./diun.db"
	}
	db, err := sql.Open("sqlite", dbPath)
	if err != nil {
		log.Fatalf("sql.Open: %v", err)
	}
	if err := diun.RunMigrations(db); err != nil {
		log.Fatalf("RunMigrations: %v", err)
	}
	store := diun.NewSQLiteStore(db)

	secret := os.Getenv("WEBHOOK_SECRET")
	if secret == "" {
		log.Println("WARNING: WEBHOOK_SECRET not set — webhook endpoint is unprotected")
	} else {
		log.Println("Webhook endpoint protected with token authentication")
	}
	srv := diun.NewServer(store, secret)

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	mux := http.NewServeMux()
	mux.HandleFunc("/webhook", srv.WebhookHandler)
	mux.HandleFunc("/api/updates/", srv.DismissHandler)
	mux.HandleFunc("/api/updates", srv.UpdatesHandler)
	mux.HandleFunc("/api/tags", srv.TagsHandler)
	mux.HandleFunc("/api/tags/", srv.TagByIDHandler)
	mux.HandleFunc("/api/tag-assignments", srv.TagAssignmentHandler)
	mux.Handle("/", http.FileServer(http.Dir("./frontend/dist")))

	httpSrv := &http.Server{
		Addr:         ":" + port,
		Handler:      mux,
		ReadTimeout:  10 * time.Second,
		WriteTimeout: 10 * time.Second,
		IdleTimeout:  60 * time.Second,
	}

	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)
	go func() {
		log.Printf("Listening on :%s", port)
		if err := httpSrv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
			log.Fatalf("ListenAndServe: %v", err)
		}
	}()
	<-stop

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	if err := httpSrv.Shutdown(ctx); err != nil {
		log.Printf("Shutdown error: %v", err)
	} else {
		log.Println("Server stopped cleanly")
	}
}
```
Key changes in main.go:
- `sql.Open` called directly (not via InitDB)
- `diun.RunMigrations(db)` called before store creation
- `diun.NewSQLiteStore(db)` creates the store (sets PRAGMA, MaxOpenConns internally)
- `diun.NewServer(store, secret)` creates the server
- Route registration uses `srv.WebhookHandler` (method) instead of `diun.WebhookHandler` (package function)
- `_ "modernc.org/sqlite"` blank import is in main.go (driver registration)
</action>
<verify>
<automated>cd /home/jean-luc-makiola/Development/projects/DiunDashboard && go build ./cmd/diunwebhook/ && go build ./pkg/diunwebhook/ && go vet ./... && echo "BUILD+VET OK"</automated>
</verify>
<acceptance_criteria>
- pkg/diunwebhook/diunwebhook.go contains `type Server struct {`
- pkg/diunwebhook/diunwebhook.go contains `func NewServer(store Store, webhookSecret string) *Server`
- pkg/diunwebhook/diunwebhook.go contains `func (s *Server) WebhookHandler(`
- pkg/diunwebhook/diunwebhook.go contains `func (s *Server) UpdatesHandler(`
- pkg/diunwebhook/diunwebhook.go contains `func (s *Server) DismissHandler(`
- pkg/diunwebhook/diunwebhook.go contains `func (s *Server) TagsHandler(`
- pkg/diunwebhook/diunwebhook.go contains `func (s *Server) TagByIDHandler(`
- pkg/diunwebhook/diunwebhook.go contains `func (s *Server) TagAssignmentHandler(`
- pkg/diunwebhook/diunwebhook.go contains `s.store.UpsertEvent` (handler calls store, not direct SQL)
- pkg/diunwebhook/diunwebhook.go does NOT contain `var db *sql.DB` (global removed)
- pkg/diunwebhook/diunwebhook.go does NOT contain `var mu sync.Mutex` (global removed)
- pkg/diunwebhook/diunwebhook.go does NOT contain `var webhookSecret string` (global removed)
- pkg/diunwebhook/diunwebhook.go does NOT contain `func InitDB(` (removed)
- pkg/diunwebhook/diunwebhook.go does NOT contain `func SetWebhookSecret(` (removed)
- pkg/diunwebhook/diunwebhook.go does NOT contain `db.Exec(` or `db.Query(` (no SQL in handlers)
- cmd/diunwebhook/main.go contains `diun.RunMigrations(db)`
- cmd/diunwebhook/main.go contains `diun.NewSQLiteStore(db)`
- cmd/diunwebhook/main.go contains `diun.NewServer(store, secret)`
- cmd/diunwebhook/main.go contains `srv.WebhookHandler` (method reference, not package function)
- `go build ./cmd/diunwebhook/` exits 0
- `go vet ./...` exits 0
</acceptance_criteria>
<done>Handlers are methods on Server calling s.store.X(); no package-level globals remain; main.go wires sql.Open -> RunMigrations -> NewSQLiteStore -> NewServer -> routes; both packages compile and pass go vet</done>
</task>
<task type="auto">
<name>Task 2: Rewrite export_test.go and update all tests for Server/Store</name>
<files>pkg/diunwebhook/export_test.go, pkg/diunwebhook/diunwebhook_test.go</files>
<read_first>
- pkg/diunwebhook/diunwebhook_test.go (all 33 existing tests to convert)
- pkg/diunwebhook/export_test.go (current helpers to replace)
- pkg/diunwebhook/diunwebhook.go (refactored Server/handler signatures from Task 1)
- pkg/diunwebhook/store.go (Store interface)
- pkg/diunwebhook/sqlite_store.go (NewSQLiteStore)
- pkg/diunwebhook/migrate.go (RunMigrations)
- .planning/phases/02-backend-refactor/02-RESEARCH.md (export_test.go redesign pattern)
</read_first>
<action>
**Rewrite `pkg/diunwebhook/export_test.go`:**
Replace the entire file. The old helpers (`UpdatesReset`, `GetUpdatesMap`, `ResetTags`, `ResetWebhookSecret`) relied on package-level globals that no longer exist.
New content:
```go
package diunwebhook

import "database/sql"

// NewTestServer constructs a Server with a fresh in-memory SQLite database.
// Each call returns an isolated server -- tests do not share state.
func NewTestServer() (*Server, error) {
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		return nil, err
	}
	// Pin the pool to one connection: each new connection to a ":memory:"
	// DSN opens its own empty database, so migrations and later queries
	// must share a single connection.
	db.SetMaxOpenConns(1)
	if err := RunMigrations(db); err != nil {
		return nil, err
	}
	store := NewSQLiteStore(db)
	return NewServer(store, ""), nil
}

// NewTestServerWithSecret constructs a Server with webhook authentication enabled.
func NewTestServerWithSecret(secret string) (*Server, error) {
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(1) // same single-connection pinning as NewTestServer
	if err := RunMigrations(db); err != nil {
		return nil, err
	}
	store := NewSQLiteStore(db)
	return NewServer(store, secret), nil
}
```
**Rewrite `pkg/diunwebhook/diunwebhook_test.go`:**
The test file is `package diunwebhook_test` (external test package). Every test that previously called `diun.UpdatesReset()` to get a clean global DB must now call `diun.NewTestServer()` to get its own isolated server.
**Conversion pattern for every test:**
OLD:
```go
func TestFoo(t *testing.T) {
	diun.UpdatesReset()
	// ... uses diun.WebhookHandler, diun.UpdateEvent, diun.GetUpdatesMap, etc.
}
```
NEW:
```go
func TestFoo(t *testing.T) {
	srv, err := diun.NewTestServer()
	if err != nil {
		t.Fatalf("NewTestServer: %v", err)
	}
	// ... uses srv.WebhookHandler, srv.Store().UpsertEvent, etc.
}
```
**Note:** `srv.Store()` does not exist -- the `store` field is unexported, so tests need another way to call `UpsertEvent` and `GetUpdates` directly. Two options:
Option A: Add a `Store()` accessor method to Server (exported, for tests).
Option B: Add test-helper functions in export_test.go that access `s.store` directly (since export_test.go is in the internal package).
**Use Option B** -- add these helpers in export_test.go:
```go
// TestUpsertEvent calls UpsertEvent on the server's store (for test setup).
func (s *Server) TestUpsertEvent(event DiunEvent) error {
	return s.store.UpsertEvent(event)
}

// TestGetUpdates calls GetUpdates on the server's store (for test assertions).
func (s *Server) TestGetUpdates() (map[string]UpdateEntry, error) {
	return s.store.GetUpdates()
}

// TestGetUpdatesMap is a convenience wrapper that returns the map without error.
func (s *Server) TestGetUpdatesMap() map[string]UpdateEntry {
	m, _ := s.store.GetUpdates()
	return m
}
```
**Now convert each test function. Here are the specific conversions for ALL tests:**
1. **Remove `TestMain`** -- it only called `diun.UpdatesReset()` which is no longer needed since each test creates its own server.
2. **`TestUpdateEventAndGetUpdates`** -- replace `diun.UpdatesReset()` with `srv, err := diun.NewTestServer()`. Replace `diun.UpdateEvent(event)` with `srv.TestUpsertEvent(event)`. Replace `diun.GetUpdates()` with `srv.TestGetUpdates()`.
3. **`TestWebhookHandler`** -- replace `diun.UpdatesReset()` with `srv, err := diun.NewTestServer()`. Replace `diun.WebhookHandler(rec, req)` with `srv.WebhookHandler(rec, req)`. Replace `diun.GetUpdatesMap()` with `srv.TestGetUpdatesMap()`.
4. **`TestWebhookHandler_Unauthorized`** -- replace with `srv, err := diun.NewTestServerWithSecret("my-secret")`. Remove `defer diun.ResetWebhookSecret()`. Replace `diun.WebhookHandler` with `srv.WebhookHandler`.
5. **`TestWebhookHandler_WrongToken`** -- same as Unauthorized: use `NewTestServerWithSecret("my-secret")`.
6. **`TestWebhookHandler_ValidToken`** -- use `NewTestServerWithSecret("my-secret")`.
7. **`TestWebhookHandler_NoSecretConfigured`** -- use `diun.NewTestServer()` (no secret = open webhook).
8. **`TestWebhookHandler_BadRequest`** -- use `diun.NewTestServer()`. (Note: the old test did NOT call `UpdatesReset`, but it should use a server now.) Replace `diun.WebhookHandler` with `srv.WebhookHandler`.
9. **`TestUpdatesHandler`** -- use `diun.NewTestServer()`. Replace `diun.UpdateEvent(event)` with `srv.TestUpsertEvent(event)`. Replace `diun.UpdatesHandler` with `srv.UpdatesHandler`.
10. **`TestUpdatesHandler_EncodeError`** -- use `diun.NewTestServer()`. Replace `diun.UpdatesHandler` with `srv.UpdatesHandler`.
11. **`TestWebhookHandler_MethodNotAllowed`** -- use `diun.NewTestServer()`. Replace all `diun.WebhookHandler` with `srv.WebhookHandler`.
12. **`TestWebhookHandler_EmptyImage`** -- use `diun.NewTestServer()`. Replace handler + `GetUpdatesMap` calls.
13. **`TestConcurrentUpdateEvent`** -- use `diun.NewTestServer()`. Replace `diun.UpdateEvent(...)` with `srv.TestUpsertEvent(...)`. Replace `diun.GetUpdatesMap()` with `srv.TestGetUpdatesMap()`. **Note:** `t.Fatalf` must not be called from a goroutine other than the test's own -- it calls `runtime.Goexit`, which stops only the calling goroutine, so the test keeps running after a "fatal" failure. This is a pre-existing bug; fix it minimally by changing `t.Fatalf` to `t.Errorf` inside the goroutine (a channel-based error-collection pattern would be cleaner but changes more code).
14. **`TestMainHandlerIntegration`** -- use `diun.NewTestServer()`. Replace the inline handler router to use `srv.WebhookHandler` and `srv.UpdatesHandler` in the httptest.NewServer setup.
15. **`TestDismissHandler_Success`** -- use `diun.NewTestServer()`. Replace `diun.UpdateEvent` -> `srv.TestUpsertEvent`. Replace `diun.DismissHandler` -> `srv.DismissHandler`. Replace `diun.GetUpdatesMap` -> `srv.TestGetUpdatesMap`.
16. **`TestDismissHandler_NotFound`** -- use `diun.NewTestServer()`. Replace handler call.
17. **`TestDismissHandler_EmptyImage`** -- use `diun.NewTestServer()`. Replace handler call.
18. **`TestDismissHandler_SlashInImageName`** -- use `diun.NewTestServer()`. Replace all calls.
19. **`TestDismissHandler_ReappearsAfterNewWebhook`** -- use `diun.NewTestServer()`. Replace all calls. The `diun.UpdateEvent(...)` call without error check becomes `srv.TestUpsertEvent(...)` -- add an error check.
20. **Helper functions `postTag` and `postTagAndGetID`** -- these need the server as a parameter. Change signatures:
```go
func postTag(t *testing.T, srv *diun.Server, name string) (int, int)
func postTagAndGetID(t *testing.T, srv *diun.Server, name string) int
```
Replace `diun.TagsHandler(rec, req)` with `srv.TagsHandler(rec, req)`.
21. **All tag tests** (`TestCreateTagHandler_Success`, `TestCreateTagHandler_DuplicateName`, `TestCreateTagHandler_EmptyName`, `TestGetTagsHandler_Empty`, `TestGetTagsHandler_WithTags`, `TestDeleteTagHandler_Success`, `TestDeleteTagHandler_NotFound`, `TestDeleteTagHandler_CascadesAssignment`) -- use `diun.NewTestServer()`. Replace all handler calls. Pass `srv` to helper functions.
22. **All tag assignment tests** (`TestTagAssignmentHandler_Assign`, `TestTagAssignmentHandler_Reassign`, `TestTagAssignmentHandler_Unassign`, `TestGetUpdates_IncludesTag`) -- use `diun.NewTestServer()`. Replace all calls.
23. **Oversized body tests** (`TestWebhookHandler_OversizedBody`, `TestTagsHandler_OversizedBody`, `TestTagAssignmentHandler_OversizedBody`) -- use `diun.NewTestServer()`. Replace handler calls.
24. **`TestUpdateEvent_PreservesTagOnUpsert`** -- use `diun.NewTestServer()`. Replace `diun.UpdateEvent` -> `srv.TestUpsertEvent`. Replace handler calls. Replace `diun.GetUpdatesMap` -> `srv.TestGetUpdatesMap`.
**Remove these imports from test file** (no longer needed):
- `os` (was for TestMain's os.Exit)
**Verify all HTTP status codes, error messages, and assertion logic remain IDENTICAL to the original tests.** The only change is the source of the handler function (method on srv instead of package function) and the source of test data (srv.TestUpsertEvent instead of diun.UpdateEvent).
</action>
<verify>
<automated>cd /home/jean-luc-makiola/Development/projects/DiunDashboard && go test -v -count=1 ./pkg/diunwebhook/ 2>&1 | tail -40</automated>
</verify>
<acceptance_criteria>
- pkg/diunwebhook/export_test.go contains `func NewTestServer() (*Server, error)`
- pkg/diunwebhook/export_test.go contains `func NewTestServerWithSecret(secret string) (*Server, error)`
- pkg/diunwebhook/export_test.go contains `func (s *Server) TestUpsertEvent(event DiunEvent) error`
- pkg/diunwebhook/export_test.go contains `func (s *Server) TestGetUpdatesMap() map[string]UpdateEntry`
- pkg/diunwebhook/export_test.go does NOT contain `func UpdatesReset()` (old helper removed)
- pkg/diunwebhook/export_test.go does NOT contain `func ResetWebhookSecret()` (old helper removed)
- pkg/diunwebhook/diunwebhook_test.go does NOT contain `diun.UpdatesReset()` (replaced with NewTestServer)
- pkg/diunwebhook/diunwebhook_test.go does NOT contain `diun.SetWebhookSecret(` (replaced with NewTestServerWithSecret)
- pkg/diunwebhook/diunwebhook_test.go contains `diun.NewTestServer()` (new pattern)
- pkg/diunwebhook/diunwebhook_test.go contains `srv.WebhookHandler(` (method call, not package function)
- pkg/diunwebhook/diunwebhook_test.go contains `srv.TestUpsertEvent(` (test helper)
- pkg/diunwebhook/diunwebhook_test.go contains `srv.TestGetUpdatesMap()` (test helper)
- pkg/diunwebhook/diunwebhook_test.go does NOT contain `func TestMain(` (removed, no longer needed)
- `go test -v -count=1 ./pkg/diunwebhook/` exits 0 with all tests passing
- `go test -v -count=1 ./pkg/diunwebhook/` output contains `PASS`
</acceptance_criteria>
<done>All existing tests pass against the new Server/Store architecture; each test has its own in-memory database; no shared global state; test output shows PASS with 0 failures</done>
</task>
</tasks>
<verification>
- `go test -v -count=1 ./pkg/diunwebhook/` -- ALL tests pass (same test count as before the refactor)
- `go build ./cmd/diunwebhook/` -- binary compiles
- `go vet ./...` -- no issues
- `grep -r 'var db \|var mu \|var webhookSecret' pkg/diunwebhook/diunwebhook.go` -- returns empty (globals removed)
- `grep -r 'db\.Exec\|db\.Query\|db\.QueryRow' pkg/diunwebhook/diunwebhook.go` -- returns empty (no SQL in handlers)
- `grep 's\.store\.' pkg/diunwebhook/diunwebhook.go` -- returns multiple matches (handlers use Store interface)
- `grep 'diun\.UpdatesReset' pkg/diunwebhook/diunwebhook_test.go` -- returns empty (old pattern gone)
</verification>
<success_criteria>
- All existing tests pass with zero behavior change (same HTTP status codes, same error messages, same data semantics)
- HTTP handlers contain no SQL -- every persistence call goes through s.store.X()
- Package-level globals db, mu, webhookSecret are deleted from diunwebhook.go
- main.go wires: sql.Open -> RunMigrations -> NewSQLiteStore -> NewServer -> route registration
- Each test creates its own in-memory database via NewTestServer() (parallel-safe)
- go vet passes on all packages
</success_criteria>
<output>
After completion, create `.planning/phases/02-backend-refactor/02-02-SUMMARY.md`
</output>


@@ -0,0 +1,495 @@
# Phase 2: Backend Refactor - Research
**Researched:** 2026-03-23
**Domain:** Go interface extraction, dependency injection, golang-migrate with modernc.org/sqlite
**Confidence:** HIGH
## Summary
Phase 2 replaces three package-level globals (`db`, `mu`, `webhookSecret`) with a `Server` struct that holds a `Store` interface. HTTP handlers become methods on `Server`. SQL is extracted from handlers into named `Store` methods with a concrete `SQLiteStore` implementation. Schema management moves to versioned SQL migration files run by `golang-migrate/v4` at startup via `embed.FS`.
The change is purely structural. No API contracts, no HTTP status codes, no SQL query semantics change. The test suite must pass before the phase is complete. Tests currently rely on `export_test.go` helpers (`UpdatesReset`, `GetUpdatesMap`, `ResetTags`, `ResetWebhookSecret`) that call package-level functions directly — these must be redesigned to work against the new `Server`/`Store` seam.
The critical library constraint is that `golang-migrate/v4/database/sqlite` (not `database/sqlite3`) uses `modernc.org/sqlite` — the same pure-Go driver already in use. This is the only migration path that avoids introducing CGO.
**Primary recommendation:** Extract a `Store` interface with one method per logical operation, implement `SQLiteStore` backed by `*sql.DB`, replace globals with a `Server` struct holding `Store` and `webhookSecret`, move all DDL to embedded SQL files under `migrations/sqlite/`, run migrations on startup via `golang-migrate/v4`.
<user_constraints>
## User Constraints (from CONTEXT.md)
No CONTEXT.md exists for this phase. Constraints are drawn from CLAUDE.md and STATE.md decisions.
### Locked Decisions (from STATE.md Accumulated Context)
- Backend refactor must be behavior-neutral — all existing tests must pass before PostgreSQL is introduced
- No ORM or query builder — raw SQL per store implementation; 8 operations across 3 tables is too small to justify a dependency
- `DATABASE_URL` present activates PostgreSQL; absent falls back to SQLite with `DB_PATH` — no separate `DB_DRIVER` variable (deferred to Phase 3; Store interface must accommodate it)
### Claude's Discretion
- Internal file layout within `pkg/diunwebhook/` and new sub-packages (e.g., `store/`)
- Migration file naming convention within the chosen scheme
- Whether `Server` lives in the same package as `Store` or a separate one
### Deferred Ideas (OUT OF SCOPE for Phase 2)
- PostgreSQL implementation of `Store` (Phase 3)
- Any new API endpoints or behavioral changes
- DATABASE_URL env var routing (Phase 3)
</user_constraints>
<phase_requirements>
## Phase Requirements
| ID | Description | Research Support |
|----|-------------|------------------|
| REFAC-01 | Database operations are behind a Store interface with separate SQLite and PostgreSQL implementations | Store interface design, SQLiteStore struct with `*sql.DB`, method inventory below |
| REFAC-02 | Package-level global state (db, mu, webhookSecret) is replaced with a Server struct that holds dependencies | Server struct pattern, handler-as-method pattern, export_test.go redesign |
| REFAC-03 | Schema migrations use golang-migrate with separate migration directories per dialect (sqlite/, postgres/) | golang-migrate v4.19.1, `database/sqlite` sub-package uses modernc.org/sqlite, iofs embed.FS source |
</phase_requirements>
---
## Standard Stack
### Core
| Library | Version | Purpose | Why Standard |
|---------|---------|---------|--------------|
| `github.com/golang-migrate/migrate/v4` | v4.19.1 | Versioned schema migrations | De-facto standard in Go; supports multiple DB drivers; iofs source enables single-binary deploy |
| `github.com/golang-migrate/migrate/v4/database/sqlite` | v4.19.1 (same module) | golang-migrate driver for modernc.org/sqlite | Only non-CGO sqlite driver in golang-migrate; uses pure-Go modernc.org/sqlite |
| `github.com/golang-migrate/migrate/v4/source/iofs` | v4.19.1 (same module) | Read migrations from embed.FS | Keeps migrations bundled in the binary — required for single-binary Docker deploy |
**Note on sqlite sub-package:** Use `database/sqlite` (NOT `database/sqlite3`). The `sqlite3` sub-package requires CGO via `mattn/go-sqlite3`, which violates the project's no-CGO constraint. Verified against pkg.go.dev documentation.
### Supporting (already in go.mod — no new additions for the Store/Server pattern)
| Library | Version | Purpose | When to Use |
|---------|---------|---------|-------------|
| `modernc.org/sqlite` | v1.46.1 (current) | Pure-Go SQLite driver | Already present; imported as `_ "modernc.org/sqlite"` for side-effect registration |
| Go stdlib `sync` | — | `sync.Mutex` inside SQLiteStore | Mutex moves from package-level to a field on SQLiteStore |
| Go stdlib `embed` | — | `//go:embed` for migration files | Embed SQL files into compiled binary |
### Alternatives Considered
| Instead of | Could Use | Tradeoff |
|------------|-----------|----------|
| `golang-migrate` iofs source | Raw DDL in `InitDB` (current) | Current approach blocks versioned migrations and PostgreSQL parity; golang-migrate handles ordering, locking, and checksums |
| `database/sqlite` sub-package | `database/sqlite3` | `sqlite3` requires CGO — forbidden by project constraint |
| Handler methods on `Server` | Function closures over `Server` | Methods are idiomatic Go, simpler to test, consistent with `net/http` handler signature `func(w, r)` via thin wrapper |
**Installation (new dependencies only):**
```bash
go get github.com/golang-migrate/migrate/v4@v4.19.1
go get github.com/golang-migrate/migrate/v4/database/sqlite
go get github.com/golang-migrate/migrate/v4/source/iofs
```
**Version verification:** `v4.19.1` confirmed via Go module proxy (`proxy.golang.org`) on 2026-03-23. Published 2025-11-29.
---
## Architecture Patterns
### Recommended Project Structure
```
pkg/diunwebhook/
├── diunwebhook.go # Types (DiunEvent, UpdateEntry, Tag), Server struct, handler methods
├── store.go # Store interface definition
├── sqlite_store.go # SQLiteStore — concrete implementation
├── migrate.go # RunMigrations() using golang-migrate + iofs
├── export_test.go # Test-only helpers (redesigned for Server/Store)
├── diunwebhook_test.go # Handler tests (unchanged HTTP assertions)
└── migrations/
└── sqlite/
├── 0001_initial_schema.up.sql
├── 0001_initial_schema.down.sql
└── 0002_add_acknowledged_at.up.sql # baseline migration for existing acknowledged_at column
cmd/diunwebhook/
└── main.go # Constructs SQLiteStore, calls RunMigrations, builds Server, registers routes
```
**Why keep everything in `pkg/diunwebhook/`:** CLAUDE.md says "No barrel files; single source file" — this phase is allowed to split into multiple files within the same package to keep things navigable, but a new sub-package is not required. All existing import paths (`awesomeProject/pkg/diunwebhook`) stay valid.
### Pattern 1: Store Interface
**What:** A Go interface that names every persistence operation the HTTP handlers need. One method per logical operation. No `*sql.DB` in the interface — callers never see the database type.
**When to use:** Always, for all DB access from handlers.
```go
// store.go
type Store interface {
UpsertEvent(event DiunEvent) error
GetUpdates() (map[string]UpdateEntry, error)
AcknowledgeUpdate(image string) (found bool, err error)
ListTags() ([]Tag, error)
CreateTag(name string) (Tag, error)
DeleteTag(id int) (found bool, err error)
AssignTag(image string, tagID int) error
UnassignTag(image string) error
TagExists(id int) (bool, error)
}
```
**Method count:** 9 methods covering all current SQL operations across `updates`, `tags`, and `tag_assignments`. Each method maps 1:1 to a logical DB operation that currently appears inline in a handler or in `UpdateEvent`/`GetUpdates`.
### Pattern 2: SQLiteStore
**What:** Concrete struct holding `*sql.DB` and `sync.Mutex`. Implements every method on `Store`. All SQL currently in handlers moves here.
```go
// sqlite_store.go
type SQLiteStore struct {
db *sql.DB
mu sync.Mutex
}
func NewSQLiteStore(db *sql.DB) *SQLiteStore {
return &SQLiteStore{db: db}
}
func (s *SQLiteStore) UpsertEvent(event DiunEvent) error {
s.mu.Lock()
defer s.mu.Unlock()
_, err := s.db.Exec(`INSERT INTO updates (...) ON CONFLICT ...`, ...)
return err
}
```
**Key:** The mutex moves from a package global `var mu sync.Mutex` to a `SQLiteStore` field. This enables parallel tests (each test gets its own `SQLiteStore` with its own in-memory DB).
### Pattern 3: Server Struct
**What:** Holds the `Store` interface and `webhookSecret`. Handler methods hang off `Server`. `main.go` constructs it and registers routes.
```go
// diunwebhook.go
type Server struct {
store Store
webhookSecret string
}
func NewServer(store Store, webhookSecret string) *Server {
return &Server{store: store, webhookSecret: webhookSecret}
}
func (s *Server) WebhookHandler(w http.ResponseWriter, r *http.Request) { ... }
func (s *Server) UpdatesHandler(w http.ResponseWriter, r *http.Request) { ... }
// ... etc
```
**Route registration in main.go:**
```go
srv := diun.NewServer(store, secret)
mux.HandleFunc("/webhook", srv.WebhookHandler)
mux.HandleFunc("/api/updates/", srv.DismissHandler)
// ...
```
### Pattern 4: RunMigrations with embed.FS
**What:** `RunMigrations(db *sql.DB)` uses `golang-migrate/v4` to apply versioned SQL files embedded in the binary. Called from `main.go` before routes are registered.
```go
// migrate.go
import (
	"database/sql"
	"embed"
	"errors"

	"github.com/golang-migrate/migrate/v4"
	"github.com/golang-migrate/migrate/v4/database/sqlite"
	"github.com/golang-migrate/migrate/v4/source/iofs"
	_ "modernc.org/sqlite"
)

//go:embed migrations/sqlite
var sqliteMigrations embed.FS

func RunMigrations(db *sql.DB) error {
	src, err := iofs.New(sqliteMigrations, "migrations/sqlite")
	if err != nil {
		return err
	}
	driver, err := sqlite.WithInstance(db, &sqlite.Config{})
	if err != nil {
		return err
	}
	m, err := migrate.NewWithInstance("iofs", src, "sqlite", driver)
	if err != nil {
		return err
	}
	if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) {
		return err
	}
	return nil
}
```
**CRITICAL:** `migrate.ErrNoChange` is not an error — it means all migrations already applied. Must not treat it as failure.
### Pattern 5: export_test.go Redesign
**What:** The current `export_test.go` calls package-level functions (`InitDB`, `db.Exec`). After the refactor, these globals are gone. Test helpers must construct a `Server` backed by a `SQLiteStore` using an in-memory DB.
```go
// export_test.go — new design
package diunwebhook
// NewTestServer constructs a Server with a fresh in-memory SQLiteStore.
// Used by test files to get a clean server per test.
func NewTestServer() (*Server, error) {
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(1) // each ":memory:" connection is its own empty DB; share one
	if err := RunMigrations(db); err != nil {
		return nil, err
	}
	store := NewSQLiteStore(db)
	return NewServer(store, ""), nil
}
```
Tests that previously called `diun.UpdatesReset()` will call `diun.NewTestServer()` at the start of each test and operate on the returned server instance. Handler tests pass `srv.WebhookHandler` instead of `diun.WebhookHandler`.
**Impact on test signatures:** All test functions that currently call package-level handler functions will receive the server as a local variable. `TestMain` simplifies (no global reset needed — each test owns its DB).
### Anti-Patterns to Avoid
- **Direct SQL in handlers:** After REFAC-01, handlers must call `s.store.SomeMethod(...)` — never `s.store.(*SQLiteStore).db.Exec(...)`. The interface hides the DB type.
- **Single migration file containing all schema:** `InitDB`'s current DDL represents TWO logical migrations (initial schema + `acknowledged_at` column). These must become two separate numbered files so existing databases do not re-apply the already-applied column addition. Baseline migration (file 0001) represents the state of existing databases; file 0002 adds `acknowledged_at` to represent the already-run ad-hoc migration.
- **Calling `m.Up()` and treating `ErrNoChange` as fatal:** Always check `err != migrate.ErrNoChange` before returning an error from `RunMigrations`.
- **Removing `PRAGMA foreign_keys = ON` during refactor:** The SQLite connection setup must still run this pragma. Move it from `InitDB` into `NewSQLiteStore` or the connection-open step in `main.go`.
- **Replacing `db.SetMaxOpenConns(1)` with nothing:** This setting prevents concurrent write contention in SQLite. It must be preserved on the `*sql.DB` instance passed to `NewSQLiteStore`.
---
## Don't Hand-Roll
| Problem | Don't Build | Use Instead | Why |
|---------|-------------|-------------|-----|
| Versioned schema migration | Custom migration runner with version table | `golang-migrate/v4` | Migration ordering, dirty-state detection, locking, and ErrNoChange handling already solved |
| Embedding SQL files in binary | Copying SQL into string constants | Go `embed.FS` + `iofs` source | Single-binary deploy; embed handles file reading at compile time |
| Migration down-file generation | Omitting `.down.sql` files | Create stub down files | golang-migrate requires down files exist even if empty to resolve migration history |
**Key insight:** The migration machinery looks simple but has multiple edge cases (dirty state after failed migration, concurrent migration race, no-change idempotency). golang-migrate handles all of these.
---
## Common Pitfalls
### Pitfall 1: Wrong sqlite sub-package (CGO contamination)
**What goes wrong:** Developer imports `github.com/golang-migrate/migrate/v4/database/sqlite3` (the one with the `3`) — this pulls in `mattn/go-sqlite3` which requires CGO. The build succeeds on developer machines with a C compiler but fails in Alpine/cross-compilation.
**Why it happens:** The two sub-packages have nearly identical names. The `sqlite3` one appears first in search results.
**How to avoid:** Always import `database/sqlite` (no `3`). Verify with `go mod graph | grep sqlite`.
**Warning signs:** Build output mentions `gcc` or `cgo`; `go build` fails with "cgo: C compiler not found".
### Pitfall 2: ErrNoChange treated as fatal
**What goes wrong:** `RunMigrations` returns an error when the database is already at the latest migration version, causing every startup after the first to crash.
**Why it happens:** `m.Up()` returns `migrate.ErrNoChange` (a non-nil error) when no new migrations exist.
**How to avoid:** `if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) { return err }`.
**Warning signs:** App starts successfully once, crashes with "no change" on every subsequent start.
### Pitfall 3: PRAGMA foreign_keys lost during refactor
**What goes wrong:** The pragma is in `InitDB` which is being deleted. If it is not moved to the connection-open step, foreign key cascades silently stop working. The `TestDeleteTagHandler_CascadesAssignment` test catches this — but only if the pragma is active.
**Why it happens:** Refactor focuses on interface extraction and forgets the SQLite-specific connection setup.
**How to avoid:** Set `PRAGMA foreign_keys = ON` immediately after `sql.Open` and before any queries, inside `NewSQLiteStore` or via `sql.DB.Exec` in `main.go`.
### Pitfall 4: Migration baseline mismatch with existing databases
**What goes wrong:** Migration file 0001 creates the `acknowledged_at` column, but existing databases already have it (from the current ad-hoc migration). golang-migrate fails with "column already exists".
**Why it happens:** The baseline migration (0001) must represent the schema of *new* databases, while the ad-hoc migration (`ALTER TABLE updates ADD COLUMN acknowledged_at TEXT`) already ran on all existing ones.
**How to avoid:** Have `0001_initial_schema.up.sql` create all three tables with `acknowledged_at` included from the start, using `CREATE TABLE IF NOT EXISTS` so the statements are harmless on tables that already exist. On a fresh database, 0001 produces the complete schema. On an existing database that pre-dates migration tracking, golang-migrate creates its `schema_migrations` table, runs 0001, and the `IF NOT EXISTS` guards skip the tables that are already there -- including the `acknowledged_at` column added by the old ad-hoc `ALTER TABLE`. No separate 0002 migration is strictly required; if one is kept for history parity, it must be a no-op on databases that already ran the ALTER.
**Warning signs:** Integration test with a pre-seeded SQLite file fails; startup error "table already exists" or "duplicate column name".
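Purely as an illustrative sketch of that approach (column lists are hypothetical -- the real file must mirror the DDL currently in `InitDB`), `0001_initial_schema.up.sql` could look like:

```sql
-- 0001_initial_schema.up.sql (hypothetical sketch; copy real columns from InitDB)
CREATE TABLE IF NOT EXISTS updates (
    image TEXT PRIMARY KEY,
    -- ...remaining event columns from InitDB...
    acknowledged_at TEXT  -- present from the start, so fresh DBs need no ALTER
);

CREATE TABLE IF NOT EXISTS tags (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL UNIQUE
);

CREATE TABLE IF NOT EXISTS tag_assignments (
    image TEXT PRIMARY KEY,
    tag_id INTEGER NOT NULL REFERENCES tags(id) ON DELETE CASCADE
);
```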
### Pitfall 5: export_test.go still references deleted globals
**What goes wrong:** After removing `var db`, `var mu`, `var webhookSecret`, the `export_test.go` that calls `db.Exec(...)` or `InitDB(":memory:")` directly fails to compile.
**Why it happens:** export_test.go provides internal access that previously relied on the globals.
**How to avoid:** Rewrite export_test.go to use `NewTestServer()` (a test-only constructor that returns a fresh `*Server` with in-memory DB). All test helpers become methods on `*Server` or use the public `Store` interface.
### Pitfall 6: INSERT OR REPLACE in TagAssignmentHandler
**What goes wrong:** The current handler uses `INSERT OR REPLACE INTO tag_assignments` — this is correct for SQLite but differs from the `ON CONFLICT DO UPDATE` pattern used in `UpdateEvent`. The `AssignTag` Store method should preserve the working behavior, not silently change semantics.
**Why it happens:** Developer unifies syntax without checking that both approaches are semantically identical for the tag_assignments table.
**How to avoid:** Keep `INSERT OR REPLACE` in `SQLiteStore.AssignTag` (it is correct — tag_assignments has `image` as PRIMARY KEY so REPLACE works). Document the intent.
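The semantic difference, side by side (column names beyond `image` and `tag_id` are assumptions from the operations inventory, not verified schema):

```sql
-- tag_assignments: image is the PRIMARY KEY and the row has no other
-- columns worth preserving, so REPLACE (delete + re-insert) is fine:
INSERT OR REPLACE INTO tag_assignments (image, tag_id) VALUES (?, ?);

-- updates: REPLACE would wipe columns not supplied in the INSERT
-- (e.g. acknowledged_at), so the targeted upsert is required:
INSERT INTO updates (image, ...) VALUES (?, ...)
ON CONFLICT(image) DO UPDATE SET ...;
```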
---
## Code Examples
### Store interface (verified pattern)
```go
// Source: project-derived from current diunwebhook.go SQL operations audit
type Store interface {
	UpsertEvent(event DiunEvent) error
	GetUpdates() (map[string]UpdateEntry, error)
	AcknowledgeUpdate(image string) (found bool, err error)
	ListTags() ([]Tag, error)
	CreateTag(name string) (Tag, error)
	DeleteTag(id int) (found bool, err error)
	AssignTag(image string, tagID int) error
	UnassignTag(image string) error
	TagExists(id int) (bool, error)
}
```
### golang-migrate with embed.FS + modernc/sqlite (verified against pkg.go.dev)
```go
// Source: pkg.go.dev/github.com/golang-migrate/migrate/v4/source/iofs
import (
	"database/sql"
	"embed"
	"errors"

	"github.com/golang-migrate/migrate/v4"
	sqlitemigrate "github.com/golang-migrate/migrate/v4/database/sqlite"
	"github.com/golang-migrate/migrate/v4/source/iofs"
)

//go:embed migrations/sqlite
var sqliteMigrations embed.FS

func RunMigrations(db *sql.DB) error {
	src, err := iofs.New(sqliteMigrations, "migrations/sqlite")
	if err != nil {
		return err
	}
	driver, err := sqlitemigrate.WithInstance(db, &sqlitemigrate.Config{})
	if err != nil {
		return err
	}
	m, err := migrate.NewWithInstance("iofs", src, "sqlite", driver)
	if err != nil {
		return err
	}
	if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) {
		return err
	}
	return nil
}
```
### Migration file naming convention
```
migrations/sqlite/
0001_initial_schema.up.sql -- CREATE TABLE IF NOT EXISTS updates, tags, tag_assignments
0001_initial_schema.down.sql -- DROP TABLE tag_assignments; DROP TABLE tags; DROP TABLE updates
0002_acknowledged_at.up.sql -- (empty or no-op: column exists in 0001 baseline)
0002_acknowledged_at.down.sql -- (empty)
```
**Note on 0002:** The current `InitDB` runs an ad-hoc `ALTER TABLE updates ADD COLUMN acknowledged_at TEXT`. Since this is a greenfield migration setup and 0001 includes `acknowledged_at` in its CREATE TABLE statements, 0002 has nothing to do. Keep it only if you want the migration history to document when the column was added (it would contain just a comment); otherwise a single-file baseline (0001 only) is simpler and correct.
### Handler method on Server (verified pattern for net/http)
```go
// Source: project CLAUDE.md conventions + stdlib net/http
func (s *Server) WebhookHandler(w http.ResponseWriter, r *http.Request) {
	if s.webhookSecret != "" {
		auth := r.Header.Get("Authorization")
		if subtle.ConstantTimeCompare([]byte(auth), []byte(s.webhookSecret)) != 1 {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
	}
	if r.Method != http.MethodPost { /* ... */ }
	// ...
	if err := s.store.UpsertEvent(event); err != nil {
		log.Printf("WebhookHandler: failed to store event: %v", err)
		http.Error(w, "internal error", http.StatusInternalServerError)
		return
	}
}
```
---
## SQL Operations Inventory
All current SQL in `diunwebhook.go` that must move into `SQLiteStore` methods:
| Current location | Operation | Store method |
|-----------------|-----------|--------------|
| `UpdateEvent()` | UPSERT into `updates` | `UpsertEvent` |
| `GetUpdates()` | SELECT updates JOIN tags | `GetUpdates` |
| `DismissHandler` | UPDATE `acknowledged_at` | `AcknowledgeUpdate` |
| `TagsHandler GET` | SELECT from `tags` | `ListTags` |
| `TagsHandler POST` | INSERT into `tags` | `CreateTag` |
| `TagByIDHandler DELETE` | DELETE from `tags` | `DeleteTag` |
| `TagAssignmentHandler PUT` (check) | SELECT COUNT from `tags` | `TagExists` |
| `TagAssignmentHandler PUT` (assign) | INSERT OR REPLACE into `tag_assignments` | `AssignTag` |
| `TagAssignmentHandler DELETE` | DELETE from `tag_assignments` | `UnassignTag` |
**Total: 9 Store methods.** All inline SQL moves to `SQLiteStore`. Handlers call `s.store.X(...)` only.
---
## State of the Art
| Old Approach | Current Approach | When Changed | Impact |
|--------------|------------------|--------------|--------|
| Ad-hoc DDL in application code | Versioned migration files | golang-migrate has been standard since ~2017 | Migration history tracked; dirty-state recovery available |
| Package-level globals for DB | Struct-held dependencies | Standard Go since Go 1.0; best practice since ~2016 | Enables parallel tests, multiple instances |
| CGO SQLite drivers | Pure-Go `modernc.org/sqlite` | ~2020 | No C toolchain needed; Alpine-friendly |
**Deprecated/outdated patterns in this codebase:**
- `var db *sql.DB` (package-level): replaced by `SQLiteStore.db` field
- `var mu sync.Mutex` (package-level): replaced by `SQLiteStore.mu` field
- `var webhookSecret string` (package-level): replaced by `Server.webhookSecret` field
- `SetWebhookSecret()` function: replaced by `NewServer(store, secret)` constructor
- `InitDB()` function: replaced by `RunMigrations()` + `NewSQLiteStore()`
- `export_test.go` calling `InitDB(":memory:")`: replaced by `NewTestServer()` constructor
---
## Open Questions
1. **Migration 0001 vs 0001+0002 baseline**
- What we know: The current schema has `acknowledged_at` added via an ad-hoc migration after initial creation. Two approaches exist: (a) single 0001 migration that creates all tables including `acknowledged_at` from the start; (b) 0001 creates original schema, 0002 adds `acknowledged_at`.
- What's unclear: Whether any existing deployed databases lack `acknowledged_at`. The code has `_, _ = db.Exec("ALTER TABLE ... ADD COLUMN acknowledged_at TEXT")` which silently ignores errors — meaning every database that ran this code has the column.
   - Recommendation: Use a single 0001 migration with the full current schema (including `acknowledged_at`). golang-migrate decides what to run from its `schema_migrations` table, not from whether application tables exist. A new database has neither, so 0001 runs and creates everything. An existing database has the tables but no `schema_migrations` yet, so golang-migrate still attempts 0001; because 0001 uses `CREATE TABLE IF NOT EXISTS`, it succeeds without altering the existing tables and then records version 1. This is the safe path for both cases.
2. **`TagExists` vs inline check in `AssignTag`**
- What we know: `TagAssignmentHandler` currently does a `SELECT COUNT(*)` before the INSERT. Some designs inline this into `AssignTag` and return an error code when the tag is missing.
- What's unclear: Whether the `not found` vs `internal error` distinction in the handler is best expressed as a separate `TagExists` call or a sentinel error from `AssignTag`.
- Recommendation: Keep `TagExists` as a separate method matching the current two-step pattern. This keeps the Store methods simple and the handler logic readable. A future refactor can merge them.
---
## Environment Availability
Step 2.6: SKIPPED — this phase is code/configuration-only. All changes are within the Go module already present. No new external services, CLIs, or runtimes are required beyond the existing Go 1.26 toolchain.
---
## Project Constraints (from CLAUDE.md)
The planner MUST verify all generated plans comply with these directives:
| Directive | Source | Applies To |
|-----------|--------|------------|
| No CGO — use `modernc.org/sqlite` only | CLAUDE.md Constraints | golang-migrate sub-package selection |
| Pure Go SQLite driver (`modernc.org/sqlite`) registered as `"sqlite"` | CLAUDE.md Key Dependencies | `sql.Open("sqlite", path)` — never `"sqlite3"` |
| No ORM or query builder | STATE.md Decisions | All SQLiteStore methods use raw `database/sql` |
| `go vet` runs in CI; `gofmt` enforced | CLAUDE.md Code Style | All new Go files must be gofmt-compliant |
| Handler naming pattern: `<Noun>Handler` | CLAUDE.md Naming Patterns | Handler methods on Server keep existing names |
| Test functions: `Test<FunctionName>_<Scenario>` | CLAUDE.md Naming Patterns | New test functions follow this convention |
| No barrel files; logic in `diunwebhook.go` | CLAUDE.md Module Design | New files within package are fine; no new packages required |
| Error messages lowercase: `"internal error"`, `"not found"` | CLAUDE.md Error Handling | Handler error strings must not change |
| `log.Printf` with handler name prefix on errors | CLAUDE.md Logging | e.g., `"WebhookHandler: failed to store event: %v"` |
| Single-container Docker deploy | CLAUDE.md Deployment | Migrations must run at startup from embedded files — no external migration tool |
| Backward compatible — existing SQLite users upgrade without data loss | CLAUDE.md Constraints | Migration 0001 must use `CREATE TABLE IF NOT EXISTS` |
---
## Sources
### Primary (HIGH confidence)
- `pkg.go.dev/github.com/golang-migrate/migrate/v4` — version v4.19.1 confirmed via Go module proxy on 2026-03-23
- `pkg.go.dev/github.com/golang-migrate/migrate/v4/database/sqlite` — confirmed uses `modernc.org/sqlite` (pure Go, not CGO)
- `pkg.go.dev/github.com/golang-migrate/migrate/v4/source/iofs` — `iofs.New(fsys, path)` API signature verified
- Project source: `pkg/diunwebhook/diunwebhook.go` — complete SQL operations inventory derived from direct code read
### Secondary (MEDIUM confidence)
- `github.com/golang-migrate/migrate/blob/master/database/sqlite/README.md` — confirms modernc.org/sqlite driver and pure-Go status
### Tertiary (LOW confidence)
- WebSearch results on Go Store interface patterns — general patterns verified against known stdlib conventions; no single authoritative source
---
## Metadata
**Confidence breakdown:**
- Standard stack: HIGH — golang-migrate version confirmed from Go proxy; sqlite sub-package driver verified from pkg.go.dev
- Architecture (Store interface, Server struct): HIGH — derived directly from auditing current source code; all 9 operations enumerated
- Migration design: HIGH — iofs API verified; ErrNoChange behavior documented in pkg.go.dev
- Pitfalls: HIGH — CGO pitfall verified by checking sqlite vs sqlite3 sub-packages; other pitfalls derived from code analysis
**Research date:** 2026-03-23
**Valid until:** 2026-09-23 (golang-migrate is stable; modernc.org/sqlite API is stable)

# Architecture Patterns
**Domain:** Container update dashboard with dual-database support
**Project:** DiunDashboard
**Researched:** 2026-03-23
**Confidence:** HIGH (based on direct codebase analysis + established Go patterns)
---
## Current Architecture (Before Milestone)
The app is a single monolithic package (`pkg/diunwebhook/diunwebhook.go`) where database logic and HTTP handlers live in the same file and share package-level globals:
```
cmd/diunwebhook/main.go
└── pkg/diunwebhook/diunwebhook.go
├── package-level var db *sql.DB ← global, opaque
├── package-level var mu sync.Mutex ← global, opaque
├── InitDB(), UpdateEvent(), GetUpdates() ← storage functions
└── WebhookHandler, UpdatesHandler, ... ← handlers call db directly
```
**The problem for dual-database support:** SQL is written inline in handler functions and storage functions using SQLite-specific syntax:
- `INSERT OR REPLACE` (SQLite only; PostgreSQL uses `INSERT ... ON CONFLICT DO UPDATE`)
- `datetime('now')` (SQLite only; PostgreSQL uses `NOW()`)
- `AUTOINCREMENT` (SQLite only; PostgreSQL uses `SERIAL` or `GENERATED ALWAYS AS IDENTITY`)
- `PRAGMA foreign_keys = ON` (SQLite only; PostgreSQL enforces FKs by default)
- `modernc.org/sqlite` driver import (SQLite only)
There is no abstraction layer. Adding PostgreSQL directly to the current code would mean `if dialect == "postgres"` branches scattered across 380 lines — unmaintainable.
---
## Recommended Architecture
### Core Pattern: Repository Interface
Extract all database operations behind a Go interface. Each database backend implements the interface. The HTTP handlers receive the interface, not a concrete `*sql.DB`.
```
cmd/diunwebhook/main.go
├── reads DB_DRIVER env var ("sqlite" | "postgres")
├── constructs concrete store (SQLiteStore or PostgresStore)
└── passes store to Server struct
pkg/diunwebhook/
├── store.go ← Store interface definition
├── sqlite.go ← SQLiteStore implements Store
├── postgres.go ← PostgresStore implements Store
├── server.go ← Server struct holds Store, secret; methods = handlers
├── handlers.go ← HTTP handler methods on Server (no direct DB access)
└── models.go ← DiunEvent, UpdateEntry, Tag structs
```
### The Store Interface
```go
// pkg/diunwebhook/store.go
type Store interface {
	// Lifecycle
	Close() error

	// Updates
	UpsertEvent(ctx context.Context, event DiunEvent) error
	GetAllUpdates(ctx context.Context) (map[string]UpdateEntry, error)
	AcknowledgeUpdate(ctx context.Context, image string) (found bool, err error)
	AcknowledgeAll(ctx context.Context) error
	AcknowledgeByTag(ctx context.Context, tagID int) error

	// Tags
	ListTags(ctx context.Context) ([]Tag, error)
	CreateTag(ctx context.Context, name string) (Tag, error)
	DeleteTag(ctx context.Context, id int) (found bool, err error)

	// Tag assignments
	AssignTag(ctx context.Context, image string, tagID int) error
	UnassignTag(ctx context.Context, image string) error
}
```
**Why this interface boundary:**
- Handlers never import a database driver — they only call `Store` methods.
- Tests inject a fake/in-memory implementation with no database.
- Adding a third backend (e.g., MySQL) requires implementing the interface, not modifying handlers.
- The interface expresses domain intent (`AcknowledgeUpdate`) not SQL mechanics (`UPDATE SET acknowledged_at`).
### Server Struct (Replaces Package Globals)
```go
// pkg/diunwebhook/server.go
type Server struct {
	store  Store
	secret string
}

func NewServer(store Store, secret string) *Server {
	return &Server{store: store, secret: secret}
}

// Handler methods: func (s *Server) WebhookHandler(w http.ResponseWriter, r *http.Request)
```
This addresses the "global mutable state" concern in CONCERNS.md. Multiple instances can coexist (useful for tests). Tests construct `NewServer(fakeStore, "")` without touching a real database.
---
## Component Boundaries
| Component | Responsibility | Communicates With | Location |
|-----------|---------------|-------------------|----------|
| `main.go` | Read env vars, construct store, wire server, run HTTP | `Server`, `SQLiteStore` or `PostgresStore` | `cmd/diunwebhook/` |
| `Server` | HTTP request lifecycle: parse, validate, delegate, respond | `Store` interface | `pkg/diunwebhook/server.go` |
| `Store` interface | Contract for all persistence operations | Implemented by `SQLiteStore`, `PostgresStore` | `pkg/diunwebhook/store.go` |
| `SQLiteStore` | All SQLite-specific SQL, schema init, migrations | `database/sql` + `modernc.org/sqlite` | `pkg/diunwebhook/sqlite.go` |
| `PostgresStore` | All PostgreSQL-specific SQL, schema init, migrations | `database/sql` + `pgx` stdlib driver | `pkg/diunwebhook/postgres.go` |
| `models.go` | Shared data structs (`DiunEvent`, `UpdateEntry`, `Tag`) | Imported by all components | `pkg/diunwebhook/models.go` |
| Frontend SPA | Visual dashboard, REST polling, drag-and-drop | HTTP API only (`/api/*`) | `frontend/src/` |
**Strict boundary rules:**
- `Server` never imports `modernc.org/sqlite` or `pgx` — only `Store`.
- `SQLiteStore` and `PostgresStore` never import `net/http`.
- `main.go` is the only place that chooses which backend to construct.
- `models.go` has zero imports beyond stdlib.
---
## Data Flow
### Webhook Ingestion
```
DIUN (external)
POST /webhook
→ Server.WebhookHandler
→ validate auth header (constant-time compare)
→ decode JSON into DiunEvent
→ store.UpsertEvent(ctx, event)
→ SQLiteStore: INSERT INTO updates ... ON CONFLICT(image) DO UPDATE SET ...
OR
→ PostgresStore: INSERT INTO updates ... ON CONFLICT (image) DO UPDATE SET ...
→ 200 OK
```
Both backends use standard SQL UPSERT syntax (fixing the current `INSERT OR REPLACE` bug). The SQL differs only in timestamp functions and driver-specific syntax, isolated to each store file.
### Dashboard Polling
```
Browser (every 5s)
GET /api/updates
→ Server.UpdatesHandler
→ store.GetAllUpdates(ctx)
→ SQLiteStore: SELECT ... LEFT JOIN ... (SQLite datetime handling)
OR
→ PostgresStore: SELECT ... LEFT JOIN ... (PostgreSQL timestamp handling)
→ encode map[string]UpdateEntry as JSON
→ 200 OK with body
```
### Acknowledge Flow
```
Browser click
PATCH /api/updates/{image}
→ Server.DismissHandler
→ extract image from URL path
→ store.AcknowledgeUpdate(ctx, image)
→ SQLiteStore: UPDATE ... SET acknowledged_at = datetime('now') WHERE image = ?
OR
→ PostgresStore: UPDATE ... SET acknowledged_at = NOW() WHERE image = $1
→ if not found: 404; else 204
```
### Startup / Initialization
```
main()
→ read DB_DRIVER env var ("sqlite" default, "postgres" opt-in)
→ if sqlite: NewSQLiteStore(DB_PATH) → opens modernc/sqlite, runs migrations
→ if postgres: NewPostgresStore(DSN) → opens pgx driver, runs migrations
→ NewServer(store, WEBHOOK_SECRET)
→ register handler methods on mux
→ srv.ListenAndServe()
```
---
## Migration Strategy: Dual Schema Management
Each store manages its own schema independently. No shared migration runner.
### SQLiteStore migrations
```go
func (s *SQLiteStore) migrate() error {
	// Enable FK enforcement (fixes current bug); must run before any query.
	if _, err := s.db.Exec("PRAGMA foreign_keys = ON"); err != nil {
		return err
	}
	// Create tables idempotently with CREATE TABLE IF NOT EXISTS, then
	// apply ALTER TABLE migrations, ignoring "duplicate column" errors
	// so re-runs are safe.
	// Future: schema_version table for tracked migrations.
	return nil
}
```
### PostgresStore migrations
```go
func (s *PostgresStore) migrate() error {
	// CREATE TABLE IF NOT EXISTS statements with PostgreSQL syntax:
	// SERIAL/IDENTITY for auto-increment, TIMESTAMPTZ (not TEXT) for timestamps.
	if _, err := s.db.Exec(`CREATE TABLE IF NOT EXISTS updates (...)`); err != nil {
		return err
	}
	// FK enforcement is on by default in PostgreSQL; no PRAGMA needed.
	// Future: schema_version table for tracked migrations.
	return nil
}
```
**Key difference:** SQLite stores timestamps as RFC3339 TEXT (current behavior, must be preserved for backward compatibility). PostgreSQL stores timestamps as `TIMESTAMPTZ`. Each store handles its own serialization/deserialization of `time.Time`.
---
## Patterns to Follow
### Pattern 1: Constructor-Injected Store
**What:** `NewServer(store Store, secret string)` — store is a parameter, not a global.
**When:** Always. This replaces `var db *sql.DB` and `var mu sync.Mutex` package globals.
**Why:** Enables parallel test execution (each test creates its own `Server` with its own store). Eliminates the "single instance per process" constraint documented in CONCERNS.md.
### Pattern 2: Context Propagation
**What:** All `Store` interface methods accept `context.Context` as first argument.
**When:** From the initial Store interface design — do not add it later.
**Why:** Enables request cancellation and timeout propagation. PostgreSQL's `pgx` driver uses context natively. Without context, long-running queries cannot be cancelled when the client disconnects.
### Pattern 3: Driver-Specific SQL Isolated in Store Files
**What:** Each store file contains all SQL for that backend. No SQL strings in handlers.
**When:** Any time a handler needs to read or write data — call a Store method instead.
**Why:** SQLite uses `?` placeholders; PostgreSQL uses `$1, $2`. SQLite uses `datetime('now')`; PostgreSQL uses `NOW()`. SQLite uses `INTEGER PRIMARY KEY AUTOINCREMENT`; PostgreSQL uses `BIGSERIAL`. Mixing these in handler code creates unmaintainable conditional branches.
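The dialect deltas gathered in one place (column lists abbreviated; exact names are assumptions):

```sql
-- SQLiteStore: ? placeholders, datetime('now') timestamps
UPDATE updates SET acknowledged_at = datetime('now') WHERE image = ?;

-- PostgresStore: $n placeholders, NOW() timestamps
UPDATE updates SET acknowledged_at = NOW() WHERE image = $1;

-- Auto-increment differs in DDL as well:
--   SQLite:     id INTEGER PRIMARY KEY AUTOINCREMENT
--   PostgreSQL: id BIGINT GENERATED ALWAYS AS IDENTITY
```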
### Pattern 4: Idempotent Schema Creation
**What:** Both store constructors run schema setup on every startup via `CREATE TABLE IF NOT EXISTS`.
**When:** In `NewSQLiteStore()` and `NewPostgresStore()` constructors.
**Why:** Preserves current behavior where existing databases are safely upgraded. No external migration tool required for the current scope.
---
## Anti-Patterns to Avoid
### Anti-Pattern 1: Dialect Switches in Handlers
**What:** `if s.dialect == "postgres" { query = "..." } else { query = "..." }` inside handler methods.
**Why bad:** Handlers become aware of database internals. Every handler must be updated when adding a new backend. Tests must cover both branches per handler.
**Instead:** Move all dialect differences into the Store implementation. Handlers call `store.AcknowledgeUpdate(ctx, image)` — they never see SQL.
### Anti-Pattern 2: Shared `database/sql` Pool Exposed to Handlers
**What:** Passing `*sql.DB` directly to handlers (as the current package globals effectively do).
**Why bad:** Handlers can write arbitrary SQL, bypassing any abstraction. Type system cannot enforce the boundary.
**Instead:** Expose only the `Store` interface to `Server`. The `*sql.DB` is a private field of `SQLiteStore` / `PostgresStore`.
### Anti-Pattern 3: Single Store File for Both Backends
**What:** One `store.go` file with SQLite and PostgreSQL implementations side by side.
**Why bad:** The two implementations use different drivers, different SQL syntax, different connection setup. Merging them creates a large file with low cohesion.
**Instead:** `sqlite.go` for `SQLiteStore`, `postgres.go` for `PostgresStore`. Both in `pkg/diunwebhook/` package. Build tags are not needed since both compile — `main.go` chooses at runtime.
### Anti-Pattern 4: Reusing the Mutex from the Current Code
**What:** Keeping `var mu sync.Mutex` as a package global once the Store abstraction is introduced.
**Why bad:** `SQLiteStore` needs its own mutex (SQLite single-writer limitation). `PostgresStore` does not — PostgreSQL has its own concurrency control. Sharing a mutex across backends is wrong for Postgres and forces a false constraint.
**Instead:** `SQLiteStore` embeds `sync.Mutex` as a private field. `PostgresStore` does not use a mutex — it relies on `pgx`'s connection pool.
---
## Suggested Build Order
The dependency graph dictates this order. Each step must complete before the next.
### Step 1: Fix Current SQLite Bugs (prerequisite)
Fix `INSERT OR REPLACE` → proper UPSERT, add `PRAGMA foreign_keys = ON`. These bugs exist independent of the refactor and will be harder to fix correctly after the abstraction layer is introduced. Do this on the current flat code, with tests confirming the fix.
**Rationale:** Existing users rely on SQLite working correctly. The refactor must not change behavior — fixing bugs before refactoring means the tests that pass after bugfix become the regression suite for the refactor.
### Step 2: Extract Models
Move `DiunEvent`, `UpdateEntry`, `Tag` into `models.go`. No logic changes. This is a safe mechanical split — confirms the package compiles and tests pass after file reorganization.
**Rationale:** Models are referenced by both Store implementations and by Server. Extracting them first removes a coupling that would otherwise force all files to reference a single monolith.
### Step 3: Define Store Interface + SQLiteStore
Define the `Store` interface in `store.go`. Implement `SQLiteStore` in `sqlite.go` by moving all SQL from the current monolith into `SQLiteStore` methods. All existing tests must still pass with zero behavior changes. This step does not add PostgreSQL — it only restructures.
**Rationale:** Restructuring and new backend introduction must be separate commits. If tests break, the cause is isolated to the refactor, not the PostgreSQL code.
### Step 4: Introduce Server Struct
Refactor `pkg/diunwebhook/` to a struct-based design: `Server` with injected `Store`. Update `main.go` to construct `NewServer(store, secret)` and register `s.WebhookHandler` etc. on the mux. All existing tests must still pass.
**Rationale:** This decouples handler tests from database initialization. Tests can now construct a `Server` with a stub `Store` — faster, no filesystem I/O, parallelisable.
### Step 5: Implement PostgresStore
Add `postgres.go` with `PostgresStore` implementing the `Store` interface. Add `pgx` (`github.com/jackc/pgx/v5`) as a dependency using its `database/sql` compatibility shim (`pgx/v5/stdlib`) to avoid changing the `*sql.DB` usage pattern in `SQLiteStore`. Add a `DB_DRIVER` env var to `main.go` (`"sqlite"` by default, `"postgres"` opt-in). Add a `DATABASE_URL` env var for the PostgreSQL DSN. Update `compose.dev.yml` and deployment docs.
**Rationale:** `pgx/v5/stdlib` registers as a `database/sql` driver, so `PostgresStore` can use the same `*sql.DB` API as `SQLiteStore`. This minimizes the interface surface difference between the two implementations.
### Step 6: Update Docker Compose and Configuration Docs
Update `compose.dev.yml` with a `postgres` service profile. Update deployment documentation for PostgreSQL setup. This is explicitly the last step — infrastructure follows working code.
---
## Scalability Considerations
| Concern | SQLite (current) | PostgreSQL (new) |
|---------|-----------------|-----------------|
| Concurrent writes | Serialized by mutex + `SetMaxOpenConns(1)` | Connection pool, DB-level locking |
| Multiple server instances | Not possible (file lock) | Supported via shared DSN |
| Read performance | `LEFT JOIN` on every poll | Same query; can add indexes |
| Data retention | Unbounded growth | Same; retention policy deferred |
| Connection management | Single connection | `pgx` pool (default 5 conns) |
For the self-hosted single-user target audience, both backends are more than sufficient. PostgreSQL is recommended when the user already runs a PostgreSQL instance (common in Coolify deployments) to avoid volume-mounting complexity and SQLite file permission issues.
---
## Component Interaction Diagram
```
┌─────────────────────────────────────────────────────────┐
│ cmd/diunwebhook/main.go │
│ │
│ DB_DRIVER=sqlite → NewSQLiteStore(DB_PATH) │
│ DB_DRIVER=postgres → NewPostgresStore(DATABASE_URL) │
│ │ │
│ NewServer(store, secret)│ │
└──────────────────────────┼──────────────────────────────┘
┌──────────────────────────────────────────┐
│ Server (pkg/diunwebhook/server.go) │
│ │
│ store Store ◄──── interface boundary │
│ secret string │
│ │
│ .WebhookHandler() │
│ .UpdatesHandler() │
│ .DismissHandler() │
│ .TagsHandler() │
│ .TagByIDHandler() │
│ .TagAssignmentHandler() │
└──────────────┬───────────────────────────┘
│ calls Store methods only
┌──────────────────────────────────────────┐
│ Store interface (store.go) │
│ UpsertEvent / GetAllUpdates / │
│ AcknowledgeUpdate / ListTags / ... │
└────────────┬─────────────────┬───────────┘
│ │
▼ ▼
┌────────────────────┐ ┌──────────────────────┐
│ SQLiteStore │ │ PostgresStore │
│ (sqlite.go) │ │ (postgres.go) │
│ │ │ │
│ modernc.org/sqlite│ │ pgx/v5/stdlib │
│ *sql.DB │ │ *sql.DB │
│ sync.Mutex │ │ (no mutex needed) │
│ SQLite SQL syntax │ │ PostgreSQL SQL syntax│
└────────────────────┘ └──────────────────────┘
```
---
## Sources
- Direct analysis of `pkg/diunwebhook/diunwebhook.go` (current monolith) — HIGH confidence
- Direct analysis of `cmd/diunwebhook/main.go` (entry point) — HIGH confidence
- `.planning/codebase/CONCERNS.md` (identified tech debt) — HIGH confidence
- `.planning/PROJECT.md` (constraints: no CGO, backward compat, dual DB) — HIGH confidence
- Go `database/sql` standard library interface pattern — HIGH confidence (well-established Go idiom)
- `pgx/v5/stdlib` compatibility layer for `database/sql` — MEDIUM confidence (standard approach, verify exact import path during implementation)
---
*Architecture research: 2026-03-23*

# Feature Landscape
**Domain:** Container image update monitoring dashboard (self-hosted)
**Researched:** 2026-03-23
**Confidence note:** Web search and WebFetch tools unavailable in this session. Findings are based on training-data knowledge of Portainer, Watchtower, Dockcheck-web, Diun, Uptime Kuma, and the self-hosted container tooling ecosystem. Confidence levels reflect this constraint.
---
## Table Stakes
Features users expect from any container monitoring dashboard. Missing any of these and the tool feels unfinished or untrustworthy.
| Feature | Why Expected | Complexity | Notes |
|---------|--------------|------------|-------|
| Persistent update list (survives page reload, container restart) | Core value prop — the whole point is to not lose track of what needs updating | Low | Already exists but broken by SQLite bugs; fixing it is table stakes |
| Individual acknowledge/dismiss per image | Minimum viable workflow to mark "I dealt with this" | Low | Already exists |
| Bulk acknowledge — dismiss all | Without this, users with 20+ images must click 20+ times; abandonment is near-certain | Medium | Flagged in CONCERNS.md as missing; very high priority |
| Bulk acknowledge — dismiss by group/tag | If you've tagged a group and updated everything in it, dismissing one at a time is painful | Medium | Depends on tag feature existing (already does) |
| Search / filter by image name | Standard affordance in any list of 10+ items | Medium | Missing; flagged in PROJECT.md as active requirement |
| Filter by status (pending update vs acknowledged) | Separating signal from noise is core to the "nag until you fix it" value prop | Low | Missing; complements search |
| New-update indicator (badge, counter, or highlight) | Users need to know at a glance "something new arrived since I last checked" | Medium | Flagged in PROJECT.md as active requirement |
| Page/tab title update count | Gives browser-tab visibility without opening the page — "DiunDashboard (3)" in the tab | Low | Tiny implementation, high perceived value |
| Data integrity across restarts | If the DB loses data on restart, trust collapses | Medium | High-priority bug: INSERT OR REPLACE + missing FK pragma |
| PostgreSQL option for non-SQLite users | Self-hosters who run Postgres expect it as an option for persistent services | High | Flagged in PROJECT.md; dual-DB is the plan |
---
## Differentiators
Features not universally expected but meaningfully better than the baseline. Build these after table stakes are solid.
| Feature | Value Proposition | Complexity | Notes |
|---------|-------------------|------------|-------|
| Filter by tag/group | Users who've organized images into groups want to scope their view | Low | Tag infrastructure already exists; filter is a frontend-only change |
| Visual "new since last visit" highlight (session-based) | Distinguish newly arrived updates from ones you've already seen | Medium | Requires client-side tracking of "last seen" timestamp (localStorage) |
| Toast / in-page notification on new update arrival (during polling) | Passive, non-intrusive signal when updates arrive while the tab is open | Medium | Uses existing 5-second poll; could compare prior state |
| Browser notification API on new update | Reaches users when the tab is backgrounded | High | Requires permission prompt; risky UX if over-notified; defer |
| Sort order controls (newest first, image name, registry) | Power-user need once list grows beyond 20 images | Low | Pure frontend sort; no backend change needed |
| Filter by registry | Useful for multi-registry setups | Low | Derived from image name; no schema change needed |
| Keyboard shortcuts (bulk dismiss with keypress, focus search) | Power users strongly value keyboard-driven UIs | Medium | Rarely table stakes for self-hosted tools but appreciated |
| Light / dark theme toggle (currently hardcoded dark) | Respects system preferences; accessibility baseline | Low | Flagged in CONCERNS.md; CSS variable change + prefers-color-scheme |
| Drag handle always visible (not hover-only) | Accessibility: keyboard and touch users need discoverable reordering | Low | Flagged in CONCERNS.md |
| Alternative to drag-and-drop for tag assignment | Dropdown select for assigning tags; removes dependency on pointer hover | Medium | Fixes accessibility gap in CONCERNS.md |
| Data retention / auto-cleanup of old acknowledged entries | Prevents unbounded DB growth over months/years | Medium | Configurable TTL for acknowledged records |
---
## Anti-Features
Features to deliberately NOT build in this milestone.
| Anti-Feature | Why Avoid | What to Do Instead |
|--------------|-----------|-------------------|
| Auto-triggering image pulls or container restarts from the dashboard | This app is a viewer, not an orchestrator; acting on the host would require Docker socket access and creates a significant security surface | Remain read-only; users run `docker pull` / Coolify update themselves |
| Notification channel management UI (email, Slack, webhook routing) | DIUN already manages notification channels; duplicating this is wasted effort and creates config drift | Keep DIUN as the notification layer; this dashboard is the persistent record |
| OAuth / multi-user accounts | Single-user self-hosted tool; auth complexity is disproportionate to the use case | Document "don't expose to the public internet"; optional basic auth via reverse proxy is sufficient |
| Real-time WebSocket / SSE updates | The 5-second poll is adequate for this use case; SSE/WS adds complexity without meaningful UX gain for a low-frequency signal | Improve the poll with ETag/If-Modified-Since to reduce wasted bandwidth instead |
| Mobile-native / PWA features | Web-first responsive design is sufficient; self-hosters rarely need a fully offline-capable PWA for an internal tool | Ensure the layout is responsive for mobile browser access |
| Auto-grouping by Docker stack / Compose project | Requires Docker socket access or DIUN metadata changes; significant scope increase | Defer to a dedicated future milestone per PROJECT.md |
| DIUN config management UI | Requires DIUN bundling; out of scope for this milestone | Defer per PROJECT.md |
| Changelog or CVE lookups per image | Valuable but requires external API integrations (Docker Hub, Trivy, etc.); different product scope | Document as a possible future phase |
| Undo for dismiss actions | Adds state complexity; accidental dismisses are recoverable by the next DIUN scan | Keep dismiss as final; communicate this in the UI |
---
## Feature Dependencies
```
Data integrity fixes (SQLite upsert + FK pragma)
→ must precede all UX features (broken data undermines everything)
PostgreSQL support
→ depends on struct-based refactor (global state → Server struct)
→ struct refactor is also a prerequisite for safe parallel tests
Bulk acknowledge (all)
→ no dependencies; purely additive API + frontend work
Bulk acknowledge (by group)
→ depends on tag feature (already exists)
Search / filter by image name
→ no backend dependency; frontend filter on existing GET /api/updates payload
Filter by status
→ no backend dependency; frontend filter
Filter by tag
→ depends on tag data being returned by GET /api/updates (already is)
New-update indicator (badge/counter)
→ depends on frontend comparing poll results across cycles
→ no backend change needed
Page title update count
→ depends on update count being derivable from GET /api/updates (already is)
Toast notification on new arrival
→ depends on new-update indicator logic (same poll comparison)
→ can share implementation
Sort controls
→ no dependencies; pure frontend
Data retention / TTL
→ depends on PostgreSQL support OR can be added to SQLite path independently
→ no frontend dependency
Light/dark theme
→ no dependencies; CSS + localStorage
Drag handle accessibility fix
→ no dependencies
Alternative tag assignment (dropdown)
→ no dependencies
```
---
## MVP Recommendation for This Milestone
The milestone goal is: bug fixes, dual DB, and UX improvements (bulk actions, filtering, search, new-update indicators).
Prioritize in this order:
1. **Fix SQLite data integrity** (UPSERT + FK pragma) — trust foundation; nothing else matters if data is lost
2. **Bulk acknowledge (all + by group)** — the single highest-impact UX addition; drops manual effort from O(n) to O(1)
3. **Search + filter by name/status/tag** — table stakes for any list of >10 items
4. **New-update indicator + page title count** — completes the "persistent visibility" core value with in-page signal
5. **PostgreSQL support** — requires struct refactor; large but well-scoped; enables users who need it
6. **Light/dark theme + accessibility fixes** — low complexity; removes known complaints
Defer to next milestone:
- **Data retention / TTL**: Real but not urgent; unbounded growth is a future problem for most users
- **Toast notifications**: Nice to have but the badge + title count cover the signal adequately
- **Alternative tag assignment (dropdown)**: Accessibility improvement but drag-and-drop exists and works
- **Browser notification API**: High complexity, UX risk, very low reward vs. the badge approach
---
## Sources
- Project context: `.planning/PROJECT.md` (validated requirements and constraints)
- Codebase audit: `.planning/codebase/CONCERNS.md` (confirmed gaps: bulk ops, search, indicators, FK bugs)
- Training-data knowledge of: Portainer CE, Watchtower (no UI), Dockcheck-web, Diun native notifications, Uptime Kuma (comparable self-hosted monitoring dashboard UX patterns) — **MEDIUM confidence** (cannot be verified in this session due to tool restrictions; findings should be spot-checked against current Portainer docs and community forums before roadmap finalization)

# Domain Pitfalls
**Domain:** Go dashboard — SQLite to dual-database (SQLite + PostgreSQL) migration + dashboard UX improvements
**Researched:** 2026-03-23
**Confidence:** HIGH for SQLite/Go-specific pitfalls (sourced directly from codebase evidence); MEDIUM for PostgreSQL dialect differences (from training knowledge, verified against known Go `database/sql` contract)
---
## Critical Pitfalls
Mistakes that cause rewrites, data loss, or silent test passes.
---
### Pitfall 1: Leaking SQLite-specific SQL into "shared" query layer
**What goes wrong:** When adding a PostgreSQL path, developers copy existing SQLite queries and swap the driver — but keep SQLite-isms in the SQL itself. The two most common in this codebase: `datetime('now')` (SQLite built-in, line 225) and `INSERT OR REPLACE` (SQLite only, lines 109 and 352). Both fail silently or loudly on PostgreSQL. PostgreSQL uses `NOW()` and `INSERT ... ON CONFLICT DO UPDATE`.
**Why it happens:** The queries are embedded as raw strings throughout handler functions rather than in a dedicated SQL layer. Each query must be individually audited and conditionally branched or abstracted.
**Consequences:** The PostgreSQL path fails at query time: `datetime('now')` raises an undefined-function error, and `INSERT OR REPLACE` is rejected outright (PostgreSQL does not support this syntax at all). Both surface as returned errors rather than panics, so they are easy to miss anywhere errors are discarded.
**Warning signs:**
- Any raw `db.Exec` or `db.Query` call with `datetime(`, `OR REPLACE`, `AUTOINCREMENT`, `PRAGMA`, or `?` placeholders — all must be replaced or branched for PostgreSQL.
- `?` is the SQLite/MySQL placeholder; PostgreSQL requires `$1`, `$2`, etc.
**Prevention:**
- Define a `Store` interface with methods (`UpsertEvent`, `GetUpdates`, `DismissImage`, etc.) and provide two concrete implementations: `sqliteStore` and `pgStore`.
- Never write raw SQL in HTTP handlers. All SQL lives in the store implementation only.
- Add an integration test that runs against both stores for every write operation; if the schema or SQL diverges the test fails at the driver level.
**Phase mapping:** Must be resolved before any PostgreSQL code is written — this is the foundational refactor that makes dual-DB possible without a maintenance nightmare.
---
### Pitfall 2: `INSERT OR REPLACE` silently deletes tag assignments before PostgreSQL is even added
**What goes wrong:** `UpdateEvent()` (line 109) uses `INSERT OR REPLACE INTO updates`. SQLite implements this as DELETE + INSERT when a conflict is found. Because `tag_assignments.image` is a foreign key referencing `updates.image`, that DELETE step endangers the child row: with `PRAGMA foreign_keys = ON` (not active, confirmed at lines 58-103) a CASCADE would delete the assignment rather than preserve it, and without enforcement the assignment is left dangling against a replaced parent row. The net effect: every time DIUN sends a new event for a tracked image, its tag assignment is lost.
**Why it happens:** The intent of `INSERT OR REPLACE` is to update existing rows, but the mechanism is destructive. The UPSERT syntax (`INSERT ... ON CONFLICT(image) DO UPDATE SET ...`) is the correct tool and has been available since SQLite 3.24 (2018).
**Consequences:** This bug is already in production. Users lose tag assignments every time an image receives a new DIUN event. This directly contributed to the trust erosion described in PROJECT.md. Adding PostgreSQL without fixing this first means the bug ships in both DB paths.
**Warning signs:**
- Tag assignments disappear after DIUN reports a new update for a previously-tagged image.
- `TestDismissHandler_ReappearsAfterNewWebhook` tests the acknowledged-state reset correctly, but no test asserts that the tag survives a second `UpdateEvent` call on the same image.
**Prevention:**
- Replace line 109 with: `INSERT INTO updates (...) VALUES (...) ON CONFLICT(image) DO UPDATE SET diun_version=excluded.diun_version, ...`: this updates the conflicting row in place, so no DELETE fires and `tag_assignments` rows are never touched.
- Add `PRAGMA foreign_keys = ON` immediately after `sql.Open` in `InitDB()`.
- Add a regression test: `UpdateEvent` twice on the same image with a tag assigned between calls; assert tag survives.
**Phase mapping:** Fix before any other work — this is a data-correctness bug affecting existing users.
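A sketch of the corrected statements (column list abridged; the real column set must match the `updates` table):

```sql
-- Run once per connection; SQLite defaults to foreign_keys = OFF.
PRAGMA foreign_keys = ON;

-- True upsert: the conflicting row is updated in place, so no DELETE
-- fires and tag_assignments rows referencing the image survive.
INSERT INTO updates (image, diun_version)
VALUES (?, ?)
ON CONFLICT(image) DO UPDATE SET
    diun_version = excluded.diun_version;
```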
---
### Pitfall 3: Global package-level state makes database abstraction structurally impossible without a refactor
**What goes wrong:** The codebase uses `var db *sql.DB` and `var mu sync.Mutex` at package level (lines 48-52). The `InitDB` function sets the global `db`. Adding PostgreSQL means calling a different `sql.Open` and storing it — but there is only one `db` variable. You cannot run SQLite and PostgreSQL tests in the same process, cannot dependency-inject the store into handlers, and cannot test the two stores independently.
**Why it happens:** The package was written as a single-instance tool, which was appropriate at first. Dual-DB support requires the concept of a "store" that can be swapped — which requires struct-based design.
**Consequences:** If you try to add PostgreSQL without refactoring, you end up with `if dbType == "postgres" { ... } else { ... }` branches scattered across every handler. This is unmaintainable, untestable, and will break if a third DB is ever added.
**Warning signs:**
- Any attempt to pass a PostgreSQL `*sql.DB` to the existing handlers requires changing the global variable, which breaks concurrent tests.
- The test file uses `UpdatesReset()` to reset global state between tests — a design smell that signals the global state problem.
**Prevention:**
- Introduce `type Server struct { store Store; secret string }` where `Store` is an interface.
- Move all handler functions to methods on `Server`.
- `InitDB` becomes a factory: `NewSQLiteStore(path)` or `NewPostgresStore(dsn)` returning the interface.
- Tests construct a fresh `Server` with an in-memory SQLite store; no global state to reset.
**Phase mapping:** This refactor is the prerequisite for dual-DB. Do it as the first step of the milestone, before any PostgreSQL driver work.
---
### Pitfall 4: Schema migration strategy does not scale to dual-DB or multi-version upgrades
**What goes wrong:** The current migration strategy is a single silent `ALTER TABLE` at line 87: `_, _ = db.Exec("ALTER TABLE updates ADD COLUMN acknowledged_at TEXT")`. This works for one SQLite column addition but fails in two ways when expanded: (1) PostgreSQL requires different syntax and error handling, (2) there is no version tracking, so there is no way to know which migrations have already run on an existing database.
**Why it happens:** The approach was acceptable for a single-column addition in a personal project. It does not generalise.
**Consequences:**
- On PostgreSQL, `ALTER TABLE ... ADD COLUMN IF NOT EXISTS` is available but the silent `_, _` error swallow pattern will hide real migration failures.
- If a second column is added in a future milestone, there is no mechanism to skip it on databases that already have it (SQLite does not support `IF NOT EXISTS` on `ADD COLUMN`, so idempotency must come from version tracking).
- Existing user databases upgrading from the current version need all migrations to run in order and idempotently.
**Warning signs:**
- More than one `ALTER TABLE` in `InitDB()`.
- Any `_, _ = db.Exec(...)` where the underscore discards an error on a DDL statement.
**Prevention:**
- Introduce a `schema_migrations` table with a single `version INTEGER` column.
- Write migrations as numbered functions: `migration001`, `migration002`, etc.
- `InitDB` reads the current version and runs only pending migrations.
- Keep migrations simple: pure SQL, no application logic.
- A lightweight library (`golang-migrate/migrate`) can handle this, but for this project's scale a 30-line hand-rolled runner is sufficient and avoids a new dependency.
**Phase mapping:** Implement alongside the Store interface refactor. The migration runner must support both SQLite and PostgreSQL SQL dialects.
---
## Moderate Pitfalls
---
### Pitfall 5: PostgreSQL connection pooling behaves differently than SQLite's forced single connection
**What goes wrong:** The SQLite configuration uses `db.SetMaxOpenConns(1)` to serialize all DB access (line 64). This was the correct choice for SQLite's single-writer model. For PostgreSQL, `MaxOpenConns(1)` is a severe bottleneck and eliminates one of the primary reasons to use PostgreSQL. Removing that call alone is not enough, though: the accompanying `sync.Mutex` must also be dropped from the PostgreSQL path, and dropped correctly.
**Why it happens:** The mutex was added as belt-and-suspenders to the `SetMaxOpenConns(1)` constraint. For PostgreSQL, transactions handle isolation and the driver manages connection pooling correctly. The mutex is not needed and actively harmful at scale.
**Consequences:** Keeping `SetMaxOpenConns(1)` on PostgreSQL caps throughput to sequential queries. Removing it without reviewing the mutex usage can cause incorrect locking (the mutex guards writes, but PostgreSQL transactions should guard atomicity instead).
**Warning signs:**
- The `pgStore` implementation sets `MaxOpenConns(1)` — that is wrong.
- The `pgStore` implementation acquires a `sync.Mutex` around individual `db.Exec` calls instead of using transactions.
**Prevention:**
- In `sqliteStore`: keep `SetMaxOpenConns(1)` and the mutex (SQLite needs it).
- In `pgStore`: use PostgreSQL's default pooling (`SetMaxOpenConns` appropriate to load, e.g. 10-25), use `db.BeginTx` for operations that require atomicity, no application-level mutex.
- Document the difference in code comments.
**Phase mapping:** During the `pgStore` implementation phase.
---
### Pitfall 6: Optimistic UI updates in `assignTag` have no rollback on failure
**What goes wrong:** `assignTag()` in `useUpdates.ts` (lines 60-84) applies the state change optimistically before the API call. If the PUT/DELETE fails, the UI shows the new tag state but the server retained the old one. The next poll at most 5 seconds later will overwrite the optimistic state with the real server state — but during that window the user sees incorrect data. Worse, the error is only `console.error`, so the user gets no feedback that their action failed.
**Why it happens:** Optimistic updates are a good UX pattern, but require pairing with: (a) rollback on failure, and (b) user-visible error feedback.
**Consequences:**
- During a 5-second window after a failed tag assignment, the UI shows the wrong tag.
- If the backend is down and the user assigns multiple tags, all changes appear to succeed. The next poll resets all of them silently.
**Warning signs:**
- No `try/catch` that restores `prev` state on `assignTag` failure.
- No error toast or inline error state for tag assignment failures.
**Prevention:**
- Capture `prevState` before the optimistic update.
- In the `catch` block: restore `prevState` and surface an error message to the user (inline or toast).
- Example pattern: `const prev = updates[image]; setUpdates(optimistic); try { await api() } catch { setUpdates(restore(prev)); showError() }`.
**Phase mapping:** Part of the UX improvements phase.
---
### Pitfall 7: Bulk acknowledge actions hitting the backend sequentially instead of in a single operation
**What goes wrong:** "Dismiss all" and "dismiss by group" are planned features. The naive implementation fires one `PATCH /api/updates/{image}` per image from the frontend. For a user with 30 tracked images, this is 30 sequential API calls. Each call acquires the mutex and executes a SQL UPDATE. This is fine for single-user loads but is the wrong pattern: it creates 30 round trips, 30 DB transactions, and 30 state updates in the React UI (causing 30 re-renders).
**Why it happens:** The existing dismiss path is single-image by design; bulk is an afterthought unless an explicit bulk endpoint is designed from the start.
**Consequences:**
- 30 re-renders in rapid succession cause visible UI flickering.
- If one request fails in the middle, some images are acknowledged and others are not, with no clear feedback to the user.
**Warning signs:**
- A "dismiss all" button that loops over `updates` calling `acknowledge(image)` in sequence or in `Promise.all`.
**Prevention:**
- Add a `POST /api/updates/acknowledge-bulk` endpoint that accepts an array of image names and wraps all UPDATEs in a single transaction.
- The frontend calls one endpoint and updates state once.
- For "dismiss by group": pass `tag_id` as the filter parameter so the backend does `UPDATE updates SET acknowledged_at = NOW() WHERE image IN (SELECT image FROM tag_assignments WHERE tag_id = ?)`.
**Phase mapping:** Design the bulk endpoint before implementing the frontend bulk UI; the API contract drives the UI, not the other way around.
---
### Pitfall 8: No rollback path for existing SQLite users upgrading to a version with dual-DB
**What goes wrong:** When an existing user upgrades their Docker image to the version that includes PostgreSQL support, they continue using SQLite. If the migration runner runs new DDL migrations on their existing SQLite database (e.g., a new column added for PostgreSQL compatibility), and the migration fails silently due to the `_, _` pattern, they are left with a database in an intermediate state. On the next restart the migration runner does not know whether to retry or skip.
**Why it happens:** No migration version tracking means "already migrated" cannot be distinguished from "never migrated."
**Consequences:** Database schema becomes inconsistent. Queries that expect the new column fail. The user has no recourse except to delete the database (losing all data) or manually run SQL.
**Warning signs:**
- `InitDB` has no `SELECT version FROM schema_migrations` step.
- Migration SQL errors are swallowed.
**Prevention:**
- Implement the versioned migration runner (see Pitfall 4).
- Log migration progress visibly at startup: `INFO: running migration 002 (add_xyz_column)`.
- For the column that already exists implicitly (`acknowledged_at`), guard its migration with the recorded schema version rather than relying on `IF NOT EXISTS`, and log the outcome regardless of whether the column already existed.
**Phase mapping:** Part of the store interface refactor phase, before any new schema changes land.
---
## Minor Pitfalls
---
### Pitfall 9: Drag handle invisible by default breaks tag reorganization discoverability
**What goes wrong:** The `GripVertical` icon in `ServiceCard.tsx` (line 96) has `opacity-0 group-hover:opacity-100`. On touch devices, on keyboard navigation, and for users who do not hover over each card, the drag-to-regroup feature is entirely invisible. Drag-and-drop is the only way to assign a tag to an image (the `assignTag` API is only called from the drag-and-drop handler).
**Why it happens:** The design prioritized a clean visual for non-interactive browsing, but made the interactive feature undiscoverable.
**Consequences:** Users who cannot use hover (touch devices, keyboard-only) have no way to reorganize images. As noted in CONCERNS.md, the delete button on `TagSection.tsx` has the same problem.
**Warning signs:**
- The drag handle has `opacity-0` without a `focus-visible:opacity-100` counterpart.
- No alternative assignment mechanism exists (e.g., a dropdown on the card).
**Prevention:**
- Make the grip handle always visible at reduced opacity (e.g., `opacity-30 group-hover:opacity-100`), or make it visible on focus.
- Add an accessible fallback: a "Move to group" dropdown on the card's context menu or `...` menu. This also gives keyboard and touch users the ability to assign tags.
**Phase mapping:** UX improvements phase. Not a blocker for DB work but should be addressed before the milestone closes.
---
### Pitfall 10: `datetime('now')` in DismissHandler produces SQLite-only timestamps
**What goes wrong:** `DismissHandler` (line 225) writes `acknowledged_at` using `datetime('now')`, a SQLite built-in. This is a SQL dialect issue distinct from the `INSERT OR REPLACE` problem. When the PostgreSQL path is added, this query must become `NOW()` or an application-layer timestamp.
**Why it happens:** It is a small single-line SQL call, easy to overlook during the migration to dual-DB.
**Consequences:** `DismissHandler` breaks entirely on PostgreSQL; `datetime('now')` is not a valid PostgreSQL function and the query fails with an undefined-function error (`function datetime(unknown) does not exist`).
**Warning signs:**
- Any raw `datetime(` in query strings.
**Prevention:**
- In the Store interface, the `DismissImage(image string) error` method takes no timestamp argument — the store implementation generates `NOW()` in SQL or passes `time.Now()` as a parameter from Go. Passing the timestamp from Go (`?` / `$1`) is the most portable approach: both SQLite and PostgreSQL accept a bound `time.Time` value, removing all dialect issues for timestamps.
**Phase mapping:** Resolve during the `pgStore` implementation. Can be fixed in `sqliteStore` at the same time for consistency.
---
### Pitfall 11: `AUTOINCREMENT` in SQLite schema vs PostgreSQL `SERIAL` or `GENERATED ALWAYS AS IDENTITY`
**What goes wrong:** The `tags` table uses `INTEGER PRIMARY KEY AUTOINCREMENT` (line 90). PostgreSQL does not have `AUTOINCREMENT`; it uses `SERIAL`, `BIGSERIAL`, or `GENERATED ALWAYS AS IDENTITY`. When writing the `CREATE TABLE` DDL for PostgreSQL, this must be translated.
**Why it happens:** A detail that is invisible in the SQLite path because `CREATE TABLE IF NOT EXISTS` never re-runs.
**Consequences:** `CREATE TABLE` fails on PostgreSQL if the SQLite DDL is used verbatim.
**Warning signs:**
- A single `schema.sql` file used for both databases.
**Prevention:**
- Store DDL per-driver: `schema_sqlite.sql` and `schema_pg.sql`, or generate DDL in code with driver-specific constants.
- For PostgreSQL, use `id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY`.
**Phase mapping:** Part of the initial `pgStore` schema setup.
---
## Phase-Specific Warnings
| Phase Topic | Likely Pitfall | Mitigation |
|---|---|---|
| Fix SQLite bugs (UPSERT + FK enforcement) | INSERT OR REPLACE deletes tag assignments (Pitfall 2) | Use `ON CONFLICT DO UPDATE`; add `PRAGMA foreign_keys = ON` |
| Store interface refactor | Global state prevents dual-DB (Pitfall 3) | Struct-based `Server` with `Store` interface before any PostgreSQL work |
| Migration runner | Silent failures leave DB in unknown state (Pitfalls 4, 8) | Versioned migrations with visible logging; never swallow DDL errors |
| PostgreSQL implementation | SQLite SQL dialect in shared queries (Pitfall 1) | All SQL in store implementations, never in handlers; integration test both stores |
| PostgreSQL connection setup | Single-connection constraint applied to Postgres (Pitfall 5) | `pgStore` uses pooling and transactions, not mutex + `MaxOpenConns(1)` |
| Timestamp writes | `datetime('now')` fails on PostgreSQL (Pitfall 10) | Pass `time.Now()` as a bound parameter from Go instead of using SQL built-ins |
| Schema creation | `AUTOINCREMENT` not valid PostgreSQL syntax (Pitfall 11) | Separate DDL per driver |
| Bulk acknowledge UI | Sequential API calls cause flickering and partial state (Pitfall 7) | Design bulk endpoint first; one API call, one state update |
| Tag UX improvements | Optimistic updates without rollback confuse users (Pitfall 6) | Always pair optimistic updates with `catch` rollback and user-visible error |
| Accessibility improvements | Drag handle invisible; keyboard users cannot reorganize (Pitfall 9) | Always-visible handle at reduced opacity + dropdown alternative |
---
## Sources
- Codebase analysis: `/pkg/diunwebhook/diunwebhook.go`, lines 48-117, 225, 352 (HIGH confidence — direct code evidence)
- Codebase analysis: `/frontend/src/hooks/useUpdates.ts`, lines 60-84 (HIGH confidence — direct code evidence)
- Codebase analysis: `/frontend/src/components/ServiceCard.tsx`, line 96 (HIGH confidence — direct code evidence)
- `.planning/codebase/CONCERNS.md` — confirmed INSERT OR REPLACE and FK enforcement issues (HIGH confidence — prior audit)
- Go `database/sql` package contract and SQLite vs PostgreSQL dialect differences (MEDIUM confidence — training knowledge, no external verification available; recommend verifying PostgreSQL placeholder syntax `$1` format before implementation)

.planning/research/STACK.md
# Technology Stack
**Project:** DiunDashboard — PostgreSQL milestone
**Researched:** 2026-03-23
**Scope:** Adding PostgreSQL support alongside existing SQLite to a Go 1.26 backend
---
## Recommended Stack
### PostgreSQL Driver
| Technology | Version | Purpose | Why |
|------------|---------|---------|-----|
| `github.com/jackc/pgx/v5/stdlib` | v5.9.1 | PostgreSQL `database/sql` driver | The de-facto standard Go PostgreSQL driver. Pure Go. 7,328+ importers. The `stdlib` adapter makes it a drop-in for the existing `*sql.DB` code path. Native pgx interface not needed — this project uses `database/sql` already and has no PostgreSQL-specific features (no LISTEN/NOTIFY, no COPY). |
**Confidence:** HIGH — Verified via pkg.go.dev (v5.9.1, published 2026-03-22). pgx v5 is the clear community standard; lib/pq is officially in maintenance-only mode.
**Do NOT use:**
- `github.com/lib/pq` — maintenance-only since 2021; pgx is the successor recommended by the postgres ecosystem.
- Native pgx interface (`pgx.Connect`, `pgxpool.New`) — overkill here; this project only needs standard queries and the existing `*sql.DB` pattern should be preserved for consistency.
### Database Migration Tool
| Technology | Version | Purpose | Why |
|------------|---------|---------|-----|
| `github.com/golang-migrate/migrate/v4` | v4.19.1 | Schema migrations for both SQLite and PostgreSQL | Supports both `database/sqlite` (uses `modernc.org/sqlite` — pure Go, no CGO) and `database/pgx/v5` (uses pgx v5). Both drivers are maintained. The existing inline `CREATE TABLE IF NOT EXISTS` + silent `ALTER TABLE` approach does not scale to dual-database support; a proper migration tool is required. |
**Confidence:** HIGH — Verified via pkg.go.dev. The `database/sqlite` sub-package explicitly uses `modernc.org/sqlite` (pure Go), matching the project's no-CGO constraint. The `database/pgx/v5` sub-package uses pgx v5.
**Drivers to import:**
```go
// For SQLite migrations (pure Go, no CGO — matches existing constraint)
_ "github.com/golang-migrate/migrate/v4/database/sqlite"
// For PostgreSQL migrations (via pgx v5)
_ "github.com/golang-migrate/migrate/v4/database/pgx/v5"
// Migration source (embedded files)
_ "github.com/golang-migrate/migrate/v4/source/iofs"
```
**Do NOT use:**
- `pressly/goose` — Its SQLite dialect documentation does not confirm pure-Go driver support; CGO status is ambiguous. golang-migrate explicitly documents use of `modernc.org/sqlite`. Goose is a fine tool but the CGO uncertainty is a disqualifier for this project.
- `database/sqlite3` variant of golang-migrate — Uses `mattn/go-sqlite3` which requires CGO. Use `database/sqlite` (no `3`) instead.
### SQLite Driver (Existing — Retain)
| Technology | Version | Purpose | Why |
|------------|---------|---------|-----|
| `modernc.org/sqlite` | v1.47.0 | Pure-Go SQLite driver | Already in use; must be retained for no-CGO cross-compilation. Current version in go.mod is v1.46.1 — upgrade to v1.47.0 (released 2026-03-17) for latest SQLite 3.51.3 and bug fixes. |
**Confidence:** HIGH — Verified via pkg.go.dev versions tab.
---
## SQL Dialect Abstraction
### The Problem
The existing codebase has four SQLite-specific SQL constructs that break on PostgreSQL:
| Location | SQLite syntax | PostgreSQL equivalent |
|----------|--------------|----------------------|
| `InitDB` — tags table | `INTEGER PRIMARY KEY AUTOINCREMENT` | `INTEGER PRIMARY KEY GENERATED ALWAYS AS IDENTITY` |
| `UpdateEvent` | `INSERT OR REPLACE INTO updates VALUES (?,...)` | `INSERT INTO updates (...) ON CONFLICT (image) DO UPDATE SET ...` |
| `DismissHandler` | `UPDATE ... SET acknowledged_at = datetime('now')` | `UPDATE ... SET acknowledged_at = NOW()` |
| `TagAssignmentHandler` | `INSERT OR REPLACE INTO tag_assignments` | `INSERT INTO tag_assignments ... ON CONFLICT (image) DO UPDATE SET tag_id = ...` |
| All handlers | `?` positional placeholders | `$1, $2, ...` positional placeholders |
### Recommended Pattern: Storage Interface
Extract a `Store` interface in `pkg/diunwebhook/`. Implement it twice: once for SQLite (`sqliteStore`), once for PostgreSQL (`postgresStore`). Both implementations use `database/sql` and raw SQL, but with dialect-appropriate queries.
```go
// pkg/diunwebhook/store.go
type Store interface {
InitSchema() error
UpdateEvent(event DiunEvent) error
GetUpdates() (map[string]UpdateEntry, error)
DismissUpdate(image string) error
GetTags() ([]Tag, error)
CreateTag(name string) (Tag, error)
DeleteTag(id int) error
AssignTag(image string, tagID int) error
UnassignTag(image string) error
}
```
This is a standard Go pattern: define a narrow interface, swap implementations via factory function. The `sync.Mutex` moves into each store implementation (SQLite store keeps `SetMaxOpenConns(1)` + mutex; PostgreSQL store can use a connection pool without a global mutex).
**Do NOT use:**
- ORM (GORM, ent, sqlc, etc.) — The query set is small and known. An ORM adds a dependency with its own dialect quirks and opaque query generation. Raw SQL with an interface is simpler, easier to test, and matches the existing project style.
- `database/sql` query builder libraries (squirrel, etc.) — Same reasoning; the schema is simple enough that explicit SQL per dialect is more readable and maintainable.
---
## Configuration
### Environment Variables
| Variable | Purpose | Default |
|----------|---------|---------|
| `DATABASE_URL` | PostgreSQL connection string (triggers PostgreSQL mode when set) | — (unset = SQLite mode) |
| `DB_PATH` | SQLite file path (existing) | `./diun.db` |
**Selection logic:** If `DATABASE_URL` is set, use PostgreSQL. Otherwise, use SQLite with `DB_PATH`. This is the simplest signal — no new `DB_DRIVER` variable needed.
**PostgreSQL connection string format:**
```
postgres://user:password@host:5432/dbname?sslmode=disable
```
---
## Migration File Structure
```
migrations/
001_initial_schema.up.sql
001_initial_schema.down.sql
002_add_acknowledged_at.up.sql
002_add_acknowledged_at.down.sql
```
Sharing one set of migration files across both dialects is tempting but breaks down even for the current schema: SQLite's `INTEGER PRIMARY KEY AUTOINCREMENT` maps to PostgreSQL's `SERIAL` (or `GENERATED ALWAYS AS IDENTITY`), so shared DDL would require a compatibility shim.
**Recommendation:** Use **separate migration directories per dialect**:
```
migrations/
sqlite/
001_initial_schema.up.sql
002_add_acknowledged_at.up.sql
postgres/
001_initial_schema.up.sql
002_add_acknowledged_at.up.sql
```
This is more explicit than trying to share SQL across dialects. golang-migrate supports `iofs` (Go embed) as a source, so both directories can be embedded in the binary.
---
## Full Dependency Changes
```bash
# Add PostgreSQL driver (via pgx v5 stdlib adapter)
go get github.com/jackc/pgx/v5@v5.9.1
# Add migration tool with SQLite (pure Go) and pgx/v5 drivers
go get github.com/golang-migrate/migrate/v4@v4.19.1
# Upgrade existing SQLite driver to current version
go get modernc.org/sqlite@v1.47.0
```
No other new dependencies are required. The existing `database/sql` usage throughout the codebase is preserved.
---
## Alternatives Considered
| Category | Recommended | Alternative | Why Not |
|----------|-------------|-------------|---------|
| PostgreSQL driver | pgx/v5 stdlib | lib/pq | lib/pq is maintenance-only since 2021; pgx is the successor |
| PostgreSQL driver | pgx/v5 stdlib | Native pgx interface | Project uses database/sql; stdlib adapter preserves consistency; no need for PostgreSQL-specific features |
| Migration tool | golang-migrate | pressly/goose | Goose's SQLite CGO status unconfirmed; golang-migrate explicitly uses modernc.org/sqlite |
| Migration tool | golang-migrate | Inline `CREATE TABLE IF NOT EXISTS` | Inline approach cannot handle dual-dialect schema differences or ordered version history |
| Abstraction | Store interface | GORM / ent | Schema is 3 tables; ORM adds complexity without benefit; project already uses raw SQL |
| Abstraction | Store interface | sqlc | Code generation adds a build step and CI dependency; not warranted for this scope |
| Placeholder style | Per-dialect (`?` vs `$1`) | `sqlx` named params | Named params add a new library; explicit per-dialect SQL is clearer and matches project style |
---
## Sources
- pgx v5.9.1: https://pkg.go.dev/github.com/jackc/pgx/v5@v5.9.1 — HIGH confidence
- pgxpool: https://pkg.go.dev/github.com/jackc/pgx/v5/pgxpool — HIGH confidence
- golang-migrate v4.19.1 sqlite driver (pure Go): https://pkg.go.dev/github.com/golang-migrate/migrate/v4/database/sqlite — HIGH confidence
- golang-migrate v4 pgx/v5 driver: https://pkg.go.dev/github.com/golang-migrate/migrate/v4/database/pgx/v5 — HIGH confidence
- golang-migrate v4 sqlite3 driver (CGO — avoid): https://pkg.go.dev/github.com/golang-migrate/migrate/v4/database/sqlite3 — HIGH confidence
- modernc.org/sqlite v1.47.0: https://pkg.go.dev/modernc.org/sqlite?tab=versions — HIGH confidence
- goose v3.27.0: https://pkg.go.dev/github.com/pressly/goose/v3 — MEDIUM confidence (SQLite CGO status not confirmed in official docs)

---
# Project Research Summary
**Project:** DiunDashboard — PostgreSQL milestone + UX improvements
**Domain:** Self-hosted container image update monitoring dashboard (Go backend + React SPA)
**Researched:** 2026-03-23
**Confidence:** HIGH (stack and architecture sourced from direct codebase analysis and verified package versions; features MEDIUM due to tool restrictions during research)
## Executive Summary
DiunDashboard is a self-hosted Go + React dashboard that receives DIUN webhook events and presents a persistent, acknowledgeable list of container images with available updates. The current milestone covers two parallel tracks: (1) fixing active data-correctness bugs and adding PostgreSQL as an alternative to SQLite, and (2) delivering UX improvements users need before the tool is genuinely usable at scale (bulk dismiss, search/filter, new-update indicators). Both tracks have well-understood solutions rooted in established Go patterns — the engineering risk is low provided the work is sequenced correctly.
The recommended approach is a strict dependency-first build order. The SQLite data-integrity bugs (`INSERT OR REPLACE` silently deleting tag assignments, missing FK pragma) must be fixed before any other work because they undermine trust in the tool and will complicate the subsequent refactor if left in. The backend refactor — introducing a `Store` interface and a `Server` struct to replace package-level globals — is the foundational prerequisite for PostgreSQL support, parallel test execution, and reliable UX features. PostgreSQL is then a clean additive step: implement `PostgresStore`, wire the `DATABASE_URL` env var into `main.go`, and provide dialect-appropriate SQL in the new store file.
The primary risk is dialect leakage: SQLite-specific SQL (`datetime('now')`, `INSERT OR REPLACE`, `?` placeholders, `AUTOINCREMENT`, `PRAGMA`) scattered across handler functions will silently break on PostgreSQL if the Store interface abstraction is not in place before any PostgreSQL code is written. Secondary risks are a missing versioned migration runner (which leaves existing user databases in an unknown state on upgrade) and bulk dismiss implemented as N sequential API calls rather than a single transactional endpoint. Both risks have well-documented mitigations and are easy to prevent if addressed in the correct phase.
---
## Key Findings
### Recommended Stack
The existing stack is largely correct and requires minimal additions. The PostgreSQL driver is `github.com/jackc/pgx/v5/stdlib` (v5.9.1, verified 2026-03-22) — the de-facto community standard. Its `stdlib` adapter makes it a drop-in for the existing `*sql.DB` code path; the native pgx interface is not needed. `lib/pq` is explicitly maintenance-only and must not be used. For schema migrations, `github.com/golang-migrate/migrate/v4` (v4.19.1) supports both the project's `modernc.org/sqlite` (pure-Go, no CGO) and pgx/v5 backends via separately maintained sub-packages. The existing SQLite driver should be upgraded from v1.46.1 to v1.47.0.
**Core technologies:**
- `github.com/jackc/pgx/v5/stdlib` v5.9.1: PostgreSQL driver — only viable current option; `lib/pq` is maintenance-only
- `github.com/golang-migrate/migrate/v4` v4.19.1: schema migrations — explicit `modernc.org/sqlite` support satisfies no-CGO constraint
- `modernc.org/sqlite` v1.47.0: existing SQLite driver (upgrade from v1.46.1) — must remain for pure-Go cross-compilation
No ORM. No query-builder library. The query set is 8 operations across 3 tables; raw SQL per store implementation is simpler, easier to audit, and matches the existing project style.
**Configuration:** `DATABASE_URL` env var (when set, activates PostgreSQL mode). `DB_PATH` retained for SQLite. No separate `DB_DRIVER` variable needed.
### Expected Features
The feature research produced a clear priority stack grounded in the documented concerns and self-hosted dashboard conventions. Data integrity is a prerequisite for everything else — broken data collapses user trust faster than any missing feature.
**Must have (table stakes):**
- SQLite data integrity fix (UPSERT + FK pragma) — existing bug silently deletes tag assignments on every DIUN event
- Bulk acknowledge: dismiss all + dismiss by group — O(n) clicking for 20+ images causes abandonment
- Search + filter by image name, status, and tag — standard affordance for any list exceeding 10 items
- New-update indicator (badge/counter) and page/tab title count — persistent visibility is the core value proposition
- PostgreSQL support — required for users running Coolify or other Postgres-backed infrastructure
**Should have (differentiators):**
- Toast notification on new update arrival during polling — shares implementation with new-update indicator
- Sort order controls (newest first, by name, by registry) — pure frontend, no backend change
- Light/dark theme toggle — low complexity, removes a known complaint
- Drag handle always visible (accessibility) — currently hover-only, invisible on touch/keyboard
- Optimistic UI rollback on tag assignment failure — current code has no error recovery path
**Defer (v2+):**
- Data retention / auto-cleanup of acknowledged entries — real concern but not urgent for most users
- Alternative tag assignment dropdown — drag-and-drop exists; dropdown is an accessibility improvement, not a blocker
- Browser notification API — high UX risk, low reward vs. badge approach
- Auto-grouping by Docker stack — requires Docker socket access; different scope entirely
### Architecture Approach
The architecture follows a standard Go repository interface pattern. The current monolith (`diunwebhook.go` with package-level `var db *sql.DB` and `var mu sync.Mutex`) is extracted into a `Store` interface implemented by two concrete types (`SQLiteStore`, `PostgresStore`), with HTTP handlers moved to methods on a `Server` struct that holds a `Store`. This pattern eliminates global state, enables parallel tests without resets, and enforces a strict boundary: handlers never see SQL, store implementations never see HTTP.
**Major components:**
1. `Store` interface (`store.go`) — contract for all persistence; 11 methods covering updates, tags, and assignments
2. `SQLiteStore` (`sqlite.go`) — SQLite-specific SQL, `sync.Mutex`, `SetMaxOpenConns(1)`, `PRAGMA foreign_keys = ON`
3. `PostgresStore` (`postgres.go`) — PostgreSQL-specific SQL, pgx connection pool, no mutex, `db.BeginTx` for atomicity
4. `Server` struct (`server.go`) — holds `Store` and `secret`; all HTTP handlers are methods on `Server`
5. `models.go` — shared `DiunEvent`, `UpdateEntry`, `Tag` structs with no imports beyond stdlib
6. `main.go` — sole location where backend is chosen (`DATABASE_URL` present → PostgreSQL, absent → SQLite)
7. Frontend SPA — unchanged API contract; communicates with backend via `/api/*` only
**Key pattern: `SQLiteStore` retains `sync.Mutex`; `PostgresStore` does not.** These are structurally different and must not share a mutex.
**Migration strategy:** Separate DDL per dialect (`migrations/sqlite/` and `migrations/postgres/`). Both embedded in the binary via `//go:embed`. A versioned `schema_migrations` table prevents re-running migrations on existing databases and makes upgrade failures visible.
### Critical Pitfalls
1. **SQLite-specific SQL leaking into shared code** — `datetime('now')`, `INSERT OR REPLACE`, `?` placeholders, `AUTOINCREMENT`, and `PRAGMA` all fail on PostgreSQL. Prevention: Store interface forces all SQL into store files; handlers call named methods only; integration tests run both stores.
2. **`INSERT OR REPLACE` silently deleting tag assignments** — SQLite implements this as DELETE + INSERT, which cascades to `tag_assignments` and erases the user's groupings on every DIUN event. Prevention: replace with `INSERT ... ON CONFLICT(image) DO UPDATE SET ...`; add `PRAGMA foreign_keys = ON`; add regression test asserting tag survives a second `UpdateEvent` call.
3. **Global package-level state blocks dual-DB without struct refactor** — `var db *sql.DB` at package scope means there is only one DB handle; PostgreSQL cannot be added without introducing `if dbType == "postgres"` branches across every handler. Prevention: `Server` struct with injected `Store` must precede all PostgreSQL work.
4. **No versioned migration runner** — silent `ALTER TABLE` with discarded errors leaves existing SQLite databases in an unknown state on upgrade. Prevention: `schema_migrations` version table; log every migration attempt; never swallow DDL errors.
5. **Bulk dismiss implemented as N sequential API calls** — 30 acknowledged images = 30 round trips, 30 mutex acquisitions, 30 React re-renders with potential flickering and partial-state failure. Prevention: design `POST /api/updates/acknowledge-bulk` endpoint first; one call, one transaction, one state update.
---
## Implications for Roadmap
Based on the dependency graph from feature and architecture research, the milestone decomposes into four phases. The ordering is non-negotiable: each phase is a prerequisite for the next.
### Phase 1: Data Integrity Fixes
**Rationale:** The `INSERT OR REPLACE` bug is active in production and deletes user data on every DIUN event. Fixing it before the refactor means the bug-fix tests become the regression suite that validates the refactor did not regress behavior. No other work is credible until the data layer is correct.
**Delivers:** Trustworthy persistence — tag assignments survive new DIUN events; FK enforcement works; acknowledged state is preserved correctly.
**Addresses:** Table-stakes feature "Data integrity across restarts"; Pitfalls 2, 10 (timestamp fix can be included here).
**Avoids:** Shipping the bug in both DB paths; losing the fix in refactor noise.
**Research flag:** None needed — the fix is a 3-line SQL change with a clear regression test. Standard patterns apply.
### Phase 2: Backend Refactor — Store Interface + Server Struct
**Rationale:** The global state architecture makes PostgreSQL support structurally impossible without this refactor. All subsequent work (PostgreSQL implementation, parallel test execution, safer UX features) depends on this change. The refactor must be behavior-neutral — all existing tests pass before PostgreSQL is introduced.
**Delivers:** `Store` interface, `SQLiteStore` implementation, `Server` struct with constructor injection, models in `models.go`. Zero behavior change for existing SQLite users.
**Uses:** Existing `modernc.org/sqlite`; `database/sql` standard library; no new dependencies.
**Implements:** Core architecture pattern from ARCHITECTURE.md; eliminates Pitfall 3 (global state) and Pitfall 4 (migration runner) in one phase.
**Avoids:** Introducing PostgreSQL and refactoring simultaneously (would make failures ambiguous).
**Research flag:** None needed — this is a standard Go repository interface pattern with well-documented prior art.
### Phase 3: PostgreSQL Support
**Rationale:** With the `Store` interface in place, adding PostgreSQL is additive: write `PostgresStore`, add `pgx/v5/stdlib` as a dependency, add `DATABASE_URL` to `main.go` and Docker Compose. The interface boundary guarantees no SQLite-specific SQL can appear in handlers.
**Delivers:** `PostgresStore` implementing all `Store` methods with PostgreSQL dialect SQL; `DATABASE_URL` env var wired through `main.go`; separate dialect migration files; updated `compose.dev.yml` with optional `postgres` profile; documentation.
**Uses:** `github.com/jackc/pgx/v5/stdlib` v5.9.1; `github.com/golang-migrate/migrate/v4` v4.19.1; separate `migrations/sqlite/` and `migrations/postgres/` directories.
**Avoids:** Pitfalls 1, 5, 8, 10, 11 — all are mitigated by the Store interface + per-dialect SQL + connection pool (no mutex) in `PostgresStore`.
**Research flag:** Verify exact import path for `pgx/v5/stdlib` during implementation. The `database/sql` compatibility layer is standard but the import string should be confirmed against pkg.go.dev before coding.
### Phase 4: UX Improvements
**Rationale:** These features are independent of the DB work but grouped together because they share the frontend codebase and several features share implementation logic (new-update indicator and toast notification use the same poll-comparison logic; bulk dismiss all and bulk dismiss by group share the same API endpoint design). Deferring UX until after the backend is correct means UX tests run against a trustworthy data layer.
**Delivers:** Bulk acknowledge (all + by group) with a single backend endpoint (`POST /api/updates/acknowledge-bulk`); search and filter by name/status/tag (frontend-only); new-update badge/counter and page title count; light/dark theme toggle; drag handle always-visible fix; optimistic UI rollback with user-visible error on tag assignment failure.
**Uses:** Existing React 19 + Tailwind + shadcn/ui stack; no new frontend dependencies expected.
**Avoids:** Pitfalls 6, 7, 9 — optimistic rollback, bulk endpoint, accessible drag handle.
**Research flag:** None needed for search/filter/theme/accessibility (standard patterns). The bulk acknowledge endpoint needs clear API contract design before frontend implementation begins — define the request/response shape first.
### Phase Ordering Rationale
- **Phase 1 before Phase 2:** Bug fix tests become the regression suite; the refactor cannot accidentally regress behavior it has not validated first.
- **Phase 2 before Phase 3:** The Store interface is a structural prerequisite. PostgreSQL added to the monolith produces unmaintainable dialect branches.
- **Phase 3 before Phase 4 (or parallel after Phase 2):** UX features are mostly frontend and do not depend on PostgreSQL. However, the bulk acknowledge endpoint (`AcknowledgeAll`, `AcknowledgeByTag`) must be in the `Store` interface, which is finalized in Phase 2. Phase 4 frontend work can start once Phase 2 is merged; Phase 3 and Phase 4 can proceed in parallel.
- **Never:** Mix refactor and new feature in the same commit. Each phase should be independently reviewable and revertable.
### Research Flags
Phases likely needing deeper research during planning:
- **Phase 3 (PostgreSQL):** Verify `pgx/v5/stdlib` import path (`github.com/jackc/pgx/v5/stdlib`) against pkg.go.dev before adding the dependency. Confirm `golang-migrate` `database/sqlite` sub-package still uses `modernc.org/sqlite` (not `mattn/go-sqlite3`) in v4.19.1 — this was verified but should be re-confirmed at time of implementation.
Phases with standard patterns (skip research-phase):
- **Phase 1 (Bug fixes):** 3-line SQL change with a clear regression test; no research needed.
- **Phase 2 (Refactor):** Standard Go repository interface pattern; no research needed.
- **Phase 4 (UX):** All features use existing stack (React, Tailwind, shadcn/ui); no new technologies introduced.
---
## Confidence Assessment
| Area | Confidence | Notes |
|------|------------|-------|
| Stack | HIGH | All versions verified via pkg.go.dev at time of research (2026-03-23); pgx v5.9.1 published 2026-03-22; golang-migrate v4.19.1 confirms `modernc.org/sqlite` |
| Features | MEDIUM | Feature priorities derived from direct codebase audit (CONCERNS.md, PROJECT.md) — HIGH confidence; competitive landscape analysis (Portainer, Uptime Kuma patterns) from training data only — MEDIUM |
| Architecture | HIGH | Based on direct analysis of `pkg/diunwebhook/diunwebhook.go` and `cmd/diunwebhook/main.go`; Store interface pattern is a well-established Go idiom with no ambiguity |
| Pitfalls | HIGH (backend) / MEDIUM (frontend) | Backend pitfalls sourced from direct code evidence (line numbers cited); PostgreSQL dialect differences from training knowledge — recommend verifying `$1` placeholder syntax before implementation; frontend pitfalls sourced from direct code analysis |
**Overall confidence:** HIGH
### Gaps to Address
- **PostgreSQL `$1` placeholder syntax:** PITFALLS.md flags this as MEDIUM confidence from training knowledge. Verify against the `pgx/v5/stdlib` documentation before writing any PostgreSQL query strings.
- **`golang-migrate` CGO status at v4.19.1:** Confirmed at research time that `database/sqlite` sub-package uses `modernc.org/sqlite`; re-confirm at implementation time that this has not changed in a patch release.
- **Competitive feature validation:** The UX feature priorities are based on self-hosted dashboard patterns (Portainer, Uptime Kuma) from training data. If the roadmapper wants higher confidence on feature ordering, a quick review of current Portainer CE and Uptime Kuma changelogs would validate the bulk-dismiss and search/filter priorities.
- **`golang-migrate` vs hand-rolled migration runner:** PITFALLS.md notes a 30-line hand-rolled runner is sufficient for this project's scale. STACK.md recommends `golang-migrate`. Either is valid — the roadmap phase should make a decision and commit to one approach before implementation begins to avoid scope creep.
---
## Sources
### Primary (HIGH confidence)
- `pkg/diunwebhook/diunwebhook.go` (direct codebase analysis, lines 48-117, 225, 352) — dialect issues, global state, INSERT OR REPLACE bug
- `cmd/diunwebhook/main.go` (direct codebase analysis) — entry point, env vars, mux wiring
- `.planning/codebase/CONCERNS.md` (prior audit) — confirmed FK enforcement gap, drag handle, bulk ops missing
- `.planning/PROJECT.md` (requirements source) — confirmed dual-DB requirement, no-CGO constraint, backward compatibility
- https://pkg.go.dev/github.com/jackc/pgx/v5@v5.9.1 — pgx v5 driver, version verified
- https://pkg.go.dev/github.com/golang-migrate/migrate/v4/database/sqlite — pure-Go SQLite sub-package confirmed
- https://pkg.go.dev/github.com/golang-migrate/migrate/v4/database/pgx/v5 — pgx/v5 migration sub-package confirmed
- https://pkg.go.dev/modernc.org/sqlite?tab=versions — v1.47.0 version verified
### Secondary (MEDIUM confidence)
- Training-data knowledge of `pgx/v5/stdlib` `database/sql` adapter pattern — standard approach, verify import path at implementation
- Training-data knowledge of Portainer CE, Uptime Kuma, Dockcheck-web UX patterns — feature prioritization for self-hosted dashboards
### Tertiary (LOW confidence)
- None
---
*Research completed: 2026-03-23*
*Ready for roadmap: yes*

---
CLAUDE.md
<!-- GSD:project-start source:PROJECT.md -->
## Project
**DiunDashboard**
A web-based dashboard that receives DIUN webhook events and presents a persistent, visual overview of which Docker services have available updates. Built for self-hosters who use DIUN to monitor container images but need something better than dismissable push notifications — a place that nags you until you actually update.
**Core Value:** Reliable, persistent visibility into which services need updating — data never disappears, and the dashboard is the one place you trust to show the full picture.
### Constraints
- **Tech stack**: Go backend + React frontend — established, no migration
- **Database**: Must support both SQLite (simple deploys) and PostgreSQL (robust deploys)
- **Deployment**: Docker-first, single-container with optional compose
- **No CGO**: Pure Go SQLite driver (modernc.org/sqlite) — must maintain this for easy cross-compilation
- **Backward compatible**: Existing users with SQLite databases should be able to upgrade without data loss
<!-- GSD:project-end -->
<!-- GSD:stack-start source:codebase/STACK.md -->
## Technology Stack
## Languages
- Go 1.26 - Backend HTTP server and all API logic (`cmd/diunwebhook/main.go`, `pkg/diunwebhook/diunwebhook.go`)
- TypeScript ~5.7 - Frontend React SPA (`frontend/src/`)
- SQL (SQLite dialect) - Inline schema DDL and queries in `pkg/diunwebhook/diunwebhook.go`
## Runtime
- Go 1.26 (compiled binary, no runtime needed in production)
- Bun (frontend build toolchain, uses `oven/bun:1-alpine` Docker image)
- Alpine Linux 3.18 (production container base)
- Go modules - `go.mod` at project root (module name: `awesomeProject`)
- Bun - `frontend/bun.lock` present for frontend dependencies
- Bun - `docs/bun.lock` present for documentation site dependencies
## Frameworks
- `net/http` (Go stdlib) - HTTP server, routing, and handler registration. No third-party router.
- React 19 (`^19.0.0`) - Frontend SPA (`frontend/`)
- Vite 6 (`^6.0.5`) - Frontend dev server and build tool (`frontend/vite.config.ts`)
- Tailwind CSS 3.4 (`^3.4.17`) - Utility-first CSS (`frontend/tailwind.config.ts`)
- shadcn/ui - Component library (uses Radix UI primitives, `class-variance-authority`, `clsx`, `tailwind-merge`)
- Radix UI (`@radix-ui/react-tooltip` `^1.1.6`) - Accessible tooltip primitives
- dnd-kit (`@dnd-kit/core` `^6.3.1`, `@dnd-kit/utilities` `^3.2.2`) - Drag and drop
- Lucide React (`^0.469.0`) - Icon library
- simple-icons (`^16.9.0`) - Brand/service icons
- VitePress (`^1.6.3`) - Static documentation site (`docs/`)
- Go stdlib `testing` package with `httptest` for handler tests
- No frontend test framework detected
- Vite 6 (`^6.0.5`) - Frontend bundler (`frontend/vite.config.ts`)
- TypeScript ~5.7 (`^5.7.2`) - Type checking (`tsc -b` runs before `vite build`)
- PostCSS 8.4 (`^8.4.49`) with Autoprefixer 10.4 (`^10.4.20`) - CSS processing (`frontend/postcss.config.js`)
- `@vitejs/plugin-react` (`^4.3.4`) - React Fast Refresh for Vite
## Key Dependencies
- `modernc.org/sqlite` v1.46.1 - Pure-Go SQLite driver (no CGO required). Registered as `database/sql` driver named `"sqlite"`.
- `modernc.org/libc` v1.67.6 - C runtime emulation for pure-Go SQLite
- `modernc.org/memory` v1.11.0 - Memory allocator for pure-Go SQLite
- `github.com/dustin/go-humanize` v1.0.1 - Human-readable formatting (indirect dep of modernc.org/sqlite)
- `github.com/google/uuid` v1.6.0 - UUID generation (indirect)
- `github.com/mattn/go-isatty` v0.0.20 - Terminal detection (indirect)
- `golang.org/x/sys` v0.37.0 - System calls (indirect)
- `golang.org/x/exp` v0.0.0-20251023 - Experimental packages (indirect)
- `react` / `react-dom` `^19.0.0` - UI framework
- `@dnd-kit/core` `^6.3.1` - Drag-and-drop for tag assignment
- `tailwindcss` `^3.4.17` - Styling
- `class-variance-authority` `^0.7.1` - shadcn/ui component variant management
- `clsx` `^2.1.1` - Conditional CSS class composition
- `tailwind-merge` `^2.6.0` - Tailwind class deduplication
## Configuration
- `PORT` - HTTP listen port (default: `8080`)
- `DB_PATH` - SQLite database file path (default: `./diun.db`)
- `WEBHOOK_SECRET` - Token for webhook authentication (optional; when unset, webhook is open)
- `go.mod` - Go module definition (module `awesomeProject`)
- `frontend/vite.config.ts` - Vite config with `@` path alias to `./src`, dev proxy for `/api` and `/webhook` to `:8080`
- `frontend/tailwind.config.ts` - Tailwind with shadcn/ui theme tokens (dark mode via `class` strategy)
- `frontend/postcss.config.js` - PostCSS with Tailwind and Autoprefixer plugins
- `frontend/tsconfig.json` - Project references to `tsconfig.node.json` and `tsconfig.app.json`
- `@` resolves to `frontend/src/` (configured in `frontend/vite.config.ts`)
## Database
## Platform Requirements
- Go 1.26+
- Bun (for frontend and docs development)
- No CGO required (pure-Go SQLite driver)
- Single static binary + `frontend/dist/` static assets
- Alpine Linux 3.18 Docker container
- Persistent volume at `/data/` for SQLite database
- Port 8080 (configurable via `PORT`)
- Gitea Actions with custom Docker image `gitea.jeanlucmakiola.de/makiolaj/docker-node-and-go` (contains both Go and Node/Bun toolchains)
- `GOTOOLCHAIN=local` env var set in CI
<!-- GSD:stack-end -->
<!-- GSD:conventions-start source:CONVENTIONS.md -->
## Conventions
## Naming Patterns
- Package-level source files use the package name: `diunwebhook.go`
- Test files follow Go convention: `diunwebhook_test.go`
- Test-only export files: `export_test.go`
- Entry point: `main.go` inside `cmd/diunwebhook/`
- PascalCase for exported functions: `WebhookHandler`, `UpdateEvent`, `InitDB`, `GetUpdates`
- Handler functions are named `<Noun>Handler`: `WebhookHandler`, `UpdatesHandler`, `DismissHandler`, `TagsHandler`, `TagByIDHandler`, `TagAssignmentHandler`
- Test functions use `Test<FunctionName>_<Scenario>`: `TestWebhookHandler_BadRequest`, `TestDismissHandler_NotFound`
- PascalCase structs: `DiunEvent`, `UpdateEntry`, `Tag`
- JSON tags use snake_case: `json:"diun_version"`, `json:"hub_link"`, `json:"received_at"`
- Package-level unexported variables use short names: `mu`, `db`, `webhookSecret`
- Local variables use short idiomatic Go names: `w`, `r`, `err`, `res`, `n`, `e`
- Components: PascalCase `.tsx` files: `ServiceCard.tsx`, `AcknowledgeButton.tsx`, `Header.tsx`, `TagSection.tsx`
- Hooks: camelCase with `use` prefix: `useUpdates.ts`, `useTags.ts`
- Types: camelCase `.ts` files: `diun.ts`
- Utilities: camelCase `.ts` files: `utils.ts`, `time.ts`, `serviceIcons.ts`
- UI primitives (shadcn): lowercase `.tsx` files: `badge.tsx`, `button.tsx`, `card.tsx`, `tooltip.tsx`
- camelCase for regular functions and hooks: `fetchUpdates`, `useUpdates`, `getServiceIcon`
- PascalCase for React components: `ServiceCard`, `StatCard`, `AcknowledgeButton`
- Helper functions within components use camelCase: `getInitials`, `getTag`, `getShortName`
- Event handlers prefixed with `handle`: `handleDragEnd`, `handleNewGroupSubmit`
- PascalCase interfaces: `DiunEvent`, `UpdateEntry`, `Tag`, `ServiceCardProps`
- Type aliases: PascalCase: `UpdatesMap`
- Interface properties use snake_case matching the Go JSON tags: `diun_version`, `hub_link`
## Code Style
- `gofmt` enforced in CI (formatting check fails the build)
- No additional Go linter (golangci-lint) configured
- `go vet` runs in CI
- Standard Go formatting: tabs for indentation
- No ESLint or Prettier configured in the frontend
- No formatting enforcement in CI for frontend code
- Consistent 2-space indentation observed in all `.tsx` and `.ts` files
- Single quotes for strings in TypeScript
- No semicolons (observed in all frontend files)
- Trailing commas used in multi-line constructs
- `strict: true` in `tsconfig.app.json`
- `noUnusedLocals: true`
- `noUnusedParameters: true`
- `noFallthroughCasesInSwitch: true`
- `noUncheckedSideEffectImports: true`
## Import Organization
- The project module is aliased as `diun` in both `main.go` and test files
- The blank-import pattern `_ "modernc.org/sqlite"` is used for the SQLite driver in `pkg/diunwebhook/diunwebhook.go`
- `@/` maps to `frontend/src/` (configured in `vite.config.ts` and `tsconfig.app.json`)
## Error Handling
- Handlers use `http.Error(w, message, statusCode)` for all error responses
- Error messages are lowercase: `"bad request"`, `"internal error"`, `"not found"`, `"method not allowed"`
- Internal errors are logged with `log.Printf` before returning HTTP 500
- Decode errors include context: `log.Printf("WebhookHandler: failed to decode request: %v", err)`
- Fatal errors in `main.go` use `log.Fatalf`
- `errors.Is()` used for sentinel error comparison (e.g., `http.ErrServerClosed`)
- String matching used for SQLite constraint errors: `strings.Contains(err.Error(), "UNIQUE")`
- API errors throw with HTTP status: `throw new Error(\`HTTP ${res.status}\`)`
- Catch blocks use `console.error` for logging
- Error state stored in hook state: `setError(e instanceof Error ? e.message : 'Failed to fetch updates')`
- Optimistic updates used for tag assignment (update UI first, then call API)
## Logging
- Startup messages: `log.Printf("Listening on :%s", port)`
- Warnings: `log.Println("WARNING: WEBHOOK_SECRET not set ...")`
- Request logging on success: `log.Printf("Update received: %s (%s)", event.Image, event.Status)`
- Error logging before HTTP error response: `log.Printf("WebhookHandler: failed to store event: %v", err)`
- Handler name prefixed to log messages: `"WebhookHandler: ..."`, `"UpdatesHandler: ..."`
## Comments
- Comments are sparse in the Go codebase
- Handler functions have short doc comments describing the routes they handle
- Inline comments used for non-obvious behavior: `// Migration: add acknowledged_at to existing databases`
- No JSDoc/TSDoc in the frontend codebase
## Function Design
- Each handler is a standalone `func(http.ResponseWriter, *http.Request)`
- Method checking done at the top of each handler (not via middleware)
- Multi-method handlers use `switch r.Method`
- URL path parameters extracted via `strings.TrimPrefix`
- Request bodies decoded with `json.NewDecoder(r.Body).Decode(&target)`
- Responses written with `json.NewEncoder(w).Encode(data)` or `w.WriteHeader(status)`
- Mutex (`mu`) used around write operations to SQLite
- Custom hooks return object with state and action functions
- `useCallback` wraps all action functions
- `useEffect` for side effects (polling, initial fetch)
- State updates use functional form: `setUpdates(prev => { ... })`
## Module Design
- Single package `diunwebhook` exports all types and handler functions
- No barrel files; single source file `diunwebhook.go` contains everything
- Test helpers exposed via `export_test.go` (only visible to `_test` packages)
- Named exports for all components, hooks, and utilities
- Default export only for the root `App` component (`export default function App()`)
- Type exports use `export interface` or `export type`
- `@/components/ui/` contains shadcn primitives (`badge.tsx`, `button.tsx`, etc.)
## Git Commit Message Conventions
- `feat` - new features
- `fix` - bug fixes
- `docs` - documentation changes
- `chore` - maintenance tasks (deps, config)
- `refactor` - code restructuring
- `style` - UI/styling changes
- `test` - test additions
<!-- GSD:conventions-end -->
<!-- GSD:architecture-start source:ARCHITECTURE.md -->
## Architecture
## Pattern Overview
- Single Go binary serves both the JSON API and the static frontend assets
- All backend logic lives in one library package (`pkg/diunwebhook/`)
- SQLite database for persistence (pure-Go driver, no CGO)
- Frontend is a standalone React SPA that communicates via REST polling
- No middleware framework -- uses `net/http` standard library directly
## Layers
**HTTP Handlers**
- Purpose: Accept HTTP requests, validate input, delegate to storage functions, return JSON responses
- Location: `pkg/diunwebhook/diunwebhook.go` (functions: `WebhookHandler`, `UpdatesHandler`, `DismissHandler`, `TagsHandler`, `TagByIDHandler`, `TagAssignmentHandler`)
- Contains: Request parsing, method checks, JSON encoding/decoding, HTTP status responses
- Depends on: Storage layer (package-level `db` and `mu` variables)
- Used by: Route registration in `cmd/diunwebhook/main.go`
**Storage**
- Purpose: Persist and query DIUN events, tags, and tag assignments
- Location: `pkg/diunwebhook/diunwebhook.go` (functions: `InitDB`, `UpdateEvent`, `GetUpdates`; inline SQL in handlers)
- Contains: Schema creation, migrations, CRUD operations via raw SQL
- Depends on: `modernc.org/sqlite` driver, `database/sql` stdlib
- Used by: HTTP handlers in the same file
**Entrypoint**
- Purpose: Initialize database, configure routes, start HTTP server with graceful shutdown
- Location: `cmd/diunwebhook/main.go`
- Contains: Environment variable reading, mux setup, signal handling, server lifecycle
- Depends on: `pkg/diunwebhook` (imported as `diun`)
- Used by: Docker container CMD, direct `go run`
**Frontend**
- Purpose: Display DIUN update events in an interactive dashboard with drag-and-drop grouping
- Location: `frontend/src/`
- Contains: React components, custom hooks for data fetching, TypeScript type definitions
- Depends on: Backend REST API (`/api/*` endpoints)
- Used by: Served as static files from `frontend/dist/` by the Go server
## Data Flow
- **Backend:** No in-memory state beyond the `sync.Mutex`. All data lives in SQLite. The `db` and `mu` variables are package-level globals in `pkg/diunwebhook/diunwebhook.go`.
- **Frontend:** React `useState` hooks in two custom hooks (`useUpdates` and `useTags`)
- No global state library (no Redux, Zustand, etc.) -- state is passed via props from `App.tsx`
## Key Abstractions
**`DiunEvent`**
- Purpose: Represents a single DIUN webhook payload (image update notification)
- Defined in: `pkg/diunwebhook/diunwebhook.go` (Go struct), `frontend/src/types/diun.ts` (TypeScript interface)
- Pattern: Direct JSON mapping between Go struct tags and TypeScript interface
**`UpdateEntry`**
- Purpose: Wraps a `DiunEvent` with metadata (received timestamp, acknowledged flag, optional tag)
- Defined in: `pkg/diunwebhook/diunwebhook.go` (Go), `frontend/src/types/diun.ts` (TypeScript)
- Pattern: The API returns `map[string]UpdateEntry` keyed by image name (`UpdatesMap` type in frontend)
**`Tag`**
- Purpose: User-defined grouping label for organizing images
- Defined in: `pkg/diunwebhook/diunwebhook.go` (Go), `frontend/src/types/diun.ts` (TypeScript)
- Pattern: Simple ID + name, linked to images via `tag_assignments` join table
## Entry Points
**Backend server**
- Location: `cmd/diunwebhook/main.go`
- Triggers: `go run ./cmd/diunwebhook/` or Docker container `CMD ["./server"]`
- Responsibilities: Read env vars (`DB_PATH`, `PORT`, `WEBHOOK_SECRET`), init DB, register routes, start HTTP server, handle graceful shutdown on SIGINT/SIGTERM
**Frontend SPA**
- Location: `frontend/src/main.tsx`
- Triggers: Browser loads `index.html` from `frontend/dist/` (served by Go file server at `/`)
- Responsibilities: Mount React app, force dark mode (`document.documentElement.classList.add('dark')`)
**Webhook endpoint**
- Location: `POST /webhook` -> `WebhookHandler` in `pkg/diunwebhook/diunwebhook.go`
- Triggers: External DIUN instance sends webhook on image update detection
- Responsibilities: Authenticate (if secret set), validate payload, upsert event into database
## Concurrency Model
- A single `sync.Mutex` (`mu`) in `pkg/diunwebhook/diunwebhook.go` guards all write operations to the database
- `UpdateEvent()`, `DismissHandler`, `TagsHandler` (POST), `TagByIDHandler` (DELETE), and `TagAssignmentHandler` (PUT/DELETE) all acquire `mu.Lock()` before writing
- Read operations (`GetUpdates`, `TagsHandler` GET) do NOT acquire the mutex
- SQLite connection is configured with `db.SetMaxOpenConns(1)` to prevent concurrent write issues
- Standard `net/http` server handles requests concurrently via goroutines
- Graceful shutdown with 15-second timeout on SIGINT/SIGTERM
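The effect of the single write mutex can be sketched in isolation; `runWriters` is a stand-in for concurrent handler goroutines, and the counter increment stands in for `db.Exec`:

```go
package main

import (
	"fmt"
	"sync"
)

// runWriters serializes n concurrent writers behind one mutex, the
// same shape as mu.Lock(); db.Exec(...); mu.Unlock() in the handlers.
// Illustrative only: the counter replaces the SQLite write.
func runWriters(n int) int {
	var (
		mu    sync.Mutex
		count int // stand-in for rows written
		wg    sync.WaitGroup
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			count++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return count
}

func main() {
	fmt.Println(runWriters(100)) // 100: all writes applied, none lost
}
```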
## Error Handling
- Method validation: Return `405 Method Not Allowed` for wrong HTTP methods
- Input validation: Return `400 Bad Request` for missing/malformed fields
- Authentication: Return `401 Unauthorized` if webhook secret doesn't match
- Not found: Return `404 Not Found` when row doesn't exist (e.g., dismiss nonexistent image)
- Conflict: Return `409 Conflict` for unique constraint violations (duplicate tag name)
- Internal errors: Return `500 Internal Server Error` for database failures
- Fatal startup errors: `log.Fatalf` on `InitDB` failure
- `useUpdates`: catches fetch errors, stores error message in state, displays error banner
- `useTags`: catches errors, logs to `console.error`, fails silently (no user-visible error)
- `assignTag`: uses optimistic update -- updates local state first, fires API call, logs errors to console but does not revert on failure
## Cross-Cutting Concerns
<!-- GSD:architecture-end -->
<!-- GSD:workflow-start source:GSD defaults -->
## GSD Workflow Enforcement
Before using Edit, Write, or other file-changing tools, start work through a GSD command so planning artifacts and execution context stay in sync.
Use these entry points:
- `/gsd:quick` for small fixes, doc updates, and ad-hoc tasks
- `/gsd:debug` for investigation and bug fixing
- `/gsd:execute-phase` for planned phase work
Do not make direct repo edits outside a GSD workflow unless the user explicitly asks to bypass it.
<!-- GSD:workflow-end -->
<!-- GSD:profile-start -->
## Developer Profile
> Profile not yet configured. Run `/gsd:profile-user` to generate your developer profile.
> This section is managed by `generate-claude-profile` -- do not edit manually.
<!-- GSD:profile-end -->

cmd/diunwebhook/main.go

@@ -2,6 +2,7 @@ package main
import (
"context"
"database/sql"
"errors"
"log"
"net/http"
@@ -11,6 +12,7 @@ import (
"time"
diun "awesomeProject/pkg/diunwebhook"
_ "modernc.org/sqlite"
)
func main() {
@@ -18,33 +20,42 @@ func main() {
if dbPath == "" {
dbPath = "./diun.db"
}
if err := diun.InitDB(dbPath); err != nil {
log.Fatalf("InitDB: %v", err)
db, err := sql.Open("sqlite", dbPath)
if err != nil {
log.Fatalf("sql.Open: %v", err)
}
if err := diun.RunMigrations(db); err != nil {
log.Fatalf("RunMigrations: %v", err)
}
store := diun.NewSQLiteStore(db)
secret := os.Getenv("WEBHOOK_SECRET")
if secret == "" {
log.Println("WARNING: WEBHOOK_SECRET not set — webhook endpoint is unprotected")
} else {
diun.SetWebhookSecret(secret)
log.Println("Webhook endpoint protected with token authentication")
}
srv := diun.NewServer(store, secret)
port := os.Getenv("PORT")
if port == "" {
port = "8080"
}
mux := http.NewServeMux()
mux.HandleFunc("/webhook", diun.WebhookHandler)
mux.HandleFunc("/api/updates/", diun.DismissHandler)
mux.HandleFunc("/api/updates", diun.UpdatesHandler)
mux.HandleFunc("/api/tags", diun.TagsHandler)
mux.HandleFunc("/api/tags/", diun.TagByIDHandler)
mux.HandleFunc("/api/tag-assignments", diun.TagAssignmentHandler)
mux.HandleFunc("/webhook", srv.WebhookHandler)
mux.HandleFunc("/api/updates/", srv.DismissHandler)
mux.HandleFunc("/api/updates", srv.UpdatesHandler)
mux.HandleFunc("/api/tags", srv.TagsHandler)
mux.HandleFunc("/api/tags/", srv.TagByIDHandler)
mux.HandleFunc("/api/tag-assignments", srv.TagAssignmentHandler)
mux.Handle("/", http.FileServer(http.Dir("./frontend/dist")))
srv := &http.Server{
httpSrv := &http.Server{
Addr: ":" + port,
Handler: mux,
ReadTimeout: 10 * time.Second,
@@ -57,7 +68,7 @@ func main() {
go func() {
log.Printf("Listening on :%s", port)
if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
if err := httpSrv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
log.Fatalf("ListenAndServe: %v", err)
}
}()
@@ -67,7 +78,7 @@ func main() {
ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()
if err := srv.Shutdown(ctx); err != nil {
if err := httpSrv.Shutdown(ctx); err != nil {
log.Printf("Shutdown error: %v", err)
} else {
log.Println("Server stopped cleanly")

go.mod

@@ -2,6 +2,11 @@ module awesomeProject
go 1.26
require (
github.com/golang-migrate/migrate/v4 v4.19.1
modernc.org/sqlite v1.46.1
)
require (
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/google/uuid v1.6.0 // indirect
@@ -9,9 +14,8 @@ require (
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
golang.org/x/sys v0.37.0 // indirect
golang.org/x/sys v0.38.0 // indirect
modernc.org/libc v1.67.6 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
modernc.org/sqlite v1.46.1 // indirect
)

go.sum

@@ -1,23 +1,65 @@
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/golang-migrate/migrate/v4 v4.19.1 h1:OCyb44lFuQfYXYLx1SCxPZQGU7mcaZ7gH9yH4jSFbBA=
github.com/golang-migrate/migrate/v4 v4.19.1/go.mod h1:CTcgfjxhaUtsLipnLoQRWCrjYXycRz/g5+RWDuYgPrE=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc=
modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.67.6 h1:eVOQvpModVLKOdT+LvBPjdQqfrZq+pC39BygcT+E7OI=
modernc.org/libc v1.67.6/go.mod h1:JAhxUVlolfYDErnwiqaLvUqc8nfb2r6S6slAgZOnaiE=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.46.1 h1:eFJ2ShBLIEnUWlLy12raN0Z1plqmFX9Qe3rjQTKt6sU=
modernc.org/sqlite v1.46.1/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=

pkg/diunwebhook/diunwebhook.go

@@ -2,18 +2,17 @@ package diunwebhook
import (
"crypto/subtle"
"database/sql"
"encoding/json"
"errors"
"log"
"net/http"
"strconv"
"strings"
"sync"
"time"
_ "modernc.org/sqlite"
)
const maxBodyBytes = 1 << 20 // 1 MB
type DiunEvent struct {
DiunVersion string `json:"diun_version"`
Hostname string `json:"hostname"`
@@ -45,125 +44,22 @@ type UpdateEntry struct {
Tag *Tag `json:"tag"`
}
var (
mu sync.Mutex
db *sql.DB
// Server holds the application dependencies for HTTP handlers.
type Server struct {
store Store
webhookSecret string
)
func SetWebhookSecret(secret string) {
webhookSecret = secret
}
func InitDB(path string) error {
var err error
db, err = sql.Open("sqlite", path)
if err != nil {
return err
}
db.SetMaxOpenConns(1)
_, err = db.Exec(`CREATE TABLE IF NOT EXISTS updates (
image TEXT PRIMARY KEY,
diun_version TEXT NOT NULL DEFAULT '',
hostname TEXT NOT NULL DEFAULT '',
status TEXT NOT NULL DEFAULT '',
provider TEXT NOT NULL DEFAULT '',
hub_link TEXT NOT NULL DEFAULT '',
mime_type TEXT NOT NULL DEFAULT '',
digest TEXT NOT NULL DEFAULT '',
created TEXT NOT NULL DEFAULT '',
platform TEXT NOT NULL DEFAULT '',
ctn_name TEXT NOT NULL DEFAULT '',
ctn_id TEXT NOT NULL DEFAULT '',
ctn_state TEXT NOT NULL DEFAULT '',
ctn_status TEXT NOT NULL DEFAULT '',
received_at TEXT NOT NULL,
acknowledged_at TEXT
)`)
if err != nil {
return err
}
// Migration: add acknowledged_at to existing databases (silently ignored if already present)
_, _ = db.Exec(`ALTER TABLE updates ADD COLUMN acknowledged_at TEXT`)
_, err = db.Exec(`CREATE TABLE IF NOT EXISTS tags (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL UNIQUE
)`)
if err != nil {
return err
}
_, err = db.Exec(`CREATE TABLE IF NOT EXISTS tag_assignments (
image TEXT PRIMARY KEY,
tag_id INTEGER NOT NULL REFERENCES tags(id) ON DELETE CASCADE
)`)
if err != nil {
return err
}
return nil
// NewServer constructs a Server backed by the given Store.
func NewServer(store Store, webhookSecret string) *Server {
return &Server{store: store, webhookSecret: webhookSecret}
}
func UpdateEvent(event DiunEvent) error {
mu.Lock()
defer mu.Unlock()
_, err := db.Exec(`INSERT OR REPLACE INTO updates VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,NULL)`,
event.Image, event.DiunVersion, event.Hostname, event.Status, event.Provider,
event.HubLink, event.MimeType, event.Digest,
event.Created.Format(time.RFC3339), event.Platform,
event.Metadata.ContainerName, event.Metadata.ContainerID,
event.Metadata.State, event.Metadata.Status,
time.Now().Format(time.RFC3339),
)
return err
}
func GetUpdates() (map[string]UpdateEntry, error) {
rows, err := db.Query(`SELECT u.image, u.diun_version, u.hostname, u.status, u.provider,
u.hub_link, u.mime_type, u.digest, u.created, u.platform,
u.ctn_name, u.ctn_id, u.ctn_state, u.ctn_status, u.received_at, COALESCE(u.acknowledged_at, ''),
t.id, t.name
FROM updates u
LEFT JOIN tag_assignments ta ON u.image = ta.image
LEFT JOIN tags t ON ta.tag_id = t.id`)
if err != nil {
return nil, err
}
defer func(rows *sql.Rows) {
err := rows.Close()
if err != nil {
}
}(rows)
result := make(map[string]UpdateEntry)
for rows.Next() {
var e UpdateEntry
var createdStr, receivedStr, acknowledgedAt string
var tagID sql.NullInt64
var tagName sql.NullString
err := rows.Scan(&e.Event.Image, &e.Event.DiunVersion, &e.Event.Hostname,
&e.Event.Status, &e.Event.Provider, &e.Event.HubLink, &e.Event.MimeType,
&e.Event.Digest, &createdStr, &e.Event.Platform,
&e.Event.Metadata.ContainerName, &e.Event.Metadata.ContainerID,
&e.Event.Metadata.State, &e.Event.Metadata.Status,
&receivedStr, &acknowledgedAt, &tagID, &tagName)
if err != nil {
return nil, err
}
e.Event.Created, _ = time.Parse(time.RFC3339, createdStr)
e.ReceivedAt, _ = time.Parse(time.RFC3339, receivedStr)
e.Acknowledged = acknowledgedAt != ""
if tagID.Valid && tagName.Valid {
e.Tag = &Tag{ID: int(tagID.Int64), Name: tagName.String}
}
result[e.Event.Image] = e
}
return result, rows.Err()
}
func WebhookHandler(w http.ResponseWriter, r *http.Request) {
if webhookSecret != "" {
// WebhookHandler handles POST /webhook
func (s *Server) WebhookHandler(w http.ResponseWriter, r *http.Request) {
if s.webhookSecret != "" {
auth := r.Header.Get("Authorization")
if subtle.ConstantTimeCompare([]byte(auth), []byte(webhookSecret)) != 1 {
if subtle.ConstantTimeCompare([]byte(auth), []byte(s.webhookSecret)) != 1 {
http.Error(w, "unauthorized", http.StatusUnauthorized)
return
}
@@ -174,9 +70,15 @@ func WebhookHandler(w http.ResponseWriter, r *http.Request) {
return
}
r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
var event DiunEvent
if err := json.NewDecoder(r.Body).Decode(&event); err != nil {
var maxBytesErr *http.MaxBytesError
if errors.As(err, &maxBytesErr) {
http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
return
}
log.Printf("WebhookHandler: failed to decode request: %v", err)
http.Error(w, "bad request", http.StatusBadRequest)
return
@@ -187,7 +89,7 @@ func WebhookHandler(w http.ResponseWriter, r *http.Request) {
return
}
if err := UpdateEvent(event); err != nil {
if err := s.store.UpsertEvent(event); err != nil {
log.Printf("WebhookHandler: failed to store event: %v", err)
http.Error(w, "internal error", http.StatusInternalServerError)
return
@@ -198,8 +100,9 @@ func WebhookHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
}
func UpdatesHandler(w http.ResponseWriter, r *http.Request) {
updates, err := GetUpdates()
// UpdatesHandler handles GET /api/updates
func (s *Server) UpdatesHandler(w http.ResponseWriter, r *http.Request) {
updates, err := s.store.GetUpdates()
if err != nil {
log.Printf("UpdatesHandler: failed to get updates: %v", err)
http.Error(w, "internal error", http.StatusInternalServerError)
@@ -211,7 +114,8 @@ func UpdatesHandler(w http.ResponseWriter, r *http.Request) {
}
}
func DismissHandler(w http.ResponseWriter, r *http.Request) {
// DismissHandler handles PATCH /api/updates/{image}
func (s *Server) DismissHandler(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPatch {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
@@ -221,15 +125,12 @@ func DismissHandler(w http.ResponseWriter, r *http.Request) {
http.Error(w, "bad request: image name required", http.StatusBadRequest)
return
}
mu.Lock()
res, err := db.Exec(`UPDATE updates SET acknowledged_at = datetime('now') WHERE image = ?`, image)
mu.Unlock()
found, err := s.store.AcknowledgeUpdate(image)
if err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
n, _ := res.RowsAffected()
if n == 0 {
if !found {
http.Error(w, "not found", http.StatusNotFound)
return
}
@@ -237,50 +138,36 @@ func DismissHandler(w http.ResponseWriter, r *http.Request) {
}
// TagsHandler handles GET /api/tags and POST /api/tags
func TagsHandler(w http.ResponseWriter, r *http.Request) {
func (s *Server) TagsHandler(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case http.MethodGet:
rows, err := db.Query(`SELECT id, name FROM tags ORDER BY name`)
tags, err := s.store.ListTags()
if err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
defer func(rows *sql.Rows) {
err := rows.Close()
if err != nil {
}
}(rows)
tags := []Tag{}
for rows.Next() {
var t Tag
if err := rows.Scan(&t.ID, &t.Name); err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
tags = append(tags, t)
}
if err := rows.Err(); err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
err = json.NewEncoder(w).Encode(tags)
if err != nil {
return
}
json.NewEncoder(w).Encode(tags) //nolint:errcheck
case http.MethodPost:
r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
var req struct {
Name string `json:"name"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil || req.Name == "" {
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
var maxBytesErr *http.MaxBytesError
if errors.As(err, &maxBytesErr) {
http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
return
}
http.Error(w, "bad request: name required", http.StatusBadRequest)
return
}
mu.Lock()
res, err := db.Exec(`INSERT INTO tags (name) VALUES (?)`, req.Name)
mu.Unlock()
if req.Name == "" {
http.Error(w, "bad request: name required", http.StatusBadRequest)
return
}
tag, err := s.store.CreateTag(req.Name)
if err != nil {
if strings.Contains(err.Error(), "UNIQUE") {
http.Error(w, "conflict: tag name already exists", http.StatusConflict)
@@ -289,13 +176,9 @@ func TagsHandler(w http.ResponseWriter, r *http.Request) {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
id, _ := res.LastInsertId()
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
err = json.NewEncoder(w).Encode(Tag{ID: int(id), Name: req.Name})
if err != nil {
return
}
json.NewEncoder(w).Encode(tag) //nolint:errcheck
default:
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
@@ -303,7 +186,7 @@ func TagsHandler(w http.ResponseWriter, r *http.Request) {
}
// TagByIDHandler handles DELETE /api/tags/{id}
func TagByIDHandler(w http.ResponseWriter, r *http.Request) {
func (s *Server) TagByIDHandler(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodDelete {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
@@ -314,15 +197,12 @@ func TagByIDHandler(w http.ResponseWriter, r *http.Request) {
http.Error(w, "bad request: invalid id", http.StatusBadRequest)
return
}
mu.Lock()
res, err := db.Exec(`DELETE FROM tags WHERE id = ?`, id)
mu.Unlock()
found, err := s.store.DeleteTag(id)
if err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
n, _ := res.RowsAffected()
if n == 0 {
if !found {
http.Error(w, "not found", http.StatusNotFound)
return
}
@@ -330,45 +210,57 @@ func TagByIDHandler(w http.ResponseWriter, r *http.Request) {
}
// TagAssignmentHandler handles PUT /api/tag-assignments and DELETE /api/tag-assignments
func TagAssignmentHandler(w http.ResponseWriter, r *http.Request) {
func (s *Server) TagAssignmentHandler(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case http.MethodPut:
r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
var req struct {
Image string `json:"image"`
TagID int `json:"tag_id"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil || req.Image == "" {
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
var maxBytesErr *http.MaxBytesError
if errors.As(err, &maxBytesErr) {
http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
return
}
http.Error(w, "bad request", http.StatusBadRequest)
return
}
// Check tag exists
var exists int
err := db.QueryRow(`SELECT COUNT(*) FROM tags WHERE id = ?`, req.TagID).Scan(&exists)
if err != nil || exists == 0 {
if req.Image == "" {
http.Error(w, "bad request", http.StatusBadRequest)
return
}
exists, err := s.store.TagExists(req.TagID)
if err != nil || !exists {
http.Error(w, "not found: tag does not exist", http.StatusNotFound)
return
}
mu.Lock()
_, err = db.Exec(`INSERT OR REPLACE INTO tag_assignments (image, tag_id) VALUES (?, ?)`, req.Image, req.TagID)
mu.Unlock()
if err != nil {
if err := s.store.AssignTag(req.Image, req.TagID); err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusNoContent)
case http.MethodDelete:
r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
var req struct {
Image string `json:"image"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil || req.Image == "" {
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
var maxBytesErr *http.MaxBytesError
if errors.As(err, &maxBytesErr) {
http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
return
}
http.Error(w, "bad request", http.StatusBadRequest)
return
}
mu.Lock()
_, err := db.Exec(`DELETE FROM tag_assignments WHERE image = ?`, req.Image)
mu.Unlock()
if err != nil {
if req.Image == "" {
http.Error(w, "bad request", http.StatusBadRequest)
return
}
if err := s.store.UnassignTag(req.Image); err != nil {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}

View File

@@ -7,7 +7,6 @@ import (
"fmt"
"net/http"
"net/http/httptest"
"os"
"sync"
"testing"
"time"
@@ -15,13 +14,11 @@ import (
diun "awesomeProject/pkg/diunwebhook"
)
func TestMain(m *testing.M) {
diun.UpdatesReset()
os.Exit(m.Run())
}
func TestUpdateEventAndGetUpdates(t *testing.T) {
diun.UpdatesReset()
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
event := diun.DiunEvent{
DiunVersion: "1.0",
Hostname: "host",
@@ -34,13 +31,12 @@ func TestUpdateEventAndGetUpdates(t *testing.T) {
Created: time.Now(),
Platform: "linux/amd64",
}
err := diun.UpdateEvent(event)
if err != nil {
return
if err := srv.TestUpsertEvent(event); err != nil {
t.Fatalf("test setup: TestUpsertEvent failed: %v", err)
}
got, err := diun.GetUpdates()
got, err := srv.TestGetUpdates()
if err != nil {
t.Fatalf("GetUpdates error: %v", err)
t.Fatalf("TestGetUpdates error: %v", err)
}
if len(got) != 1 {
t.Fatalf("expected 1 update, got %d", len(got))
@@ -51,7 +47,10 @@ func TestUpdateEventAndGetUpdates(t *testing.T) {
}
func TestWebhookHandler(t *testing.T) {
diun.UpdatesReset()
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
event := diun.DiunEvent{
DiunVersion: "2.0",
Hostname: "host2",
@@ -67,95 +66,106 @@ func TestWebhookHandler(t *testing.T) {
body, _ := json.Marshal(event)
req := httptest.NewRequest(http.MethodPost, "/webhook", bytes.NewReader(body))
rec := httptest.NewRecorder()
diun.WebhookHandler(rec, req)
srv.WebhookHandler(rec, req)
if rec.Code != http.StatusOK {
t.Errorf("expected status 200, got %d", rec.Code)
}
if len(diun.GetUpdatesMap()) != 1 {
t.Errorf("expected 1 update, got %d", len(diun.GetUpdatesMap()))
if len(srv.TestGetUpdatesMap()) != 1 {
t.Errorf("expected 1 update, got %d", len(srv.TestGetUpdatesMap()))
}
}
func TestWebhookHandler_Unauthorized(t *testing.T) {
diun.UpdatesReset()
diun.SetWebhookSecret("my-secret")
defer diun.ResetWebhookSecret()
srv, err := diun.NewTestServerWithSecret("my-secret")
if err != nil {
t.Fatalf("NewTestServerWithSecret: %v", err)
}
event := diun.DiunEvent{Image: "nginx:latest"}
body, _ := json.Marshal(event)
req := httptest.NewRequest(http.MethodPost, "/webhook", bytes.NewReader(body))
rec := httptest.NewRecorder()
diun.WebhookHandler(rec, req)
srv.WebhookHandler(rec, req)
if rec.Code != http.StatusUnauthorized {
t.Errorf("expected 401, got %d", rec.Code)
}
}
func TestWebhookHandler_WrongToken(t *testing.T) {
diun.UpdatesReset()
diun.SetWebhookSecret("my-secret")
defer diun.ResetWebhookSecret()
srv, err := diun.NewTestServerWithSecret("my-secret")
if err != nil {
t.Fatalf("NewTestServerWithSecret: %v", err)
}
event := diun.DiunEvent{Image: "nginx:latest"}
body, _ := json.Marshal(event)
req := httptest.NewRequest(http.MethodPost, "/webhook", bytes.NewReader(body))
req.Header.Set("Authorization", "wrong-token")
rec := httptest.NewRecorder()
diun.WebhookHandler(rec, req)
srv.WebhookHandler(rec, req)
if rec.Code != http.StatusUnauthorized {
t.Errorf("expected 401, got %d", rec.Code)
}
}
func TestWebhookHandler_ValidToken(t *testing.T) {
diun.UpdatesReset()
diun.SetWebhookSecret("my-secret")
defer diun.ResetWebhookSecret()
srv, err := diun.NewTestServerWithSecret("my-secret")
if err != nil {
t.Fatalf("NewTestServerWithSecret: %v", err)
}
event := diun.DiunEvent{Image: "nginx:latest"}
body, _ := json.Marshal(event)
req := httptest.NewRequest(http.MethodPost, "/webhook", bytes.NewReader(body))
req.Header.Set("Authorization", "my-secret")
rec := httptest.NewRecorder()
diun.WebhookHandler(rec, req)
srv.WebhookHandler(rec, req)
if rec.Code != http.StatusOK {
t.Errorf("expected 200, got %d", rec.Code)
}
}
func TestWebhookHandler_NoSecretConfigured(t *testing.T) {
diun.UpdatesReset()
diun.ResetWebhookSecret()
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
event := diun.DiunEvent{Image: "nginx:latest"}
body, _ := json.Marshal(event)
req := httptest.NewRequest(http.MethodPost, "/webhook", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.WebhookHandler(rec, req)
if rec.Code != http.StatusOK {
t.Errorf("expected 200 (no secret configured), got %d", rec.Code)
}
}
func TestWebhookHandler_BadRequest(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
req := httptest.NewRequest(http.MethodPost, "/webhook", bytes.NewReader([]byte("not-json")))
rec := httptest.NewRecorder()
srv.WebhookHandler(rec, req)
if rec.Code != http.StatusBadRequest {
t.Errorf("expected 400 for bad JSON, got %d", rec.Code)
}
}
func TestUpdatesHandler(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
event := diun.DiunEvent{Image: "busybox:latest"}
if err := srv.TestUpsertEvent(event); err != nil {
t.Fatalf("test setup: TestUpsertEvent failed: %v", err)
}
req := httptest.NewRequest(http.MethodGet, "/api/updates", nil)
rec := httptest.NewRecorder()
srv.UpdatesHandler(rec, req)
if rec.Code != http.StatusOK {
t.Errorf("expected status 200, got %d", rec.Code)
}
@@ -177,17 +187,25 @@ func (f failWriter) Write([]byte) (int, error) { return 0, errors.New("forced er
func (f failWriter) WriteHeader(_ int) {}
func TestUpdatesHandler_EncodeError(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
rec := failWriter{httptest.NewRecorder()}
srv.UpdatesHandler(rec, httptest.NewRequest(http.MethodGet, "/api/updates", nil))
// No panic = pass
}
func TestWebhookHandler_MethodNotAllowed(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
methods := []string{http.MethodGet, http.MethodPut, http.MethodDelete}
for _, method := range methods {
req := httptest.NewRequest(method, "/webhook", nil)
rec := httptest.NewRecorder()
srv.WebhookHandler(rec, req)
if rec.Code != http.StatusMethodNotAllowed {
t.Errorf("method %s: expected 405, got %d", method, rec.Code)
}
@@ -197,53 +215,61 @@ func TestWebhookHandler_MethodNotAllowed(t *testing.T) {
body, _ := json.Marshal(event)
req := httptest.NewRequest(http.MethodPost, "/webhook", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.WebhookHandler(rec, req)
if rec.Code == http.StatusMethodNotAllowed {
t.Errorf("POST should not return 405")
}
}
func TestWebhookHandler_EmptyImage(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
body, _ := json.Marshal(diun.DiunEvent{Image: ""})
req := httptest.NewRequest(http.MethodPost, "/webhook", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.WebhookHandler(rec, req)
if rec.Code != http.StatusBadRequest {
t.Errorf("expected 400 for empty image, got %d", rec.Code)
}
if len(srv.TestGetUpdatesMap()) != 0 {
t.Errorf("expected map to stay empty, got %d entries", len(srv.TestGetUpdatesMap()))
}
}
func TestConcurrentUpdateEvent(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
const n = 100
var wg sync.WaitGroup
wg.Add(n)
for i := range n {
go func(i int) {
defer wg.Done()
if err := srv.TestUpsertEvent(diun.DiunEvent{Image: fmt.Sprintf("image:%d", i)}); err != nil {
t.Errorf("test setup: TestUpsertEvent[%d] failed: %v", i, err)
}
}(i)
}
wg.Wait()
if got := len(srv.TestGetUpdatesMap()); got != n {
t.Errorf("expected %d entries, got %d", n, got)
}
}
func TestMainHandlerIntegration(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "/webhook" {
srv.WebhookHandler(w, r)
} else if r.URL.Path == "/api/updates" {
srv.UpdatesHandler(w, r)
} else {
w.WriteHeader(http.StatusNotFound)
}
@@ -282,19 +308,21 @@ func TestMainHandlerIntegration(t *testing.T) {
}
func TestDismissHandler_Success(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
if err := srv.TestUpsertEvent(diun.DiunEvent{Image: "nginx:latest"}); err != nil {
t.Fatalf("test setup: TestUpsertEvent failed: %v", err)
}
req := httptest.NewRequest(http.MethodPatch, "/api/updates/nginx:latest", nil)
rec := httptest.NewRecorder()
srv.DismissHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Errorf("expected 204, got %d", rec.Code)
}
m := srv.TestGetUpdatesMap()
if len(m) != 1 {
t.Errorf("expected entry to remain after acknowledge, got %d entries", len(m))
}
@@ -304,39 +332,48 @@ func TestDismissHandler_Success(t *testing.T) {
}
func TestDismissHandler_NotFound(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
req := httptest.NewRequest(http.MethodPatch, "/api/updates/does-not-exist:latest", nil)
rec := httptest.NewRecorder()
srv.DismissHandler(rec, req)
if rec.Code != http.StatusNotFound {
t.Errorf("expected 404, got %d", rec.Code)
}
}
func TestDismissHandler_EmptyImage(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
req := httptest.NewRequest(http.MethodPatch, "/api/updates/", nil)
rec := httptest.NewRecorder()
srv.DismissHandler(rec, req)
if rec.Code != http.StatusBadRequest {
t.Errorf("expected 400, got %d", rec.Code)
}
}
func TestDismissHandler_SlashInImageName(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
if err := srv.TestUpsertEvent(diun.DiunEvent{Image: "ghcr.io/user/image:tag"}); err != nil {
t.Fatalf("test setup: TestUpsertEvent failed: %v", err)
}
req := httptest.NewRequest(http.MethodPatch, "/api/updates/ghcr.io/user/image:tag", nil)
rec := httptest.NewRecorder()
srv.DismissHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Errorf("expected 204, got %d", rec.Code)
}
m := srv.TestGetUpdatesMap()
if len(m) != 1 {
t.Errorf("expected entry to remain after acknowledge, got %d entries", len(m))
}
@@ -346,21 +383,28 @@ func TestDismissHandler_SlashInImageName(t *testing.T) {
}
func TestDismissHandler_ReappearsAfterNewWebhook(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
if err := srv.TestUpsertEvent(diun.DiunEvent{Image: "nginx:latest"}); err != nil {
t.Fatalf("test setup: TestUpsertEvent failed: %v", err)
}
req := httptest.NewRequest(http.MethodPatch, "/api/updates/nginx:latest", nil)
rec := httptest.NewRecorder()
srv.DismissHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Fatalf("expected 204 on acknowledge, got %d", rec.Code)
}
if !srv.TestGetUpdatesMap()["nginx:latest"].Acknowledged {
t.Errorf("expected entry to be acknowledged after PATCH")
}
if err := srv.TestUpsertEvent(diun.DiunEvent{Image: "nginx:latest", Status: "update"}); err != nil {
t.Fatalf("second TestUpsertEvent failed: %v", err)
}
m := srv.TestGetUpdatesMap()
if len(m) != 1 {
t.Errorf("expected entry to remain, got %d entries", len(m))
}
@@ -374,21 +418,21 @@ func TestDismissHandler_ReappearsAfterNewWebhook(t *testing.T) {
// --- Tag handler tests ---
func postTag(t *testing.T, srv *diun.Server, name string) (int, int) {
t.Helper()
body, _ := json.Marshal(map[string]string{"name": name})
req := httptest.NewRequest(http.MethodPost, "/api/tags", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.TagsHandler(rec, req)
return rec.Code, rec.Body.Len()
}
func postTagAndGetID(t *testing.T, srv *diun.Server, name string) int {
t.Helper()
body, _ := json.Marshal(map[string]string{"name": name})
req := httptest.NewRequest(http.MethodPost, "/api/tags", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.TagsHandler(rec, req)
if rec.Code != http.StatusCreated {
t.Fatalf("expected 201 creating tag %q, got %d", name, rec.Code)
}
@@ -398,11 +442,14 @@ func postTagAndGetID(t *testing.T, name string) int {
}
func TestCreateTagHandler_Success(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
body, _ := json.Marshal(map[string]string{"name": "nextcloud"})
req := httptest.NewRequest(http.MethodPost, "/api/tags", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.TagsHandler(rec, req)
if rec.Code != http.StatusCreated {
t.Fatalf("expected 201, got %d", rec.Code)
}
@@ -419,30 +466,39 @@ func TestCreateTagHandler_Success(t *testing.T) {
}
func TestCreateTagHandler_DuplicateName(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
postTag(t, srv, "monitoring")
code, _ := postTag(t, srv, "monitoring")
if code != http.StatusConflict {
t.Errorf("expected 409, got %d", code)
}
}
func TestCreateTagHandler_EmptyName(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
body, _ := json.Marshal(map[string]string{"name": ""})
req := httptest.NewRequest(http.MethodPost, "/api/tags", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.TagsHandler(rec, req)
if rec.Code != http.StatusBadRequest {
t.Errorf("expected 400, got %d", rec.Code)
}
}
func TestGetTagsHandler_Empty(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
req := httptest.NewRequest(http.MethodGet, "/api/tags", nil)
rec := httptest.NewRecorder()
srv.TagsHandler(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("expected 200, got %d", rec.Code)
}
@@ -454,12 +510,15 @@ func TestGetTagsHandler_Empty(t *testing.T) {
}
func TestGetTagsHandler_WithTags(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
postTag(t, srv, "alpha")
postTag(t, srv, "beta")
req := httptest.NewRequest(http.MethodGet, "/api/tags", nil)
rec := httptest.NewRecorder()
srv.TagsHandler(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("expected 200, got %d", rec.Code)
}
@@ -471,36 +530,47 @@ func TestGetTagsHandler_WithTags(t *testing.T) {
}
func TestDeleteTagHandler_Success(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
id := postTagAndGetID(t, srv, "to-delete")
req := httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/api/tags/%d", id), nil)
rec := httptest.NewRecorder()
srv.TagByIDHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Errorf("expected 204, got %d", rec.Code)
}
}
func TestDeleteTagHandler_NotFound(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
req := httptest.NewRequest(http.MethodDelete, "/api/tags/9999", nil)
rec := httptest.NewRecorder()
srv.TagByIDHandler(rec, req)
if rec.Code != http.StatusNotFound {
t.Errorf("expected 404, got %d", rec.Code)
}
}
func TestDeleteTagHandler_CascadesAssignment(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
if err := srv.TestUpsertEvent(diun.DiunEvent{Image: "nginx:latest"}); err != nil {
t.Fatalf("test setup: TestUpsertEvent failed: %v", err)
}
id := postTagAndGetID(t, srv, "cascade-test")
// Assign the tag
body, _ := json.Marshal(map[string]interface{}{"image": "nginx:latest", "tag_id": id})
req := httptest.NewRequest(http.MethodPut, "/api/tag-assignments", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.TagAssignmentHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Fatalf("expected 204 on assign, got %d", rec.Code)
}
@@ -508,43 +578,53 @@ func TestDeleteTagHandler_CascadesAssignment(t *testing.T) {
// Delete the tag
req = httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/api/tags/%d", id), nil)
rec = httptest.NewRecorder()
srv.TagByIDHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Fatalf("expected 204 on delete, got %d", rec.Code)
}
// Confirm assignment cascaded
m := srv.TestGetUpdatesMap()
if m["nginx:latest"].Tag != nil {
t.Errorf("expected tag to be nil after cascade delete, got %+v", m["nginx:latest"].Tag)
}
}
func TestTagAssignmentHandler_Assign(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
if err := srv.TestUpsertEvent(diun.DiunEvent{Image: "alpine:latest"}); err != nil {
t.Fatalf("test setup: TestUpsertEvent failed: %v", err)
}
id := postTagAndGetID(t, srv, "assign-test")
body, _ := json.Marshal(map[string]interface{}{"image": "alpine:latest", "tag_id": id})
req := httptest.NewRequest(http.MethodPut, "/api/tag-assignments", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.TagAssignmentHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Errorf("expected 204, got %d", rec.Code)
}
}
func TestTagAssignmentHandler_Reassign(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
if err := srv.TestUpsertEvent(diun.DiunEvent{Image: "redis:latest"}); err != nil {
t.Fatalf("test setup: TestUpsertEvent failed: %v", err)
}
id1 := postTagAndGetID(t, srv, "group-a")
id2 := postTagAndGetID(t, srv, "group-b")
assign := func(tagID int) {
body, _ := json.Marshal(map[string]interface{}{"image": "redis:latest", "tag_id": tagID})
req := httptest.NewRequest(http.MethodPut, "/api/tag-assignments", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.TagAssignmentHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Fatalf("expected 204, got %d", rec.Code)
}
@@ -553,51 +633,61 @@ func TestTagAssignmentHandler_Reassign(t *testing.T) {
assign(id1)
assign(id2)
m := srv.TestGetUpdatesMap()
if m["redis:latest"].Tag == nil || m["redis:latest"].Tag.ID != id2 {
t.Errorf("expected tag id %d after reassign, got %+v", id2, m["redis:latest"].Tag)
}
}
func TestTagAssignmentHandler_Unassign(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
if err := srv.TestUpsertEvent(diun.DiunEvent{Image: "busybox:latest"}); err != nil {
t.Fatalf("test setup: TestUpsertEvent failed: %v", err)
}
id := postTagAndGetID(t, srv, "unassign-test")
body, _ := json.Marshal(map[string]interface{}{"image": "busybox:latest", "tag_id": id})
req := httptest.NewRequest(http.MethodPut, "/api/tag-assignments", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.TagAssignmentHandler(rec, req)
// Now unassign
body, _ = json.Marshal(map[string]string{"image": "busybox:latest"})
req = httptest.NewRequest(http.MethodDelete, "/api/tag-assignments", bytes.NewReader(body))
rec = httptest.NewRecorder()
srv.TagAssignmentHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Errorf("expected 204, got %d", rec.Code)
}
m := srv.TestGetUpdatesMap()
if m["busybox:latest"].Tag != nil {
t.Errorf("expected tag nil after unassign, got %+v", m["busybox:latest"].Tag)
}
}
func TestGetUpdates_IncludesTag(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
if err := srv.TestUpsertEvent(diun.DiunEvent{Image: "postgres:latest"}); err != nil {
t.Fatalf("test setup: TestUpsertEvent failed: %v", err)
}
id := postTagAndGetID(t, srv, "databases")
body, _ := json.Marshal(map[string]interface{}{"image": "postgres:latest", "tag_id": id})
req := httptest.NewRequest(http.MethodPut, "/api/tag-assignments", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.TagAssignmentHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Fatalf("expected 204, got %d", rec.Code)
}
m := srv.TestGetUpdatesMap()
entry, ok := m["postgres:latest"]
if !ok {
t.Fatal("expected postgres:latest in updates")
@@ -612,3 +702,110 @@ func TestGetUpdates_IncludesTag(t *testing.T) {
t.Errorf("expected tag id %d, got %d", id, entry.Tag.ID)
}
}
func TestWebhookHandler_OversizedBody(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
// Generate a body that exceeds 1 MB (maxBodyBytes = 1<<20 = 1,048,576 bytes).
// Use a valid JSON prefix so the decoder reads past the limit before failing,
// ensuring MaxBytesReader triggers a 413 rather than a JSON parse 400.
prefix := []byte(`{"image":"`)
padding := bytes.Repeat([]byte("x"), 1<<20+1)
oversized := append(prefix, padding...)
req := httptest.NewRequest(http.MethodPost, "/webhook", bytes.NewReader(oversized))
rec := httptest.NewRecorder()
srv.WebhookHandler(rec, req)
if rec.Code != http.StatusRequestEntityTooLarge {
t.Errorf("expected 413 for oversized body, got %d", rec.Code)
}
}
func TestTagsHandler_OversizedBody(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
prefix := []byte(`{"name":"`)
padding := bytes.Repeat([]byte("x"), 1<<20+1)
oversized := append(prefix, padding...)
req := httptest.NewRequest(http.MethodPost, "/api/tags", bytes.NewReader(oversized))
rec := httptest.NewRecorder()
srv.TagsHandler(rec, req)
if rec.Code != http.StatusRequestEntityTooLarge {
t.Errorf("expected 413 for oversized body, got %d", rec.Code)
}
}
func TestTagAssignmentHandler_OversizedBody(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
prefix := []byte(`{"image":"`)
padding := bytes.Repeat([]byte("x"), 1<<20+1)
oversized := append(prefix, padding...)
req := httptest.NewRequest(http.MethodPut, "/api/tag-assignments", bytes.NewReader(oversized))
rec := httptest.NewRecorder()
srv.TagAssignmentHandler(rec, req)
if rec.Code != http.StatusRequestEntityTooLarge {
t.Errorf("expected 413 for oversized body, got %d", rec.Code)
}
}
func TestUpdateEvent_PreservesTagOnUpsert(t *testing.T) {
srv, err := diun.NewTestServer()
if err != nil {
t.Fatalf("NewTestServer: %v", err)
}
// Insert image
if err := srv.TestUpsertEvent(diun.DiunEvent{Image: "nginx:latest", Status: "new"}); err != nil {
t.Fatalf("first TestUpsertEvent failed: %v", err)
}
// Assign tag
tagID := postTagAndGetID(t, srv, "webservers")
body, _ := json.Marshal(map[string]interface{}{"image": "nginx:latest", "tag_id": tagID})
req := httptest.NewRequest(http.MethodPut, "/api/tag-assignments", bytes.NewReader(body))
rec := httptest.NewRecorder()
srv.TagAssignmentHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Fatalf("tag assignment failed: got %d", rec.Code)
}
// Dismiss (acknowledge) the image — second event must reset this
req = httptest.NewRequest(http.MethodPatch, "/api/updates/nginx:latest", nil)
rec = httptest.NewRecorder()
srv.DismissHandler(rec, req)
if rec.Code != http.StatusNoContent {
t.Fatalf("dismiss failed: got %d", rec.Code)
}
// Receive a second event for the same image
if err := srv.TestUpsertEvent(diun.DiunEvent{Image: "nginx:latest", Status: "update"}); err != nil {
t.Fatalf("second TestUpsertEvent failed: %v", err)
}
// Tag must survive the second event
m := srv.TestGetUpdatesMap()
entry, ok := m["nginx:latest"]
if !ok {
t.Fatal("nginx:latest missing from updates after second event")
}
if entry.Tag == nil {
t.Error("tag was lost after second TestUpsertEvent — UPSERT bug not fixed")
}
if entry.Tag != nil && entry.Tag.ID != tagID {
t.Errorf("tag ID changed: expected %d, got %d", tagID, entry.Tag.ID)
}
// Acknowledged state must be reset by the new event
if entry.Acknowledged {
t.Error("acknowledged state must be reset by new event")
}
// Status must reflect the new event
if entry.Event.Status != "update" {
t.Errorf("expected status 'update', got %q", entry.Event.Status)
}
}


@@ -1,19 +1,46 @@
package diunwebhook
import "database/sql"
// NewTestServer constructs a Server with a fresh in-memory SQLite database.
// Each call returns an isolated server -- tests do not share state.
func NewTestServer() (*Server, error) {
db, err := sql.Open("sqlite", ":memory:")
if err != nil {
return nil, err
}
if err := RunMigrations(db); err != nil {
return nil, err
}
store := NewSQLiteStore(db)
return NewServer(store, ""), nil
}
// NewTestServerWithSecret constructs a Server with webhook authentication enabled.
func NewTestServerWithSecret(secret string) (*Server, error) {
db, err := sql.Open("sqlite", ":memory:")
if err != nil {
return nil, err
}
if err := RunMigrations(db); err != nil {
return nil, err
}
store := NewSQLiteStore(db)
return NewServer(store, secret), nil
}
// TestUpsertEvent calls UpsertEvent on the server's store (for test setup).
func (s *Server) TestUpsertEvent(event DiunEvent) error {
return s.store.UpsertEvent(event)
}
// TestGetUpdates calls GetUpdates on the server's store (for test assertions).
func (s *Server) TestGetUpdates() (map[string]UpdateEntry, error) {
return s.store.GetUpdates()
}
// TestGetUpdatesMap is a convenience wrapper that returns the map without error.
func (s *Server) TestGetUpdatesMap() map[string]UpdateEntry {
m, _ := s.store.GetUpdates()
return m
}


@@ -0,0 +1,36 @@
package diunwebhook
import (
"database/sql"
"embed"
"errors"
"github.com/golang-migrate/migrate/v4"
sqlitemigrate "github.com/golang-migrate/migrate/v4/database/sqlite"
"github.com/golang-migrate/migrate/v4/source/iofs"
_ "modernc.org/sqlite"
)
//go:embed migrations/sqlite
var sqliteMigrations embed.FS
// RunMigrations applies all pending schema migrations to the given SQLite database.
// Returns nil when all migrations apply successfully or the database is already up to date.
func RunMigrations(db *sql.DB) error {
src, err := iofs.New(sqliteMigrations, "migrations/sqlite")
if err != nil {
return err
}
driver, err := sqlitemigrate.WithInstance(db, &sqlitemigrate.Config{})
if err != nil {
return err
}
m, err := migrate.NewWithInstance("iofs", src, "sqlite", driver)
if err != nil {
return err
}
if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) {
return err
}
return nil
}


@@ -0,0 +1,3 @@
DROP TABLE IF EXISTS tag_assignments;
DROP TABLE IF EXISTS tags;
DROP TABLE IF EXISTS updates;


@@ -0,0 +1,28 @@
CREATE TABLE IF NOT EXISTS updates (
image TEXT PRIMARY KEY,
diun_version TEXT NOT NULL DEFAULT '',
hostname TEXT NOT NULL DEFAULT '',
status TEXT NOT NULL DEFAULT '',
provider TEXT NOT NULL DEFAULT '',
hub_link TEXT NOT NULL DEFAULT '',
mime_type TEXT NOT NULL DEFAULT '',
digest TEXT NOT NULL DEFAULT '',
created TEXT NOT NULL DEFAULT '',
platform TEXT NOT NULL DEFAULT '',
ctn_name TEXT NOT NULL DEFAULT '',
ctn_id TEXT NOT NULL DEFAULT '',
ctn_state TEXT NOT NULL DEFAULT '',
ctn_status TEXT NOT NULL DEFAULT '',
received_at TEXT NOT NULL,
acknowledged_at TEXT
);
CREATE TABLE IF NOT EXISTS tags (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL UNIQUE
);
CREATE TABLE IF NOT EXISTS tag_assignments (
image TEXT PRIMARY KEY,
tag_id INTEGER NOT NULL REFERENCES tags(id) ON DELETE CASCADE
);
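The `ON DELETE CASCADE` on `tag_assignments.tag_id` is what lets a single `DELETE FROM tags` clear the matching assignment rows, but SQLite only enforces it when `PRAGMA foreign_keys = ON` is set on the connection (which `NewSQLiteStore` does). An illustrative session, not part of the migrations:

```sql
PRAGMA foreign_keys = ON;
INSERT INTO tags (name) VALUES ('web');                                  -- id = 1
INSERT INTO tag_assignments (image, tag_id) VALUES ('nginx:latest', 1);
DELETE FROM tags WHERE id = 1;
-- the cascade also removes the 'nginx:latest' row
SELECT COUNT(*) FROM tag_assignments;                                    -- 0
```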


@@ -0,0 +1,183 @@
package diunwebhook
import (
"database/sql"
"sync"
"time"
)
// SQLiteStore implements Store using a SQLite database.
type SQLiteStore struct {
db *sql.DB
mu sync.Mutex
}
// NewSQLiteStore creates a new SQLiteStore backed by the given *sql.DB.
// It sets MaxOpenConns(1) to prevent concurrent write contention and
// enables foreign key enforcement via PRAGMA foreign_keys = ON.
func NewSQLiteStore(db *sql.DB) *SQLiteStore {
db.SetMaxOpenConns(1)
// PRAGMA foreign_keys must be set per-connection; with MaxOpenConns(1) this covers all queries.
db.Exec("PRAGMA foreign_keys = ON") //nolint:errcheck
return &SQLiteStore{db: db}
}
// UpsertEvent inserts or updates a DIUN event in the updates table.
// On conflict (same image), all fields are updated and acknowledged_at is reset to NULL.
func (s *SQLiteStore) UpsertEvent(event DiunEvent) error {
s.mu.Lock()
defer s.mu.Unlock()
_, err := s.db.Exec(`
INSERT INTO updates (
image, diun_version, hostname, status, provider,
hub_link, mime_type, digest, created, platform,
ctn_name, ctn_id, ctn_state, ctn_status,
received_at, acknowledged_at
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,NULL)
ON CONFLICT(image) DO UPDATE SET
diun_version = excluded.diun_version,
hostname = excluded.hostname,
status = excluded.status,
provider = excluded.provider,
hub_link = excluded.hub_link,
mime_type = excluded.mime_type,
digest = excluded.digest,
created = excluded.created,
platform = excluded.platform,
ctn_name = excluded.ctn_name,
ctn_id = excluded.ctn_id,
ctn_state = excluded.ctn_state,
ctn_status = excluded.ctn_status,
received_at = excluded.received_at,
acknowledged_at = NULL`,
event.Image, event.DiunVersion, event.Hostname, event.Status, event.Provider,
event.HubLink, event.MimeType, event.Digest,
event.Created.Format(time.RFC3339), event.Platform,
event.Metadata.ContainerName, event.Metadata.ContainerID,
event.Metadata.State, event.Metadata.Status,
time.Now().Format(time.RFC3339),
)
return err
}
// GetUpdates returns all update entries joined with their tag assignments.
func (s *SQLiteStore) GetUpdates() (map[string]UpdateEntry, error) {
rows, err := s.db.Query(`SELECT u.image, u.diun_version, u.hostname, u.status, u.provider,
u.hub_link, u.mime_type, u.digest, u.created, u.platform,
u.ctn_name, u.ctn_id, u.ctn_state, u.ctn_status, u.received_at, COALESCE(u.acknowledged_at, ''),
t.id, t.name
FROM updates u
LEFT JOIN tag_assignments ta ON u.image = ta.image
LEFT JOIN tags t ON ta.tag_id = t.id`)
if err != nil {
return nil, err
}
defer rows.Close()
result := make(map[string]UpdateEntry)
for rows.Next() {
var e UpdateEntry
var createdStr, receivedStr, acknowledgedAt string
var tagID sql.NullInt64
var tagName sql.NullString
err := rows.Scan(&e.Event.Image, &e.Event.DiunVersion, &e.Event.Hostname,
&e.Event.Status, &e.Event.Provider, &e.Event.HubLink, &e.Event.MimeType,
&e.Event.Digest, &createdStr, &e.Event.Platform,
&e.Event.Metadata.ContainerName, &e.Event.Metadata.ContainerID,
&e.Event.Metadata.State, &e.Event.Metadata.Status,
&receivedStr, &acknowledgedAt, &tagID, &tagName)
if err != nil {
return nil, err
}
e.Event.Created, _ = time.Parse(time.RFC3339, createdStr)
e.ReceivedAt, _ = time.Parse(time.RFC3339, receivedStr)
e.Acknowledged = acknowledgedAt != ""
if tagID.Valid && tagName.Valid {
e.Tag = &Tag{ID: int(tagID.Int64), Name: tagName.String}
}
result[e.Event.Image] = e
}
return result, rows.Err()
}
// AcknowledgeUpdate marks the given image as acknowledged.
// Returns found=false if no row with that image exists.
func (s *SQLiteStore) AcknowledgeUpdate(image string) (found bool, err error) {
s.mu.Lock()
defer s.mu.Unlock()
res, err := s.db.Exec(`UPDATE updates SET acknowledged_at = datetime('now') WHERE image = ?`, image)
if err != nil {
return false, err
}
n, _ := res.RowsAffected()
return n > 0, nil
}
// ListTags returns all tags ordered by name.
func (s *SQLiteStore) ListTags() ([]Tag, error) {
rows, err := s.db.Query(`SELECT id, name FROM tags ORDER BY name`)
if err != nil {
return nil, err
}
defer rows.Close()
tags := []Tag{}
for rows.Next() {
var t Tag
if err := rows.Scan(&t.ID, &t.Name); err != nil {
return nil, err
}
tags = append(tags, t)
}
return tags, rows.Err()
}
// CreateTag inserts a new tag with the given name and returns the created tag.
func (s *SQLiteStore) CreateTag(name string) (Tag, error) {
s.mu.Lock()
defer s.mu.Unlock()
res, err := s.db.Exec(`INSERT INTO tags (name) VALUES (?)`, name)
if err != nil {
return Tag{}, err
}
id, _ := res.LastInsertId()
return Tag{ID: int(id), Name: name}, nil
}
// DeleteTag deletes the tag with the given id.
// Returns found=false if no tag with that id exists.
func (s *SQLiteStore) DeleteTag(id int) (found bool, err error) {
s.mu.Lock()
defer s.mu.Unlock()
res, err := s.db.Exec(`DELETE FROM tags WHERE id = ?`, id)
if err != nil {
return false, err
}
n, _ := res.RowsAffected()
return n > 0, nil
}
// AssignTag assigns the given image to the given tag.
// Uses INSERT OR REPLACE so re-assigning an image to a different tag replaces the existing assignment.
func (s *SQLiteStore) AssignTag(image string, tagID int) error {
s.mu.Lock()
defer s.mu.Unlock()
_, err := s.db.Exec(`INSERT OR REPLACE INTO tag_assignments (image, tag_id) VALUES (?, ?)`, image, tagID)
return err
}
// UnassignTag removes any tag assignment for the given image.
func (s *SQLiteStore) UnassignTag(image string) error {
s.mu.Lock()
defer s.mu.Unlock()
_, err := s.db.Exec(`DELETE FROM tag_assignments WHERE image = ?`, image)
return err
}
// TagExists returns true if a tag with the given id exists.
func (s *SQLiteStore) TagExists(id int) (bool, error) {
var count int
err := s.db.QueryRow(`SELECT COUNT(*) FROM tags WHERE id = ?`, id).Scan(&count)
if err != nil {
return false, err
}
return count > 0, nil
}

pkg/diunwebhook/store.go (new file)

@@ -0,0 +1,15 @@
package diunwebhook
// Store defines all persistence operations. Implementations must be safe
// for concurrent use from HTTP handlers.
type Store interface {
UpsertEvent(event DiunEvent) error
GetUpdates() (map[string]UpdateEntry, error)
AcknowledgeUpdate(image string) (found bool, err error)
ListTags() ([]Tag, error)
CreateTag(name string) (Tag, error)
DeleteTag(id int) (found bool, err error)
AssignTag(image string, tagID int) error
UnassignTag(image string) error
TagExists(id int) (bool, error)
}