---
stepsCompleted:
  - 1
  - 2
  - 3
inputDocuments:
  - _bmad-output/planning-artifacts/prd.md
  - _bmad-output/planning-artifacts/ux-design-specification.md
  - _bmad-output/planning-artifacts/validation-report-2026-04-03.md
  - Initial Description.txt
  - Initial Documents/Access_Schema.txt
  - Initial Documents/Client Database Plan.xlsx
  - _bmad-output/planning-artifacts/client-database-plan-extract.txt
workflowType: 'architecture'
project_name: 'Campaign_Tracker App'
user_name: 'Daniel'
date: '2026-04-03'
---

# Architecture Decision Document

_This document builds collaboratively through step-by-step discovery. Sections are appended as we work through each architectural decision together._

## Project Context Analysis

### Requirements Overview

**Functional Requirements:**

The system scope covers end-to-end municipal election operations with 34 FRs organized into 7 architectural capability domains:

- Municipality & account management (FR1-FR4)
- Election-cycle setup & planning (FR5-FR8)
- Service configuration (FR9-FR13)
- Scheduling & milestone management (FR14-FR17)
- Production tracking & exception handling (FR18-FR21)
- Operational reporting & exports (FR22-FR26)
- Role governance + legacy compatibility/data integrity (FR27-FR34)

Architecturally, this implies a modular domain model with strict separation between:

- Permanent municipality entities (long-lived master data)
- Election-cycle entities (repeatable operational lifecycle data)

**Non-Functional Requirements:**

Key architecture-driving NFRs include:

- Performance: p95-like expectations on page loads, updates, and report generation
- Security: transport encryption, at-rest protection, RBAC enforcement, audit logging
- Scalability: 10x job growth and 150 concurrent users under load constraints
- Accessibility: keyboard-operable critical workflows and WCAG-aligned operation
- Compatibility/integrity: immutable legacy schema and high referential join consistency
- Reliability/recoverability: availability targets, audit reconstruction, recovery safeguards

These NFRs strongly favor a service-oriented architecture with explicit quality gates, observability, and data-contract validation.

**Scale & Complexity:**

This project is high complexity due to regulated operations, date-critical workflows, multi-role coordination, and brownfield constraints.

- Primary domain: internal govtech operations web platform (data-intensive line-of-business)
- Complexity level: high
- Estimated architectural components: ~14-18 major components/services (provisional; to be validated after topology and integration boundary decisions)

### Technical Constraints & Dependencies

- Legacy Access tables are immutable (no add/drop/alter of original tables).
- All new behavior must be implemented through extension tables and join queries/views.
- Join keys are fixed and must remain deterministic: `ID`, `JCode`/`JurisCode`, `KitID`.
- Permanent municipality and election-cycle data must remain model-separated while joinable in reports/workflows.
- Reporting output parity with existing operational consumers must be preserved.
- Browser-targeted desktop web app with keyboard-first usability and dense data interaction.
- Existing process expectations are anchored to Access-era workflows and spreadsheet export patterns.

### Critical Tensions to Resolve Early

- **Scope tension (platform):** The PRD includes a tablet-friendly secondary target, while the confirmed UX direction is PC-only. The architectural baseline should treat PC-only as controlling scope unless the PRD is revised.
- **Compliance specificity tension:** Govtech expectations (clearance/residency/procurement evidence) are present but need explicit system-level enforcement decisions.
- **Performance vs. density tension:** Dense grid workflows, live operational state, and sub-second update targets may conflict without a careful query/index/cache strategy.
- **Parity vs. modernization tension:** Preserving legacy report parity while improving the data model and workflows introduces dual-truth risk during the transition.

### Architectural Pressure Points

- Introduce an **anti-corruption data access layer** between modern services and legacy Access structures.
- Separate architecture into an **operational write path** (extension tables) and a **report read path** (join/materialized/query layer) to reduce coupling.
- Treat join-key integrity as a platform concern with scheduled validation and release-gate checks.
- Model pre-commit validation as a shared orchestration capability (required fields, dependencies, policy checks).
- Ensure deterministic state refresh semantics for dense grids, risk queues, and provenance views.
- Define explainable blocker payloads so the UI can present clear reasons and one-click corrective actions.

### Migration & Parity Governance

- Run legacy and modern reports in parallel during transition windows.
- Define parity acceptance thresholds per report type and field criticality.
- Implement a discrepancy triage workflow with owner assignment and closure status tracking.
- Require parity evidence as a formal cutover gate before legacy process deprecation.
- Maintain backward-compatible export schemas for existing downstream consumers where mandated.

### Compliance Evidence Model (Govtech)

- Define security evidence artifacts for privileged actions and authentication events.
- Define data residency attestation artifacts aligned to municipality policy constraints.
- Define accessibility evidence artifacts (keyboard coverage, contrast checks, critical-flow audits).
- Bind evidence generation to release gates rather than post-release documentation.
- Assign ownership and retention rules for each evidence artifact category.
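The discrepancy detection described under Migration & Parity Governance above can be sketched as a comparison routine over deterministically keyed report rows. This is a minimal illustration, not a defined API: the `ReportRow` and `Discrepancy` shapes, the `compareReports` name, and the composite key format are all assumptions for the sketch.

```typescript
// Hypothetical sketch: compare a legacy report against its modern counterpart
// and emit discrepancy records for the triage workflow. All type and function
// names here are illustrative assumptions, not part of the platform contract.

type ReportRow = Record<string, string | number | null>;

interface Discrepancy {
  key: string;      // deterministic join key (e.g. JCode + KitID composite)
  field: string;    // field whose values diverge ("*row*" = row missing entirely)
  legacy: unknown;
  modern: unknown;
}

// criticalFields would come from the per-report parity acceptance thresholds;
// it is passed in explicitly here for illustration.
function compareReports(
  legacy: Map<string, ReportRow>,
  modern: Map<string, ReportRow>,
  criticalFields: string[],
): Discrepancy[] {
  const discrepancies: Discrepancy[] = [];
  for (const [key, legacyRow] of legacy) {
    const modernRow = modern.get(key);
    if (!modernRow) {
      // Legacy row has no modern counterpart: flag the whole row.
      discrepancies.push({ key, field: "*row*", legacy: legacyRow, modern: null });
      continue;
    }
    for (const field of criticalFields) {
      if (legacyRow[field] !== modernRow[field]) {
        discrepancies.push({ key, field, legacy: legacyRow[field], modern: modernRow[field] });
      }
    }
  }
  return discrepancies;
}

// Example: one row drifted on a critical field.
const legacyReport = new Map<string, ReportRow>([
  ["J001|K42", { Status: "Shipped", Qty: 100 }],
]);
const modernReport = new Map<string, ReportRow>([
  ["J001|K42", { Status: "Shipped", Qty: 95 }],
]);
const diffs = compareReports(legacyReport, modernReport, ["Status", "Qty"]);
console.log(diffs); // one Qty discrepancy for key "J001|K42"
```

In a real pipeline each `Discrepancy` would be persisted with an owner assignment and closure status, and the parity gate would pass only when open discrepancies fall under the per-report threshold.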
### Cross-Cutting Concerns Identified

- Data integrity and compatibility enforcement across all read/write paths
- Authorization boundaries and privileged operation controls
- Auditability/provenance for every sensitive status transition
- Validation orchestration (required-field, dependency, and policy checks pre-commit)
- Operational reporting consistency and deterministic filtering/sorting behavior
- Performance under peak election windows (query shaping, caching, export throughput)
- Accessibility and keyboard operability in dense grid/form workflows
- Error handling and recovery UX linked to exception workflows

### Assumptions & Risk Flags (Architecture-Level)

- Assumption: legacy identifiers are sufficiently clean for deterministic joins at operational scale.
- Assumption: extension-table growth can be managed without degrading report latency.
- Risk flag: key-mapping inconsistencies may produce silent report drift unless reconciliation controls are explicit.
- Risk flag: the migration period may require side-by-side parity verification and a discrepancy triage workflow.
- Risk flag: compliance evidence generation must be designed in, not added after implementation.

## Starter Template Evaluation

### Primary Technology Domain

Full-stack municipal operations web application using an ASP.NET backend and a TypeScript-first React frontend.

### Starter Options Considered

- ASP.NET Core SPA template (`dotnet new react`): convenient single template, but less aligned with modern frontend tooling and an independent frontend lifecycle.
- ASP.NET Core 10 Web API + Vite React TypeScript: clean backend/frontend boundary, modern frontend DX, and explicit fit for the requested stack.
- Next.js full-stack starter: a strong option generally, but misaligned with the explicit requirement for an ASP.NET backend.
### Selected Starter: ASP.NET Core 10 Web API + Vite React TypeScript

**Rationale for Selection:**

This aligns directly with the requested architecture (`.NET 10 ASP` + `TypeScript-first React`), supports modern frontend workflows, and preserves flexibility for immutable-legacy integration patterns on the backend.

**Initialization Command:**

```bash
dotnet new sln -n campaign-tracker
dotnet new webapi -n Campaign_Tracker.Server -f net10.0 --use-controllers
dotnet sln add .\Campaign_Tracker.Server\Campaign_Tracker.Server.csproj
npm create vite@latest campaign-tracker-client -- --template react-ts
```

**Architectural Decisions Provided by Starter:**

**Language & Runtime:** Backend on C#/.NET 10 Web API; frontend on React + TypeScript with a Node-based toolchain.

**Styling Solution:** The Vite React TypeScript starter defaults to a plain CSS setup, enabling incremental adoption of a design system without early framework lock-in.

**Build Tooling:** Backend built via .NET SDK tooling; frontend built via Vite for fast local iteration and production bundling.

**Testing Framework:** The backend includes .NET test ecosystem compatibility; the frontend starter provides a TypeScript-ready React project structure that can layer in Vitest and Playwright in implementation stories.

**Code Organization:** Clear server/client repo boundaries with explicit API contracts between them, supporting modular domain design and safer migration from legacy Access workflows.

**Development Experience:** Hot reload on both backend and frontend, strong TypeScript ergonomics, and independent deploy/test pipelines for server and client.

**Note:** Project initialization using this command should be the first implementation story.

## Approved Correct Course Addendum (2026-05-05)

### New Source Integration Boundary

A late source document was approved for inclusion: the Google Sheet `AUGUST 2026 PRIMARY Ballot Envelope Imprinting and Tracking Scheduled .xlsx`.
To support this safely, the architecture adds an explicit ingestion boundary:

- Source intake adapter (approved template/file detection)
- Template-version and header validation
- Mapping registry from source columns to extension entities
- Validation engine for required fields and join-key integrity
- Staging store for pre-publish review
- Publish service with audit and provenance capture

### Provenance and Reconciliation Model

For spreadsheet-origin data, the platform must persist:

- Source file identifier
- Source row reference
- Import timestamp
- Importing user

A reconciliation process must compare source-origin values against operational report outputs, with deterministic mismatch reporting and triage workflow ownership.

### OIDC Logout / End-Session

The application uses OIDC RP-initiated logout. On user-initiated logout:

- The backend `/api/auth/logout` endpoint calls the Keycloak `end_session_endpoint` with the `id_token_hint` to destroy the server-side session.
- The client clears local token storage and redirects to the Keycloak login page.
- If the Keycloak end-session call fails, client tokens are still cleared and the failure is logged; no silent partial-logout state is permitted.
- The `SESSION_LOGOUT` event is written to the audit service within 5 seconds (NFR7).

### Security and Role Handling for Contact Data

Imported contact and proofing-related fields are role-protected and auditable.

- Least-privilege visibility by role
- All sensitive field changes captured in audit stream
- No bypass around SafeCommitRail-equivalent validation for publish-sensitive updates
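The logout rules in the OIDC Logout / End-Session section above can be sketched as a small client-side handler. This is a sketch under assumptions: the dependency-injected callbacks (`callBackend`, `clearTokens`, `logFailure`, `redirect`) and the `/login` redirect target are illustrative stand-ins for the real fetch wrapper, token store, logging hook, and Keycloak login URL.

```typescript
// Hypothetical client-side logout handler. The key invariant from the
// architecture: even if the backend end-session call fails, tokens are
// still cleared and the failure is logged, so no silent partial-logout
// state can occur.

async function logout(
  callBackend: () => Promise<void>,    // POSTs /api/auth/logout (Keycloak end-session happens server-side)
  clearTokens: () => void,             // wipes local token storage
  logFailure: (err: unknown) => void,  // surfaces the failure for ops/audit
  redirect: (url: string) => void,     // navigation hook
): Promise<void> {
  try {
    await callBackend();
  } catch (err) {
    // End-session failure is logged but never blocks local cleanup.
    logFailure(err);
  } finally {
    // Runs on both success and failure paths.
    clearTokens();
    redirect("/login"); // stand-in for the Keycloak login page
  }
}
```

The `try/finally` shape is the design point: local cleanup and redirection are unconditional, while the Keycloak call is best-effort with explicit failure logging, matching the "no silent partial-logout" rule.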