Story 2.6: Spreadsheet Import & Column Mapping
Status: ready-for-dev
Story
As a client services or operations user,
I want to import approved election-cycle spreadsheet data and map it to the extension model,
so that key schedule and service fields can be staged and reviewed without manual re-entry.
Acceptance Criteria
- Given an approved import file is provided When the import workflow starts Then the system parses required columns and validates expected template headers before staging data
- Given staged rows include jurisdiction references When mapping runs Then each row is matched to legacy-linked identifiers (ID, JCode/JurisCode, KitID where applicable) or flagged as unresolved
- Given a staged row fails validation When the review screen is displayed Then deterministic error output identifies the row, failing field, and corrective action needed before publish
- Given a staged row passes validation When it is included in publish Then provenance metadata is recorded: source file identifier, source row reference, import timestamp, and importing user
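The provenance criterion above can be sketched as a record attached to each published row. This is an illustrative shape only — the interface and field names (`RowProvenance`, `sourceFileId`, etc.) are assumptions to be aligned with the architecture doc's provenance fields, not the final schema:

```typescript
// Hypothetical provenance record, one per published row (not per batch).
interface RowProvenance {
  sourceFileId: string; // identifier of the approved import file
  sourceRowRef: number; // row reference within the source file
  importedAt: string;   // ISO-8601 import timestamp
  importedBy: string;   // importing user
}

// Build the provenance record at publish time. The timestamp is injectable
// so publish output stays testable and reproducible.
function buildProvenance(
  sourceFileId: string,
  sourceRowRef: number,
  importedBy: string,
  importedAt: Date = new Date()
): RowProvenance {
  return {
    sourceFileId,
    sourceRowRef,
    importedBy,
    importedAt: importedAt.toISOString(),
  };
}
```

Injecting the timestamp (rather than calling `new Date()` inside the publish path) keeps the provenance output deterministic under test.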
Tasks / Subtasks
Dev Notes
- Staging store is separate from the live extension tables. Do not write parsed rows directly into election-cycle extension storage — staging exists so review can happen pre-publish (per architecture).
- Jurisdiction mapping must reuse the legacy anti-corruption data access layer (Story 1.6) and the legacy identifier linker (Story 1.8). Do not bypass them or build a parallel matcher.
- “Deterministic error output” is a hard property — sort/index error records by stable keys (row index, field name) so identical input yields identical output. Avoid relying on set-iteration order or hash-randomized ordering.
- Provenance metadata (source file identifier, source row reference, import timestamp, importing user) is required per published row, not per import batch.
- Legacy Access tables remain read-only — the import path writes only to staging and to extension tables.
- This story does not consume SafeCommitRail (Story 2.5); the import publish path has its own validation gate scoped to import semantics. Coordinate naming so the two flows are visibly distinct.
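The deterministic-ordering note above can be made concrete with a small sketch. The `ImportError` shape and `sortImportErrors` name are hypothetical, but the technique — a stable comparator over (row index, field name) — is the point:

```typescript
// Hypothetical error record produced during staging validation.
interface ImportError {
  rowIndex: number; // position of the staged row in the source file
  field: string;    // name of the failing column
  message: string;  // corrective action needed before publish
}

// Sort error records by stable keys so identical input files always
// produce identical review output, regardless of whether upstream
// collection used a Set, Map, or hash-ordered structure.
function sortImportErrors(errors: ImportError[]): ImportError[] {
  return [...errors].sort((a, b) =>
    a.rowIndex !== b.rowIndex
      ? a.rowIndex - b.rowIndex
      : a.field < b.field ? -1 : a.field > b.field ? 1 : 0
  );
}
```

The same ordering should be applied before rendering the review screen and before writing any error export, so the two views never disagree.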
Project Structure Notes
- Backend:
Campaign_Tracker.Server/ — add an import feature folder (parser, staging repository, mapper, controller); reuse legacy ACL from Story 1.6 and identifier linker from Story 1.8
- Frontend:
campaign-tracker-client/ — add the import workflow under the election-cycle workspace
- Story artifacts:
_bmad-output/implementation-artifacts/
References
- Story source:
_bmad-output/planning-artifacts/epics.md (Epic 2 / Story 2.6)
- Architecture constraints:
_bmad-output/planning-artifacts/architecture.md (mapping registry, staging store for pre-publish review, provenance fields, publish service with audit)
- UX patterns:
_bmad-output/planning-artifacts/ux-design-specification.md
- Prior stories: Story 1.5 — shared audit logging; Story 1.6 — legacy anti-corruption data access layer; Story 1.8 — legacy identifier linking for extension records
Dev Agent Record
Agent Model Used
{{agent_model_name_version}}
Debug Log References
- Story generated from epic source and architecture/UX planning artifacts.
Completion Notes List
- Story context created and marked ready-for-dev.
File List
Change Log