
Story 2.6: Spreadsheet Import & Column Mapping

Status: ready-for-dev

Story

As a client services or operations user, I want to import approved election-cycle spreadsheet data and map it to the extension model, so that key schedule and service fields can be staged and reviewed without manual re-entry.

Acceptance Criteria

  1. Given an approved import file is provided, When the import workflow starts, Then the system parses required columns and validates expected template headers before staging data
  2. Given staged rows include jurisdiction references, When mapping runs, Then each row is matched to legacy-linked identifiers (ID, JCode/JurisCode, KitID where applicable) or flagged as unresolved
  3. Given a staged row fails validation, When the review screen is displayed, Then deterministic error output identifies the row, the failing field, and the corrective action needed before publish
  4. Given a staged row passes validation, When it is included in publish, Then provenance metadata is recorded: source file identifier, source row reference, import timestamp, and importing user
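
One way to picture the data these criteria imply is sketched below in TypeScript. All names and field types here are illustrative assumptions, not the project's actual model; they only restate what AC #2–#4 require each staged row to carry.

```typescript
// Illustrative shapes only; names and types are assumptions.

/** Mapping outcome for one staged row (AC #2). */
type MappingStatus =
  | { kind: "mapped"; id: string; jCode: string; kitId?: string }
  | { kind: "unresolved"; unresolvedIdentifiers: string[] }; // name the identifiers explicitly

/** Deterministic validation error record (AC #3). */
interface RowValidationError {
  rowIndex: number;         // source row reference in the uploaded file
  field: string;            // the failing column/field
  correctiveAction: string; // what the reviewer must fix before publish
}

/** Per-row provenance written at publish time (AC #4). */
interface RowProvenance {
  sourceFileId: string; // identifier of the approved import file
  sourceRowRef: number; // row reference within that file
  importedAt: string;   // ISO-8601 import timestamp
  importedBy: string;   // importing user
}
```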

Tasks / Subtasks

  • Backend: import parser & header validation (AC: #1)
    • Add an import endpoint that accepts an approved spreadsheet file, parses required columns, and validates expected template headers up-front
    • Reject malformed or unexpected templates with a structured error before any row staging occurs (a minimal header-check sketch follows this task list)
  • Backend: staging store & jurisdiction mapping (AC: #2)
    • Persist parsed rows in a staging store separate from the live extension tables (per architecture: pre-publish review)
    • Map each staged row to legacy-linked identifiers (ID, JCode/JurisCode, KitID where applicable) using the legacy anti-corruption layer from Story 1.6 / linker from Story 1.8
    • Flag unresolved rows explicitly with the unresolved identifier(s) named
  • Backend: validation & deterministic error output (AC: #3)
    • Run row-level validation producing deterministic, stable error records identifying row index, failing field, and a specific corrective action message
    • The same input must always produce the same error set in the same order (for reviewer trust and diffability)
  • Backend: publish with provenance (AC: #4)
    • On publish, write valid staged rows into the election-cycle extension model and record provenance per row: source file identifier, source row reference, import timestamp, importing user
    • Audit the import publish event via the shared audit logger
  • Frontend: import workflow UI (AC: #1, #2, #3, #4)
    • File upload entry point with header-validation feedback before staging
    • Review screen showing staged rows, jurisdiction mapping status (mapped vs. unresolved with the unmatched identifier visible), and validation errors with row/field/corrective-action columns
    • Publish action runs only after all blocking validation errors are resolved or the offending rows are excluded
  • Tests & evidence (AC: #1–#4)
    • Backend tests for header validation, staging persistence, jurisdiction mapping (matched and unresolved), deterministic error output, provenance write, audit emission
    • Frontend tests for upload feedback, review screen states, publish gating
    • Document changed files and any config notes
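
As a rough illustration of the up-front header check in the first backend task: the parser can compare the uploaded file's header row against the approved template and refuse to stage anything on mismatch. This is a hedged sketch, not the implementation; the expected-header list and the structuredError helper are assumptions.

```typescript
// Hypothetical template headers; the real list comes from the approved template.
const EXPECTED_HEADERS = ["JCode", "KitID", "ServiceLevel", "ShipDate"]; // assumed

interface HeaderValidationResult {
  ok: boolean;
  missing: string[];    // expected headers absent from the file
  unexpected: string[]; // headers present in the file but not in the template
}

function validateHeaders(actual: string[]): HeaderValidationResult {
  const expected = new Set(EXPECTED_HEADERS);
  const seen = new Set(actual);
  const missing = EXPECTED_HEADERS.filter((h) => !seen.has(h));
  const unexpected = actual.filter((h) => !expected.has(h));
  return { ok: missing.length === 0 && unexpected.length === 0, missing, unexpected };
}

// Caller rejects the whole file before any row is staged, per AC #1:
//   const result = validateHeaders(parsedHeaderRow);
//   if (!result.ok) return structuredError(result); // hypothetical helper; no staging occurs
```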

Dev Notes

  • Staging store is separate from the live extension tables. Do not write parsed rows directly into election-cycle extension storage — staging exists so review can happen pre-publish (per architecture).
  • Jurisdiction mapping must reuse the legacy anti-corruption data access layer (Story 1.6) and the legacy identifier linker (Story 1.8). Do not bypass them or build a parallel matcher.
  • “Deterministic error output” is a hard property: sort error records by stable keys (row index, then field name) so identical input yields identical output. Avoid set-iteration order and hash-randomized ordering; a sorting sketch follows these notes.
  • Provenance metadata (source file identifier, source row reference, import timestamp, importing user) is required per published row, not per import batch.
  • Legacy Access tables remain read-only — the import path writes only to staging and to extension tables.
  • This story does not consume SafeCommitRail (Story 2.5); the import publish path has its own validation gate scoped to import semantics. Coordinate naming so the two flows are visibly distinct.
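
To make the determinism note concrete, a minimal sketch (reusing the illustrative RowValidationError shape from the acceptance-criteria section) orders error records by row index, then field name, before returning them:

```typescript
interface RowValidationError {
  rowIndex: number;
  field: string;
  correctiveAction: string;
}

function sortErrors(errors: RowValidationError[]): RowValidationError[] {
  // Codepoint comparison rather than localeCompare: locale tables can vary
  // across runtimes, which would break bit-for-bit reproducible output.
  return [...errors].sort((a, b) =>
    a.rowIndex !== b.rowIndex
      ? a.rowIndex - b.rowIndex
      : a.field < b.field ? -1 : a.field > b.field ? 1 : 0,
  );
}
```

Sorting a copy (rather than in place) keeps the staging read path side-effect free, which makes the "same input, same error set, same order" property easy to test.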

Project Structure Notes

  • Backend: Campaign_Tracker.Server/ — add an import feature folder (parser, staging repository, mapper, controller); reuse legacy ACL from Story 1.6 and identifier linker from Story 1.8
  • Frontend: campaign-tracker-client/ — add the import workflow under the election-cycle workspace
  • Story artifacts: _bmad-output/implementation-artifacts/
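
Purely as an orientation aid, a hypothetical layout for the new pieces (every file name below is an assumption; only the two project roots and the reused Story 1.6/1.8 components come from this story):

```
Campaign_Tracker.Server/
  Import/                   # assumed feature folder name
    ImportController        # upload endpoint; runs header validation first
    SpreadsheetParser       # parses required columns from the approved file
    StagingRepository       # staging store, separate from live extension tables
    JurisdictionMapper      # delegates to the Story 1.6 ACL and Story 1.8 linker
campaign-tracker-client/
  election-cycle workspace/
    import/                 # upload, review, and publish screens
```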

References

  • Story source: _bmad-output/planning-artifacts/epics.md (Epic 2 / Story 2.6)
  • Architecture constraints: _bmad-output/planning-artifacts/architecture.md (mapping registry, staging store for pre-publish review, provenance fields, publish service with audit)
  • UX patterns: _bmad-output/planning-artifacts/ux-design-specification.md
  • Prior stories: Story 1.5 — shared audit logging; Story 1.6 — legacy anti-corruption data access layer; Story 1.8 — legacy identifier linking for extension records

Dev Agent Record

Agent Model Used

{{agent_model_name_version}}

Debug Log References

  • Story generated from epic source and architecture/UX planning artifacts.

Completion Notes List

  • Story context created and marked ready-for-dev.

File List

Change Log
