Tactical Messaging — Work Decomposition

This work decomposition plan for the Tactical Messaging sample is aligned to the system intent and architecture definitions and was produced through ASE.

Work Decomposition Plan

Project: TacticalMessaging
Stage: Work Decomposition
Status: Draft for operator review
Aligned to: system_intent_brief, architecture_definition, compliance_matrix

1. Decomposition Intent

This plan breaks TacticalMessaging into coherent work packets that preserve the project's intended shape:

  • a graph-centered traceability model;
  • a golden change request assessment workflow;
  • a minimal web UI for request, review, decision, and export;
  • a named query and policy library as the explainable behavior layer;
  • a reproducible evidence package suitable for demo and review;
  • a clear non-production posture with explicit scope boundaries.

The primary sequencing principle is:

make the golden scenario credible end to end before broadening coverage.

That means work should first establish the smallest durable slice that can:

  1. create a change request;
  2. retrieve linked requirement and implementation context;
  3. run policy checks;
  4. record findings and a decision with audit metadata;
  5. export a bounded evidence package from known graph state.

2. Delivery Framing

2.1 Target outcome for this decomposition

A downstream team should be able to implement TacticalMessaging through a small number of coordinated lanes without ambiguity about:

  • what lands first;
  • what can proceed in parallel;
  • what must be validated before expanding scope;
  • what evidence proves the slice is credible.

2.2 Proposed implementation lanes

  1. Domain and graph lane
    Owns entity model, relationships, identifiers, persistence patterns, and audit history.

  2. Query and policy lane
    Owns named trace queries, report queries, policy checks, trigger behavior, and interpretation notes.

  3. Workflow and web UI lane
    Owns intake, review, findings, decision, export screens, role-shaped behavior, and baseline accessibility.

  4. Seed and reproducibility lane
    Owns deterministic fixtures, golden scenario data, reset behavior, and expected outputs.

  5. Evidence and reporting lane
    Owns generated reports, JSON evidence package composition, export metadata, and stable output shape.

  6. Quality and compliance lane
    Owns scenario validation, smoke tests, traceability checks, representation disclaimers, and release-readiness evidence.

3. Work Breakdown Structure

The packet dependency structure is expressed as the Mermaid flowchart below.

flowchart TD
    A[Foundation alignment]
    B[Graph domain model]
    C[Seed scenario]
    D[Named trace queries]
    E[Policy checks]
    F[Workflow services]
    G[Minimal web UI]
    H[Reporting and export]
    I[Validation and evidence]
    J[Release readiness]

    A --> B
    A --> C
    B --> D
    B --> F
    C --> D
    C --> E
    D --> G
    E --> G
    F --> G
    D --> H
    E --> H
    G --> I
    H --> I
    I --> J

4. Work Packets

WP 1 Foundation alignment and scope lock

Purpose
Translate the carried-forward intent into a constrained build target that all lanes can use.

Includes

  • confirm the golden scenario boundaries for change request assessment;
  • confirm the minimum role shape, such as submitter, reviewer, and admin;
  • confirm the minimum screen set for the web UI;
  • confirm the first-class graph object set;
  • confirm the non-production disclaimer language to appear in docs and, if practical, in the UI.

Primary outputs

  • golden scenario definition;
  • bounded screen inventory;
  • role action matrix;
  • initial domain glossary;
  • visible scope statement.

Dependencies

  • carried-forward architecture and compliance artifacts.

Completion signals

  • all downstream lanes use the same scenario and object vocabulary;
  • no lane is planning for unsupported production-grade behavior.

WP 2 Graph domain model and persistence skeleton

Purpose
Establish the graph as the system backbone for traceability, findings, decisions, and audit history.

Includes

  • define core node types such as Requirement, StandardSource, TranslatorComponent, TestCase, ReviewRequest, Finding, Decision, and EvidenceArtifact (see the type sketch after this list);
  • define relationship patterns for source lineage, implementation mapping, verification links, affected scope, findings, and decisions;
  • establish stable identifier rules;
  • define append-oriented history approach for reviews and decisions;
  • create persistence patterns used by workflow services and exports.
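
A minimal sketch of how the node and relationship vocabulary above might be expressed, assuming a TypeScript codebase; the node type names come from this plan, while the relationship names, identifier rule, and audit fields are illustrative assumptions rather than settled contracts.

// Illustrative sketch only; relationship names and field shapes are assumptions.
type NodeType =
  | "Requirement" | "StandardSource" | "TranslatorComponent" | "TestCase"
  | "ReviewRequest" | "Finding" | "Decision" | "EvidenceArtifact";

type RelationshipType =
  | "DERIVED_FROM"   // Requirement -> StandardSource (source lineage)
  | "IMPLEMENTED_BY" // Requirement -> TranslatorComponent (implementation mapping)
  | "VERIFIED_BY"    // Requirement -> TestCase (verification link)
  | "AFFECTS"        // ReviewRequest -> Requirement (affected scope)
  | "HAS_FINDING"    // ReviewRequest -> Finding
  | "DECIDED_BY"     // ReviewRequest -> Decision
  | "EVIDENCED_BY";  // Finding or Decision -> EvidenceArtifact

// Stable identifier rule (assumed): lowercase type prefix plus a
// zero-padded sequence frozen at seed time, e.g. "requirement-0001".
type StableId = string;

// Append-oriented audit metadata attached to every write; records are
// superseded by new entries, never overwritten in place.
interface AuditMeta {
  actor: string;         // simulated role identity, not real authentication
  recordedAt: string;    // ISO-8601 timestamp
  supersedes?: StableId; // prior record in the append chain, if any
}

interface GraphNode {
  id: StableId;
  type: NodeType;
  properties: Record<string, unknown>;
  audit: AuditMeta;
}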

Primary outputs

  • graph schema or model definition;
  • identifier convention;
  • relationship catalog;
  • audit metadata shape;
  • persistence service skeleton.

Dependencies

  • WP 1.

Completion signals

  • the golden scenario can be represented without placeholder entities;
  • decision history is modeled without destructive overwrite assumptions.

WP 3 Deterministic seed scenario and fixture governance

Purpose
Create the frozen sample data needed to make the golden scenario reproducible and reviewable.

Includes

  • create one canonical change request assessment scenario;
  • seed requirement lineage from source through component and test;
  • include at least one policy failure and one passing condition where practical;
  • document fictionalized data rules;
  • define reset and reseed behavior;
  • attach version identity to the seed package.

Primary outputs

  • seed manifest;
  • deterministic fixture set;
  • scenario identifiers;
  • reset instructions;
  • expected query result snapshots or output shape notes.
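
One possible shape for the seed manifest named above, sketched as a TypeScript constant; every field name here is a hypothetical illustration, including the reset script.

// Hypothetical seed manifest; field names are assumptions, not a contract.
const seedManifest = {
  seedVersion: "0.1.0",                   // version identity for the package
  scenarioId: "golden-change-request-01", // illustrative scenario identifier
  fictionalizedData: true,                // documents the fictionalized-data rule
  entities: [
    { id: "standardsource-0001", type: "StandardSource" },
    { id: "requirement-0001", type: "Requirement" },
    { id: "translatorcomponent-0001", type: "TranslatorComponent" },
    { id: "testcase-0001", type: "TestCase" },
  ],
  expectedPolicyFailures: ["policy-naming-completeness"], // at least one fail case
  resetCommand: "npm run seed:reset",     // assumed script name
} as const;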

Dependencies

  • WP 1 and WP 2.

Completion signals

  • a fresh environment reproduces the same entities and identifiers;
  • the seeded scenario is rich enough to drive trace queries, policy checks, and export.

WP 4 Named trace query catalog

Purpose
Make traceability explicit and queryable rather than embedded in opaque application code.

Includes

  • define a deliberately small named-query set for the sample app;
  • implement the must-have trace and audit queries needed by the golden scenario;
  • provide interpretation notes for each named query;
  • keep additional reports explicitly out of the first slice unless they directly support the evidence path.

Minimum catalog for Increment 2 (a registry sketch follows the lists below)

  1. Requirement trace chain
    Traverses StandardSource to Requirement to TranslatorComponent to TestCase and any linked review outcome.
  2. Change request impact view
    Shows a ReviewRequest and its linked affected requirements, components, and tests.
  3. Decision lineage
    Shows findings, decisions, and prior review history for a change request in append order.
  4. Evidence link inventory
    Lists evidence artifacts linked to the request, findings, or decision.

Nice to have later, not first-slice required

  • aggregate policy trend summaries;
  • cross-request dashboards;
  • admin-oriented catalog browsing reports.
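
Under the assumption of a TypeScript codebase and a Cypher-capable graph store (neither is mandated by this plan), the first-slice catalog could be a small registry like the sketch below; the query names mirror the four entries above, and everything else is illustrative.

// Illustrative named-query registry; Cypher text and field names are assumptions.
interface NamedQuery {
  name: string;               // stable identity referenced by reports and exports
  version: string;
  interpretationNote: string; // reviewer-facing guidance required by WP 4
  text: string;
}

const traceQueryCatalog: NamedQuery[] = [
  {
    name: "requirement-trace-chain",
    version: "1.0.0",
    interpretationNote: "Lineage from standard source through component and test.",
    text: `MATCH (r:Requirement)-[:DERIVED_FROM]->(s:StandardSource)
           OPTIONAL MATCH (r)-[:IMPLEMENTED_BY]->(c:TranslatorComponent)
           OPTIONAL MATCH (r)-[:VERIFIED_BY]->(t:TestCase)
           RETURN s, r, c, t`,
  },
  {
    name: "change-request-impact-view",
    version: "1.0.0",
    interpretationNote: "Affected requirements, components, and tests for one request.",
    text: `MATCH (rr:ReviewRequest {id: $requestId})-[:AFFECTS]->(r:Requirement)
           RETURN rr, r`,
  },
  // "decision-lineage" and "evidence-link-inventory" follow the same shape.
];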

Primary outputs

  • versioned query catalog;
  • named query implementations;
  • sample query outputs;
  • interpretation guidance for reviewers.

Dependencies

  • WP 2 and WP 3.

Completion signals

  • reviewers can inspect trace chains without reading application code;
  • query names are stable enough to be referenced by reports and exports;
  • the initial catalog remains small enough to explain in a short demo.

WP 5 Policy query library and findings model

Purpose
Implement the illustrative policy layer used during change request assessment.

Includes

  • define a focused policy catalog for the sample;
  • implement policy queries for vocabulary, naming, completeness, or other approved checks;
  • record findings linked to affected entities and policy names;
  • support a mixed execution model for the first release slice;
  • document known policy coverage boundaries.

First-slice policy execution model

  • automatic execution on selected write actions that materially affect assessment state, such as initial request submission or explicit scope update;
  • manual execution from the review flow so a reviewer can rerun checks after examining context or making assessment updates;
  • preserve run metadata so the evidence package can distinguish auto-generated and manually triggered findings (see the record sketch below).
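
A sketch of how run and finding records might preserve the trigger distinction, assuming TypeScript; every name here is illustrative.

// Hypothetical policy run and finding record shapes.
type PolicyTrigger = "automatic" | "manual";

interface PolicyRunRecord {
  runId: string;
  policyName: string;     // e.g. "policy-naming-completeness" (hypothetical)
  trigger: PolicyTrigger; // preserved so exports can distinguish run types
  triggeredBy: string;    // actor for manual runs, event name for automatic runs
  startedAt: string;      // ISO-8601 timestamp
  result: "pass" | "fail";
}

interface FindingRecord {
  findingId: string;
  runId: string;               // links back to the producing policy run
  policyName: string;          // named policy behind the finding
  affectedEntityIds: string[]; // graph objects the finding traces back to
  detail: string;
}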

Primary outputs

  • policy catalog;
  • policy query implementations;
  • finding record shape;
  • policy run record shape including trigger source;
  • pass/fail sample results;
  • interpretation notes and coverage caveats.

Dependencies

  • WP 2 and WP 3.

Completion signals

  • at least one meaningful fail case appears in the golden scenario;
  • findings can be traced back to a named policy and affected graph objects;
  • reviewers can tell whether a policy result came from an automatic or manual run.

WP 6 Workflow services for request, review, decision, and audit

Purpose
Implement the service behavior behind the golden workflow.

Includes

  • create change request submission behavior;
  • link request to affected requirements or scope objects;
  • load review context from queries;
  • trigger automatic policy execution on approved write actions;
  • persist reviewer findings, comments, and decision rationale;
  • capture actor, time, and evidence links;
  • expose manual policy rerun and export initiation behavior (one possible service surface is sketched below).
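
One possible service surface for this packet, assuming TypeScript; method names and shapes are illustrative only, not a committed interface.

// Hypothetical workflow service for the golden scenario.
interface WorkflowService {
  // Triggers automatic policy checks on submission per WP 5.
  submitRequest(input: {
    title: string;
    affectedEntityIds: string[];
    actor: string;
  }): Promise<{ requestId: string }>;

  // Backed by the named trace queries from WP 4.
  loadReviewContext(requestId: string): Promise<unknown>;

  // Manual policy rerun exposed to the review flow.
  rerunPolicies(requestId: string, actor: string): Promise<void>;

  // Appended to decision history, never a destructive overwrite.
  recordDecision(requestId: string, decision: {
    outcome: "approve" | "reject";
    rationale: string;
    actor: string;
  }): Promise<void>;

  // Initiates the WP 8 evidence export.
  exportEvidence(requestId: string): Promise<{ artifactId: string }>;
}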

Primary outputs

  • workflow service interfaces;
  • create, update, and review actions;
  • policy trigger integration points;
  • audit metadata handling;
  • role-shaped authorization hooks or route guards appropriate to the sample.

Dependencies

  • WP 2; strongly benefits from WP 4 and WP 5.

Completion signals

  • the workflow can save all state needed by the evidence package;
  • append-oriented audit history is preserved across review activity;
  • automatic and manual policy paths both produce inspectable records.

WP 7 Minimal web UI for the golden scenario

Purpose
Provide the smallest credible operator-facing surface for request and assessment.

Includes

  • request intake screen;
  • review summary screen with trace context;
  • findings and policy results presentation;
  • decision capture screen;
  • export action and status surface;
  • visible policy rerun action during review;
  • role-distinct navigation or action visibility;
  • baseline accessibility for labels, focus, keyboard flow, and status messaging.

Primary outputs

  • screen implementations;
  • navigation flow;
  • role-aware action states;
  • accessibility checklist and review notes.

Dependencies

  • WP 1 and WP 6, with data from WP 4 and WP 5.

Completion signals

  • a reviewer can complete the golden scenario through the UI;
  • the core screens are understandable without hidden technical steps;
  • policy results and rerun behavior are visible enough to explain in demo;
  • the UI does not over-claim authentication or operational readiness.

WP 8 Reporting and evidence package export

Purpose
Generate bounded outputs that can support review and demo evidence.

Includes

  • define canned reports used in the golden scenario;
  • bind reports to stable named queries;
  • implement a JSON-first evidence artifact for the first release slice;
  • include metadata such as generation time, scenario id, seed version, and query references;
  • include policy run metadata and trigger source;
  • label outputs as illustrative and environment-scoped.

First-slice JSON artifact shape (an example object follows the list)

  • export metadata;
  • scenario and seed identifiers;
  • request summary;
  • trace query results;
  • policy run summary and findings;
  • reviewer rationale and final decision;
  • linked evidence artifact references;
  • visible scope disclaimer fields.
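
An example of what the artifact could look like, assuming TypeScript and the hypothetical identifiers used in earlier sketches; exact key names remain to be fixed by the export contract.

// Hypothetical first-slice evidence artifact; keys mirror the list above.
const exampleEvidenceArtifact = {
  export: {
    generatedAt: "2024-01-01T00:00:00Z",
    scenarioId: "golden-change-request-01",
    seedVersion: "0.1.0",
    queryRefs: ["requirement-trace-chain", "change-request-impact-view"],
  },
  requestSummary: { requestId: "reviewrequest-0001", title: "Sample change" },
  traceResults: [],   // named trace query outputs for the request
  policyRuns: [
    { policyName: "policy-naming-completeness", trigger: "manual", result: "fail" },
  ],
  decision: { outcome: "approve", rationale: "Sample rationale", actor: "reviewer" },
  evidenceLinks: [],  // references to linked evidence artifacts
  disclaimer: "Illustrative sample output; environment-scoped, non-production.",
};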

Primary outputs

  • report definitions;
  • JSON export schema or contract;
  • example evidence artifact;
  • metadata standards for generated outputs.

Dependencies

  • WP 4, WP 5, and WP 6.

Completion signals

  • the evidence artifact can be reproduced from documented setup;
  • reports remain explainable through referenced query names;
  • exported JSON is stable enough for review and fixture-based checks.

WP 9 Quality validation and release readiness

Purpose
Prove the sample is reproducible, credible, and accurately represented.

Includes

  • smoke tests for setup and seeded scenario;
  • scenario tests for submission, review, findings, decision, and export;
  • query output shape checks;
  • policy trigger behavior checks for auto and manual runs;
  • audit metadata completeness checks;
  • JSON artifact contract checks (a minimal check is sketched below);
  • manual accessibility review of core screens;
  • repository hygiene and secret-free checks;
  • docs and UI disclaimer review.
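
A minimal contract-check sketch using Node's built-in test runner; the fixture path and key names are assumptions carried over from the WP 8 sketch.

// Hypothetical fixture-based contract check for the evidence artifact.
import { test } from "node:test";
import assert from "node:assert/strict";
import { readFileSync } from "node:fs";

test("evidence artifact matches the documented contract", () => {
  const artifact = JSON.parse(
    readFileSync("fixtures/evidence-artifact.json", "utf8"),
  );

  // Required export metadata named in WP 8.
  for (const key of ["generatedAt", "scenarioId", "seedVersion", "queryRefs"]) {
    assert.ok(key in artifact.export, `missing export metadata: ${key}`);
  }

  // Every policy run must record its trigger source.
  for (const run of artifact.policyRuns) {
    assert.ok(["automatic", "manual"].includes(run.trigger));
  }

  // The non-production disclaimer must be visible in the output.
  assert.match(artifact.disclaimer, /illustrative/i);
});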

Primary outputs

  • smoke test suite;
  • scenario validation notes;
  • accessibility review notes;
  • JSON export fixture or contract test;
  • release checklist;
  • known limitations list.

Dependencies

  • all prior work packets.

Completion signals

  • fresh setup reproduces the golden scenario outputs;
  • core evidence expected by the compliance matrix exists;
  • public-facing materials clearly describe the sample boundary;
  • the JSON evidence artifact matches the documented contract.

5. Suggested Lane Ownership Matrix

Work Packet | Domain and graph | Query and policy | Workflow and web UI | Seed and reproducibility | Evidence and reporting | Quality and compliance
WP 1 Foundation alignment | Support | Support | Support | Support | Support | Lead
WP 2 Graph domain model | Lead | Support | Support | Support | Support | Review
WP 3 Seed scenario | Support | Support | Support | Lead | Support | Review
WP 4 Named trace queries | Support | Lead | Support | Support | Support | Review
WP 5 Policy library | Support | Lead | Support | Support | Support | Review
WP 6 Workflow services | Support | Support | Lead | Support | Support | Review
WP 7 Minimal web UI | Support | Support | Lead | Support | Support | Review
WP 8 Reporting and export | Support | Support | Support | Support | Lead | Review
WP 9 Validation and readiness | Support | Support | Support | Support | Support | Lead

6. Validation Checkpoints

Checkpoint A Scenario and model credibility

When
After WP 1 through early WP 3.

Review focus

  • does the golden scenario reflect the original project intent;
  • are core entities and relationships sufficient for end-to-end traceability;
  • is the seed scenario bounded and deterministic.

Exit evidence

  • scenario definition;
  • graph object and relationship inventory;
  • seeded identifier plan.

Checkpoint B Query and policy credibility

When
After WP 4 and WP 5 initial implementation.

Review focus

  • is the initial named-query catalog small but credible;
  • are trace and policy behaviors explainable through named queries;
  • is there at least one meaningful fail case;
  • can findings and decision lineage be inspected independently of the UI;
  • are automatic and manual policy runs both visible in stored records.

Exit evidence

  • query catalog with the four first-slice queries;
  • policy catalog;
  • sample outputs;
  • interpretation notes;
  • sample policy run records with trigger source.

Checkpoint C Workflow and auditability

When
After the first usable slice of WP 6 and WP 7.

Review focus

  • can a user submit, review, and decide a change request end to end;
  • are actor, time, rationale, and evidence links consistently captured;
  • is role-shaped behavior visible without overstating trust strength;
  • can a reviewer manually rerun policy checks during assessment.

Exit evidence

  • working UI flow;
  • persisted review and decision records;
  • audit metadata examples;
  • visible policy rerun behavior.

Checkpoint D Evidence reproducibility

When
After WP 8.

Review focus

  • can reports and export be reproduced from a known seed state;
  • does the JSON artifact identify scenario, seed version, and generation time;
  • are query references and policy trigger metadata preserved;
  • are evidence boundaries and disclaimer language visible.

Exit evidence

  • sample report outputs;
  • JSON evidence artifact example;
  • documented export procedure;
  • artifact contract notes.

Checkpoint E Release readiness and representation accuracy

When
After WP 9.

Review focus

  • does the sample run cleanly from documented steps;
  • do core quality checks pass;
  • do docs and UI accurately represent the non-production posture.

Exit evidence

  • smoke test results;
  • accessibility review notes;
  • README and visible scope statement;
  • known limitations and residual risks.

7. Behavior-Oriented Scenarios

These scenarios are phrased so they can later become Cucumber-style acceptance tests; a step-binding sketch for Scenario 1 follows Scenario 6.

Scenario 1 Submit a change request with linked scope

Given a seeded environment with known requirements and components
When a submitter creates a change request through the web UI
Then the system stores the request with a stable identifier
And links it to the selected affected requirement or scope objects
And records actor and time metadata
And runs configured automatic policy checks for submission events

Scenario 2 Review trace context for a change request

Given a submitted change request in the seeded scenario
When a reviewer opens the assessment view
Then the system shows linked source, requirement, component, and test context
And the trace results come from named queries with stable identities

Scenario 3 Run policy checks and record findings

Given a review request with affected scope
When a reviewer runs policy evaluation from the assessment view
Then the system returns pass and fail results for the configured policy catalog
And stores findings linked to both policy name and affected entities
And records that the policy run was manually triggered during review

Scenario 4 Capture a review decision with audit history

Given a reviewer has reviewed trace and policy results
When the reviewer records rationale and a final decision
Then the system preserves actor, time, rationale, and evidence links
And stores the decision in append-oriented history rather than destructive overwrite

Scenario 5 Export a bounded JSON evidence artifact

Given a completed change request assessment
When the reviewer requests an export
Then the system generates a JSON artifact containing trace outputs, findings, decision rationale, and metadata
And includes generation time, scenario id, and seed version
And identifies referenced query names and policy trigger source
And labels the output as illustrative and environment-scoped

Scenario 6 Reproduce the golden scenario from clean setup

Given a fresh local environment
When an operator runs the documented setup and seed steps
Then the golden scenario data loads with stable identifiers
And the named queries return expected output shapes
And the UI flow and export can be exercised without manual data repair
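
As an illustration of how these could bind to executable tests, a cucumber-js step sketch for Scenario 1 follows; the world shape and helper names (resetAndSeed, submitRequest) are hypothetical.

// Hypothetical cucumber-js bindings for Scenario 1.
import { Given, When, Then } from "@cucumber/cucumber";
import assert from "node:assert/strict";

Given("a seeded environment with known requirements and components", async function () {
  await this.app.resetAndSeed("golden-change-request-01"); // assumed helper
});

When("a submitter creates a change request through the web UI", async function () {
  this.result = await this.app.submitRequest({
    title: "Sample change",
    affectedEntityIds: ["requirement-0001"],
    actor: "submitter",
  });
});

Then("the system stores the request with a stable identifier", function () {
  assert.match(this.result.requestId, /^reviewrequest-\d+$/);
});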

8. Dependency and Parallelization Guidance

8.1 Strict sequence items

The following should happen in order:

  1. WP 1 before all other major work;
  2. WP 2 before durable query, workflow, and export implementation;
  3. WP 3 before validation of reproducibility and golden-scenario outputs;
  4. WP 8 after stable query and workflow behaviors exist;
  5. WP 9 after first end-to-end slice is usable.

8.2 Parallel-capable items

The following can proceed in parallel once foundations exist:

  • WP 4 and WP 5 can run together after WP 2 and WP 3 are sufficiently stable;
  • WP 6 can start once the graph model is stable enough even if some query details are still maturing;
  • WP 7 can begin with stubbed data contracts while WP 4 and WP 5 finalize real outputs;
  • quality check design in WP 9 can start early, even if execution waits for implemented slices.

8.3 Change-risk hotspots

Extra coordination is needed where:

  • entity naming changes would break seeded data, queries, and reports;
  • policy semantics change expected golden-scenario outcomes;
  • policy trigger rules change workflow and evidence expectations;
  • export schema changes would invalidate evidence examples and contract tests;
  • UI wording or behavior could overstate role strength or production posture.

9. Release Increments

Increment 1 Backbone and frozen scenario

Scope:

  • WP 1
  • WP 2
  • essential WP 3

Success target:

  • the graph model and seed data can represent the entire golden scenario even if no UI exists yet.

Increment 2 Explainable trace and policy layer

Scope:

  • WP 4
  • WP 5

Success target:

  • the four-query starter catalog and focused policy checks produce credible outputs for the frozen scenario.

Increment 3 Usable end-to-end assessment flow

Scope:

  • WP 6
  • WP 7

Success target:

  • a user can submit, review, rerun policy checks, and decide a seeded change request through the minimal web UI.

Increment 4 Evidence package and reporting

Scope:

  • WP 8

Success target:

  • the system can produce a stable bounded JSON evidence artifact from known graph state.

Increment 5 Readiness and representation review

Scope:

  • WP 9

Success target:

  • the sample is reproducible, reviewable, and honestly represented.

10. Definition of Done for the Stage

This decomposition should be considered implementation-ready when downstream teams can agree that:

  • the golden scenario is fixed enough to build against;
  • every required compliance evidence item maps to at least one work packet;
  • lane owners know their primary packets and dependencies;
  • the first-slice named query catalog is explicit and bounded;
  • mixed policy execution behavior is explicit enough to implement and test;
  • the JSON evidence artifact is concrete enough to define a stable contract;
  • validation checkpoints are explicit and ordered;
  • at least one end-to-end scenario can later be translated directly into executable acceptance tests.

11. Open Coordination Points

These areas should be resolved early because they affect decomposition boundaries:

  • the exact minimum web screens to lock into the first implementation slice;
  • how lightweight role simulation should appear in the UI;
  • whether the JSON artifact should be a single document only or allow referenced companion files later;
  • which disclaimer text must be visible in-product versus documentation only.
