Next Steps, Gaps, and User Education

This project is strong on methodology and templates. The highest leverage now is execution clarity: faster onboarding, fewer unknowns during implementation, and tighter operator-specific guidance.

Key findings:

  1. Operator-specific execution detail is the largest gap. Close open placeholders in the runbooks first.
  2. Unknowns need active tracking. Keep RESEARCH_NEEDED.md current with status and source evidence.
  3. User education needs layered onboarding and troubleshooting. Implement the immediate onboarding and recovery items in this guide.

Immediate:

  1. Complete operator playbooks with verified commands
  2. Turn placeholders into validated snippets
  3. Add a “First Run” path from install to first shipped artifact
  4. Document failure recovery paths (plan fails gate, tests fail, deployment fails)

Near-term:

  1. Add architecture variants (single-service, API + worker, event-driven)
  2. Add scale profiles (solo builder, two-engineer team, multi-agent execution)
  3. Add validation bundles per phase (charter lint, plan gate checks, task consistency checks)

Longer-term:

  1. Add vertical examples (mobile, data pipeline, internal tooling)
  2. Add change-management workflow for requirement drift
  3. Publish operator compatibility matrix with tested versions and known limitations
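One of the roadmap items above, per-phase validation bundles, lends itself to a concrete illustration. The following is a minimal sketch of a charter lint, where the required section names (`Goal`, `Scope`, `Constraints`, `Success Criteria`) and the placeholder markers it flags are assumptions, not a defined ACE specification:

```python
# Hypothetical charter lint: flags missing sections and unresolved
# placeholders in a charter document. Section names are assumed.
import re

REQUIRED_SECTIONS = ["Goal", "Scope", "Constraints", "Success Criteria"]

def lint_charter(text: str) -> list[str]:
    """Return a list of human-readable problems found in the charter."""
    problems = []
    for section in REQUIRED_SECTIONS:
        # Expect each section as a markdown heading, e.g. "## Goal".
        if not re.search(rf"^#+\s*{re.escape(section)}\b", text, re.MULTILINE):
            problems.append(f"missing section: {section}")
    # Unresolved placeholders like TODO, TBD, or <fill-in> block the gate.
    for match in re.finditer(r"TODO|TBD|<[^>]+>", text):
        problems.append(f"unresolved placeholder: {match.group(0)}")
    return problems
```

A lint of this shape can run as a pre-gate check, so a charter never advances to planning with missing sections or open placeholders.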

Gap 1: Operator-Specific End-to-End Coverage

General methodology is clear, but concrete execution differs by agent environment.

Impact: Users stall at setup and orchestration details.

Fix: Provide tested playbooks for each operator with:

  • exact prerequisites
  • exact commands
  • expected outputs
  • failure handling
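The four playbook elements above can be captured in a single step-runner pattern. This is a sketch only; the command, expected output, and recovery hint are illustrative placeholders, not tested operator commands:

```python
# One playbook step: run a command, compare its output against the
# documented expectation, and surface the recovery hint on failure.
import subprocess

def run_step(command: list[str], expected_substring: str, recovery_hint: str) -> bool:
    """Execute one runbook step; print the recovery hint if it fails."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode == 0 and expected_substring in result.stdout:
        return True
    print(f"Step failed. Recovery: {recovery_hint}")
    return False
```

A playbook then becomes an ordered list of such steps, each carrying its own expected output and failure handling rather than leaving them implicit.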

Gap 2: Untracked Open Questions

Open questions are spread across docs and chats.

Impact: Knowledge gaps persist and recur.

Fix: Maintain RESEARCH_NEEDED.md as a live backlog with status + source evidence.
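To keep that backlog machine-checkable, entries need a consistent shape. The entry format below (`- [open|resolved] question (source: evidence)`) is an assumed convention, not one the project has defined:

```python
# Parse RESEARCH_NEEDED.md entries of an assumed form:
#   - [open] Question text (source: link-or-doc)
#   - [resolved] Question text (source: link-or-doc)
import re

ENTRY = re.compile(r"^-\s*\[(open|resolved)\]\s*(.+?)\s*\(source:\s*(.+?)\)\s*$")

def open_items(markdown: str) -> list[str]:
    """Return the questions still marked open, so gaps stay visible."""
    items = []
    for line in markdown.splitlines():
        m = ENTRY.match(line.strip())
        if m and m.group(1) == "open":
            items.append(m.group(2))
    return items
```

Running this in CI (or before each phase gate) keeps unresolved unknowns from silently recurring.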

Gap 3: Limited “What Good Looks Like” Evidence

Current walkthrough is strong but single-domain.

Impact: Users cannot map ACE to their stack quickly.

Fix: Add multiple production-like examples and include measurable success criteria (time to first commit, test pass rate, deployment lead time).

Gap 4: Missing Failure Recovery Guidance

Users need explicit recovery when gates fail or generated tasks are inconsistent.

Impact: Momentum loss during first adoption cycle.

Fix: Add troubleshooting runbooks:

  • charter ambiguity saturation
  • phase gate rejection
  • task dependency deadlocks
  • implementation mismatch vs charter

Layered Onboarding

  1. Layer 1: quick orientation (concepts + one command chain)
  2. Layer 2: guided build (copyable end-to-end runbook)
  3. Layer 3: advanced operations (parallel agents, governance, scale)

For each phase, show:

  • input artifact
  • command/operator action
  • output artifact
  • review checklist
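One way to make each phase's contract explicit in documentation (or tooling) is a small record per phase. The phase name, file names, and checklist items below are illustrative placeholders:

```python
# A per-phase contract: what goes in, what action runs, what comes out,
# and what reviewers check. All field values here are examples.
from dataclasses import dataclass, field

@dataclass
class PhaseContract:
    name: str
    input_artifact: str
    action: str  # command or operator action
    output_artifact: str
    review_checklist: list[str] = field(default_factory=list)

PLAN_PHASE = PhaseContract(
    name="plan",
    input_artifact="charter.md",
    action="ask the operator to draft plan.md from charter.md",
    output_artifact="plan.md",
    review_checklist=[
        "every charter goal maps to a plan item",
        "no unresolved placeholders",
    ],
)
```

Rendering one such record per phase gives the documentation a uniform input → action → output → review rhythm.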

Include frequent checkpoints:

  • “You should now have <artifact>.”
  • “Run <verification command>.”
  • “If it fails, do <recovery step>.”
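The three-part checkpoint above can be sketched as one helper that checks for the expected artifact, runs the verification command, and prints the recovery step on failure. Artifact paths and commands are placeholders, not real ACE tooling:

```python
# Checkpoint helper: artifact present? verification passes? If not,
# print the documented recovery step. All arguments are placeholders.
import pathlib
import subprocess

def checkpoint(artifact: str, verify_cmd: list[str], recovery: str) -> bool:
    """Return True when the artifact exists and verification succeeds."""
    if not pathlib.Path(artifact).exists():
        print(f"Missing artifact {artifact}. If it fails, do: {recovery}")
        return False
    ok = subprocess.run(verify_cmd, capture_output=True).returncode == 0
    if not ok:
        print(f"Verification failed. If it fails, do: {recovery}")
    return ok
```

Embedding one such checkpoint after every runbook step turns silent drift into an immediate, recoverable stop.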

Every operator playbook should include:

  • tested version
  • known limitations
  • compatibility notes

Definition of Done for Documentation Quality

Documentation is “operationally complete” when a new user can:

  1. Install tools and initialize a project without external help.
  2. Run charter -> plan -> tasks -> implement once end-to-end.
  3. Resolve one controlled failure using documented recovery steps.
  4. Deploy and verify a live artifact.