01.
Discovery · Phase One
Strategic Context & Business Intent
Why this exists. The questions every sponsor should be able to answer, and that every developer wishes had been asked first.
If a sponsor cannot describe the business problem in two sentences without using the word "AI," the project is not yet ready to scope.
— Section premise
1.02
The business problem in plain language [required · critical]
Describe what hurts today, in two or three sentences, without mentioning AI, ML, or any technology. If you cannot, the problem is not yet defined.
1.03
Why now? [strategic]
What changed — internally or in the market — that makes this urgent this quarter rather than next year?
1.04
Success criteria & KPIs [required · new]
Each row: the metric, current baseline, target at 12 months, how it's measured. One number per row.
1.05
Key stakeholders [required · new]
Who owns what. Naming the executive sponsor, technical owner, and compliance lead now prevents weeks of routing later.
1.06
Build vs. buy posture
How does the organization feel about building proprietary AI versus integrating commercial tools?
1.07
Budget envelope (Year 1, all-in) [critical]
Order of magnitude is fine. Includes infrastructure, licenses, services, and internal time.
1.08
Assumptions & constraints [new]
Each row: type (assumption or constraint), description, impact if invalidated. Surfacing these now is dramatically cheaper than discovering them mid-build.
02.
Discovery · Phase Two
Users, Personas & Use Cases
Who actually touches this thing, in what state of mind, and how often.
2.01
User personas [required · new]
Each row: persona name, role and tenure, technical fluency, working environment, primary motivation.
2.03
Top use cases, ranked [required · new]
Each row: use case name, description, pre-condition, post-condition. Use cases drive scope; vague ambitions sink it.
2.04
Out-of-scope use cases & exclusions [new]
Things people will ask for that the system explicitly will not do. Naming these now prevents scope creep later.
2.05
Languages supported at launch
Particularly relevant for LATAM deployments where Spanish, Portuguese, and English often coexist.
2.06
Channels of access
Where the user encounters the system.
2.07
Accessibility requirements [new]
Beyond WCAG, particularly relevant for AI: screen-reader handling of streamed responses, keyboard navigation through suggestions, color-independent confidence signals.
03.
Discovery · Phase Three
AI Capability & Model Strategy
What the AI is actually doing under the hood. The first place vague briefs collide with engineering reality.
3.01
Primary AI capabilities [required · technical]
Most apps blend two or three; identify all that apply.
Content generation: drafts, summaries, copy
RAG / retrieval Q&A: answers from company knowledge
Classification / scoring: categorize, prioritize, score
Data extraction: structure from unstructured
Agentic workflows: multi-step, tool-using
Forecasting / prediction: time-series, demand, risk
Computer vision: image, document, video
Speech / audio: transcription, voice
Anomaly detection: outliers, fraud signals
Recommendation: personalization, ranking
3.02
Model preference / mandate
Has the organization committed to a provider, or is this open?
3.03
AI features & models to EXCLUDE [new · strategic]
What the system must explicitly NOT do, even if technically possible. Often a regulatory, ethical, or brand constraint. Naming these is as important as naming what it will do.
3.04
Hallucination tolerance [critical]
How catastrophic is a confident-sounding wrong answer? This determines guardrails, evaluation rigor, and whether human review is mandatory.
3.05
Determinism requirements
Must the same input always produce the same output? Critical for audit, reproducibility, regulatory contexts.
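Where hard determinism is required, one pragmatic pattern is to pin every decoding parameter and memoize responses by a hash of the full request, so an identical request always returns the identical recorded output. A minimal sketch, assuming a hypothetical `call_fn` standing in for the provider call; parameter names like `seed` vary by provider:

```python
import hashlib
import json

# Illustrative: pin everything that can change the output. Names and values
# are placeholders, not any specific provider's API.
PINNED_PARAMS = {"temperature": 0.0, "top_p": 1.0, "seed": 42}

_memo: dict[str, str] = {}

def request_key(model: str, prompt: str, params: dict) -> str:
    """Stable key over everything that can change the output."""
    payload = json.dumps({"model": model, "prompt": prompt, "params": params},
                         sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def deterministic_call(model: str, prompt: str, call_fn) -> str:
    """Return the memoized answer if this exact request was seen before."""
    key = request_key(model, prompt, PINNED_PARAMS)
    if key not in _memo:
        _memo[key] = call_fn(model, prompt, PINNED_PARAMS)
    return _memo[key]
```

Memoization gives repeatability even when the provider itself is not fully deterministic; the trade-off is that the memo store becomes part of the audit surface.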
3.06
Fine-tuning or custom training expected?
Most enterprise apps don't need it. Confirming early prevents architectural over-investment.
3.07
Evaluation strategy
How will quality be measured before launch and monitored afterward?
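One common starting point is a golden set: inputs paired with checks a correct answer must satisfy, scored as a pass rate before every release. A minimal sketch; the cases are illustrative and the keyword-containment check stands in for richer graders:

```python
# Illustrative golden set: each case pairs an input with the substrings a
# correct answer must contain. These examples are invented for the sketch.
GOLDEN_SET = [
    {"input": "What is our refund window?", "must_contain": ["30 days"]},
    {"input": "Who approves expenses?", "must_contain": ["manager"]},
]

def evaluate(answer_fn, cases) -> float:
    """Run every case through answer_fn; return the fraction that pass."""
    passed = 0
    for case in cases:
        answer = answer_fn(case["input"]).lower()
        if all(token.lower() in answer for token in case["must_contain"]):
            passed += 1
    return passed / len(cases)
```

The same harness can run post-launch on sampled production traffic, which is what turns "evaluation before launch" into "monitoring afterward."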
3.08
Bias mitigation strategy [strategic]
Especially critical for any system affecting people's access to services, credit, employment, or care.
04.
Discovery · Phase Four
Data, Knowledge & Sources
The fuel. Most AI projects fail here, not at the model layer.
"We have all the data" is the single most expensive sentence in enterprise AI. The data exists; what's missing is access, structure, freshness, and rights.
— Recurring observation
4.01
Data sources & owners [required · new]
Each row: system name, data type, owner / steward, access method. Be specific.
4.02
Data types processed
All formats the system will handle.
4.04
Data freshness requirement [technical]
How recent must the data be for answers to be useful?
4.05
Data quality reality check [strategic]
Honest assessment. The right answer is rarely "excellent."
4.06
Data validation rules [new]
What rules must input data adhere to before it enters the AI pipeline?
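A minimal sketch of what such rules could look like as code. The field names and limits are illustrative, not a schema from this document:

```python
# Illustrative gate for records entering the pipeline. Real rules would be
# driven by the schemas named in 4.01, not hard-coded constants.
REQUIRED_FIELDS = {"id", "text", "source"}
MAX_TEXT_LEN = 8000

def validate_record(record: dict) -> list[str]:
    """Return a list of violations; an empty list means the record passes."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    text = record.get("text", "")
    if not isinstance(text, str) or not text.strip():
        errors.append("text must be a non-empty string")
    elif len(text) > MAX_TEXT_LEN:
        errors.append(f"text exceeds {MAX_TEXT_LEN} characters")
    return errors
```

Returning a list of violations, rather than raising on the first one, makes rejected records debuggable in bulk loads.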
4.07
Data classification & sensitivity
What's the most sensitive data the system will touch?
4.08
Data residency & sovereignty [critical]
Where is the data legally required to live? Particularly relevant for LATAM (LGPD in Brazil, Habeas Data in Colombia, LFPDPPP in Mexico), the EU (GDPR), and regulated sectors globally.
4.09
Data licensing & rights to use for AI
Has anyone confirmed the organization has the right to feed this data into models — including third-party providers?
05.
Discovery · Phase Five
Security, Identity & Sessions
Who gets in, who sees what, and how the system handles state across time.
5.01
Authentication mechanism [required · technical]
How users prove who they are.
5.02
MFA requirement [new]
Often mandated separately from SSO method.
5.03
Authorization model
How permissions are structured. Affects every read of data.
5.04
Tenant isolation model
If the system serves multiple business units, customers, or organizations.
5.05
Session management requirements
How user state, conversation context, and authenticated sessions persist.
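One common shape for the conversation-state piece is a session store with an idle timeout. A minimal in-memory sketch; a real deployment would typically back this with Redis or a database, and the injectable clock exists only to make the expiry logic testable:

```python
import time

# Illustrative in-memory session store with idle-timeout expiry.
class SessionStore:
    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._sessions: dict[str, tuple[float, dict]] = {}

    def put(self, session_id: str, state: dict) -> None:
        self._sessions[session_id] = (self.clock(), state)

    def get(self, session_id: str):
        """Return state and refresh the idle timer; None if expired/absent."""
        entry = self._sessions.get(session_id)
        if entry is None:
            return None
        last_seen, state = entry
        if self.clock() - last_seen > self.ttl:
            del self._sessions[session_id]
            return None
        self._sessions[session_id] = (self.clock(), state)
        return state
```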
5.06
Encryption requirements
In transit, at rest, and for sensitive fields.
5.08
Prompt injection & adversarial input defense [critical]
AI-specific attack surface. What's the strategy for hostile users trying to manipulate the model? Often missed entirely in traditional security reviews.
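A first layer is often a heuristic screen over user input before it reaches the model. The sketch below is illustrative only: the patterns are examples, and a real defense would layer this with isolation of untrusted text, output filtering, and model-side guardrails.

```python
import re

# Illustrative patterns for common injection phrasings. A production list
# would be maintained, tested, and combined with non-heuristic defenses.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the|your) (system|previous) prompt",
    r"you are now (?:in )?developer mode",
    r"reveal (your|the) (system prompt|instructions)",
]

def looks_like_injection(user_text: str) -> bool:
    """Flag input matching any known injection phrasing."""
    lowered = user_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

Heuristics like this catch the casual cases; the discovery question is what happens when they miss, which is why the flag usually feeds logging and review rather than a silent block.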
5.09
PII handling at the prompt boundary [critical]
Are you allowed to send personal data to a third-party model? If not, what's the redaction strategy?
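If the answer is no, a redaction pass at the boundary is the usual fallback. A deliberately minimal sketch: real redaction needs NER and locale-aware patterns, and these two regexes only illustrate the shape of the step.

```python
import re

# Illustrative patterns only; production redaction would cover names, IDs,
# addresses, and locale-specific formats, typically via an NER pipeline.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(prompt: str) -> str:
    """Replace obvious identifiers before the prompt leaves the boundary."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt
```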
06.
Discovery · Phase Six
Historical Data, Memory & Retention
What the system remembers, for how long, and who can see it later.
6.01
Conversation history retention
How long full prompt/response transcripts are kept.
6.02
User memory / personalization
Should the system learn about each user across sessions, or treat every interaction as fresh?
6.03
Audit trail for AI decisions [critical]
For regulated environments, every AI decision may need to be reconstructable years later: input, prompt version, retrieved context, raw model output, post-processing, final user-facing output.
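A sketch of what one such record could carry, with illustrative field names; a real trail would be written to append-only storage rather than serialized ad hoc:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

# Illustrative record shape mirroring the reconstruction chain above:
# input, prompt version, retrieved context, raw output, final output.
@dataclass
class AIDecisionRecord:
    request_id: str
    user_input: str
    prompt_version: str
    retrieved_context: list[str]
    raw_model_output: str
    final_output: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)
```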
6.04
Right to deletion / right to be forgotten
GDPR, LGPD, and similar regimes require deletion of personal data on request — including from prompts, embeddings, and logs.
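The operational implication is a deletion routine that reaches every store, not just the primary database. An illustrative sketch over in-memory stand-ins for transcript, embedding, and log stores:

```python
# Illustrative: each store maps record key -> record dict; real stores would
# be databases, vector indexes, and log systems with their own delete APIs.
def delete_user_data(user_id: str, stores: dict[str, dict]) -> dict[str, int]:
    """Remove user_id's entries from each store; return counts per store."""
    deleted = {}
    for name, store in stores.items():
        keys = [k for k, v in store.items() if v.get("user_id") == user_id]
        for k in keys:
            del store[k]
        deleted[name] = len(keys)
    return deleted
```

Returning per-store counts gives the compliance team an auditable receipt for each deletion request.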
07.
Discovery · Phase Seven
Integrations, Imports & Exports
The plumbing. AI applications live or die on how cleanly they connect to existing systems.
7.01
Inbound integrations [required · new]
Each row: system, protocol, frequency, owner. Systems the AI app must read from.
7.02
Outbound integrations [new]
Each row: system, action triggered, format, owner. Systems the AI app writes to or triggers actions in.
7.03
Bulk import requirements
Formats, volumes, and frequency for one-time or periodic data loads.
7.04
Export requirements
What users and downstream systems need to extract. Formats, scheduling, access control.
7.05
API exposure
Will this system expose APIs for others to consume?
7.06
Webhooks & event streaming
Real-time push of AI events to external systems.
7.07
Integration error handling [new]
How should integration failures be handled? Retry strategy, fallback, rollback, notification.
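A common baseline is retry with exponential backoff, then a fallback before surfacing the failure. A minimal sketch; the attempt count, delays, and fallback are illustrative policy choices, and the injectable `sleep` exists for testability:

```python
import time

# Illustrative retry policy: exponential backoff, then optional fallback.
def call_with_retry(fn, attempts=3, base_delay=0.5, fallback=None,
                    sleep=time.sleep):
    last_error = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:  # real code would catch specific errors
            last_error = exc
            if attempt < attempts - 1:
                sleep(base_delay * (2 ** attempt))
    if fallback is not None:
        return fallback(last_error)
    raise last_error
```

Notification and rollback sit above this layer; the discovery question is which failures retry, which fall back, and which page a human.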
08.
Discovery · Phase Eight
Scalability, Performance & Cost
The economics of AI at scale. Token costs are a recurring P&L line, not a one-time bill.
8.03
Latency budget [technical]
User-perceived response time tolerance. AI inference is slower than traditional apps; expectations must be set early.
8.04
Streaming responses?
Token-by-token streaming dramatically improves perceived latency. Not all integrations support it.
8.06
Caching strategy
Semantic cache, prompt cache, embedding cache — each cuts cost differently.
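The simplest tier is an exact-match cache keyed by the normalized prompt; semantic caches extend the idea to embedding similarity. A minimal sketch with hit/miss counters, since the hit rate is what turns into cost savings:

```python
import hashlib

# Illustrative exact-match prompt cache. Normalization (whitespace, case)
# is the crude stand-in for what a semantic cache does with embeddings.
class PromptCache:
    def __init__(self):
        self._store: dict[str, str] = {}
        self.hits = 0
        self.misses = 0

    @staticmethod
    def _key(prompt: str) -> str:
        normalized = " ".join(prompt.split()).lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def get_or_call(self, prompt: str, call_fn) -> str:
        key = self._key(prompt)
        if key in self._store:
            self.hits += 1
        else:
            self.misses += 1
            self._store[key] = call_fn(prompt)
        return self._store[key]
```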
8.07
Architecture & scaling pattern [new · technical]
High-level shape of the system. Drives team structure, deployment complexity, and cost profile.
8.08
Geographic distribution
Where users access from. Drives latency, residency, and provider region selection — particularly for multi-country LATAM deployments.
09.
Discovery · Phase Nine
Governance, Compliance & Risk
The questions auditors will ask in 18 months. Answering them now is dramatically cheaper.
9.01
Applicable regulations [critical]
Every framework that touches the use case. Be exhaustive.
9.02
AI risk classification
Per EU AI Act framing, useful even outside Europe.
9.03
Human-in-the-loop requirements
Where human review is mandatory before AI output reaches the end recipient.
9.04
Explainability requirements
When the AI says "no," can someone explain why?
9.05
User consent & disclosure
How users are informed about AI involvement and consent to data processing.
9.06
Vendor & model governance
Approval process for new AI providers, model versions, and prompt changes in production.
9.07
Risk register [new · strategic]
Each row: risk description, likelihood, impact, mitigation, owner. Honest naming halves the risk.
10.
Discovery · Phase Ten
Operations, Observability & Support
Day 2 reality. Most projects under-invest here and pay for it within 90 days of launch.
10.02
Deployment model [new]
Where the system runs, including AI inference.
10.03
Observability stack
Logging, metrics, tracing, AI-specific telemetry (token usage, hallucination flags, eval scores).
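Token usage is the AI-specific metric most often missing. A minimal sketch of a per-feature meter; the prices in the test are made-up placeholders, not any provider's rates:

```python
from collections import defaultdict

# Illustrative per-feature token meter: cost becomes a live metric
# rather than a surprise on the monthly invoice.
class TokenMeter:
    def __init__(self, usd_per_1k_input: float, usd_per_1k_output: float):
        self.in_price = usd_per_1k_input
        self.out_price = usd_per_1k_output
        self.usage = defaultdict(lambda: {"input": 0, "output": 0})

    def record(self, feature: str, input_tokens: int, output_tokens: int):
        self.usage[feature]["input"] += input_tokens
        self.usage[feature]["output"] += output_tokens

    def cost_usd(self, feature: str) -> float:
        u = self.usage[feature]
        return (u["input"] / 1000 * self.in_price
                + u["output"] / 1000 * self.out_price)
```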
10.04
Monitoring & alerting [new]
What gets paged, who gets paged, on what conditions. AI systems need new alert types beyond traditional infra.
10.05
CI/CD & DevOps tooling [new]
How code, prompts, and model configurations are tested and deployed.
10.06
Support model
Who handles tier-1, tier-2, tier-3 issues. AI issues often require new triage skills.
10.07
User feedback mechanism
In-product way for users to flag bad outputs. Fuel for continuous improvement.
10.08
Change management & rollout [new]
Strategy for deploying new prompts, model versions, and features without breaking trust.
10.09
Incident response for AI-specific failures [critical]
What happens when the model goes off the rails — degrades, hallucinates badly, gets jailbroken.
11.
Discovery · Phase Eleven
User Experience & Trust
How the system behaves, communicates uncertainty, and earns the right to be relied upon.
11.01
Tone & persona
How the AI presents itself. Formal, friendly, neutral, branded.
11.02
Citation & source transparency
When the AI surfaces information, how does the user verify the source?
11.03
Confidence communication
How the system signals when it's uncertain.
11.05
Error messages & failure UX [new]
How the system communicates when it can't help. Often the difference between trusted and abandoned.
12.
Discovery · Phase Twelve
Timeline, Team & Delivery
The closing act. What's known, what's unknown, what could derail it.
12.01
Target dates & milestones [new]
Each row: milestone, target date, status (committed / aspirational / blocked).
12.02
Team composition & gaps
Which roles are staffed and which are missing. AI builds need product, engineering, data, ML, security, legal, and change management.
12.03
Known dependencies
Other teams, vendors, decisions, or events the project waits on.
12.04
SLA & support tier [new]
What the business commits to internal users or external customers.
12.05
Open questions for follow-up
Anything this discovery surfaced that needs a separate working session.
12.06
Notes & additional context
Anything else the engineering team should know going into this build.