LORENZA · INTAKE INSTRUMENT v2.1

Lorenza.
The Intent Brief
for AI applications.

A structured intake instrument from the Lopettia AI Practice. Captures human intent, constraints, stakeholders, and the architectural realities most stakeholders forget to mention until the project is in flight. Hands off a complete brief to the agent fleet — or to your engineering team — with a measurable readiness score.

Humans set intent and govern gates · Agents execute · Operations close the loop.
FOR · Sponsors, Architects, PMs
OUTPUT · 3 deliverables + JSON
SECTIONS · 13
EST. TIME · 60–90 min
Stage 01 — Strategy
Completion 0%
01.
Discovery · Phase One

Strategic Context & Business Intent

Why this exists. The questions every sponsor can answer — and that every developer wishes had been asked first.

0% complete
If a sponsor cannot describe the business problem in two sentences without using the word "AI," the project is not yet ready to scope. — Section premise
1.01

A short codename for the build. Final branding can come later.

1.02

Describe what hurts today, in two or three sentences, without mentioning AI, ML, or any technology. If you cannot, the problem is not yet defined.

1.03

What changed — internally or in the market — that makes this urgent this quarter rather than next year?

1.04

Each row: the metric, current baseline, target at 12 months, how it's measured. One number per row.

+ Add KPI
1.05

Who owns what. Naming the executive sponsor, technical owner, and compliance lead now prevents weeks of routing later.

+ Add Stakeholder
1.06

How does the organization feel about building proprietary AI versus integrating commercial tools?

1.07

Order of magnitude is fine. Includes infrastructure, licenses, services, and internal time.

1.08

Each row: type (assumption or constraint), description, impact if invalidated. Surfacing these now is dramatically cheaper than discovering them mid-build.

+ Add Assumption / Constraint
02.
Discovery · Phase Two

Users, Personas & Use Cases

Who actually touches this thing, in what state of mind, and how often.

0% complete
2.01

Each row: persona name, role and tenure, technical fluency, working environment, primary motivation.

+ Add Persona
2.02

At launch versus 12 months out. This drives almost every infrastructure decision.

2.03

Each row: use case name, description, pre-condition, post-condition. Use cases drive scope; vague ambitions sink it.

+ Add Use Case
2.04

Things people will ask for that the system explicitly will not do. Naming these now prevents scope creep later.

2.05

Particularly relevant for LATAM deployments where Spanish, Portuguese, and English often coexist.

2.06

Where the user encounters the system.

2.07

Beyond WCAG, particularly relevant for AI: screen-reader handling of streamed responses, keyboard navigation through suggestions, color-independent confidence signals.

03.
Discovery · Phase Three

AI Capability & Model Strategy

What the AI is actually doing under the hood. The first place vague briefs collide with engineering reality.

0% complete
3.01

Most apps blend two or three; identify all that apply.

3.02

Has the organization committed to a provider, or is this open?

3.03

What the system must explicitly NOT do, even if technically possible. Often a regulatory, ethical, or brand constraint. Naming these is as important as naming what it will do.

3.04

How catastrophic is a confident-sounding wrong answer? This determines guardrails, evaluation rigor, and whether human review is mandatory.

3.05

Must the same input always produce the same output? Critical for audit, reproducibility, regulatory contexts.
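
Where the answer is yes, the practical levers are a pinned model version, greedy decoding, and a fixed seed. A minimal TypeScript sketch, with hypothetical names, of what such a configuration might capture; note that most hosted models still do not guarantee bit-identical outputs across runs:

```ts
// Hypothetical config shape; names are illustrative, not a provider API.
interface InferenceConfig {
  model: string;        // pin an exact model version, never a floating alias
  temperature: number;  // 0 = greedy decoding
  seed?: number;        // honored by some providers, best-effort only
}

const reproducibleConfig: InferenceConfig = {
  model: "provider/model-2024-06-01", // placeholder version string
  temperature: 0,
  seed: 42,
};
```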

3.06

Most enterprise apps don't need it. Confirming early prevents architectural over-investment.

3.07

How will quality be measured before launch and monitored afterward?

3.08

Especially critical for any system affecting people's access to services, credit, employment, or care.

04.
Discovery · Phase Four

Data, Knowledge & Sources

The fuel. Most AI projects fail here, not at the model layer.

0% complete
"We have all the data" is the single most expensive sentence in enterprise AI. The data exists; what's missing is access, structure, freshness, and rights. — Recurring observation
4.01

Each row: system name, data type, owner / steward, access method. Be specific.

+ Add Data Source
4.02

All formats the system will handle.

4.03

Order of magnitude: documents, rows, GB, hours of audio, etc.

4.04

How recent must the data be for answers to be useful?

4.05

Honest assessment. The right answer is rarely "excellent."

4.06

What rules must input data adhere to before it enters the AI pipeline?

4.07

What's the most sensitive data the system will touch?

4.08

Where is the data legally required to live? Particularly relevant for LATAM (LGPD in Brazil, Habeas Data in Colombia, LFPDPPP in Mexico), the EU (GDPR), and regulated sectors globally.

4.09

Has anyone confirmed the organization has the right to feed this data into models — including third-party providers?

05.
Discovery · Phase Five

Security, Identity & Sessions

Who gets in, who sees what, and how the system handles state across time.

0% complete
5.01

How users prove who they are.

5.02

Often mandated separately from SSO method.

5.03

How permissions are structured. Affects every read of data.

5.04

If the system serves multiple business units, customers, or organizations.

5.05

How user state, conversation context, and authenticated sessions persist.

5.06

In transit, at rest, and for sensitive fields.

5.07

Where API keys, model credentials, and database secrets live.

5.08

AI-specific attack surface. What's the strategy for hostile users trying to manipulate the model? Often missed entirely in traditional security reviews.
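
For illustration only, one cheap layer of a defense-in-depth posture is pattern screening on input; the names below are hypothetical, and real deployments pair this with privilege separation, output filtering, and human review gates:

```ts
// Illustrative only: crude pattern screening catches crude attacks.
// Pair with privilege separation, output filtering, and review gates.
const SUSPICIOUS_PATTERNS: RegExp[] = [
  /ignore (all )?previous instructions/i,
  /reveal (your )?system prompt/i,
];

function flagForReview(userInput: string): boolean {
  return SUSPICIOUS_PATTERNS.some((pattern) => pattern.test(userInput));
}
```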

5.09

Are you allowed to send personal data to a third-party model? If not, what's the redaction strategy?

06.
Discovery · Phase Six

Historical Data, Memory & Retention

What the system remembers, for how long, and who can see it later.

0% complete
6.01

How long full prompt/response transcripts are kept.

6.02

Should the system learn about each user across sessions, or treat every interaction as fresh?

6.03

For regulated environments, every AI decision may need to be reconstructable years later: input, prompt version, retrieved context, raw model output, post-processing, final user-facing output.
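
As a sketch of what "reconstructable" can mean in storage terms, assuming illustrative field names rather than a prescribed schema:

```ts
// Illustrative field names, not a prescribed schema.
interface AuditRecord {
  requestId: string;
  timestamp: string;             // ISO 8601
  input: string;                 // what the user submitted
  promptVersion: string;         // exact prompt template version
  retrievedContext: string[];    // documents injected at inference time
  modelVersion: string;
  rawModelOutput: string;        // before any post-processing
  postProcessingSteps: string[];
  finalOutput: string;           // what the user actually saw
}
```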

6.04

GDPR, LGPD, and similar regimes require deletion of personal data on request — including from prompts, embeddings, and logs.

6.05

RPO (recovery point objective) and RTO (recovery time objective). For example, an RPO of one hour means at most one hour of data may be lost; an RTO of four hours means service must be restored within four hours of an outage.

07.
Discovery · Phase Seven

Integrations, Imports & Exports

The plumbing. AI applications live or die on how cleanly they connect to existing systems.

0% complete
7.01

Each row: system, protocol, frequency, owner. Systems the AI app must read from.

+ Add Inbound Integration
7.02

Each row: system, action triggered, format, owner. Systems the AI app writes to or triggers actions in.

+ Add Outbound Integration
7.03

Formats, volumes, and frequency for one-time or periodic data loads.

7.04

What users and downstream systems need to extract. Formats, scheduling, access control.

7.05

Will this system expose APIs for others to consume?

7.06

Real-time push of AI events to external systems.

7.07

How should integration failures be handled? Retry strategy, fallback, rollback, notification.
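
A minimal TypeScript sketch of one common answer, retry with exponential backoff and a fallback path; the function names and retry budget are placeholders to set per integration:

```ts
// Placeholder names; the retry budget and backoff base are per-integration.
async function withRetry<T>(
  callIntegration: () => Promise<T>,
  fallback: () => Promise<T>,
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await callIntegration();
    } catch {
      if (attempt === maxAttempts) break;
      // Exponential backoff: 1s, 2s, 4s, ...
      await new Promise((r) => setTimeout(r, 1000 * 2 ** (attempt - 1)));
    }
  }
  return fallback(); // e.g., queue for later delivery and notify the owner
}
```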

08.
Discovery · Phase Eight

Scalability, Performance & Cost

The economics of AI at scale. Token costs are a recurring P&L line, not a one-time bill.

0% complete
8.01

Per day at launch and at year 1. Distinguish interactive (user-initiated) from batch.

8.02

Simultaneous users at the busiest moment. Drives infrastructure sizing.

8.03

User-perceived response time tolerance. AI inference is slower than traditional apps; expectations must be set early.

8.04

Token-by-token streaming dramatically improves perceived latency. Not all integrations support it.
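
A sketch of the consuming side, assuming the model client exposes an async iterable of tokens, as many SDKs do under varying names. First-token latency, not total latency, is what the user feels:

```ts
// Assumes the model client yields tokens as an async iterable.
async function renderStream(
  tokens: AsyncIterable<string>,
  onToken: (t: string) => void,
): Promise<void> {
  for await (const token of tokens) {
    onToken(token); // append to the UI as each token arrives
  }
}
```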

8.05

Per-user, per-month or per-transaction limit. Forgetting this is how enterprise AI bills surprise CFOs.
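
A trivial budget-gate sketch; the limit and usage accounting below are placeholder assumptions, not recommended figures:

```ts
// Illustrative limit and accounting; replace with real usage metering.
const MONTHLY_TOKEN_BUDGET_PER_USER = 500_000;

function withinBudget(tokensUsedThisMonth: number, requestEstimate: number): boolean {
  return tokensUsedThisMonth + requestEstimate <= MONTHLY_TOKEN_BUDGET_PER_USER;
}
```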

8.06

Semantic cache, prompt cache, embedding cache — each cuts cost differently.
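
A semantic cache, sketched minimally: reuse a prior answer when a new query's embedding lands close enough to a cached one. The similarity threshold is an assumption to tune per use case:

```ts
// The 0.95 threshold is an assumption to tune per use case.
interface CacheEntry {
  embedding: number[];
  answer: string;
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function semanticLookup(queryEmbedding: number[], cache: CacheEntry[]): string | null {
  const THRESHOLD = 0.95;
  const hit = cache.find(
    (entry) => cosineSimilarity(queryEmbedding, entry.embedding) >= THRESHOLD,
  );
  return hit ? hit.answer : null; // null = miss, call the model
}
```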

8.07

High-level shape of the system. Drives team structure, deployment complexity, and cost profile.

8.08

Where users access from. Drives latency, residency, and provider region selection — particularly for multi-country LATAM deployments.

09.
Discovery · Phase Nine

Governance, Compliance & Risk

The questions auditors will ask in 18 months. Answering them now is dramatically cheaper.

0% complete
9.01

Every framework that touches the use case. Be exhaustive.

9.02

Classified per the EU AI Act's risk tiers, a useful framing even outside Europe.

9.03

Where human review is mandatory before AI output reaches the end recipient.

9.04

When the AI says "no," can someone explain why?

9.05

How users are informed about AI involvement and consent to data processing.

9.06

Approval process for new AI providers, model versions, and prompt changes in production.

9.07

Each row: risk description, likelihood, impact, mitigation, owner. Honest naming halves the risk.

+ Add Risk
10.
Discovery · Phase Ten

Operations, Observability & Support

Day 2 reality. Most projects under-invest here and pay for it within 90 days of launch.

0% complete
10.01

Availability, latency p95/p99, error rate.

10.02

Where the system runs, including AI inference.

10.03

Logging, metrics, tracing, AI-specific telemetry (token usage, hallucination flags, eval scores).
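
One possible shape for an AI-specific telemetry event, sitting alongside standard logs, metrics, and traces; field names are illustrative:

```ts
// Field names are illustrative, not a prescribed schema.
interface InferenceEvent {
  traceId: string;
  model: string;
  promptTokens: number;
  completionTokens: number;
  latencyMs: number;
  evalScore?: number;             // attached eval result, if any
  hallucinationFlagged?: boolean; // e.g., a groundedness check failed
}
```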

10.04

What gets paged, who gets paged, on what conditions. AI systems need new alert types beyond traditional infra.

10.05

How code, prompts, and model configurations are tested and deployed.

10.06

Who handles tier-1, tier-2, tier-3 issues. AI issues often require new triage skills.

10.07

In-product way for users to flag bad outputs. Fuel for continuous improvement.

10.08

Strategy for deploying new prompts, model versions, and features without breaking trust.

10.09

What happens when the model goes off the rails — degrades, hallucinates badly, gets jailbroken.

11.
Discovery · Phase Eleven

User Experience & Trust

How the system behaves, communicates uncertainty, and earns the right to be relied upon.

0% complete
11.01

How the AI presents itself. Formal, friendly, neutral, branded.

11.02

When the AI surfaces information, how does the user verify the source?

11.03

How the system signals when it's uncertain.

11.04

Required by emerging regulation in many jurisdictions. Always tell the user when they're interacting with AI.

11.05

How the system communicates when it can't help. Often the difference between trusted and abandoned.

12.
Discovery · Phase Twelve

Timeline, Team & Delivery

The closing act. What's known, what's unknown, what could derail it.

0% complete
12.01

Each row: milestone, target date, status (committed / aspirational / blocked).

+ Add Milestone
12.02

Roles assigned and gaps. AI builds need product, engineering, data, ML, security, legal, change management.

12.03

Other teams, vendors, decisions, or events the project waits on.

12.04

What the business commits to internal users or external customers.

12.05

Anything this discovery surfaced that needs a separate working session.

12.06

Anything else the engineering team should know going into this build.

13.
Discovery · Output

Readiness Assessment & Deliverables

A scored view of how ready this initiative is to enter architecture and delivery — and the documents that follow from this discovery.

Summary
0/100
Discovery Readiness Score
Business Clarity
0%
Data Readiness
0%
AI Rigor
0%
Security Posture
0%
Operational Readiness
0%
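
As a sketch of how the 0–100 score could roll up from the five dimensions above; the equal weighting here is an assumption, not the instrument's published formula:

```ts
// Equal weights are an assumption, not the instrument's published formula.
interface DimensionScores {
  businessClarity: number;       // each dimension scored 0–100
  dataReadiness: number;
  aiRigor: number;
  securityPosture: number;
  operationalReadiness: number;
}

function readinessScore(d: DimensionScores): number {
  const values = Object.values(d);
  return Math.round(values.reduce((sum, v) => sum + v, 0) / values.length);
}
```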

Critical Gaps & Recommendations

Deliverables

Four outputs ready for distribution. The full specification goes to the engineering team. The executive summary goes to the sponsor. The JSON export feeds downstream tooling. The PowerPoint presentations carry the message to four different audiences — and a partner template can be plugged in as the brand carrier.

Document 1 of 4

Full Specification

Comprehensive Markdown document covering all 13 sections, formatted for engineering scoping and architecture review.

Document 2 of 4

Executive Summary

One-page brief for the sponsor. Problem, KPIs, budget, top risks, readiness score, next steps.

Document 3 of 4

Structured Data (JSON)

Machine-readable export. Feeds project management tools, downstream proposal generation, or CRM logging.
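
A hedged guess at the export's top-level shape, inferred from the section structure above; the instrument's actual schema governs:

```ts
// A guess at the top-level shape; the actual schema ships with the tool.
interface DiscoveryExport {
  version: string;           // e.g., "2.1"
  projectCodename: string;
  readinessScore: number;    // 0–100
  sections: Array<{
    id: string;              // "01" through "13"
    title: string;
    completion: number;      // 0–1
    answers: Record<string, unknown>;
  }>;
}
```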

Document 4 of 4

PowerPoint Presentations

Four audience-ready decks: executive summary, sales enablement, end-user briefing, training. Optionally upload a .pptx per audience as a partner-branded template — its slide master, theme, and per-master logo carry the brand.

