If you have ever wondered why one prompt produces a useful answer and another produces something that sounds like a LinkedIn motivational post, the answer is structure. Great prompts share an anatomy: five parts. This post walks through each one, covering what it is, why it matters, and what it sounds like in a real prompt.
Why anatomy matters
When you write a prompt, you are giving the model a brief. But where a smart colleague would ask about a gap in the brief, a model silently fills it in. The wider the gap, the more it has to guess, and the closer its guess pulls toward the average of everything it has read. The "AI sounds generic" complaint is really a "my prompt has too many gaps" complaint.
A prompt generator is built around this insight: it identifies the gaps in what you wrote, asks you to close the most important ones, and assembles the result into a prompt with all five parts wired in. Whether you use one or write by hand, the anatomy is the same.
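To make the gap-finding step concrete, here is a minimal sketch. The five part names come from the anatomy below; the keyword cues are deliberately naive placeholders I made up for illustration, not how a real generator works.

```python
# Illustrative gap check: which of the five parts does a draft mention?
# The keyword cues are toy heuristics, not real detection logic.
PARTS = {
    "role": ["you are"],
    "context": ["my ", "our team", "we are"],
    "constraints": ["must", "never", "do not"],
    "output_format": ["format", "bullet", "json", "table"],
    "success_criteria": ["good if", "success"],
}

def find_gaps(draft: str) -> list[str]:
    """Return the anatomy parts a draft prompt appears to be missing."""
    text = draft.lower()
    return [part for part, cues in PARTS.items()
            if not any(cue in text for cue in cues)]

print(find_gaps("Summarize this report"))
# -> all five parts: a bare task description closes none of the gaps
```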
Part 1 — Role
The first sentence of every great prompt names who the model is pretending to be. Specifically.
Bad: "You are an AI assistant." Good: "You are a senior product manager at a Series B SaaS company who has shipped 50+ features and writes PRDs engineers actually build from."
The role does three things at once. It anchors expertise (so the model uses domain vocabulary correctly). It anchors voice (a senior PM doesn't write like a marketing intern). And it anchors judgment — when constraints conflict, the role tells the model whose values to apply.
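In practice, the role usually lives in the system message of a chat-style API. A minimal sketch of that wiring, written as plain data rather than any particular SDK; the example task is made up:

```python
# The persona anchors the system message; the task goes in the user message.
persona = (
    "You are a senior product manager at a Series B SaaS company "
    "who has shipped 50+ features and writes PRDs engineers "
    "actually build from."
)

messages = [
    {"role": "system", "content": persona},
    {"role": "user", "content": "Draft a PRD for in-app notification preferences."},
]
```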
Part 2 — Context
What does the model need to know about your specific situation that it cannot guess?
This is the part most prompts skip. The user knows the context (it's their situation), so they forget the model doesn't. Then the model produces an answer for the average situation, not theirs.
Useful context includes:
- Audience: who this is for, down to the job title
- Stage: where you are in the process: exploring, deciding, or executing
- History: what you have already tried that didn't work
- Stakeholders: who needs to sign off, and who can block
- Constraints from upstream: budget, deadline, regulation
Two sentences of context can change a prompt from "useless" to "exactly what I needed."
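One way to stop skipping this part is to treat context as a small record you fill out before writing. A sketch; the field names mirror the list above, and nothing here is a real library:

```python
from dataclasses import dataclass

@dataclass
class PromptContext:
    """The situational facts the model cannot guess."""
    audience: str      # who this is for, down to the job title
    stage: str         # exploring, deciding, or executing
    history: str       # what you already tried that didn't work
    stakeholders: str  # who signs off, who can block
    upstream: str      # budget, deadline, regulation

    def render(self) -> str:
        return (
            f"Audience: {self.audience}. Stage: {self.stage}. "
            f"Already tried: {self.history}. "
            f"Stakeholders: {self.stakeholders}. "
            f"Hard limits: {self.upstream}."
        )
```

Filling out the record forces you to notice which fields you would otherwise have left blank.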
Part 3 — Constraints
Constraints are the rules the answer must follow. They are the part of the prompt that prevents the model from drifting toward the average.
Strong constraints:
- "Length: 250-400 words"
- "Tone: confident, never hype-y. No exclamation marks. No 'we're thrilled.'"
- "Must include three specific data points the user provides"
- "Must NOT mention competitors by name"
The counterintuitive part is that negative constraints are often more useful than positive ones. A positive constraint points toward one good answer; a "don't" rules out whole regions of bad ones. The space of bad answers is enormous, so excluding it is high leverage.
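Negative constraints have another advantage: they are mechanically checkable after the fact. A toy post-check; the banned phrases echo the examples above, and "acme corp" is a made-up stand-in for a competitor name:

```python
# Scan a draft for violations of the negative constraints.
BANNED = ["we're thrilled", "!", "acme corp"]  # "acme corp" is a stand-in

def violations(draft: str) -> list[str]:
    lowered = draft.lower()
    return [phrase for phrase in BANNED if phrase in lowered]

print(violations("We're thrilled to announce our boldest release yet!"))
# -> ["we're thrilled", "!"]
```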
Part 4 — Output format
What shape should the answer take?
This is where most prompts miss the easiest win. "Summarize this" gives you whatever the model feels like — sometimes a paragraph, sometimes bullets, sometimes a wall of text. "Summarize this as 3 bullet points, each ≤ 20 words" gives you exactly that.
Common output format choices:
- Markdown headers + bullets (good for explanations)
- A specific JSON schema (good for downstream code)
- A table with named columns (good for comparisons)
- A numbered list (good for steps)
- A two-column before/after (good for diffs)
If your answer needs to feed into another tool, define the format strictly. If it needs to be read by a human, pick the format the human will actually skim.
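"Define the format strictly" pays off because a strict format can be validated mechanically before the output reaches your pipeline. A stdlib-only sketch for the three-bullet summary above; the key name bullets is an arbitrary choice for this example:

```python
import json

# Expected shape: {"bullets": [str, str, str]}, each bullet <= 20 words.
def check_summary(raw: str) -> list[str]:
    """Return a list of format problems; empty means the output conforms."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["output is not valid JSON"]
    if not isinstance(data, dict):
        return ["output is not a JSON object"]
    bullets = data.get("bullets")
    if not isinstance(bullets, list) or len(bullets) != 3:
        return ["expected a 'bullets' array with exactly 3 items"]
    return [
        f"bullet {i + 1} is not a string of 20 words or fewer"
        for i, b in enumerate(bullets)
        if not isinstance(b, str) or len(b.split()) > 20
    ]
```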
Part 5 — Success criteria
How will you know the answer is good?
Of the five parts, this is the one people skip most often, and the easiest to add. A success criterion is a sentence the model can use as a self-check before it stops.
- "The summary is good if a busy executive could state the takeaway in one sentence after reading it."
- "The PRD is good if every user story has at least 3 acceptance criteria and an explicit 'out of scope' section."
- "The email is good if the reader can tell, in 5 seconds, what we want them to do next."
Success criteria are also what let an AI prompt generator do multi-turn refinement intelligently — it can check the model's first output against the criteria you stated and identify which gaps still need closing.
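That loop is easy to sketch. Here llm is a hypothetical stand-in for whatever model client you use, and the PASS-or-list-failures protocol is an assumption for illustration, not a standard:

```python
def llm(prompt: str) -> str:
    raise NotImplementedError("swap in a real model call")

def refine(task_prompt: str, criteria: list[str], max_rounds: int = 3) -> str:
    """Generate, self-check against the stated criteria, revise on failure."""
    output = llm(task_prompt)
    for _ in range(max_rounds):
        checklist = "\n".join(f"- {c}" for c in criteria)
        verdict = llm(
            "Check this answer against each criterion. "
            f"Reply PASS or list the failures.\n\nCriteria:\n{checklist}"
            f"\n\nAnswer:\n{output}"
        )
        if verdict.strip().upper().startswith("PASS"):
            break
        output = llm(
            f"{task_prompt}\n\nRevise your previous answer. "
            f"It failed these criteria:\n{verdict}"
        )
    return output
```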
A great prompt assembled
Putting all five parts together for a real example — "help me run a quarterly review for my engineering team":
Role: You are a director of engineering at a 200-person SaaS company who has run 12 quarterly reviews and knows what makes them useful versus what makes them theater.
Context: My team is 8 engineers, mid-market B2B product, finishing a quarter where we shipped 60% of planned roadmap and missed two major launches. Two engineers want to be promoted. One is at risk of leaving.
Constraints: No corporate-speak. No "synergy." Address the missed launches directly without naming individuals. Address the at-risk engineer privately, not in the team review.
Output format: Three sections — Team review (700-900 words, structured), Promotion conversations (one paragraph each), 1:1 talking points for the at-risk engineer (3-5 bullets).
Success criteria: Good if (1) every team member could read the team review without feeling thrown under the bus, (2) the promotion paragraphs include one specific moment that made the case, (3) the at-risk 1:1 doesn't read like a retention pitch.
That prompt produces a useful answer on the first try in any frontier model. The same prompt without any one of those five parts produces something you'd throw away.
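The assembly step itself is just concatenation in a fixed order. A sketch, with the example's parts abbreviated:

```python
def assemble(role: str, context: str, constraints: str,
             output_format: str, success_criteria: str) -> str:
    """Join the five parts into one prompt, in the order this post uses."""
    return "\n\n".join([
        role,
        f"Context: {context}",
        f"Constraints: {constraints}",
        f"Output format: {output_format}",
        f"Success criteria: {success_criteria}",
    ])

prompt = assemble(
    role="You are a director of engineering at a 200-person SaaS company...",
    context="Team of 8, mid-market B2B, shipped 60% of planned roadmap...",
    constraints="No corporate-speak. Address the missed launches directly...",
    output_format="Three sections: team review, promotion paragraphs, 1:1 bullets.",
    success_criteria="No one feels thrown under the bus; each promotion case "
                     "cites one specific moment; the 1:1 isn't a retention pitch.",
)
```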
Use a prompt generator to handle the bookkeeping
Knowing the anatomy is half the work. The other half is applying it every time, for every prompt, which is where most people wear out. A prompt generator handles that half: you describe your task, and it asks the questions whose answers fill in role, context, constraints, format, and criteria. The output is the structured prompt above, written for you.
The point isn't to memorize the recipe. The point is to have a tool that follows the recipe automatically, every time, so you can focus on the part only you can do — the substance.