How to teach AI your brand voice (and stop rewriting every draft)
Prompt engineering for Content Designers
Imagine you meet a really smart storyteller who can help you write stories. But here’s the problem: the storyteller doesn’t know you yet.
You: “Hey, write me a story!”
Storyteller: “Once upon a time there was a princess...”
You: “No no no! I want a story about dinosaurs!”
Storyteller: “Once upon a time there was a dinosaur princess...”
You: “ARGH! No princesses! Just dinosaurs doing dinosaur things!”
This happens 10 times. You get frustrated. The storyteller gets confused.
The solution: Instead of just saying “write me a story,” you need to teach the storyteller how you want stories. Give them a recipe card.
The 5 things on your recipe card
1. Context = “Here’s what’s happening”
“We’re writing a bedtime story for my little brother. He’s 4. He loves dinosaurs. He’s scared of the dark.”
Now the storyteller knows: make it gentle, not scary, dinosaurs are the heroes.
2. Instructions = “Here’s exactly what to do”
“Make it 5 sentences long. The dinosaur should solve a problem. End with the dinosaur going to sleep. No scary parts.”
Clear rules = Storyteller knows what to do.
3. Examples = “Here’s what good looks like”
“Good story: ‘Little T-Rex was afraid of the dark. His mom gave him a glowing rock. Now he feels brave. He can sleep anywhere. Goodnight, T-Rex.’
Bad story: ‘The dinosaur ate everything and roared loudly. THE END.’ (Too loud for bedtime!)”
Now the storyteller can copy what you like.
4. Rules = “Don’t do these things”
“Don’t use big words. Don’t make the dinosaur eat anyone. Don’t make loud sound effects. Use periods, not exclamation marks.”
Specific don’ts = Storyteller stays on track.
5. Checking = “Before you’re done, check your work”
“Is it 5 sentences? Is it gentle? Does the dinosaur go to sleep? If you answered NO to any of these, write it again.”
Storyteller learns to fix mistakes before showing you.
Why this works
Before recipe card:
You: “Write story!”
Robot: writes random thing
You: “Wrong!”
Repeat 10 times 😤
After recipe card:
You: give robot the recipe card once
Robot: reads all 5 things
Robot: writes exactly what you want
You: “Perfect!” 🎉
You use the same recipe card every time you need a bedtime story
The magic trick
Once you make a good recipe card, you can use it forever.
Every time you need a bedtime story or on-brand UX copy, just give the storyteller (ChatGPT, Claude, or any other AI) your recipe card and tell it: “This time, make it about a triceratops” or “This time, make it about a pterodactyl.”
The AI remembers your rules. You just change the component & context.
Congratulations! You just learned prompt engineering!
Now let’s talk about how this works for real content design work.
Why content designers need prompt engineering
Most people think prompt engineering is for developers. It’s not.
Developers use it to:
Extract structured data
Build AI features
Automate technical tasks
Content designers use it to:
Generate on-brand copy at scale
Test different phrasings quickly
Create content variations for A/B testing
Draft microcopy across multiple states
Maintain voice consistency across large products
AI isn’t replacing you. It’s multiplying your output. But only if you can direct it precisely.
Without prompt engineering:
You iterate 10+ times to get usable copy
Each generation feels random
You spend more time editing than writing from scratch
Brand voice is inconsistent
With prompt engineering:
First draft is 80% there
Consistent voice across all outputs
You iterate on strategy, not syntax
AI becomes a reliable content partner
But wait: “Prompt engineering is dead. Models are so smart now they don’t need complex instructions.”
That’s only half right. Yes, AI is better at guessing intent. But it still can’t read your mind about brand voice. Prompt engineering for content isn’t about tricking the AI—it’s about constraint management. You’re not coaxing better logic from the model. You’re defining the boundaries it must work within.
The prompt architecture stack
Think of prompts like content strategy—they have layers that build on each other, similar to how we approach any design system.
┌─────────────────────────────────────┐
│ LAYER 5: VALIDATION                 │ ← Critic loop (checks & rewrites)
├─────────────────────────────────────┤
│ LAYER 4: CONSTRAINTS                │ ← Syntax rules > adjectives
├─────────────────────────────────────┤
│ LAYER 3: EXAMPLES                   │ ← 3-5 pairs max
├─────────────────────────────────────┤
│ LAYER 2: INSTRUCTIONS               │ ← Do this, not that
├─────────────────────────────────────┤
│ LAYER 1.5: STRUCTURE                │ ← Content skeleton (long-form only)
├─────────────────────────────────────┤
│ LAYER 1: CONTEXT                    │ ← Who, what, why
└─────────────────────────────────────┘
        ↑                   ↓
        └── feedback loop ──┘
The layered structure combines established prompt engineering principles with content design system thinking.
Most people only use Layer 2 (instructions). That’s why their outputs are generic.
Good prompt engineers use all layers—and understand that validation loops back to instructions.
Note on Layer 1.5: For microcopy (error messages, CTAs, tooltips), skip this layer. For long-form content (help articles, onboarding flows), define the skeleton before defining voice.
Layer 1: Context (who, what, why)
Give AI the background it needs to make informed decisions.
What to include:
Product type and audience (general)
User state and journey stage (specific to this scenario)
Business goals
Technical constraints
Example:
❌ Weak context:
Write an error message

✓ Strong context:
You’re writing for a personal finance app used by millennials managing their first budget. User tried to delete a recurring expense but it’s linked to active budgets.

Context helps AI make decisions you’d make—like knowing whether to be reassuring (finance) vs. playful (gaming).
General product context (“fintech app for millennials”) stays constant.
Specific scenario context (“user tried to delete expense”) changes every time.
Layer 1.5: Structure (long-form content only)
For anything longer than microcopy, define the skeleton before the voice.
When to use this layer:
Help centre articles
Onboarding flows
Email sequences
Documentation
When to skip this layer:
Error messages
CTAs
Tooltips
Button labels
What to include:
Content hierarchy (H1, H2, H3)
Section sequence
Required components (intro, examples, conclusion)
Approximate length per section
Example:
Structure this help article as:
- H1: Title (question format)
- Intro: 2-3 sentences explaining the concept
- H2: How it works (3-4 bullet points)
- H2: Common scenarios (2 examples with before/after)
- H2: Troubleshooting (3 FAQs)
- Total length: 400-600 words

Without structure, long-form AI content becomes meandering. The AI needs a skeleton before it can apply voice.
If you paste 3 full help articles as examples in Layer 3, you’ll hit context limits or confuse the AI with stylistic variance. Structure first, then examples.
Layer 2: Instructions (do this, not that)
Tell AI exactly what you want and what to avoid.
What to include:
Specific action to take
Clear dos and don’ts
Structural requirements
Length constraints
Example:
❌ Vague instructions:
Write it clearly

✓ Specific instructions:
Write an error message that:
- Explains what went wrong in one sentence
- Explains why in one sentence
- Provides 2 specific actions as bullet points
- Is under 50 words total
- Does NOT use idioms or jargon
- Does NOT apologize (just explain and help)

AI doesn’t know what ‘clear’ means to you. Be explicit.
Layer 3: Examples (few-shot demonstrations)
Show AI what good looks like by providing examples.
What to include:
3-5 pairs maximum (more causes recency bias)
Examples that demonstrate voice and structure
Both good and bad examples (what not to do)
The recency bias trap: Too many examples cause AI to ignore earlier instructions and just mimic the most recent example. Limit to 3-5 distinct pairs.
Example:
❌ No examples:
Write in our brand voice

✓ With examples (limited to 3):
Here are examples of our voice:
Good:
- This recurring expense is linked to 3 budgets. Remove it from those budgets first, or archive it instead.
- Your budget needs at least one income source to calculate spending limits. Add your income to continue.
- Cannot complete transfer. Your daily limit resets at midnight EST.
Bad (don’t do this):
- Oops! Looks like something went wrong!
- Error: Cannot process request.

Layer 4: Constraints (syntax rules over adjectives)
Define the boundaries AI must work within—using concrete syntax, not subjective feelings.
The adjective trap: AI interprets “confident” as a probability distribution over tokens, not an emotional state. “Not cocky” is impossible to quantify.
What to include:
Syntactic rules (sentence structure, voice, punctuation)
Specific word choices to use/avoid
Formatting rules
Concrete measurables
Example:
❌ Subjective adjectives:
Be professional and friendly. Sound confident but not cocky. Be helpful but not hand-holding.

✓ Concrete syntax rules:
Sentence structure:
- Maximum 2 clauses per sentence
- Start with imperative verbs (“Remove”, “Add”, “Archive”)
- Use active voice only (”We couldn’t process” not “The request could not be processed”)
Punctuation:
- Use periods, not exclamation marks
- No emojis
- Use contractions (“can’t” not “cannot”) for friendly tone
Word choices:
- Use: ‘linked to’ not ‘associated with’
- Use: ‘archive’ not ‘hide’
- Avoid: ‘unable to’, ‘unfortunately’, ‘sorry’, ‘just’, ‘simply’
Formatting:
- Sentence case only
- Bullet points for actions
- Numbers for steps

Syntax is measurable. Adjectives are interpretable.
“Friendly” can mean “uses contractions + exclamation marks” or “uses first-person pronouns + questions.” Be explicit.
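Because these rules are syntactic, they’re machine-checkable. Here’s a minimal sketch of a copy linter in Python; the banned-phrase list and the clause-counting heuristic are illustrative assumptions, not a real tool:

```python
import re

# Illustrative rule list; adapt it to your own constraints document.
BANNED_PHRASES = ["unable to", "unfortunately", "sorry", "just", "simply"]

def check_copy(text: str) -> list[str]:
    """Return a list of syntax-rule violations found in the copy."""
    issues = []
    if "!" in text:
        issues.append("uses exclamation marks (periods only)")
    for phrase in BANNED_PHRASES:
        if re.search(rf"\b{re.escape(phrase)}\b", text, re.IGNORECASE):
            issues.append(f"uses banned phrase: '{phrase}'")
    # Rough clause count: treat commas and coordinating conjunctions as breaks.
    for sentence in re.split(r"[.?!]+\s*", text):
        if sentence and len(re.split(r",|\band\b|\bbut\b", sentence)) > 2:
            issues.append(f"more than 2 clauses: '{sentence.strip()}'")
    return issues

print(check_copy("Unfortunately, we were unable to process your request!"))
```

A script like this won’t judge tone, but it catches the measurable violations before you even read the draft.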
Translation guide for common adjectives:
┌─────────────────┬───────────────────────────────────────────────────────┐
│ Vague adjective │ Concrete syntax                                       │
├─────────────────┼───────────────────────────────────────────────────────┤
│ Be concise      │ Maximum 2 clauses per sentence                        │
│ Be confident    │ Start with imperative verbs. No hedging words like    │
│                 │ ‘might’, ‘possibly’, ‘perhaps’                        │
│ Be friendly     │ Use contractions. Use ‘we’ & ‘you’. No passive voice  │
│ Be professional │ No contractions. Use full sentences. Periods only,    │
│                 │ no exclamation marks                                  │
│ Be helpful      │ Include 2-3 specific next steps with action verbs     │
└─────────────────┴───────────────────────────────────────────────────────┘

Layer 5: Validation (the critic loop)
Tell AI how to evaluate its own output—and force it to rewrite if it fails.
The yes-man trap: LLMs hallucinate validation. If an AI generates a bad draft, it often lies on the checklist to match the draft rather than fixing the draft.
What to include:
Quality checklist
Explicit instruction to critique before finalizing
Requirement to rewrite if validation fails
Example:
❌ Weak validation (AI will lie):
Check if it’s under 50 words. Show your answers.

✓ Strong validation (forces honest critique):
Before showing me the final copy, critique your draft using these questions:
1. Is it under 50 words? (Count every word. Show the count.)
2. Does it use any idioms? (List any idioms found. Idioms include: ‘just’, ‘simply’, ‘try’, phrasal verbs like ‘reach out’)
3. Does it explain both what and why?
4. Are the next steps actionable? (Can a user do them without clarification?)
5. Would this translate literally to another language without confusion?
If ANY answer reveals a problem, REWRITE the copy to fix it. Show me:
- Your critique (with specific issues found)
- The revised copy
- Confirmation that all checks now pass

Why this matters: Forcing AI to critique step-by-step (Chain-of-Thought reasoning) before finalizing prevents hallucinated validation. The AI must identify specific issues and fix them.
Pro tip: Separate the drafter from the validator. First prompt generates copy. Second prompt critiques it against your guidelines. This prevents the yes-man effect.
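The drafter/validator split can be sketched as a simple two-pass loop. In this sketch, `call_model` is a stand-in for whatever model API or tool you use; its stubbed behavior exists only so the example runs:

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an API request)."""
    # Stubbed responses so the sketch runs without an API key.
    if "critique" in prompt.lower():
        return "PASS"
    return "Cannot delete this expense. It's linked to 3 budgets."

def draft_then_validate(task: str, guidelines: str, max_rounds: int = 3) -> str:
    """First pass drafts the copy; second pass critiques it against the rules."""
    draft = call_model(f"{guidelines}\n\nTask: {task}")
    for _ in range(max_rounds):
        verdict = call_model(
            f"Critique this draft against the guidelines.\n"
            f"Guidelines: {guidelines}\nDraft: {draft}\n"
            f"Reply PASS, or rewrite the copy if any check fails."
        )
        if verdict.strip() == "PASS":
            return draft          # validator approved the draft
        draft = verdict           # validator returned a rewrite; re-check it
    return draft

final_copy = draft_then_validate("Write the delete-expense error", "Under 50 words.")
```

Because the validator never sees the drafter’s conversation history, it has no draft to stay loyal to, which is exactly what defeats the yes-man effect.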
A note on data security
Before we go further, let’s talk about what you should never paste into AI tools.
AI is a tool, not a vault. When you use ChatGPT, Claude, or similar tools (especially free versions), treat them like a contractor you just met in a coffee shop. Give them the context they need to do the job, but don’t give them your login credentials.
Never paste:
Real user names, emails, or account numbers (PII)
Unreleased confidential feature details (unless you’re on an enterprise plan with proper safeguards)
Trade secrets or proprietary business logic
Actual customer data or support tickets with identifiable information
In enterprise fintech (like the examples I’ve used), pasting proprietary error scenarios or user context into public AI tools can be a fireable offense.
The enterprise vs. consumer distinction:
API/Enterprise plans (ChatGPT Team/Enterprise, Claude for Work): Often have zero-retention policies and contractual privacy protections
Consumer plans (ChatGPT Free/Plus, Claude free): Your chats may be used for training. Assume a human reviewer might read them.
The fix: Anonymize everything. Use placeholder names. Generalize user scenarios. If you’re working on unreleased features, check your company’s AI policy first.
Even anonymized data patterns can be risky in free models. Example: “User tried to transfer money between accounts” reveals you have a transfer feature. If that’s unannounced, you’ve leaked information.
Safe approach:
Instead of: “User John Smith tried to transfer $5,000 to his savings account ending in 4521”
Use: “User tried to transfer money to a linked account”
If you’re on a free tier, assume a human reviewer might read your chat.
Your prompts can still be detailed and effective without exposing sensitive information.
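You can make anonymization routine by scrubbing text before it ever reaches a prompt. A rough sketch in Python; the regex patterns are illustrative and nowhere near exhaustive, so don’t rely on them for real PII protection:

```python
import re

# Illustrative patterns only; real PII scrubbing needs a vetted tool
# and a check of your company's AI policy.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[email]"),
    (re.compile(r"\$[\d,]+(?:\.\d{2})?"), "[amount]"),
    (re.compile(r"to (?:his|her|their) [\w ]*account ending in \d{4}"),
     "to a linked account"),
]

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholders before prompting."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(scrub("User tried to transfer $5,000 to his savings account ending in 4521"))
```

The scrubbed sentence still carries everything the model needs to write good copy, which is the whole point: detail for the AI, placeholders for the sensitive parts.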
Real example: Building a complete prompt
Let’s build a prompt for generating error messages using all layers (skipping 1.5 since this is microcopy).
The complete prompt:
CONTEXT:
You’re writing error messages for a personal finance app used by millennials (ages 25-35) managing their budget for the first time. The user tried to delete a recurring expense that’s currently linked to 3 active budgets. Deleting it would break those budgets.
INSTRUCTIONS:
Write an error message that:
1. Explains what the user tried to do
2. Explains why it can’t be done
3. Provides 2 specific next steps as separate bullets
4. Is under 50 words total
5. Uses sentence case
6. Does NOT apologize
7. Does NOT use idioms or casual phrases like ‘oops’, ‘uh oh’
Format the 2 action options as bullet points.
EXAMPLES OF OUR VOICE (3 PAIRS):
Good examples:
- ‘This recurring expense is linked to 3 budgets. Remove it from those budgets first, or archive it instead.’
- ‘Your budget needs at least one income source to calculate spending limits. Add your income to continue.’
- ‘Cannot complete transfer. Your daily limit resets at midnight EST.’
Bad examples (don’t write like this):
- ‘Oops! Looks like something went wrong!’
- ‘We’re unable to process your request at this time.’
CONSTRAINTS (SYNTAX RULES):
Sentence structure:
- Maximum 2 clauses per sentence
- Start with imperative verbs for action items
- Use active voice only
Punctuation:
- Periods only (no exclamation marks)
- Use contractions for friendly tone
Word choices:
- Use: ‘linked to’ not ‘associated with’
- Use: ‘remove’ not ‘delete’ when talking about connections
- Avoid: ‘unable to’, ‘unfortunately’, ‘sorry’, ‘just’, ‘simply’, ‘try’
Formatting:
- Sentence case only
- Bullet points for actions
- No bold or italics
VALIDATION (CRITIC LOOP):
Before showing me the final copy, critique your draft:
1. Word count: Count every word. Is it under 50?
2. Idiom check: Does it use ‘just’, ‘simply’, ‘try’, or phrasal verbs? List any found.
3. Explanation: Does it explain both what and why?
4. Actionability: Can a user complete the next steps without clarification?
5. Translation safety: Would this work literally translated to Spanish?
If ANY check fails, REWRITE the copy to fix it. Show me:
- Your critique (specific issues found)
- The revised copy
- Confirmation all checks pass

Result from this prompt:
AI’s critique:
Word count: 31 words ✓
Idiom check: None found ✓
Explanation: Explains “cannot delete” (what) and “linked to 3 budgets” (why) ✓
Actionability: Both actions are specific and completable ✓
Translation safety: Direct translation works (no idioms or cultural phrases) ✓
All checks pass.
Final error message:
Cannot delete this expense—it’s linked to 3 budgets.
To proceed:
• Remove it from those budgets first
• Archive it to keep your budgets working

Why this works:
Explains what happened (can’t delete)
Explains why (linked to 3 budgets)
Gives 2 clear options in bullets (as requested)
31 words (well under 50)
Uses syntax rules (active voice, imperative verbs, contractions)
No apologies, no idioms
Formatted exactly as specified
First try. No iteration needed.
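A prompt like this is easier to maintain as reusable parts than as one long string. One way to sketch the assembly in Python (the function name and sample strings are illustrative):

```python
def build_prompt(context: str, instructions: str, examples: str,
                 constraints: str, validation: str) -> str:
    """Assemble the five layers into one prompt, in the order they stack."""
    return "\n\n".join([
        f"CONTEXT:\n{context}",
        f"INSTRUCTIONS:\n{instructions}",
        f"EXAMPLES OF OUR VOICE:\n{examples}",
        f"CONSTRAINTS (SYNTAX RULES):\n{constraints}",
        f"VALIDATION (CRITIC LOOP):\n{validation}",
    ])

prompt = build_prompt(
    context="Personal finance app for millennials. User tried to delete a linked expense.",
    instructions="Write an error message: what, why, 2 actions, under 50 words.",
    examples="Good: 'This recurring expense is linked to 3 budgets. Remove it first.'",
    constraints="Active voice only. Max 2 clauses per sentence. No apologies.",
    validation="Count words. List idioms. Rewrite if any check fails.",
)
```

When the error scenario changes, you swap one argument instead of re-editing a wall of text, and the other four layers stay stable.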
Pro tip: System prompts vs. user prompts
Tired of pasting Layers 1, 3, and 4 every time you need copy?
Modern AI tools separate persistent context (system prompt) from specific tasks (user prompt).
In ChatGPT: Use ‘Custom Instructions’ in settings or create a project to store your brand voice, constraints, and examples.
In Claude: Create a Project and add your voice guidelines, examples, and constraints to project knowledge.
What this means for your workflow:
Store once (in system prompt):
Layer 1: General product context (what your product is, who it’s for)
Layer 3: Voice examples
Layer 4: Brand constraints (syntax rules)
Write every time (in user prompt):
Layer 1: Specific scenario context (what the user just did, current emotional state)
Layer 2: Task instructions
Layer 5: Validation for this task
Example breakdown:
Stored in system prompt:
“You’re writing for a personal finance app used by millennials managing their budget”
[Your 3 best error message examples]
[Your syntax rules: active voice, contractions, imperative verbs, max 2 clauses per sentence]
Written fresh each time:
“User tried to delete a recurring expense linked to 3 budgets”
“Write an error message that explains what and why, provides 2 actions...”
“Critique your draft: check word count, idioms, actionability. Rewrite if needed.”
Critical distinction: User emotional state must go in the user prompt. If you lock “user is calm” into the system prompt, you can’t write effective error messages for angry users.
Result: You only type the specific scenario and instructions for each new error message, CTA, or piece of copy. The AI already knows your brand voice and general product context.
This transforms prompt engineering from a chore into a scalable system.
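Most chat APIs express this same separation as a list of role-tagged messages. A sketch in the common system/user message format (the exact field names vary by provider, and the stored guidelines here are sample text):

```python
# Stored once: general context, voice examples, syntax rules.
SYSTEM_PROMPT = """You write for a personal finance app used by millennials.
Syntax rules: active voice, contractions, imperative verbs, max 2 clauses per sentence.
Good example: 'This recurring expense is linked to 3 budgets. Remove it from those budgets first, or archive it instead.'"""

def build_messages(scenario: str, task: str) -> list[dict]:
    """Pair the stored system prompt with this task's fresh user prompt."""
    user_prompt = f"Scenario: {scenario}\n\nTask: {task}"
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # stored once
        {"role": "user", "content": user_prompt},      # written each time
    ]

messages = build_messages(
    "User tried to delete a recurring expense linked to 3 budgets",
    "Write an error message explaining what and why, with 2 actions.",
)
```

Note that the user’s emotional state and the specific scenario live only in the user message, so the system prompt never locks you into one mood.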
Common prompt engineering patterns
Pattern 1: The if-then prompt (applying conditional logic)
Use conditional logic in your prompts for different scenarios.
IF user is new (< 1 week):
- Add more context and explanation
- Include ‘Learn more’ links
- Use complete sentences
ELSE IF user is returning:
- Be brief
- Focus on the specific issue
- Skip basic explanations
ELSE IF user is power user:
- Be extremely brief
- Assume knowledge
- Just tell them what to fix

Example prompt:
Based on user state, adjust the error message:
- New users: Include what this feature does + why it matters
- Returning users: Focus on the specific error only
- Power users: Just the error + solution, no context

Pattern 2: The variation generator
Generate multiple options at once for A/B testing.
Generate 3 variations of this CTA:
1. Benefit-focused (emphasize what user gains)
2. Action-focused (emphasize what happens next)
3. Urgency-focused (emphasize time/scarcity if relevant)
Keep all under 3 words.

Pattern 3: The consistency checker
Use AI to review existing copy for voice consistency.
Review these 5 error messages and identify which ones break our syntax rules:
Syntax rules:
- Max 2 clauses per sentence
- Active voice only
- No apologies
- Start actions with imperative verbs
Error messages:
1. [Your message here]
2. [Your message here]
...
For each, tell me: Does it follow our syntax? If not, what specifically breaks the rule?

This prompt helps you scale content review without hiring more people by forcing AI to cite specific rule violations, not just give vague feedback.
Pattern 4: The microcopy suite
Generate all related microcopy states at once to ensure all states use the same voice and terminology.
For the ‘Export data’ feature, generate copy for:
1. Button label
2. Hover tooltip
3. Loading state
4. Success message
5. Error message (if file too large)
6. Error message (if no data to export)
Use our syntax rules (active voice, max 2 clauses, contractions).
Keep voice consistent across all 6.

Pattern 5: The developer handoff (JSON output)
Position yourself as highly technical by generating structured copy.
Write error messages for these 5 states and output as a JSON object that developers can use directly.
Format:
{
  "error_code": "EXPENSE_LINKED",
  "user_message": "Cannot delete...",
  "actions": ["Remove from budgets", "Archive expense"],
  "severity": "warning"
}

Now developers can copy-paste your output directly into code. You become indispensable because you’re delivering implementation-ready content, not just words in a doc.
Pro tip: This is especially powerful for design systems. Generate entire microcopy libraries in JSON format that can be imported directly into component libraries.
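If developers will consume your JSON directly, it’s worth validating the shape before handoff, since models sometimes drop a key or emit curly quotes. A minimal sketch (the required keys mirror the format above; the helper name is illustrative):

```python
import json

REQUIRED_KEYS = {"error_code", "user_message", "actions", "severity"}

def validate_handoff(raw: str) -> dict:
    """Parse AI output and confirm it matches the agreed schema."""
    entry = json.loads(raw)  # raises a ValueError subclass if not valid JSON
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not isinstance(entry["actions"], list):
        raise ValueError("'actions' must be a list")
    return entry

sample = ('{"error_code": "EXPENSE_LINKED", "user_message": "Cannot delete...", '
          '"actions": ["Remove from budgets", "Archive expense"], "severity": "warning"}')
entry = validate_handoff(sample)
```

A check like this is a five-second safeguard that keeps a malformed message out of the component library.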
The BIG question: When to use AI vs. when to write yourself
Use AI for:
First drafts you’ll refine
Generating variations for testing
Scaling content across similar patterns
Documenting existing voice (analyzing your copy)
Repetitive microcopy (button labels, form fields)
Write yourself for:
Strategic messaging (value props, positioning)
Brand-defining moments (onboarding, first use)
High-stakes copy (legal, security, payments)
Creative concepts (empty states with personality)
Anything that defines your brand voice
AI amplifies your voice. It doesn’t create it.
If you can’t explain your voice clearly enough to teach AI, you probably don’t have a clear voice yet. Define it first. Then use AI to scale it.
The AI content workflow
Here’s how to integrate AI into your content process:
Step 1: Define your voice (once)
Document your syntax rules (not adjectives)
Collect 10-15 examples of your best copy
Note what you never say
Step 2: Build prompt templates (once per content type)
Error messages template
CTA template
Empty state template
Confirmation template
Step 3: Set up system prompts
Store general product context, examples, and syntax rules in ChatGPT Custom Instructions or Claude Projects
Now you only type specific scenario and validation each time
Step 4: Generate drafts (every time)
Use your template
Add specific scenario context
Include critic loop validation
Generate 2-3 variations if testing
Step 5: Review and refine
Check against your syntax rules
Verify technical accuracy
Test with real users
Iterate
Step 6: Document learnings
What worked? Add to examples
What didn’t? Tighten syntax rules
Update templates
The result: Your prompts get better over time. Your first drafts get closer to final. Your voice stays consistent.
Quality control checklist
Before using AI-generated content, always check:
Voice consistency:
Follows our syntax rules (not subjective tone)
Matches examples I provided
No off-brand phrases
Accuracy:
Technically correct
Actionable next steps
No made-up features or capabilities
Clarity:
Would a new user understand this?
No jargon or idioms
Translates well to other languages
Effectiveness:
Achieves the goal (reduce errors, drive action, etc.)
Appropriate for user state
Works on mobile and desktop
Security:
No sensitive information included
No PII or proprietary details leaked
Safe for public AI tools
Never publish AI content without human review. AI is your first draft, not your final draft.
Building your prompt library
Don’t let good prompts disappear into Slack threads. Build a library your whole team can use.
Simple structure:
Create a shared Notion page or spreadsheet with these columns:
Content type - Error message
Trigger/use case - Linked data
The prompt - [Full prompt]
Example output - “Cannot delete...”
Last updated - Nov 2025
This helps every content designer on your team generate on-brand copy without starting from scratch. You’re not just improving your own workflow—you’re scaling your voice across the entire team.
Start small:
Pick one content type (error messages, CTAs, etc.)
Build one good prompt template
Use it 10 times, refine it
Then expand to other content types
Get better over time:
Collect examples of what works (limit to 3-5 per category)
Document syntax rules that AI violates
Tighten constraints
Update templates
Share with your team:
Document your prompts
Create a prompt library
Train others to use them
Build on each other’s learnings
The goal isn’t to replace writers with AI. It’s to make every writer 10x more productive while maintaining quality.
What’s next
We’ll build on what you learned about prompts to tackle something even more complex: intent mapping. Understanding what users really mean when they say something to a chatbot or voice interface.
See you in January. Happy new year!
— Mansi
Your UX Writing Bud
Found this useful? Here’s how to go deeper:
Share your prompts - Built a great prompt template? DM me on Substack. I’ll feature the best ones (with credit).
Share with your team - Forward this to other content designers exploring AI. Build your prompt library together.
Tell me what worked - Used this framework to generate better AI content? Reply with your results.
Support this work - If this changed how you use AI, buy me a coffee/book.
UX Writing Bud delivers frameworks and systems for content designers every alternate Friday. Free, always.


