In January 2025, I ran identical 800-word article prompts through six AI writing platforms with every brand voice setting each one offered. Jasper, HubSpot’s Content Assistant, Copy.ai, Writesonic, Writer, Notion AI, the full stack. Set each to “Professional” on the first pass, “Casual” on the second. The outputs changed. “Utilize” became “use.” Sentences got shorter. Register shifted. But here’s what didn’t change: the rhetorical moves. Every article opened with a topic sentence. Every section closed with a transition. Every paragraph held the same AI prose cadence, confident assertions, balanced by qualifiers, wrapped in smooth transitions. The voice was indistinguishable between platforms and between settings.
A tone slider doesn’t control voice. It shifts vocabulary register. Those are different problems, and conflating them is the reason most AI writing tools produce content that sounds like its brand for exactly one article and then drifts.
What “Professional” Actually Does
The model you’re prompting was trained with reinforcement learning from human feedback. Human raters consistently prefer prose that sounds clear, appropriately formal, and well-structured, which is exactly what “professional” describes. When you apply that setting, you’re nudging a model that already wants to sound professional toward a slightly more formal vocabulary distribution. The constraint is redundant for the behavior you actually want to change.
When you set it to “casual,” you’re moving the distribution toward contractions and shorter average sentence length. Neither instruction changes how the model opens paragraphs, what kinds of entities it references, whether it makes direct claims or hedges everything, or what phrases it gravitates toward under pressure. Those patterns come from the training distribution and stay there regardless of the tone attribute you’ve selected.
The mistake most operators make is conflating vocabulary register with voice. Register is a narrow slice of the full signal. Voice is the whole rhetorical pattern, sentence length distribution, what gets named versus left vague, whether the writer uses fragments, how assertions are structured, what the opening move of each section is. You can shift register with a slider. Voice requires behavioral specification, not a setting.
The Format That Actually Holds
I’ve been building AI writing pipelines since 2022, first as internal tooling, then as the underlying persona engine in BriefWorks. The format I landed on has four components. None of them are adjectives.
Required rhetorical actions. Concrete moves the writer must perform, not a description of what the writing should feel like. Three to five per persona, each specific enough to check: did this fire or not? A required action is testable. “Be confident” is not a required action. “Open at least one paragraph per section with a first-person observation from direct experience” is.
Prohibited phrases. Not “avoid jargon.” Specific strings the brand doesn’t use, maintained quarterly because AI phrase patterns evolve. A useful list starts with universal AI-register phrases, the filler intensifiers, simulated-candor openers, frictionless-everything marketing words that signal machine-generated text to experienced readers, and adds brand-specific avoidances on top. Every item on the list is a guard against a specific failure mode you’ve already observed in your outputs.
A cadence description. One sentence about the rhythm of the prose: sentence length distribution, typical opening move, whether fragments are acceptable. Under 30 words. This constraint produces recognizable sentence-level patterns across all content, the thing readers notice without being able to name.
Worked example sentences. Three to five sentences chosen for rhythm rather than topic. Demonstrations, not descriptions. A model pattern-matching against worked examples holds the voice longer than a model handed a paragraph explaining what the voice should feel like, because the rhythm is visible in the examples and invisible in the description.
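The four components map naturally onto a small data structure. A minimal sketch in Python; the class and field names here are illustrative, not a BriefWorks API:

```python
from dataclasses import dataclass


@dataclass
class PersonaSpec:
    """Four-component persona spec. Field names are illustrative."""
    required_actions: list[str]    # 3-5 testable rhetorical moves
    prohibited_phrases: list[str]  # exact strings, reviewed quarterly
    cadence: str                   # one sentence, under 30 words
    worked_examples: list[str]     # 3-5 sentences chosen for rhythm

    def validate(self) -> list[str]:
        """Return spec-level problems; empty list means well-formed."""
        problems = []
        if not 3 <= len(self.required_actions) <= 5:
            problems.append("expected 3-5 required actions")
        if len(self.cadence.split()) >= 30:
            problems.append("cadence description should be under 30 words")
        if not 3 <= len(self.worked_examples) <= 5:
            problems.append("expected 3-5 worked example sentences")
        return problems
```

The point of encoding the spec rather than pasting prose is that each field becomes a pass/fail check, which matters later when a review pass audits generated output.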
The Senior Practitioner in Full
Let me make this concrete with one persona in full, the Senior Practitioner, the default voice BriefWorks uses for content aimed at experienced operators. Here’s the complete spec.
Four required rhetorical actions:
The first: open at least one paragraph per major section with a first-person observation from experience. Not a research citation. Not a general claim. Specifically: “I’ve seen this break when…” or “When I worked on [project/company/year]…” or “In [year] we shipped something like this and…” The formula is first-person + named circumstance + outcome. This move is what distinguishes practitioner writing from summary writing, and it’s the move AI defaults skip entirely when left unconstrained.
The second: name a specific entity in each section. A tool, a brand, a year, an institution, the actual thing, not a generic placeholder. “A company I worked with” becomes “the content team at Intercom in 2021.” “Some teams” becomes “the team scaling from 20 to 200 articles per month.” The anonymization reflex is one of the strongest AI-writing tells. Removing it is a required action, not a suggestion.
The third: call out one mistake-pattern per article. The format: “The mistake most operators make here is…” or “Where this consistently breaks is…” One per article, not one per section. Pick the mistake that costs teams the most and name it directly. This move is absent from nearly every AI-generated article because the model defaults to a teaching frame. The correction move is what signals credibility to readers who already know the basics.
The fourth: include one hyper-specific tangent. Two to three sentences breaking from the main thread to a named incident, a client, a specific date, a tool bug, a side-project that derailed the main work, before returning. “I burned a week on this with Notion’s API in Q3 2023 when…” or “A content director I worked with at a Series B fintech once asked me to…” The tangent is evidence of lived experience. An article with zero tangents reads like a summary of general knowledge, which is the baseline AI output with no persona constraints applied.
Six prohibited phrases: “best practice,” “industry standard,” “as a beginner you might think,” “studies show that,” “many experts believe,” and the filler opener “in this article we’ll explore.” Each one marks the model reverting to generic professional prose. The first two signal deference to consensus over direct experience. The last four signal the hedging register of a writer who isn’t confident enough in their position to state it directly.
Cadence: Medium-length declaratives, occasional sharp fragment for emphasis. Paragraphs open with subjects or first-person (“I’ve…”, “Most teams…”), rarely with transitional adverbs.
Three worked example sentences:
- “I’ve watched six different teams ship the same kind of process. Three failed at the same handoff.”
- “The data exists. Most people never query past the dashboard view.”
- “The fix isn’t a better template. It’s killing the template entirely.”
Those three sentences tell a model more about the expected voice than a paragraph describing that the persona is “confident, direct, and occasionally blunt.” The fragment pattern is demonstrated. The sentence-opening convention is shown. The rhythm is visible in the text rather than labeled in the description.
Required Actions vs Attitudes
In spring 2024, I was reviewing output from a B2B SaaS content team that had been running Writer’s brand voice product for six months, full style guide pasted into the system prompt, tone set to “professional,” every article reviewed by a content lead before publishing. The output was technically correct and completely unrecognizable as their brand. Their founder described it as “SaaS Wikipedia.”
The style guide they’d pasted ran to 3,200 words. It described the desired voice as “data-informed but approachable,” “confident without being arrogant,” and “jargon where appropriate, plain language everywhere else.” None of those descriptions map to executable constraints. A model generating text cannot execute “be approachable.” Approachable is an interpretation a human reader applies after the fact, it isn’t a token sequence the model can aim for.
What the model can execute: open this paragraph with a subject-first claim, not a transitional adverb. Use the actual company name instead of “a leading provider.” State the assertion before the qualification, not after. Those are specific patterns. The model holds them because they’re specified at the level of behavior, not attitude.
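Constraints at this level are mechanically checkable. A rough sketch of one such check, flagging paragraphs that open with a transitional adverb instead of a subject-first claim; the adverb list is a starting point I'm assuming, not a canonical set:

```python
# Flag paragraphs whose first word is a transitional adverb rather than
# a subject-first claim. Extend the set with whatever your outputs reach for.
TRANSITIONAL_OPENERS = {
    "however", "additionally", "furthermore", "moreover",
    "consequently", "therefore", "ultimately", "importantly",
}


def paragraphs_opening_with_transitions(article: str) -> list[str]:
    """Return the opening 60 chars of each paragraph that fails the check."""
    flagged = []
    for para in article.split("\n\n"):
        words = para.strip().split()
        first_word = words[0].rstrip(",").lower() if words else ""
        if first_word in TRANSITIONAL_OPENERS:
            flagged.append(para.strip()[:60])
    return flagged
```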
We rebuilt their spec from their fifteen best historical posts. Three required actions extracted from what their writers actually did in the pieces their readers had shared most. A prohibited-phrase list built from observing what their AI output reached for that their best content never used. The improvement was measurable within the first five articles under the new spec, not because the underlying model had changed, but because the behavioral constraints had replaced the attitudinal description.
Why Example Sentences Win
Anthropic’s prompt engineering guidance is explicit on this: models follow concrete instructions better than abstract descriptions, and few-shot examples produce more consistent output than prose explanation of the desired behavior. That finding directly governs how persona specs should be built.
Three rhythm-matched example sentences in the system prompt are an active constraint on every generation decision, the model is pattern-matching against them as it generates. A descriptive paragraph, “the voice is direct and occasionally dry”, is a high-level label competing with every other pattern in the training distribution. The model matches patterns. Give it patterns.
The test I run with new clients: generate the same article with (a) an adjective-based persona description and (b) three worked example sentences with no other description. Then audit section four of each output. The adjective version shows drift, still nominally on-brand in the opening section but the rhetorical moves are gone by section three. The example version holds because the rhythm anchor is still in context.
Three examples are better than one for a structural reason: they give the model enough data to infer a distribution rather than copy a specific sentence. One example produces mimicry. Three produce a pattern the model can generalize. This is why BriefWorks personas carry three cadence anchors rather than one, opener rhythm, mid-article rhythm, and closing rhythm, each demonstrating the voice in a different structural position.
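Injecting the patterns is straightforward. A minimal sketch of assembling a system prompt from a spec; the dict keys and the exact prompt wording are assumptions, not the BriefWorks implementation:

```python
def build_system_prompt(spec: dict) -> str:
    """Assemble a voice-constraint block from a persona spec dict."""
    lines = [
        "Voice constraints for this article:",
        f"Cadence: {spec['cadence']}",
        "Required rhetorical actions (each must appear):",
    ]
    lines += [f"- {action}" for action in spec["required_actions"]]
    lines.append("Never use these phrases:")
    lines += [f"- {phrase}" for phrase in spec["prohibited_phrases"]]
    lines.append("Match the rhythm of these example sentences:")
    lines += [f"- {sentence}" for sentence in spec["worked_examples"]]
    return "\n".join(lines)
```

Note the examples go in verbatim, not summarized: the rhythm is the payload, and paraphrasing it back into description would recreate the adjective problem.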
Building One from Scratch
Start with your best existing content, not your aspirational brand guide. Pull ten to fifteen pieces that people inside the company point to and say “this is us.” Read them for behavior, not for adjectives.
For required actions: what do sections open with? Do they start with first-person? A claim? A number? What entities get named, and how specifically? Is there a consistent structure to how the writer introduces the problem before the solution? Three patterns you can verify in the text become the required-actions list. If you can’t point to a specific place in the article and say “yes, this fired,” the action is too vague.
For prohibited phrases: start with your worst AI output, the articles that felt the most generic, the ones that prompted the most “this doesn’t sound like us” comments. List the specific phrases that appear there but never in your best pieces. That list, not the list you’d write from scratch, is accurate. The model reaches for those phrases under pressure. Every item is a guard against a failure mode you’ve already seen in production.
For the cadence description: one sentence about rhythm. Sentence length, opening convention, fragment policy. Don’t try to describe the tone, describe the mechanics of how sentences are assembled.
For worked examples: pick three sentences from your ten best pieces, chosen for rhythm over content. You want a sentence that shows how the voice handles a claim, a sentence that shows a transition, and a sentence that shows a section opening. Those three, injected at every generation prompt, outperform two pages of adjective-based description by the time you reach article fifteen.
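The prohibited-phrase extraction above can be automated roughly: count n-grams in your worst AI output and keep the frequent ones that never appear in your best pieces. A sketch with illustrative thresholds you'd tune per corpus:

```python
from collections import Counter


def candidate_prohibited_phrases(worst: list[str], best: list[str],
                                 n: int = 3, min_count: int = 3) -> list[str]:
    """N-grams frequent in the worst AI output but absent from the best pieces.

    `n` and `min_count` are starting points, not recommendations; the output
    is a candidate list for human review, not a finished prohibited list.
    """
    def ngram_counts(texts: list[str]) -> Counter:
        counts = Counter()
        for text in texts:
            words = text.lower().split()
            counts.update(" ".join(words[i:i + n])
                          for i in range(len(words) - n + 1))
        return counts

    worst_counts = ngram_counts(worst)
    best_counts = ngram_counts(best)
    return [gram for gram, count in worst_counts.most_common()
            if count >= min_count and gram not in best_counts]
```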
Once the spec exists, it needs maintenance. Required actions and examples are stable; update them when your brand positioning shifts significantly. The prohibited-phrase list needs quarterly attention. AI phrase patterns propagate fast; phrases that were distinctive tells in 2023 are now average internet writing. The spec is a living document tracking the gap between your voice and the statistical center you’re trying to stay away from.
The content brief guide covers how persona context feeds into the brief layer before generation starts, the voice spec is the behavioral constraint; the brief is the structure that channels it into a specific article. For a comparison of how different AI brief tools handle persona injection at the pipeline level, the tools comparison has the breakdown. The 2026 ranking analysis covers how brand differentiation at the content layer maps to ranking outcomes at scale.
Frequently Asked Questions
Can I write my own persona in BriefWorks?
Yes. The persona builder accepts required rhetorical actions, a prohibited-phrase list, a cadence description, and worked example sentences, the same four components described in this guide. You can build from scratch or start from a built-in persona and modify it. Custom personas are stored per-account and available across all projects.
Why does Claude ignore tone descriptors even when I paste them into the prompt?
Tone descriptors like “professional” or “conversational” are redundant for an RLHF-trained model, the model already defaults to producing text that fits those labels, so the instruction doesn’t compete effectively with the training distribution. Required rhetorical actions, prohibited phrases, and worked examples compete effectively because they specify actual token-level behavior rather than labeling an interpretation.
What’s the difference between a persona spec and a brand voice guide?
A brand voice guide is written for human writers and describes the desired voice using adjectives and general principles. A persona spec is written for AI generation pipelines and specifies behavioral constraints, required actions, prohibited phrases, cadence mechanics, and worked examples, all of which produce pass/fail criteria a review system can check. The key operational difference: a style guide produces a document a human interprets. A persona spec produces measurable criteria.
Do I need a separate persona per content topic?
No. Voice, the rhetorical substrate, stays constant across topics. The cadence description and required actions that define the persona apply to an article about pricing strategy the same way they apply to an article about onboarding flows. What changes per topic is the brief: the angle, the audience, the evidence stack. The persona provides the rhetorical pattern; the brief provides the content direction. If required-actions lists need to be substantially different per topic, that’s usually a brief-quality problem, not a persona problem.
How do I know if a persona spec is working?
Run a review pass after generation that checks: are the required actions present? Does the prohibited-phrase list show zero hits? Do sections open per the cadence description? Those three checks produce measurable output signals you can track across articles. If required actions are missing consistently, the actions are too vague or the worked examples aren’t demonstrating the right moves. If prohibited phrases keep appearing, the list is too short or the pipeline needs a re-injection point at section boundaries rather than a single system-prompt injection.
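A minimal sketch of that review pass, assuming you represent each required action as a regex marker (the patterns stand in for however you detect each move in your own pipeline):

```python
import re


def review_pass(article: str, prohibited: list[str],
                action_patterns: list[str]) -> dict:
    """Post-generation checks: prohibited-phrase hits, missing required actions.

    `action_patterns` are illustrative regexes standing in for whatever
    detection logic each required action actually needs.
    """
    lower = article.lower()
    hits = [phrase for phrase in prohibited if phrase.lower() in lower]
    missing = [pattern for pattern in action_patterns
               if not re.search(pattern, article, re.IGNORECASE)]
    return {
        "prohibited_hits": hits,      # should be empty
        "missing_actions": missing,   # should be empty
        "passed": not hits and not missing,
    }
```

Tracked across articles, the two lists tell you which side of the spec is failing: repeated prohibited hits mean the list is too short or needs re-injection at section boundaries; repeated missing actions mean the actions are too vague or the worked examples demonstrate the wrong moves.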



