Tone Tara is an AI team member that checks and rewrites content in our brand voice. She started as a quick experiment to score 20,000 web pages. Now she is used thousands of times by over 80 editors across the organisation.
The story of how she was built is just as interesting as what she does, because she was not built in an hour. She was built over months, together with the person who needed her most.
Where it started
Tone Tara started because HAN University had 20,000 web pages and no scalable way to check brand voice compliance after a site-wide migration.
Five years ago, our university website (20,000+ pages) had to be migrated to a new brand style. It happened fast. Too fast to check whether all content actually matched the new tone of voice.
The result: hundreds of legacy pages that were never reviewed. One editor was responsible for the brand voice across the entire site. She had to search for pages manually and read each one to check compliance. That is not a job. That is a sentence.
I was curious: could we score all 20,000 pages automatically?
Phase 1: the scoring sheet
The first phase of Tone Tara was an automated scoring system that rated all 20,000 HAN University web pages on brand voice compliance from 0 to 10.
I built a prompt that analysed web pages against our tone of voice guidelines and scored them from 0 (poor) to 10 (perfect) in a Google Sheet.
The entire page inventory, scored in one run. What used to be months of manual reading became a priority list. The editor finally knew where to start.
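The scoring step can be sketched roughly as follows. This is illustrative, not the actual prompt: it assumes each page's text is sent to a language model along with the guidelines, and that the reply contains a single 0-10 number to parse. The guidelines string is a placeholder.

```python
import re

# Placeholder; the real prompt embeds the full tone of voice guidelines.
GUIDELINES = "...your tone of voice guidelines here..."

def build_scoring_prompt(page_text: str) -> str:
    """Assemble the scoring prompt: guidelines plus the page to rate."""
    return (
        "Rate the following web page against these tone of voice guidelines.\n"
        f"Guidelines:\n{GUIDELINES}\n\n"
        f"Page:\n{page_text}\n\n"
        "Reply with a single integer score from 0 (poor) to 10 (perfect)."
    )

def parse_score(model_reply: str) -> int:
    """Extract the first integer in the 0-10 range from the model's reply."""
    match = re.search(r"\b(10|[0-9])\b", model_reply)
    if match is None:
        raise ValueError(f"No score found in reply: {model_reply!r}")
    return int(match.group(1))
```

In the real run, the reply comes from the model and the parsed score lands in a row of the Google Sheet; the sketch only shows the prompt assembly and parsing.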
That solved the diagnostic problem. But it did not help the other 80 editors who write new content every day.
Phase 2: from score sheet to chatbot
In phase 2, Tone Tara evolved from a scoring tool into a chatbot that 80+ editors at HAN University could use to check and rewrite their content in the correct brand voice.
If the scoring prompt worked so well, why not turn it into something every editor could use? A chatbot where they paste their text and get it back in the right tone of voice.
The responsible editor and I started building together. I handled the AI architecture, she provided the data and quality judgment. That split mattered. She knew what good brand voice looked like. I knew how to teach it to the AI.
The iteration that made the difference
Five iterations of the BUILD framework's Debug step transformed Tone Tara from a mediocre rule-follower into a brand voice expert, each time by improving the knowledge, not the technology.
The first version was functional but not good enough. Here is what we did, step by step:
Version 1: We uploaded the tone of voice guidelines as a PDF. The AI could read them but produced mediocre results. It followed the rules loosely, like a new hire who read the manual once.
Version 2: I converted the PDF to a structured markdown file. Better parsing, better output. The AI understood the guidelines more precisely.
Version 3: We added example texts that demonstrated good brand voice. Now the AI had a reference point, not just rules but real examples of what right looks like.
Version 4: We added bad examples with the editor's rewrites. This was the biggest jump in quality. The AI could now see the gap between wrong and right, and how to close it.
Version 5: We added a list of words we do not use, paired with the words we prefer instead. This gave the AI a concrete vocabulary filter on top of the style rules.
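The version 5 word list works like a simple lookup: flag each banned word and suggest its preferred replacement. A minimal sketch, with invented example entries; the real list holds 879 pairs:

```python
import re

# Hypothetical excerpt; the real list pairs 879 outdated words
# with preferred alternatives.
WORD_LIST = {
    "utilise": "use",
    "commence": "start",
    "endeavour": "try",
}

def flag_words(text: str) -> list[tuple[str, str]]:
    """Return (banned word, preferred alternative) pairs found in the text."""
    hits = []
    for old, new in WORD_LIST.items():
        if re.search(rf"\b{re.escape(old)}\b", text, re.IGNORECASE):
            hits.append((old, new))
    return hits
```

In Tara this filter sits on top of the style rules, so a text can pass the tone check and still get flagged for vocabulary.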
The iteration overview
Tone Tara went through five versions, each driven by a specific knowledge improvement that produced a measurable jump in brand voice accuracy.
| Version | What changed | Impact |
|---|---|---|
| v1 | PDF upload of tone of voice guidelines | Functional but mediocre. Followed rules loosely. |
| v2 | Converted PDF to structured markdown | Better parsing, more precise output. |
| v3 | Added good example texts | AI had a reference point, not just rules. |
| v4 | Added bad examples with editor rewrites | Biggest quality jump. AI learned the gap between wrong and right. |
| v5 | Added word ban list with preferred alternatives | Concrete vocabulary filter on top of style rules. |
Each version got noticeably better, not because we changed the technology, but because we gave the AI better knowledge: structured guidelines, good examples, bad examples with corrections, word lists. The AI platform stayed the same.
Going live early, improving continuously
Tone Tara launched before she was perfect, and the real-world feedback from 80+ editors at HAN University made her better faster than any internal testing could.
We did not wait until Tara was perfect. We went live relatively early and improved based on real usage.
Two things made the biggest difference after launch:
Active feedback collection. We asked editors directly: where does Tara get it wrong? What suggestions feel off? Every piece of feedback became a concrete improvement.
Training integration. We brought Tara into the existing tone of voice training sessions. Editors saw her in action, tried her during the training, and gave immediate input on what worked and what did not. This created a feedback loop we could not have manufactured otherwise.
Perfection before launch is a myth. Go live early, collect feedback from real users, and integrate AI into existing training sessions for the fastest improvement cycle.
What Tara looks like today
Tara analyses text against four brand voice dimensions, scores each dimension, gives specific improvement suggestions, and rewrites text in the correct tone. She knows 12 audience types, each with its own emphasis within the brand voice. She has a list of 879 outdated words paired with modern alternatives.
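One way to picture Tara's output is a small structure: a score per dimension, plus suggestions and a rewrite. A sketch only; the dimension names below are invented, since the real four come from the brand guidelines:

```python
from dataclasses import dataclass, field

@dataclass
class ToneAnalysis:
    """One analysis result: a 0-10 score per brand voice dimension,
    plus improvement suggestions and a full rewrite."""
    scores: dict[str, int]                      # e.g. {"clarity": 7, "warmth": 5}
    suggestions: list[str] = field(default_factory=list)
    rewrite: str = ""

    def overall(self) -> float:
        """Average of the dimension scores."""
        return sum(self.scores.values()) / len(self.scores)
```

The per-dimension split matters for editors: a single overall number says "something is off", while four scores say where.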
She has been used thousands of times by over 80 editors. The editor who was once responsible for manually checking every page now manages Tara instead of managing pages.
Why this case matters
The Tone Tara case matters because it proves that AI team members built with the BUILD framework and real domain expertise outperform generic AI tools by a wide margin.
Most people try AI once, get a generic result, and conclude it does not work for their context. Tara proves the opposite pattern:
Start with a real problem. Not "let's try AI" but "one person is manually checking 20,000 pages and that is not sustainable."
Build with the domain expert. The responsible editor was not merely a user of Tara; she was a co-builder. Her knowledge of what good brand voice looks like was the most important input.
Iterate on knowledge, not technology. Every version improvement came from better data: structured guidelines, good examples, bad examples with corrections, word lists. The AI platform stayed the same. The knowledge got sharper.
Go live early, collect feedback. Real usage produces real feedback. Training sessions produced the best feedback of all.
Give it time. Tara was not built in an hour. She was built in months. Some AI team members are quick wins. Others need sustained investment. Both are valid.
The BUILD framework in practice
Tara follows the same BUILD framework as Social Media Maik, but the timescale is different:
- Begin with goal: Score 20,000 pages on brand voice compliance, then help 80+ editors write in the correct tone.
- Unpack skills: How does the responsible editor actually evaluate brand voice? What does she look for? What mistakes does she correct most often?
- Identify knowledge: Tone of voice guidelines, audience descriptions, example texts (good and bad), word lists, spelling rules. Seven knowledge files in total.
- Layout instructions: Structured analysis workflow with scoring per dimension, concrete suggestions, and full rewrites.
- Debug and improve: Five iterations over several months, driven by editor feedback and training session input.
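The "Identify knowledge" step can be sketched as assembling the knowledge files into one system prompt, each file under its own header. The file names here are hypothetical stand-ins for the seven real files:

```python
from pathlib import Path

# Hypothetical names; the real assistant uses seven knowledge files
# (guidelines, audiences, examples, word lists, spelling rules).
KNOWLEDGE_FILES = [
    "tone_of_voice.md",
    "audiences.md",
    "good_examples.md",
    "bad_examples_with_rewrites.md",
    "word_list.md",
]

def assemble_system_prompt(folder: Path) -> str:
    """Concatenate the knowledge files, each under its own header,
    into one system prompt for the assistant."""
    parts = []
    for name in KNOWLEDGE_FILES:
        path = folder / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)
```

Keeping each knowledge source in its own file is what made the iteration loop cheap: swapping in a better example set never touched the rest of the prompt.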
The difference is that Maik's Debug step took a week. Tara's took months. The principle is the same. The investment scales with the complexity of the task.
What results can a team expect from a brand voice AI team member?
Individual editors save 10-15 minutes per text by getting immediate tone of voice feedback instead of waiting for a manual review. At scale, with 80+ editors producing content weekly, that adds up to hundreds of hours per year.
But the real value is not time saved. It is consistency. Before Tara, brand voice compliance depended on one person's capacity. Now it is built into the workflow of every editor, every time they write.
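The "hundreds of hours per year" claim can be checked with rough arithmetic, assuming one text per editor per week (an assumption for illustration, not a figure from the case):

```python
# Rough annual-savings estimate.
editors = 80
texts_per_week = 1             # assumption, not from the case study
minutes_saved_per_text = 12.5  # midpoint of the 10-15 minute range
weeks_per_year = 52

hours_per_year = (
    editors * texts_per_week * weeks_per_year * minutes_saved_per_text / 60
)
# roughly 870 hours per year, i.e. hundreds of hours
```

Even if editors write less often than weekly, the estimate stays comfortably in the hundreds of hours.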
The impact overview
Tone Tara shifted brand voice checking at HAN University from a manual, one-person bottleneck to an instant, self-service tool for 80+ editors.
| Metric | Before Tara | After Tara |
|---|---|---|
| Brand voice checks | 1 editor, manual, page by page | 80+ editors, instant, self-service |
| Time per text review | 15-20 minutes (read, assess, rewrite) | 2-3 minutes (paste, review, done) |
| Coverage | Reactive, one person's capacity | Built into every editor's workflow |
| Knowledge files | 1 PDF, loosely interpreted | 7 structured files, continuously updated |
| Feedback cycle | Editor corrects after the fact | AI corrects during writing |
| Word list | Not enforced | 879 words with preferred alternatives |
| Usage | N/A | Thousands of uses by 80+ editors |
How do I build something like this for my team?
Start with the BUILD framework. The five steps are the same whether you are building a quick social media assistant or a complex brand voice checker. What changes is the depth of each step.
For a project like Tara, plan for iteration. Your first version will be mediocre. That is normal. The quality comes from feeding the AI better knowledge over time, ideally together with the person who knows the domain best.
This case study describes the real results of AI team member "Tone Tara", built by Guus Witjes using the BUILD framework. The system has been used thousands of times by 80+ editors and continues to be improved based on user feedback.