
The DRAFT Framework: How to Get Your Team From AI Experiments to AI Workflows

Most teams are stuck in the same loop. Someone on the team uses ChatGPT. Maybe two people. They write emails with it, brainstorm ideas, summarise meeting notes. It works. Sort of.

But it stays personal. Nobody shares what they do. There is no system, no shared approach. And when someone asks "how are we using AI as a team?" the answer is a shrug. Sound familiar?

Why Most Teams Get Stuck (And What to Do Instead)

Most teams get stuck because their AI usage stays individual, never reaching shared workflows. The fix is a structured approach that moves from personal experiments to team-wide adoption.

The Random Experiments Problem

The core issue is AI happening in silos: people try tools on their own but never combine what they learn.

Here is what happens in most teams. A few people try AI on their own. They get decent results for individual tasks. Some of them even build their own solutions: custom prompts, personal templates, little automations. But it stays individual. Each person figures out their own way, based on their own knowledge. Nobody combines it. No shared workflows, no team-level approach, no compounding value.

The problem is not that nothing is happening. Plenty is happening. The problem is that it is happening in silos. One person's solution does not reach the rest of the team. And when you only capture one person's knowledge instead of the team's collective expertise, you get a narrow solution that works for them but not for everyone.

Research backs this up:

  Individual AI use: 40% faster task completion (MIT / Science)
  Team workflows: 30-50% efficiency gains (BCG, study of 10,600 workers)

The difference is not the tool. It is the approach.

The Missing Piece: From Individual Experiments to Team Workflows

The missing piece is a framework that covers the full journey from strategy to team adoption, not just prompting.

Most AI training stops at "here's how to write a good prompt." That is useful, but it is step three of a five-step process. What comes before and after matters more.

You need a framework that starts with strategy, ends with team adoption, and treats the messy middle (building, testing, getting honest feedback) as the core work. That is what DRAFT does.

Introducing the DRAFT Framework

DRAFT is a five-step framework for teams that want to move from scattered AI experiments to shared AI workflows.

The first two steps are strategy. The last three are execution. Most frameworks focus on one or the other. DRAFT covers both, because a strategy without execution is a slide deck, and execution without strategy is a random experiment.

Strategy:
  D. Discover: Where are we?
  R. Rank: What matters?

Execution:
  A. Apply: Build it
  F. Feedback: Test with the team
  T. Teamify: Make it stick

Step 1: Discover. Audit How Your Team Works Today

Discover is the DRAFT framework's first step: mapping how your team actually works to identify 10-15 concrete AI opportunities.

Goal: a list of 10-15 concrete AI opportunities, based on how your team actually works today.

Before you open any AI tool, you need to understand how your team actually works. Not how the process document says it works. How it really works, day to day.

Pay special attention to the desire paths: the shortcuts, workarounds and informal habits people have developed outside the documented process. Someone who copies data into a personal spreadsheet before entering it into the system. Someone who rewrites the template every time because it does not fit their situation. These informal paths often point to the biggest AI opportunities, because they show you where the official process does not match reality.

Map the Current State

Mapping the current state means asking five specific questions to the whole team, not just the manager, to surface real AI usage and hidden bottlenecks.

Start with a handful of specific questions about how the work actually gets done, and ask them to the whole team, not just the manager.

Why Detail Matters

Detail matters because AI opportunities hide in the sub-steps of tasks, not in the task description itself.

When someone says "I write blog posts," that is not enough. Keep asking. What comes first? A title. Then what? A catchy opening line. Then the first paragraph, which needs to include the keyword. Then the subheadings, which follow a specific structure. Every task has sub-steps that people do on autopilot, and those sub-steps are where the real AI opportunities hide.

Do not settle for "I write content." Get to "I spend 15 minutes writing alt text for six images, and each one needs to include the product name and a description of what is shown." The smaller and more concrete the step, the easier it is to test with AI.

There is a reason to push for this level of detail. Most people cannot see where AI fits into their work, because they do not know what AI can do. They will not say "this step could be automated" if they have never seen AI handle that type of task. That is your job in Discover: to map their work at a level of detail where you, with your AI knowledge, can spot the opportunities they cannot.

Start Small, Expand Later

The best DRAFT Discover results come from starting with a single, specific sub-task rather than an entire process.

Start small. Do not try to automate "writing blog posts." Start with "generating five title options from a brief" or "writing alt text that matches the brand guidelines." One small, detailed step. Get that working. Then expand from there. The best AI workflows are built bottom-up, not top-down.

What You Are Looking For

In Discover, you are mapping three things: current AI usage, workflow bottlenecks, and the gap between ambition and ability.

You are mapping three things:

  1. Current AI usage: who uses what, how, and how well
  2. Workflow bottlenecks: where time disappears into repetitive or manual work
  3. The gap: what the team wants to do with AI but cannot figure out yet

In a hackathon, this takes 20-30 minutes with post-its on a wall. In a team training, you can spend half a day here, going deep into two or three workflows. In an advisory setting, this is your intake session.

Do not rush this step. Every minute you spend here saves an hour in the Apply step.

Key takeaway

Map how your team actually works, not how the process document says it works. The desire paths reveal the biggest AI opportunities.


Step 2: Rank. Pick the Right Problem

Rank is the DRAFT step where you score your Discover list on four criteria and pick the single highest-impact workflow to build first.

Goal: one specific workflow to focus on first, chosen because it has the highest impact for the least effort.

After Discover, you will have a list of 10-15 potential AI applications. You cannot do all of them. And you should not try. The single biggest mistake teams make is starting with the most exciting idea instead of the most impactful one.

The Prioritisation Criteria

Score each opportunity on four dimensions:

  1. Impact: how much time or quality does this save per week, across the team? Not per person, per team.
  2. Feasibility: can you build this with the tools and skills your team has today? Start with what you can build this week.
  3. Frequency: how often does this happen? Daily tasks compound faster than monthly ones.
  4. Standardisability: does this follow a pattern? Clear input, repeatable process, defined output.
The Decision

The decision in Rank is to commit to exactly one workflow, not three.

Pick one. Seriously, one. Not three. The goal is a working result, not a roadmap. You will get to the others later.

The best first project usually has a familiar profile: it happens frequently, everyone on the team touches it, the input and output are clear, and it is easy to standardise.

Strategy, Not Tools

DRAFT intentionally delays tool selection until after strategy is done. The first two steps, Discover and Rank, are entirely tool-agnostic.

Notice that we have not mentioned a single tool yet. No ChatGPT, no Claude, no Gemini. That is intentional. The tool comes after the problem. Always.

Key takeaway

Pick one workflow. Score it on impact, feasibility, frequency, and standardisability. The tool comes after the problem, not before.


Step 3: Apply. Build Something That Works

Apply is where you build a working AI solution in one week or less, using real work and the 3C approach (Character, Context, Clarity).

Goal: a working first version, tested with real work, ready for the team to try.

Now you build. But not a perfect system. A working first version that you test with real work.

The One-Week Rule

If your first version takes longer than one week to build and test, you are overbuilding. Break it into smaller pieces. The goal is not a polished product. It is a working draft (the name is not accidental) that proves the concept.

How to Build It

Building an AI workflow in the Apply step follows four stages: collect knowledge, write instructions, pick the tool, and test with real work.

At its core, AI works with data. The quality of what comes out depends entirely on the quality of what you put in. Most people skip this part and wonder why the output sounds generic. The real work in Apply is not picking a tool or writing a prompt. It is collecting and structuring the knowledge that makes the tool useful.

Step 1: Collect the knowledge. This is 80% of the work. Gather everything the AI needs to do this task well. Think of it as briefing a new team member who is talented but knows nothing about your organisation. What would they need to read before they could do this job?

Typical knowledge documents include style guides, brand or tone-of-voice guidelines, channel or format specifications, process descriptions, and a handful of example outputs.

Do not try to write all of this from scratch. Most of it already exists somewhere: in style guides, in process documents, in the heads of your team members.
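
If that knowledge already lives in separate documents, collecting it can start as something mechanical: pull the existing files into one briefing and edit from there. A minimal sketch; the file names are hypothetical and stand in for whatever your own style guides and process docs are called:

    # Gather existing knowledge documents into a single briefing for the AI.
    # File names are hypothetical placeholders for your own documents.
    from pathlib import Path

    knowledge_files = [
        "brand_voice_guidelines.md",
        "channel_specifications.md",
        "blog_workflow_process.md",
    ]

    briefing = "\n\n---\n\n".join(
        Path(name).read_text(encoding="utf-8") for name in knowledge_files
    )
    Path("project_knowledge.md").write_text(briefing, encoding="utf-8")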

Write Instructions and Pick the Tool

Once you have the knowledge, structuring it into clear AI instructions is the second stage of Apply.

Step 2: Write the instructions. Tell the AI who it is, what context it operates in, and what you want it to do. Be specific. Use the 3C approach:

  1. Character: what role should the AI play?
  2. Context: what does the AI need to know?
  3. Clarity: what exactly should it do, and how?
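
To make the three Cs concrete, here is a minimal sketch of an instruction assembled from them, using the alt-text task from the Discover example. The exact wording is illustrative, not a prescribed prompt:

    # A minimal instruction built from the 3C parts (illustrative wording).
    character = "You are a content editor for our marketing team."
    context = (
        "Each product page has six images. Alt text must include the product name "
        "and describe what is shown, following our brand guidelines."
    )
    clarity = (
        "Write alt text for each image described below. "
        "One sentence per image, maximum 125 characters, no marketing superlatives."
    )

    instructions = f"{character}\n\n{context}\n\n{clarity}"
    print(instructions)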

Step 3: Pick your tool. Now, and only now, choose the tool that fits. Match the tool to the complexity of the task, not the other way around.

Step 4: Test with real work. Not demo data. Not example scenarios. Actual work from this week. Run three to five real tasks through it. Compare the output to what you would have produced manually.
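
If the tool you picked is reachable through an API, the testing week can even be scripted: run a few of this week's real tasks through it and keep each AI draft next to the version you would have written by hand. A minimal sketch using the Anthropic Python SDK; the model name and file paths are placeholders, and a chat interface or a Claude/ChatGPT project works just as well for this step:

    # Run real tasks through the draft workflow and save outputs for side-by-side review.
    # Assumes ANTHROPIC_API_KEY is set; model name and file paths are placeholders.
    from pathlib import Path
    import anthropic

    client = anthropic.Anthropic()
    instructions = Path("project_knowledge.md").read_text(encoding="utf-8")

    for task_file in ["task_monday.txt", "task_tuesday.txt", "task_wednesday.txt"]:
        task = Path(task_file).read_text(encoding="utf-8")
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder: use a current model
            max_tokens=1024,
            system=instructions,
            messages=[{"role": "user", "content": task}],
        )
        # Save the AI draft next to the manual version for comparison.
        Path(task_file.replace(".txt", "_ai_draft.txt")).write_text(
            response.content[0].text, encoding="utf-8"
        )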

Common Pitfalls

Building in isolation. If one person builds it alone, the rest of the team will not use it. Build it together, or at least build it with one other person.

Chasing perfection. An 80% solution that the team uses is worth more than a 100% solution that sits in someone's bookmarks. Ship it rough, improve it later.

Skipping the instructions. A prompt without context is a coin flip. Invest time in writing clear, specific instructions.

Key takeaway

80% of the work is collecting knowledge, not writing prompts. Build it in one week or less. Test with real work, not demo data.


Step 4: Feedback. Test It With the Team, Honestly

Feedback is the DRAFT step that separates "works for me" from "works for the team." After one week of use, you gather substance, style, and usage feedback from every team member.

Goal: honest input from the team on what works, what does not, and a clear list of adjustments to make.

This is where most frameworks stop: you built something, it works, done. But "it works for me" is not the same as "it works for the team." Feedback is the reality check.

Three Types of Feedback

Substance feedback: Does the AI actually solve the problem? Is the output quality good enough?

Style feedback: Is the format useful? Can people work with the output directly, or do they spend 20 minutes reformatting it every time?

Usage feedback: Are people actually using it? If they built it together but nobody touches it after week one, something is off.

How to Gather Feedback

Keep it simple. Do not build a survey. Ask three questions after one week:

  1. Did you use it? How many times?
  2. What worked?
  3. What did not?

Then adjust. One round of feedback and iteration usually gets you from "this is okay" to "this is actually useful."

The Honest Conversation

The honest conversation means accepting negative feedback as data, not as an attack on the solution you built.

Key takeaway

Ask three questions after one week: did you use it, what worked, what did not. Then adjust. "It works for me" is not the same as "it works for the team."


Step 5: Teamify. Make It the New Way of Working

Teamify is the DRAFT step that turns a working AI solution into the team's default way of working, through documentation, ownership, and a clear signal from the team lead.

Goal: a documented, owned workflow that the whole team uses as their default, with a feedback loop to keep improving it.

This is the step nobody else has. And it is the most important one.

Building an AI solution is easy. Making it stick with the whole team is hard. Teamify is the difference between "I built a cool thing" and "this is how we work now."

Document It

Write down how it works. Not a 20-page manual, just a one-pager.

Assign Ownership

Someone on the team owns this workflow. Not "the team." A person. They keep the prompt updated, gather feedback, and improve it over time. Without an owner, it decays within a month.

Make It Visible

Put it where people already work. If your team lives in Slack, pin it in Slack. If you use Notion, put it in the team wiki. Do not create a new destination. Meet people where they are.

Set the Expectation

The manager (or team lead) needs to say: "This is how we do this now." Not as a mandate, but as a clear signal. When the old way and the new way coexist without a clear preference, people default to what they already know. Make the new way the default.

Keep Sharing Wins

Sharing wins after Teamify is what sustains adoption. Every time-saving or quality improvement shared with the team reinforces the new workflow.

Adoption does not happen at launch. It happens in the weeks after, when people see results. Every time someone saves an hour, produces better output, or finds a new use case, share it with the team. This is not bragging. It is evidence.

Plan the Next One

After one DRAFT cycle completes, go back to your Rank list for the next workflow. Each cycle gets faster as the team learns the process.

Once one workflow is Teamified, go back to your Rank list and pick the next one. The second time is faster, because the team now knows how the process works. By the third or fourth cycle, the team starts spotting opportunities on their own.

Key takeaway

Document it, assign an owner, and make the new way the default. Without a clear signal from the team lead, people will fall back to what they already know.

Real-World Example: AI-Powered Content Workflow

This DRAFT case study shows how a five-person digital marketing team cut content reformatting time from 8 hours to 2 hours per week.

The Challenge

The challenge was a team spending 15 hours per week on content with inconsistent quality and no shared AI approach.

A digital marketing team of five produces weekly content: social media posts, blog summaries, and internal updates. Each person creates their own content independently. Total time: about 15 hours per week across the team. Quality is inconsistent. Nobody shares prompts.

Applying DRAFT

Discover: The team mapped their content workflow step by step. They found that 60% of the time went to three tasks: writing first drafts, reformatting content per channel, and writing alt text and meta descriptions.

Rank: "Reformatting content per channel" won: high frequency (daily), everyone does it, clear input/output, easy to standardise.

Apply: They built a Claude project with their brand voice guidelines, channel specifications, and three example posts per channel. Input: one core message. Output: versions for LinkedIn, Instagram, and the internal newsletter. Took four days to build and test.

Feedback: After one week, four out of five people used it. The LinkedIn output was solid. Instagram captions needed work (too formal). They adjusted the Instagram instructions with more specific tone guidance.

Teamify: They documented the workflow in their shared Notion, assigned one person as owner, and the team lead made it the default process for all channel-specific content.

The Results

Time spent on channel reformatting: 8 hours per week before, 2 hours per week after.

Plus: consistent tone across all channels, and the team started their second DRAFT cycle three weeks later.

Using DRAFT in Different Settings

The DRAFT framework scales from a 1-hour lightning session to a multi-month advisory engagement. The five steps stay the same; only the depth changes.

  Lightning Session (1 hour): taste the full cycle once. Everyone leaves having seen it work.
  Hackathon (3-4 hours): working prototype and a rollout plan.
  Team Training (1-2 days): deep workflow audit, tested tool, ready for Monday.
  Ongoing Advisory (weeks to months): multiple DRAFT cycles, compounding results.

What It Comes Down To

The DRAFT framework comes down to five questions, one per step: Discover, Rank, Apply, Feedback, Teamify.

The whole framework fits in five questions. One per step.

  1. How does your team actually work with AI today? (Discover)
  2. Where is the biggest win for the least effort? (Rank)
  3. Can you build a working version this week? (Apply)
  4. Does the team actually use it? (Feedback)
  5. How does this become the way you work? (Teamify)

If you can answer all five, you have a working AI workflow. Not a slide deck. Not a pilot that fades after two weeks. A real change in how your team operates.

AI tools are available to everyone. Your competitor uses the same ChatGPT, the same Claude, the same Gemini. The tools are not the advantage.

The advantage is how your team uses them. Together. Systematically. With shared workflows that compound over time. One person saving 30 minutes a day is nice. A team of eight saving 30 minutes each, on the same workflow, with consistent quality: that is a different game.

That is what "build your AI team" actually means. Not buying AI. Building a team that knows how to use it, together.

Frequently asked questions

How long does it take to run a DRAFT cycle?

It depends on the format. A lightning session takes 1 hour, a hackathon 3-4 hours, a team training 1-2 days, and an ongoing advisory engagement 4-6 weeks. The framework scales to the time you have.

What is the biggest mistake teams make with AI adoption?

Starting with the tool instead of the problem. Most teams pick an AI tool first and then look for problems to solve with it. DRAFT starts with understanding how your team actually works (Discover) and where the biggest opportunity is (Rank) before touching any tool.

Can I use the DRAFT framework for my team even if most people have never used AI?

Yes. DRAFT is designed for teams at any stage. If your team has never used AI together, start with a half-day Discover session. Map your workflows, identify one concrete opportunity, and build it together. The framework walks you through every step.

Why do AI tools stop being used after a few weeks?

Usually one of three reasons: the output was not good enough (fix the instructions and knowledge), it did not fit existing workflows (fix the integration), or there was no clear ownership (fix with the Teamify step). Most frameworks skip the Teamify step entirely, which is why adoption fades.

The DRAFT framework was developed by Guus Witjes at Step Ahead AI, based on training teams at HAN University and advising organisations on practical AI adoption.

Want to run DRAFT with your team? From a one-hour lightning session to a multi-day training, DRAFT works at every scale. Get in touch.