Why These Three
These three mistakes (choosing the tool, giving vague prompts, and accepting the first output) are the most consistent patterns across the 750+ professionals I have trained on AI.
They come from different industries, different roles, and different levels of experience. Some had never opened ChatGPT. Others had been using it daily for months.
The same three mistakes show up every time. Not occasionally. Every time. In every workshop, in every team, regardless of the tool or the sector.
These are not advanced problems. They are not about prompt engineering techniques or choosing the right model architecture. They are about how people interact with AI at the most basic level. And they are the reason most people get mediocre results and then conclude that AI "is not that useful."
It is that useful. But not if you keep making these three mistakes.
Choosing the Tool Instead of Using One
The first mistake is spending weeks comparing ChatGPT, Claude, and Gemini instead of opening any one of them and starting with a real task.
"Which model should I use?"
I hear this question in every single workshop. Before people have tried anything, before they have typed a single prompt, they want to know which tool is "the best." ChatGPT or Claude? The free version or paid? What about Gemini? And Copilot?
The question feels reasonable. There are dozens of options, new ones every week, and nobody wants to invest time in the wrong one. So they research. They compare. They read articles. They ask colleagues. And while they are comparing, they are not doing.
This is procrastination disguised as preparation.
The Story
One workshop participant spent months comparing AI tools and never started using any of them.
A participant in one of my workshops got it. She understood the framework, loved the examples, could see exactly how AI would help her work. At the end of the session, she raised her hand.
"This is really clear. But which model should I actually pick?"
She told me she had been stuck on this question for months. There were too many options. She did not know which one was right for her, so she just kept postponing it. She compared plans, read reviews, asked around. Months went by. She had not started.
It is like that task you know you need to do but keep putting off. You push it to Friday afternoon. Friday comes, you are not quite sure how to start, so you think: next week. Before you know it, weeks or months have passed and you still have not done it.
The Fix
Pick one. Any one. Open ChatGPT, Claude, or Gemini, whichever is closest to hand. Type a real question about real work you are doing today. Spend 20 minutes with it.
The difference between the tools matters far less than the difference between using AI and not using it. You can always switch later. But you cannot learn by comparing. You learn by doing.
The best AI tool is the one you actually use. Stop comparing and start a conversation. You will learn more in 20 minutes of use than in 20 hours of research.
Giving AI Half the Picture
The second mistake is writing vague prompts with no context, then blaming the AI when the output is generic.
People type half a question and expect a full answer. They give vague instructions and are surprised when the output is vague. They assume AI "gets it" the way a colleague might, filling in the gaps from shared context and experience.
AI does not get it. AI does exactly what you ask. If you are not clear, it fills the gaps itself, and what it fills in is often wrong. Or generic. Or completely made up.
This is a communication problem, not a technology problem. And honestly, if you are not clear with AI, there is a chance you are not that clear with people either. The difference is that people ask follow-up questions. AI just runs with whatever you gave it.
The Story
A workshop participant gave ChatGPT every detail about his calendar but forgot to list the actual tasks he needed to plan.
A workshop participant wanted AI to create his weekly work schedule. He gave it everything: his available time blocks, when he had meetings, when he could not work because of other commitments, even his sport schedule. Thorough, detailed, well-prepared.
One thing missing: the actual tasks he wanted to plan.
When I pointed this out, he started listing them. But in broad strokes. "Write a report." "Prepare a memo." "Finish the project plan."
That is not how you plan. When you plan your own week, you think about each task in detail. Writing a report: what information do I need? Where do I find it? How accessible is it? Who do I need input from? Are they available this week? All of that matters for your time estimate. You might do these steps unconsciously, but you do them.
AI needs them too. If you say "write a report" without the detail, you get a time block called "write report" with no relation to how long it will actually take.
The Fix
Before you prompt AI, ask yourself: if I were explaining this task to a new colleague on their first day, what would they need to know? Then write that down.
Include:
- What exactly needs to happen (not just the label, but the steps)
- What context matters (audience, constraints, preferences)
- What a good result looks like
- What you specifically want to avoid
More detail in means better output out. Not because AI is stupid, but because AI is literal. It takes you at your word.
Vague prompt: "Help me write a report."
Clear prompt: "I need to write a quarterly report for my team lead. It covers our social media performance in Q1. I have the data in a spreadsheet. The report should be max 2 pages. Focus on what changed compared to last quarter and what we recommend for Q2. Tone: professional but not formal."
AI does not read between the lines. Give it the full picture: the context, the constraints, and what good looks like. Think of it as briefing a new colleague, not chatting with someone who knows your work.
Accepting the First Answer
The third mistake is copying the first AI output without pushing back. One follow-up question consistently produces a better result.
Someone prompts AI. They get an answer. It looks decent. They copy it and move on.
This is like accepting the first draft of anything. It is a starting point, not the finish line. But most people treat it as the final output. They do not know they can push back. They do not realise they can ask AI to be critical of its own work. They take what they get.
And "what they get" on the first try is almost never the best it can do.
The Story
In every workshop, asking "Is this the best answer you could give me?" makes the AI produce a noticeably better version on the spot.
I do this in every workshop. Someone works with AI, gets an output they are happy with. They look satisfied, ready to move on. Then I say:
"Now add this to your prompt: Is this the best answer you could give me?"
They type it. Every single time, the same thing happens. AI comes back with a better version. It points out weaknesses in its own previous answer. It restructures, adds nuance, fixes things nobody asked about. The output is noticeably better.
The look on people's faces when this happens is always the same. Surprise, then a kind of "why did it not just give me this the first time?"
Because you did not ask. AI gives you a good-enough answer by default. If you want the best answer, you have to ask for it. And then keep asking. Challenge the output. Ask what is missing. Ask what could be stronger. Ask if there is a different angle.
The conversation is the product, not the first response.
The Fix
Never stop at the first answer. After every AI output, try one of these:
- "Is this the best answer you could give me?"
- "What would you change if you had to improve this?"
- "What am I not asking that I should be asking?"
- "Play devil's advocate on your own answer."
You do not need to use all of them. Even one follow-up question will get you a significantly better result than copy-pasting the first output.
The first answer is a draft, not the final version. One follow-up question, "Is this the best you can do?", consistently produces better results. Make it a habit.
The Pattern Underneath
All three mistakes share the same root cause: people treat AI like a search engine instead of a conversation partner.
These three mistakes look different, but they share the same root: people treat AI like a search engine. Type something in, get something out, done.
AI is not a search engine. It is a conversation partner. The quality of what you get out depends entirely on what you put in and how you respond to what it gives back.
| Search engine mindset | Conversation mindset |
|---|---|
| Pick the right tool first | Start with any tool and learn by doing |
| Type a query, get an answer | Brief it properly, then iterate |
| Accept the first result | Push back, ask follow-ups, challenge the output |
The shift from search engine thinking to conversation thinking is the single biggest factor in getting better results from AI. Not the model. Not the prompt template. Not the subscription plan. Just: talk to it like it is a capable colleague who needs a proper briefing and honest feedback.
What to Do About It
Fix these three AI mistakes today by picking one tool and starting, adding context to your prompts, and asking one follow-up question after every output.
If you recognise yourself in any of these, here is what to do today:
- If you are stuck on which tool to pick: open whichever AI tool you have access to right now. Give it a real task. Spend 20 minutes. That is your research.
- If your AI output feels generic: look at your last prompt. Count the details. Then add what you would tell a new colleague: the context, the constraints, what good looks like.
- If you are copy-pasting the first answer: after your next AI output, type "Is this the best answer you could give me?" See what happens.
None of these require a new tool, a new subscription, or a course. They require a different way of thinking about what AI is and how you work with it.
That is what I train people on. Not the tools. The thinking.
Based on training 750+ professionals at organisations including HAN University of Applied Sciences, in workshops for teams ranging from marketing departments to management boards. These patterns are consistent across industries, roles, and experience levels.
Frequently Asked Questions
What are the most common mistakes people make when using AI?
After training 750+ professionals, three mistakes show up every time: spending too long choosing which AI tool to use instead of just starting, giving AI vague or incomplete instructions, and accepting the first output without pushing back. All three stem from treating AI like a search engine instead of a conversation partner.
Does it matter which AI tool I use?
The difference between AI tools matters far less than the difference between using AI and not using it. Pick whichever tool you have access to, give it a real task, and spend 20 minutes with it. You will learn more in 20 minutes of use than in 20 hours of comparing tools. You can always switch later.
How do I write better prompts for AI?
Think of it as briefing a new colleague on their first day. Include what exactly needs to happen (not just a label, but the steps), what context matters (audience, constraints, preferences), what a good result looks like, and what you want to avoid. More detail in means better output out, because AI is literal and takes you at your word.
Why does AI give generic or mediocre answers?
AI gives a good-enough answer by default. If your prompt is vague, AI fills in the gaps itself, and what it fills in is often wrong or generic. Two fixes: give more detailed instructions upfront, and never accept the first answer. Ask "Is this the best answer you could give me?" and the output will improve significantly.
Should I accept the first answer AI gives me?
No. The first answer is a draft, not the final version. After every AI output, try asking "Is this the best answer you could give me?" or "What would you change if you had to improve this?" Even one follow-up question consistently produces better results. The conversation is the product, not the first response.