You Learned How to Talk AI. Now Learn to Talk Human. 

Why your team is failing the clarity test you only pass with machines


I've been watching everyone obsess over AI prompts lately. 

"You have to be specific with your instructions!" "The more detailed your prompt, the better the output!" "Don't assume the AI knows what you want!" 

Meanwhile, these same people are giving their human teams instructions like "handle this strategically" and "make sure you communicate well." 

Apparently, we've figured out how to talk to artificial intelligence but we're still terrible at talking to actual intelligence. 

The Irony That's Killing Your Performance

You spend 20 minutes crafting the perfect ChatGPT prompt because you know vague instructions get mediocre results. 

Then you turn to your team member and say "I need this client-ready ASAP" and wonder why they didn't read your mind about what "client-ready" and "ASAP" mean to you. 

You've discovered that clarity works with AI. You just haven't applied that discovery to humans yet. 

The Clarity Crisis in Real Time

Scene 1: The Project Meeting
You: "I need this done ASAP."
Them (thinking): "How ASAP? By end of day? End of week? Before I go home for Christmas?"
Them (out loud): "Sure, no problem."

Three days later you're wondering why it's not done.

Scene 2: The Quality Standards
You: "Make sure this is client-ready."
Them (thinking): "Client-ready like rough draft client-ready? Or client-ready like we're presenting to the board? What does client-ready even mean here?"
Them (out loud): "Absolutely."

You review their work and think they clearly don't understand quality.

Scene 3: The Performance Expectation
You: "I need you to be more proactive."
Them (thinking): "More proactive about what? Should I email you more? Less? Make decisions without asking? Ask more questions?"
Them (out loud): "I'll work on that."

Nothing changes because nobody knows what "more proactive" actually means.

What You've Learned About AI (But Forgot About Humans)

With AI: You give specific, detailed prompts because you want quality output
With humans: You give vague instructions and expect them to guess what you want 

With AI: You iterate and refine your prompts when you don't get the result you want
With humans: You assume they "don't get it" when your unclear instructions produce unclear results 

With AI: You know garbage input = garbage output
With humans: You give garbage instructions and blame them for garbage performance 

What Clarity Actually Sounds Like

Instead of: "I need this done well"  

Try: "I need this done by 3pm Thursday, with the financial analysis double-checked and formatted according to the client template" 

Instead of: "Communicate better with the team" 

Try: "Send a project update email to the team every Friday by 4pm with current status, next week's priorities, and any blockers" 

Instead of: "Be more client-focused"  

Try: "Respond to client emails within 4 hours during business days, and if you can't solve their issue immediately, send an update within 24 hours" 

You wouldn't prompt AI this vaguely. Don't prompt your humans this vaguely either. 

The Human Prompt Engineering You're Missing

You've become a prompt engineer for AI. Time to become a prompt engineer for your team. 

The same principles apply: 

  • Be specific about the desired outcome 

  • Provide context and constraints 

  • Define what success looks like 

  • Give examples when helpful 

  • Test and refine your "prompts" (instructions) 

Your Clarity Challenge

This week, treat your team instructions like AI prompts. Before you ask someone to do something, ask yourself: "Is this specific enough that it would get me good results from ChatGPT?" 

If the answer is no, refine your human prompt. 

You'll probably discover that the "performance issues" you thought you had were actually clarity issues in disguise. 

Because most people aren't failing because they don't care. They're failing because they don't know what success looks like. 

The Bottom Line

You've figured out how to get great results from artificial intelligence through clear, specific instructions. 

Your human intelligence deserves the same courtesy. 

Stop making your team guess. Start making success achievable. 


In Brief (TL;DR)

  • The Problem: Most leaders in professional services businesses have unknowingly created a clarity crisis on their teams. They carry the emotional weight of feeling let down by people who "should know better," and they run the administrative cost of re-dos, missed deadlines, and performance conversations that solve nothing. The frustration is real, but it's misdirected. The team isn't the problem.

  • The Cause: The underlying issue is a double standard that most leaders don't even see in themselves. They've invested real effort into learning how to communicate with AI tools, writing detailed, specific, structured prompts, because they know vague input produces garbage output. But they haven't applied that same discipline to the people they actually pay. The commercial risk of staying in this pattern is significant: you lose good people who can't perform to a standard they were never given, and you keep underperforming teams because nobody's named what success actually looks like.

  • The Solution: The reframe is straightforward. Treat your team instructions the way you treat your AI prompts. Get specific about outcomes, timelines, formats, and what "done" means. This isn't about micromanaging. It's about removing ambiguity as a variable so that performance, not guesswork, is what determines results. Leaders who adopt this discipline stop cycling through "performance issues" that were never performance issues at all. They build teams that execute because they finally know what executing means.

FAQs

  • "My team members have been here for years. Shouldn't they just know what I expect?" Tenure doesn't transfer standards. What feels obvious to you is the product of years of context that your team member doesn't have. "Client-ready" means something specific to you based on your experience, your clients, and your commercial instincts, but unless you've made that definition explicit and consistent, you're testing their ability to read your mind, not their ability to do the job. The longer you assume shared understanding, the longer you carry avoidable re-do cycles that cost you time and the client relationship.

  • "Isn't this level of detail just micromanaging?" Micromanaging is when you stand over someone's shoulder and control how they do the work. Clarity is when you define what the output needs to be and then let them figure out how to get there. The two are opposites. The leaders who are actually micromanaging are usually the ones who didn't set clear expectations upfront and are now correcting on the fly because the result wasn't what they imagined. Set the standard clearly once, and you spend less time managing, not more.

  • "What does unclear communication actually cost the business?" Every vague instruction is a rework risk. In a professional services business where your product is delivered by people, rework costs you margin, client confidence, and team morale at the same time. As you scale from 10 to 40 staff, the ambiguity doesn't stay contained. It compounds. One unclear standard at the top produces dozens of misaligned interpretations at the delivery level, and that's when client relationships start to crack in ways that are hard to trace back to the source.

  • "Can't I just handle this in performance reviews?" Performance reviews can't fix a clarity problem that's been running for six months. By the time someone arrives at a formal review with a pattern of missed expectations, the commercial damage is already done and the employee has likely checked out. Clarity is a preventive discipline, not a remedial one. The review becomes useful when it's assessing performance against a standard that was clearly communicated at the start, not when it's the first time anyone's defining what good actually looks like.

  • "How do I introduce this without implying my team has been failing?" Frame it as a systems upgrade, not a personal correction. You're building more rigorous operating standards because the business is at a stage where informal communication doesn't scale. That's a commercial reality, not an admission of fault. Most teams respond well to leaders who say "we're tightening how we work" with genuine conviction, especially when the result is that people finally know what success looks like in their role. It's harder to argue against clarity than it is to argue against criticism.

Next

Do You Tell People When They Have Spinach in Their Teeth?