The Uncomfortable Truth About AI and Effort

Scott Farrell • December 8, 2025 • scott@leverageai.com.au

AI doesn’t remove work. It moves work. And that changes everything.


TL;DR:

  • AI shifts effort from execution to thinking (80/20 becomes 10/90)
  • You’re not the audience expecting magic. You’re the director providing vision.
  • Vague inputs produce vague outputs—that’s your specification failing, not AI

There’s a lie we’ve all been sold about AI.

Not a malicious lie—more of a hopeful exaggeration now colliding with reality. It goes something like this: “AI will do the work for you. Just ask, and it delivers.”

People imagine a smart genie. You mutter something vague, it reads your mind, and hands back exactly what you wanted.

It obviously doesn’t work like that.

What’s actually happening is subtler and more uncomfortable: AI doesn’t remove work. It moves work.

It shifts the effort from doing to thinking. From execution to specification. From hands to brain.


The Smart Genie Myth

Someone asks AI to compare two things: “What’s the best?”

And the AI flounders. Generic, both-sides-have-merits answer that helps no one.

But think about what “best” actually means:

  • The most accurate?
  • The most interesting?
  • The clearest for a technical audience?
  • The most likely to go viral?
  • The most accessible for a beginner?

“Best” has at least ten dimensions. When you don’t specify which ones matter, you’re asking AI to read your mind.

It can’t. And when it guesses, it picks something “generic and safe.”

The model isn’t failing. Your specification is failing.


The Director and the Crew

Better mental model: You’re the director. AI is the crew.

When a film director says “just make it look good,” the cinematographer is paralysed by infinite options. They default to something safe and generic. Stock footage.

But when the director says: “Slow push-in on her face. Natural light from the window. She’s realising something painful but trying not to show it. Stay tight enough that we see the moment her eyes change.”

Now the cinematographer can deliver something powerful.

AI works the same way.

“Under this mental model, LLMs should be thought of as actors; prompts as scripts and cues; and LLM responses as performances. Prompt engineering is playwriting and directing.”

— arXiv: “LLMs as Method Actors”

Research backs this up. When researchers tested a “Method Actors” approach, performance on complex tasks jumped from 27% to 86%.

Same model. Different direction. Radically different results.


The Laziness Mirroring Effect

“If I’m lazy, it’ll be lazy.”

The model doesn’t literally decide to slack off. But the mechanism is real:

  • Vague input → huge space of plausible outputs
  • Huge space → AI picks “generic and safe”
  • Generic and safe → feels useless

It’s reflecting your ambiguity back at you, wrapped in fluent language.

19% — slower task completion for experienced developers using AI tools (METR 2025)

A 2025 study found experienced developers take 19% longer with AI tools—despite expecting a 24% speedup.

AI creates cognitive overhead—verification, correction, interpretation—that most users aren’t prepared for.


The Effort Redistributes

You’re not going from “100 units of work” to “10 units.”

You’re going from:

Before AI: 80% execution, 20% thinking
With AI: 10% execution, 90% thinking

Total effort may be similar. Sometimes higher. But the distribution shifts dramatically.

“77% of workers say AI has either increased their workload or decreased their productivity. Instead of cutting effort, AI stacks a second layer of work on top—reviewing outputs, bridging system limitations, handling exceptions.”

— Upwork Research

Every “AI-assisted” workflow hides invisible human effort:

  1. Verification work — checking outputs are correct
  2. Correction work — editing before use
  3. Interpretive work — deciding what suggestions mean

The Vibe Coding Catastrophe

“Vibe coding”—describing projects in vague terms, accepting code without review—demonstrates the specification problem perfectly.

10x — more security issues in AI-assisted code (Fortune 50 analysis)

Fortune 50 companies found AI-assisted developers produced 3-4x more code but 10 times more security issues.

16 out of 18 CTOs surveyed reported production disasters from AI-generated code.

“Vibe coding’s most dangerous characteristic is code that ‘appears to work perfectly until it catastrophically fails.’”

— Quartz

AI can’t compensate for missing specifications. Skip the thinking—the architecture, constraints, edge cases—and you get demos that fail in production.
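The “appears to work until it catastrophically fails” pattern fits in a few lines. The example below is hypothetical, not taken from any cited study: the first function is the happy-path code vibe coding tends to produce; the second is the same function after the specification work, with constraints and edge cases stated up front and then enforced.

```python
# A minimal sketch of demo-grade vs specified code (illustrative example).
# Hypothetical task: split a restaurant bill evenly between diners, in cents.

def split_bill_naive(total_cents, diners):
    # Passes the demo with friendly inputs...
    # ...but crashes when diners == 0 and silently drops remainder cents.
    return total_cents // diners


def split_bill(total_cents, diners):
    # The specification, made explicit:
    # 1. diners must be a positive integer.
    if diners <= 0:
        raise ValueError("diners must be a positive integer")
    # 2. Every cent is accounted for: the first `remainder`
    #    diners each pay one extra cent.
    share, remainder = divmod(total_cents, diners)
    return [share + 1] * remainder + [share] * (diners - remainder)


# split_bill_naive(1000, 3) returns 333 — three diners pay 999 cents
# and a cent vanishes. split_bill(1000, 3) returns [334, 333, 333].
```

The bug isn’t exotic. It’s exactly the kind of thing that never shows up until real money flows through the code — which is the point: the missing effort was the specification, not the syntax.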


AI as Intention Compiler

Here’s the mental model that makes everything click:

AI is an intention compiler.

A traditional compiler translates code into machine instructions. It can only compile what you write.

AI compiles your intent into outputs. Same fundamental problem: it can only compile what you specify. Vague intent produces “generic and safe” that technically satisfies your request while missing what you actually wanted.

“Generative AI is the most ambitious compiler yet because it translates from the language of thought.”

— Prompt Engineering Research

AI success isn’t about the model. It’s about the clarity of your thinking.


The Bargain

Using AI well requires both parties keeping up their end.

Your end: Clarity. Rigour. Constraints. Taste. Context. Explicit success criteria.

AI’s end: Speed. Breadth. Pattern matching. Generation. Execution at scale.

When you hold up your end—when you do the thinking work, specify format, define audience, articulate constraints—AI becomes a genuine force multiplier.

When you don’t, AI reflects your ambiguity back and you blame the tool.

The pattern holds across every domain:

  • In code: The real work is designing interface, invariants, constraints. AI generates syntax, but can’t choose the right abstraction without specification.
  • In video: The real work is blocking, framing, emotional beats. Without specification, AI invents generic versions.
  • In writing: The real work is audience, purpose, stakes. Without that, you get content, not communication.

Why This Feels Like a Betrayal

People were sold “AI as shortcut.”

What they got:

  • A force multiplier on structured thought
  • A brutally honest mirror for vague thought

The uncomfortable bit: to use AI well, you must already be doing the kind of thinking that good work requires—often more explicitly than when you did it manually.

AI doesn’t let you skip the hard thinking. It forces you to externalise it.


Keep Up Your End

AI will happily generate oceans of possibility.

You pay your side in clarity, rigour, taste, constraints, iteration.

Used that way, it’s not a shortcut to effort—it’s a shortcut to leverage. You don’t escape the work. You change which parts of your brain are doing it.

The uncomfortable truth: you still have to do work when using AI.

The empowering reframe: the work is now thinking work, and AI will amplify every bit of clarity you bring.

Keep up your end of the bargain.


Try this with your next prompt: Before you hit enter, ask: “Have I specified the format, the audience, and the constraints?” If any of those are vague, fix them first. Notice the difference.

Scott Farrell is an AI strategy advisor helping mid-market leadership teams turn scattered AI experiments into governed portfolios that compound. Based in Australia, working with businesses doing $20M–$500M revenue.

