Maximising AI Cognition and AI Value Creation

Scott Farrell · December 4, 2025 · [email protected] · LinkedIn


📘 Want the complete guide?

Learn more: Read the full eBook here →

Mini eBook (PDF): Read the mini eBook here →

TL;DR
  • 70-85% of AI projects fail because companies deploy AI where it’s weakest: live, high-stakes, one-shot interactions
  • AI wins in batch contexts: ticket queues, overnight analysis, and anywhere time flexibility exists (40-60% cost savings vs real-time)
  • The real opportunity is Version 3: not automating current work, but enabling thinking that was previously impossible due to human coordination limits

Your AI Chatbot Probably Failed

Not because the technology is bad. Because you deployed it where AI is weakest.

Consider the economics: 72% of customers say chatbots are a “complete waste of time.” 78% end up escalating to a human anyway. The chatbot didn’t save money—it added friction, then the human still handled the call.

This pattern repeats across industries. A 2025 MIT study found that 95% of corporate AI initiatives show zero return on investment. 42% of companies abandoned most of their AI projects this year—up from just 17% in 2024.

The diagnosis from researchers is consistent: companies force AI into existing processes unchanged, expecting magic. What they get is expensive failure.

But here’s what’s interesting: some companies report $10.30 in value for every dollar invested in AI. What’s the difference?

The difference is where they deploy.


The Latency-Accuracy Trade-Off Nobody Talks About

AI has an asymmetry that most deployment plans ignore.

In a live customer conversation, AI gets one shot to be right. To be reliable, you need:

  • A strong model (expensive, with added latency)
  • Thinking models for complex reasoning (adds seconds to minutes of latency)
  • Rich context (retrieval, CRM, policies)
  • Guardrails (security filters, PII checks)
  • Maybe a second-pass checker

All of that adds latency and cost per interaction. And even then, real-time systems show higher false positive rates due to limited context and the need for snap decisions.

Meanwhile, a human support agent:

  • Already understands the internal tools
  • Clicks around the CRM with tacit knowledge
  • Adjusts in real time when the customer clarifies
  • Recovers gracefully from being slightly wrong

In low-latency, high-stakes, high-ambiguity contexts, humans still win on accuracy, judgment, and social handling.

But flip to the other side of the matrix—where you don’t need a 5-second answer—and the economics reverse completely.


Where AI Actually Wins: A Simple Framework

Think of AI deployment as a 2×2:

|                 | Low Latency (seconds)  | High Latency (minutes/hours/overnight) |
|-----------------|------------------------|----------------------------------------|
| High Error Cost | Human-led, AI-assisted | AI does work, human signs off          |
| Low Error Cost  | AI copilot mode        | Prime AI territory                     |
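Expressed as a routing rule, the 2×2 can be sketched in a few lines. The 60-second threshold is an illustrative assumption, not part of the framework itself:

```python
def deployment_mode(latency_budget_s, error_cost):
    """Map a task onto the latency/error-cost 2x2.
    Thresholds here are illustrative assumptions."""
    realtime = latency_budget_s < 60       # needs an answer in seconds
    high_stakes = error_cost == "high"     # mistakes are expensive to unwind
    if realtime and high_stakes:
        return "human-led, AI-assisted"
    if realtime:
        return "AI copilot mode"
    if high_stakes:
        return "AI does work, human signs off"
    return "prime AI territory"

# Live support call: seconds matter, errors are costly
print(deployment_mode(5, "high"))      # human-led, AI-assisted
# Overnight CRM scoring: hours available, errors cheap to correct
print(deployment_mode(28800, "low"))   # prime AI territory
```

The point of writing it down: project selection becomes a mechanical check on two axes, not a debate about whether "AI is ready."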

The bottom-right quadrant—high latency tolerance, low error cost—is where AI dominates:

  • Ticket queues instead of live chat: AI triages, resolves simple cases, drafts responses for complex ones
  • Overnight batch jobs: transaction analysis, CRM scoring, anomaly detection across every row
  • Report generation: research, summarization, quality checks at scale

“Batch processing typically reduces infrastructure costs by 40-60% compared to real-time systems, with savings increasing at higher volumes.”

— Zen van Riel, AI Engineering

Customers already expect ticket responses in minutes or hours, not seconds. AI has time to read history, cross-check systems, do multi-step reasoning, and escalate when unsure.

That’s a completely different game than “answer instantly or fail.”
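The ticket-queue pattern above can be sketched as an overnight batch pass. The `triage` function is a stand-in for an LLM call, and its rules are placeholder assumptions; a real system would feed the model ticket history, CRM records, and policies, then parse its verdict:

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    id: int
    text: str

def triage(ticket):
    """Placeholder for an LLM classification call."""
    body = ticket.text.lower()
    if "password reset" in body:
        return "resolve", "Sent the self-service reset link."
    if "chargeback" in body or "legal" in body:
        return "escalate", None
    return "draft", "Proposed reply, queued for human review."

def overnight_batch(tickets):
    """No latency pressure: every ticket gets the full-context
    treatment, and anything uncertain waits for a human."""
    queues = {"resolve": [], "draft": [], "escalate": []}
    for t in tickets:
        action, payload = triage(t)
        queues[action].append((t.id, payload))
    return queues

queues = overnight_batch([
    Ticket(1, "I need a password reset"),
    Ticket(2, "Disputing a chargeback"),
    Ticket(3, "Where is my order?"),
])
```

Simple cases close themselves, risky ones escalate, and everything else arrives in the morning as a draft rather than a blank screen.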


The Three Versions of AI Value

Most AI conversations stop at “automation.” That’s only the first version—and often the weakest.

Version 1: Same Work, Fewer People

Classic automation. Replace a human task with AI. This is the most common deployment pattern, and it has the highest failure rate. Why? Because you’re competing with humans at what humans do reasonably well, in contexts optimized for human cognition.

It works sometimes—Klarna’s AI assistant replaced 700 agents and contributed $40 million in annual benefit. But notice: even that “replacement” story is really about ticket handling, not live real-time support. It’s batch-adjacent.

Version 2: 10-100x More Thinking at the Same Problems

This is where the economics start to shine. Instead of one analyst sampling 50 transactions, AI checks every transaction. Instead of triaging 100 tickets per day, AI triages 10,000.

The promise of cheap cognition:

“Once the plumbing is in, marginal cost per extra ‘thinking task’ trends toward cents instead of dollars.”

Companies achieving Version 2 report $3.70-$10.30 return per dollar invested. They’re not replacing people—they’re applying 100x more cognitive work to existing problems.

Version 3: Thinking That Was Previously Impossible

Here’s where it gets interesting. What if cheap cognition doesn’t just accelerate current work—but makes new work rational to attempt?

Consider the typical enterprise strategic project:

  • 10 cross-functional people
  • 3 months of workshops and meetings
  • Optimizing for political acceptability under time pressure
  • Settling for “good enough” because coordination overhead is crushing

This is committee-think. Research confirms the pattern: when time is limited, less knowledge is shared, and decisions become negotiations between prior preferences rather than genuine exploration.

What if instead you ran a hyper sprint?

  • Thousands of AI calls overnight
  • Exploration of multiple frames, scenarios, constraints
  • Audit trail of what was considered, rejected, and why
  • Human experts review in the morning and adjust the search for the next sprint

You’re not asking AI to magically know the answer. You’re asking it to systematically explore more possibilities than humans would ever have time for—like chess engines exploring move trees.
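A minimal sketch of that sprint loop, assuming a placeholder scoring function where a real system would call a model. The frames, scenarios, and constraints are invented examples; the structure to notice is the exhaustive search plus the audit trail:

```python
import itertools

# Hypothetical search space: each combination is one framing the
# overnight sprint explores.
FRAMES = ["cost-led", "growth-led", "risk-led"]
SCENARIOS = ["base case", "downturn", "rapid expansion"]
CONSTRAINTS = ["hiring freeze", "no new vendors", "unconstrained"]

def score(frame, scenario, constraint):
    # Placeholder deterministic score, purely for illustration.
    # In practice this is an LLM evaluation against a rubric
    # that human experts adjust between sprints.
    return len(frame) * 3 + len(scenario) * 2 - len(constraint)

def hyper_sprint(top_k=3):
    audit_trail = []  # every option considered, with its score
    for combo in itertools.product(FRAMES, SCENARIOS, CONSTRAINTS):
        audit_trail.append((score(*combo), combo))
    audit_trail.sort(reverse=True)      # best-scoring framings first
    return audit_trail[:top_k], audit_trail

shortlist, trail = hyper_sprint()
```

The humans never leave the loop: they shape `score`, review the shortlist in the morning, and redirect the next night's search.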

“Humans are terrible at exploring large idea spaces under time and social pressure. AI is good at it, as long as humans shape the scoring and constraints.”


Marketplace of One: When Personalization Becomes Rational

Here’s another Version 3 example.

Historically, we segment customers because it’s too expensive to treat each one individually. Policies, campaigns, support flows—all designed for “average.”

McKinsey estimates that shifting from standardization to personalization represents $1 trillion in value across US industries alone. But managing per-customer complexity was never economically sane.

With AI, you can design per-customer:

  • Offers and pricing
  • Service levels and communication style
  • Risk rules and escalation paths

The cost structure has flipped. Recomputing per-customer costs less than maintaining one-size-fits-none.
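A sketch of what "recomputing per-customer" means in practice. The rules and field names here are invented placeholders; a real pipeline would derive them nightly from each customer's full history:

```python
def per_customer_policy(customer):
    """Recompute the policy per customer instead of per segment."""
    tenure = customer["tenure_years"]
    open_tickets = customer["open_tickets"]
    return {
        # Loyalty-scaled offer instead of one segment-wide discount
        "discount_pct": min(5 + tenure * 2, 15),
        # Service level follows current friction, not account size
        "service_tier": "priority" if open_tickets > 2 else "standard",
        # Communication style follows stated preference
        "tone": "concise" if customer["prefers_brief"] else "detailed",
    }

policy = per_customer_policy(
    {"tenure_years": 6, "open_tickets": 3, "prefers_brief": True}
)
```

Each rule is trivial on its own; what was never economical before AI is keeping thousands of these policies current at once.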

Companies leveraging AI-driven hyper-personalization are seeing 62% higher engagement and 80% better conversion rates compared to traditional approaches.

That’s not automation. That’s a new class of product and service design.


AI as Cognitive Exoskeleton, Not Replacement

There’s a pattern that works across all three versions: AI does the pre-work, humans own the moment.

In live contexts where AI struggles with latency-accuracy trade-offs, flip the model:

  • AI mines the CRM for truly relevant past interactions
  • AI infers what the customer probably cares about
  • AI pulls in knowledge base, policies, similar cases
  • AI surfaces a rich cockpit: context, suggested actions, draft responses, risks

The human is then faster (less clicking), more accurate (better context), and still owns judgment and relationship handling.
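The cockpit idea can be sketched as a single pre-work function. The data shapes are illustrative assumptions, not a real CRM schema; the point is that everything is assembled before the human picks up:

```python
def build_cockpit(customer_id, crm_events, kb_articles):
    """AI pre-work: mine history, pull relevant knowledge,
    flag risks, draft a reply for the human to own."""
    history = [e for e in crm_events if e["customer"] == customer_id]
    topics = {e["topic"] for e in history}
    return {
        "history": history,
        "articles": [a for a in kb_articles if a["topic"] in topics],
        "risk_flags": [e for e in history if e.get("sentiment") == "negative"],
        "draft_reply": f"Thanks for your patience across "
                       f"{len(history)} touchpoints...",
    }

cockpit = build_cockpit(
    42,
    crm_events=[
        {"customer": 42, "topic": "billing", "sentiment": "negative"},
        {"customer": 42, "topic": "onboarding", "sentiment": "neutral"},
        {"customer": 7, "topic": "billing", "sentiment": "neutral"},
    ],
    kb_articles=[{"topic": "billing", "title": "Billing FAQ"}],
)
```

The AI never talks to the customer; it just removes every click and lookup that stood between the human and the answer.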

Medical studies show AI assistance increases diagnostic sensitivity from 72% to 80%—not by replacing doctors, but by augmenting them.

Anthropic’s research found multi-agent systems with human orchestration outperform single-agent systems by 90.2% on complex tasks.

The mental model shift:

“AI answers the customer” (fragile, one-shot, high failure rate)

becomes:

“AI does everything leading up to the moment where the human answers” (robust, augmentative, plays to each party’s strengths)


The Real AI On-Costs

One more thing that separates successful projects from failures: acknowledging the real cost structure.

“$1/hour AI” is fantasy if you ignore the on-costs. The true AI expense list looks suspiciously like hiring a department:

  • Toolchain & infra: retrieval, vector stores, orchestration, observability
  • Ops & monitoring: logs, alerts, dashboards for weirdness
  • Governance & risk: policies, model choices, approvals, audit records
  • Model & prompt maintenance: workflows decay, products change
  • Change management: training staff, updating SOPs

Companies succeeding with AI invest 70% of resources in people and processes—not just technology. They expect 2-4 year ROI timelines (not 7-12 months like typical software).

AI does make cognition cheaper at scale. But only if you treat it like genuine capability with ongoing costs—not a magic feature you tick on in settings.


A New Question for AI Project Selection

Most organizations start with:

“Where can we put a chatbot?”

Better question:

“Where do we waste human thinking time on work that’s slow, repetitive, or queued up?”

Best question:

“What thinking have we never even attempted because the coordination overhead was too high?”

Those are your Version 3 opportunities. And that’s where AI looks less like a gimmick and more like infrastructure.


FAQ

What if our executive team expects a chatbot?

Show them the 72% “waste of time” stat. Then show them the 40-60% cost savings in batch contexts. Reframe the conversation from “chatbot” to “queue brain” and “report brain”—where AI actually delivers ROI.

How do we identify Version 3 opportunities?

Look for problems you’ve never properly tackled because they’d require too many people, too much coordination, or too long a timeline. Strategic planning exercises. Per-customer personalization. Continuous sense-making across all your data. Those are the greenfield areas.

Isn’t this just “use AI strategically”?

The three-version framework makes it concrete. Version 1 fails when you fight the latency-accuracy trade-off. Version 2 wins in batch contexts. Version 3 unlocks work that was never feasible. That’s specific enough to change project selection decisions.

What about hallucinations?

Hallucination is one symptom of a deeper issue. The real risks are systemic wrongness (AI faithfully executing a bad spec 24/7), unobservable decisions (no reasoning trails), and silent drift (behavior shifting unnoticed for months). Batch contexts with human review cycles handle these better than real-time autonomy.


Conclusion

The organizations getting 10x returns from AI aren’t smarter about technology. They’re smarter about deployment.

They’ve stopped asking “where can we automate?” and started asking “what becomes rational when cognition is abundant?”

That shift—from Version 1 thinking to Version 3 thinking—is the difference between AI as expensive experiment and AI as infrastructure.

What’s one project you’ve never attempted because the coordination overhead was too high?

That might be your Version 3.


Scott Farrell is an AI strategist focused on production AI systems. Connect to discuss how this framework applies to your organization.

