AI Is Anti-Staff by Default - and Staff Are Anti-AI by Default
We need an HR seat on AI governance. Not as a courtesy. As a structural requirement.
TL;DR
- AI governance frameworks cover model risk (bias, explainability, robustness) but ignore workforce impact - the actual failure mode for 80% of AI projects
- AI is structurally anti-staff: it collapses the cost of cognition and triggers extraction reflexes (higher quotas, surveillance, deskilling) unless actively governed
- Staff are rationally anti-AI: 31% admit sabotage, trust in agentic AI dropped 89%, and 64% fear being made less valuable
- The fix: add a third governance column - Workforce Governance - with HR as a permanent seat and a concrete Workforce AI Compact
The Graphic Everyone Shares - and What It’s Missing
There’s a governance graphic that’s been circulating on LinkedIn. You’ve probably seen it. Two columns: Data Governance (trustworthy inputs) on the left. AI Governance (trustworthy outcomes) on the right. It’s clean. It’s sensible. It covers bias, explainability, robustness, accountability.
It’s also structurally incomplete.
The graphic asks “trustworthy outcomes” and answers: for customers, for regulators, for the business. But it never asks: trustworthy outcomes for the staff?
That’s not a minor omission. It’s the gap that kills rollouts.
Because 80% of AI projects fail - twice the rate of non-AI IT projects - and the gap is largely governance and organisational, not technical.[1] Cultural challenges and change management are the primary obstacles to AI adoption, with 91% of data leaders pointing to people issues rather than technology challenges.[2]
The model works fine. The organisation falls apart around it.
AI Is Anti-Staff by Default
This isn’t a moral claim. It’s a mechanical one.
AI collapses the cost of cognition. Tasks that took hours take minutes. Throughput that required a team can be handled by one person with an AI assistant. That’s the pitch. That’s the business case.
But here’s what happens next - not sometimes, reliably:
A UC Berkeley research team followed 200 workers at a tech company for eight months after AI was introduced. They found that workers “worked longer hours at a faster pace on a broader range of tasks.” Not because they were told to. Because the system created implicit pressure. AI didn’t reduce work - it intensified it.[3,4]
This is what I call the extraction reflex. AI lifts throughput, and management (often unconsciously) does what management always does with productivity gains:
- “You can handle more cases now.”
- “We’ll hold headcount flat next cycle.”
- “We’ll measure you more tightly because we can.”
- “We’ll hit the same SLA with fewer people.”
Even when the work involves fewer clicks, it becomes more relentless, more monitored, more standardised, and less autonomous.
The evidence is stacking up
Burnout: 83% of workers report some degree of burnout. The influence of burnout on engagement jumped from 34% to 52% in a single year.[5] Mental fatigue and cognitive strain are now the leading indicators of burnout - surpassing workload volume for the first time.[5]
Surveillance: 74% of U.S. employers use online monitoring tools, and 61% use AI-powered analytics to measure employee productivity or behaviour.[6] 45% of employees in high-surveillance workplaces report stress, compared with 28% in low-surveillance environments.[7] Eight in ten say monitoring erodes trust.[8]
Deskilling: AI coding assistants deliver minimal productivity gains while causing a 17% reduction in skill development.[9] Because AI handles the messy, repetitive tasks that once built judgment, junior employees miss chances to develop it - and organisations risk ending up with managers who’ve never done the underlying work.[10]
Role erosion: AI strips out judgment work, leaving humans with exception handling and blame. The work doesn’t disappear - it concentrates into the hardest, most emotionally draining parts.[11]
“What if you speed up someone’s workflow by 50% or 100%? Are they now expected to work on two or three times as many cases in the same day? What does that do to staff morale?”
This isn’t a side effect. It’s the default trajectory. AI chews up people if you don’t actively govern its impact.
Staff Are Anti-AI by Default
And here’s the symmetry that most governance frameworks miss entirely.
If AI is structurally extractive toward staff, then staff resistance to AI isn’t irrational - it’s the correct response to misaligned incentives.
31% of employees admit to some form of AI sabotage when expectations increase without compensation adjustments.[12]
But sabotage is just the visible tip. The full spectrum of rational resistance includes:
- Low-quality requirements: “Don’t tell them the real workflow”
- Adversarial testing: “Prove it fails”
- Passive non-adoption: “Sure, we tried it”
- Shadow resistance: comply outwardly, route around it quietly
The trust numbers are devastating. Trust in company-provided generative AI fell 31% between May and July 2025. Trust in agentic AI - systems that act independently - dropped 89% in the same period.[13]
Meanwhile, 64% of managers say employees fear AI will make them less valuable, and 58% say employees fear it will cost them their jobs.[14] And 56% of employees say they’ve received no training or only minimal mentorship.[15]
Executives believe their workforce is informed and enthusiastic about AI. They’re wrong. Most employees report confusion, anxiety, and limited involvement in key decisions.[16]
“Most staff are anti-AI anyway. The general consensus from employees is: it’s taking jobs, and there’s no real upside for them.”
That’s not a character flaw. That’s incentives plus fear. And it’s entirely predictable.
Most organisations accidentally send this message: “We’re introducing AI to increase productivity. If it works, we’ll raise targets or cut roles. If it breaks, you’ll wear the blame.” That is a perfect recipe for resistance.
The Governance Gap: We Govern the Model but Not the Humans
Look at any standard AI governance framework. The NIST AI Risk Management Framework defines seven characteristics of trustworthy AI: validity, safety, security, accountability, explainability, privacy, and fairness.[17]
All focused on the model. None address what happens to the people the model replaces, monitors, speeds up, or deskills.
Look at who sits on AI governance committees. Harvard Law research shows 72% of boards involve the CTO/CIO in AI discussions, and 27% involve the CFO.[18] HR? Available “on demand” - not a permanent seat.[19]
This is a structural problem, not an oversight. AI governance emerged from data science and ML ops communities. They think about model behaviour. HR involvement feels “soft” and “political.” So HR gets excluded from the governance table - and workforce impact becomes ungoverned.
What governance covers
- Model bias and fairness
- Explainability
- Data privacy
- Security and robustness
- Regulatory compliance
What governance ignores
- Throughput extraction
- Surveillance creep
- Role erosion and deskilling
- Staff morale and burnout
- Accountability without authority
80% of AI projects fail, and the causes are largely organisational, not technical.[1] Yet governance frameworks pour their rigour into model risk and leave the workforce and organisational risks that actually kill projects ungoverned.
AI governance that ignores the workforce dimension is like a structurally sound bridge pointed at a cliff. The engineering is fine. The outcome is still a disaster.
The Third Column: Workforce Governance
The fix isn’t more change management comms. It’s structural.
If Data Governance covers trustworthy inputs and AI Governance covers trustworthy outcomes, then the missing column is Workforce Governance: trustworthy work.
| | Data Governance | AI Governance | Workforce Governance |
|---|---|---|---|
| End goal | Trustworthy inputs | Trustworthy outcomes | Trustworthy work |
| Protects | Data quality and privacy | Model safety and fairness | Staff dignity and value |
| Primary owner | Data/IT | AI/Risk/Legal | HR + Business leadership |
| Core risks | Breaches, bias in data | Harmful outputs, drift | Extraction, surveillance, deskilling, burnout, sabotage |
This isn’t “being nice to staff.” This is governance - with the same rigour we apply to data quality and model safety. Enforceable policies. Measurable risks. Clear ownership. Defined triggers.
We already apply “Architecture, Not Vibes” to AI systems - we don’t trust AI to behave, we architect containment. The same principle applies to AI’s impact on workers. Structure beats intentions.
The Workforce AI Compact
What does workforce governance look like in practice? Not a values statement. Not a slide deck. A governance instrument - pre-negotiated, enforceable, with clear triggers and stakeholder accountability.
Here are the five policy elements:
1. Throughput Policy
For the first 6-12 months of any AI deployment, productivity gains are used to buy time back, reduce backlogs, improve quality, and fund training - not to ratchet quotas.
The principle: AI should buy time before it buys headcount.
This isn’t permanent protection from change. It’s a governed transition window. EY’s 2025 research shows that among organisations investing in AI and seeing productivity gains, only 17% cut headcount; far more reinvested those gains into AI capabilities, R&D, and upskilling.[20]
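Because the Throughput Policy is a rule with a trigger, it can be written as policy-as-code rather than as a values statement. Here is a minimal sketch in Python; the QuotaChange shape, the 12-month default window, and the function names are illustrative assumptions, not a reference implementation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class QuotaChange:
    """A proposed change to a team's throughput target (illustrative shape)."""
    team: str
    current_target: int        # e.g. cases per person per week, pre-AI baseline
    proposed_target: int
    deployment_start: date     # when AI assistance went live for this team

def months_since(start: date, today: date) -> int:
    """Whole calendar months elapsed between two dates."""
    return (today.year - start.year) * 12 + (today.month - start.month)

def throughput_policy_allows(change: QuotaChange, today: date,
                             window_months: int = 12) -> bool:
    """Compact element 1: inside the transition window, gains buy time
    back - quota ratchets above the pre-AI baseline are blocked."""
    in_window = months_since(change.deployment_start, today) < window_months
    is_ratchet = change.proposed_target > change.current_target
    return not (in_window and is_ratchet)

# A 20% quota increase proposed five months into the window is refused.
change = QuotaChange("claims-ops", 100, 120, deployment_start=date(2026, 1, 1))
assert not throughput_policy_allows(change, today=date(2026, 6, 1))
```

The point isn’t the code; it’s that the policy has a machine-checkable trigger, which is exactly what separates governance from good intentions.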
2. Value-Sharing Policy
If AI lifts productivity, staff share in the benefits: pay, time, flexibility, progression pathways. Not all of it. But a defined share, governed by policy.
Wages are rising twice as fast in the industries most exposed to AI as in those least exposed.[21] The productivity-pay gap has been widening since the 1970s - AI will accelerate this unless governance intervenes, and value-sharing policy is how the intervention gets made.[22]
3. No-Surveillance-Creep Rule
AI can assist work. It can’t quietly become a panopticon without explicit governance approval. Any expansion of monitoring scope, frequency, or granularity requires sign-off from the governance committee - with HR at the table.
Currently 74% of employers monitor, 61% use AI analytics, and 59% of employees report surveillance-induced stress.[6,7] This is ungoverned expansion. The Compact makes it governed.
4. Role Horizon Plan
For every AI deployment: what happens to affected roles at 6, 12, and 24 months? Redeploy, reskill, transition - and what budget backs that promise?
55% of employers who laid off workers for AI now report regretting it.[23] Forrester predicts that half of AI-attributed layoffs will be quietly rehired, often offshore or at lower salaries.[23] A role horizon plan prevents knee-jerk decisions that damage both staff and capability.
5. Accountability Clarity
When something goes wrong, who owns the decision? Not the frontline worker who clicked “approve.” Humans aren’t scapegoats for model errors. Escalation pathways and decision rights are explicit, not implied.
Without this, you get “accountability without authority” - humans blamed for AI mistakes they couldn’t prevent and weren’t empowered to override.
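Taken together, the five elements behave like a pre-deployment gate: a submission either satisfies the Compact or it returns a list of violations. A minimal sketch of that gate, again in Python, with every field name and threshold invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DeploymentPlan:
    """What a team submits before shipping an AI system (illustrative fields)."""
    name: str
    quota_freeze_months: int             # element 1: throughput transition window
    value_sharing_terms: Optional[str]   # element 2: defined share of gains for staff
    expands_monitoring: bool             # element 3: does this widen surveillance?
    monitoring_signoff: bool             #   ...and was that explicitly approved?
    role_horizon: dict = field(default_factory=dict)   # element 4: {6: ..., 12: ..., 24: ...}
    decision_owner: Optional[str] = None # element 5: named owner of AI-assisted decisions

def compact_findings(plan: DeploymentPlan, min_window: int = 6) -> list:
    """Return Workforce AI Compact violations; an empty list means the gate opens."""
    findings = []
    if plan.quota_freeze_months < min_window:
        findings.append("Throughput: transition window below policy minimum")
    if not plan.value_sharing_terms:
        findings.append("Value-sharing: no defined staff share of productivity gains")
    if plan.expands_monitoring and not plan.monitoring_signoff:
        findings.append("Surveillance: monitoring expansion lacks governance sign-off")
    if not all(m in plan.role_horizon for m in (6, 12, 24)):
        findings.append("Role horizon: no 6/12/24-month plan for affected roles")
    if plan.decision_owner is None:
        findings.append("Accountability: no named owner for AI-assisted decisions")
    return findings
```

An empty findings list is a necessary condition for launch, not a sufficient one: the committee still interrogates the substance behind each field.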
HR as the Staff Champion
HR can’t just be “change management comms.” HR needs a formal, permanent seat inside AI governance.
Not because HR understands model architecture - that’s not their job. But because HR owns the systems that AI disrupts: rewards, performance, role design, capability, and trust. HR is the natural steward of workforce governance, the same way Legal stewards regulatory compliance without writing code.
Right now, HR is “on demand” in most governance structures.[19] That’s like having Legal available “when the topic comes up.” By the time they’re called in, the decisions are already made.
What HR’s governance role looks like:
- Pre-deployment: Role impact analysis, throughput policy sign-off, value-sharing terms agreed
- During deployment: Monitoring of morale, burnout, and adoption quality (not just adoption rate), plus escalation paths for workforce concerns
- Post-deployment: Role horizon execution, skill development tracking, Compact compliance review
- Veto power: On deployments that violate the Workforce AI Compact - the same way Finance can veto projects that exceed risk appetite (sketched in code below)
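A hypothetical sketch of how that veto could be wired into the release process, building on the compact_findings gate above; the phase names are assumptions, not a prescribed lifecycle:

```python
from enum import Enum

class Phase(Enum):
    PRE_DEPLOYMENT = "pre"     # role impact analysis, throughput/value-sharing sign-off
    DEPLOYMENT = "during"      # morale, burnout, and adoption-quality monitoring
    POST_DEPLOYMENT = "post"   # role horizon execution, Compact compliance review

def gate_opens(phase: Phase, hr_signed_off: bool, violations: list) -> bool:
    """HR holds a standing veto: no phase advances without HR sign-off,
    and any open Compact violation blocks the gate outright."""
    return hr_signed_off and not violations

# Example: pre-deployment stays blocked while any Compact finding remains open.
# gate_opens(Phase.PRE_DEPLOYMENT, hr_signed_off=True,
#            violations=compact_findings(plan))
```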
This turns staff from “test subjects” into “co-owners.” When employees know there’s a governed Compact with real protections, requirements gathering becomes collaborative instead of adversarial. Staff become your best red-teamers - pointing out failures to harden the system, not to kill it.
The Regulatory Signal Is Already Here
If you think workforce governance is optional, look at what’s already happening in legislation:
- Colorado AI Act (effective Feb 2026): Requires impact assessments, worker notification when AI is used in employment decisions, a right to appeal AI decisions, and public disclosure of AI systems in use.[24]
- California: Prohibits discriminatory automated-decision systems in employment, requires meaningful human oversight by trained reviewers empowered to override, and mandates proactive bias testing and four-year record retention.[25]
- Illinois (effective Jan 2026): All employers must notify employees when AI is used in employment decisions.[26]
- EU: 42.3% of workers are already affected by algorithmic management.[27]
These aren’t change management guidelines. They’re enforceable governance with audit requirements, appeal rights, and penalties. The Workforce AI Compact puts you ahead of the curve - building voluntarily what regulation will eventually require.
The Choice: Govern It or Get Governed
Here’s the bottom line.
85% of enterprises plan agentic AI deployments. Only 21% have governance frameworks ready - and those frameworks don’t cover workforce impact.[28]
You have two options:
Ungoverned (default)
- AI extracts value from staff
- Staff resist and sabotage AI
- Both sides are rational
- Rollouts fail at 80%
- Regulation catches up painfully
Governed (by choice)
- Workforce AI Compact protects staff
- Staff become co-owners, not resisters
- Gains are shared and sustainable
- Adoption succeeds because trust exists
- Ahead of regulation, not behind it
“If you don’t govern the human impact, the humans will govern your rollout for you.”
That’s not ideology. That’s physics.
We’ve got to look after the people in the process. Not because it’s nice. Because AI governance without workforce governance is structurally incomplete - and structurally incomplete governance produces structurally predictable failures.
Add the third column. Give HR a permanent seat. Sign the Compact before you ship the model.
Your AI might be ready. Your governance isn’t - until your people are protected.
References
- [1] RAND Corporation. "Root Causes of Failure for Artificial Intelligence Projects." - "80% of AI projects fail - twice the rate of non-AI IT projects. The gap is largely governance and organisational, not technical." rand.org/pubs/research_reports/RRA2680-1.html
- [2] PwC. "2026 AI Business Predictions." - "Cultural challenges and change management are cited as the primary obstacles to AI adoption, with 91% of data leaders pointing to these issues rather than technology challenges." pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html
- [3] Fortune. "AI is having the opposite effect it was supposed to." Feb 2026. - "Researchers following 200 workers at an American tech company for eight months noted that they worked longer hours at a faster pace on a broader range of tasks once AI was introduced." fortune.com/2026/02/10/ai-future-of-work-white-collar-employees-technology-productivity-burnout-research-uc-berkeley/
- [4] Harvard Business Review. "AI Doesn't Reduce Work - It Intensifies It." Feb 2026. - "While AI makes workers more productive, it could also be burning them out." hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it
- [5] Hunt Scanlon. "Workforce Trends 2026." - "Employee engagement has dropped significantly year-over-year, with just 64% of workers describing themselves as very or extremely engaged - down from 88% in 2025. 83% of workers report some degree of burnout. Burnout's influence on engagement jumped from 34% to 52%." huntscanlon.com/workforce-trends-2026-leaders-confront-burnout-disengagement-and-ai-driven-change/
- [6] High5Test. "Employee Monitoring Statistics in the U.S. (2024-2025)." - "74% of U.S. employers use online monitoring tools, and 61% of US companies use AI-powered analytics to measure employee productivity or behavior." high5test.com/employee-monitoring-statistics/
- [7] WebsitePlanet. "How AI Is Judging You: A Workplace Surveillance Study." - "Employees in high-surveillance workplaces report 45% stress levels compared to 28% in low-surveillance environments. 59% of employees report stress or anxiety caused by workplace surveillance." websiteplanet.com/blog/how-ai-is-judging-you/
- [8] Smithy Soft. "AI Surveillance in the Workplace: 2025 Trends." - "Eight in ten employees report that monitoring erodes trust." smithysoft.com/blog/workforce-power-shift-ai-surveillance-insights-from-mit-technology-review-2025
- [9] AIMultiple Research. "Top 20 Predictions from Experts on AI Job Loss in 2026." - "AI coding assistants deliver minimal productivity gains while causing a 17% reduction in skill development." research.aimultiple.com/ai-job-loss/
- [10] Harvard Business Review. "How Do Workers Develop Good Judgment in the AI Era?" Feb 2026. - "Because AI now handles the messy, repetitive tasks that once built judgment, junior employees miss chances to develop it, and organizations risk ending up with managers who've never done the underlying work." hbr.org/2026/02/how-do-workers-develop-good-judgment-in-the-ai-era
- [11] Sentry Tech Solutions. "The AI Paradox of 2025: AI Exhaustion." - "While AI handles simple tasks, it often shifts human work toward more complex, cognitively demanding responsibilities - going from spending your day on a mix of simple and challenging tasks to handling only the most difficult problems all day." sentrytechsolutions.com/blog/ai-paradox-of-2025-ai-exhaustion
- [12] Built In. "Fix AI Implementation Sabotage." - "31% of employees admit to some form of AI sabotage when expectations increase without compensation adjustment." builtin.com/articles/fix-ai-implementation-sabotage
- [13] Fortune. "AI adoption is accelerating, but confidence is collapsing." Jan 2026. - "Trust in company-provided generative AI fell 31% between May and July of 2025. Trust in agentic AI systems that can act independently dropped 89% during the same period." fortune.com/2026/01/21/ai-workers-toxic-relationship-trust-confidence-collapses-training-manpower-group/
- [14] Beautiful.ai. "2025 AI Workplace Impact Report." - "64% of managers said that their employees fear that AI tools will make them less valuable at work, and 58% said employees fear AI will cost them their jobs." beautiful.ai/blog/2025-ai-workplace-impact-report
- [15] Outsource Accelerator. "AI widens worker confidence gap as burnout persists." - "56% of employees said they have not received any training or minimal mentorship, with only 44% of workers trained within the past six months." news.outsourceaccelerator.com/ai-widens-worker-confidence-gap/
- [16] Harvard Business Review. "Leaders Assume Employees Are Excited About AI. They're Wrong." Nov 2025. - "Executives believe their workforce is informed and enthusiastic about AI, while most employees report confusion, anxiety, and limited involvement in key decisions." hbr.org/2025/11/leaders-assume-employees-are-excited-about-ai-theyre-wrong
- [17] NIST. "AI Risk Management Framework." - "Seven key characteristics of trustworthy AI: validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy enhancement, and fairness with the management of harmful bias." nist.gov/itl/ai-risk-management-framework
- [18] Harvard Law School Forum on Corporate Governance. "Governance of AI: A Critical Imperative for Today's Boards." - "Nearly three-quarters (72%) of boards mentioned the CIO and CTO, over half mentioned the CEO. 27% engage with CFOs." corpgov.law.harvard.edu/2025/05/27/governance-of-ai-a-critical-imperative-for-todays-boards-2/
- [19] Gaming Tech Law. "How to Set Up an AI Committee in Your Company's Governance Framework." - "The head of HR should be available 'on demand' depending on the topic discussed within the AI committee." gamingtechlaw.com/2025/09/how-to-set-up-an-ai-committee-in-your-companys-governance-framework/
- [20] EY. "AI-driven productivity is fueling reinvestment over workforce reductions." Dec 2025. - "Among organizations investing in AI and experiencing productivity gains, only 17% say these gains led to reduced headcount; far more reported reinvesting gains into AI capabilities (47%), R&D (39%), and upskilling employees (38%)." ey.com/en_us/newsroom/2025/12/ai-driven-productivity-is-fueling-reinvestment-over-workforce-reductions
- [21] PwC. "AI linked to a fourfold increase in productivity growth and 56% wage premium." - "Wages are rising twice as quickly in industries most exposed to AI compared to those least exposed." pwc.com/gx/en/news-room/press-releases/2025/ai-linked-to-a-fourfold-increase-in-productivity-growth.html
- [22] World Economic Forum / PwC. "AI could make us more productive, can it also make us better paid?" - "The 'productivity-pay gap' has been widening for decades, and this disparity may only grow further with the spreading use of artificial intelligence." weforum.org/stories/2025/05/productivity-pay-artificial-intelligence/
- [23] HR Executive. "The AI layoff trap: Why half will be quietly rehired." - "55% of employers report regretting laying off workers for AI. Forrester predicts that half of AI-attributed layoffs will be quietly rehired, but offshore or at significantly lower salaries." hrexecutive.com/the-ai-layoff-trap-why-half-will-be-quietly-rehired/
- [24] Lexology. "2026 Overview of AI Use in Employment Decisions." - "Colorado's AI law goes into effect Feb 1, 2026, requiring impact assessments, worker notification, right to appeal AI decisions, and public disclosure." lexology.com/library/detail.aspx?g=bb0a51a8-4a1f-4592-83a2-3de69f22d075
- [25] Consultils. "The Rise of AI Legislation in the U.S. - A 2026 Labor Compliance Guide." - "California requires meaningful human oversight, proactive bias testing, and four-year record retention for any automated-decision system used in employment." consultils.com/post/us-ai-hiring-laws-compliance-guide-2026
- [26] HR Defense Blog. "AI in Hiring: Emerging Legal Developments and Compliance Guidance for 2026." - "Illinois requires all employers to notify employees and applicants when AI is used in employment decisions, effective January 1, 2026." hrdefenseblog.com/2025/11/ai-in-hiring-emerging-legal-developments-and-compliance-guidance-for-2026/
- [27] European Parliament. "Digitalisation, artificial intelligence and algorithmic management." - "42.3% of EU workers are affected by algorithmic management." europarl.europa.eu/RegData/etudes/STUD/2025/774670/EPRS_STU(2025)774670_EN.pdf
- [28] Deloitte. "State of AI in the Enterprise 2026." - "85% of enterprises plan agentic AI deployments, but only 21% have governance frameworks ready." deloitte.com/us/en/about/press-room/state-of-ai-report-2026.html