AI-Generated “Workslop” Is Destroying Productivity
By: Kate Niederhoffer, Gabriella Rosen Kellerman, Angela Lee, Alex Liebscher, Kristina Rapuano and Jeffrey T. Hancock
Source: Harvard Business Review | Posted by Datatribes on September 23, 2025
Category: AI in the Workplace - Curated Summary by Data Tribes
Workslop: When Generative AI Creates More Work, Not Less
Despite growing adoption of generative AI tools in the workplace, most organizations are struggling to realize measurable value. A recent MIT Media Lab report revealed that 95% of companies have seen no return on their AI investments. This paradox is explained by a new concept introduced by BetterUp Labs: Workslop.
What Is Workslop?
Workslop refers to AI-generated output that appears polished but lacks depth, accuracy, or context — ultimately shifting the cognitive and execution burden to colleagues. While it masquerades as good work, it creates confusion, inefficiencies, and interpersonal friction.
Common examples include unclear emails, vague summaries, or superficially structured documents that others must revise or clarify.
How Widespread Is the Problem?
- 🧩 40% of surveyed employees reported receiving workslop in the last month.
- 🕒 Average time wasted per incident: 1 hour 56 minutes.
- 💸 Estimated cost per employee: $186/month, or $9M/year in a 10,000-person company.
- 📉 Trust and perception drop: recipients see senders as less capable, intelligent, and creative.
The cost isn’t just in time — it erodes team dynamics. 42% of employees said they trusted their colleagues less after receiving workslop, and 32% were less likely to want to collaborate with them again.
The Workslop Tax
Unlike traditional “cognitive offloading” to machines (like search engines), workslop offloads effort to another human being. AI-generated content that lacks substance forces others to guess intent, clarify facts, and redo tasks — sometimes under tight deadlines or awkward social dynamics.
“It probably took an hour or two just to congregate everyone and repeat the information in a clear and concise way.” — Frontline manager in tech
Many organizations have embraced GenAI broadly but without sufficient guidance. This has led to indiscriminate use, where employees paste AI responses into deliverables without critical review. Often this is driven by pressure to “use AI” without understanding where and how it actually adds value.
Key Principles for Leaders
1. Set Clear AI Usage Guidelines
Organizations need to replace vague AI mandates with structured best practices and contextual recommendations. Encourage purposeful use, not automation for its own sake.
2. Foster the Right Mindsets
Workers with high agency and optimism (“Pilots”) use AI to enhance their creativity and performance. Those lacking these traits (“Passengers”) tend to misuse AI as a shortcut to avoid work.
3. Recommit to Collaboration
AI is not just a productivity tool — it’s part of a new collaborative dynamic. Leaders must emphasize that AI output is not “done work,” but a starting point for shared workflows. Poorly integrated AI outputs can undermine team trust and productivity at scale.
Conclusion: AI Should Amplify, Not Offload Responsibility
Workslop is effortless to produce, but costly to receive. When AI is used carelessly, it becomes a loophole that others must fix. Leaders must model thoughtful, outcome-focused AI use — one that respects team time and maintains high standards for quality.
In 2025 and beyond, the future of collaboration depends not just on AI capability, but on human discernment, shared norms, and clarity of purpose.