
Why AI is Creating “Workslop” — and How to Avoid It

Written by Eliassen Group | Mar 13, 2026 5:21:25 PM

By now, most organizations have accepted that AI can do wonders for productivity, and for good reason: In Eliassen’s 2026 Technology Leadership Pulse Survey, 64% of tech leaders said their investments in AI had already delivered positive returns. Of those, two-thirds said that it had already delivered cost savings and improved both efficiency and speed to market.

But when AI is implemented without the proper training — and when expectations about its use aren’t communicated clearly — it can have negative impacts on an organization. In fact, there’s enormous potential for AI to create additional work, slow productivity, and even decrease morale.

The phenomenon is called “workslop,” and it’s probably happening, at least to some degree, in your organization as we speak.


What is “workslop”?

“Workslop” refers to inaccurate or low-quality AI output that’s uncritically passed along by a human employee.

“Employees are using AI tools to create low-effort, passable-looking work that ends up creating more work for their coworkers,” writes Kate Niederhoffer, psychologist and VP of BetterUp, in a recent article for Harvard Business Review. “The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver.”

Workslop can appear in the form of an email, a block of code, or any number of other deliverables. But what differentiates workslop from valuable work products is, well, the “slop” aspect.

As anyone who’s used AI knows, the first thing an AI generates — at least, without first supplying it with significant training, context, and sophisticated prompting — is often less than usable. When this initial product is passed as-is to bosses or coworkers, they’re often left with the unenviable task of fixing or reworking it to bring it up to acceptable standards.


How Workslop Impacts Productivity, Trust & More

The downstream effects of workslop include frustration, inefficiency, and the erosion of trust among colleagues, but the biggest impact workslop has on the organization comes from the additional and unnecessary work it creates for others. In fact, the authors of the HBR article estimate that, at present, workslop incidents cost employers $186 per month per involved employee.

There are interpersonal costs, as well. As the HBR article noted:

Approximately half of the people we surveyed viewed colleagues who sent workslop as less creative, capable, and reliable than they did before receiving the output. Forty-two percent saw them as less trustworthy, and 37% saw that colleague as less intelligent.

Worse yet is how prevalent workslop has become: Stanford University’s ongoing survey has found that 40% of respondents reported receiving workslop in a single month. Workslop is also surprisingly universal across all levels of seniority: 40% of respondents said it came from peers, 18% said it was sent to managers by direct reports, and 16% said it was sent by managers or leaders to teams and individuals.


How to Avoid Workslop

The first step toward minimizing workslop within your team is accepting that AI tools themselves aren’t responsible for workslop — humans are. After all, they’re the ones knowingly submitting subpar work. It’s also worth noting that workslop, in its various forms, existed long before AI came along.

That’s why any solution to the problem of workslop must address the human factors above all.


Think Carefully Before Issuing AI Mandates

Companies like Microsoft, Shopify, Coinbase, and more have issued internal mandates requiring employees to use generative AI. Some employers have taken it a step further by installing software to track the usage of AI tools, measuring everything from adoption rate to the number of accepted (or rejected) lines of AI-written code. Either of these steps has the potential to dramatically increase the volume of workslop an organization generates, but taken together, they almost guarantee it.

Why? First, mandating the adoption of AI tools without providing clear guidance on why and how they’re meant to be used can leave workers frustrated and confused. Pair that mandate with tools that track (and, in theory, reward) AI usage, and you have a recipe for disaster.

Instead of simply issuing AI-usage mandates, ensure you’ve communicated the following:

  • Why AI use is a priority, including the problems it’s intended to solve
  • How employees are expected to use it
  • Where and when AI usage is appropriate or expected

The same is true for AI-usage tracking. For it to be effective — both for the organization and for the employees using it — the use cases have to be clear. If you’re evaluating a particular AI’s ability to write code in different languages, for example, explain that you’re keeping an eye on the number of accepted versus rejected lines of AI-generated code in each language. Otherwise, you run the risk of encouraging “AI for AI’s sake,” which leads, unsurprisingly, to workslop.


Don’t Just Offer AI Training — Offer AI Usage Modelling

Another vital step toward minimizing workslop is ensuring that employees who are expected to use AI tools know how to do so effectively. Despite the enthusiasm for AI adoption at the top of many organizations, relatively few employees feel as though they’ve received adequate training on how to use it effectively.

A 2026 Ipsos/Google study found that 27% of US workers surveyed said their organizations provided AI tools, while 37% said their organizations provided guidance on using AI. Just 22% said their companies provided both. But without both, employees can’t realistically be expected to use AI to its fullest potential and deliver quality work. This is one of the reasons why companies like Walmart have begun offering free AI training to all employees.

But don’t stop with basic training. Take it a step further by having executives and team leaders model purposeful AI use across a variety of use cases: demonstrating when and where AI can be most impactful, when AI is unlikely to deliver value, and what constitutes quality output.


Make the Cost of Workslop Visible to the Organization

The authors of the HBR article estimate that, for an organization of 10,000 workers, workslop costs more than $9 million a year in lost productivity. Add that to the reputational costs outlined above, and the business case for mitigating workslop is clear — so why not treat it like a business problem?
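As a rough sanity check, the annual figure lines up with the per-employee cost cited earlier, assuming roughly the 40% monthly prevalence reported above. This is our own back-of-envelope illustration, not HBR's exact methodology:

```python
# Back-of-envelope reconstruction of the workslop cost estimate.
# Inputs are the figures cited in this article; combining them this
# way is an assumption for illustration, not HBR's published model.

headcount = 10_000            # size of the hypothetical organization
monthly_prevalence = 0.40     # share of employees receiving workslop in a month
cost_per_affected = 186       # dollars of lost productivity per affected employee per month

annual_cost = headcount * monthly_prevalence * cost_per_affected * 12
print(f"${annual_cost:,.0f} per year")  # roughly $8.9M, in line with the "$9 million" estimate
```

Even under conservative assumptions, the loss lands in the same eight-figure neighborhood, which is why treating workslop as a measurable business problem is reasonable.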

When possible, surface these figures in training sessions, leadership communications, and AI policy rollouts. Make it clear that the organization is aware of workslop’s potential damage to productivity, morale, and the bottom line, and that continued delivery of low-quality, AI-generated deliverables is not the desired use for the company’s AI investments.


Takeaways for Tech Leaders

Workslop didn’t start with AI. Humans have delivered poor-quality work for as long as work has existed, but AI makes doing so easier than ever. To mitigate workslop and its negative effects on the organization, leaders can take the following steps:

  • Before issuing AI-usage mandates, ensure workers have the proper training on how and when to use AI effectively.
  • Encourage leaders to model desired behaviors around AI use.
  • Communicate that low-quality work products are still unacceptable, even if they’re generated by the expensive new AI tool.
  • Ensure that the organization and its employees are aware of the long-term impacts of workslop on productivity and profitability, and that the steps you’re taking to mitigate it can improve their working lives, as well.

Remember, workslop isn’t an AI problem, but a human one. To be successful, any solution to the issue of workslop must focus on addressing human challenges and encouraging the right human behaviors.

To get more expert insights like these on AI, innovation, tech talent, and more, visit our resources page today.