Most organisations take a top-down approach to AI adoption: choose a tool, train everyone, track usage. This rarely succeeds. I’m giving a talk on this at the Engineering Managers Meetup on April 29th, and I wanted to share the core ideas here first.

The Workslop Problem

Researchers Jeff Hancock (Stanford Social Media Lab) and Kate Niederhoffer (BetterUp Labs) coined the term “workslop” — low-effort, low-quality AI-generated work that appears to fulfil a task but lacks the substance to actually do so. The numbers are striking:

  • 53% of workers admit to sending workslop.

  • It costs an estimated $9 million/year for a 10,000-employee company (roughly $900 per employee per year).

  • Recipients judge senders as less creative, less capable, and less trustworthy.

The recipe is simple: a general AI mandate (“you must use AI”) + expectations to produce more = workslop, not innovation.

What Actually Works: Grassroots Adoption

In every engineering team I’ve seen, one or two people are already deeply immersed in AI tools. They’re your grassroots experts. They don’t just use AI — they’ve built intuition for when it helps and when it doesn’t. Their colleagues are learning from them — not in a training session, but in the flow of daily work. This is peer-to-peer learning, and it’s more effective than any formal program because it’s:

  • Contextual — the expert works on the same codebase, the same problems.

  • Trusted — they’re a peer, not a vendor or a manager.

  • Continuous — it happens in daily standups, code reviews, and Slack threads.

  • Pull-based — people learn when they’re ready, not when HR schedules it.

The Stanford research calls this the pilot mindset — ownership of what you create with AI, knowing how to edit it, and having discernment about quality. It can’t be trained in a classroom. It develops through practice and watching someone who already has it.

The Leader’s Job: Amplifier, Not Controller

If you ignore the grassroots movement, it stays in isolated pockets. If you over-manage it, it dies. The art is in amplification. What to do:

  1. Identify your champions — they’re already visible. Look for who gets asked AI questions.

  2. Don’t formalise their role — “AI Champion” with a job description kills the organic dynamic.

  3. Model the behaviour yourself — use AI visibly, share your failures openly.

  4. Create one ritual — a bi-weekly session where anyone can share an AI experiment. Failures are celebrated.

  5. Set guardrails, not rules — “don’t put customer data in unvetted services” is a guardrail. “You must use Copilot” is a rule. (A guardrail can even be automated; see the sketch after this list.)
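
To make the guardrail idea concrete, here’s a minimal sketch of one automated as a pre-commit-style check. The patterns and the `scan` helper are hypothetical placeholders for illustration, not a vetted data-loss-prevention tool:

```python
import re
import sys
from pathlib import Path

# Hypothetical patterns for obvious customer data. A real deployment would
# use a vetted DLP tool and patterns agreed with your security team.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "customer_id": re.compile(r"\bCUST-\d{6}\b"),  # made-up internal ID format
}

def scan(path: Path) -> list[str]:
    """Return one finding per suspicious match in a file."""
    text = path.read_text(errors="ignore")
    return [
        f"{path}: possible {name}: {m.group(0)}"
        for name, pattern in PATTERNS.items()
        for m in pattern.finditer(text)
    ]

if __name__ == "__main__":
    # Files are passed in by the hook, e.g. staged prompt logs.
    findings = [f for arg in sys.argv[1:] for f in scan(Path(arg))]
    print("\n".join(findings))
    sys.exit(1 if findings else 0)  # a non-zero exit blocks the commit
```

Note what it doesn’t do: it never asks which AI tool produced the file. The guardrail constrains what leaves the building, not how people work.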

What NOT to do:

  • Mandate AI use — it produces workslop. Instead: set quality standards.

  • Mandate one tool — people adopt what fits their work. Instead: set security guardrails.

  • Create a training program — it doesn’t transfer to daily work. Instead: enable peer learning.

  • Measure adoption rate — Goodhart’s law means people will game it. Instead: use established engineering metrics (DORA, SPACE, DevEx) to see whether AI actually delivers value (sketch below).
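
To make “established metrics” concrete, here’s a minimal sketch of computing two DORA metrics (deployment frequency and change failure rate) from a list of deploy events. The `Deploy` record and `dora_summary` function are illustrative assumptions, not a standard API; real teams would pull this data from their CI/CD system:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Deploy:
    """Hypothetical deploy record, pulled from your CI/CD system in practice."""
    day: date
    caused_incident: bool

def dora_summary(deploys: list[Deploy], period_days: int) -> dict[str, float]:
    """Deployment frequency and change failure rate over a reporting period."""
    if not deploys:
        return {"deploys_per_day": 0.0, "change_failure_rate": 0.0}
    failures = sum(d.caused_incident for d in deploys)
    return {
        "deploys_per_day": len(deploys) / period_days,
        "change_failure_rate": failures / len(deploys),
    }

# Compare the same numbers before and after AI tooling lands.
deploys = [
    Deploy(date(2026, 4, 1), caused_incident=False),
    Deploy(date(2026, 4, 2), caused_incident=True),
    Deploy(date(2026, 4, 4), caused_incident=False),
]
print(dora_summary(deploys, period_days=7))
# {'deploys_per_day': 0.4285..., 'change_failure_rate': 0.3333...}
```

If these numbers don’t improve once AI tools land, adoption rates are beside the point.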

The Trust Connection

The single biggest finding from the research: trust is the strongest predictor of reduced workslop. Teams with high psychological safety produce less slop — because people feel safe asking “is this actually good enough?”

Trust is built peer-to-peer, supported from the top down. This is the fundamental reason grassroots adoption works.

The Bottom Line

Leaders: your job isn’t to be the expert. Make experts visible, connected, and empowered.

The grassroots are already growing. Water them.

I’m speaking about this at the Engineering Managers Meetup on April 29, 2026, in Copenhagen. If you’re an engineering leader dealing with AI adoption, I’d love to hear your experience.

Key references:

  • MIT Sloan Management Review: “Why Digital Dexterity Is Key to Transformation”

  • HBR IdeaCast: “The Hidden Causes of AI Workslop — and How to Fix Them” (Hancock & Niederhoffer, 2026)