Workslop
If you want the seven-word version of this blog post, it's: *you are responsible for what you ship*. And yet I'm constantly amazed by how many software developers, product designers, and CEOs seem to forget this the first time they see Claude build something complex and impressive in almost no time, never stopping to consider the subtle mistakes hiding inside.
We're currently living through a paradoxical moment. AI can do remarkable things, producing high-quality code, college-level essays, or consultant-quality research in minutes. These tasks previously would have taken hours, but that speed carries a hidden cost for those who aren't paying attention. Even the best code has bugs, college students don't write perfect essays, and consultants make plenty of mistakes too. And yet when the output comes from Claude, people skip entirely the rigor they used to apply.
Expectations vs. Reality
I've been working with and teaching AI for over three years. When I talk to people about AI as a tremendous accelerant, they'll inevitably share an experience where a coworker submitted a bunch of AI slop they didn't write, check, or test. That leads to the question: what do I do if my coworker generates AI code (or writing, or anything else) that isn't very good?
And I always respond that even in an era of AI, the whole point of your work is that it still has to work! The fact that Claude can generate something in minutes doesn't lower the bar for quality, even as agents do increasingly amazing things by the day. If you want to keep using agents, you need to focus on improving their outputs, not gleefully or begrudgingly accepting whatever they spit out and moving on.
Before Claude (BC)
If a coworker had submitted bad code before Claude Code, what would you have done? You would have pointed out the problems, asked them to fix it, and refused to accept it into your codebase until it was ready to ship. Even though we can reach the bar for shipping more quickly, the bar itself hasn't been lowered. If anything, it should be higher now that AI can assist with and validate our work.
If your coworker continues submitting slop, you have to tell them it's not acceptable. If they don't understand why, then this person probably isn't ready for an environment where agents write code. Agentic coding is a new way of working for everyone, so a good employer will invest in training their people. A less generous company will simply do what companies have always done: fire the underperforming employee.
The Claude Era (CE)
Having AI available isn't an excuse to think less; it's an opportunity to work smarter. When you don't check the work AI does for you, the only thing you accomplish is creating review work for someone else. How would you feel if someone handed you more work to do because they were too lazy to do theirs?
I expect that the more common this negative feedback loop becomes, the more high-functioning teams will simply give it a hard nope and tell someone to start over. The alternative is untenable and only reduces productivity, so eventually better judgment will prevail.
For years, many in our industry mused that “code doesn’t really matter — it’s all about the user” and stated that “software is a team sport”. If you believed that then and believe it now, nothing has changed about what it takes to ship software. As Simon Willison so succinctly states: “Your job is to deliver code you have proven to work.”
The Harvard Business Review correctly identifies that workslop is on the rise, so I'll leave you with three rules for dealing with it.
- Don’t be the person who supports workslop.
- Don’t be the person who accepts workslop.
- And most importantly: don’t be the person who creates workslop for others.