It's no surprise to anyone that the internet has lately been flooded with AI-generated content, colloquially called AI slop. This has actually been going on for some years now, ever since models became good enough to write decent content (I put it around the release of GPT-4o), but it wasn't until this past year that the amount of AI slop positively exploded. I would say it's natural: as these systems become more powerful, we'll see people trying to use them to make a quick buck while polluting the web for the rest of us.

Nothing is sacred to them, nor is anything out of the AI's purview. From fiction (or text in general) to music, to video and images, everything is up for grabs. Lately, though, the worst offender I'm seeing is garbage music videos on YouTube. It doesn't take much, right? Just a prompt, and you've made yourself an 8-hour-long playlist of jazz-funk that sounds similar to the real deal, but with a soulless mechanical zombie playing the bass. Then you upload this video, and you're bound to get at least a few hundred views, and some of those viewers might even be brain- and ear-dead enough to end up "liking and subscribing" to your content.

YouTube, whether by design, mismanagement, or simple negligence¹, hasn't done anything to let people opt out of seeing this content. On one hand, I can imagine the reasoning: "more content means more views, which means more ad revenue for us." But that's an overly simplistic view that ignores the fact that the platform is collapsing in on itself². Thankfully, there are third-party options, like the block lists some people are putting together, but that's not a scalable solution, especially as generating videos and music becomes more and more accessible.

...

Yesterday my son asked me how "baseballs" were made. In these cases, I usually treat him to a short video showing the process. So I found one that looked decent enough (which I won't share out of principle; you can ask if you're interested), but after a while it just felt off. The voice was clearly synthetic, but the video looked more or less real, even too real, if you know what I mean, and the people in it were doing weird stuff like walking backward and forward. It wasn't until they showed how the baseballs are stitched (two ladies simultaneously stitching the same soccer-ball-sized leather ball) that I realized the whole thing was AI-generated. I kept watching a bit more, and it just kept getting more and more ridiculous.

That video was clearly scripted and generated entirely by AI. I actually wouldn't be surprised if the people behind that channel had a whole AI pipeline brainstorming, generating, and managing the videos, all just so they can farm a few thousand views, make some pocket change, and waste the time of lots of people in the process. I'm certain that the ones responsible for that generated baseball video didn't even take the time to watch it before publishing it. I, of course, reported it (as spam and misleading), but I'm afraid we'll just keep seeing more and more of these if there's no systemic change.

This is just my story with videos, but again, it applies to every other area of human creative output. For example, I was once looking up some information about Will Wood (the singer) and ended up partially reading an article that answered my question. Only after I was done did I notice that the article had been posted on a logging company's blog! Again, an automated AI content-generation pipeline, just to maximize SEO.

...

Recently I was talking about this with an acquaintance, and she told me it's the future and that I should get used to it. I don't quite agree. But what really surprised me was that she then told me about a friend of hers, the head of a big (BIG) publishing house, who said of AI content: "I don't care as long as I can sell it." I guess that's the root of the problem right there: people are not using these tools for real creativity, or even for fun; they're trying to exploit them for revenue. They're not creating things that might benefit or entertain others; they're using them for their own selfish gains.

...

Also recently, and perhaps more positively, I was talking with my father-in-law, a retired health worker, about the best way to help the world transition into this AI era. I work with AI, mostly "embodied" AI (think game NPCs or robots), so he was asking my advice about what he personally could do. At the time I didn't really have an answer, but now I think I do.

The best thing we can do right now is help people realize just how human we are: help each other recognize our shared humanity. The best way I know to accomplish this is to create actual human content. It's almost an ethical responsibility at this point, no? Create content to fight the slop flood.


Footnotes

  1. I think there's more than a bit of willful negligence at play here. If I understand correctly, YouTube does require creators to disclose whether a video was generated with AI, but as far as I can see they haven't really been enforcing this. Even Coca-Cola's Christmas ad, which was famously made with AI, is still not marked as generated content. Though perhaps I'm being too hard on Google; it may also be that they're doing their best but are simply "drowning in slop". ↩

  2. Here I'm bashing on YouTube as an example, but this really applies to many (all?) media-sharing platforms, Spotify included. Only those where there's a strong curation effort (e.g., Bandcamp) seem to be safe, for now. ↩