
A new term, ‘slop’, has emerged to describe dubious A.I.-generated material

272 points · 1 month · nytimes.com
duxup1 month ago

Slop seems like a good term for unwanted AI generated content.

But I wonder how much this is AI and how much we've sort of curated a slop pattern even before AI:

- Video game tips web pages with massive chunks of text / ads before you get to the inevitable answer of "hit A when X happens".

- The horrendous mess that Quora became, with answers to historical questions that are technically correct in some ways but also misleading.

- Medium articles about coding that are filled with irrelevant pics and blocks of text that are "not wrong" but also "not right" followed by weirdly specific code...

We had all that before AI.

imabotbeep29371 month ago

Agree. Content was the OG slop. Buzzfeed with monkeys on typewriters.

The problem is that dopamine addicts generate outsized engagement. I know a literal crack mom who spends a solid 90+ hours a week watching accident videos to keep her brain triggered. The algorithm caters to her. Send promotional emails daily or more, constant notifications, recommend the same few videos over and over. Gotta get in there before she clicks another car crash video.

IMHO: Marketing is a top societal evil right now. If the media machine wasn't so desperate for content, AI wouldn't be a fraction of the problem it is. But with everyone obsessing over the next piece of content, fake AI presentations are mandatory.

AnimalMuppet1 month ago

Hmm. An AI trained to maximize dopamine could be a very bad thing. (It won't be stated that way. It will be trained to "maximize engagement", but it amounts to the same thing.)

pjc501 month ago

> An AI trained to maximize dopamine could be a very bad thing.

Spelled "profitable". This is definitely something that's already happened/happening; see algorithmic timelines and the widespread sudden legalization and acceptance of gambling.

pmtcc1 month ago

Our brains have been under attack for years. Zuckerberg, Dorsey, and company have already spent decades and billions doing just that. With capabilities already in the AI realm.

Tao33001 month ago

> could be

is

noobermin1 month ago

Too many people are worried about hallucinating AIs somehow taking over nukes instead of them juicing the dopamine machines

mr_toad1 month ago

You don’t need AI for that, “old fashioned” ML has been doing it for a decade.

islandert1 month ago

TikTok?

fennecbutt1 month ago

Brainrot

brcmthrowaway1 month ago

Dude, take me back to BuzzFeed listicles. Some millennials being cringe >> AI slop

wdutch1 month ago

I think you're right. Since LLMs went mainstream, I've seen a lot of my colleagues' presentations and thought "was this written by ChatGPT?" But I've come to wonder if it's just given me the frame of mind to identify low-effort slop that lacks any original insight but uses all the right sorts of words and phrases, regardless of whether it was authored by a human or not.

cjbgkagh1 month ago

My hope is that equivocating waffle will look so much like ChatGPT that humans will have to write clearly and precisely to differentiate ourselves and we can put this horrible era of essay writing style behind us.

Though I’m starting to think that AI might improve faster than us so there might only be a diminishing margin of opportunity left to do this.

bee_rider1 month ago

We’ll have AI tools to take our bullet-points and expand them into prose (crappy now, eventually beautiful prose). Then we’ll use AI tools to summarize that prose into bullet points.

Eventually we’ll realize we can just send the bullet points and generate the prose on the receiving side. This will be great because most of the time the AIs will be able to say “let’s just be a nop.”

cjbgkagh1 month ago

Reminds me of the SMBC comic https://www.smbc-comics.com/?id=3576 that explores the idea further.

I am less optimistic than the comic.

imabotbeep29371 month ago

It's both. Especially the out of context tinfoil rage response. They always existed. But now it's so common to see some totally benign article about pizza, and the top comment is "don't let them tell you not to remember in 1995 when US implanted radios in Syrian babies".

The algorithm is being trained solely for engagement. It is horrifying.

Capricorn24811 month ago

This sounds completely unrelated. If someone is really leaving a comment like that, it has nothing to do with the article and everything to do with the way weirdos engage with the internet.

The Slop is the wordy vapid garbage that maximizes SEO.

smcin1 month ago

Parent is right though: AI slop in an article is to maximize SEO. AI slop in a comment is a weird jumble of implausible claims to maximize engagement. I see both on a daily basis now.

duxup1 month ago

I've had similar false positive experiences where I swore some content had to be some form of LLM generated content, until I discovered the source was just poorly done or even just text from some older writing that "sounded" wonky but was more of a product of its time (like a 1940s newsreel).

datavirtue1 month ago

This. Bad AI output is indistinguishable from bad human output. It's literally the same exact shit.

d0odk1 month ago

With AI, there can be more content, produced faster and probably more cheaply, that is tailored to individual users.

flessner1 month ago

Yes, AI isn't entirely to blame for this - it's low quality, irrelevant and misleading content in general.

Also, we have to look at the incentives: advertising. Somehow, this is acceptable to consumers, profitable for companies and profitable for publishers. How is absolutely beyond me... and it won't change so long as Google has a majority in the "search" space, as they are directly profiting from this.

lovethevoid1 month ago

Good point. The standards for advertising networks have to increase tenfold. Right now they reward slop; companies drain their ad budgets on channels they can’t even fully measure, and it repeats because it takes too long for companies to notice the effects.

It’s the reason advertising costs have ballooned digitally, and also the cause of many lawsuits that Google continues downplaying in the public eye.

Propelloni1 month ago

Today's AI is not to blame for anything, because those AIs lack agency. Take a good look at the theory and the real-life algorithms and you will soon realize that GPTs are just better parrots. Tools that they are, the blame does not lie in the tool but in the user. Not unlike guns.

giancarlostoro1 month ago

> - The horrendous mess that Quora became with technically correct in some ways but also misleading answers to historical content.

What kills me is that I have to hunt for the answer on Quora now. I just treat Quora like I do Pinterest: back out and never return.

disqard1 month ago

If you happen to use kagi, you can "ban" Pinterest from your SERP.

For me, it's one of those "quality of life" things that really improves my search experience (and is therefore worth paying for).

janalsncm1 month ago

There are chrome plugins that block domains from Google results for free and don’t require a login.

giancarlostoro1 month ago

I was, and I just might re-subscribe; I'm getting tired of how increasingly useless Google is becoming.

martin2931 month ago

I remember finding Quora when it was good. It was a literal godsend, actually interesting and meaningful questions and answers. Basically what the internet was advertised as (information interchange). Sadly it only lasted like 6 months.

tivert1 month ago

> We had all that before AI.

What AI gives us is vastly cheaper slop, so now it can be produced at a scale unimaginable to prior generations. No more paying some schmuck a penny a word to bang out "private label articles" that were only practical as SEO. Now you can have unique slop for every spam email, every search query!

Truly, we are making the world a better place.

henriquecm81 month ago

I can see the use to describe AI spam, but I'm starting to see people using it to describe anything they don't like, basically a replacement for "mid", which was heavily used over the last couple of years. I've noticed that when some people learn a new "trendy" word, they want to use it at every possible opportunity until it loses meaning.

GaggiX1 month ago

"Slop" is a internet slang that has always been used to refer to low quality content that exploits current internet trends, using it to refer specifically to AI generated content is pretty new.

brazzy1 month ago

> basically a replacement for "mid" with was highly used the last couple of years

Wot now? Somehow I managed to completely miss that.

Edit: ah, seems like it's mainly a twitter thing. That explains it.

cj1 month ago

"mid" is something I hear from people high school age most commonly.

henriquecm81 month ago

I've seen it on reddit too.

datavirtue1 month ago

Of course, it's seeking its low-energy state.

threetonesun1 month ago

The signal-to-noise ratio being so bad on the web today is AI's most compelling use for me: it's better at getting a pretty-close-to-right answer than searching the web, with much less crap I have to block along the way.

But, consider that all that crap ended up on the web for a reason, and wonder how long before AI just injects it itself into its own results.

tananaev1 month ago

That's a result of the natural selection forced by search engines. I think that's why I like ChatGPT so much. You can ask it very specific things and it will tell you exactly what you need. It does also output verbose answers by default, but you can control that by prompting for a short answer.

AlexandrB1 month ago

Enjoy it while you can. Once the marketing guys and monetization engineers get to it I suspect things will get a lot more annoying.

TeMPOraL1 month ago

This. Marketing is a cancer on modern society, and it's metastasizing to new communication media increasingly quickly.

tavavex1 month ago

Luckily, unlike search engines and similar, LLMs can be run completely locally. As long as there are corporate interests trying to squeeze new tech for every cent with no regard to anything else, there will be hobbyists making what's actually useful for them.

datavirtue1 month ago

Not if my company is paying for it. Which they will, because they have to. The price will be kept in check because it is a commodity. Anyone wanting enterprise business will have to include it (Microsoft 365).

duxup1 month ago

Good point.

I do find myself sometimes even prompting for a shorter answer after it hits me with a blob of text ;)

moi23881 month ago

I do so as well, but usually the response is the chatbot first generating a paragraph about how it’ll comply with the request, making the prompt moot

wafflemaker1 month ago

It is probably an internalized "prompt engineering" trick from GPT-3.5 times, when you could achieve near-GPT-4 performance using stuff like that. "Rephrase the question and plan your answer" was at the top of the list.

TeMPOraL1 month ago

Keep in mind that tokens are LLM units of thought; the only moment the model does any computation is when generating tokens. Therefore, asking it to be succinct means effectively dumbing it down.

GaggiX1 month ago

We also had the term "slop" before, and it's not strictly related to AI: "content or media of little-to-no value".

PheonixPharts1 month ago

Coming up with and quickly adopting new terms to sound "hip" is one of the most important skills for AI practitioners. We've had "agent-based" concepts in CS for decades, but if you're "in" you'll of course refer to "agentic" workflows and the like.

It makes sense to come up with terms to describe common patterns: Chain-of-Thought, RAG, etc. are good examples of this. But the passion some members of this community have for being intentionally confusing is tiresome.

kokanee1 month ago

It's true... the quality of content on the Internet has a bunch of problems, and AI is just one of them. The economic incentives to trick people into staying on your page and scrolling as much as possible are a fundamental part of the problem. Politically-motivated ragebait and lies are a separate huge problem. AI-generated slop is also a problem for content quality and UX, but I'm far more concerned about the impact of AI on the value of human labor and intellectual property than I am about the UX of my search result pages.

badgersnake1 month ago

Youtube videos that could have been a one paragraph answer.

somenameforme1 month ago

Exactly why I literally never use videos anymore for 'how to do [x]', when 'x' can be expected to be fairly straightforward.

- 10 seconds intro

- 10 seconds yoooo guys wassss up

- 30 seconds build up

- 30 seconds showing what the answer will do

- 30 seconds encouraging you to post comments to the video

- 2 seconds to explain the answer

- 20 seconds yooo don't forget to pound that like and subscribe

If this is really what's optimal for the sacred algorithm, then that algorithm needs a serious tune up.

disqard1 month ago

If it's a somewhat-successful YouTuber, then you missed the 60-second shout-out to their sponsor.

TeMPOraL1 month ago

Plus 30 seconds to 2 minutes of Patreon segment, depending on whether they're reciting the list of newest/bestest patrons, and then 30 seconds of outro in the end, creating a frame for YouTube to stuff recommended videos of the creator in.

ryandrake1 month ago

What we need is an AI agent that can parse through a 10-minute video, then extract and summarize, in text format, only the important 2 seconds.

ghaff1 month ago

I've seen examples where watching someone make e.g. some repair really benefits from video. But I certainly won't argue the general point.

lxgr1 month ago

You forgot the NordVPN ad!

antisthenes1 month ago

Literally all of these are just the symptoms of declining ability of people (general public) to perform critical thinking. The content/spam/slop is simply being tailored to be effective with its intended audience.

But that's not the scary part.

The difference with AI slop is just the enormity of the scale and speed at which we can produce it. Finally, a couple of data centers can produce more slop than the entirety of humanity, combined.

wafflemaker1 month ago

Don't think so. It's just the democratization of the internet. It went from an elitist, well-read and educated bunch to people communicating with pictures. Nothing wrong with that, tho the text internet was nice.

At work people often ask me for help with documents or translation. Or I see some friends' conversations. While Polish grammar is pretty difficult, it's not surprising to see messages with orthographic errors in five out of six words. You just live in a bubble of people who can read and write well.

antisthenes1 month ago

> It went from elitist, well read and educated bunch to people communicating with pictures. Nothing wrong with that, tho text internet was nice.

There is absolutely everything wrong with that if it consistently invades and drowns out the voices of the well-educated elite.

The worst tyranny in this world is the tyranny of the ignorant against the learned. In its worst form, it can lead to mob justice and other horrible things.

Maybe that's not your worldview, but it is the view of many, and it's just as legitimate as yours.

wizzwizz41 month ago

> it's not surprising to see messages with orthographic errors in 5 out of six words

But they're saying something. The characteristic feature of slop is not informality: it's fundamental meaninglessness.

echelon1 month ago

> The difference with AI slop is just the enormity of the scale and speed at which we can produce it. Finally, a couple of data centers can produce more slop than the entirety of humanity, combined.

Think only about your own consumption for a second. You're not going to engage with slop, are you?

I imagine that, whatever your filter process is, you manage to heavily engage with content that is mostly good and well-suited for you. Discounting Google search becoming crummy, of course.

AI in the hands of talented people is a tool, and they'll use it to make better stuff that appeals to you.

I wouldn't worry about other people. Lots of people like rage bait, yellow journalism, tabloid spam, celebrity gossip, etc. There's not much you can do about that.

anal_reactor1 month ago

When I was a kid and was told to write an essay on "what is slop", teachers would give lots of extra points for dumping useless and vaguely related information just to raise the word count. Answers along the lines of "slop is useless shit created only to serve as filler content to make money on stupid people" would get zero points. I was expected to write the history of slop, the etymology of the word, the cultural context, the projected future, blah blah blah, and don't forget at least ten citations, even if they were even more useless than the essay I was writing and 100% pure unadulterated slop.

My master's thesis was on a topic that nobody else had researched (it wasn't revolutionary, just a fun novel gimmick), so I had to write filler just to have a chapter on a topic it was possible to find references for, in order to get the citation count up, even though the chapter wasn't relevant to the actual topic of the thesis.

So yes, I think the push to create slop was there even before computers became a thing; we just didn't recognize it.

woodruffw1 month ago

As with everything, I think it's scope and scale: Quora was always a cesspool, but now every single question has a machine-generated response that's frequently incorrect or misleading (sometimes in legally concerning ways, as one was for me recently).

djaouen1 month ago

I don’t think they are saying that the internet hasn’t been shit. It is. I think what they are saying is that it is about to get a whole lot shittier thanks to AI.

cptcobalt1 month ago

Anything produced in response to classic "SEO manipulation", padding content to rank higher on a search engine results page by creating the appearance that it is higher value, more comprehensive, or took more effort to produce, is net slop. And that's been going on for 10+ years.

Grimblewald1 month ago

I guess the problem is that for the lazy, the ability to generate slop has accelerated significantly through the advent of AI. Slop creators have been disproportionately empowered by AI tools. People who create quality content still benefit from AI, but not to the same extent.

zrn9001 month ago

> The horrendous mess that Quora became

It's not a horrendous mess for me. It works very well. Everything depends on what content you interact with, as the algorithm heavily shapes your feed based on what you interact with. It's no different from any other social network.

asadotzler1 month ago

99% of people who use Quora don't use it as a social network, they click that top Google search result claiming to answer their question and then get frustrated at how fucked up Quora's website is and how rarely it actually answers their question.

guidoism1 month ago

Recipe articles with hundreds of words of irrelevant text before the actual recipe.

lux1 month ago

The endless drivel of recipe websites is another one, burying the actual recipe under an absolute mountain of slop.

saltminer1 month ago

LPT: Recipe Filter is shockingly good at cutting out all the filler and presenting the recipe in an easy-to-read format

https://github.com/sean-public/RecipeFilter
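For the curious, here's a minimal sketch of one way such a filter can work (an assumption about the general approach, not Recipe Filter's actual code): many recipe sites embed the recipe as a schema.org `Recipe` object in a JSON-LD script tag, so it can be pulled out directly and the thousands of words of surrounding prose discarded.

```python
# Hypothetical sketch: extract a schema.org Recipe from a page's JSON-LD,
# skipping all the filler prose. Real pages may nest the recipe under
# "@graph" or use other variations not handled here.
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> tags."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self.blocks.append("".join(self._buf))
            self._buf = []
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)

def extract_recipe(html):
    """Return the first JSON-LD object with @type "Recipe", or None."""
    parser = JsonLdExtractor()
    parser.feed(html)
    for block in parser.blocks:
        try:
            obj = json.loads(block)
        except json.JSONDecodeError:
            continue
        # JSON-LD may be a single object or a list of objects.
        for candidate in (obj if isinstance(obj, list) else [obj]):
            if isinstance(candidate, dict) and candidate.get("@type") == "Recipe":
                return candidate
    return None

# A page burying the recipe under a wall of childhood memories:
page = """
<html><body>
<p>When I was a kid we used to spend summers with my grandmother...</p>
<script type="application/ld+json">
{"@type": "Recipe", "name": "Cinnamon Rolls",
 "recipeIngredient": ["2 cups flour", "1 tbsp cinnamon"]}
</script>
</body></html>
"""

recipe = extract_recipe(page)
print(recipe["name"])              # Cinnamon Rolls
print(recipe["recipeIngredient"])  # ['2 cups flour', '1 tbsp cinnamon']
```

The browser extension approach also handles pages without structured data, but when JSON-LD is present this is all it takes.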

lux1 month ago

Thanks for sharing!

aidenn01 month ago

Recipe for cinnamon rolls:

When I was a kid we used to spend summers with my grandmother. It was an idyllic pastoral setting and we used to chase the goats around and catch butterflies.

[snip 3000 words]

...when I asked her for her recipe, it turned out she made cinnamon rolls by buying Pillsbury ones at the grocery store! So if you don't want to be like grandmother, use 2 cups of flour...

gravescale1 month ago

"Best X of YYYY" articles have been (mostly? fully?) automated mashups of tech specs for years too.

marginalia_nu1 month ago

Yeah, slop isn't new, AI makes it easier to produce.

Other examples include those books where each chapter, generously estimated, has a tweet's worth of thought padded out with 35 pages of meandering anecdotes that just paraphrase the same idea. It's very clearly a sort of scam: the padding is there to make it seem like the book has more information than it does when you look at it in a digital bookstore.

p_l1 month ago

AI hype allows one to push "slop" about AI slop.

Just like simple template generated SEO, template-written "content", etc. before.

In fact, a lot of writing about AI slop could be considered just as much slop...

CM301 month ago

Yeah, and most of the reason for that can basically be summed up as "it's what Google incentivises".

They look for detailed pages, so pages are bloated with irrelevant information. They look for pages people spend a lot of time on, so the same thing occurs. Plus, the hellscape that is modern advertising means that rushing content out quickly and cheaply is encouraged over anything else.

AI will probably accelerate the process even more, but it's already been a huge issue for years now.

TeMPOraL1 month ago

There's a bit of blaming a victim going on here. Especially early on in the days of SEO, Google incentivized slop the same way a bank vault incentivized armed robbery: by having something of value in it.

Google incentives don't matter much for honest website operators. They're only relevant when you want to abuse the system to promote your worthless bullshit[0] at the expense of the commons.

I really wish society started to treat marketing hustlers with the same disdain it has for robbers.

--

[0] - If it was worth anything, you wouldn't be worried about SEO all that much, especially back before it all turned into a race to the bottom.


tkgally1 month ago

Both HN itself and prolific HN contributor simonw get shoutouts in the article:

“The term [‘slop’] has sprung up in 4chan, Hacker News and YouTube comments, where anonymous posters sometimes project their proficiency in complex subject matter by using in-group language.”

“Some have identified Simon Willison, a developer, as an early adopter of the term — but Mr. Willison, who has pushed for the phrase’s adoption, said it was in use long before he found it. ‘I think I might actually have been quite late to the party!’ he said in an email.”

The first substantive discussion of the word here seems to be this:

https://news.ycombinator.com/item?id=40301490

atomicnumber31 month ago

4chan has been calling things "slop" for literally so long I can't remember when it started. If you go to /g/ right now and Ctrl-F "slop", you get 4 hits just in the front-page previews.

If anything, it originally started as calling things "goyslop", which you might be able to deduce is a portmanteau of "goyim" and "slop", the implication (given it's 4chan) of course being that it's low-quality stuff made by Jews that is foisted upon the "goyim" (non-Jews). To the point that I usually see people calling it "AIslop"... specifically to differentiate it from "goyslop", so pervasive is the use of the term.

I'm honestly surprised "slop" (in this specific context) is hitting the mainstream (apparently) given it's so closely married to anti-Semitic undertones. I assume it's kind of like Pepe? People see the cute frog or the edgy designation of things as "slop" not knowing that's kind of a minced version of how it's actually used on 4chan.

woodruffw1 month ago

It helps that "slop" has a widespread, intuitive meaning (in this setting) that doesn't need 4chan's anti-semitic usage. I hadn't even made the connection to the 4chan phrase, even though I'd heard it before.

(This is in contrast to Pepe, which was popularized principally on 4chan and then exported by reactionaries.)

sfp1 month ago

I got curious, and the first usage was apparently on 08 May 2019: ">Implying I care about what Goyslop companies do".

https://archive.4plebs.org/_/search/text/goyslop/order/asc/

unraveller1 month ago

Harold Bloom often called Harry Potter generic "slop" upon release in the 2000s and /lit/ uses the insult the same way without any undertones. /tv/ is probably the biggest user of all the different insult 'slops since that medium is thoroughly middle of the road and they know the only fun to be had of late is in discussing how shit something is rather than consuming it.

It would be strange if the slop industry didn't try to take over the word since they exist to preempt all charges against their authority.

coldblues1 month ago

The regular person will never get these terms right. Most people are not aware of this game of telephone they're unwillingly part of. It never ceases to annoy me. I still shudder at seeing "troll" being misused.

gravescale1 month ago

Or hacker for that matter.

Actually "engineer" as well in countries where it's not protected. Though the confused look on my German housemate's face when an "engineer" turned up to connect the cable broadband cable to the property was pretty funny.

hotdogscout1 month ago

"Goyslop" as an insult is less about anti-semitism and more about how Jews are allegedly injecting anti-white multiculturalism (ex: DEI) in entertainment.

So things that are woke get called goyslop (as opposed to Jewish as you imply).

forgotmypwlol1 month ago

Sounds pretty antisemitic, and also paranoid schizophrenic.

compiler14101 month ago

This antisemitism, is it in the room with us right now?

hotdogscout1 month ago

It is a cliche antisemitic conspiracy theory, but as an insult it is not aimed at things that are Jewish but at things that are woke.

Didn't claim it wasn't antisemitic, it is, a lot!

pcwalton1 month ago

> I assume it's kind of like Pepe? People see the cute frog or the edgy designation of things as "slop" not knowing that's kind of a minced version of how it's actually used on 4chan.

The way 4chan lingo seeps into mainstream Internet discourse is so annoying. It happened with "degenerate" too (which, of course, 4chan borrowed from the Nazis' Degenerate Art Exhibition).

hotdogscout1 month ago

Degenerate was never used to describe people in a non-hateful context though, even before it became /pol/'s favorite word.

tbabb1 month ago

They don't mention Twitter, but that's where Willison got it from.

cletus1 month ago

I'm a huge Neal Stephenson fan. Cryptonomicon is to this day one of my all-time favorite books. Years ago now I read Anathem. It wasn't as good but it had some really interesting ideas.

One such idea was how the Internet was filled with garbage by all these agents (which were implied or stated to be AI, I can't recall which). They would subtly change things to be wrong. Why? Essentially to sell you a solution that filters out all the crap.

Currently we rely a lot on altruism for much of the information on the Internet (eg Wikipedia). AI agents will get harder and harder to differentiate from actual humans making Wikipedia edits. I don't think we're that far away from human-vs-AI Wikipedia edit wars.

I really wonder how much human knowledge will be destroyed by (intentional or otherwise) AI vandalism in the future.

pavel_lishin1 month ago

> “Early in the Reticulum - thousands of years ago - it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information,” Sammann said.

> “Crap, you once called it,” I reminded him.

> “Yes - a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. They created syndevs whose sole purpose was to spew crap into the Reticulum. But it had to be good crap.”

> “What is good crap?” Arsibalt asked in a politely incredulous tone.

> “Well, bad crap would be an unformatted document consisting of random letters. Good crap would be a beautifully typeset, well-written document that contained a hundred correct, verifiable sentences and one that was subtly false. It’s a lot harder to generate good crap. At first they had to hire humans to churn it out. They mostly did it by taking legitimate documents and inserting errors - swapping one name for another, say. But it didn’t really take off until the military got interested.”

> “As a tactic for planting misinformation in the enemy’s reticules, you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid-First Millennium A.R.”

> “Exactly!” Sammann said. “Artificial Inanity systems of enormous sophistication and power were built for exactly the purpose Fraa Osa has mentioned. In no time at all, the praxis leaked to the commercial sector and spread to the Rampant Orphan Botnet Ecologies. Never mind. The point is that there was a sort of Dark Age on the Reticulum that lasted until my Ita forerunners were able to bring matters in hand.”

mvdtnz1 month ago

I'm re-reading Anathem now (my favourite book). Something I noticed on my second reading is Stephenson describes Sammann's physical appearance exactly once in the book and I must have missed it the first time around.

I always pictured the Ita as wearing elaborate robes with hoods darkening their faces due to their secretive nature. But in fact he describes Sammann as looking basically just like Gilfoyle from Silicon Valley, including the way he dresses. Which is amazing given the roots of the Ita (he describes the word as coming from Information Technology and the meaning of the A is lost to time, but it's obvious to a 20th century Earth-born reader it comes from IT Administrator).

There are so many delightful details in Anathem, it's well worth a second reading.

pavel_lishin1 month ago

I don't think I remember a description of Sammann's clothes, aside from a hat. I always pictured them as wearing something similar to what Orthodox Jews wear, except in darker colors.

greendestiny_re1 month ago

100 good sentences and 1 subtly wrong one? That makes for an extraordinary text! I suppose the idea sounded better in the writer's head, but it would not be nearly as bad as implied.

visarga1 month ago

We can always rely on pre-2022 data. But I guess what you're saying is plausible; I see it becoming like the adversarial game of virus vs. immune system, a constant arms race. We've got to build our immunity. On the one hand AI can churn out bullshit on command; on the other hand, training on large datasets tends to cancel out many errors across the corpus.

All the more reason to use local models and curated feeds from now on. Local LLMs can clean / firewall the bad stuff, and follow our guidance. They will be like the new anti-virus software. I've predicted early in 2023 that in the future operating systems and browsers will all sport a small LLM that will ensure we don't get abused by the internet and provide a "room of our own", where we have total privacy. It's already a reality.

tivert1 month ago

> We can always rely on pre-2022 data.

No we can't. How many websites from 1998 survive today, in a form you can actually find (e.g. not the Wayback Machine)? In ten or twenty years, most pre-2022 data will be inaccessible.

93po1 month ago

It's interesting how often I have to look at timestamps on content on the internet these days. I assume anything after 2022 that has the slightest whiff of ChatGPT wordiness is probably slop

gamepsys1 month ago

The same idea is explored in more detail in his later book 'Fall; or, Dodge in Hell', where AI-generated, individualized content radicalizes parts of the population and convinces them to believe blatantly false facts. It's a decently large B plot in the novel that ties into some of the larger themes.

soco1 month ago

We didn't need AI for that, but it surely helps by making it much easier.

wy351 month ago

It's definitely not the same.

Spam is trying to sell you something, e.g. an unsolicited email peddling supplements.

Slop is low-quality content, e.g. someone taking a bunch of bird pictures off Google and posting them in a birding Facebook group.

Spam is an ad, slop is not. With AI, it is now much easier to generate slop.

OptionOfT1 month ago

It is the same. Pages and pages of generated content so that 1) they end up higher in Google and 2) when you end up on their page they're able to show you ads / try to sell you stuff.

ranger_danger1 month ago

People already use AI to generate ads though. It's only going to get worse.

rdlw1 month ago

Spam is content that is delivered to people who did not request it. Slop is content which pretends to be what you want, but turns out to be of low quality.

quantified1 month ago

Huh. Whether or not a word has this meaning or that depends upon acceptance. You say it means X, someone else says it means Y, where X and Y can be arbitrarily dissimilar. And as much as you might each be saying "means this to me", the way you are each phrasing it, you mean "for all". How do the two of you resolve this?

rdlw1 month ago

What is there to resolve? The comment I responded to was saying there's no useful distinction between the two words, I pointed out a distinction that I see between the two in what I think is accepted usage. Of course I see my opinion as more correct, since if I found something more convincing I would change my opinion. As the word becomes more widely used, usage will settle down.

quantified1 month ago

Argh, my apologies. The comment above yours in the sequence I see defines spam as unwanted marketing content, yours as unwanted content in general. For reasons unknown I saw that one as the parent of yours.

cess111 month ago

I don't see the difference. An email that pretends that I need to buy some pills and turns out to be a card collecting scam is obviously of low quality and not something I have requested.

Computer generated fake texts are spam, doesn't matter if they're selling pills to me or my attention to ad networks.

pmtcc1 month ago

Spam = unwanted and against what you asked for, trying to sell you something or get you to do something

Slop = technically what you asked for, but intentionally created just to fill space and increase traffic/hits, generally of the lowest quality rendering it unusable

rdlw1 month ago

I see 'slop' used more often in contexts where, say, you look up a problem you're working on, find an article that seems to be relevant and may have the answer you want, but at some point turns out to be devoid of content and possibly AI-generated.

cess111 month ago

That's spam. That's like the messages in your inbox of yesteryear that tried to entice you to look at them but they'll just waste your time or worse if you do.

ketralnis1 month ago

Feels like people calling bad scrolling and other jitter "janky" just to mean that it's a bit crap, and then deciding that "jank" is a technical term. No, buddy, jank just means jank and slop just means slop. This isn't a "new term".

duxup1 month ago

IMO it's slightly different in terms of how it plays out so a new term seems reasonable.

sjsdaiuasgdia1 month ago

I have similar feelings about "smishing", "vishing", and other medium-specific variants of "phishing".

fullshark1 month ago

I see no alternative if people are unwilling to actually pay for content. It's just going to be individualized slop feeds on every advertising based media app until they get tired of that (zero sign of that coming).

Maybe the algorithms will be good enough, and enough creative people will use these tools to generate truly exciting content they wouldn't have been able to make otherwise, but at this moment it looks totally dire for creatives.

jl61 month ago

> unwilling to actually pay for content

The "content" industry (books, music, movies, all of it) has a systemic issue of which we are only just seeing the beginning. Namely, there is now so much content, and it is all so easily accessible, that the relative value of any one piece of content has fallen way, way down. There are only so many hours in the day, and only so many days in a lifetime, and only so many humans on the planet, and growth in that aggregate content consumption capacity has been far outstripped by the growth in content production capacity.

There's just obscenely more high quality new-to-you content than you can ever consume - and an increasing proportion of it is available very cheaply, or even free. Anything new faces an uphill battle against everything old - and now, against AI too.

This is going to get a lot worse before it gets better (and it may never get better).

apantel1 month ago

Well said. It’s the same with news. When something big happens in the world, there is an explosion of communication about it on the internet. You can absorb the event from countless sources. So what value is any one news outlet’s coverage? It’s not worth much. The media outlets used to be able to monetize a captive audience, i.e. people living in a certain locale would have a few newspapers and television channels to choose from. Now anyone, anywhere, can go online and absorb news from all of the posting and aggregating and reprocessing and commenting going on. It’s almost impossible to sell into that.

The value of generic / impersonal content is rapidly approaching zero. The only thing that still has value is a particular creator of interest posting their next video — like your favorite YouTube channel, you’ll watch that.

It seems like the only way to succeed in this new environment is to be a real human person who builds a following / cult of personality around themselves and their content with its signature that is unique to them. It’s something like ‘releasing content that is personally signed’ where the person’s signature has value to a certain audience. The audience is ‘captive’ because they can’t get that ‘personal signature’ anywhere else. Even AI can’t deepfake it, because the perceived value of it is specifically that it is coming from a particular real human person.

TeMPOraL1 month ago

> It seems like the only way to succeed in this new environment is to be a real human person who builds a following / cult of personality around themselves and their content with its signature that is unique to them.

Correction: pretend to be a real human (or even a hyperreal human), not to be a real human. This is the game YouTubers and Instagram influencers have been playing for over a decade - there's a team of people building a brand around the face of the vlogger/influencer, making them seem like a really nice and interesting human, where in fact the opposite is the case. The point of it is to exploit human vulnerability to parasocial relationships, creating a captive audience primed to be receptive to the deluge of advertising that follows.

Yes, this is one of the few ways for "content" to keep value these days. Which is ironic, given that the net value of it to the consumer and society is squarely negative.

cess111 month ago

"There's just obscenely more high quality new-to-you content"

Are you sure about this? How are you measuring quality?

For me, if something resembles advertising I consider it to be of very low quality. There are some exceptions, for example some of the movie work by Roy Andersson, but they are very few.

As far as I can tell, ad-discourse and ad-style permeate pretty much everything in contemporary "content". Every time I go to my library and open something older than me, the language is like a breath of fresh air; it's clear that someone put intellectual work into it, and there is a distinct character to the text, personality imbued by the typographers, authors and editors. This is very rare on the Internet, and whenever I come across it the typography is usually ad-adjacent anyway.

jl61 month ago

Even if a super-Sturgeon's-Law holds at a 10000:1 crap:quality ratio, there's still overwhelmingly so much stuff out there, produced over so many generations of talented writers, artists, musicians and directors, that you'd have to be unreasonably picky not to be able to fill a whole lifetime of consumption with enjoyable content. It's only our predilection for novelty that keeps the content mills going, and I wonder how long that can last against the ever-growing accumulation of culture.

cess111 month ago

Seems to me you're conflating the excretions of the entertainment industry with culture.

willvarfar1 month ago

Companies will serve slop to paying subscribers too.

s1artibartfast1 month ago

If it gets bad enough that people will pay, I think some will pay for exclusively real content.

I hope this is the start of a walled-garden human internet: web rings, moderated forums, etc.

A common cyberpunk trope is a trashed net and a private net.

TeMPOraL1 month ago

> If it gets bad enough that people will pay, I think some will pay for exclusively real content.

People who pay for content demonstrate that they have disposable income and are willing to spend it, which makes them a prime population for advertisers to target. By paying, they're distinguishing themselves from the near-worthless population of free users. There's huge pressure for advertisers to tap into that juicy population of paying users; it takes only so long before any given service succumbs to that pressure.

pixl971 month ago

Exactly. Slop is like meth for corporations. Unlike real content, it costs nearly nothing to produce, and for a short period it can give a real boost to viewers/ad impressions/etc. The current board members get that jump in their stock and take a golden parachute, while their replacements have to deal with a company that can no longer produce anything useful and has to spend a massive amount of money to get everything back in shape.

fullshark1 month ago

True, let's get even more cynical actually, companies will serve ads to paying customers too, even those paying for "ad-free" versions of the product.

skydhash1 month ago

People want to pay for content, but publishers either aren't working on content worth paying for or don't want you to purchase it; they want you to rent instead.

ThrowawayTestr1 month ago

Decades of internet use has shown me that people absolutely do not want to pay for content. The average person will choose free+ads over paying every time.

Workaccount21 month ago

A large subset of those will choose free+ads with ad-block, so free+free.

Then they will complain that the internet is full of trash content that doesn't suit them.

pixl971 month ago

>Then they will complain that the internet is full of trash content that doesn't suit them.

This will happen regardless if you paid for content or not. The natural world is filled with parasites, it is an effective evolutionary strategy.

skydhash1 month ago

Free is free, and you can choose not to look at the ads. No wonder people are taking this option. In the process, it cheapens the whole thing.

paulddraper1 month ago

The overwhelming majority do not want to pay for online publications.

They are willing to view ads (proving they value the material) but as a rule are unwilling to pay any cash.

duxup1 month ago

Chicken and the egg there as far as "worth paying for" although I'd argue that's not really any different than "don't want to pay".

StableAlkyne1 month ago

If it's "worth paying for," they can just squeeze more money out of the equation until it's barely worth the cost.

For a real world example, just look at the scientific journal system - researchers pay upwards of $5k to publish (after spending however many tens to hundreds of thousands on the science, out of their own pocket), readers can pay $50 per article or their institution can subscribe for tens of thousands of dollars (if they're lucky and have a good negotiator). Journals do nothing of value aside from hosting the PDFs (which absolutely does not cost $50/download) and facilitating anonymous peer review (which amounts to sending emails to a few academics who will review it for free, at no cost to the journal).

Even content that is worth paying for, like research, will quickly reach an equilibrium that maximizes profit while minimizing effort.

pjc501 month ago

The fundamental issue is that it's unreliable to know if you'll like content before you've experienced it. But afterwards you're not going to pay because you have already experienced it. It's a "market for lemons".

skydhash1 month ago

It was always a bet. You make something and hope that people like it enough to pay for it. Just like any business. Why should businesses be entitled to my money if their offerings have no value to me?

ben_w1 month ago

> Why should businesses be entitled to my money if their offerings have no value to me?

That's completely the wrong framing for this idea.

The problem is that you don't know if it's valuable until you've bought it, unless the seller gives it for free and just trusts you to pay later in the event you did.

If I see a new-to-me fruit in the supermarket, I can buy one to see if I like it, and I can be reasonably confident that my first taste will be a reliable indicator of if I should buy more.

People used to do this with entire newspapers, but (1) newspapers have been derided for talking nonsense for basically as long as we would even recognise them to be newspapers in the modern sense, and (2) dividing them up into separate web pages per article makes the challenge greater, as it's gone from 50p for the entire broadsheet based on the front page headline as an advert, to a "please subscribe" banner after seeing the headline and generic intro paragraph for a random article you were probably linked to because someone else thought it was interesting.

Ekaros1 month ago

I'm honestly not sure people want to pay for content. They want to pay for convenience or value; that is, to have content easily available. But not necessarily for content itself, if they can avoid paying or get it cheaper.

Not that there aren't sub-groups that will happily pay.

petercooper1 month ago

And this is where branding and reputation come into play.

For all their problems, I trust numerous media brands to not give me slop: The New York Times, The Financial Times, Monocle, Matt Levine, Linus Tech Tips, The Verge, hundreds of YouTubers and Twitter users, even Hacker News. Let media companies and creators who want to set fire to their good names get on with it, because it'll hopefully mean anyone doing a good, consistent job will rise.

anal_reactor1 month ago

> Linus Tech Tips

Caught red-handed faking test results; the official response was "if we actually did the tests we wouldn't be able to publish videos fast enough".

petercooper1 month ago

I did say "for all their problems" ;-)

I still trust LTT more than a channel pumping out faceless review videos or one so unknown that there aren't enough viewers to even provide any scrutiny. To butcher the eponymous Linus's law: given enough eyeballs, all mistakes are shallow?

CM301 month ago

At least part of the issue is that much of this slop is 'good enough' for many people. As bad as many of those terrible recipe sites, video game walkthroughs, and news articles might be, they're clearly good enough for the majority of the population, enough that they don't see paying for content as worth the price.

The other part is that practically speaking, there's enough good free content in most fields that paying doesn't get you anything better. It's not surfaced well enough by the algorithms, but it does exist, and it makes it so in most areas, there's very little reason to pay for anything.

rini171 month ago

The subscription usually comes in a package, and people hate to pay for something they disagree with, even if it's just one author of several, or the occasional mistake.

Might not be the main reason, just that these unsatisfied users are so vocal, idk.

Workaccount21 month ago

In my experience with a voluntary, donation-only service targeted at and used primarily by educated middle- and upper-class people, virtually no one pays if they don't have to.

scotty791 month ago

The problem is people wouldn't know what content to pay for because paid content in large part is also scam and bait and switch these days.

You literally can't evaluate content before you consume it in its entirety, or at least as much of it as you wish.

Given that, copyright should be abolished, adverts should be banned, and the content industry should move to entirely post-paid voluntary financing.

Something like gaming piracy where you play the whole game for free and if you really liked it you "buy" "a copy" to support developers and their investors.

ALittleLight1 month ago

If OpenAI, or whoever, turns the AI crank once more and we go from GPT-4 to 5, and the jump is the same size as 3 to 4, then I think the answer will be pretty clear: AI content will improve in quality to meet or exceed human content.

barfbagginus1 month ago

False. False. False.

1. People could forcibly seize and redistribute content already made in years 1980-2020. Even that would be better than slop.

2. People could read only those - like scientists or open source and public domain authors - either funded by the state or otherwise willing to publish their works for free at high quality without the monetization slop text. The content exists, but monetized slop hides it.

3. Actually good AI can compress multiple slop articles into useful, non-sloppy content.

I'd be perfectly fine if the internet consists of just math textbooks and science papers, and actually good articles automatically distilled from slop.

It has the potential to give us what we need.

The problem is that slop hides all that!

fullshark1 month ago

So the other alternative is piracy and only consuming public domain information?

Zambyte1 month ago

Maybe we should stop demonizing unauthorized sharing? People like to pretend nobody would make art in the absence of copyright, but it's easy to point them to millennia of contrary evidence.

barfbagginus1 month ago

Those are other alternatives, yes. That and mining the slop with automated tools.

The problem with public domain stuff is not quantity; there is more than enough of it. The problem is that you cannot access it, because monetized slop has superseded it in the search results.

I believe that automated AI engines will eventually help individuals find non-sloppy public domain articles, or assemble them from slop directly.

But piracy is always a good option.

EForEndeavour1 month ago

The LLM/generative-AI genie is out of the lamp. I'm just some random midwit, but some predictions:

- Slop will continue to become cheaper to generate, and people will only notice the obvious stuff

- Hyperpersonalized content will abound, yet authenticity will run dry

- The lack of authenticity in electronic channels will drive a small segment of people offline into less fakeable (for now) social contexts

- Humans online will walk a treadmill of increasingly convoluted shibboleths / Gnirut tests (reverse Turing tests ;)) to self-identify as likely not AI-generated, e.g., subtly run-on sentences that are intelligible but slightly non-conformist to prevailing AI model outputs, and usage of old-school emoticons and other quirks

- Humans will walk on similar "Gnirut treadmills" for visual art, speech, video, and music

- AI models will gladly chase humans along these Gnirut treadmills, filling in canyons and sections of the Uncanny Valley with fractally sophisticated humanlike content

4star3star1 month ago

Fifteen years ago (I remember the apartment where I had this thought), it occurred to me that time was running out to write an authentic novel. Soon, computers would generate whole stories in an endless variety of styles, and even if future authors would hand write a book from start to finish, they would likely have been influenced by other artificial writing at some point. Readers would be unable to emotionally connect with authors due to the nagging awareness that the text might have been fully or partially generated by an unthinking, unfeeling machine.

Though I try, I fail to think of a comparable scenario in our past, at least as relates to language. You can look around whatever room you are in and try to identify an object that was made by human hands rather than a factory process. That's a fact that always makes me a bit sad. I think we're headed in a similar direction with the language we consume. Craftsmanship falls by the wayside, and our world loses even more of the human touch that connects us with one another.

axpvms1 month ago

Interesting thought. Looking around the room I'm in, the only thing I'm sure was definitely handmade is a musical instrument, because I know the guy who made it for me. And even then it uses parts that were machine-made: strings, tuners, bridge, etc.

BeFlatXIII1 month ago

Thank you for your reminder that I should finish rereading If on a winter's night a traveler. It has sections that, IIRC, deal with similar themes.

scrps1 month ago

- The lack of authenticity in electronic channels will drive a small segment of people offline into less fakeable (for now) social contexts

I think this segment might start small, but it will grow rapidly if the utility of the internet is drowned in low-quality crap. The bet some are making, that non-technical people won't catch on to the shenanigans and simply look elsewhere, is a bad one. Everyone living on the internet during covid gave non-technical people an intuitive feel for the manipulation, and for how tenuous the internet's quality is as a tool or public utility they can trust in any form.

citizenpaul1 month ago

Hop on various marketplace apps like Craigslist or Facebook Marketplace and you will discover there already exists a decent chunk of invisible population that largely rejects the online world for all but basic communication.

pixl971 month ago

Once we get to the point where models are learning continuously, or nearly so, and getting data streamed from thousands of sources, I feel it may be very hard to figure out whether humans are leading the Gnirut treadmill or following it.

noman-land1 month ago

I weirdly think the shibbolethization of human culture will be a good thing because it will encourage everyone to be creative, lest they be accused of being a bot and ignored.

chromaton1 month ago

Slop has been around a while. I was researching a topic, and noticed that most of the top search results had the same misunderstanding of some of the definitions. The writers were clearly not familiar with the topic, and I'm sure they were just copying each other. All of the articles pre-dated GPT-3.5.

The kicker is that if you ask GPT-4 about it, it spits out the same incorrect information, meaning that GPT-4 was likely trained on this bad data. FWIW, GPT-4o gives a much more accurate response.

soco1 month ago

I wonder (as an outsider) why GPT-4o would give more accurate responses? The training data is the same, right... so maybe somebody can explain this to me like I'm a kid, thank you.

willvarfar1 month ago

Beyond slop, there will be personalized slop where AI 'optimizes' articles for each reader.

And beyond slop, there will be AI models that do product placement. OpenAI's "publisher partnerships" deck explains https://news.ycombinator.com/item?id=40310228

So soon you'll go to a news website and get the political filter bubble that reinforces - or outrages - your prejudices to maximize your engagement. And in the middle of it, the AI will slip in that the brand of grill that caused the fire was rumoured to be ${insert_name_of_competitor_here} etc?

The big future for AI is to move slop beyond outrage and into intimacy territory. If rage was the engagement of the last ten years, then ending up only talking to AIs who pretend to care will be the even more addictive engagement of the next ten :(

duxup1 month ago

What happens if AI slop learns that I like accurate content and it suddenly gets good?

iAMkenough1 month ago

It'll sell you on the idea that you can trust it; that it only sources verified reports and data; that it's better than the other AI slop that doesn't care about accuracy or privacy.

Then it'll build up habits and routines with you. You'll feel good that your AI slop is actually good AI and is trying to benefit you.

All the while, the verified reports and data always supports your expectations and worldview. It's almost perfect, aside from the fact it sometimes cites The Federalist more than it should. And it convinced you to throw out all your pillows and buy a whole new set for your family. But you're able to look past it, because it's actually good AI.

willvarfar1 month ago

The incentives are not aligned. They are not optimizing strictly for your preferences, they have agendas they are paid by advertisers to push.

Swizec1 month ago

Is slop new, or is it just a continuation of SEO, blogspam, and "content"? I love that we have a new, better word that captures the nuance, but it doesn't feel like a new phenomenon.

thih91 month ago

I'd guess both: a continuation and its own new flavor.

It's better than what automated scripts could produce and cheaper than human-generated copy. It has the expertise of the whole Internet, yet its reasoning capabilities are often worse than those of a three-year-old.

Similar for sure, and yet something different.

cj1 month ago

Our company is working with an SEO firm we pay $11k/mo (which I want to fire).

They sent us new copy for our core marketing pages last week, and many sections simply sounded nonsensical. They just didn't make cohesive sense, and it was clear a human being didn't write (or even review) the content.

This is the problem with new AI "slop". In the past, blogspam / SEO spam was at least reviewed and written by a human. Now, we have content getting published straight out of the mouth of text generators.

The quality of long-form content from text generators is significantly worse than even mediocre $10/hr copywriters in many cases.

whyage1 month ago

Unlike slop, SEO content can't be totally off-base, or search engines will rank it low. The ecosystem keeps it annoying but consistent with the facts, at least.

whstl1 month ago

I'd argue that the strange Gemini answers that were making the rounds of social media are more absurd than what you would see in regular blogspam. But again, they're extremes (and some of the reported screenshots were even fake).

On the other hand, there's indeed a lot of popular human-made fake content lately, especially on TikTok, like fake guitar playing and fake nonsensical DIY videos. So it's not an AI-exclusive phenomenon.

troyvit1 month ago

Now I want to go ask one of my AIs to produce some clearly fake content so that I can get boosted on social media for complaining about awful AI.

antifa1 month ago

Slop is not a new term; it refers to content people like (pejoratively, similar to how fast food is something people like) but that for whatever reason (low-brow? bad for you? clickbait? lowest common denominator?) should not be classified as good-quality content. This slang is older than AI, possibly by decades.

atrettel1 month ago

Yes, this is not a new term. It's not hard to find earlier examples of the term "social media slop" to refer to endless posts on social media. A simple Google search for this reveals many examples of this particular term before 2022. I remember the term "slop" being used in the context of social media as a play on the idea of the "feed" giving you mass quantities of low-quality hogwash, etc.

acureau1 month ago

NYT writer discovers the term 'slop' days after NYT source leaks on 4chan. Specifically on a board where this is a common phrase. Cites 4chan. I'm connecting the dots

tolerance1 month ago

All this talk and jargon is indicative of a mass existential crisis as humanity is faced with the reality that many of its shared cultural artifacts are essentially frivolous.

fluffet1 month ago

Love the term.

People should call it what it is. I tried to find some answers on Google earlier in the day, and the first result pages were 100% generated slop. Funnily enough, any AI summary of the slop would be slop squared.

It's everywhere, and I hate it. What ways do people have to keep it out of their day?

kjkjadksj1 month ago

It's infesting Google image search too. I tried to find a picture of a guitarist playing a certain guitar, and I got hits from "openart" where it looked like Kurt Cobain got crossed with a sandworm.

compiler14101 month ago

Like with all terms originating from 4chan, some people will try to reject it and make it the new n-word like they're paid to do this. The irony is they do it for free on a Friday afternoon. At the end of the day normal people who don't larp as Internet hall monitors don't care and adopt the term anyway. Many such cases. And people haven't learned in the 20 years this kept happening over and over again.

bazil3761 month ago

There’s going to be a lot of garbage content out there—but isn’t there already? People have been writing junk to try to get search engine placement for 20+ years.

I’m not necessarily seeing the slop problem. People should always have been skeptical of content on untrusted websites.

Now, if reputable sources start trying to pump out content with AI, that’d be a problem. I suspect for those who try, they’ll quickly lose their reputation.

pavel_lishin1 month ago

> There’s going to be a lot of garbage content out there—but isn’t there already? People have been writing junk to try to get search engine placement for 20+ years.

Yes, but people's output is limited by their ability to type words on a keyboard. LLMs and other generative A.I. aren't bound by this limitation, and can put out significantly more.

> People should always have been skeptical of content on untrusted websites. Now, if reputable sources start trying to pump out content with AI, that’d be a problem.

How do you define untrusted websites, or reputable source? Especially when Google - which should be a trusted, reputable source - starts pumping out garbage as they did?

bazil3761 month ago

On the first point - I’m not sure there’s a difference to internet users between 1 billion junk articles on a topic and 1 trillion junk articles.

On the second point - this is precisely what I’m talking about when I say if reputable sources start churning out junk, they will lose their reputation. This is a negative publicity event for google. If it keeps happening, people will no longer trust the information coming from google.

haizan1 month ago

> On the first point - I’m not sure there’s a difference to internet users between 1 billion junk articles on a topic and 1 trillion junk articles.

But there is a difference in whether the ratio of good to bad articles is 1:10 or 1:10,000: one is tedious but manageable, the other is hopeless.

GaggiX1 month ago

"Slop" is a general term, you can create slop as a human, for example YouTubers who upload daily talking about the latest Twitter drama they are usually referred to as making slop, especially if they have a main channel where they upload high quality content. It's not a new term and it has little to do with AI by itself.

pavel_lishin1 month ago

I think an important difference with YouTubers putting out low-quality videos is that it still takes them at least as much time to make the video as the video's run-time - same with pointless self-promo blogposts, unhinged LinkedIn posts, etc.

With generative A.I., this kind of slop can be pumped out at an industrial scale.

It'd be like equating your neighbor dumping a bag of garbage on the roadside, with the industrial plant down the road pouring out thousands of gallons of toxic waste per minute into a river.

ChrisMarshallNY1 month ago

> at least as much time to generate the video as the video's run-time

That's for the very lowest-tier video.

The usual formula is that for every minute of runtime, you have at least 30 minutes of editing.

With professionally-produced video, I think it's triple or quadruple that.

GaggiX1 month ago

I'm talking about how the term is used because the author doesn't seem to be too familiar with Internet slang and believes that the term is new and strictly related to AI.

coldblues1 month ago

I think it's pretty obvious that the term "slop" has food origins. When you think of "slop", you think of oily, greasy fast food, or disgusting amounts of sugar, syrup, icing, etc. The food analogy strikes again. When someone says something is "slop", they obviously mean mass-produced content that regular people willingly consume to their detriment, because it appeals to our most primitive desires. Something lacking substance, non-challenging material, "roller coaster" content.

https://www.youtube.com/watch?v=wyoNGSKWIaw

tuckerpo1 month ago

Like most sticky internet slang terms, "slop" stems from 4chan's "sloppa", initially used to describe gross looking food, i.e. "slop of shit" shortened to "sloppa"

Now used to describe anything that looks half-assed, poorly put together, etc.

Ekaros1 month ago

What I am thinking of is bad cafeteria food. Something they slop onto your plate from a big container with zero ceremony. Something without any desirable consistency, taste, or colour...

neogodless1 month ago

That's not what I think of.

I think of the buckets of scraps you would feed a pig. The stuff the humans don't want to eat and would just as well put in the garbage bin.

coldblues1 month ago

Reasonable individuals would think of the food I've described as pig feed, if not worse, of course with some hyperbole added. The reality is that most people are not knowledgeable enough to even tell "AI slop" apart from regular slop, or even from human-made content. In that regard, people are eating pig feed: scraps no reasonable person would want to touch. The best example is AI-generated Facebook posts with comments and likes from elderly folk or the lower end of the bell curve.

NikkiA1 month ago

I think of gears, then sometimes low-value food.

yadaeno1 month ago

Slop is more of a term to describe extreme cost savings pushed onto consumers and marketed as “progress”.

FezzikTheGiant1 month ago

Would people be willing to pay for a Gmail plugin that takes a stab at combating AI email spam? Something like Mailman [1], but with an LLM layer for detecting AI slop?

[1]: https://mailmanhq.com/

callamdelaney1 month ago

You'll soon no longer be able to find anything of use via traditional search engines. How will we keep improving AI models when the 'slop' in their training data starts to outnumber the real content?

dorkwood1 month ago

Do they need new training data? I'm pretty sure Midjourney, OpenAI etc. already scraped all of the images in existence long before the well was poisoned. They can use other methods, such as improved tagging, to improve their models.

southernplaces71 month ago

I've been calling it sludge since it started to clog the internets all over the place (like cat photos but so much less cute or real). Slop works too though.

DoItToMe811 month ago

'Slop' has been used to describe low quality, mass produced web content for at least five years. Media is behind the curve, as usual.

sangnoir1 month ago

All I need is a Cloud-to-Butt-like browser plugin that replaces all instances of "GenAI" and "generative AI" with "AI slop"

Havoc1 month ago

It’s starting to get more noticeable week on week. Both on insta and google

At this velocity it’ll make some categories pure noise by end of year

SirMaster1 month ago

If only we had an AI that could sift through all the slop and give us only the decent information that we wanted...

reportgunner1 month ago

Been reading too much slop lately eh ?

SirMaster1 month ago

I just mean that AI seems fairly good at sifting through and summarizing information, so the same AI that generates all this slop might also be used to sift through it and pick out only what we need and what's relevant.

reportgunner1 month ago

Sounds to me like you are trying to plug a hole by making a hole. I don't think AI is a perpetuum mobile for intellect.

pmdr1 month ago

We've been feeding it slop for years, if anything it's become better at writing it than us humans.

iamleppert1 month ago

AI will never come close to producing what people can, because an AI cannot be made to suffer. Suffering is essential to the creative process. Suffering is the key, it's the whole point, and at the core of the human experience. When we look at creative output, we are looking at the sum of the suffering experienced by the creator. The more suffering, the better the content.

berniedurfee1 month ago

That’s not the four letter s-word I would have gone with.

novaRom1 month ago

Interestingly, this article itself is an example of "slop"

simonw1 month ago

Why?

eugenekolo1 month ago

Schlopp. Beautiful schlopp. Beautiful schlopp with a cherry on top.

m3kw91 month ago

Google leading again on the minus side

xen2xen11 month ago

But was the term coined by an AI?

justinclift1 month ago

Even Kagi has this crap too. :(

spacecadet1 month ago

It was slop before

ai4ever1 month ago

See slop? Say something!

There will be a backlash against robotic phone assistants in support centers.

Support businesses that don't put out slop or AI garbage.

telepathy1 month ago

SLOP: Judith Miller at the NY Times circa 2003

hm-nah1 month ago

Nice work Simon!

bni30 days ago

BoomerArt slop

PM_me_your_math1 month ago

[dead]

veesahni1 month ago

tldr: "slop" is defined as unwanted AI content

Black616Angel1 month ago

[flagged]

verticalscaler1 month ago

[flagged]

taco_emoji1 month ago

wish i could read the fucking paywalled article