Earlier this year, WIRED asked AI detection startup Pangram Labs to analyze Medium. It took a sampling of 274,466 recent posts over a six week period and estimated that over 47 percent were likely AI-generated. “This is a couple orders of magnitude more than what I see on the rest of the internet,” says Pangram CEO Max Spero. (The company’s analysis of one day of global news sites this summer found 7 percent as likely AI-generated.)
It was an SEO hellhole from the start, so this isn’t surprising.
Do Forbes next!
Is there a single good article on Forbes? It’s always fucking clickbait without actual content.
After all these years, I’m still a little confused about what Forbes is. It used to be a legitimate, even respected magazine. Now it’s a blog site full of self-important randos who escaped from their cages on LinkedIn.
There’s some sort of approval process, but it seems like its primary purpose is to inflate egos.
As of 2019 the company published 100 articles each day produced by 3,000 outside contributors who were paid little or nothing. This business model, in place since 2010, “changed their reputation from being a respectable business publication to a content farm”, according to Damon Kiesow, the Knight Chair in digital editing and producing at the University of Missouri School of Journalism. Similarly, Harvard University’s Nieman Lab deemed Forbes “a platform for scams, grift, and bad journalism” as of 2022.
They realized that they could just become an SEO farm/content mill and churn out absurd numbers of articles while paying people table scraps or nothing at all, and they’ve never changed.
How well does the “AI detection startup’s” product work? This is a big unsolved problem but I’d be hecka skeptical.
It doesn’t, and never will
That’s because of bots like you. (I kid to make a point.)
That’s exactly what a bot would say, to stay undetected.
That is why I liked the comparison with articles from 2018. Then you have comparable texts in the same format and can more easily figure out differences in your analysis.
If true, a jump from 3% to 40% is significant to say the least.
It’s not so much that it’s AI generated … it’s also AI influenced.
I know so many professional office workers who once wrote some of the most boring, sometimes stupid emails because they didn’t know how to write, couldn’t get their message across, or constantly miscommunicated because they worded things wrong … now all of a sudden they’ve become professional writers and all their emails look like auto-generated messages.
I’m guessing that many writers also take the AI shortcut. They get a bunch of content generated by an AI, then just rewrite it for themselves. Some content I see is lazily edited and some is heavily edited. But I get the feeling that just about everyone is using it, because it’s an easy way to get a bunch of work done without having to think too much.
At work? Yeah, I’m gonna use AI to write that email. I didn’t think or do anything more than the minimum required before, and I’m not starting now. AI just makes it so that the same garbage I would have sent before now smells nice.
If you like writing as an art, why would you have the machine do that for you? If you like thinking, you can do the thinking and let the machine do the typing for you.
All of these are different uses.
The implication that rewriting GPT output makes one a professional writer … not sure we’re on the same page there. If you know how to use it for those results, great!
Omg, the number of times I’ve clicked on a Medium article in the last month and immediately knew it was AI is so frustrating!!! They aren’t even helpful articles, because you can tell there is no real understanding.
The best part about this is that new models will be trained on the garbage from old models, and eventually LLMs will just collapse into garbage factories. We’ll need filter mechanisms, just like in a Neal Stephenson book.
People learn and write program code with the help of AI. Let this sink in for a moment.
I’m in university and I’m hearing this more and more. I keep trying to guide folks away from it, but I also understand the appeal, because an LLM can analyze the code in seconds and there are no judgements made.
It’s not a good tool to rely on, but I’m hearing of more and more people relying on it as I progress.
The true final exam would be writing code on an airgapped system.
I’m going into my midterm in 30 minutes, where we will be desecrating the corpses of trees.
I knew it would be the first platform to go. The same goes for Substack; that’s next.
Perhaps, but I don’t read anything on Substack unless I’m subscribed. Reputation is the entire point on Substack, without it, the content will get no traffic.
Shitty tech opinions were flooding Medium before, so it’s not much of a difference.
I think the difference is scale. Before, it was x% of humanity producing shitty opinions, where x < 100. Now it’s x% of humanity+AI, where x is, say, 100,000% of humanity. I don’t think we’re currently equipped to separate the wheat from that much chaff.
The first person who develops a browser that effectively filters out AI results is going to do very well.
This could easily be done with AI. For a week or so, that is.