Given how Reddit now makes money by selling its data to AI companies, I was wondering what the situation is for the fediverse. Typically you can block AI crawlers using robots.txt (The Verge reported on it recently: https://www.theverge.com/24067997/robots-txt-ai-text-file-web-crawlers-spiders). But this only works per domain/server, and the fediverse is about many different servers interacting with each other.
So if my kbin/lemmy or Mastodon server blocks OpenAI’s crawler via robots.txt, what does that even mean when people on other servers that don’t block this crawler boost me on Mastodon, or when I reply to their posts? I suspect that unless all the servers I interact with block the same AI crawlers, I cannot prevent my posts from being used as AI training data.
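For context, the per-server blocking mentioned above looks like this: a minimal robots.txt sketch that asks OpenAI's crawler to stay away (GPTBot is OpenAI's documented crawler token; the rest is illustrative). It is purely advisory, and it says nothing about copies of your posts federated to other instances:

```
# robots.txt at the root of one instance's domain.
# Advisory only: compliant crawlers honor it, others can ignore it.
User-agent: GPTBot
Disallow: /
```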
We’re sick of closed walled-garden monoliths like Reddit! Let’s move to an open federated protocol where anyone can participate and the APIs can’t be locked down!
…wait, not like that!
Yeah. This is what you signed up for when you joined the Fediverse: the ActivityPub protocol broadcasts your content to any other servers that ask for it. And that’s just generally how the Internet works. You’re putting up a public billboard and expecting to control who gets to look at it. That’s not going to work. Even robots.txt is just a gentleman’s agreement; it’s not enforceable.
If you really want to prevent AI from training on your content with any degree of certainty you’re probably looking for a private forum of some kind that’s run by someone you trust.
I don’t expect anything; I was merely asking a question to clarify this.
Well, I hope my answer clarifies it. You can’t prevent LLMs from being trained on your public posts.
We’re sick of closed walled-garden monoliths like Reddit! Let’s move to an open federated protocol where anyone can participate and the APIs can’t be locked down!
Can you point to where the fediverse collectively said that? Speak for yourself, and don’t act like the fediverse was designed to suit your definition of freedom. The fediverse is open and federated in the sense that there are multiple instances and owners without a centralized administration, and the owners who host those instances decide what to lock down.
And some of those hosts can decide to serve up their content to AI trainers. Some of those hosts can be run by AI trainers, specifically to gather data for training. If one was to try to prevent that then one would be attacking the open nature of the fediverse.
There have been many people raging about their content being used to train AIs without permission or compensation. I’m speaking to those people, not the “fediverse collectively”. As you suggest, the fediverse can’t say anything collectively.
But robots.txt is not a legal document — and 30 years after its creation, it still relies on the good will of all parties involved
You can ask nicely, they can (and will) ignore it.
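To make the "ask nicely" point concrete, here's a small Python sketch (the instance domain and path are made up) showing that robots.txt compliance only exists because the crawler voluntarily checks before fetching:

```python
# Sketch: robots.txt compliance is opt-in. A polite crawler checks the rules
# with urllib.robotparser before fetching; nothing stops one that skips this.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Feed the rules directly instead of fetching them, to keep this self-contained.
rp.parse([
    "User-agent: GPTBot",
    "Disallow: /",
])

# The polite crawler asks first and gets told no...
print(rp.can_fetch("GPTBot", "https://example.social/@user/posts"))  # False
# ...a bot the rules don't name is allowed by default...
print(rp.can_fetch("SomeOtherBot", "https://example.social/@user/posts"))  # True
# ...and an impolite crawler simply never calls can_fetch() at all.
```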
Also, I’ve already seen complaints about AI companies scraping everything and ignoring robots.txt.
And we would block the obedient and useful crawlers while doing no harm to the malicious ones.
Surely the AI crawler company can set up their own node. They’d post nothing but collect everything from the time they go live.
After reading your comment I was disappointed openai.social doesn’t exist
They don’t want AI to hate itself, so they don’t want our training data, thankfully.
I wonder if content should carry some license automatically. Like, if you agree to the TOS of an instance, your comments are automatically licensed as CC BY or CC0, or the more restrictive license of the instance owner’s choice.
There’s someone running around lemmy with a creative commons sharealike link as a signature. Quite funny to be honest. I can’t remember the username though. They’re bound to show up sooner or later :)
All rights reserved.
Oh yeah it was @onlinepersona@programming.dev
You go champ! If an AI starts ending their posts with a CC BY-NC-SA license I know who to credit!
You’re welcome
I don’t think that would make much of a difference. Training AI on copyright-protected data appears to be fair use.
Yup. There are dumps of Reddit’s entire archive of comments and posts available via torrent, I suspect the only reason Reddit’s getting paid for that stuff right now is that it’s a legal ass-covering that’s comparatively cheap. Anyone who’s a little daring could use it to train an LLM and if they prep the data well enough it’d be hard to even notice.
Really, there’s only one way to prevent that, but it would offer no guarantees; the instance with the weakest security in the group would allow your posts to be crawled.
It would require an agreement among instances to block crawler bot traffic (by user-agent, known IPs, etc) and only federating, via allow lists, with instances that adhere to the agreement. At that point, it’s more of a federated private forum, but there would still be some benefit I guess.
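As a rough illustration of the user-agent blocking part, an nginx fragment like the following could sit in front of an instance (GPTBot and CCBot are real crawler tokens; the domain and everything else here is a hypothetical sketch). Note that a crawler that spoofs its User-Agent header sails right through, which is why the weakest instance in the group remains the limiting factor:

```
# Illustrative nginx snippet: refuse requests from known AI crawler user-agents.
# Only deters honest crawlers; a spoofed User-Agent bypasses it entirely.
map $http_user_agent $is_ai_crawler {
    default      0;
    ~*GPTBot     1;
    ~*CCBot      1;
}

server {
    listen 443 ssl;
    server_name example.social;

    if ($is_ai_crawler) {
        return 403;
    }
    # ... rest of the instance's normal configuration ...
}
```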
I don’t object to my content being used for training. I do object to Reddit profiting from that data; it’s the reason I basically don’t participate on Reddit anymore. Anything I post in the fediverse, I’m aware I’m offering up for free to be crawled and used as seen fit, as long as it isn’t monetized without my consent. I don’t consider model training to be monetization.
Fair reason for not participating in Reddit. I would argue, though, that while model training is not monetization per se, with this “AI as a platform” rationale promoted by OpenAI, Google, and others, there is a very direct link between model training and monetization. Monetization without your consent, especially when these companies refuse to reveal the sources of their training data. I wouldn’t be surprised if GPT-4 or Gemini have been trained on your fediverse posts, or will be in the near future.
Agreed, but it bugs me that I need to pay Reddit not to see ads, and on top of that they get paid for the content we produce. The fediverse is a better model.