Nucleo’s investigation identified accounts with thousands of followers engaging in illegal behavior that Meta’s security systems failed to detect; after being contacted, the company acknowledged the problem and removed the accounts.
If a child is not being harmed, I truly do not give a shit.
The most compelling argument against AI-generated child porn I have heard is that it normalizes it and makes it harder to tell whether an image is real or AI-generated. That lets actual children get hurt when abuse goes unreported, or gets skimmed over, because someone thought it was AI.
As a counterpoint, the fact that those AI images are so quick and easy to get, compared to the risk and extra effort of doing it for real, could make actual child abuse less common and less profitable for mafias and assholes in general. It’s a really complex topic that no simple, straight answer would solve.
Normalising it would be horrible and should be avoided, but there will always be some number of people looking for that content. I’d rather have them using AI to create it than going out searching for the real thing. Going after the AI content is not only very inefficient, it might also be harmful, because the only content left would be the real kind, whose producers are much harder to catch.
And we have absolutely no data to suggest that’s happening. It’s purely hypothetical.
What if it features real kids’ faces?
Then it is harming someone.
How do you think they train the models?
With a set of all images on the internet. Why do you people always think this is a “gotcha”?
I’ve been assuming it’s because they truly have no idea how this tech works
Hey.
I’ve been in tech for 20 years. I know Python, Java, and C#. I’ve worked with TensorFlow and language models. I understand this stuff.
You absolutely could train an AI on safe material to do what you’re saying (rough sketch below).
Stable Diffusion and OpenAI have not guaranteed that they trained their models on safe material.
It’s like going to buy a burger and the restaurant saying, “We can’t guarantee there’s no human meat in here.” At best it’s lazy. At worst it’s abusive.
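To be concrete about “train it on safe material”: curating the dataset is just a filtering step before any training run. Here’s a rough sketch in Python, where the metadata fields and source names are made up for illustration and the actual human review process is out of scope:

```python
# Sketch: keep only training records whose provenance is on an explicit allowlist.
# The field names ("source", "human_reviewed", "url") and the source names are
# hypothetical; real pipelines carry similar metadata alongside each image.

ALLOWED_SOURCES = {"licensed_stock_archive", "public_domain_scan"}  # hypothetical curated sources

def is_safe(record: dict) -> bool:
    """A record qualifies only if it comes from a vetted source and passed human review."""
    return record.get("source") in ALLOWED_SOURCES and record.get("human_reviewed", False)

def curate(records):
    """Yield only the records that pass the provenance check."""
    for record in records:
        if is_safe(record):
            yield record

if __name__ == "__main__":
    raw = [
        {"url": "a.jpg", "source": "web_scrape", "human_reviewed": False},
        {"url": "b.jpg", "source": "licensed_stock_archive", "human_reviewed": True},
    ]
    for r in curate(raw):
        print(r["url"])  # only b.jpg survives; that curated subset is what you hand to the trainer
```

The gate itself is cheap to build. What the big labs have not done is commit to running their pipelines through one.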
I mean, there is no photograph of a school bus with pegasus wings diving toward the Titanic, but I bet one of these AIs can crank out that picture. If it can do that…?
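And that’s not hypothetical. Here’s a minimal sketch using the open-source Hugging Face diffusers library; the checkpoint name is only an example, and it assumes you have diffusers, transformers, and torch installed plus a CUDA GPU:

```python
# Minimal sketch: generate a scene no photograph of exists, purely from a text prompt.
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; any Stable Diffusion-style checkpoint works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a school bus with pegasus wings diving toward the Titanic, underwater photograph"
image = pipe(prompt).images[0]  # the model composes concepts it never saw together in training
image.save("bus_pegasus_titanic.png")
```

The model has never seen that scene; it recombines things it learned separately, which is exactly the question this thread is arguing about.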
OK, but by that definition Google should be banned, because their crawler isn’t guaranteed not to pick up CP.
In my opinion, if the technology involves casting a huge net and then creating an abstracted product from what is caught in that net, with no step in between seen by a human, is it really causing any sort of actual harm?