- cross-posted to:
- technology@lemmy.world
For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse materials (CSAM) to stop kids from being retraumatized online. However, rapidly detecting new or unknown CSAM has remained a harder challenge for platforms, as new victims continued to be harmed. Now, AI may be ready to change that.
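The hash-matching approach described above can be sketched roughly as follows. Everything here is illustrative: the function names are made up, and a plain cryptographic hash is used for simplicity, whereas production systems (e.g., PhotoDNA) use perceptual hashes that also match resized or re-encoded copies.

```python
import hashlib

# Hypothetical database of digests of previously verified material.
# An exact hash like SHA-256 only catches byte-identical copies --
# which is exactly why detecting *new* material needs a different approach.
KNOWN_HASHES = {
    hashlib.sha256(b"example-known-file-bytes").hexdigest(),
}

def is_known_match(file_bytes: bytes) -> bool:
    """Return True if the upload exactly matches a known hash."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES

print(is_known_match(b"example-known-file-bytes"))   # True: exact copy
print(is_known_match(b"same image, re-encoded"))     # False: exact hashing misses variants
```

The second call is the gap the article is pointing at: any modification to the file changes the exact hash, so unknown or altered material slips past this kind of matching entirely.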
Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It’s the earliest use of AI technology striving to expose unreported CSAM at scale.
An expansion of Thorn’s CSAM detection tool, Safer, the new “Predict” feature uses “advanced machine learning (ML) classification models” to “detect new or previously unreported CSAM and child sexual exploitation behavior (CSE), generating a risk score to make human decisions easier and faster.”
The model was trained in part using data from the National Center for Missing and Exploited Children (NCMEC) CyberTipline, relying on real CSAM data to detect patterns in harmful images and videos. Once suspected CSAM is flagged, a human reviewer remains in the loop to ensure oversight. It could potentially be used to probe suspected CSAM rings proliferating online.
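The classifier-plus-reviewer flow described above amounts to thresholding a model's risk score and routing high-scoring uploads to a human rather than acting automatically. A minimal sketch, with the threshold value and names entirely made up (Thorn has not published these details):

```python
def route_upload(risk_score: float, review_threshold: float = 0.8) -> str:
    """Hypothetical triage step: the classifier emits a score in [0, 1];
    nothing is actioned automatically -- a high score only enqueues the
    upload for human review, keeping a reviewer in the loop."""
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk score must be in [0, 1]")
    return "human_review" if risk_score >= review_threshold else "allow"

print(route_upload(0.95))  # "human_review"
print(route_upload(0.20))  # "allow"
```

The choice of threshold is the whole ballgame here: set it low and reviewers drown in false positives (the doctor-photo cases discussed below), set it high and new material gets through.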
This sounds like a bad idea; there are already cases of people getting flagged for CSAM after sending photos of their children to doctors.
https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html
Here’s the article. Imagine losing access to everything that your physical driver’s license can’t help you get back. I would be in jail for one reason or another if Google fucked my life over that badly.
> As for Mark, Ms. Lilley, at Google, said that reviewers had not detected a rash or redness in the photos he took and that the subsequent review of his account turned up a video from six months earlier that Google also considered problematic, of a young child lying in bed with an unclothed woman.

> Mark did not remember this video and no longer had access to it, but he said it sounded like a private moment he would have been inspired to capture, not realizing it would ever be viewed or judged by anyone else.
They could have just made this up wholesale. What is Mark gonna do about it? He literally doesn’t have access to the video they claim incriminates him, and the police department has already cleared him of any wrongdoing. Google is just being malicious at this point.
This seems like a lot of risky effort for something that would be defeated by even rudimentary encryption before sending?
Mind you, if there were people insane enough to be sharing CSAM “in the clear,” then it would be better to catch them than not. I just suspect most of what’s going to be flagged by this will be kids making inappropriate images of their classmates.
It could be a very useful tool, indeed, but I wouldn’t trust dipshits who use “proprietary” as if it’s something to be proud of. If they really wanted to “protect the children,” they should’ve at least released the weights, IMO (given that releasing the training data is illegal as fuck).