Google has developed a tool called SynthID that watermarks AI-generated images in a way that is imperceptible to humans but detectable by software. The watermark is embedded directly in pixel values without noticeably changing the image. SynthID is launching with Google Cloud's image-generation service, so customers can verify whether an image came from it. While aimed initially at detecting deepfakes, the tool could also help businesses check AI-generated images used for tasks like product descriptions. Google hopes SynthID may become a web-wide standard, but recognizes that others are working on detection methods too. The launch marks the start of an arms race: attackers will try to circumvent the system, so it will need to improve continuously. Overall, SynthID is a first step toward greater transparency around AI-generated content online.
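To make the "imperceptible to humans, detectable by software" idea concrete, here is a deliberately naive sketch. SynthID's actual scheme is proprietary and far more robust; this toy just hides bits in the least significant bit (LSB) of each pixel value, which changes each pixel by at most 1 out of 255:

```python
# Toy illustration only -- NOT SynthID's method. Hide a bit pattern in the
# least significant bits of pixel values: invisible to the eye, trivially
# readable back by software.

def embed_watermark(pixels, bits):
    """Set the LSB of each pixel to the corresponding watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels, n):
    """Read the watermark back out of the first n pixels' LSBs."""
    return [p & 1 for p in pixels[:n]]

original = [200, 201, 202, 203, 204, 205, 206, 207]
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(original, mark)

assert extract_watermark(stamped, 8) == mark
# Each pixel moves by at most 1 brightness level -- imperceptible to a viewer.
assert all(abs(a - b) <= 1 for a, b in zip(original, stamped))
```

The catch, which the comments below get at, is that anything this fragile dies on the first lossy re-save; whatever SynthID does must be far more redundant and tied to image structure rather than raw bit values.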
Upscale image 10x
Convert raster -> vector
Downscale image 1/10x
Convert vector -> raster
Hell, you can probably just save it as a JPG in MS Paint a few times.
How well will the watermark survive resizing and compression? What about sharpening or blurring the image?
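The resizing half of that question can be tested the same way. A minimal sketch, assuming nothing about SynthID itself: box-filter downscale followed by nearest-neighbour upscale scrambles an LSB-style watermark, which is why a production scheme needs to encode its signal in coarser, resize-invariant image structure:

```python
# Toy resize round-trip: average-pairs downscale, then nearest-neighbour
# upscale back to the original size. A fragile LSB watermark does not survive.

def downscale_2x(pixels):
    """Average adjacent pixel pairs -- a crude box filter."""
    return [(pixels[i] + pixels[i + 1]) // 2 for i in range(0, len(pixels), 2)]

def upscale_2x(pixels):
    """Nearest-neighbour upscale back to the original length."""
    return [p for p in pixels for _ in range(2)]

# LSBs of these pixels carry the watermark bits 1,0,1,1,0,0,1,0.
watermarked = [201, 200, 203, 203, 204, 204, 207, 206]
round_tripped = upscale_2x(downscale_2x(watermarked))
recovered = [p & 1 for p in round_tripped]

assert recovered != [1, 0, 1, 1, 0, 0, 1, 0]  # the embedded bits are mangled
```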
If the watermark is robust, it would be nice if web browsers could flag AI-generated images, as long as the watermark detection can be done 100% client-side.
And of course it will be impossible to remove a watermark that programs can detect but humans can’t see, right?
I mean, go for it if you want. We’re already, today, past the point where a photo or video in and of itself constitutes reliable evidence, given how convincing known tools already are. You need to show chain of custody like you would for any other forensic evidence, including a credible original source on the record, for it to be actually reliable. Faking anything is absolutely plausible.
As these companies build these kinds of watermarks, we need people building tools to counter them. Can’t trust large corporations not to abuse their watermarking tools to suppress anything they dislike.