China has released a set of guidelines on labeling internet content that is generated or composed by artificial intelligence (AI) technology, which are set to take effect on Sept. 1.
This is a smart and ethical way to integrate AI into everyday use, though I hope the watermarks aren't easily removed.
Think a layer deeper: how can this be misused to control narratives?
You read some wild allegation with no AI marks (they're required to be visible), so it must have been written by a person, right? But what if someone, even the government, jumps out and says it was generated with an illegal, unlabeled AI? The question suddenly shifts from verifying whether the alleged events happened to whether the allegation itself is real. Public sentiment will likely be overwhelmed by "Is this fake news?" instead of "Is the allegation true?" Compound that with trusted entities amplifying the doubt, and discrediting anything becomes easier.
Here's a real example. Before Covid spread globally, there was a Chinese whistleblower who worked in a hospital and got infected. He posted a video online about how bad it was, and it quickly got taken down by the government. What if that happened today with this regulation in full force? The government could claim the video is AI generated: the whistleblower doesn't exist, and the content isn't real. Three days later, they arrest a guy, claiming he spread fake news using AI. They already have a very efficient way to control narratives, and this piece of garbage just gives them an express lane.
You thought that's only a China thing? No, every entity, including governments, is watching, especially the self-proclaimed friend of Putin and Xi and the absolute free-speech lover. Don't assume it's too far away to reach you yet.
It will be relatively easy to strip that stuff off. It might help a little bit with internet searches or whatever, but anyone spreading deepfakes will probably not be stopped by that. Still better than nothing, I guess.
Having an unreliable verification method is worse than nothing.
How so? If it's anything like LLM text-based "watermarks", the watermark is an integral part of the output. For an LLM it's about down-weighting certain words in the output; I'm guessing for photos you could do the same with certain colors, so if this variation of teal shows up more often than that variation, then it's made by AI.
I guess the difference with images is that, since you're not doing the "guess the next word" step and feeding the output from the previous step into the next one, you can't generate the red/green list from the previous output.
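The red/green-list idea above can be sketched in a few lines. This is a toy illustration, not any deployed scheme: the function names and the hash-seeded vocabulary split are my own assumptions. The previous token seeds a pseudo-random partition of the vocabulary into a "green" half; a watermarking generator would up-weight green tokens, and a detector just measures how often tokens land in the green half.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Deterministically mark half the vocabulary 'green', seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)  # fixed base order so the split is reproducible
    rng.shuffle(shuffled)
    return set(shuffled[: len(shuffled) // 2])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens that fall in the green list seeded by their predecessor.

    Unwatermarked text should hover near 0.5; a generator that up-weights
    green tokens pushes this well above 0.5, which a statistical test can flag.
    """
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab))
    return hits / max(len(tokens) - 1, 1)
```

The detector needs no access to the model, only the seeding rule, which is why paraphrasing or retokenizing the text degrades the signal.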
You can use things like steganography to embed data into the AI output.
Imagine a text has certain letters in certain places which can give you a probability rating that it’s AI generated, or errant pixels of certain colors.
Printers already do something like this, printing imperceptible dots on pages.
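The "errant pixels" version of this is classic least-significant-bit steganography. A minimal sketch, assuming the image is a flat list of 0-255 channel values (function names are made up for illustration): flipping only the lowest bit of each value is imperceptible but recoverable.

```python
def embed_bits(pixels: list[int], message: bytes) -> list[int]:
    """Overwrite the least significant bit of each pixel value with message bits."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "image too small for message"
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # change each value by at most 1
    return out

def extract_bits(pixels: list[int], n_bytes: int) -> bytes:
    """Read the low bit of each pixel back into bytes."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )
```

Which also shows the weakness everyone upthread is pointing at: re-encoding, resizing, or adding a little noise wipes the low bits, so this only catches people who don't try to remove it.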
I’m going to develop a new AI designed to remove watermarks from AI generated content. I’m still looking for investors if you’re interested! You could get in on the ground floor!
I’ve got a system that removes the watermark and adds two or three bonus fingers, free of charge! Silicon Valley VC is gonna be all over this.