The White House wants to ‘cryptographically verify’ videos of Joe Biden so viewers don’t mistake them for AI deepfakes::Biden’s AI advisor Ben Buchanan said a method of clearly verifying White House releases is “in the works.”
Digital signatures as a means of non-repudiation are exactly the way this should be done. Any official docs or releases should be signed and easily verifiable by any member of the public.
Maybe deepfakes are enough of a scare that this becomes standard practice, and protects encryption from getting government backdoors.
Hey, congresscritters didn’t give a shit about robocalls till they were the ones getting robocalled.
We had a do not call list within a year and a half.
That’s the secret, make it affect them personally.
Doesn’t that prove that government officials lack empathy? We see it again and again but still we keep putting these unfeeling bastards in charge.
Well sociopaths are really good at navigating power hierarchies and I’m not sure there is an ethical way of keeping them from holding office.
Would someone have a high-level overview or ELI5 of what this would look like, especially for the average user? Would we need special apps to verify it? How would it work for stuff posted to social media?
linking an article is also ok :)
Depending on the implementation, there are two cryptographic functions that might be used (perhaps in conjunction):

- Cryptographic hash: An arbitrary amount of data (like a video file) is used to create a “hash”—a shorter, (effectively) unique text string. Anyone can run the file through the same function to see if it produces the same hash; if even a single bit of the file is changed, the hash will be completely different and you’ll know the data was altered.

- Public-key cryptography: A pair of keys is created. One, the private key, is kept secret and used to sign (in effect, encrypt) data; the other, “public” key can only verify (decrypt) data that was signed by the private key. Users (like the White House) can post their public key on their website; then if a subsequent message purporting to come from that user verifies against their public key, it proves it came from them.
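For the curious, here is what both primitives look like in practice: a minimal Python sketch using the third-party cryptography package (the video bytes are a stand-in, not a real file):

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in bytes for a video file (hypothetical content).
video = b"raw bytes of some official video release"

# 1) Cryptographic hash: any change to the data changes this value completely.
print("SHA-256:", hashlib.sha256(video).hexdigest())

# 2) Digital signature: the publisher signs with the private key...
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(video)

# ...and posts the public key so anyone can check the signature.
public_key = private_key.public_key()
try:
    public_key.verify(signature, video)
    print("Valid: this data came from the key holder and was not modified.")
except InvalidSignature:
    print("Invalid: altered, or signed by someone else.")
```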
a shorter, (effectively) unique text string
A note on this. There are other videos that will hash to the same value as a legitimate video. Finding one that is coherent is extraordinarily difficult. Maybe a state actor could do it?
But for practical purposes, it’ll do the job. Hell, if a doctored video with the same hash ever came out, the White House could just say “no, that’s not the one we published,” and the collision itself would be remarkable.
Finding one that is coherent is extraordinarily difficult.
You’d need to find one that was not just coherent, but that looked convincing and differed in a way that was useful to you—and that likely wouldn’t be guaranteed, even theoretically.
A note on sizes: hashes are measured by output length, and common cryptographic hashes output 256 or 512 bits; the 1024/4096-bit figures are RSA key sizes, not hash sizes. The counting argument still works, though. Take a single 1080p frame: 1920x1080 pixels. If you change only the least significant bit of each color channel, that’s 6,220,800 bits you can flip without anyone noticing, i.e. roughly 2^6,220,800 visually identical variants of the image, all mapped onto a hash space of only 2⁵¹² values. So identical-looking collisions must exist in astronomical numbers. This goes down a lot when you factor in compression (those least significant bits won’t survive re-encoding), but using a video brings it back up by orders of magnitude: rather than one image, you can tweak colors in every frame. The difficulty doesn’t come from existence, it comes from search: to match one specific hash you’d expect to check on the order of 2⁵¹² ≈ 10¹⁵⁴ candidates. Each individual hash is fast to compute; the problem is that no conceivable supercomputer gets anywhere near that many attempts.
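If you want to see the “completely different hash” behavior (the avalanche effect) for yourself, here’s a minimal sketch with Python’s standard hashlib, using made-up pixel bytes:

```python
import hashlib

# Made-up stand-in for the raw pixel data of one 1080p frame.
frame = bytearray(b"\x10\x20\x30" * 2_073_600)  # 3 channels x 1920x1080 pixels

before = hashlib.sha256(frame).hexdigest()

# Flip one least significant bit of one color channel: visually invisible.
frame[0] ^= 1

after = hashlib.sha256(frame).hexdigest()

print(before)
print(after)            # shares essentially nothing with `before`
print(before == after)  # False
```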
There are other videos that will hash to the same value
This concept is known as a ‘collision’ in cryptography. While collisions necessarily exist, there are entire fields of mathematics dedicated to provably ensuring they are cosmically unlikely to be found. MD5 and SHA-1 have weak enough designs (and small enough output spaces) for collisions to be intentionally generated in a reasonable timeframe, which is why they have been deprecated for several years.
To my knowledge, SHA-2 with a sufficiently large output size (SHA-256 or SHA-512; hashes have output sizes rather than key sizes) is still okay within the scope of modern computing. For quantum resistance, the bigger worry is the signature side, where you’d want something like CRYSTALS-Dilithium (CRYSTALS-Kyber is its key-exchange sibling).
The best way this could be handled is a green check mark near the video. You could click on it and get all the metadata of the video (location, time, source, etc.) along with a digital signature (what would look like a random string of text); click that, and your browser would show you the chain of trust: where the signature came from, that it’s valid, probably even the manufacturer of the equipment it was recorded on, etc.
Just make sure the check mark is outside the video.
The issue is making that green check mark hard to fake for bad actors. Https works because it is verified by the browser itself, outside the display area of the page. Unless all sites begin relying on a media player packed into the browser itself, if the verification even appears to be part of the webpage, it could be faked.
Hope verification gets built into operating systems, as compromised applications present a risk too.
But I’m sure a crook would build a MAGA Verifier since you can’t trust liberal Apple/Microsoft technology.
The only thing that comes to mind is something that forces interactivity outside the browser display area; out of the reach of Javascript and CSS. Something that would work for both mobile and desktop would be a toolbar icon that is a target for drag-and-drop. Drag the movie or image to the “verify this” target, and you get a dialogue or notification outside the display area. As a bonus, it can double for verifying TLS on hyperlinks while we’re at it.
Edit: a toolbar icon that’s draggable to the image/movie/link should also work the same. Probably easier for mobile users too.
Adobe is actually one of the leading actors in this field, take a look at the Content Authenticity Initiative (https://contentauthenticity.org/)
Like the other person said, it’s based on cryptographic hashing and signing. Basically the standard would embed metadata into the image.
For the average end-user, it would look like “https”. You would not have to know anything about the technical background. Your browser or other media player would display a little icon showing that the media is verified by some trusted institution and you could learn more with a click.
In practice, I see some challenges. You could already go to the source via https, e.g. whitehouse.gov, and verify it that way. An additional benefit exists only if you can verify media that have been re-uploaded elsewhere. Now the user needs to check not just that the media was signed by someone (e.g. whitehouse.gov.ru), but whether it was really signed by the right institution.
As someone points out above, this just gives them the power to not authenticate real videos that make them look bad…
I honestly feel strategies like this should be mitigated by technically savvy journalism, or even citizen journalism. 3rd parties can sign and redistribute media in the public domain, vouching for their origin. While that doesn’t cover all the unsigned copies in existence, it provides a foothold for more sophisticated verification mechanisms like a “tineye” style search for media origin.
Videos by third parties, like Trump’s pussy grabber clip, would obviously have to be signed by them. After having thought about it, I believe this is a non-starter.
It just won’t be as good as https. Such a signing scheme only makes sense if the media is shared away from the original website. That means you can’t just take a quick look at the address bar to make sure you are not getting phished. That doesn’t work if it could be any news agency. You have to make sure that the signer is really a trusted agency and not some scammy lookalike. That takes too much care for casual use, which defeats the purpose.
Also, news agencies don’t have much of an incentive to allow sharing their media. Any cryptographic signature would only make sense for them if it directs users to their site, where they can make money. Maybe the potential for more clicks (basically a kind of clickable watermark on media) could make this take off.
It needs some kind of handler, but we mostly have those in place. A web browser could be the handler, for instance. A web browser has the green padlock at the upper left telling you a page is secure, that https is on and valid. This could work like that: the browser verifies the video and displays a green or red dot in the corner, and the user can mouse over it or tap on it to see who it’s verified to be from. But it’s up to the user to check whether it says whitehouse.gov or dr-evil-mwahahaha.biz.
TL;DR: one day the user will see an overlay or notification that shows an image/movie is verified as from a known source. No extra software required.
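Under the hood, that browser check could be as simple as fetching the claimed signer’s published key and verifying against it. A sketch, with a made-up .well-known path and the Python cryptography package:

```python
import urllib.request

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_from_domain(domain: str, video: bytes, signature: bytes) -> bool:
    # Hypothetical well-known path where a site publishes its media-signing key.
    raw = urllib.request.urlopen(f"https://{domain}/.well-known/media-key").read()
    key = Ed25519PublicKey.from_public_bytes(raw)
    try:
        key.verify(signature, video)
        return True   # green dot: signed by whoever controls this domain
    except InvalidSignature:
        return False  # red dot

# The UI's real job is then showing WHICH domain vouches for the video:
# "verified: whitehouse.gov" vs "verified: dr-evil-mwahahaha.biz".
```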
Honestly, I can see this working great in future web browsers. Much like the padlock in the URL bar, we could see something on images that are verified. The image could display a padlock in the lower-left corner or something, along with the name of the source, demonstrating that it’s a securely verified asset. “Normal” images would be unaffected. The big problem is how to put something on the page that cannot be faked by other means.
It’s a little more complicated for software like phone apps for X or Facebook, but doable. The problem is that those products must choose to add this feature. Hopefully, losing reputation to being swamped with unverifiable media will be motivation enough to do so.
The underlying verification process is complex, but should be similar to existing technology (e.g. GPG). The key is that images and movies typically contain a “scratch pad” area in the file for miscellaneous stuff (metadata). This is where the image’s author can add a cryptographic signature for the file itself. The user would never even know it’s there.
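As a toy illustration of that “scratch pad” idea (not the actual format a standard like the CAI’s would use): sign the pixel data and stash the signature in a PNG text chunk, using Pillow and the Python cryptography package.

```python
from PIL import Image, PngImagePlugin
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()

# Stand-in image (in practice, the author's actual photo).
img = Image.new("RGB", (640, 480), color=(30, 60, 90))

# Sign the pixel data, not the whole file, so the signature can live inside it.
sig = key.sign(img.tobytes())

# Tuck the signature into the PNG's metadata "scratch pad".
meta = PngImagePlugin.PngInfo()
meta.add_text("signature", sig.hex())
img.save("signed.png", pnginfo=meta)

# Verification: reopen, recompute over the pixels, check the stored value.
signed = Image.open("signed.png")
try:
    key.public_key().verify(bytes.fromhex(signed.text["signature"]), signed.tobytes())
    print("pixels match the signature; the viewer never needed to see the metadata")
except InvalidSignature:
    print("image was altered after signing")
```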
Probably you’d notice a bit of extra time when posting, for the signature to be added, but that’s about it. The responsibility for verifying the signature would fall to the owners of the social media site. In circumstances where someone asks for verification, basically imagine it as a libel case on fast forward: you file a claim saying “I never said that”, they check signatures, they shrug and press the delete button, erasing the post, its crossposts, and (if the system is really good) screencap posts and their crossposts of the thing you did not say but that is still being attributed falsely to your account or person.
It basically gives a person absolute control of their own image and voice. Unless a piece of media can be proven to have been made with that person’s consent, or by that person themself, it can be wiped from the internet, no trouble.
Where it comes to second-party posters, news agencies and such, it’d be more complicated but more or less the same, with the added step that a news agency may be required to provide some supporting evidence that what they published is not some kind of misrepresentation, as the offended party filing the takedown might be trying to insist for the sake of their public image.
Of course there could still be a YouTube “Stats for Nerds”-esque add-in in the options tab on a given post that lets you sign-check it against the account it’s attributing something to. And a verified-account system could be developed that adds a layer of signing specifically identifying a published account, say for prominent news reporters/politicians/cultural leaders/celebrities, whose posts get their own feed so you can look at them or not depending on how ya be feelin’ that particular scroll session.
Huh. They actually do something right for once instead of spending years trying to ban AI tools. I’m pleasantly surprised.
Bingo. If, at the limit, the purpose of a generative AI is to be indistinguishable from human content, then watermarking and AI detection algorithms are absolutely useless.
The ONLY means to do this is to have creators verify their human-generated (or vetted) content at the time of publication (providing positive proof), as opposed to retroactively trying to determine whether content was generated by a human (proving a negative).
I mean banning use cases is deffo fair game, generating kiddy porn should be treated as just as heinous as making it the “traditional” way IMO
Yikes! The implication is that it does not matter if a child was victimized. It’s “heinous”, not because of a child’s suffering, but because… ?
Man imagine trying to make “ethical child rape content” a thing. What were the lolicons not doing it for ya anymore?
As for how it’s exactly as heinous, it’s the sexual objectification of a child, it doesn’t matter if it’s a real child or not, the mere existence of the material itself is an act of normalization and validation of wanting to rape children.
Being around at all contributes to the harm of every child victimised by a viewer of that material.
I see. Since the suffering of others does not register with you, you must believe that any “bleeding heart liberal” really has some other motive. Well, no. Most (I hope, but at least some) people are really disturbed by the suffering of others.
I take the “normalization” argument seriously. But I note that it is not given much credence in other contexts; violent media, games, … Perhaps the “gateway drug” argument is the closest parallel.
In the very least, it drives pedophiles underground where they cannot be reached by digital streetworkers, who might help them not to cause harm. Instead, they form clandestine communities that are already criminal. I doubt that makes any child safer. But it’s not about children suffering for you, so whatever.
Man imagine continuing to try and argue Ethical Child Rape Content should be a thing.
If we want to make sweeping attacks on character, I’d rather be on the “All Child Rape Material is Bad” side of the argument but whatever floats ya boat.
I don’t think he’s arguing that, and I don’t think you believe that either. Doubt any of us would consider that content ethical, but what he’s saying is it’s not nearly the same as actually doing harm (as opposed to what you said in your original post).
You implying that anyone who disagrees with you is somehow into those awful things is in extremely poor taste. I’d expect so much more on Lemmy; that is a Reddit/Facebook-level debate tactic. I guess I’m going to get accused of that too now?
I don’t like to give any of your posts any credit here, but I can somewhat see the normalization argument. However, where is the line drawn regarding other content that could be harmful because normalized? What about adult non-consensual-style porn, violence on TV and in video games, etc.? It’s a sliding scale, and everyone might draw the line somewhere else. There’s good reason why thinking about an awful thing (or writing, drawing, creating fiction about it) is not the same as doing an awful thing.
I doubt you’ll think much of this, but please really try to be better. It’s 2024; time to leave calling anyone you disagree with a pedo back on Facebook in the 90s.
Idk, making CP where a child is raped vs making CP where no children are involved seem on very different levels of bad to me.
Both utterly repulsive, but certainly not exactly the same.
One has a non-consenting child being abused, a child that will likely carry the scars of that for a long time, the other doesn’t. One is worse than the other.
E: do the downvoters like… not care about child sexual assault/rape or something? Raping a child and taking pictures of it is very obviously worse than putting parameters into an AI image generator. Both are vile. One is worse. Saying they’re equally bad is attributing zero harm to the actual assaulting children part.
Man imagine trying to make the case for Ethical Child Rape Material.
You are not going to get anywhere with this line of discussion, stop now before you say something that deservedly puts you on a watchlist.
I’m not making the case for that at all, and I find you attempting to make out that I am into child porn a disgusting debate tactic.
“Anybody who disagrees with my take is a paedophile” is such a poor argument and serves only to shut down discussion.
It’s very obviously not what I’m saying, and anybody with any reading comprehension at all can see that plainly.
You’ll notice I called it “utterly repulsive” in my comment - does that sound like the words of a child porn advocate?
The fact that you apparently don’t care at all about the child suffering side of it is quite troubling. If a child is harmed in its creation, then that’s obviously worse than some creepy fuck drawing loli in Inkscape or typing parameters into an AI image generator. I can’t believe this is even a discussion.
Yeah, good luck getting the general public to understand what a “cryptographically verified” video means
The general public doesn’t have to understand anything about how it works as long as they get a clear “verified by …” statement in the UI.
The problem is that even if you reveal the video as fake, the feeling it reinforces in the viewer stays with them.
“Sure, that was fake, but the fact that it seems believable tells you everything you need to know”
“Herd immunity” comes into play here. If those people keep getting dismissed by most other people because the video isn’t signed they’ll give up and follow the crowd. Culture is incredibly powerful.
It could work the same way the padlock icon worked for SSL sites in browsers back in the day. The video player checks the signature and displays the trusted icon.
Democrats will want cryptographically verified videos; Republicans will be happy with a stamp that has Trump’s face on it.
I mean, how is anyone going to cryptographically verify a video? You either have an icon in the video itself or displayed near it by the site, which means nothing; fakers just copy that into theirs. Alternatively you have to sign or make file hashes for each permutation of the video file sent out. At that point, how are normal people actually going to verify? At best they’re trusting the video player of whatever site they’re on to be truthful when it says that it’s verified.
Saying they want to do this is one thing, but as far as I’m aware, we don’t have a solution that accounts for the rampant re-use of presidential videos in news and secondary reporting either.
I have a terrible feeling that this would just be wasted effort beyond basic signing of the video file uploaded on the official government website, which really doesn’t solve the problem for anyone who can’t or won’t verify the hash on their end.
Maybe some sort of visual- and audio-based hash, like MusicBrainz IDs for songs, that is independent of the file itself and based instead on the sound and look of it. Then the government runs a server kind of like a PGP key server, and websites could integrate functionality to verify against it. But at the end of the day it still works out to an “I swear we’re legit, guys” stamp for anyone not technical enough to verify independently themselves.
I guess your post just seemed silly when the end result of this for anyone is effectively the equivalent of your “signed by trump” image, unless the public magically gets serious about downloading and verifying everything themselves independently.
Fuck trump, but there are much better ways to shit on king cheeto than pretending the average populace is anything but average based purely on political alignment.
You have to realize that to the average user, any site serving videos seems as trustworthy as youtube. Average internet literacy is absolutely fucking abysmal.
People aren’t going to do it, the platforms that 95% of people use (Facebook, Tik Tok, YouTube, Instagram) will have to add the functionality to their video players/posts. That’s the only way anything like this could be implemented by the 2024 US election.
In the end people will realise they can not trust any media served to them. But it’s just going to take time for people to realise… And while they are still blindly consuming it, they will be taken advantage of.
If it goes this road… Social media could be completely undermined. It could become the downfall of these platforms and do everyone a favour by giving them their lives back after endless doom scrolling for years.
Do it basically the same way TLS verification works. Sure, the browsers would have to add something to the UI to support it, but claiming you can’t trust that is dumb, because we already use it to trust that the site you’re on is your bank and not some scammer.
Sure not everyone is going to care to check, but the check being there allows people who care to reply back saying the video is faked due to X
“Not everybody will use it and it’s not 100% perfect so let’s not try”
That’s not the point. It’s that malicious actors could easily exploit that lack of knowledge to trick users into giving fake videos more credibility.
If I were a malicious actor, I’d put the words “✅ Verified cryptographically by the White House” at the bottom of my posts and you can probably understand that the people most vulnerable to misinformation would probably believe it.
Just make it a law that if as a social media company you allow unverified videos to be posted, you don’t get safe harbour protections from libel suits for that. It would clear right up. As long as the source of trust is independent of the government or even big business, it would work and be trustworthy.
Back in the day, many rulers allowed only licensed individuals to operate printing presses. It was sometimes even required that an official should read and sign off on any text before it was allowed to be printed.
Freedom of the press originally means that exactly this is not done.
Jesus, how did I get so old only to just now understand that press is not journalism, but literally the printing press in ‘Freedom of the press’.
You understand that there is a difference between being not permitted to produce/distribute material and being accountable for libel, yes?
“Freedom of the press” doesn’t mean they should be able to print damaging falsehood without repercussion.
What makes the original comment legally problematic (IMHO), is that it is expected and intended to have a chilling effect pre-publication. Effectively, it would end internet anonymity.
It’s not necessarily unconstitutional. I would have made the argument if I thought so. The point is rather that history teaches us that close control of publications is a terrible mistake.
The original comment wants to make sure that there is always someone who can be sued/punished, with obvious consequences for regime critics, whistleblowers, and the like.
So your suggestion is that libel, defamation, harassment, et al are just automatically dismissed when using online anonymous platforms? We can’t hold the platform responsible, and we can’t identify the actual offender, so whoops, no culpability?
I strongly disagree.
That’s not what the commenter said and I think you are knowingly misrepresenting it.
I am not. And if that’s not what’s implied by their comments then I legitimately have no idea what they’re suggesting and would appreciate an explanation.
As long as the source of trust is independent of the government or even big business, it would work and be trustworthy
That sounds like wishful thinking
I don’t blame them for wanting to, but this won’t work. Anyone who would be swayed by such a deepfake won’t believe the verification if it is offered.
Agreed and I still think there is value in doing it.
I honestly do not see the value here. Barring maybe a small minority, anyone who would believe a deepfake about Biden would probably also not believe the verification and anyone who wouldn’t would probably believe the administration when they said it was fake.
The value of the technology in general? Sure. I can see it having practical applications. Just not in this case.
If a cryptographic claim/validation is provided then anyone refuting the claims can be seen to be a bad faith actor. Voters are one dimension of that problem but mainstream media being able to validate election videos is super important both domestically, but also internationally as the global community needs to see efforts being undertaken to preserve free and fair elections. This is especially true given the consequences if america’s enemies are seen to have been able to steer the election.
Sure, the grandparents that get all their news via Facebook might see a fake Biden video and eat it up like all the other hearsay they internalize.
But, if they’re like my parents and have the local network news on half the damn time, at least the typical mainstream network news won’t be showing the forged videos. Maybe they’ll even report a fact check on it?!?
And yeah, many of them will just take it as evidence that the mainstream media is part of the conspiracy. That’s a given.
I don’t think that’s what this is for. I think this is for reasonable people, as well as for other governments.
Besides, passwords can be phished or socially engineered, and some people use “abc123.” Does that mean we should get rid of password auth?
Deepfakes could get better. And if they do, a lot more people will start to get fooled
This doesn’t solve anything. The White House will only authenticate videos which make the President look good. Curated and carefully edited PR. Maybe the occasional press conference. The vast majority of content will not be authenticated. If anything this makes the problem worse, as it will give the President remit to claim videos which make them look bad are not authenticated and should therefore be distrusted.
It needs to be more general. A video should have multiple signatures. Each signature relies on the signer’s reputation, which works both ways. It won’t help those who don’t care about their reputation, but will for those that do.
A photographer who passes off a fake photo as real will have their reputation hit, if they are caught out. The paper that published it will also take a hit. It’s therefore in the paper’s interest to figure out how trustworthy the supplier is.
I believe Canon recently announced a camera that cryptographically signs photographs at the point of creation. At that point, the photographer can prove the camera, the editor can prove the photographer, the paper can prove the editor, and the reader can prove the newspaper. If done right, the final viewer can also prove the whole chain, semi-independently. It won’t be perfect (far from it) but it might be the best we’ll get. Each party wants to protect their reputation, and so has a vested interest in catching fraud.
For this to work, we need a reliable way to sign images multiple times, as well as (optionally) encode an edit history into them. We also need a quick way to match a signature to the signer’s public key.
An option to upload a timestamped signature to a trusted 3rd party would also be of significant benefit. Ironically, blockchain might actually be a good use for this, in case a trusted 3rd party can’t be established.
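A minimal sketch of what multiple signatures over one image could look like, where each party signs the content hash plus every signature before it (all parties and data here are hypothetical stand-ins):

```python
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def add_signature(chain: list[bytes], signer: Ed25519PrivateKey,
                  content_hash: bytes) -> list[bytes]:
    """Sign the content hash together with every signature so far,
    so no link in the chain can be removed or reordered unnoticed."""
    message = content_hash + b"".join(chain)
    return chain + [signer.sign(message)]

photo_hash = hashlib.sha256(b"stand-in for raw sensor data").digest()

camera, photographer, editor, paper = (Ed25519PrivateKey.generate() for _ in range(4))

chain: list[bytes] = []
for party in (camera, photographer, editor, paper):
    chain = add_signature(chain, party, photo_hash)

print(f"{len(chain)} signatures; each covers the photo and all earlier signatures")
```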
Great points and I agree. I also think the signature needs to be built into the stream in a continuous fashion so that snippets can still be authenticated.
Agreed. Embed a per-frame signature into every key frame when encoding, and include the video file timestamp. This will mean any clip longer than around 1 second will include at least 1 signed frame.
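A rough sketch of that idea (an illustration, not a real codec integration): each segment is signed together with its index and timestamp, so any clip containing a complete segment can still be checked.

```python
import struct

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()

def sign_segment(index: int, timestamp_ms: int, payload: bytes) -> bytes:
    # Binding the index and timestamp into the signed message prevents
    # reordering segments or splicing them into a different video.
    header = struct.pack(">QQ", index, timestamp_ms)
    return key.sign(header + payload)

# Stand-ins for ~1-second chunks of encoded video.
segments = [b"keyframe-0 ...", b"keyframe-1 ...", b"keyframe-2 ..."]
signatures = [sign_segment(i, i * 1000, s) for i, s in enumerate(segments)]

# Verifying a clip: recompute the header for each contained segment and
# check its signature against the publisher's public key.
```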
I don’t think that’s practical or particularly desirable.
Today, when you buy something, e.g. a phone, the brand guarantees the quality of the product, and the seller guarantees the logistics chain (that it’s unused, not stolen, not faked, not damaged in transport, …). The typical buyer does not care about the parts used, the assembly factory, etc.
When a news source publishes media, they vouch for it. That’s what they are paid for (as it were). If the final viewer is expected to check the chain, they are asked to do the job of skilled professionals for free. Do-your-own-research rarely works out, even for well-educated people. Besides, in important cases, the whole chain will not be public to protect sources.
It wouldn’t be intended for day-to-day use. It’s intended as an audit trail/chain of custody. Think of it more akin to a git history. As a user, you generally don’t care; however, it can be excellent for retrospective analysis when someone/something does screw up.
You would obviously be able to strip it out, but having it as a default would be helpful with openness.
I’ve thought about this too, but I’m not sure it would work. First, you could hack the firmware of a cryptographically signing camera; I already read about a camera like this that was hacked and its private key leaked. You could give each camera an individual key and then revoke it, maybe.
But you could also just photograph a monitor or something like that, or use a specially altered camera lens.
Ultimately you’d probably need something like quantum entangled photon encoding to prove that the photons captured by the sensor were real photons and not fake photons. Like capturing a light field or capturing a spectrum of photons. Not sure if that is even remotely possible but it sounds cool haha.
I don’t understand your concern. Either it’ll be signed White House footage or it won’t. They have to sign all their footage otherwise there’s no point to this. If it looks bad, don’t release it.
The point is that if someone catches the President shagging kids, of course that footage won’t be authenticated by the WH. We need a tool so that a genuine piece of footage of the Pres shagging kids would be authenticated, but a deepfake of the same would not. The WH is not a good arbiter since they are not independent.
Politicians and anyone at deepfake risk wear a digital pendant at all times. The pendant displays continually rotating time-based codes. People record themselves using video hardware which cryptographically signs its output.
Only a law/Big 4 firm can extract video from the official camera (which has a twin for hot swapping).
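The rotating codes described here are essentially TOTP, the same scheme authenticator apps use. A minimal sketch with the third-party pyotp library:

```python
import pyotp

# The pendant and the verifier share this secret at enrollment.
secret = pyotp.random_base32()
pendant = pyotp.TOTP(secret, interval=30)  # a fresh code every 30 seconds

code_on_screen = pendant.now()  # what the pendant displays in the video frame

# Later, a verifier checks the visible code against the frame's capture time.
verifier = pyotp.TOTP(secret, interval=30)
print(verifier.verify(code_on_screen))  # True within the validity window
```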
But we are talking about official WH videos. Start signing those.
If it’s not from the WH, it isn’t signed. Or perhaps it’s signed by whatever media company is behind its production or maybe they’ve verified the video and its source enough to sign it. So maybe, let’s say the Washington Post can publish some compromising video of the President but it still has certain accountability as opposed to some completely random Internet video.
Then this exercise is a waste of time. All the hard hitting journalism which presses the President and elicits a negative response will be unsigned, and will be distributed across social media as it is today: without authentication. All the videos for which the White House is concerned about authenticity will continue to circulate without any cause for contention.
Anyone can digitally sign anything (maybe not easily or for free). The White House can verify or not verify whatever they choose, but if you, as a journalist let’s say, want to give credence to video you distribute, you’ll want to digitally sign it. If a video switches hands several times without being signed, it might as well have been cooked up by the last person that touched it.
That’s fine?
Signatures aren’t meant to prove authenticity. They’re proving the source which you can use to weigh the authenticity.
I think the confusion comes from the fact that cryptographic signatures are mostly used in situations where proving the source is equivalent to proving authenticity. Proving a text message is from me proves the authenticity as there’s no such thing as doctoring my own text message. There’s more nuance when you’re using signatures to prove a source which may or may not be providing trustworthy data. But there is value in at least knowing who provided the data.
Fucking finally. We’ve had this answer to digital fraud for ages.
Sounds like a very Biden thing (or a thing for anyone well into their Golden Years) to say, “Use cryptography!”, but it’s not without merit. How do we verify file integrity? How do we digitally sign documents?
The problem we currently have is that anything that looks real tends to be accepted as real (or authentic). We can’t rely on humans to verify authenticity of audio or video anymore. So for anything that really matters we need to digitally sign it so it can be verified by a certificate authority or hashed to verify integrity.
This doesn’t magically fix deep fakes. Not everyone will verify a video before distribution and you can’t verify a video that’s been edited for time or reformatted or broadcast on the TV. It’s a start.
We’ve had this discussion a lot in the Bitcoin space. People keep arguing it has to change so that “grandma can understand it”, but I think that’s unrealistic. Every technology has some inherent complexities that cannot be removed and that people have to learn if they want to use it. And people will use it if the motivation is there. Wifi has some inherent complexities people have become comfortable with. People know how to look through lists of networks, find the right one, and enter the passkey or go through the sign-on page. Some non-technical people know enough about how Wifi should behave to know the internet connection might be out or the router might need a reboot. None of this knowledge was commonplace 20 years ago. It is now.
The knowledge required to leverage the benefits of cryptographic signatures isn’t beyond the reach of most people. The general rules are pretty simple. The industry just has to decide to make the necessary investments to motivate people.
The President’s job isn’t really to be an expert on everything, the job is more about being able to hire people who are experts.
If this was coupled with a regulation requiring social media companies to do the verification and indicate that the content is verified then most people wouldn’t need to do the work to verify content (because we know they won’t).
It obviously wouldn’t solve every problem with deepfakes, but at least it couldn’t be content claiming to be from CNN or whoever. And yes, someone editing content from trusted sources would make that content no longer trusted, but that’s actually a good thing. You can edit videos to make someone look bad, you can slow them down to make a person look drunk, etc. This kind of content should not be considered trusted either.
Someone doing a reaction video going over news content or whatever could have their stuff be considered trusted, but it would be indicated as content from the person that produced the reaction video, not as content coming from the original news source. So if you see a “news” video that has its verified source as “xXX_FlatEarthIsReal420_69_XXx” rather than CNN, AP News, NY Times, etc., you kinda know what’s up.
The number of 80 year olds that know what cryptography is AND know that it’s a proper solution here is not large. I’d expect an 80 year old to say something like “we should only look at pictures sent by certified mail” or “You cant trust film unless it’s an 8mm and the can was sealed shut!”
It would become quite easy to dismiss anything for not being cryptographically verified simply by not cryptographically verifying.
I can see the benefit of having such verification but I also see how prone it might be to suppressing unpopular/unsanctioned journalism.
Unless the proof is very clear and easy for the public to understand the new method of denial just becomes the old method of denial.
It would be nice if none of this was necessary… but we don’t live in that world. There is a lot of straight up bullshit in the news these days especially when it comes to controversial topics (like the war in Gaza, or Covid).
You could go a really long way by just giving all photographers the ability to sign their own work. If you know who took the photo, then you can make good decisions about whether to trust them or not.
Random account on a social network shares a video of a presidential candidate giving a speech? Yeah maybe don’t trust that. Look for someone else who’s covered the same speech instead, obviously any real speech is going to be covered by every major news network.
That doesn’t stop ordinary people from sharing presidential speeches on social networks. But it would make it much easier to identify fake content.
Once people get used to cryptographically signed videos, why only trust one source? If a news outlet is found signing a fake video, they will be in trouble. Loss of said trust if nothing else.
We should get to the point we don’t trust unsigned videos.
If a news outlet is found signing a fake video, they will be in trouble.
I see you’ve never heard of Fox News before.
https://en.wikipedia.org/wiki/Fox_News_controversies#Video_footage_manipulation
Yes, and now people don’t trust Fox News, to the point it is close to being banned from being used as a source for anything on Wikipedia
I don’t know that ‘about to be banned by Wikipedia’ is a good metric for how much the general American public trusts Fox News. It could be that most of them don’t, but that is not a good way to tell considering there’s no general public input on what Wikipedia accepts as a source.
Also, it should have been banned by Wikipedia years ago.
Not trusting unsigned videos is one thing, but will people be judging the signature or the content itself to determine if it is fake?
Why only one source should be trusted is a salient point. If we are talking trust: it feels entirely plausible that an entity could use its trust (or power) to manufacture a signature.
And for some it is all too relevant that entities like the White House (or a gamut of others, past or present) have certainly presented false information as true to do things like invade countries.
Trust is a much more flexible concept, one that is liable to be bent. And so cryptographic verification really has to demonstrate how and why something is fake to the general public. Otherwise it is just a big “trust me bro.”
You’re right that cryptographic verification can only prove who signed the video. But that means nutters sharing “BBC videos” that don’t have the BBC signature can basically be dismissed straight off. We are already in a soup of misinformation, so sources being cryptographically provable is a step forward. Whether you trust those sources or not is another matter, but at least you know if it’s the true source or not. If a source abuses the trust it has, it loses that trust.
The technology to do this has existed for decades, and it’s crazy to me that people aren’t doing it all the time yet.
You mean to tell me that cryptography isn’t the enemy and that instead of fighting it in the name of “terrorism and child protection” that we should be protecting children by having strong encryption instead??
Why not just use official channels of information, e.g. a White House Mastodon instance with politicians’ accounts, government-hosted and auto-mirrored by third parties?
I’m sure they do. AI regulation probably would have helped with that. I feel like congress was busy with shit that doesn’t affect anything.
I salute whoever has the challenge of explaining basic cryptography principles to Congress.
Might just as well show a dog a card trick.
That’s why I feel like this idea is useless, even for the general population. Even with some sort of visual/audio-based hashing, so that the hash is independent of minor changes like video resolution which don’t change the content, and with major video sites implementing a way to verify that the hash matches one from a trustworthy keyserver equivalent…
The end result for anyone not downloading the videos and verifying them themselves is the equivalent of those old “✅ safe ecommerce site, we swear” images. Any dedicated misinformation campaign will just fake it, and that will be enough for the people who would have believed the fake to begin with.
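For what it’s worth, perceptual hashes like the one described are already easy to compute; a minimal sketch with the third-party imagehash library and Pillow (file names and the threshold are assumptions):

```python
import imagehash
from PIL import Image

# Perceptual hashes change little under re-encoding or resizing,
# unlike cryptographic hashes, which change completely.
original = imagehash.phash(Image.open("speech_frame.png"))       # hypothetical files
reencoded = imagehash.phash(Image.open("speech_frame_720p.jpg"))

# Hamming distance: small for the "same" picture, large for different ones.
distance = original - reencoded
print(f"distance = {distance}")
if distance <= 8:  # threshold is an assumption; tune per application
    print("likely the same content")
else:
    print("likely different content")
```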
I see no difference between creating a fake video/image with AI and Adobe’s packages. So to me this isn’t an AI problem, it’s a problem that should have been resolved a couple of decades ago.
When it comes to misinformation, I always remember how, when I was a kid in the early 90s, another kid told me confidently that the USSR had landed on Mars, gathered rocks, filmed it, and returned to Earth (it now occurs to me that this homeschooled kid was confusing it with the real moon landing). I remember knowing it was bullshit but not having a way to check the facts. The Internet solved that problem. Now, by God, the Internet has recreated the same problem.
Government also puts backdoor in said math, gets hacked, official fakes released
Or, more likely, they will only discredit fake news and not verify actual footage that reflects poorly on them. Like a hot mic catching them calling someone a jackass; the White House says no comment.
I think this is a great idea. Hopefully it becomes the standard soon, cryptographically signing clips or parts of clips so there’s no doubt as to the original source.
What if I meet Joe and take a selfie of both of us using my phone? How will people know that my selfie is of the authentic Joe Biden?
Ultimately, reputation-based trust combined with cryptographic keys is likely the best we can do. You (semi-automatically) sign the photo and upload its stamp to a 3rd party. They can verify that they received the stamp from you, and at what time. That proves the image existed at that time and that it’s linked to your reputation. Anything more is just likely to leak, security-wise.
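A sketch of that stamp upload, assuming a hypothetical third-party timestamping endpoint (in practice an RFC 3161 timestamping authority plays this role):

```python
import hashlib
import json
import urllib.request

photo = b"stand-in bytes for the signed selfie"
digest = hashlib.sha256(photo).hexdigest()

# Made-up URL; a real deployment would talk to an RFC 3161 TSA instead.
req = urllib.request.Request(
    "https://timestamp.example.com/stamp",
    data=json.dumps({"sha256": digest}).encode(),
    headers={"Content-Type": "application/json"},
)
receipt = urllib.request.urlopen(req).read()

# The service's signed receipt proves this exact photo existed no later
# than the stamped time, tying it to the uploader's reputation.
```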
That’s the big question. How will we verify anything as real?
Probably a signed comment from the Double-Cone Crusader himself, basically free PR so I don’t see why he or any other president wouldn’t at least have an intern give you a signed comment fist bump of acknowledgement