Brad Smith urges steps to curtail deepfakes "with an intent to deceive or defraud."
Let me just hazard a guess that Krita, GIMP, Blender and all the open source art software won't be able to watermark in an "accepted" way. This proposal really strikes me as yet another attempt at vendor lock-in rather than actually addressing the problem of fakes.
What does it mean when companies like Microsoft are selling the very product that they are warning us about?

"We don't want less scrupulous vendors to take our revenue, so we'll tell you it's harmful as we continue to sell it."
Flagging the potential harm from AI generated deepfakes is popular . . . and also wildly overstated, for the following reason:

Basically this - AI will lower the bar somewhat, but there are too many humans willing to attach too much significance to something they see which they already want to believe, and then deciding to propagate it. Go back forty years to the Hitler Diaries debacle, which had nothing to do with AI but still persuaded huge numbers of people who should have known better that Hitler wrote those diaries - on paper that didn't even exist during WWII.
AI can't do anything that sufficiently motivated people can't already do - and we have PLENTY of those. Deepfake blast at the Pentagon? I can photoshop a better image than the deepfake in minutes, and people have been able to do that digitally for decades. Has that been a "hair-on-fire" problem? No. The only thing AI could do is lower the bar, increasing the quantity of faked materials, but that isn't a problem, because provenance is either crucial . . . or it isn't. Did it come from a respected journalistic source that would - you know - verify a wildly extravagant image or claim before publishing, or is it some random FB or Twitter post?
If you tend to believe a rando post that confirms a particular bias, well - good luck. You're lost, regardless of whether you see two human-fabricated posts a day or ten thousand AI-generated ones. And you've already been inundated with misinformation for decades now, whether it's fake news or too-good-to-be-true advertisements. Is AI a fundamental change to that situation? Nope.
Since nearly the dawn of image-making, people have learned to be skeptical of depictions. AI is just another reason to up that dubiousness, and it will inevitably have that effect.
I see a legitimate concern in the speed of social media and modern communication, but that's not AI's problem. With the slower communication of 30 years ago, SVB might have been saved - but the facts that led to the recent bank run were not faked.
Is digital forensics capable of determining if a photo is deepfaked?

I really don't know, but I tend to doubt that it can, and it will likely get even more difficult as AI improves.
This would have plenty of limitations, but I wonder if it would be possible for digital cameras to sign their output so you could verify it was unedited.

This is, quite obviously, completely impossible.
I'm hoping for a future where you can at least sometimes trust that a news broadcast or cell phone video has documented actual reality. Not that cameras aim to document reality with 100% accuracy, but if every 10x10 grid of pixels had a checksum, and the hardware signed those checksums plus a list of any corrections it made with a private key, it could do some good.
Editors & broadcasters could sign their output too, so that you'd be able to review a news broadcast 5 years later & be confident it hasn't changed. Badge-cams & dashcams especially would benefit from being verifiable. It won't be long before the mere possibility of a deepfake provides reasonable doubt to a jury.
There is also a risk that if the fingerprint is unique enough it could identify the specific device & who recorded the video, but pixels presented as recorded and unedited by an iPhone 13 might be good enough. There's no reason why video from law enforcement bodycams shouldn't be signed & tied to specific hardware to be used in court.
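The per-tile checksum idea above can be sketched in a few lines. This is only the integrity half: it hashes each 10x10 block of pixels so an edit can be detected and localized. A real camera would additionally sign the resulting manifest with a device-private key (e.g. Ed25519 in a secure element), which is omitted here; all names are illustrative.

```python
# Toy sketch of per-tile integrity checksums: hash every 10x10 pixel block
# so that a later edit can be localized to a tile. Signing the manifest
# (the part that makes it trustworthy) is deliberately left out.
import hashlib

TILE = 10  # 10x10 pixel grid, as suggested in the post

def tile_manifest(pixels):
    """pixels: 2D list of grayscale ints. Returns {(tile_row, tile_col): sha256 hex}."""
    h = {}
    for r in range(0, len(pixels), TILE):
        for c in range(0, len(pixels[0]), TILE):
            block = bytes(
                pixels[rr][cc] & 0xFF
                for rr in range(r, min(r + TILE, len(pixels)))
                for cc in range(c, min(c + TILE, len(pixels[0])))
            )
            h[(r // TILE, c // TILE)] = hashlib.sha256(block).hexdigest()
    return h

def tampered_tiles(manifest, pixels):
    """Return the set of tile coordinates whose checksum no longer matches."""
    now = tile_manifest(pixels)
    return {k for k in manifest if manifest[k] != now.get(k)}

# Usage: a 20x20 image, then edit one pixel and localize the change.
img = [[(r * c) % 256 for c in range(20)] for r in range(20)]
m = tile_manifest(img)
img[3][14] = 255                 # simulated edit of a single pixel
print(tampered_tiles(m, img))    # only tile (0, 1) is flagged
```

One consequence of tiling: an editor can crop or redact a region and the remaining tiles still verify, which is roughly what a broadcaster signing its corrections would need.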
I really don't know but I tend to doubt that it can and it will likely get even more difficult as AI improves.

Spoof the AI-generated watermark? What a sorry state of affairs it will be when human-generated work is snubbed as a matter of course.
It seems to me that could be the answer - require some kind of easily detectable watermark in all AI-generated images and videos. It wouldn't necessarily need to be detectable by a human either - digital detection could work, as in some form of steganography. Of course, that is much easier to write (and say) than to implement and legislate.
ETA: as soon as those steps are taken it is likely the ability to spoof those required watermarks will appear making them essentially useless.
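A minimal sketch of the steganographic-watermark idea, and of the ETA's caveat: embed a known bit pattern in the least-significant bits of pixel values, detect it later, and note that trivial re-processing (here, clearing the LSBs, as lossy re-encoding tends to do) erases it. The bit pattern and pixel values are made up for illustration.

```python
# LSB steganography sketch: embed a known "AI-generated" tag in the
# least-significant bits of pixel values, then show how easily it is lost.
WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical tag pattern

def embed(pixels, mark=WATERMARK):
    """Write the mark into the LSB of the first len(mark) pixel values."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit
    return out

def detect(pixels, mark=WATERMARK):
    """True if the LSBs of the leading pixels match the mark."""
    return [(p & 1) for p in pixels[:len(mark)]] == mark

img = [200, 13, 77, 54, 90, 121, 33, 68, 240, 7]   # toy pixel row
tagged = embed(img)
print(detect(tagged))                  # True: watermark present
stripped = [p & ~1 for p in tagged]    # crude re-encode: LSBs lost
print(detect(stripped))                # False: watermark gone
```

Real proposals use far more robust schemes than raw LSBs, but the asymmetry stands: the detector must survive every transformation an adversary can apply, and the adversary only has to find one that the detector doesn't survive.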
I really don't know but I tend to doubt that it can and it will likely get even more difficult as AI improves. It seems to me that could be the answer - require some kind of easily detectable watermark in all AI-generated images and videos. Not necessarily by a human either - digital detection could work as in some form of steganography. Of course, that is much easier to write (and say) than to implement and legislate. ETA: as soon as those steps are taken it is likely the ability to spoof those required watermarks will appear making them essentially useless.

The problem with all the talk of watermarking is the simple fact that the most popular AI image generation software is open source. Don't want a watermark? Don't compile that module.
So the question remains: What does it mean when companies like Microsoft are selling the very product that they are warning us about?
There is a simple technical fix for this. Have the cameras sign each frame. Other than needing a new camera, I don't see the issue here. Then you can deepfake all you want and it gets marked. No mark... means it is highly suspect.

The tech already exists, and is even already implemented in some cameras. It would be a software change to have my smartphone start doing this. One signature from the device and then my signature as the owner/operator. The camera has my signing key and the camera's own signing key. Standard PKI stuff. Anything not signed by trusted keys is not able to be authenticated.

1. What keys are the cameras using to sign each frame? Where are they stored? How are they accessed? How are they verified?
2. Hex editors let you manipulate the individual bits in a file. What keeps someone from editing the raw image file to remove the watermarks?
3. Why is no mark highly suspect, if the mark is a sign of a deepfake? That sounds backwards.
4. If the solution is as “simple” as you suggest, why isn’t it being used? Or do you honestly think you’ve thought of something no one else in the field ever has?
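For what the "sign each frame" scheme above amounts to mechanically, here is a sketch. Real devices would use asymmetric signatures (e.g. Ed25519) with keys held in tamper-resistant hardware; as a stand-in this uses HMAC-SHA256 from the standard library, so "verify" here quietly assumes the verifier shares the key - exactly the kind of simplification question 1 above is probing. All names and the frame bytes are illustrative.

```python
# Sketch of per-frame signing with a device key. HMAC is a symmetric
# stand-in for a real asymmetric signature scheme; it demonstrates the
# tamper-evidence, not the key-distribution problem.
import hashlib, hmac, os

DEVICE_KEY = os.urandom(32)   # would live in tamper-resistant hardware

def sign_frame(frame: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, frame, hashlib.sha256).digest()

def authenticate(frame: bytes, sig: bytes) -> str:
    """Per the post above: a missing or invalid mark makes a frame 'suspect'."""
    if hmac.compare_digest(sign_frame(frame), sig):
        return "authentic"
    return "suspect"

frame = b"\x89PNG...frame-0001"      # stand-in for raw sensor output
sig = sign_frame(frame)
print(authenticate(frame, sig))              # authentic
print(authenticate(frame + b"edit", sig))    # suspect: pixels changed
print(authenticate(frame, b"\x00" * 32))     # suspect: no valid mark
```

Note how this sidesteps rather than answers questions 1 and 2: editing bits in a hex editor is caught only because the signature no longer matches, but nothing here prevents stripping the signature entirely, or extracting a key from a compromised device and signing a fake.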