Microsoft president declares deepfakes biggest AI concern

Along these lines, companies such as Adobe, Google, and Microsoft are all working on ways to watermark or otherwise label AI-generated content.
Let me just hazard a guess that Krita, GIMP, Blender and all the open source art software won't be able to watermark in an "accepted" way. This proposal really strikes me as yet another attempt at vendor lock-in rather than actually addressing the problem of fakes.
 
Upvote
10 (10 / 0)
Of course these guys are worried about the tip of the iceberg -- that's what they're paid to see, and at best that's what they'll see.

To me the really scary things are the unknown unknowns - the ways people will use this stuff no one has thought of. We're not good as a species at the precautionary principle, and this time it's going to be really rough on us.
 
Upvote
6 (6 / 0)

Edgar Allan Esquire

Ars Praefectus
3,096
Subscriptor
What does it mean when companies like Microsoft are selling the very product that they are warning us about?
"We don't want less scrupulous vendors to take our revenue, so we'll tell you it's harmful as we continue to sell it."
I recall my FE ethics exam guide referring to it as a "cop-out."
 
Upvote
3 (3 / 0)

Whiner42

Ars Scholae Palatinae
1,200
Flagging the potential harm from AI generated deepfakes is popular . . . and also wildly overstated, for the following reason:

AI can't do anything that sufficiently motivated people can't already do - and we have PLENTY of those. Deepfake blast at the Pentagon? I can photoshop a better image than the deepfake in minutes, and people have been able to do that digitally for decades now. Has that been a "hair-on-fire" problem? No. The only thing AI can do is lower the bar, increasing the quantity of faked material, but that isn't a problem, because provenance is either crucial . . . or it isn't. Did it come from a respected journalistic source that would - you know - verify a wildly extravagant image or claim before publishing, or is it some random FB or Twitter post?

If you tend to believe a rando post that confirms a particular bias, well - good luck. You're lost, regardless of whether you see two human-fabricated posts a day or ten thousand AI-generated ones. And you've already been inundated with misinformation for decades now, whether it's fake news or too-good-to-be-true advertisements. Is AI a fundamental change to that situation? Nope.

Since nearly the dawn of image-making, people have learned to be skeptical of depictions. AI is just another reason to up that dubiousness, and it will inevitably have that effect.

I see a legitimate concern being the speed of social media and modern communication, but that's not AI's problem. 30 years ago, SVB could have been saved, but the facts that led to the recent bank run were not faked.
 
Upvote
1 (4 / -3)

KBGB

Ars Scholae Palatinae
705
This would have plenty of limitations, but I wonder if it would be possible for digital cameras to sign their output so you could verify it was unedited.

I'm hoping for a future where you can at least sometimes trust that a news broadcast or cell phone video has documented actual reality. Not that cameras aim to document reality with 100% accuracy, but if every 10x10 grid of pixels had a checksum & a list of corrections made by the hardware, all signed with a private key, it could do some good.

Editors & broadcasters could sign their output too, so that you'd be able to review a news story 5 years later & be confident it hasn't changed. Badge-cams & dashcams especially would benefit from being verifiable. It won't be long before the mere possibility of a deepfake provides reasonable doubt to a jury.

There is also a risk that if the fingerprint is unique enough it could identify the specific device & who recorded the video, but pixels presented as recorded and unedited by an iPhone 13 might be good enough. There's no reason why video from law enforcement bodycams shouldn't be signed & tied to specific hardware to be used in court.
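The 10x10-grid idea above can be sketched in a few lines. This is a toy only: an HMAC with a hypothetical shared `device_key` stands in for the asymmetric signature a real camera would use, and the flat grayscale buffer is an invented simplification:

```python
import hashlib
import hmac

def tile_digests(pixels, width, height, tile=10):
    """SHA-256 digest of each tile x tile block of a flat grayscale buffer."""
    digests = []
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            h = hashlib.sha256()
            for y in range(ty, min(ty + tile, height)):
                start = y * width + tx
                end = y * width + min(tx + tile, width)
                h.update(bytes(pixels[start:end]))
            digests.append(h.digest())
    return digests

def sign_manifest(digests, device_key):
    # HMAC stand-in: a real camera would sign with a private key so
    # verifiers never need to hold the secret.
    return hmac.new(device_key, b"".join(digests), hashlib.sha256).digest()

def verify(pixels, width, height, device_key, signature):
    expected = sign_manifest(tile_digests(pixels, width, height), device_key)
    return hmac.compare_digest(expected, signature)
```

Flipping even one bit of one pixel changes that tile's digest and invalidates the signature, which is the property the comment is after.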
 
Upvote
-6 (0 / -6)

rjd185

Ars Scholae Palatinae
784
Subscriptor
Flagging the potential harm from AI generated deepfakes is popular . . . and also wildly overstated, for the following reason:

AI can't do anything that sufficiently motivated people can't already do - and we have PLENTY of those. Deepfake blast at the Pentagon? I can photoshop a better image than the deepfake in minutes, and people have been able to do that digitally for decades now. Has that been a "hair-on-fire" problem? No. The only thing AI can do is lower the bar, increasing the quantity of faked material, but that isn't a problem, because provenance is either crucial . . . or it isn't. Did it come from a respected journalistic source that would - you know - verify a wildly extravagant image or claim before publishing, or is it some random FB or Twitter post?

If you tend to believe a rando post that confirms a particular bias, well - good luck. You're lost, regardless of whether you see two human-fabricated posts a day or ten thousand AI-generated ones. And you've already been inundated with misinformation for decades now, whether it's fake news or too-good-to-be-true advertisements. Is AI a fundamental change to that situation? Nope.

Since nearly the dawn of image-making, people have learned to be skeptical of depictions. AI is just another reason to up that dubiousness, and it will inevitably have that effect.

I see a legitimate concern being the speed of social media and modern communication, but that's not AI's problem. 30 years ago, SVB could have been saved, but the facts that led to the recent bank run were not faked.
Basically this - AI will lower the bar somewhat, but there are too many humans willing to attach too much significance to something they see which they already want to believe, and then deciding to propagate it. Go back forty years to the Hitler Diaries debacle, which had nothing to do with AI but still persuaded huge numbers of people who should have known better, even though the diaries were written on paper that didn't even exist during WWII.

Even if legislation ignored the tool used to create deceptive content and successfully created an enforceable sanction on the creative act, the problem is, as noted, the contagion, not the act of creation. Legislating against what people can or can't repeat is 'somewhat tricky'.
 
Upvote
2 (2 / 0)
Is digital forensics capable of determining if a photo is deepfaked?
I really don't know but I tend to doubt that it can and it will likely get even more difficult as AI improves.

It seems to me that could be the answer - require some kind of easily detectable watermark in all AI-generated images and videos. Not necessarily detectable by a human, either - digital detection could work via some form of steganography. Of course, that is much easier to write (and say) than to implement and legislate.

ETA: as soon as those steps are taken, it is likely the ability to spoof or strip those required watermarks will appear, making them essentially useless.
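As a toy illustration of why a steganographic watermark is easy to strip, here's a minimal least-significant-bit scheme (the `mark` payload and flat pixel buffer are invented for the example; real proposals are more robust, but face the same arms race):

```python
def embed_watermark(pixels, mark):
    """Hide `mark` (bytes) in the least-significant bits of a pixel buffer."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("image too small for watermark")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite the lowest bit
    return out

def extract_watermark(pixels, length):
    """Read `length` bytes back out of the LSBs."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte |= (pixels[i * 8 + j] & 1) << j
        data.append(byte)
    return bytes(data)
```

Zeroing every LSB (or just re-encoding the image) wipes the mark without visibly changing the picture, which is exactly the ETA's concern.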
 
Upvote
-3 (0 / -3)
This would have plenty of limitations, but I wonder if it would be possible for digital cameras to sign their output so you could verify it was unedited.

I'm hoping for a future where you can at least sometimes trust that a news broadcast or cell phone video has documented actual reality. Not that cameras aim to document reality with 100% accuracy, but if every 10x10 grid of pixels had a checksum & a list of corrections made by the hardware, all signed with a private key, it could do some good.

Editors & broadcasters could sign their output too, so that you'd be able to review a news story 5 years later & be confident it hasn't changed. Badge-cams & dashcams especially would benefit from being verifiable. It won't be long before the mere possibility of a deepfake provides reasonable doubt to a jury.

There is also a risk that if the fingerprint is unique enough it could identify the specific device & who recorded the video, but pixels presented as recorded and unedited by an iPhone 13 might be good enough. There's no reason why video from law enforcement bodycams shouldn't be signed & tied to specific hardware to be used in court.
This is, quite obviously, completely impossible.
 
Upvote
2 (2 / 0)
I really don't know but I tend to doubt that it can and it will likely get even more difficult as AI improves.

It seems to me that could be the answer - require some kind of easily detectable watermark in all AI-generated images and videos. Not necessarily detectable by a human, either - digital detection could work via some form of steganography. Of course, that is much easier to write (and say) than to implement and legislate.

ETA: as soon as those steps are taken, it is likely the ability to spoof or strip those required watermarks will appear, making them essentially useless.
Spoof the AI-generated watermark? What a sorry state of affairs it will be when human-generated work is snubbed as a matter of course.
 
Upvote
0 (0 / 0)

panton41

Ars Legatus Legionis
11,115
Subscriptor
I really don't know but I tend to doubt that it can and it will likely get even more difficult as AI improves.

It seems to me that could be the answer - require some kind of easily detectable watermark in all AI-generated images and videos. Not necessarily detectable by a human, either - digital detection could work via some form of steganography. Of course, that is much easier to write (and say) than to implement and legislate.

ETA: as soon as those steps are taken, it is likely the ability to spoof or strip those required watermarks will appear, making them essentially useless.
The problem with all the talk of watermarking is the simple fact that the most popular AI image-generation software is open source. Don't want a watermark? Don't compile that module.

Remember, the same software has code to prevent generating pornographic pictures, and literally everyone disables it - even the official builds.
 
Upvote
1 (1 / 0)
There is a simple technical fix for this.

Have the cameras sign each frame. Other than needing a new camera, I don't see the issue here. Then you can deepfake all you want and it gets marked. No mark... means it is highly suspect.

The tech already exists, and is even already implemented in some cameras. It would be a software change to have my smartphone start doing this: one signature from the device and then my signature as the owner/operator.
 
Upvote
-1 (0 / -1)

Auie

Ars Scholae Palatinae
2,114
So the question remains: What does it mean when companies like Microsoft are selling the very product that they are warning us about?

Wrong premise; it's everyone else's product that they're scaremongering ("warning us") about, while Microsoft will obviously be the ones that automatically get the licenses they propose, because they seem to care so much about us.
 
Upvote
0 (0 / 0)

Celery Man

Ars Legatus Legionis
10,060
There is a simple technical fix for this.

Have the cameras sign each frame. Other than needing a new camera, I don't see the issue here. Then you can deepfake all you want and it gets marked. No mark... means it is highly suspect.

The tech already exists, and is even already implemented in some cameras. It would be a software change to have my smartphone start doing this: one signature from the device and then my signature as the owner/operator.
1. What keys are the cameras using to sign each frame? Where are they stored? How are they accessed? How are they verified?

2. Hex editors let you manipulate the individual bits in a file. What keeps someone from editing the raw image file to remove the watermarks?

3. Why is no mark highly suspect, if the mark is a sign of a deepfake? That sounds backwards.

4. If the solution is as “simple” as you suggest, why isn’t it being used? Or do you honestly think you’ve thought of something no one else in the field ever has?
 
Upvote
0 (0 / 0)

ASTUC

Smack-Fu Master, in training
38
Everything published is intended to deceive and control you. None of it is real. AI just does it better. From the very beginning of publishing, the goal has been to control society. All of humanity is controlled by lies published by the Gutenberg press some hundreds of years ago. The practice just keeps getting more and more sophisticated. Literacy was invented to spread the lies of the church and the state.
 
Last edited:
Upvote
-6 (0 / -6)
1. What keys are the cameras using to sign each frame? Where are they stored? How are they accessed? How are they verified?

2. Hex editors let you manipulate the individual bits in a file. What keeps someone from editing the raw image file to remove the watermarks?

3. Why is no mark highly suspect, if the mark is a sign of a deepfake? That sounds backwards.

4. If the solution is as “simple” as you suggest, why isn’t it being used? Or do you honestly think you’ve thought of something no one else in the field ever has?
The camera has my signing key and the camera's own signing key. Standard PKI stuff. Anything not signed by trusted keys can't be authenticated.

Why should you trust a pic/video with no signature and not a website?
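For what it's worth, the two-link chain being described (manufacturer root endorses the device key, device key signs frames) verifies like this sketch - with HMAC standing in for real asymmetric signatures, and every key name invented for the example:

```python
import hashlib
import hmac

def sign(key, data):
    # HMAC stand-in for an asymmetric signature; a real PKI would use
    # certificates so verifiers hold only public keys, never secrets.
    return hmac.new(key, data, hashlib.sha256).digest()

def verify_frame(frame, frame_sig, device_key, device_cert, root_key):
    # Link 1: is this device key endorsed by the manufacturer root?
    if not hmac.compare_digest(sign(root_key, device_key), device_cert):
        return False
    # Link 2: was this frame signed by that endorsed device key?
    return hmac.compare_digest(sign(device_key, frame), frame_sig)
```

Both links have to hold: a forged frame fails link 2, and a self-issued device key fails link 1.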
 
Upvote
0 (0 / 0)