Microsoft president declares deepfakes biggest AI concern

Bigdoinks

Ars Scholae Palatinae
1,001
Licenses! We obviously get our license automatically of course, since we wrote the regs, but everyone else, LICENSES!
In other words, secure their oligopoly position under the veil of security.

ETA: It's also rich coming from Microsoft, given how many thousands of GPL licenses they ignored while collecting training data.
 
Last edited:
Upvote
50 (56 / -6)

IrishMonkee

Ars Scholae Palatinae
1,374
What! Microsoft is worried about deepfakes that lead to deceit or fraud?! Aren't they getting a little ahead of themselves, and shouldn't we be waiting for, ya know, actual harm to occur? Let's not jump the gun; we should wait until a gun goes off and prove that actual harm has occurred, to the tune of, oh, I dunno, $1,000 or more. I mean... isn't that their line of thinking? When did it change?

/s
 
Upvote
-17 (6 / -23)

Celery Man

Ars Legatus Legionis
10,060
Smith also pushed for the introduction of licensing for critical forms of AI, arguing that these licenses should carry obligations to protect against threats to security, whether physical, cybersecurity, or national, that are too onerous for startups and less-well-funded competitors to comply with.
FTFY, Mr. Smith.
 
Upvote
10 (12 / -2)

Nowicki

Ars Tribunus Angusticlavius
7,567
Every single medium used for communicating on electronic devices will be hijacked by an ever-changing series of LLMs and other AI systems, and there will be no way to keep up with the details in the moment. We may, months or years later, be able to confidently say whether something is generative, but the world will likely have made its assumptions about it before verification can be had.

For example, LLMs will be able to create convincing texts and emails that scam and defraud people, but that don't contain enough verbiage to run through detection systems with 100% confidence about whether they were generative.
 
Upvote
15 (16 / -1)
I see a real potential here for an Elizabeth Holmes type person to come out soon with technology to make all cameras record real pictures on the blockchain using NFTs and digital AI bullshit buzzword buzzword bullshit, and fool people into investing billions into some bullshit mumbo jumbo that absolutely doesn't work.
 
Upvote
22 (22 / 0)

hizonner

Ars Scholae Palatinae
1,140
Subscriptor
Hmm. So Altman wants licensing, which would conveniently cement incumbent control over the market, and now Smith also wants "know your customer", which would be a reason to collect all kinds of data that couldn't possibly be used for anything else... an area in which Microsoft products are already pushing the frontier.

Strange coincidences...
 
Upvote
22 (22 / 0)

Uncivil Servant

Ars Scholae Palatinae
4,726
Subscriptor
How do you program something to determine the intent of its output? This isn't U238 enrichment, there isn't a quantifiable threshold.

The same technology that lets fans "ship" two celebrities or characters for amusement value can be used to create fake sexual material for defamation or blackmail.

How do you determine the difference between someone who wants to see Maryanne and Ginger leave the boys and "explore the other side of the island", and someone who wants to create a fake photo of an ex-lover or coworker in a compromising situation?

How do you distinguish between someone who wants to see a politician they dislike get eaten by a Sarlacc, and someone who creates a fake photo that depicts a political assassination?

Human moderators can decide in-context which uses they want to host, but I'm not sure how you would prevent these things from a technical standpoint.
 
Upvote
13 (13 / 0)

hizonner

Ars Scholae Palatinae
1,140
Subscriptor
Your "good" examples aren't exactly slam dunks.
How do you determine the difference between someone who wants to see Maryanne and Ginger leave the boys and "explore the other side of the island", and someone who wants to create a fake photo of an ex-lover or coworker in a compromising situation?
"Mary Anne" and "Ginger" were actual living actresses, one still living, who might not want their images used that way.
How do you distinguish between someone who wants to see a politician they dislike get eaten by a Sarlacc, and someone who creates a fake photo that depicts a political assassination?
Well, that's not a very credible assassination, but anyway maybe the people providing the models aren't that interested in supporting that anyway?

On edit: Given the downvotes without associated comments, I'm assuming everybody on here thinks that agreeing to be on a silly TV show in 1964 makes you fair game for deep fake porn, and that there's some obligation to provide people with a way of living out their bloody fantasies about any public figure.
 
Last edited:
Upvote
-9 (5 / -14)

hizonner

Ars Scholae Palatinae
1,140
Subscriptor
The aspects of ai that would put thousands or even millions of Americans out of work aren't the issue, the issue is rich people like himself might be embarrassed...
Why is "American" a relevant category here?
I wish I were shocked at the lack of introspection.
Indeed.
 
Upvote
8 (9 / -1)
Clicked through to read the speech. This mf thinks someone is going to enforce content labeling in order to police disinformation and that there will be "accountability". Delusional pablum.
Strategically deluded pablum. The risks he talks about are unrelated to the revenue producing parts of AI that MS can foresee for their business.

The obvious, real risks are in text LLMs being used for misinformation ("flooding the zone with shit"), as well as text LLMs giving people bad advice that is actually in the interest of the corporations running the AI (e.g., ask "What toaster should I buy?" and it recommends whichever toaster makes the most money for MS). He's not going to mention those problems because those problems are actually good for Microsoft.
 
Upvote
20 (20 / 0)

HungaryMan7

Ars Praetorian
426
Subscriptor
Is digital forensics capable of determining if a photo is deepfaked?
For humans it’s usually easier to critique than it is to create. Technology is obviously very different from people, but I’d think that if computers can create images, then it should be easier than that to create automated ways to check images for signs they may be fake.

Alternatively, maybe what we really need is a better way to combat misinformation.
 
Upvote
1 (1 / 0)

lucubratory

Ars Scholae Palatinae
1,430
Subscriptor++
I think the regulations China implemented were basically right, for the deepfake issue. If you produce a work that could reasonably be perceived as depicting real events and it's edited by AI, you have to note prominently that the image is AI edited or generated. Similar to the French laws around Photoshop, but across society rather than just in specific classifications of publications.
 
Upvote
2 (2 / 0)
D

Deleted member 270259

Guest
I think the regulations China implemented were basically right, for the deepfake issue. If you produce a work that could reasonably be perceived as depicting real events and it's edited by AI, you have to note prominently that the image is AI edited or generated. Similar to the French laws around Photoshop, but across society rather than just in specific classifications of publications.
Is there evidence that works? I can easily see the unintended result being to teach people that anything not labeled as AI is automatically real, which makes passing off a fake that much easier if you already don't care much about the ethics or legality of what you're doing.
 
Upvote
8 (8 / 0)

lucubratory

Ars Scholae Palatinae
1,430
Subscriptor++
Is there evidence that works? I can easily see the unintended result being to teach people that anything not labeled as AI is automatically real, which makes passing off a fake that much easier if you already don't care much about the ethics or legality of what you're doing.
Not that I have access to. It's only a couple of years old, and I don't speak Chinese well enough to understand primary source material. Anecdotally, I follow a few people in mainland China and there doesn't seem to be an epidemic of fake images. Most normal people seem to be complying with the regulations (more than I thought would be required, so you'll sometimes see the label even on fully AI-generated imagery), so if you see an AI-generated image, everyone I know says it is labelled clearly. I don't think the Pope puffy jacket went viral there, but I can ask sometime.

In general, it's not really about preventing all instances of deceptive images from existing - you would need to ban Photoshop and go remove it from millions of computers for that. It's just about trying to institute new norms backed by law, so that if someone does engage in the practice that almost everyone agrees is bad (intentionally creating deceptive imagery using AI to try to convince people a fake event happened or a real one didn't), then that person can be charged with a crime.
 
Upvote
0 (0 / 0)
If your going to light something then twirl,the birds and show the badge - no problem. This spy verses spy stuff is rediculous. Mean nobodies keeping anybody from joining the loop. Except for perhaps clueless politicians,and dateless housewifes.

The server monopoly,and the media monopoly arent going to leave anybody clueless unless someone actually notices. That guy too.
 
Upvote
-7 (0 / -7)

train_wreck

Ars Scholae Palatinae
677
If your going to light something then twirl,the birds and show the badge - no problem. This spy verses spy stuff is rediculous. Mean nobodies keeping anybody from joining the loop. Except for perhaps clueless politicians,and dateless housewifes.

The server monopoly,and the media monopoly arent going to leave anybody clueless unless someone actually notices. That guy too.

🤔
 
Upvote
-1 (1 / -2)
So, is it time to ban AI now??
No, that was a decade ago, and even then you'd have missed the boat to some degree.

Say what you want about the current state, but the cat's out of the bag. Even if the US banned it, the tech's out there, it would continue to be developed elsewhere and you'd still have to deal with the results even if it were the first truly effective ban in history and nobody on US soil ever touched it again.
 
Upvote
4 (4 / 0)
Is there evidence that works? I can easily see the unintended result being to teach people that anything not labeled as AI is automatically real, which makes passing off a fake that much easier if you already don't care much about the ethics or legality of what you're doing.
Also, the problem is human beings anyway. There were some hilariously obvious bad Photoshops doing the rounds during the last US election, but the target audience didn't seem to care. So long as it was in the correct meme format, they believed it anyway because they wanted it to be true, and that overrode the fact that it was obvious fraud.
 
Upvote
10 (10 / 0)
In other words, what they actually mean to say is: now that we have a jump ahead on the technology in front of others, we must secure our dominance.
Pretty much this right here.

—-
The collective dissolution of reality isn’t going to happen through dis/misinformation - though that will be a compounding factor.

As in this article, the ‘art’ is generated by Stable Diffusion. We are not far at all from it being easier to generate “Microsoft CEO looking scared” than it is to Google and browse for images of the same person. And those search results will, in short order, be filled with generated images…

Even well-meaning, respected reporting that DOESN’T have an agenda will contribute to the dissolution of reality (what is real), as these tools become part of workflows for huge numbers of people.

Deepfake fraud - yes, it will be a problem - but it’s not going to scale to anywhere near the same level as generated-reality breakdown will, because there are far more good actors than bad actors in the world just doing their jobs.
 
Upvote
2 (2 / 0)