Brad Smith urges steps to curtail deepfakes "with an intent to deceive or defraud."
> Smith also pushed for the introduction of licensing for critical forms of AI, arguing that these licenses should carry obligations ~~to protect against threats to security, whether physical, cybersecurity, or national~~ too onerous for startups and less-well-funded competitors to comply with.

FTFY, Mr. Smith.
Your company funded and productized a lot of this crap. It's on you.
> How do you determine the difference between someone who wants to see Maryanne and Ginger leave the boys and "explore the other side of the island", and someone who wants to create a fake photo of an ex-lover or coworker in a compromising situation?

Your "good" examples aren't exactly slam dunks. [...] "Mary Anne" and "Ginger" were played by real actresses, one of whom is still living, who might not want their images used that way.
> How do you distinguish between someone who wants to see a politician they dislike get eaten by a Sarlacc, and someone who creates a fake photo that depicts a political assassination?

Well, that's not a very credible assassination, and maybe the people providing the models aren't that interested in supporting the latter anyway?
> The aspects of AI that would put thousands or even millions of Americans out of work aren't the issue; the issue is that rich people like himself might be embarrassed...

Why is "American" a relevant category here?
> I wish I were shocked at the lack of introspection.

Indeed.
What does it mean when companies like Microsoft are selling the very product that they are warning us about?
> Clicked through to read the speech. This mf thinks someone is going to enforce content labeling in order to police disinformation and that there will be "accountability". Delusional pablum.

Strategically deluded pablum. The risks he talks about are unrelated to the revenue-producing parts of AI that MS can foresee for their business.
> Is digital forensics capable of determining if a photo is deepfaked?

For humans it's usually easier to critique than it is to create. Technology is obviously very different from people, but I'd think that if computers can create images, then it should be easier than that to create automated ways to check images for signs they may be fake.
> I think the regulations China implemented were basically right, for the deepfake issue. If you produce a work that could reasonably be perceived as depicting real events and it's edited by AI, you have to note prominently that the image is AI edited or generated. Similar to the French laws around Photoshop, but across society rather than just in specific classifications of publications.

Is there evidence that works? I can easily see the unintended result being to teach people that anything not labeled as AI is automatically real, which makes passing off a fake that much easier if you already don't care much about the ethics or legality of what you're doing.
> Is there evidence that works? I can easily see the unintended result being to teach people that anything not labeled as AI is automatically real, which makes passing off a fake that much easier if you already don't care much about the ethics or legality of what you're doing.

Not that I have access to. It's only a couple of years old, and I don't speak Chinese well enough to understand primary source material. Anecdotally, I follow a few people in mainland China and there doesn't seem to be an epidemic of fake images. Most normal people seem to be complying with the regulations (more than I thought would be required, so you'll see it sometimes on all AI-generated imagery), so if you see an AI-generated image, everyone I know says it is labelled clearly. I don't think the Pope puffy jacket went viral there, but I can ask sometime.
If you're going to light something, then twirl the birds and show the badge - no problem. This Spy vs. Spy stuff is ridiculous. I mean, nobody's keeping anybody from joining the loop. Except for perhaps clueless politicians and dateless housewives.
The server monopoly and the media monopoly aren't going to leave anybody clueless unless someone actually notices. That guy too.
> So, is it time to ban AI now??

No, that was a decade ago, and even then you'd have missed the boat to some degree.
> Is there evidence that works? I can easily see the unintended result being to teach people that anything not labeled as AI is automatically real, which makes passing off a fake that much easier if you already don't care much about the ethics or legality of what you're doing.

Also, the problem is human beings anyway. There were some hilariously obvious bad Photoshops doing the rounds during the last US election, but the target audience didn't seem to care. So long as it was in the correct meme format, they believed it anyway, because they wanted it to be true, and that overrode the fact that it was obvious fraud.
> In other words, what they actually mean to say is: now that we have a jump ahead on the technology in front of others, we must secure our dominance.

Pretty much this right here.