"Did anybody suffer at least a thousand dollars' worth of damage" because of AI?
It feels like most people screaming for regulation of "AI" and machine learning are doing so for reasons which are basically independent of AI/ML/LLM/etc. For example:
1) Copyright violation. Most people would agree being inspired by something does not make it a copyright violation. And yet because it's "on a computer," all of a sudden it is a big concern. As an example, I guarantee James Cameron saw FernGully before writing the first Avatar, but most people don't see it as a copyright violation.
2) Jobs are routinely displaced by technological advances. Unless you are going to say everyone should go back to subsistence farming and there should be nothing more advanced than a club, a shovel, or an oar, I don't know why all of a sudden this technology is uniquely bad in this regard. How many people were needed to move something before livestock domestication? How many people were needed once that was supplanted by rail or trucks? How many "human calculators" were needed before calculators and computers were made? How many people were needed for an old-style printing press, or even to hand-copy manuscripts before that? Similar questions for any other industry. Why is taking this next step all of a sudden a catastrophe?
3) Misinformation does not spontaneously come about and AI doesn't seem to make spreading it any easier than the Internet, cable television, or newsgroups/forums/social media have already enabled.
4) No matter how much some people think otherwise, there is not a single car available today that is rated for self-driving, but it's always the "AI's" fault when an advanced cruise control doesn't respond properly. And yet no one claims an accident caused by normal cruise control or an ABS failure was anything but the fault of the driver.
5) There are biases in just about every "algorithm" ever developed. Algorithms have been used for decades to decide credit scores, employment decisions, etc., but now since it's "AI" it's somehow a new, unique problem. If you want to say nothing should be decided by algorithm due to the likely possibility of in-built bias, sure, do that, but it would affect a lot of existing industries well beyond those diving with both feet into the latest "AI" craze.
There's also the other side, where we've been using things that would fall under the broader definition of "machine learning" for decades and yet no one claimed they were bad. As an example, back when I was in school, in my undergraduate control laws class we discussed "evolutionary control law development": controllers that would self-tune their gains based on performance relative to desired characteristics. That was more than 20 years ago. The same thing is being done today, just with more signals, more outputs, and faster development, under the "AI" moniker. How would you write legislation to ban or regulate ML or AI without having unintended negative consequences for areas where it's literally been used for decades?
This whole panic over AI seems to be a mix of anti-technology bias, ignorance, and a whole bunch of "what about" and "what if". If you want to regulate for illegal biases or other bad side effects regulate the people using the tool.
This comment deserves to be promoted. Well said.

Honestly, I don't care much to draw the line between "normal" algorithms and "AI". A restriction on using algorithms for certain use cases (perhaps following the "guy named Bob" standard) would be a good start, as would actual data privacy laws in the mould of the GDPR and a way for copyright holders to opt out of having their data scraped for training sets. Add additional penalties for defamation facilitated by AI-generated content. Forbid the use of AI-generated content in political advertising. And so on. Mandated technological control measures, such as indelible watermarking of AI-generated content, are still unproven and always at risk of being circumvented, so they're not the way to go, for now.
Am I going to claim that these proposals are watertight and perfect as they stand? No; I'm some rando on a tech forum, but at least they constitute a good-faith effort to engage with the subject, and not some dogmatic position that technological progress, whatever that means, should take precedence over negative impacts to our society.
And in a year or a few, when he is proven wrong beyond a shadow of a doubt, he'll say something to the tune of, "Well, I really wasn't wrong, considering the facts I had at my disposal at the time. I feel like our engineers really didn't understand or communicate the risks we were facing. Who could have possibly known?!"

"Why would we close the barn door? None of the horses have escaped yet."
- Microsoft Chief Farmer Michael Schwarz
"The first time we started requiring driver's licenses, it was after many dozens of people died in car accidents, right?" Schwarz said. {snip} "Did anybody suffer at least a thousand dollars' worth of damage because of that? Should we jump in to regulate something on a planet of 8 billion people when there is not even a thousand dollars of damage? Of course not."
they actually turned those off before we went full Ultron. they decided twitter was bad for robot children.

They've seen the films, right? By the time AI causes harm, it could already be Skynet.
We may have a new world record for time to “we have to put radioactive isotopes in the children’s playset or else the terrorists win” invocation in an Ars AI thread.

The whole argument is moot. How do we logistically 'regulate' the evolution of anything? Even if we could somehow guarantee AI will never evolve past a certain point, can we be certain that other countries will do the same? Of course not-- see Russia 2023. It is neither logistically nor strategically practical to handicap AI in any meaningful way.
The Luddites had a point, as has already been mentioned. If the people pushing for the AI revolution were also advocating an end to the existing capitalist order, reorganising society so it is not based on the idea that the life of anyone not providing labour to the capital holders is expendable, then maybe the rest of us would be more easily persuaded.

It's the Luddite uprising all over again. We need to swim with the current, not against it.
"There has to be at least a little bit of harm, so that we see what is the real problem," Schwarz explained. "Is there a real problem? Did anybody suffer at least a thousand dollars' worth of damage because of that? Should we jump in to regulate something on a planet of 8 billion people when there is not even a thousand dollars of damage? Of course not."
"Let's allow just a few deaths, one or two entire industries eliminating their workforce, and maybe a couple of years of society-damaging misinformation and persecution of minorities, then we can think about maybe doing something to regulate something that will by then already be completely out of control."

About 100 angry comments with really poor arguments, but nobody even attempts to explain how exactly this new wave of AI should be regulated.
The problem from a regulatory viewpoint is that 99% of the use cases and applications of this technology haven’t even been invented yet. And the vast majority of those applications are likely to be good for us.
It’s impossible to say that AI can’t be used for X, Y, and Z because such things do not exist - yet.
Any regulatory process will take years - and the field will look totally different in 2028. Banning stuff afterwards is really the only thing that can work.
(The possible loss of some jobs isn’t a reason to regulate or ban anything, unless you think (for example) that modern industrial farming which liberated 90% of people from incredibly strenuous and dangerous jobs to more productive jobs was a bad thing.)
In the same way we regulate food via food labels, make the creators list sources and designate authorship, so that output can be tracked. Much as we didn't know certain food dyes can cause weird allergy problems or worse health problems, that would be multitudes easier to track if we knew the actual source of things. If MS is so concerned about a limit, make it so that ANY damage caused above $1k, including incidental damages, unforeseen damages, etc. (using a probable cause standard, which is already fairly well established in our jurisprudence), is to be borne by the creator (i.e. the owner of the AI technology), with a pro-rata portion being charged to bad actors for using the technology in ways it was clearly NOT intended to be used, and criminal liability for anything resulting in loss of life or limb.

- Label it as being an AI product.
- For generative AI products, provide a list of source material used by the AI.
Or is the AI amongst us already?

I’m curious about the one downvote on this… is someone concerned that the output from ChatGPT might be inaccurate or harmful?
Yes, yes clearly we needed nuclear peace talks before a nuke ever got deployed.
Schwarz does not appear to be totally against AI regulations but says as an economist, he likes efficiency and would want laws to balance costs and gains from AI.
Why should AI be treated any differently than other technological advances? How many deaths do we allow due to cars? A lot more than a few. Or how about stairs? What other technology should we regulate further or eliminate to prevent "a few deaths"?

"Let's allow just a few deaths, one or two entire industries eliminating their workforce, and maybe a couple of years of society-damaging misinformation and persecution of minorities, then we can think about maybe doing something to regulate something that will by then already be completely out of control."
Yeah, well, going from iceman delivery to using refrigerators also didn't come with the side effect of the refrigerator maiming, killing, or defrauding its users, so there was that.

Why should AI be treated any differently than other technological advances? How many deaths do we allow due to cars? A lot more than a few. Or how about stairs? What other technology should we regulate further or eliminate to prevent "a few deaths"?
We also eliminate entire industries all the time. I don't have an iceman delivering to me for example. Should we eliminate refrigeration to protect that industry? How many industries have been eliminated due to semiconductors and computer control? Do we need to regulate that out of existence as well to protect workforces?
Somewhat true, but refrigeration, especially with older refrigerants, has contributed to global warming and other ill effects that I'm sure could be traced to at least a "few" injuries or deaths. But that's beside the point. The post I was replying to implied that eliminating industries by itself was "bad" and we shouldn't allow it with regard to AI.

Yeah, well, going from iceman delivery to using refrigerators also didn't come with the side effect of the refrigerator maiming, killing, or defrauding its users, so there was that.
Cars have lots of regulation, licensing, insurance requirements, manufacturing safety standards, emissions requirements, and end user repair requirements. If there were a single industry you want to point to against regulation, the auto industry is not it...
It's also funny that you bring up stairs, because we do have regulation on stairs, actually. There's a lot of architects out there who'd love some of your time to talk about them, I bet.
It's a good thing he isn't running for office, because that one take should result in termination from his employment too.

Cool, so I'm going to break in through the cheapest-looking window on your house, or at least one you could replace for $100 from Lowe's, as I browse your possessions for the remaining $900 of my "no harm done" budget.
What an awful, awful take.
Your argument is a bit pedantic at this point. I doubt continuing the discussion will change your mind, but your own example of residential vs. business stairs is pretty relevant here, isn't it?

Somewhat true, but refrigeration, especially with older refrigerants, has contributed to global warming and other ill effects that I'm sure could be traced to at least a "few" injuries or deaths. But that's beside the point. The post I was replying to implied that eliminating industries by itself was "bad" and we shouldn't allow it with regard to AI.
And yet the auto industry still has way more than "a few" injuries or deaths each year, month, week, day, hour, and minute. If the auto industry is the standard, then AI doesn't need regulation because it isn't causing anywhere near that kind of mayhem. If it isn't, and "a few" deaths are unacceptable, then existing auto regulations are woefully inadequate, and there is probably no set of standards or regulations that could be imposed to meet that level of safety.
Those regulations for stairs also don't eliminate slips, trips, and falls that can lead to injury or death. They also only apply to business settings (OSHA) and don't apply to residential or other applications. Residential buildings have their own set of regulations (IRC) that address stairs too, but we also have a lot of existing infrastructure that doesn't meet either of those requirements. Should those be torn down because the stairs are too steep and narrow or don't meet some other standard? Finally, there are other staircases that fall outside of both standards and are thus completely unregulated. And once again, even with those regulations, there are more than a "few" deaths every year. So going by the standard some people think we need for AI, stairs should probably just be completely eliminated and nothing should be taller than a single story.
Alternatively, we could have prevented a lot of those needless deaths if some basic, sensible laws had been put in place before cars became as widespread as they did before the institution of driver's licenses.

"The first time we started requiring driver's licenses, it was after many dozens of people died in car accidents, right?" Schwarz said. "And that was the right thing," because "if you would've required driver's licenses when there were the first two cars on the road," then "we would have completely screwed up that regulation."
I'm not the downvoter on that comment, but I often do downvote folks who just paste ChatGPT output into posts, because I usually find them boring to read and lazy. In this case, AI offering accurate examples of the dangers of AI was humorous enough to spare it that fate. From me.

I’m curious about the one downvote on this… is someone concerned that the output from ChatGPT might be inaccurate or harmful?