“Meaningful harm” from AI necessary before regulation, says Microsoft exec

IrishMonkee

Ars Scholae Palatinae
1,362
That is the corporate way of thinking in a nutshell, do anything and everything until you get in trouble. Then wait, and then do it some more until next time. Honestly, regulation isn't going to mean jack if it doesn't come with a large amount of sharp teeth and lots of pain. It'll mostly come out to be industry written, I mean sponsored and will pass with that oh so light regulatory touch that corporations have come to know and love. Anyways, if the administration/congress flips, then regulations will get undone and boom, Bob's your Uncle.
 
Upvote
13 (13 / 0)

nicname_gally

Seniorius Lurkius
4
Subscriptor
Well, I’m all for regulation, though there is a point to be made regarding the potential harm of premature legislation; he just totally botched it.
I do fear something like a blanket ban on AIs reading medical data, for example: how many invisible deaths would happen as a result of a doctor overlooking a diagnosis which an AI might have spotted?

We absolutely have the potential to cause great harm with a knee-jerk reaction to AI; just look at the millions of unnecessary deaths from coal that continue to happen because of the fear-driven reaction against nuclear power.

If he really wanted to use a car analogy maybe traffic lights would be a better example, here’s what my unregulated GPT4 spit out:

"Regulating a new technology prematurely is akin to creating traffic light rules before cars were commonplace. Initially, we might only think to include red and green lights - stop and go. However, the yellow light, symbolizing caution, could be overlooked.
The yellow light is vital; it provides a buffer, a period of caution between stopping and going. It prepares drivers for an imminent change, reducing the likelihood of collisions. If missed in the initial legislation before cars were being regularly used, the system may lead to many collisions.
In the haste to regulate, the nuances - the 'yellow lights' of the situation - may be overlooked, leading to imperfect rules. Correcting these oversights later can be challenging, costly, and time-consuming. During the transition, people may suffer due to the initial flawed rules. Thus, rushing into legislation without a thorough understanding can result in serious, even fatal, consequences."
 
Upvote
-5 (3 / -8)

DeSelbey

Smack-Fu Master, in training
3
It feels like most people screaming for regulation of "AI" and machine learning are doing so for reasons which are basically independent of AI/ML/LLM/etc. For example:
1) Copyright violation. Most people would agree being inspired by something does not make it a copyright violation. And yet because it's "on a computer" all of a sudden it is a big concern. As an example, I guarantee James Cameron saw Fern Gully before writing the first Avatar but most people don't see it as a copyright violation.
2) Jobs are routinely displaced by technological advances. Unless you are going to say everyone should go back to subsistence farming and there should be nothing more advanced than a club, a shovel, or an oar, I don't know why all of a sudden this technology is uniquely bad in this regard. How many people were needed to move something before livestock domestication? How many people were needed once that was supplanted by rail or trucks? How many "human calculators" were needed before calculators and computers were made? How many people were needed for an old-style printing press, or even to hand-copy manuscripts before that? Similar questions apply for any other industry. Why is taking this next step all of a sudden a catastrophe?
3) Misinformation does not spontaneously come about and AI doesn't seem to make spreading it any easier than the Internet, cable television, or newsgroups/forums/social media have already enabled.
4) No matter how much some people think otherwise, there is not a single car available today that is rated for self-driving, but it's always the "AI's" fault when an advanced cruise control doesn't respond properly. And yet no one claims an accident caused by normal cruise control or an ABS failure was anything but the fault of the driver.
5) There are biases in just about every "algorithm" ever developed. Algorithms have been used for decades to decide credit scores, employment decisions, etc., but now that it's "AI" it's somehow a new, unique problem. If you want to say nothing should be decided by algorithm due to the likely possibility of in-built bias, sure, do that, but it would affect a lot of existing industries well beyond those diving with both feet into the latest "AI" craze.

There's also the other side where we've been using things that would fall under the broader definition of "machine learning" for decades and yet no one claimed they were bad. As an example, back when I was in school in my undergraduate control laws class we discussed "evolutionary control law development" that would self tune their gains based on the performance relative to desired characteristics. That was more than 20 years ago. The same thing is being done today just with more signals, more outputs, and faster development just under the "AI" moniker. How would you write legislation to ban or regulate ML or AI without having unintended negative consequences for areas it's literally been used for decades?

This whole panic over AI seems to be a mix of anti-technology bias, ignorance, and a whole bunch of "what about" and "what if". If you want to regulate for illegal biases or other bad side effects regulate the people using the tool.

"...we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios. "

Yet the whole community of developers is happy to talk about the POTENTIAL benefits of AI, as though those aren’t imaginary scenarios.

The point is, ALL aspects of this technology should be discussed, not just the marketing guy’s take.
 
Upvote
19 (19 / 0)

KChat

Ars Scholae Palatinae
810
Subscriptor
Honestly, I don't care much to draw the line between "normal" algorithms and "AI". A restriction on using algorithms for certain use cases (perhaps following the "guy named Bob" standard) would be a good start, as would actual data privacy laws in the mould of the GDPR, and a way for copyright holders to opt out of having their data scraped for training sets. Add additional penalties to defamation facilitated by AI-generated content. Forbid use of AI-generated content in political advertising. And so on. Technological control measures, such as indelible watermarking of AI-generated content, are still unproven and always at risk of being circumvented, so mandating them is not the way to go, for now.

Am I going to claim that these proposals are watertight and perfect as they stand? No; I'm some rando on a tech forum, but at least they constitute a good-faith effort to engage with the subject, and not some dogmatic position that technological progress, whatever that means, should take precedence over negative impacts to our society.
This comment deserves to be promoted. Well said.
 
Upvote
7 (7 / 0)
Already did in Australia in 2016. https://theconversation.com/robodeb...land-it-also-broke-laws-of-mathematics-201299 People went bankrupt, people died because they could not afford treatment, people killed themselves. Before you complain that this was not "true" AI: nothing we have is even near an AGI (Artificial General Intelligence), but this is what happens when we use the limited stuff we have now. That the newer stuff is more powerful does not make it better; it makes it worse.
 
Upvote
20 (20 / 0)

croc123

Ars Scholae Palatinae
649
How about this for new technology.... It must be proved to be beneficial before it is allowed to be used?

Ok, ok... Not realistic. But, being realistic, tech is in and of itself usually pretty benign. People, though.... Well, as soon as you let people use technology, there WILL be consequences. Good, bad, but consequences. And sometimes the good consequences are the worst. Cars, for example, kill more people every year than guns. Well, in most countries.
 
Upvote
-9 (0 / -9)
"Why would we close the barn door? None of the horses have escaped yet."

- Microsoft Chief Farmer Michael Schwarz
And in a year or a few when he is proven wrong beyond a shadow of a doubt, he'll say something to the tune of, "Well, I really wasn't wrong, considering the facts I had at my disposal at the time. I feel like our engineers really didn't understand or communicate the risks we were facing. Who could have possibly known?!"
 
Upvote
17 (17 / 0)
"The first time we started requiring driver's licenses, it was after many dozens of people died in car accidents, right?" Schwarz said. {snip} "Did anybody suffer at least a thousand dollars' worth of damage because of that? Should we jump in to regulate something on a planet of 8 billion people when there is not even a thousand dollars of damage? Of course not."

Well smack my fucking flabbers and ghast my gahdahm gobs.
That is next level psychopath shit right there.
A human life is worth less than $1000USD to this fucking twatwaffle.
 
Upvote
13 (14 / -1)

TVPaulD

Ars Tribunus Militum
2,005
The whole argument is moot. How do we logistically 'regulate' the evolution of anything? Even if we could somehow guarantee AI will never evolve past a certain point, can we be certain that other countries will do the same? Of course not -- see Russia 2023. It is neither logistically nor strategically practical to handicap AI in any meaningful way.
We may have a new world record for time to “we have to put radioactive isotopes in the children’s playset or else the terrorists win” invocation in an Ars AI thread.

Once again, consider the possibility that if your defense of allowing a technology to be unleashed, unchecked, upon a society wholly and completely unprepared for it is that “the bad people” will surely do it first otherwise: consider that doing the things “the bad people” do is not a desirable course of action.

The response to “the Empire might build a Death Star” is not “build Starkiller Base!”
It's the Luddite uprising all over again. We need to swim with the current, not against it.
The luddites had a point, as has already been mentioned. If the people pushing for the AI revolution were also advocating an end to the existing capitalist order, reorganising society to not be based on the idea that literally the life of anyone not providing labour to the capital holders is expendable then maybe the rest of us would be more easily persuaded.

But as it’s being pushed by multi-billion dollar corporations and the billionaires and millionaires who own and run them, you’re going to have to forgive those of us forced to rent the roofs above our heads for being maybe just a teensy bit suspicious about the level of “altruism” y’all in the AI enthusiast community profess.

And, as with the crypto/blockchain crowd, accusing technology enthusiasts of being ideologically opposed to “technology” when they question your chosen magical all-solution's practical reality is a pretty good way of galvanising opposition to you.
 
Upvote
21 (21 / 0)

DriveBy

Ars Tribunus Militum
1,856
"There has to be at least a little bit of harm, so that we see what is the real problem," Schwarz explained. "Is there a real problem? Did anybody suffer at least a thousand dollars' worth of damage because of that? Should we jump in to regulate something on a planet of 8 billion people when there is not even a thousand dollars of damage? Of course not."

He had to have been speaking ironically here, right?

Right?
 
Upvote
5 (6 / -1)

DriveBy

Ars Tribunus Militum
1,856
About 100 angry comments with really poor arguments, but nobody even attempts to explain how exactly this new wave of AI should be regulated.

The problem from a regulatory viewpoint is that 99% of the use cases and applications of this technology haven’t even been invented yet. And the vast majority of those applications are likely to be good for us.

It’s impossible to say that AI can’t be used for X, Y and Z because such things do not exist - yet.

Any regulatory process will take years - and the field will look totally different in 2028. Banning stuff afterwards is really the only thing that can work.

(The possible loss of some jobs isn’t a reason to regulate or ban anything, unless you think (for example) that modern industrial farming which liberated 90% of people from incredibly strenuous and dangerous jobs to more productive jobs was a bad thing.)
"Let's allow just a few deaths, one or two entire industries eliminating their workforce, and maybe a couple of years of society-damaging misinformation and persecution of minorities, then we can think about maybe doing something to regulate something that will by then already be completely out of control."

🤡
 
Upvote
18 (18 / 0)

jeffbax

Ars Scholae Palatinae
886
His point is obvious and the assumption that regulators, aka the same US Congress that gets complained about daily, have the wisdom to predict what good regulation looks like is a fallacy.

The same fatal conceit has left us with the significantly outdated sunscreens Ars reports on from time to time, forever behind other nations because the FDA would rather flex than enable progress.

And the same that botched early COVID testing horrendously.

Most of the writing on “AI” is total hysteria with no real understanding of how LLMs even work; they are effectively fancy statistical simulations more than they are something even close to approaching thought and understanding.

I’m stunned by the blind, pithy faith on display here that people so far removed from understanding technology, the same people that try to pass FOSTA/SESTA laws and porn bans, would suddenly be so wise that they won’t hamstring tech that could really be the next Industrial Revolution.

Bad regulation is an invisible graveyard for progress, and piles of it are the technical debt eroding the very bloated US governance we have these days.
 
Upvote
-17 (2 / -19)

McTurkey

Ars Tribunus Militum
2,209
Subscriptor
If we can't regulate it now, what makes these bloviating fuckwads think it'll ever be regulated? The probability of that harm actually being a net benefit to someone is quite high. The probability of that someone being quite rich and powerful is even higher. America's regulatory structure is largely dictated by those with the most money and power unless hard physics or economics starts getting in the way. Take the energy sector, for example, where the economics of solar and wind have finally made regulating coal-burning power plants out of existence inevitable... yet coal still clings to life, and will continue to pollute the air and create many hundreds of thousands more cases of cancer before it finally goes away.
 
Upvote
10 (10 / 0)

Tango*Urilla

Smack-Fu Master, in training
81
Subscriptor++
Upvote
10 (10 / 0)

InIgnem

Wise, Aged Ars Veteran
141
Subscriptor++
- Label it as being an AI product.

- For generative AI products, provide a list of source material used by the AI.
In the same way we regulate food via food labels, make the creators list sources and designate authorship, so that output can be tracked. Much in the same way we didn't know certain food dyes can cause weird allergy problems or worse health problems, those would be multitudes easier to track if we knew the actual source of things. If MS is so concerned about a limit, make it so that ANY damage caused above $1k, including incidental damages, unforeseen damages, etc. (using a probable-cause standard, which is already fairly well established in our jurisprudence), is to be borne by the creator (i.e. owner of the AI technology), with a pro-rata portion being charged to bad actors for using the technology in ways it was clearly NOT intended to be used, and criminal liability for anything resulting in loss of life or limb.

There, Mr. MS - you don't think damages under $1k should be regulated, so I solved that for you. Don't want to take the risk? Fine, create an entire insurance industry and buy yourself insurance to protect yourself for when the lawyers come knocking.

But for God's sake, do SOMETHING. "Trust us, we're MS" isn't gonna cut it here (Substitute MS for Tesla/Google/FB/etc.)
 
Upvote
6 (6 / 0)

ender78

Ars Tribunus Militum
1,881
Subscriptor
Yes, yes clearly we needed nuclear peace talks before a nuke ever got deployed.

We are not at war. You're suggesting we should have talks after WMDs have been made available for months/years and only regulate their proliferation once someone has used them indiscriminately.
 
Upvote
1 (1 / 0)

Qyygle

Ars Praetorian
485
Subscriptor
Schwarz does not appear to be totally against AI regulations but says as an economist, he likes efficiency and would want laws to balance costs and gains from AI.

At this point, I no longer consider economics a STEM field. It's just a facade of statistics being used to justify garbage business practices.
 
Upvote
12 (12 / 0)

lithven

Ars Tribunus Militum
2,186
"Let's allow just a few deaths, one or two entire industries eliminating their workforce, and maybe a couple of years of society-damaging misinformation and persecution of minorities, then we can think about maybe doing something to regulate something that will by then already be completely out of control."

🤡
Why should AI be treated any differently than other technological advances? How many deaths do we allow due to cars? A lot more than a few. Or how about stairs? What other technology should we regulate further or eliminate to prevent "a few deaths"?

We also eliminate entire industries all the time. I don't have an iceman delivering to me for example. Should we eliminate refrigeration to protect that industry? How many industries have been eliminated due to semiconductors and computer control? Do we need to regulate that out of existence as well to protect workforces?

Misinformation did not all of a sudden appear due to AI. It doesn't enable anything new; it just makes some of it easier, but so do things like Photoshop, social media, and the printing press. If you want to completely protect society from misinformation, then it's time to shut down the internet, regulate all media, and get out there arresting people who spread misinformation.

Persecution of minorities comes in two flavors, the willful and the ignorant. The willful happens and is going to happen because some people are horrible and they want to. Restricting or eliminating AI will not prevent or even reduce that at all. Working on the ignorant is the one area where some amount of regulation may be useful. Having said that, it is nothing new for AI. AI is only reflecting what is already there. How many reports have there been of existing algorithms, or even manual human decision making, that have unintentional (but still horrible) built in biases? Attack the problem not the latest tool. I'm all in favor of severe punishment for people who use an AI to make decisions that turn out to be racist, sexist, transphobic, etc. But I'm also in favor of that same punishment if they are using a non-AI algorithm or even make the decision "manually".
 
Upvote
-17 (0 / -17)

Qyygle

Ars Praetorian
485
Subscriptor
Why should AI be treated any differently than other technological advances? How many deaths do we allow due to cars? A lot more than a few. Or how about stairs? What other technology should we regulate further or eliminate to prevent "a few deaths"?

We also eliminate entire industries all the time. I don't have an iceman delivering to me for example. Should we eliminate refrigeration to protect that industry? How many industries have been eliminated due to semiconductors and computer control? Do we need to regulate that out of existence as well to protect workforces?
Yeah, well, going from iceman delivery to using refrigerators also didn't come with the side effect of the refrigerator maiming, killing, or defrauding its users, so there was that.

Cars have lots of regulation, licensing, insurance requirements, manufacturing safety standards, emissions requirements, and end user repair requirements. If there were a single industry you want to point to against regulation, the auto industry is not it...
It's also funny that you bring up stairs, because we do have regulation on stairs actually. There's a lot of architects out there who'd love some of your time to talk about them I bet.
 
Upvote
14 (14 / 0)

lithven

Ars Tribunus Militum
2,186
Yeah, well, going from iceman delivery to using refrigerators also didn't come with the side effect of the refrigerator maiming, killing, or defrauding its users, so there was that.

Cars have lots of regulation, licensing, insurance requirements, manufacturing safety standards, emissions requirements, and end user repair requirements. If there were a single industry you want to point to against regulation, the auto industry is not it...
It's also funny that you bring up stairs, because we do have regulation on stairs actually. There's a lot of architects out there who'd love some of your time to talk about them I bet.
Somewhat true, but refrigeration, especially with older refrigerants, has contributed to global warming and other ill effects that I'm sure could be traced to at least a "few" injuries or deaths. But that's beside the point. The post I was replying to implied that eliminating industries by itself was "bad" and we shouldn't allow it with regard to AI.

And yet the auto industry still has way more than "a few" injuries or deaths each year, month, week, day, hour, and minute. If the auto industry is the standard, then AI doesn't need regulation because it isn't causing anywhere near that kind of mayhem. If it isn't, and "a few" deaths are unacceptable, then existing auto regulations are woefully inadequate and there is probably no set of standards or regulations that could be imposed to meet that level of safety.

Those regulations for stairs also don't eliminate slips, trips, and falls that can lead to injury or death. They also only apply to business settings (OSHA) and don't apply to residential or other applications. Residential buildings have their own set of regulations (IRC) that address stairs too, but we also have a lot of existing infrastructure that doesn't meet either of those requirements. Should those be torn down because the stairs are too steep and narrow or don't meet some other standard? Finally, there are other staircases that fall outside of both standards and are thus completely unregulated. And once again, even with those regulations, there are more than a "few" deaths every year. So going by the standard some people think we need for AI, stairs should probably just be completely eliminated and nothing should be taller than a single story.
 
Upvote
-17 (1 / -18)
Cool, so I'm going to break in the cheapest looking window on your house, or at least one you could replace for $100 from Lowe's, as I browse your possessions for the remaining $900 of my "no harm done" budget.

What an awful, awful take.
It's a good thing he isn't running for office because that one take should result in termination from his employment too.

If you go out of your way to demonstrate a lack of empathy, it should not be surprising when you receive none and start losing political representation, employment, and even rights in extreme cases at this point.
 
Upvote
9 (9 / 0)

Qyygle

Ars Praetorian
485
Subscriptor
Somewhat true, but refrigeration, especially with older refrigerants, has contributed to global warming and other ill effects that I'm sure could be traced to at least a "few" injuries or deaths. But that's beside the point. The post I was replying to implied that eliminating industries by itself was "bad" and we shouldn't allow it with regard to AI.

And yet the auto industry still has way more than "a few" injuries or deaths each year, month, week, day, hour, and minute. If the auto industry is the standard, then AI doesn't need regulation because it isn't causing anywhere near that kind of mayhem. If it isn't, and "a few" deaths are unacceptable, then existing auto regulations are woefully inadequate and there is probably no set of standards or regulations that could be imposed to meet that level of safety.

Those regulations for stairs also don't eliminate slips, trips, and falls that can lead to injury or death. They also only apply to business settings (OSHA) and don't apply to residential or other applications. Residential buildings have their own set of regulations (IRC) that address stairs too, but we also have a lot of existing infrastructure that doesn't meet either of those requirements. Should those be torn down because the stairs are too steep and narrow or don't meet some other standard? Finally, there are other staircases that fall outside of both standards and are thus completely unregulated. And once again, even with those regulations, there are more than a "few" deaths every year. So going by the standard some people think we need for AI, stairs should probably just be completely eliminated and nothing should be taller than a single story.
Your argument is getting pedantic. I doubt continuing the discussion will change your mind at this point, but your own example of residential vs business stairs is pretty relevant here, isn't it?

With AI, we're not talking about a home-brewed LLM or image generation program for personal tinkering, that's not the problem here. We're talking about Microsoft, Google, Facebook, companies that're arguably monopolies in their space, with access to a world-spanning swath of personal data and influence, putting together programs and systems that are or will be run with AI direction.
When they make mistakes, it's not "Oops, I've crashed my home system for a few days"; it's "Oops, we've fueled genocide in X country for months."
 
Upvote
14 (14 / 0)
"The first time we started requiring driver's licenses, it was after many dozens of people died in car accidents, right?" Schwarz said. "And that was the right thing," because "if you would've required driver's licenses when there were the first two cars on the road," then "we would have completely screwed up that regulation."
Alternatively, we could have prevented a lot of those needless deaths if some basic, sensible laws had been put in place before cars became as widespread as they did before the institution of driver's licenses.

Especially with the way that AIs have the potential to "explode" extremely rapidly, putting some very basic safeguards and regulations in place seems more than sensible, as the result of getting it wrong could very well be the death of millions, if not billions. Or, in the case that we really get a "Skynet"-level event, the destruction of humanity as a whole.
 
Upvote
10 (10 / 0)

Tofystedeth

Ars Tribunus Angusticlavius
6,350
Subscriptor++
I’m curious about the one downvote on this… is someone concerned that the output from ChatGPT might be inaccurate or harmful?
I'm not the downvoter on that comment, but I often do downvote folks who just paste ChatGPT output into posts, because I usually find them boring to read and lazy. In this case AI offering accurate examples of the dangers of AI was humorous enough to spare it that fate. From me.
 
Upvote
10 (10 / 0)