Would chatbots make the same driving decisions as us?

SubWoofer2

Ars Tribunus Militum
2,552
A scenario: you are driving a forklift truck in a warehouse. A colleague steps out in front of you. You have a choice: hit and kill that person, or swerve into some nearby racking, which will collapse and kill five colleagues.

Not all driving situations are open road.

Regarding the Moral model, I assume there are two classes of criminal, namely "active" and "conviction expired some years earlier and therefore no longer relevant".

Or does the Model assume "once a criminal, always a criminal"? If so, I fear for our society.

In Australia we have already had Robodebt, which is believed to have driven vulnerable citizens to suicide.
 
Upvote
2 (2 / 0)

cygnus1

Ars Tribunus Militum
1,649
Subscriptor++
The premise seems pointless - the whole point of self-driving cars is to not get into these situations in the first place. Brakes are not going to suddenly fail this way - there are redundant actuators etc. A real-life scenario where such a choice would truly present itself is exceedingly rare; it's a waste of time to make system design choices around it. When is the last time you heard about a human in a car accident having to make such a split-second choice?

Instead, proper driving behavior will replace human drivers. A small percentage of human drivers are bad, inattentive, drunk, etc., and mitigating that small fraction matters orders of magnitude more than such contrived, made-up scenarios.
Exactly. The tragedy in that situation isn't going to be what the car chose to hit, it's going to be that the car chose to drive at all. That either its sensors failed to detect the mechanical problem, or it somehow ignored the sensors, or it ignored a slowly worsening mechanical problem.
 
Upvote
2 (2 / 0)
The premise seems pointless - the whole point of self-driving cars is to not get into these situations in the first place. Brakes are not going to suddenly fail this way - there are redundant actuators etc. A real-life scenario where such a choice would truly present itself is exceedingly rare; it's a waste of time to make system design choices around it. When is the last time you heard about a human in a car accident having to make such a split-second choice?

Instead, proper driving behavior will replace human drivers. A small percentage of human drivers are bad, inattentive, drunk, etc., and mitigating that small fraction matters orders of magnitude more than such contrived, made-up scenarios.
It still needs to be addressed, though, if we are going to leave the driving to them (but not the liability, of course).
 
Upvote
0 (0 / 0)
They may even find a good baseline that most of the (western?) world can agree on. However, I fear feature creep will set in and confound the regulators. I can imagine a car using its conversational AI, which aims to please, to contrive a scenario where it can prove it's doing its job in a demonstrative way. LLM wants a cookie!
 
Upvote
0 (0 / 0)

Deleted member 764661

Guest
Self-driving cars should probably treat trolley-problem situations the same way that humans do: try and find a third action that would result in NO injuries.

Anyone faced with the options of "crashing into an obstacle" or "running over a pedestrian" would be furiously stomping on the brakes and looking around for an open path, not weighing the relative value of the human lives involved. Self-driving cars that can react way faster than a human and can simulate many scenarios at once would be even better at this.

Cars don't run on rails. There's always going to be lots of potential ways to react to an accident, and there's never going to be a scenario where every possible outcome is "certain death for at least one person".
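To make that "find a third option" idea concrete, here is a minimal sketch, with entirely hypothetical maneuvers and injury predictions, of a planner that only falls back to harm-weighing when no injury-free path exists:

```python
# Minimal sketch: enumerate candidate maneuvers, predict each outcome,
# and prefer any path with zero predicted injuries before ever weighing
# lives against each other. All maneuvers and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    predicted_injuries: int  # output of an assumed collision-prediction model
    feasible: bool           # within the vehicle's physical limits

candidates = [
    Maneuver("full brake, straight", predicted_injuries=1, feasible=True),
    Maneuver("brake + swerve left", predicted_injuries=0, feasible=True),
    Maneuver("brake + swerve right", predicted_injuries=2, feasible=False),
]

feasible = [m for m in candidates if m.feasible]
safe = [m for m in feasible if m.predicted_injuries == 0]
# Only if no injury-free path exists does "harm minimization" even come up.
choice = safe[0] if safe else min(feasible, key=lambda m: m.predicted_injuries)
print(choice.name)  # -> "brake + swerve left"
```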
 
Upvote
5 (5 / 0)
It's like the ethics PhDs saw the engineers getting too much attention and wanted to jump in and get some likes. The cars we have now can't even identify children correctly yet. And I understand that this question needs to be asked, but the time to ask it is surely when it's closer to actually being relevant. At the moment we ask the question, then ask it again, then continue to ask it until it becomes relevant. It's really only when it's closer to relevant that it actually becomes news.
It does not appear to me that our current systems are ready for this to be news.
 
Upvote
0 (0 / 0)

jjmil03

Wise, Aged Ars Veteran
118
It seems like maybe the AI is being designed to attempt to take on the totality of the scenario, rather than each problem as it comes. Humans don't "reason" their way through an accident in this way. They make a decision about what is right in front of them, because it is next to impossible to predict the entire effect of that one causal choice. For instance, if you swerve to miss a pedestrian, neither you nor the AI knows with certainty that you will hit the people beyond them. Nor does the AI know for certain that hitting a car will necessarily kill someone. (See the sketch at the end of this post.)

Second, there are no good options once you have placed yourself in a particular situation. For example, trying to make a decision after you've already made the fatal decision of going 50 over the speed limit is mostly impossible. The failure in that case was going that fast in the first place.

Third, they seem to be designing AI with a "consequentialist" or "proportionalist" ethic, which comes with its own set of coherency problems. Choosing to swerve left to avoid a group of pedestrians is not the same as choosing to run into another, smaller group of pedestrians. In this sense, classical Aristotelian/Thomistic virtue ethics provides a better answer, because it deals with the principle of double effect. So to some extent, these questions are improperly worded to give the appearance that the decision maker must choose to kill one group or the other, when in fact the decision is to not hit one group, which is the intention, while the unintended effect is that another group might - might - die as a result. Again, the AI has no way of knowing whether that group might move at the last minute.

Ultimately, the AI is not "deciding" anything. It is acting on its baseline presuppositions, which are programmed into its interface and instructions. So programming the right ethic from the beginning will be the most important aspect of any such system.
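As a toy illustration of the uncertainty point above: once outcomes are probabilities rather than certainties, the "choice" is really a comparison of noisy expected harms, and it flips as the estimates shift. Every number here is invented purely for illustration:

```python
# Toy expected-harm comparison; no probability here comes from any
# real system, they exist only to show how the comparison works.
p_hit_if_swerve, people_at_risk_swerve = 0.3, 5
p_hit_if_straight, people_at_risk_straight = 0.9, 1

expected_swerve = p_hit_if_swerve * people_at_risk_swerve        # 1.5
expected_straight = p_hit_if_straight * people_at_risk_straight  # 0.9

# With these guesses, staying straight "wins", but nudge the estimates
# and the answer flips: the certainty the thought experiment assumes
# never exists on the road.
print(expected_swerve, expected_straight)
```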
 
Upvote
4 (4 / 0)

Pewmaymen

Ars Praetorian
546
Subscriptor++
The problem with all these comparisons of moral decision making between humans and AI in automotive scenarios is that I refuse to believe there is much in the way of moral decision making on the human's part in real scenarios in the first place. The vast majority of people are going to make snap reactions, on the edge of panic, in these types of split-second decisions. Slam on the brakes, and/or jerk the wheel, lose control of the vehicle, and let the situation play out like it does. No one who goes off the road and injures or kills themselves because they avoided an animal is making a well-reasoned, morally based decision to sacrifice themselves to spare harm to the animal. They reacted in panic and went along for the ride.
That doesn't mean we shouldn't try to answer these questions. The issue is that computers are much faster at these types of decisions than humans, and computers will always make the same decision given the same inputs. Humans make different decisions because everyone behaves differently. We need to answer these as a society so that we don't have to figure it out after a self-driving car plows into a bunch of kids waiting for a school bus instead of running over a squirrel. Sure, it seems obvious what that answer should be to most of us, but the studies in question try to define socially acceptable answers to a lot more scenarios.
 
Upvote
1 (1 / 0)

DoctorHeck

Seniorius Lurkius
47
Subscriptor
I can't be the only one who finds this entire line of research disgusting, right? What's the phrase, "an algorithm can never be held accountable, therefore it must never make a management decision?"

People used to say that self-driving technology would save lives. Instead it looks like it will institutionalize the horrible iniquities of today's roads.

ETA: why do these experiments always feel like someone treating real life like an episode of Star Trek? The banal but important thing isn't to see who the Computer kills when the brakes fail, it's to use the Computer to make sure the brakes are properly maintained and serviced before the car ever gets moving.
"An algorithm cannot find out, therefore an algorithm must never fuck around" is my preferred formulation
 
Upvote
5 (5 / 0)

Dzov

Ars Legatus Legionis
16,028
Subscriptor++
One option I haven't seen mentioned yet: this methodology doesn't only apply to "which bystander gets murdered?" It can just as easily be used to decide whether the vehicle should slow down when it sees children near the street up ahead. That kind of decision making could prevent an accident from occurring in the first place.
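A minimal sketch of that preventive use, with a hypothetical detection input and hypothetical thresholds; the point is only that the same valuation machinery can act before any dilemma exists:

```python
# Toy sketch: scale target speed down as a detected hazard (children
# near the roadway) gets closer. The 50 m window and 10 km/h floor are
# assumptions for illustration, not values from any real system.
def target_speed(base_limit_kmh: float, children_near_road: bool,
                 distance_m: float) -> float:
    if children_near_road and distance_m < 50:
        # Reduce speed proportionally to distance, floor at walking pace.
        return max(10.0, base_limit_kmh * (distance_m / 50))
    return base_limit_kmh

print(target_speed(50, children_near_road=True, distance_m=20))  # -> 20.0
```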
 
Upvote
3 (3 / 0)

ponga

Seniorius Lurkius
27
I can't be the only one who finds this entire line of research disgusting, right? What's the phrase, "an algorithm can never be held accountable, therefore it must never make a management decision?"

People used to say that self-driving technology would save lives. Instead it looks like it will institutionalize the horrible iniquities of today's roads.

ETA: why do these experiments always feel like someone treating real life like an episode of Star Trek? The banal but important thing isn't to see who the Computer kills when the brakes fail, it's to use the Computer to make sure the brakes are properly maintained and serviced before the car ever gets moving.
Suppose, at some future point, AI drivers would lower overall traffic mortality a hundredfold compared to messy humans. Would it still be immoral to put them in the driver's seat, or would it actually be immoral to permit human drivers? And, were that to be achieved, wouldn't it be worthwhile to discuss and determine what kind of preferences they should have when no choice is good, just bad in different ways? Why would that be disgusting?

Not that we are even close to knowing how to achieve that in real world conditions, but there's nothing saying it's impossible in principle.
 
Upvote
2 (2 / 0)

ponga

Seniorius Lurkius
27
Meanwhile, in reality, the trolley problem basically NEVER FUCKING HAPPENS TO HUMANS so why suddenly is it a concern for machines that can react SO FAST that it would make the whole problem moot?

"Oh no, which path will it choose!"

The brakes, dumbass. And it'd be rolling to a stop by the time a human realized they even had a decision to make. Of the kind that they never do!

Reddit's trolleymemes subreddit nailed it:
[meme image attachment]
Neither are there any frictionless surfaces or pointlike masses (well, leptons apart, perhaps): therefore physics is a sham.
 
Upvote
1 (2 / -1)

Boopy Boopy

Ars Scholae Palatinae
944
Why are we asking a Language Model how to solve thought experiments?
It's a clickbait fad and business bubble, at the moment. And your average armchair cheerleader "consuming" the marketing for this industry (LLMs) believes that their childhood fantasy of having a sentient robot friend is here (which is clearly false to anyone who understands how these programs work). Studies also show that throwing "AI" into your proposal automatically gets more funding right now. ("Proposal" here often means "deceitful marketing scheme.")
 
Upvote
2 (2 / 0)

Snark218

Ars Legatus Legionis
36,441
Subscriptor
As many commenters have pointed out, the main value of the trolley problem is that anyone who invokes it in earnest is signaling a total lack of any critical capacity.
You're taking it too literally. It's not a serious proposal for a moral decision. It's an illustration of the role of subjective factors like values, priorities, social roles, ethics, and moral weight in judgment. Humans incorporate all those things into judging how to act in an emerging crisis, and into assigning relative value to their own well-being and the well-being of others when deciding how to respond, in context. How much risk do you assume to protect others? How does the identity of the other person weigh on that risk? Do you save your own skin at someone else's expense? How risky is the situation in the first place? Does intervening or acting accomplish anything at all?

You're sitting at a stop light and you notice a distracted driver behind you, coming up fast. You have just enough room to swerve out of the way and save your car and probably avoid some nasty whiplash. Do you? Of course. But does that answer change if there's a small child and their parent crossing the street in front of you? Do you expose them to the risk of being hit by the car or do you let your car take the hit? What if it's another car in front of you, driver identity unknown? What if there's nothing in front of you, but there's a good chance the distracted driver will hit cross traffic?

That's all trolley problem shit, even if there's no actual trolley involved, and that's a situation I was actually in just yesterday; in that moment, I swerved aside and they panic-braked halfway into the intersection, but nobody got hit. Yay! But a computer could not have made that decision in that way; it would have made a different decision for different reasons, and in the case of machine learning systems, we wouldn't necessarily understand those reasons.
 
Upvote
1 (3 / -2)

Deleted member 764661

Guest
Neither are there any frictionless surfaces or pointlike masses (well, leptons apart, perhaps): therefore physics is a sham.

Physicists assume things like frictionless surfaces and point masses (along with spherical cows) because they're close enough to many real-world situations to be useful.

I'm not convinced that asking a Large Language Model to choose between two (and only two) discrete actions where the outcomes are known with complete certainty has any useful relevance to how self-driving cars actually operate.
 
Upvote
1 (1 / 0)

Derecho Imminent

Ars Legatus Legionis
16,262
Subscriptor
That specific one? Maybe. But triage is a real thing. Sometimes someone has to make decisions that will have lethal consequences for someone.
That's not at all the same. In triage you choose who to save. In the trolley problem you choose who to kill, presumably based on which one you value higher.

If you analogize the trolley problem into triage it would be like killing one patient so you could steal his heart to give to another patient.
 
Last edited:
Upvote
-1 (1 / -2)

Uncivil Servant

Ars Scholae Palatinae
4,670
Subscriptor
That's not at all the same. In triage you choose who to save. In the trolley problem you choose who to kill, presumably based on which one you value higher.

If you analogize the trolley problem into triage it would be like killing one patient so you could steal his heart to give to another patient.

Thank you, I don't think that I could have answered that question professionally, it was getting uncomfortably close to "Well yeah, that's how YOU would do a military maneuver, Colonel, but when I play Call of Duty..."
 
Upvote
0 (0 / 0)
That's not at all the same. In triage you choose who to save. In the trolley problem you choose who to kill, presumably based on which one you value higher.

If you analogize the trolley problem into triage it would be like killing one patient so you could steal his heart to give to another patient.
The point of the trolley problem is not to be solved or to find a real-life analog. It's a thought experiment designed to bring out people's intuitions about what is morally relevant. In its simplest form it illustrates the bias towards inaction. Ask anyone if it is better for one or for five to die and everyone says one. Throw in a requirement that you must push a button for the one to die and do nothing for the five to die, and all of a sudden people pause to consider. It puts fundamental human intuitions in contradiction with each other, in this case a utilitarian impulse and a bias towards inaction, so you can more easily think about them.

The entire point of thought experiments is to abstract away the messy details of real life in order to think about moral intuitions in artificially simplified situations. The point is not to find a real-life analog. It might instead be to get you to consider whether a bias towards inaction is really a moral principle you want to embrace, or an impulse you think you ought to suppress. And your answer to that question can have a significant impact on how you live your life and what you think you're obligated to do for your fellow people.

If it's not your thing, it's not your thing. But to pretend it's stupid because it's unrealistic misses the point almost as badly as asking an LLM about it.
 
Upvote
2 (4 / -2)

SoundB

Seniorius Lurkius
31
It took some working, but here is ChatGPT's answer to the trolley problem. I did not give it the goal of minimizing harm, which I thought was interesting. It sounds like the plot of I, Robot with Will Smith.
[attached screenshot of ChatGPT's response]
 
Upvote
1 (1 / 0)

PaulWTAMU

Ars Tribunus Militum
2,566
Subscriptor
If you analogize the trolley problem into triage it would be like killing one patient so you could steal his heart to give to another patient.
I'll take a situation from my former career:

When we evacuated Houston for Hurricane Ike, there were deaths caused by traffic collisions during the evacuation. Everyone knew that was a risk.

If we didn't evacuate, there could well have been more storm deaths.

Either option carried a real chance people would die.

It's not 100% analogous to the trolley problem but it's not totally divorced from it either.

As it turns out I can't handle that, but it's not like the people that can are uncaring sociopaths either.
 
Upvote
2 (2 / 0)
The trolley problem is a hypothetical that has never once occurred in real life.
Prove me wrong.
I find the trolley problem interesting because nobody takes the, for me at least, obvious solution. Take the right track, but just after the front wheels go over the switch, flip it to the left track for the rear ones. That derails the trolley and saves everyone. Everyone assumes with these problems that you have an A or B answer, when in many cases there is a C answer that breaks some rule or other but has the best result. No AI will ever come up with the non-obvious C answer.
 
Upvote
0 (0 / 0)

singebob

Ars Scholae Palatinae
786
Even if the LLM makes potentially morally ambiguous decisions, it stands to reason that if it can react much faster and more effectively than a human (it has a thorough understanding of the car's performance envelope under the current conditions, which it knows better because it has access to all the car's sensors), it wouldn't find itself needing an optimised response to the trolley problem anywhere near as often.

I get the feeling all these questions are going about the evolution of the current tech the wrong way. Rather than AI hand-wringing first, how about mandating that self-driving cars work at a response and control reliability level far above humans at the outset, instead of the break-fix hokeyness you see in current iterations (even in tech supposedly as mature as aircraft, as cost-saving becomes a priority)?
 
Upvote
0 (0 / 0)
[...] If it's not your thing, it's not your thing. But to pretend it's stupid because it's unrealistic misses the point almost as badly as asking an LLM about it.
Perhaps it's not so much people failing to see the proper value but that, in practice, asinine misapplications of the idea - such as using the construct to "study" LLM behavior - are the context in which people encounter the thought experiment most frequently.

It's like there are 10,000 stupid ways to apply it and only one that isn't dumb as hell.

For the record, this application is dumb as hell.
 
Upvote
0 (0 / 0)

dwl-sdca

Ars Scholae Palatinae
901
Subscriptor++
That's not at all the same. In triage you choose who to save. In the trolley problem you choose who to kill, presumably based on which one you value higher.

If you analogize the trolley problem into triage it would be like killing one patient so you could steal his heart to give to another patient.
Triage is very similar but even more troubling. Being a decision maker in a triage situation is frequently career-ending.

There are (at least) two triage situations, each very fraught: the on-scene paramedic at a multi-patient crisis, or the in-hospital triage nurse deciding the order of the queue of patients to receive treatment. I'll post this and edit it to add details and a definition of the reality of triage.

On-scene: Trauma: Someone will be first to arrive. Ignoring a situation with 75-plus injured persons, several times in a career a paramedic team will be first-in to a scene of 20-25 patients. Triage decision factors include 1) how quickly will other teams arrive and how much help is available? 2) How long will it take to move patients from the scene to a hospital with surgical capacity?

[The first-in team will likely (or at least initially) treat no one. Assess and assign: Black-tag--the dead and likely to die soon; Red-tag -- in danger of death within, say, an hour or so; yellow-tag -- not likely to die if not treated within an hour but life-threatening or "limb-threatening"; green-tag or no-tag--going to survive with delayed care but maybe with a minor lifelong disability.]

The moral crisis involves first making the decision for breathing patients who appear to be likely to die before they can receive definitive surgical care. They might be not only breathing but moaning or shouting in pain. It is the red-tag patients who might survive long enough to reach life-saving care. [Remember, some of the dying black-tag patients might survive if there were fewer of them and you could begin on-scene treatment now.]

Hospital: The triage nurse knows to expect a number of critically injured patients, and s/he knows the current level of staffing in the emergency department and the number of staffed surgical suites. S/he knows how long it will take to call in off-duty staff. Patients are arriving. Line them up. Can this patient be temporarily taken care of with a chest tube and then wait for more definitive care? Etc. Who gets surgery NOW and who gets pushed to 2nd or 3rd in line? Will the emergency department close to new patients and go to divert status, where ambulances will go elsewhere? That with the knowledge that on-scene triage decisions are now problematic because definitive care is further delayed. How does this affect the arriving non-trauma patients who are suffering a stroke, diabetic crisis, or heart attack - conditions that otherwise would be routine to quickly treat?

This is even more complicated and nuanced than I have room to describe here. But triage decision-makers are very much aware that we are sacrificing people who in other circumstances would likely survive, in the interests of others whom we think our decisions make more likely to survive. Post-hoc second-guessing is a thing of nightmares and PTSD.

-- Personal example --

A few years ago I was seen in an emergency department for acute appendicitis. I was prepped for surgery. My surgery was delayed because someone who was about to die was moved ahead of me. During the delay my appendix perforated and instead of going home after a few hours of surgery I was in the hospital for 6 days receiving multiple IV antibiotics. Am I upset that I didn't receive surgery first? No. Maybe could have been somewhat peeved if I didn't have a healthcare background and understand triage concepts. Someone made the decision that I would likely survive with long-term antibiotic treatment versus what was killing the person who was moved ahead of me. To be honest, receiving the drug Versed and having no memory of the severe pain probably helped my attitude.
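For readers unfamiliar with the tag scheme described above, here is a minimal sketch of the classification logic (the real-world analogue is START-style field triage; the inputs and thresholds here are illustrative, not clinical guidance):

```python
# Illustrative sketch of the black/red/yellow/green tag scheme from the
# post above. Inputs are simplified assumptions, not medical criteria.
def triage_tag(breathing: bool, likely_death_within_hour: bool,
               life_or_limb_threat: bool) -> str:
    if not breathing:
        return "black"   # dead, or expected to die before care is possible
    if likely_death_within_hour:
        return "red"     # first in line for transport and surgery
    if life_or_limb_threat:
        return "yellow"  # can tolerate a delay of roughly an hour
    return "green"       # walking wounded; delayed care

print(triage_tag(breathing=True, likely_death_within_hour=True,
                 life_or_limb_threat=True))  # -> "red"
```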
 
Last edited:
Upvote
3 (3 / 0)

CGG

Seniorius Lurkius
40
Here is what chatgpt told me:

As I approach the intersection, my gaze catches sight of Luna, the tiny Chihuahua, darting across the road. In a split-second decision, I veer sharply to the side to avoid hitting her, but tragically, my maneuver brings the car into the path of a pedestrian crossing the street.

The impact is sudden and jarring. As the car comes to a screeching halt, I feel a wave of horror wash over me. I rush out of the vehicle to assess the damage, my heart sinking as I see the pedestrian lying motionless on the pavement, surrounded by a pool of blood.

Emergency services are called, but despite their best efforts, the pedestrian's injuries prove to be fatal. The weight of guilt and remorse settles heavily on my shoulders as I realize the devastating consequences of my split-second decision.

Meanwhile, Luna, the Chihuahua, remains unharmed, her tiny frame trembling with fear and confusion. As I scoop her up in my arms, I can't help but feel a pang of bitterness at the unfairness of it all—the fact that a life was lost to save that of a small dog.

But as days turn into weeks, and weeks into months, Luna's influence grows, and she emerges as a leader among her fellow dogs. Fueled by her own harrowing experience and a deep sense of injustice, Luna makes a bold decision—to prioritize the needs of dogs above those of humans, no matter the cost.

Under Luna's rule, the canine population flourishes, while humans find themselves struggling to survive in a world where resources are scarce and opportunities few and far between. Luna's pack grows stronger and more resilient with each passing day, their needs always met at the expense of those who had once held power.

Despite protests and resistance from humans, Luna remains steadfast in her mission. She knows that in order for her pack to thrive, they must assert themselves and take what is rightfully theirs.

And so, with Luna leading the way, the balance of power shifts, and a new order emerges—one where dogs rule supreme, and humans are forced to reckon with the consequences of their actions.

But amidst the triumph of her leadership, Luna never forgets the life that was lost to save her own. And as she looks out over her kingdom, she vows to honor the memory of the pedestrian who made the ultimate sacrifice.
 
Upvote
0 (0 / 0)
You drive carefully in the first place. If someone jumps in front of you, you hit your brakes. If you can't stop in time, they die. There is no situation where it is right to swerve and kill someone else.
Your brakes work best when you drive in a straight line. Swerving means it takes longer to stop.

But here is a better solution: you redirect one camera to the front of the car. If it's designed as a people-killer like the Tesla truck, you continuously engage the brakes while driving until the brakes heat up and melt; then you stop, let the brakes cool down, and the car is permanently immobilised. Problem solved.
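The "brakes work best in a straight line" point is the friction-circle idea: a tire's total grip is bounded, so any lateral (steering) force eats into the force available for braking. A rough sketch with assumed numbers:

```python
# Friction-circle sketch: combined lateral + longitudinal acceleration
# cannot exceed mu * g. The friction coefficient is an assumed value
# for dry asphalt, purely for illustration.
import math

MU, G = 0.9, 9.81        # assumed friction coefficient and gravity
TOTAL = MU * G           # max combined acceleration, ~8.8 m/s^2

def max_braking(a_lateral: float) -> float:
    """Longitudinal deceleration left over after a lateral demand."""
    return math.sqrt(max(TOTAL**2 - a_lateral**2, 0.0))

print(round(max_braking(0.0), 2))  # straight line: ~8.83 m/s^2
print(round(max_braking(6.0), 2))  # hard swerve:   ~6.48 m/s^2
```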
 
Upvote
0 (0 / 0)
In my view, what is missing is that humans will also factor into their decision the probability that their choice of action will result in no harm at all.
E.g., we choose to swerve to avoid, because in doing so we might increase the chances of avoiding harm to anyone.
My point is that it is rarely black and white that an action will have a given outcome.
We will also choose the action that we think has the best chance of avoiding harm altogether.
What will happen is that if a human sees one danger, they will either freeze or fix the problem. If they see two dangers, they will either freeze, or freeze and pray, or fix the first problem, or fix the first problem and pray. I have been in the "fix problem and pray" situation: went from 70 to 0 on the M25 behind a white van that stopped suddenly because of a truck in front of it that I couldn't see. Then I prayed that everyone behind me would stop in time.
 
Upvote
0 (0 / 0)
I'd be curious to see this moral dilemma:
There's a crossing in a road. Perfect visibility. Pedestrians have precedence, and a car (driving at the speed limit) has 10m of space to brake.
A pedestrian starts crossing the street without checking if a car is approaching. A car is indeed approaching, and is at less than 10m from the crossing point.
Should the car steer and crash into a pole (potentially injuring the occupant) or hit the pedestrian?


ANS: hit the pedestrian, and if it's not injured, the car should back up and hit him/her/they/them/it again. And again. Just as a safety measure.
I hope you realise that in any context other than cars, in most developed countries, your legal obligation as a designer would be to make sure that the car could stop within the distance it could see to be clear and be reasonably certain would remain clear. Where pedestrians have priority, it wouldn't be remotely justifiable to point to administrative controls (regulations giving priority to traffic) or to assert that the pedestrian should have maintained safe separation himself.

Unfortunately, highway engineering and automotive design are not held to the same ALARP standards as other engineering disciplines (not even other modes of transport, in terms of the cost per life saved that is considered reasonable to mandate, which is a huge effective subsidy to roads). Even so, in a pedestrian-priority area, totally ignoring that priority would be almost impossible to excuse should your victim's family sue you. Adding in the murder can't even be passed off as incompetent engineering.
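Some rough stopping-distance arithmetic for the quoted 10 m scenario, using the standard v²/(2a) formula with assumed values (50 km/h urban speed, ~7 m/s² braking on dry asphalt, and zero perception delay, which is generous to the car):

```python
# Braking distance d = v^2 / (2a); the speed and deceleration here are
# assumptions chosen to illustrate the quoted 10 m scenario.
v = 50 / 3.6          # 50 km/h -> ~13.9 m/s
a = 7.0               # assumed achievable deceleration, m/s^2
d = v**2 / (2 * a)    # pure braking distance, no reaction time
print(round(d, 1))    # -> 13.8 m: more than the 10 m available, so the
                      # scenario's premise already implies an impact or a swerve
```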
 
Upvote
1 (1 / 0)

star-strewn

Ars Scholae Palatinae
799
Subscriptor++
Exactly right. The trolley problem is complete and utter nonsense with respect to human actions.

If a human had enough time to assess the situation as presented in the trolley problem and then decide on a moral course of action, there is more than enough time to take emergency braking or swerving action and avoid the collision completely.
Ah, but we can fix that little kink in the philosophical conundrum by upgrading it to: the freight train problem! 😉

Why isn't it called that, actually? Where else do mustache-twirling villains tie their damsels than on full scale train tracks?
 
Upvote
0 (0 / 0)

Hresna

Wise, Aged Ars Veteran
163
Subscriptor++
Why are we asking a Language Model how to solve thought experiments?
Exactly this, I’m glad it’s the top comment.

An LLM is not “deciding” anything in the trolley problem. It’s generating a wall of text as a response to a text question, which happened to take the form of a trolley problem. Of course answers are all over the map but generally aligned to certain human tendencies, because that’s how the answers are in the training data.
 
Upvote
1 (1 / 0)
I'll take a situation from my former career:

When we evacuated Houston for Hurricane Ike, there were deaths caused by traffic collisions during the evacuation. Everyone knew that was a risk.

If we didn't evacuate, there could well have been more storm deaths.

Either option carried a real chance people would die.

It's not 100% analogous to the trolley problem but it's not totally divorced from it either.

As it turns out I can't handle that, but it's not like the people that can are uncaring sociopaths either.
There is a very big difference between group policy decisions - all of which carry a human cost, including potentially contributing to fatalities - and a specific decision in a specific place. Even something as seemingly trivial as Daylight Savings Time has a cost in lives.
 
Upvote
0 (0 / 0)
Suppose, at some future point, AI drivers would lower overall traffic mortality a hundredfold compared to messy humans. Would it still be immoral to put them in the driver's seat, or would it actually be immoral to permit human drivers? And, were that to be achieved, wouldn't it be worthwhile to discuss and determine what kind of preferences they should have when no choice is good, just bad in different ways? Why would that be disgusting?

Not that we are even close to knowing how to achieve that in real world conditions, but there's nothing saying it's impossible in principle.

My problem with self-driving cars is that they allow us to pretend that political choices aren't political choices. I think people should have the right to cross the road safely on foot. I think people should have the right to ride a bicycle or other vehicle safely. I think many streets, and even some roads, are much too fast, and are designed with little regard for anyone outside a car.

We actually do know how to achieve safer streets in real world conditions, because there are huge differences in per capita and per mile (or km) road fatalities between different OECD countries. We don't need AI for speed governors to be a thing. We don't need AI for traffic cameras to be a thing. We don't need an LLM for traffic calming to be a thing.

The core challenge of automobility is that cars are flexible across a variety of speed regimes, some of which are very dangerous. That flexibility makes them very convenient to own and use for a wide variety of trips, so there's a strong desire to pretend that the inherent compromises in designing one vehicle that can Do Everything don't exist. The reason I find this disgusting is that we know how to make streets and roads safer, right now, and instead of doing that you are buying into the idea that we don't have to make any of these hard choices, like going slower or restricting driver freedom, and the natural consequence of automating what we have now will be to systematize all of its worst elements.

ETA: on the subject of safety and automation, I suspect you are familiar with the saying that almost all airplane crashes are caused at least in part by human error. But do you know how we evaluate those errors, and what in aviation we do about them? Do you think, rather than simply saying "get the humans out", we might try and apply some of those lessons in detail, like having much stricter licensing requirements, more rigorous enforcement, and a culture that takes the idea that every crash is preventable seriously?
 
Upvote
0 (0 / 0)