> The trolley problem is a hypothetical that has never once occurred in real life.
> Prove me wrong.

The problem in the trolley problem is: who the F strapped people to the track so that running people over is even a choice?! lol

> The premise seems pointless - the whole point of self-driving cars is to not get into these situations in the first place. Brakes are not going to suddenly fail this way - there are redundant actuators etc. A real-life scenario where such a choice would truly present itself is exceedingly rare; it's a waste of time to make system design choices around it. When is the last time you heard about a human in a car accident having to make such a split-second choice?
> Instead, the proper driving behavior will replace human drivers. A small percentage of human drivers are bad, inattentive, drunk, etc., and mitigating that small fraction matters orders of magnitude more than such contrived, made-up scenarios.

Exactly. The tragedy in that situation isn't going to be what the car chose to hit, it's going to be that the car chose to drive at all: either its sensors failed to detect the mechanical problem, or it somehow ignored the sensors, or it ignored a slowly worsening mechanical problem.

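To make the "redundant actuators etc." point concrete, here's a minimal sketch (all names and numbers invented, not any vendor's actual design) of the classic 2-out-of-3 vote used in safety-critical systems: a single failed sensor gets outvoted instead of silently taking the brakes down.

```python
# Hypothetical 2-out-of-3 voter over redundant brake-pressure sensors.
# A single failed sensor is masked; total disagreement triggers fail-safe.
def majority_vote(readings: list[float], tolerance: float = 0.05) -> float:
    for i in range(len(readings)):
        for j in range(i + 1, len(readings)):
            if abs(readings[i] - readings[j]) <= tolerance:
                return (readings[i] + readings[j]) / 2  # two sensors agree
    raise RuntimeError("no two sensors agree - enter fail-safe mode")

print(majority_vote([3.10, 3.12, 0.0]))  # 3.11 - the dead sensor is outvoted
```
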
> The premise seems pointless - the whole point of self-driving cars is to not get into these situations in the first place. [...]

It still needs to be addressed, though, if we are going to leave the driving to them (but not the liability, of course).

> The problem with all these comparisons of moral decision making between humans and AI in automotive scenarios is that I refuse to believe there is much in the way of moral decision making on the human's part in real scenarios in the first place. The vast majority of people are going to make snap reactions, on the edge of panic, in these split-second decisions: slam on the brakes and/or jerk the wheel, lose control of the vehicle, and let the situation play out as it does. No one who goes off the road and injures or kills themselves avoiding an animal is making a well-reasoned, morally based decision to sacrifice themselves to spare harm to the animal. They reacted in panic and went along for the ride.

That doesn't mean we shouldn't try to answer these questions. The issue is that computers are much faster at these types of decisions than humans, and a computer will always make the same decision given the same inputs. Humans make different decisions because everyone behaves differently. We need to answer these as a society so that we don't have to figure it out after a self-driving car plows into a bunch of kids waiting for a school bus instead of running over a squirrel. Sure, it seems obvious to most of us what that answer should be, but the studies in question try to define socially acceptable answers for a lot more scenarios.

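A toy illustration of the "same inputs, same decision" point - hypothetical Python, not anyone's real control code: the decision is a pure function of its inputs, so it never varies with mood, fatigue, or panic.

```python
# Purely illustrative deterministic braking policy (invented numbers).
def brake_decision(obstacle_distance_m: float, speed_mps: float,
                   max_decel_mps2: float = 8.0) -> str:
    stopping_distance = speed_mps ** 2 / (2 * max_decel_mps2)  # v^2 / 2a
    return "BRAKE_FULL" if stopping_distance >= obstacle_distance_m else "BRAKE_NORMAL"

# Identical inputs always yield the identical decision:
assert all(brake_decision(20.0, 20.0) == "BRAKE_FULL" for _ in range(1_000))
```
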
"An algorithm cannot find out, therefore an algorithm must never fuck around" is my preferred formulationI can't be the only one who finds this entire line of research disgusting, right? What's the phrase, "an algorithm can never be held accountable, therefore it must never make a management decision?"
People used to say that self-driving technology would save lives. Instead it looks like it will institutionalize the horrible iniquities of today's roads.
ETA: why do these experiments always feel like someone treating real life like an episode of Star Trek? The banal but important thing isn't to see who the Computer kills when the brakes fail, it's to use the Computer to make sure the brakes are properly maintained and serviced before the car ever gets moving.
> I can't be the only one who finds this entire line of research disgusting, right? [...]

Suppose, at some future point, AI drivers would lower overall traffic mortality a hundredfold compared to messy humans. Would it still be immoral to put them in the driver's seat, or would it actually be immoral to permit human drivers? And, were that to be achieved, wouldn't it be worthwhile to discuss and determine what kind of preferences they should have when no choices are good, just bad in different ways? Why would that be disgusting?

Not that we are even close to knowing how to achieve that in real-world conditions, but there's nothing saying it's impossible in principle.

> Meanwhile, in reality, the trolley problem basically NEVER FUCKING HAPPENS TO HUMANS, so why is it suddenly a concern for machines that can react SO FAST that it would make the whole problem moot?
> "Oh no, which path will it choose!"
> The brakes, dumbass. And it'd be rolling to a stop by the time a human realized they even had a decision to make. Of the kind that they never do!
> Reddit's trolleymemes subreddit nailed it:
> [attached meme image]

Neither are there any frictionless surfaces or pointlike masses (well, leptons apart, perhaps): therefore physics is a sham.

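The "rolling to a stop before a human even notices" claim above is just arithmetic. Round, assumed numbers below (1.5 s human reaction vs. 0.1 s machine, 8 m/s² braking), not measured data:

```python
# stopping distance = reaction distance + braking distance = v*t + v^2/(2a)
def stopping_distance_m(speed_mps: float, reaction_s: float,
                        decel_mps2: float = 8.0) -> float:
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

v = 50 / 3.6  # 50 km/h in m/s
print(round(stopping_distance_m(v, 1.5), 1))  # human, ~1.5 s reaction: ~32.9 m
print(round(stopping_distance_m(v, 0.1), 1))  # machine, ~0.1 s: ~13.4 m
```
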
> Why are we asking a Language Model how to solve thought experiments?

It's a clickbait fad and a business bubble, at the moment. Your average armchair cheerleader "consuming" the marketing for this industry (LLMs) believes that their childhood fantasy of having a sentient robot friend is here - which is clearly false to anyone who understands how these programs work. Studies also show that throwing "AI" into your proposal automatically gets more funding. ("Proposal" here often means "deceitful marketing scheme.")

> As many commenters have pointed out, the main value of the trolley problem is that anyone who invokes it in earnest is signaling a total lack of any critical capacity.

You're taking it too literally. It's not a serious proposal for a moral decision; it's an illustration of the role of subjective factors - values, priorities, social roles, ethics, moral weight - in judgment. Humans incorporate all of those into judgments about how to act in an emerging crisis, assigning relative value to their own well-being and the well-being of others when deciding how to respond, in context. How much risk do you assume to protect others? How does the identity of the other person weigh on that risk? Do you save your own skin at someone else's expense? How risky is the situation in the first place? Does intervening or acting accomplish anything at all?

> The trolley problem is a hypothetical that has never once occurred in real life.

That specific one? Maybe. But triage is a real thing. Sometimes someone has to make decisions that will have lethal consequences for someone.

> That specific one? Maybe. But triage is a real thing. Sometimes someone has to make decisions that will have lethal consequences for someone.

That's not at all the same. In triage you choose who to save. In the trolley problem you choose who to kill, presumably based on which one you value higher.

If you analogize the trolley problem into triage, it would be like killing one patient so you could steal his heart to give to another patient.

> That's not at all the same. In triage you choose who to save. In the trolley problem you choose who to kill [...]

The point of the trolley problem is not to be solved, or to find a real-life analog. It's a thought experiment designed to bring out people's intuitions about what is morally relevant. In its simplest form it illustrates the bias towards inaction. Ask anyone whether it is better for one or for five to die, and everyone says one. Throw in a requirement that you must push a button for the one to die, and do nothing for the five to die, and all of a sudden people pause to consider. It puts fundamental human intuitions in contradiction with each other - in this case a utilitarian impulse and a bias towards inaction - so you can think about them more easily.

> If you analogize the trolley problem into triage, it would be like killing one patient so you could steal his heart to give to another patient.

I'll take a situation from my former career:

When we evacuated Houston for Hurricane Ike, there were deaths caused by traffic collisions during the evacuation. Everyone knew that was a risk.

If we didn't evacuate, there could well have been more storm deaths.

Either option carried a real chance people would die.

It's not 100% analogous to the trolley problem, but it's not totally divorced from it either.

As it turns out I can't handle that, but it's not like the people who can are uncaring sociopaths either.

> The trolley problem is a hypothetical that has never once occurred in real life.
> Prove me wrong.

I find the trolley problem interesting because nobody takes the (for me at least) obvious solution: take the right track, but just after the front wheels go over the switch, switch to the left track for the rear ones. That derails the trolley and saves everyone. Everyone assumes with these problems that you have an A or B answer, when in many cases there is a C answer that breaks some rule or other but has the best result. No AI will ever come up with the non-obvious C answer.

> "globally recognized ethical norms"

> [...] If it's not your thing it's not your thing. But to pretend it's stupid because it's unrealistic misses the point almost as badly as asking a LLM about it.

Perhaps it's not so much that people fail to see the proper value, but that, in practice, asinine misapplications of the idea - such as using the construct to "study" LLM behavior - are the context in which people most frequently encounter the thought experiment.

> That's not at all the same. In triage you choose who to save. In the trolley problem you choose who to kill [...]

Triage is very similar, but even more troubling. Being a decision maker in a triage situation is frequently career-ending.

> You drive carefully in the first place. If someone jumps in front of you, you hit your brakes. If you can't stop in time, they die. There is no situation where it is right to swerve and kill someone else.

Your brakes work best when you drive in a straight line. Swerving means it takes longer to stop.

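The physics behind that comment is the "friction circle": a tyre has one total grip budget, shared between braking and steering, so any lateral force spent on a swerve is subtracted from what's available to slow down. A rough sketch with assumed values:

```python
import math

MU, G = 0.9, 9.81            # assumed dry-road friction and gravity
total_grip = MU * G          # ~8.8 m/s^2 total acceleration budget

def braking_decel(lateral_mps2: float) -> float:
    """Deceleration left after spending part of the grip budget on a swerve."""
    return math.sqrt(max(total_grip ** 2 - lateral_mps2 ** 2, 0.0))

print(round(braking_decel(0.0), 2))  # straight line: ~8.83 m/s^2
print(round(braking_decel(6.0), 2))  # hard swerve: only ~6.48 m/s^2 left to brake
```
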
> In my view what is missing is that humans will also be factoring into their decision the probability that their choice of action will result in no harm...
> e.g. we choose to swerve to avoid, and in doing so we might increase the chance of avoiding harm to anyone.
> I mean to imply that we will evaluate that it is rarely black and white that an action will have a given outcome.
> We will also choose the action that we think has the least chance of causing harm through the action we take.

What will happen is that if a human sees one danger, they will either freeze or fix the problem. If they see two dangers, they will either freeze, or freeze and pray, or fix the first problem, or fix the first problem and pray. I have been in the "fix problem and pray" situation: went from 70 to 0 on the M25 behind a white van that stopped suddenly because of a truck in front of it that I couldn't see. Then I prayed that everyone behind me would stop in time.

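One way to formalize the quoted point - with invented probabilities, not real crash data - is to pick the action with the lowest expected harm rather than treating any outcome as certain:

```python
# action: (probability harm occurs, relative severity if it does) - all invented
actions = {
    "brake straight": (0.30, 0.7),
    "swerve":         (0.15, 1.0),  # less likely to harm, worse when it does
    "do nothing":     (0.90, 0.7),
}

best = min(actions, key=lambda a: actions[a][0] * actions[a][1])
print(best)  # swerve: expected harm 0.15, vs 0.21 and 0.63
```
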
> I'd be curious to see this moral dilemma:
> There's a crossing in a road. Perfect visibility. Pedestrians have precedence, and a car (driving at the speed limit) has 10 m of space to brake.
> A pedestrian starts crossing the street without checking whether a car is approaching. A car is indeed approaching, and is less than 10 m from the crossing point.
> Should the car steer and crash into a pole (potentially injuring the occupant), or hit the pedestrian?
> ANS: hit the pedestrian, and if it's not injured, the car should back up and hit him/her/they/them/it again. And again. Just as a safety measure.

I hope you realise that in any other context than cars, in most developed countries, your legal obligation as a designer would be to make sure that the vehicle could stop within the distance it could see to be clear and be reasonably certain would remain clear. Where pedestrians have priority, it wouldn't be remotely justifiable to point to administrative controls (regulations giving priority to traffic) or to assert that the pedestrian should have maintained safe separation himself.

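For what it's worth, the scenario's numbers do pin the car. Assuming a 50 km/h limit and dry asphalt (both my assumptions, not the poster's), even a zero-reaction-time machine can't stop in 10 m:

```python
MU, G = 0.8, 9.81                      # assumed tyre-road friction, gravity
v = 50 / 3.6                           # assumed 50 km/h speed limit, in m/s
braking_distance = v ** 2 / (2 * MU * G)
print(round(braking_distance, 1))      # ~12.3 m > 10 m, so a real dilemma here
```
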
> Exactly right. The trolley problem is complete and utter nonsense with respect to human actions.
> If a human had enough time to assess the situation as presented in the trolley problem and then decide on a moral course of action, there is more than enough time to take emergency braking or swerving action and avoid the collision completely.

Ah, but we can fix that little kink in the philosophical conundrum by upgrading it to: the freight train problem!

> Why are we asking a Language Model how to solve thought experiments?

Exactly this, I'm glad it's the top comment.

> I'll take a situation from my former career: [...] Either option carried a real chance people would die.

There is a very big difference between group policy decisions - all of which carry a human cost, including potentially contributing to fatalities - and a specific decision in a specific place. Even something as seemingly trivial as Daylight Saving Time has a cost in lives.