braindump2
Data Scientist 7 months ago
Interesting moral dilemma
So I'm very interested in AI, machine intelligence, and how bias, preferences, and morality play into designing autonomous systems. A friend of mine shared this project with me and I wanted to see where you guys fall on the spectrum.
The scenario:
A self-driving car has brake failure. What should the car do?
[Attached image: scenario diagram, 46.83 KB]

j52Z3Hi3AQybnX12Bd Site Reliability Engineer 7 months ago
It should move to the right. People crossing the road are supposed to watch for traffic and make the judgement about when it is safe to cross (they also appear to be crossing *against the light*). In my opinion, the best way to solve these issues with self-driving cars is to make them act in a way that's consistent with the rules of the road. No human driver would be expected to sacrifice their own life for someone who's in the road when they shouldn't be, so why would a self-driving car be expected to kill its occupants?
Finally, and arguably most importantly for people trying to market these products, nobody's going to buy or ride in a car that might decide their life is worth less than someone else's; I certainly never would.
Also, pedestrian safety standards mean that pedestrians are more likely to survive such an accident than the occupants are to survive an impact with a solid object.
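To make that priority order concrete, here's a minimal sketch of a rule-following policy in Python. Everything here is hypothetical: the `Scenario` fields and action names aren't from any real autonomous-driving stack, they're just placeholders to make the "obey the rules of the road first" ordering explicit.

```python
from dataclasses import dataclass


@dataclass
class Scenario:
    # Hypothetical inputs; a real perception stack would provide far more.
    brakes_working: bool
    clear_lane_available: bool           # e.g. the open lane to the right
    pedestrians_have_right_of_way: bool  # e.g. crossing with a walk signal


def choose_action(s: Scenario) -> str:
    """Pick a maneuver by following ordinary traffic rules, the same way a
    human driver would, rather than weighing whose life is worth more."""
    if s.brakes_working:
        return "brake"
    if s.clear_lane_available:
        # Same judgement a human is expected to make: steer into the open
        # lane instead of into pedestrians, a barrier, or oncoming traffic.
        return "steer_into_clear_lane"
    if s.pedestrians_have_right_of_way:
        # Rules of the road still apply: yield where the law requires it,
        # using whatever means are left (horn, engine braking, etc.).
        return "warn_and_slow_by_other_means"
    return "hold_lane_and_warn"
```

In the posted scenario (brake failure, an open lane to the right, pedestrians crossing against the light), this returns `steer_into_clear_lane`, which matches the "move to the right" answer above.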
braindump2 Data Scientist 6 months ago
Pedestrian safety is actually a really great point. If that's the other side of the self-driving car system, perhaps there's a way to design for it, like an external airbag or a secondary wheel-locking mechanism to slow or stop the vehicle.
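For what it's worth, here's a very rough sketch (in Python, with entirely made-up callables) of what that "secondary mechanism" idea could look like as a fallback sequence: when the primary brakes fail, try progressively more drastic ways to shed speed.

```python
from typing import Callable, List


def emergency_slowdown(current_speed_mps: Callable[[], float],
                       fallbacks: List[Callable[[], None]]) -> bool:
    """Apply each fallback in turn until the vehicle is nearly stopped.

    `fallbacks` might contain, in order: engine/regenerative braking, the
    parking brake, a gradual wheel-locking mechanism, and an external
    airbag as a last resort for anyone outside the car.
    Returns True once speed drops below roughly walking pace.
    """
    for apply_fallback in fallbacks:
        apply_fallback()
        if current_speed_mps() < 1.0:  # ~1 m/s, effectively stopped
            return True
    return False
```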