Interesting moral dilemma
So I'm very interested in AI, machine intelligence, and how bias, preferences, and morality play into the design of autonomous systems. A friend of mine shared this project with me, and I wanted to see where you all fall on the spectrum.
A self-driving car has brake failure. What should the car do?