What moral decisions should driverless cars make? | Iyad Rahwan

Should a driverless car kill you if it means saving five pedestrians? In this primer on the social dilemmas of driverless cars, Iyad Rahwan explores how the technology will challenge our morality and explains his work collecting data from real people on the ethical trade-offs we’re willing (and not willing) to make.

The TED Talks channel features the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and more.

32 Replies to “What moral decisions should driverless cars make? | Iyad Rahwan”

  1. Only GOD knows who must die… so make religious driverless cars, with GOD's moral code embedded. (You just need to type in your favourite religion.) If the car is going to crash you don't need to pray; the car does it for you, in several languages. Then the car decides if it's your time to join the Big Daddy or not.
    Edited: Islam must be banned from cars, of course. They can go rampage very easily.

  2. This is a completely false premise. Software capable of calculating such decisions would be far more advanced than the driverless cars themselves – it would have to weigh everything, down to the likelihood that the wall used to stop the car collapses and injures even more unseen people. All in all, this will not be an actual issue with driverless cars.

  3. I wonder how many lives would be saved if people stopped driving cars and cities all had effective, safe mass transit. The internal combustion engine is probably one of the worst inventions of all time in terms of harm to people, animals and the planet in general. It would be so nice to go shopping via a regular monorail, and there could even be a food-service car for the morning commute, with a coffee and croissant ready for you when you are in a hurry. What would it be like if cities were built for people instead of cars?

  4. "These cars will reduce deaths by cars by millions" "we should worry about the ethical ramifications tho…"

    yeah this is a fun "exercise" but if automated cars save millions of lives from human driving error, this exercise is pointless. We shouldn't be worrying about the ethics of any single scenario. We should be worried about how many lives will be logically saved by less human driving mistakes.

  5. The quantum self-driving car problem. The car has three lane choices: the right lane has one person in it, the left lane has nine, and straight ahead there are five people. If it does nothing, it will go straight ahead and kill five people. If it acts, it will end up in either the right lane or the left lane. Until you observe the car, you will not know the effect of its choice, and thus it is said to be in superposition.
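
    For what it's worth, a purely utilitarian controller would resolve the "superposition" trivially. A minimal sketch in Python – the casualty counts come from the comment above; everything else is invented for illustration:

        # Casualties per available action, taken from the comment.
        casualties = {"left": 9, "straight": 5, "right": 1}

        def choose_lane(options: dict) -> str:
            # A purely utilitarian policy: pick the action that
            # minimizes expected casualties.
            return min(options, key=options.get)

        print(choose_lane(casualties))  # -> "right" (observation
                                        # collapses the superposition)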

  6. THE COMPUTER NEEDS TO TAKE FAULT INTO ACCOUNT. If two pedestrians are jaywalking, a car shouldn't sacrifice the driver for a mistake the pedestrians made. Likewise, the pedestrians in the example all step onto the road right as the car is coming through. If the brakes really stop working, that is the driver's fault for not maintaining the vehicle, and therefore the driver should be the one sacrificed.

  7. Situations with brake failure should be solved with an alternative braking method: rotate the front wheels inwards at 45 degrees in opposite directions, and the rear wheels the opposite way, which will cause the car to skid to a stop regardless of the wheels' rotation. Or even offer a third stopping option and spring a pin into the wheels to lock them from spinning. Having only two ways to stop is a BAD idea; even current cars let you use engine braking to slow down as a third option.

    I feel that each problem should offer multiple solutions, and that it should always fall back on the person who chose to use the machine to be put in harm's way before ANY external person(s). I also agree with the person below who says that those of us outside the car need to know what action the car is taking – by use of beeps/lights, or by knowing it will always swerve to one side, etc.
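
    A rough sketch of such a fallback chain, in Python. Every method name here is invented for illustration; real vehicle control stacks don't expose an interface like this:

        def stop_vehicle(car) -> bool:
            # Try each stopping method in order until one succeeds.
            fallbacks = [
                car.apply_service_brakes,  # normal hydraulic brakes
                car.engine_brake,          # downshift / engine drag
                car.toe_in_wheels,         # angle wheels to force a skid
                car.deploy_wheel_pins,     # last resort: lock the wheels
            ]
            for attempt_stop in fallbacks:
                if attempt_stop():
                    return True
            return False  # every stopping method failed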

  8. They should NOT make moral decisions. They need to be programmed to follow traffic laws and avoid obstacles, stopping when needed. Programming a car to kill someone in order to avoid a perceived worse outcome is just absurd and will lead to numerous unnecessary fatalities.

  9. If a car has time to make a decision, it has had time to calculate all possible scenarios and modify its velocity so as to avoid every dangerous possibility. Accidents are the result of inattention, slow reactions and drivers not looking for possible dangers; the whole point of driverless cars is to calculate all possibilities all the time. In the rare case of an accident, the driverless car will not have calculated the danger in advance – and if so, it doesn't have the information to make a "moral" decision anyway.
    If it's a mechanical failure, then the car won't have the ability to carry out any "moral" decisions it makes.

  10. When thinking about AI-driven cars, what I call robot cars, I've always wondered about something similar. Let's say I'm being driven by a robot car down the interstate at 75 mph and a deer runs in front of the car from an undetectable position. If the car does nothing, it will probably be totaled when it hits the deer, and it could seriously hurt or kill the driver and passengers. But if it swerves hard enough to avoid the deer at such a high speed, the risk of rolling the car or otherwise wrecking is much higher. What would the robot car do?

    Or what if it must choose between doing nothing and hitting the deer, or swerving and likely hitting another vehicle in the other lane?

  11. I don't think people will need to buy the cars. Uber, for example, seems poised to adopt self-driving cars. They will most likely use the utility principle due to public pressure, and passengers will just go along with it for the convenience. Prioritising immediate convenience over potential future negatives is something modern society specialises in – big data, for example. The interesting point will come when such an accident occurs and the media covers it properly. I predict a huge outcry, followed by no change when everyone loses interest due to the lack of easy solutions.

  12. Or let the car be registered to a mode based on the owner's preference. For simplicity, let's say we can register it in either a "prioritize me" or a "prioritize others" mode. Don't force ethical choices into the technology – let people make the choices and simply have the technology obey. After all, if we consider the three "laws of robotics" in the scenario where you could either swerve to kill yourself or kill someone else, we must automatically discard rule #1 (a robot cannot harm humans), since a life will be lost regardless. That means rule #2 (a robot must obey humans) takes precedence, and so I would demand the car behave the way I registered it to behave – not make an arbitrary ethical choice on my behalf that I may disagree with.
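
    A minimal sketch of that registration logic in Python – the mode names come from this comment; the function and everything else are invented for illustration:

        # Pick an action when every option harms someone. `actions`
        # maps an action name to who bears the harm: "occupant" or
        # "others". Rule #1 (harm no human) cannot be satisfied, so
        # rule #2 applies: obey the owner's registered preference.
        def resolve_unavoidable_harm(actions: dict, mode: str) -> str:
            spared = "occupant" if mode == "prioritize me" else "others"
            for action, victim in actions.items():
                if victim != spared:
                    return action
            return next(iter(actions))  # only one outcome available

        # Usage: swerving harms the occupant; staying harms a pedestrian.
        print(resolve_unavoidable_harm(
            {"swerve": "occupant", "stay": "others"},
            mode="prioritize others",
        ))  # -> "swerve": the registered mode sacrifices the occupant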