Moral Dilemmas of Self-Driving Cars
I would love to have my own self-driving car. I mean, who wouldn’t?
But they’re not perfect. If you think about it, self-driving cars have to make decisions like you and me. They don’t eliminate the possibility of collisions (yet)… they just decrease the chances of one happening in the first place.
During an accident, they’re not exactly choosing who to kill, but they are choosing who not to kill. Cars don’t go out murdering people on purpose, but in a collision they specifically avoid whatever they are told to avoid. If the choice is between two different groups of people, then choosing not to kill one group is deadly to the other. So it’s all about priorities and values. This is a fundamental consideration for programmers, because they decide what the computer does and what course of action the car takes. So who decides, what do they decide, and how?
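To make that concrete, here’s a minimal sketch (in Python) of what encoding those priorities could look like. Everything here is invented for illustration: no manufacturer publishes logic like this, and the weights, names, and functions are all hypothetical. The point is only that someone has to pick the numbers.

```python
# Purely hypothetical sketch: a hard-coded "value" table a programmer
# might (controversially) assign to outcomes. All weights are invented.

def outcome_harm(people_harmed: list[str], weights: dict[str, float]) -> float:
    """Total harm score of an outcome: sum of the weights of everyone harmed."""
    return sum(weights.get(person, 1.0) for person in people_harmed)

def choose_action(options: dict[str, list[str]], weights: dict[str, float]) -> str:
    """Pick the action whose outcome has the lowest total harm score."""
    return min(options, key=lambda action: outcome_harm(options[action], weights))

# Invented weights -- this is exactly the value judgment the article is about.
WEIGHTS = {"child": 1.5, "adult": 1.0, "elderly": 0.8}

# Two possible swerves, each harming a different group of people.
options = {
    "swerve_left": ["adult", "adult"],   # harm score 2.0
    "swerve_right": ["child"],           # harm score 1.5
}

print(choose_action(options, WEIGHTS))  # -> "swerve_right"
```

Notice that the “answer” falls straight out of the weights: change 1.5 to 2.5 and the car swerves the other way. That arbitrariness is the whole problem.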
If you have watched Circle (2015) on Netflix, you know a bit about how it goes. If you haven’t, watch it! It really gets you thinking about the future of humanity and the psychological decisions we make on a day-to-day basis: an hour and a half of pure “living in the present”, or whatever they say. The premise of the movie is simple: 50 people are in a room, and only 1 can make it out alive, sort of like an ‘ultimate goal’. But they choose as a collective (although not always) who deserves to die. Each person has a different story and could be worthy of getting out alive. So they choose.
What you think is the ‘right’ moral choice is going to be different from what the people around you think, because morality is subjective, and every single person’s values differ. We are also all very flawed, whether you like it or not: we make decisions from prejudgements and biases rooted in our past experiences.
It’s also really hard to debate whether you should save Person A or Person B in an accident, because you’re judging the value of a human being based on abstract criteria that you’ve made up and on what is socially accepted.
To my teachers… where was the mark scheme for what constitutes a good human being?
So instead of universal morality, there is a generally accepted consensus of what is ‘more important to society’. Doctors, pregnant women, and children are commonly valued, and it’s not hard to figure out why (I elaborate on this below). Even on the Titanic, women and children were the first onto the lifeboats. Obviously, other things played into that… but still.
It’s actually interesting to see who picks what; it says a lot about a person’s character, just like the classic trolley problem. At the end of Circle, you can see who the groups in other rooms decided to save. The same themes of life return: which do we value more, the group or the individual?
When you think about a collision where you either protect your pregnant sister and her husband, who’s a doctor, or a random stranger, I’m going to go out on a limb here and guess that you’d save your sister and brother-in-law.
But what if the random stranger were the one inside the vehicle? Say that in order to save your family, the stranger would have to die. You’d probably still go through with it. Even if the stranger were a passenger in the self-driving car, you’d probably still choose to save your sister, her husband, and their baby. And I get it: people don’t always want to save the passengers. It’s more about how many people are killed, and who they are.
Cars can be programmed to pick the ‘best-case scenario’; that’s the easy part. The hard part is that nobody wants to buy a car that would sacrifice its passengers in an accident. That’s bad for business: how do manufacturers market or sell such a product? Should cars be programmed to avoid intervention and collision at all costs? And what if the pregnant mother and her husband were breaking the law by jaywalking? There are no definitive answers.
My inspiration for this article came from personal intrigue, but also from taking an MIT quiz (the Moral Machine) that makes you choose a course of action, and who not to kill, in different scenarios with a self-driving car.
At the end of the quiz, they give you a summary of results. Surprisingly, my ‘Most Saved Character’ was a female doctor, while my ‘Most Killed Character’ was an elderly man. I felt pretty bad about this… but it makes sense in terms of each character’s practical, functional value, and in terms of my own biases and past experiences.
Note also that this isn’t necessarily an accurate indicator of what you would do if you were actually placed in that exact situation. Obviously, everything is different in theory versus in practice.
When you see your results, you’ll also see how you compare to the other people who have taken the test. Overall, people had a tendency to save girls over boys. This goes back to old biological reasoning, since women are the ones who bear children, but that isn’t the only reason. There’s still a stereotype that women are ‘caregivers’ who display maternal behaviours, which makes killing them feel all the more cruel.
It’s interesting to think that people in today’s world would save a woman over a man. Obviously this small sample isn’t a definitive statistic, but it does suggest a tendency. If you think about gender roles and the masculine superiority of the past, you can see how things have progressed. I’m not saying the underlying bias doesn’t exist anymore, either; just take a look at the gender discrepancy in STEM fields!
Not only do people save girls over boys, but they will save humans over animals, time and time again. There’s a species preference, simply because we are human and have basic instincts of self-preservation. It’s slightly different if you have a personal connection with that animal, like if it’s your pet. We are inherently selfish.
I told you before that my ‘Most Killed Character’ was an old man. I don’t know if I consciously agree with this, because I really do think there’s value in the elder population.
There is no denying that there is wisdom and knowledge to be gained from older generations and their past experiences. While they might not contribute to society in the most typical, concrete way, who is to say that their knowledge and wisdom are invalid?
Granted, times change. Apparent wisdom of the 10th century doesn’t necessarily apply to all generations that follow. Basically, take everything with a grain of salt.
For young people, there’s a gamble. You might be inclined to save them because they have a whole life ahead of them; I know that’s what I did. It means they have the potential to be great, do incredible things, and change the world. The child you save could very well be the next Elon Musk! On the flip side, it also means young children have the potential to become really, really bad people… Not to name examples, but you get the idea. Sorry, but it’s the truth, whether you like it or not.
When making decisions, I found myself picking the doctor over the homeless person, time and time again. And based on the averages, I’m not the only one. I honestly don’t think this is a good way of thinking, because social status should not be the determining factor in whether you live or die.
We value doctors because we see their work as essential to the survival of humankind, and honestly, it is. Not everyone can do the job, and you need years of education to be able to practice medicine. What people overlook is that it isn’t the only role society needs for people to live healthy and fulfilling lives. No farmers? Can’t eat. No engineers? No computers, bridges, or factories, you name it. No educators? How will you learn?
I’ve realized that the quiz itself is flawed because the information given about each choice is limited (they only give gender and occasionally a job), but there are still questions to be raised.
Is it an inherent societal flaw that how much we value a person is based on what they contribute to society? The answer is yes.
Thinking about this problem as ‘choosing who not to kill’ is harsh. But it’s the reality of our situation, and we all know it.
So what’s the answer? You just slam on the brakes… before the accident occurs! Haha, no, I’m just kidding. While that is sometimes a possibility, creating self-driving cars means accounting for a ton of different scenarios, including the one where braking can’t prevent the collision and the car has to make an ethical decision about who to kill. It might be rare (we hope), but the consideration is inevitable.
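Here’s a rough sketch of why “just brake” isn’t a complete answer: a planner can brake whenever the stopping distance allows, but it still needs some fallback policy for the remaining cases. This is only an illustration under simple textbook physics; the deceleration value, function names, and the `apply_collision_policy` fallback are all made up.

```python
# Hypothetical decision flow: brake if physics allows, otherwise fall back
# to whatever collision policy the manufacturer chose. Numbers are invented.

def stopping_distance(speed_mps: float, decel_mps2: float = 8.0) -> float:
    """Distance needed to brake to a stop: v^2 / (2a), from basic kinematics."""
    return speed_mps ** 2 / (2 * decel_mps2)

def plan(speed_mps: float, distance_to_obstacle_m: float) -> str:
    if stopping_distance(speed_mps) <= distance_to_obstacle_m:
        return "brake"  # the easy case: the car can stop in time, no dilemma
    # The hard case: a collision is unavoidable, and *some* policy must
    # pick among the remaining trajectories. This is where the ethics live.
    return "apply_collision_policy"

print(plan(speed_mps=14.0, distance_to_obstacle_m=30.0))  # "brake" (~12.25 m needed)
print(plan(speed_mps=28.0, distance_to_obstacle_m=30.0))  # "apply_collision_policy" (~49 m needed)
```

Even in this toy version, doubling the speed quadruples the stopping distance, so the “can’t stop in time” branch is not some exotic edge case; it has to be designed for.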
The point is, there is no real answer. And that’s a problem, because there is no recipe for what is ‘right’ and what is ‘wrong’. Each and every one of us would save different people, so whose ethics are self-driving cars programmed to follow?
It’s not about the 0.001% chance of this exact scenario, with a self-driving car deciding who to kill; it’s about how risk gets distributed across everyone on the road. I don’t have the be-all-end-all answers to any of these questions, but I hope I gave you something to think about.
Let me know what you think!
Follow me on LinkedIn and Medium for more.