Now here is something fascinating: the “would you kill the fat man” thought experiment is being used to assist in the programming of automated vehicles. In effect, engineers are trying to program the vehicles to make ethical decisions under circumstances where there is a diabolical choice to be made. The car is going too fast, and it can veer to the left or the right, thereby killing one person or five. The trolley problem tells the engineers (who are said to lack any real understanding of ethics) how to write the algorithms. That’s a good reason for the revival of a thought-provoking mind-wank – ain’t it? (http://www.theatlantic.com/technology/archive/2015/10/trolley-problem-history-psychology-morality-driverless-cars/409732/)
As I understand it, based on some confusing and conflicting arguments, morals and ethics are closely related; but in a walnut shell, morals are our personal beliefs (rights and wrongs) and ethics are our social beliefs (rights and wrongs). One author put it succinctly: morals are how you treat the people close to you; ethics are how you treat people you don’t know. I think the concepts blur because everything seems to get quite confusing by all reports, including the etymology, which has “ethics” coming from the Greek “ethos” (custom or character) and “morals” from the Latin “moralis” (which Cicero apparently coined to translate the Greek) – so the two words were tangled from the start. It seems to me that morals are generally inward looking (and how you judge yourself) and ethics are more outward looking (and how you might be judged by others).
In which case, the fat man experiment really only tests ethics when the subjects are asked the question, because they are giving the answer that they feel you (as the asker) will least judge. The gut answer you feel inside might reflect your morals, but these seem influenced by ethics, so it seems better to steer clear of fat men and out-of-control trolleys… because it’s all not so straightforward.
But, let’s say we let each driver program his own car. The Tesla fires up and, before driving, each driver has to answer a series of questions; the car then “drives” itself according to your moral code as framed by societal ethics. This is very interesting, because it’s basically what happens now (drivers are already making all of these choices; we just don’t happen to see them codified in binary).
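Just to make the “codified in binary” idea concrete, here is a toy sketch of what such a questionnaire-driven policy might look like. Everything here is hypothetical illustration – the question names, the rules, the whole interface are made up for this post, not any real vehicle’s API:

```python
# Toy illustration: the driver answers a questionnaire once, and the
# resulting policy is what the car consults in a no-win swerve situation.
# All names and rules are invented for the sake of the example.

def build_policy(answers):
    """Turn the driver's questionnaire answers into a decision policy."""
    return {
        # "Do I kill one man, or five?" -- fewer deaths look better.
        "minimise_deaths": answers.get("minimise_deaths", True),
    }

def choose_swerve(policy, left_group, right_group):
    """Return 'left' or 'right'; the side chosen is the one struck."""
    if policy["minimise_deaths"]:
        # Strike the smaller group (ties default to the right).
        return "left" if len(left_group) < len(right_group) else "right"
    # Purely illustrative fallback when no rule applies.
    return "right"

policy = build_policy({"minimise_deaths": True})
print(choose_swerve(policy, ["one pedestrian"], ["a", "b", "c", "d", "e"]))
```

Even this trivially simple version makes the problem visible: every `answers.get(...)` line is a moral judgement frozen into code, and the defaults are someone’s judgement too.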
There are two issues that spring to mind. The first is that, for the most part, it’s just safer not to let the unwashed masses make all of these individual decisions because (as a wise man tells me) a type of social anarchy is likely to ensue. Consider this. Let’s say the car asks: do I kill one man, or five? The answer might be one man (fewer deaths look better). The car might ask: do I kill five women, or one man? For the misogynist cultures, the answer might be five women. The car might ask: should I always avoid pregnant women and children? The car might ask: do I always avoid the group in which there is someone I recognise as a friend from your FB page? The car might ask: do I always aim for the enemies you have “blocked” on FB? Should I always aim for the group in which there are known homosexuals? Or women who have had an abortion? Devil worshippers? Christians? People with stars on their bellies? I might not even get into a car with someone until I had received a printout of their moral code, as programmed, reviewed it and considered it acceptable. The whole thing could get very awkward.
The second issue for me is “Blink”, Malcolm Gladwell’s book, which argues that we reveal our true moral standpoint under pressure. We might not even know we are prejudiced, sexist or racist until we have to make a subconscious decision under pressure. Or vice versa: we might act homophobic, but actually be quite reasonable in our deep thinking – our expression of the fear of pink being a desire to fit in with the lads. My point is that what we might programme into the car could differ vastly from what we would actually do under such circumstances. Even if we have clear thoughts, our actions can speak differently. So expecting such a brain to provide its own individual programming seems illogical.
Given the choice, there is no way I would want to program my own automated vehicle. But I can imagine that there are people who would want to do it. These are the people who demonstrate, by their very desire to do it, that they should be the last people given any sort of permission to do so – ever. Crikey. It makes me wonder how we are, as individuals, actually allowed to do anything. Being safe in the knowledge that the car has been programmed by philosophers who have thought deeply and come up with something close to the middle of the bell curve suits me. However, this does require that someone has done the thinking and asked the questions and generally been all academic about it. But, now I think on it, I never interview anyone before I get into a car with them… or check their understanding of philosophy. Or how they feel about fat men.