2 Comments

Say the trolley problem went like this: the train was supposed to go through track A, but because of some wrong information, five workers were sitting on track A having their lunch. Track B had one person trying to cross it, knowing that the train was scheduled to travel through track A. The AI would be wrong to think about this in monetary terms. Say track B was faster, and the train was only not scheduled to take that route for some incidental reason, such as the station master being on leave; the AI might then make the decision to travel through track B. This utilitarian approach would mean that the AI doesn't take morality into the equation at all.

Say the case was that track A had one person and track B had none, with track A being the scheduled path and track B the slower one. The AI might then choose not to divert. Had track B been faster, the AI would choose to divert. But then, what if terrorists chose to exploit this behaviour of the AI and tampered with the tracks? Maybe the track team did not perform the routine check, knowing that track A would be taken. In such a case, saving the one person on the track would place all the train passengers in danger. So, unless the safety of the passengers can be assured, the train should not divert from its scheduled path.


Certainly interesting questions to ask there. However, the utilitarian approach does inherently take morality into the equation. People assume it doesn't because it's presented as an equation, but underneath is the assumption that life has value, and that's not necessarily something you can produce an answer for with an equation.
