"Cost of Human Life" by Shannon Frances Smith
An AI twist on the trolley problem. NEW featured guest to discuss the story!
Hello everyone,
We have a treat for you: after the story we have a featured guest, Andrew Heard, to answer the discussion questions!
Also, After Dinner Conversation has been receiving some honors recently, and that’s thanks to your support.
MORE NEWS:
Acquisitions readers needed! Help us decide which story submissions get published. No experience required, just a keen eye for stories that make you think. If you’re interested in volunteering, just shoot Kolby an email and he’ll get you set up.
Educators, find out how to get a free copy of a themed edition.
Check out our free partner ebook downloads.
If you enjoy these stories and want to support writers and what we do, you can always subscribe to our monthly magazine via our website (digital or print), or via substack.
Thanks for reading, sharing, and re-stacking this post!
Tina
Take the poll for this week’s story, “Cost of Human Life”:
(It’s completely anonymous…and fun!)
Last week’s poll results:
Cost of Human Life by Shannon Frances Smith
Donald Smith got up at his usual time of six in the morning to go to work. He tossed on a plaid dress shirt and jeans, the programmer’s unofficial uniform. Grabbing the keys to his late-model car and his wallet, he walked out the sliding door of his one-bedroom condo.
He was part of a development team at Canal Railroads. They were creating an AI to control the trains and rail switches. The AI’s purpose was to automate the running of the trains by making decisions usually made by an operator or engineer. The AI was in its testing phase, and on this day, Donald was tasked with putting it to the ultimate challenge: The Trolley Problem.
The Trolley Problem refers to an exercise in ethics that goes like this: a runaway trolley is heading toward five people. The trolley can be diverted onto another track, but that track has one person on it. The trolley cannot stop in time either way. Do you divert the trolley?
This test was necessary because this very scenario could happen with actual trains. The algorithm had to do the right thing based on the information that had been fed into it. If the AI did the wrong thing, the result could be the deaths of many and the loss of millions of dollars in train damage and lawsuits.
Donald got to his workstation at the usual time to punch in the parameters. The desk itself was black with a white plastic surface, his computer tower and two monitors sitting on top and a black swivel chair waiting for him. He saw various others from his team typing away at their stations, monitoring one thing or drafting a change list for another. A few business-dressed people walked by and exchanged words he did not understand before heading toward the break room.
Into the terminal he entered the data: five people on the track near Kirby, a switch and control point on the track ahead, and one person on the diverted route. He punched in that the weather was sunny, about twenty degrees Celsius, that the track ran through a field, and that there was no damage to the rails. He also put in that it was noon on a typical Wednesday. These details might seem a bit much, but the model asked for them as part of its decision making.
In live production, this data would already be known or sensed by the AI. Donald entered the mock data and waited for the result with a smug grin. “This is easy,” he thought. “Kill one to save five.”
Being a software engineer, he thought morality was an easy equation, and debates on ethics held little interest for him. He was the product of an education that pushed STEM so hard that things like critical thinking and ethics went by the wayside. He figured the computer would do the utilitarian thing and save the most lives without thinking about it.
His mind was blown by the AI’s result: do nothing, kill the five. Flabbergasted, he checked the debugging console. The console reported no errors; the algorithm ran as expected. It had gone through all its epochs, and every weight in the neural network had been applied. Why not divert the train? he thought as he looked into the log file the AI had generated to see what it found.
“Diverting the train creates a ~35-minute delay and inconveniences several hundred passengers at the waiting stations and on the train itself. While the one person should not have been on the track, the struck person’s family can also sue and collect ~$100,000, as diverting the train caused their death when no action would have kept them alive,” the log read. “Cost to clean viscera: ~$1,500. Cost of waiving fares for a train delay that is Canal Railroads’ fault: ~$2,000.”
“What about the five that died on the tracks?” Donald asked rhetorically. “Can their families sue for the lack of action?”
He kept reading: “The train going about its normal operating sequence would go through this path. As the train was operating normally and the five should not have been there in the first place, Canal Railroads cannot be held responsible for their deaths. ~$0 collected from them. ~0 minutes of delay would occur. Cost to clean viscera: ~$3,000. No fares waived, as five humans will not create enough resistance to delay the train.”
The truth of the last bit stung as trains can plow through semis and still keep going.
Donald banged his head against his desk. “Is that the value of human life? Less than a lawsuit and being a nuisance?” He looked at the log again, then dug into the code and the data fed in from the training and test sets. He pored over the model and its pipeline with wild intent, scouring lines of semicolons and braces, weird variable names, and constants. He scrutinized the layers that made up the model’s neural network, looking at each tensor within the model and the constants fitted in, desperate to believe he had made a mistake and fed it the wrong data. How was each node weighted and layered?
He went into the break room to get a coffee. The coffee machine took his input, ground the beans, and sprayed the coffee into a waiting paper cup. Donald took the cup and added some cream and sugar, then sipped it. He had a moment where he thought, “What if the information fed to the model was different? What if I changed the data points, and in turn, where the train was geographically and in time?”
He went back to his station to rerun the model, setting the junction to one near Peterborough, with the weather a snowy negative ten degrees Celsius. The time and day were assigned to a typical Sunday at ten a.m. The rest of the information was the same. Donald ran the model with bated breath. After some time and sips of coffee, the results were disappointing: “Do nothing, kill the five.”
Donald looked at it again. To do something: “Diverting the train creates a ~65-minute delay and inconveniences several hundred passengers at the waiting stations and on the train itself. While the one person should not have been on the track, the struck person’s family can also sue and collect ~$100,000, as diverting the train caused their death when no action would have kept them alive. Cost to clean viscera: ~$1,500. Cost of waiving fares for a train delay that is Canal Railroads’ fault: ~$1,500.”
To do nothing: “The train going about its normal operating sequence would go through this path. As the train was operating normally and the five should not have been there in the first place, Canal Railroads cannot be held responsible for their deaths. ~$0 collected from them. ~0 minutes of delay would occur. Cost to clean viscera: ~$3,000. No fares waived, as five humans will not create enough resistance to delay the train.”
Donald slumped in his chair, defeated. The one sticking point was that taking action killed one person while doing nothing killed five; it was arguably not the railway’s fault if someone died from its inaction, but there was fault if its action killed someone. The hundred-thousand-dollar lawsuit for the death of an ordinary person was what ultimately weighed on the model’s decision making. Seeing how the model actually functioned didn’t sit well with Donald.
In truth, there was no parameter for the value of human life, so the AI went with how either action would affect Canal Railroads monetarily, by design from the inception of the model. The design meetings had gone over many ways for the model to value things, and no one argued when it was designed to value money. Most assumed that people had some monetary value: Canal Railroads could be sued for harming, maiming, or killing one person, or many, in situations like a train derailment. Even in freight, human life was supposed to have a dollar value, as some freight, like crude oil mixed with benzene derailing in the middle of a city, could cost enough in monetary loss to bankrupt Canal Railroads. Money was also supposed to factor into things like train delays. Like many train services, Canal Railroads refunded fares in the event of a train delay and paid fines if said delays made freight late.
The least damaging result measured in time and money was to do nothing.
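For readers who want to see the logic laid bare, here is a minimal sketch of the kind of comparison the log describes. It is purely illustrative; the story never shows Canal Railroads’ actual code, and the function names and structure below are invented. It simply adds up the dollar figures from the first scenario’s log and picks the cheaper option, which is why, with no parameter for the value of a life, “do nothing” wins.

```python
# Illustrative only: the story never shows Canal Railroads' real code.
# This sketch reduces the model's reasoning, as reported in the log, to a
# plain cost comparison in dollars; the figures follow the first scenario's
# log entries.

def divert_cost(lawsuit=100_000, viscera=1_500, waived_fares=2_000):
    """Cost to the railroad if the train is switched onto the one person."""
    return lawsuit + viscera + waived_fares

def do_nothing_cost(lawsuit=0, viscera=3_000, waived_fares=0):
    """Cost to the railroad if the train stays on course and hits the five."""
    return lawsuit + viscera + waived_fares

options = {"divert": divert_cost(), "do nothing": do_nothing_cost()}

# With no parameter for the value of a human life, the cheaper option wins:
# ~$103,500 to divert versus ~$3,000 to do nothing.
decision = min(options, key=options.get)
print(decision, options)  # -> do nothing {'divert': 103500, 'do nothing': 3000}
```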
Donald left work that night with a bitter taste in his mouth after completing his crash course in ethics. He pondered the big lesson he had learned while waiting in a traffic jam during rush hour, realizing that there was more to humans and human life than money after all. However, a very uncomfortable question still crept into his mind that he couldn’t quite beat back: exactly how much was a human life worth?
Discussion Questions (Answer in Comments Section Below)
The AI program has made the most cost-efficient choice: to do nothing and allow the five to die. Should cost be the only factor when making operating decisions of this type? What, if any, other factors should be taken into consideration?
They say you can’t put a price on life, but juries do this all the time. The accidentally severed arm of a professional pitcher is worth more in a civil lawsuit than that of a normal person. Should there be a punitive damages cap in death cases? Is there a safe and effective medical procedure that’s simply too expensive to offer to all citizens? Where, if anywhere, would you put price limits on life?
If you were the one controlling the railroad switch, would you pull the lever saving five people and killing one? What is your reasoning? Would you be willing to harvest organs from one healthy person to save five other people? What, if anything, is the difference?
What, if any, changes would you make to the AI decision parameters in this story and why?
Does it matter that ignoring the cost of choices would mean higher ticket prices for everyone to offset litigation costs? What if a change in the AI to ignore choice costs increased train tickets by 10%, 50%, 200%?
Andrew Heard on “Cost of Human Life”
The AI program has made the most cost-efficient choice: to do nothing and allow the five to die. Should cost be the only factor when making operating decisions of this type? What, if any, other factors should be taken into consideration?
Cost can be a factor in determining the operating decisions of people, companies, and artificial intelligence, but it can't be the only one. You could argue that the emotional effect of knowing you were on a train that killed five people should factor in as well; the passengers might consider that a “cost” of sorts and worthy of some kind of compensation. Whether or not that would actually factor into any court decision is another matter entirely. It's not obvious that an artificial intelligence can factor the emotional responses of the passengers into any calculations.
They say you can’t put a price on life, but juries do this all the time. The accidentally severed arm of a professional pitcher is worth more in a civil lawsuit than that of a normal person. Should there be a punitive damages cap in death cases? Is there a safe and effective medical procedure that’s simply too expensive to offer to all citizens? Where, if anywhere, would you put price limits on life?
There is certainly an argument to be made in favour of putting a price limit on punitive damages; however, it's more appropriate to limit the rising costs of such procedures. If you control the price that can be charged for medical procedures, you limit what needs to be provided in terms of punitive damages. This needn't happen through top-down control: the cost of such procedures usually reflects the costs a person needs to cover for themselves and their families. By reducing the cost of things generally, you reduce the cost of the procedures.
If you were the one controlling the railroad switch, would you pull the lever saving five people and killing one? What is your reasoning? Would you be willing to harvest organs from one healthy person to save five other people? What, if anything, is the difference?
Based purely on the idea that it's a question of five people versus one, and knowing nothing else about the individuals on the track, the best course of action is to choose one person over five. This assumes a utilitarian, pragmatic mindset is the best one to adopt in such a scenario. The post-accident guilt would also be easier to overcome with one person than with five, since more people would be emotionally impacted if five people died: with the one person, at minimum two parents are affected, assuming no siblings, spouse, or children; with five people, at minimum ten parents would be devastated under the same assumption.
As to the question of harvesting organs to save five people, it would depend on the consent of the one person being impacted. If the person consented to have their organs harvested, then it's morally acceptable to make such a choice. The exception would be if the person was unable to give consent due to an accident or crime beyond their control and the family consented on their behalf. However, if no family is available in that scenario, it's not necessarily acceptable to harvest the organs.
The main difference between the trolley problem and the organ-harvesting issue is that the trolley problem offers limited control: there is no opportunity to choose not to kill anyone, only how many might die. In the organ-harvesting scenario, it's an active choice, and the option to do nothing exists, so the moral costs are higher.
What, if any, changes would you make to the AI decision parameters in this story and why?
It's not clear that you can program the emotional costs of a decision into an artificial intelligence. So it's hard to say that anything would necessarily solve the problem beyond that. Even if you could, you would have to be able to ensure that the artificial intelligence didn't weigh the emotional impact too heavily.
Potentially, you would be able to program the number of people on the track as the deciding factor. Given that five is greater than one, the artificial intelligence could be adjusted to favour five lives over one.
Does it matter that ignoring the cost of choices would mean higher ticket prices for everyone to offset litigation costs? What if a change in the AI to ignore choice costs increased train tickets by 10%, 50%, 200%?
This is fundamentally a question of what to prioritize in terms of cost. As a company, it's perfectly reasonable for the train company to prioritize the cost of tickets and the negative impact of higher costs, so programming an artificial intelligence to factor that in makes sense. However, you also have to consider the cost to the company of riders' perceptions. If prioritizing costs leads to negative press about its trains killing five people, especially if it becomes widely known that the option existed to kill one person instead of five, that itself becomes a cost. Fewer riders would be willing to take a train that might leave them feeling guilty for running over five people.
Fewer riders means the company's costs can't be covered as easily, which leads to higher prices to make up the difference. So the economic impact on the train company is the same either way, which means it makes more sense for the company to prioritize fewer people dying.
Andrew Heard has been a writer for almost 20 years in various forms from blogging to novels and screenwriting. Subscribe to Andrew’s Substack:
Say the trolley problem went like this: the train was supposed to go through Track A, but because of some wrong information, Track A had five workers sitting there having their lunch. Track B had one person trying to cross it, knowing that the train was supposed to travel through Track A. The AI is wrong to think about it in monetary terms. Say Track B was faster and the train was not scheduled to take that route for some reason, like the station master being on leave; then the AI might make the decision to travel through Track B. This utilitarian approach would mean that the AI doesn't take morality into the equation at all.
Say the case was that Track A had one person on it and Track B had none, Track A being the scheduled path and Track B the slower one. The AI might then choose not to divert. Had Track B been faster, the AI would choose to divert. But then, what if some terrorists chose to exploit this nature of the AI and tampered with the tracks? Maybe the track team did not perform the routine check, knowing that Track A would be taken. In such a case, saving the one person on the track would place all the train passengers in danger. So, in any case, unless the safety of the passengers can be assured, the train should not divert from its scheduled path.