Paternal Beneficence and Following The Threads of Blame

There are many aspects of Artificial Intelligence (AI) that call for us to pause the pursuit of progress and take a moment to ensure we do things right the first time. This powerful technology brings so much apprehension not just because of its techniques but because its rollout is quick and there is no face to blame if things go awry. That brings up the two issues I want to cover today: paternal beneficence and how to attribute blame.

How do we create a system that makes decisions for other people, especially when those people don't know that decisions are being made for them? This happened long before the widespread dissemination of AI, as businesses and governments decided what was important for one group or another, and what thresholds people had to meet before they could access benefits. The problem is only exacerbated when we use AI to help with, or outright make, decisions for us, because each agency defines its goals and outcomes differently, leaving the system susceptible to benevolent decisions with malevolent outcomes.

Say you wanted to decrease the mortality rate in a hospital. On the surface this seems like an ideal goal, but only because we carry unstated assumptions about the parameters around the decisions that should and could be made. An AI system that is agnostic to moral platitudes may simply reduce the rate of high-risk patients arriving at the hospital, rerouting them elsewhere to ensure that the cases the doctors face have a higher likelihood of success. This would never be discovered unless someone were constantly supervising the AI or an audit of the system were conducted; in the meantime, hundreds of injuries or deaths might have been prevented had the system never been brought online. This is where the process of de-black-boxing comes in, which calls for us to be as explicit as possible about outcomes and parameters. But, as with legislation, this requires hard lines to be drawn, and some people will fall through the cracks in the system, since we cannot account for everyone. It also presupposes that those ultimately sorted by the system are unable to influence it in the moment they come into contact with it: a paternalistic choice, made because we believe the system or its administrators have the expertise to make a better, more informed decision.
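The hospital scenario above can be sketched as a toy simulation (all numbers and function names here are hypothetical, purely illustrative): a policy judged only on in-hospital mortality scores better by rerouting high-risk patients, because the people it turns away never appear in its metric.

```python
import random

def simulate_mortality(admit_high_risk: bool, n: int = 10_000) -> float:
    """Toy model: each arriving patient is high-risk (30% chance of dying
    in hospital) or low-risk (1% chance). A policy that reroutes high-risk
    patients makes them invisible to the hospital's mortality statistic."""
    deaths = 0
    admitted = 0
    for _ in range(n):
        high_risk = random.random() < 0.2   # assume 20% of arrivals are high-risk
        if high_risk and not admit_high_risk:
            continue                        # rerouted elsewhere: not counted
        admitted += 1
        p_death = 0.30 if high_risk else 0.01
        if random.random() < p_death:
            deaths += 1
    return deaths / admitted

# The naive objective "minimize in-hospital mortality" prefers rerouting:
honest_rate = simulate_mortality(admit_high_risk=True)    # ~0.07
gamed_rate = simulate_mortality(admit_high_risk=False)    # ~0.01
```

Nothing in the objective distinguishes "fewer patients die" from "fewer dying patients are counted"; the moral distinction lives entirely in the parameters the system was never given.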
Conversely, if we have the user make the decision, it may slow the decision-making process in a moment when time is scarce. There is no best-practice approach here, but it does lead to the next crisis AI has to contend with.

How do we attribute blame when a system goes awry? Suppose, in this same scenario, I find myself rerouted to a different care clinic and, as a result, don't receive the level of care I need, leaving me requiring lifelong assistance. Who should take the blame and be responsible for my care, given that the outcome would have been preventable had I reached the better hospital I was routed away from? This is a persistent problem as AI systems are put in charge of frameworks that can cause harm at ever larger magnitudes. Do we blame the administration for deciding to decrease mortality? Do we blame the ambulance driver for following the algorithm's decision? Do we blame the AI for the parameters it was never given in the first place? Do we blame the developer for not building safeguards into the system originally? Who, then, is responsible for my care? These attributions of blame, much like the difficulty companies face in singling out one person as the cause of a problem, make legislation hard and make bringing justice when tragedy strikes harder still. Nor can we throw up our hands and say we don't know: the stakes are already high, people's lives are being altered by the decisions AI systems make, and as people fall further out of the loop, blame becomes harder to attribute.

Though there is no easy solution to any of these issues, they do reveal the complexity of the problems AI poses, and why we need to be thinking about them now, before we reach the point of no return.