All Actions Have Consequences

AI developers often claim no responsibility when their algorithms produce results that reinforce problematic patterns in society. A popular defense is that their algorithms merely predict and respond to data from the “real world,” and that their results simply reflect the state of the world as it is. This defense fails to recognize that entrenching society’s existing biases and inequalities within an artificially intelligent system that acts with agency is not neutral. Computation and AI-mediated decision-making carry an air of objectivity. As Nick Diakopoulos puts it: “It can be easy to succumb to the fallacy that, because computer algorithms are systematic, they must somehow be more ‘objective.’ But it is in fact such systematic biases that are the most insidious since they often go unnoticed and unquestioned.”

This is me yelling at people who create algorithms and then pretend those algorithms only “objectively reflect” the reality that exists around them.

Mark MacCarthy, in his article “The Ethical Character of Algorithms—and What It Means for Fairness, the Character of Decision-Making, and the Future of News,” states my question clearly: “Are these mathematical formulas expressed in computer programs value-free tools that can give us an accurate picture of social reality upon which to base our decisions? Or are they intrinsically ethical in character, unavoidably embodying political and normative considerations?” Put simply: do algorithms have values, or are they objective? The answer, I think, is obvious: they have values.

There’s a prevalent myth that because no human actor is present when a decision-making algorithm executes its operations, the outcome must be free of human judgment and bias. The popular mind is slowly making room for the existence of bias in artificial intelligence, but I think the long history of this fallacy of computational objectivity creates too much sociotechnical blindness for the full impact of the apparently “objective” program to be understood. Even when people understand and think critically about the human actors and values injected into algorithms, there is still something authoritative and definitive about an algorithm’s systematic nature. That same sociotechnical blindness also creates an environment in which blame is hard to assign to the companies and people responsible for these algorithms.