Do Algorithms Have Politics?

Ethical issues in AI center on complicated problems such as data usage, privacy, and human agency. Thought leaders and professionals across disciplines agree on the need for some form of universal regulation and for intentionality in the design of AI systems and technologies. Throughout the readings and case studies, specific cases that threaten human agency and human rights highlight key issues we face in developing ethical practices for AI design and implementation.

Predictive Algorithms 

Professor MacCarthy’s thought-provoking article looks at the implications of recidivism scores, which estimate the probability that a prisoner will reoffend once released. This form of decision-making rests on a predicted outcome, a practice that can be challenged as unethical. “The use of recidivism scores replaces the question of whether people deserve a lengthy sentence with the separate question of whether it is safe to release them. In using a risk score, we have changed our notion of justice in sentencing” (MacCarthy). He further illustrates that political stance has a direct influence on how the algorithm is implemented: the algorithm must be programmed to take a stance. In this case, the question is: what should the job of the algorithm be?

“Those who believe in improving outcomes for disadvantaged groups want to use recidivism algorithms that equalize errors between African-Americans and whites. Those who want to treat people similarly regardless of their group membership want to use recidivism algorithms that accurately capture the real risk of recidivism. When recidivism rates differ, it is not possible for the same recidivism tool to achieve both ethical goals” (MacCarthy).
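MacCarthy’s impossibility point can be made concrete with a small simulation. The sketch below is a hypothetical illustration rather than MacCarthy’s own analysis: the Beta-distributed risk scores, the 0.5 “high risk” threshold, and the two groups’ base rates are all assumptions chosen for the demonstration. It constructs a score that is perfectly calibrated for both groups, applies the same threshold to each, and shows that error rates still come out unequal when base rates differ:

```python
import random

random.seed(0)

def error_rates(alpha, beta, threshold=0.5, n=200_000):
    """Simulate a group whose risk scores follow a Beta(alpha, beta)
    distribution. Scores are perfectly calibrated by construction:
    each person reoffends with probability equal to their own score.
    Returns (base_rate, false_positive_rate, false_negative_rate)."""
    fp = fn = pos = neg = 0
    for _ in range(n):
        score = random.betavariate(alpha, beta)  # this person's risk score
        reoffends = random.random() < score      # calibrated outcome
        flagged = score > threshold              # labeled "high risk"
        if reoffends:
            pos += 1
            fn += not flagged                    # risky person missed
        else:
            neg += 1
            fp += flagged                        # safe person flagged
    return pos / n, fp / neg, fn / pos

for name, a, b in [("Group A (lower base rate)", 2, 3),
                   ("Group B (higher base rate)", 3, 2)]:
    base, fpr, fnr = error_rates(a, b)
    print(f"{name}: base rate {base:.2f}, FPR {fpr:.2f}, FNR {fnr:.2f}")
```

Run as written, the higher-base-rate group draws noticeably more false “high risk” flags even though the score treats every individual identically. That is the trade-off MacCarthy describes: a single tool cannot both capture real risk accurately and equalize errors across groups.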

This raises the question of what role algorithms should have in our society. Should they be given the task of predicting outcomes in the judicial system? Is that a fair means of judgment? Should the same tactics used in war (tracking, sensors, and so on) be incorporated into daily life? Who benefits, and at what cost? From MacCarthy’s article, it can be concluded that algorithms do have political consequences and should be treated accordingly in order to protect human rights and agency.

Experts Look to the Future of AI

Barry Chudakov, founder and principal of Sertain Research, commented, “My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly” (Anderson et al.).

The drive for instant response and user engagement has brought a significant shift in the way we consume news and content. Because most people now receive their news from social media, responsibility falls into the hands of dominant social media companies such as Facebook, a kind of responsibility and power that was not possible before social media. The content we receive is designed (by algorithm) to engage us, not to give us the most recent or relevant information on news and public issues. Seeing only what one wants to see, or what agrees with one’s political views, has consequences at the collective level, including changes in how news will be made in the future, disinformation campaigns, hate speech, and false news and misleading ads (MacCarthy).
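To make that design goal concrete, here is a minimal sketch contrasting a feed sorted by recency with one sorted by a model’s predicted engagement. The posts, the scores, and the single-number ranking are hypothetical simplifications; real feed-ranking systems are proprietary and far more complex:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    hours_old: float
    predicted_engagement: float  # model's estimated click/reaction rate

# Hypothetical posts with invented scores, purely for illustration.
feed = [
    Post("City council passes budget", hours_old=1.0, predicted_engagement=0.02),
    Post("Outrage-bait celebrity rumor", hours_old=30.0, predicted_engagement=0.45),
    Post("Local election coverage", hours_old=2.0, predicted_engagement=0.05),
]

# A recency-first feed surfaces the newest items...
by_recency = sorted(feed, key=lambda p: p.hours_old)

# ...while an engagement-optimized feed surfaces whatever the model
# predicts users will react to, regardless of timeliness or relevance.
by_engagement = sorted(feed, key=lambda p: p.predicted_engagement, reverse=True)

print("By recency:   ", [p.title for p in by_recency])
print("By engagement:", [p.title for p in by_engagement])
```

Under the engagement objective, the day-old rumor outranks today’s civic news, which is precisely the dynamic described above.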

Batya Friedman, a human-computer interaction professor at the University of Washington’s Information School, wrote, “Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life or even knowledge that an enemy’s life has been taken” (Anderson et al.).

Another significant issue is the distance involved in using autonomous weapons, drones, or machines in warfare. Being removed from the effects of such killing creates a lack of empathy and of visible consequence, producing an environment where killing is nameless and carries less direct accountability. Programming a machine to take human life sits at the extreme end of the spectrum, but the distancing that comes from tasking algorithms with what humans once did appears at the individual level as well. Isolation, the inability to communicate face to face, and a growing epidemic of loneliness are further signs of this loss of empathy, resulting from the ways we interact with technology rather than with other humans.

Although some predictions about the future of AI are poorly informed (they characterize AI as capable of thinking for itself rather than as software programmed by humans), beneath the blanket claims that AI will cure cancer, eliminate 90 percent of current jobs, and the like lies the question of dependence. We have already seen a drastic change in human dependence on technology, especially among younger generations. As we continue to strive for convenience and instant gratification, we sacrifice independence. For this reason, author Kostas Alexandridis predicts that in the future, “there is also a real likelihood that there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control” (Anderson et al.).

In designing AI technologies moving forward, it is important not only to keep ethics and human rights at the center of design but also to inform the public about how this software works, so that we can form educated opinions and contribute to discourse as well as to future designs. To avoid the digital ‘haves’ and ‘have-nots’ scenario, we must be more concerned with who should decide what regulations need to be in place and with how to ensure that humans remain independent and informed. If the companies using the latest technology and data (such as IBM) are not willing to be straightforward about how and what they are using, it will be difficult to regulate such practices to protect individual privacy. Many companies (IBM included) hide behind the excuse of ‘intellectual property protection’ to keep information about where and how they access data out of public view (Solon and Murphy), a clear indication that the practices of large tech companies should be the focus when enforcing ethical policies.


References

Anderson, Janna, et al. Artificial Intelligence and the Future of Humans. Pew Research Center, 10 Dec. 2018, http://www.pewinternet.org/2018/12/10/artificial-intelligence-and-the-future-of-humans/.
MacCarthy, Mark. “The Ethical Character of Algorithms—and What It Means for Fairness, the Character of Decision-Making, and the Future of News.” The Ethical Machine, 15 Mar. 2019, https://ai.shorensteincenter.org/ideas/2019/1/14/the-ethical-character-of-algorithmsand-what-it-means-for-fairness-the-character-of-decision-making-and-the-future-of-news-yak6m.
Solon, Olivia, and Joe Murphy. “Facial Recognition’s ‘Dirty Little Secret’: Social Media Photos Used without Consent.” NBC News, https://www.nbcnews.com/tech/internet/facial-recognition-s-dirty-little-secret-millions-online-photos-scraped-n981921. Accessed 20 Mar. 2019.