Over the years, we have all watched Artificial Intelligence work its way into so many aspects of our lives. Most of the time we do not even notice that some form of AI is being used for a particular product or circumstance. But when we do notice, we should also recognize the countless instances in which the biases and prejudices that exist within AI are very much present. I think because AI is intangible, we assume that, as an “electronic” being, it has no connection to the societal issues that exist in our societies today. But AI has not coded itself; it has not appeared out of nowhere. As we humans “manufacture,” code, and establish these systems in our lives, any bias or prejudice embedded in our own minds and societies gets encoded along with them, unfortunately or perhaps fortunately, since knowing what is wrong lets us work toward fixing it. Beyond bias, relying so heavily on manufactured intelligence carries ethical implications and issues of its own.
Data collection is a huge part of our electronic and digital presence, yet most of the time we are not even aware it is taking place. We are rarely sure how and when data is being collected, or what is being done with it. But if there is one thing I have realized, it is that data is collected constantly. Some of our readings treated these as two separate ethical issues governing AI, but I feel they are closely related, even interchangeable: the autonomy and the power we hand over to our AIs. The automation behind AI causes “individuals [to] experience a loss of control over their lives” (Anderson & Rainie, 2018). “When we adopt AI and its smart agency, we willingly cede some of our decision-making power to technological artefacts” (Floridi & Cowls, 2019, 7). Partially, this is because the de-black-boxing of AI is still very much in the box. A question arises here: will we ever truly learn what lies behind the AI we use on a daily basis? Will these companies ever truly reveal how their products work and what they really do with all this information and collected data? Honestly, probably not, since doing so would leave them weaker against competitors. That may change only if more people start noticing and demanding control and power over their use of this type of technology. As Nemitz also explains, large tech corporations have taken over every aspect of our lives, whether or not we realize it or sign up for it. “The pervasiveness of these corporations is thus a reality not only in technical terms, but also
with regard to resource attribution and societal matters” (Nemitz, 2018, 3). These companies and brands have gathered “in their hands” so much information and data that they can effectively control many aspects of human life, especially through the technological, economic, and political power this digital dominance grants them. Since we now rely so heavily on technology and on a digitized framework, most aspects of human life are also mediated by technology. So, in a way, whoever is furthest “ahead of the game” in the field is also the one who holds the power, the information, the data. Everyone else has largely lost the ability to choose when, how, and where they share information. It is one way or the other: if you want any sort of digital presence, to talk on the phone, use your credit card, pay for something, or look something up, pretty much everything you do is tracked, collected, and assembled into a bigger overall ‘picture.’
Another ethical issue raised by AI is, of course, that all this information and data can be used destructively and with malicious intent toward others. Beyond “autonomous military applications and the use of weaponized information” (Anderson & Rainie, 2018), we can also speak of information collected with the aim of capturing people, such as facial recognition. The problem here is: who is using this technology, and for what reasons? Of course, we must again consider the biases that go into this kind of surveillance. Racist views and opinions clearly influence whom this type of technology is aimed at and who its primary targets will be. Floridi et al. also explain this in terms of how “developers may train predictive policing software on policing data that contains deeply ingrained prejudices. When discrimination affects arrest rates, it becomes embedded
in prosecution data. Such biases may cause discriminatory decisions (e.g., warnings or arrests) that feed back into the increasingly biased datasets, thereby completing a vicious cycle” (Floridi et al., 2020, 1788).
How do we apply laws, regulations, and safety measures to something so widely used?
We have seen how hard it has been to manage data-privacy practices and laws from one country to another; how can something so universal become specific enough to protect people?
References
Janna Anderson, Lee Rainie, and Alex Luchsinger, “Artificial Intelligence and the Future of Humans,” Pew Research Center: Internet and Technology, December 10, 2018.
Karen Hao, “In 2020, Let’s Stop AI Ethics-Washing and Actually Do Something,” MIT Technology Review, December 27, 2019.
Karen Hao, “Establishing an AI Code of Ethics Will Be Harder Than People Think,” MIT Technology Review, October 21, 2018.
Karen Hao, “This Is How AI Bias Really Happens — and Why It’s so Hard to Fix,” MIT Technology Review, February 4, 2019.
Luciano Floridi and Josh Cowls, “A Unified Framework of Five Principles for AI in Society,” Harvard Data Science Review 1, no. 1 (2019).
Luciano Floridi, Josh Cowls, et al., “How to Design AI for Social Good: Seven Essential Factors,” Science and Engineering Ethics 26, no. 3 (2020): 1771–1796.
Paul Nemitz, “Constitutional Democracy and Technology in the Age of Artificial Intelligence,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2133 (November 28, 2018): 20180089.