The ethics and socio-technical implications of AI, and of the broader technology we use, have become a hot topic. Yet even though it is a "rising phenomenon," I'd like to question where and when we actually come across it. Documentaries such as Coded Bias and The Social Dilemma show who the few people are who are aware of the implications AI can have on our lives and livelihoods, as well as on our rights as we have known them thus far. This course is an example of that: people in academia; people who work in the field, such as coders and data scientists; "tech activists"; and politicians specializing in privacy, tech, internet issues, and legislation. But are the big companies concerned? Are governments concerned? Who does this system benefit?
A few weeks ago I was at the doctor's, and when I was asked what I study, he turned around and said, "Aren't you scared?" My first thought was what a bizarre question to get from a doctor. Scared of what? Robots taking over? Not necessarily. Machines taking over? We are still far from that, and humans still control what we create. What is scary, however, is the way the technologies we use today are embedded with human biases: racism, sexism, human rights issues, and other societal problems. Because we don't see what is behind our technology (the "black box" that governs how the technologies we use daily operate, who operates them, and how), people assume there is nothing to worry about. I think up until now it has been hard for people to understand why there are biases in algorithms or issues within the system. Questions that typically pop up include: Don't more women work for tech companies now? Don't people of color work in the field nowadays? Don't we have a more diverse workforce in the tech industry? Yes, sure, is the simple answer (not as many as there should be, but let's start with baby steps for the sake of this description). The problem is that that isn't the root of the issue.

I truly appreciated how well Coded Bias explains why and how these biases exist from the ground up. Starting with the Dartmouth conference in the summer of 1956, we see who the creators of this initiative, of AI, of machine learning, were: white men. So the whole process, the whole system, started being fed with patterns that depict just that. Not only in terms of what photos we feed the "machine" for facial recognition, but in the concepts and the theoretical and social representations drawn from a small pool of data. The data you feed into the machine is what it will learn from: if most photos are of red flowers, for example, the system has an easier time recognizing a typical red flower than, let's say, a white purple-dotted orchid.
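The red-flower example can be sketched as a toy model whose "confidence" simply mirrors how often it has seen each class during training. The dataset and numbers here are invented purely for illustration; this is a minimal sketch of the idea, not a real recognition system:

```python
from collections import Counter

# Hypothetical, deliberately skewed training set: 95 red flowers, 5 white orchids.
training_labels = ["red flower"] * 95 + ["white orchid"] * 5
counts = Counter(training_labels)

def familiarity(label):
    """Fraction of the training data this class makes up (0.0 to 1.0)."""
    return counts[label] / len(training_labels)

print(familiarity("red flower"))    # 0.95: seen constantly, easy to recognize
print(familiarity("white orchid"))  # 0.05: rarely seen, easily misread
```

The skew is not in the learning rule, which treats every example identically; it is in what the pool of data does and does not contain, which is the essay's point about who assembled that pool in the first place.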
So the discourse of AI began, and it picked up on that narrative alone. A machine doesn't have a soul or its own thinking; it learns from what humans feed into it. It does what humans tell it to do. So who is scarier: humans or the machine?
Yes, it's mostly based on code. Yes, there is more diversity in the tech industry and the field of AI. But who writes the code? Or better yet, who wrote all the code that today's code is based on? Last class, we talked about people in academia, such as ourselves, who question the technology we study, use, and work on, even though it may be our passion and our interest. That is what CCT and other interdisciplinary, "liberal arts" courses and programs are for: studying what interests you, but also questioning what that is. Don't just look at it from one point of view. Nothing about technology is simple or lined up in one straight path. Maybe it's the anthropologist in me, but it's hard to grasp that so many people in the overall field are not concerned about the implications of their actions. There is code, so we hide behind it. For the most part, computer scientists, software engineers, and data scientists don't come out of undergrad, or out of an extremely software-engineering-concentrated master's, having discussed the big issues of the system and of the technologies we use today. Yes, it is an amazing advancement of our time that has exponentially improved our lives in more ways than we can think of off the top of our heads, but are the socio-political-economic repercussions and biases greater than we have made them out to be?
In the weeks on Cloud Computing and Big Data, we saw the consequences of monopolizing an industry on which regular people and companies alike rely for the safekeeping and management of their documents, files, and much more. We also saw the naivety of people who assume that just because they don't have their location settings on, or don't use social media, they have no technological footprint and no way for governments to monitor them. Unless you are 100% detached from everything technological, which honestly can be pretty hard these days (even if you as an individual are, some product or service you use might not be), one way or another, data or information about you is out there. Big Data isn't just random numbers and information collected from thousands of sources. Those numbers and figures didn't magically appear out of nowhere; they are fed into the system by our own usage of everything we do. At the end of the day, we are the machines. Our societies are the technologies. They didn't create themselves out of nowhere. Being skeptical and able to question what is really going on behind the scenes is what will help us conceptualize and overcome the many issues and implications that exist and are constantly unfolding, as depicted in Coded Bias.
Coded Bias, documentary film, dir. Shalini Kantayya (2020). Available on Netflix and free to view on the PBS Independent Lens site.
Rob Kitchin, The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. London; Thousand Oaks, CA: SAGE Publications, 2014. Excerpts.
Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016).
Boudewijn de Bruin and Luciano Floridi, “The Ethics of Cloud Computing,” Science and Engineering Ethics 23, no. 1 (February 1, 2017): 21–39.
Geoff Dougherty, Pattern Recognition and Classification: An Introduction (New York: Springer, 2012). Excerpt: Chaps. 1-2.