Over the last few years, prominent figures and big companies in Silicon Valley have engaged in public debate over the benefits and risks of artificial intelligence and machine learning, and their possible consequences for humanity. Some embrace technological advancement openly, advocating for an unrestricted environment and arguing that limits would stand in the way of progress and innovation. Others offer warnings reminiscent of a sci-fi dystopia film; they argue that artificial intelligence could be an existential threat to humanity if -or more likely when- machines become smarter than humans.
Defenders of the latter view insist that, as unrealistic as it sounds, this is a very possible future. Unsettling as it is, they focus on a hypothetical threat, forcing us to rethink and assess how we are managing the process of machine learning. However, rather than looking at how to fix a dystopian future before it happens, there are other questions that arise along the way:
Machines are learning how humans think and behave based on sets of data that humans ‘feed’ them. What, then, are we feeding these machines? If machines are learning to imitate human cognitive processes, what kinds of human behaviors, social constructions, and biases are they picking up and replicating by design?
There is a long history of cases in which technology has been designed with unnecessary, deterministic biases built into it: the famous case of the low bridges in New York preventing minorities from using public transportation to go to the beach; the long-perpetuated ‘flesh’ color of crayons, band-aids, paint, and, more recently, ballerina shoes; and the famous case of Kodak’s Shirley Cards, used by photo labs to calibrate skin tones, shadows, and light when printing color film, which made it impossible to print the facial details and expressions of people with darker skin, among others.
It should come as no surprise that this pattern of embedding biases in technology is being replicated in artificial intelligence and machine learning.
Systematic biases (racist, sexist, gendered, class-oriented, and along other axes of discrimination) are embedded in the data that humans have collected, and those patterns and principles are being picked up and replicated by machines. Therefore, instead of erasing divisions through objectivity in decision making, this process is exacerbating inequality in the workplace, the legal and judicial systems, and other spaces of public life in which minorities interact, making it even more difficult to escape.
The data fed to the machines is diverse: images, text, audio, etc. The decision of what data is fed to the machine, and how to categorize it, is entirely human. From this, the system builds a model of the world that it accepts as the unique reality. That is, only what is represented in the data has meaning attached to it, leaving no room for other ways of ‘being’ in the world. For example, a candidate-screening system trained on data that overwhelmingly categorizes white men as successful candidates for a job position will struggle to pick up others who do not fit those categories or labels.
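The mechanics behind this can be illustrated with a minimal sketch. The dataset below is entirely made up for illustration: a dominant group A is mostly labeled ‘hire’ while a rare group B is mostly labeled ‘reject’. A naive model that only optimizes overall accuracy converges to the majority pattern, which is really group A's pattern, not group B's:

```python
from collections import Counter

# Hypothetical toy dataset: each record is (group, label).
# Group A dominates the data; group B is barely represented.
training_data = (
    [("A", "hire")] * 90 + [("A", "reject")] * 5 +
    [("B", "hire")] * 1 + [("B", "reject")] * 4
)

def majority_label(records):
    """Return the most common label -- the rule a naive,
    accuracy-maximizing model converges to."""
    return Counter(label for _, label in records).most_common(1)[0][0]

# The model 'learns' the overall majority pattern...
overall = majority_label(training_data)
print(overall)  # hire

# ...but group B's own pattern is the opposite, and it is
# simply drowned out by the majority group.
group_b_only = [r for r in training_data if r[0] == "B"]
print(majority_label(group_b_only))  # reject
```

The numbers and labels here are invented, but the mechanism is real: a model evaluated only on aggregate accuracy can look successful while systematically misclassifying an underrepresented group.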
(Unnecessarily) Gendering technology
Two aspects need to be taken into account to get a broader perspective: 1) the lack of transparency from companies about how these systems make data-driven decisions, due to intellectual property and market competition; and 2) the gendered, somewhat contradictory, representation of these technologies both to users and in pop-culture media. Let’s start by addressing the latter.
For decades, visually mediated spaces of representation, such as sci-fi movies and TV, have delved into topics of technology and sentient machines. Irit Sternberg states that these representations tend to ‘gender’ artificial intelligence as female and rebellious: “It goes back to the mother of all Sci-Fi, “Metropolis” (1927), which heavily influenced the futuristic aesthetics and concepts of innovative films that came decades later. In two relatively new films, “Her” (2013) and “Ex-Machina” (2014), as well as in the TV-series “Westworld”, feminism and AI are intertwined.” (2018, October 8).
These depictions present a gender power struggle between AI and humans, which is sometimes problematic and at other times empowering: “In all three cases, the seductive power of a female body (or voice, which still is an embodiment to a certain extent) plays a pivotal role and leads to either death or heartbreak”. However, the level of agency represented in a female-gendered AI offers the imagined possibility that, through technology, systematic patriarchal oppression can be challenged and surpassed by the oppressed.
AIs are marketed with feminine identities, names, and voices. Examples such as Alexa, Siri, and Cortana demonstrate this; even though they allow male voice options, the fact that the default setting is female speaks loudly. Another example is the female humanoid robot Sophia, developed by Hanson Robotics in Hong Kong. Sophia is clearly built as a representation of a slender white woman with no hair (enhancing her humanoid appearance) and, inexplicably, with heavy makeup on her lips, eyes, and eyebrows.
Creator David Hanson says that Sophia uses artificial intelligence, visual data processing, and facial and voice recognition. She can replicate up to 50 human gestures and facial expressions and can hold a simple conversation about predetermined topics, but she is designed to get smarter over time, improving her answers and social skills. Sophia is the first robot to receive citizenship of any country (Saudi Arabia), and she was also named the United Nations Development Programme’s first-ever Innovation Champion, making her the first non-human to be given any United Nations title.
These facts are mind-boggling. As Sternberg asks, “why is it that a feminine humanoid is accepted as a citizen in a country that would not let women get out of the house without a guardian and a hijab?” (2018, October 8). What reaction do engineers and builders assume the female presence and identification generates during the human-machine interaction?
Sternberg says that fictional and real decisions to choose feminine characters replicate gender relations and social constructs that already exist in our society: “does giving a personal assistant feminine identity provide the user (male or female) with a sense of control and personal satisfaction, originating in the capability to boss her around?” (2018, October 8). As a follow-up question: is that what we want the machines to learn and replicate?
If machines are going to replicate human behavior, what kind of human do we need them to be? This threat is more present, and already underway. As Kate Crawford wrote in The New York Times, the existential threat of a world overtaken by machines rebelling against humans might be frightening to the white male elite that dominates Silicon Valley, “but for those who already face marginalization or bias, the threats are here” (2016, June 26).
- Crawford, K. (2016, June 26). A.I.’s White Guy Problem. (Sunday Review Desk) (OPINION). The New York Times. https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html
- Sternberg, I (2018, October 8). Female AI: The Intersection Between Gender and Contemporary Artificial Intelligence. Hackernoon. https://hackernoon.com/female-ai-the-intersection-between-gender-and-contemporary-artificial-intelligence-6e098d10ea77
- Weller, C. (2017, October 27). Meet the first-ever robot citizen — a humanoid named Sophia that once said it would ‘destroy humans’. Business Insider.