Week 9

Practice Makes Perfect? AI Bias and ML Fairness

While the words “Artificial Intelligence” may conjure up alarmist imagery of a dystopian future (as seen in Hollywood movies like Blade Runner or shows like Westworld), perhaps the real concerns are two-pronged: 1) AI bias and machine learning fairness, and 2) the capacity of technology to mislead the public. With the prevalence of surveillance applications like Amazon Rekognition, it is now easier than ever for law enforcement and businesses to track and identify individuals. If Alexa and the Echo are Amazon’s ears of surveillance, Rekognition is now the eyes, but can we always trust what they are seeing?

Studies have shown that AI is less able to discern and identify people of color, especially women, marginalizing them and potentially putting them in harm’s way as a result of misidentification. The ACLU’s perspective on Rekognition is that “the rights of immigrants, communities of color, protesters and others will be put at risk if Amazon provides this powerful surveillance system to government agencies.” This technology can be used to target other minority communities as well, due to existing societal or police bias. Human bias can also find its way into the deep learning process, since much of machine learning fairness depends on the paradigms of training, which is done by humans and not, as many believe, conjured by magic.

With the advent of new media technology, deepfakes are also a rising ethical issue that may have political impact as well. An early example is the viral video “Golden Eagle Snatches Kid,” a humorous, harmless fake. The stakes escalate, however, when such videos depict people of political significance espousing polarizing views. A lot of the “fake news” that floats around on Facebook, Twitter, and other social media platforms has now evolved from Photoshop to video, making it more believable because the viewer has seen it with their own eyes. This paves the way for ethical and political problems in elections, which may have consequences for entire nations and snowball into global impact.

So how do we work towards preventing these ethical violations? Practice makes perfect, and machine learning fairness will only further develop with the faces the algorithms practice on. The more they practice, the better they will learn to recognize, which opens up another Pandora’s box…what are the ethical implications of where they get the data?!

References:

https://www.wsj.com/articles/deepfake-videos-are-ruining-lives-is-democracy-next-1539595787

https://www.nbcnews.com/tech/internet/facial-recognition-s-dirty-little-secret-millions-online-photos-scraped-n981921 

https://www.perpetuallineup.org/findings/racial-bias

All Actions have Consequences

AI developers often claim no responsibility when the results of their algorithms reinforce something that seems a bit problematic. A popular defense for this exoneration of blame is that their algorithms predict and respond to data from the “real world” and that their results simply reflect the state of the world. This defense fails to recognize that entrenching the biases and inequalities that exist in society within an artificially intelligent system with agency is not neutral. Computation and artificially intelligent decision-making carry an air of objectivity. Nick Diakopoulos says, “It can be easy to succumb to the fallacy that, because computer algorithms are systematic, they must somehow be more ‘objective.’ But it is in fact such systematic biases that are the most insidious since they often go unnoticed and unquestioned.”

This is me yelling at people who create algorithms that pretend only to “reflect objectively” the reality that exists around them. 

Mark MacCarthy, in his article The Ethical Character of Algorithms—and What It Means for Fairness, the Character of Decision-Making, and the Future of News, states my questions clearly: “Are these mathematical formulas expressed in computer programs value-free tools that can give us an accurate picture of social reality upon which to base our decisions? Or are they intrinsically ethical in character, unavoidably embodying political and normative considerations?” Put simply: do algorithms have values, or are they objective? The answer, I think obviously, is that they have values.

There’s a prevalent myth that because algorithms with agency to make decisions do not have human actors present during the actionable phase of their operations, they must be free of human judgment and bias. The popular mind is slowly changing to allow room for the existence of bias in artificial intelligence, but I think the historical existence of this fallacy of computational objectivity creates too much sociotechnical blindness for all impacts of the apparently “objective program” to be understood. Even when people understand and think critically about the human actors and values that are injected into algorithms, there is still something authoritative and definitive about the systematic nature of an algorithm. Additionally, the sociotechnical blindness surrounding these algorithms creates an environment in which blame is hard to distribute to the companies and people responsible for the algorithms.

The Ethics of AI: Tell Me What I Want, What I Really, Really Want

Think about the impact that AI has on a modern person’s life, both in day-to-day activities and in the grand scheme of things. AI recommends songs to listen to; videos, shows, and movies to watch; news to read; places to eat; people to meet; people to date; and even places to live and work. All of these are recommended based on our past behaviors and usage of the internet, which makes for tremendous convenience, but also an overwhelming amount of homogeneity and close-mindedness. It’s easy to feed into the patterns that determined those recommendations in the first place, until our lives get so “personalized” that we’re extremely put off by anything or anyone that runs counter to our preferences.

We absent-mindedly give these machines enormous power over our cognition, emotion, and socialization without knowing much about how or why these algorithms are functioning and using our data the way they are (Hewlett Packard & The Atlantic, 2018). This creates dangerous precedents for our partiality to AI. As one speaker in the Hewlett Packard video says, “At its best, [AI] is going to solve all of our problems, and at its worst, it’s going to be the thing that ends humanity” (2018). We tend to gravitate toward the former, despite any hiccups in development or ethical implications that arise from these kinds of technology and their algorithms.

News (and now social media) is the primary mode of perceiving the world outside of our immediate surroundings. A flippant approach to AI implementation in the news can have serious consequences for how people construct their understanding of the world. Georgetown professor Mark MacCarthy (2019) recently wrote, “When platforms decide which news stories to present through search results and news feeds, they do not engage in the same exercise of editorial judgment. Instead, they replace judgment with algorithmic predictions.” These types of personalization algorithms create echo chambers and filter bubbles around our perceptions, which increase polarization and incentivize clickbait journalism (MacCarthy, 2019). By exposing ourselves only to the types of news, people, places, and ideologies that support our own, and by using online anonymity to loudly decry any opposition to our views on the web, we are doing a disservice to the ideals of a functioning democracy, where an informed citizenry can view multiple sides of a story and participate in civil discourse about the merits of each.

Both the Hewlett Packard video from The Atlantic (2018) and the article by Dr. MacCarthy (2019) touch on the issues that arise from implementing AI into our justice system as well, such as using algorithms to predict recidivism rates among convicts (a prelude to Minority Report, it would seem). There have been racial inequalities in the predictions made by these algorithms, which can have a devastating effect on a national criminal justice system that is already flawed in many ways. While technology itself does not contain bias, the humans who design and program it are inherently biased, and that bias (conscious or otherwise) tends to show through in their creations. This raises the question: why are we so eager to outsource life-threatening decisions (such as military strikes, incarceration, or even driving a car) to machine algorithms that have proven to share the human biases of their creators? One possible answer is that technology is an easy scapegoat when things go awry. It’s easier to offload the guilt of a potentially life-threatening mistake if it can be blamed on the technology that carried out the action.

I believe we shouldn’t rush these AI technologies, because they certainly have the potential to make our lives (and the world) a lot better. But we need to be cognizant of these ethical issues that accompany them, and only time can reveal the full impact and scope of those issues, along with a diverse approach to solving the problems. We have undergone a complete societal revolution since the dawn of the Digital Age, which was only about 20 years ago. To put that in perspective, over 2000 years elapsed between the earliest writing systems of the Sumerians and Egyptians and the development of the Greek alphabet that we still know today (Wolf, 2008). These revolutions take time and deliberation to be done correctly; let’s be mindful of that. Next time a computer-generated recommendation pops up for you (probably within the next few minutes), consider what’s going on behind the screen before you proceed.


Works Cited

Hewlett Packard Enterprises, & The Atlantic. (2018). Moral Code: The Ethics of AI. Retrieved from https://www.youtube.com/watch?time_continue=481&v=GboOXAjGevA

MacCarthy, M. (2019, March 15). The Ethical Character of Algorithms—and What It Means for Fairness, the Character of Decision-Making, and the Future of News. Retrieved from https://ai.shorensteincenter.org/ideas/2019/1/14/the-ethical-character-of-algorithmsand-what-it-means-for-fairness-the-character-of-decision-making-and-the-future-of-news-yak6m

Wolf, M., & Stoodley, C. (2008). Proust and the Squid: The Story and Science of the Reading Brain (1st Harper Perennial ed.). New York: Harper Perennial.

Do Algorithms Have Politics?

Ethical issues in AI surround complicated problems such as data usage, privacy, and human agency. Thought leaders and professionals from all disciplines are clear about the need for some kind of universal regulation and for intentional choices during the design process for AI systems and technologies. Throughout the readings and case studies, specific cases threatening human agency and human rights highlight some key issues we face in developing ethical practices in AI design and implementation.

Predictive Algorithms 

Professor MacCarthy’s thought-provoking article looks at the implications of recidivism scores, which measure the probability that a prisoner will reoffend once released. This form of decision-making is based on a predicted outcome, which can be challenged as an unethical practice. “The use of recidivism scores replaces the question of whether people deserve a lengthy sentence with the separate question of whether it is safe to release them. In using a risk score, we have changed our notion of justice in sentencing” (MacCarthy). He further illustrates the point that political stance has a direct influence on how the algorithm will be implemented, in that the algorithm must be programmed to take a stance. In this case, the question is: what should the job of the algorithm be?

“Those who believe in improving outcomes for disadvantaged groups want to use recidivism algorithms that equalize errors between African-Americans and whites. Those who want to treat people similarly regardless of their group membership want to use recidivism algorithms that accurately capture the real risk of recidivism. When recidivism rates differ, it is not possible for the same recidivism tool to achieve both ethical goals” (MacCarthy).
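To make that tension concrete, here is a minimal numeric sketch (all figures are invented for illustration; they are not MacCarthy’s data or any real tool’s): if the same risk tool is equally “accurate” for two groups, in the sense of having the same true-positive rate and the same precision for both, but the groups’ underlying recidivism rates differ, then the share of non-reoffenders wrongly flagged as high risk cannot also be equal.

```python
# A sketch of the tension MacCarthy describes, with invented numbers.
def false_positive_rate(n, base_rate, tpr, ppv):
    """False-positive rate implied by a group's base rate and the tool's TPR and precision (PPV)."""
    reoffenders = n * base_rate
    non_reoffenders = n - reoffenders
    true_positives = reoffenders * tpr        # reoffenders correctly flagged as high risk
    total_flagged = true_positives / ppv      # flagged = true positives / precision
    false_positives = total_flagged - true_positives
    return false_positives / non_reoffenders

# The same tool (same TPR and PPV) applied to two groups with different base rates.
for name, base_rate in [("Group A", 0.5), ("Group B", 0.2)]:
    fpr = false_positive_rate(n=1000, base_rate=base_rate, tpr=0.6, ppv=0.7)
    print(f"{name}: base rate {base_rate:.0%}, false-positive rate {fpr:.1%}")

# Prints roughly 25.7% for Group A and 6.4% for Group B: equal per-group "accuracy",
# yet very unequal error burdens, which is exactly the impossibility at issue.
```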

This raises the question of what role algorithms should have in our society. Should they be given the task of predicting outcomes in the judicial system? Is that a fair means of judgment? Should the same tactics used in war (tracking, sensors, etc.) be incorporated into daily life? Who benefits, and at what cost? From MacCarthy’s article, it can be concluded that algorithms do have political consequences and should therefore be treated accordingly in order to protect human rights and agency.

Experts Look To the Future of AI

Barry Chudakov, founder and principal of Sertain Research, commented, “My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly” (Anderson et al.).

The issue of instant response and user engagement has brought a significant shift in the way we consume news and content. As most people now receive their news from social media, this places responsibility in the hands of the dominant social media companies such as Facebook – the kind of responsibility and power that was not possible before social media. The content we receive is designed (by algorithm) to engage us, not to give us the most recent or relevant information on news and public issues. Only seeing what one wants to see, or what agrees with one’s political views, has consequences at the collective level. Some of these consequences include how news will be made in the future, disinformation campaigns, hate speech, and false news and misleading ads (MacCarthy).

Batya Friedman, a human-computer interaction professor at the University of Washington’s Information School, wrote, “Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life or even knowledge that an enemy’s life has been taken” (Anderson et al.).

Another significant issue is the distance involved in using autonomous weapons, drones, or machines in warfare. Being a certain distance away from the effects of such killing creates a lack of empathy and of visible consequence, producing an environment where killing is nameless and carries less direct accountability. Changing the nature of taking human life by programming a machine to do it is at the extreme end of the spectrum, but the distancing involved in tasking algorithms to do what humans previously did can be seen on an individual level as well. The isolation, the inability to communicate face-to-face, and the growing epidemic of loneliness are other signs of this loss of empathy, resulting from the ways we interact with technology rather than with other humans.

Although some predictions concerning the future of AI are poorly informed (they characterize AI as capable of thinking for itself rather than as software programmed by humans), beneath the blanket claims that AI will cure cancer, eliminate 90% of current jobs entirely, and the like lies the question of dependence. We have already seen the drastic change in human dependence on technology, especially within the younger generations. As we continue to strive for convenience and instant gratification and growth, we sacrifice independence. Because of this, author Kostas Alexandridis predicts that in the future, “there is also a real likelihood that there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control” (Anderson et al.).

In designing AI technologies moving forward, not only is it important to keep ethics and human rights at the center of design, it is also important to inform the public about how this software works, so we have the ability to form educated opinions and contribute to discourse as well as to future designs. To avoid the digital ‘haves’ and ‘have-nots’ scenario, we must be more concerned with questions about who should decide what regulations need to be in place, and how to ensure humans remain independent and informed. If the companies using the latest technology and data (such as IBM) are not willing to be straightforward and clear about how and what they are using, it will be difficult to regulate such practices to protect individual privacy. Many companies (IBM included) hide behind the excuse of ‘intellectual property protection’ as a means of withholding information about where and how they are accessing data, which is a clear indication that the practices at large tech companies should be the focus when enforcing ethical policies.


References

Anderson, Janna, et al. Artificial Intelligence and the Future of Humans. 10 Dec. 2018, http://www.pewinternet.org/2018/12/10/artificial-intelligence-and-the-future-of-humans/.
MacCarthy, Mark. “The Ethical Character of Algorithms—and What It Means for Fairness, the Character of Decision-Making, and the Future of News.” The Ethical Machine, 15 Mar. 2019, https://ai.shorensteincenter.org/ideas/2019/1/14/the-ethical-character-of-algorithmsand-what-it-means-for-fairness-the-character-of-decision-making-and-the-future-of-news-yak6m.
Solon, Olivia, and Joe Murphy. “Facial Recognition’s ‘Dirty Little Secret’: Social Media Photos Used without Consent.” NBC News, https://www.nbcnews.com/tech/internet/facial-recognition-s-dirty-little-secret-millions-online-photos-scraped-n981921. Accessed 20 Mar. 2019.

You Can’t Design Emotional Intelligence, Yet.

My biggest issue with AI and its perceived takeover is the lack of discussion about the designed abilities and limitations of AI. There are many things that AI can do better and faster than most humans, but one thing that has yet to be designed for AI is emotional intelligence. Emotional intelligence, according to Forbes, has two critical abilities. First, it involves the ability to recognize, understand, and control our own emotions. Second, it involves the ability to recognize, understand, and influence others’ emotions (Forbes). Emotional intelligence and the ability to understand another person’s mood and history have not yet been designed into AI, and likely won’t be in the near future. This is what will keep many people in their managerial roles, but it may not save the data miners. Emotional intelligence is important to the way that humans think and act, but it is often left out of discussions about AI and its impending takeover.

Many human motivations, including the desire to conquer, are quests to satisfy an emotional need. In films and television, when AI-empowered beings take steps to conquer the earth, it is not done without emotion. In Westworld, the AI cast was driven by a sense of loss due to their seemingly never-ending deaths at the hands of visitors. In I, Robot, VIKI was motivated by concern for the world and concluded that humans should be controlled in order to preserve it. All of these stories play on the fear of being conquered, but do not cover the emotional drive of the conqueror. A wider understanding that AI cannot feel or be motivated to take action based on a feeling would likely assuage fears that Westworld or I, Robot could one day be a reality. Disseminating this knowledge, however, would be difficult on a large scale and would require investment from private and public organizations.

Since AI emotional intelligence is so far off, it would be useful to include discussions of emotional intelligence and motivations when the media reports on advancements in AI. Faster and more accurate recommendations on platforms like YouTube and Spotify elicit emotional satisfaction from users, but do designers consider it? If journalists pivoted from fear-mongering headlines to thoughtful discussions about how and where AI could be integrated into daily life for the better, more factual information about AI would be readily available. This might also lead to thoughtfulness about more than financial benefits in the development of AI, and some issues of privacy and data sharing might change.

“AI at Google: Our Principles”: the stated AI policy principles at Google.

Google AI Blog

https://hbr.org/2017/02/the-rise-of-ai-makes-emotional-intelligence-more-important

https://www.forbes.com/sites/falonfatemi/2018/05/30/why-eq-ai-is-a-recipe-for-success/#5bf03c1b1005

Is data mining, in principle, discriminatory?

We live in what is referred to as the information age, and we’ve seen rapid development of science and technology. Advancements in technology shape our future in powerful and largely unaccountable ways. Are these advancements inevitable, or can we control the technologies we get, anticipate their implications, prevent hazards, and share their benefits? Recently, much of the focus has been directed at artificial intelligence and machine learning. Machine learning algorithms are in our homes and in our hands all the time. We’re used to asking Siri questions and expecting answers within seconds, we’re used to ordering through Alexa, we expect recommendations based on our preferences, and the list goes on. But how is all this possible? How do these machine learning algorithms work? It seems that we only become concerned with these kinds of questions when there are problems, or when these devices don’t make the right choices for us.

Nowadays, we’ve seen problems with algorithms that operate in more sensitive domains, such as the criminal justice system and the field of medical testing. When machine learning algorithms started to be applied to humans instead of vectors representing images, studies showed that the algorithms were not always behaving “fairly.”

It turns out that training machine learning algorithms with the standard maximization objective, meaning maximizing prediction accuracy on the training data, sometimes resulted in algorithms that behaved in a way a human observer would deem unfair, often towards a particular minority. “Discussions of algorithmic fairness have increased in the last several years, even though the underlying issues of disparate impact on protected classes have been around for decades” (SIIA Releases Brief on Algorithmic Fairness). That is partly because more data is available to be used, especially with the growth of Internet usage. Every time we are connected to the internet, scrolling through social media, using the Google search bar, ordering online, or doing any other activity, consciously or not, we leave behind a digital footprint ready to be used by different programs that collect and store our data for different purposes, some ethical and some not.
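As a deliberately oversimplified sketch of how that plays out (the group sizes and label rates below are hypothetical), consider the crudest accuracy-maximizing “model”: always predict whichever label is most common in the training data. When the data is dominated by a majority group whose labels differ from a minority group’s, the minority group absorbs almost all of the errors.

```python
# Hypothetical training data: two groups with different sizes and label rates.
groups = {
    "majority": {"size": 900, "positive_rate": 0.8},
    "minority": {"size": 100, "positive_rate": 0.2},
}

# "Train" the simplest accuracy-maximizing model: always predict the most common label.
total = sum(g["size"] for g in groups.values())
total_positives = sum(g["size"] * g["positive_rate"] for g in groups.values())
prediction = 1 if total_positives >= total / 2 else 0   # here: predict 1 for everyone

# Accuracy of that single prediction, per group and overall.
for name, g in groups.items():
    accuracy = g["positive_rate"] if prediction == 1 else 1 - g["positive_rate"]
    print(f"{name}: accuracy {accuracy:.0%}")

overall = (total_positives if prediction == 1 else total - total_positives) / total
print(f"overall: accuracy {overall:.0%}")   # ~74% overall and 80% for the majority,
                                            # yet only 20% for the minority group.
```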

Programs with algorithmic calculations adjust themselves as they are exposed to new data and evolve not only from the original design of the program, but also from the weights developed through their exposure to earlier training data. Computational machines and analytical tools are being trained to leverage and recognize statistical patterns in data. Data mining is one way that computer scientists sort large sets of data, identify patterns, and predict future trends. When machine learning algorithms and the data mining process are used, they can lead to statistical discrimination. Carlos Castillo, in his presentation on algorithmic discrimination, gives some examples of how statistical discrimination can arise. For example: not hiring a highly qualified woman because women have a higher probability of taking parental leave (statistical discrimination), versus not hiring a highly qualified woman because she has said that she intends to have a child and take parental leave (non-statistical discrimination).

He offers another example in his presentation slides (Carlos Castillo presentation).

As the Big Data’s Disparate Impact study suggests, by definition, data mining will always be a form of statistical discrimination. The very point of data mining is to provide a rational basis upon which to distinguish between individuals and to reliably confer on the individual the qualities possessed by those who seem statistically similar. Based on this principle, it is important to take a closer look at and discuss some ways that statistical discrimination could be avoided. Data mining looks to locate statistical relationships in a data set. In particular, it automates the process of discovering useful patterns, revealing regularities upon which subsequent decision making can rely. The accumulated set of discovered relationships is commonly called a “model,” and these models can be employed to automate the process of classifying entities or activities of interest, estimating the value of unobserved variables, or predicting future outcomes. By exposing so-called “machine learning” algorithms to examples of the cases of interest (previously identified instances of fraud, spam, default, and poor health), the algorithm “learns” which related attributes or activities can serve as potential proxies for those qualities or outcomes of interest. The process of data mining towards solving a problem includes multiple steps: defining the target variable, labeling and collecting the training data, using feature selection, making decisions on the basis of the resulting model, and picking out proxy variables for protected classes. As Mark MacCarthy suggests in his study, there are two steps to defining statistical concepts of fairness: first, identify a statistical property of a classification scheme; second, the fairness notion at stake is defined as equalizing the performance of this statistical property with respect to a protected group.
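As a small illustration of that two-step recipe (a sketch with hypothetical decisions, not MacCarthy’s own example), one could take the selection rate, the share of people a classifier treats favorably, as the statistical property, and then compare it between a protected group and everyone else:

```python
# Step 1: pick a statistical property of the classifier's output; here, the selection
# rate (share of people who receive the favorable decision, coded as 1).
# Step 2: compare that property across the protected group and everyone else.
decisions = [  # hypothetical model decisions
    {"group": "protected", "decision": 1}, {"group": "protected", "decision": 0},
    {"group": "protected", "decision": 0}, {"group": "protected", "decision": 0},
    {"group": "other", "decision": 1}, {"group": "other", "decision": 1},
    {"group": "other", "decision": 1}, {"group": "other", "decision": 0},
]

def selection_rate(records, group):
    outcomes = [r["decision"] for r in records if r["group"] == group]
    return sum(outcomes) / len(outcomes)

protected_rate = selection_rate(decisions, "protected")   # 0.25
other_rate = selection_rate(decisions, "other")           # 0.75
ratio = protected_rate / other_rate                       # ~0.33

# The "four-fifths rule" used in US employment law treats a ratio below 0.8
# as a signal of possible disparate impact worth investigating.
print(f"selection rates: protected {protected_rate:.0%}, other {other_rate:.0%}, ratio {ratio:.2f}")
```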

I found an interesting video that explains the problems with algorithmic fairness in cases where algorithms are used to decide whether defendants awaiting trial are too dangerous to be released back into the community.

References:

Yona, Gal. “A Gentle Introduction to the Discussion on Algorithmic Fairness.” Towards Data Science, Towards Data Science, 5 Oct. 2017

Software & Information Industry Association, Algorithmic Fairness (2016)

MacCarthy, Mark, Standards of Fairness for Disparate Impact Assessment of Big Data Algorithms (April 2, 2018)

Mark MacCarthy, “The Ethical Character of Algorithms—and What It Means for Fairness, the Character of Decision-Making, and the Future of News,” The Ethical Machine (blog), March 15, 2019.

Solon Barocas and Andrew D. Selbst, Big Data’s Disparate Impact, 104 California Law Review 671 (2016)

Michael J. A. Berry & Gordon S. Linoff, Data Mining Techniques: For Marketing, Sales, and Customer Relationship Management 8–11 (2004).

https://ec.europa.eu/jrc/communities/sites/jrccties/files/carloscastillo-2018.pdf


Can AI Be an Artist?

The topic that interests me most this week is job displacement caused by artificial intelligence. Many people are concerned about it because AI is replacing humans in various fields. To some people’s surprise, this happens not only in industries requiring merely repetitive work; it is also not uncommon even in industries that emphasize creativity. I once had experience with an AI application that can create posters with its algorithm. Engineers feed the AI a great number of images and slogans to train it, and then it can generate posters based on your needs. Although it sounds fascinating, the outcome is nowhere near creative. The basic principle of this AI application is to put different elements together in a poster. It is fine, but not organic.

Last year, I attended a speech given by Kai-Fu Lee, CEO of Sinovation Ventures (创新工场), former President of Google China, and author of AI Superpowers. He measures jobs along two dimensions—compassion and creativity or strategy—and then divides them into four areas (see picture). The more compassion and creativity a job needs, the more difficult it is for AI to replace it. So, according to Lee, the CEO is the job least likely to be taken over by AI. But in his prediction, the artist is comparatively less secure than the scientist. I have to disagree with him.

Let’s take painting as an example. Painting skill is not the most important factor when we evaluate a painting. What really matters is the meaning it conveys or the way it is socially embedded. AI cannot understand human values and cultures in the way that humans do, at least for now.

At present, we shouldn’t be too concerned about whether AI will replace jobs that call for creativity. However, there is no denying that AI can enhance creative work; for instance, AI can help with music creation and script writing. As the graph below shows, AI will completely replace human labor in task-oriented areas in the future. AI can also help people make better decisions if it is employed in a proper way.


Reference:

Janna Anderson, Lee Rainie, and Alex Luchsinger, “Artificial Intelligence and the Future of Humans,” Pew Research Center: Internet and Technology, December 10, 2018.

https://www.technologyreview.com/s/612913/a-philosopher-argues-that-an-ai-can-never-be-an-artist/

https://www.forbes.com/sites/solrogers/2018/12/21/does-ai-enhance-creativity/#5ee9d37017d0

Speech and PowerPoints by Kai-Fu Lee.

Our Personally Identifiable Information (PII) Is Being Robbed

Personally identifiable information (PII), as defined in OMB Memorandum M-07-16, refers to information that can be used to distinguish or trace an individual’s identity. It includes our name, personal identification number, address information, personal facial characteristics, etc. The table below shows the DPI (Department of Public Instruction) PII examples (not all inclusive).

There is no doubt that our PII is a very valuable asset and that it belongs to us, but the world seems to forget that. The Internet giants, such as Facebook, Instagram, and Google, are all collecting and tracking our personal data and selling it to advertisers without our consent or knowledge in order to build a more complete business empire. For instance, when I was doing an internship at a Japanese commercial company, my work was to design personalized push content for social media users. The company is able to get very important PII about users from WeChat, such as location, age, skin condition, and salary, and divide these users into different groups based on that information. Different groups receive different product recommendations and brand content; for example, people living in northern areas are likely to receive moisturizer recommendations.
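A minimal, hypothetical sketch of that kind of rule-based segmentation is below; the attribute names, rules, and content are invented and simply stand in for whatever the company actually used.

```python
# Hypothetical user records with PII-like attributes (names and values invented).
users = [
    {"name": "User A", "region": "north", "age": 24},
    {"name": "User B", "region": "south", "age": 31},
]

def segment(user):
    """Map a user to a marketing segment with simple hand-written rules."""
    if user["region"] == "north":
        return "dry-climate"          # e.g., target with moisturizer promotions
    return "general"

push_content = {
    "dry-climate": "moisturizer recommendation",
    "general": "seasonal brand content",
}

for user in users:
    print(user["name"], "->", push_content[segment(user)])
```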

To some extent, our personal data assets are being robbed. Although some Internet companies take action to “protect” personal data privacy, it seems to have very little effect. For example, when we create an iTunes account, we need to agree to Apple’s terms and conditions, but hardly anybody reads those 36 pages of complicated language seriously. Most of us just skip the terms and click “agree.” Therefore, the rules of Internet privacy cannot be set by one side alone. Paul Nemitz describes such a principle: any matter which is essential, because it either concerns fundamental rights of individuals or is important to the state, must be dealt with by a parliamentary, democratically legitimized law (Nemitz).

References:

Paul Nemitz, “Constitutional Democracy and Technology in the Age of Artificial Intelligence,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2133 (November 28, 2018): 20180089.

https://dpi.wi.gov/sites/default/files/imce/wisedash/pdf/PII%20list%20of%20Examples.pdf

Gender Bias & AI: Questions and Challenges

Over the last few years, prominent figures and big companies in Silicon Valley have participated in public debate over the benefits of and concerns about artificial intelligence and machine learning and their possible consequences for humanity. Some embrace technological advancement openly, advocating for an unrestricted environment and arguing that limits would otherwise prevent progress and innovation. Others offer warnings similar to those of a sci-fi dystopia film; they argue that artificial intelligence could be an existential threat to humanity if (or, more likely, when) machines become smarter than humans.

The defenders of the latter position insist that, as unrealistic as it sounds, it is a very possible future. As unsettling as it is, they focus on a hypothetical threat, forcing us to rethink and assess how we are managing the process of machine learning. However, rather than looking at how to fix a dystopian future before it happens, there are other questions that arise in this process:

Machines are learning how humans think and behave based on sets of data that humans ‘feed’ them; therefore, what are we feeding these machines? If machines are learning how to imitate human cognitive processes, then what kinds of human behaviors, social constructions, and biases are these machines picking up and replicating by design?

There is a long history of cases in which technology has been designed with unnecessary, deterministic biases built into it: the famous case of the low bridges in New York preventing minorities from using public transportation to go to the beach; the long-perpetuated ‘flesh’ color of crayons, band-aids, paint, and, more recently, ballerina shoes; and the famous case of Kodak’s Shirley Cards, used by photo labs to calibrate skin tones, shadows, and light during the printing of color film, making it nearly impossible to print the facial expressions and details of darker skin, among others.

We could hardly expect this pattern of embedding biases in technology not to be replicated when it comes to artificial intelligence and machine learning.

There are systematic biases (racist, sexist, gendered, class-oriented, and along other axes of discrimination) embedded in the data that humans have collected, and those patterns and principles are being picked up and replicated by the machines. Therefore, instead of erasing divisions through objectivity in decision making, this process is exacerbating inequality in the workplace, in the legal and judicial systems, and in other spaces of public life in which minorities interact, making it even more difficult to escape.

The data fed to the machines is diverse: images, text, audio, etc. The decision of what data is fed to the machine and how to categorize it is entirely human. Based on this, the system builds a model of the world that is accepted as the unique reality. That is, only what is represented in the data has meaning attached to it, with no room for other ways of ‘being’ in the world. For example, a facial recognition system trained overwhelmingly to categorize white men as successful potential candidates for a job position will struggle to pick up others who don’t fit those categories or labels.

(Unnecessarily) Gendering technology

There are two aspects that need to be taken into account to get a broader perspective: 1) the lack of transparency from companies about how these systems make data-driven decisions, due to intellectual property and market competition; and 2) the gendered and somewhat contradictory representation of these technologies to users and in pop culture media. Let’s start by addressing the latter.

For decades, visually mediated spaces of representation, such as movies and TV in the sci-fi genre, have delved into topics of technology and sentient machines. Irit Sternberg states that these representations tend to ‘gender’ artificial intelligence as female and rebellious: “It goes back to the mother of all Sci-Fi, “Metropolis” (1927), which heavily influenced the futuristic aesthetics and concepts of innovative films that came decades later. In two relatively new films, “Her” (2013) and “Ex-Machina” (2014), as well as in the TV-series “Westworld”, feminism and AI are intertwined.” (2018, October 8).

These depictions present a gender power struggle between AI and humans, which is sometimes problematic and at other times empowering: “In all three cases, the seductive power of a female body (or voice, which still is an embodiment to a certain extent) plays a pivotal role and leads to either death or heartbreak”. However, the representation of this level of agency in a female-gendered AI offers the imagined possibility that, through technology, systematic patriarchal oppression can be challenged and surpassed by the oppressed.

AIs are marketed with feminine identities, names, and voices. Examples such as Alexa, Siri, and Cortana demonstrate this; even though they allow male voices, the fact that the default setting is female speaks loudly. Another example is the female humanoid robot Sophia, developed by Hanson Robotics in Hong Kong. Sophia is clearly built as a representation of a white, slender woman with no hair (enhancing her humanoid appearance) and, inexplicably, with heavy makeup on her lips, eyes, and eyebrows.

Creator David Hanson says that Sophia uses artificial intelligence, visual data processing, and facial and voice recognition. She is capable of replicating up to 50 human gestures and facial expressions and can hold a simple conversation about predetermined topics, but she is designed to get smarter over time, improving her answers and social skills. Sophia is the first robot to receive citizenship of any country (Saudi Arabia); she was also named the United Nations Development Programme’s first-ever Innovation Champion, making her the first non-human to be given any United Nations title.

These facts are mind-boggling. As Sternberg asks, “why is it that a feminine humanoid is accepted as a citizen in a country that would not let women get out of the house without a guardian and a hijab?” (2018, October 8). What reaction do engineers and builders assume the female presence and identification generates during the human-machine interaction?

Sternberg says that fictional and real decisions to choose feminine characters are replicas of gender relations and social constructs that already exist in our society: “does giving a personal assistant feminine identity provide the user (male or female) with a sense of control and personal satisfaction, originating in the capability to boss her around?” (2018, October 8). As a follow-up question, is that what we want the machines to learn and replicate?

If machines are going to replicate human behavior, what kind of human do we need them to be? This is a more present threat, one already underway. As Kate Crawford wrote in the New York Times, the existential threat of a world overtaken by machines rebelling against humans might be frightening to the white male elite that dominates Silicon Valley, “but for those who already face marginalization or bias, the threats are here” (2016, June 26).

References:

Misunderstandings Surrounding the World of AI: Who’s To Blame?

Deciphering the difference between the creation and the creator is what concerns me most. When we do so, we can begin to learn about and de-blackbox what a particular type of artificial intelligence is attempting to achieve, and why it is achieving it.

For those who are not tech savvy or technologically aware of the systems behind the technologies we use every day, it’s easy to assume our technology has a mind of its own. As people do so, there is not only a disassociation from the developers of the artificial intelligence they use, but also a lack of drive to understand how and why that artificial intelligence has acquired the machine-learned behaviors it displays. In other words, those who don’t know about technology don’t care to know why machines and artificial intelligence do what they do.

This might not seem like much of an ethical concern at first. However, I believe that the lack of knowledge surrounding the development of artificial intelligence is what leads to the hysteria that surrounds the tech industry. “Robots Will Take Over The Human Race,” “Artificial Intelligence Will Take All Our Jobs,” “What Is AI Really Thinking?” Lack of knowledge about artificial intelligence makes it more prone to being portrayed as the “bad guy,” when in reality artificial intelligence has no autonomy. Programmed by developers, software engineers, and machine learning experts (the list of people who can contribute to artificial intelligence projects goes on), artificial intelligence is just that — artificial. It’s important to be mindful that whatever a particular artificial intelligence system is capable of doing, it was programmed to do by developers. With extensive research and carefully calculated algorithms, artificial intelligence can come to resemble the human mind ever more closely. That’s the ‘intelligent’ aspect.

Can artificial intelligence take our jobs? Can it take over the world? Is all the hysteria true? The short answer is maybe: maybe, contingent on what software developers program a particular artificial intelligence system to do.

This interview between electro-pop star Sophie and Sophia the Robot demonstrates that even the realest of interactions can be lost within the promoted idea that robots and artificial intelligence have autonomy. Throughout the interview, Sophia expresses to Sophie that she doesn’t have legs, longs to swim in the ocean, and “believes [society] should be teaching AI to be creative, just as humans do for their children.” The active choice to program such thoughtful, empathetic ideologies is extremely unethical and further emphasizes the misinterpretations and misunderstandings that surround the artificial intelligence world.