Numbers don’t lie?

AI is all about training computers to learn patterns from massive amounts of data. As AI gets more involved in our daily lives, the logic and ethics behind each algorithm should become more transparent. Technology itself is neither right nor wrong, but its decisions and outcomes depend heavily, sometimes entirely, on the data provided by its human creators. Since it is so easy for AI to predict and even steer our life-changing decisions (dating apps, housing rentals, debt…), it is important to understand the relationship between the creator and the created (HEWLETT PACKARD ENTERPRISE – Moral Code: The Ethics of AI, 2018, 03:15-05:21). Because AI is an outcome of human action, bias can enter at three stages (Hao, 2020):

  1. Framing the problem
  2. Collecting the data (the data is one-sided or already reflects human bias)
  3. Preparing the data (a subjective process)

Bias in AI is hard to fix for two reasons: first, it is often discovered too late; second, the complex algorithm has already learned what it was taught, so fixing the root does not change the branches.
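The "root versus branches" problem can be made concrete. Below is a minimal sketch using entirely synthetic data and made-up feature names (group labels "A"/"B", zip codes "100"/"200" are illustrative assumptions, not from any real system): even a naive attempt to debias by deleting the sensitive attribute fails when a correlated proxy feature remains in the data.

```python
# Minimal sketch with synthetic data: dropping the sensitive attribute
# ("group") does not remove bias when a correlated proxy (here, zip code)
# stays in the training data.
import random

random.seed(1)

def make_record():
    """One synthetic candidate: (group, zip_code, hired_by_humans)."""
    group = random.choice("AB")
    # Residential segregation: each group lives 90% in "its" zip code.
    home, other = ("100", "200") if group == "A" else ("200", "100")
    zip_code = home if random.random() < 0.9 else other
    # Biased historical decisions: group A hired at 70%, group B at 20%.
    hired = random.random() < (0.7 if group == "A" else 0.2)
    return group, zip_code, hired

history = [make_record() for _ in range(2000)]

# "Debiased" training: ignore group entirely, learn hire rates per zip code.
rates = {}
for z in ("100", "200"):
    outcomes = [hired for _, zip_code, hired in history if zip_code == z]
    rates[z] = sum(outcomes) / len(outcomes)

# Zip code "100" (mostly group A) still scores far above "200":
# the root was removed, but the branches kept the bias.
print(rates)
```

The point of the sketch is that the proxy carries the bias forward on its own, which is one reason auditing inputs is not enough.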

AI amplifies human bias and deepens existing stereotypes, as the examples in the source materials show. A predictive-policing AI labels people who live in certain areas as more likely to commit crimes, and those areas happen to be home to communities of color or the poor; the AI becomes racist without ever being told to be. Another example is Amazon's experimental hiring tool, which filtered out female candidates because it was trained on the company's historical hiring data, dominated by white men. The AI became sexist not through explicit instruction, but by learning from past practice.
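The Amazon-style failure can be reproduced in a few lines. This is a minimal sketch with synthetic data, not Amazon's actual system: the toy "model" merely memorizes historical hire rates per group, and that alone is enough to turn past bias into future policy.

```python
# Minimal sketch, NOT Amazon's actual system: a toy "hiring model" trained
# on synthetic historical decisions that favored group A. The model only
# memorizes past hire rates, yet that alone reproduces the discrimination.
import random

random.seed(0)

# 500 past candidates from each group; humans hired A at 70%, B at 20%.
history = ([("A", random.random() < 0.7) for _ in range(500)]
           + [("B", random.random() < 0.2) for _ in range(500)])

def train(data):
    """Learn the historical hire rate for each group."""
    rates = {}
    for group in ("A", "B"):
        outcomes = [hired for g, hired in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)

def recommend(group):
    """Recommend an interview if the learned hire rate exceeds 50%."""
    return model[group] > 0.5

print(recommend("A"))  # True  - the favored group stays favored
print(recommend("B"))  # False - past bias becomes future policy
```

Nothing in the code mentions gender or race; the discrimination is entirely inherited from the labels humans produced.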

One specific issue is how AI has deepened gender discrimination, especially with the rise of fake-face generation. According to The Verge, face-generation and deepfake face-swapping techniques are mostly used to create non-consensual pornography (Vincent, 2019), and, as you might imagine, most of the victims are women. The inevitable truth is that even when the videos or pictures are known to be fake, what is out there is already out there: deleting the file does not erase its existence. The same applies to politics. Since "it's getting harder to spot a deep fake video," this ethical issue can only get worse without legal restrictions to follow (It's Getting Harder to Spot a Deep Fake Video, 2018, 03:15-05:21). A common misunderstanding is that because AI is generated by algorithms, it must be impersonal, impartial, or more rational than humans. After all, we believe numbers and data don't lie.

Question: What tools can we use to reduce AI bias?


Hao, K. (2020, April 2). This is how AI bias really happens—and why it’s so hard to fix. MIT Technology Review.

HEWLETT PACKARD ENTERPRISE – Moral Code: The Ethics of AI. (2018, June 29). [Video]. YouTube.

It’s Getting Harder to Spot a Deep Fake Video. (2018, September 27). [Video]. YouTube.

Vincent, J. (2019, February 15). uses AI to generate endless fake faces. The Verge.