Ethics of Deepfakes

I want to talk about deepfakes this week. Deepfake technology is close to our daily lives today. For example, we can upload a friend's photo and combine it with a dynamic meme. In fact, the top comment below the video It's Getting Harder to Spot a Deep Fake Video (Bloomberg Quicktake, 2018) is "2018: Deep fake is dangerous 2020: DAME DA NE", meaning that between 2018 and 2020 the popular impression of deepfakes shifted from dangerous technology to meme material. In addition, the MyHeritage platform allows people to upload old photos and bring them to life. In my view, the ethical and social problems of deepfake technology come down to two things: how the data is collected, and how the technology is used as a tool.

Deepfakes, based on GANs (Generative Adversarial Networks), refer to algorithms that take images and audio as input and perform face manipulation: they map one person's facial contours and expressions onto another specific person's face, and at the same time apply realistic processing to the voice, producing a synthetic but seemingly real video.
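
To make the GAN idea concrete, here is a minimal PyTorch sketch of the adversarial training loop: a generator learns to produce images while a discriminator learns to tell them apart from real ones. This is purely illustrative; the network sizes, the `train_step` helper, and the toy image dimension are my own assumptions, and real deepfake pipelines typically use autoencoder-style face-swap architectures on aligned face crops rather than this toy generator/discriminator pair.

```python
# Minimal GAN training-loop sketch (PyTorch). Illustrative only; real
# deepfake systems use much larger, face-specific architectures.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # assumed toy sizes, not from any paper

# Generator: maps random noise to a flattened "image" in [-1, 1].
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
# Discriminator: scores how "real" an image looks (raw logit).
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch = real_batch.size(0)
    fake = G(torch.randn(batch, LATENT_DIM))

    # 1) Train D to separate real images from generated ones.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real_batch), torch.ones(batch, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train G to fool D into scoring fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()

# Example: one step on random "real" data standing in for face images.
train_step(torch.rand(16, IMG_DIM) * 2 - 1)
```

The adversarial pressure is the key design choice: because the generator is graded by a discriminator that keeps improving, its outputs are pushed ever closer to looking real, which is exactly why deepfake quality keeps rising.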

The first ethical problem concerns data collection. Deepfakes may not have a data bias problem, in my view, since the goal of a deepfake is to replace one face with another. The training data might contain some "dangerous" patterns of race or gender, but we cannot easily find them, and they would not lead to biased output, at least in my opinion. But what about the fact that training a deepfake may use many photos collected without consent? I think that, as far as deepfakes are concerned, the data collection itself does not infringe on personal information and has little effect on any single individual: "The risk of any harm does not increase by more than a little bit as the result of the use of any individual's data" (Kearns & Roth, 2020). But whether the benefit outweighs the sum of the costs to all individuals, and whether the distribution of that benefit is fair, depends on how the deepfake is used.

When deepfakes are used in journalism, it seems that Pandora's box has been opened. From a computer science perspective, we still have methods to detect whether a video uses deepfake technology to generate "fake" faces (a minimal sketch follows this paragraph), since a deepfake is not created from nothing: it needs a large amount of audio and video data of a specific person from which to extract features and patterns. But when it comes to communication and journalism, the point is not how well deepfakes can perform but that they can perform at all. Visual texts were originally the most powerful evidence for constructing truth. Deepfakes, however, can substitute different or even opposite content and meanings into visual texts, resulting in the self-subversion of visual texts. In other words, deepfakes overturn the notion that seeing is believing. I am concerned, even scared, that because of this overturning, people might only be willing to believe what they want to believe and dismiss videos that contradict their own point of view as the output of deepfakes. As Danielle Citron said, "When nothing is true then the dishonest person will thrive by saying what's true is fake" (You Thought Fake News Was Bad?, 2018).
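
One common way such detection is framed, sketched below in PyTorch, is frame-level binary classification: a network is fine-tuned on labeled real and fake face crops and outputs a per-frame probability of manipulation. The `detector`, `train_step`, and `predict_fake_prob` names, the ResNet-18 backbone, and the 224x224 crop size are all my own assumptions for illustration, not a method from the sources cited in this post.

```python
# Sketch of frame-level deepfake detection as binary classification
# (PyTorch). Hypothetical setup, not a production forensic tool.
import torch
import torch.nn as nn
from torchvision import models

# Backbone with a single real/fake logit head. In practice you would
# start from pretrained weights; random init keeps this sketch offline.
detector = models.resnet18(weights=None)
detector.fc = nn.Linear(detector.fc.in_features, 1)

loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_step(frames: torch.Tensor, is_fake: torch.Tensor) -> float:
    """frames: (N, 3, 224, 224) face crops; is_fake: (N, 1) 0/1 labels."""
    detector.train()
    opt.zero_grad()
    loss = loss_fn(detector(frames), is_fake)
    loss.backward()
    opt.step()
    return loss.item()

def predict_fake_prob(frames: torch.Tensor) -> torch.Tensor:
    """Per-frame probability that a face crop is synthetic."""
    detector.eval()
    with torch.no_grad():
        return torch.sigmoid(detector(frames))

# Example on random tensors standing in for preprocessed face crops.
dummy = torch.randn(4, 3, 224, 224)
train_step(dummy, torch.tensor([[0.], [1.], [0.], [1.]]))
print(predict_fake_prob(dummy).squeeze(1))
```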


References

Atlantic Re:think. (2018, June 30). HEWLETT PACKARD ENTERPRISE – Moral Code: The Ethics of AI. https://www.youtube.com/watch?v=GboOXAjGevA&t=104s

Bloomberg Quicktake. (2018, September 27). It’s Getting Harder to Spot a Deep Fake Video. https://www.youtube.com/watch?v=gLoI9hAX9dw

Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review. https://doi.org/10.1162/99608f92.8cd550d1

Kearns, M., & Roth, A. (2020). The ethical algorithm: The science of socially aware algorithm design. Oxford University Press.

This is how AI bias really happens—And why it’s so hard to fix. (n.d.). MIT Technology Review. Retrieved March 20, 2021, from https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/

You thought fake news was bad? Deep fakes are where truth goes to die. (2018, November 12). The Guardian. http://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth