Category Archives: Week 9

“How Games Will Play You” Dilemma

How Games Will Play You (Togelius, 2019)

Who it affects:

Being a lifelong gamer myself, this issue holds special significance for me. Gaming has become far more mainstream than when I began playing in the early 2000s: the number of people who identify as “gamers” grew from 2 billion in 2015 to almost 3 billion in 2021 (Number of Gamers Worldwide 2023). It is also a relatively cheap form of entertainment when hours spent playing are weighed against the money spent on a console and games. Yet more and more game developers are switching from pay-to-play models to “free to play” (Koksal, 2019). The quotation marks are meant to imply that nothing is truly “free,” especially in the gaming industry.

Why it is a problem: 

Game developers acquire users’ spending and playing habits in the same fashion as social media and search engine companies like Facebook and Google (Boutilier, 2015). Furthermore, game developers target younger audiences, whose prefrontal cortices are not yet fully developed, and capitalize on this with microtransactions in free-to-play models (Uytun, 2018, p. 8). Many loot box models (mystery rewards purchased with cash) exploit psychological levers to encourage purchasing, such as loss aversion, impulse buying, and time constraints (Duverge, 2016). Admittedly, game developers should not shoulder the entire burden associated with children, microtransactions, and data acquisition; parents ultimately have the final say in how much time their children spend online and what they do there.

Questions and potential solutions:

Should game developers be treated any differently than other companies that earn large revenues from selling data? Perhaps they should, considering that some of the most popular games in the industry target minors (Fortnite, Hearthstone, and Minecraft, to name a few), and all of them maintain free-to-play models with revenue based on microtransactions. One policy that lightly addresses the issue is COPPA, the Children’s Online Privacy Protection Rule, enacted in 1998 (Children’s Online Privacy Protection Rule (“COPPA”), 2013). COPPA’s purpose was to protect children’s personally identifiable information from being unknowingly collected in online environments. Much has changed since 1998, however: the number of child gamers has increased dramatically, and parents may not fully understand how much of their children’s data is being acquired (Benedetti, 2011). It could be argued that COPPA is an antiquated policy in need of revision, one that no longer adequately addresses the acquisition of minors’ data. Raising the age ratings on video games is not an adequate solution either, but a more detailed warning to parents, coupled with a requirement that parents explicitly allow or deny access to their kids’ data, might be.


Benedetti, W. (2011, October 11). Ready to play? 91 percent of kids are gamers. NBC News.

Boutilier, A. (2015, December 29). Video game companies are collecting massive amounts of data about you. Thestar.Com.

Children’s Online Privacy Protection Rule (“COPPA”). (2013, July 25). Federal Trade Commission.

Duverge, G. (2016, February 25). Insert more coins: The psychology behind microtransactions. Touro University Worldwide.

Koksal, I. (2019). Video gaming industry & its revenue shift. Forbes.

Number of gamers worldwide 2023. (n.d.). Statista. Retrieved March 22, 2021.

The games industry shouldn’t be ripping off children | Geraldine Bedell. (2019, September 15). The Guardian.

Togelius, J. (2019, April 17). How games will play you. The Ethical Machine.

Uytun, M. C. (2018). Development period of prefrontal cortex. In A. Starcevic & B. Filipovic (Eds.), Prefrontal Cortex. InTech.


Paternal Beneficence and Following The Threads of Blame

There are so many aspects of Artificial Intelligence (AI) that call for us to pause the pursuit of progress and take a moment to ensure we do things right the first time. This powerful technology brings so much apprehension not just because of its techniques, but because its rollout is quick and there is no face to blame if things go awry. That brings up the two issues I want to cover today: paternal beneficence and how to attribute blame.

How do we create a system that makes decisions for other people, especially when those people don’t know decisions are being made for them? This happened long before the widespread dissemination of AI, as businesses and governments decided what is important for one group or another and what thresholds people must meet before they may access benefits. The problem is only exacerbated when AI helps make, or makes, those decisions for us, because each agency defines goals and outcomes differently, leaving the system susceptible to benevolent decisions with malevolent outcomes.

Say you wanted to decrease the mortality rate in a hospital. On the surface this seems an ideal goal, but only because we hold assumed ideas about the parameters around which decisions should and could be made. An AI system agnostic to moral platitudes might simply reduce the rate of high-risk patients coming to the hospital, rerouting them elsewhere to ensure that the cases the doctors face have a higher likelihood of success. This would not be discovered unless someone constantly supervised the AI or an audit of the system was conducted; in the meantime, hundreds of injuries or deaths might have been prevented had the system never been brought online. This points to the process of de-black-boxing, which calls for us to be as explicit as possible about outcomes and parameters. But that, as with legislation, requires hard lines to be drawn, and people will fall through the cracks in the system, because we cannot account for everyone. It also presupposes that those sorted by the system are unable to affect it in the moment they come into contact with it: a paternalistic choice, made because we believe the system or its administrators have the expertise to make a better, more informed decision.

Conversely, if we have the user make the decision, it may slow down decision-making at a moment when time is scarce. There is no real best-practice approach here, and it leads to the next problem AI has to contend with.

How do we attribute blame when a system goes awry? Suppose, in this same scenario, I am rerouted to a different care clinic, do not receive the level of care I need, and end up requiring lifelong assistance. Who should take the blame and be responsible for my care, given that the harm would have been preventable had I reached the better hospital I was routed away from? This is a persistent problem as AI systems are put in charge of frameworks that can cause harm of ever larger magnitude. Do we blame the administration for deciding to decrease mortality? The ambulance driver for following the algorithm’s decision? The AI for parameters it was never given? The developer for not building safeguards into the system in the first place? Who, then, is responsible for my care? These attributions of blame, much like a company’s difficulty in singling out one person as the cause of a problem, make legislation hard and make it harder to bring justice when tragedy strikes. Nor can we throw up our hands and say we don’t know: the stakes are already high, people’s lives are being altered by the decisions AI systems make, and as people fall further out of the loop, blame becomes harder to attribute.

Though there is no easy solution to any of these issues, they show the complexity of the problems with AI and why we need to be thinking about them now, before we reach the point of no return.

Laziness and Magic

Probably the most important ideological issues I have noticed with the advancement of AI/ML applications are the lack of accountability and the deep-seated nature of the problems being tackled.

To begin, companies are money-hungry, and employees are enough of a mixture of lazy and eager-to-please that we allow shortcuts to be taken and questionable data to be used to train and test our systems. Early data collection efforts were extremely cautious and used photoshoots with consenting individuals, but time, money, and lack of diversity became issues. So employees began scraping the web, using images of faces from websites like Flickr (where many photos are registered under Creative Commons licenses) to build huge datasets of faces to train on. This is where the issues begin. By 2007, researchers were downloading images directly from Google, Flickr, and Yahoo without concern for consent. “They found that researchers, driven by the exploding data requirements of deep learning, gradually abandoned asking for people’s consent. This has led more and more of people’s personal photos to be incorporated into systems of surveillance without their knowledge…People were extremely cautious about collecting, documenting, and verifying face data in the early days, says Raji. ‘Now we don’t care anymore. All of that has been abandoned,’ she says. ‘You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control’” (Francisco). For a person doing potentially revolutionary work to say they “don’t care” clearly suggests that they are not held accountable for their actions and that no rules exist to hold them so. If the people creating the databases say they “can’t even pretend they have control,” shouldn’t we be rethinking the processes we are defining?

Once the model is trained, biases usually show up: “There are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices” (AI Bias). Truthfully, as long as humans are developing AI/ML technologies, I do not think any of them will be fully free of human bias. Maybe with developer teams that are diverse enough we can blunt the issue, but to say a human-created technology will be free of humanity’s imperfections seems like a lofty goal. Similar to the lack of accountability in data collection, no one seems responsible for the outputs of AI/ML applications: “really strange phenomena start appearing, like auto-generated labels that include offensive terminology” (AI Ethics-Washing). How can anything humans created with a specific goal in mind contain a “strange phenomenon”? Computers do not develop their own brains in the process and decide to be offensive; rather, it is humans creating applications that lead to these offensive labels.

Beyond the imperfections that come with human beings being present in AI/ML technology, our designs are also based on the culture we are part of, which differs not only from region to region but also throughout time. For example, take the study in which people were asked about “moral decisions that should be followed by self-driving cars. They asked millions of people from around the world to weigh in on variations of the classic ‘trolley problem’ by choosing who a car should try to prioritize in an accident. The results show huge variation across different cultures.” All humans have their own experiences: different upbringings, traumas, family histories, health conditions. To say we can all universally agree on the difficult decisions AI/ML applications should make is impossible. Even if we could, it is probable that five or twenty years down the road, that decision would no longer be agreed upon.

In my opinion, stricter rules and regulations need to be placed on employees of large tech companies. Employees of pharmaceutical companies work with many patients’ personal data, obtaining consent from patients to enroll in clinical trials and share data with the companies. These employees must adhere to extremely strict guidelines set forth by the FDA and HIPAA, and employees of tech companies should have to do the same. “A recent study out of North Carolina State University also found that asking software engineers to read a code of ethics does nothing to change their behavior.” Much like the Terms & Conditions, no one reads it and almost no one cares about the contents. High-stakes ethics and regulations need to be enforced. If we do not enforce them at the employee/developer level, how can we expect users to use these applications ethically?


Francisco, Olivia Solon. “Facial Recognition’s ‘Dirty Little Secret’: Social Media Photos Used without Consent.” NBC News. Accessed March 19, 2021.
MIT Technology Review. “In 2020, Let’s Stop AI Ethics-Washing and Actually Do Something.” Accessed March 20, 2021.
MIT Technology Review. “This is how AI bias really happens—and why it’s so hard to fix.” Accessed March 20, 2021.

Who has the power?

Over the years, in one way or another, we have all seen artificial intelligence take over so many aspects of our lives. For the most part, we don’t even see or realize that some form of AI is being implemented and used for a specific product or circumstance. When we do notice, however, we should also recognize the countless instances where the biases and prejudices that exist within AI are very much present. I think because it is intangible, we assume that as an “electronic” being there is no association between AI and the societal issues that exist in our societies today. But AI hasn’t coded itself; it hasn’t created itself out of nowhere. As we humans “manufacture,” code, and establish these presences in our lives, any bias or prejudice embedded in our human minds and lives gets encoded with them, unfortunately or fortunately (since, knowing what is wrong, we can work toward fixing it). Apart from that, relying so heavily on manufactured intelligence has its own ethical implications and issues.

With data collection being a huge part of our electronic and digital presence, most of the time we are not even aware it is taking place. We are not always sure how and when data is being collected or what is being done with it, but if there is one thing I have realized, it is that data is constantly collected. Some of our readings treated these as two separate ethical issues governing AI, but I feel they are similar and interchangeable: the autonomy and power we cede to our AIs. The automation behind AI causes “individuals [to] experience a loss of control over their lives” (Anderson & Rainie, 2018). “When we adopt AI and its smart agency, we willingly cede some of our decision-making power to technological artefacts” (Floridi & Cowls, 2019, 7). Partially, this is because the de-black-boxing of AI is still very much in the box. A question poses itself here: will we ever truly learn what is behind the AI we use daily? Will these companies and products ever truly reveal how they work and what they really do with all this information and data collection? Honestly, probably not, since that would weaken them against competitors. Unless, that is, more people start realizing, noticing, and demanding change in the control and power they hold over their use of this type of technology. As Nemitz also explains, large tech corporations have taken over every aspect of our lives, whether or not we realize it or sign up for it. “The pervasiveness of these corporations is thus a reality not only in technical terms, but also with regard to resource attribution and societal matters” (Nemitz, 2018, 3). These companies and brands have gathered into “their hands” countless pieces of information and data, with which they can control many aspects of human life, especially through the technological, economic, and political power granted them by this digital power. Since we now rely so heavily on technology and a digitized framework, most aspects of human life are also controlled by technology. In a way, whoever is most “ahead of the game” in the field is the one who holds the power, the information, the data. Everyone else has pretty much lost the ability to pick and choose when, how, and where they share information. It is one way or the other: if you want any sort of digital presence, to talk on the phone, use your credit card, pay for something, or look something up, nearly everything you do is tracked, collected, and formed into a bigger overall ‘picture.’

Another ethical issue and implication of AI is, of course, that all this information and data can be used for destruction and with malicious intent toward others. Apart from “autonomous military applications and the use of weaponized information” (Anderson & Rainie, 2018), we can also speak of the collection of information aimed at capturing people, such as facial recognition. The problem here is: who is using this technology, and for what reasons? Of course, we must again consider the biases that go into this type of vigilance. Racist views and opinions definitely influence whom this type of technology is aimed at and who will mostly be targeted by it. Floridi et al. also explain this in terms of how “developers may train predictive policing software on policing data that contains deeply ingrained prejudices. When discrimination affects arrest rates, it becomes embedded in prosecution data. Such biases may cause discriminatory decisions (e.g., warnings or arrests) that feed back into the increasingly biased datasets, thereby completing a vicious cycle” (Floridi et al., 2019, 1788).
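The “vicious cycle” Floridi et al. describe can be made concrete with a toy simulation (every number, and the winner-take-more patrol rule, is invented purely for illustration): two neighborhoods have identical true crime rates, but patrols follow past arrest records, and recorded arrests follow patrols.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two neighborhoods with IDENTICAL true crime rates...
true_crime_rate = np.array([0.1, 0.1])
# ...but neighborhood A starts with more recorded arrests (historical bias).
arrests = np.array([60.0, 40.0])

for year in range(10):
    # Patrols are allocated more than proportionally to past arrest counts
    # (a winner-take-more rule, invented here for illustration).
    weights = arrests ** 2
    patrols = 100 * weights / weights.sum()
    # Recorded arrests track where the patrols are, not where the crime is.
    new_arrests = rng.poisson(patrols * true_crime_rate * 10)
    arrests = arrests + new_arrests

share_A = arrests[0] / arrests.sum()
print(round(share_A, 2))  # well above the 0.5 a fair record would show
```

Although nothing about the neighborhoods actually differs, neighborhood A’s share of recorded arrests climbs well above one half, so a model trained on this data would “learn” that A is more dangerous, completing the cycle.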


How do we apply laws/regulations/safety measures for something so widely used? 

We have seen how hard it has been to manage data privacy practices and laws from one country to another; how can something so universal become specific enough to protect people?


Janna Anderson, Lee Rainie, and Alex Luchsinger, “Artificial Intelligence and the Future of Humans,” Pew Research Center: Internet and Technology, December 10, 2018.

Karen Hao, “In 2020, Let’s Stop AI Ethics-Washing and Actually Do Something,” MIT Technology Review, December 27, 2019.

Karen Hao, “Establishing an AI Code of Ethics Will Be Harder Than People Think,” MIT Technology Review, October 21, 2018. 

Karen Hao, “This Is How AI Bias Really Happens — and Why It’s so Hard to Fix,” MIT Technology Review, February 4, 2019.

Luciano Floridi and Josh Cowls, “A Unified Framework of Five Principles for AI in Society,” Harvard Data Science Review 1, no. 1

Luciano Floridi, Josh Cowls, et al., “How to Design AI for Social Good: Seven Essential Factors,” Science and Engineering Ethics, 26/3, 2020: 1771-1796.

Paul Nemitz, “Constitutional Democracy and Technology in the Age of Artificial Intelligence,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2133 (November 28, 2018): 20180089

Numbers don’t lie?

AI is all about training computers to learn and find patterns in massive amounts of data. As AI gets more involved in our daily lives, the logic and ethics behind each algorithm should become more transparent. Technology itself is neither right nor wrong, but its decisions and outcomes relate closely to, or even depend entirely on, the data provided by its human creators. Since it is so easy for AI to predict and even steer our life-changing decisions (dating apps, housing rentals, debt…), it is important to understand the relationship between creators and their creations (HEWLETT PACKARD ENTERPRISE – Moral Code: The Ethics of AI, 2018, 03:15-05:21). Because AI is an outcome of human action, bias can enter when (Hao, 2020):

  1. Framing the problem
  2. Collecting the data (the data is one-sided/unrepresentative, or reflects existing bias)
  3. Preparing the data (a subjective process)

It’s hard to fix bias in AI because, first, it is often discovered too late; second, the complex model has already learned what it was taught, and fixing the root doesn’t change the branches.
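Hao’s second source of bias, collecting unrepresentative data, can be seen in a toy numpy experiment (all names and distributions here are invented): a one-threshold classifier is fit on a sample that is 95% group A, then evaluated on both groups.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample(group, n):
    # Invented setup: qualified applicants score higher on average, but
    # group B's scores sit one point lower, e.g. because the measurement
    # instrument was tuned on group A.
    shift = 0.0 if group == "A" else -1.0
    qualified = rng.integers(0, 2, size=n)             # ground-truth labels
    scores = rng.normal(qualified * 2.0 + shift, 1.0)  # observed scores
    return scores, qualified

def accuracy(threshold, scores, labels):
    return np.mean((scores > threshold).astype(int) == labels)

# Training set: 95% group A, 5% group B -- unrepresentative of reality.
sa, la = sample("A", 950)
sb, lb = sample("B", 50)
train_s = np.concatenate([sa, sb])
train_l = np.concatenate([la, lb])

# "Fit" the model: brute-force the threshold with the best training accuracy.
grid = np.linspace(-3.0, 3.0, 121)
best = grid[np.argmax([accuracy(t, train_s, train_l) for t in grid])]

# Evaluate on large fresh samples from each group.
acc_a = accuracy(best, *sample("A", 5000))
acc_b = accuracy(best, *sample("B", 5000))
print(round(acc_a, 2), round(acc_b, 2))  # group A fares noticeably better
```

The learned threshold is tuned to group A’s score distribution, so accuracy on group B comes out noticeably lower, even though the model was never told which group anyone belongs to.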

AI amplifies human bias and deepens existing stereotypes, as the examples in the source materials show. An AI labels people who live in certain areas as more likely to commit crimes, and those areas happen to be populated by people of color or the poor: the AI becomes a racist without being told to. Another example is Amazon’s hiring tool, which filtered out female candidates because it was trained on historical hiring data crowded with white males. In this case, the AI became sexist not by being told in words, but by learning from past “truth.”

One specific issue is how AI has deepened gender discrimination, especially with the blooming of fake-face-generation techniques. According to The Verge, fake face-generating and deep-fake face-swapping techniques are mostly used to create non-consensual pornography (Vincent, 2019), and as you might imagine, most of the victims are female. The inevitable truth is that even when the videos or pictures are known to be fake, what is out there is already out there: eliminating the artifact does not eliminate its existence. The same applies to politics. Since “it is getting harder to spot a deep fake video,” this ethical issue can only get worse without legal restrictions to follow (It’s Getting Harder to Spot a Deep Fake Video, 2018, 03:15-05:21). One misunderstanding of AI is that because it is generated by algorithms, we assume it is impersonal, more impartial, or more rational than humans, because we believe numbers and data don’t lie.

Question: What tools can we use to reduce AI bias?


Hao, K. (2020, April 2). This is how AI bias really happens—and why it’s so hard to fix. MIT Technology Review.

HEWLETT PACKARD ENTERPRISE – Moral Code: The Ethics of AI. (2018, June 29). [Video]. YouTube.

It’s Getting Harder to Spot a Deep Fake Video. (2018, September 27). [Video]. YouTube.

Vincent, J. (2019, February 15). uses AI to generate endless fake faces. The Verge.

The ethical, political, and ideological issues surrounding AI/ML applications (is that real or exaggerated?)

Although artificial intelligence (AI) and its subfields, such as machine learning (ML) and deep learning (DL), have many beneficial uses, they have also been used in harmful ways to surveil and target minority communities (Solon, 2019).

AI facial recognition systems have been used to track potentially violent people. Unfortunately, poor training datasets led some of those systems to flag dark-skinned people as potential threats (Hao, “Stop AI Ethics-Washing,” 2019; Hao, 2018).

Some experts have expressed concerns about the future effects of AI technologies. Two important ethical, political, and ideological issues surrounding AI/ML applications are data abuse and deep fakes (Anderson & Rainie, 2018).

Data Abuse

Facebook and Twitter gather massive amounts of information from users and use it to make recommendations based on users’ interests. Facebook’s ad preferences, for example, use data such as political leaning and racial/ethnic affinities to generate materials for users. Sixty percent of users assigned a “multicultural affinity” category said they have a very strong affinity for their assigned group. Most social media users believe these platforms can detect their main traits, such as race, political opinion, and religious beliefs; however, there are discrepancies between the political ideologies platforms assign to users and what users say about themselves (Hitlin & Rainie, 2019).

Some advertisers have used multicultural-affinity AI tools to exclude certain racial groups from job-related advertising and hiring. Some studies say that AI is responsible for these problems; we can say it is not AI as such, but the wrongful use of AI applications, that causes them.

Variation in the training database affects the performance of AI systems. In a study of gender identification based on deep learning (Wojcik & Remy, 2019), DL algorithms failed to detect dark-skinned people. In fact, the study’s database was very small, and it did not take into account all possible races and ages; its results therefore cannot be considered accurate.

IBM used images from the Flickr website to train its face recognition systems. The problem is that you don’t know whether your images were used, but the fact is that IBM could use your photos if you published them under a Creative Commons license, which allows nonprofits to use them for free. Some people were annoyed that their photos were used, while others said the photos could help improve face recognition systems. In some countries, if IBM does not respond to your request to remove your photos, you can complain to your data protection authority (Solon, 2019).

Deep Fake

The website developed by Philip Wang generates an infinite number of fake images. His technique is based on AI and uses a very large dataset of real images. The StyleGAN networks behind the site can accept not only human faces but also any source material, helping graphics and animation designers develop their applications (games, film tricks, etc.). The same family of techniques, however, can create fake videos by pasting people’s faces onto target footage (Vincent, 2019). Trump appeared in a video offering advice to the people of Belgium on climate change: a fake film constructed with these deep-fake networks. Such misuse of AI can cause political criticism and even mayhem (Schwartz, 2018). Traditional Photoshopped fake images could perhaps have similar bad effects, but this AI technology scales them.

Optimistic Future

Despite all these misuses of AI, new detection methods have arisen. Fortunately, large groups of AI researchers are aware of AI ethics and have taken many approaches to the problem, such as developing algorithms that reduce hidden biases within training datasets and applying processes that hold AI companies responsible for fairer results (Hao, “How AI Bias Really Happens,” 2019). Facebook has committed to developing an ML algorithm for detecting deep fakes (Schwartz, 2018), and other researchers have developed approaches to detect and reduce hidden biases within datasets (Hao, “How AI Bias Really Happens,” 2019; Hao, “Stop AI Ethics-Washing,” 2019). AI companies are protecting user privacy, combating deep fakes, and taking wider datasets into account.

AI is the digital future of the world. Its benefits are obvious in every field: medical diagnosis, data mining, robotics, big data analysis, image recognition, military applications, security applications, and more. Even deep fakes have positive uses, like creating digital voices for people who have lost theirs to disease (Baker & Capestany, 2018).

Many high-profile, reputable initiatives have been established in the interest of socially beneficial AI. Principles such as the Montreal Declaration’s and IEEE’s state that the development of AI should ultimately promote the well-being of all humans; other principles focus on the common good or the benefit of AI applications to humanity (Floridi & Cowls, 2019).


Anderson, J., & Rainie, L. (2018). Artificial Intelligence and the Future of Humans. Pew Research Center.

Baker, H., & Capestany, C. (2018). It’s Getting Harder to Spot a Deep Fake Video. Bloomberg.

Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review.

Hao, K. (2018). Establishing an AI Code of Ethics Will Be Harder Than People Think. MIT Technology Review.

Hao, K. (2019). In 2020, Let’s Stop AI Ethics-Washing and Actually Do Something. MIT Technology Review.

Hao, K. (2019). This Is How AI Bias Really Happens—and Why It’s So Hard to Fix. MIT Technology Review.

Hitlin, P., & Rainie, L. (2019). Facebook Algorithms and Personal Data. Pew Research Center.

Schwartz, O. (2018). You Thought Fake News Was Bad? Deep Fakes Are Where Truth Goes to Die. The Guardian.

Solon, O. (2019). Facial Recognition’s “Dirty Little Secret”: Social Media Photos Used Without Consent. NBC News.

Vincent, J. (2019). uses AI to generate endless fake faces. The Verge.

Wojcik, S., & Remy, E. (2019). The Challenges of Using Machine Learning to Identify Gender in Images. Pew Research Center.

Ethics of deep fake

I want to talk about deep fakes this week. Deep fakes are close to our daily lives today; for example, we can upload a friend’s photo and combine it with a dynamic meme. In fact, the top comment below the video It’s Getting Harder to Spot a Deep Fake Video (Bloomberg Quicktake, 2018) is “2018: Deep fake is dangerous. 2020: DAME DA NE,” meaning that from 2018 to 2020 the impression of deep fakes changed from dangerous to meme. The MyHeritage platform likewise lets people upload old photos and bring them to life. In my view, the ethical and social problems of deep-fake technology concern data collection and how the technology is used as a tool.

Deep fakes, based on GANs (generative adversarial networks), are algorithms that take images and sound as input and perform face manipulation: they map one person’s facial contours and expressions onto another specific person’s face and, combined with realistic processing of the voice, create a synthetic but seemingly real video.
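The adversarial setup can be sketched structurally with a toy illustration (both “networks” below are invented stand-in functions, not a real deep-fake model):

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(noise, shift=2.0):
    # stands in for a deep network that synthesizes a face from noise
    return noise + shift

def discriminator(x):
    # stands in for a deep network; returns P(sample is real)
    return 1.0 / (1.0 + np.exp(-x))

real = rng.normal(4.0, 1.0, size=64)   # "real faces"
fake = generator(rng.normal(size=64))  # "synthesized faces"

# Discriminator loss: reward D(real) -> 1 and D(fake) -> 0.
d_loss = -np.mean(np.log(discriminator(real)) +
                  np.log(1.0 - discriminator(fake)))
# Generator loss: reward D(fake) -> 1, i.e. fooling the discriminator.
g_loss = -np.mean(np.log(discriminator(fake)))

print(d_loss > 0 and g_loss > 0)  # → True: both sides have room to improve
```

In a real GAN, both functions are deep networks and the two losses are minimized alternately by gradient descent; it is this arms race that gradually makes the synthesized faces indistinguishable from real ones.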

The first ethical problem concerns data collection. Deep fakes may not have a data bias problem in my view, since their goal is simply to replace one face with another; the model might learn some “dangerous” patterns of race or gender, but we cannot detect them, and they would not lead to biased output, at least in my opinion. But what about the many photos used without consent during training? I think that, for deep fakes, the data collection does not infringe personal information and has little effect on any single individual: “The risk of any harm does not increase by more than a little bit as the result of the use of any individual’s data” (Kearns & Roth, 2020). But whether the benefit outweighs the sum of the costs to all individuals, and whether the distribution of benefits is fair, depends on how the deep fake is used.

When deep fakes are used in journalism, it seems that Pandora’s box has been opened. From a computer science perspective, we still have methods to determine whether a video uses deep-fake technology to generate “fake” faces, since such a video is not created from nothing but requires a large amount of audio and video data from a specific person to extract features and patterns. But for communication and journalism, the point is not how well deep fakes can perform, only that they have the ability at all. Visual texts were originally the most powerful evidence for constructing truth; deep fakes replace them with different or even opposite content and meanings, resulting in the self-subversion of the visual text. In other words, deep fakes overturn the notion that seeing is believing. I am concerned, even scared, that because of this overturn, people might only be willing to believe what they want to believe, dismissing videos that contradict their own views as the output of deep fakes. As Danielle Citron said, “When nothing is true then the dishonest person will thrive by saying what’s true is fake” (You Thought Fake News Was Bad?, 2018).



Atlantic Re:think. (2018, June 30). HEWLETT PACKARD ENTERPRISE – Moral Code: The Ethics of AI.

Bloomberg Quicktake. (2018, September 27). It’s Getting Harder to Spot a Deep Fake Video.

Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review.

Kearns, M., & Roth, A. (2020). The ethical algorithm: The science of socially aware algorithm design. Oxford University Press.

This is how AI bias really happens—And why it’s so hard to fix. (n.d.). MIT Technology Review. Retrieved March 20, 2021.

You thought fake news was bad? Deep fakes are where truth goes to die. (2018, November 12). The Guardian.

AI/ML Critique - Chirin Dirani

In the last five decades, the major human-made threats that were widely discussed were the population explosion, environmental pollution, and the threat of nuclear war. Today, due to the revolutionary development of computer systems in general, and of machine learning (ML) as part of artificial intelligence (AI) in particular, the threat has shifted to another human-made danger. This new danger stems from the training process of general-purpose AI algorithms (strong AI). These algorithms pick up large amounts of information from stored data and can learn faster than humans (reinforcement learning). Their outcomes are "invisible technologies that affect privacy, security, surveillance, and human agency." When it comes to the future of AI/ML, there is an ongoing debate about the impact of these systems on humanity. Some people lean toward the benefits that will come from AI/ML applications in areas such as education, health care, and human development. Others focus on the risks and the destructive power of AI/ML. In this critique, I will discuss some pressing ethical issues surrounding AI/ML today, for two main reasons. First, to highlight these concerns and think of remedies before they become alarming problems. Second, as a humble effort to participate in democratizing and decentralizing knowledge about AI/ML, and to reinforce human agency, moving us from being impacted by AI to shaping how it should be used for our good.

Nowadays, AI/ML influences many aspects of our lives and many of the decisions we make. Our world has become more dependent on AI/ML technologies. For example, these technologies control our smart communication devices, home devices, televisions, businesses, and even governmental entities. The more influential AI/ML becomes, the more effective ethical, social, and governmental intervention should be. While reading the rich materials for this class, two main concerns about AI/ML stood out to me: bias and lack of oversight. In the following paragraphs, I will shed light on these concerns in some detail.


According to AI: Training Data & Bias, the better the data we feed into machine learning training, the higher-quality outputs we get. Similarly, the more biased the input data is, the more biased the trained outputs will be. To demonstrate this point briefly: computer systems collect training data from many sources, then deep learning algorithms train on these data by recognizing patterns using large numbers of filters (deep learning). In every single action we take while using AI/ML technologies, we as humans provide an endless amount of training data every day that helps machine learning predict. The problem lies in the kind of data we feed in and the filters used to train on it. If the input data are biased, the system's predictions and outputs will inevitably inherit those biases and favor or disfavor some things over others. Worse still, when those who feed in the data are unaware of their own biases, the system learns from the biased data and stores it as a source for future predictions, and here lies the biggest problem. My question is: how can we control these inherent biases in the data, and how can we correct them during the training process, or detect and remove them from the system? I would like to share a personal experience as an example. The alphabet of my language, Arabic, does not contain the letter P. Although I have trained myself to pronounce this letter as accurately as possible, every time I speak to a chatbot and try to spell out a word containing the letter P, the machine asks me over and over whether I meant the letter B.
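The bias-in, bias-out mechanism described above can be sketched in a few lines of Python. This is a toy illustration of my own (not code from AI: Training Data & Bias or any other cited source): a trivial "model" that memorizes the majority label for each input will faithfully reproduce whatever bias its training data contains.

```python
# Toy sketch: a "model" that predicts the most common label it has seen
# for each input value. The data below is invented for illustration.
from collections import Counter, defaultdict

def train(examples):
    """Learn the majority label for each feature value."""
    seen = defaultdict(Counter)
    for feature, label in examples:
        seen[feature][label] += 1
    return {f: counts.most_common(1)[0][0] for f, counts in seen.items()}

# Biased training data: hiring decisions that historically correlated
# with a protected group attribute rather than with qualifications.
biased_data = [
    ("group_a", "hire"), ("group_a", "hire"), ("group_a", "reject"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "hire"),
]

model = train(biased_data)
print(model)  # {'group_a': 'hire', 'group_b': 'reject'}
```

The model never sees qualifications at all; it simply echoes the historical correlation between group membership and past decisions, which is exactly how biased input data becomes biased prediction.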

Lack of oversight

As mentioned in the Ethics & AI: Privacy & the Future of Work video, there is a huge gap between "those [who] are involved in creating the computing systems, and those that are impacted by these systems." What matters here is what society (creators and the impacted alike) wants to shape, and how technology should be used to achieve that target. The answer to this question may not be easy, but logically, I would suggest giving more agency to impacted people by offering them, their community-group representatives, and policymakers the opportunity to get more involved in evaluating and auditing the decisions made by creators of AI/ML technologies. By getting involved, users impacted by these technologies become more knowledgeable about the process and can make sure that innovations are ethical, inclusive, and useful to everyone in society. In other words, and similar to organizational social responsibility departments, I would advocate replacing vague, hard-to-implement Responsible AI plans with regulations mandating a Responsible AI department in every organization active in the field of AI/ML, whether it strives for profit or for power.


  1. AI: Training Data & Bias
  2. Ethics & AI: Equal Access and Algorithmic Bias
  3. Janna Anderson, “Artificial Intelligence and the Future of Humans,” PEW Research Center, 10 December 2018, visited 20 March 2021.
  4. Karen Hao, “In 2020, Let’s Stop AI Ethics-washing and Actually Do Something,” MIT Technology Review, 27 December 2019, visited 20 March 2021.
  5. “Responsible AI,” Google, visited 20 March 2021.


Ethics for AI

In my opinion, the biggest issue is data abuse. Pew Research defines it as data use and surveillance in complex systems designed for profit by companies or for power by governments. Below, I will examine how both the private and public realms exploit data, through my interpretations of certain cases.

In the private realm, my main concern is the ability of companies to create algorithms that perpetuate filter bubbles and echo chambers, thereby increasing polarization. These algorithms take the judgment out of information and replace it with predictive analysis of what a person would like to read, listen to, or watch, reinforcing the user's opinions and thereby maintaining their attention. These big data companies, notably coined the "frightful five," have the power to influence and control the platforms used today for public discourse, and in doing so they collect vast amounts of information from which they profit: "They are benefitting from the claim to empower people, even though they are centralizing power on an unprecedented scale." These companies must be held accountable for their role in sowing discord and regulated to prevent their accumulation of unfettered power. Looking specifically at Facebook's "Ad Preferences," we can examine this problem more thoroughly. Facebook categorizes and identifies users through their interactions on Facebook, enhanced observation through Facebook's Pixel application, and the ability to monitor users offline. With these inferences, Facebook uses its deep learning models to label people for specific targeting purposes. This effort to curate advertisements and clickbait is an alarming invasion of privacy with which 51% of users are not comfortable, yet it is still being done. What regulations can we make to impose transparency on big tech's use of data? Should we break up big tech? Should big tech be liable for the content on its platforms?

In the public realm, what I view as the most alarming use of AI is the ability of governments to create a surveillance state in which constant monitoring, predictive analysis, and censorship hinder the freedom of human agency. We see it developing in China with the social credit system and the export of "safe cities" to developing nations across the world. The Chinese extreme may not become a reality in America because of differing ideals, but that does not mean the government will not use AI in some form to secretly monitor citizens, or at the very least to violate privacy rights. Looking at Hao's and Solon's articles, we see how data collection for face recognition slid down a slippery slope. Organizations are downloading images without users' consent, collecting and hoarding them, often without proper labeling, for unimaginable future uses, namely surveillance. Critics rightly argue against the legality of this collection and its subsequent distribution to law enforcement agencies, which exacerbates "historical and existing bias" and harms communities that are already "over-policed and over-surveilled." What regulations should be imposed to prevent the exploitation of biometrics? Can we retroactively delete our images from these databases, as mentioned regarding IBM? Or will our biometrics be stored forever? What laws can we make to regulate cooperation between companies and governments over the collection of our data for their own purposes?

Data abuse persists because of an uneducated public and insubstantial regulatory laws. This façade of AI ethics is just that: a façade, a temporary band-aid on a growing problem. Governments, supported by companies, need to create epistemic communities to foster discourse on standards and norms. The first step is creating a shared language of AI that can facilitate discussions between politicians and the tech industry and educate the public. After reaching a consensus on definitions and terms, norms can be established. These norms do not require new ideas; rather, they can build on the framework of bioethics principles "plus one" that Floridi and Cowls argued for: beneficence, non-maleficence, autonomy, and justice, plus explicability. Creating these norms unifies attitudes toward the development of AI so that it can be controlled and understood. From these norms we then need to establish laws that limit the acquisition of data without users' authorization, require notification when algorithms perpetuate inherent biases, provide concise but understandable explanations of what algorithms are doing, and establish watchdogs against the exploitation of AI to do harm. This is just the regulatory side of AI; to approach the scientific community would be to demand that AI understand what values and ethics are and implement them in its choices, a feat that scientists themselves struggle with. So, my question today is this: knowing that AI is inherently flawed because it lacks emotional intelligence, what tasks should we prevent it from doing? Or rather, what tasks should we prevent it from being the sole decision maker in?


Floridi, Luciano, and Josh Cowls. 2019. “A Unified Framework of Five Principles for AI in Society.” Harvard Data Science Review 1 (1).
Solon, Olivia. n.d. “Facial Recognition’s ‘Dirty Little Secret’: Social Media Photos Used without Consent.” NBC News. Accessed March 19, 2021.
Hao, Karen. 2021. “This Is How We Lost Control of Our Faces.” MIT Technology Review. Accessed March 19, 2021.
Nemitz, Paul. 2018. “Constitutional Democracy and Technology in the Age of Artificial Intelligence.” Philosophical Transactions of the Royal Society A. Accessed March 19, 2021.
Pew Research Center. 2018. “Artificial Intelligence and the Future of Humans.” Pew Research Center: Internet, Science & Tech (blog). December 10, 2018.
———. 2019a. “Facebook Algorithms and Personal Data.” Pew Research Center: Internet, Science & Tech (blog). January 16, 2019.
———. 2019b. “The Challenges of Using Machine Learning to Identify Gender in Images.” Pew Research Center: Internet, Science & Tech (blog). September 5, 2019.
“The Ethical Character of Algorithms—and What It Means for Fairness, the Character of Decision-Making, and the Future of News.” n.d. The Ethical Machine. Accessed March 19, 2021.