Laziness and Magic

Probably the most important ideological issues I have noticed with the advancement of AI/ML applications are the lack of accountability and the deep-seated nature of the problems these systems are trying to tackle.

To begin, companies are money-hungry and employees are a mixture of lazy and eager to please, so we allow shortcuts to be taken and questionable data to be used to train and test our systems. While early data collection efforts were extremely cautious and used photoshoots with consenting individuals, time, money, and a lack of diversity became issues. So, employees began scraping the web, using images of faces from websites like Flickr (where many photos are registered under a Creative Commons license) to build huge datasets of faces they could train on. This is where the issues begin. By 2007, researchers had begun downloading images directly from Google, Flickr, and Yahoo without concern for consent. “They found that researchers, driven by the exploding data requirements of deep learning, gradually abandoned asking for people’s consent. This has led more and more of people’s personal photos to be incorporated into systems of surveillance without their knowledge…People were extremely cautious about collecting, documenting, and verifying face data in the early days, says Raji. ‘Now we don’t care anymore. All of that has been abandoned,’ she says. ‘You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control’” (Solon). For a person doing potentially revolutionary work to say they ‘don’t care’ clearly suggests that they are not held accountable for their actions and that there are no rules in place to do so. If the people creating the databases say they ‘can’t even pretend they have control,’ should we be rethinking the processes we are defining?

Once a model is trained on the data, biases usually show up: “There are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices” (AI Bias). Truthfully, as long as humans are developing AI/ML technologies, I do not think there will be tech that is fully free from human bias. Maybe with developer teams that are diverse enough we can mitigate the issue, but to say a human-created technology will be free from humanity’s imperfections seems like a lofty goal. Similar to the lack of accountability shown when collecting data, it seems that no one is responsible for the outputs of AI/ML applications: “really strange phenomena start appearing, like auto-generated labels that include offensive terminology” (AI Ethics-Washing). How can anything that humans have created with a specific goal in mind contain “strange phenomena”? Computers do not develop a brain of their own during the creation of these applications and decide to be offensive; rather, it is the humans building these applications who produce the offensive labels.
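To make the first failure mode concrete, here is a minimal, purely illustrative sketch (my own toy example with synthetic data, not taken from any of the cited articles): when one group is barely represented in the training data, an otherwise ordinary classifier tends to fit the majority group and perform noticeably worse on the under-represented group.

```python
# Toy illustration (synthetic data, my own assumption, not from any cited source):
# when one group is badly under-represented in the training data, a standard
# classifier fits the majority group and performs worse on the minority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; each group's labels follow a slightly different rule.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely present.
X_a, y_a = make_group(5000, shift=0.0)
X_b, y_b = make_group(50, shift=1.5)
X_train = np.vstack([X_a, X_b])
y_train = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X_train, y_train)

# Balanced, unseen test samples from each group.
X_a_test, y_a_test = make_group(1000, shift=0.0)
X_b_test, y_b_test = make_group(1000, shift=1.5)
print("accuracy on well-represented group A:", model.score(X_a_test, y_a_test))
print("accuracy on under-represented group B:", model.score(X_b_test, y_b_test))
```

In runs of this sketch, accuracy on group B tends to come out well below group A’s, which is exactly the “unrepresentative of reality” problem described above: the model never had enough examples of group B to learn its pattern.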

Beyond the imperfections that come with humans being involved in AI/ML technology, our designs are also based on the culture we are a part of, which differs not only from region to region but also over time. For example, take the study in which people were asked about “moral decisions that should be followed by self-driving cars. They asked millions of people from around the world to weigh in on variations of the classic ‘trolley problem’ by choosing who a car should try to prioritize in an accident. The results show huge variation across different cultures.” All humans have their own experiences: different upbringings, traumas, family histories, health conditions. To say we can all universally agree on the difficult decisions AI/ML applications should make is impossible. Even if we could, it is likely that 5 or 20 years down the road that agreement would no longer hold.

In my opinion, stricter rules and regulations need to be placed on employees of large tech companies. Employees of pharmaceutical companies work with many patients’ personal data and obtain consent from patients to enroll in clinical trials and share their data with the company. These employees must adhere to extremely strict guidelines set forth by the FDA and HIPAA, and employees of tech companies should have to do the same. “A recent study out of North Carolina State University also found that asking software engineers to read a code of ethics does nothing to change their behavior.” Much like the Terms & Conditions, no one reads a code of ethics and almost no one cares about its contents. Ethics guidelines and regulations with real stakes need to be enforced. If we are not enforcing them at the employee/developer level, how can we expect users to use these applications ethically?

 

Solon, Olivia. “Facial Recognition’s ‘Dirty Little Secret’: Social Media Photos Used without Consent.” NBC News. Accessed March 19, 2021. https://www.nbcnews.com/tech/internet/facial-recognition-s-dirty-little-secret-millions-online-photos-scraped-n981921.
MIT Technology Review. “In 2020, Let’s Stop AI Ethics-Washing and Actually Do Something.” Accessed March 20, 2021. https://www.technologyreview.com/2019/12/27/57/ai-ethics-washing-time-to-act/.
MIT Technology Review. “This is how AI bias really happens—and why it’s so hard to fix.” Accessed March 20, 2021. https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/.