In my opinion, the biggest issue is data abuse. Pew Research defines it as data use and surveillance in complex systems designed for profit by companies or for power by governments. Below, I examine how both the private and public realms exploit data through my interpretation of several cases.
In the private realm, my main concern is the ability of companies to create algorithms that perpetuate filter bubbles and echo chambers, thereby increasing polarization. These algorithms take the judgment out of information and replace it with predictive analysis of what a person would like to read, listen to, or watch, reinforcing the user's opinions and thereby maintaining their attention. These big data companies, notably dubbed the "frightful five," have the power to influence and control the platforms used today for public discourse, and in doing so they collect vast amounts of information that they profit from: "They are benefitting from the claim to empower people, even though they are centralizing power on an unprecedented scale." These companies must be held accountable for their role in sowing discord and be regulated to prevent their accumulation of unfettered power. Looking specifically at Facebook's "Ad Preferences," we can examine this problem more thoroughly. Facebook categorizes and identifies users through their interactions on Facebook, enhanced observation through the Facebook Pixel tracking tool, and the ability to monitor users offline. With these inferences, Facebook uses its deep learning models to label people for specific targeting purposes. This effort to curate advertisements and clickbait is an alarming invasion of privacy that 51% of users say they are not comfortable with, yet it is still being done. What regulations can we make to impose transparency in big tech's use of data? Should we break up big tech? Should big tech be liable for the content on their platforms?
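To make the filter-bubble mechanism concrete, here is a minimal sketch of engagement-driven ranking, assuming a hypothetical feed where items are tagged with topics: content most similar to what a user has already engaged with is scored highest, so the feed keeps reinforcing existing preferences. The topic labels, data, and scoring rule are my own illustration, not any platform's actual algorithm.

```python
# Hypothetical sketch of engagement-driven ranking: items that overlap most with a
# user's past engagement are ranked first, with no judgment of accuracy, diversity,
# or civic value. All topic labels and data below are illustrative.

from collections import Counter

def build_interest_profile(engagement_history):
    """Count how often the user engaged with each topic."""
    return Counter(topic for item in engagement_history for topic in item["topics"])

def rank_feed(candidates, profile):
    """Score each candidate purely by overlap with past interests, then sort descending."""
    def score(item):
        return sum(profile.get(topic, 0) for topic in item["topics"])
    return sorted(candidates, key=score, reverse=True)

history = [
    {"id": 1, "topics": ["politics_left", "climate"]},
    {"id": 2, "topics": ["politics_left"]},
]
candidates = [
    {"id": 3, "topics": ["politics_left", "climate"]},   # reinforcing item
    {"id": 4, "topics": ["politics_right", "economy"]},  # challenging item
    {"id": 5, "topics": ["sports"]},                     # neutral item
]

profile = build_interest_profile(history)
for item in rank_feed(candidates, profile):
    print(item["id"], item["topics"])
# The reinforcing item always outranks the challenging one, narrowing exposure over time.
```

Even this toy ranker shows how optimizing only for predicted engagement systematically pushes challenging viewpoints out of view.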
In the public realm, the ability of governments to create a surveillance state in which constant monitoring, predictive analysis, and censorship hinder the freedom of human agency is what I view as the most alarming use of AI. We see it developing in China with the social credit system and the export of "safe cities" to developing nations across the world. The extreme of China may not become a reality in America because of differing ideals, but that does not mean the government will not use AI in some form to secretly monitor citizens or, at the very least, violate privacy rights. Looking at Hao and Solon's article, we see how data collection for face recognition fell down a slippery slope. Organizations are downloading images without users' consent, collecting and hoarding them, often without proper labeling, for unforeseen future uses, namely surveillance. Critics rightly argue against the legality of this collection and its subsequent distribution to law enforcement agencies, which exacerbates "historical and existing bias" and harms communities that are already "over-policed and over-surveilled." What regulations should be imposed to prevent the exploitation of biometrics? Can we retroactively delete our images from these databases, as the article mentions regarding IBM, or will our biometrics be stored forever? What laws can we make to regulate cooperation between companies and governments over the collection of our data for their own purposes?
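One way to see how such systems can exacerbate existing bias is a simple per-group error audit. The sketch below, using entirely invented results for two hypothetical demographic groups, shows how a face-recognition system with acceptable overall accuracy can still concentrate false matches on one community, the kind of disparity that falls hardest on people who are already over-policed.

```python
# Hypothetical audit sketch: compute the false-match rate of a face-recognition
# system per demographic group. Groups and outcomes are invented for illustration;
# the point is that aggregate accuracy can hide large per-group disparities.

results = [
    # (group, is_actually_the_same_person, system_declared_a_match)
    ("group_a", False, False), ("group_a", False, False), ("group_a", False, True),
    ("group_a", True,  True),  ("group_a", False, False), ("group_a", False, False),
    ("group_b", False, True),  ("group_b", False, True),  ("group_b", False, False),
    ("group_b", True,  True),  ("group_b", False, True),  ("group_b", False, False),
]

def false_match_rate(rows):
    non_matches = [r for r in rows if not r[1]]      # pairs that are not the same person
    false_alarms = [r for r in non_matches if r[2]]  # the system still declared a match
    return len(false_alarms) / len(non_matches)

for group in ("group_a", "group_b"):
    rows = [r for r in results if r[0] == group]
    print(group, f"false-match rate = {false_match_rate(rows):.0%}")
# group_a false-match rate = 20%
# group_b false-match rate = 60%
```

When such a system is handed to law enforcement without this kind of audit, the higher false-match rate translates directly into more wrongful stops for one group.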
The root of data abuse lies with an uneducated public and insubstantial regulatory laws. This façade of AI ethics is just that: a façade, a temporary band-aid on a growing problem. Governments, supported by companies, need to create epistemic communities to foster discourse on standards and norms. The first step is creating a shared language of AI that can facilitate discussion between politicians and tech companies and educate the public. After coming to a consensus on definitions and terms, norms can be established. These norms do not require new ideas; rather, they can build on the framework of the bioethics principles plus one that Floridi and Cowls argued for: beneficence, non-maleficence, autonomy, justice, and explicability. Creating these norms unifies attitudes toward a development of AI that can be controlled and understood. From these norms we then need to establish laws that limit the acquisition of data without authorization from users, require notifications when algorithms are perpetuating inherent biases (a sketch of what such a check could look like follows below), provide concise but understandable explanations of what algorithms are doing, and establish watchdogs against the exploitation of AI to cause harm. This is just the regulatory side of AI; to approach the scientific community would be to demand that AI understand what values and ethics are and implement them in its choices, a feat that scientists themselves struggle with. So my question today is this: knowing that AI is inherently flawed because it lacks emotional intelligence, what tasks should we prevent it from doing? Or rather, what tasks should we prevent it from being the sole decision maker in?
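As a rough illustration of the proposed bias-notification requirement, here is a sketch of an automated check that compares an algorithm's positive-decision rates across groups and raises a notification when the ratio falls below the commonly cited four-fifths threshold. The threshold, data, and function names are my own assumptions, not an existing regulatory standard or library.

```python
# Hypothetical sketch of an automated bias-notification check: compare selection
# rates across groups and flag a notification when the disparate impact ratio
# drops below the four-fifths (0.8) rule of thumb. Data and threshold are illustrative.

def selection_rate(decisions, group):
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

def bias_notification(decisions, groups, threshold=0.8):
    rates = {g: selection_rate(decisions, g) for g in groups}
    ratio = min(rates.values()) / max(rates.values())
    if ratio < threshold:
        return f"NOTIFY: disparate impact ratio {ratio:.2f} (rates: {rates})"
    return f"OK: disparate impact ratio {ratio:.2f}"

decisions = (
    [{"group": "a", "approved": True}] * 60 + [{"group": "a", "approved": False}] * 40 +
    [{"group": "b", "approved": True}] * 30 + [{"group": "b", "approved": False}] * 70
)
print(bias_notification(decisions, ["a", "b"]))
# NOTIFY: disparate impact ratio 0.50 (rates: {'a': 0.6, 'b': 0.3})
```

A check this simple cannot settle whether a disparity is justified, which is exactly why the law would pair it with human watchdogs rather than let the algorithm police itself.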
References: