Over the last five decades, the most widely discussed human-made threats were the population explosion, environmental pollution, and nuclear war. Today, owing to the revolutionary development of computer systems in general, and of machine learning (ML) as a branch of artificial intelligence (AI) in particular, the threat has shifted to another human-made danger. This new danger stems from the training of general-purpose AI algorithms (strong AI). These algorithms ingest large amounts of stored data and, through approaches such as reinforcement learning, can learn certain tasks faster than humans. The outcomes of these algorithms are “invisible technologies that affect privacy, security, surveillance, and human agency.” When it comes to the future of AI/ML, there is an ongoing debate about its impact on humanity. Some lean toward the benefits that AI/ML applications can bring to education, health care, and human development. Others focus on the risks and destructive power of AI/ML. In light of these critiques, I will discuss some of the urgent ethical issues surrounding AI/ML today, for two main reasons: first, to highlight these concerns and think about remedies before they become alarming problems; second, as a humble effort to help democratize and decentralize knowledge about AI/ML and to move human agency from being impacted by AI to shaping how AI is used for our good.
Nowadays, AI/ML influences many aspects of our lives and many of the decisions we make. Our world has become ever more dependent on AI/ML technologies: they drive our smart communication devices, home appliances, televisions, businesses, and even governmental entities. The more influential AI/ML becomes, the more effective our ethical, social, and governmental interventions must be. While reading the rich materials for this class, two main concerns about AI/ML stood out to me: bias and lack of oversight. In the following paragraphs, I will shed some light on each.
Bias
According to the AI: Training Data & Bias video, the better the data we feed into machine learning training, the higher the quality of the outputs we get. Similarly, the more biased the input data, the more biased the trained outputs. To demonstrate this point briefly: computer systems collect training data from many sources, and deep learning algorithms then learn from these data by recognizing patterns through large numbers of layered filters (hence “deep” learning). With every action we take while using AI/ML technologies, we humans supply an endless stream of training data that machine learning models use to predict. The problem lies in the kind of data we feed in and in the filters used to train on them. If the input data are biased, the system’s predictions and outputs will inevitably inherit those biases, favoring some things and disfavoring others. Worse still, when those who supply the data are unaware of their own biases, the system learns from the biased data and stores it as a source for future predictions, and here lies the biggest problem. My question is: how can we control these inherent biases in the data, and how can we correct them during the training process, or detect and remove them from the system? I would like to share a personal experience as an example. The alphabet of my language, Arabic, doesn’t contain the letter P. Although I have trained myself to pronounce this letter as accurately as possible, every time I speak to a chatbot and try to spell out a word that contains the letter P, the machine asks me over and over whether I meant the letter B.
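To make the mechanism concrete, below is a minimal sketch of how underrepresentation in the training data can translate directly into worse predictions for the underrepresented group. It is my own illustration, not drawn from the cited materials: the data are synthetic, the group sizes and shifts are arbitrary assumptions, and the classifier is scikit-learn’s LogisticRegression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic two-feature data. Each group's features are centred at
    # `shift`, and its true decision boundary sits at that centre, so a
    # boundary learned from one group does not transfer to the other.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=3.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on fresh, equally sized samples from each group.
Xa_t, ya_t = make_group(500, shift=0.0)
Xb_t, yb_t = make_group(500, shift=3.0)
print("accuracy on majority group A:", model.score(Xa_t, ya_t))
print("accuracy on minority group B:", model.score(Xb_t, yb_t))
```

On a typical run, the model scores near-perfectly on the majority group but close to chance on the minority group. Nothing in the algorithm is malicious; the skew in the training data alone produces the skewed behavior, much as a speech model trained mostly on native English speakers keeps hearing my P as a B.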
Lack of Oversight
As mentioned in the Ethics & AI: Privacy & the Future of Work video, there is a huge gap between “those who are involved in creating the computing systems and those who are impacted by these systems.” What matters here is what society (creators and impacted alike) wants to shape, and how technology should be used to achieve that goal. The answer may not be easy, but logically, I would suggest giving more agency to the people who are most impacted: letting them, and their representatives among community groups and policymakers, get more involved in evaluating and auditing the decisions made by the creators of AI/ML technologies. By getting involved, the users affected by these technologies become more knowledgeable about the process and can make sure that innovations are ethical, inclusive, and useful to everyone in society. In other words, and by analogy with corporate social responsibility departments, I would advocate replacing vague and hard-to-implement Responsible AI plans with regulation that mandates a Responsible AI department in every organization active in the field of AI/ML, whether it strives for profit or for power. A sketch of what such an audit might compute follows below.
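As one concrete, if modest, example of what “auditing” could mean in practice, here is a hypothetical sketch of a check an external reviewer could run over a log of a model’s decisions: it compares the rate of favorable outcomes across groups. Everything here is my own assumption for illustration; the function names and the example log are invented, and the 0.8 threshold borrows the common “four-fifths” rule of thumb from US employment-discrimination practice rather than anything in the cited sources.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs, approved is 0 or 1."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    # Ratio of the lowest group's approval rate to the highest group's.
    # Values well below 1.0 suggest the model favors one group.
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit log of a model's loan decisions.
log = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 45 + [("B", 0)] * 55

ratio, rates = disparate_impact_ratio(log)
print("approval rates:", rates)          # A: 0.80, B: 0.45
print("disparate impact ratio:", ratio)  # 0.5625, below the 0.8 rule of thumb
```

A ratio well below 1.0 does not prove wrongdoing, but it flags a disparity that the organization’s Responsible AI department, or an outside auditor, should have to explain.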
References
- “AI: Training Data & Bias” (video).
- “Ethics & AI: Equal Access and Algorithmic Bias” (video).
- “Ethics & AI: Privacy & the Future of Work” (video).
- Janna Anderson, “Artificial Intelligence and the Future of Humans,” Pew Research Center, 10 December 2018, visited 20 March 2021.
- Karen Hao, “In 2020, Let’s Stop AI Ethics-washing and Actually Do Something,” MIT Technology Review, 27 December 2019, visited 20 March 2021.
- “Responsible AI,” Google, visited 20 March 2021.