Category Archives: Week 12

How to Contribute to a Better Future

Before this course, artificial intelligence was, in my mind, an untouchable, mysterious domain that ordinary people without a technological background could not get involved in. Even some of my friends studying data science or information systems don’t know the rules behind the computer or the algorithm. We all know technology is dominant and that AI will change the world, but we don’t know how it will change our world and lives. We absorb information from media stories, so we place AI in an overstated position. Individuals don’t see how those things work or the principles behind them. They choose to understand AI through misleading terms, so some are afraid of emerging technologies like DL and ML. The reasons those terms circulate are complicated: big companies may intend to maintain their monopolies, media agencies may be trying to profit from them, or people may simply have fantasies about the future and artificial intelligence. In reality, however, we are still in the weak AI phase, far from the strong AI performed in movies. Through this course, we learned how to de-blackbox AI and convey the domain’s concepts in an accessible way.

According to Dr. Irvine, we de-blackbox AI from four perspectives: ‘the systems view,’ ‘the design view,’ ‘the semiotic systems view,’ and ‘the ethics and policy view.’ From those four perspectives, we studied the software, the hardware, programming systems, semiotic systems like Unicode, and why things are presented in specific ways. Furthermore, by considering ‘the dependencies for socio-technical systems’ along with ethics and policies, we asked how these techniques should be regulated to fit human society.

In this week’s readings, we can see both political and academic institutions making efforts to avoid the pessimistic predictions about AI. For example, the EU has promoted the General Data Protection Regulation (GDPR) and has issued seven principles to regulate future AI. Also, universities are studying how to get better predictions from AI algorithms by adjusting their parameters.

However, I believe that the effects of AI depend on human society’s rules. As shown in MIT Media Lab research and the documentary Coded Bias, the high error rate of facial recognition for people of color may intensify bias against minorities. Moreover, in another case, the algorithm itself may be racist: the health-care costs of different races can differ markedly. The former case contributes to inequities, and those inequalities feed the latter one. That easily traps us in a vicious circle and leads to more severe social injustice. With AI or without AI, predictions will be full of biases; after all, ML is based on human knowledge and incidents. I certainly expect a better future with more intelligent AI and believe our efforts will work to some extent. Still, I also think that what we should do to improve the quality of programs’ results is to improve our society first.

References

Brandom, R. (2018, May 25). Everything you need to know about GDPR. The Verge.

Gelman, A. (2019, April 3). From Overconfidence in Research to Over Certainty in Policy Analysis: Can We Escape the Cycle of Hype and Disappointment? The Ethical Machine.

Irvine, M. (n.d.). CCTP-607: Leading Ideas in Technology: AI to the Cloud. Retrieved April 17, 2021, from https://drive.google.com/file/d/1Hk8gLXcgY0G2DyhSRHL5fPQn2Z089akQ/view

Lipton, Z. C., & Steinhardt, J. (2018). Troubling Trends in Machine Learning Scholarship. ArXiv:1807.03341 [Cs, Stat]. http://arxiv.org/abs/1807.03341

Vincent, J. (2018, July 26). The tech industry doesn’t have a plan for dealing with bias in facial recognition. The Verge. https://www.theverge.com/2018/7/26/17616290/facial-recognition-ai-bias-benchmark-test

Vincent, J. (2019, April 8). AI systems should be accountable, explainable, and unbiased, says EU. The Verge. https://www.theverge.com/2019/4/8/18300149/eu-artificial-intelligence-ai-ethical-guidelines-recommendations

Issues associated with terminology in AI/ML

As we have discussed in previous modules, terminology (or buzzwords) has a tendency to blackbox new technologies that would otherwise be comprehensible. Occasionally, word choice can do far worse than complicate technological concepts. A recent fatal Tesla vehicle crash resulted in two deaths, and authorities believed the car was driverless (Wong, 2021). Tesla, and in particular Elon Musk, has faced multiple controversies since Tesla’s rise to prominence as an EV producer. The problematic term “Autopilot” in Tesla vehicles raises issues of misleading marketing language and the detriments of uninformed consumers (Leggett, 2021).

Tesla dissolved its PR department and seemingly uses only Musk’s tweets to push out information (Morris, 2020). This is problematic in that it removes a human element from the sale of its cars (which is also dealer-free). This raises the questions: are consumers uninformed about the autonomous capability of Tesla vehicles? Is labeling the driver-assistance feature “Autopilot” problematic? The sensationalism around Tesla may well lead to overzealous use of its products and can damage the further development of autonomous vehicles. It is difficult to argue that Elon Musk is not savvy in regard to marketing, but marketing is not necessarily synonymous with public relations.

Course Questions

The United States is falling behind on regulation in multiple fields of technology and science; Section 230 of the CDA (Communications Decency Act) is an example. By contrast, the GDPR (General Data Protection Regulation) protects European citizens, yet it has not prevented social media companies like Facebook and Instagram from operating there (GDPR Archives, 2021). Regulation does not need to carry the negative stigma of stifling innovation or economic growth. Several countries in Europe, including Germany and the UK, have already made regulations concerning the use of the term “autopilot” in relation to Tesla vehicles. So why is this sort of misleading language overlooked by US policymakers when multiple incidents have occurred (Shepardson, 2021)? A question that has persisted for me throughout the course is: are policymakers out of touch with rapidly evolving technology? What is considered too much regulation, and when is there not enough?

References

GDPR Archives. (n.d.). GDPR.eu. Retrieved April 19, 2021, from https://gdpr.eu/tag/gdpr/

Leggett, T. (2021, April 19). Two men killed in Tesla car crash “without driver” in seat. BBC News. https://www.bbc.com/news/technology-56799749

Morris, J. (2020, October 10). Has Tesla really fired its PR department? And does it matter? Forbes. https://www.forbes.com/sites/jamesmorris/2020/10/10/has-tesla-really-fired-its-pr-department-and-does-it-matter/

Shepardson, D. (2021, March 18). U.S. safety agency reviewing 23 Tesla crashes, three from recent weeks. Reuters. https://www.reuters.com/article/us-tesla-crash-idUSKBN2BA2ML

Wong, W. (2021, April 19). 2 dead in Tesla crash after car “no one was driving” hits tree, authorities say. NBC News. https://www.nbcnews.com/news/us-news/2-dead-tesla-crash-after-car-no-one-was-driving-n1264470

Synthesis: Inside The Black Box – AI

AI is a great leap forward, but in most ways it is still in its infancy. Like a tool found in our grandfather’s workshop, we may find a use for it, but we don’t fully know how to use it properly. There are many mysteries concerning AI systems and how they work, and until we truly open the black box, we won’t have control over them.

There are two things I want to touch upon: first, how these technologies work; second, the implications of how they work.

AI, like most of my field of psychology, works on statistical probabilities of a phenomenon occurring based on data from the past. This rests on three foundations. First, the understanding that if something has happened before, it will most likely happen again. Second, that the past causes the present and the future to happen. Third, that information is interconnected, creating relationships between the phenomena being measured. This leads to the creation of models to determine how and why phenomena occur in the first place. This is where AI and scientific thought diverge in a lot of ways. AI uses the information fed into it to create models which, for the most part, cannot be understood by those who create them. This means the models being created are focused on the end result rather than the journey to get there. It also means that as long as we believe the AI is working, it doesn’t matter what biases may have shaped the data used to create the model. The data fed into a system directly relates to the results we get out, so if the data set is biased, then the results too are biased. The problem with AI, then, is that there is no oversight of the biases in data, which leads to overconfidence in the validity of the results until someone hurt by these systems speaks up. Ultimately, we need only look to history to see that humans are fraught with bias, and that when systems are used this way, there may be a hundred people suffering in silence for every one who speaks up.
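To make “biased data in, biased results out” concrete, here is a minimal sketch in Python. The loan-approval scenario, groups, and counts are entirely invented for illustration; the point is only that a model that does nothing more than estimate probabilities from past outcomes, as described above, will faithfully reproduce whatever bias those outcomes contain.

```python
# A minimal sketch (hypothetical data) of how a model trained on biased
# historical records simply reproduces the bias it was fed.
from collections import defaultdict

# Hypothetical past loan decisions: (group, approved?) pairs.
# Group "B" was historically approved far less often.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 30 + [("B", False)] * 70

# "Training": estimate P(approved | group) by counting past outcomes.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict_approval(group):
    approvals, total = counts[group]
    return approvals / total  # a probability learned purely from the past

print(predict_approval("A"))  # 0.8 -> the model "prefers" group A
print(predict_approval("B"))  # 0.3 -> historical bias becomes the prediction
```

Nothing in the procedure is malicious; the skew comes entirely from the history it was handed, which is exactly the oversight problem described above.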

The second point is the implication of how these systems work. There is a detachment from the method, an exclusion of the human element, that makes us so confident in the results of the process: a type of no-holds-barred event where, as long as AI gets at the answer, we don’t care about the method. We are just at the tip of the iceberg with AI, with most of its functions still on their way. Though most of them may be helpful and benign, it’s important to understand that AI will be used as a great sorter of things. Just as resources get distributed unevenly, so will the functionality of AI. Choice will become a luxury, and we will be faced with a facsimile of options. For most people that will work, but there will always be those left in the wake of the oncoming wave. The way AI works presupposes a sense of expertise and knowledge of you, but in truth AI has to work just like data works: it has to flatten and categorize into imperfect containers to be able to produce proper results. Just like the machines themselves, we will need to operate within a set of hard-set parameters. Life is messy, and so are people, but making determinations based on these set parameters will further confine those who are at the bottom. What we may improve on in life we may lose in freedoms. AI isn’t all bad, and not everything it does is an existential crisis, but it’s important to have these conversations about AI and its implications before we get there. Given the choice, people may choose to live a life without it.

AI is a mystery and is only getting more mysterious. The future, I guarantee, will at least get more interesting. The more you know, the less you may understand, but learning about AI is important if we are to make choices about our technologies in the future.

Final Reflection on AI/ML, Information, and Data

In recent years, the application of more complex AI has been increasing. However, it is dangerous and unreliable to make decisions based on unexplainable techniques, because people prefer to adopt techniques they can fully trust. A better-interpreted machine can assist them in making more reasonable predictions, correcting the discriminations that exist in the model, and providing a more explainable model. AI shares more concepts with philosophy than any other scientific technique, because it involves more questions of consciousness (McCarthy, 2012). However, AI is, after all, complex algorithms computed by humans; it is more like a representation of intelligence, or a deep explanation of intelligence, but not intelligence itself, and not even close to self-awareness.

To conclude what I have learned in this class in one sentence: data and information are used to feed AI and further develop ML, which outputs more data and information. The fun fact is that humans trained machines so hard that they could replace us in the majority of tasks, leaving us as the boss. However, with the rapid development of AI/ML, less transparency in this technology, and more reliance on it, it will soon become hard to provide actual, reliable evidence for its outputs. We are creating a huge black box that we think we understand, because we feed it data collected by us and we have the ability to analyze the outcome. Humans have become even more confident because we think we fully understand the concepts and the algorithms, since they were built by us. But the problem is: do we? Are the outcomes extracted from AI more accurate and just? Or are we just using so-called absolute accuracy to disguise the truth, simply because the information was formed by a machine?

What I am concerned about is fully presented in the documentary Coded Bias, in which an African American female researcher discovers that facial recognition systems are unable to accurately recognize darker-skinned faces, especially women’s. It is ironic that people are planning to rely entirely on this technique. As Cathy O’Neil says in the film, algorithms use historical data to make predictions about the future. That is even more true when it comes to deep learning. It looks like the future is controlled by the group of people who collect the data and know the code.

Resources:

Artificial Intelligence (Stanford Encyclopedia of Philosophy). (2018, July 12). Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/artificial-intelligence/#PhilAI

Lipton, Z. C., & Steinhardt, J. (2018, July 27). Troubling Trends in Machine Learning Scholarship. ArXiv:1807.03341 [Cs, Stat]. http://arxiv.org/abs/1807.03341

Coded Bias | Trailer | Independent Lens | PBS. (2021, March 22). Independent Lens. https://www.pbs.org/independentlens/videos/coded-bias-full-film/

McCarthy, J. (2012). The Philosophy of AI and the AI of Philosophy. Professor John McCarthy. http://jmc.stanford.edu/articles/aiphil2.html

Week 12 Reflection

As this course nears its end, this week’s reflection looks at the bigger picture. While we’ve discussed an array of topics from ‘Big Data’ to NLP, we have consistently seen deep-rooted issues in our current developments in AI/ML. These issues include failure to distinguish between explanation and speculation; failure to identify the sources of empirical gains; the use of mathematics that obfuscates or impresses rather than clarifies, e.g., by confusing technical and non-technical concepts; and misuse of language, e.g., by choosing terms of art with colloquial connotations or by overloading established technical terms (Lipton). This course has aimed at de-blackboxing, or providing clarity on technologies that seem purposefully explained in a confusing manner. While this week’s readings addressed these issues and possible solutions (like XAI), the topic raises the question: where do we start?

For students studying computer science or similar subjects at the degree level, securing an internship or full-time role at a company working on AI/ML developments is an accomplishment. There is such great competition that new hires most likely would not dare challenge their bosses or company ethically. As time passes, complacency grows, and thankfulness for a paycheck keeps people from speaking up until the damage has already been done. How do we teach students at the university level not only to right the wrongs of current AI/ML development practices, but also to implement better practices at their future places of work?

If, as the readings mention, we utilize some sort of independent governance to ensure sound practice, what incentivizes companies to hire and pay these overseers? Why would they want to be more restricted in the work they do when they have been getting away with more cost-effective, profit-driving techniques? With laws like Section 230 in place, and increasingly irrelevant rulings like NYT v. Sullivan, it is clear that there is very little accountability from the government regarding the influence technology has on individuals’ lives. Without some ruling or law from the government demanding better practice, or immense social pressure (which will not happen any time soon because of the blackboxing of our technologies), it does not seem likely our technologies will ethically improve any time soon.

As we’ve seen from articles like this, it is clear that ethics boards do not seem to have much impact on the workings of large companies. Instead, they are a pretty line in a reporting PowerPoint or a get-out-of-jail-free card. Clearly, people’s hopes for AI/ML becoming more ethical in its development aren’t being realized. So, to round back to my initial question: where do we go from here?

Artificial intelligence (general concepts overview, application, ethics and future concerns)

Continuous interest in the field of Artificial Intelligence (AI) has been the main driver of its constant progress: improvements and qualitative leaps from simple learning, which requires a lot of effort and time, to deep learning and self-learning models that allow the field’s full capabilities to be used for social good. Many factors are needed for socially good AI, such as falsifiability, data protection, situational fairness, and human-friendly semanticisation (Floridi, 2020).

All the capabilities and developments of AI must serve the human process first, in a way that is safe, not harmful to the environment, and beneficial in the long term toward the optimal goal; most importantly, living organisms must be considered carefully when harnessing this field to serve them (Shaping Europe’s digital future, 2019).

In order to exploit artificial intelligence safely, it was necessary to search for the best methods of doing so, especially since human trials must be subject to close scrutiny. It is therefore essential to gather surveys and quantitative analyses from everyone registered in any experimental step, recording all detailed notes, so as to reach a trustworthy experimental model for the practical application of these techniques, with guidelines, ethical controls, and self-assessment processes (Vincent, 2019).

Best AI Applications

Distinctive AI applications are now available in all fields, such as machine translation (Google Translate); big data analysis (deep learning for manipulating large image datasets like Flickr’s and Google’s); decision support systems, especially in the medical field; virtual assistants (like Siri and Alexa, which can be used for multiple purposes such as setting alarms, suggesting a film watch list, reminding us of appointments, querying the weather, and suggesting the best restaurants); educational AI applications; scene understanding and image captioning algorithms used by platforms like Facebook and Twitter; and face and speech recognition applications (Useche, 2019).

Through a set of smart algorithms based on humans’ thinking processes, AI can reach a result similar to what a human would reach when given the same information. All of this falls within the framework of supporting neural networks to provide virtual services that contribute to further sophistication.

Ethical controls for using artificial intelligence (Gelman, 2019):

These ethical controls address a serious problem in applying artificial intelligence techniques to individuals’ lives. AI continues to improve the human condition as a whole, and to achieve this, AI must have barriers that prevent it from restricting human freedom while still delivering accuracy, durability, and security, especially since what is at stake is individuals’ lives, safety, and personal information, which should never be subject to theft or breach of privacy. All of this must fall under the concepts of transparency and ease of use in taking advantage of the important services AI provides (Marcus, 2017). For example, deepfakes are one of the bad uses of AI: they can create fake images and videos of somebody, or even a wholly virtual fake human.

Preventing the technological exploitation of artificial intelligence:

Human advancement is the first thing that any company with profit-oriented goals claims to think about. If AI is to have a significant impact on future technological progress, all of this must be controlled so that it is not purely profit-driven, and it must be subject to control and accountability standards that protect it from everyone who seeks to politicize its work.

All of these controls must consider digital privacy and the freedom to benefit from artificial intelligence for purposes that serve humanity. Still, these control methods must be pursued in a way that is effective without always depending on censorship (State for Digital, Culture, Media & Sport and the Secretary of State for the Home Department, 2019). The primary reliance should be on immunizing artificial intelligence and its uses so that they remain purposeful, protected, and accessible to everyone without harm (Ball, 2019).

Questions to analyze and focus on

Many important questions should be raised about AI: Who is creating AI systems, and what are they created for? Who can control these AI applications? Can we create useful AI applications while bad uses of them remain possible? Can deep-learning algorithms produce deep-learning students? How can we get useful results from data? Are virtual assistants safe enough that my data cannot be accessed by anyone else? Is my cloud data secure from violation by others in the cloud? (Useche, 2019).

References:

Ball, J. (2019, April 8). The UK’s online laws could be the future of the internet—and that’s got people worried. MIT Technology Review. https://www.technologyreview.com/2019/04/08/136157/the-uks-online-laws-could-be-the-future-of-the-internetand-thats-got-people-worried/

Floridi, L. (2020). How to Design AI for Social Good: Seven Essential Factors. Science and Engineering Ethics, 1771–1796.

Gelman, A. (2019, April 3). From Overconfidence in Research to Over Certainty in Policy Analysis: Can We Escape the Cycle of Hype and Disappointment? Shorenstein Center. https://ai.shorensteincenter.org/ideas/2019/4/3/from-overconfidence-in-research-to-over-certainty-in-policy-analysis-can-we-escape-the-cycle-of-hype-and-disappointment

Marcus, G. (2017). Deep Learning: A Critical Appraisal. New York: New York University.

Shaping Europe’s digital future. (2019, April 8). Ethics guidelines for trustworthy AI. European Commission. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

Solon, O. (2019). Facial recognition’s ‘dirty little secret’: Millions of online photos scraped without consent. NBC News. https://www.nbcnews.com/tech/internet/facial-recognition-s-dirty-little-secret-millions-online-photos-scraped-n981921

State for Digital, Culture, Media & Sport and the Secretary of State for the Home Department. (2019). Online Harms White Paper. UK: APS Group.

Useche, D. O. (2019). CCTP-607: “Big Ideas”: AI to the Cloud. Retrieved from Georgetown: https://blogs.commons.georgetown.edu/cctp-607-spring2019/category/week-12/

Vincent, J. (2019). AI systems should be accountable, explainable, and unbiased, says EU. Retrieved from Theverge.com: https://www.theverge.com/2019/4/8/18300149/eu-artificial-intelligence-ai-ethical-guidelines-recommendations

Synthesis of Learning

I’m with some older family members and friends this weekend, and as at most family gatherings, I was asked what I am doing in life. After telling them that I’m in school, they asked what I was learning, and it was amazing how in-depth I could explain concepts like AI/ML. It was also great to see their interest in trying to understand it and to clear up some confusion regarding AI/ML and its ethical implications. Below are some concepts I refined after looking through my notes, to take with me to future family gatherings; I figure this is a good start to my learning synthesis. I’ll finish with some thoughts on how I want to approach my final paper:

Before we talk about Artificial Intelligence (AI), we need to understand the role of computers. Computers are nothing more than machines for following instructions, and those instructions are what we call programs and algorithms. AI then seeks to make computers do the sorts of things that minds can do, with two main aims: to get useful things done (technical) and to help answer questions about human beings and other living things (scientific). Doing so, though, requires intelligence that must ultimately be reduced to simple, explicit instructions for computers to process. This presents the fundamental challenge of AI: can you produce intelligent behavior simply by following a list of instructions?

A machine is said to have AI if it can interpret data, potentially learn from the data, and use that knowledge to adapt and achieve specific goals. The type of AI we are facing today is nothing like what is found in movies and science fiction novels. Today we are working on narrow/weak AI, in which scientists try to build computer programs that carry out tasks that currently require thought. Some examples include filtering spam messages, recognizing faces in pictures, and creating usable translations. Each follows a similar design process and works because of machine learning (ML), a subset of AI. ML is used when we believe there is a relationship between observations of interest but do not know exactly what it is. Using artificial neural networks, ML can predict new cases of a certain instance through pattern recognition. An artificial neural network is made of connected layers that loosely replicate the brain’s neural network. The first layer is the input layer, which gives a numeric value to the input. The second layer is the hidden layer, which classifies data and transfers inputs to the last layer, the output layer. To do so, the hidden layer applies a bias to the weighted inputs, and an algorithm trains the neural network using labeled data from a training set. The output is the result of the hidden layer’s interpretation, i.e., its learning from the training data.
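As a rough illustration of the three-layer structure just described, here is a minimal Python sketch of a single forward pass. The weights, biases, and inputs are toy values chosen purely for illustration, not trained ones; in practice, a training algorithm would adjust them against labeled examples from a training set.

```python
# A minimal sketch of: input layer -> hidden layer (weights + bias +
# activation) -> output layer. Toy values, not a trained network.
import numpy as np

def sigmoid(z):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.8])                # input layer: numeric values for the input

W_hidden = np.array([[0.2, -0.4],       # weights connecting input -> hidden
                     [0.7,  0.1],
                     [-0.3, 0.5]])
b_hidden = np.array([0.1, -0.2, 0.05])  # bias applied to the weighted inputs

W_out = np.array([[0.6, -0.1, 0.3]])    # weights connecting hidden -> output
b_out = np.array([0.0])

hidden = sigmoid(W_hidden @ x + b_hidden)  # hidden layer transforms the input
output = sigmoid(W_out @ hidden + b_out)   # output layer: the network's answer

print(output)  # training would nudge W and b to push this toward the label
```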

Though these programs, or “sets of instructions,” do not understand the decisions they make, they can simulate understanding, which is dangerous when society puts trust in systems it does not understand. Some ethical concerns include the inherent bias in existing AI because of biases in training data. Others revolve around privacy rights regarding AI’s use in facial recognition, as well as destructive uses of new AI innovations in natural language processing like GPT-3. One topic for a final paper could be examining some of these concerns more thoroughly, specifically regarding facial recognition and the ideas behind big data and the unregulated collection of all this data. At the same time, I’m interested in the design aspects of cloud computing and AI with regard to another big issue: the environmental impact. The amount of energy consumed by AI models like GPT-3 is alarming as climate change becomes the focus of governments and corporations.

References:

Alpaydin, Ethem. 2016. Machine Learning: The New AI. MIT Press Essential Knowledge Series. Cambridge, MA: MIT Press. https://drive.google.com/file/d/1iZM2zQxQZcVRkMkLsxlsibOupWntjZ7b/view?usp=drive_open&usp=embed_facebook.

Boden, Margaret. 2016. AI: Its Nature and Future. Great Britain: Oxford University Press. https://drive.google.com/file/d/1P40hHqgDjysytzQfIE7ZXOaiG0Z8F2HR/view?usp=drive_open&usp=embed_facebook.

CrashCourse. 2017. Machine Learning & Artificial Intelligence: Crash Course Computer Science #34. https://www.youtube.com/watch?v=z-EtmaFJieY&t=2s.

———. 2019. What Is Artificial Intelligence? Crash Course AI #1. https://www.youtube.com/watch?v=a0_lo_GDcFw&list=PL8dPuuaLjXtO65LeD2p4_Sb5XQ51par_b&index=2.

Wooldridge, Michael. 2020. A Brief History of Artificial Intelligence. 1st ed. New York: Flatiron Books. https://drive.google.com/file/d/1zSrh08tm9WbYtERSNxEWvItnKdJ5qmz_/view?usp=sharing&usp=embed_facebook.

Synthesis of Learning and AI Ethics – Chirin Dirani

A black box, in science, engineering, and computing, “is a device, system, or object which can be viewed in terms of its inputs and outputs, without any knowledge of its internal workings.” Before this course (CCTP-607), computing systems, AI/ML, big data, and cloud computing were a mere black box to me. However, this course enabled me to learn the design principles of these technologies, deconstruct the many interdependent layers and levels that compose this sophisticated system, and read about how these technologies were developed. This new knowledge enabled me to de-blackbox this complex system and understand its main architecture, components, and mechanisms. The de-blackboxing approach changed the way I perceive technology and the way I interact with it. Many of my previous ambiguities and ‘assumptions’ about these technologies have been cleared up. In previous assignments, we looked into the design principles and architecture of computing systems, including Artificial Intelligence (AI) and Machine Learning (ML), big data and data analytics, and cloud computing systems. We also investigated the convergence points that allowed AI/ML applications, cloud systems, and big data to emerge together, in a relatively short time, as the leading trends in the world of technology today. With this rapid emergence, a large number of ethical and social concerns have appeared in the last few years. The materials for this class inform us about some of the current issues and promising approaches to solving them.

The documentary Coded Bias highlights an important civil rights problem discovered by a young MIT Media Lab researcher, Joy Buolamwini. She proves the bias within facial recognition programs, specifically against those who do not look like the white men who initially created these technologies. Facial recognition built on biased but powerful AI/ML algorithms can cause harm and misinformation to people of color, women, and minorities around the world. It can also be used as a tool of state and corporate mass surveillance. In their account, Lipton and Steinhardt highlight some “troubling trends” in the creation and dissemination of knowledge about data-driven algorithms by AI/ML researchers. These trends include 1) failure to distinguish between explanation and speculation, 2) failure to identify the sources of empirical gains, 3) misuse of mathematics that confuses technical and non-technical concepts, and 4) misuse of language by choosing terms of art or overloading existing technical terms. Through their article, the authors call for a recurring debate about what constitutes reasonable standards for scholarship in the AI/ML field, as this debate will lead to societal self-correction and justice for all.

With the growing number of issues surrounding the AI/ML community, which the private sector cannot resolve alone, comes the need for a thoughtful governmental approach to regulating this field. The European Union (EU) was a pioneer in imposing a durable privacy and security law concerning the collection of data on people in the EU. The General Data Protection Regulation (GDPR) penalizes anyone who violates its privacy and security standards with fines in the tens of millions of euros. In the EU Ethics Guidelines for Trustworthy AI, AI must be lawful, ethical, and robust. The guidelines also list seven “key requirements that AI systems should meet in order to be considered trustworthy”: 1) empower human beings to make informed decisions and nurture fundamental rights, 2) ensure resilience, 3) respect privacy and protect data, 4) be transparent, 5) be diverse, 6) benefit all human beings including future generations, and 7) be responsible and accountable for outcomes. Yet in the US there are no such regulations to govern the outcomes of AI/ML. Until such regulations exist and cause a shift from measuring algorithm performance alone to evaluating human performance and satisfaction, i.e., Human-Centered AI (HCAI), there is a need to learn and understand how this system works. This understanding happens by de-blackboxing and exposing the layers and levels that make the system work the way it does. I have thoroughly enjoyed reading this week about the intersection of technology and ethics. The readings were an eye-opener to the amount of work and research that still needs to take place to ensure that human beings remain in control of technologies rather than vice versa.

References

1). Ben Shneiderman, “Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-Centered AI Systems.” ACM Transactions on Interactive Intelligent Systems 10, no. 4 (October 16, 2020): 26:1-26:31.

2). Film Documentary, Coded Bias (Dir. Shalini Kantayya, 2020).

3). Professor Irvine article, “Introduction to Key Concepts, Background, and Approaches.”

4). The European Commission (EU), Ethics Guidelines for Trustworthy AI.

5). The European Commission (EU) General Data Protection Regulation

6). Will Kenton, “Black Box Model,” Investopedia, Aug 25, 2020, visited April 16, 2021, https://www.investopedia.com/terms/b/blackbox.asp.  

7). Zachary C. Lipton and Jacob Steinhardt, “Troubling Trends in Machine Learning Scholarship,” July 9, 2018.