A black box, in science, engineering, and computing, “is a device, system, or object which can be viewed in terms of its inputs and outputs, without any knowledge of its internal workings.” Before taking the CCTP-607 course, computing systems, AI/ML, Big Data, and Cloud computing were mere black boxes to me. This course, however, has enabled me to learn the design principles behind these technologies, deconstruct the many interdependent layers and levels that compose these sophisticated systems, and trace the history of how they were developed. This new knowledge has allowed me to de-blackbox these complex systems and understand their main architectures, components, and mechanisms. The de-blackboxing approach changed the way I perceive technology and the way I interact with it, clearing up many of my previous ambiguities and assumptions about these technologies. In the previous assignments, we looked into the design principles and architecture of computing systems, including Artificial Intelligence (AI) and Machine Learning (ML), Big Data and data analytics, and Cloud computing. We also investigated the convergence points that allowed AI/ML applications, Cloud systems, and Big Data to emerge together, in a relatively short time, as the leading trends in technology today. With this rapid emergence, a large number of ethical and social concerns have appeared in the last few years. The materials for this class inform us about some of these current issues and about promising approaches to solving them.
The documentary Coded Bias highlights an important civil rights problem discovered by a young MIT Media Lab researcher, Joy Buolamwini. She demonstrates the bias within facial recognition programs, specifically against those who do not look like the white men who initially created these technologies. Facial recognition built on biased yet powerful AI/ML algorithms can cause harm and misinformation for people of color, women, and minorities around the world; it can also be used as a tool of state and corporate mass surveillance. In their account, Lipton and Steinhardt highlight several “troubling trends” in the creation and dissemination of knowledge about data-driven algorithms by AI/ML researchers: 1) failure to distinguish between explanation and speculation; 2) failure to identify the sources of empirical gains; 3) misuse of mathematics that conflates technical and non-technical concepts; and 4) misuse of language by coining new terms of art or overloading existing technical terms. Through their article, the authors call for a recurring debate about what constitutes reasonable standards for scholarship in the AI/ML field, arguing that such debate will lead to societal self-correction and justice for all.
With the growing number of issues surrounding the AI/ML community, which the private sector cannot resolve alone, comes the need for a thoughtful governmental approach to regulating this field. The European Union (EU) was a pioneer in imposing a durable privacy and security law governing the collection of data about people in the EU. The General Data Protection Regulation (GDPR) penalizes anyone who violates its privacy and security standards with fines reaching tens of millions of euros. According to the EU Ethics Guidelines for Trustworthy AI, AI must be lawful, ethical, and robust. The guidelines also list seven “key requirements that AI systems should meet in order to be considered trustworthy”: 1) empower human beings to make informed decisions and nurture fundamental rights; 2) ensure technical robustness and resilience; 3) respect privacy and protect data; 4) be transparent; 5) be diverse and non-discriminatory; 6) benefit all human beings, including future generations; and 7) be responsible and accountable for their outcomes. In the US, by contrast, no such regulations yet govern the outcomes of AI/ML. Until such regulations exist and drive a shift from measuring algorithm performance alone to evaluating human performance and satisfaction, as Human-Centered AI (HCAI) proposes, there is a need to learn and understand how these systems work. That understanding comes from de-blackboxing them and exposing the layers and levels that make them work the way they do. I have thoroughly enjoyed reading this week about the intersection of technology and ethics. The readings were an eye-opener to the amount of work and research that still needs to take place to ensure that human beings remain in control of technologies, rather than vice versa.
1) Ben Shneiderman, “Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-Centered AI Systems,” ACM Transactions on Interactive Intelligent Systems 10, no. 4 (October 16, 2020): 26:1-26:31.
2) Coded Bias, documentary film, directed by Shalini Kantayya, 2020.
3) Professor Irvine, “Introduction to Key Concepts, Background, and Approaches.”
4) The European Commission, Ethics Guidelines for Trustworthy AI.
5) The European Commission, General Data Protection Regulation (GDPR).
6) Will Kenton, “Black Box Model,” Investopedia, August 25, 2020, accessed April 16, 2021, https://www.investopedia.com/terms/b/blackbox.asp.
7) Zachary C. Lipton and Jacob Steinhardt, “Troubling Trends in Machine Learning Scholarship,” July 9, 2018.