The readings for this week explain how the concept of simulating human intelligence has evolved throughout history. These ideas were accompanied by serious efforts in the field of computing systems that led to the emergence of a new science: artificial intelligence (AI). AI opened the door to new applications that pervade every aspect of human life, and as Ethem Alpaydın said, “digital technology increasingly infiltrates our daily existence” and makes it dependent on computers and technologies. This relatively new discipline was received with varying reactions: some perceived it with hope, while others looked at it with fear and suspicion. This reflection does not aim to assess whether the impact of AI on humanity is positive or negative; rather, I will explore the rationale behind the mystery and misperceptions surrounding AI, and the uncertainty of its effect on our daily lives.
By way of analogy, we depend on cars for our daily commute. It is not essential for us to know how the engine functions, yet that does not make us anxious about driving every day, because we remain in control of our car’s speed and destination. Similarly, ordinary users do not know how AI works, but that should not stop them from using it. The key difference in this analogy is that the AI user neither controls the technology nor knows where it will lead. It is instead controlled and managed by a very small number of institutions, such as giant corporations and governments. This ambiguity of control and destination, combined with how few institutions make the decisions about the use of AI, has promoted and nurtured such unease and suspicion of AI among the public.
According to Britannica, human intelligence is defined as the “mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one’s environment.” In his account, Introductory Essay: Part 1, Professor Irvine notes that today’s technologies for simulating human processes take the form of code running in computing systems. He adds that this code is protected as intellectual property (IP) held by a small number of companies. The “lockdown” of code by these companies, combined with the “lock-in” of consumers by others, hinders wide-ranging access to it. These restrictions black-box AI and limit the public’s ability to understand the science. That same state of ambiguity leaves AI users vulnerable to falsehoods generated by the media and by common public discourse on AI and technology in general.
I hope, with time, to understand whether such a monopoly over AI is beneficial. If it is not, will we witness a phase in which AI becomes regulated and tightly monitored to ensure best practices and to protect the public from possible misuses of AI by some firms, such as intelligence gathering or consumerism?
Ethem Alpaydın, Machine Learning: The New AI (Cambridge: MIT Press, 2016), p. X.