The underlying theme in this week’s reading was the misconceptions people hold about AI: that it is going to dominate the world, displace workers, or supersede human intelligence. Prof. Irvine’s introduction makes clear that computers process symbols, and that symbolic representation is the common language through which they make interpretations. Computers came before AI, not the other way around.
Wooldridge’s first chapter was very interesting in how it introduced AI and what it is and is not able to do. A machine can essentially make calculations based on sets of rules given to it, but it cannot go further in its interpretation. This matters because one of our most common misconceptions is that machines can recognize patterns and learn without limit. The reality is that we are far from an era where computers can actually write books or hold a dialogue in a meaningful way, one where they understand the conversation rather than return answers to cues that were programmed into them initially. This is why Alexa can’t really answer certain questions, and why IBM’s “Project Debater” lost to a human debater in a debate.
https://www.research.ibm.com/artificial-intelligence/project-debater/film/
One of the most important questions these readings raised for me was how cross-disciplinary AI is. Even though it should have been obvious to me that psychology plays a role, it was surprising to see how central it and cognitive science are to designing AI. The field also brings philosophy to the forefront, asking what counts as “human.” This is perhaps the question that causes the most fear in people and, as the readings point out, fuels dystopian fiction and the belief that AI is more advanced than it really is.
Some of the questions that came to mind while reading are: how can we explain to people what AI can and cannot do? Do they even care? Other questions I would like to grapple with further are what we want to use this technology for, and the ethical implications of how it is used now.