29 January 2019
In the Boden piece for this week, she starts by discussing the syntax of AI, and I find myself a bit confused by this section. A question I developed from it was: are the syntactic capabilities of AI responsible for the word-search and word-prediction functions of Google and other search databases? I'd like a clear, concrete definition of this syntax, what it really does, and how it actually works internally in a system. I also thought about this a lot because, since Google is such a trusted search engine for millions of users, this capability for word prediction and translation has marketed Google better than its competitors. From a personal standpoint, I can honestly say Google's predictions have helped me conduct easier searches and narrow down what I needed online. The feature is highly functional and fairly new, and I'd like to understand more about how it operates in language I can follow. Boden explains it basically as "weighted search and data mining…data mining can find word patterns unsuspected by human users. Long used for market research on products and brands, it's now being applied to big data; huge collections of texts or images…" (Boden, p. 63). This got me thinking: how is this syntax feature able to make predictions without human emotion or human thought? Perhaps an enormous algorithm that tests related words that could follow the first word you search? I would like to know more about this, since we all do Google searches all the time.
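To make my guess about "an enormous algorithm" concrete, here is a toy sketch of my own (not Google's actual system, which is far more sophisticated) showing how word prediction can fall out of nothing but counting word patterns in text, exactly the kind of pattern-finding Boden calls data mining:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "huge collections of texts" Boden mentions.
corpus = (
    "machine learning is fun machine learning is powerful "
    "machine translation is hard deep learning is powerful"
).split()

# Count bigrams: for each word, tally how often each next word follows it.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word, k=2):
    """Return the k most frequent continuations of `word` in the corpus."""
    return [w for w, _ in next_word_counts[word].most_common(k)]

print(predict_next("machine"))  # → ['learning', 'translation']
```

No emotion or thought is involved: the "prediction" is just the most frequent continuation seen in past text, which is why more data (more searches, more documents) makes the suggestions better.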
In the Alpaydin piece for this week, similar topics are discussed. To expand on what I wrote about the Boden piece, Alpaydin breaks down the supervised-learning side of this, where models are established with inputs and outputs. He says it's similar to linear regression in statistics, which is based on prediction. So when we use AI for online searches, he says, we can think of it as supervised learning. He writes, "machine learning implies getting better according to a performance criterion, and in regression, performance depends on how close the model predictions are to the observed output values in the training data" (Alpaydin, p. 38). In other words, does this mean the models themselves train, learn, and improve the more we search and input new information? This got me thinking more about linear algorithms and how they relate or change in terms of prediction for online searches. I'd also like to expand on the generalization concept. Does generalization rely solely on compression?
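Alpaydin's regression idea can be sketched in a few lines. This is my own minimal illustration with made-up training data, not his example: fit a line y ≈ w·x + b by least squares, then score it with mean squared error, which plays the role of his "performance criterion" measuring how close predictions are to the observed outputs.

```python
def fit_linear(xs, ys):
    """Least-squares fit of y = w*x + b (the supervised 'learning' step)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - w * mean_x
    return w, b

def mse(xs, ys, w, b):
    """Performance criterion: mean squared error of predictions vs. outputs."""
    return sum((y - (w * x + b)) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Invented training data: inputs and observed outputs near the line y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3.1, 4.9, 7.2, 9.0, 10.8]
w, b = fit_linear(xs, ys)
print(w, b, mse(xs, ys, w, b))  # → 1.95 1.15 0.015
```

"Getting better" here just means the fitted w and b drive the error down; with more training pairs, the estimates keep adjusting, which is the sense in which the model improves as new input arrives.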
Since these are related to my inquiry, I tried to look for clarity in the Johnson and Verdicchio piece. They cover a lot of material in their case study. One passage that helped me narrow down my ideas was, "If an artefact is able to acquire new patterns of behavior by means of proper training, then the system's autonomy may increase over time" (Johnson & Verdicchio, p. 583). They later say, "Autonomous computational artefacts have a certain kind of unpredictability that is related to their autonomy. Because their unpredictability derives from the limitations of human users and observers, it's important to remember that autonomous computational artefacts are still bounded by their programming and their embodiment, even when they learn" (Johnson & Verdicchio, p. 583). This got me thinking that all of this is possible because of embedded, instruction-based algorithms that pre-determine and alter search results and prediction functions in AI systems. In its own way, could this be considered programmed human behavior, since it is based on input and can be changed?
To push this further, I found a great article by the writer Peter Sweeney on Medium, where he addresses the very confusion I noted. He provides a model in which AI is made up of machine learning and deep learning: within machine learning is where the predictive feature exists, while deep learning is where the simplifying process occurs, and together the two make up the AI bubble in his diagram. He says, "prediction is the essence of intelligence. And it will remain so until a more dominant technology teaches us otherwise" (Sweeney, p. 1).
Alpaydin, E. (2016). Machine learning: The new AI. MIT Press.
Boden, M. A. (2016). AI: Its nature and future. Oxford University Press.
Johnson, D. G., & Verdicchio, M. (2017). Reframing AI discourse. Minds and Machines, 27(4), 575–590.