More Human Than Human – Hans Johnson

An insight gained from the reading The Sciences of the Artificial was that the term “artificial intelligence” appears to be an oxymoron. If computer scientists, mathematicians, and neuroscientists were to actually succeed in creating what we call “strong AI” or AGI (Artificial General Intelligence), would it be artificial, or would it simply be “non-biological” intelligence? As Herbert Simon explains, “synthetic intelligence” may be the more appropriate term in this context. Furthermore, there is the philosophical question of what counts as “intelligence.” Consider Joseph Weizenbaum’s ELIZA program, which could initially seem indistinguishable from a human but lacked the capacity to comprehend symbols or to learn new responses on its own, as opposed to an AGI, which would have the capacity to truly learn and to create its own original responses.
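
To make the distinction concrete, here is a minimal sketch of ELIZA-style keyword matching in Python. The rules and replies below are my own illustrative inventions, not Weizenbaum’s actual DOCTOR script; the point is that nothing in the program understands a single symbol.

```python
import re

# A few hand-written rules in the spirit of ELIZA's DOCTOR script.
# These patterns and templates are illustrative inventions, not
# Weizenbaum's originals. Each rule maps a regex to a canned reply.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Your {0} seems important to you."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Return a canned reply by shallow pattern matching.

    No symbol is comprehended and nothing is learned: the program only
    reflects fragments of the input back through fixed templates.
    """
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

if __name__ == "__main__":
    print(respond("I am worried about my thesis"))
    # -> "Why do you say you are worried about my thesis?"
```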

A good example of the ELIZA concept playing out on a massive scale is depicted in the sci-fi show “Westworld.” In Westworld, an amusement park is populated with androids called “hosts.” In the beginning, the hosts have a very limited capacity to interact with the human guests, but the park’s cloud drive stores data from every interaction the hosts have with guests. Over the course of several decades, the hosts develop more complex responses to guests. However, these responses are simply drawn from the data stored in the cloud from previous guest interactions. The hosts therefore merely mimic human behavior based on countless host/guest interactions, rather than learning to create their own.
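
The mechanism the show dramatizes, replaying stored exchanges rather than understanding them, is roughly what a retrieval-based chatbot does. Below is a minimal sketch; the interaction log and the crude word-overlap similarity score are placeholders of my own, not anything from the show:

```python
from collections import Counter

# Hypothetical log of past guest/host exchanges; in Westworld's terms,
# the "cloud drive" of every interaction the hosts have ever had.
INTERACTION_LOG = [
    ("where can I find the saloon", "The saloon is just down the main street."),
    ("tell me about this town", "Sweetwater is a quiet town, most days."),
]

def _tokens(text: str) -> Counter:
    return Counter(text.lower().split())

def similarity(a: str, b: str) -> int:
    # Crude word-overlap score; a stand-in for whatever matching a
    # real retrieval system would actually use.
    return sum((_tokens(a) & _tokens(b)).values())

def host_reply(guest_utterance: str) -> str:
    """Reply by retrieving the stored exchange most similar to the input.

    The 'host' never creates a response of its own; it only replays the
    reply attached to the closest previous interaction.
    """
    best_prompt, best_reply = max(
        INTERACTION_LOG, key=lambda pair: similarity(guest_utterance, pair[0])
    )
    return best_reply

if __name__ == "__main__":
    print(host_reply("where is the saloon"))
    # -> "The saloon is just down the main street."
```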

It would seem the creation of synthetic intelligence or “strong AI” centers on the prospect of a computer program beginning with a base understanding of a few symbols and, from those, gradually extending meaning to others over time (a snowball effect). Yet this is much easier said than done. Computers are built on the simplest of functions (machine code), and programming them to be anything other than simple essentially works against the core foundations a computer is built upon. However, the key to creating AGI may involve a greater understanding of the computational operations of the human brain, rather than of computer processes, as Margaret Boden suggests in AI: Its Nature and Future. But should we limit ourselves to creating intelligence that only mirrors the human brain?
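
As a thought experiment in code (entirely hypothetical, not a real approach to symbol grounding), the snowball effect might look like a vocabulary that only grows by defining new symbols in terms of symbols already understood:

```python
# Toy illustration of the "snowball effect": a vocabulary that grows by
# defining new symbols in terms of symbols already grounded. This is a
# thought experiment, not a real approach to machine understanding.
GROUNDED = {"hot": "high temperature", "water": "H2O liquid"}

def learn(symbol: str, definition: list[str]) -> bool:
    """Accept a new symbol only if every word in its definition is
    already grounded; each accepted symbol then enlarges what the
    next definition may build on."""
    if all(word in GROUNDED for word in definition):
        GROUNDED[symbol] = " + ".join(definition)
        return True
    return False

print(learn("steam", ["hot", "water"]))  # True: built from known symbols
print(learn("justice", ["fairness"]))    # False: nothing to ground it in
```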

Perhaps the most pertinent question I gathered from the readings is this: do we really want self-aware computers whose intelligence is derived from the human mind? Additionally, if the snowball effect were to occur and a computer system were to become self-aware and learn concepts on its own, are there specific control measures in place to counteract a rampant system? What does true success look like, and what are the implications? What if a rampant synthetic AI were able to infect systems and replicate itself? While Hollywood likely exaggerates the consequences of AGI, there could very well be real consequences of self-aware synthetic intelligence, especially if it is derived from the human mind.