The Following of Man By Machine – Matthew Leitao

There are two things I got out of the readings, and both have to do with the purpose of AI generally. AI has had a long and interesting history, with many great minds (whom I would have loved to meet and talk to) attempting to push forward the design and progress of code and the idea of artificial intelligence. What interests me is that there is a split: we are trying to imitate people and trying to solve problems at the same time.

I think Wooldridge's explanation of the Imitation Game used by Turing is a prime example of this conflict. Is the purpose to imitate or to embody? These are two very different tasks. It is like teaching a system how to beat someone at Chess or Go: we give the system the rules, but what counts as the best move is determined by the computer, not necessarily the operator. This is why computers can leverage their data to solve problems in a different way, as AlphaGo did by evaluating final board states, which would be largely meaningless to a human player.
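
As an aside, the idea of deriving value purely from final board states can be sketched in a few lines of Python. This is only a toy Monte Carlo rollout on tic-tac-toe, not AlphaGo's actual method (which combines deep networks with Monte Carlo tree search), and every name and number here is my own illustrative choice. Each legal move is scored solely by how often random play from it ends in a winning final state:

    import random

    # The eight winning lines on a 3x3 board (cell indices 0-8).
    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),
                 (0, 4, 8), (2, 4, 6)]

    def winner(board):
        # Return 'X' or 'O' if a line is complete, else None.
        for a, b, c in WIN_LINES:
            if board[a] is not None and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def rollout(board, player):
        # Play uniformly random moves until a final board state is reached.
        board = board[:]
        while winner(board) is None and None in board:
            move = random.choice([i for i, v in enumerate(board) if v is None])
            board[move] = player
            player = 'O' if player == 'X' else 'X'
        return winner(board)  # None means a draw

    def best_move(board, player, n_rollouts=500):
        # Score each legal move only by how often random play from it
        # ends in a final board state won by `player`.
        other = 'O' if player == 'X' else 'X'
        scores = {}
        for move in [i for i, v in enumerate(board) if v is None]:
            trial = board[:]
            trial[move] = player
            wins = sum(rollout(trial, other) == player for _ in range(n_rollouts))
            scores[move] = wins / n_rollouts
        return max(scores, key=scores.get)

    print(best_move([None] * 9, 'X'))  # usually prints 4, the centre square

Run on an empty board, this usually picks the centre square, even though no human-meaningful heuristic like "control the centre" was ever written down; the preference falls out of the final states alone.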

This also brought me to wanting to understand the different approaches to building AI explained by Boden. Each of those systems would work well for a specialized AI, as each works in a format that is conditional and two dimensional, especially considering the high success rate we expect from these programs. I can tell you from having a psychology background that people are prone to making mistakes all the time, but we manage because there are consequences when we get things wrong. A computer is agnostic to right and wrong, as there is no programmed "suffering". People and animals are machines built for the ambiguous goal of surviving to propagate (as posited by Richard Dawkins). I wonder: if we were to create a computer using these unsupervised methods and give it an ambiguous goal, positive and negative inputs, and needs, would it also be a full "human" by the time it is around 18-25 years old? A toy version of that idea is sketched below.
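
Here is a deliberately minimal Q-learning sketch in Python, my own assumption of how "positive and negative inputs" might be wired up, not anything proposed in the readings. The agent lives on a line with food at one end (+1) and a hazard at the other (-1), and its behaviour is shaped only by those consequences:

    import random

    # A line of states: state 0 is a hazard (-1), state 6 is food (+1).
    N_STATES, HAZARD, GOAL = 7, 0, 6
    ACTIONS = [-1, +1]                # step left, step right
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    def step(state, action):
        # The only "inputs" are at the endpoints: reward, punishment, or nothing.
        nxt = max(0, min(N_STATES - 1, state + action))
        if nxt == GOAL:
            return nxt, 1.0, True
        if nxt == HAZARD:
            return nxt, -1.0, True    # the programmed "suffering"
        return nxt, 0.0, False

    for episode in range(2000):
        state, done = 3, False        # start in the middle each episode
        while not done:
            # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            target = reward if done else reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = nxt

    # The learned greedy policy at each interior state: +1 means "toward the food".
    print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N_STATES - 1)])

After training, the greedy policy at every interior state steps toward the food and away from the hazard. Whether scaling this kind of consequence-driven learning up to a goal as ambiguous as "survive to propagate" would ever yield something "human" is, of course, exactly the open question.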

Questions:
Would making stimuli meaningful to an AI make it better or worse at solving problems? If an AI knew what an apple is the way we know what an apple is, would it improve?

Would giving up on trying to make computers like humans actually be more beneficial?