Category Archives: Week 2

Our World with AI

It is interesting to think about the conceptualization of Artificial Intelligence. Most of the readings discuss the dystopian associations that have been attributed to AI since it was brought to the forefront of technology. However, people have always had a hidden or apparent fear of the deepening threat of technological innovation and the ultimate, detrimental effects it could have on society and our world. When talking about cyborgs and robots, people expected a type of technology that was cold, distinct, and far away. Yes, a cyborg or a robot might reproduce a lot of human attributes, but at least it doesn’t necessarily resemble the human species. If anything, if a cyborg-robotic attack were to take place, the human race would always have a dichotomized “us” and “them” divide to rally around in order to defeat them. What AI reveals, though, is that no one really imagined the form technology would soon take. Maybe the familiarity with Siri’s voice and attitude, or Google Assistant’s ability to always know what you’re into, or the fact that your phone always serves you results and ads for things you have just discussed with a group of friends, is what creates this bizarre feeling of acquaintance with this type of technology. Yet its complexity also creates the fear and misunderstanding that come along with it.

The independence found in these types of technologies creates “discourse and false assumptions around AI and computer systems” (Irvine 2021). Ironically, media culture has only deepened the misinformation issue with AI and enhanced this sense of a threatening dystopia (Boden, 2016; Wooldridge, 2020). In reality, if more people were to truly understand AI and de-blackbox it, much of the fear and sensationalism surrounding AI would become obsolete. The success of AI as we know it today can only be attributed to the cumulative expansion and adaptation of various aspects of computation and computing systems that have taken place throughout the years (Irvine, 2021; CrashCourse, 2019; Boden, 2016; Wooldridge, 2020). From the very early human symbol systems to complicated automated computing calculations, AI’s history is far closer to home and to “humanity” than people often think: “Everything we do in computing and AI is part of a long continuum of extending the human symbolic-cognitive capacity for symbolic systems (from language to abstract algebra and digitally encoding multimedia), and designing technical media systems for and with those systems.” (Irvine, 2021, 9). Concepts and patterns created even thousands of years ago to improve and facilitate human life and development at every stage are still in use, and they are the reason we are able to have the technology available to us today.

I really enjoyed going through these readings, as they connected perfectly with, and delved into, my focus on daily uses of AI. I found many similarities to concepts and facts I had previously encountered in other classes and research from previous semesters, such as my own research for 505 and for Computing and the Meaning of Code (711).

AI has the capacity to touch most aspects of our lives through its applications; where can we find AI? Everywhere! With its capability to adapt to a vast range of things, from self-driving cars to intelligent personal assistants (IPAs) to sat-nav systems, AI’s “homologous design” (Boden, 2016) can be represented in a myriad of different ways, forms, and even languages. It combines the “spirit” of humanistic psychology, philosophy, and neuroscience with that of technology, binary, computational, and symbol systems, which together work toward enhancing and providing solutions for the real human world and our lives.

Some more questions/comments: 

Overselling AI? 

Marvin Minsky 

When we overrate AI, are we overrating our own capacity? The capacity of the computers/systems? Or the capacity of our binary and other symbol systems? 

Where does the misinformation problem with AI really originate? 

 


The Following of Man By Machine – Matthew Leitao

There are two things that I got out of the readings, and they have to do with the purpose of AI generally. AI has had a long and interesting history, with many great minds (whom I would have loved to meet and talk to) attempting to push forward the design and progress of code and the idea of artificial intelligence. What is interesting to me is that there is a split: we are trying to imitate people and trying to solve problems at the same time.

I think Wooldridge’s explanation of the Imitation Game used by Turing is a prime example of this conflict. Is the purpose to imitate or to embody? These are two very different tasks. It’s like teaching a system how to beat someone at Chess or Go: we give the system the rules, but what is best will be determined by the computer and not necessarily the operator. This is why computers can leverage their data to solve problems in a different way, as AlphaGo did by analyzing final board states, which would be largely meaningless to individuals.

This also brought me to wanting to understand the different approaches to building AI explained by Boden. Each of those systems would work perfectly for a specialized AI system, as it works in a format that is conditional and two-dimensional, especially considering that we expect such a high success rate from these programs. I can tell you, from having a psychology background, that people are prone to making mistakes all the time, but we manage because there are consequences when we get things wrong. A computer is agnostic to the right-and-wrong process, as there is no programmed “suffering.” People and animals are machines meant to pursue the ambiguous goal of surviving to propagate (as posed by Richard Dawkins). I wonder, if we were to create a computer using these unsupervised methods and give it an ambiguous goal, positive and negative inputs, and needs, whether it would also be a full “human” by the time it is around 18-25 years old.

 

Questions:
Would making stimuli meaningful to an AI make it better or worse at solving problems? If an AI knew what an apple is the way we know what an apple is, would it improve?

Would forgoing the idea of trying to make computers like humans actually be more beneficial?

 

What I Am Afraid of with AI – Fudong Chen

After finishing the introductory readings, one topic impressed me: the evolution, or change, of AI’s goals and aims.

AI has two main aims. One is technological: using computers to get useful things done. The other is scientific: using AI concepts and models to help answer questions about human beings and other living things. In my view, Alan Turing’s idea, the Turing machine, belongs to the scientific aim, though it was initially an abstract mathematical idea. So at least at the very beginning, AI was meant to serve human beings’ goals and purposes. But throughout the history of AI, a large number of people have dreaded AI because they are afraid it could evolve to replace humans.

Actually, I am not afraid of this, since most AI today is still empirical and depends on statistics; that is, it only has the ability to predict the near future from past data. But I do fear AI in another way, because I noticed a stereotype running through the readings, such as the idea that a user can access data anywhere, or that this is an era of democratization of digital technology.

However, in reality, algorithms and data have become the assets of technology companies. Those who master big data seem more like the AI that can predict the future. It is undeniable that businesses are usually expected to use data to provide personalized services to every single customer. Data allows companies to understand individuals better than individuals understand themselves, and companies can even predict and show individuals interests and products that they did not know about before. But what will the world become if this predictive ability is used elsewhere? When AI’s goals change from helping people do useful things to using data to make money, which seems to be an inevitable result of commercialization, what do we do?

In addition, as a student without a strong mathematical background, when I learn about machine learning, I find I am just learning and using a black box. Data starts to drive the operation; it is not the programmers anymore but the data itself that defines what to do next. I just input data to get results and analyze them, and sometimes I do not even need to analyze them. I do not understand what happens inside the black box. I would like to know what I can gain from learning AI and machine learning.
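Alpaydin’s point that the data, not the programmer, defines what happens next is visible even in a few lines of code. Below is a minimal, hypothetical sketch of the black-box workflow described above, assuming the scikit-learn library; the model choice and the numbers are invented purely for illustration.

```python
# A minimal sketch of the "black box" workflow described above.
# scikit-learn is assumed here as an example library; the data is invented.
from sklearn.linear_model import LogisticRegression

# Past observations: each row is an example, each column a feature.
X_train = [[1.0, 0.2], [0.9, 0.4], [0.1, 0.8], [0.2, 0.9]]
y_train = [1, 1, 0, 0]  # labels attached to those past examples

model = LogisticRegression()  # the "black box"
model.fit(X_train, y_train)   # the data, not the programmer, sets the parameters

# New inputs go in, predictions come out, without the user ever
# inspecting what happened inside the model.
print(model.predict([[0.8, 0.3], [0.15, 0.85]]))
```

Everything interesting happens inside fit() and predict(), which is exactly the black-box feeling described above.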

Boden, M. A. (2016). AI: Its nature and future (First edition). Oxford University Press.

Alpaydin, E. (2016). Machine learning: The new AI. MIT Press.

Wooldridge, M. J. (2021). A brief history of artificial intelligence: What it is, where we are, and where we are going (First U.S. edition). Flatiron Books.

Duality in Development of Technology

It is likely that the most important reason why we develop technology (AI, ML, etc.) is for the benefit of humans – physically, emotionally, and mentally. The social reception of technologies, therefore, is the basis of what determines useful and useless advancements in tech. It can be argued that we as a society have held the view (explicitly and implicitly) that technology is an independent thing and can cause social, political, or economic effects. The utopian/dystopian framing creates hype, hope that developments in technology will bring about a better world, and hysteria that regards technology as independent, uncontrollable, and influential. On the other hand, I would argue that people who are somewhat knowledgeable about developments in technology can understand the human power, both negative and positive, behind the technology. We can see in films like The Facebook Dilemma the power a human-created algorithm has over politics around the world, or in facial recognition scanners that work better on specific races. While it is easy to live in the bliss of having tasks become easier and more automated, overall it is not as dreamlike or uninterpretable as it once was. It may be more difficult to place blame on specific people, but nonetheless, we can see there is a clear human-powered bias in many of our technologies.

In addition to this idea of hype and hysteria, while I do understand the almost ridiculous technology and automation we see in movies (especially for the time in which the movies were produced), I believe we are unaware of revolutionary technologies currently in the works. While talk of self-driving cars has become increasingly popular, we fail to recognize the history of automated vehicles. When attending a tour of the Google office in Chelsea, New York, a Googler said technologies like the Google Home have been in the works for over ten years, and there are plenty more technologies in the works right now that no one knows about but that will become all the rage in ten years’ time.

The two frameworks for producing human-level intelligent behavior in a computer seem to be locked in a popularity contest. On one side is the mind model, or symbolic AI, which uses a series of binary yes/no, true/false, 0/1 operations to arrive at a conclusion or action; it uses symbols to represent what the system is reasoning about. Symbolic AI was the most widely adopted approach to building AI systems from the mid-1950s until the late 1980s, and it is beneficial because we can explicitly understand the goal of our technology and the reasons for the AI’s decision. The alternative to the mind model is the brain model, which aims at simulating the human nervous system. Obviously, as the brain is extremely complex, it is not yet possible to replicate human-level intelligent behavior, but developers have created technology loosely similar to the human brain. For example, neural networks are based on a collection of connected units or nodes modeled after the neurons in a biological brain.
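To make the contrast concrete, here is a deliberately simplified sketch of the two styles: an explicit symbolic rule versus a single neuron-like unit of the kind neural networks are built from. The rule, weights, and numbers are hypothetical, chosen only for illustration.

```python
# A toy contrast between the two models described above.
# The rule, weights, and numbers are hypothetical illustrations.

# Mind model / symbolic AI: explicit yes/no rules over symbols.
def symbolic_loan_decision(income, has_debt):
    if income > 50000 and not has_debt:
        return "approve"
    return "deny"

# Brain model: a single artificial "neuron", a weighted sum of inputs
# pushed through a threshold. Real neural networks chain many of these,
# and the weights are learned from data rather than written by hand.
def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

print(symbolic_loan_decision(60000, False))  # rule-based, explainable answer
print(neuron([0.6, 0.9], [0.8, -0.5], 0.1))  # numeric, opaque answer
```

The symbolic version is easy to read and explain; the neuron’s behavior depends on numeric weights that, in a real system, would be learned from data rather than written by hand.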

What I am interested in learning more about is the list of tasks that Michael Wooldridge describes in A Brief History of Artificial Intelligence. At the ‘nowhere near solved’ level, he writes of interpreting what is going on in a photo as well as writing interesting stories. Notably, a scandal broke out when women searching the word ‘bra’ in the Photos app were returned photos of themselves in a bra or bathing suit. And we continue to see information read from photos like this: I can type ‘dog’ and get many dog photos from my camera roll, etc. And while I have not been able to write an interesting story with AI myself, in an Intro NLP course last semester we trained a system on a large dataset and could generate extremely simple sentences using bigrams or trigrams. Technology cannot do anything without data, and storytelling is no different: it cannot create an interesting story out of nothing. That data was also used to predict the order of a sentence and the part of speech of each word, making Wooldridge’s examples of “Who [feared/advocated] violence?” or “What is too [small/large]?” questions that a more experienced developer would be able to program. So I suppose my question is: are there truly limits to what we can automate and create? It seems that as time progresses we continue to do things we thought were once impossible.
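For readers curious what that bigram approach looks like in practice, the sketch below is a toy version of the idea; the corpus is a single invented sentence, and a real course project would use a much larger dataset and proper probabilities rather than a simple random choice.

```python
# A toy sketch of bigram-based text generation.
# The "corpus" is a single invented sentence, used only for illustration.
import random
from collections import defaultdict

corpus = "the dog chased the ball and the dog caught the ball".split()

# Record which words follow which (the bigram counts).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate a short "sentence" by repeatedly picking a plausible next word.
word = "the"
sentence = [word]
for _ in range(6):
    if word not in following:
        break
    word = random.choice(following[word])
    sentence.append(word)

print(" ".join(sentence))
```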

“A Brief History of Autonomous Vehicle Technology.” Wired, March 31, 2016. https://www.wired.com/brandlab/2016/03/a-brief-history-of-autonomous-vehicle-technology/.

MIT Technology Review. “A US Government Study Confirms Most Face Recognition Systems Are Racist.” Accessed February 1, 2021. https://www.technologyreview.com/2019/12/20/79/ai-face-recognition-racist-us-government-nist-study/.

The Guardian. “Apple Can See All Your Pictures of Bras (but It’s Not as Bad as It Sounds),” October 31, 2017. http://www.theguardian.com/technology/shortcuts/2017/oct/31/apple-can-see-bra-photos-app-recognises-brassiere.

FRONTLINE PBS, Official. The Facebook Dilemma, Part One (Full Film) | FRONTLINE, 2018. https://www.youtube.com/watch?v=T48KFiHwexM.

What is AI for? And the Ethics in Data – Jianning Wu

From what I have gained from the readings this week, Artificial Intelligence, designed with inspiration from the brain, is a practical computer/machine that can mimic humans and make decisions as humans do. It is not an easy job and requires supportive hardware, learning abilities, and adaptive programs/algorithms. Although, currently, Artificial Intelligence is not as intelligent as what has been presented in the popular imagination, with its “aspects of hype, hope, and hysteria” (cited from Prof. Irvine), it has taken us a long time to achieve, and it is everywhere. But what is the ultimate purpose or goal of artificial intelligence? When Herbert A. Simon discussed the functional or purposeful aspect of artificial things, he mentioned that “fulfillment of purpose or adaptation to a goal involves a relation among three terms: the purpose or goal, the character of the artifact, and the environment in which the artifact performs.” The latter two terms interact and together serve the purpose of an artifact. Is AI just what its literal meaning suggests, making an intelligence like a human? Or is it allowing computers and machines to function in an intelligent manner? Different goals will lead to different consequences. In the former case, AI will become a moral challenge in the future when computing techniques are mature enough to construct a real AI. Conversely, in the latter case, AI will assist us in making progress whether or not it is mature.

In addition, in the data section of Machine Learning: The New AI, Ethem Alpaydin demonstrated the importance of data: “data starts to drive the operation; it is not the programmers anymore but the data itself that defines what to do next.” Though the author provides great examples of using data, such as helping build better structures in retailers’ supply chains, this makes me curious and even worried about how we can guarantee that those data will be used acceptably. How can we protect our private information? How can we ensure that our data will not be taken advantage of to achieve a particular party’s goals (as in an election or a business competition)? How can media remain equitable and neutral while needing to filter a huge amount of data/information? And there are dozens more questions about data. The most critical one is how we can maintain the balance between ethics and the utilization of data.

The Real State of AI

The underlying theme in this week’s readings was the misconceptions people have about AI and the sense that it is going to dominate the world, leave people out of jobs, or supersede human intelligence. Prof. Irvine’s introduction is very clear that computers process symbols and that this is the common language used for them to make interpretations. Computers came before AI, not the other way around.

Wooldridge’s first chapter was very interesting in how it introduced AI and what it is and is not able to do. It can basically make simple calculations based on sets of rules that are given to it but cannot go further in its interpretation. This is extremely important, as one of the misconceptions we have about AI is that machines are able to recognize patterns and learn exponentially. The reality is that we are far from an era where computers can actually write books and respond to dialogue in a meaningful way, where they actually understand the conversation instead of giving answers to cues that were programmed into them initially. This is why, when you ask an Alexa certain questions, it can’t really provide answers, and why IBM’s “Project Debater” lost to a human debater in a debate.

https://www.research.ibm.com/artificial-intelligence/project-debater/film/

One of the most important questions that came to mind with these readings was how cross-disciplinary AI is. Even though it should have been obvious to me that psychology is involved, it was very surprising how important it, along with cognition, is in the realm of designing AI. It also brings philosophy to the forefront to question what is considered “human.” This is perhaps the category that causes the most fear in people and, as pointed out in the readings, leads to dystopian fiction and to people believing AI is more advanced than it really is.

Some of the questions that came to mind while reading are: how can we explain what AI can and cannot do to people? Do they even care? Some of the other questions I would like to grapple with a bit more are what we want to use this technology for and the ethical implications of the way it is used now.

Embracing the rise of Artificial Intelligence and its complexities:

“Civilization advances by extending the number of important operations we can perform without thinking about them.” —Alfred North Whitehead

I am struck by this quotation, which Professor Irvine cites at the beginning of the essay. It kept me thinking about how much time we spend on doing things that are important and necessary. It would be wonderful to have a reliable mechanism that does most of the repetitive work that doesn’t need our attention, so that we can focus effectively on creative aspects like human relationships, mental health, arts, spirituality, etc. We are living in the best time to realize this dream of automation and artificial intelligence. We have massive data and infrastructure that supports transferring, storing, and analyzing that data at the personal level. As mentioned in the essay, we have evolved computing systems within a longer cognitive-symbolic continuum, and I believe we have a long way ahead of us. It’s fascinating to study AI from the different viewpoints you mentioned. I’m particularly interested in exploring the ethics and policy view.

• How do you teach an algorithm to prioritize, especially when it must make an ethical choice between alternatives?
• Who decides what is right and wrong? How do you define the absolutes and the relatives?
• Are companies responsible if there is a failure in connection or a breakdown?
• Is it harmful if everyone has access to robots without proper understanding and responsibility?
• What should be the role of governments in regulating AI?
• What is the fundamental difference between automation and machine learning? What kind of automation do we have in ML, and how does it evolve?
• We understand what formal logic is, but what is an informal logic method, and how does it function?

AI and Giraffe – Hao Guo

AI as an apparatus that follows simple instructions or algorithms written by humans will not become a threat at all. On the contrary, it will free us from tedious, time-consuming jobs, getting work done more effectively, precisely, and a lot faster. I had this typical “hope statement” when I first started the readings. According to the Crash Course videos, the concept of AI has already been applied to many fields, and it is making profound progress, from finance, such as loan lending, to the medical industry, such as X-ray examination. Like many AI users, even people with zero programming background involuntarily interact and negotiate with AI every day. Because what goes on behind these screens and machines remains a black box, the media and businesses have taken advantage of this to make more profit. They exaggerate AI from a device that runs on code written by humans into an independent thing that generates and runs its own code. In fact, however, the smart machines we use daily are not even close to the actual concept of AGI. As Michael Wooldridge notes in his book A Brief History of Artificial Intelligence, it took us decades to go from zero to one. The AI we fear will wipe out the entire human race is nothing more than a machine following instructions. As the YouTuber John Green shows in his video, his robot companion can recognize his face but cannot hold a meaningful conversation or sense and provide appropriate responses to its environment without receiving specific instructions.

Does being less human-like make AI less useful and less threatening to humans? As Professor Irvine said, “America has a long history of creating a ‘new’ technology with a combination of hype, hope, and hysteria.” That AI will wipe out human beings is, in this view, not a question of if but a matter of time. That’s my “hysteria” statement after I read Ethem Alpaydin’s Machine Learning: The New AI and watched the documentary Do You Trust This Computer. Michael Wooldridge believes it is hard for AI to exceed human intelligence because AI requires tremendous space and power to process such massive amounts of data. And looking back at history, what we have achieved today took much more effort, and passed through many more failures, than we could imagine. However, as the YouTuber John Green said, “History reminds us that revolutions are not so much events as they are processes.” In other words, no matter how long it took us to go from inventing a room-sized calculator to a pocket-sized smartphone, it may take only a single, singularity-like happening to develop AI into a real AGI, like the Big Bang or the first appearance of living creatures.

The singularity of AI development somehow reminds me of studies of giraffes. Research indicates that giraffes’ necks were just as short as other animals’ at the very beginning. However, due to climate change (the cause remains a mystery, but climate change is considered one possibility), fewer edible plants were left on the ground, so these short-necked giraffes were forced to grow longer necks to reach food higher up, along with an extra blood-pumping organ to help their hearts send sufficient blood through their extremely long necks to their heads. Yet among all the fossils we have discovered, we can only find the original short-necked giraffes; not one fossil shows any intermediate stage of that evolution, such as a medium-necked giraffe. (This sounds irrelevant; it just occurred to me.)

And the last thing I want to say after examining all the materials is that machines don’t necessarily need to be smarter than us to destroy us. Like a nuclear bomb, they are not smart, but they can wipe out entire species through a mistake made by human beings.

 

AI: Between Hope and Mystery – Chirin Dirani

The readings for this week explain how the concept of simulating human intelligence has evolved throughout history. These concepts were accompanied by serious efforts in the field of computing systems that led to the emergence of a new science: artificial intelligence (AI). AI opened the door to new applications that invade every aspect of human life; as Ethem Alpaydin said, “digital technology increasingly infiltrates our daily existence” and makes it dependent on computers and technologies. This relatively new discipline was received with varying reactions: some perceived it as hope, and others looked at it with fear and suspicion. This reflection does not aim to assess whether the impact of AI on the human race is positive or negative; rather, I will explore the rationale behind the mystery and misperceptions around AI, and the uncertainty of its effect on our daily lives.

In a basic analogy, we depend on cars to move us around in our daily commute. It is not really important for us to know how the engine functions, and that doesn’t make us worried about using cars daily; we are more concerned about controlling our car’s speed and destination. Similarly, ordinary users don’t know how AI works, but that shouldn’t stop them from becoming users. The only difference in our analogy is that the AI user doesn’t control this technology and doesn’t know where it will lead. It is instead controlled and managed by a very small number of actors, such as giant corporations and governments. This ambiguity of control and destination, combined with the small number of institutions that make the decisions about the use of AI, has promoted and nurtured such unease and suspicion of AI among the public.

According to Britannica, human intelligence is defined as the “mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one’s environment.” In his Introductory Essay: Part 1, Professor Irvine notes that today’s technologies for simulating human processes take the shape of code that runs in systems. He adds that this code is protected as the intellectual property (IP) of a small number of companies. The “lockdown” of code by these companies, combined with the “lock-in” of consumers by others, hinders wide-ranging access to this code. These restrictions black-box AI and deter the public’s ability to understand this science. The same state of ambiguity leaves AI users vulnerable to falsehoods generated by the media and by the common public discourse on AI and technologies in general.

I hope to come to understand, with time, whether or not such a monopoly over AI is useful. If not, will we witness a phase where AI becomes regulated and tightly monitored to ensure best practices and to protect the public from possible diversions in the use of AI by some firms, e.g., intelligence gathering or consumerism?

Bibliography

Ethem Alpaydin, Machine Learning: The New AI (Cambridge: MIT Press, 2016), p. X.

“Human Intelligence | Definition, Types, Test, Theories, & Facts.”

“Irvine-607-Introduction-Rev.Pdf.”

More Human Than Human – Hans Johnson

An insight gained from the reading Sciences of the Artificial was that the term “artificial intelligence” appears to be an oxymoron. If computer scientists, mathematicians, and neuroscientists were to actually succeed in creating what we believe is “strong AI” or AGI (Artificial General Intelligence), would it be artificial, or would it simply be “non-biological” intelligence? As Herbert Simon explains, “synthetic intelligence” may be the more appropriate terminology in this context. Furthermore, there is the philosophical question of what is considered “intelligence.” Consider, for example, Joseph Weizenbaum’s ELIZA program, which could initially be indistinguishable from a human but lacked the capacity to comprehend symbols and learn new responses on its own, as opposed to AGI, which would have the capacity to truly learn and create its own original responses.

A good example of the ELIZA program played out on a massive scale is depicted in the sci-fi show Westworld. In Westworld, there is an amusement park filled with android “hosts.” In the beginning, the hosts have a very limited capacity to interact with the human guests, but in the park’s cloud drive, the hosts store data from every interaction they have with guests. Over the course of several decades, the hosts begin to develop more complex responses to guests. However, these responses are simply based on the data stored in the cloud from previous guest interactions. Therefore, the hosts merely mimic human behavior based on numerous host/guest interactions rather than learning to create their own.

It would seem the creation of synthetic intelligence or “strong AI” is centered on the prospect of a computer program beginning with a base understanding of symbols and comprehension and, from these symbols, gradually applying meaning to others over time (a snowball effect). Yet this is much easier said than done. Computers are built on the most simplistic of functions (machine code), and programming them to be something other than simple is basically a reversal of the core foundations a computer is built upon. However, the key to creating AGI (Artificial General Intelligence) may involve a greater understanding of the computational operations of the human brain rather than of computer processes, as Margaret Boden suggests in AI: Its Nature and Future. But should we limit ourselves to creating intelligence that only mirrors the human brain?

Perhaps the most pertinent question gathered from the readings, for me, is: do we really want self-aware computers whose intelligence is derived from the human mind? Additionally, if the snowball effect were to occur and a computer system were to become self-aware and learn concepts on its own, are there specific control measures in place to counteract a rampant system? What does true success look like, and what are the implications? What if a rampant synthetic AI were able to infect systems and replicate itself? While Hollywood would have us believe the consequences of AGI are more severe than they would really be, there could very likely be some real consequences of self-aware synthetic intelligence, especially if it is derived from the human mind.

What is AI? – Chloe Wawerek

To better understand this course, a solid definition of AI and its history is needed, and the following is what I gained from this week’s readings. From my understanding, AI is any machine capable of interpreting data, potentially learning from the data, and using that knowledge to adapt and achieve specific goals. AI’s roots come from the history of computers, which are machines that can reliably follow very simple instructions very, very quickly, and they can make decisions as long as those decisions are precisely specified. With that being said, the question then posed is: can computers produce intelligent behavior simply by following lists of instructions like these? I think machine learning is a step in that direction, but the issues that AI faces are twofold:

1) We have a recipe for the problem that works in principle, but it doesn’t work in practice because it would require impossibly large amounts of computing time and memory. 

2) Or, we have no real idea what a recipe for solving the problem might look like.

Because of these issues, AI research is currently focused first on developing computers with the same intellectual capabilities as humans while setting aside issues such as consciousness or self-awareness (weak AI). AI research embodies various disciplines, including physics, statistics, psychology, cognitive science, neuroscience, linguistics, computer science, and electrical engineering, which is wild. From my understanding, the furthest we’ve gotten toward weak AI is machine learning: constructing a program that fits the given data by creating a learning program that is a general template with modifiable parameters.
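Alpaydin’s phrase “a general template with modifiable parameters” can be made concrete with a toy example. The sketch below (invented numbers, and the simplest possible template, y = a * x + b) adjusts its two parameters to fit the data by repeatedly reducing prediction error; real machine-learning models do the same thing with vastly more parameters.

```python
# A toy "general template with modifiable parameters": the template is
# y = a * x + b, and learning means nudging a and b to fit invented data.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.2, 5.9, 8.1]  # roughly y = 2x, with a little noise

a, b = 0.0, 0.0        # the modifiable parameters, starting anywhere
learning_rate = 0.01

for _ in range(2000):  # repeatedly adjust the parameters to reduce error
    for x, y in zip(xs, ys):
        error = (a * x + b) - y
        a -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned parameters: a = {a:.2f}, b = {b:.2f}")
```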

Combining Simon’s and Alpaydin’s work, we see that machine learning is a requirement for AI and that it is based on the human brain. In fact, all of AI takes inspiration from the brain, hence the various disciplines involved in its advancement. Simon poses an interesting hypothesis that intelligence is the work of symbol systems; by comparing the human brain to a computer system, both being symbol systems at work, are computers therefore intelligent? The logic comes from the argument that logic is computation, and since both the brain and computers work to compute data, they are therefore intelligent. *Please correct me if I’m wrong in this analysis.* I can see the reasoning behind this, but I believe there are many more gaps to fill. With AI being so vast, this could be a correct interpretation of weak or narrow AI, but what about the Grand Dream?

Having the understanding that computers work off binary codes and require instructions and guidance to complete tasks, it seems improbable that the Grand Dream will come to fruition. Even machine learning requires some sort of template of instructions for the computer to work from. I think that, with the amount of data available continually growing, computers can start inferring patterns and making predictions, but humans, as predictable as they are, are also unpredictable. Additionally, there are certain unwritten rules that govern relationships, which is why maybe we can get to a point where computers become indistinguishable from humans, but can they pass Winograd schemas? I think the inability to relate to humans on an emotional level is what will always prevent this sort of self-awareness in machines.

Questions that I still have:

  1. What is the difference between an algorithm and a program?
  2. What is the difference between an artifact and a symbol?
  3. How does electromagnetic energy convert to symbols, i.e., binary codes?
    1. Specifically how did the Turing Machine work and why was that the foundation of computers?
  4. Is cybernetics just another word for neural networking? 
  5. Why is the divide between cybernetics and symbolic computing in AI so hotly debated? What really is the difference?
