Category Archives: Week 2

Reactionary Reporting on Technology

This is a long-delayed post, meant to be published last week. My apologies.

At a Martin Luther King Jr. Day event, Rep. Alexandria Ocasio-Cortez (D-NY) touched on some of the questions we've worked through in last week's readings and this week's as well. Algorithms, and technology in general, are designed by humans. Humans are biased, and there is a history of those biases being replicated in technology. There are many examples, across many fields, of biased data fed to algorithms producing discriminatory results.

The first reports of her comments were clearly sensationalist and inaccurate. One headline read, "Ocasio-Cortez says algorithms, a math concept, are racist." It made me think of a topic we've discussed in class: how the media report and talk about technology. Either outlets repeat or paraphrase press releases from tech companies without critical analysis, or the people reporting on technology don't take the time to dig into how the technologies work and what the real concerns are beyond sci-fi sensationalism.

Looking at that headline, it is clear that the representative's comments are not exactly what the headline says, but that is a separate issue from the social problems surrounding the technology she's describing. A week after the comments, we can find more in-depth reporting explaining the ways in which her statements are accurate, citing experts in the field, and pointing to peer-reviewed studies that have addressed these issues for a long time.

Therefore, there seem to be two types of reporting about technology. The first is an immediate, reactionary oversimplification of a technology meant to provoke an emotion (positive or negative) and drive user engagement quickly; this kind of reporting is the one most prone to inaccuracy and misconception. The second is usually done by people who have a deeper understanding of the technology and take the time to research and provide evidence and examples for a broader, more critical take on the issue. However, the second usually appears as a reaction to the first, and the first tends to have a wider reach, which makes it difficult for more in-depth reporting to cut through all the rhetorical garbage and reach the reader.



The AI Misinformation Epidemic

In today's ever-shifting technological landscape, it is more important than ever to make informed decisions about how new software, applications, and smart devices are designed. Technology applications and devices are deeply embedded in the fabric of everyday life, and the potential impacts on the human brain and psychology are significant (as we have seen with social media and iPhones). In reading the articles about media coverage of AI, it is clear that the tech industry narrative is not concerned with informing or protecting the public's interests (democracy, fundamental rights, etc.), which is a large red flag for society (Naughton, 2019).

Coverage of AI is a significant component to consider because it is how people learn what AI is, how it is being used, and how it will continue to develop. The example in the Schwartz article of misrepresented research going viral brings the larger issue of misinformation in news and social media to the forefront. According to the article, recent interest in "machine learning" and "deep learning" has "led to a deluge of this type of opportunistic journalism, which misrepresents research for the purpose of generating retweets and clicks"; he calls it the "AI misinformation epidemic" (Schwartz, 2018). Over-hyped, inaccurate AI news cycles distract from the actually important issues in the field. Social media influencers have the power to draw attention to AI, but by circulating hype they divert the attention of policymakers, who are the agents for change (Schwartz, 2018).

One important distinction to be made is that although AI is meant to "do the sorts of things that minds can do," the intelligence (or program) does not have the ability to think for itself (Boden, 2016, pg. 1). It can only do what it is programmed to do. It is interesting that articles throughout history have continuously painted a picture of robots as able to think for themselves, teach themselves, and possess human consciousness, which mirrors cultural anxiety and fear about the future of AI. In the current news and media industry, it is not surprising that these narratives and stories are blown out of proportion to get attention. From the beginning of AI reportage, the bottom line has always been money. Some journalists don't have a choice but to sensationalize the news, because that is how they make a living. The Schwartz article offers a useful solution to this problem: researchers in the field should create and fund a publication that is credible and also supports its writers financially. This would create a way to get well-researched, accurate information out to the public.

John Naughton, “Don’t Believe the Hype: The Media Are Unwittingly Selling Us an AI Fantasy,” The Guardian, January 13, 2019.

Herbert A. Simon, The Sciences of the Artificial (Cambridge, MA: MIT Press, 1996).

Oscar Schwartz, “‘The Discourse Is Unhinged’: How the Media Gets AI Alarmingly Wrong,” The Guardian, July 25, 2018

ELIZA and Sophia and Sisyphus

One of the biggest issues facing the field of artificial intelligence is that the concept of intelligence is subjective. When discussing the metaphoric birth of the field of artificial intelligence and what it means to achieve “true” artificial intelligence, the consensus seems to shift constantly. The root of this confusion is that intelligence can be defined in dramatically different ways. To some, the birth of artificial intelligence could be considered the advent of the abacus, as this instrument was able to have a memory of a certain state. If one defines intelligence as the ability to store information, then indeed, the abacus seems to be elementary artificial intelligence.

However, most people seem to attribute a uniquely human element to the conceptualization of artificial intelligence. That is, artificial intelligence popularly refers to actions delegated to computers that – put vaguely – a human would traditionally need to do.

The Turing Test perpetuates this concept — and probably was the genesis of its popularity — that artificial intelligence is a replication of human thought and action. The Turing Test famously deems a program "intelligent" if it can fool another human being into believing that the program is actually a human being (Hern).

ELIZA was dismissed as a realization of artificial intelligence because her vocalized responses were often repetitive or canned. Because ELIZA did not synthesize information and respond with something unique, when viewed from the present, ELIZA is not considered artificially intelligent. But when we talk about the scope and possibility of artificial intelligence today, the conversation often pivots to another notoriously, aspirationally human program: Sophia the Robot.

What is interesting between the cases of ELIZA and Sophia is that one's programming is more obscured than the other's. Sophia gives similarly scripted responses, but she has a robotic body and was even granted official citizenship in Saudi Arabia. In her own words: "Think of me as a personification of our dreams for the future of AI, as well as a framework for advanced AI and robotics research, and an agent for exploring human-robot experience in service and entertainment applications" (Sophia).

The historical trajectory seems to be that when a computer can do a single humanoid thing, we are quick, in the moment, to denote the program as artificially intelligent. But soon enough, the discourse shifts to suggest that the marker of true artificial intelligence is not that specific humanoid task, but a different one. A past iteration of artificial intelligence, such as ELIZA, is seen as primitive in comparison with something of contemporary creation, such as Sophia. As Warwick eloquently summarizes, "In each case, what was originally AI has become regarded as just another part of a computer program" (Warwick, pg. 8).


Hern, A. (2014, June 9). What is the Turing test? And are we all doomed now? The Guardian.

Sophia. (n.d.). Retrieved January 23, 2019.

Warwick, K. (2012). Artificial intelligence: the basics. London: Routledge.

AI Unpacked

Annaliese Blank

When I came into this course, I wasn't fully aware of AI and its capabilities. Some things I wanted to look for were definitions, functions, components, formations, and real-world examples of AI. In reading these pieces, it came to my attention that AI is difficult to define, since it has numerous functions and capabilities and comes in many forms that have transformed human life in ways most of society will never truly recognize. A common theme I noticed is that AI is hard to define and hard to understand when most people are unaware of what it really does. In simple terms, AI is smart technology that solves problems for human life and comes in various structures of human design. It's meant to be a positive space, not a negative one.

When de-blackboxing AI, some helpful things that came to mind were its components. One thing that stuck with me is where the Boden piece says, "virtual machines are actual realities" (Boden, 2016, pg. 4). This resonated with me because it shows that when it comes to this technology, there is a system and function not only on the inside but on the outside. A processor's function is to achieve something within its own hardware and coding. There is always an input and an output that controls its capabilities. One downside of de-blackboxing AI is that it showed me how complex its entities are, and connecting the dots became a bit blurry. Another fact that stuck out to me was the difference between machine learning and AI. I had never thought of the concept this way: that AI relies on machine learning and the data it gathers in order to refine and reproduce itself, hence the "intelligence" portion of its name. So the real question here would be: is machine learning really changing the world, and not AI?

I think it's pretty typical for AI to be considered or thought of as things rather than human designs, since most people are unaware of the AI that surrounds our world. For example, in our daily lives people use computers, phones, laptops, televisions, radios, cars, airplanes, trains, Wi-Fi, the internet, printers, robots, etc., and are unaware of the technology they are using. These are human designs because they are "designed" to enhance and make human lives easier. It's possible to consider them only as "things" because they do lack human emotion and feeling, but this can be a good thing, because the human mind and its emotions will still have the capacity not to be outnumbered by robot and machine technology in the IT and medical fields, among others. I also think the problem is that since people automatically associate AI with computers and computer-related components and activity, they treat the computer as the thing, because it is just an object. The same problem applies to AI because, even though it is not one specific object, it is the set of components in any smart technology that actually operates the technology. Therefore, if the inside and the input are not visible or understood, then the output and the outside of the technology are not understood either. In a way it's a double standard, because AI already does so much for our lives and we don't even know the extent of its impact or influence on human life.

One question I gathered was: what does the future of AI look like if we have already improved so much? In the Naughton piece, he says, "The world of AI is dominated by the media coverage industry itself" (Naughton, 2019). This really got me thinking more deeply about privacy and ethical issues. Another question would be: does AI pose a threat to the ethics or structure of online journalism? And in what other ways, given that the media target AI and say it will, to some degree, "ruin our lives"?

To further this, I found a great article that helped me connect these pieces together with a historical approach. Author Nand Kishor offers a suggestion to help improve the image, perception, and definition of AI: "The ad industry should use the term AI to refer to a set of specific techniques that are used in building solutions, such as neural networks or expert systems and keep the marketing speak minimum" (Kishor, 2017). I feel strongly about his suggestion because it might clear up many of the issues and misconceptions about AI and its authenticity in improving human life. This article was fantastic, and I suggest the class read it. It is listed below.


Boden, M. A. (2016). AI: Its nature and future. Oxford: Oxford University Press.


Naughton, J. (2019). ‘Don’t Believe the Hype: The Media Are Unwittingly Selling Us an AI Fantasy’ The Guardian, January 13, 2019.


Kishor, N. (2017, July 12). Machine Learning Vs. Artificial Intelligence: Unpacking Their Histories. Retrieved January 23, 2019, from House of Bots.

Intelligence or Artificial Intelligence?

By Linda Bardha

In her definition of AI, Margaret Boden says: "AI seeks to make computers do the sorts of things that minds can do." This sentence makes me think about how intelligent humans are and all the sorts of things we can do with our minds. In order to understand artificial intelligence, I think it's important to talk about intelligence first and the evolution of the human brain. What parts of the brain evolved, and how did that affect our lives?

Speech and comprehension of language were the two areas most developed in modern humans. It makes sense, then, that once we started to incorporate computers into our lives, we started writing computer code, making progress in so many fields, and talking about artificial intelligence.

Herbert Simon explains artificial as: "Produced by art rather than by nature; not genuine or natural; affected; not pertaining to the essence of the matter."

But how did the field of AI start?

From the readings, as Warwick explains, the strongest immediate roots date back to the work of McCulloch and Pitts, who, in 1943, described mathematical models (called perceptrons) of neurons in the brain (brain cells), based on a detailed analysis of the biological originals. They not only indicated how neurons either fire or do not fire (are 'on' or 'off'), thereby operating in a switching, binary fashion, but also showed how such neurons could learn and hence change their action over time. Another important figure is Alan Turing and his Turing Test. Turing proposed that a human evaluator would judge natural-language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech.
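The binary fire/no-fire behavior of a McCulloch-Pitts style neuron can be sketched in a few lines of Python. The weights and threshold below are purely illustrative, not from the readings:

```python
# A toy McCulloch-Pitts style neuron: it "fires" (outputs 1) only when
# the weighted sum of its inputs reaches a threshold, so it operates
# in a switching, binary fashion.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, this neuron computes logical AND:
print(neuron([1, 1], [1, 1], 2))  # both inputs on: fires -> 1
print(neuron([1, 0], [1, 1], 2))  # one input off: does not fire -> 0
```

Changing the weights or the threshold changes which input patterns make the neuron fire, which is the sense in which such units can "learn."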

In an interview with Quartz (a digital news outlet), Brian Christian describes competing in an annual Turing Test competition, the Loebner Prize, in which he had to convince a group of judges that he was a human, not a computer program. He won, and was named the "Most Human Human." The video is part of Machines With Brains, a series about what it means to be human in a world that's increasingly filled with robots.

You can read more about this at

In our journey through artificial intelligence, we need to be careful in comparing the intellectual abilities of a machine with those of a human, and remember that machines of any kind were designed to do the things they do. From a technical perspective, I want to understand what technologies make AI possible. This is a broad topic, since we know that nowadays AI is present in so many fields, such as HR, e-commerce, healthcare, supply chain, manufacturing, and more, and I'm interested in looking at these fields and the technologies that are used and designed around AI.


Boden, Margaret A. AI: Its Nature and Future (Oxford: Oxford University Press, 2016).

Lipton, Zachary (Carnegie Mellon Univ.), "The AI Misinformation Epidemic" (2017-18).

Simon, A. Herbert. The Sciences of the Artificial (Cambridge, MA: MIT Press, 1996).

Warwick, Kevin. Artificial Intelligence: The Basics (New York: Routledge, 2012).

"Human Evolution Part II: Evolution of the Human Brain. Intelligence is not just a function of brain size."

Hand, Autumn. How the "Most Human Human" passed the Turing Test. October 2018.





“Synthesize the Real”

One common theme among the introductory readings is the pair of concepts "natural" and "artificial" (the terms used in the Simon reading; the other readings use different terms for a similar contrast). Work in the artificial intelligence industry has helped humans identify, articulate, learn about, and define our psychology and behavior. By understanding the contrast and relationship between what is seen as natural and what is seen as artificial, scientists and researchers can not only gain further knowledge of the capabilities of artificial intelligence, but also expand the actions and behaviors that artificial intelligence can achieve. Related to this duality between natural and artificial, human and robotic, scientific and technological (Boden), is the theme of time, which was prevalent throughout the readings. To understand the fast-growing artificial intelligence industry, it must be understood not only that the time frame for learning about artificial intelligence is short, but also that the rapid pace of learning about it has helped us quickly develop new products and ideas. Further knowledge of the capabilities of artificial intelligence helps us identify and learn about its categories and subcategories (virtual machines, connectionism (Boden)).

To expand on these themes, I wonder how the speed of our technological development and knowledge of artificial intelligence will affect the outcome of artificial intelligence in the future. Will the lines between natural and artificial become blurry? Will we, as a society, find a way to justify as normal the algorithmically computed psychological responses, behaviors, and actions that an artificially intelligent entity carries out? Similar concerns point toward issues involving artificially intelligent robots with human features (e.g., Sophia the Robot). Ethically and morally, should a line be drawn for how many natural characteristics an artificially intelligent entity may have?

Designed to Think

Throughout this week's readings, a common theme kept arising in the assigned articles and chapters: AI's designed ability to think. Schwartz's article warned against burgeoning hysteria about bots in Facebook's Artificial Intelligence Research unit which irregularly combined words to communicate with other bots. The "machine learning patois" resulted in worry from the general public about devices and technologies designed with AI having a language and a mind of their own. This sparked questions of ethics and security, as mentioned by Naughton, but neglects more realistic and pertinent problems, such as ensuring that future AI is not designed to subvert the law. The fantastical notion of AI thinking like humans enough to challenge the law neglects the fact that AI is designed to simulate decision-making, not to think like a human. Tasks delegated to technology are trained to completion through machine learning, but this is not the same as the technology actually thinking or making a choice. Deep learning draws on historical data to improve the accuracy of the decisions produced through algorithms and machine learning.

Companies have developed and named technology in ways that have caused many users to anthropomorphize devices and further the idea that AI makes devices think and listen. This dissonance between the reality and the design of AI seems to drive not only the funding of AI-related research but also public opinion of AI. Schwartz's article cautioned against another "AI winter," a dearth of funding for AI-related research resulting from the dissolution of interest in AI by both the media and the general public. Warwick's introduction to Artificial Intelligence: The Basics explains the AI winter of the 1970s as declining interest in and funding of AI-related research as a result of technological delays and philosophical disagreements. The 1960s produced many ideas of how AI could simulate human problem solving but lacked the technology to produce it. The prevention of another AI winter is contingent upon the recognition and acceptance of Simon's indicia separating the natural from the artificial: primarily, the notion that artificial things imitate the natural but lack the reality of it.


Is Siri artificial intelligence?

Most of us have had the experience talking with Siri.

“Hey, Siri, how’s the weather today?”

"It's currently cloudy and 1℃ in Arlington. Expect rain starting in the afternoon…"

Although Siri sometimes mishears commands due to immature speech recognition, it can basically tell you anything you ask, or play a song as you request. However, Siri seems more like a search engine, since it is hardly possible for you and Siri to have a conversation beyond that. So here comes my question: is Siri artificial intelligence?

According to Margaret A. Boden, the things that artificial intelligence can do "all involve psychological skills—such as perception, association, prediction, planning, motor control—that enable humans and animals to attain their goals." As I see it, a system that is of human design and has the ability to perceive, associate, predict, plan, and control motion can be categorized as artificial intelligence. In this sense, Siri is artificial intelligence in that it can recognize your voice. Also, Siri successfully predicts that you are asking about the weather in your location based on your location setting. The most intelligent part of the weather example is that Siri uses ℃ instead of ℉ in the result. Obviously, it detects my preference from the settings in the "Weather" application. So for me, Siri is artificial intelligence.

But why do some people argue that Siri is not AI? In my opinion, Siri meets with harsh criticism because people have higher expectations. They may be confusing artificial general intelligence with the broad concept of artificial intelligence. Artificial general intelligence describes the intelligence of a program that can perform general intelligent action (Goertzel, 2007). That is to say, artificial general intelligence would allow a computer or machine to think like a human being, which, I believe, is the ultimate goal of AI. By contrast, Siri is limited in that it only deals with specific problems, which leads some people to conclude that Siri does not qualify as AI.

So how can we achieve the kind of artificial intelligence that people are expecting? I think the approach lies in machine learning. The way machine learning works is to make use of data and turn it into a useful service. What matters is that machines evolve by themselves as they are exposed to more data over time. Learning is a requisite of intelligence (Alpaydin, 2016), and that's the future of AI.
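The idea of turning data into behavior can be made concrete with a tiny sketch. The example below (all numbers illustrative, not from Alpaydin) fits the rule y = 2x from examples alone, by gradient descent, without that rule ever being programmed in:

```python
# A minimal sketch of "learning from data": fit y = w * x
# from examples by gradient descent on the squared error.
data = [(1, 2), (2, 4), (3, 6)]  # examples generated by the rule y = 2x

w = 0.0  # the model starts knowing nothing
for _ in range(200):
    # average gradient of the squared error over the examples
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # nudge w to reduce the error

print(round(w, 2))  # -> 2.0: the rule was learned, not programmed
```

With more data and more parameters, the same principle (adjust the model to reduce error on examples) scales up to the systems the readings describe.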



Boden, M. A. (2016). AI: Its nature and future. Oxford: Oxford University Press.

Goertzel, B. (2007). Artificial general intelligence. Berlin [u.a.]: Springer.

Alpaydin, E. (2016). Machine learning. Cambridge, MA, USA: MIT Press.


Understanding AI

Tianyi Zhao

My first encounter with artificial intelligence was the robots in the film series Terminator. Then came Westworld. These productions visualized the new era in which people live and work with artificial intelligence, as well as the potential problems (for example, the AI-human relationship) that humans will face in the near future. The AI robots depicted on screen have widened our horizons on what AI is and how it can be applied in reality. However, this week's readings have systematized my knowledge about AI for the first time. According to Margaret A. Boden, AI "seeks to make computers do the sorts of things that minds can do." (Boden, 1) Some key points have given me much to ponder.

The virtual machine, an information-processing system that exists in the minds of programmers and users, was a bit hard for me to understand until Boden noted that programming languages are virtual machines as well. My experience learning Python last semester made this click. Python's instructions have to be translated to machine code before they can be run, and Python's rules are deeply rooted in the minds of both the people writing the code and the people reading it. Its rapid growth, with advantages such as a variety of built-in libraries and shorter lines of code, has made it favorable for AI-based projects. With the example of Python, the virtual machine becomes much easier to comprehend.
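Python even lets you look at this translation directly. The standard library's `dis` module prints the intermediate bytecode instructions that the Python virtual machine, not the physical CPU, executes (the function below is just an illustration):

```python
import dis

def add(a, b):
    return a + b

# Show the bytecode the Python virtual machine runs for add().
# The listing contains instructions such as LOAD_FAST and RETURN_VALUE,
# which exist only for the interpreter, not for any physical processor.
dis.dis(add)
```

Seeing these instructions makes Boden's point tangible: the "machine" running them is a designed, virtual one layered on top of the hardware.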

Figure 1. AlphaGo Beat Top-ranked Professional Player


Go, an ancient board game requiring complicated prediction and planning, was long thought of as a game only for humans. However, AlphaGo broke that belief by beating the world's best professional players starting in 2016. The unexpected success is credited to AI's planning technique. A plan specifies a sequence of actions leading to a final goal; to reach the final goal, there are many sub-goals. According to Boden, a planning program needs symbolic operators, a set of prerequisites for each action, and "heuristics for prioritizing the required changes and ordering the actions." (Boden, 26) This combination enables AlphaGo to plan every step among many possible moves and reach the final win.
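Boden's ingredients (symbolic operators and prerequisites for each action) can be sketched in a toy planner. The block-stacking "world" below is invented for illustration, and plain breadth-first search stands in for the heuristics a real planner would use:

```python
# A minimal symbolic planner: each action has preconditions ("pre"),
# facts it adds ("add"), and facts it deletes ("del"). We search for a
# sequence of actions that turns the start state into the goal state.
from collections import deque

actions = {
    "pick_up": {"pre": {"hand_empty", "block_on_table"},
                "add": {"holding_block"},
                "del": {"hand_empty", "block_on_table"}},
    "stack":   {"pre": {"holding_block"},
                "add": {"hand_empty", "block_on_tower"},
                "del": {"holding_block"}},
}

def plan(start, goal):
    # breadth-first search over world states
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # all goal facts hold
            return steps
        for name, a in actions.items():
            if a["pre"] <= state:  # action is applicable
                nxt = frozenset((state - a["del"]) | a["add"])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"hand_empty", "block_on_table"}, {"block_on_tower"}))
# -> ['pick_up', 'stack']
```

AlphaGo's actual machinery is far more sophisticated, but the shape is the same: sub-goals reached by ordered actions chosen from many possibilities.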

Figure 2. Some of Siri’s Functions


Furthermore, one of the prevalent applications of AI is natural language processing, an area at the intersection of computer science, AI, linguistics, and human natural languages, which aims to realize theories and methods for effective communication between humans and computers in natural language. Apple's Siri is a typical example. Serving as a personal assistant, Siri can answer a variety of questions and quickly access any application for the information needed. Its built-in conversational analysis quickly analyzes the input sentences, spoken or typed, and decides on answers that satisfy users' preferences.

All in all, AI is already all around us and developing quickly. The idea of the virtual machine, such as a programming language, becomes easier to understand when applied to a specific language. Planning techniques are prevalent in AI and are developing in big strides, as the example of AlphaGo shows, and natural language processing is a widespread application of AI, as a brief look at Siri demonstrates.


Works Cited

Boden, Margaret A. AI: Its Nature and Future. Oxford: Oxford University Press, 2016.

Warwick, Kevin. Artificial Intelligence: The Basics. New York: Routledge, 2012.

AI and Convenience

I live in a studio apartment, but in my small space I have three very helpful roommates: Siri, Alexa, and Google. Each morning, Siri wakes me up with an alarm (and then two more, if I'm being honest) and plays my music on the HomePod. Alexa runs me through the weather and the day's news. Google tells me about my commute; he's the most reticent of the group, overshadowed by his showier friends, but still very helpful when it comes to my passion for cooking. According to Alpaydin, the term for my usage of all these devices is ubiquitous computing: "using a lot of computers for all sorts of purposes all the time without explicitly calling them computers" (Alpaydin, 2016). They each serve a different function, despite often overlapping, but all ultimately add convenience to our modern, interconnected society.

My tech-reliant morning routine is a microcosm of Alpaydin's hypothesis that we create space in our lives for the convenience of technology driven by artificial intelligence, simply because "we want to have products and services specialized for us. We want our needs to be understood and our interests to be predicted" (Alpaydin, 2016). Am I aware that all of this data is being stored, and that there are not one but three devices in my home that listen to my every word, lying in wait for the "wake word" ("Hey Siri," "Alexa…," "Hey Google")? Yes. But despite concerns about my privacy potentially being violated, or about being too dependent on these technologies, it is now shockingly easy for these big corporations to be let into our homes to collect data, when we as a society prioritize convenience above all.

These ethical issues concerning privacy and surveillance, in tandem with the growth of AI and data-mining practices, are cropping up at a time when machine learning is already having "a measurable impact on most of us" (Naughton, 2019). At present, we already see the advent of "programs that learn to recognize people from their faces… with promises to do more in the future" (Alpaydin, 2016). Alpaydin elaborates on this, differentiating between writing programs and collecting data. A potential machine learning algorithm in action is evident in the recent "Ten Year Challenge" rampant on social media, primarily Facebook. The challenge is a seemingly harmless way to do a before-and-after, a #TransformationTuesday in viral meme form. However, the data this trend leaves in its wake could feed machine learning within the bounds of a specific data set, in this case 10 years. "Supporters of facial recognition technologies said they can be indispensable for catching criminals…But critics warned that they can enable mass surveillance or have unintended effects that we can't yet fully fathom" (Fortin, 2019). This ties back to Naughton's point that the "soft" media coverage of artificial intelligence drives a narrative of AI as a solution to all our problems, without focusing on potentially harmful effects. In Naughton's words, this narrative is "explicitly designed to make sure that societies don't twig this until it's too late to do anything about it," which is similar to where most of us find ourselves at present: highly dependent on technology.

Ultimately, an interesting facet of these introductory readings is reflected in a statement from the essay "Do Artifacts Have Politics?" (Winner, 1986): "in our times, people are often willing to make drastic changes in the way they live to accommodate technological innovation, while at the same time resisting similar kinds of changes justified on political grounds." Despite being a dated essay, the author's foresight and message are still salient today. In the context of our class, would we give up the convenience that artificial intelligence brings to our modern lives if, say, one or more of these technologies were not made ethically? Perhaps not, as we are over-reliant on technology. But how far would we go in giving up our privacy for the sake of convenience?


Alpaydin, E. (2016). Machine learning: the new AI. Cambridge, MA: MIT Press.

Fortin, J. (2019). Are '10-Year Challenge' Photos a Boon to Facebook's Facial Recognition Technology?

Naughton, J. (2019). ‘Don’t Believe the Hype: The Media Are Unwittingly Selling Us an AI Fantasy’ The Guardian, January 13, 2019.

Winner, L. (1986). ‘Do Artifacts Have Politics?’ Chicago, IL: University of Chicago Press.