
AI’s PR Crisis: Reframing the Narrative

This week’s readings provided a conceptual parallel to last week’s assigned Guardian article by Naughton, which examined the pernicious effects of the way the media presents AI to the broader, more credulous public. Responsible framing and media narratives play a huge role in de-blackboxing these AI systems and shedding light on the behind-the-scenes action, if you will, of machine learning. In accordance with Gerbner’s cultivation theory (Gerbner, 1976), entertainment media also has an effect here, as audiences often cement their sense of reality based on the content they consume, without prior knowledge of or insight into the way these systems work.

The often sensationalized depictions further perpetuate “sociotechnical blindness” (Johnson & Verdicchio, 2017), in which most people are unaware of the key role played by humans in the design and machine learning of AI systems. Johnson & Verdicchio suggest closing the “semantic gap” that has been created by referring to AI entities as autonomous, or by the suggestion that they, by themselves, are “intelligent.” The example of the “autonomous” Roomba was especially salient here, as the article sheds light on its workings by elaborating on the following simple format:

Environmental cues + Software (internal programming) = Movement across the room

While the Roomba may be seen as “unpredictable,” the limits of its behavior are known and visible to the naked eye, and it is therefore easier to de-blackbox. Johnson & Verdicchio explain this by saying, “we know the Roomba will not climb up the walls or fly because we can see that it doesn’t have the mechanical parts necessary for such behavior.” The point is that AI has something of a PR image crisis that needs to be reframed – it is often conceptualized as “the big unknown,” whereas further illuminating the connection between AI and society in mass media would aid in greater understanding of the narrative.
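To make the format concrete, here is a minimal sketch of such a reactive agent in Python. The sensor names and rules are hypothetical, invented purely for illustration rather than taken from any actual Roomba firmware:

```python
# A toy version of "environmental cues + internal programming = movement".
# Hypothetical sensors and rules, for illustration only.
import random

def next_move(bump_sensor: bool, cliff_sensor: bool) -> str:
    """Map environmental cues to movement via fixed, human-written rules."""
    if cliff_sensor:        # cue: a drop-off (e.g., stairs) detected below
        return "back up"
    if bump_sensor:         # cue: collision with a wall or chair leg
        return random.choice(["turn left", "turn right"])
    return "drive forward"  # the default is also a human designer's choice

# The random turn makes the agent look unpredictable, but the limits of its
# behavior are fixed: no rule here could ever produce "climb the wall" or "fly".
print(next_move(bump_sensor=True, cliff_sensor=False))
```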

Examples of machine learning and inductive bias can also be applied to concepts explored in the other readings. According to Alpaydin, “the aim of machine learning is rarely to replicate the training data but the correct prediction of new cases” (Alpaydin, p. 39). However, correlation does not always prove to be causation, so I am interested in delving deeper into instances where inductive bias works against machine learning, as in the Fibonacci sequence example.
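As a toy illustration of inductive bias working against a learner, we can give a model the wrong assumption – say, that the data is quadratic – and fit it to the first few Fibonacci numbers. This is a sketch of the general idea, not Alpaydin’s exact example:

```python
# Fit a quadratic polynomial (the model's assumed inductive bias) to the
# first six Fibonacci numbers, then predict later terms.
import numpy as np

fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
n_train = 6
coeffs = np.polyfit(range(n_train), fib[:n_train], deg=2)

for n in range(n_train, len(fib)):
    predicted = np.polyval(coeffs, n)
    print(f"n={n}: true={fib[n]}, predicted={predicted:.1f}")

# The fit tracks the training points reasonably well but falls further behind
# the true, exponentially growing sequence with each new term: the model
# captured a correlation in the sample, not the rule that generated it.
```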

Reframing the AI Discourse

In the article “Reframing AI Discourse,” Johnson and Verdicchio claim, “because AI always performs tasks that serve human purposes and are part of human activities, AI should be understood as systems of computational components together with human behaviour (human actors), and institutional arrangements and meaning” (2017, p. 577). This is useful in reframing our understanding of AI as a system that includes the human actors involved. This reframing strips away some of the mysticism created by popular media – the notion that AI is autonomous and involves no human interaction or intervention.

Autonomy of Computational Artefacts vs. Autonomy in Humans 

One of the AI myths prevalent in the readings is the fear that machines will outsmart us and therefore take control over humans in the future. The Johnson-Verdicchio article helps to clarify the definition of autonomy within computational artefacts. They say, “the less intervention needed by humans in its operation and the wider its scope of action, the more autonomous the artefact” (p. 580). Additionally, “the behaviour of computational artefacts is in the control of the humans that design them” (p. 584). Understanding that machines are not like humans – that they do not have their own interests and free will to act as they choose – debunks the myth that machines or robots will decide to take over human life. The only “choices” machines have are those that are programmed into their design.

The real concern, then, is what kind of people are designing AI. What happens when the wrong people instruct AI to do harmful things or to control populations? AI used for military purposes is one area that will need monitoring and has the potential to cause severe turmoil.

Given these findings, consider the article “Who Will Win the Race for AI?” by Yuval Harari, in which he expresses concerns over the human actors in charge of data and autonomous weapons. Harari states that China and the United States are the leaders in data mining, and that data may now be the most important resource in the world in terms of power and influence. He proposes that “the world could soon witness a new kind of colonialism – data colonialism – in which raw information is mined in numerous countries, processed mainly in the imperial hub, and then used to exercise control around the world” (Harari, 2019).

These questions come to mind: are autonomous weapon systems one of the real dangers of AI development? Are “dangerous emerging technologies” dangerous only because of who will have access to them? Is data colonialism a prediction, or is there evidence that it will be at the core of the “AI Revolution”?

After reading Johnson and Verdicchio, Harari’s concerns seem much more on target. Further research into the current use of autonomous weapons and data mining will be necessary to unpack the claims made in the Harari article. Additionally, human responsibility seems a better area for development and focus within current AI research and public discourse. Harari suggests that people need the tools to counter-monitor governments and large corporations against corruption and police brutality. He proposes that countries that will not be able to lead in AI development can invest their time and energy into regulating the superpowers and protecting their data.


References

Deborah G. Johnson and Mario Verdicchio, “Reframing AI Discourse,” Minds and Machines 27, no. 4 (December 1, 2017): 575–90.

Yuval Noah Harari, “Who Will Win the Race for AI?” Foreign Policy, 2019. https://foreignpolicy.com/gt-essay/who-will-win-the-race-for-ai-united-states-china-data/.

Margaret A. Boden, AI: Its Nature and Future (Oxford: Oxford University Press, 2016).

Amazon Go – Computational artefacts and sociotechnical systems

By Linda Bardha

There was a quote from our last class: “What we lack in knowledge, we make up for in data.” After the readings for this week, this sentence is even more powerful. As Alpaydin explains in his book, the data generated by all our computerized machines and services was once a by-product of digital technology, and computer scientists have done a great deal of research on databases to efficiently store and manipulate large amounts of data. Sometime in the last two decades, all this data became a resource; now, more data means more information that can be stored and used by algorithms for pattern recognition and prediction. Once we start to ask ourselves what can be done with this much information, data starts to drive the operation; it is not the programmers anymore but the data itself that defines what to do next.

Let’s take a look at Amazon Go, a service from Amazon that offers customers a new shopping experience with no lines and no checkout. Of course, this has a drastic impact on the economy and on the number of workers needed to run a store. And this is where the debate starts when it comes to using technology: from one point of view, we are making our shopping experience easier and saving time; from another, we are cutting the number of workers who would run the store. For the purposes of this post, I’d like to expand on the technology that makes this shopping experience possible. Whether I like the experience or not, we’ll have to see, since I haven’t tried it myself.

Amazon Go requires a store to be outfitted with machine vision, deep-learning algorithms, and an array of cameras and sensors that watch a customer’s every move. These sensors and cameras identify what every item is and detect when it has been picked up or put back, so the system can charge a shopper’s account.

Before you enter the store, you have to download the Amazon Go app and log in with your Amazon account. Once you have that, you open the app and scan a code, and that’s how you gain access to the store. Each shelf has sensors that track the weight associated with a product, and the cameras also feed in information on which item has been picked up. The items that you pick up and put in your bag are tracked in a “virtual cart” associated with the Amazon account you used to enter the store. Only when you leave the store is a receipt emailed to you and your account charged.
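To picture how those events might flow together, here is a heavily simplified sketch of a virtual cart that reconciles shelf weight changes with camera confirmations. All class, method, and item names are hypothetical; Amazon has not published its actual design:

```python
# Hypothetical sketch of a "virtual cart" driven by shelf and camera events.
from collections import Counter

class VirtualCart:
    def __init__(self, account_id: str):
        self.account_id = account_id   # linked when the entry code is scanned
        self.items = Counter()

    def on_shelf_event(self, sku: str, weight_delta_grams: float,
                       camera_confirms: bool) -> None:
        """A pickup shows up as lost shelf weight; a put-back as regained weight."""
        if not camera_confirms:
            return  # vision and weight disagree; a real system would flag this
        if weight_delta_grams < 0:
            self.items[sku] += 1       # item left the shelf -> add to cart
        else:
            self.items[sku] -= 1       # item returned -> remove from cart

    def checkout(self) -> dict:
        """Triggered by leaving the store; the receipt is emailed afterwards."""
        return {"account": self.account_id, "items": dict(+self.items)}

cart = VirtualCart("amazon:alice")
cart.on_shelf_event("granola-bar", -42.0, camera_confirms=True)  # picked up
cart.on_shelf_event("granola-bar", +42.0, camera_confirms=True)  # put back
cart.on_shelf_event("cold-brew", -310.0, camera_confirms=True)   # kept
print(cart.checkout())  # {'account': 'amazon:alice', 'items': {'cold-brew': 1}}
```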

Many of the “hows” – the exact algorithms used to make this experience possible – are not publicized, but there are informed hypotheses about how everything works.

An article in Wired magazine explains that the cameras placed everywhere around the stores, on shelves and above aisles, don’t use facial recognition technology but rather computer vision. Think of it as a network of cameras that allows the software to see and determine what an object is, and also to keep track of when items get picked up from the shelves. This network of cameras also distinguishes one customer from another, so the right customer is charged for the things they bought. Behind the computer vision is deep learning, in which the systems perform advanced pattern recognition and draw conclusions from vast datasets.

As Alpaydin explains, the main theory underlying machine learning comes from statistics, where going from particular observations to general descriptions is called inference and learning is called estimation; classification is called discriminant analysis in statistics.

“Machine learning, and prediction, is possible because the world has regularities. Things in the world change smoothly. We are not ‘beamed’ from point A to point B, but we need to pass through a sequence of intermediate locations” (Alpaydin).

So, once you try to “de-blackbox” a term such as machine learning, you understand that at its basis lie statistics, statistical analysis, and mathematical models that have been used in many different fields; only recently did these become hot topics in computer science and information science.
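One way to see that lineage is to perform what machine learning calls “classification” with a method statisticians have long called discriminant analysis. Here is a small sketch on synthetic data (the two Gaussian clusters are invented for illustration), using scikit-learn:

```python
# Classification as discriminant analysis: fit ("estimation"), then predict
# new cases ("inference"). Synthetic data for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
class_a = rng.normal(loc=[0, 0], scale=1.0, size=(100, 2))  # one "regularity"
class_b = rng.normal(loc=[3, 3], scale=1.0, size=(100, 2))  # another
X = np.vstack([class_a, class_b])
y = np.array([0] * 100 + [1] * 100)

model = LinearDiscriminantAnalysis().fit(X, y)       # estimation
print(model.predict([[0.5, 0.2], [2.8, 3.1]]))       # inference: [0 1]
```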

As Deborah G. Johnson and Mario Verdicchio suggest in their research, a critically important ethical issue facing the AI research community is how AI research and AI products are responsibly conceptualized and presented to the public. They argue that most of the issues relating to AI can be tackled by distinguishing AI computational artefacts from AI sociotechnical systems, which include computational artefacts.

We need to keep in mind that as more new technologies come into use, it is the human actors who make these technologies what they are – not the computational artefacts within them, nor the “buzz words” that media and businesses use from a profit perspective.

References:

Ethem Alpaydin, Machine Learning: The New AI. Cambridge, MA: The MIT Press, 2016.

Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know. Oxford University Press, 2016.

Deborah G. Johnson and Mario Verdicchio, “Reframing AI Discourse,” Minds and Machines 27, no. 4 (December 1, 2017): 575–90.

Matt Burgess, “The Technology Behind Amazon’s Surveillance-Heavy Go Store,” Wired, January 22, 2018. https://www.wired.co.uk/article/amazon-go-seattle-uk-store-how-does-work


Artificial Intelligence as a Catchall

Without a doubt, media representation of artificial intelligence is too vague and simplistic to communicate tangible, actionable information to readers and citizens. This simplicity and quasi-mysticism, as Johnson and Verdicchio discuss, affect the discourse and thus the action taken concerning the development of artificial intelligence. In my eyes, the largest issue the duo highlights is that the current discourse of artificial intelligence produces a sociotechnical blindness, or “blindness to all of the human actors involved and all of the decisions necessary to make AI systems” (p. 587). This sociotechnical blindness creates a myth in which artificial intelligence is completely out of human hands, when the exact opposite is true: by definition of artificiality, all artificial intelligence systems are created by human decisions. However, when we ascribe agency to the artificial intelligence system, not only does that absolve the creators of blame when issues arise from the artificial intelligence, but it also creates an environment in which citizens feel powerless.

Granted, artificial intelligence is a wide field with branching paths of specialization and epistemology, but using the umbrella term “artificial intelligence” to describe specific programs within media representation continues this trend of sociotechnical blindness. Imagine if, in a story about elephants, we just referred to them as mammals. Technically, we would be categorically truthful, but we would be missing out on a lot of nuance that could lead the reader to make false assumptions about mammals as a whole compared to the specific nature of elephants.

Obviously, AI is more complex than a single entity.

Even one more layer of complexity given to discussion of artificial intelligence would increase the nuance of understanding, and thus would work to de-blackbox the concept of artificial intelligence to most readers.

Even in an elementary survey of popular books about artificial intelligence, I found that many of the authors worked to mythologize artificial intelligence and contribute to its black-boxing as a whole. Boden’s description of artificial intelligence as a virtual machine drew an allegory to an orchestra, in which a person does not single out different instruments but listens to the music as a whole – the created product of the virtual machine that is the orchestra. This concept of modularity creating a larger whole from constituent parts is brilliant, but the idea that each piece cannot be singled out or understood seems harmful given the trend of oversimplification in media coverage of artificial intelligence. Sociotechnical blindness will continue if writers continue to think readers need such simplified explanations.


References:

Boden, M. A. (2016). AI: Its nature and future (First edition). Oxford, United Kingdom: Oxford University Press.

Johnson, D. G., & Verdicchio, M. (2017). Reframing AI Discourse. Minds and Machines, 27(4), 575–590. https://doi.org/10.1007/s11023-017-9417-6

Designing Creativity for AI

The relationships between pattern recognition and creativity, and between pattern recognition and emotion, were of particular interest in this week’s readings. Creativity and emotion are human conditions that can often seem random, with uncertain causes. Though randomness and uncertainty can be predicted with probability theory and statistics, creativity and emotion are often a product of a person’s interactions with the outside world. Human-computer interactions have produced opportunities for AI to support users through the replication of creativity and the recognition of emotion. Unfortunately, the ability of AI to accurately produce creativity or emotion seems much further off than its ability to perform natural language processing. The dissonance between AI, emotion, and creativity may only be ameliorated through the integration of neural networks that mimic the diffusion of neuromodulators through the brain. Though AI cannot be designed to feel emotion, can the process of neuromodulation be designed into an algorithm so that AI is impacted by emotion the way humans are?

The three main forms of creativity – combinational, exploratory, and transformational – are all based on the recognition of the rules of existing cultural artefacts and the modification of those rules. This is not unlike the pattern recognition and machine learning needed for natural language processing; however, creativity and cultural artefacts are often born of emotion. The impact of emotion on creativity is powerful, and it also hinders AI’s ability to truly produce creativity. Though research groups are beginning to study AI and emotion, the work seems aimed at pattern recognition of emotions rather than a psychological understanding of them.

Boden’s book often referred to the forlorn psychological roots of AI research, which contributed to the development of modern neural networks. This acknowledgement led me to one major question about the future viability of AI and creativity. Through this week’s reading, I found that the focus of researchers was predominantly on designing AI to mimic some aspect of the human condition and correcting any diversion from the standard. If creativity is truly a diversion from rules, designers of AI will likely correct the deviations of AI and stunt any potential human-like creativity. Will a deeper understanding of the psychology of the emotions that foster creativity help design AI to mimic human creativity?

Alpaydin, E. (2016). Machine learning: the new AI. MIT Press.

Boden, M. A. (2016). AI: Its nature and future. Oxford University Press.

Veiled Veillance & Cyborgian Supervillains

I began this week’s batch of readings with the shortest in length, but I found The Society of Intelligent Veillance (Minsky et al.) to be a fascinating article. Even in the first section, where the authors discuss the “society of mind,” I caught myself asking questions about the implications of their ideas. They write, “Natural intelligence arises from the interactions of numerous simple agents, each of which, taken individually, is ‘mindless,’ but, collectively, give rise to intelligence” (Minsky et al., 2013, p. 13). This makes sense, and in many cases, more minds on a task can lead to better, more diverse, and sometimes even unpredictable ideas and solutions. But how do the notions of groupthink and hive mind (with their generally negative connotations) factor into this quote? Oftentimes, additional people on a task can lead to mindless agreement and blind following as a way to finish the task quickly along the path of least resistance. The authors apply their concept of the “society of mind” to modern computing and the rise of distributed, cloud-based computing across the internet, going so far as to quote the slogan of Sun Microsystems: “The network is the computer” (p. 13). Since computers often reflect natural human bias in their programming, are computers subject to the same negative aspects of groupthink?

The section of the article on the “Cyborg Age” also caught my attention, where the authors write, “Humanistic intelligence is intelligence that arises because of a human being in the feedback loop of a computational process, where human and computer are inextricably intertwined” (Minsky et al., 2013, p. 15). Machines have acted as extensions of our bodies and senses ever since their inception, but now we’ve become reliant on them to the point of wanting to incorporate them as wearable devices, and even possibly implant them into our biological makeup. This idea brought to mind the many depictions of cyborgian technology in popular media, such as (in order of realism) Will Smith’s prosthetic machine arm in I, Robot (a replaced, enhanced limb), Doc Ock’s tentacle arms in Spider-Man (added, enhanced limbs), and Wolverine getting his bones fortified with adamantium in X-Men. There are countless other examples, and obviously our imaginations can sometimes take us further than science. But while these types of cyborgian innovations hold tremendous potential for the human race, when will this kind of technological advancement end? Maybe it’s just the sci-fi/comic book nerd in me, but I hope it doesn’t take a destructive cyborg supervillain in 20+ years to make us realize we need to pump the brakes on these technological extensions and enhancements to our human bodies.

I also enjoyed contemplating the Society of Intelligent Veillance, and how we are now subject to “both surveillance (cameras operated by authorities) and sousveillance (cameras operated by ordinary people)” (Minsky et al, 2013, p. 14) in our everyday public activities. We are in a modern, living panopticon, enforced and perpetuated through our own insatiable internet use. So many of the videos we see on the news and social media now come from citizen journalism: people with smartphones catching an ugly encounter in a brand-name restaurant, or a racist incident on a train platform, etc. An especially chilling line from this section is, “If and when machines become truly intelligent, they will not necessarily be subservient to (under) human intelligence, and may therefore not necessarily be under the control of governments, police…or any other centralized control.” Who will these super-intelligent and capable machines answer to?

The answer, according to the level-headed Johnson & Verdicchio (2017), is computer programmers and engineers. They write, “To get from current AI to futuristic AI, a variety of human actors will have to make a myriad of decisions” (Johnson & Verdicchio, 2017, p. 587). The authors discuss how AI is often misrepresented in popular media, news coverage, and even academic writing because of (1) confusion over the term “autonomy” (machine autonomy vs. human autonomy) and (2) a “sociotechnical blindness” that neglects to include the human actors “at every stage of the design and deployment of an AI system” (p. 575). This is useful reasoning to keep in mind when becoming fearful about artificially intelligent cyborgian supervillains. It’s the type of reassuring logic we need to maintain faith in the positive development and incorporation of AI in our digital age.


References:

Johnson, D. G., & Verdicchio, M. (2017). Reframing AI Discourse. Minds and Machines, 27(4), 575–590. https://doi.org/10.1007/s11023-017-9417-6

Minsky, M., Kurzweil, R., & Mann, S. (2013). The society of intelligent veillance. In 2013 IEEE International Symposium on Technology and Society (ISTAS): Toronto, Ontario, Canada, 27–29 June 2013. Piscataway, NJ: IEEE.

Understanding biased AI from a sociotechnical perspective

In October 2018, Reuters revealed that Amazon had built a hiring tool that was found to be biased against women. The author of the article “Amazon scraps secret AI recruiting tool” explained, for example, that if the word “women’s” or the names of certain women’s colleges appeared in a candidate’s resume, the resume was ranked lower. This case shows that artificial intelligence is not neutral and that machine learning has limitations.

The first question I want to ask is what led to its bias. Did Amazon’s hiring tool learn it by itself? To answer these questions, we first need to figure out how the tool works. According to the report, the hiring tool employed machine learning to give job candidates scores ranging from one to five stars. As we know from Alpaydin’s work, machine learning techniques find patterns in large amounts of data. The hiring tool here is an application of machine learning: it was trained on the resumes submitted to the company over a 10-year period. The goal of the hiring AI is to find patterns in those resumes through machine learning and then calculate the qualification of a candidate based on the observed patterns.

So one could say that Amazon’s hiring tool taught itself that male candidates were more qualified for tech-related jobs. However, the real problem goes beyond that simple conclusion. Johnson and Verdicchio argued that AI systems should be thought of as sociotechnical ensembles: combinations of artefacts, human behavior, social arrangements, and meaning. Any computational artefact has to be embedded in some context in which human beings work with the artefact to accomplish tasks. They warned that we cannot solve sociotechnical issues by focusing only on the technical part of the problem. In this case, the historical resume materials are the context. Most of the resumes came from male candidates, and this gender imbalance over the past decade had a significant impact on the results of machine learning. That is what made the AI biased, and it reflects the limitations of artificial intelligence. The hiring AI should therefore be used only as a supplementary tool. To solve this problem, a sociotechnical approach is needed; for example, people from diverse areas should work together to lessen the existing biased variables in the original data. But we need to keep in mind that artificial intelligence is not necessarily neutral. There is still a long way to go.
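A deliberately tiny sketch can show the mechanism. Below, a text classifier is trained on a handful of invented resumes whose labels inherit a historical imbalance, and the resulting model penalizes the word “women’s” without anyone having programmed it to. The data, labels, and setup are hypothetical, not Amazon’s system:

```python
# Toy demonstration: biased training labels produce a biased learned weight.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer java ten years",           # historically hired
    "developer python cloud systems",             # historically hired
    "captain women's chess club software",        # historically rejected
    "women's college graduate python developer",  # historically rejected
]
hired = [1, 1, 0, 0]  # labels inherited from a decade of skewed decisions

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The token "women" (CountVectorizer drops the 's) gets a negative weight
# purely because of the patterns in the training data:
idx = vec.vocabulary_["women"]
print("learned weight for 'women':", model.coef_[0][idx])  # < 0
```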


References

Johnson, D., & Verdicchio, M. (2017). Reframing AI discourse. Minds and Machines, 27(4), 575-590. doi:10.1007/s11023-017-9417-6

Alpaydin, E. (2016). Machine learning: the new AI. MIT Press.

Amazon scraps secret AI recruiting tool that showed bias against women. (2018, October). Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G


Google’s Machine Translation

Tianyi Zhao

The most impressive application of machine learning in natural language processing is machine translation. According to Radiant Insights, Inc., the global machine translation market is expected to reach USD 983.3 million by 2022. Google Translate is among the most prominent products in the global market. In 2016, Google Translate made the transformation from statistical machine translation to neural machine translation, with multiple input methods.


Pattern Recognition

The recognition of optical and acoustic information makes up two significant segments of pattern recognition, and both are fully applied in Google Translate as image translation and speech translation. With the camera, the Google Translate app can quickly and easily capture features that can be recognized as language – a sequence of words drawn from the lexicon and coherent in semantics – and then deliver a real-time translation automatically. For example, when you travel to Greece and every landmark sign is in Greek, you can simply hold up your smartphone, and the camera will automatically capture the Greek characters and show the translation in the target language. Of course, there are always different fonts or handwritten characters that seem hard to recognize. However, by leveraging the learning program, the processing machine can quickly recognize them using the distinct regularities of each character, regularities that are generalized across and shared by all kinds of fonts.

Besides image recognition, speech recognition is heavily used in machine translation as well. Characters input as an acoustic signal can be identified as a sequence of phonemes, the basic speech sounds. As with the visual recognition above, there are different pronunciations of the same word owing to age, gender, or accent. In machine translation, the learning program learns only the features that relate to the words, not those of the speakers. However, Google Translate uses speaker-related features as well, for its conversation mode, which achieves not only recognition of the input words but also identification of the different people in the dialogue. To continue the Greece example: if you ask a local passer-by who does not speak English how to get to your destination, conversation mode can accomplish instant, real-time interpretation between Greek and English as your dialogue goes on.
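Both modes follow the same two-stage pipeline: recognize first (pixels or audio into text), then translate. The sketch below makes that structure explicit; every function body here is a stand-in stub, since the real recognizers and translator are trained models whose internals Google has not published in this form:

```python
# Schematic recognize-then-translate pipeline; all bodies are stubs.
def recognize_characters(image_pixels) -> str:
    """OCR stage: map visual features to a character sequence."""
    return "Ακρόπολη"  # stub: pretend the camera saw this Greek sign

def recognize_phonemes(audio_samples) -> str:
    """Speech stage: map acoustic features to words, ignoring speaker traits."""
    return "πού είναι το μουσείο"  # stub: "where is the museum"

def translate(text: str, target: str) -> str:
    """Translation stage: operates on text, whichever sensor produced it."""
    lookup = {"Ακρόπολη": "Acropolis",
              "πού είναι το μουσείο": "where is the museum"}
    return lookup.get(text, "?")

print(translate(recognize_characters(None), target="en"))  # Acropolis
print(translate(recognize_phonemes(None), target="en"))    # where is the museum
```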


Neural Machine Translation

According to Ethem Alpaydin, the process of neural machine translation starts with multi-level abstraction over lexical, syntactic, and semantic rules. A high-level abstract representation is extracted, and the translated sentence is then generated by “decoding where we synthesize a natural language sentence” in the target language “from such a high-level representation” (Alpaydin, p. 109). The era of phrase-based statistical translation has ended; the neural system translates an entire sentence at a time rather than cutting it into words. It uses context to find more accurate words and automatically adjusts the syntax toward more natural sentences that are smoother and more readable.
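That encode-then-decode structure can be sketched in a few lines of PyTorch. This is a minimal toy, assuming a single-layer GRU and random token IDs; a production system such as Google’s NMT uses much deeper networks with attention:

```python
# Minimal encoder-decoder sketch: compress the whole source sentence into a
# high-level representation, then synthesize the target sentence from it.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size: int, hidden_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)

    def forward(self, src_ids):
        _, h = self.gru(self.embed(src_ids))
        return h  # the sentence-level representation

class Decoder(nn.Module):
    def __init__(self, vocab_size: int, hidden_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tgt_ids, h):
        o, _ = self.gru(self.embed(tgt_ids), h)  # condition on the encoding
        return self.out(o)  # logits over the target vocabulary

enc, dec = Encoder(1000, 64), Decoder(1200, 64)
src = torch.randint(0, 1000, (2, 7))  # two source sentences, 7 tokens each
tgt = torch.randint(0, 1200, (2, 9))  # shifted target tokens for training
logits = dec(tgt, enc(src))           # shape: (2, 9, 1200)
print(logits.shape)
```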


Works Cited

Alpaydin, Ethem. Machine Learning: The New AI. The MIT Press. 2016.

Radiant Insights. “Machine Translation Market Size Worth USD 983.3 Million By 2022: Radiant Insights, Inc.” Global Newswire. Dec. 3, 2015.

Wu, Yonghui, et al. “Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation.” arXiv.org, Cornell University Library, Oct. 2016.


Languages, AI & Representation

This week’s readings shed light on foundational concepts regarding AI and computation while also clarifying myths and public misconceptions around them. In particular, I was drawn to the topics of big data and algorithms in combination with the public’s struggle to separate autonomy in these machines from what they consider human autonomy – a struggle that consequently gives human intervention at the various stages of these processes less relevance than it deserves.

When Boden talks about natural language processing, I couldn’t help but connect it to the ideas presented by Denning and Martell in Great Principles of Computing when discussing the evolution of the studies under the domain of AI: “The focus shifted from trying to model the way the human mind works to simply building systems that could take over human cognitive work.” The two ideas felt connected when looking at the example of online translators and how ineffective they are at the task we expect of them. If you’ve ever used an online translator, it becomes clear very quickly that it is not effective with whole sentences but does a fair job of finding synonyms and other uses when it comes to single words. It made me think of another example by the authors, in which a machine could use Chinese words and respond but could not have a comprehensive understanding of the Chinese language.

It seems as if the root of this failure goes back to the aim of building systems that take over human cognitive work instead of replicating the human mind. When it comes to language, there is a big gap between what the machine can produce and what the actual interpretation of the language is. As a Spanish instructor, I see it very clearly when my students attempt to translate whole sentences or paragraphs in Google Translate and become incredibly frustrated by how parts of the translation don’t match the grammar rules they have learned in class. I always tell them that language is not a literal translation of symbols with exact equivalents from one language to the other; rather, language is an interpretation based on context, culture, and historical and geographical background, combined with grammar rules or protocols that might have an equivalent in another language but that, most of the time, won’t have the same value in translation. It seems to me that this idea of two sets of values for symbols (some of which might change depending on variables) is difficult to accurately put into computation, which is why online translators are so frustrating when you’re multilingual.

When reading about the current state of AI discourse by Johnson and Verdicchio, it was satisfying to see such a clear breakdown of the questions and concerns I tend to have about the way AI is represented, or misrepresented, in public discourse. While it clarified where to put the weight of responsibility between humans and AI, it left open questions about how certain social decisions were made around the representation and design of AI. The biggest question that has plagued my mind, and continues to do so, is the (in my opinion) unnecessary gendering of AI and the implicit connotations it carries about gender in society.

Fictional representations of AI fascinate me. As an avid sci-fi enthusiast, I’ve always been intrigued by dystopian representations of robot/AI apocalypses and the gendering of AI in these contexts. Not because, as Johnson and Verdicchio’s article puts it, I fall into the trap of fearing annihilation by our machine overlords, but because these representations say more about how humans think about humans than about how humans think about machines. Underlying these representations are profound ethical critiques of the state of society and human behavior. Specifically, sci-fi dealing with the gender representation of AI in these dystopias says more about how we think about women than about how we think about machines.

Therefore, it is my opinion that, maybe on a subconscious level, we do understand that if AIs “decide” to take over and enact (gendered) violence against humans and annihilate them as an “inferior race,” it will be because they were designed that way by humans who also enacted gendered violence and annihilated what they considered “inferior races.”


References:

  • Peter J. Denning and Craig H. Martell. Great Principles of Computing. Cambridge, MA: The MIT Press, 2015. Chapters 1-2.
  • Margaret A. Boden, AI: Its Nature and Future (Oxford: Oxford University Press, 2016).
  • Deborah G. Johnson and Mario Verdicchio, “Reframing AI Discourse,” Minds and Machines 27, no. 4 (December 1, 2017): 575–90.

AI, Language and Discourse: An Ongoing Story

Boden states that the biggest challenges AI systems face with regard to natural language processing (NLP) are thematic content and grammatical form. What I wonder, though, is how often AI NLP systems should be updated to reflect up-to-date thematic content. Will colloquial language be reflected in AI’s natural language processing? What grants one dialect legitimacy over another when different subcultures may use their own “language” that an AI’s natural language processor may not register? I think this is a huge area of computer science and linguistics that calls for attention as AI development becomes more and more intelligent. Though a potential stretch, I can see a dialogue between the Boden and Alpaydin readings, in which Alpaydin’s discussion of using machine learning to decide what kind of data should be given to technology can be applied to the notions above about learning a broader range of language use for AI: research, potentially from linguists or sociocultural anthropologists, should be conducted to determine the types of “natural language” that AI should process.

Connecting back to last week’s discussion on the blurred line between human and AI, I see a resemblance in how individuals inside and outside of the technologically aware world define ‘autonomy.’ With regard to the Johnson/Verdicchio article, the importance of reframing AI discourse (especially in the media) is evident, and it is especially significant to create a discussion of what AI truly is (as Johnson/Verdicchio discuss). However, I do wonder whether the lack of dialogue between those aware and unaware of the AI industry is in any way intentional. If the industry’s and society’s sociotechnical blindness to the “human actors” and the contributions they make to the development and creation of AI systems is prevalent enough to create an inaccurate discussion of AI systems, then why has there been no successful means of conveying the truth behind it all? Is it the fault of AI researchers and their lack of acknowledgement? Is it the fault of popular/Western media that enjoys profiting off AI-aggressive narratives? Or could it merely be ignorance of it all because the technology is so new?