
2.3 Billion vs. 15,000: Content Moderation Strategies and Desires on Facebook

INTRODUCTION

Try to imagine 2 billion of anything. It’s genuinely too hard for the human brain to comprehend such a gargantuan number, and yet Facebook serves 2.27 billion monthly users (Abbruzzese). With the dizzying array of opinions and demands that 2.27 billion global users have of the platform, how does Facebook decide whom to please and why? How does Facebook moderate the massive amount of user-generated content on its platform? And how is artificial intelligence used to automate that moderation?

Why does Facebook moderate content?

First and foremost, Facebook is a business that aims to make a profit. Most of Facebook’s revenue is gained from selling advertisements to third parties, as Mark Zuckerberg concisely explained during a congressional hearing in April 2018:

What we allow is for advertisers to tell us who they want to reach, and then we do the placement. So, if an advertiser comes to us and says, ‘All right, I am a ski shop and I want to sell skis to women,’ then we might have some sense, because people shared skiing-related content, or said they were interested in that, they shared whether they’re a woman, and then we can show the ads to the right people without that data ever changing hands and going to the advertiser. (Gilbery)

Within this business model, one can summarize the ultimate goal of Facebook’s relationship with its users in two steps: (1) keep users engaged and on the platform so that advertisements can be seen and clicked, and (2) encourage users to generate content so that Facebook can extract more detailed behavioral insights with which to target ads. Basically, Facebook operates within the model of surveillance capitalism to make a profit (Laidler).

Thus, Facebook has a bona fide economic incentive to maximize the number of users who feel safe posting what they want without fear of censorship or peer-mediated attack. Additionally, because Facebook is a network in which users create the content that other users consume, Facebook needs users to trust that they will not be offended each time they open the site. These incentives to maximize the amount of user-generated content on the platform are reflected in the principles Facebook describes in its public-facing Community Standards.

When discussing the concept of safety and why it matters, Facebook says, “People need to feel safe in order to build community,” suggesting that threats and injurious statements are not welcome on the platform because this type of content chills the process of community formation (Community Standards). The goal of increasing the number of opinions and ideas that can exist on the platform resurfaces in Facebook’s description of “Voice” as a defining principle of its Community Standards. Facebook states, “Our mission is all about embracing diverse views. We err on the side of allowing content, even when some find it objectionable, unless removing that content can prevent a specific harm.”

So, economically at least, there is a reason that Facebook is heavily invested in content moderation. Facebook wants its platform to be a pleasant place to retain users so that Facebook can sell more ads.

Issues of free speech

This section could be short. Constitutionally in the United States, Facebook has no legal mandate to remove or allow speech on its platform. On the internet, many interactive service providers, also referred to as platforms, have traditionally backed the idea that the internet should be a place open to free expression and the marketplace of ideas, but they have no legal mandate to do so (Snider).

The First Amendment states that “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech.” Notably, the First Amendment does not protect citizens from the actions of other private parties. The Supreme Court confirmed this interpretation in Hurley v. Irish-American Gay Group of Boston, 515 U.S. 557, 566 (1995), in which it stated that “the guarantees of free speech . . . guard only against encroachment by the government and ‘erec[t] no shield against merely private conduct.’” Even though the First Amendment does not protect users from having their speech censored by Facebook, users are still angry and invoke rhetoric suggesting their rights are being violated when Facebook censors them.

Alex Abdo, a senior staff attorney with the Knight First Amendment Institute, references a “broader social free speech principle” and summarizes this frustration as the result of a societal expectation in the United States: “There is this idea in this country, since its founding, people should be free to say what they want to say” (Snider).

The lines between Facebook censorship and government censorship are understandably blurred when even the Supreme Court of the United States uses language that equates social media sites to the traditional concept of the “public forum.” In Packingham v. North Carolina, 137 S. Ct. 1730 (2017), a case in which North Carolina made accessing many common social media sites a felony for registered sex offenders, the Supreme Court extended the idea that in many ways contemporary social media acts as a public forum: “While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace—the ‘vast democratic forums of the Internet’ in general, Reno v. American Civil Liberties Union, 521 U.S. 844, 868 (1997), and social media in particular” (Grimmelmann, 2018).

Section 230 immunity

Compounding this societal confusion around the legality of content moderation decisions on Facebook is the immunity that Facebook receives under Section 230 of the Communications Decency Act. Under Section 230, Facebook is almost completely immune from liability for the non-illegal, non-copyright-infringing content that users upload to its platform. Additionally, Section 230 contains a “Good Samaritan” provision that allows “interactive computer services” to take something of an editorial stance by removing content they deem offensive from their platforms without accruing any liability (Communications Decency Act, 47 U.S.C. § 230). This law is the reason that Facebook can “develop their own community guidelines and enforce them as they see fit” (Caplan).

Mounting pressure for Facebook to do something

No legal imperative exists for Facebook to moderate its content beyond copyrighted and illegal material. The legal imperative, however, is not what many users think about, and since Facebook makes its money from its users, it is easy to understand why Facebook wants to listen to their demands.

There is increasing anxiety about Facebook’s massive scope, coupled with the dependence people have on Facebook and the platform’s ultimate ability to filter speech. As the Supreme Court alluded to in Packingham v. North Carolina, the internet – especially social media – has become the main place for people to express themselves and their ideas in contemporary society. With over 2 billion users, Facebook is larger than many sovereign countries and has ultimate power over all user content, yet users have no structural system through which to contest decisions or advocate for themselves.

Additionally, following the 2016 United States presidential election, many people were frustrated with Facebook’s apparent negligence in controlling the spread of misinformation on the platform. Users demanded that Facebook do more to stop the spread of false information because, in its role as a news delivery source – in which Facebook’s News Feed algorithm makes editorial choices about what to show a user – rather than strictly a social networking platform, some users believe Facebook has an obligation to ensure a certain standard of news content. Following the political violence in Myanmar and the rise of white nationalism on the platform, to name a few instances, some people are also calling for Facebook to do more to moderate content that contributes to political radicalization (Mozur).

How does Facebook currently moderate content?

Facebook uses artificial intelligence tools like machine vision and natural language processing to flag content that might violate its Community Standards; from there, the content is sent to one of the company’s more than 15,000 human moderators (Community Standards). At the root of Facebook’s content moderation execution, separate from its public-facing Community Standards, is an ad hoc system of PowerPoint slides containing rules that attempt to distill ethically and politically vague dilemmas into binary moderation decisions. This simplification of difficult moderation decisions is part of an attempt to uniformly train those moderators to deal with the avalanche of content that needs to be checked on Facebook each day.

Some of these moderators are contract workers and may be assigned to moderate content in a language they do not understand or situated in a country they know little about. The New York Times reported that the rules moderators use to execute moderation decisions are “apparently written for English speakers relying on Google Translate, suggesting that Facebook remains short on moderators who speak local languages” (Fisher).

In addition to the PowerPoint slides of moderation rules, Facebook has an Excel-style spreadsheet of groups and individuals that have been banned from the platform as “hate figures.” Moderators are instructed to remove any content that praises, supports, or represents any of the listed figures. This blanket-coverage strategy is meant to make moderation simpler for human moderators, but drawing hard lines on content regardless of context can chill political speech or work to maintain the status quo for certain groups in power. As Max Fisher reports in The New York Times, “In Sri Lanka, Facebook removed posts commemorating members of the Tamil minority who died in the country’s civil war. Facebook bans any positive mention of Tamil rebels, though users can praise government forces who were also guilty of atrocities.”

Facebook, in taking the stance that content can be shared on its platform depending on the context of the post around it, has limited its ability to fully automate certain aspects of content moderation. On Thursday, May 2, 2019, Facebook banned Alex Jones and all InfoWars-related content from its platform, with the caveat that content from this publisher could be shared if the commentary around it is critical of the message. While AI systems have the capability to conduct sentiment analysis, human moderation is required to accurately moderate content according to a viewpoint-based policy (Martineau).

Facebook’s use of stringent, PowerPoint-delivered rules coupled with the ultimate subjectivity of a human moderator suggests that Facebook would like to combine the best features of context sensitivity and consistency in content moderation. But, as the continued societal outrage against nearly every content moderation decision Facebook makes suggests, the current model is not succeeding.

Industrial Moderation

Content moderation experts Tarleton Gillespie and Robyn Caplan have grouped most content moderation strategies into three categories according to organizational size and moderation practices. (1) The artisanal approach is one in which roughly 5 to 200 workers govern content moderation decisions on a case-by-case basis. Most social media sites begin their moderation this way and are then forced to adapt as case-by-case review becomes overwhelming. (2) The community-reliant approach, seen on sites like Wikipedia and Reddit, combines formal policy made at the company level with volunteer moderators from the site’s community. (3) Finally, the industrial approach is the model Facebook uses, in which “tens of thousands of workers are employed to enforce rules made by a separate policy team” (Caplan, 2018).

At all levels, content moderation must deal with the tension between context sensitivity and consistency, and accept different trade-offs between the two. Facebook’s industrial approach, as shown in the global reach of its Community Standards, is one that greatly favors consistency over context sensitivity. In her report on online content moderation, Robyn Caplan confirmed Facebook’s approach to consistency over context sensitivity with one of her interviewees from Facebook:

One of our respondents said the goal for these companies is to create a “decision factory,” which resembles more a “Toyota factory than it does a courtroom, in terms of the actual moderation.” Complex concepts like harassment or hate speech are operationalized to make the application of these rules more consistent across the company. He noted the approach as “trying to take a complex thing, and break it into extremely small parts, so that you can routinize doing it over, and over, and over again.”

Thus, Facebook’s eagerness to adopt AI for content moderation makes a lot of sense. One of the largest trade-offs organizations make when they choose to automate content moderation is accepting consistency over context sensitivity.

This eagerness to adopt AI is no secret. Mark Zuckerberg publicly has high hopes for the role of artificially intelligent automation in content moderation, as he stated in response to questioning about Facebook’s role in allowing harmful content in Myanmar: “Over the long term, building AI tools is going to be the scalable way to identify and root out most of this harmful content” (Simonite). Automation at this scale makes a lot of sense from a logistical standpoint. With around 2.3 billion monthly users, the idea of relying solely on human moderators is ludicrous. Additionally, Mike Schroepfer, Facebook’s chief technology officer, said that he thinks “most people would feel uncomfortable with that,” in reference to purely human content moderation. Schroepfer went on to say, “To me AI is the best tool to implement the policy—I actually don’t know what the alternative is” (Simonite).

Conversely, some may feel that fully automated content moderation is just as unnerving. Facebook’s former chief security officer, Alex Stamos, warned that increasing the demand for AI content moderation is “a dangerous path,” and that in “five or ten years from now, there could be machine-learning systems that understand human languages as well as humans. We could end up with machine-speed, real-time moderation of everything we say online” (Lichfield).

Creepiness aside, Facebook has already implemented AI in its content moderation process in several ways. In all categories of moderated content, Facebook uses AI filters to flag content for review by its legion of human moderators. In most of these categories – spam, fake accounts, adult nudity and sexual activity, violence and graphic content, child nudity and sexual exploitation, and terrorist propaganda – 95 to 99.7 percent of actioned content was caught before other Facebook users reported it. Most of these categories involve relatively clear-cut definitions of what is and is not objectionable, so AI is easily trained to recognize the offending content. Violence and spam are, generally speaking, globally recognized and not prone to metamorphosing definitions; training data does not have to capture myriad cultural contexts to pin down what counts. Conversely, only 14.9 percent of bullying and harassment was found and actioned before Facebook users reported it. The cultural definitions of bullying and harassment change constantly, and different words and even emoji can suddenly morph into offensive slurs and harassment (Community Standards). Because Facebook’s AI content filters require a great deal of manual training and labeling, it is not yet plausible to produce a filter that can respond and adapt to the changing cultural contexts that continually reshape the definitions of bullying and harassment.
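For a concrete sense of how this division of labor works, here is a minimal sketch in Python (using scikit-learn) of the general pattern: a classifier is trained on labeled examples, and anything it scores above a threshold is routed to a human moderator. This is purely illustrative; the toy posts, labels, and threshold are invented, and Facebook's actual filters are far larger and proprietary.

```python
# Minimal sketch (not Facebook's system): train a text classifier on labeled
# examples, then route anything scoring above a threshold to a human moderator.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; real systems use millions of labeled examples.
posts = [
    "have a great day",
    "buy cheap pills now!!!",
    "you are a wonderful friend",
    "I will hurt you",
]
labels = [0, 1, 0, 1]  # 0 = acceptable, 1 = likely violates policy

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def triage(post, threshold=0.5):
    """Flag a post for human review when the model thinks it may violate policy."""
    score = model.predict_proba([post])[0][1]
    return "send to human moderator" if score >= threshold else "leave up"

print(triage("limited offer, buy pills"))
```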

Conclusion

Facebook is gigantic. The scale at which the platform operates, across so many countries and with so many stakeholders, requires that it moderate certain types of content to keep its users safe and placated. Using an industrial approach to content moderation, Facebook values consistency over case-by-case consideration of context. This consistency-favoring approach shows in Facebook’s enthusiastic adoption of AI-enhanced content moderation.


References

Abbruzzese, J. (2018, October 30). Facebook hits 2.27 billion monthly active users as earnings stabilize. Retrieved May 5, 2019, from NBC News website: https://www.nbcnews.com/tech/tech-news/facebook-hits-2-27-billion-monthly-active-users-earnings-stabilize-n926391

Caplan, R. (2018, November 14). Content or Context Moderation? | Data & Society. Retrieved from https://datasociety.net/output/content-or-context-moderation/

COMMUNICATIONS DECENCY ACT, 47 U.S.C. §230

Community Standards. (n.d.). Retrieved May 5, 2019, from https://www.facebook.com/communitystandards/

Fisher, M. (2018, December 27). Inside Facebook’s Secret Rulebook for Global Political Speech. The New York Times. Retrieved from https://www.nytimes.com/2018/12/27/world/facebook-moderators.html

Gilbery, B. (2018, April 23). Facebook says its users aren’t its product – Business Insider. Retrieved May 5, 2019, from Business Insider website: https://www.businessinsider.com/facebook-advertising-users-as-products-2018-4

Grimmelmann, J. (2018). Internet law: Cases and problems (8th ed.). Lake Oswego, OR: Semaphore Press.

Laidler, J. (2019, March 4). Harvard professor says surveillance capitalism is undermining democracy. Retrieved May 5, 2019, from Harvard Gazette website: https://news.harvard.edu/gazette/story/2019/03/harvard-professor-says-surveillance-capitalism-is-undermining-democracy/

Lichfield, G. (n.d.). Facebook’s leaked moderation rules show why Big Tech can’t police hate speech. Retrieved May 5, 2019, from MIT Technology Review website: https://www.technologyreview.com/f/612690/facebooks-leaked-moderation-rules-show-why-big-tech-cant-police-hate-speech/

Martineau, P. (n.d.). Facebook Bans Alex Jones, Other Extremists—but Not as Planned | WIRED. Retrieved May 5, 2019, from Wired website: https://www.wired.com/story/facebook-bans-alex-jones-extremists/

Mozur, P. (2018, October 15). A Genocide Incited on Facebook, With Posts From Myanmar’s Military. The New York Times. Retrieved from https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html

Simonite, T. (n.d.). AI Has Started Cleaning Up Facebook, but Can It Finish? | WIRED. Retrieved May 5, 2019, from Wired website: https://www.wired.com/story/ai-has-started-cleaning-facebook-can-it-finish/

Snider, M. (2018, August 9). Why Facebook can censor Infowars and not break the First Amendment. Retrieved May 5, 2019, from USA Today website: https://www.usatoday.com/story/tech/news/2018/08/09/why-facebook-can-censor-infowars-and-not-break-first-amendment/922636002/

ARTificial Intelligence

By Shahin, Beiyue and Kevin

What does it mean to be creative?

  • Creativity as a definition is hard to pin down. Why is solving a math equation – the process of transforming numbers into different numbers — not considered creative, but doodling is?
  • Throughout history, even, the definition of creativity has undergone dramatic changes.


Is Laocoon and His Sons a creative work?

  • We would say yes, but its creator might say no.
    • “Art (in Greek, “techne”) was “the making of things, according to rules.” It contained no creativity, and it would have been — in the Greeks’ view — a bad state of affairs if it had.”

Contemporary Creativity

Oxford mathematician Marcus du Sautoy breaks creativity down into three forms:

  1. Exploratory creativity – in which you operate within a set of rules to push skill and expertise (music, portraits)
  2. Combinatorial Creativity – the combination of two or more disparate things
  3. A third kind, “breaking” creativity – “those phase changes when suddenly you’re boiling water, and water becomes steam and changes state completely.”

Can AI be Creative?

Is this video a creative work?

  • Would that opinion change if you knew that AI software did the coloring?

“The machine has no intent to create anything,” said Klingemann. “You make a fire and it produces interesting shapes, but in the end the fire isn’t creative – it’s you hallucinating shapes and seeing patterns. [AI] is a glorified campfire.” BBC

With the history of computing constantly using human-centric metrics to assess the “intelligence” of artificial intelligence, it’s no wonder that we are beginning to draw lines between human creativity and artificially intelligent creation.

    • That line seems to be drawn in the popular media around the labor and work that is done to create. So, let’s look at three different applications with various levels of human-computer cooperation.

The Human is the Artist, not the Machine

Ben Snell created a sculpture, Dio, that was designed by an artificial intelligence algorithm and then built with the shredded remains of the computer on which it was designed.

“I consider myself, not the computer, to be the artist,” he says. But he also talks enthusiastically about the agency of his algorithms, saying that “Dio began by trying to recreate from memory every sculpture it saw” and that he asked the computer “to close its eyes and dream of a new form.” He says he chose to use this figurative language because it makes these digital processes more relatable to humans.

Artificial Intelligence as an Agent of Creativity

Obvious is a group of three young French art students who are auctioning off a painting called the Portrait of Edmond Belamy. The painting was produced using open source code from 19-year-old Robbie Barrat.

Many AI artists are upset about the sale because Obvious is using a narrative to push the sale of the art in which the algorithm is the artist, not the creator of the algorithm.

HYPE AS AN AGENT OF CONFUSION. 🙁

  • A common theme in AI discourse – the people who pull or use the human-created code for an algorithm are quick to attribute agency to the code itself, not the person who created the code.

From The Verge

“What might be the most interesting thing about the Belamy auction is that, with not much more than some borrowed code, an inkjet printer, and some enthusiastic press releases, a trio of students with minimal backgrounds in machine learning ended up at the center of a milestone in art history.”

Technical Aspects:

Michelangelo, the great Italian sculptor and painter, once had a penetrating insight into the dual relationship between perception and creativity: “Every block of stone has a statue inside of it, and the job of the sculptor is to discover it.”

So before we study how AI makes creative visual artworks, let us review how AI perceives the real world and recognizes images using artificial neural networks.

For example, a bird image’s pixels can be viewed as the first layer of neurons in the system, and they feed forward into one layer after another, each layer connecting to the next to perform pattern recognition. With a lot of training examples, the computer can learn to recognize the image as a bird. So the system has three kinds of layers: input, hidden, and output.

Let’s represent the input, the weights, and the output as three variables: x, w, and y. This gives a simple equation: x · w = y. If we know x and w, we can compute the output y, which is the process of perception.

Data scientists then experimented with solving for x, given a known w and a known y. In other words, you already have a neural network trained on birds and you fix the output to say “bird” – but what input image produces that output? Through this reverse process, the AI completes a creative work.
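To make the x · w = y framing concrete, here is a toy numpy sketch, with invented numbers, of both directions: the forward pass computes y from a known x and w (perception), while the reverse process holds w fixed, fixes the desired output at "bird," and nudges x with gradient ascent until the network "sees" a bird in it. Real systems such as DeepDream work on full images and deep networks, but the logic is the same.

```python
# Toy sketch of "perception" (compute y from x and w) and the reverse process
# (adjust x until the network's output says "bird"). All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))          # weights w: 4 input "pixels" -> 2 classes

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def perceive(x):
    """Forward pass: x and w are known, solve for y."""
    return sigmoid(W @ x)

# Reverse process: y is fixed at the target "bird" class and w is known;
# gradient ascent nudges the input x toward an image the network calls a bird.
x = rng.normal(size=4)               # start from random noise
target_class = 0
for _ in range(200):
    y = perceive(x)
    grad = W[target_class] * y[target_class] * (1 - y[target_class])  # d y_target / d x
    x += 0.5 * grad                   # change the image, not the weights

print("confidence that x is a 'bird':", perceive(x)[target_class])
```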

AI as a Curator of Art

Is the Netflix suggestion algorithm creative? As many critics and scholars of art and expression do, the Netflix algorithm groups and suggests films in chunks that transcend classic genre definitions. In this way, the Netflix algorithm helps viewers see commonalities within creative expressions that might otherwise have escaped them.

  • Thus, AI can help us form creative connections among existing pieces of art.

“Some artists working in this field say they are merely channeling the creativity of computers and algorithms, but others protest, and say that these systems are artistic tools like any other, created and used by humans.”

Case One: AI as a Tool to Take Away Boring Aspects of the Creative Process (AAATTTABAOTCP)

Automatic Tagging and Searchable Content

From The Verge

Creative software companies like Adobe are implementing tools that allow users to automatically tag their content libraries to make their assets easily searchable.

Automatically Coloring Line Art

Celsys Clip Studio is an application used in manga and anime that allows users to use AI to automatically color their line art illustrations.

First the user creates a “hint” layer to give the software a seed of intention for the rest of the coloring.

Then, the software does its work.

Allowing mundane work like basic coloring to be accomplished by an algorithm could free animators up to experiment more with different styles or unique effects.

Case Two (Collaboration, Human-Machine Interaction): Magic Google Sketchpad

The Google Brain Team, a machine intelligence group focused on deep learning, has created Magenta, a research project that can generate art and music using recurrent neural networks.

For example, every time you start drawing a doodle, Sketch RNN tries to finish it and match the category you’ve selected.

https://magic-sketchpad.glitch.me/

Over 15 million players have contributed millions of drawings playing Quick, Draw! These doodles are a unique data set that can help developers train new neural networks, help researchers see patterns in how people around the world draw, and help artists create things we haven’t begun to think of.

https://quickdraw.withgoogle.com/data/

Case Three: AI as Artist?

Which painting is human-created and which is AI-created?

Image taken from GumGum’s website:

The case of GumGum, a company that focuses on AI development specifically for computer vision, creating an artificially intelligent painting robot demonstrates the closest we can get to having a machine be as autonomously creative as possible. Piloted as a Turing test in which people determine whether a work of art was produced by a human or a machine, the GumGum team used an AI system called a generative adversarial network (GumGum), which is a deep neural network made up of two other networks that feed off of one another (Skymind). One network, called the generator, generates new data, while the other, called the discriminator (a discriminative algorithm), evaluates an input for authenticity (Skymind). The generating network begins by producing an image at random, and soon enough the discriminator network begins to feed data back into the generating network by critiquing what is being produced (Skymind). From there, the generating network fine-tunes what it generates until the discriminator network lessens its critiques, which suggests that the generator has produced something convincing enough for the discriminator to identify it as a creative work of art (Skymind). The data set used for the AI machine, called the CloudPainter, was a collection of art by 20th-century American abstract expressionists.
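As a rough illustration of the generator/discriminator loop described above, here is a minimal GAN training sketch in Python with PyTorch. It is not GumGum's or CloudPainter's code: to keep it short, the "artworks" are just numbers drawn from a Gaussian distribution, but an image-generating GAN follows the same two alternating steps.

```python
# Minimal illustrative GAN loop: a generator learns to produce samples that a
# discriminator cannot tell apart from "real" data.
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0                     # the "training artworks"

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1. Discriminator: learn to label real samples 1 and generated samples 0.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2. Generator: produce samples the discriminator scores as real.
    fake = G(torch.randn(64, 8))
    loss_G = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())  # should approach 2.0
```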

It’s hard to determine whether the creative work of the CloudPainter is in fact creative. It can be argued that, since the data it learned from is preexisting 20th-century art, what the CloudPainter (or other future “autonomous” AI technologies) created is a creative work of remix practices. Using elements and design features of hundreds of works of art, the CloudPainter was programmed to generate artwork that not only lacked a fixed target body of work to reproduce, but also constituted an original piece influenced by other works. Furthermore, it can be argued that the algorithms programmed to make AI perform the way the CloudPainter performs are creative work in themselves. Programming can be understood as logic-based creativity. Though the AI machine itself is not an autonomous imaginative being that produces original thoughts of its own, the algorithms that influence and drive it are what is creative and original.


Ethics and Closing Remarks

Many ethical considerations come into play in the discourse of creativity and artificial intelligence. The world of art is often considered a personal, human-to-human experience in which one human creates a body of work to express an idea or emotion to another human, and the artwork produced then exists in a broader cultural context. Additionally, there are many misunderstandings and misconceptions about the world of AI and technology, in which those unaware of how AI functions believe that “AI” will replace humans — taking our jobs and threatening our economic well-being. Though the threat of automation does exist, people fail to understand that AI and machines are not autonomous. They are not self-thinking beings.

The different mediums in which AI can create art, whether as a tool, a machine assist, or a self-serving creative producer, threaten the sanctity of what “art” and “creativity” really are. Creativity was long considered an exclusively human trait because creative work is the product of human interaction, and technology had yet to be imaginative enough for anyone to claim that artificial intelligence itself was a creative entity. Already, when reviewing AI-generated bodies of work, art critics who are told that the work is AI-produced criticize it harshly, calling it one-dimensional, boring, and unimaginative, as if it were a knockoff of already existing artists (GumGum Insights). There is a real fear within the realm of art that creativity and originality could become oversaturated if AI-created artwork begins to claim “creativity” when the algorithms that generate it actually stem from other actual humans.

If we begin to see more and more AI-driven creative work, the false narrative of AI as a self-thinking machine that will take over human work and production will further escalate. Additionally, the legal status of AI as a creative agent will become a future point of discourse, because there is currently no protection for work that was not created by a human. Who does AI-generated content belong to? Can someone claim AI-generated art as their own? Does it belong to the programmers, developers, or engineers? Or does it belong to whomever or wherever the data came from (for instance, the hundreds of 20th-century paintings that GumGum’s CloudPainter scanned as data)? Granted, introducing new ways for AI to be programmed to produce various forms of content can be useful, such as when it serves as a tool for creation that remains primarily in the control of humans. Using AI in other mediums that allow programmed algorithms to generate more content can also create a new and helpful discourse for computer scientists, engineers, and sociologists to analyze the impact of technology on the world of art. Regardless of the impact of AI entering the creative sphere of content creation, which will have real benefits, the underlying question remains: is AI a creative agent? By textbook definition, no. It is not the machine that creates creative content, but rather the programmer who writes the creative algorithms that make the machine perform the way it does. The reality, however, is that the discourse around AI and creativity tends to link only the two (at first glance), essentially discrediting the original data that the machine has absorbed. This is a consequence of the general public’s and media’s glamorization of AI as an autonomous being.

WORKS CITED

https://skymind.ai/wiki/generative-adversarial-network-gan

https://gumgum.com/artificial-creativity

https://insights.gumgum.com/hubfs/Art.ificial%20By%20GumGum.pdf

https://learningenglish.voanews.com/a/first-ai-created-art-work-sells-for-432-500-in-new-york/4630694.html

https://arstechnica.com/science/2018/12/how-computers-got-shockingly-good-at-recognizing-images/

https://www.youtube.com/watch?v=YRhxdVk_sIs

Tatarkiewicz, Władysław (1980). A History of Six Ideas: an Essay in Aesthetics. Translated from the Polish by Christopher Kasparek, The Hague: Martinus Nijhoff.


It’s (Almost) All Hype

Over the course of the semester, the main theme that surfaced when discussing artificial intelligence is hype and the harm this hype does to the broader conversation. No one really knows what artificial intelligence actually is, or what standard of intelligence to use to judge a machine’s achievement of it, and this confusion is reflected in media articles that obscure the sociotechnical systems that artificial intelligence is a part of.

Cycles of Hype and Fear — and calls for regulation on things that no one fully understands

Even before this course, I noticed that many different computational methods were conglomerated under the mantle of “artificial intelligence.” Any company that created anything began to implement apparent “artificial intelligence” in either its design or services. People responded to the hype train with excitement and capital, so much so that 40 percent of European startups that claimed to use artificial intelligence used no such methods. The claim that the organizations were somehow associated with artificial intelligence was enough to harness the hype beast into capital investment.

There are areas of genuinely exciting application for artificial intelligence – content moderation

While the massive hype train would give a cynic an easy out to dismiss all recent progress in artificial intelligence, that would be irresponsible, as there are several reasons artificial intelligence is becoming an increasingly important societal conversation. Because so much digital content is created and acted upon every day, hour, minute, and second online, there is a wealth of training data available to companies that build and train AI systems using techniques such as machine learning. Thus, with this recent accessibility of “big data,” artificial intelligence has improved markedly over the last decade. This improvement, especially in machine vision, has exciting – and ethically difficult – implications for automating content moderation online.

Sociotechnical blindness as a result of misunderstanding and hype

To me, the biggest takeaway in learning more about artificial intelligence and the coverage surrounding it is how deep and widespread the phenomenon of sociotechnical blindness is around systems that utilize artificial intelligence. This concept, which Deborah G. Johnson and Mario Verdicchio introduced, explores the ways in which artificially intelligent systems are treated as agents separate from their creators. Average people are unaware of the human-mediated design decisions that go into artificial intelligence and the sociotechnical systems in which it operates. That’s a symptom of – and why we get – simplistic headlines that say things like “AI is racist” or “AI caused a fatal accident.” This simplification of sociotechnical systems involving AI obfuscates the human action and agency that go into the system, making users feel powerless and allowing creators to eschew responsibility for real-world actions.

What’s Old is New (and Conglomerated into 4 Mega Corporations)

Cloud computing makes a lot of sense. When you think about all the computational power that is lost simply because no one is using a given system, you begin to see one of the major benefits of cloud computing: more computational power and storage available to more users at a cheaper price point. This idea has deep roots within the history of computation, going as far back as the 1960s with the implementation of time-sharing as a revolution in access to computational power. Time-sharing allowed “a central computer to be shared by a large number of users sitting at terminals. Each program in turn is given use of the central processor for a fixed period of time” (Arms). This computational model was efficient when computers were unwieldy and expensive. Gradually, the minicomputer won out over massive mainframe models, and the personal computer became the de facto form of computation. However, the benefits of sharing powerful resources are again becoming apparent with the contemporary rise of cloud computing models. For better and worse, cloud computing is today conglomerated into the “big four”: Google, AWS, IBM, and Microsoft.

A positive and negative effect of contemporary cloud computing is the standardization of inputs and outputs that a cloud computing model requires. As Ruparelia says, “to truly deploy a cloud, you need to consider how to standardize your service offerings, make them available through simple portals, track usage and cost information, measure their availability, orchestrate them to meet demand, provide a security framework, provide instantaneous reporting, and have a billing or charging mechanism on the basis of usage“ (Ruparelia, p. 7). This standardization means that users do not have access to the full power of the cloud computing system, just a mediated form of it as dictated by the big four companies that create the cloud architecture.

One of the most blatant and obvious negatives of the conglomeration of cloud services is the potential for security breaches and a loss of data privacy. When user data is conglomerated into one database or platform in the cloud (one that has linkable personal identifiable information ready and available), a hacker has a large incentive to steal this data. With the implementation of cloud computing, a hacker doesn’t need to hack hundreds of computers before they stumble upon a juicy target; now, a hacker just needs to pwn a single cloud database to steal thousands of users’ information.

In addition to cloud computing increasing the power and ease of explicitly malicious actors stealing personal information, conglomerated user data in a handful of cloud platforms – especially platforms with cross-service terms of service and privacy policies – gives heightened power to collect and sell behavioral data. Most of this data is treated as payment for free services, but many users do not understand the scope of identification that such data collection makes possible. Having different types of data accessible within a conglomerated cloud service increases the ability and efficacy of corporate surveillance.

Arms, W. (n.d.). The Early Years of Academic Computing. Retrieved March 27, 2019, from The Early Years of Academic Computing website: http://www.cs.cornell.edu/wya/AcademicComputing/text/earlytimesharing.html
Blagdon, J. (2012, March 1). Google’s controversial new privacy policy now in effect. Retrieved March 27, 2019, from The Verge website: https://www.theverge.com/2012/3/1/2835250/google-unified-privacy-policy-change-take-effect
Mack, Z. (2019, March 26). Shoshana Zuboff on understanding and fighting surveillance capitalism. Retrieved March 27, 2019, from The Verge website: https://www.theverge.com/2019/3/26/18282360/age-of-surveillance-capitalism-shoshana-zuboff-data-collection-economy-privacy-interview-vergecast
Ruparelia, N. (2016). Cloud computing. In The MIT Press Essential Knowledge Series. Cambridge, Massachusetts: The MIT Press.

All Actions have Consequences

AI developers often claim no responsibility when the results of their algorithms reinforce something problematic. A popular defense is that their algorithms predict and respond to data from the “real world” and that their results simply reflect the state of the world. This defense fails to recognize that entrenching the biases and inequalities that exist in society within an artificially intelligent system with agency is not neutral. Computation and artificially intelligent decision-making carry an air of objectivity. Nick Diakopoulos says, “It can be easy to succumb to the fallacy that, because computer algorithms are systematic, they must somehow be more ‘objective.’ But it is in fact such systematic biases that are the most insidious since they often go unnoticed and unquestioned.”

This is me yelling at people who create algorithms that pretend only to “reflect objectively” the reality that exists around them. 

Mark MacCarthy, in his article “The Ethical Character of Algorithms—and What It Means for Fairness, the Character of Decision-Making, and the Future of News,” clearly states my questions: “Are these mathematical formulas expressed in computer programs value-free tools that can give us an accurate picture of social reality upon which to base our decisions? Or are they intrinsically ethical in character, unavoidably embodying political and normative considerations?” Put simply: do algorithms have values, or are they objective? The answer, I think obviously, is that they have values.

There’s a prevalent myth that because algorithms with agency to make decisions do not have human actors present during the actionable phase of their operations, they must be free of human judgment and bias. The popular mind is slowly changing to allow room for the existence of bias in artificial intelligence, but I think the historical existence of this fallacy of computational objectivity creates too much sociotechnical blindness for all impacts of the apparently “objective program” to be understood. Even when people understand and think critically about the human actors and values that are injected into algorithms, there is still something authoritative and definitive about the systematic nature of an algorithm. Additionally, the sociotechnical blindness surrounding these algorithms creates an environment in which blame is hard to distribute to the companies and people responsible for the algorithms.

Context is Everything

The traditional voice of a computer – that electronic, crackling voice that you can almost feel the circuitry in – is a Frankenstein of phonemes that a program could piece together to create an illusion of a semi-auditory computer interface. By creating these phoneme mashups, computers could speak to the user. But, this interface feels markedly inorganic. As humans speak to each other, the intonation and tone that they use to inflect certain phonemes changes given the context of the phoneme. An “R” sound that I make when I exclaim, “Great!” sounds different from the “R” that concludes the interrogative “Can you hand over that weed-whacker?”

With a digitized catalogue of every phoneme, you might imagine that the problem of conversationally interfacing with computers would be solved. In my mind, the process of digitally “hearing” and synthesizing a voice would be the most difficult part of a conversational interface. But that, of course, is only how it looks from my human vantage point. Humans have a capacity to intuitively understand language that is often taken for granted. When I read a sentence, I don’t have to sit and parse through the relationships that each word has with the sentence as a whole and /then/ consider how that sentence interacts with the sentences and world around it. I just intuitively understand the sentence.

Early chatbots and vocalizers could give the illusion of “natural conversation.” But early chatbots were simply branching algorithms whose responses were predetermined and depended on proper inputs. To achieve an actual conversational interface, computers would need to understand the context and “meaning” of what was said to them.
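A toy Python sketch of that branching style, with invented rules, shows why such systems feel brittle: the response depends entirely on matching a predetermined keyword, not on any understanding of context or meaning.

```python
# ELIZA-style sketch of an early chatbot: canned responses triggered by keyword
# matches, with no model of context or meaning. The rules are invented examples.
RULES = [
    ("mother", "Tell me more about your family."),
    ("sad",    "Why do you feel sad?"),
    ("hello",  "Hello! How are you today?"),
]

def reply(user_input):
    text = user_input.lower()
    for keyword, canned_response in RULES:
        if keyword in text:          # branch on the first matching keyword
            return canned_response
    return "Please, go on."          # default branch when nothing matches

print(reply("Hello there"))          # -> "Hello! How are you today?"
print(reply("I miss my mother"))     # -> "Tell me more about your family."
```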

Google uses the “Knowledge Graph” to semantically link together information across the web. Instead of “Apple” existing only as a simple string of letters, it is now, in Google’s eyes, a symbol that is linked to “seeds” and “computers” and “pies.”

Image source: Digital Reasoning

As the web became an increasingly gigantic web of semantic information linked together in “meaningful relationships,” the contextual nature of conversationally transmitted data became easier and easier for a computer to understand (Crash Course Video). Learning from a dataset that links “dinosaur” with “extinction” and “triceratops” and “cute” helps give natural language processing a better grasp of how humans use language.
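A minimal sketch of the underlying idea, with invented entities and relations, might store those links as a small graph and let a program walk them, so that "apple" is no longer just a string but a node connected to related concepts.

```python
# Toy knowledge-graph sketch: entities linked to related concepts by named
# relations. The entities and relations here are invented examples.
knowledge_graph = {
    "apple": {"grows_from": ["seeds"], "used_in": ["pies"], "also_refers_to": ["Apple Inc."]},
    "Apple Inc.": {"makes": ["computers", "iPhone"]},
    "dinosaur": {"associated_with": ["extinction", "triceratops"]},
}

def related_concepts(entity):
    """Collect everything one hop away from an entity in the graph."""
    links = knowledge_graph.get(entity, {})
    return [node for nodes in links.values() for node in nodes]

print(related_concepts("apple"))      # ['seeds', 'pies', 'Apple Inc.']
print(related_concepts("dinosaur"))   # ['extinction', 'triceratops']
```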

Unicode as an Illustration of the Meaninglessness of Raw Data

One of the best illustrations of what data means in terms of computation is Unicode, the character code maintained by the Unicode Consortium. While one might think of a string of characters as data, in computation this conceptualization is already too abstract. The ideal of the letter “K” has no inherent meaning, but it can be represented by data, a concept here meaning a string of bytes and bits that have no intrinsic meaning. The concept of a “K” can be encoded in many different ways. Before Unicode, if I was programming a computer, I could decide that any combination of bits and bytes encoded a representation of a “K.”

The problem with this Wild West approach is that, without an agreed structure for how data should be encoded to represent text, computers interacting with each other might decode the same data to mean two separate things. For example, one computer could encode 01001010 as “K,” but another computer might have decided that 01001010 means “🦵🏼,” which could lead to some interesting mixups when the computers send data to each other to be interpreted. It’s a bit uncomfortable to think that the data and the concept it stores are different things, but that’s the beauty of a general-purpose computer storing data.

Enter Unicode. Instead of different programs and layers using different bit representations to encode different character values that might get jumbled or lost in translation, the Unicode Consortium assigns consistent values and identities to fixed code points. Unicode includes different language symbols and emoji. Altogether, Unicode currently encodes 137,439 different characters.
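A short Python sketch illustrates the point: Unicode fixes one code point per character, encodings like UTF-8 turn that code point into bytes the same way everywhere, and the pre-Unicode mixup reappears the moment the same bytes are decoded under two different legacy encodings. (The specific bytes and encodings below are arbitrary examples.)

```python
# Unicode code points vs. byte encodings, in a few lines.
print(ord("K"), hex(ord("K")))            # Unicode code point for "K": 75 (0x4b)
print("K".encode("utf-8"))                # b'K' -- one agreed-upon byte representation

leg = chr(0x1F9B5)                        # the leg emoji's fixed code point, U+1F9B5
print(leg, leg.encode("utf-8"))           # one character, four UTF-8 bytes

# The same bytes decoded under two different legacy encodings give different
# text, which is exactly the pre-Unicode mixup described above.
raw = bytes([0xC5, 0xD0])
print(raw.decode("latin-1"), "vs", raw.decode("cp1251"))   # 'ÅÐ' vs 'ЕР'
```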

via BuzzFeed News

Thus, Unicode represents a microcosm of the challenges and solutions presented within data storage. Concepts that are familiar to humans based on semiotic knowledge, such as the number 4 or the letter “K,” can be encoded with different data combinations because data inherently has no set meaning. Such a situation can be confusing when different encodings clash. Thankfully, we have the Unicode Consortium to convene and determine a universal code of character representation. Now, if only they could make the emoji identical across platforms, all issues of encoding and decoding meaning digitally could be solved. 🤪

References:

Irvine, “Introduction to Data Concepts and Database Systems.”

Tasker, P. (2018, July 17). How Unicode Works: What every developer needs to know about strings and 🦄. Retrieved February 20, 2019, from https://deliciousbrains.com/how-unicode-works/
Unicode and You – BetterExplained. (n.d.). Retrieved February 20, 2019, from https://betterexplained.com/articles/unicode/
Unicode Emoji. (n.d.). Retrieved February 20, 2019, from https://www.unicode.org/emoji/

Facial Recognition: How it Works + Why it Matters

Face recognition and convolutional neural networks

By Beiyue Wang, Shahin Rafikian and Kevin Ackermann

Application:

Face recognition is now very common in our lives. An increasing number of smartphones replace passwords with face recognition to increase security. In addition, law enforcement agencies are using face recognition more and more frequently in routine policing. Once a criminal’s face is captured by street cameras, the police are able to immediately compare that photo against one or more face recognition databases to attempt an identification. For instance, last year an AI security system was launched in almost every metro station in Shanghai to track hundreds of wanted criminals. The technology can scan photos from the national database and identify a person out of at least 2 billion people in seconds. It was reported that within 3 months, the technology helped the police catch about 500 criminals.

As we know, humans have always had the innate ability to recognize and distinguish between faces, yet computers have only recently shown the same ability. Face recognition needs to be able to handle different expressions, lighting, and occlusions. From this week’s reading, we know that the realization of face recognition must be attributed to convolutional neural networks.

Tech:

For me, it is very hard to fully understand this kind of technology and I hope to get more information in class. Based on the reading, a convolutional neural network is one where the operation of each unit is considered to be a convolution – that is, a matching – of its input with its weights (Machine Learning). For instance, in a CNN, starting from pixels, we then get to edges, and then to corners, and so on, until we get to a whole image. The whole process relies on mathematics and statistics. The picture below presents the process of classification or recognition: from sensing an image, preprocessing, segmenting the foreground from the background, labeling, feature extraction, and post-processing, to classification and decision.
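Since the figure itself is not reproduced here, a small numpy sketch may help show what "convolution as matching" means: a 3×3 weight filter slides over a made-up 6×6 image and records how strongly each patch matches it, so a vertical-edge filter lights up exactly where the image's edge is. Later layers combine such edge responses into corners, parts, and eventually whole objects.

```python
# Convolution as matching: slide a 3x3 weight filter over an image and record
# how strongly each patch matches it. The image is a made-up 6x6 picture with a
# bright right half, so it contains one vertical edge.
import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # bright region on the right -> a vertical edge

vertical_edge_filter = np.array([[-1, 0, 1],
                                 [-1, 0, 1],
                                 [-1, 0, 1]])   # the unit's weights

def convolve(img, kernel):
    k = kernel.shape[0]
    out = np.zeros((img.shape[0] - k + 1, img.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * kernel)   # how well the patch matches the weights
    return out

print(convolve(image, vertical_edge_filter))     # large values exactly where the edge is
```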

Indeed, face recognition brings us lots of benefits. However, it also has many shortcomings and problems.

Shortcomings:

First, as the number of faces in the database goes up, face recognition becomes prone to error, because many people in the world look alike. As the likelihood of similar faces increases, matching accuracy decreases. It has been shown that face recognition is especially bad at recognizing minorities, young people, and women. In fact, my cell phone often can’t recognize my face to unlock. In my view, solving the problem of accuracy still has a long way to go.

Second, a study purporting to infer criminality from a dataset based on existing prisoners’ and nonprisoners’ faces has serious endogeneity problems. Prison itself may accelerate aging, affect routine expressions, or even lead to a disproportionate risk of facial damage. Besides, many people have questioned whether training data consisting of prisoners’ faces is representative of crime; rather, it represents which criminals have been caught, jailed, and photographed. Indeed, how a classifier operates leaves it vulnerable to a critique of the representativeness of its training data.

Third, as we discussed in last class, face recognition no doubt brings discrimination problems. For example, the police now use machine learning to build demographic profiles of criminals, so they are likely to watch some groups of people more than others, which causes discrimination problems.

Facial Recognition and the Inner Workings of Neural Networks

Have you ever stopped to think what it is that determines when you recognize a face? You could break down the constituent elements – eyes, ears, mouth and nose – but why is it that when you see a face, you can immediately recognize it as a face? What’s more, what are the minute details and changes that separate one face from another?

To explicitly tell a machine all of the rules and definitions that make one face unique would be challenging, time-consuming and virtually impossible. Enter convolutional neural networks. Using data pools of thousands of faces, the machine learning program begins to “learn” what differentiates one face from another. Then, once the machine has learned how to recognize a face, it can apply this knowledge to faces that it sees in the future.

Let’s take Apple’s Face ID as an example to further illustrate how this process happens. Face ID is a form of facial recognition built into iPhone models from the iPhone X onward. The basic concept of Face ID is that the iPhone can recognize its owner’s face, and then use that recognition as a password on the device. According to Apple, “Face ID uses advanced machine learning to recognize changes in your appearance. Wear a hat. Put on glasses. It even works with many types of sunglasses” (iPhone XS – Face ID).

To recognize a face, the first step is to “see” or gather input. To do this, the iPhone projects 30,000 infrared dots on a person’s face, and then uses an infrared camera to take a picture of the facial dot map. This facial map is sent to a chip in the iPhone that uses a neural network to perform machine learning. Basically, what this means is that the chip is able to view the patterns of dots that make up someone’s face and learn these dot maps to recognize the face. The chip is learning to perform a task – recognize a face – by analyzing training examples – the initial Face ID setup (Hardesty, 2017).

To clarify, neural networks, which draw inspiration from neurons in the brain’s structure, could be described as the architecture of the machine. Machine learning is a method of “learning” within a neural network (Nielsen, 2015).

Computer vision is the “broad parent name for any computations involving visual content – that means images, videos, icons, and anything else with pixels involved” (Introduction to Computer Vision, 2018).

So, how exactly does a neural network learn and make decisions? Speaking broadly, imagine three sections to the neural network: an input layer, hidden layers, and an output layer. If we’re working with an image, each individual pixel might be a node in the input layer. In the case of the iPhone’s Face ID, we can assume that each infrared dot might be a node on the input layer (Machine Learning & Artificial Intelligence…, 2017). Once input data is entered into the neural network, it is combined with the weights assigned to connections into the hidden layers: the inputs are multiplied by these weights and summed in ways that depend on the data. Each node within a hidden layer has a certain threshold that, if met or exceeded, causes the node to “fire” just like an actual neuron. The data fed into the input layer travels, as nodes fire, through the hidden layers “until it finally arrives, radically transformed, at the output layer” (Hardesty, 2017).
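Here is a stripped-down numpy sketch of that input to hidden to output flow. Every number is invented; the point is only to show inputs being multiplied by weights, summed, and passed through a threshold-like "firing" rule, which is what the iPhone's chip does at much larger scale.

```python
# Minimal input -> hidden -> output pass with a threshold "firing" rule.
# All weights and inputs are invented for illustration.
import numpy as np

def fire(z, threshold=0.0):
    """A node 'fires' (outputs 1) when its weighted sum reaches the threshold."""
    return (z >= threshold).astype(float)

x = np.array([0.9, 0.1, 0.8])             # input layer: e.g., three dot-map readings

W_hidden = np.array([[ 0.5, -0.2,  0.4],  # each row holds one hidden node's weights
                     [-0.3,  0.8,  0.1]])
hidden = fire(W_hidden @ x)                # weighted sums, then the firing rule

W_output = np.array([[1.0, -1.0]])         # one output node: "is this the owner's face?"
output = fire(W_output @ hidden)

print("hidden activations:", hidden, "-> match:", bool(output[0]))
```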

As I was reading about the way in which every human-technology interaction is a programmed computer function, I began to think about the ways in which future programmed technology interactions (computer and beyond) can become both more personal and intelligent enough to adapt to our needs. But then I realized that our technology and programmed systems are already there — smart home devices keep a history of our interaction data for future smart predictions and voice recognition, mobile phone keyboards are capable of making keyboard predictions for users, and streaming services recommend certain shows and movies to subscribers based on viewing history. It’s all data collection of user experiences. The Alpaydin reading can be in dialogue with the Karpathy blog post, in that convolutional neural networks and scanning algorithms are trained similarly to how you might teach a child new actions and information. The more you allow a child to learn and experience something in particular, the more familiar they will be with it. Similarly, with the predictive text keyboard functionality on the iPhone, the more an iPhone user sends text messages, the more data the iPhone will store in order to make smart predictions.

Similar e-behavior (courtesy of algorithms designed to collect data and make such predictions) can be seen in the iPhone’s Face ID, where Apple’s technology is able to scan the registered face in various conditions (e.g., bearded, with makeup, with a new hair style). It can be explored, however, whether similar convolutional neural network data-imaging processes can be applied to facial recognition technology to further strengthen its capabilities. In regard to Alpaydin’s discussion of social media data, is it possible that there is a breach in security with facial recognition due to how accessible imaging data is on the internet? And with access to technologies such as 3D printing, it doesn’t seem impossible to break into someone’s phone by 3D printing a face based on predictions of the structure of their face and head, and accessing technologies that are locked via the 3D-printed face.

References:

Alpaydin, E. (2016). Machine learning: The new AI. Cambridge, MA: The MIT Press.

Dougherty, G. (2012). Pattern recognition and classification: An introduction. New York: Springer. Excerpt: Chaps. 1–2.

Hardesty, L. (2017, April 14). Explained: Neural networks. Retrieved February 6, 2019, from http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414

Introduction to Computer Vision. (2018, April 2). Retrieved February 6, 2019, from https://blog.algorithmia.com/introduction-to-computer-vision/

iPhone XS – Face ID. (n.d.). Retrieved February 6, 2019, from https://www.apple.com/iphone-xs/face-id/

Karpathy, A. (2015, October 25). What a deep neural network thinks about your #selfie. Andrej Karpathy Blog. Retrieved from http://karpathy.github.io/2015/10/25/selfie/

Machine Learning & Artificial Intelligence: Crash Course Computer Science #34. (2017). PBS Digital Studios. Retrieved from https://www.youtube.com/watch?v=z-EtmaFJieY

Nielsen, M. A. (2015). Neural Networks and Deep Learning. Retrieved from http://neuralnetworksanddeeplearning.com

Pasquale, F. (2018). When machine learning is facially invalid. Communications of the ACM. Retrieved from https://cacm.acm.org/magazines/2018/9/230569-when-machine-learning-is-facially-invalid/fulltext

Artificial Intelligence as a Catchall

Without a doubt, media representation of artificial intelligence is too vague and simplistic to communicate tangible, actionable information to readers and citizens. This simplicity and quasi-mysticism, as Johnson and Verdicchio discuss, affect the discourse, and thus the action taken, concerning the development of artificial intelligence. In my eyes, the largest issue the duo highlights is that the current discourse around artificial intelligence produces a sociotechnical blindness, or “blindness to all of the human actors involved and all of the decisions necessary to make AI systems” (pg. 587). This sociotechnical blindness creates a myth in which artificial intelligence is completely out of human hands, when the exact opposite is true: by definition of artificiality, all artificial intelligence systems are created by human decisions. However, when we ascribe agency to the artificial intelligence system, not only does that absolve the creators of blame when issues arise from the artificial intelligence, but it also creates an environment in which citizens feel powerless.

Granted, artificial intelligence is a wide field with branching paths of specialization and epistemology, but using the umbrella term “artificial intelligence” to describe specific programs within media representation continues this trend of sociotechnical blindness. Imagine if, in a story about elephants, we just referred to them as mammals. Technically, we would be categorically truthful, but we would be missing out on a lot of nuance that could lead the reader to make false assumptions about mammals as a whole compared to the specific nature of elephants.

Obviously, AI is more complex than a single, monolithic entity.

Even one more layer of complexity given to discussion of artificial intelligence would increase the nuance of understanding, and thus would work to de-blackbox the concept of artificial intelligence to most readers.

Even in an elementary survey of popular books about artificial intelligence, I found that many of the authors worked to mythologize and contribute to the black boxing of artificial intelligence as a whole. Boden’s description of artificial intelligence as a virtual machine draws an analogy to an orchestra, in which a listener does not single out different instruments but hears the music as a whole, the created product of the virtual machine that is the orchestra. This concept of modularity creating a larger whole from constituent parts is brilliant, but the idea that each piece cannot be singled out or understood seems harmful given the trend of oversimplification in media coverage of artificial intelligence. Sociotechnical blindness will continue if writers continue to think readers need such simplified explanations.

 

References:

Boden, M. A. (2016). AI: Its nature and future (First edition). Oxford, United Kingdom: Oxford University Press.

Johnson, D. G., & Verdicchio, M. (2017). Reframing AI Discourse. Minds and Machines, 27(4), 575–590. https://doi.org/10.1007/s11023-017-9417-6

ELIZA and Sophia and Sisyphus

One of the biggest issues facing the field of artificial intelligence is that the concept of intelligence is subjective. When discussing the metaphoric birth of the field of artificial intelligence and what it means to achieve “true” artificial intelligence, the consensus seems to shift constantly. The root of this confusion is that intelligence can be defined in dramatically different ways. To some, the birth of artificial intelligence could be traced back to the advent of the abacus, as this instrument was able to hold a memory of a certain state. If one defines intelligence as the ability to store information, then indeed, the abacus looks like elementary artificial intelligence.

However, most people seem to attribute a uniquely human element to the conceptualization of artificial intelligence. That is, artificial intelligence popularly refers to actions delegated to computers that – put vaguely – a human would traditionally need to do.

The Turing Test perpetuates this concept, and probably was the genesis of its popularity: that artificial intelligence is a replication of human thought and action. The Turing Test famously deems a program “intelligent” if it can fool another human being into believing that the program is actually a human being (Hern).

ELIZA has since been dismissed as a realization of artificial intelligence because her responses were often repetitive or canned. Because ELIZA did not synthesize information and respond with something unique, when viewed from the present, ELIZA is not considered artificially intelligent. But when we talk about the scope and possibility of artificial intelligence today, the conversation often pivots to another program notorious for its aspiration to humanness: Sophia the Robot.
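ELIZA’s canned quality is easier to see in a toy version of her keyword-matching approach, sketched below. The patterns and responses here are invented for illustration and are not Weizenbaum’s actual DOCTOR script; the point is that the program matches surface keywords and fills templates rather than synthesizing anything new.

```python
import re
import random

# A few hand-written rules in the spirit of ELIZA (illustrative only, not the original script)
RULES = [
    (r"\bI feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bmy (.*)",     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    (r".*",            ["Please go on.", "Can you elaborate on that?"]),
]

def eliza_reply(utterance):
    """Return a canned response by matching keywords, with no real understanding of meaning."""
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())

print(eliza_reply("I feel anxious about my exams"))
```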

What is interesting about the cases of ELIZA and Sophia is that one’s programming is more obscured than the other’s. Sophia gives similarly scripted responses to questions, but she has a robotic body and has even been granted official citizenship in Saudi Arabia. In her own words: “Think of me as a personification of our dreams for the future of AI, as well as a framework for advanced AI and robotics research, and an agent for exploring human-robot experience in service and entertainment applications” (Sophia).

The historical trajectory seems to be that when a computer can do a single humanoid thing, we are quick, in the moment, to label the program artificially intelligent. But soon enough, the discourse shifts to suggest that the marker of true artificial intelligence is not that specific humanoid task, but a different one. A past iteration of artificial intelligence, such as ELIZA, is seen as primitive in comparison with something of contemporary creation, such as Sophia. As Warwick eloquently summarizes, “In each case, what was originally AI has become regarded as just another part of a computer program” (Warwick, pg. 8).

References:

Hern, A. (2014, June 9). What is the Turing test? And are we all doomed now? The Guardian. Retrieved from https://www.theguardian.com/technology/2014/jun/09/what-is-the-alan-turing-test

Sophia. (n.d.). Retrieved January 23, 2019, from https://www.hansonrobotics.com/sophia/

Warwick, K. (2012). Artificial intelligence: the basics. London: Routledge.