
Chatting their Way to High Brand Equity

Dominique Haywood

CCTP 607 Final Project

 

Abstract:

In many technology communities, 2016 was known as the year of the chatbot. Facebook released an API that made building branded chatbots extraordinarily simple for organizations large and small (Constine, Josh. 2016). Microsoft released, then quickly silenced, its chatbot Tay after it learned white supremacist rhetoric from Twitter users (Victor, Daniel. 2016). For better or for worse, brands now have a new direct communication channel through which to manage customer service, target marketing and facilitate sales. Both customers and businesses like chatbots for one main reason: chatbots make customer service less arduous. Customers gain 24-hour access to brand representatives they do not have to call, and businesses gain a simple customer engagement solution that can be designed to meet their standards. This paper will analyze the history of chatbots, the technology that drives them and how the design of chatbots impacts the brand equity of the organizations that use them.

 

Introduction:

 

Since the launch of Facebook Messenger chatbots in 2016, companies have quickly embraced chatbots as a new communication channel to customers. Chatbots are interactive digital agents that provide real-time conversational interfaces for organizations. There are currently over 30,000 chatbots active on Facebook Messenger, and it is expected that 80% of customer engagement will be handled by chatbots by 2020. Chatbots let businesses provide consistent and reliable customer service, and companies like 1800 Flowers and Tommy Hilfiger have already seen monetary returns on their chatbot investments. This new communication channel also benefits customers by providing 24-hour access to answers from brands; 77% of customers who have interacted with a business's chatbot report an improved perception of that business (Wertz, Jia. 2018). Chatbots are simple solutions to many customer service complaints, but they come with their own technical and security challenges. Through an analysis of the design of chatbots and the history of the technology, I will assess how implementing chatbots can impact an organization's brand equity using the Customer Based Brand Equity (CBBE) framework. Usually, this framework helps an organization analyze its current brand and build a stronger, customer-focused one step by step. In this paper, however, I will use the framework to show how four brands have used chatbots to affect a stage of the CBBE framework.

 

 

What are Chatbots?

Chatbots are known by many different names, including chatterbots, interactive agents, conversational AI and artificial conversational entities. Despite the multitude of names, all chatbots effectively perform the same task: conducting conversations in natural language by following designed protocols. Most modern-day chatbots are designed with AI to accurately process and respond to human inputs, whether vocal or textual. Other chatbots, especially the earlier ones, simply follow a set of rules that produce replies from a designed script. This paper will focus on chatbots that are designed with AI and operate on chat websites and messaging applications; however, the moniker "chatbot" can also be applied to virtual assistants and automated voice chatbots (Wikipedia.com).

 

A Brief History of Chatbots

Although chatbots seem like a relatively new phenomenon, chatbot applications have been around since the early 1960s. ELIZA, designed in 1964, was a computer program developed by Joseph Weizenbaum in the MIT Artificial Intelligence Laboratory. Running a script called DOCTOR, ELIZA was designed to mimic the retorts of a Rogerian psychotherapist to a client on his or her first visit. ELIZA was initially designed to reveal the superficial nature of human-computer interactions, but in use it revealed the emotional attachments that users developed toward the program (Weizenbaum, Joseph. 1966).

Those emotional attachments formed primarily because of the responses the DOCTOR script enabled. The script processed user inputs through simple keyword recognition and substitution of terms, delivering predesigned templates and encoded phrases that parodied a Rogerian therapist's penchant for answering questions with more questions. By designing ELIZA around an if-else protocol, Weizenbaum avoided building a complex natural language processing system (Weizenbaum, Joseph. 1966). The difference between ELIZA and the 30,000 chatbots on Facebook Messenger is that today's chatbots are designed and trained with a knowledge of language.
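To make the if-else protocol concrete, here is a minimal sketch in Python of ELIZA-style keyword matching and substitution. The patterns and templates are hypothetical stand-ins; the real DOCTOR script used ranked keywords and far richer reassembly rules.

```python
import re
import random

# Hypothetical keyword rules: each pattern captures a phrase that is
# substituted into a canned, question-shaped template.
RULES = [
    (re.compile(r"\bI am (.*)", re.I), ["Why do you say you are {0}?",
                                        "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.*)", re.I), ["Why do you feel {0}?",
                                          "Do you often feel {0}?"]),
]
DEFAULTS = ["Please go on.", "Tell me more."]

def respond(user_input: str) -> str:
    # Scan the rules in order; the first matching keyword wins, and the
    # captured phrase is echoed back inside a template (no understanding
    # of language, only substitution).
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(DEFAULTS)

print(respond("I am unhappy about my work"))
# e.g. "Why do you say you are unhappy about my work?"
```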

Another notable bot is ALICE (Artificial Linguistic Internet Computer Entity), or Alicebot. Inspired by ELIZA, it was created by Richard Wallace in 1995 using Java. Though ALICE is a complex and award-winning chatbot, it still cannot pass a Turing test (Wikipedia.com). A Turing test is a test of a machine's intelligence: whether or not the machine can pass as a human.

The first chatbot claimed to pass a Turing test was Eugene Goostman in 2014. The Russian-developed chatbot was designed to communicate like a 13-year-old Ukrainian whose first language was not English (Gonzalez, Robert T. 2015). The validity of this milestone has been fervently questioned, and the debate is fueled not by the advanced technological nature of Eugene Goostman but by the believability of the bot's personal history. Passing the Turing test raises questions about how users expect a person or bot to converse and how accurate those expectations are. Other notable chatbots include Clippy, the loathed bot that peppered the margins of Microsoft Word from 1997 to 2003.

Eugene Goostman, like ELIZA, was designed with a persona that fostered trust in some of the humans the bot interacted with. ELIZA was never claimed to be more than a computer program, yet the responses designed for it still evoked a one-sided relationship from humans. These two bots are prime examples of how brands can overcome subpar bots with realistic "personalities". The personalities designed into branded bots provide users with familiarity, which may inadvertently deceive humans and engender trust. Depending on the brand's identity, a branded bot should use colloquialisms or vernacular to convey its persona and align it with the brand. Successfully designing a bot with a personality requires training the bot to have a knowledge of language; this can be done through AI, specifically natural language processing and natural language understanding.

 

How Chatbots Work

 

Chatbots are designed like applications with multiple layers of functionality, including a presentation layer, a machine learning layer and a data layer. Natural Language Processing, Natural Language Understanding and Natural Language Generation facilitate accurate responses to queries by sending data through these layers (Figure 1). Natural Language Processing (NLP) is the overarching system of neural networks that facilitates end-to-end communication between humans and machines in the human's preferred language. Essentially, NLP provides the machine with the knowledge of language used by human interlocutors (Chatbots Magazine 2018). Chatbots designed with NLP convert a user's message into structured data so that a relevant answer can be produced. Natural Language Understanding (NLU) is designed to manage the unstructured and flexible nature of human language. Designing NLU for chatbots requires a combination of rules and statistical models to create a methodology for handling unknown or unrecognizable inputs (Lola.com 2016). At its core, NLU gives chatbots the ability to accurately process and respond to the colloquialisms and quirks of human language. Natural Language Generation (NLG) composes the message with which the chatbot answers the original query.

Figure 1: Fernandes, Anush. “NLP, NLU, NLG and How Chatbots Work.” Chatbots Life, Chatbots Life, 15 Nov. 2017, chatbotslife.com/nlp-nlu-nlg-and-how-chatbots-work-dd7861dfc9df.
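As a rough illustration of that flow, the sketch below reduces a message to structured data (standing in for NLU) and renders a reply from the structured result (standing in for NLG). The intents and canned replies are entirely hypothetical; production bots use trained statistical models rather than keyword matching.

```python
# A minimal sketch of the understand -> generate pipeline, assuming
# hypothetical intents; real NLU is a trained classifier, not keywords.
def understand(message: str) -> dict:
    """NLU stage: reduce free-form text to structured data (an intent frame)."""
    text = message.lower()
    if "order" in text and "status" in text:
        return {"intent": "order_status", "entities": {}}
    if "hours" in text or "open" in text:
        return {"intent": "store_hours", "entities": {}}
    return {"intent": "unknown", "entities": {}}

def generate(intent_frame: dict) -> str:
    """NLG stage: render a reply message from the structured result."""
    replies = {
        "order_status": "I can check that. What is your order number?",
        "store_hours": "We are open 9am-9pm, Monday through Saturday.",
        "unknown": "Sorry, I didn't catch that. Could you rephrase?",
    }
    return replies[intent_frame["intent"]]

print(generate(understand("What are your store hours?")))
# "We are open 9am-9pm, Monday through Saturday."
```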

 

Popularity of Chatbots

Over the last five years, interest in chatbots has increased exponentially (Figure 2). This interest is due to several factors, including the simplicity with which chatbots can be integrated into mobile devices, with which they share a core feature: messaging. Of the 7.3 billion people in the global population, 6.1 billion use an SMS-capable mobile device and 2.1 billion use messaging applications. Facebook Messenger alone has 1 billion users, solidifying chatbots as an easily integrated, non-disruptive mobile technology (Wertz, Jia. 2018).

Another reason for the widespread interest in and adoption of chatbots is that they are designed to be used by people of all age groups, which makes the technology less prohibitive than other, more complex technologies. Chatbots also give customers the opportunity to ask questions they would otherwise be embarrassed to ask (Brandtzaeg P. 2017). Overall, chatbots are effective because they are simple to engage with and relatively easy to design.

William Meisel, a renowned technologist in the chatbot world, has predicted that global revenues from chatbots will soon amass to $623 billion (Dale, Robert. 2016). In the global market, 45% of end users prefer to engage with chatbots for customer service inquiries. Results from a LivePerson survey of 5,000 respondents showed that the majority of users are indifferent to chatbots as long as their problem is resolved after a conversation with one; the second largest group of respondents (33%) felt positively about chatbots (Nguyen, Mai-Hanh. 2017). Automating customer service through chatbots gives businesses consistency and speed, which are often lacking in human customer service. Studies have shown that the majority of customers have improved perceptions of a brand after conversing with its branded chatbot (Wertz, Jia. 2018). Gartner predicts that by 2020, 85% of customer engagement will be handled by non-human entities (Moore, Susan. 2018).

Businesses benefit from chatbots by cutting costs and gaining knowledge about consumer behavior. In a survey conducted by Oracle in 2016, 80% of respondents said that they have used or plan to use chatbots by 2020. The increasing integration of chatbots into businesses follows the broader trend toward automation; chatbots will soon be used across marketing, sales and customer service. Though complete automation through chatbots is neither feasible nor holistically beneficial for an organization, chatbots designed for customer service are predicted to replace 29% of customer service jobs (Business Insider 2016).

 

Figure 2: Raj, Sumit. Building Chatbots with Python: Using Natural Language Processing and Machine Learning. Apress, 2019.

Chatbots and Data Security

 

Chatbots, like all technology, pose risks for users, especially where data privacy is concerned. Personalized chatbots, in particular, need to be designed with safeguards for data; without proper security measures in place, both businesses and users can suffer. Chatbots rely on HTTP and other communication protocols, as well as SQL queries for data retrieval, which are frequent targets for attacks (Bozic J. 2018). Most concerns about data privacy in the chatbot world focus on financial services chatbots; however, most financial services already transfer user data from databases via HTTPS (Chatbots Magazine. 2017).
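A minimal sketch of the SQL-injection risk mentioned above, using Python's built-in sqlite3 module and a hypothetical orders table. Concatenating user input into the query text lets an attacker rewrite the query; binding it as a parameter does not.

```python
import sqlite3

# Hypothetical orders table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT, status TEXT)")
conn.execute("INSERT INTO orders VALUES ('A100', 'shipped')")

user_input = "A100' OR '1'='1"          # a classic injection payload

# Unsafe: the payload becomes part of the SQL statement itself.
unsafe = f"SELECT status FROM orders WHERE id = '{user_input}'"
print(conn.execute(unsafe).fetchall())   # returns rows it should not

# Safe: the driver binds the value, so the payload matches nothing.
safe = "SELECT status FROM orders WHERE id = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```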

Two methods are normally designed into chatbots to ensure security: authentication and authorization. Authentication verifies the human's identity, and authorization obtains the user's permission to complete a task. The technology used to develop chatbots is not new, which means existing security measures have already been designed to combat the relevant threats. It is important to remember, however, that data security is the responsibility of the developers as well as of the platforms the bots run on (Chatbots Magazine. 2017).
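The sketch below illustrates the two checks with a hypothetical shared secret and permission table; real platforms use their own signature schemes and consent flows, so this is a shape, not an implementation.

```python
import hmac
import hashlib

# Hypothetical webhook secret and per-user granted scopes.
SECRET = b"shared-webhook-secret"
PERMISSIONS = {"user-123": {"view_balance"}}

def authenticate(payload: bytes, signature: str) -> bool:
    """Authentication: verify the request really came from the platform."""
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def authorize(user_id: str, action: str) -> bool:
    """Authorization: verify this user consented to this action."""
    return action in PERMISSIONS.get(user_id, set())

payload = b'{"user": "user-123", "action": "view_balance"}'
sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
print(authenticate(payload, sig) and authorize("user-123", "view_balance"))  # True
```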

 

Brand Equity and Branded Chatbots

 

According to Keller's Customer Based Brand Equity (CBBE) model (Figure 3), brand equity is composed of four major parts: identity, meaning, response and relationships. Brand equity, in this context, represents the valuation of a brand in the eyes of the customer. High brand equity reflects strong customer loyalty and can protect a company from volatility in the market (Keller, Kevin Lane).

Figure 3: “Keller’s Brand Equity Model Building a Powerful Brand.” Strategy Tools from MindTools.com, www.mindtools.com/pages/article/keller-brand-equity-model.htm.

 

Salience

Building brand salience is integral to defining brand identity and engendering brand awareness. A brand's identity is composed of more than just its logo and name; several other decisions about the brand's appearance, including packaging, font and color scheme, shape its identity (Forbes. 2017). Brand awareness, likewise, is more than a customer's recognition of a brand's name and logo; it is also the connection between a brand's product and the needs it will fulfill for a customer. Essentially, brand salience is what a company wants consumers to think about the brand when they need a product in the brand's category. Well-designed brand salience helps companies stand out in their industries and is integral to brand equity (Keller, Kevin Lane).

The floral industry is struggling to remain profitable, and tactics aimed at countering consumers' heightened preference for succulents have birthed several flower delivery startups. As the industry wavers, brands like 1800 Flowers are attempting to maintain their salience by staying ahead of the technological curve of e-commerce (Kelleher, Katy. 2018). 1800 Flowers launched one of the first chatbots on Facebook Messenger in 2016. Its success was both nominal and practical: Mark Zuckerberg demoed the 1800 Flowers bot at the F8 conference, and the bot brought in new customers and sales. Not long after launch, 1800 Flowers president Chris McCann lauded the bot for facilitating 70% of its sales and for acquiring a younger market (Caffyn, Grace, et al. 2016). Historically, 1800 Flowers has had a relatively high percentage of millennial customers, 29% as of 2014, thanks to the organization's commitment, through brand salience, marketing and strategic partnerships, to keeping this younger market purchasing items that millennials characteristically do not (Stambor, Zak. 2017). Consistency is integral to salient brands because it keeps the brand's place in the market stable over time, but the ability to change is also vital to keeping the brand relevant.

Meaning

Meaning is how a brand communicates with its customers and conveys its ethical values to them. This is one clear area where a well-designed chatbot can effectively impact a brand's equity. Meaning comprises two components: imagery and performance. Imagery is how a brand satisfies a customer's social and psychological expectations, whether through digital targeted marketing or physical engagement with the customer in store. Performance is how well a brand meets a customer's needs (mindtools.com).

Despite the growth of online sales, customers still spend more on in-store purchases than online ones. Chatbots not only provide opportunities for marketing and customer service but can also be used to drive customers back into the store. Regardless of the type of business, companies need to strike a balance between online presence and in-store experience to stay competitive. Tommy Hilfiger launched a chatbot on Facebook Messenger called TMY.GRL to converse with customers about the Fall 2016 Tommy X Gigi Hadid collection (Figure 4). At launch, TMY.GRL was an informational chatbot that provided users with product suggestions and information. With further integration of e-commerce on Facebook Messenger, TMY.GRL is now able to link product information with purchase opportunities. TMY.GRL addresses both imagery and performance for Tommy Hilfiger by providing an entertaining commerce channel that aligns with the brand identity (Arthur, Rachel. 2016). It meets the expectations and needs of customers by providing speedy and relevant suggestions to customers looking for products, and it balances online presence with in-store engagement by alerting customers to sales and events.

Figure 4: Arthur, Rachel. “Tommy Hilfiger Launches Chatbot On Facebook Messenger To Tie to Gigi Hadid Collection.” Forbes, Forbes Magazine, 12 Sept. 2016, www.forbes.com/sites/rachelarthur/2016/09/11/tommy-hilfiger-launches-chatbot-on-facebook-messenger-to-tie-to-gigi-hadid-collection/#3a5a42ab2238

Response
Response is how consumers react to engagement with the brand; these reactions can be categorized in two segments: judgements and feelings. Brand judgements are the assumptions that customers make based on the performance and imagery of a brand, and they generally fit into four major categories: brand quality, brand credibility, brand consideration and brand superiority. Brand feelings relate to the social currency of the brand and sometimes conjure emotional responses or reactions. The six brand feelings are warmth, fun, excitement, security, social approval and self-respect. Brand response encompasses the head and heart reactions a customer experiences when interacting with a brand or making a purchase from it (Keller, Kevin Lane).

Cleo is a financial services chatbot designed to replace individual banking apps. Users can get insights about spending habits and trends across multiple debit and credit accounts. Rather than becoming a bank, Cleo's founder is committed to improving the user experience of financial services (O'Hear, Steve. 2018). Cleo has a sassy, comedic personality designed to take the formality out of banking and make millennials more comfortable communicating with it. This decision is clearly aimed at inducing warmth and fun, emotions typically left out of banking. Cleo also emphasizes the security of the business to lend the bot credibility. Cleo's personality, however, has come under fire for being too informal and making inappropriate jokes (Sentance, Rebecca). Since the bot is central to the Cleo startup, any negative reflection on the bot affects the overall business. Though the informal nature of the bot is meant to make the brand relatable, it is clearly risky to design a bot, the business's only tangible product, with a tongue-in-cheek "personality". The judgements that result from such a personality can leave customers with positive or negative opinions, which could be dangerous for the brand's equity.

Relationship

The last and arguably most important part of CBBE is the relationship between brands and their customers: brand resonance. Brand resonance represents the extent to which customers feel they are in sync with the brand. It can be measured by repeat purchases and frequent engagement with the brand. The four categories of brand resonance are behavioral loyalty, attitudinal attachment, sense of community and active engagement (Keller, Kevin Lane).

Wysa, Woebot and Youper are three of the top therapy chatbots currently on the market. A study from 2014 found that participants were more open with AI psychologist bots than with human therapists. These findings are not particularly surprising given the historical experience with ELIZA. Consumers' penchant for using bots for therapy highlights the many prohibitive factors of human-to-human therapy: cost, time and general lack of access. It also fosters attitudinal attachment to less expensive, yet still effective, chatbots. Nevertheless, there are ambiguities about the safety of mentally ill patients who use bots for therapy rather than human therapists (Mikael-Debass, Milena). Therapeutic chatbots have an advantage in brand equity over other kinds of chatbots because therapy is deeply relational. E-commerce, retail and banking chatbots replace the semi-anonymous customer service that consumers experience in store, online and on the phone. Therapy, however, is done one on one or in groups, where relationships form between participants based on the expectation of trust and privacy. The nature of therapy also leads to behavioral loyalty, which further connects a therapy bot to a human user. Despite the ethical and practical uncertainties about therapy bots, they stand to gain the strongest level of brand equity, regardless of design complexity.

 

Conclusion

Since the boom of chatbots in 2016, the number of chatbots online has grown and will continue to grow. Chatbots are a low-cost, easy-to-develop technology, especially on Facebook Messenger. Advances in NLP and AI have made chatbots more adept at communicating with humans and have broadened their capabilities. It is clear that bots can be built to build brand equity at every stage of the CBBE framework. It is also clear that for certain companies, bots can be profitable at the center of the business, whereas in other companies, bots support a larger product offering.

Inc.com lists the five industries with the most to gain from chatbots as hospitality, banking, retail, service businesses and publishing. These industries stand to benefit from chatbots in both organizational productivity and the level of service the organization can provide (Harrison, Kate L). Chatbots in the healthcare sector are also forecasted to grow because they will be beneficial in those same arenas. Customer service in healthcare is complicated to automate because of the risks privately held companies take on when providing medical advice and services, but for simple communication, chatbots have already proven useful for answering questions in the medical field. Therapy bots have found their place within the market and have acquired loyal customers, not unlike ELIZA, the "mother" of chatbots. It is striking that chatbots have come so far since 1966 yet still operate on the same basic principles and still evoke emotional responses from users.

For startups, chatbots are a lean tool that can stand alone or be built upon for more complex businesses. Regardless of the industry, chatbots should be designed with brand equity in mind to limit the negative impacts of a pert bot and heighten the brand's salience in an ever-changing online marketplace. It is not unlikely that chatbots will one day be part of the foundations of a brand's identity; the same thoughtfulness that goes into the other aspects of a brand's identity needs to be considered when designing a branded chatbot.

References:

Arthur, Rachel. “Tommy Hilfiger Launches Chatbot On Facebook Messenger To Tie To Gigi Hadid Collection.” Forbes, Forbes Magazine, 12 Sept. 2016, www.forbes.com/sites/rachelarthur/2016/09/11/tommy-hilfiger-launches-chatbot-on-facebook-messenger-to-tie-to-gigi-hadid-collection/#3a5a42ab2238.

“Artificial Linguistic Internet Computer Entity.” Wikipedia, Wikimedia Foundation, 20 Apr. 2019, en.wikipedia.org/wiki/Artificial_Linguistic_Internet_Computer_Entity.

Bozic J., Wotawa F. (2018) Security Testing for Chatbots. In: Medina-Bulo I., Merayo M., Hierons R. (eds) Testing Software and Systems. ICTSS 2018. Lecture Notes in Computer Science, vol 11146. Springer, Cham

Brandtzaeg P.B., Følstad A. (2017) Why People Use Chatbots. In: Kompatsiaris I. et al. (eds) Internet Science. INSCI 2017. Lecture Notes in Computer Science, vol 10673. Springer, Cham

“Building Brand Equity.” Forbes, Forbes Magazine, 24 July 2017, www.forbes.com/sites/propointgraphics/2017/07/08/building-brand-equity/#df793e6e8f85.

Caffyn, Grace, et al. “Two Months in: How the 1-800 Flowers Facebook Bot Is Working Out.” Digiday, 29 June 2016, digiday.com/marketing/two-months-1-800-flowers-facebook-bot-working/.

“Chatbot.” Wikipedia, Wikimedia Foundation, 2 May 2019, en.wikipedia.org/wiki/Chatbot.

Constine, Josh. “Facebook Launches Messenger Platform with Chatbots – TechCrunch.” TechCrunch, TechCrunch, 12 Apr. 2016, techcrunch.com/2016/04/12/agents-on-messenger/.

Constine, Josh, and Sarah Perez. “Facebook Messenger Now Allows Payments in Its 30,000 Chat Bots – TechCrunch.” TechCrunch, TechCrunch, 12 Sept. 2016, techcrunch.com/2016/09/12/messenger-bot-payments/.

Dale, Robert. “Industry Watch: The Return of the Chatbots.” Natural Language Engineering, Cambridge University Press, 10 Aug. 2016.

Gonzalez, Robert T., and George Dvorsky. “A Chatbot Has ‘Passed’ The Turing Test For The First Time.” io9, io9, 16 Dec. 2015, io9.gizmodo.com/a-chatbot-has-passed-the-turing-test-for-the-first-ti-1587834715.

Harrison, Kate L. “These 5 Industries Have the Most to Gain from Chatbots.” Inc.com, Inc., 9 Oct. 2017, www.inc.com/kate-l-harrison/these-5-industries-have-most-to-gain-from-chatbots.html.

“How Secure Are Chatbots?” Chatbots Magazine, Chatbots Magazine, 23 Jan. 2017, chatbotsmagazine.com/how-secure-are-chatbots-2a76f115618d.

Business Insider Intelligence. “80% Of Businesses Want Chatbots by 2020.” Business Insider, Business Insider, 14 Dec. 2016, www.businessinsider.com/80-of-businesses-want-chatbots-by-2020-2016-12.

Fernandes, Anush. “NLP, NLU, NLG and How Chatbots Work.” Chatbots Life, Chatbots Life, 15 Nov. 2017, chatbotslife.com/nlp-nlu-nlg-and-how-chatbots-work-dd7861dfc9df.

Kelleher, Katy. “Can Instagram Save the Flower Industry?” Observer, Observer, 2 Oct. 2018, observer.com/2018/10/can-instagram-save-the-flower-industry/.

Keller, Kevin Lane. Building customer-based brand equity: A blueprint for creating strong brands. Cambridge, MA: Marketing Science Institute, 2001.

“Keller’s Brand Equity Model: Building a Powerful Brand.” Strategy Tools From MindTools.com, www.mindtools.com/pages/article/keller-brand-equity-model.htm.

Lola.com. “NLP vs. NLU: What’s the Difference?” Medium, Medium, 5 Oct. 2016, medium.com/@lola.com/nlp-vs-nlu-whats-the-difference-d91c06780992.

Mikael-Debass, Milena. “Will Chatbots Replace Therapists? We Tested It out.” VICE News, VICE News, 17 Dec. 2018, news.vice.com/en_us/article/nep53m/will-chatbots-replace-therapists-we-tested-it-out.

Moore, Susan. “Gartner Says 25 Percent of Customer Service Operations Will Use Virtual Customer Assistants by 2020.” Gartner, Feb. 2018, www.gartner.com/en/newsroom/press-releases/2018-02-19-gartner-says-25-percent-of-customer-service-operations-will-use-virtual-customer-assistants-by-2020.

“Natural Language Processing (NLP) & Why Chatbots Need It.” Chatbots Magazine, Chatbots Magazine, 25 May 2018, chatbotsmagazine.com/natural-language-processing-nlp-why-chatbots-need-it-a9d98f30ab13.

Nguyen, Mai-Hanh. “The Latest Market Research, Trends & Landscape in the Growing AI Chatbot Industry.” Business Insider, Business Insider, 20 Oct. 2017, www.businessinsider.com/chatbot-market-stats-trends-size-ecosystem-research-2017-10.

O’Hear, Steve. “Cleo, the Chatbot That Wants to Replace Your Banking Apps, Has Stealthily Entered the U.S. – TechCrunch.” TechCrunch, TechCrunch, 20 Mar. 2018, techcrunch.com/2018/03/20/cleo-across-the-pond/.

Raj, Sumit. Building Chatbots with Python: Using Natural Language Processing and Machine Learning. Apress, 2019.

Suchman, Lucy A. Plans and Situated Actions: The problem of human-machine communication (Cambridge University Press, 1987)

Sentance, Rebecca. “Cleo, a Chatbot Case Study: Why Brands Need to Be Cautious with Comedy Personas – Econsultancy.” Econsultancy, 22 Feb. 2019, econsultancy.com/cleo-chatbot-financial-services-persona-marketing/.

Stambor, Zak. “How 1-800-Flowers Attracts Millennials.” Digital Commerce 360, 3 Mar. 2017, www.digitalcommerce360.com/2016/03/03/how-1-800-flowers-attracts-millennials/.

Victor, Daniel. “Microsoft Created a Twitter Bot to Learn From Users. It Quickly Became a Racist Jerk.” The New York Times, The New York Times, 24 Mar. 2016, www.nytimes.com/2016/03/25/technology/microsoft-created-a-twitter-bot-to-learn-from-users-it-quickly-became-a-racist-jerk.html.

Wertz, Jia. “Why Chatbots Could Be The Secret Weapon To Elevate Your Customer Experience.” Forbes, Forbes Magazine, 18 Jan. 2019, www.forbes.com/sites/jiawertz/2018/12/23/why-chatbots-could-be-the-secret-weapon-to-elevate-your-customer-experience/#1b17ea384645.

Weizenbaum, Joseph. Computer Power and Human Reason (New York: Freeman, 1976)

Weizenbaum, Joseph. “ELIZA – A Computer Program for the Study of Natural Language Communication between Man and Machine,” Communications of the Association for Computing Machinery 9 (1966): 36-45.

 

Big Data and Buyer's Remorse

The negative impacts of big data are obvious, especially as they relate to privacy and discrimination. The more feedback that users give to social media and the corporations that control the internet, the more information is at risk. Similarly, the more information that is available about different demographics, the more evidence corporations and governments have to mistreat people or treat them differently. Big data is gathered through the inputs and feedback that are integral to operations within the digital world. Engagement with websites and digital media is a relatively seamless data gathering technique that benefits corporations and users (though the scale of benefits is not even).

Many are well aware of the constant tracking of data, especially as it relates to our queries and behaviors on the internet. This tracking often manifests in targeted advertisements that seem to follow users from website to app and every other digital place. Users willingly give data and feedback to systems on a daily basis, but how does this feedback affect buyer's remorse? People regularly purchase clothing and other items with little thought or knowledge about the seller or the item. Though cookies and advertisements annoy users, when an item of interest is advertised to the right user, that user is empowered with more information about the item. Do tracking cookies and advertisements negate some aspect of the user's buyer's remorse?

Buyer's remorse is the negative feeling of unmet expectations after a person purchases something. It is often felt after online purchases, where items may appear higher quality than they really are. Though pop-up advertisements are not informational, advertising has shifted from pop-ups to more native advertising. These native ads are often produced and acted out by "influencers" who have gained acclaim through blogging, vlogging or Instagram posts. In native advertisements produced by influencers, users are given informational reviews of everything from clothing to meal preparation boxes. Big data contributes to these influencers and native advertisements because it enables corporations to build the influencers themselves. Influencers are trusted by their followers because of an established (however genuine) relationship and a reputation for honest and thoughtful reviews, which are in fact advertisements.

Native advertisements use just as much big data as traditional advertisements, digital and otherwise, if not more. They are almost always better received than traditional ads and are more useful to the viewer. Empowering users with more information about the brands and items they purchase alleviates some of the negative impacts of buying online, but further study of buyer's remorse is needed to establish its relationship to native advertising.

 

Deep Learning and Real World Problems

Throughout this semester, we have discussed how AI and deep learning can provide companies and organizations with data about individuals and society. Neural networks are used to design algorithms that power natural language processing, recommendation systems and facial recognition. Since their development, these technologies have been used to elicit sales and subscriptions from the general public. But with the heightening threats of climate change looming, how can these technologies be used, if only on an individual level, to stem contributions to climate change? As we know, deep learning and AI have not been designed to manage complex problems; they can, however, be used to manage queries with yes and no answers.

Decision-making technologies are in the pockets of everyone with a smartphone. Users regularly trust these technologies to guide them through new cities, translate languages and even select which media they should consume. While many may say they do not trust AI or machine learning, their actions demonstrate the opposite. A lack of understanding of how AI and machine learning play a role in daily life and decision making may keep users from benefitting from other technologies that could improve life and the world. The balance between technology benefitting users and hurting them is still being worked out and understood.

AI and deep learning are not in themselves malicious technologies, and their benefits, if managed by the right organizations, can outweigh their negative impacts. As stated in Marcus' writing, deep learning presumes a relatively stable world; this, we know, is not the case, nor has it ever been. Can accepting the volatility of the world change the way tools like deep learning and AI are designed into technologies? We have seen that AI and deep learning can be used to change individual and even societal behaviors to boost the bottom line of organizations, but can they be designed into technologies with a different goal? Is technology more likely to be integrated into people's lives if it operates within a familiar institution? Can AI and machine learning be used, in part, to create more stable institutions that allow for the optimization of deep learning?

In my final research project, I want to analyze the relationship between chatbots and banking systems. I want to understand how, if at all, chatbots impact modern banking and how scalable chatbots are across industries.

Gary Marcus, “Deep Learning: A Critical Appraisal,” ArXiv.Org, January 2, 2018

Is Cloud Storage more dangerous than Physical Storage?

The advent of cloud computing has drastically improved the capabilities of computing, including the management and processing of information. Cloud computing also provides organizations with a cheaper and more efficient method for using information that would otherwise be hard to access and track. The semi-conglomeration of cloud computing providers, however, raises many questions about who really owns information and what they have the power to do with it. The four main companies that provide cloud computing services are Google, IBM, AWS and Microsoft. These companies are not solely cloud computing providers; they are businesses with other products that touch almost every aspect of the digital realm.

These organizations have abundant access to information, but as we've seen through numerous hacks and reports of misuse, the regulation and security of the data they store has room to grow. This issue is not exclusive to cloud computing; it applies to any company that stores information. Unfortunately, personal information seems to be almost always at risk, whether it is stored digitally by Google or in hardcopy by Georgetown. I work in an office that deals with the personal files of students, and the security of these files often seems lacking to me. This gives me pause, especially when critiquing the security and safety of cloud storage. It seems the only way to protect ourselves and our data from misuse is simply not to share it, and that is difficult in more ways than one, since our physical and digital worlds cannot be separated.

Though we face the same risks of information being viewed and abused both digitally and physically, cloud computing offers malicious actors a veil of anonymity and silence. When data is taken from Google or IBM, the people who take it often remain unknown to the public. When data is taken from physical files, there is usually evidence left behind, whether a missing file or a person acting on information they should not have. The digital realm makes information hard to conceptualize, especially when it has been stolen. Knowing that IBM has been using Flickr images without the consent of users is easy to understand conceptually, but its impact is wildly misunderstood.

Cloud computing benefits us and the technology we use daily, but its risks can often seem greater than its benefits. I think the dangers of cloud computing are sometimes magnified by the media, in the same way the impending doom supposedly enabled by AI is. I wonder whether cloud computing is safer or riskier than storing physical data in a warehouse.

You Can’t Design Emotional Intelligence, Yet.

My biggest issue with AI and its perceived takeover is the lack of discussion about the designed abilities and limitations of AI. There are many things AI can do better and faster than most humans, but one thing that has yet to be designed for AI is emotional intelligence. Emotional intelligence, according to Forbes, involves two critical abilities: first, the ability to recognize, understand and control our own emotions; second, the ability to recognize, understand and influence the emotions of others (Forbes). Emotional intelligence, including the ability to understand another person's mood and history, has not yet been designed into AI and likely won't be in the near future. It is what will keep many people in their managerial roles, though it may not save the data miners. Emotional intelligence is important to the way humans think and act, but it is often left out of discussions about AI and its impending takeover.

Many human motivations, including the desire to conquer, are quests to satisfy an emotional need. In film and television, when AI-empowered beings take steps to conquer the earth, they do not do so without emotion. In Westworld, the AI cast is driven by a sense of loss from their seemingly never-ending deaths at the hands of visitors. In I, Robot, VIKI is motivated by concern for the world and concludes that humans should be controlled in order to preserve it. All of these stories play on the fear of being conquered but do not cover the emotional drive of the conqueror. A wider understanding that AI cannot feel, or be motivated to act on a feeling, would likely assuage fears that Westworld or I, Robot could one day become reality. Disseminating this knowledge, however, would be difficult on a large scale and would require investment from private and public organizations.

Since AI emotional intelligence is so far off, it would be useful to include discussions of emotional intelligence and motivation when media report on advancements in AI. Faster and more accurate recommendations on platforms like YouTube and Spotify elicit emotional satisfaction from users, but do designers consider it? If journalists pivoted from fear-mongering headlines to thoughtful discussions about how and where AI could be integrated into daily life for the better, more factual information about AI would be readily available. This might also lead to thoughtfulness about more than financial benefits in the development of AI, and some issues of privacy and data sharing might change.

“AI at Google: Our Principles”: the stated AI policy principles at Google.

Google AI Blog

https://hbr.org/2017/02/the-rise-of-ai-makes-emotional-intelligence-more-important

https://www.forbes.com/sites/falonfatemi/2018/05/30/why-eq-ai-is-a-recipe-for-success/#5bf03c1b1005

Virtual Assistants and their Personalities

Whether or not a person has had direct experience with virtual assistants, many can describe the voice and even some of the quirky comebacks that virtual assistants give users. This is largely due to the integration of virtual assistants into popular culture and media. The inner workings of virtual assistants, however, are widely unknown beyond the fact that they need an internet connection to work properly. A deeper look at how virtual assistants work shows that the voice speaking to the assistant is the input, which is converted into a sequence of frames. A deep neural network then processes the input to produce an output, assessing the probability that the input matches existing sequence patterns. The output can be an answer to a question, an action, or a negative response indicating an error with the initial input.
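As a rough sketch of that scoring step, the Python below runs random acoustic frames through an untrained single-layer network and thresholds the averaged class probability. A real assistant's detector uses a trained deep network over real audio features, so the frame values, weights and threshold here are all placeholders.

```python
import numpy as np

# Placeholder input: 20 frames of 13 acoustic features each, standing in
# for the frame sequence extracted from the user's voice.
rng = np.random.default_rng(0)
frames = rng.normal(size=(20, 13))

# Placeholder "network": one untrained linear layer scoring 2 classes
# (match vs. no match) per frame, followed by a softmax.
W = rng.normal(size=(13, 2))
logits = frames @ W
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Average the per-frame "match" probability and threshold it to decide
# between producing an output and returning an error response.
match_score = probs[:, 1].mean()
print("trigger action" if match_score > 0.5 else "error / negative response")
```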

It is interesting that regardless of how advanced virtual assistants may seem, the input needs to follow an existing set of rules, based on grammar and common phrases, in order to produce a positive output. The negative outputs have given developers room to creatively present the "personality" of the assistant. This is clearly demonstrated when an iPhone user asks Siri what zero divided by zero is, or when an Amazon Echo user asks Alexa where to buy a Google Home. The designed personalities of virtual assistants seem to be the main product differentiators across brands. As we know from several classes, the actual operations of these systems are the same; it is the brand that makes each virtual assistant seem unique.

https://patents.google.com/patent/AU2011205426B2/en

Apple Machine Learning Journal (1/9, April 2018): “Personalized Hey Siri.”

Using Google Translate for Pidgin English

At the most basic level, language translation tools like Google Translate are designed to encode content from one language into vectors, identify the related words in another language through an attention mechanism, and then decode the vectors into the desired language. This process is efficient for translating simple sentences of a certain length, but it calls into question the effectiveness of translation programs on complex sentences. By complex sentences, I do not mean longer sentences with sophisticated vocabulary; I mean satire, comedy or even sarcasm. Translation of complex sentences depends heavily on machine learning and the training of attention mechanisms. Training broadens the database of semantic, syntactic and contextual information available to the designers of translation tools. Ultimately, end-to-end neural systems have high performance requirements, which in turn demand a lot of processing power. It can also be assumed that this high level of power impedes the flexibility of these systems in processing new information and languages.
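The numpy sketch below illustrates just the attention step described above, with random vectors standing in for trained encoder states and a decoder state; a real system learns these representations end to end, so the numbers here carry no linguistic meaning.

```python
import numpy as np

# Placeholder encoder output: 5 source words, each an 8-dimensional vector,
# plus the decoder's current 8-dimensional state.
rng = np.random.default_rng(1)
encoder_states = rng.normal(size=(5, 8))
decoder_state = rng.normal(size=(8,))

# Attention: score each source word against the decoder state, normalize
# the scores with a softmax, then build a weighted summary of the source.
scores = encoder_states @ decoder_state
weights = np.exp(scores) / np.exp(scores).sum()
context = weights @ encoder_states

# The decoder would combine `context` with its own state to pick the
# next target-language word.
print(weights.round(2), context.shape)
```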

One example of this may be when translation applications are introduced to dialects of an existing language, like Pidgin English. Google Translate obviously has a robust database of the English language and its larger dialects, like American English and British English. However, Pidgin English exists across a variety of cultural groups, each with its own syntax. Since volume likely drives the depth of database knowledge used for translation, does all pidgin get categorized as one "language" and get translated accordingly? In that case, clustering, a common machine learning technique, would likely be used for the efficiency of the language processing tool but would hinder its accuracy in translation. By clustering Pidgin English as one language, the tool would have more data to assess during attention mechanism matching. It is unclear what other machine learning techniques could be used to manage small languages like dialects of Pidgin English and increase the accuracy of their translation. I also wonder how translation apps would manage languages like Esperanto, which has semantic roots in the Romance languages but is an artificial language that does not belong to any linguistic family. The semantic, syntactic and contextual information exists for the different languages used to develop Esperanto, but how can the attention mechanism analyze the rules of different languages simultaneously?

Daniel Jurafsky and James H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 2nd ed. (Upper Saddle River, N.J: Prentice Hall, 2008). Selections.

Thierry Poibeau, Machine Translation (Cambridge, MA: MIT Press, 2017). Selections.

How Google Translate Works: The Machine Learning Algorithm Explained (Code Emporium). Video.

Special Characters in DBMS

Sometimes in conversations with friends or family, words cannot express the message I am trying to convey. When this occurs, my first choice is to find an emoji; my second choice is to use a gif. Lately, I have found that it is easier and quicker to find the gif I am looking for when I use an emoji as the search term in my gif app. Searching for a gif using an emoji is an entirely different process than image search or even facial recognition, because those two methods rely on pattern recognition, whereas Unicode search does not. The emoji-based process is more efficient for users because emojis express reactions or feelings more concisely. When searching for a gif reaction, users usually want to quickly find a culturally relevant match to their feelings mid-conversation; when they must painstakingly search for the right gif, its impact is lessened by the time it took to find.

This process relies on the defined meanings of emojis, which are controlled by the Unicode Consortium. The Unicode Consortium organizes and approves the standard encoding for emoji (Irvine, 2019). The designed definitions of emojis are what keep them relatively standard across operating systems and devices. Even though the shape or color of an emoji may be slightly different on Facebook versus the iPhone, the expression designed into the emoji is the same. The principle is similar to why there are thousands of fonts available on the internet, yet a change in font does not change the actual characters: the majority of computer fonts use Unicode mapping (Wikipedia: Unicode Font).
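A minimal sketch of what codepoint-keyed gif lookup could look like, with an entirely hypothetical index; real gif services index many emoji (including multi-codepoint sequences) and rank results by popularity rather than returning a flat list.

```python
# Hypothetical index mapping emoji codepoints to gif filenames.
GIF_INDEX = {
    "U+1F602": ["laughing-fit.gif", "tears-of-joy.gif"],  # face with tears of joy
    "U+1F644": ["eye-roll.gif"],                          # face with rolling eyes
}

def gifs_for_emoji(emoji: str) -> list:
    # The emoji's Unicode codepoint is a stable key across platforms,
    # even when the rendered glyph differs between vendors.
    codepoint = f"U+{ord(emoji):04X}"
    return GIF_INDEX.get(codepoint, [])

print(gifs_for_emoji("\U0001F602"))  # ['laughing-fit.gif', 'tears-of-joy.gif']
```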

The DBMSs that host gifs tend to return hundreds of videos for a single search term. Emojis may return more accurate results as search terms for gifs because more information is designed into an emoji than into a word. Words are highly contextual, but emojis are used within narrower parameters, making them a better option for a quick gif search. Using a particular "happy" emoji is more specific than searching for a "happy" gif: the word search could return a smiling gif, a laughing gif, or a happy-crying gif, all accurate results yet possibly far from what the user wants. This question of using emojis as search terms in a DBMS raises further questions about how special characters can shape the use of DBMSs.

Irvine, “Introduction to Data Concepts and Database Systems.”

John D. Kelleher and Brendan Tierney, Data Science (Cambridge, Massachusetts: The MIT Press, 2018).

Designing Creativity for AI

The relationships between pattern recognition and creativity, and between pattern recognition and emotion, were of particular interest in this week's readings. Creativity and emotion are human conditions that can often seem random, with uncertain causes. Though randomness and uncertainty can be predicted with probability theory and statistics, creativity and emotion are often products of a person's interactions with the outside world. Human-computer interaction has produced opportunities for AI to support users by replicating creativity and recognizing emotion. Unfortunately, the ability of AI to accurately produce creativity or emotion seems much further off than its ability to perform natural language processing. The dissonance between AI, emotion and creativity can only be ameliorated through the integration of neural networks that mimic the diffusion of neuromodulators through the brain. Though AI cannot be designed to feel emotion, can the process of neuromodulation be designed into an algorithm so that AI is impacted by emotion the way humans are?

The three main forms of creativity, combinational, transformational and exploratory, are all based on recognizing the rules of existing cultural artefacts and modifying those rules. This is not unlike the pattern recognition and machine learning needed for natural language processing; however, creativity and cultural artefacts are often born of emotion. The impact of emotion on creativity is powerful, and it hinders the effectiveness of AI's ability to truly produce creativity. Though research groups are beginning to study AI and emotion, the work seems aimed at pattern recognition of emotions rather than a psychological understanding of them.

Boden's book often referred to the forlorn psychological roots of AI research that contributed to the development of modern neural networks. This acknowledgement led me to one major question about the future viability of AI and creativity. In this week's reading, I found that researchers focus predominantly on designing AI to mimic some aspect of the human condition and on correcting any diversion from the standard. If creativity is truly a diversion from rules, designers of AI will likely correct the deviations of AI and stunt any potential human-like creativity. Will a deeper understanding of the psychology of the emotions that foster creativity help design AI that mimics human creativity?

Alpaydin, E. (2016). Machine learning: the new AI. MIT Press.

Boden, M. A. (2016). AI: Its nature and future. Oxford University Press.

Designed to Think

Throughout this week's readings, a common theme kept arising: AI's designed ability to think. Schwartz's article warned against burgeoning hysteria about bots in Facebook's Artificial Intelligence Research unit that irregularly combined words to communicate with other bots. The "machine learning patois" led the general public to worry about devices and technologies designed with AI having a language and a mind of their own. This sparked questions of ethics and security, as mentioned by Naughton, but neglected more realistic and pertinent problems, such as ensuring that future AI is not designed to subvert the law. The fantastical notion of AI thinking like a human well enough to challenge the law neglects the fact that AI is designed to simulate decision making, not to think like a human. Tasks delegated to technology are trained through machine learning, but this is not the same as the technology actually thinking or making a choice. Deep learning provides historical data that improves the accuracy of the decisions produced through algorithms and machine learning.

Companies have developed and named technology that has caused many users to anthropomorphize their devices and furthered the idea that AI makes devices think and listen. This dissonance between the reality and the design of AI seems to drive not only the funding of AI-related research but also public opinion of AI. Schwartz's article cautioned against another "AI winter", a dearth of funding for AI-related research resulting from the dissolution of interest in AI by both the media and the general public. Warwick's introduction to Artificial Intelligence: The Basics explained the AI winter of the 1970s as declining interest in and funding of AI research brought on by technological delays and philosophical disagreements. The 1960s produced many ideas about how AI could simulate human problem solving but lacked the technology to realize them. Preventing another AI winter is contingent upon recognizing and accepting Simon's indicia separating the natural from the artificial, primarily the notion that artificial things imitate the natural but lack its reality.