Category Archives: Week 13

Machine Learning & DeepText Combat Cyberbullying

Tianyi Zhao and Adey Zegeye

Machine Learning – using “algorithms to get computer systems to go through content (images, text) and identify various trends and patterns across all of those data, based on what we have told them to look for (e.g., training it on labeled data – content that a human has already manually classified in some way – toxic or not toxic, abusive or not abusive). This can actually be done on unlabeled data as well, via what is called unsupervised learning – where the algorithm(s) tries to cluster the data into groups on its own).”

Deep Learning – “a subset of machine learning: after the system identifies trends and patterns across data by analyzing content, we ask it to constantly improve its probability of accurately classifying that content by continually training itself on new data that it receives.”

How Machine Learning Can Classify Online Abuse

The different layers might:

  1. The first layer might extract curse words (that the programmer has listed as abusive)
  2. The second layer might count up the curse words and divide them by the number of words in the message they appear in (to signal severity)
  3. The third layer might look at words in all CAPS
  4. The fourth layer might look at how many hateful words have second-person pronouns, meaning they were directed at someone else
  5. The fifth layer might check if this poster has been previously flagged for abusive content
  6. The sixth layer might look at punctuation (which could imply tone)

Additional layers might check for attached images or video and see whether that content has been classified as abusive before.
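As a toy illustration (not any platform’s actual pipeline), the layered checks above could be sketched as hand-rolled feature extractors. The curse-word list and pronoun set below are invented placeholders:

```python
# Hypothetical word lists; a real system's lists would be far larger.
CURSE_WORDS = {"jerk", "loser"}
SECOND_PERSON = {"you", "your", "u"}

def extract_features(message, author_flag_count=0):
    words = message.split()
    curses = [w for w in words if w.lower().strip("!?.,") in CURSE_WORDS]
    return {
        # layers 1-2: curse words and their density (severity signal)
        "curse_ratio": len(curses) / max(len(words), 1),
        # layer 3: words in ALL CAPS
        "caps_words": sum(1 for w in words if w.isupper() and len(w) > 1),
        # layer 4: second-person pronouns suggest abuse directed at someone
        "second_person": sum(1 for w in words if w.lower() in SECOND_PERSON),
        # layer 5: has this poster been flagged before?
        "prior_flags": author_flag_count,
        # layer 6: punctuation can imply tone
        "exclamations": message.count("!"),
    }

features = extract_features("YOU are a loser!!", author_flag_count=2)
```

A downstream classifier would then weigh these features rather than rely on any one of them alone.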

How DeepText Is Utilized in Instagram:

DeepText was first successful in spam filtering, and was then extended to eliminating mean comments. Each person on the development team looked at a comment and determined whether it was appropriate. If it wasn’t, they sorted it into a category of verboten behavior, like bullying, racism, or sexual harassment. By launch in 2017, the raters, all of whom are at least bilingual, had analyzed roughly two million comments, and each comment had been rated at least twice. Simultaneously, the system was tested internally, and the company adjusted the algorithms: keeping and modifying the ones that seemed to work and discarding the ones that did not. The machine gives each comment a score between 0 and 1, measuring how likely the comment is to be offensive or inappropriate. If the score is above a certain threshold, the comment is deleted.

Comments are rated based on several factors: semantic analysis of the text, the relationship between the commenter and the poster, and the commenter’s history. The system analyzes the semantics of each sentence and also takes the source into account. A comment from someone the user does not follow is more likely to be deleted than one from someone the user does follow. Likewise, a comment repeated endlessly on Martin Garrix’s feed is probably not being made by a human. The technology is automatically incorporated into users’ feeds, but it can be turned off as well.
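A minimal sketch of the thresholding step described above; the cutoff values and the follow-relationship adjustment are invented for illustration, since Instagram’s real threshold is not public:

```python
# Hypothetical thresholds; the 0-1 offense score would come from the model.
def should_delete(offense_score, commenter_followed_by_poster):
    # a comment from someone the user follows gets more benefit of the doubt
    threshold = 0.9 if commenter_followed_by_poster else 0.8
    return offense_score >= threshold
```

The same score can thus lead to different outcomes depending on the relationship between commenter and poster.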


Figure 1. Turning the Comment Filter On or Off in Settings



Pros & Cons

Pros:

  1. Automating the process of deleting hate speech and offensive comments helps filter out unwanted content on Instagram
  2. DeepText becomes more effective by allowing users to manually enter words or phrases they want blocked


Cons:

  1. Characters in hateful words are replaced with symbols to avoid detection;
  2. Some comments may not contain any problematic words but still might be incredibly offensive;
  3. Acronyms and Internet slang are changing constantly;
  4. The system may delete innocuous or helpful comments by mistake.

Works Cited:

Systrom, Kevin. “Keeping Instagram a Safe Place for Self-Expression.” Instagram Press, Jun. 29, 2017.

Systrom, Kevin. “Protecting Our Community from Bullying Comments.” Instagram Press, May 1, 2018.

Marr, Bernard. “The Amazing Ways Instagram Uses Big Data And Artificial Intelligence.” Forbes, Mar. 16, 2018.

Hinduja, Sameer. “How Machine Learning Can Help Us Combat Online Abuse: A Primer.” Cyberbullying Research Center, Jun. 26, 2017.

Thompson, Nicholas. “Instagram Unleashes an AI System to Blast Away Nasty Comments.” Wired, Jun. 29, 2017.

Bayern, Macy. “How AI Became Instagram’s Weapon of Choice in the War on Cyberbullying.” TechRepublic, Aug. 14, 2017.


ARTificial Intelligence

By Shahin, Beiyue and Kevin

What does it mean to be creative?

  • Creativity as a definition is hard to pin down. Why is solving a math equation – the process of transforming numbers into different numbers — not considered creative, but doodling is?
  • Throughout history, even, the definition of creativity has undergone dramatic changes.


Is Laocoön and His Sons a creative work?

  • We would say yes, but its creator might say no.
    • “Art (in Greek, “techne”) was “the making of things, according to rules.” It contained no creativity, and it would have been — in the Greeks’ view — a bad state of affairs if it had.”

Contemporary Creativity

Oxford mathematician Marcus du Sautoy breaks creativity down into three forms:

  1. Exploratory creativity – operating within a set of rules to push skill and expertise (music, portraits)
  2. Combinatorial creativity – the combination of two or more disparate things
  3. Transformational (“breaking”) creativity – “those phase changes when suddenly you’re boiling water, and water becomes steam and changes state completely.”

Can AI be Creative?

Is this video a creative work?

  • Would that opinion change if you knew that AI software did the coloring?

“The machine has no intent to create anything,” said Klingemann. “You make a fire and it produces interesting shapes, but in the end the fire isn’t creative – it’s you hallucinating shapes and seeing patterns. [AI] is a glorified campfire.” BBC

With the history of computing constantly using human-centric metrics to assess the “intelligence” of artificial intelligence, it’s no wonder that we are beginning to draw lines between human creativity and artificially intelligent creation.

    • That line seems to be drawn in the popular media around the labor and work that is done to create. So, let’s look at three different applications with various levels of human-computer cooperation.

The Human is the Artist, not the Machine

Ben Snell created a sculpture, Dio, that was designed by an artificial intelligence algorithm and then built with the shredded remains of the computer on which it was designed.

“I consider myself, not the computer, to be the artist,” he says. But he also talks enthusiastically about the agency of his algorithms, saying that “Dio began by trying to recreate from memory every sculpture it saw” and that he asked the computer “to close its eyes and dream of a new form.” He says he chose to use this figurative language because it makes these digital processes more relatable to humans.

Artificial Intelligence as an Agent of Creativity

Obvious is a group of three young French art students who are auctioning off a painting called the Portrait of Edmond Belamy. The painting was produced using open source code from 19-year-old Robbie Barrat.

Many AI artists are upset about the sale because Obvious is pushing a narrative in which the algorithm, rather than the creator of the algorithm, is the artist.


  • A common theme in AI discourse – the people who pull or use the human-created code for an algorithm are quick to attribute agency to the code itself, not the person who created the code.

From The Verge

“What might be the most interesting thing about the Belamy auction is that, with not much more than some borrowed code, an inkjet printer, and some enthusiastic press releases, a trio of students with minimal backgrounds in machine learning ended up at the center of a milestone in art history.”

Technical Aspects:

Michelangelo, the great Italian sculptor and painter, once offered a penetrating insight into the dual relationship between perception and creativity: “Every block of stone has a statue inside it, and it is the task of the sculptor to discover it.”

So before we study how AI makes creative visual artworks, let us review how AI perceives the real world and recognizes images using artificial neural networks.

For example, a bird image’s pixels can be viewed as the first layer of neurons in the system, which feed forward into one layer after another, each connected to the next, to perform pattern recognition. With a lot of training examples, the computer learns to recognize the image as a bird. Such a system has three layers: input, hidden, and output.

Let’s represent the layers with three variables: x (the input), w (the weights), and y (the output). This gives a simple equation: x · w = y. If we know x and w, we can compute the output y, which is the process of perception.

Data scientists then experimented with solving for x, given a known w and a known y. In other words: you already have a neural network trained on birds and the label “bird”; what input image would produce that label? Through this reverse process, AI completes a creative act.
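The “solve for x” idea can be shown with a toy linear model: fix the weights w, then gradient-ascend on the input x until the “bird” output is large. Real systems (DeepDream-style feature visualization, for example) do this through deep networks; everything below is a toy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)     # the known, "trained" weights of a toy linear network
x = np.zeros(4)            # start from a blank "image"

for _ in range(100):
    # forward pass y = x @ w is "perception"; its gradient w.r.t. x is just w,
    # so nudging x along w raises the output ("solving for x")
    x += 0.1 * w

activation = x @ w         # the optimized input now strongly activates the unit
```

For a linear model the answer is trivially a scaled copy of w; with a deep network the same procedure produces the dream-like generated images described above.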

AI as a Curator of Art

The Netflix suggestion algorithm — is it creative? Like many critics and scholars of art and expression, the Netflix algorithm groups and suggests films in chunks that transcend classic genre definitions. In this way, the Netflix algorithm helps viewers see commonalities within creative expressions that may otherwise have escaped notice.

  • Thus, AI can help us form creative connections among existing pieces of art.

“Some artists working in this field say they are merely channeling the creativity of computers and algorithms, but others protest, and say that these systems are artistic tools like any other, created and used by humans.”

Case One: AI as a Tool to Take Away Boring Aspects of the Creative Process (AAATTTABAOTCP)

Automatic Tagging and Searchable Content

From The Verge

Creative software companies like Adobe are implementing tools that allow users to automatically tag their content libraries to make their assets easily searchable.

Automatically Coloring Line Art

Celsys Clip Studio is an application used in manga and anime that allows users to use AI to automatically color their line art illustrations.

First the user creates a “hint” layer to give the software a seed of intention for the rest of the coloring.

Then, the software does its work.

Allowing mundane work like basic coloring to be accomplished by an algorithm could free animators up to experiment more with different styles or unique effects.

Case Two (Collaboration, Human-Machine Interaction): Magic Google Sketchpad

The Google Brain Team, a machine intelligence team focused on deep learning, has created Magenta, a piece of technology that can generate art and music using recurrent neural networks.

For example, every time you start drawing a doodle, Sketch RNN tries to finish it and match the category you’ve selected.

Over 15 million players have contributed millions of drawings playing Quick, Draw! These doodles are a unique data set that can help developers train new neural networks, help researchers see patterns in how people around the world draw, and help artists create things we haven’t begun to think of.
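A highly simplified sketch of the Sketch-RNN idea: a recurrent cell consumes a doodle as a sequence of (dx, dy) pen offsets and predicts the next offset. The weights below are random and untrained, so this illustrates only the data flow, not a real trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
# random, untrained weights for a tiny recurrent cell
Wx = rng.normal(size=(2, 8))   # input (dx, dy) -> hidden
Wh = rng.normal(size=(8, 8))   # hidden -> hidden (the recurrence)
Wo = rng.normal(size=(8, 2))   # hidden -> predicted next (dx, dy)

def predict_next_offset(strokes):
    h = np.zeros(8)                       # hidden state
    for dxdy in strokes:                  # feed the user's partial doodle
        h = np.tanh(dxdy @ Wx + h @ Wh)   # recurrent update
    return h @ Wo                         # predicted next pen offset

next_move = predict_next_offset([np.array([1.0, 0.0]), np.array([0.0, 1.0])])
```

The real model is trained on the Quick, Draw! data so that repeatedly sampling the next offset completes the doodle in the selected category.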

Case Three: AI as Artist?

Which painting is human-created and which is AI-created?

Image taken from GumGum’s website:

The case of GumGum, a company focused on AI development for computer vision, demonstrates perhaps the closest we can get to a machine being autonomously creative. Piloted as a Turing test in which people determine whether a work of art was developed by a human or a machine, the GumGum team developed a generative adversarial network (GAN), a deep neural network made up of two other networks that feed off of one another (Skymind). One network, the generator, generates new data, while the other, the discriminator, evaluates an input for authenticity (Skymind). The generating network begins by producing an image at random; soon the discriminator network feeds data back into the generator by critiquing what is being produced (Skymind). From there, the generator fine-tunes its output until the discriminator lessens its critiques, suggesting the generator has produced something well-formed enough to be identified as a creative work of art (Skymind). The data set used for the AI machine, called the CloudPainter, was a collection of art by 20th-century American abstract expressionists.
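The generator/discriminator loop described above can be sketched on 1-D “art”: a logistic discriminator learns to separate real samples (placed near 3.0) from the generator’s fakes, and the generator then follows the discriminator’s gradient toward the real data. Linear models and hand-derived gradients stand in for deep networks here; this is purely a structural sketch, not GumGum’s system:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(2)
a, b = 0.0, 0.0    # discriminator: D(x) = sigmoid(a * x + b)
mu = -2.0          # generator produces samples near mu; "real art" sits near 3.0
lr = 0.1

# Discriminator phase: learn to score real samples high and fakes low.
for _ in range(500):
    real = 3.0 + 0.1 * rng.normal()
    fake = mu + 0.1 * rng.normal()
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(a * x + b)
        a += lr * (label - p) * x    # logistic-regression gradient step
        b += lr * (label - p)

d_real, d_fake = sigmoid(a * 3.0 + b), sigmoid(a * mu + b)

# Generator phase (one step): follow the gradient of log D(fake) w.r.t. mu,
# i.e. move the fakes in the direction the discriminator scores as more real.
p = sigmoid(a * mu + b)
mu += lr * (1.0 - p) * a
```

Alternating these two phases many times is what drives the generator’s output toward something the discriminator accepts as authentic.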

It’s hard to determine whether the creative work of the CloudPainter is in fact creative. It can be argued that, since the data collected is from preexisting 20th-century art, what the CloudPainter (or other future ‘autonomous’ AI technologies) created is a creative work of remix practices. Using elements and design factors from hundreds of works of art, the CloudPainter was programmed to generate artwork that not only lacked a fixed target body of work to be generated, but also constituted an original piece of art influenced by other works. Furthermore, it can be argued that the algorithms programmed to make AI perform the way the CloudPainter performs are creative work in themselves. Programming can be identified as logic-based creativity. Though the AI machine itself is not an autonomous imaginative being that produces original thoughts of its own, the influenced and applied algorithms are what is creative and original.


Ethics and Closing Remarks

Many ethical considerations come into play in the discourse of creativity and artificial intelligence. The world of art is already considered a personal, human-to-human experience in which one human creates a body of work to express an idea or emotion to another human, after which the artwork begins to exist in a broader cultural context. Additionally, there are many misunderstandings and misconceptions about the world of AI and technology, in which those who are unaware of how AI functions believe that “AI” will replace humans, taking our jobs and threatening our economic well-being. Though the threat of automation does exist, people fail to understand that AI and machines are not autonomous. They are not self-thinking beings.

The different mediums in which AI can create art, whether as a tool, a machine assist, or a self-serving creative producer, threaten the sanctity of what “art” and “creativity” really are. Creativity was never considered anything but human, because creative work is the product of human interaction. Technology had yet to be imaginative to the point where artificial intelligence itself could be called a creative entity. Already, when reviewing AI-generated bodies of work, if told that the work is AI-produced, art critics criticize it harshly, calling it one-dimensional, boring, and unimaginative, as if it were a knockoff of already existing artists (GumGum Insights). There is a real fear within the realm of art that creativity and originality could become oversaturated if AI-created artwork begins to claim “creativity” when the algorithms that generate it actually stem from other, actual humans.

If we begin to see more and more AI-driven creative work, the false narrative of AI as a self-thinking machine that will take over human work and production will further escalate. Legal questions about AI as a creative being also loom, since there is currently no protection for work that was not created by a human. Who does AI-generated content belong to? Can someone claim AI-generated art as their own? Does it belong to the programmers, developers, or engineers? Or would it belong to whomever, or wherever, the data came from (for instance, the hundreds of 20th-century paintings that GumGum’s CloudPainter scanned as data)?

Granted, introducing new ways for AI to be programmed to produce various forms of content can be useful, such as a tool for creation and creativity that remains primarily in human control. Using AI in other mediums that generate more content can also create a new and helpful discourse for computer scientists, engineers, and sociologists analyzing the impact of technology on the world of art. Regardless of the impact of AI entering the creative sphere of content creation, which will have real benefits, the underlying question remains: is AI a creative agent? By textbook definition, no. It is not the machine that creates creative content, but rather the programmer who writes the creative algorithms that make the machine perform the way it does. In practice, however, the discourse tends to link creativity to the machine alone, essentially discrediting the original data that the machine has absorbed. This is a result of the general public’s and media’s glamorization of AI as an autonomous being.


Tatarkiewicz, Władysław (1980). A History of Six Ideas: an Essay in Aesthetics. Translated from the Polish by Christopher Kasparek, The Hague: Martinus Nijhoff.


Ethical Implications of Advertising and Big Data


By Proma Huq and Deborah Oliveros


Talking points of the group presentation

  • Introduction
  • Video
  • AWS and overarching implications
  • How does this influence activity/who is privy to the data
  • Decision making due to surveillance
  • Interpretations and consequences
  • Implications of targeted marketing with case studies
  • Closing arguments

Presentation slides here



LinkedIn Explained

By: Linda Bardha, Dominique Haywood, Yajing Hu

How do the algorithms work on the LinkedIn platform?

LinkedIn Feed Algorithms

People you may know

  • People you may know (started in 2006, began with a python script)
  • LinkedIn pre-computes the data by recording 120 billion relationships per day in Hadoop MapReduce (running 82 jobs that require 16 TB of data)
  • Apache Hadoop is a collection of open-source software utilities that facilitate using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model
  • There are 5 test algorithms continually running – producing approximately 700 GB of output data for the ‘People You May Know’ feature.
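A toy version of the “People You May Know” computation: for each member, count mutual connections with people they are not yet connected to. At LinkedIn’s scale this runs as Hadoop MapReduce jobs over billions of relationships; here it is a few dictionaries with a made-up four-member network:

```python
from collections import Counter

# A made-up network; each set holds a member's direct connections.
connections = {
    "ana": {"bo", "cy"},
    "bo":  {"ana", "cy", "dee"},
    "cy":  {"ana", "bo", "dee"},
    "dee": {"bo", "cy"},
}

def people_you_may_know(member):
    scores = Counter()
    for friend in connections[member]:
        for fof in connections[friend]:              # friends-of-friends ("map")
            if fof != member and fof not in connections[member]:
                scores[fof] += 1                     # mutual-connection counts ("reduce")
    return scores.most_common()                      # best suggestions first

suggestions = people_you_may_know("ana")
```

The map/reduce framing matters because the friends-of-friends expansion and the per-candidate counting can each be distributed across many machines.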

Skill endorsement

  • After a member endorses a certain skill, the recommendations are stored as a key-value, by mapping a member id to the list of other members, skills id’s and the score.
  • The output is then used by the front-end team to display in the profile of a member
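The key-value layout described above might look like the following sketch, mapping a member id to a list of (endorser id, skill id, score) records for the front-end team to display. The ids and scores are made up:

```python
# In-memory stand-in for the key-value store; ids and scores are invented.
endorsements = {}

def record_endorsement(member_id, endorser_id, skill_id, score):
    # key: the endorsed member; value: records the front end can render
    endorsements.setdefault(member_id, []).append((endorser_id, skill_id, score))

record_endorsement(101, 202, "python", 0.9)
record_endorsement(101, 303, "python", 0.7)
```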

Jobs you may be interested in

  • LinkedIn uses Machine Learning and Text Analysis algorithms to show relevant jobs to a member’s profile
  • 50% of LinkedIn engagement comes from “Jobs you may be interested in” feature
  • The textual content like skills, experience, and industry are extracted from a member’s profile. Similar features are extracted from the job listings available in LinkedIn.
  • A logistic regression model is applied to know about the ranking of relevant jobs for a particular LinkedIn member based on the extracted features.
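A hypothetical sketch of that ranking step: each job becomes a feature vector of overlap with the member’s profile (say, skills match, industry match, seniority match), and a logistic model turns it into a relevance score used for sorting. The weights and feature values here are invented, not learned:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Invented weights for three overlap features:
# [skills match, industry match, seniority match]
weights = np.array([2.0, 1.0, 0.5])

# Invented feature vectors: overlap of two job listings with one member's profile.
jobs = {
    "data scientist":  np.array([0.9, 1.0, 0.8]),
    "sales associate": np.array([0.1, 0.0, 0.5]),
}

# score each job with the logistic model and sort by relevance
ranked = sorted(jobs, key=lambda j: sigmoid(jobs[j] @ weights), reverse=True)
```

In the trained system the weights are fit to historical apply/hire outcomes rather than chosen by hand.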

How does LinkedIn use AI & What are the consequences?


  • For employees


      • LinkedIn uses artificial intelligence in ways that employees experience everyday, such as giving them the right job recommendation, encouraging them to connect with someone, or providing them with helpful content in the feed.
      • LinkedIn can extensively personalize recommendations and search results for employees with deep learning. To perform personalization at that level, LinkedIn needs machine learning algorithms that can understand content in a comprehensive fashion. LinkedIn also leverages deep learning, using neural networks with multiple layers, to automatically learn the complex hierarchical structures present in data and understand content of all types.
      • AI systems can help LinkedIn find patterns in huge amounts of data. LinkedIn has a rich collection of data from many different sources; without AI systems, this work would be time-consuming. If an employee is a passive candidate who is not looking for a job, LinkedIn is careful to surface only jobs that are really good and help them get to the next opportunity. If they are an active candidate, LinkedIn sometimes takes more risk and shows them jobs that may or may not be in the ballpark. Using all the past data about LinkedIn members and what those members look like, LinkedIn is able to teach machines which jobs are appropriate for which members.
      • LinkedIn also works in the background, doing things like making sure that employees are protected from harmful content, routing connections to ensure a fast site experience, and making sure that the notifications sent to employees are informative but not annoying.


  • For employers


    • LinkedIn can help employers source and manage candidates, and therefore save time. LinkedIn has announced more than one million “open candidates” who have signaled that they are open to new opportunities. LinkedIn trains algorithms on huge amounts of data carrying such signals, and those algorithms can then predict who might be the best fit.
    • LinkedIn also helps employers’ jobs reach the right people. By looking at deeper insights into the behavior of applicants on LinkedIn, it starts to predict not just who would apply to their jobs, but who would get hired. The AI system allows employers to select the exact qualifications they are looking for in a candidate. Employers can define specific skill sets, years of experience, and levels of education associated with a particular job title for more precise targeting. Using machine learning, LinkedIn will only serve the job to applicants who are a good fit for the role.
    • LinkedIn also explores how artificial intelligence can go beyond its own products through integrations such as Recruiter System Connect. It is working closely with its partners to deliver the most robust integration with Applicant Tracking Systems (ATS). Companies that turn on Recruiter System Connect power their “Past Applicants” spotlight, which guides them to the best candidates based on the interactions stored in their ATS, and are seeing half or more of all messages get responses.

Source of data input

    • LinkedIn’s approach to AI is neither completely machine-driven nor completely human-driven. It’s a combination of the two. Both elements working together in harmony is the best solution.
      • The AI systems that LinkedIn builds rely on both human input and automated processes. Take the example of profile data. At a fundamental level, almost all member data is generated by members themselves. This can be a problem for an AI system: one company might have a job named “senior software engineer,” while another company might offer almost the same role under the name “lead developer,” and there may be other names as well. Humans can easily understand that these names refer to similar roles, while for computers this can be a challenging task. Consequently, standardizing data in a way that AI systems can understand is an important first step in creating a good search experience, and that standardization involves both human and machine efforts.
      • LinkedIn has taxonomists who create taxonomies of titles and use machine learning models that then suggest ways that titles are related. Understanding these relationships allows LinkedIn to infer further skills for each member beyond what is listed on their profiles.
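Title standardization can be pictured as a taxonomy lookup that maps free-text titles humans recognize as the same role onto one canonical entry. The taxonomy below is invented for the example; the real one is built by taxonomists and extended by machine-learned similarity:

```python
# Invented taxonomy: many free-text titles map to one canonical role id.
TAXONOMY = {
    "senior software engineer": "software_engineer_senior",
    "lead developer": "software_engineer_senior",
    "sr. swe": "software_engineer_senior",
}

def standardize(title):
    return TAXONOMY.get(title.lower().strip(), "unknown")
```

Once two titles resolve to the same canonical id, downstream models can treat members holding either title as candidates for the same jobs.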


    • LinkedIn’s AI systems have had a huge impact for employees who are trying to find a job. With personalized recommendations, LinkedIn saw a 30% increase in job applications.
    • Job applications overall have grown more than 40% year-over-year, based on a variety of AI-driven optimizations that have been made to both sides of the member-recruiter ecosystem.
    • AI-driven improvements to employers’ products have helped increase InMail response rates by 45%, while at the same time cutting down on the notifications that LinkedIn sends to members.
    • AI has improved article recommendations in the feed by 10-20% (based on click-through rate).

Ethical/ Societal Ramifications of AI and Hiring

  • AI’s presence in hiring is intended to streamline the resume review and candidate selection processes. AI in hiring is also designed to avoid gender, racial and other biases.
  • LinkedIn’s new AI tool, which the company briefed Business Insider on, appears designed to filter out the biases in data that can taint AI technology.
  • LinkedIn will track what happens in the hiring process with regards to gender, showing companies reports and insights about how their job postings and InMail are performing on this.
  • LinkedIn will re-rank the top search results in LinkedIn Recruiter to be more representative of the candidate pool for the posted jobs.
  • This current feature is scaled to manage only gender diversity, not racial or other demographics
  • LinkedIn has shown how its data could be used to map a person’s career based on their personality traits and interests; changing the order in which candidates are highlighted could have individual and industry-wide ramifications.
  • The technology industry is consistently highlighted in the media for issues with gender; however, the top three jobs recruited through LinkedIn are all in the technology sector (DevOps engineer, Enterprise account executive, Front end engineer)
  • This product stands in contrast to Amazon’s recruitment tool, which was designed to identify the top candidates for Amazon roles
    • That tool was pulled in October 2018 because of a bias designed into it, which preferred male candidates over female candidates.

Positive Impacts of AI on Recruiting and Hiring

  • AI allows hiring companies to find a larger pool of candidates through sites like LinkedIn by providing access to potential employees that may have otherwise been overlooked
  • AI can help employers process resumes and eliminate some of the processing time that it takes to review resumes and interview candidates
  • Recruiting profiles of typical employees can be created and used to hire and diversify an existing team

Negative Impacts of AI on recruiting and Hiring

  • Racial/Gender bias can eliminate groups of candidates because of factors other than qualification for the job
  • Putting distance between the interviewer and the potential employee can negatively impact the new employee’s view of the company
  • This could impact employee retention and job satisfaction


DeZyre. “How LinkedIn Uses Hadoop to Leverage Big Data Analytics.” 2016.

Boyd, Joshua. “The LinkedIn Algorithm: How It Works.” Brandwatch, 2018.

Cuofano, Gennaro. “LinkedIn Feed Algorithm.”

Deepak, A. “An Introduction to AI at LinkedIn.” 2018.

Josh, J. “How LinkedIn Uses Automation and AI to Power Recruiting Tools.” 2017.

Chan, Rosalie. “LinkedIn Is Using AI to Make Recruiting Diverse Candidates a No-Brainer.”

Rayome, Alison DeNisco. “The 3 Most Recruited Jobs Ever on LinkedIn Are All in Tech.”

De-blackboxing the Past, Present, and Future of Alexa

Annaliese Blank

Zachary Omer

Beiyuan Gu

Amazon is one of the biggest global commerce companies in the world, and one of its most important technologies is Alexa, Amazon’s own patented virtual assistant. “She” was first released in 2014 and has been refined through various versions since. Alexa is designed to be a virtual assistant in your own home that continually listens for its “master” and assists with any needs or inquiries. Alexa requires the internet and relies on verbal speech: the “wake word” wakes the technology, which then records what was asked and stores your speech patterns in order to improve speech recognition and performance.

The purpose of Alexa is to ease the smaller and larger tasks of our lives, whether answering a question (complex or not), texting a message to anyone in your contact list, or looking up the easiest recipe so you can play at-home sous-chef. All of these tasks are done virtually, and an answer is produced in, on average, less than five seconds. The data we give to Alexa is virtually coded, understood, and stored within milliseconds, and from this the most accurate answer is produced, usually without much general awareness of how it was made so quickly. “No task too small” is Alexa’s motto, in our group’s view!

Some quotes we wanted to pull from some research we did:

  • “You control Alexa with your voice. Alexa streams audio to the cloud when you interact with Alexa. Amazon processes and retains your Alexa Interactions, such as your voice inputs, music playlists, and your Alexa to-do and shopping lists, in the cloud to provide, personalize, and improve our services” (Amazon Help, pg. 1).
  • “Voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, and playing audiobooks, in addition to providing weather, traffic and other real-time information. It can also control several smart devices, acting as a home automation hub” (Wikipedia, pg. 1).

To analyze this technology further, we wanted to de-black-box its body parts, use visual aids to see where the voice recognition process occurs, and understand how the actual root of the machinery works. The socio-technical components are as follows:

We will break down the components more in the presentation. Some quick parts are the light ring, the volume ring, and the seven-piece microphone array that detects, records, and listens to your voice when you speak directly to Alexa. This is where she starts to recognize your voice and virtually store the conversation in the cloud. This whole process allows Alexa to form a better way of knowing you and keeping track of your personal usage and data, and it is what sets her apart from other competing virtual assistant technologies.

Some pros and cons are as follows:

Pros:

  • Efficiency
  • Low Maintenance
  • Timeless
  • Non-tedious
  • Quick Help
  • Accuracy
  • Proficiency
  • Cost Effective


Cons:

  • Privacy risks and costs
  • Data is shared and owned
  • Always listening
  • Agreeing to sell your data to Amazon
  • Ethical or unethical?

With the negative aspects of this technology in mind, Alexa herself has received a lot of backlash over the years, centered on this biggest question: DOES ALEXA POSE A THREAT TO YOUR PERSONAL PRIVACY AND DATA THAT IS SHARED, STORED, OWNED, AND USED BY AMAZON WITHOUT OUR PERMISSION OR FULL KNOWLEDGE?

Some current Privacy Control Updates and Thoughts:

  • While Amazon Echo’s microphones are always listening, speech recognition is performed locally by the device until the wake word has been detected, at which point the subsequent voice command is forwarded to Amazon’s servers for processing. In addition, Amazon Echo is equipped with a physical button to mute the microphones.
  • Companion mobile apps and websites enable users to review and delete prior voice interactions with the device should they feel uncomfortable or not want Amazon to keep particular voice recordings on their servers.
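Conceptually, the on-device gating described above works like the sketch below: audio is examined locally, and only speech following the wake word leaves the device. Every name here is hypothetical, and real wake-word detection is an acoustic model running on audio, not a string match on transcripts:

```python
WAKE_WORD = "alexa"   # hypothetical constant; real detection uses an acoustic model

def process_audio(transcribed_chunks, muted=False):
    sent_to_cloud = []
    awake = False
    for chunk in transcribed_chunks:
        if muted:
            continue                     # physical mute button: nothing is processed
        if awake:
            sent_to_cloud.append(chunk)  # only post-wake-word speech leaves the device
            awake = False
        elif WAKE_WORD in chunk.lower():
            awake = True                 # matched locally; nothing sent yet
    return sent_to_cloud
```

The privacy debate below turns on the first branch: the microphones must process everything locally in order to ever notice the wake word at all.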

With this in mind, it becomes increasingly difficult for users to believe this, because the counter-argument would be: DOES ALEXA ALWAYS LISTEN IN ORDER TO DETECT THE WAKE WORD? This is where the threat to privacy and personal data control lies.

Some questions we asked ourselves were: What is actually being recorded? How will this collected information be used, and by whom? How will it be protected? Will it be used for targeted advertising?

When thinking more about this invasion of privacy, we found an example case to expand on this further:

Case 1: In January 2017, in Dallas, Texas, a six-year-old girl asked her family’s new Amazon Echo, “Can you play dollhouse with me and get me a dollhouse?” The device readily complied, ordering a KidKraft Sparkle mansion dollhouse, along with “four pounds of sugar cookies.” The parents quickly realized what had happened and have since added a code for purchases. They also donated the dollhouse to a local children’s hospital.

The story could have stopped there, had it not ended up on a local morning show on San Diego’s CW6 News. At the end of the segment, anchor Jim Patton remarked: “I love the little girl, saying ‘Alexa ordered me a dollhouse.’” According to CW6 News, Echo owners who were watching the broadcast found that the remark triggered orders on their own devices.

Case 2: On May 25, 2018, a woman in Portland, Oregon found out that her family’s home digital assistant, Amazon’s Alexa, had recorded a conversation between her and her husband without their permission or awareness, and had sent the audio recording to a random person in their contacts list.

With all of this said, we then wanted to offer some concluding thoughts on the future of Alexa and where it is headed. According to the SLU Alexa project, Saint Louis University is the first university to bring Amazon Alexa-enabled devices, managed through Alexa for Business, into its residence halls and on-campus apartments. This is a great example of empowering education with better technology for the future. SLU has installed more than 2,300 Echo devices, which also serve as campus helpers that keep students informed with campus information and updates.

When it comes to business, this is the future. Thousands of national and international companies use this type of virtual assistant technology in their own workflows and company structures. Here is a picture below of a business model and how it contributes to the work environment:


In light of all of this, we gained a better perspective on this technology and how she is changing the business and social worlds, one model and revision at a time. She isn’t going anywhere; we look forward to seeing more virtual assistant technology unfold in the future and to seeing how much more these assistants will be able to do and alter in our everyday lives.



Alexa for Business Overview, Retrieved from

Alexa Privacy and Data Handling Overview, Retrieved from

Alexa Terms of Use. Updated 11/27/2018, Retrieved from

Alexa, Echo Devices, and Your Privacy (FAQs), Retrieved from

D’Angelo, M. (2018, December 26). Alexa for Business: What Small to Medium Businesses Need to Know. Business News Daily. Retrieved from

History of Amazon Echo, Retrieved from

Horcher, G. (2018, May 25). Woman says her Amazon device recorded private conversation, sent it out to random contact. KIRO 7 News. Retrieved from

Lau, J., Zimmerman, B., & Schaub, F. (2018). Alexa, are you listening? Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1-31. doi:10.1145/3274371

Molla, R. (2019, January 15). The future of voice assistants like Alexa and Siri isn’t just in homes — it’s in cars. Retrieved from Recode website:

Saint Louis University. (2018, August). SLU Installing Amazon Alexa-Enabled Devices in Every Student Living Space on Campus. Retrieved from SLU Alexa Project web page:



Deep Learning and Deep Insights

Much of the hyperbolic coverage of the implications of AI, together with the broad umbrella under which products and services are labeled “AI,” is rooted in misinformation. De-blackboxing the myths and paving the way for a clearer personal understanding of artificial intelligence and similar concepts has been my primary goal as our class navigated the field. Fittingly, the European Union’s guidelines for developing ethical applications of AI provide a succinct summation of the range of concepts covered over the course of our class. Some of the key points were as follows:

  • Human Agency & Oversight (the very first concept we tackled in this class, that human autonomy allows for design and intervention)
  • Privacy and Governance
  • Transparency
  • Diversity, Non-discrimination and fairness (linking back to the work we did on ML fairness)

This journey of insights led me to narrow down my fields of interest – NLP, ML, and deep learning systems – and to settle on a final project about Spotify and the manner in which its algorithms are changing the way we interact with music. Spotify uses “taste analysis data,” a technology developed by Echo Nest (Titlow, 2016), which groups the music users frequently listen to into clusters (not genres, since human categorization of music is largely subjective). Examples of this are Spotify’s Discover Weekly and Daily Mix playlists, as well as the end-of-year “Wrapped” playlists, which provide each user with insights about their listening habits. Essentially, Discover Weekly is Spotify’s unique version of the recommendation engine – similar to the way Amazon recommends new books (and just about everything else under the sun) online and, more recently, in its brick-and-mortar Amazon Bookstores: “if you like this, try this!”
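The clustering idea behind taste analysis can be sketched with a toy k-means run: songs represented as audio-feature vectors are grouped by similarity rather than by human-defined genre labels. The feature values, the use of k-means, and the two-dimensional (tempo, energy) representation are all my own simplifying assumptions, not Echo Nest’s actual pipeline.

```python
# Toy k-means over hypothetical (tempo_bpm, energy) song features,
# illustrating clustering by similarity rather than by genre label.

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=10):
    # naive deterministic init: evenly spaced points as starting centroids
    centroids = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        # move each centroid to the mean of its cluster
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = tuple(sum(dim) / len(c) for dim in zip(*c))
    return clusters

# six hypothetical songs: three mellow, three upbeat
songs = [(70, 0.2), (72, 0.25), (68, 0.3),
         (128, 0.9), (132, 0.85), (125, 0.95)]
print([len(c) for c in kmeans(songs, k=2)])  # → [3, 3]
```

The algorithm recovers the two groups without ever being told what “mellow” or “upbeat” means, which is the unsupervised flavor of learning the head of this post describes.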






According to Marcus, “in speech recognition, for example, a neural network learns a mapping between a set of speech sounds and a set of labels (such as words or phonemes).” For the purpose of my project, I aim to determine how deep learning systems and neural networks learn to map songs for the “Discover Weekly” playlist, for example, to determine which set of categories a given song belongs to. Marcus also claims that “the logic of deep learning is such that it is likely to work best in highly stable worlds,” which is problematic both for the scope of my project (especially in today’s world of fluid musical genres) and for the larger sociotechnical system we live in.
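Marcus’s point about learning a mapping from inputs to labels can be illustrated with the smallest possible “network”: a single perceptron. The two made-up acoustic features and the binary phoneme labels below are purely illustrative assumptions; real speech models are far deeper and trained on vastly more data.

```python
# A single perceptron learning a mapping from two hypothetical acoustic
# features to a binary phoneme label (0 = "s"-like, 1 = "a"-like).

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule over (features, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred          # 0 when correct; ±1 when wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# (feature1, feature2) -> label; values are made up for illustration
data = [((0.9, 0.1), 0), ((0.8, 0.2), 0),
        ((0.1, 0.9), 1), ((0.2, 0.8), 1)]
w, b = train_perceptron(data)
print(predict(w, b, (0.85, 0.15)), predict(w, b, (0.15, 0.85)))  # → 0 1
```

The mapping only holds inside the tidy, linearly separable world of the training data, which is exactly Marcus’s “highly stable worlds” caveat in miniature.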




Deep Learning and Real World Problems

Throughout this semester, we have discussed how AI and deep learning can provide companies and organizations with data about individuals and society. Neural networks are used to design the algorithms that power natural language processing, recommendation systems, and facial recognition. Since their development, these technologies have largely been used to elicit sales and subscriptions from the general public. But with the heightening threats of climate change looming, how can these technologies be used, even if only on an individual level, to stem contributions to climate change? As we know, deep learning and AI have not been designed to manage complex problems. They can, however, be used to manage queries with yes-or-no answers.

Decision-making technologies are in the pockets of everyone with a smartphone. Users regularly trust these technologies to guide them through new cities, translate languages, and even select which media they should consume. While many may say they do not trust AI or machine learning, their actions demonstrate the opposite. A lack of understanding of how AI and machine learning play a role in daily life and decision making may be keeping users from benefitting from other technologies that could improve life and the world. The balance between technology benefitting users and hurting them is one that is still being worked out and understood.

AI and deep learning are not in themselves malicious technologies, and their benefits, if managed by the right organizations, can outweigh their negative impacts. As Marcus writes, deep learning presumes a relatively stable world; this, we know, is not the case, nor has it ever been. Can accepting the volatility of the world change the way tools like deep learning and AI are designed into technologies? We have seen that AI and deep learning can be used to change individual and even societal behaviors to boost the bottom line of organizations, but can they be designed into technologies with a different goal? Is technology more likely to be integrated into the lives of individuals if it operates within a familiar institution? Can AI and machine learning be used, in part, to create more stable institutions that allow for the optimization of deep learning?

In my final research project, I want to analyze the relationship between chatbots and banking systems. I want to understand how, if at all, chatbots impact modern banking and how scalable chatbots are across industries.
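As a starting point for thinking about that project, the simplest kind of banking chatbot is a keyword-based intent matcher. The intents and keywords below are hypothetical examples of my own; production systems typically replace this lookup with trained NLP models.

```python
# Minimal rule-based intent matching for a hypothetical banking chatbot.
# Intent names and keywords are illustrative, not from any real product.

INTENTS = {
    "balance": ["balance", "how much money"],
    "transfer": ["transfer", "send money"],
    "hours": ["hours", "open", "closed"],
}

def match_intent(message):
    """Return the first intent whose keyword appears in the message."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "fallback"   # hand off to a human agent

print(match_intent("What is my account balance?"))     # → balance
print(match_intent("Can I send money to my sister?"))  # → transfer
```

Even this toy version suggests why scalability across industries is an open question: every new domain means a new, hand-curated intent vocabulary unless the matching is learned from data instead.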

Gary Marcus, “Deep Learning: A Critical Appraisal,” arXiv.org, January 2, 2018