AI and Convenience

I live in a studio apartment, but in my small space I have three very helpful roommates: Siri, Alexa, and Google. Each morning, Siri wakes me up with an alarm (and then two more, if I’m being honest) and plays my music on the HomePod. Alexa runs me through the weather and the day’s news. Google tells me about my commute – he’s the most reticent of the group, overshadowed by his showier friends, but still very helpful when it comes to my passion for cooking. According to Alpaydin, the term for my use of all these devices is ubiquitous computing – “using a lot of computers for all sorts of purposes all the time without explicitly calling them computers” (Alpaydin, 2016). Each serves a different function, and though they often overlap, all ultimately add convenience to our modern, interconnected society.

My tech-reliant morning routine is a microcosm of Alpaydin’s hypothesis that we make room in our lives for the convenience of technology driven by artificial intelligence simply because “…we want to have products and services specialized for us. We want our needs to be understood and our interests to be predicted” (Alpaydin, 2016). Am I aware that all of this data is being stored – that there are not one but three devices in my home listening to my every word, lying in wait for their “wake word” (“Hey Siri,” “Alexa…,” “Hey Google”)? Yes. But despite concerns that my privacy may be violated, or that I am too dependent on these technologies, it has become shockingly easy for big corporations to be let into our homes to collect data, because we as a society now prioritize convenience above all.

These ethical issues concerning privacy and surveillance, in tandem with the growth of AI and data mining practices, are cropping up at a time when machine learning is already having “a measurable impact on most of us” (Naughton, 2019). At present, we already see the advent of “programs that learn to recognize people from their faces… with promises to do more in the future” (Alpaydin, 2016). Alpaydin elaborates on this by differentiating between writing programs and collecting data. A potential machine learning algorithm in action is evident in the recent “Ten Year Challenge” that is rampant on social media, primarily on Facebook. The challenge is a seemingly harmless way to do a before and after, a #TransformationTuesday in viral meme form. However, the data this trend leaves in its wake could feed machine learning within the bounds of a specific data set – in this case, photos taken ten years apart. “Supporters of facial recognition technologies said they can be indispensable for catching criminals… But critics warned that they can enable mass surveillance or have unintended effects that we can’t yet fully fathom” (Fortin, 2019). This ties back to Naughton’s point that “soft” media coverage of artificial intelligence drives a narrative of AI as a solution to all our problems, without focusing on its potential harms. In Naughton’s words, this narrative is “explicitly designed to make sure that societies don’t twig this until it’s too late to do anything about it” – much like where most of us find ourselves at present, highly dependent on technology.

Ultimately, an interesting facet of these introductory readings is reflected in a statement from the essay “Do Artifacts Have Politics?” (Winner, 1986): “in our times, people are often willing to make drastic changes in the way they live to accommodate technological innovation, while at the same time resisting similar kinds of changes justified on political grounds.” Though the article is dated, the author’s foresight and message remain salient today. In the context of our class, would we give up the convenience that artificial intelligence brings to our modern lives if, for example, one or more of these technologies were not made ethically? Perhaps not, as we are over-reliant on technology. But how much of our privacy would we give up for the sake of convenience?


Alpaydin, E. (2016). Machine learning: the new AI. Cambridge, MA: MIT Press.

Fortin, J. (2019). Are ‘10-Year Challenge’ Photos a Boon to Facebook’s Facial Recognition Technology? Retrieved from:

Naughton, J. (2019). ‘Don’t Believe the Hype: The Media Are Unwittingly Selling Us an AI Fantasy.’ The Guardian, January 13, 2019.

Winner, L. (1986). ‘Do Artifacts Have Politics?’ In The Whale and the Reactor. Chicago, IL: University of Chicago Press.