Surveillance Capitalism as a Result of Internet Personalization


It is no secret that companies use technology to track users’ activity online. Often, companies use language such as “personalization” or “optimization” to justify the collection of users’ behavioral data. This verbiage frames technological surveillance as a Faustian bargain in which users cede some of their privacy to obtain an optimized, more personal experience. However, Shoshana Zuboff argues in her paper “Big other: surveillance capitalism and the prospects of an information civilization” that the collection and selling of user data have created a new form of “surveillance capitalism,” in which users’ quotidian behavior is commodified. Zuboff divides the computer-mediated transactions that contribute to a state of surveillance capitalism into four components: data extraction and analysis, monitoring and contracts, personalization and customization, and continuous experiments (Zuboff, 2015). This essay will illuminate how the affordances and designed history of “personalization and customization” on the internet have contributed to the rise of surveillance capitalism.


In contemporary discussions about data privacy and collection at the hands of technology corporations, one ubiquitous example is offered as the parable of creepy, invasive behavioral data collection: ad retargeting. Someone will mention that they were looking at a pair of shoes on one website, and a few days later the same pair of shoes popped up as a banner ad while they were browsing Facebook. To most people, this seems like the apex of invasive behavioral advertising. In actuality, the practice of retargeting only begins to describe the ways in which corporations gather and analyze behavioral data from people on the internet. Most internet users have little knowledge of the actual scope and extent of this collection and analysis, and that ignorance is largely by design. This asymmetry of knowledge about data collection and analysis is one of the basic tenets of Shoshana Zuboff’s definition of “surveillance capitalism,” the “fully institutionalized new logic of accumulation” that drives most tech companies (Zuboff, 2015).

Because the internet’s complexity demands multiple layers of modular abstraction, reinforced by pressure from the consumer economy to productize those modules, it is no wonder that the internet helped enable a system of “surveillance capitalism.” The internet, and its similarly mythologized companion “big data,” is often viewed as a singular being with its own agency (Zuboff, 2015). However, if we adopt a sociotechnical systems view, we can see that the internet and the data it collects form a designed system, the product of various technological affordances and design ideologies. Once we view the internet and the web as a designed system rather than a divine monolith, we can begin to see which actors exert their agency onto different parts of the system, and thus to understand how the modern internet became a vehicle for this type of surveillance capitalism.

At first glance, the term surveillance capitalism seems to invoke dystopian visions of Big Brother watching your every move and forcing you to buy things. Indeed, Zuboff purposefully invokes some of this imagery in the title of her paper, “Big Other” (Zuboff, 2015). One might object that this is histrionic language for the mutually beneficial trade-off of free services for some advertising data. But Gary Marx, a surveillance expert at MIT, reminds us that “While coercion and violence remain significant factors in social organization, softer, more manipulative, engineered, connected, and embedded forms of lower visibility have infiltrated our world. These are presumed to offer greater effectiveness and legitimacy than Orwell’s social control as a boot on the human face” (Marx, 2016). Corporations are not using physical coercion or presence to force behavioral changes, as one might imagine with traditional concepts of surveillance; rather, they are using surveilled data to design systems of advertisements that subtly – and effectively – manipulate people.


How has the design of personalization as a key affordance of the internet created opportunities for surveillance capitalism? To outline the affordances and design of every personalization module that contributes to one’s experience of a personalized internet, and thus enables a practice of surveillance capitalism, would require multiple volumes. Thus, here I aim to de-blackbox the designs of two key personalization features of the internet to illuminate how they contribute to internet surveillance capitalism: internet browser cookies and geolocation data.

HTTP Cookies

As mentioned before, the internet is often conceptualized as a massive entity or space that users can “visit,” “surf,” or “go to.” While a spatial metaphor helps one organize and process the information available on the world wide web, defining the internet as a monolithic structure disguises the communicative nature on which the internet was founded. For personalization to occur, and by extension for one to be surveilled, the internet must be different and unique for each user.

One of the key concepts of the modern web experience is that your browser remembers who you are. Users of the world wide web can make accounts for almost any website so that they can interact with the website and have the state of those interactions saved. When I log in to Facebook, I expect to see my unique, personalized feed of friends and family. To think that the internet could exist in any other way seems almost absurd. This ability of the web to remember who you are, to keep you logged in to a certain account or hold goods in a virtual shopping cart, is largely attributable to cookies. A cookie, also referred to as an HTTP cookie or a browser cookie, is information about a user that a website stores in the user’s browser or hard drive, so that when the user returns to that site later, the site can read the cookie to remember who the user is (“Internet Cookies,” 2013).

Lou Montulli, borrowing from a designed solution in computer science called a “magic cookie,” was the first person to implement cookies in a web browser, at Netscape. The cookie was designed to allow the web browser to remember a user’s preferences (Hill, 2015). Many different features and types of cookies have since developed from that use case, but they all share the common feature of being a “small piece of data that a server sends to the user’s web browser.” Cookies can either be “first-party cookies,” whose domain is the same as the page a user is on and which send information only to the server that set them, or “third-party cookies,” which are mostly used for tracking and advertising (“HTTP cookies”).
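The mechanics are simple enough to sketch in a few lines. The following Python snippet is a minimal illustration using the standard library’s http.cookies module (the cookie name and values here are invented for the example): a server composes a Set-Cookie header to store a preference, then on a later visit parses the value the browser sends back.

```python
from http.cookies import SimpleCookie

# Server side: compose a Set-Cookie header that stores a preference.
cookie = SimpleCookie()
cookie["theme"] = "dark"
cookie["theme"]["path"] = "/"
cookie["theme"]["max-age"] = 3600  # remember the preference for one hour
set_cookie_header = cookie["theme"].OutputString()
print(set_cookie_header)  # theme=dark; Max-Age=3600; Path=/

# Server side, on a later visit: parse the Cookie header the browser
# sent back, recognizing the returning user's stored preference.
returned = SimpleCookie()
returned.load("theme=dark")
print(returned["theme"].value)  # dark
```

Note that nothing in this exchange requires the stored value to be a harmless preference: the same mechanism works just as well when the value is a unique identifier assigned by a tracker.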

Figure 1. A common pop-up explaining that a website is going to use cookies.

Cookies are a basic form of surveillance that most people explicitly consent to in various types of pop-ups, because cookies allow a user to skip repetitive processes like filling out content preferences or location information. However, the affordances of tracking and personalization that cookies bring to web browsers allow third parties to create profiles that surveil and map users across myriad sites (Hill, 2015). Using NoScript, the Electronic Frontier Foundation found that visiting CareerBuilder.com exposed their browser to “10 (!) different tracking domains.” These third-party cookies are embedded in sites across the web, allowing the tracking organizations to build robust profiles of behavioral data about a user’s experience on the web (Eckersley, 2009). Most collectors and aggregators claim that this information is kept anonymous, but research has shown that “leakage” of personally identifiable information via online social networks can link user identities “with user actions both within OSN sites and elsewhere on non-OSN sites” (Krishnamurthy & Wills, 2009).

Thus, the design of the cookie itself does not create the issue of surveillance; rather, it is the network of actors taking advantage of the browser cookie’s technological affordances that creates a scenario in which a user can be identified, profiled, and tracked throughout their journey on the web. As the EFF recognizes, “all of this tracking follows from the design of the Web as an interactive hypertext system, combined with the fact that so many websites are willing to assist advertisers in tracking their visitors” (Eckersley, 2009). Cookies did not create an environment in which surveillance capitalism was inevitable, but the design of cookies as a primary module of the world wide web did contribute to its growth. Because “behavioral tracking companies can put whatever they want in the fine print of their privacy policies, and few of the visitors to CareerBuilder or any other website will ever realize that the trackers are there, let alone read their policies,” third parties can continue to use data from cookies to model and sell the quotidian activity of a web user without the user ever knowing that their identity was surveilled and sold (Eckersley, 2009).
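To make the cross-site linking concrete, here is a deliberately simplified Python sketch (all identifiers and site names are invented) of the logic a tracker embedded on many sites can use: every request carrying the same third-party cookie ID is stitched into a single behavioral profile.

```python
from collections import defaultdict

# One profile per cookie ID: each page embedding the tracker reports the
# visit, and the shared cookie ID links the visits together.
profiles = defaultdict(list)

def tracker_request(cookie_id, page):
    """Simulates the tracker's server receiving a request from a page
    that embeds its third-party cookie."""
    profiles[cookie_id].append(page)

# The same browser (hence the same cookie ID) visits three unrelated sites:
tracker_request("uid-123", "shoe-shop.example/sneakers")
tracker_request("uid-123", "news.example/politics")
tracker_request("uid-123", "jobs.example/listings")

# The tracker now holds a cross-site behavioral profile for uid-123.
print(profiles["uid-123"])
```

No single site in this sketch knows the user’s full browsing history; only the tracker, sitting across all of them, does – which is precisely the asymmetry of knowledge the essay describes.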

Location Data Sharing

Smartphones have become ubiquitous tools that help us navigate the world around us. Need to find the closest matcha shop? Pull up Google Maps and have it lead the way. But, actually, that’s a pretty far walk and the sky looks a bit ominous. Open your weather app to check the forecast in your area. Turns out it should start raining any second, so you decide to call an Uber to pick you up at your exact location. To ask how these apps on your smartphone mediated your journey home seems like a simplistic question: obviously, each app just asked to use the GPS data that your phone collects. Much like allowing cookies in a web browser, one usually has to accept some sort of permission prompt or pop-up to allow an app to communicate with the phone’s GPS.

The designed interface of these notifications can be vague about what a user’s location data is used for. And much as third-party cookie tracking on web browsers led to the development of a marketplace and industry around users’ behavioral data, a third-party marketplace came to exist for the buying and selling of users’ location data.

Apple’s Human Interface Guidelines for iOS recognize that designers need to request permission to access personal information such as location. The guidelines encourage apps to “provide custom text (known as a purpose string or usage description string) for display in the system’s permission request alert, and include an example.” This string is presented in a standard, system-provided alert, so the permission request will be familiar to an iOS user (“Requesting Permission,” 2018).
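In practice, the purpose string lives in the app’s Info.plist file. A minimal, hypothetical entry might look like the following (the key, NSLocationWhenInUseUsageDescription, is Apple’s standard one; the string itself is invented for illustration):

```xml
<key>NSLocationWhenInUseUsageDescription</key>
<string>Your location is used to show weather for your exact area.</string>
```

Nothing about this mechanism requires the string to disclose what else is done with the data once it has been collected.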

Figure 2. A notification asks the user to share location data.

However, Apple’s design guidelines say nothing about a requirement to let users know if their personal data will be sold to third parties. As the New York Times reported, “Of the 17 apps that The Times saw sending precise location data, just three on iOS and one on Android told users in a prompt during the permission process that the information could be used for advertising. Only one app, GasBuddy, which identifies nearby gas stations, indicated that data could also be shared to ‘analyze industry trends’” (Valentino-DeVries, Singer, Keller, & Krolik, 2018). This sharing of location data from app companies to third parties is no cottage industry:

At least 75 companies receive anonymous, precise location data from apps whose users enable location services to get local news and weather or other information, The Times found. Several of those businesses claim to track up to 200 million mobile devices in the United States — about half those in use last year. The database reviewed by The Times — a sample of information gathered in 2017 and held by one company — reveals people’s travels in startling detail, accurate to within a few yards and in some cases updated more than 14,000 times a day (Valentino-DeVries, Singer, Keller, & Krolik, 2018).

Companies that sell and analyze this location data might claim that it is all surrendered consensually, but as the vague guidelines for “requesting permission” that app developers must follow make apparent, users are likely unaware that their movements are being commodified.
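The claim that such data is “anonymous” is also weaker than it sounds. The following toy Python sketch (entirely synthetic coordinates and an invented trace) illustrates one well-known reason: even with no name attached, the most frequent nighttime location in a device’s trace tends to reveal a home address.

```python
from collections import Counter

# Synthetic "anonymized" trace: (hour_of_day, lat, lon) pings from one device.
pings = [
    (2, 38.9076, -77.0723),   # overnight pings cluster at one point...
    (3, 38.9076, -77.0723),
    (23, 38.9076, -77.0723),
    (9, 38.8977, -77.0365),   # ...daytime pings cluster at another.
    (14, 38.8977, -77.0365),
    (11, 38.8977, -77.0365),
]

def likely_home(trace):
    """Most frequent coordinate seen during nighttime hours (10pm-6am)."""
    night = [(lat, lon) for hour, lat, lon in trace if hour >= 22 or hour < 6]
    return Counter(night).most_common(1)[0][0]

print(likely_home(pings))  # -> (38.9076, -77.0723)
```

Once a device’s likely home is known, public records can often supply the name, which is why traces “accurate to within a few yards” are identifying regardless of whether an identifier was stripped.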

Apps that use location data invoke the history of increasing personalization as validation. People do not object to application-based surveillance because they believe the deal is designed to benefit them. The designed experience of enabling an application to use personal information, including “current location, calendar, contact information, reminders, and photos,” is meant to highlight the benefits of personalization while neglecting to outline the ways in which the company behind an app may treat that personal data as a commodity to profit from.


Both cookies and the ability to access location data allow a user to have a more personalized, unique experience with the internet. I am not trying to argue that these designed features of the world wide web and smartphones inherently create a form of malevolent surveillance. With both browser cookies and location data sharing, users of the world wide web and the appified internet generally have to opt-in to be surveilled. However, the design of these systems of surveillance obscures the extent to which the user is being surveilled. Most users are told that a website or app will collect behavioral or location data to “optimize” or “personalize” the user’s experience. This asymmetry of knowledge between the user and the surveilling company creates a state in which users can continue to be surveilled.



Eckersley, P. (2009, September 21). How Online Tracking Companies Know Most of What You Do Online (and What Social Networks Are Doing to Help Them). Retrieved December 11, 2018.

Hill, S. (2015, March 29). The History of Cookies and Their Effect on Privacy. Retrieved December 11, 2018.

HTTP cookies. (n.d.). Retrieved December 12, 2018.

Internet Cookies. (2013, July 29). Retrieved December 11, 2018.

Krishnamurthy, B., & Wills, C. E. (2009, August 17). On the Leakage of Personally Identifiable Information Via Online Social Networks, 6.

Marx, G. (2016). Windows into the soul: Surveillance and society in an age of high technology. Chicago: The University of Chicago Press.

Requesting Permission – App Architecture – iOS – Human Interface Guidelines – Apple Developer. (n.d.). Retrieved December 12, 2018.

Valentino-DeVries, J., Singer, N., Keller, M. H., & Krolik, A. (2018, December 10). Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret. The New York Times.

Zuboff, S. (2015). Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89.