Author Archives: Kevin Ackermann

Surveillance Capitalism as a Result of Internet Personalization



It is no secret that companies use technology to track users’ activity online. Often, companies use language such as “personalization” or “optimization” to justify the collection of users’ behavioral data. This verbiage frames technological surveillance as a Faustian bargain in which users cede some of their privacy to obtain an optimized, more personal experience. However, Shoshana Zuboff argues in her paper “Big other: surveillance capitalism and the prospects of an information civilization” that the collection and selling of user data have created a new form of “surveillance capitalism,” in which users’ quotidian behavior is commodified. Zuboff divides surveillance capitalism into four components of computer-mediated transactions: data extraction and analysis, monitoring and contracts, personalization and customization, and continuous experiments (Zuboff, 2015). This essay will illuminate how the affordances and designed history of “personalization and customization” on the internet have contributed to the rise of surveillance capitalism.


When one engages in contemporary discussion about data privacy and collection at the hands of technology corporations, a ubiquitous example is often given as the one true parable of creepy, invasive behavioral data collection: ad retargeting. Someone will mention that they were looking at a pair of shoes on one website, and then a few days later, they saw the same pair of shoes pop up as a banner ad while they were browsing Facebook. To most people, this seems like the apex of invasive behavioral advertising. In actuality, the practice of retargeting only begins to describe the ways in which corporations gather and analyze behavioral data from people on the internet. Most internet users have little knowledge of the actual scope and extent to which corporations collect and analyze behavioral data about them, and this lack of knowledge is largely the product of deliberate obscurement by design. This asymmetry of knowledge about data collection and analysis is one of the basic tenets of what Zuboff defines as “surveillance capitalism”: the “fully institutionalized new logic of accumulation” that drives most tech companies (Zuboff, 2015).

Because the internet’s complexity demands multiple layers of modular abstraction, reinforced by pressure from the consumer economy to productize these modules, it’s no wonder that the internet helped enable a system of “surveillance capitalism.” The internet, and its associated and similarly mythologized “big data,” is often viewed as a singular being with its own agency (Zuboff, 2015). However, if we adopt a sociotechnical systems view, we can see that the internet and the data it collects are a designed system that exists as a product of various technological affordances and design ideologies. Once we view the internet and the web as a designed system, rather than a divine monolith, we can begin to see which actors are exerting their agency onto different parts of the system. Using this perspective, we can begin to understand how the modern internet became a vehicle for this type of surveillance capitalism.

At first glance, the term surveillance capitalism seems to invoke dystopian visions of Big Brother watching your every move and forcing you to buy things. In fact, Zuboff purposefully invokes some of this imagery in the title of her paper, “Big Other” (Zuboff, 2015). One might object that this is histrionic language to describe the mutually beneficial trade-off of free services for some advertising data. But Gary Marx, a surveillance expert at MIT, reminds us that “While coercion and violence remain significant factors in social organization, softer, more manipulative, engineered, connected, and embedded forms of lower visibility have infiltrated our world. These are presumed to offer greater effectiveness and legitimacy than Orwell’s social control as a boot on the human face” (Marx, 2016). Corporations are not using physical coercion or presence to force behavioral changes, as one might imagine with traditional concepts of surveillance; rather, they are using surveilled data to design systems of advertisements that subtly – and effectively – manipulate people.

As noted above, Zuboff divides surveillance capitalism into four contributing components of computer-mediated transactions: data extraction and analysis, monitoring and contracts, personalization and customization, and continuous experiments (Zuboff, 2015). The remainder of this essay focuses on how the affordances and designed history of “personalization and customization” on the internet have contributed to the rise of surveillance capitalism.

How has the design of personalization as a key affordance of the internet created opportunities for surveillance capitalism to exist? To outline the affordances and design of every personalization module that contributes to one’s experience of a personalized internet, and thus enables a practice of surveillance capitalism, would require multiple volumes of books. Thus, here I aim to de-blackbox the designs of two key personalization features of the internet to illuminate how they contribute to internet surveillance capitalism: internet browser cookies and geolocation data.

HTTP Cookies

As mentioned before, the internet is often conceptualized as a massive entity or space that users can “visit,” “surf,” or “go to.” While a spatial metaphor helps one organize and process the information available on the world wide web, defining the internet as a monolithic structure disguises the communicative nature on which the internet was founded. For personalization to occur, and by extension for one to be surveilled, the internet obviously must be different and unique for each user.

One of the key concepts of the modern web experience is that your browser remembers who you are. Users of the world wide web are able to make accounts for almost any website so that they can interact with the website and have the state of their interactions saved. When I log in to Facebook, I expect to see my unique, personalized feed of friends and family. To think that the internet could exist in any other way seems almost absurd. This ability of the web to remember who you are, to stay logged into a certain account or hold goods in a virtual shopping cart, is largely attributable to cookies. A cookie, also referred to as an HTTP cookie or a browser cookie, is information about a user that a website stores in the user’s browser or on their hard drive, so that when the user returns to that site later, the site can read the information in the cookie to remember who the user is (“Internet Cookies,” 2013).

Lou Montulli, borrowing from a designed solution in computer science called a “magic cookie,” was the first person to implement cookies in a web browser, at Netscape. The cookie was designed to allow the web browser to remember a user’s preferences (Hill, 2015). Many different features and types of cookies have since developed from that use case, but they all share the common trait of being a “small piece of data that a server sends to the user’s web browser.” A cookie can either be a “first-party cookie,” meaning the cookie’s domain is the same as that of the page a user is on and it only sends information to the server that set it, or a “third-party cookie,” which is mostly used for tracking and advertising (“HTTP cookies”).
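To make the mechanism concrete, here is a minimal sketch, using Python’s standard `http.cookies` module, of how a server-set cookie is later sent back and read so that a site can “remember” a user. The session ID and domain names are invented for illustration:

```python
from http.cookies import SimpleCookie

# A hypothetical server response sets a session cookie for the user's browser.
response_cookie = SimpleCookie()
response_cookie["session_id"] = "abc123"
response_cookie["session_id"]["domain"] = "example.com"  # first-party: same domain as the page
response_cookie["session_id"]["path"] = "/"

# The Set-Cookie header the browser would receive and store.
header = response_cookie.output(header="Set-Cookie:")

# On the user's next visit, the browser sends the cookie back in its request
# headers, and the server parses it to identify the returning user.
returned = SimpleCookie()
returned.load("session_id=abc123")
print(returned["session_id"].value)  # the site "remembers" the user
```

The same round trip underlies everything from staying logged in to holding items in a shopping cart: the browser simply replays the identifier the server handed it.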

Figure 1. A common pop-up explaining that a website is going to use cookies.

Cookies are a basic form of surveillance that most people explicitly consent to in various types of pop-ups, because cookies allow a user to skip repetitive processes like filling out content preferences or location information. However, the affordances of tracking and personalization that cookies bring to web browsers can allow third parties to create profiles that surveil and map users across myriad sites (Hill, 2015). Using NoScript, the Electronic Frontier Foundation found that visiting CareerBuilder.com exposed their browser to “10 (!) different tracking domains.” These third-party cookies are embedded in sites across the web, allowing the tracking organizations to build robust profiles of behavioral data about a user’s experience on the web (Eckersley, 2009). Most collectors and aggregators claim that this information is kept anonymous, but research has shown that “leakage” of personally identifiable information via online social networks can link user identities “with user actions both within OSN sites and elsewhere on non-OSN sites” (Krishnamurthy & Wills, 2009).

Thus, the design of the cookie itself does not create the issue of surveillance; rather, it is the network of actors taking advantage of the browser cookie’s technological affordances that creates a scenario in which a user can be identified, profiled, and tracked throughout their journey on the web. As the EFF recognizes, “all of this tracking follows from the design of the Web as an interactive hypertext system, combined with the fact that so many websites are willing to assist advertisers in tracking their visitors” (Eckersley, 2009). Cookies did not create an environment in which surveillance capitalism was inevitable, but the design of cookies as a primary module of the world wide web did contribute to its growth. Because “behavioral tracking companies can put whatever they want in the fine print of their privacy policies, and few of the visitors to CareerBuilder or any other website will ever realize that the trackers are there, let alone read their policies,” third parties can continue to use data from cookies to model and sell the quotidian activity of a web user without the user ever knowing that their identity was surveilled and sold (Eckersley, 2009).
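A toy simulation (not any real tracker’s code) can illustrate the cross-site linkage described above: a single third-party cookie ID is sent back from every site that embeds the tracker’s content, letting one organization join browsing activity across unrelated sites into a single profile. All site names and the cookie ID below are invented:

```python
from collections import defaultdict

# Each tracker cookie ID maps to the list of first-party sites where it was seen.
tracker_profiles = defaultdict(list)

def embed_tracker(cookie_id, first_party_site):
    """The browser sends the tracker's third-party cookie whenever any page
    embeds that tracker's ad or script, regardless of which site the page is on."""
    tracker_profiles[cookie_id].append(first_party_site)

# The same user (cookie ID "u-42") visits three unrelated sites,
# each of which happens to embed the same tracking domain.
for site in ["news.example", "shoes.example", "jobs.example"]:
    embed_tracker("u-42", site)

print(tracker_profiles["u-42"])
# The tracker now holds a cross-site behavioral profile keyed to one cookie ID.
```

The profile is “anonymous” only in the sense that it is keyed to a cookie ID rather than a name; as the Krishnamurthy and Wills research cited above shows, such IDs can often be linked back to real identities.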

Location Data Sharing

Smartphones have become ubiquitous tools that help us navigate the world around us. Need to find the closest matcha store to you? Pull up Google Maps and have it lead the way. But, actually, that’s a pretty far walk and the sky looks a bit ominous. Open your weather app to check the weather in your area. Turns out it should start raining any second, so you decide to call an Uber to pick you up at your exact location. To ask how these apps on your smartphone helped mediate your journey home seems like a simplistic question. Obviously, the app just asked to use the GPS data that your phone collects. Much like allowing cookies on web browsers, one usually has to accept some sort of push notification or pop-up to allow an app to communicate with the phone’s GPS.

The designed interface of these notifications can be vague about what a user’s location data is used for. Also, much like third-party cookie tracking on web browsers led to the development of a marketplace and industry around users’ behavioral data, a third-party marketplace also came to exist from the buying and selling of users’ location data.

Apple’s Human Interface Guidelines for iOS recognize that designers need to request permission to access personal information such as location. Within the iOS design guidelines, apps are encouraged to “provide custom text (known as a purpose string or usage description string) for display in the system’s permission request alert, and include an example.” This string is presented in a standard, system-provided iOS alert, so the permission request will be familiar to an iOS user (“Requesting Permission,” 2018).

Figure 2. A notification asks the user to share location data

However, within Apple’s design guidelines, nothing is mentioned about a requirement to let a user know if their personal data will then be sold to third parties. As the New York Times reported, “Of the 17 apps that The Times saw sending precise location data, just three on iOS and one on Android told users in a prompt during the permission process that the information could be used for advertising. Only one app, GasBuddy, which identifies nearby gas stations, indicated that data could also be shared to ‘analyze industry trends’” (Valentino-DeVries, Singer, Keller, & Krolik, 2018). This sharing of location data from app companies to third parties is no cottage industry:

At least 75 companies receive anonymous, precise location data from apps whose users enable location services to get local news and weather or other information, The Times found. Several of those businesses claim to track up to 200 million mobile devices in the United States — about half those in use last year. The database reviewed by The Times — a sample of information gathered in 2017 and held by one company — reveals people’s travels in startling detail, accurate to within a few yards and in some cases updated more than 14,000 times a day (Valentino-DeVries, Singer, Keller, & Krolik, 2018).

Companies that sell and analyze this location data might claim that the data is all surrendered consensually, but as is apparent with the vague guidelines for “requesting permission” that app developers must use to access the iPhone’s location measurements, it is likely that users are unaware that their movements are being commodified.

Apps that use location data invoke the history of increasing personalization as validation. People do not object to application-based surveillance because they believe that the deal is designed to benefit them. The designed experience of enabling an application to utilize personal information, including “current location, calendar, contact information, reminders, and photos,” is meant to highlight the benefits of personalization while neglecting to specifically outline the ways in which the company behind an app may use that personal data as a commodity to profit from.


Both cookies and the ability to access location data allow a user to have a more personalized, unique experience with the internet. I am not trying to argue that these designed features of the world wide web and smartphones inherently create a form of malevolent surveillance. With both browser cookies and location data sharing, users of the world wide web and the appified internet generally have to opt-in to be surveilled. However, the design of these systems of surveillance obscures the extent to which the user is being surveilled. Most users are told that a website or app will collect behavioral or location data to “optimize” or “personalize” the user’s experience. This asymmetry of knowledge between the user and the surveilling company creates a state in which users can continue to be surveilled.



Eckersley, P. (2009, September 21). How Online Tracking Companies Know Most of What You Do Online (and What Social Networks Are Doing to Help Them). Retrieved December 11, 2018, from

Hill, S. (2015, March 29). The History of Cookies and Their Effect on Privacy. Retrieved December 11, 2018, from

HTTP cookies. (n.d.). Retrieved December 12, 2018, from

Internet Cookies. (2013, July 29). Retrieved December 11, 2018, from

Valentino-DeVries, J., Singer, N., Keller, M. H., & Krolik, A. (2018, December 10). Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret. The New York Times. Retrieved from

Krishnamurthy, B., & Wills, C. E. (2009, August 17). On the Leakage of Personally Identifiable Information Via Online Social Networks, 6.

Marx, G. (2016). Windows into the soul: Surveillance and society in an age of high technology. Chicago: The University of Chicago Press.

Requesting Permission – App Architecture – iOS – Human Interface Guidelines – Apple Developer. (n.d.). Retrieved December 12, 2018, from

Zuboff, S. (2015). Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89.


Spotify, Examined


By Huazhi Qin and Kevin Ackermann

Sociotechnical Perspective of Spotify

Spotify as a social network

Generally speaking, Spotify has already built up the most important part of a social network: people. It has 140 million users, more than 50 million of whom pay for its premium service. In addition, Spotify connects with Facebook and Twitter, allowing users to integrate their accounts with their existing Facebook and Twitter accounts. They are then able to access their friends’ or followees’ favorite music and playlists. They can like or listen to the song a friend was just listening to, making it feel as if they are with their friends. In other words, users’ existing ties on other social media platforms can be transferred to, and deepened within, Spotify.

Functions such as “following” and “sharing” create connections among Spotify’s users and make it more than just a music player.

Spotify as a new marketing channel

Spotify owns a large user base and database, which is the solid foundation for its ad experience. According to Spotify, its audience is 100% logged in, and multiplatform users spend more than two hours a day on the service. Spotify also collects billions of data points every day. The data it collects reflects the real people behind the devices, revealing users’ preferences, behaviors, and mindsets. As Spotify emphasizes, “the more the user stream, the more they learn.” Its streaming intelligence provides a new marketing channel, helping its customers locate and reach the right audiences in the right context.

Spotify as a representation of the new music production and consumption activities

Spotify is one of the most popular music streaming platforms. It serves as a great example of the changes that the spread of streaming technology has brought to music production and consumption. The introduction of streaming technology addresses the conflicts between users’ “objectives,” or needs, and other elements of human activity, including tools, rules, community, and division of labor (Adamides). Streaming technology makes music more “playful, short-term, social, visual and mobile.”

How Does Spotify’s Recommendation System Create the Discover Weekly Playlist

Spotify’s Discover Weekly playlist, a weekly, personalized mix that is meant to help you “enjoy new discoveries and deep cuts chosen just for you,” works so well that it’s hard to believe each Spotify account doesn’t actually come with its own personal DJ. The song recommendations in the Discover Weekly playlist feel like they come from a close friend who knows you deeply, but in reality, this close, personal friend was created from a combination of several pre-existing filtering and analysis methods (Cowan).

The Discover Weekly playlist is fresh and new every Monday.

While recommendations on Spotify feel entirely unique, there is a long history of computationally mediated recommendation systems from which Spotify borrowed and combined ideas to make its Discover Weekly playlist. Borrowing a system of recommendation from Pinterest, Spotify originally tailored the design of its recommendations to mimic Pinterest’s, with cards and panels that the user could interact with (Cowan).

This was the original design for Spotify’s recommendation system. The design drew inspiration from similar content recommendation systems, like Pinterest.

So, how was this technology designed to replicate a feeling of personal knowledge about a user to provide such intimate recommendations? Spotify combined three filtering/analysis technologies to create the Discover Weekly playlist: Collaborative filtering, natural language processing, and audio models (Ciocca).

Collaborative Filtering

Sharing music isn’t new or unique to technological mediums. Have you ever thought about how you determine whether to take someone’s suggestions to heart? You probably have an idea of whether someone has “good” or “bad” taste, and you decide whether or not to listen to their suggestions based on how worthy you judge that taste to be. You probably use shared interests and preferences to determine this worthiness.

In this scenario, Person 2 would leave with the suggestion of “Song A,” and Person 1 would leave with the suggestion of “Song E.”

Spotify realized they did not have to redesign this core action of recommendation in which people with similar tastes will like other things that the other person likes. Spotify just had to design a technologically-mediated version of this process that fit within a music streaming service. Netflix already popularized this process of collaborative filtering with its rating system, and Spotify took this system of modeling user behavior and comparing it with other users’ data to suggest new content and made the ratings into implicit data within the streaming service, such as play count. Spotify then creates a massive data matrix, in which every user of the platform is a column and every song is a row. An algorithm compares this absurdly massive matrix of data to find similar listening patterns among users. Collaborative filtering is now often viewed as the “starting point” for making a suggestion system (Ciocca).

Imagine a chart like this, but with literally millions more columns and rows.
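The collaborative filtering step described above can be sketched in a few lines of Python. This is a toy illustration, not Spotify’s actual algorithm: play counts stand in for implicit ratings, cosine similarity finds the most similar listener, and that listener’s unheard songs become the suggestions (all names and counts are invented):

```python
import math

# Toy user-song play-count matrix (rows: users, columns: songs A-E).
# Spotify's real matrix has millions of rows and columns; this is a sketch.
plays = {
    "person1": [5, 3, 0, 1, 0],
    "person2": [4, 3, 0, 1, 5],
    "person3": [0, 0, 4, 5, 0],
}
songs = ["A", "B", "C", "D", "E"]

def cosine(u, v):
    """Cosine similarity between two play-count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user):
    # Find the most similar other listener, then suggest songs they play
    # that the target user has not listened to yet.
    others = [(cosine(plays[user], plays[o]), o) for o in plays if o != user]
    _, nearest = max(others)
    return [s for s, mine, theirs in zip(songs, plays[user], plays[nearest])
            if mine == 0 and theirs > 0]

print(recommend("person1"))  # person2 is the most similar listener, so "E" is suggested
```

Production systems replace this brute-force comparison with matrix factorization, which compresses the giant user-song matrix into small vectors that can be compared cheaply, but the underlying intuition is the same.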

Natural Language Processing

Another layer of Spotify’s recommendation system uses natural language processing to determine commonalities between songs and artists. Spotify has crawlers that search out what people and organizations are writing about certain songs on the internet. Once the sentiment of the song is analyzed and turned into a mathematical representation, the data is compared to other songs to find similarities among a user’s listening patterns (“Ever Wonder How Spotify Discover Weekly Works? Data Science.”)
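A crude stand-in for this NLP step can be sketched with simple word-count vectors: text written about each song is turned into a mathematical representation and compared. The song descriptions below are invented, and Spotify’s real pipeline is far more sophisticated:

```python
import math
from collections import Counter

# Invented snippets of text "crawled" from writing about each song.
descriptions = {
    "song_x": "dreamy lo-fi bedroom pop with hazy vocals",
    "song_y": "hazy dreamy pop with soft vocals",
    "song_z": "aggressive fast metal with screamed vocals",
}

def vectorize(text):
    """Turn a text snippet into a bag-of-words count vector."""
    return Counter(text.split())

def similarity(a, b):
    """Cosine similarity between the word-count vectors of two texts."""
    va, vb = vectorize(a), vectorize(b)
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb)

# Songs described in similar language score closer to 1.0, so song_x is
# judged closer to song_y than to song_z.
print(similarity(descriptions["song_x"], descriptions["song_y"]) >
      similarity(descriptions["song_x"], descriptions["song_z"]))
```

The point of the sketch is the pipeline, not the math: scraped language about a song becomes a vector, and vector comparison stands in for the human judgment that two songs are “described the same way.”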


Audio Models and Convolutional Neural Networks

The above methods are great for artists and songs that already have a large base of listeners, but how does the Discover Weekly playlist serve undiscovered hits to the user so regularly? Using convolutional neural networks, Spotify analyzes audio data from songs to determine certain characteristics such as tone, tempo, or mood. Then, using this extracted data, Spotify can compare and discover new, fresh songs and artists that sound similar to the artists and songs a user already loves (Ciocca).
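The final comparison step can be sketched as follows. This assumes a neural network has already reduced each track’s audio to a small feature vector; the feature values and song names below are invented for illustration:

```python
import math

# Hypothetical per-track feature vectors a network might extract:
# [tempo in bpm, energy 0-1, valence ("mood") 0-1]. Values are invented.
features = {
    "known_favorite": [120.0, 0.80, 0.60],
    "new_song_1":     [118.0, 0.75, 0.65],
    "new_song_2":     [70.0,  0.20, 0.10],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The unheard song whose features lie closest to a known favorite is the
# one surfaced to the user, even if nobody else has listened to it yet.
candidates = ["new_song_1", "new_song_2"]
best = min(candidates, key=lambda s: distance(features[s], features["known_favorite"]))
print(best)  # new_song_1
```

Because the comparison needs only the audio itself, this path lets brand-new tracks with zero listening history enter the playlist, which collaborative filtering alone could never do.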


Works Cited

Adamides, Emmanuel. (2018). Activity-based analysis of socio-technical systems innovations.

“Audiences: You Are What You Stream.” Spotify For Brands,

Ciocca, Sophia. “How Does Spotify Know You So Well? – Member Feature Stories – Medium.” Medium, 21 June 2018,

Cowan, Matt. “How Spotify Chooses What Makes It onto Your Discover Weekly Playlist.” WIRED, WIRED UK, 27 Jan. 2017,

“Ever Wonder How Spotify Discover Weekly Works? Data Science.” Galvanize Blog, 22 Aug. 2016,

“Spotify: The New Social Network.” Campaign Creators,


The Evolution of Fear and Power in the “Appified” Internet


The internet, which was once a vast expanse of possibility, home-brewed programs and web pages, is now akin to a digital suburban network. Just as citizens fled cities, which were crowded with opportunities for interaction and expression, for the systemically spread out and commercialized relative safety of suburbs, so too have users fled from the wildness of the Apple II to the sterility of the iPhone. The average person’s interaction with the internet is now through productized devices and applications. As the internet has transformed from a “generative” network to a more “applianced” network, the fears and threat of bad faith actors leveraging the power of the internet have changed significantly (Zittrain, pg. 8).

Now, this isn’t to say that the “appification” of the internet is entirely horrible. Mobile computing, even if through proprietary, appified interfaces, has enabled more people to reliably and safely engage in computationally mediated work and socialization. Without the admittedly sterile, commercialized modern internet, the internet might not have penetrated as deeply into our society. The plausible network power accessible to those online becomes greater and greater as more users partake in an internet-mediated existence. However, as I mentioned earlier, the fears and threats of an appified internet are just as present, if not more potentially devastating, than those of the wild-west version of the early generative internet.

When I was growing up, in the early ages of the internet, the monster that stood as a manifestation of the fears of being online was a shady hacker in his mom’s basement who was out to steal my identity. I could code my own profiles and web pages as a key component of my online experience, even on commercial sites such as Neopets and MySpace, but this freedom came at the cost of having to remain vigilant against this ever-present hacker, just out of sight, who wanted to steal my information. As a result of this consensus of fear of viruses and hackers, internet and tech corporations began creating interfaces to the internet that were more secure, at the cost of less generative freedom and increased surveillance (Zittrain, pg. 4-5). More users began to interact with the internet as it became safer, but in reality, they were just transferring potential power to corporations and governments. While users feel safer from rogue hackers and identity fraud, they face a greater threat from surveillance and subtle capitalistic manipulation.

People, or at least people engaged in meme-culture, are aware of this power trade-off. The omnipotent surveillance that the appified internet affords has led to the popular emergence of the “your government agent” meme, a meme in which users lovingly refer to the imagined government agent assigned to surveil them online as an ever-present, engaged companion.



Zittrain, J. (2008). The future of the Internet and how to stop it. New Haven, [Conn.]: Yale University Press.

Headspace as a Sociotechnical System


A sociotechnical system is any system that “considers requirements spanning hardware, software, personal, and community aspects” to inform its design decisions (“What Are Socio-Technical Systems?”n.d.). Within this framework, almost any contemporary smartphone application could be considered a sociotechnical system. What’s more, these apps, which are functionally sociotechnical systems, are nestled in systems of sociotechnical systems. Using the meditation app Headspace as an example, I’ll reveal how a software application is a sociotechnical system.

Internet Infrastructure (including ISPs and International Standards and Protocols)

As Dr. Irvine’s essay made apparent, the internet is not necessarily a fixed, monolithic entity as much as it is a moving system of interconnected engineering requirements and social intersections: a sociotechnical system.

Because Headspace delivers its content to a user through streaming and internet enabled downloads, Headspace must cooperate and work within the designed sociotechnical system of the internet. Thus, as the relational nature of a network implies, as Headspace is part of the internet, the internet is a part of Headspace.


While Headspace gives some content to its users for no charge, a vast majority of Headspace’s content library is behind a paywall. Because Headspace is a company that collects money and sensitive information from its customers, Headspace must cooperate with rules, regulations and norms that go with the collection of money. More basically, Headspace must be part of a society in which money is agreed to have value in the first place.

Multimedia Companies for Animation and Recording

While most of Headspace’s content is audio, some meditations – and all of its advertising and branding – have a distinctly designed animated aesthetic. Headspace must work with these digital animators to create this content that corresponds with its meditations.

Organized Religion (Buddhism)

A large draw to Headspace is the charismatic expertise of its founder, Andy Puddicombe. Andy is the voice who leads you through most of the meditations on Headspace. After studying with Buddhist monks around the globe, Andy himself was ordained as a monk. The system of organized Buddhism allows Andy to draw perceived authority and authenticity from his experience. Consequently, Headspace becomes a part of, and thus gains authenticity from, that networked connection (“Guided Meditation for Everybody – About Headspace,” n.d.).

Software for User Experience

Without a designed user experience, users wouldn’t be able to interact with the meditations on Headspace. Designers must draw on past affordances and design schema, as well as the technical requirements and limitations of the current operating systems, to create a user experience within the app that allows a user to intuitively understand how to navigate the app.

Computer and Smartphone Industry

The main channel to access Headspace is through smartphone apps. To exist, Headspace is dependent on the design, development and production of smartphones. When new phones are designed, Headspace must update its design specifications so that it matches and runs on the new devices.


Works Cited

“Guided Meditation for Everybody – About Headspace.” Headspace, The Orange Dot,


“What Are Socio-Technical Systems?” The Interaction Design Foundation, The Interaction Design Foundation,


iPads and Empowered Artists


While drawing obvious inspiration from the past – notably borrowing a lot of design philosophy from Alan Kay’s Dynabook – the iPad makes me feel like I’m in the future. Nothing makes me feel like a futuristic doctor on an interstellar journey more than scribbling some notes on my iPad with my trusty Apple Pencil. The iPad is a device that clearly illustrates Kay and Goldberg’s concept of a “metamedium,” in which computation allows the simulation of myriad preexisting media and the invention of many new forms of media unique to computation. And sure, the iPad allows the creation of “a number of new types of media that are not simulations of prior physical media,” but the iPad shines more to me when it simulates and augments traditional media with new technological affordances (Manovich, pg. 329).

The iPad has swiftly become a truly all-in-one device for me. I view photos on Instagram; I watch videos on Netflix; I write notes in GoodNotes. Multiple different forms of traditional media are accessible to me with nothing but a switch of an application. The beauty of the iPad as a “metamedium,” beyond housing so many simulations of traditional mediums in one device, is what I can do with these mediums thanks to the affordances of software as a medium. Notably, these new affordances to traditional media are strikingly apparent within notes apps. With the touch of a few buttons and scales, I have access to multiple different pen tips and colors. The software endeavors to replicate certain extant forms of media like pencils and highlighters, while allowing the strokes of these tools to be instantly deleted, undone, or transformed digitally. Additionally, the iPad allows me to have hundreds of notes and notebooks of pencil-taken notes all housed within a device that’s smaller than a single notebook.

One idea that both empowered and unnerved me within Manovich’s Software Takes Command was the fact that a metamedium has only the constraints that a developer programs into a given media-creation application. Such an idea is awesome because it paints a picture of endless, ever-evolving forms of media thanks to computational affordances, but it also highlights a significant divide between the programmers who create media software and the artists and designers who use it. Most designers and artists view media-creation applications as toolboxes that are unchangeable until the latest version is announced, but how much more interesting could art be if the programs were less commercial and the fields of development and art merged more often and more fluidly? With a metamedium, the confinements on what artistic media can be are virtually nonexistent. Empowering artists to create their own tools is the only step missing to unlock this full potential.


Judging how the iPad App Flexcil Uses the Spatial Affordance of Interface Design

As personal computing spreads from its PC origins and into tablets and smartphones, the common graphical design principles that governed early computing — notably the WIMP (windows, icons, menus, pointing device) model — are becoming less relevant when designing user experiences (Murray, pg. 73). As computing becomes more ubiquitous in daily life, designers do not need to be as reliant on a “desktop” metaphor when designing an application’s interface. In fact, when designers mix spatial affordance metaphors, the software’s interface can suffer because of it.

The spatial affordance of computing refers to how users feel as if interfaces are not simply objects and images with which one interacts, but actual spaces and “sites” (Murray, pg. 71). As mentioned before, the common way to represent this affordance on a desktop is through the WIMP model. However, the hardware constraints of a tablet make the WIMP model less relevant. While the WIMP desktop model and iOS tablet navigation share some navigational similarities, the main difference, to me, is that within iOS an application “takes over” the screen of the tablet. Once one opens an app, the app becomes an entire environment in which one operates. Let’s take the Flexcil app as an example, as its mixed spatial metaphors leave something to be desired.

After opening the app, the user feels like they are “in” the main screen of the app. This space feels like the main hub, and anything one does within this space should reflect an operation upon that space.


To import files into this space, one navigates to a side menu and selects the file one wants to import into the app environment. However, after one imports a file into the app space, it is nowhere to be found.

One has to navigate into the downloads folder to find the imported file. When I write this process out, it feels fine and unproblematic. So why does this process feel inorganic while using the app?


On a desktop, one would expect to navigate to a downloaded folder to find a downloaded file. But, because tablets so often use “encompassing environments” within their applications’ spatial analogies, it feels inorganic to have a file appear in a space that isn’t the one you’re currently visiting.

Computational Thinking as a Means to Connect with My Future Self

When I first learned to code, I felt a slight shift in the way I viewed the world. I began to break the big problems and projects in my life into minute, almost modular, tasks. The way in which I solved problems didn’t drastically change, but the attitude and approach I had in facing a problem was completely refreshed. Learning to program — which mostly involves breaking down a large task into a system of smaller tasks that amalgamate to accomplish a large goal — enlightened me to the fact that virtually any problem can be broken down in a similar manner.

This realization seems pretty elementary, but this new way of deconstructive thinking opened up an entirely new way for me to approach almost every aspect of my life. Suddenly, giant goals didn’t seem like insurmountable tasks, but rather algorithms that I had yet to deconstruct and program step-by-step. In a way, I was able to more easily picture the desired results of my future thanks to the simple power of Boolean logic. For example, everyone knows that to achieve a desired outcome, you have to work toward it. But with my brain beginning to adopt computational thinking patterns, I was able to use if-then patterns to connect with my future self.

Now, instead of just feeling hopeless or powerless in the face of a predetermined future, I had agency to do something about it. If I wanted to get back in cross country shape, then I needed to start training again. Perhaps I’m being a bit hyperbolic about the degree to which adopting computational thinking patterns opened my eyes; more accurately, computational thinking let me internalize and act on realizations I already had. I knew all along that I had the ability to change my future and accomplish large goals, but computational thinking gave me the toolbox to do so efficiently. Computational thinking helped me to “de-black-box” problems in my life and view them as modular, familiar components.
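To make the idea concrete, here is a toy sketch (my own illustration, with invented numbers, not anything from the original post) of treating a big goal as a small program: the cross-country goal is decomposed into a weekly training step and an if-then check that connects present effort to a future outcome.

```python
# Toy decomposition of "get back in cross country shape" into modular steps.
# All numbers are hypothetical, chosen only to illustrate the pattern.

def weekly_mileage(weeks_trained):
    """Hypothetical plan: start at 10 miles/week, add 2 each week, cap at 40."""
    return min(10 + 2 * weeks_trained, 40)

def in_cross_country_shape(mileage):
    # Assumed threshold: "race shape" means sustaining 30+ miles per week.
    return mileage >= 30

# The if-then pattern connecting present action to a future outcome:
# if I am not yet in shape, then I train another week.
weeks = 0
while not in_cross_country_shape(weekly_mileage(weeks)):
    weeks += 1

print(weeks)  # → 10 (weeks of training before reaching "race shape")
```

The point is not the arithmetic but the shape of the thinking: a large, fuzzy goal becomes a loop over small, checkable steps.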

Affordances within the Nike Run Club App

Just looking at the screen of an iPhone, one understands that each little app icon is tappable and will open a new application. This affordance draws on a long history of small icons representing applications within computational graphical user interfaces. We could even go one step further and say that the physical touch screen provides the affordance of tapping these apps to open them. The flat design of the glass screen and the implied semiotic reconfigurability of a phone intuitively beckon a user to tap or touch the colorful icons.

Once you tap on an app to open it, the app fills the entire screen. This constrains the user to operating within only one app at a time. The constraint is born of the iPhone’s physical design: the relatively small screen would be too cluttered and hard to interpret if multiple apps could run on screen at once. Multitasking is still available on the phone, as the modular apps can communicate with one another and run in the background, but the iPhone’s screen ensures that the user is only immersed in one app at a time.

When looking at which design affordances are used, I was struck by how many derive from acquired cognitive models of graphical user interfaces. Once I began to think critically about what felt intuitive when navigating within the app, and to explain why those actions felt intuitive, I realized that these intuitions mostly came from past smartphone apps. For example, the top row of menus obviously affords clickability to the two non-bold options. This affordance is only apparent to me because of socialization with previous examples of hierarchical typography, which taught me to treat bolded items as wayfinders within information. If I were an alien with no human socialization, nothing about the top menu design would tell me that I could tap the other options. It truly is semiotic representation all the way down.

Facebook is Built off the Contribution of the Commoner

It is imperative that people understand the systems they use to govern their daily lives. With the rampant productization and blackboxing of “consumer technologies,” many people use products and software every day without knowing what those products do or how they use data. Because users are ignorant of their devices and software as parts of a larger system, they are prone to a shared misrecognition of the part they play within a mediated system. In this state of shared misrecognition, media technology is viewed as deterministic, given from a company to a user.

If people more openly recognized devices and software as sociotechnical artifacts, they would have more agency and independence over the uses and capabilities of those devices and software. This increase in user power would come from realizing that each user of a sociotechnical artifact provides more meaning, power, and value to the network of which they are a part. What effect does this collective misrecognition have on the common user?

Let’s use the Facebook Feed as an example. Generally, people view the goal of the Facebook Feed as keeping up with friends and family. Speaking incredibly simplistically, we can divide the process of value creation via Facebook into three actors with distributed agency: the users, Facebook’s interface, and the algorithms that serve content on Facebook. Under the false presumption that Facebook is simply a technology product served from a company to a user base, Facebook has all the power over its systems, while users have virtually none. However, seen through the lens of distributed agency, the interactions between users create a large portion of the mediated process of “keeping up with friends and family.”

Once the user realizes that he or she is a large part of this distributed agency, he or she will be able to demand more of the software. Until we, as users, can recognize our agency in the process of value creation on Facebook, we will be prone to myths of “deals” in which we sell our myriad personal data in exchange for the “value” that Facebook says it supplies to users. Granted, this unfair trade-off is probably not born of malicious volition, but rather of the fact that both human parties, users and software engineers at Facebook, view media technology as creating effects for users. Thus, users and engineers need to destroy the “wall” that exists between technology and culture so that we can see that technology does not exist as a fixed system.

Google Maps and My Brain

At this point, I view Google Maps almost as an extension of my consciousness. Anytime I move to a new city, or even traverse a new area of my own city, I have Google Maps up and running on my phone. Google Maps is the cognitive artifact in my life that most clearly illustrates the point that I am able to “offload” some of my cognitive load onto an artifact.

When I’m deciding when I need to leave to arrive at a place on time, I don’t have to wonder which way will be the fastest or calculate how long my journey will take. I simply type in an address on Google Maps, and the app presents me with multiple routes, methods, and times that my journey will take. Thus, instead of having to work through the cognition required to calculate all that information myself, I just allow Google Maps to do the work for me. The cognition required to calculate all this information was distributed among the engineers and designers who collaborated to build Google Maps, and Google Maps, as a cognitive artifact, gives me access to this distributed cognition.

When I’m exploring or deciding where to go out, I’m biased to think that areas within the orange/brown shaded areas are somehow better or more interesting. 

Additionally, Google Maps has had a profound impact on how I view the world around me. Google Maps, fundamentally, is a symbolic representation of the world in which I live. Streets become lines, my movement becomes a highlighted path, buildings and destinations become blocks and pins, and even I become a glowing blue dot in the representative world of Google Maps. Before I consider going to a neighborhood, I’ll look at its representation on Google Maps. If the area is shaded a different color, indicating that it is an “area of interest,” I’m more likely to want to go there. When I’m walking around, I’ll orient myself in the real world based on the stores and points that are most prominent on Google Maps.

With how much trust I place in Google Maps to perform supplementary cognition for me, I do not stop often enough to consider the influence that advertising and a profit-driven product have on me. The ability to view the world in a traversable representation is powerful, but if the highlights of this world are determined by commerce and consumption, what potential benefits am I missing out on? How could a cognitive artifact that lets me accessibly view the world around me be improved if it didn’t have to answer to the burden of profit?

