“Media of representation shape our understanding of the world. They do not just contain information; they also determine what can be communicated. They provide the loom on which we can weave the fabric of human culture.” (Janet H. Murray, 2012)
If we consider the components of the digital applications we use today to obtain information and communicate with others from a systems point of view, attending to the affordances, constraints, and interfaces that emerge between users and apps in a cognitively distributed way, an interesting group of components to analyze is that of algorithms.
Algorithms are the step-by-step instructions programmed into software to process specific input in a specific way and obtain a specific result. The results sought depend on the many potential uses of an application, and each is pursued through a series of algorithms that move through a step-by-step process. Each of these steps represents an interface in which information is processed according to a structure and logic grounded in cultural webs of meaning that, as Murray (2012) explains, are “making meaning” as they go.
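The input → steps → output structure described above can be made concrete with a toy sketch. The documents and the term-frequency scoring rule below are invented for illustration; no real search engine works this simply:

```python
# Toy illustration of an algorithm as step-by-step instructions:
# take a query (input), score documents (processing), return a ranked list (output).
# The documents and the scoring rule are invented for this example.

def toy_search(query, documents):
    terms = query.lower().split()                        # step 1: normalize the input
    scored = []
    for doc in documents:
        text = doc.lower()
        score = sum(text.count(t) for t in terms)        # step 2: score by term frequency
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)  # step 3: sort by score
    return [doc for score, doc in scored if score > 0]   # step 4: output ranked results

docs = [
    "Official site of the movement",
    "News item about the movement and the movement's history",
    "Unrelated page",
]
print(toy_search("movement", docs))
```

Even in this miniature form, each step embodies a designed choice (how to normalize, what counts as a match, how to order) that shapes what the user ultimately sees.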
Let’s consider a search algorithm. If I search for the term “Black Lives Matter” on Google, Facebook, and Twitter, I obtain different outputs and displays of this social movement campaigning against systematic racism.[i] From a perspective of distributed cognition (Zhang and Patel, 2006), we can observe mixed affordances in the display of results each search engine produces. In each application, information is organized according to physical, perceptual, and cognitive affordances (Zhang and Patel, 2006) that allow me to interact with it according to the cultural conventions of each platform.
On Google, for example, I first obtain a list of “All” results, which begins with an ad, followed by a link to the movement’s official website, a subsection of news items, a link to the movement’s official Twitter account, the Wikipedia entry, and more Twitter links. I see only one image in this display, I can filter results by media type (images, video) and content type (News and Shopping), and I am offered further search tools.
On both Facebook and Twitter, on the other hand, the first result list displayed is the “Top” list, and each offers an option to filter by posting time (“Latest” on Facebook, “Live” on Twitter). The results on these social media apps are posts by people or organizations, and each contains information about how widely it has been shared (number of views, comments, shares, likes, retweets).
In both apps there are images of various sizes, which changes the perceptual sense of what is most prominent, according to the conventions of each site. Facebook has lately been promoting video consumption, so the first category of results displayed is videos, followed by “Top Public Posts” and “Posts from Friends and Groups.” As is characteristic of Facebook, the display of results provides cognitive affordances for me to interact with this information based on what a Facebook-selected group of users is doing (“Top Public Posts”) and what a self-selected group is doing (“Posts from Friends and Groups”). Twitter offers a similar choice, but filtering to “People You Follow” requires further interaction with the platform. The social aspect of the information is much more prominent in the affordances of the Facebook and Twitter displays than in Google’s, and the social apps give more indication of what makes results relevant (how widely and recently they have been shared) than Google does (Google displays time stamps for news items and Twitter links but gives no indication of relevance for the other results).
These results have been sorted by algorithms that processed my query using databases and criteria I cannot see or know, but once they are displayed, they provide a space, an interface, of affordances and constraints for me to interact with information about this movement. The moment in which the algorithms process my query is blackboxed, but each result display is nonetheless a space of distributed cognition among the algorithms, their displayed results, and my (and other users’) interactions with them. As Murray puts it in the quote above, it is a space of meaning making; but because each display is designed according to cultural conventions, it “determines what can be communicated” (affordances and constraints), and thus how this meaning can be constructed.
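A minimal sketch can show how hidden criteria shape a display: the same two invented posts come out in a different order depending on which weights are applied. The posts and weights here are assumptions made up for the example, not any platform’s actual values:

```python
# Sketch: the same items ranked under different (hidden) criteria produce
# different result displays. Posts and weights are invented, not any platform's real data.

posts = [
    {"text": "Movement news", "shares": 500, "hours_old": 30},
    {"text": "Friend's post", "shares": 20,  "hours_old": 1},
]

def rank(posts, w_shares, w_recency):
    # Higher share counts and more recent posts score higher;
    # the weights are the blackboxed criteria the user never sees.
    return sorted(posts,
                  key=lambda p: w_shares * p["shares"] - w_recency * p["hours_old"],
                  reverse=True)

top = rank(posts, w_shares=1.0, w_recency=0.1)     # popularity-weighted, like a "Top" list
latest = rank(posts, w_shares=0.0, w_recency=1.0)  # recency-weighted, like a "Latest" list
print(top[0]["text"], "|", latest[0]["text"])
```

The point of the sketch is that neither ordering is neutral: the choice of weights, invisible to the user, determines which post is perceived as most relevant.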
According to media scholar Tarleton Gillespie (2014), algorithms are now “a key logic governing the flows of information on which we depend,” and thus require our attention and understanding. From the design-thinking point of view of this seminar, it is interesting to consider the algorithm as one of the key (blackboxed) components with which we interact in the cognitively distributed systems of these “media of representation.” Algorithms are themselves constructed according to a cultural symbolic logic, and they provide us with affordances and constraints for interacting with information. (Blackboxed) algorithms are thus an important component to consider when analyzing how we construct meaning in distributed cognition systems today.
Janet H. Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.
Jiajie Zhang and Vimla L. Patel, “Distributed Cognition, Representation, and Affordance.” Pragmatics & Cognition 14, no. 2 (July 2006).
Tarleton Gillespie, “The Relevance of Algorithms.” In Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press, 2014.