Author Archives: Dominique Haywood

DeBlackboxing Recommendation Algorithms



Algorithms are omnipresent in modern life, yet the construction, motivations, and information used to design them remain largely unknown. Spotify’s Discover Weekly playlist is one of the most popular examples of the satisfaction a robust recommendation algorithm can deliver. By de-blackboxing the recommendation algorithms of three popular sociotechnical artefacts, the design principles behind these algorithms, and their affordances and constraints, will become clear. The scalability and extensibility of recommendation algorithms will also be discussed in an effort to suggest how an existing product could benefit from the integration of a recommendation system.



Within the last five years, consumers have begun to acknowledge the impact that algorithms have on their everyday lives. From online shopping to traffic control, algorithms are ubiquitous, invisible, and misunderstood. Major corporations design algorithms to create curated lifestyles and loyal customers, and society’s engagement with sociotechnical systems has expanded corporations’ capacity to target and sell to consumers. Spotify’s recommendation algorithm is so refined that users viewing their Discover Weekly playlists claim Spotify seems to know them better than their spouses (du Sautoy). Users often accept these seemingly individualistic recommendations without questioning why a particular song was suggested. The breadth of user knowledge and information being manipulated is misunderstood, or even unknown, by many; understanding how these algorithms are designed may start to change this.

Users are integral to the operations of sociotechnical systems, just like the technologies, artefacts, and processes within them. The rise of applications has led to system innovations that prioritize personalization and individual identity expression, creating the perception that applications designed for sociotechnical systems put the individual first (du Sautoy). In reality, corporations scale their businesses by designing systems that appear individually targeted but rely on the behavior of the masses to analyze and predict what each user will do.

The first step to understanding how recommendation algorithms are designed is to understand why they are designed and integrated into technology. The primary and most obvious goal is to sell a product or service to a customer; the less obvious goals are to maintain user interest in the technology. Specifically, operational goals such as recommendation novelty, relevance, diversity, and serendipity are integral to keeping users engaged (Aggarwal). For example, if an Amazon shopper purchases one book on astrophysics but is then constantly inundated with astrophysics-related content and recommendations, he or she may lose interest in the other products Amazon has to offer. Algorithms must be designed to recognize and categorize user behaviors in order to meet these operational goals, which are integral to user satisfaction and can ultimately lead to consumer loyalty. If the shopper who purchased the astrophysics book instead began seeing recommendations for chemistry, space, or physics books, he or she might find a product that furthers his or her astrophysics education. Relevant and diverse recommendations may entice the consumer to purchase another book, which in turn signals to the algorithm that scientific content interests this consumer (Source). Consistent innovation is required for recommendation systems to achieve both the operational goals of the system and the financial goals of the corporation.
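The trade-off between raw relevance and diversity described above can be sketched as a simple greedy re-ranking step. This is a hypothetical illustration, not any company's actual algorithm: every item name, category, and score below is invented.

```python
# Hypothetical sketch: re-rank candidate recommendations so that raw
# relevance is traded against diversity, one way to pursue operational
# goals like relevance and diversity. All data here is invented.

def rerank(candidates, k=3, diversity_weight=0.5):
    """Greedy re-ranking: each pick's relevance is penalized by how many
    already-chosen items share its category."""
    chosen = []
    remaining = list(candidates)
    while remaining and len(chosen) < k:
        def adjusted(item):
            same_category = sum(1 for c in chosen if c["category"] == item["category"])
            return item["relevance"] - diversity_weight * same_category
        best = max(remaining, key=adjusted)
        chosen.append(best)
        remaining.remove(best)
    return chosen

books = [
    {"title": "Astrophysics 101", "category": "astrophysics", "relevance": 0.95},
    {"title": "Black Holes",      "category": "astrophysics", "relevance": 0.90},
    {"title": "Intro Chemistry",  "category": "chemistry",    "relevance": 0.70},
    {"title": "Physics of Space", "category": "physics",      "relevance": 0.65},
]

picks = rerank(books)
print([b["title"] for b in picks])
```

With the invented scores above, the second astrophysics book is displaced by the chemistry and physics titles, mirroring the Amazon-shopper example: the most relevant item still comes first, but the rest of the slate is diversified.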

The goals of recommendation systems indicate the kinds of information needed to achieve them: user data and product data. These two forms of data can be managed through one or several models of recommendation system. Collaborative filtering is user-centric; content-based recommendation relies on key terms and similarities between items; hybrid systems blend multiple models to serve the needs of the users and products of a particular technology (Aggarwal). These systems are built modularly, with subsystems designed to assess different information inputs. Understanding how design principles shape the architecture of existing recommendation systems can assist the development of new and innovative recommendation systems across industries. Recommendation systems are designed into several applications across industries to add another layer of user engagement. The fashion industry, however, does not have a mainstream recommendation system that introduces buyers to new brands or designers the way Spotify does with the Discover Weekly playlist. By analyzing the recommendation algorithms of YouTube, Spotify, and Amazon, it will become clear how a recommendation system can be used to improve an existing product.



The vast number of videos on YouTube, and the frequency with which new ones are posted, is managed through the modular design of its recommendation algorithm. The system is organized to mitigate three main constraints: scale, freshness, and noise (Covington). Content and users are handled through a strategic division of information into an interconnected structure: a content abstraction layer manages the number and features of the videos, while a user abstraction layer manages the demographic and behavioral data of the users.

Figure 1: YouTube Recommendation Algorithm Architecture (Covington)

YouTube’s recommendation algorithm is designed as a two-stage system. The first stage, the candidate generation network, analyzes a user’s viewing behavior to initiate the sorting and retrieval of hundreds of relevant videos. This stage is designed with collaborative filtering and relies on user data such as video watches, search queries, and demographics. Candidate generation builds on matrix factorization, which trains the algorithm through a rank loss (Covington). The rank-loss algorithm is designed to handle large datasets through precision rankings, ultimately allowing the system to select relevant content quickly while using little memory (Weston). Without such methods for selecting content from YouTube’s larger video corpus, the breadth of videos from which recommendations can be made would be limited. Prior iterations of the algorithm assessed the corpus using historical viewing data about who made a video and what kind of video it was (Covington). The current algorithm uses more robust datasets, compared against the behavior of similar types of users, to narrow the pool of suggested videos from millions to hundreds.
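At serving time, candidate generation amounts to a nearest-neighbor search: a learned user embedding is scored against learned video embeddings, and the top scorers become the candidate pool for the ranking stage. The sketch below is illustrative only; the tiny hand-made vectors and video names are invented, and the real system learns its embeddings with a deep network over the full corpus.

```python
# Illustrative sketch (not YouTube's actual code): candidate generation
# as nearest-neighbor search over embeddings. The dot product stands in
# for the trained scoring function; all vectors are hand-made.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def generate_candidates(user_vec, video_vecs, top_n=2):
    """Return the top_n video ids whose embeddings score highest
    against the user embedding."""
    scored = sorted(video_vecs.items(),
                    key=lambda kv: dot(user_vec, kv[1]),
                    reverse=True)
    return [video_id for video_id, _ in scored[:top_n]]

user = [0.9, 0.1, 0.3]  # e.g. derived from watch and search history
videos = {
    "cooking_tutorial": [0.8, 0.2, 0.1],
    "music_video":      [0.1, 0.9, 0.4],
    "kitchen_tour":     [0.7, 0.1, 0.3],
}
print(generate_candidates(user, videos))
```

In production the corpus holds millions of videos, so this search is done with approximate nearest-neighbor techniques rather than an exhaustive sort, but the shape of the computation is the same.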

The second stage of the recommendation is the ranking process, which analyzes features of the video, the user, and the content creator, further narrowing the number of videos suggested to the user. User profiles are determined by an embedding designed to classify each video view at a particular time, among all the videos in YouTube’s corpus, based on the viewer and the context of the view (Covington). This process is integral to breaking the traditional pattern of recommending videos to users based solely on past videos. The embedding provides implicit feedback for ranking, which is used to train the recommendation algorithm. Explicit feedback systems are also designed into YouTube; however, they require direct user input, which can be sparse.

Figure 2: Formula for Ranking Embedding (Covington)

One affordance of YouTube’s recommendation algorithm is that users with Google-connected accounts benefit from long histories of engagement with YouTube and Google content, which is integral to training the algorithm. These users are likely recommended quality content of interest at higher rates than new users. Another affordance is that the algorithm is predominantly trained with implicit feedback, such as video watch times, likes, comments, and channel subscriptions. This benefits users for the opposite reason that explicit feedback on YouTube is minimal: it requires little effort from users, yet it is valuable for their recommendations.

One constraint of YouTube’s recommendation algorithm is that autoplay may distort the implicit feedback gained from viewing videos. If a user does not turn off autoplay, multiple videos of no interest may be viewed, shifting the pool from which videos are pulled for that user. Another constraint is that the algorithm cannot distinguish between videos with true or false content. Several incidents over the last few years, including recommendations for Hillary Clinton conspiracy videos during the 2016 elections, have caused consumers to question the validity of YouTube’s content (Sharma). Part of YouTube’s ranking system is designed to prioritize high-traffic videos in order to keep users on the site; however, many of these videos can be sensationalist (Swearingen).

As stated by Régis Debray, media technologies are important co-dependent mediations, integral to the spread of cultural artefacts and cultural institutions (Irvine). However, the relationship between artefact and culture becomes unbalanced when artefacts are designed to filter culture for consumers. This problem is relevant to YouTube’s algorithm because it narrows millions of videos down to dozens without factoring in the validity of the content being recommended. The issue is especially acute for sociotechnical artefacts that host media, because they have the power to sway viewers’ ideologies and actions. In other industries, like e-commerce and music, the reliability of a recommendation is less impactful: if a consumer orders a product that was misrepresented on an e-commerce application, he or she can simply return the item. Correcting the spread of misinformation is not difficult; it simply requires a concerted effort to mitigate it.


Spotify’s recommendation algorithm for the Discover Weekly playlist relies on three recommendation models: collaborative filtering, natural language processing, and audio analysis (Galvanize). Through the acquisition of Echo Nest, a Boston-based start-up, Spotify’s algorithm was advanced with acoustic analysis, which allows music on the application to be classified by several aural factors. Echo Nest is also designed to crawl the internet for music-related digital media in order to find actionable, quantifiable data for Spotify’s recommendation algorithm (Prey). The design of Echo Nest relies on distributed cognition across social media posts, blogs, and music reviews, as well as natural language processing to identify key words and phrases from which similarities between songs can be derived. Distributed cognition in Echo Nest also enables collaborative filtering, because the identification of shared key words and phrases indicates similarities in cognitive processes (Hollan). The design of Echo Nest enables analysis of the semantics, tempo, and even danceability of the songs within Spotify’s corpus. This in-depth assessment is integral to distinguishing, say, rock from Christian rock, along with other genres that may share similar tempos and structures but have vastly different content and listeners (Prey).

User data gathered through Echo Nest is managed through a tool called the Taste Profile, which tracks a user’s interaction with content on the application. The Taste Profile is a content-filtering module within Spotify’s recommendation algorithm (Prey). User data within Spotify is generated through implicit feedback, such as the number of times a user listened to a song and the actions taken while or after listening. Explicit feedback is also recorded through behaviors such as skipping songs and clicking the thumbs-down button (Pasick).

Figure 3: Example of a Taste Profile (Pasick)


Spotify’s recommendation system is designed to narrow down potential content of interest by ranking songs and playlists. The ranking system prioritizes songs and playlists with high follower counts, along with Spotify-generated playlists (Prey). The actual calculation of the songs recommended to users is done through matrix factorization with Python libraries. This process results in two vectors, one for user information (x) and one for songs (y), which are compared using collaborative filtering (Ciocca). Deep learning is also integral to identifying patterns in user behavior across the platform; recognizing patterns across users helps make the music selections more specific and more personalized-feeling (Pasick).
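The factorization step described above can be sketched in miniature. This is a minimal, hand-rolled illustration, not Spotify's production pipeline (which uses far larger matrices and optimized libraries): each user and song gets a small latent vector, and gradient descent nudges the vectors so their dot products approximate observed listening strength. All user names, song names, and play counts are invented.

```python
# Minimal matrix-factorization sketch in the spirit of the process
# described above. Invented data; not Spotify's actual code.

import random

random.seed(0)

plays = {  # (user, song) -> implicit-feedback strength
    ("ana", "song_a"): 5, ("ana", "song_b"): 4,
    ("ben", "song_b"): 5, ("ben", "song_c"): 4,
}
users = {u for u, _ in plays}
songs = {s for _, s in plays}
K = 2  # latent dimensions

x = {u: [random.random() for _ in range(K)] for u in users}  # user vectors (x)
y = {s: [random.random() for _ in range(K)] for s in songs}  # song vectors (y)

def predict(u, s):
    return sum(a * b for a, b in zip(x[u], y[s]))

lr = 0.01
for _ in range(2000):
    for (u, s), r in plays.items():
        err = r - predict(u, s)          # how far off is the dot product?
        for k in range(K):               # gradient step on both vectors
            x[u][k] += lr * err * y[s][k]
            y[s][k] += lr * err * x[u][k]

print(round(predict("ana", "song_a")))   # should recover roughly 5
```

Once trained, the song vectors can be compared to each other (or to a user vector) to find unheard songs a listener is likely to enjoy, which is the collaborative-filtering comparison the paragraph above describes.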

Figure 4.: Spotify Matrix Factorization Equation

Figure 5.: Spotify Vectors


Seventy-five million Spotify listeners benefit from the recommendation system and the playlists it produces (Pasick). This positive reception is largely due to the cultural affordances designed into the system, specifically its pattern recognition, community sourcing of content, and perceived specificity. Discover Weekly is a unique product because it mediates the social behavior of recommending music. The specificity and accuracy with which the playlist is curated is designed to feel as if the application knows the user personally. The playlist also benefits from participatory affordances embedded within the culture of listening to music: the spread of digital music files in the late 1990s turned music into a participatory activity, even for a user listening alone (Murray).

However, some constraints of the system stem from the interface of Spotify’s application. One major constraint is users’ ability to find the Discover Weekly playlist at all. Though there are over one hundred million active Spotify users, only about half of them use Discover Weekly (Aswad). As a long-time Spotify user, I was unaware of the recommendation system until very recently, despite its debut in 2015 (Ciocca). This constraint of visibility and usability lies not in the algorithm but in the interface of the application overall. This wildly popular feature should be displayed more prominently when a user opens the application; highlighting Discover Weekly would likely increase time spent in the application and further satisfy the goal of designing the algorithm. Another usability constraint is that users cannot easily save the playlist each week. For a product so efficiently and effectively designed, it is strange that the playlist is replaced weekly without a simple option to save it.


Amazon was one of the first commercial companies to pioneer a recommendation system. Amazon’s original recommendation system was designed around inputs of buying behaviors, explicit feedback from ratings and browsing behaviors (Aggarwal). Over time, the algorithm was redesigned to assess both the user’s previously purchased and rated items, as well as features of the items themselves (Martinez).

Amazon’s designers call the system item-to-item collaborative filtering, which differs from traditional collaborative filtering in that the algorithm prioritizes items likely to be purchased in tandem. The algorithm is designed to determine likeness between products and is structured to account for the non-uniform distribution of customer purchase histories when estimating the probability of future purchases. The designers determined that randomly sampling purchased items would bias the recommendations toward atypical behaviors: a heavy buyer’s purchasing history is more likely to be selected as representative of other purchasing probabilities, yet a heavy buyer’s engagement with Amazon products is likely not representative of all users. The algorithm mitigates this problem by modelling each customer, denoted by c in the equation, who has purchased product X as having had multiple opportunities to buy product Y. The probability of customer c purchasing product Y is determined by the number of non-product-X purchases made, or the probability that any random purchase is product Y. The next layer of the formula determines the expected number of product Y customers among product X customers, enabling a comparison between the expected number of customers who purchased both X and Y and the observed number who did (Smith).

Figure 6: Related Items Calculation (Smith &amp; Linden, “Two Decades of Recommender Systems at Amazon.com”)
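The expected-versus-observed test described above can be sketched as follows. This is a hedged reconstruction loosely modeled on Smith and Linden's description, not the exact production formula: the normalization and all the counts below are illustrative assumptions.

```python
# Hedged sketch of the related-items test: compare the observed number
# of customers who bought both X and Y with the number expected if the
# purchases were unrelated. The sqrt normalization is an assumption,
# chosen so heavy co-purchase counts don't dominate; exact production
# details may differ. All counts are invented.

import math

def relatedness(n_both, n_x, n_y, n_total_purchases):
    """Score how surprising the X-and-Y co-purchase count is.
    n_both: customers who bought both X and Y
    n_x, n_y: customers who bought X (resp. Y)
    n_total_purchases: stand-in for overall purchase volume."""
    expected = n_x * (n_y / n_total_purchases)  # if purchases were independent
    return (n_both - expected) / math.sqrt(expected)

# A camera and a memory card bought together far more often than chance
# predicts should score higher than a chance-level pairing.
strong = relatedness(n_both=80, n_x=100, n_y=400, n_total_purchases=10000)
weak = relatedness(n_both=5, n_x=100, n_y=400, n_total_purchases=10000)
print(strong > weak)
```

The key point the paragraph makes survives in the sketch: a product pair is "related" not because it is bought together often in absolute terms, but because it is bought together more often than the customers' overall purchase volumes would predict.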


The related-items computation is completed offline, which enables similar products to be identified quickly and with little memory required at serving time. Item-to-item collaborative filtering also avoids the cold start problem evident in user-based collaborative filtering models: a user initiates it simply by purchasing or browsing a product, which indicates interest (Linden).
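The offline step can be illustrated as precomputing a similarity table from co-purchase data, so that serving a recommendation is just a fast lookup. This is a generic sketch of the idea, not Amazon's code; the product names, purchase histories, and the use of cosine similarity over buyer sets are all illustrative assumptions.

```python
# Sketch of the offline item-to-item step: build each product's "buyer
# set", compute pairwise similarities once, and serve recommendations
# by table lookup. Invented data; cosine similarity is an assumption.

import math
from itertools import combinations

purchases = {  # customer -> set of products bought
    "c1": {"camera", "memory_card", "tripod"},
    "c2": {"camera", "memory_card"},
    "c3": {"novel", "cookbook"},
    "c4": {"camera", "memory_card"},
}

# Product -> set of customers who bought it (the item's purchase vector).
buyers = {}
for customer, items in purchases.items():
    for item in items:
        buyers.setdefault(item, set()).add(customer)

def cosine(a, b):
    return len(buyers[a] & buyers[b]) / math.sqrt(len(buyers[a]) * len(buyers[b]))

# Offline: similarity table, computed once.
similar = {item: {} for item in buyers}
for a, b in combinations(buyers, 2):
    sim = cosine(a, b)
    if sim > 0:
        similar[a][b] = sim
        similar[b][a] = sim

# Online: serving "customers who bought X also bought..." is a lookup.
best = max(similar["camera"], key=similar["camera"].get)
print(best)
```

Because the expensive pairwise pass happens offline, the online path does no heavy computation, which is the speed-and-memory property the paragraph attributes to the system.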

Time is an integral factor in the quality of the recommendations given by the algorithm, affecting the relevancy of the products suggested to the user. Designing the algorithm to promote not only new products that have not yet been bought but also complementary products, like a camera and a memory card, keeps customers interested in the site’s offerings. Adapting the algorithm to recommend new products raises a cold start problem, which is essentially a lack of information about a product or customer (Smith). This issue is not exclusive to Amazon and is present in many recommendation algorithms; the lack of information is an inherent constraint of recommendation algorithms, though all three algorithms discussed here have methods to mitigate it.

Amazon’s recommendation algorithm is representative of a change in cultural values, especially as it relates to new products and timing. Murray indicates the need for designing for core human needs especially at the start of the design process by identifying the function, context and core of the designed artefact (Murray). For Amazon, timing is integral to the context of their website and product offerings. Seasonality of items like Christmas tree decorations and beachwear must be first acknowledged by the designers of the algorithms then subverted through the creation of an effective system architecture.

The encyclopedic nature of Amazon is both an affordance and a constraint. Users are attracted to the site by the breadth of products and goods available for purchase, but the corpus also includes poor-quality products that may not meet users’ expectations. This constrains the recommendation algorithm because returned products were once purchased, and the algorithm records them as purchases. This disrupts the accuracy and precision of recommendations: a user may read negative reviews of a suggested product, see the negative experiences of other customers, and forgo the recommended product. Another constraint is that the initial operational goal of the algorithm was to recreate the temptation of products displayed at the registers of brick-and-mortar stores; the designers did not originally intend to build an algorithm for finding related products for prospective consumers. The incongruence between the algorithm’s original and current goals creates a dissonance that may affect its design and advancement. Amazon’s algorithm does not seem as modular as those of Spotify or YouTube, likely because a preexisting system architecture with a different goal has been adapted to meet the new one.

Scalability of Recommendation Algorithms

All three algorithms analyzed focus on two major types of data: customer data and product data. Each was designed to manage industry-specific data and behaviors, which makes any one of them difficult to scale across industries without modification. Scalability and extensibility can be designed into one, or a combination, of the recommendation algorithms of YouTube, Spotify, and Amazon to accommodate the affordances and constraints of the fashion industry. Scalability would increase the amount and types of data that the algorithm could assess and manage; extensibility would expand the system architecture to include more forms of data and layers of information (Irvine).

The women’s fashion industry does not presently have a consumer-facing sociotechnical artefact or application that recommends products or popular items. There are fashion subscription boxes like Stitchfix, Trunk Club, and Fabletics, which give users a simple quiz to predict their style and send a monthly box of items the consumer may be interested in. These quizzes are very simple, gathering information in fewer than twenty questions, and the data analysis behind the boxes is far less extensive than the recommendation algorithms of YouTube, Spotify, and Amazon. The limited investment in data analysis may be explained by the goals of the boxes: a subscription model already ensures that the companies behind them are financially profitable, and strategic partnerships with the brands they recommend make it even more financially attractive to maintain the current strategy.

If a subscription box company wanted to integrate a deep-learning recommendation algorithm into its product, it could combine an algorithm similar to Amazon’s item-to-item collaborative filtering with Spotify’s mapping technique for creating taste profiles. Item-to-item collaborative filtering is an obvious choice for an e-commerce application, but integrating community-sourced recommendations, as Spotify does, would greatly improve it and would mitigate the product-quality constraint of Amazon’s current algorithm. Community-sourced recommendations would help ensure that customers with similar style aesthetics are recommended items that similar users have bought and liked. The cold start problem would remain a constraint, and until there are widespread advancements in recommendation algorithms there does not seem to be a way to mitigate it.
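The hybrid proposed above can be sketched as a blend of two signals: an Amazon-style co-purchase score and a Spotify-style taste-profile match. Everything below is hypothetical, including the item names, the two-dimensional style profiles, and the 50/50 blend weight.

```python
# Purely hypothetical sketch of the proposed fashion hybrid: blend an
# item-to-item co-purchase score with a taste-profile match, so that
# co-purchase patterns are weighted by how closely the item's buying
# community matches the shopper's style. All values are invented.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def hybrid_score(item, shopper_profile, item_item_scores, community_profiles,
                 blend=0.5):
    co_purchase = item_item_scores.get(item, 0.0)          # Amazon-style signal
    style_match = dot(shopper_profile, community_profiles[item])  # taste signal
    return blend * co_purchase + (1 - blend) * style_match

shopper = [0.9, 0.1]  # hypothetical taste profile: [minimalist, maximalist]
item_item = {"linen_blazer": 0.8, "sequin_dress": 0.9}    # co-purchase scores
communities = {                                           # avg buyer profiles
    "linen_blazer": [0.8, 0.2],
    "sequin_dress": [0.1, 0.9],
}

ranked = sorted(item_item,
                key=lambda i: hybrid_score(i, shopper, item_item, communities),
                reverse=True)
print(ranked[0])
```

Note how the blend changes the outcome: the sequin dress has the higher raw co-purchase score, but the minimalist shopper's taste profile pulls the linen blazer to the top, which is exactly the quality-and-fit correction the community-sourcing argument calls for.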



1.     Adamides, Emmanuel. (2018). Activity-based analysis of socio-technical systems innovations.
2.     Aggarwal, C. C. (2016). Recommender systems. Cham: Springer International Publishing.
3.     Aswad, J. (2018, March 26). Spotify Projects Slower Growth, 90 Million-Plus Subscribers by End of 2018. Retrieved December 15, 2018, from
4.     Ciocca, S. (2017, October 10). How Does Spotify Know You So Well? – Member Feature Stories. Retrieved December 15, 2018, from
5.     Covington, P., Adams, J., & Sargin, E. (2016, September). Deep neural networks for YouTube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems (pp. 191-198). ACM.
6.     Ever Wonder How Spotify Discover Weekly Works? Data Science. (2016, August 22). Retrieved December 15, 2018, from
7.     Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction (TOCHI), 7(2), 174-196.
8.     How do algorithms run my life? (n.d.). Retrieved December 15, 2018, from
9.     Irvine, M. “Introduction to Affordances and Interfaces.”
10.  Irvine, M. “Introduction to Modularity and Abstraction Layers.”
11.  Irvine, M. “Understanding Sociotechnical Systems with Mediology and Actor Network Theory (with a De-Blackboxing Method).”
12.  Irvine, M. “Introduction to Design Thinking: Systems and Architectures.”
13.  Linden, G.D., Jacobi, J.A. and Benson, E.A., Collaborative Recommendations Using Item-to-Item Similarity Mappings, US Patent 6,266,649, to, Patent and Trademark Office, 2001 (filed 1998).
14.  Martinez, M., Amazon: Everything you wanted to know about its algorithm and innovation. (2017, September 27). Retrieved December 15, 2018, from

15.  Murray, J., Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.

16.  Pasick, A. (n.d.). The magic that makes Spotify’s Discover Weekly playlists so damn good. Retrieved December 15, 2018, from
17.  Perspective | How Silicon Valley is erasing your individuality. (n.d.). Retrieved December 15, 2018, from
18.  Sharma, A. (2018, March 8). Is Youtube’s Recommendation Algorithm Really Working? Retrieved December 15, 2018, from
19.  Smith, B., & Linden, G. (2017). Two decades of recommender systems at Amazon.com. IEEE Internet Computing, 21(3), 12-18.
20.  Swearingen, J. (2018, February 7). YouTube’s Algorithm Wants You to Watch Conspiracy-Mongering Trash. Retrieved December 15, 2018, from
21.  Weston, J., Bengio, S., and Usunier, N. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI, 2011

22.  Zhang, J., & Patel, V. “Distributed Cognition, Representation, and Affordance.” Pragmatics & Cognition 14, no. 2 (July 2006): 333-341.

How Google Docs Works


Assessing the interfaces designed into a web-hosted word-processing platform such as Google Docs includes identifying the products that contribute to the platform’s operations. The first is the URL of the website, which allows users to access Google Docs and create new documents. The website is powered by HTML, which “allows a flexible, unlimited nesting of content and structure layers, embedded media types, interactive functions, and behind the scenes communication with multiple network sources and services for fetching and updating real-time data” (Irvine, Page 3). For Google Docs, this means the HTML is designed so that when users initiate the creation of a new document, they can format text in the file. Users can also include graphics and videos hosted on the server itself or in Google Photos.

Multiple people can view and edit a Google Doc at the same time because, once a document is created, a unique URL is generated along with cookies for each user. These cookies signal to Google Docs when each unique user is accessing or editing the document. Based on White’s explanation of Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), it seems that Google Docs may be designed with UDP. Occasionally, when there are multiple users of the same Google Doc, there is a lag before edits or new text appear. If Google Docs were designed with TCP, then the document would be suspended from edits until one editor was finished. This does not happen, which seems to indicate that Google Docs is designed with UDP; more likely, when there are multiple users of the same Google Doc, the edits simply may not appear until a user refreshes his or her browser.
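The TCP/UDP distinction White draws can be illustrated with Python's standard socket module. This is a generic illustration of datagram sockets, not a claim about what Google Docs actually uses as its transport: UDP sends self-contained datagrams with no connection handshake and no delivery guarantee, whereas TCP requires a connection and guarantees ordered delivery.

```python
# Generic illustration of UDP (SOCK_DGRAM) with Python's standard
# socket module: no handshake, just fire a datagram at an address.
# This says nothing about Google Docs' actual transport.

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # OS picks a free loopback port
receiver.settimeout(2)            # don't hang if the datagram is lost
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"edit: insert 'hello' at offset 12", addr)  # fire and forget

data, _ = receiver.recvfrom(1024)  # on loopback this reliably arrives
print(data.decode())

sender.close()
receiver.close()
```

On a real network the sender would get no acknowledgement that the datagram arrived; a TCP version of the same exchange would first `connect()` and the operating system would handle retransmission and ordering.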

The design of Google Docs heavily resembles traditional word-processing software like Microsoft Word. The designers of Google Docs clearly de-blackboxed such software so that they could embed similar features into the web-hosted word processor. Google Docs was also likely designed to integrate with Google Chrome and other Google products, which would explain why the translate function in Google Docs and Google Translate return similar results for the same input. The Googlebot referenced by White, or some similar software program, is likely integrated into Google Docs as well. Because Googlebot is designed to “crawl” the internet for web pages, a similar crawler may be designed into Google Docs so that users can find the definitions of unknown terms and receive synonym suggestions. Such a component may also be the feature that alerts users to misspellings or grammatical errors in the text of the document.


Martin Irvine, Intro to the Web: Extensible Design Principles and “Appification”

Ron White, “How the World Wide Web Works.” From: How Computers Work. 9th ed. Que Publishing, 2007.

De-productizing the Internet


De-productizing the internet and the websites created for it raises an interesting question of what it means to be on, and to use, the internet. Because the internet is a highly black-boxed system, it is very easy to conflate it with one’s preferred browser, but this conflation is detrimental to one’s understanding of the sociotechnical systems that shape the internet. Acknowledging that the internet is designed with affordances and constraints helps to distance the internet from the totalized identity that society has developed and normalized for it.

The initial recognition that the internet is modular completely deconstructs the notion that it is a singular system. Policy, content, industry, and software are all basic modules integral to the structure and use of the internet. Analyzing these systems further, it is clear that the internet’s structure and appearance exist because of international standards for system architecture, the devices designed by the computer industry, digital media content providers, and many other systems. This knowledge alone can shift a user’s perspective from “the internet did something” to “designed systems have allowed me to access this information on the internet.” Basic identification of these large-scale, storied sociotechnical systems is the start of understanding the complexity and invisibility designed into the internet. Acknowledging that the internet is, as described by Arthur, an example of cumulative, orchestrated combinatoriality helps to remove the supposed agency of the internet. This understanding is contingent upon assigning agency and thought to the designers and principles that have led to the internet’s current appearance and functionality.

De-productizing the internet is another way to remove agency from websites and the internet itself, because it clarifies that each page has been designed. Acknowledging that Google is a company, and that its leaders have their own motivations behind the site, shows that the internet is not a democratized free flow of information. Google has shareholders, governments, and international relations that must be maintained to keep the business profitable; if the executives at Google did not have these external demands and codependencies, the information users find through Google Chrome and Google searches would likely be much different. The internet and its websites were designed intentionally, which is often hard to see because the system’s interface has been made so simple. However, comparing one brand of internet browser to another can help users see differences in function and structure, and may also lead them to recognize that the information they have access to has been curated for them.




Affordances of Stitcher


Alan Kay stated that the computer is no longer considered a single medium, but a medium for other media processed by user-activated software (Irvine 11). I think that this idea can similarly be applied to many popular apps, including Stitcher, a podcasting app that I use daily. Stitcher is essentially a podcast database that enables users to listen to, download and share podcasts. The four representational affordances described by Janet Murray (encyclopedic, spatial, procedural and participatory) are integral to the effective design and usage of apps (Murray 52). Most obviously, the encyclopedic and participatory principles are represented by the Stitcher app because the app is essentially an encyclopedia of podcasts. Users can search for podcasts on particular topics or search for a show by name. The subjects of podcasts hosted on Stitcher vary from business-focused to comedic.

Interactivity in the Stitcher app can clearly be explained by Murray's definition: the structures by which we script computers with behaviors that accommodate and respond to the actions of human beings (Murray 13). When users engage with shows on Stitcher, the software in the app begins to source similar podcasts for the user to listen to. In Stitcher, favoriting a podcast results in newly released episodes of that podcast being downloaded for offline listening. This is clearly representative of both the procedural and participatory affordances of computing. The procedural property is the computer's ability to represent and execute conditional behaviors (Murray 51). By "favoriting" a particular show, the user is initiating the process of adding the show to the "Favorites" list and indicating his or her interest in podcasts similar to the one favorited. Without user engagement from listening to or favoriting podcasts, the app would simply be a podcast encyclopedia. User engagement makes the application more dynamic and flexible because engagement leads to curated content being pushed to each user.
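The conditional, state-dependent behavior described above can be illustrated with a small sketch. This is not Stitcher's actual code; the function names and data structures are hypothetical, invented only to show how a participatory action (favoriting) sets state that a procedural rule (auto-download on release) then conditions on.

```python
# Hypothetical sketch of the procedural/participatory pattern:
# the user's participatory input changes state, and the app's
# procedural rule executes conditionally on that state.

favorites = set()    # shows the user has favorited
downloaded = []      # episodes saved for offline listening

def favorite(show):
    """Participatory affordance: the user marks a show as a favorite."""
    favorites.add(show)

def on_new_episode(show, episode):
    """Procedural affordance: a conditional rule run when an episode is released."""
    if show in favorites:
        downloaded.append((show, episode))  # auto-download for offline listening

favorite("Planet Money")
on_new_episode("Planet Money", "Episode 101")
on_new_episode("Some Other Show", "Episode 1")
# Only the favorited show's new episode ends up in the offline list.
```

The point of the sketch is that without the participatory step, the procedural rule never fires: the app would remain a static encyclopedia.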

Initially, it was difficult for me to identify a representation of the spatial affordance of computing in Stitcher; however, through deeper reading I was able to understand where it lies. The spatial affordance of computing refers to how users feel as if interfaces are "not simply objects and images with which one interacts, but actual spaces and 'sites'" (Murray 71). This can clearly be seen once a user selects the icon of a podcast. Users are taken to a show "site" where they can see what other episodes are available to listen to, when they were released and whether or not the show is already downloaded in the app.


Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles.

Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.


Design Structures and Computers


Bolter and Grusin posit that the newest aspect of digital media is the set of strategies employed to remediate past media (Manovich 2013). This principle can also apply to computer design and the shift from governmental and business applications of computers to the layman's use of computers and tablets. One clear design-thinking step that impacted the shift in computer usage was the recognition that computers are more than mathematics machines; they are symbol-manipulating systems. This view, expressed by the Augmented Intellect Program, expanded the opportunity for computer processing, which led to the production of graphical interfaces that enabled manipulation of sociocultural signs and systems through computational interfaces and semiotics (Irvine). The perspective expressed by Bolter and Grusin enabled the practical and creative development of applications and tools for users to create film, music and other media. However, the change in the purpose of computing does not address the shift in the accessibility of computing from corporatized to personalized. The remediation of interface and application design in the 1960s was one update that has led computing systems to spread across industries and populations, but cognitive-symbolic interfaces with human affordances have a long history. The physical design of modern computing is a revision of past systems like the book wheel, the Memex and the NLS. The progression of information retrieval and processing tools clearly inspired the design of both Xerox and Macintosh devices, which have been remediated into the kind of device that I am using today. The act of sitting at a desk to process information has remained the same; the device in front of the information gatherer has changed. Specifically, modern computational design has adapted two structures from its predecessors: (1) an internal logic for information processing and (2) an interpretable interface for humans (Irvine).
The miniaturization of computing has caused the two essential structures of computing design to seem magical because the human-facing interface has become simplistic and user friendly. Miniaturization has also resulted in increased portability of devices, which in turn has led to more access. When a computer is remediated from a room-sized device to one that fits on a standard-sized desk, and eventually to one that fits into a backpack, more and more people have access to interact with the device. The preservation of computational design, interface development and miniaturization are all design factors that have contributed to the current technological economy. Most obviously, these design factors can be seen in the smartphone, which has benefited from remediations of the telegraph, the phone and the computer. The interface, physical design and even the ways smartphones are used have histories that touch a variety of industries and creative developers.

Breaking the Technological Wall with Python


Starting to learn Python was interesting because it demonstrated how easy it is for users to become a part of technology. It also highlighted how users can easily rely on technology without learning how it works. This revelation goes beyond modern technology and is also applicable in other industries like fashion and farming. Part of this lack of interest in learning computing may be due to the historical definitions and roles of computers, as well as a potential trend in technological development that distances people from the process. Prior to the technological development of computers, humans who performed a variety of mathematical calculations to solve problems were called computers. As the world advanced into World War II and the development of analog computers remained relatively stagnant, the "computing" role of humans became tangential to the final output. Prior to this, humans were relied on almost solely for the output of computing and were expected to complete these processes manually. This distancing of human involvement can also clearly be seen in the Industrial Revolution in both the fashion and farming industries. As technology advances, it seems that most people accept the distance between the input of raw materials and problems, and the output of clothing, food and solutions. History shows that this technological wall is permeable for those who are naturally curious and for those who are forced to break through the wall to improve their livelihoods (Campbell-Kelly 2009).


It is interesting to note that popular culture tends to focus on the process of an industry once problems become widespread and impact many lives. Wing's observation, "Computational thinking will be a reality when it is so integral to human endeavors it disappears as an explicit philosophy," is pertinent because it is a simple truth (Wing 2006). The command "print" is already somewhat universal, proving that the terminology used in computational thinking and language processing is not alien. Even the seven principles of computing, which at face value seem extremely complex, are actually simple to understand once a user becomes an active participant in the technological process. Similarly to Big Farming and Fast Fashion, computational thinking is already integrated into the lives of most Americans and many people around the world. As these industries permeate everyday lives, and issues like ethics in farming, sustainable fashion and privacy in technology become widespread, more and more people want to understand "how the sausage is made". When people begin to dissect an industry's operations, consumers start to take back agency and can begin to facilitate change.
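As a concrete illustration of how approachable this vocabulary is, the first commands a Python lesson teaches read almost like plain English. The example below is a generic beginner exercise, not from any particular Code Academy lesson:

```python
# "print" displays a value; "len" counts characters. The vocabulary of
# computational thinking borrows heavily from everyday language.
greeting = "Hello, world"
print(greeting)        # displays: Hello, world
print(len(greeting))   # displays: 12
```

A reader who has never programmed can still guess what each line does, which is exactly Wing's point about computational thinking disappearing into ordinary practice.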

The phenomenon of the technological wall was really apparent to me as I was progressing through the Code Academy sessions, because they demonstrated that a lot of the perceived complexity of computing is simple. Becoming a participant in technology helps to reveal the smoke and mirrors of language processing and computational thinking; it also starts to turn the "magic" of technology into an accessible tool for change.


Martin Campbell-Kelly, “Origin of Computing.” Scientific American 301, no. 3 (September 2009): 62–69.

Jeannette Wing, “Computational Thinking.” Communications of the ACM 49, no. 3 (March 2006): 33–35.

Peter J. Denning and Craig H. Martell. Great Principles of Computing. Cambridge, MA: The MIT Press, 2015, chapters 4, 5, 6.

Email Delivery and Receipt


The information transmission model as diagrammed by Claude Shannon demonstrates the movement of information between five stakeholders. The first stakeholder is the information source, where the information is created; the second stakeholder is the transmitter, which organizes the information into a format that can be interpreted by the receiver. Between these stages, there is opportunity for a mistake or disruption in the delivery of the information. The initial message can simply be stopped between transmission and receipt due to noise. In my experience, this delay, and sometimes failure, of a message to be sent can be seen when sending an email. Due to a poor internet connection or system issues, emails can get stuck in the "Outbox". In this example, my outbox is the transmitter and the noise that interrupts the delivery of the email is the poor internet connection. However, once the noise issue is resolved, the message can effectively be received and delivered to the recipient (Irvine).
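The source–transmitter–noise–receiver pipeline can be sketched in a few lines of Python. This is a toy model for intuition only, not Shannon's actual mathematics: here "noise" is simulated by randomly replacing transmitted symbols, and the function name and parameters are my own invention.

```python
import random

def transmit(message, noise_level=0.1, seed=None):
    """Toy model of Shannon's channel: each symbol may be corrupted by noise."""
    rng = random.Random(seed)
    received = []
    for symbol in message:
        if rng.random() < noise_level:
            received.append("?")  # noise replaces the transmitted symbol
        else:
            received.append(symbol)
    return "".join(received)

# With no noise the message arrives intact; with maximal noise every
# symbol is replaced and the message is lost entirely.
print(transmit("See you at noon", noise_level=0.0))
print(transmit("See you at noon", noise_level=1.0))
```

The sketch makes visible what the diagram asserts: whether the received string matches the sent one depends entirely on the channel between them.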



Shannon's diagram clearly explains the physical and digital aspects of message creation and delivery, but it does not take into account the social, cultural or societal implications of message delivery. If the diagram were to attempt to explain the meaning of sending a message, there would be many more opportunities for noise and disruption. In my example of sending an email, the first opportunity for disruption would start with the sender. The language the sender uses can be a cause for disruption, because if the sender does not add a subject line to the email, he or she may receive an error message and be unable to send the initial message. The next opportunity for noise is accounted for in Shannon's original diagram, with the email getting stuck in the outbox of the sender's email account. The third opportunity for disruption, if the model accounted for content and meaning, would be in the receiving stage. Due to a typo in the sender's initial message, the message may be addressed to an invalid, undeliverable email address. Another issue that may arise is that the sender's message may be put into the receiver's spam folder rather than the inbox, where the receiver would get a notification of the message. But even if the sender did not have any typos or make any mistakes in composing the message, Shannon's model does not account for the relationship between the sender and the receiver. If the sender and the receiver are angry with each other, the receiver can block all messages from the sender, disrupting the process. Also, if the sender and the receiver speak different languages, the message may arrive, but the receiver may not be able to interpret it. Finally, if the sender uses symbols or words that cannot be transmitted due to noise from the receiver's device or internet carrier, then the message can be misconstrued or misunderstood.
Overall, Shannon’s diagram does not take into account the dynamism of human relationships and human error.



For the effective delivery and interpretation of messages, senders and receivers must have the same sociocultural understanding of the language in the message. Without this understanding, noise can impact the ultimate completion of the information transmission model. As mentioned in my examples of noise in email delivery, if the receiver does not understand the language of the sender's message, despite his or her ability to read it, then the message is not meaningful to the receiver. According to Floridi, "'Meaningful' means that the data must comply with the meanings (semantics) of the chosen system, code, or language in question" (Floridi 2010). The message itself can still be meaningful, but the lack of understanding inhibits its final delivery.


Martin Irvine, Introduction to the Technical Theory of Information

Luciano Floridi, Information: A Very Short Introduction. Oxford, UK: Oxford University Press, 2010.

Week 6


Many affordances that are linked to books are so ingrained in my understanding of books that it is difficult to separate the inferences derived from using books from what books actually are. On a basic level, a book is a physical vessel of information. The affordances of books, and the inferences that can be made from using a book, are much more complex. As written in the Introduction to Affordances, Constraints and Interfaces, "the inferences we make are learned from socialization into what's normative in using all the built 'stuff' in a culture" (Irvine). Books have been integral to my life since the day I was born, so it isn't difficult to see why my inferences about books and readers feel innate. One inference that I almost always make whenever I see a person reading a book is that the reader is curious about the world. Other inferences about books include ideas like: books can be used to teach, learn or entertain; books impact ideologies and world views; books should be read from left to right; and books should have dark font on light paper. These are just a few of the inferences that come to mind when thinking of the representative nature of books and those who use them. In understanding design, it is easy to see how many of these affordances have been transferred to technology and digital media.

On a physical level, opening a laptop is behavior learned from opening a book. Both laptops and books are square folded objects; if a person held a closed laptop by its spine and opened it, the motion would mimic the act of opening a book. This learned behavior is a result of the affordances of books. On a digital level, many operational functions on a laptop are similar to operating a book. One clear example is the ability to switch screens by swiping left or right on the trackpad. This relatively new feature is a replication of turning a page in a book. In both actions, the meaning is implicit: the user wants to view different information. Other affordances from books have been integrated into the interface of web browsers. Janet Murray explained that designers are often engaged in a process of refinement, and the fact that there are four universal functions and "buttons" on the toolbar of a web browser is not only simple but intuitive. Murray also states, "It is an appropriate design strategy to exploit the interactor's unconscious expectations and knowledge to cue their interaction with a new artifact or process, making the experience feel 'intuitive' rather than difficult to understand or hard to learn" (Murray 2012). This explanation helps to clarify the presence of the back and forward arrows on a browser toolbar. A novice computer user can almost immediately identify the purpose of the back button because of his, her or their prior experience with reading books. The arrows on webpages indicate access to either previously visited or newer pages, which is representative of how readers use books.


Martin Irvine, “Introduction to Affordances and Interfaces.”

Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.

Mediology of a Broken Washing Machine


When reading about mediology, the first experience that came to mind was extremely recent: my broken washing machine. Last week, my washing machine began malfunctioning, and almost immediately my roommate and I began anthropomorphizing it, saying things like "the washing machine is making a weird sound". Subsequently, we quickly assigned the blame for the malfunction to our landlords and their poor upkeep of the apartment's electronics (we also struggled with a broken microwave for about a month). This conversation immediately reminded me of Latour's proposition of technical delegates: my roommate and I distributed the agency of the technology malfunction to our landlords. We quickly assigned not only blame but intention to our landlords, even though the problem with the machine was unconnected to their interactions with it. Latour explains why my roommate and I were able to come to this conclusion: "In artifacts and technologies we do not find the efficiency and stubbornness of matter, imprinting chains of cause and effect onto malleable humans" (Irvine). In our view, the fault for the malfunction did not lie within the machine; it was caused by the humans managing it.

Retrospectively, I can understand why my roommate and I assigned intentionality to our landlords to explain the causality of the machine malfunction. This thought process is explained by Rammert: the "concept of agency opens up a wide range of possibilities to identify and to classify kinds and intensities of agency without regards to the substantial character of the unit that is in action" (Rammert 2008). In layman's terms, mediation is the ability to assign agency over the broken machine to the landlords' intentions, or lack thereof, rather than to simply say that the machine's malfunction is singular and unattached to the actions or ideas of our landlords. The malfunction of the machine in this sense is representative of our landlords' failure to conform to the societal norm that, in rental properties, technologies should be operational. In mediology, the machine doesn't break simply because it is an imperfect technology in need of routine maintenance.

Further deconstructing this example, I can also describe the function of the washing machine and its interface through mediation. Most obviously, a washing machine's function is to uphold the sociocultural norm of wearing clean clothes. At its most basic, a washing machine connects its user to an efficient method of cleaning clothing and maintains the user's position in society as a person who values cleanliness. A more complex view is that culturally established levels of dirtiness have shaped the levels of cleanliness that can be achieved through different settings on the washing machine. This value is rooted in many cultures and is not unique to Americans; however, efficiency and the value of time, combined with the value of cleanliness, have contributed to the phenomenon that washing machines are nearly ubiquitous in or near American homes. The desire for and acquisition of washing machines is inseparable from the cultural significance of having clean clothes. In my example, a broken washing machine was not merely a nonfunctional machine; it was a momentary disruption to my upkeep of social norms.


Martin Irvine, “Understanding Sociotechnical Systems with Mediology and Actor Network Theory (with a De-Blackboxing Method)” [Conceptual and theoretical overview.]

Werner Rammert, “Where the Action Is: Distributed Agency Between Humans, Machines, and Programs,” 2008. Social Science Open Access Repository (SSOAR).


Modern and Ancient Cognitive Artifacts


When reading about cognitive artifacts, one tool that is integral to my everyday life quickly came to mind: Google Keep. It is a digital tool where I keep everything from grocery lists to reading lists and notes. Google Keep fits Norman's definition of a cognitive artifact precisely: it is designed to maintain, display and operate upon information (Norman 1991). Though the customizable nature and the data storage aspects of Google Keep are blackboxed to the user, upon reading about collective symbolic cognition and distributed cognition I have begun to understand why tools like Google Keep are attractive to many users, including myself.

Google Keep, at its core, is a storage tool for symbolic cognition. As humans and natural communicators, we are consistently creating and storing messages and symbols. This has been true since the prehistoric era, when early humans drew in caves to document history and tell stories. Google Keep is merely one way to document messages and symbols in order to store information; in essence, it is a digital extension of language and thought (Cole 1996). Though users are able to write notes and lists in the tool, Google Keep can also be used to share information. The ability to share notes and lists on Google Keep is a clear example of distributed cognition at work. The fact that I am able to share notes that can be easily understood depends on collective knowledge of symbols and their meanings. In this sense, the notes and lists that I make are not individual, as the items or books that I list are not unique to me; they can be accessed by the world. Collective symbolic cognition drives the sharing function on many digital tools, including Google Keep.

This view of Google Keep as a cognitive technology makes the tool seem less unique and complex than it actually is. The design of Google Keep is clearly rooted in historical examples of cognitive tools. Google Keep is a little more advanced than cave drawings from 30,000 BC, but at their core these two technologies both allow humans to document thoughts and ideas. Google Keep is much more portable than a cave, and the ideas that exist in it may be more complex, but through an understanding of cognitive technologies, the similarity between ancient documentation methods and modern ones has become clear.

Donald A. Norman, “Cognitive Artifacts.” In Designing Interaction, edited by John M. Carroll, 17-38. New York, NY: Cambridge University Press, 1991.

Michael Cole, On Cognitive Artifacts, From Cultural Psychology: A Once and Future Discipline. Cambridge, MA: Harvard University Press, 1996.