Author Archives: Kathryn Hartzell

SVOD – Combinatorial Design and Market Fragmentation


The media industry has been disrupted by over-the-top (OTT) services delivered through the internet. These streaming-video-on-demand (SVOD) services caught the established telecommunications industry by surprise and quickly transformed the market. Companies like Netflix, however, are firmly situated within a wider sociotechnical architecture. Simplistically attributing the disruption to the internet fails to explain how SVOD is both shaped and limited by the design principles responsible for its existence. Netflix did not “invent” SVOD, but evolved into the company it is today by monitoring changes and developments in technology and modifying its distribution strategy. Brian Arthur describes how technologies “inherit parts from the technologies that preceded them” (Arthur 2009, 19), which can be seen in the modular design of the service. Bruno Latour also argues in “Technology is Society Made Durable” that understanding innovation requires a historical review of technology development as well as a comparison of “the different versions given by successive informants of the ‘same’ syntagm” (Latour 1991, 127). Examining the combinatorial design features present in SVOD platforms, alongside a historical review of their evolution using Netflix as a case study, helps explain what factors can both encourage and challenge industry growth.

Figure 1: Netflix & Co. Surpass DVD & Blu-ray Sales (Richter 2017)

Combinatorial design

In its most basic definition, an SVOD platform is simply television and motion pictures delivered over the internet. A slightly longer and more accurate description might be: preexisting media forms, such as television and movies, designed for viewing on a metamedium device (a computer, smartphone, tablet, or smart TV device) and distributed digitally through the internet. Each of these components has a complex design history and exists within an established set of practices. When Netflix launched its streaming service in 2007, it did not need to invent any of these components, and indeed some of them had not yet been invented (“About Netflix” n.d.). However, in bringing different technologies and industries together, Netflix was also subscribing to their design principles and dependencies.

According to a 2016 survey of customers who pay for digital media, 55% of users prefer Netflix over its OTT competitors (PayPal 2016). The euphemism “Netflix and chill” has entered the zeitgeist, ensuring the service’s cultural cachet (Rickett 2015). However, the market is becoming increasingly fragmented. Between 2014 and 2016, the number of new SVOD platforms launched more than doubled (L.E.K. and SNL Kagan 2017). The rapid rise in competitors is partially explained by the comparative ease with which Netflix’s technologies can be replicated. Netflix’s current business strategy, which is geared toward content development, is an attempt to secure its current market dominance. To that end, Netflix plans to spend up to $8 billion on content in 2018 (Koblin 2017). While Netflix is making some technological investments to ensure it remains the state of the art in streaming, such as short-term downloads for offline viewing and server technologies that help ISPs manage Netflix traffic, these investments may not be enough to differentiate the service from competitors in the future (Casella et al. 2017). Having the highest-quality technology has never been a guarantee of success in the media industry. Quality, as Jonathan Sterne points out in his book MP3: The Meaning of a Format, does not necessarily mean that an audio or visual file approaches verisimilitude: “Aesthetic pleasure, attention, contemplation, immersion, and high definition—these terms have no necessary relationship to one another” (Sterne 2012, 5). Moreover, less reliable services that offer greater efficiencies and affordances at a lower cost have consistently reshaped the media industry. Netflix is particularly vulnerable since so many of the core components of its service can be approximated or found elsewhere. To fully understand the current mediascape, it is useful to look at the early history of Netflix and how it combined existing properties with the technology of the late 1990s.

The original Netflix business model

In 2002, when Netflix filed its initial public offering (IPO), its business model was more in line with a mail-order movie rental business than with the media company it is today.

We are the largest online entertainment subscription service in the United States providing more than 600,000 subscribers access to a comprehensive library of more than 11,500 movie, television and other filmed entertainment titles. Our standard subscription plan allows subscribers to have three titles out at the same time with no due dates, late fees or shipping charges for $19.95 per month. Subscribers can view as many titles as they want in a month. Subscribers select titles at our Web site, aided by our proprietary CineMatch technology, receive them on DVD by first-class mail and return them to us at their convenience using our prepaid mailers. Once a title has been returned, we mail the next available title in a subscriber’s queue. (Netflix, Inc. Prospectus, May 22, 2002, 41)

The only piece of proprietary technology cited in 2002 was CineMatch; everything else pre-existed Netflix. A full history of each of the technologies and industries Netflix combined would be impossible within the scope of this paper, but below is a brief review of the major components of early Netflix and the conventions they brought with them.

Motion Pictures

The motion picture rose to prominence as a cultural art form in the early 20th century. Originally produced on film and projected onto screens, motion pictures evolved to incorporate color and sound. Conventions such as length, style, and production models also developed to form an industry and a community of practice. The standardization of technical aspects such as frame size, frame rate, and sound levels allowed for an efficient distribution model. Further, a customer base formed of people who enjoyed narrative forms produced in this style (Library of Congress, Motion Picture, Broadcasting, and Recorded Sound Division n.d.). Movie lovers would soon have new ways of watching filmed narrative with the invention of different video, televised, and digital formats.

DVDs and the Home video market

In 2002, Netflix described the home video market as including “home video rental and retail outlets, cable and satellite television, pay-per-view, video-on-demand, or VOD, and broadcast television” (Netflix, Inc. Prospectus, May 22, 2002, 42). Film had been reformatted multiple times since its original celluloid incarnation. Broadcast television was designed to transmit image and sound through radio waves; transmission could also happen over cable wires operated by cable television service providers. Film had also been formatted onto VHS tape, which led to the first boom in video rental businesses. By 1998, digitization had arrived for movies, and information was encoded onto DVDs. DVDs were more cost efficient and had a number of affordances unavailable in the VHS format: lighter weight, smaller size, higher image quality, and the elimination of rewinding. The small size was especially important to Netflix, whose initial business model relied on the US Postal Service for delivery (Netflix, Inc. Prospectus, May 22, 2002, 44).

World Wide Web

The Netflix business model was also dependent on having a large, indexed website where it could keep a constantly updated list of movies available for rental. The Web, “A loose confederation of Internet servers that support documents formatted in a language called HTML (Hypertext Markup Language) that can include links to other servers, documents, graphics, audio, and video,” was developed by Tim Berners-Lee while he was working at CERN (White 2007, 313). The decision by Berners-Lee and CERN to put this technology into the public domain made it possible for anyone with a personal computer and an internet connection to peruse Netflix’s catalog of content.

Online payment processing

As film was being digitized, business was moving onto the internet, creating a market for point-of-sale (PoS) software. PoS technology was introduced by VeriFone in the 1980s, and the reduction in the time and cost of payment acceptance led to a wide rollout of PoS terminals and adoption by the major banks (Byrne and Hanson 2014, 38). With the advent of the internet, third-party processors designed, purchased, and incorporated encryption technology to ensure the security of online transactions.


E-commerce, “Commercial activity conducted via electronic media, esp. on the Internet; the sector of the economy engaged in such activity,” was a term first coined by the San Jose Mercury News in 1993 (“E-Commerce, N.” 1993). The internet allowed business to be conducted without traditional retail offices, in much the same way the Sears catalog had allowed people to order from its extensive inventory without visiting a brick-and-mortar location. Many companies took advantage of this premise in the 1990s, and most were unsuccessful.


The CineMatch technology was the only proprietary technology owned by Netflix at the time of its IPO. Netflix described CineMatch as enabling it “to create a customized store for each subscriber and to generate personalized recommendations which effectively merchandize our comprehensive library of titles” (Netflix, Inc. Prospectus, May 22, 2002, 1). Beyond the marketing language, this technology was software that allowed users to create an account, which stored their personal information, and an algorithm that statistically determined which titles a customer was most likely to be interested in, based on the preference information customers voluntarily supplied to Netflix as well as their rental history. This information was pattern-matched against an internal taxonomy of content titles. Algorithms to monitor user preferences were not unique to Netflix; they were widely used at the time by companies such as Amazon.
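Netflix never published how CineMatch actually worked, but the pattern matching described above can be illustrated with a minimal sketch: learn the user’s average rating per genre from their history, then score unseen titles against an internal genre taxonomy. The catalog, ratings, and scoring rule here are illustrative assumptions only, not the real algorithm.

```python
# A minimal sketch of preference-based recommendation in the spirit of
# CineMatch. The genre taxonomy, ratings, and scoring are hypothetical.

# Internal taxonomy: each title is tagged with genres.
CATALOG = {
    "Title A": {"drama", "crime"},
    "Title B": {"comedy", "romance"},
    "Title C": {"drama", "romance"},
    "Title D": {"documentary"},
}

def recommend(ratings, catalog, top_n=2):
    """Score unseen titles by summing the user's average star rating
    for each genre the title carries; return the best matches first."""
    # Build the user's average rating per genre from their rated titles.
    totals, counts = {}, {}
    for title, stars in ratings.items():
        for genre in catalog[title]:
            totals[genre] = totals.get(genre, 0) + stars
            counts[genre] = counts.get(genre, 0) + 1
    genre_pref = {g: totals[g] / counts[g] for g in totals}

    # Score every title the user has not seen yet.
    scores = {
        title: sum(genre_pref.get(g, 0) for g in genres)
        for title, genres in catalog.items()
        if title not in ratings
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A viewer who loved a crime drama and disliked a romantic comedy is
# steered toward the unseen drama first.
print(recommend({"Title A": 5, "Title B": 2}, CATALOG))
# → ['Title C', 'Title D']
```

Real systems of the era combined this kind of content taxonomy with collaborative filtering across millions of users, but the core idea is the same: stored preferences plus a title taxonomy yield a ranked, personalized storefront.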

The Rise of Video Streaming

The early success of Netflix’s subscription DVD rental business was proof of concept that the internet had a place in the home video market. Other companies, such as Blockbuster, belatedly tried to follow Netflix’s lead, but by then the model was already becoming outdated, much as the DVD had only just replaced the VHS.


The transition to streaming was far from inevitable. Digitization was pursuing various formats in the early twenty-first century, including the more traditional object-based formats of HD DVD and Blu-ray as well as digital downloads. These formats afforded higher-quality picture and uninterrupted play. However, for a low monthly rate, streaming offered customers access to an extensive (though still limited) catalog of movies and TV shows for much less than the cost of renting or purchasing the same amount of content. Digital distribution also afforded instant access, without having to travel to a brick-and-mortar location or wait a few business days for a DVD to arrive by post. Customers were willing to accept trade-offs in quality for these affordances. Best of all for Netflix, the infrastructure for this new distribution platform was already accessible in most homes. Netflix users already had personal computers as well as accounts, so it was simple to convert existing customers while also lowering the barriers to entry for new subscribers.

Content Delivery Networks

The transition to viewing content on pixelated computer screens in addition to television sets had been accomplished before Netflix released its streaming service in 2007. Companies such as RealNetworks, Microsoft, and Adobe had all rolled out different ways to stream media over the internet. However, early streaming protocols had to contend with constraints such as “bandwidth, scalability and reach” (Zambelli 2013). Streaming video was assisted by the widespread adoption of HTTP as well as the use of Content Delivery Networks (CDNs): “professionally managed and geographically distributed” servers that increased reliability. CDNs were “engineered to provide a high quality of service, often governed by service-level agreements (SLAs) between the CDN provider and the content owners whose content is being distributed by the CDN” (Anjum et al. 2017).
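The reliability gain from geographic distribution can be sketched in a few lines: a client is steered to the nearest healthy replica of the content, and slower replicas serve as fallbacks when the nearest one is down. The server names and latencies below are hypothetical, and real CDNs use DNS steering and SLAs rather than this toy measurement.

```python
# A toy illustration of CDN server selection: prefer the lowest-latency
# edge server that is currently healthy. All names/numbers are made up.

def pick_edge_server(latencies_ms, healthy):
    """Return the lowest-latency server that is currently healthy,
    falling back to slower replicas if the nearest one is down."""
    candidates = {s: ms for s, ms in latencies_ms.items() if s in healthy}
    if not candidates:
        raise RuntimeError("no healthy edge servers")
    return min(candidates, key=candidates.get)

measured = {"edge-us-east": 18, "edge-us-west": 74, "edge-eu": 121}
print(pick_edge_server(measured, healthy={"edge-us-west", "edge-eu"}))
# → edge-us-west (nearest replica edge-us-east is unavailable)
```

The redundancy is the point: with many replicas, the failure of any single server degrades latency rather than availability.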

HTTP-based Adaptive Streaming

HTTP and CDNs helped combat scalability, and in 2007 a company called Move Networks “used the dominant HTTP protocol to deliver media in small file chunks while utilising the player application to monitor download speeds and request chunks of varying quality,” which helped solve the bandwidth problem (Zambelli 2013). A version of this technology, called “adaptive streaming,” was what Netflix rolled out in its initial subscription streaming service. Adaptive streaming was further refined through standardization procedures, which led to the rollout of Dynamic Adaptive Streaming over HTTP, also known as MPEG-DASH, in 2012 (ibid.). Netflix was a champion of digital format standardization because it made it much simpler to lease and distribute new content (McEntee 2012).
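The player-side logic Zambelli describes, monitoring download speed and requesting chunks of varying quality, reduces to a simple decision rule. The bitrate ladder and safety margin below are illustrative assumptions, not Netflix’s actual encoding values.

```python
# A minimal sketch of adaptive streaming: pick the highest-bitrate chunk
# the measured throughput can sustain. Renditions and margin are
# hypothetical, not a real encoding ladder.

RENDITIONS_KBPS = [235, 560, 1050, 3000, 5800]  # hypothetical bitrate ladder

def next_chunk_bitrate(measured_kbps, margin=0.8):
    """Pick the highest rendition that fits within a safety margin of the
    measured throughput; fall back to the lowest rendition if none fit."""
    budget = measured_kbps * margin
    viable = [r for r in RENDITIONS_KBPS if r <= budget]
    return max(viable) if viable else min(RENDITIONS_KBPS)

print(next_chunk_bitrate(4000))  # healthy link → 3000 kbps chunk
print(next_chunk_bitrate(300))   # congested link → fall back to 235 kbps
```

Because each chunk is an ordinary HTTP file request, this scheme works through existing web servers, caches, and CDNs with no special streaming infrastructure, which is what made it scale.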

Hardware Devices

Rapid adoption of a variety of personal computing devices in the consumer electronics space was also foundational to the success of the Netflix model. Screen technology continued to improve at a rapid pace, and by 2009 devices that could support streaming included:

“PCs, Macs, Internet connected Blu-ray players, such as those manufactured by LG Electronics and Samsung, set-top boxes, such as TiVo and the Netflix Player by Roku, game consoles, such as Microsoft’s Xbox 360, and planned for later this year, TVs from Vizio and LG Electronics” (Netflix, Inc. Annual Report, February 25, 2009, 1).

Smartphones would soon be added to this list. The scalability of the Netflix platform to multiple devices was another affordance: home video was no longer tied to the television set but could be taken on the move.


Figure 2: “Number of mobile phone video viewers in the United States from 2014 to 2020” (“Mobile Video in the United States” 2017)

How Netflix Works Today

(Video 1: Bisla 2012)

Scalable and Extensible

Today, Netflix has its own proprietary CDN, called Open Connect, and is capable of handling a variety of source formats, such as the Interoperable Master Format (IMF), which relies on “an emerging [Society of Motion Picture and Television Engineers] (SMPTE) specification governing file formats and metadata for digital media archiving and B2B exchange,” as well as ProRes, DPX, and MPEG (McEntee 2014). Netflix can also support “a number of codec profiles: VC1, H.264/AVC Baseline, H.264/AVC Main and HEVC” (Aaron and Ronca 2015).

Digital Supply Chain

Netflix operates within a wider media environment. As a distributor, Netflix receives a variety of videos, in a variety of versions, and has to use the agreed-upon digital rights management (DRM) software so that the content plays on its customers’ devices. The supply chain was designed to be scalable and to accommodate content providers. However, Netflix has continued to promote standardized formats to cut down on “versionitis” (McEntee 2012).

Figure 3: “How Streaming Playback Works” (Casella et. al. 2017)

Cloud Computing

In addition, Amazon Web Services provides the cloud infrastructure that has allowed Netflix to rapidly expand its footprint globally.

“Amazon Web Services (“AWS”) provides a distributed computing infrastructure platform for business operations, or what is commonly referred to as a “cloud” computing service. We have architected our software and computer systems so as to utilize data processing, storage capabilities and other services provided by AWS. Currently, we run the vast majority of our computing on AWS. Given this, along with the fact that we cannot easily switch our AWS operations to another cloud provider, any disruption of or interference with our use of AWS would impact our operations and our business would be adversely impacted. While the retail side of Amazon competes with us, we do not believe that Amazon will use the AWS operation in such a manner as to gain competitive advantage against our service” (Netflix, Inc. Annual Report, February 27, 2017, 7).

All this is to say that Netflix is firmly embedded within a complex sociotechnical system of evolving standards, infrastructure, and devices, as well as content producers. The system is so interlinked that even though Amazon Video is a direct competitor of Netflix, Netflix has committed to a business relationship with Amazon’s cloud infrastructure division that would be very difficult to disentangle.


The evolution of Netflix was far from predetermined. Commitment to the DVD or Blu-ray formats would have steered the company in a very different direction. The alignment of streaming with a SaaS business model was also not predetermined. The second SVOD platform on the market was Hulu, whose roots were more firmly entrenched in television than in home video entertainment and which began with an advertisement-supported model. Rapid changes in the device market also sped adoption. Latour, describing how Kodak came to invent a new product for a new market, writes that we are “never faced with two repertoires – infrastructure and superstructure, techniques and economics, function and style – but with shifting assemblies of associations and substitutions” (Latour 1991, 113). Netflix did much the same in creating the SVOD product and market.

In paving a new path, many aspects of the sociotechnical environment within which Netflix operates became more established. John Law describes material semiotics as an approach in which “realities are counterposed, and those realities are heterogeneous, combining and enacting the natural, the social, and the political” (Law 2009, 154). Netflix was an outcome of the juxtaposition of new technologies and old media practices, creating something beholden both to the rules of the internet, such as HTTP, IP, bandwidth, and legal concerns around internet service regulation (net neutrality), and to old media practices, such as licensing agreements, the formats of videos for distribution, and the length, style, and other conventions of narrative video content. In interacting with each other, a new way of watching video was created, which affected both technology and media.

Today, standards in video formats are more widely adopted, large-scale cloud computing and CDNs make it easier to reliably send videos, and smartphone use has only increased. All of these changes have lowered the barriers to entry for Netflix’s competitors. Competition has resulted in increases in licensing fees for existing films and television. To combat this, Netflix and other platforms have begun investing in production in order to own content outright and avoid recurring licensing fees. This has led to increased competition on the production side, which has encouraged larger budgets to entice brand-name directors and stars. Consumers have been inundated with new shows, a situation often referred to as “Peak TV,” the idea that there are too many new shows (McHenry 2016). However, even deep pockets may not be able to keep titles on platforms. Disney is expected to launch an SVOD platform, and HBO, CBS, and FX have already created platforms for their content (Koblin 2017).


The current state of the market is extremely fractured and becoming more so each year. SVOD’s sociotechnical dependencies incentivize a competitive media landscape, leading SVOD companies to invest more money each year in content production in order to maintain a competitive advantage. However, the market for scripted media is quickly becoming oversaturated, and maintaining current levels of growth organically will be impossible. The media industry is set for more changes before things stabilize. Possible future solutions include:

  1. SVOD consolidation through mergers and acquisitions
  2. A return to the monopolistic environment of the old studio system where content production and distribution were owned by the same large companies
  3. Telecommunication companies leveraging their infrastructure to negotiate with the distribution channels operating over their networks. Negotiations with these SVOD platforms could result in the bundling of access to multiple SVOD platforms, reproducing cable packages.

There is nothing predetermined about the current way we are able to access content. Changes in legislation, content, technology and the culture of media consumption have the possibility to dramatically shift the media industry again.


Works Cited

Aaron, Anne, and David Ronca. 2015. “High Quality Video Encoding at Scale.” Netflix TechBlog (blog). December 9, 2015.

Anjum, Nasreen, Dmytro Karamshuk, Mohammad Shikh-Bahaei, and Nishanth Sastry. 2017. “Survey on Peer-Assisted Content Delivery Networks.” Computer Networks 116 (Supplement C): 79–95.

Arthur, W. Brian. 2009. The Nature of Technology: What It Is and How It Evolves. New York, NY: Free Press.

Bisla, Kunal. 2012. Netflix. Video.

Byrne, Robert, and Jason Hanson. 2014. “Innovation and Disruption in U.S. Merchant Payments.” In Global Payments 2014: A Return to Sustainable Growth Brings New Challenges, 33–41. McKinsey.

Casella, Karen, Phillipa Avery, Robert Reta, and Joseph Breuer. 2017. “Scaling Event Sourcing for Netflix Downloads, Episode 1.” Medium (blog). September 11, 2017.

“E-Commerce, N.” 1993. OED Online. Oxford University Press. Accessed December 6, 2017.

Koblin, John. 2017. “Netflix Says It Will Spend Up to $8 Billion on Content Next Year.” The New York Times, October 16, 2017, sec. Media.

Latour, Bruno. 1991. “Technology Is Society Made Durable.” In A Sociology of Monsters: Essays on Power, Technology and Domination, edited by John Law, 103–31. London, UK; New York, NY: Routledge.

Law, John. 2009. “Actor Network Theory and Material Semiotics.” In The New Blackwell Companion to Social Theory, 141–58. Malden, MA; Oxford, UK: Wiley-Blackwell.

L.E.K. and SNL Kagan. 2017. “Number of Over-the-Top (OTT) TV Services Launched in the United States from 2008 to 2015.” Statista. January 2017.

Library of Congress, Motion Picture, Broadcasting, and Recorded Sound Division. n.d. “Fictional Films Dominate – Inventing Entertainment: The Early Motion Pictures and Sound Recordings of the Edison Companies.” Digital Collections, Library of Congress. Accessed December 11, 2017.

McEntee, Kevin. 2012. “Complexity In The Digital Supply Chain.” Netflix TechBlog (blog). December 17, 2012.

McEntee, Kevin. 2014. “Delivering Breaking Bad on Netflix in Ultra HD 4K.” Netflix TechBlog (blog). June 16, 2014.

McHenry, Jackson. 2016. “A Record High of 455 (a.k.a. Way Too Many) Scripted TV Shows Aired in 2016.” Vulture, December 21, 2016.

“Mobile Video in the United States.” 2017. Statista. August 2017.

Netflix, Inc. Prospectus Form 424B4 Initial Public Filing. Filed May 22, 2002. SEC EDGAR. Accessed December 5, 2017.

Netflix, Inc. Annual Report Form 10-K. Filed February 25, 2009. SEC EDGAR. Accessed December 5, 2017.

Netflix, Inc. Annual Report Form 10-K. Filed February 27, 2017. SEC EDGAR.

PayPal. 2016. “Preferred Over-the-Top (OTT) Services Among Consumers in the United States as of September 2016.” Statista. September 2016.

Rickett, Oscar. 2015. “How ‘Netflix and Chill’ Became Code for Casual Sex.” The Guardian, September 29, 2015.

Richter, Felix. 2017. “Netflix & Co. Surpass DVD & Blu-Ray Sales.” Chart. Statista (blog). January 18, 2017.

Sterne, Jonathan. 2012. MP3: The Meaning of a Format. Durham, NC: Duke University Press.

White, Ron. 2007. How Computers Work. 9th ed. Excerpts from chapter “How the Internet Works.” Que Publishing.

Zambelli, Alex. 2013. “A History of Media Streaming and the Future of Connected TV.” The Guardian, March 1, 2013, sec. Media Network.

FaceTime – Apple gets away with some really dumb names

First, I’d like to remind everyone that Apple rebranded the word “pad,” which, for about 50% of the US population, meant something very, very different. While nowhere near as egregious, FaceTime is also an extremely silly name. It evokes the kind of boys’-club behavior in which someone might say, “I need some facetime with the boss!” while heading out to the golf course.

Apple rolled out FaceTime in June 2010. It was marketed as making “the dream of video calling a reality” (“Apple Presents iPhone 4” 2010). Never mind that Skype had already been in wide circulation since 2003 (“About Skype” n.d.).

Figure 1 “Use FaceTime with Your IPhone, IPad, or IPod Touch”


Figure 2: New Skype

FaceTime, like Skype before it, was a combinatorial design. The phone’s hardware (an audio codec, a microphone, and speakers) was synchronized with a video codec, the digital camera, and the pixelated display screen. This multimedia data could then be sent through the internet using the nested protocols that comprise Voice-over-IP (SIP, SDP, RTP, and more) and decoded on the iPhone of another user. Neither Skype nor Apple invented any of these features, including the nested protocols; SIP was designed in 1996 and standardized in 1999 (Handley et al. 1999). However, encryption was included to ensure that the signal could only be viewed on another iPhone. Both FaceTime and Skype were software programs designed to utilize existing features.
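The packetization step in that pipeline can be sketched concretely: each encoded audio or video frame is stamped with a sequence number and timestamp so the receiver can reorder packets and synchronize the streams, which is essentially what RTP does. The 6-byte header below is a reduced illustration, not the full RTP format, and the frame bytes are hypothetical.

```python
# A simplified sketch of RTP-style packetization in a video call: each
# encoded media frame gets a sequence number and timestamp so the
# receiver can reorder and synchronize. Header layout is a reduced
# illustration, not real RTP.
import struct

def packetize(seq, timestamp_ms, payload):
    """Prefix an encoded frame with a 6-byte header: a 16-bit sequence
    number and a 32-bit timestamp, both big-endian (network order)."""
    return struct.pack("!HI", seq, timestamp_ms) + payload

def depacketize(packet):
    """Split a packet back into (sequence, timestamp, frame bytes)."""
    seq, timestamp_ms = struct.unpack("!HI", packet[:6])
    return seq, timestamp_ms, packet[6:]

frame = b"\x00\x01encoded-frame"   # stand-in for codec output
pkt = packetize(42, 1700, frame)
print(depacketize(pkt))  # → (42, 1700, b'\x00\x01encoded-frame')
```

SIP negotiates the call and SDP describes the media formats; packets like these, carried over UDP, are what actually move the voice and video.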

Video call applications also took advantage of the internet’s distributed infrastructure. Traditional communications companies had invested massive amounts of money in building telephone and cable lines. Digitizing voice and video allowed software makers to take advantage of these preexisting systems (at no cost to them) and provide free calling over the same phone lines whose owners could still charge customers by the minute. All a person with an iPhone or other video call software needed to do was connect to one of the internet’s nodes through a wifi signal in order to talk to someone halfway across the globe.

Figure 3: Verizon: Successful Trial of 300G Fiber Optics

For Apple, FaceTime’s combinatorial design also had the affordance of modularity. Cameras and audio components could be switched out in later versions of the iPhone. If the wifi connection was not strong enough for high-quality video, or any video at all, audio could still be transmitted. Users had the option to select audio only if they wanted the affordances of a telephone call (it is really tiring to hold a phone in front of your face for long periods of time).

The system was also scalable and extensible, so that any new person who purchased an iPhone could take advantage of this system and join the network, and the feature could be rolled out onto subsequent products, such as the MacBook and the aforementioned iPad.

The result wasn’t new or revolutionary, but it had the advantage of being integrated with a familiar object, so its convenience was prized over other applications that had to be downloaded and maintained separately. Competitors like Skype required that both users be subscribed to the same service. FaceTime shares this limitation, but by rolling it out on a device with a high market share, Apple added new users automatically; it didn’t have to get anyone to buy into the concept. Compare this limitation to the traditional telephone, where a call can be placed on one service provider and answered on another, with the fee for the call shared between the network owners. FaceTime is “free” so long as you and everyone you know buys into the Apple system.

FaceTime represents what Jonathan Zittrain refers to as “an appliancized network that incorporates some of the most powerful features of today’s Internet while greatly limiting its innovative capacity—and, for better or worse, heightening its regulability” (Zittrain 2009, 9). FaceTime works and is convenient, but it is limiting. It relies upon standards formed in 1999, over ten years before its release, yet hides that technology from sight. Users do not have the convenience of calling people who use different mobile phone devices or, in the case of other video calling programs, different software. It’s also easy to see how companies like Verizon might find it infuriating that their infrastructure is being used to cut into their own business. Not that I would ever argue against net neutrality, for a great many important reasons, but things like this certainly fan the flames.

Works Cited
“About Skype – What Is Skype.” n.d. Accessed November 29, 2017.
“Apple Presents IPhone 4.” Apple Newsroom. June 7, 2010.
Handley, M., H. Schulzrinne, E. Schooler, and J. Rosenberg. SIP: Session Initiation Protocol, The Internet Society (1999).
“New Skype | Enhanced Features for Free Calls and Chat.” Accessed November 29, 2017.
“Use FaceTime with Your IPhone, IPad, or IPod Touch.” Apple Support. Accessed November 29, 2017.
Verizon “History and Timeline,” August 18, 2016.
Zittrain, Jonathan, The Future of the Internet–And How to Stop It. New Haven, CT: Yale University Press, 2009.


A Series of Tubes

Figure 1: The It Crowd

The clip above is from The IT Crowd, a British comedy program that ran four seasons and a closing special between 2006 and 2013. The poor confused woman in front of you is Jen Barber, the technologically illiterate woman tasked with running the IT department of a large multinational company. Readers, I have rarely identified with someone more. Though I do not believe that I could be confused into thinking that a black box with a blinking light was THE INTERNET, I have at times wished that it could be.

Senator Ted Stevens famously said, “The internet is not a big truck, it is a series of tubes.”

Figure 2: Senator Stevens, speaking on Net Neutrality in 2006

“Yes, hilarious,” I said. “Of course it’s not a series of tubes!” All the while I thought, “Please do not ask me what it actually is.”

So, in defense of poor confused Senator Stevens, the internet is hard to describe. To get anywhere, you first need to understand digitization. I know that was last week’s topic, but I can personally attest that spending a lot of time on how our symbolic processes are transformed from analog to digital and back to analog again made all the difference. Without understanding how information becomes data, the “sending of packets” sounds like complete nonsense, never mind TCP/IP.

So the internet isn’t a series of tubes; it’s a vast international network that relies on a number of interrelated and dependent parts.

  • Hardware: Computers (clients) and servers organized in a distributed network
  • Software: TCP/IP protocols that break up digitized data, address it, send it, and put it back together on the other end. (Beam me up Scotty)
  • Infrastructure: Telephone and data cables that carry the data signals
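The software bullet above, breaking data up, sending it, and putting it back together, can be shown as a toy sketch. Real TCP adds checksums, acknowledgements, and retransmission; this is only the core concept, with a made-up message.

```python
# A toy illustration of the TCP/IP idea: data is broken into numbered
# packets that may arrive out of order, then reassembled by sequence
# number on the other end. Real TCP does far more (checksums, ACKs,
# retransmission); this is just the core concept.

def split_into_packets(data, size=4):
    """Break a message into (sequence_number, chunk) packets."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def reassemble(packets):
    """Sort packets by sequence number and rejoin the chunks."""
    return b"".join(chunk for _, chunk in sorted(packets))

packets = split_into_packets(b"a series of tubes")
packets.reverse()  # simulate out-of-order arrival
print(reassemble(packets))  # → b'a series of tubes'
```

The sequence numbers are what let a distributed network route each packet independently, down whichever “tube” is available, and still deliver an intact message.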

And surrounding that is a system that has benefited from many levels of standardization: the adoption of IP, so that computers running incompatible operating systems can still communicate; web browsers and HTML; the World Wide Web; and the domain name registration system.

Each of these pieces of the internet lines up with different design principles. It’s modular, it’s combinatorial, and it’s distributed. None of it works without Claude Shannon’s information theory. None of it works without standardization. None of it works without electricity. It’s an amazing human achievement, but unlike the Pyramids or the Great Wall of China, it’s really difficult to sit back and admire it because it simply can’t be viewed in its totality.

Beyond all the above, I think we should also consider it a gift. So much of what was developed for this system was done by students and volunteers. And some of its pioneers had the foresight to create something expandable: basic building blocks that could be added to without breaking the system.

The decision to gift the World Wide Web to the public domain might be one of the greatest things ever given to humanity. The Web, “A loose confederation of Internet servers that support documents formatted in a language called HTML (Hypertext Markup Language) that can include links to other servers, documents, graphics, audio, and video,” was developed by Tim Berners-Lee (White 313). It’s difficult to imagine a commercialized version of this network, and yet that was very close to being the reality (Campbell-Kelly and Aspray).

We have a habit of idolizing financial successes. Steve Jobs and Bill Gates carry a great deal of cultural cachet, but as we again consider large issues like net neutrality, the Berners-Lees deserve their recognition. It’s important that we don’t sound like Senator Stevens, so lost in metaphor that we can’t see how complex our system really is. We need to pay attention to the organizations, such as the World Wide Web Consortium and the Internet Engineering Task Force, that continue to push through standards and keep the internet a place for collaboration, experimentation, and growth.

“All the players know they have more to gain by accepting the standard and engineering their products and services to meet it than by trying to act alone” (Abelson et al.).

Works Cited

Abelson, Hal, Ken Ledeen, and Harry Lewis. Blown to Bits: Your Life, Liberty, and Happiness After the Digital Explosion. Upper Saddle River, NJ: Addison-Wesley, 2008.

bluefalcon561. Series of Tubes. Accessed November 14, 2017. As taken from Alex Curtis. “Senator Stevens Speaks on Net Neutrality.” Public Knowledge. June 28, 2006.

Campbell-Kelly, Martin and William Aspray. Computer: A History Of The Information Machine. 3rd ed. Boulder, CO: Westview Press, 2014.

Linehan, Graham. “The Speech.” The IT Crowd, Series 3, Episode 4: “The Internet.” December 12, 2008. Accessed November 14, 2017.

White, Ron. How Computers Work. 9th ed. Excerpts from the chapter “How the Internet Works.” Que Publishing, 2007.


eBooks – remediation or a new frontier?

A few years back I interviewed at Penguin for a position in their very small eBook division. They were looking into designing a deluxe eBook that could be rolled out like a high-end edition. At that point, eBooks were in common usage, with multiple devices designed specifically for eBooks as well as software applications for laptops and tablets. The design question wasn’t, “what should an eBook look like?” but, “how do we expand the book?” Though I didn’t get the job, it’s still something I think about as the years go by and the deluxe eBook fails to materialize outside of a handful of Amazon search results. What are the challenges, and what ways of thinking about eBooks might be worth exploring?

Andrew Piper argues in Book Was There: Reading in Electronic Times that there is a different embodied experience to reading online, or even on an eReader, when compared to the codex book. There is a weight, heft, smell, and texture to physical books, as well as the slow repetition that comes with turning pages. Piper links the pleasures of reading to the feeling of holding, hypothesizing that holding a book gives readers the sense that all human knowledge is in their literal grasp. He further extrapolates that into humanity’s love of miniaturization: the process of making something huge approachable. Somehow, in all this talk about miniaturization, Piper misses the point that this is also happening with digital computational devices.

Piper’s thesis is that the pleasures of books cannot be found in their digital representation, a bias he arrives at despite outlining affordances that, for the most part, are not confined to the codex book. In creating eBooks, designers paid close attention to simulating and expanding the affordances of books. Take, for example, Piper’s beloved page. In an eBook, text is presented only to the extent that it fits on the screen without scrolling. This presentation subdivides the text into approachable and discrete units to replicate the feeling of pages. If I am reading a book on an eReader, which has a mid-sized screen, the text fills the digital display and provides margins. When I read the same book on my phone, the text still only fills the screen; I simply see fewer words and have more “pages” to move through via a simulated “flip” that comes from touching the edge of the screen. Additionally, the text is shown on a simple, uncluttered display so as not to distract from the process of reading. One affordance of books, the ability to glance quickly and see how much content remains, is not possible on the flat screen, which can only display part of the text at any time (in a readable format). As quantified data, the text of the eBook only displays the amount of text that fulfills human design principles (like scale). This contrasts with the codex book’s continuous form. To make up for this lack, a new affordance in the form of a sliding gauge, not unlike what appears when playing a video file or listening to an audio file, can be called up to visually represent the reader’s progress. (Interestingly, in older formats of sound and video there was no easy way to check your progress, but this is an expected affordance, and therefore constraint, of the digital format.)

Figure 1: Screen Shot of an eBook displayed on an iPhone using the Nook application
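The reflow behavior described above can be modeled in a short sketch. The screen sizes below are hypothetical, measured in characters rather than pixels, but the principle holds: the same text becomes more “pages” on a smaller display:

```python
import textwrap

def paginate(text: str, chars_per_line: int, lines_per_page: int):
    """Split text into 'pages': only as much as fits the screen is shown."""
    lines = textwrap.wrap(text, width=chars_per_line)
    return [lines[i:i + lines_per_page]
            for i in range(0, len(lines), lines_per_page)]

story = "word " * 400  # a stand-in for the text of a book

ereader_pages = paginate(story, chars_per_line=60, lines_per_page=30)
phone_pages = paginate(story, chars_per_line=30, lines_per_page=20)

# The phone's smaller display yields fewer words per page, hence more pages.
print(len(ereader_pages), len(phone_pages))

# The sliding progress gauge is just the current page over the total.
progress = 1 / len(phone_pages)  # after reading the first phone "page"
```

The same underlying text produces a different page count on every device, which is exactly why the progress gauge has to be computed rather than felt in the hand.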

The digital format also provides brand new affordances. The codex book had discrete units in the form of pages, words, and chapters, as well as a table of contents and potentially an index to assist in navigation. However, strings of words were not searchable as they are in the digital format. Additionally, notes can be added or interesting passages marked using a layering technique, so that the underlying form is not permanently altered as it would be in its analog form.
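Both of these affordances, full-text search and layered annotation, can be sketched briefly. The data structures below are hypothetical, not those of any real eReader platform:

```python
text = "It was the best of times, it was the worst of times."

def find_all(text: str, phrase: str):
    """Return every character offset where a string of words occurs."""
    positions, start = [], 0
    while (found := text.find(phrase, start)) != -1:
        positions.append(found)
        start = found + 1
    return positions

print(find_all(text, "it was"))  # case-sensitive search -> [26]

# Layering: notes are keyed to offsets and stored beside the text,
# so the underlying "book" is never altered.
annotations = {0: "Dickens's famous opening line."}
assert text == "It was the best of times, it was the worst of times."
```

Because the annotations live in a separate structure keyed to positions in the text, the “book” itself remains untouched, which is the digital equivalent of writing in the margins without the ink.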

Other affordances and constraints of eBooks come from their mediation. eBooks are read on tablets, phones, and computers (our metamediums); the book itself is a combination of display software (Nook, Kindle, etc.) and digitized text that has been grouped to be displayed in a certain order and in a certain style. The text of the book is obtained (legally) by sending a request on the metamedium to a book distribution company, which sends the digitized packets of data back to the device to be displayed on a pixelated screen. If the user has the display software on multiple devices, the book can be read on these different devices. A device can also access any of the books available to the reader. However, digital rights management software, often included in eBooks, as well as proprietary display software designed to only display books in proprietary formats, makes the legal sharing of eBooks extremely difficult. These constraints, however, are not native to the digital format but are instead the result of corporate practices and legislation.

This brings me back to my question: what would a deluxe eBook look like? What other affordances might be unlocked due to the eBook’s digital format and the ability of metamediums to simulate other media? In my search for deluxe eBooks, I found two, one for Ken Burns and one for Dolly Parton, that seemed like easy cases for creating mixed media projects. Both artists work in non-text media, so the addition of video and audio content to the eBook, easily done with a metamedium that can support different data formats, makes sense. Mixed media projects like picture books could also benefit from the use of animation in addition to text. However, in long-form fiction and non-fiction books, other media formats might be considered distracting to the pleasures of reading. If you were reading a book and suddenly animations began to move on the page, it might pull you out of the moment. Outside of mixed media, what could be offered that might enhance the cognitive work being done while reading?

Figure 2: Screenshot of the Google results for “Deluxe eBook edition”

One idea would be hypertext and the ability to link to rich sources of information if the reader were so inclined. If a recipe was mentioned, linked text could bring the reader to that information, which, while not necessary to the story, might be of interest. Playlists and other tie-in media could be available to turn on or off. These versions of deluxe eBooks would not be dissimilar to new editions that include additional prefaces or introductions providing additional context. More expansive changes to eBooks, however, could be possible through modifications to the book display software.

Alan Kay and Adele Goldberg describe a metamedium as “a machine … designed in a way that any owner could mold and channel its power to his own needs” (Kay and Goldberg 1977). Display software is designed in a way that limits readers’ power to manipulate the text of the books they read, rather than allowing the reader to take advantage of the affordances of the metamedium. While readers can layer comments over the text, they do not have a way to share these comments with other readers on the platform. Text can be copied, but it cannot be cut or edited to create personalized versions of the story. Pictures and art that the reader has created, or thinks are relevant, cannot be inserted. There are myriad ways the needs of the reader could be channeled into a more interactive relationship with their books, behaviors that are demonstrated in many fan cultures. There is a wealth of possibility if books are understood not as totalized objects, like the codex book, but as digitally fluid. As Manovich describes in his principle of variability, “a new media is not something fixed once and for all, but something that can exist in different, potentially infinite versions” (Manovich 2002).

Works Cited

Alan Kay and Adele Goldberg, “Personal Dynamic Media.” First published 1977. As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 391–404. Cambridge, MA: The MIT Press, 2003.

Andrew Piper, Book Was There: Reading in Electronic Times. Chicago: University of Chicago Press, 2013.

Lev Manovich, The Language of New Media. Cambridge, MA: The MIT Press, 2002.


A Tool for the People

Last class Deborah asked, “If World War II didn’t happen, would computing be at the place it is today?” It was a very intriguing question, and at the time I pushed back on it slightly. World War II offered incentive, I proposed, to turn theory into application. However, now, after looking at Vannevar Bush and the other inventors of the computer as a metamedium, I’m less certain.

Figure 1: Atomic cloud over Hiroshima, taken from “Enola Gay” flying over Matsuyama, Shikoku

The Second World War was a transformative moment for many academics. This transformation was not just because it allowed for the convergence of scholarly research, industry, and the military, nor just because the results were proof of concept. Those factors changed the nature of what was known and what could be designed, but the war also created an existential crisis for many researchers involved in the construction of devices that caused mass destruction. There was a reactionary feeling to much of the computer design that came afterwards, a feeling that this knowledge needed to be reclaimed and used for more life-affirming scholarship such as art and music. This feeling was clear in Vannevar Bush’s essay “As We May Think”:

The applications of science have built man a well-supplied house, and are teaching him to live healthily therein. They have enabled him to throw masses of people against one another with cruel weapons. They may yet allow him truly to encompass the great record and to grow in the wisdom of race experience. He may perish in conflict before he learns to wield that record for his true good. Yet, in the application of science to the needs and desires of man, it would seem to be a singularly unfortunate stage at which to terminate the process, or to lose hope as to the outcome. (Bush 1945)

The leading computer designers were disposed to correct course after a brutal war. In turning their attention back to academia, they looked to remediate the tools of learning while simultaneously building on the affordances provided to enhance what was possible. Bush hypothesized the Memex machine, and Sutherland created Sketchpad, expanding what was possible for graphical user interfaces (GUIs). The rise of consumer products markets, mass production of electronics, and advances in modularization also laid the groundwork for rolling out a personal computer. “The world has arrived at an age of cheap complex devices of great reliability,” Bush wrote in 1945, “and something is bound to come of it.”

There was simultaneously a feeling that, in this new world order, scholarship lacked the tools to keep up with how rapidly life was changing. Douglas Engelbart, describing the state of society, wrote: “Man’s population and gross product are increasing at a considerable rate, but the complexity of his problems grows still faster” (Engelbart 1962). Bush’s Memex and Engelbart’s framework for augmenting human intellect were attempts to formulate the necessary tools to make humans more capable and expand what could be made and known.

Figure 2: Memex in the Form of a Desk (Bush 1945)

The two ideas of human cognition and cultural products were combined in Alan Kay and Adele Goldberg’s proposed personal computer, the Dynabook. The Dynabook was designed as a “metamedium, whose content would be a wide range of already-existing and not-yet-invented media” (Kay and Goldberg 1977). The “not-yet-invented” part of this definition was key, as they were theorizing a device that was truly interactive, which meant learnable and easily programmable. Kay and Goldberg envisioned a personal computing device that could be sculpted by its users to perfectly match their purposes, e.g., “an animation system programmed by animators” (Kay and Goldberg 1977). In their paper “Personal Dynamic Media,” they describe how the Dynabook would come with the programming language Smalltalk, making programming simple enough for grade-school students to learn. Unfortunately, as Manovich notes, the computer as a simulation of media, rather than as a tool of tools, was the eventual implementation. However, Manovich also notes that improvements in programming languages in more recent years have come closer to the Smalltalk vision. Perhaps efforts like Codecademy to demystify programming will produce some pushback on the prepackaged software industry. While that seems unlikely at the moment, the disruption experienced by the music, film, and television industries when the tools of production and distribution became simpler and more cost effective may indicate that nothing is a monolith.

Works Cited

Alan Kay and Adele Goldberg, “Personal Dynamic Media.” First published 1977. As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 391–404. Cambridge, MA: The MIT Press, 2003.

Douglas Engelbart, “Augmenting Human Intellect: A Conceptual Framework.” First published, 1962. As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

509th Operations Group. Atomic Cloud over Hiroshima, Taken from “Enola Gay” Flying over Matsuyama, Shikoku (Commentary by Chugoku Shimbun). Photograph, August 6, 1945.

Lev Manovich, Software Takes Command. New York; London: Bloomsbury Academic, 2013.

Vannevar Bush, “As We May Think.” The Atlantic, July 1945.



The Language of Computing

Last week in class we reviewed the transmission model of communication and information, in which Claude Shannon described the goal of transmission to be “reproducing at one point either exactly or approximately a message selected at another point” (Shannon 1948). However, we also specified that in this model, the meaning of the message is irrelevant. Human meaning making, as expressed through symbols and language, is flexible and complex. Any attempt to freeze our symbolic processes for the purpose of “more accurate” communication between humans would have dire and far-reaching consequences.

If, however, we are trying to communicate meaning to a computational device, the flexibility of language becomes a problem. Instructions, represented as symbols, need to be translated into a computational language that follows clear syntax and grammar. Variables need to be defined clearly. Computer programming languages are what David Evans calls “designed languages,” tools carefully crafted to eliminate undue complexity, ambiguity, and irregularity, and to enhance abstraction and economy (Evans 2011). Programming languages such as Python utilize symbols in three ways, as described by Professor Irvine:

  1. Symbols to represent meanings (e.g., + means to add)
  2. Symbols to describe and interpret other symbols (e.g., variable = 5)
  3. Symbols to perform actions on other symbols (e.g., PRINT variable)

The clear definition of variables and consistent use of grammar is key. A breakdown will result in the dreaded SyntaxError.

Figure 1: Katie’s Python Lesson 1 (“Learn Python” 2017)
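Professor Irvine’s three uses of symbols can be illustrated with a few lines of Python (the exact statements are my own examples):

```python
# 1. Symbols that represent meanings: "+" means addition
total = 2 + 3

# 2. Symbols that describe and interpret other symbols: "=" binds a name
variable = 5

# 3. Symbols that perform actions on other symbols: print() acts on names
print(variable + total)  # -> 10

# Break the grammar, however, and the interpreter halts with the dreaded
# SyntaxError, e.g. an incomplete expression like:
#   print(variable +
```

Each line obeys Python’s fixed syntax and grammar; the moment a symbol is left dangling, the translation from human intention to machine instruction fails.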

Beyond correct grammar and syntax, programs have to be designed from a computational perspective. Jeannette Wing describes computational thinking as “reformulating a seemingly difficult problem into one we know how to solve, perhaps by reduction, embedding, transformation, or simulation” (Wing 2006). In other words, programs have to be built logically, the kind of logic I studied in undergrad because I erroneously believed that class, located in the philosophy department, would be easier than calculus. (At least in calculus the final exam wouldn’t have been only five extremely difficult questions upon which 70% of your grade was based.)

Coding requires a detailed understanding of processes, to be implemented in a specific order. Only when the ordering and steps are clear can the CPU retrieve programs and data stored in RAM and compute an output. The recursiveness and depth of the layering in the code enable programs to carry out complex processes that appear to run instantaneously by encoding instructions to the computer’s hardware. One wrong keystroke, however, could break the routine and stall the entire system. Leaders of programming teams need to allow time for testing and user feedback to ensure that the outputs match expectations.

Figure 2: The Possibilities of Coding (Apple App Store 2017)

Automated computing feels like a step away from the human computing that existed long before the invention of ENIAC; however, “coding,” like “writing,” is a humanistic pursuit. Most of our more elaborate programs involved teams of hundreds, if not thousands, of designers to reach their current state. In order to support this type of collaboration, code needs to contain instructions for people as well as for the computer. If a programmer needs to determine whether an algorithm can be improved, they first need to understand the logic of the initial algorithm. Python, for example, contains grammar such as the # and the """ which allow programmers to include explanatory notes or instructions for other people who might one day work on their code. Human language is in that way nested inside programming language.

Figure 3: Katie’s Python Lesson 2 (“Learn Python” 2017)
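A minimal sketch of that nesting, using both of the Python forms mentioned above (the function itself is a made-up example):

```python
def average(numbers):
    """Return the arithmetic mean of a list of numbers.

    This docstring, set off by triple quotes, explains the function to the
    next programmer; Python ignores it when executing the code.
    """
    # An inline comment: everything after "#" is for human readers only.
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))  # -> 4.0
```

The machine executes only the `return` line; the rest of the text exists solely for the humans collaborating on the code.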

A friend of mine who creates programs to run complex economic models said that he can tell which member of his team wrote which lines of code based on their style. He even suggested that the entire narrative of a program’s creation could be read and understood if a person knew where to look. Coding is another art form, reflecting human intention in its design.

Works Cited

“App Store” Apple. Accessed October 24, 2017.

Claude E. Shannon, “A Mathematical Theory of Communication.” The Bell System Technical Journal 27 (October 1948): 379–423, 623–656.

David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines. Oct. 2011 edition. CreateSpace Independent Publishing Platform; Creative Commons Open Access.

“Learn Python.” Codecademy. Accessed October 24, 2017.

Jeannette Wing, “Computational Thinking.” Communications of the ACM 49, no. 3 (March 2006): 33–35.

Martin Irvine. Key Concepts in Technology: Week 7: Computational Thinking & Software. Accessed October 25, 2017.

Twitter is a Weird Place

Reply All, a podcast by Gimlet Media about the internet, has a recurring segment called “Yes Yes No.” In this segment, Gimlet CEO Alex Blumberg sits down with the two hosts, Alex Goldman and PJ Vogt, and asks them to explain something he’s found on social media, usually Twitter. In each case, Alex reads the tweet, his voice sounding extremely perplexed, and then the co-hosts attempt, if they can, to decode all the levels of meaning behind what at first glance reads like nonsense. This segment highlights the difference between the signal-code-transmission model and meaning system models.

When Alex fails to understand a tweet, it isn’t because of a breakdown in the signal transmission model.

Figure 1: Knight 2012, Hartzell 2017, Tomwsulcer 2014

Above, I’ve created a very simplified signal transmission model for “tweeting,” using Shannon’s design (Gleick 2011). Briefly, the user (information source) composes a message. This message is encoded (mediated through the smartphone’s touchscreen interface). The digitized message, now bytes, is transmitted in data packets over a wireless network, through the internet, where it is received by Twitter. Twitter then decodes these data packets, and its software interprets the information, classifying, indexing, and storing it, as well as running many other protocols for updating its platform. When Twitter’s users launch the software application on their smartphones, the software sends an encoded request for information (again through wireless networks and the internet). After decoding this request, Twitter encodes a new message in bytes made up of the combined information and transmits it back to the smartphones of users. Twitter’s application software receives the encoded information, decodes it, and displays it digitally on the screen. The platform is designed in such a way that the original message is reproduced as inscribed by the original user, but is combined with information provided by other users of the platform. The “noise” in the diagram above constitutes anything that interrupts the transmission of the data packets.
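The sender’s half of that round trip can be sketched in Python. The point of the sketch is Shannon’s: the system reproduces the symbols exactly, while their meaning never enters the process. Nothing here reflects Twitter’s actual internals:

```python
tweet = "yes yes no \N{THINKING FACE}"  # symbols rich with human meaning

encoded = tweet.encode("utf-8")  # sender side: symbols become bytes
# ... the bytes travel as packets over wireless networks and the internet ...
decoded = encoded.decode("utf-8")  # receiver side: bytes become symbols again

# Transmission succeeded: the message is reproduced exactly as inscribed...
assert decoded == tweet
# ...yet no step above "understood" the tweet. Meaning is irrelevant here.
print(decoded)
```

The pipeline can carry a perfectly baffling tweet as faithfully as a plain one, which is exactly why Alex Blumberg’s confusion is never a transmission failure.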

Unlike a text message designed for an individuated destination, Twitter is designed for a community of users. Twitter is designed to pattern match, which allows for the classification of information, creating a taxonomy of meaning. This classified information is aggregated and displayed on its platform. Additional algorithms create new information by tracking user interaction to determine the way information is displayed. This gives users, like Alex Blumberg, the option to see what information is getting the most attention. When Alex reads a confusing tweet, his inability to understand what it says isn’t due to a system breakdown; it is because the information Twitter is relaying comes from various users, all with different reference points for interpreting what they are seeing on their digital displays. Alex has the ability to read the text and digital images below, but he doesn’t know what they mean.

Figure 2: Yes Yes No 2017

If the tweet above also has you confused, you can listen to the episode here. 

Denning and Bell describe information as always being composed of two parts, “sign and referent.” Humans determine meaning by “the association between the two” (Denning and Bell 2012, 477). The signs, or the symbols we perceive, are nothing unless we are able to associate them with a subject or idea. In the case above, the tweet and image work together to convey a whole series of meanings that can only be understood once the context is explained. In the “Yes Yes No” segment, Alex Goldman and PJ spend time providing the necessary history for decoding meaning, and once they do, the tweet can be read with ease.

Humans are constantly creating new links between symbols and the things they perceive. These new linkages are expressed through language, which is socially constructed and constantly in flux. The orderly system of signal transmission outlined by Shannon, which is essential for successfully mediating symbols through digital and electronic transmission, can never apply to the meaning of the messages without first freezing the relationship between meaning and language. And, as relationships are subjective, frozen language would then be forever encoded with other social constructs, such as power dynamics, and would reflect only this historical moment. Day describes this as fulfilling “the two paramount concerns for the U.S. during the Cold War period: controlling and idealizing linguistic and social normativity, and, relegating linguistic and social marginality and political contestation to minority or curiosity status, or simply, to being social or linguistic ‘noise’” (Day 2000, 811).

Day is extremely critical of using statistical methodologies to ascribe meaning to symbols; however, Denning and Bell’s article sees ascribing meaning, the process of linking the sign and the referent, as new information. That new information can then be processed by machines. This is a constantly generating system, as opposed to the lockdown described by Day. Twitter is designed to use algorithms to pattern match user preferences and promote content the software has statistically determined is relevant to its users’ interests. It is also designed to target advertisements toward specific demographics that the software has statistically determined are likely to buy certain products. In this sense, it is processing “meaning.” It should then be asked to what extent these algorithms are fulfilling Day’s concerns and freezing social norms, or to what extent the system is constantly adapting to new information.

Works Cited

“#106 Is That You, KD? – By Gimlet Media.” Gimlet Media. Accessed October 18, 2017.

Day, Ronald E. “The ‘Conduit Metaphor’ and the Nature and Politics of Information Studies.” Journal of the American Society for Information Science 51, no. 9 (2000): 805-811.

Denning, Peter J., and Tim Bell. “The Information Paradox.” American Scientist 100, no. 6 (2012): 470–477.

Gleick, James. The Information: A History, a Theory, a Flood. (New York, NY: Pantheon, 2011).

Hartzell, Kathryn. Smartphone Screenshots. 2017

Knight, Gary. Now, Instead of Texting Each Other, You Can Text Other People. March 11, 2012. Friends with Mobile Phones Uploaded by JohnnyMrNinja.

Tomwsulcer. English: Young People Using Their Smartphones at a Party. The Ever-Present Use of Smartphones for Multiple Purposes Has Led Some Writers to Describe Young People as the “Thumb Tribe” or “Thumb Generation”. July 7, 2014. Own work.

“Yes Yes No.” Yes Yes No. Accessed October 18, 2017.



The Windows 8 Disaster

In 2012, my computer died. It had lived a long life, so this was not much of a surprise. It did, however, mean that I would have to buy into the brand new Windows 8 operating system. When Windows 8 first launched, Microsoft made it very difficult to purchase a new computer on the familiar, and much beloved, Windows 7 OS. I’d heard some of the negative buzz but was given to understand that this was a “learning curve” situation, the result of doddering old PC users who were averse to change. I was myself averse to change, but I didn’t like the idea of paying an additional $100 to “downgrade” my OS. This was a mistake.


Figure 1: Windows 8 Start Screen (CNET 2012)

The Windows 8 operating system was designed to be a universal operating system for a suite of Windows devices, such as the new tablets and Windows phone. In 2012, there was a sense that the tablet was going to displace the classic laptop design. Microsoft, in their rush to catch up with Apple, lock in users, and begin the migration to the cloud, failed to consider the very different ways laptops, tablets, and phones are used.

Laptop computers were designed as portable personal computers. The design of the laptop includes a number of affordances for time-intensive work involving a great deal of human-computer interaction. The flat base with a raised digital display is designed to make interaction easy while seated or standing at a high counter. The hinge design also allows the user to angle the screen, easily adjusting the computer to fit each user’s form. The design is very stable: once set up on a flat surface, the laptop is not going to fall over. The ease with which the user can input information is enhanced by the QWERTY keyboard and large graphic display. However, the size and weight of the laptop do not afford walking while operating the device. Additionally, laptops are usually wifi or ethernet dependent. These affordances led to widespread adoption in the home and workplace, where working environments are more controlled and work is performed over longer periods of time.

In contrast, smartphones were designed as mobile communication devices. Smartphones can be held comfortably in one hand and easily carried as an accessory. Smartphones were also designed to play games, watch videos, and access digital media while on the move. The convenience of smartphones has led to their adoption in both personal and professional settings; however, the small display and the relative difficulty of composing lengthy messages (when compared with the affordances of a QWERTY keyboard) make them a poor choice for many business or digital media production purposes.

Finally, the tablet, larger than the smartphone, was designed with affordances that fall somewhere between a laptop and a smartphone. Notably, kickstands for propping up screens and QWERTY keyboards are not affordances of the tablet and must be purchased as accessories. Most of these accessories still lack affordances of the clamshell laptop design, such as a range of motion for angling the screen. For many professional settings, the affordances of a laptop still outweigh those of the tablet. The most notable affordance of the tablet that has not been universally adopted into the laptop is the touchscreen.

This brings us back to Windows 8, the operating system designed to be used on all three devices. The tiled display of the Windows 8 Start page was a good design for a smartphone with a touchscreen. Touchscreens afford swiping and tapping. The large, blocky icons conform to the size of human fingers, which makes them easy to select and launch. Applications launched on a smartphone are designed to take over the entire screen, which, given the small size of the digital display, ensures the viewer can read all the important information presented. Unfortunately for Microsoft, these same design features, when put on a laptop computer, were counterproductive and non-intuitive for the user experience.

On the larger graphics display, the launching of applications that take over the whole screen constrained the number of activities a user could perform at a given time. Multitasking, the simple act of having more than one window open at once, was suddenly a challenge if any of the programs being run were designed in the app format. Additionally, the apps were designed to be launched from the Start page, while more traditional programs like Microsoft Office were designed to be accessed from a traditional desktop layout. Moving between these two layouts was, as described by PCWorld in their review of the operating system, a mess:

If all you need to do is launch an application, you can simply click its tile in the Start screen. If you need robust file management and navigation features, you have to access the desktop. After you boot the machine, pressing the Windows key sends you to the desktop. Unfortunately, the Windows key isn’t consistent in this behavior: If you’re in an app, pressing the Windows key always returns you to the Start screen. Press it again, and you’re in the most recent Windows 8 app. Instead, to move to the desktop consistently, you need to be in the habit of pressing Windows-D. Another option is to move the pointer to the lower left of the screen and click there (though this method works only if you have used no other app recently). (PCWorld 2012)

Even outside of confusion like the above, interaction was unintuitive. The majority of laptops in 2012 did not have touchscreens, so “swiping” left or right was accomplished with a mouse by scrolling up and down. The tile display that was sized for human fingers on a phone was extremely large on a laptop monitor and made the workspace busy and overwhelming. There was at once too much information and too little. A lack of signaling left users confused as to where they could find basic functions, now hidden off the side of the screen in an area that previously had not existed. The affordances offered by Windows 8 were often imperceptible and violated conventions, which Norman argues are cultural constraints, as they are learned behaviors “shared by a cultural group.” (Norman 1999, 41)

Less than a year later, Windows 8.1 was rolled out, allowing users to boot into the desktop and bypass the Start screen. However, the damage was done. The failure of designers to consider that different devices had different affordances, as well as different rituals and behaviors associated with them, doomed Windows 8. Murray describes the designer’s task as needing to be “grounded in the service of specific human needs: this is what gives the work clarity and direction.” (Murray 2012, 42) Norman gives his own warning: “Conventions are not arbitrary: they evolve, they require a community of practice…Use them with respect. Violate them only with great risk.” (Norman 1999, 41)

Donald A. Norman, “Affordance, Conventions, and Design.” Interactions 6, no. 3 (May 1999): 41.

“How to Get the Start Menu Back in Windows 8.” CNET. Accessed October 11, 2017.

Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012. 42.

“Windows 8: The Official Review.” PCWorld, October 25, 2012.

Meditating on Medium, Mediation and Mediology

In front of me sits the great metamedium: the smartphone touchscreen. A metamedium, as described by Professor Irvine, is a “medium for aggregating, distributing, transmitting, formatting, representing, and presenting other media.” (1)

The design of the touchscreen provides interfaces to various applications that can then be viewed, heard, and interacted with through the digital display, speakers, and touchscreen. The touchscreen, however, isn’t just a medium. It also plays an active role in the construction of a complex sociotechnical system.

Figure 2: iPhone 6 Screenshot (2)

In order to break down the complex ways the touchscreen acts as a mediator, we need to unpack the nested layers of meaning and function that the touchscreen affords.

Function: The touchscreen, through its digital display, provides a mental map of the functions that can be performed on the phone. The layout relies on digitally represented symbols and affords the user the ability to run different programs by pressing on the screen. The pressure on the screen is mediated through the sensor technology and transferred to the phone’s operating system, where those functions are transmitted to the various modular components of the phone, delegating the task of requesting and retrieving content. Once the information has been retrieved (through a complex series of operations involving a diverse number of players), the content is displayed. The content retrieved, though, is bound by certain requirements of the touchscreen: webpages need to be modified to fit the screen dimensions, and pictures may need to be resized. The touchscreen presents the content and also informs the content of what it can and cannot be.
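The chain of delegation described above can be sketched in a few lines of toy code. This is a hypothetical illustration only, not any real iOS or Android API: the function names, icon grid, and screen width are invented for the example.

```python
# Hypothetical sketch of the touchscreen as mediator: a press becomes a
# digital event, the OS delegates it to a modular component, and retrieved
# content is reshaped to fit the constraints the screen imposes.

SCREEN_WIDTH = 375  # logical width of an iPhone 6 display, in points

def sense_touch(x, y):
    """Sensor layer: physical pressure becomes a digital event."""
    return {"type": "tap", "x": x, "y": y}

def dispatch(event, icon_grid):
    """OS layer: delegate the tap to whichever app icon was pressed."""
    for icon in icon_grid:
        if icon["x"] <= event["x"] < icon["x"] + 60 and \
           icon["y"] <= event["y"] < icon["y"] + 60:
            return icon["app"]
    return None

def fit_to_screen(content_width):
    """The screen informs the content of what it can be: resize to fit."""
    return min(content_width, SCREEN_WIDTH)

icons = [{"app": "mail", "x": 0, "y": 0}, {"app": "browser", "x": 70, "y": 0}]
app = dispatch(sense_touch(85, 30), icons)  # the tap lands on the browser icon
page_width = fit_to_screen(1024)            # a 1024-point page is constrained to 375
```

Even in this toy form, the mediation is visible: the content never reaches the display unchanged, but only in the shape the screen permits.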

Figure 3: iPhone 6 Teardown (3)

Socialization: The functions of the touchscreen are embedded in socialization. The relationship is codependent, with the uses of the touchscreen emerging from social behaviors and also defining those behaviors. Latour refers to the first level of mediation as “goal translation.” (4) These goals, such as connecting with friends, paying bills, and booking travel, can be achieved through the touchscreen, but they are also changed by the presence of the touchscreen, which affords multiple ways to achieve them. For example, one goal, which has its roots in social relations, might be a student checking in with their parents. The presence of the touchscreen affords multiple ways of achieving this task, but it also changes the nature of the task: the ability to communicate quickly and easily may mean that the student is expected to call, text, or write home more frequently.

Figure 4: Man Talking on Phone (5)

Institutionalization: The functions and social practices enabled by the touchscreen are reinforced by institutions. Consider my old job, which provided all its employees with a smartphone. The firm not only purchased the phones but paid the monthly bills, and applications that could be accessed through the touchscreen were rolled out to make working from outside the office simpler. In return, however, it was expected that employees would respond to messages wherever they were, at whatever time of day, marking a significant shift in the way work was done.

Figure 5: PwC Minneapolis (6)

Dependencies: The touchscreen’s ability to successfully mediate is dependent on a number of interconnected systems.

  • Physical: A person’s use of a smartphone is dependent on their ability to see, hear, and touch the phone. The impairment of any of these senses will define what can be mediated.
  • Legal: When the smartphone can be used (not while driving) and what materials it can access (public vs. private information) are largely bound to legal concerns. Legal concerns also limit the ability of many of the smartphone’s dependent technologies to mediate.
  • Economic: Business interests play a key role in the mediation of content.

Latour and Debray break down the false dichotomy between social and technical artefacts. The two exist in an ouroboros configuration, connected in an unending loop that only becomes more enmeshed with the advancement of technology. (7) Debray describes the mediologist as “interested in the effects of the cultural structuring of a technical innovation (writing, printing, digital technology, but also the telegraph, the bicycle, or photography), or, in the opposite direction, in the technical bases of a social or cultural development (science, religion, or movement of ideas).” (8) Cultural practices have influenced the construction of the touchscreen, from its modular form to the different media it can access, but cultural practices have in turn seen significant shifts since the rollout of the touchscreen.


(1) Martin Irvine, “Understanding Sociotechnical Systems with Mediology and Actor Network Theory (with a De-Blackboxing Method)” PDF. 9.

(2) Kathryn Hartzell. iPhone 6 Screenshot, October 3, 2017. Own work.

(3) “IPhone 6 Teardown.” IFixit, September 18, 2014.

(4) Bruno Latour. “A Collective of Humans and Nonhumans — Following Daedalus’s Labyrinth,” in Pandora’s Hope: Essays on the Reality of Science Studies. (Cambridge, MA: Harvard University Press, 1999), 179.

(5) Mylesclark96. English: Man Talking on IPhone, March 15, 2016. Own work.

(6) Zhao, Bohao. PWC Minneapolis, August 13, 2013.

(7) Bruno Latour. “A Collective of Humans and Nonhumans — Following Daedalus’s Labyrinth,” in Pandora’s Hope: Essays on the Reality of Science Studies. (Cambridge, MA: Harvard University Press, 1999), 201.

(8) Régis Debray. “What Is Mediology?” Le Monde Diplomatique 32, trans. Martin Irvine (1999), 1.



The organized chaos of my web browser

Why are there so many tabs open on my web browser? When I’m researching a project, which I almost always am, I leave a lot of tabs open. Some of these tabs may remain open for weeks. This has been a source of extreme aggravation to people who have tried to work off my laptop through the years, or have simply looked over my shoulder and observed all the different little grey boxes stretching across the screen. Unfortunately for the blood pressures of these poor people, I will henceforth use that question as an opportunity to share my cognitive process.

(Screenshot of my browser. You can count nine different tabs in this picture, but I promise there are at least three more)

My web browser is not a mess, it’s distributed cognition

Whether I’m using Safari, Chrome, or Firefox, my browser is one of the tools I rely upon most. At this very moment, I have a tab for listening to music, four different email accounts, a library catalog search, class readings, and my notes on those readings.

“Surely,” argue people who care about my mental health, “you could at least close the email accounts. Pop-up notifications will let you know if you get a new message.” This is true, but that’s not why I leave the emails open. Each email account has different responsibilities associated with it and tasks that need to be accomplished. Keeping these tabs open reminds me that I need to go into them later and make sure that I have completed those tasks. Once I have, I get to close a tab. In this way, I’ve offloaded the organization of my responsibilities to the web browser so I don’t have to worry about forgetting anything important. I’m using the space and layout of my browser as a memory aid and an organizational tool. My multitudinous browser tabs are an example of distributed cognition, where “work materials become integrated into the way people think, see, and control activities, part of the distributed system of cognitive control.” (1)

Like my email accounts, my tabs of articles shouldn’t have to remain open once I’ve read them. But since I don’t have printed copies, having them there helps me remember, with a quick glance at their titles, what each piece was about, so that I can consult them quickly if needed. The not having printed copies part of this is key. I’ve remediated what was once a desk covered in stacks of papers, notebooks, and actual books into a “stack” of tabs; the “stack” simply moves horizontally across my screen instead of vertically. I’m focusing on this behavior because, like Hollan, Hutchins, and Kirsh’s example of pilots’ use of the airspeed indicator, I’m using these tabs in a way that was not necessarily intended by the system design. (2) Norman writes that when looking at cognitive artefacts there is “the system view and the personal view,” which are different. (3) From the system standpoint, tabs were constructed to let users visit a new webpage without having to leave the page they were currently on. Rather than opening a new window, the tab function allowed for easier navigation between different places on the internet, as the names of the pages remained at the top of the screen. Referred to as tabs, this design was meant to replicate the experience of consulting neatly labeled file folders. To follow through on the metaphor, things placed into physical file folders are usually then placed in a drawer, or stored out of sight, as they are no longer in immediate use. File folders aren’t intended to be left in piles on a desk. By leaving what some might call an excessive number of tabs open, I have also remediated a messy desk. No wonder many people find this stressful.

With this knowledge in mind, what would be a “neater” way for me to keep track of everything I’m working on? I don’t want to close the tabs and make it more difficult to navigate back to the right spot in the future. I don’t want to bookmark everything, either; that would lead to an excessive number of bookmarks and a new place of mess and confusion. Maybe what I need is a virtual stack into which I can drag and drop my active webpages, keeping my main workspace “clean.” Until I find that software, I will likely continue to let my browser tabs accumulate.
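The “virtual stack” I’m wishing for is, in data-structure terms, just a last-in, first-out stack. A minimal sketch of the idea, with entirely invented names (TabStash is not a real browser extension or API):

```python
# Hypothetical sketch of a "tab stash": move an open tab out of the
# workspace onto a pile, glance at the pile's titles, and restore the
# most recently stashed tab, like lifting the top paper off a desk stack.

class TabStash:
    def __init__(self):
        self._stack = []  # last-in, first-out, like papers piled on a desk

    def stash(self, title, url):
        """Drag a tab out of the main workspace onto the pile."""
        self._stack.append((title, url))

    def restore(self):
        """Take the top item back off the pile to reopen it."""
        return self._stack.pop() if self._stack else None

    def glance(self):
        """A quick look at the titles, most recent first."""
        return [title for title, _ in reversed(self._stack)]

stash = TabStash()
stash.stash("Library catalog", "https://library.example/search")
stash.stash("Hollan et al. 2000", "https://doi.example/dcog")
titles = stash.glance()  # the most recently stashed title comes first
top = stash.restore()    # the Hollan et al. tab comes back off the pile
```

The glance method matters as much as the stack itself: it preserves exactly the at-a-glance memory cue that keeping the tabs open provides now.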


(1) James Hollan, Edwin Hutchins, and David Kirsh. “Distributed Cognition: Toward a New Foundation for Human-computer Interaction Research.” ACM Transactions, Computer-Human Interaction 7, no. 2 (June 2000): 178.

(2) James Hollan, Edwin Hutchins, and David Kirsh. “Distributed Cognition: Toward a New Foundation for Human-computer Interaction Research.” ACM Transactions, Computer-Human Interaction 7, no. 2 (June 2000): 180.

(3) Donald A. Norman, “Cognitive Artifacts.” In Designing Interaction, ed. John M. Carroll, (New York: Cambridge University Press, 1991), 17.