Author Archives: Sara Levine

Navigating Remediation and Presence: An Analysis of Live Stream Video Technology

By: Sara Levine

Live stream video technology is now a fairly popular media artefact that has been re-purposed by organizations, institutions, and individuals to broadcast various live events. Most recently, for example, Hulu gave visitors the option to watch the White House Correspondents’ Dinner as it occurred in real time. Hulu published it as a playable video afterwards, but for a few intervening hours viewers were able to tune in to the event as it unfolded. The attraction of live stream video seems to be its “liveness.” Viewers are not actually present at the event, but the promise of live stream video is that they will have an experience similar to that of those sitting in the room with President Obama. This quality of “liveness” seems to derive from a remediation of broadcast television and the illusion of presence.

So, how does it work?

Live stream video requires an on-site computer setup that can compress, encode, and stream the content in real time. Alternately, this technical work can be outsourced to specialized companies. The video content is uploaded to a designated “media server,” which receives instructions from the web server to send data out to specific recipients. Streaming video does not use typical file-transfer protocols such as FTP or HTTP. Instead, streaming technology relies on protocols that facilitate the movement of data in real time: the Real-time Transport Protocol (RTP), the Real Time Streaming Protocol (RTSP), and the RTP Control Protocol (RTCP). These protocols also provide an extra layer of protection so that the servers are not overloaded with traffic (“How Streaming Video and Audio Work”).
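The contrast between downloading a complete file and streaming it in real time can be sketched in a few lines of code. The example below is purely illustrative and not a real RTP implementation; the function names and the byte-based chunk size are my own inventions. A generator stands in for the on-site encoder, emitting small chunks as the “camera” produces them, while the viewer plays each chunk on arrival instead of waiting for a finished file.

```python
# Illustrative sketch only: not a real RTP/RTSP implementation.
CHUNK_SIZE = 4  # bytes per chunk; real encoders segment by time, not size

def encode_live_source(raw_frames: bytes):
    """Yield small chunks as the 'camera' produces them, the way an
    on-site encoder emits data in real time."""
    for i in range(0, len(raw_frames), CHUNK_SIZE):
        yield raw_frames[i:i + CHUNK_SIZE]

def stream_to_viewer(raw_frames: bytes) -> bytes:
    """The viewer 'plays' each chunk on arrival instead of buffering
    the whole file, as an FTP/HTTP download would require."""
    played = bytearray()
    for chunk in encode_live_source(raw_frames):
        played.extend(chunk)  # a real client would decode and display here
    return bytes(played)

print(stream_to_viewer(b"frames-from-the-event"))
```

The point of the sketch is that playback begins as soon as the first chunk arrives; the event is consumed as it happens rather than after the fact.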

The sender-to-recipient interaction seems reminiscent of the Shannon-Weaver model of communication (otherwise known as the “transmission model”). The video is transmitted over a live stream to the recipient through the internet, where noise from interrupted connections and other errors may occur. In his article about video programming on the internet, John Meisel provides a diagram that breaks down the communication between sender and recipient (Meisel 55). This diagram indicates that the production process may not be as straightforward as sending data across servers and protocols. The content must pass through broadband service providers twice before arriving at its destination. Furthermore, Meisel writes that “streaming live video is more demanding in bandwidth requirements” (Meisel 54).
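The transmission model can itself be simulated in a few lines. This sketch is my own illustration rather than anything drawn from Meisel or Shannon: a message passes from a sender through a channel that randomly corrupts bytes, standing in for the dropped packets and interrupted connections that introduce noise into a live stream.

```python
import random

def noisy_channel(signal: bytes, error_rate: float, rng: random.Random) -> bytes:
    """Corrupt each byte with probability error_rate; the '?' placeholder
    stands in for data lost or garbled in transit."""
    return bytes(b if rng.random() > error_rate else ord("?") for b in signal)

rng = random.Random(0)
sent = b"live video frame"
received = noisy_channel(sent, 0.2, rng)
print(sent, received)
```

With an error rate of zero the receiver recovers the sender’s message exactly; as the rate rises, the received signal diverges from the transmitted one, which is the basic insight of the Shannon-Weaver diagram.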

Meisel analyzes this production process from an economic perspective as well. He points out that “a specific economic concern from a competition perspective is whether these broadband network companies will discriminate against application providers…that are creating video networks” (Meisel 61). In other words, broadband network companies may block access to their competitors’ content. With regard to live stream video, several different players could be caught up in the production process. Hulu was presumably given permission to film and stream the White House Correspondents’ Dinner. C-SPAN was also streaming the event, so there may have been competition over viewers. Additionally, it is unclear whether these companies use their own streaming services or outsource to another company. Viewers also had the option of visiting websites unaffiliated with TV companies. Competition seems centered on whose stream is shared the most, and perhaps on which institution provides the best quality video stream of the event.

Live Stream Video Functions to Consider

This technology is available to anyone who has a computer and a robust broadband connection. Consequently, there are a variety of institutions and individuals who are using this media artefact in order to broadcast content. However, there are certain functions that remain the same for everyone. The first is the remediation of live broadcast television, and the second is the concept of presence. Both of these are used differently by various institutions (which will be discussed later with specific examples), but they also seem to be deeply embedded within this medium.

Live Broadcast Television Remediated Through Live Stream Video

The concept of “live” production was introduced with radio programming. In “Live from Cyberspace,” Philip Auslander writes that the first use of the term “live” “comes from the BBC Yearbook for 1934” (Auslander 17). Radio listeners were not able to identify the sources of the sounds they were hearing. Consequently, there was no way to tell if broadcast content was live or recorded unless the announcer made the distinction (Auslander 17). Auslander posits that the term “live” came into being precisely because of this confusion between live and recorded radio broadcasts. This notion of live broadcast content continued into the mid-20th century with live television. It could be argued that live video streaming over the Internet is simply another iteration of live broadcast content.

Remediation is described by multiple theorists as a process through which new media is born. According to Lev Manovich, it is “the mix between older cultural conventions for data representation, access and manipulation and newer conventions of data representation, access and manipulation” (Manovich 13). Manovich alternately calls remediated media “meta-media” or “post-media” (Manovich 21). If live stream video can be considered “meta-media,” then its content might be a combination of more familiar broadcast elements and newer forms of Internet technology. However, live stream video did not invent the transmission of data over the web; it re-uses these elements. Bolter and Grusin write that “…streaming video…cannot merely improve what the Web offered before but must ‘reinvent’ the Web…What is new about new media is therefore also old and familiar: that they promise the new by remediating what has gone before” (Bolter and Grusin 270).

Live stream video’s remediation of live broadcast content has created economic tensions. Bolter and Grusin posit that “it is a struggle to determine whether broadcast television or the Internet will dominate the American and world markets” (Bolter and Grusin 48). Video streaming providers such as Netflix and Amazon are producing their own content, but another aspect of video streaming may put the Internet in such a contentious position with broadcast television: the growing adoption of live stream video technology. TV audiences now have the option of choosing a different medium for watching live content.

The first live stream video was “a coffee pot in the Trojan Room of Cambridge University to an intranetwork of computer scientists” in 1991, according to J. Macgregor Wise (Wise 425). Once live streaming video became more widespread and accessible, the popularity of webcams seemed to increase dramatically. Anyone with a decent computer and broadband connection can distribute live content to millions of others. Each user has their own audience, and can produce videos in real time for that audience to “tune in” to. Juhlin, Engstrom, and Reponen make the point that “there remains a challenge for the designers of these services to develop the concept in order to support people’s appropriation and thereby democratize a medium which up to now has been entirely in the hands of well-trained professional TV-producers” (Juhlin, Engstrom, and Reponen 42). Millions of users can set up webcams to record extremely long stretches of time that Wise refers to as “longue durée” (Wise 427). Viewers can catch small chunks of this “longue durée” and return to it at any time. This differs from broadcast television producers, who may use live stream technology to produce content bounded by the beginning and end of an event.

Whether a single webcam continuously recording a litter of puppies or CNN coverage of a celebrity funeral, the live stream video content draws in viewers who want to watch this content happen live. These viewers are not physically present during the taping, but watching the broadcast content seems to satisfy this desire for presence.

Illusion of presence

When a viewer opens up a live video stream, she or he is watching the content from the point of view of the camera. In a study titled “Amateur Vision and Recreational Orientation: Creating Live Video Together,” Engstrom, Perry, and Juhlin call this process “mediated looking” (Engstrom, Perry, and Juhlin 652). They write that “camera users act as proxy viewers on behalf of…the eventual viewer of broadcast content” (Engstrom, Perry, and Juhlin 652). It is through this act of “mediated looking” that viewers feel the pull between being present within the broadcast content and sitting in front of their computer screens. This also describes the concept of “telepresence” as discussed by Wise. Wise echoes others in his belief that this feeling is not particularly strong in the case of live stream videos; it might be considered “low telepresence” or “popular telepresence” (Wise 428). Therefore, it seems that viewers are not completely taken in by the illusion of presence.

Mark Duffett wrote in “Imagined Memories” about Paul McCartney’s “Webcast from the Cavern” as a major event in live stream technology. He posed the question: if the consumers of this webcast knew that they could not interact with other viewers or with Paul McCartney, then “Why did they accept that Webcasting could reproduce liveness?” (Duffett 312). Duffett then connects the reproduction of liveness with Benjamin’s concept of aura (Duffett 315). It is important to note that while Benjamin discussed loss of aura in terms of mechanical reproduction, he came to the conclusion that technologies such as photography and film had in fact divorced themselves from the concepts of ritual and aura. Instead, these works became “designed for reproducibility” (Benjamin 256). In the case of Paul McCartney’s webcast, Duffett posits that the event’s aura of liveness was reduced to a mere “marketing technique” (Duffett 314). In other words, the live stream video content was reproduced with the intent of redistribution in real time. The liveness of the content is then repurposed through this reproductive medium as a way to reach a wider audience. Does the online audience have a clearer view of McCartney than those present? Are there close-ups of his face? The streaming video provides a different experience of presence than that of the physically present audience because of its reproductive function.

Bolter and Grusin write that “the digital medium wants to erase itself…there should be no difference between the experience of seeing a painting in person and on the computer screen, but this is never so” (Bolter and Grusin 45). They call this drive toward transparency “immediacy.” Immediacy implies that the medium should not be noticeable (Bolter and Grusin 6), but Bolter and Grusin point out that “technology still contains many ruptures: slow frame rates, jagged graphics, bright colors, bland lighting, and system crashes” (Bolter and Grusin 22). This holds true for live stream video content as well. Producers of this content want the reproduction of liveness to occur as smoothly as possible, but time lag over weak broadband connections and other disruptions can make the medium visible to the viewer. The concepts of presence and liveness are especially vulnerable to these disruptions, which most likely have a negative effect on the viewer’s experience.

On a final note about presence, Baudrillard wrote that “…the confusion of the medium and the message is the first great formula of this new era. There is no longer a medium in the literal sense: it is now intangible, diffused, and diffracted in the real…” (Baudrillard 22). While other thinkers (such as Benjamin) wrote about alienation with regard to video content, Baudrillard is commenting on a new era of diffusion. He attributes this newer concept to McLuhan, and goes on to posit that the medium cannot easily be separated from the reality it captures. In a live stream video, the borderlines between the concept of presence, the remediated broadcast function, the video screen display, and the actual content being recorded blur together. Viewers experience liveness through this tangled form of hyperreality, and producers use that remediated liveness as part of their intended message to the audience. These concepts can be explored more concretely through several specific case studies.

Case Studies

Live stream video technology can be, and has been, used in a multitude of ways by millions of different people. These case studies explore only a small fraction of the types of live stream videos on the internet. The first case study looks at how large institutions involved in news media use live stream video technology. The second is a discussion of Marina Abramović’s “The Artist is Present” exhibit at MoMA in 2010. The final case study analyzes live stream video on a smaller scale.

News Media

Paul Sagan wrote an article in 2010 called “The Internet & the future of news,” in which he provided numerous statistics about the growing number of people viewing online broadcast content. During President Obama’s inauguration in 2009, for example, “the Akamai global content delivery network served more than seven million simultaneous streams…a number that rivals the audience for many televised cable channels” (Sagan 122). It seems that news media institutions rely on the remediation of live broadcast television in order to capture this audience.

There seems to be a remediation loop between broadcast television and the networks’ websites that host live stream video content. Bolter and Grusin used CNN’s website and televised newscasts as a specific example of this feedback between the two forms of media: “The CNN site is hypermediated…yet the web site borrows its sense of immediacy from the televised CNN newscasts. At the same time televised newscasts are coming to resemble web pages in their hypermediacy” (Bolter and Grusin 9). ABC News has seen similar changes, as demonstrated in the following images:

The first image is from a 1981 broadcast of the Royal Wedding, the second image is from the 2011 broadcast of the Royal Wedding, and the third image is taken from the ABC.Go webpage.

The website does seem to borrow from the immediacy and liveness of broadcast television, especially with two columns that point out “latest headlines” and “this just in…”. However, the contrast between the first two images indicates that broadcast television has been influenced by the internet in terms of formatting. The 2011 broadcast has a headline – “The Royal Wedding” – as well as a Twitter icon at the bottom of the screen. It could be argued that a remediation of remediation has been at work over the past few years: the webpage was formatted to support the immediacy of news coverage, and then live news coverage was in turn formatted to support the immediacy of the webpage. So, how does live stream video fit into all of this?

Here is an image of ABC’s live stream coverage of President Obama’s Commencement Address at Ohio State University:

The formatting of the live stream video seems very similar to broadcast television. There is a headline at the bottom, and the word “live” appears multiple times around the screen. Additionally, a window on the right side gives viewers the option to read comments about the video. The multi-window, multi-column layout seems reminiscent of the ABC webpage. All of these remediations and reproductions of layouts seem to support Benjamin’s concept of creating reproducible media. ABC probably does not care much about the loss of “liveness” in live streaming an event like the President’s commencement address. The content was produced with the intention of reproduction: it can be embedded anywhere, watched on phones or tablets, and significantly expands ABC’s audience.

Many who are watching live news coverage are familiar with the feeling of watching live content without physically being in the same space as the camera crew. In the Ohio State University Commencement Speech, viewers watching the online coverage know that they are not having the same experience as those who are attending the ceremony. Online viewers are watching what seems to be a continuous close-up shot of President Obama, and are therefore developing a different memory of the event than those who were sitting in the crowd and watching him from afar. Most viewers are used to this type of live event coverage. ABC is simply remediating this coverage for the internet. However, live broadcasts of events occurring overseas may warp viewers’ senses of presence.

Bolter and Grusin wrote about news coverage during Princess Diana’s funeral, which occurred in the middle of the night for most American viewers. Consequently, videotaped footage from the funeral appeared in one window, and live footage of Princess Diana’s body being carried to its final resting place appeared in another window alongside it. “This crowding together of images,” they wrote, “the insistence that everything that technology can present must be presented at one time – this is the logic of hypermediacy” (Bolter and Grusin 269). It was important for the news media to show both videos side by side because immediacy demanded that viewers be able to watch what they missed without neglecting the ongoing proceedings. This might have distorted viewers’ perspectives on the content of the videos, because displaying multiple windows invites comparison. Similarly, during the Royal Wedding in 1981, the broadcast replayed footage from earlier in the day while also displaying “live” footage of Buckingham Palace at night. This emphasized the time difference, and also allowed the news media to edit the footage in a way that fit the message they wanted to communicate to their viewers.

The Royal Wedding in 2011, however, was broadcast live both through live stream video technology and on television. This meant that coverage started at 4am EST or earlier. Many Americans woke up very early to watch the wedding live, and also threw parties to celebrate the event as it happened (“Americans Wake Early to Watch Royal Wedding”). This contrasts with the other two events because Americans might have felt an enhanced sense of presence by having to wake up early to watch the wedding. The camera angles and close-ups all continue to indicate that Americans are not actually present at the wedding, but the time zone difference may have increased the telepresence involved in watching the wedding live.

Marina Abramović

In 2010, David Hart – Media Producer for the Museum of Modern Art in New York – wrote on the official MoMA blog that “when the Marina Abramović exhibition was starting to come together, the staff in all the departments here struggled with how best to communicate the ideas in the exhibition online – since so much of the point of performance art has to do with being in a location, in a moment in time” (“Live-Streaming Marina Abramović: Crazy or Brave?”). The decision was made to stream the exhibition online, and Abramović’s work became available to anyone with a decent Internet connection. There were a variety of reactions to the live stream video of the exhibition, many of them related to the significance of performance art. It seems that the live stream coverage of the piece became a tool for MoMA to package and market performance art rather than, as David Hart wrote, “a great way to engage with art” (“Live-Streaming Marina Abramović: Crazy or Brave?”).

The viewers who visited the MoMA website and spent time watching the live video might have experienced the performance art piece very differently than those who were at the museum during that time. Claudine Isé reviewed the exhibition on a blog post for the website Bad at Sports and asked: “What’s the purpose of streaming a performance – one which purportedly explores what it means to ‘be present’ in this particular historical moment – for the benefit of anonymous internet users who can engage with it only by staring at their computer screens for a few seconds at a time?” (“MoMA’s Live Streaming Marina-Cam Invites Everyone to Be Present”). In fact, the Los Angeles Times reported that the live stream needed to be refreshed “periodically” (“I See You: Marina Abramović live video-feed performance at MoMA”). Consequently, the viewers’ feelings of presence might have been considerably reduced by the medium of the live stream.

Performance art seems to challenge this idea of presence because of its finite duration. The other artworks in MoMA are present at all times, but a performance art piece is fleeting in comparison. The viewers in the gallery are aware of this, especially when they are sitting in front of the artist as part of the exhibit. Viewers outside of MoMA, however, experience the performance through the streaming video medium. They are not present with the artist, and view the performance from whatever angle the camera operator has chosen for the shot. They do not have as much agency in their experience of the piece as those in the gallery do. Additionally, they must refresh the feed if they want to continue watching. It could be argued, therefore, that the extremely low telepresence of this live stream all but eliminated the aura of live performance art. If Abramović had intended for this performance to be recorded, then the live stream video could be considered a reproduction designed for reproduction. However, the decision was made by MoMA, which had its own ideas about how and why the performance should be distributed.

“I can’t imagine anyone watching for more than a minute or two,” Claudine Isé wrote, “which makes the Marina-cam little more than an online advertisement for the show itself” (“MoMA’s Live Streaming Marina-Cam Invites Everyone to Be Present”). Isé seemed to be questioning MoMA’s motives for broadcasting the performance using the “Marina-Cam.” If it was essentially a marketing strategy for the museum, then the museum’s attempts to distance itself from the economics of the art world had been compromised. Pierre Bourdieu wrote about this concept of “disavowal” (Bourdieu 261). He explained that anything relating to monetary value is shunned because the art’s value is supposedly related to something beyond money. This “disavowal” can lead to events such as Marina Abramović’s “The Artist is Present” exhibition, which may hold a significant amount of “symbolic capital” (Bourdieu 262). Symbolic capital functions as a sort of credit towards prestige in the art world, and usually results in some monetary profit (Bourdieu 262). Abramović’s exhibition held a great deal of symbolic capital because of her fame in the performance art field. The monetary profits made by the museum may not have been openly discussed because the experience is supposed to be considered priceless. However, the remediation of this experience through a live broadcast seems to bring economic value back into the picture and reduce the symbolic value of the piece. Amelia Jones attended the exhibit and wrote an article about the concept of “presence” in regard to Abramović’s decision to reproduce some of her previous works at MoMA. Jones wrote that “…market pressure inspires the range of methods that have been developed to ‘document’ the work and/or its re-enactments and thus to secure the work in its place in the markets of objects and histories” (Jones 20). Part of increasing the economic value of this exhibition involves treating it the same as any other exhibition, which entails a certain amount of sensationalist marketing in order to attract attention. One of MoMA’s strategies was to rely on the broadcast television function of the live video feed to promote the exhibit: viewers were treated to “live” coverage in hopes that it would attract attention to the museum.

The live stream video of Marina Abramović’s “The Artist is Present” may have had a significant impact on the way the art piece was received by the general public. The aura of the performance was almost entirely replaced by the video’s function as reproducible and remediated content. MoMA used these functions of presence and remediation in order to market and distribute the performance.

“Draw Friends”

“Draw Friends” is a live internet show. Several cartoonists enter a Google+ Hangout that is streamed live to YouTube. The cartoonists usually spend some time chatting with each other directly to camera, but most of the show is devoted to watching them draw after they turn on the “screenshare” function of Google+ Hangout. The host, Terence, will occasionally call out various themes or characters for the cartoonists to draw. However, the majority of the drawings derive from conversations the cartoonists are engaged in or projects they may be working on at the time of the broadcast. Additionally, there is a comments window for viewers to communicate with the cartoonists, ask questions, and make drawing requests. After the broadcast, the video is archived on YouTube and the cartoonists post their drawings from the episode on their blogs.

This online live show seems to be a remediation of broadcast television. There is a host for the show and a cast of characters who participate in activities as dictated by the host. Additionally, viewers are encouraged to tune in to the live taping as an audience in order to interact with the cartoonists. It seems as though the creators behind “Draw Friends” rely on the remediation of the broadcast television show in order to organize the more familiar aspects of live stream video. The concept of the show is somewhat similar to Bob Ross’ show on PBS, which was devoted to his artistic process. Both Bob Ross’ show and “Draw Friends” possess a certain amount of “longue durée.” These long stretches of time in which the artist is drawing may be filled with conversation, or may pass in complete silence. Viewers can walk away from the video and return to it at a later time to catch another segment of the process. The remediation of broadcast television situates the live video chat as a show, organized to mimic the television format.

However, Bob Ross’ show was not broadcast live, and there was no interaction between Bob Ross and his audience. The “liveness” aspect of “Draw Friends” is grounded in the interaction function of the broadcast. Viewers do not feel like they are in the room with the cartoonists, but instead are encouraged to feel as if they are participating in the Google+ Hangout in real time. Their questions and suggestions change the direction of the show as it is being recorded. Additionally, the title “Draw Friends” indicates that the audience is welcomed into the group of cartoonists. In this case, presence is not linked with a physical location. Instead, it is associated with another live medium: the video chat.

The “screenshare” function recalls Benjamin’s discussion of the ways in which the close-up revolutionized how people saw the world around them. Benjamin wrote that “…just as enlargement not merely clarifies what we see indistinctly ‘in any case,’ but brings to light entirely new structures of matter, slow motion not only reveals familiar aspects of movements, but discloses quite unknown aspects within them” (Benjamin 266). The camera angle on Bob Ross’ show usually cycled between a close-up of his movements over the canvas and a wide shot of Ross standing in front of his easel. The “screenshare” function that the cartoonists use, however, eliminates the artist from the shot. Instead, viewers watch the cursor move across the screen and sketch each drawing. They are presented with the viewpoint of the artist rather than standing behind or next to the artist. This may affect the way in which viewers perceive art and cartooning. Perhaps the elimination of the cartoonist results in an objectification of the artwork because the human aspect of the drawing goes unnoticed. On the other hand, this extreme close-up of the digital canvas may reveal gestures and techniques that viewers would not have picked up on from any other angle.


The three case studies indicate the ways in which live stream video technology’s remediation function and alteration of the concept of presence have affected the consumption of different media forms. News media are using live stream video technology to further enhance the experience of hypermediacy. Live event coverage is now remediated on the internet alongside all of the other media artefacts people use to get their news. Consequently, this demand for immediate “liveness” may have affected the concept of “presence,” as evidenced by the example of the 2011 Royal Wedding. The art world has also been affected by live stream video technology. There is a difference between viewing art on a screen and viewing it in person, as visitors to websites like the Google Art Project may attest. Similarly, performance art is perceived differently through the lens of a live internet broadcast. The ability to distribute this content to a wider audience is attractive to museums, but the unique characteristics of the art piece may be lost in the process. Finally, live stream video has opened up new channels for individuals and small independent organizations to broadcast content. These videos can be short segments, internet shows, or continuous rolling footage. Such small-scale live videos also capture moments that people may not otherwise have noticed. It remains to be seen whether live stream video technology will have a lasting effect on large-scale communication networks, but for now its short-term effects are becoming more noticeable with every passing year.

Further Studies for Consideration

Here are a couple of related areas of discussion:

1. How can the producer’s perspective be further analyzed? How do producers interact with the interfaces of live stream video software?

2. A large number of websites host illegal live streams of television shows. These hyperlinks are usually passed around social networks to people who cannot access a show because they live outside its country of broadcast. What are the implications of this illegal activity?

Works Cited

Auslander, Philip. “Live From Cyberspace: Or, I Was Sitting at My Computer This Guy Appeared He Thought I Was a Bot.” PAJ: A Journal of Performance and Art 24.1 (2002): 16-21. Google Scholar. Web.

Baudrillard, Jean, and Sheila Faria Glaser. Simulacra and Simulation. Ann Arbor: University of Michigan, 1994. Print.

Benjamin, Walter, Howard Eiland, and Michael W. Jennings. Walter Benjamin: Selected Writings. Vol. 4. Cambridge, MA: Belknap Press of Harvard UP, 2003. Print.

Bolter, J. David, and Richard A. Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT, 1999. Print.

Bourdieu, Pierre, and Richard Nice. “The Production of Belief: Contribution to an Economy of Symbolic Goods.” Media Culture Society (1980): 261-93. SAGE. Web.

Duffett, Mark. “Imagined Memories: Webcasting as a ‘Live’ Technology and the Case of Little Big Gig.” Information, Communication & Society 6.3 (2003): 307-25. Google Scholar. Web.

Engstrom, Arvid, Mark Perry, and Oskar Juhlin. “Amateur Vision and Recreational Orientation: Creating Live Video Together.” CSCW (2012): n. pag. Google Scholar. Web.

Gurevitch, M., S. Coleman, and J. G. Blumler. “Political Communication – Old and New Media Relationships.” The ANNALS of the American Academy of Political and Social Science 625.1 (2009): 164-81. JSTOR. Web.

Hart, David. “Live-Streaming Marina Abramović: Crazy or Brave?” Web log post. Inside/Out. MoMA, 15 Mar. 2010. Web.

Isé, Claudine. “MoMA’s Live Streaming Marina-Cam Invites Everyone to Be Present.” Web log post. Bad At Sports. N.p., 22 Mar. 2010. Web.

Jones, Amelia. “‘The Artist Is Present’: Artistic Re-enactments and the Impossibility of Presence.” TDR/The Drama Review 55.1 (2011): 16-45. Google Scholar. Web.

Juhlin, Oskar, Arvid Engstrom, and Erika Reponen. “Mobile Broadcasting – The Whats and Hows of Live Video as a Social Medium.” MobileHCI’10 (2010): n. pag. Google Scholar. Web.

Knight, Christopher. “I See You: Marina Abramović Live Video-feed Performance at MOMA.” Web log post. Culture Monster. Los Angeles Times, 15 Mar. 2010. Web.

Manovich, Lev. “Media After Software.” Journal of Visual Culture (2012): n. pag. Web.

Manovich, Lev. Introduction. Software Takes Command: Extending the Language of New Media. [S.l.]: Continuum, 2013. N. pag. Web. Excerpt from 2008 version.

Meisel, John. “The Emergence of the Internet to Deliver Video Programming: Economic and Regulatory Issues.” Info 9.1 (2007): 52-64. Google Scholar. Web.

Sagan, Paul, and Tom Leighton. “The Internet & the Future of News.” Daedalus 139.2 (2010): 119-25. ProQuest. Web.

Shannon, Claude. “The Mathematical Theory of Communication.” The Bell System Technical Journal 27 (1948): n. pag. Web.

Draw Friends Live. Tumblr, n.d. Web.

“White House Correspondents’ Dinner 2013 Live Stream: Watch Barack Obama’s Speech, Conan O’Brien and More.” Zap2it. N.p., n.d. Web.

Wilson, Tracy V. “How Streaming Video and Audio Work.” HowStuffWorks. N.p., n.d. Web.

Wise, J. Macgregor. “An Immense and Unexpected Field of Action: Webcams, Surveillance and Everyday Life.” Cultural Studies 18.2-3 (2004): 424-42. Google Scholar. Web.

Google Glass Case Study

by: Sara Levine

I had not encountered Google Glass until very recently (in other words, when Professor Irvine mentioned it), and I was surprised to find that a number of articles and video parodies already exist even though few people have come into contact with the product. However, the news features surrounding Google Glass do not delve deeply into the product’s function as a media artefact.

The Ultimate Black Box

How does Google Glass work, exactly? An interested potential buyer who visits the Google Glass website may not find a definitive answer. The promotional video shows how “cool” the product is and demonstrates what it will be able to do, but it does not include technical details. Head-mounted displays (HMDs) are not particularly innovative, but Google Glass seems lighter and sleeker than previous models. Its minimalist design suggests that users will not have easy access to the hardware behind Google Glass. If it breaks, it won’t be a simple case of cracking open the frame and checking “under the hood,” so to speak. There is a great deal of mystery surrounding the marketing of Google Glass, and I would expect that its users will not be bothered with specifications unless it malfunctions. If potential buyers do some extra research outside of promotional materials, they may find that several developers have broken down most of Google’s recently released specifications into more comprehensible explanations. These explanations are written for users who are fluent in code, but they also de-blackbox parts of Google Glass that had previously been shrouded in mystery.

Media in Media

Manovich wrote that a metamedium uses “already existing representational formats as their building blocks, while adding many new previously nonexistent properties. At the same time…these media are expandable – that is, users themselves should be able to easily add new properties, as well as to invent new media (Manovich 23).” Bolter and Grusin wrote that “each month seems to bring new evidence of the voracity with which new media are refashioning the established media and reinventing themselves in the quest for immediacy (Bolter and Grusin 267).” So, is Google Glass just another recombination of “new media,” or a “metamedium” that has the potential for growth? It has been hailed by TIME Magazine as one of the “Best Inventions of the Year 2012,” but it seems as though Google Glass has been lauded for technological innovations that it cannot lay claim to. Google Glass is not, for example, the first instance of Google software. It seems to be a new recombination of software and media forms that Google has already introduced. It also incorporates technologies drawn from film, photography, audio, GPS, voice recognition, and Internet access.


On the other hand, Google Glass may have the potential to become a “metamedium” as Manovich described it. At the time of writing, Google has released API details and sample code for users to start experimenting with and building their own programs for Google Glass software. The “Glassware” website contains a few resources for developers, as well as introductory videos for several different programmable aspects of Google Glass. Additionally, there is a “Playground” in which developers can test out code if they have not yet been able to get their hands on Google Glass. Consequently, there is the potential for users to “add new properties” and create new software for Google Glass.
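As a rough illustration of what this early Glassware development looks like, here is a sketch of building a “timeline card” payload of the kind described in Google’s recently released Mirror API documentation. This is a sketch only: the endpoint URL and field names (`text`, `speakableText`) are assumptions based on that early documentation, the helper `make_timeline_card` is hypothetical, and a real Glassware service would also need OAuth 2.0 authorization before sending anything.

```python
import json

# Hypothetical endpoint, per early Mirror API documentation;
# treat this as an assumption rather than a verified reference.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def make_timeline_card(text, speakable=False):
    """Return a JSON body for a simple text card that a Glassware
    service could insert into a user's Glass timeline."""
    card = {"text": text}
    if speakable:
        # Glass could read a card aloud when a speakable version was set.
        card["speakableText"] = text
    return json.dumps(card)

body = make_timeline_card("Hello from Glassware!", speakable=True)
# An actual service would POST `body` to MIRROR_TIMELINE_URL with an
# OAuth 2.0 bearer token; that network step is deliberately omitted here.
```

Even in this toy form, the point Manovich raises is visible: developers extend Glass not by touching its hardware but by composing structured messages that its software layer knows how to render.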

Google Glass is marketed and presented to users as software. Manovich wrote in “Software Takes Command” that “We live in a software culture – that is, a culture where the production, distribution, and reception of most content – and increasingly, experiences – is mediated by software (Manovich 19).” The majority of users will only interact with the surface features of Google Glass’ interface. Those who are well-versed in code may manipulate the software directly, but that seems to be the deepest level of interaction possible. Manovich gives the example of a digital photograph taking on different properties and functions depending on the software that displays it. Google Glass may be just another piece of software in an increasingly software- and app-based world, but its presentation sets it apart from others.

Absence of Augmented Reality

Google Glass sits in front of the eye and is hands-free, controlled by the user’s voice. It is meant to be intuitive, so that the technology can be instantly accessible without the need to manually turn anything on or off. “…what best explains the distinctive features of human intelligence,” writes Andy Clark, “is precisely their ability to enter into deep and complex relationships with nonbiological constructs, props, and aids (Clark 5).” As “natural-born cyborgs,” we may already possess the ability to enter into such a relationship with Google Glass. It seems that our natural propensity for combining our mental functions with electronic tools is facilitated by placing the screen directly over the eye. I was going to write about augmented reality technology here, and more specifically about Jurgenson’s “Digital Dualism versus Augmented Reality,” but one of the specifications that Google recently released states that Google Glass does not currently contain augmented reality technology. The developers’ breakdown stresses that Google Glass is not simply a “replacement” for mobile or desktop computing. However, the absence of augmented reality seems to reduce Google Glass to remediated software on a screen in front of users’ eyes. An augmented reality feature would support Jurgenson’s stance that the digital and physical are enmeshed rather than separated in a dualistic point of view. It could be argued, however, that the placement of Google Glass over the eye supports Jurgenson’s argument as well. Even if it is not augmenting the user’s reality, the user is still combining the two realities by using Google Glass in reaction to physical events (taking a picture, recording sound, etc.).


Discussion and speculation surrounding Google Glass are only just starting to pick up, but it is interesting to make note of some of the context through which Google Glass is emerging. Google Glass has been in development for some time, and is only now being shipped to those on an exclusive list of “Glass Explorers.” Despite the fact that Google Glass is not yet widely distributed, the existence of the sample code and other forward-looking features indicates that Google is preparing for an explosion of demand for this product. Reactions to Google Glass have varied widely. Some believe that it will further alienate us from “the real world,” while others are excited about the possibilities for hands-free computing. Whatever the outcome, Google Glass’ entrance into the market as a media artefact will certainly be notable.


Bolter, J. David, and Richard A. Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT, 1999. Print.

Clark, Andy. Natural-born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford: Oxford UP, 2003. Print.

“Developing for Google Glass.” Breaking the Mobile Web. N.p., 16 Apr. 2013. Web.

Dvorak, John C. “Why I Hope Google Glass Flops.” PCMAG. N.p., 15 Apr. 2013. Web.

“Google Glass.” Google Developers. N.p., n.d. Web.

“Google Glass API Documentation Now Live, Glassware Sample Code Provided.” Engadget. N.p., 15 Apr. 2013. Web.

“Google Glass.” Google Glass. N.p., n.d. Web.

“Google Glass Playground.” Google Developers. N.p., n.d. Web.

“Google Glass.” Wikipedia. Wikimedia Foundation, 17 Apr. 2013. Web.

Jurgenson, Nathan. “Cyborgology.” The Society Pages. N.p., 24 Feb. 2011. Web.

Manovich, Lev. “Media After Software.” Journal of Visual Culture (2012): n. pag. Web.

Manovich, Lev. “New Media from Borges to HTML.” Introduction. The New Media Reader. By Nick Montfort and Noah Wardrip-Fruin. Cambridge, MA: MIT, 2003. N. pag. Print.

Price, Emily. “Google Glass Ready to Ship for Some Explorers.” Mashable. N.p., 16 Apr. 2013. Web.

Rivington, James. “Google Glass: What You Need to Know.” TechRadar. N.p., 10 Apr. 2013. Web.

Toolkit Formation and Thoughts on Interface

Sara Levine

It seems that no two analysts’ toolkits are the same. Certain theories resonate more strongly with some analysts than with others. For example, Chandler’s website, Semiotics for Beginners, functions as a toolkit for budding semioticians. However, Chandler’s colleagues may disagree with the organization of Semiotics for Beginners or with the omission of certain concepts. Consequently, the toolkit that I have begun to outline here may be particular to my interests and is not intended to be used as a general reference.

Here is my first draft:

Don’t Take It Out of Context
I would make it a priority to learn the context surrounding the development of the form of media or technology that I am analyzing. Most media artefacts were not created in a vacuum, and their histories may reveal a new issue or perspective. Lisa Gitelman’s article in which she emphasized the historical significance of the ink and paper of the Salem Witch Trials records would make a good reference for this lens of analysis.

Technical Content
The “black box” effect indicates that the user of a media or technology artefact is unaware of the technical processes involved in its usage. For example, I can use a computer but I may not understand the technical details involved in saving my documents or sending an email. Consequently, it is important to ask: What does the user see and interact with? What is invisible to the user? Lev Manovich’s discussion of number-based operations contained within “new media” that users do not interact with might be a good resource for this topic.

Semiotics
There is a wide variety of semiotic concepts to choose from when analyzing media, and Daniel Chandler’s Semiotics for Beginners is a great resource. Barthes may work well with Chandler’s basic overview because Barthes introduces new layers of interpellation and the signification of myth. For example, if I were analyzing a web comic, I would draw from Chandler when studying specific panel construction. I would then consider Barthes’ concepts in order to analyze the web comic in terms of codes, ideologies, and target audiences.

Cognitive Processes and Interface
It seems more effective to group cognitive science and interface together because interactivity between a media interface and users usually demands some form of cognitive work on the part of the user. If we return to the web comic example, the semiotic analysis may reveal certain meaning-making processes involved in reading the comic. The cognitive and interface analysis might uncover certain aspects that are not covered by semiotics. This includes how a user interacts with the software that displays the web comic. Specific readings that may be helpful with this analysis include most of Andy Clark’s writings, McLuhan’s “The Medium is the Message,” “Distributed Cognition,” and Lakoff’s “Conceptual Metaphor.”

Powerful Combinations
Intertextuality and intermediality could be explored as the final component of the toolkit. Manovich, Bolter, Grusin, and Clark discussed the ways in which media forms encapsulate each other, much like a Russian nesting doll. An iPhone, for example, is composed of many different media forms that came before it, including the photographic camera, video camera, telephone, etc. Intertextuality can be analyzed under this topic as well, but may be more applicable to a text within the media artefact. The iPhone provides users with a personal assistant named Siri. If the user asks certain questions, Siri will answer with jokes and ironic statements that a user may only be able to appreciate if she or he is familiar with another text, such as the Star Trek series.

I look forward to refining this toolkit and perhaps applying it to a case study as the semester comes to a close. Additionally, I would like to point out two concepts that stood out to me with regard to interface.

Andy Clark’s perspective on spatialization and spatial grouping reminded me of how tagging has become a spatial process on blogging websites such as Tumblr. Tumblr’s interface allows users to tag posts and “follow” tags that they are interested in. The popular tags that people follow then organize a space for users to have discussions in. Consequently, this interface for tagging leads many users to refer to tags as places rather than labels. A common complaint among Tumblr users is that other users should “stay out” of a certain tag. This implies that there are physical boundaries in place around each tag. These boundaries may be breached when a user adds a label to a blog post, but the language surrounding this action implies that the offending user has walked into a room as an unwelcome guest.

Another concept discussed in terms of interface is hypomnesis. Stiegler uses the example of advancements in automobile technology: the more advanced this technology becomes, the less we have to know in order to operate the vehicle. Consequently, we are forgetting how to drive. Another example that might be applicable here is attribution and copyright. The use of artists’ images without permission is an issue that continues to inspire heated debate on Tumblr, Pinterest, Facebook, and other websites. Tumblr’s interface allows users to re-blog an artist’s work without any attribution because the source link always appears at the bottom of the post. However, there are many instances in which a Tumblr user has posted a piece of artwork that was not their own and had been taken from another site without proper citation. I became involved in a similar situation when a pet supplies company took an image from my DeviantArt gallery and posted it on their Facebook page. I learned about it by chance and sent the company a message asking why I had not been cited or contacted about the use of my artwork. The company replied that they thought the watermark on my image was enough, but eventually made the changes I asked for. It seems that the advanced sharing and re-blogging components of certain website interfaces have further eroded our ability to attribute sources for creative works.

Allen, Graham. Roland Barthes. London: Routledge, 2003. Print.
Barthes, Roland, and Stephen Heath. “From Work to Text.” Image, Music, Text. New York: Hill and Wang, 1977. N. pag. Print.
Barthes, Roland, and Annette Lavers. Mythologies. New York: Hill and Wang, 1972. Print.
Bolter, J. David, and Richard A. Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT, 1999. Print.
Chandler, Daniel. “Semiotics for Beginners.” Semiotics for Beginners. N.p., n.d. Web.
Clark, Andy. Mindware: An Introduction to the Philosophy of Cognitive Science. New York: Oxford University Press, 2001.
Clark, Andy. Natural-born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford: Oxford UP, 2003. Print.
Clark, Andy. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford: Oxford UP, 2008. Print.
Gitelman, Lisa. Always Already New: Media, History, and the Data of Culture. Cambridge, MA: The MIT Press, 2008. Excerpt from Introduction.
Hollan, James, Edwin Hutchins, and David Kirsh. “Distributed Cognition: Toward a New Foundation for Human-computer Interaction Research.” ACM Transactions, Computer-Human Interaction 7, no. 2 (June 2000): 174-196.
Lakoff, George. “Conceptual Metaphor.” Excerpt from Geeraerts, Dirk, ed. Cognitive Linguistics: Basic Readings. Berlin: Mouton de Gruyter, 2006.
Manovich, Lev. The Language of New Media. Cambridge, MA: MIT, 2002. Print.
Manovich, Lev. “Media After Software.” Journal of Visual Culture (2012): n. pag. Web.
Manovich, Lev. “New Media from Borges to HTML.” Introduction. The New Media Reader. By Nick Montfort and Noah Wardrip-Fruin. Cambridge, MA: MIT, 2003. N. pag. Print.
McLuhan, Marshall. “The Medium is the Message,” Excerpts from Understanding Media, The Extensions of Man, Part I, 2nd Edition; originally published, 1964.
Stiegler, Bernard. “Anamnesis and Hypomnesis.” Ars Industrialis. N.p., n.d. Web.


Manovich, the iPhone, and Pictures Under Glass

Sara Levine

Manovich re-energizes the concept of “new media” by attempting to narrow down the specifics of this umbrella term. He explores a variety of components that make up “new media,” including its variability, transcoding processes, and interactivity. Manovich also posits that many of the innovative aspects of “new media” rely on older forms of media. Similarly, Bolter and Grusin emphasize that mediation and remediation are inseparable. I will use the iPhone as an example in order to determine whether or not these definitions are applicable to modern day media platforms.

Case Study: the iPhone

The Apple iPhone is a “new media” technology that has become an incredibly popular entertainment, communication, and computing device. There are a few key components of the iPhone’s design that highlight the way “new media” forms are presented and consumed by the general public. Its presentation as a desirable product is communicated through advertising, and Apple ran a very successful campaign. Manovich discussed marketing in The Language of New Media (Manovich 60). Instead of targeting mass audiences, companies like Apple have been emphasizing the individual. Advertisements for the iPhone highlight features like Siri in order to present the iPhone as customizable and attuned to personal needs. However, the user experience is also presented as simple and direct. Bolter and Grusin point out that there is a noticeable effort to erase the user’s awareness of the medium. The interface of the iPhone, therefore, should be intuitive to the user.

The Black Box

Here are a couple of diagrams I found through Google Image Search of the components of an iPhone:

Manovich explained that “new media” objects such as the iPhone are composed of digital code, automation, and other number-based elements (Manovich 48). However, these are not the parts of “new media” that users directly interact with. Manovich posited that the role of software in the structure of “new media” is much more important than many of us may realize. He argues that “While digital representation makes possible for computers to work with images, text, sounds and other media types in principle, it is the software that determines what we can do with them (Manovich 3).”

Software’s Centrality

Manovich’s concept that “There is Only Software (Manovich 4)” may be applicable to the iPhone’s software components. The software for the iPhone is generally referred to as apps, or applications. Users do not work with the numerical representations of the iPhone’s functions. Instead, they use the software as translations of these hidden processes. Consequently, users forget about the invisible technical details of the technology. As Manovich writes, “media becomes software (Manovich 12).”

New Media as Post-Media, Meta-Media, or Remediation

The implementation of software and apps on the iPhone is not particularly innovative. Software has been used as a method of communicating with hardware since the early days of PC computing. Manovich, Bolter, and Grusin all discussed this process of building on old forms of media in order to produce the “new media” that we interact with on a daily basis. “New media” technologies such as the iPhone combine a variety of familiar media platforms, cultural semiotic codes, and other primary building blocks. The iPhone, for example, contains numerous instances of this remediation. The camera and its application borrow from photographic tools and digital photography. iMessage seems loosely based on many instant messaging programs. The phone component combines the functions of a cell phone, PDA contact list, and voicemail machine. Additionally, Apple includes its iPod capabilities in the iPhone’s music software. All of these components derive from earlier versions of media. They are combined and reformatted in a remediation process that forms the newest meta-medium: the iPhone.


Manovich discussed HCI in his writing, and used the game Myst as an example of “new media” that made use of cinematic techniques as part of its interface (Manovich 81). The touchscreen seems to be the most prominent method of interaction between user and iPhone. A couple of years ago I read a blog post by Bret Victor about touchscreen technology. The post was titled “A Brief Rant on the Future of Interaction Design,” and it argued that touchscreen technology such as that implemented in the iPhone does not successfully utilize all of the capabilities of the human hand. He calls the touchscreen “Pictures Under Glass,” and posits that “Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade.” This discussion of human capability and user interaction reminds me of the importance of cognitive science within the realm of “new media.” Victor seems to believe that the touchscreen of an iPhone numbs the senses in a human limb that has historically been used to manipulate tools in a tactile manner. How are our cognitive processes affected by this apparent repression? Victor also posits that Pictures Under Glass is a “transitional technology.” He pleads with researchers to look into the development of technologies that work with more intuitive gestures and hand motions rather than simple slide-across-the-screen movement. The touchscreen technology may not possess the same familiarity as the cinematic techniques used in Myst. Its history of remediation may not stretch back very far, and Victor makes the claim that the interactivity required by a touchscreen does not take advantage of the full range of cognitive and physical processes displayed by humans.


Bolter, J. David, and Richard A. Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT, 1999. Print.

Manovich, Lev. The Language of New Media. Cambridge, MA: MIT, 2002. Print.

Manovich, Lev. “Media After Software.” Journal of Visual Culture (2012): n. pag. Web.

Manovich, Lev. “New Media from Borges to HTML.” Introduction. The New Media Reader. By Nick Montfort and Noah Wardrip-Fruin. Cambridge, Mass. [u.a.: MIT, 2003. N. pag. Print.

Victor, Bret. “A Brief Rant on the Future of Interaction Design.” Web log post. Bret Victor. N.p., 8 Nov. 2011. Web. <>.

The Pop Culture Engine: Mediology and the San Diego International Comic-Con

Sara Levine

Earlier in the semester, I analyzed Shannon’s and Foulger’s models of communication through the lens of fandom interaction. These models come into play again through the study of mediology. Shannon’s model of communication divorces the communicative process from the cultural, political, economic, semiotic, and other processes with which it may be intertwined. Shannon’s and Foulger’s models seem to imply that there is a cause and effect process involved in studying media. Mediology, on the other hand, posits that culture and technology are so intertwined that they should not be studied separately.

Instead of concentrating on one single interaction between creator and consumer on Twitter, it may be more effective to analyze a broader form of media. In terms of fandom, nothing is bigger or broader than the annual San Diego International Comic-Con. Hundreds of thousands of people flock to California every summer to attend this massive media event. How can Comic-Con be analyzed through mediology? How does this analysis compare to that of Shannon’s and/or Foulger’s models?

Image: Exhibition Hall 2011

Comic-Con as Combinatorial

Other comic conventions such as Emerald City Comicon, Angoulême International Comics Festival, and New York’s MoCCA Festival are focused on the singular medium of comics and cartoon art. San Diego Comic-Con, however, is combinatorial on multiple levels. The physical space of the arena is transformed into hundreds of booths that are categorized by medium. There are comics dealers, t-shirt vendors, independent and small press comics, video game companies, television and movie studios, toy collectibles, and the list goes on. Guests and exhibitors communicate through a variety of mediums as well. There is person-to-person contact, but there are also free giveaways and game demonstrations. Shannon’s and Foulger’s models are present when a comic artist signs an autograph, or when a guest tweets to a friend across the room about a free poster she or he just snagged from a vendor. However, those models do not allow for the scope and underlying conditions of Comic-Con. In addition to the media forms present in the exhibition hall, there are also panels and events throughout the weekend. Panels take place in large rooms in which there are interviews, sneak previews for shows and movies, podcasts, workshops, etc. Outside of the convention center, San Diego embraces fan culture by incorporating Comic-Con events. Restaurants are taken over by TV shows, there are a number of parties at any given location throughout the city, and hotels employ shuttles that carry guests to and from the convention center. Comic-Con exists within virtual media forms as well. Twitter is overrun by attendees spilling news and jealous non-attendees discussing the panels as they happen. Attendees may also post pictures, videos, and blog posts within the context of the convention. Shannon’s and Foulger’s models seem too narrow to accommodate such a large-scale media event.

Image: An excited fan asks for an autograph from Pen Ward, the creator of Cartoon Network’s Adventure Time.

Comic-Con’s Embedded Institutions

Debray explained in Media Manifestos that he is “interested in the power of signs (Debray 6)” rather than their meanings. Comic-Con seems to harness that power in order to produce both monetary and cultural value.

The most harrowing experiences occur before the convention actually takes place. I am referring to ticket purchasing and hotel reservations. Tickets officially go on sale during the previous year’s convention. Later on, the remaining tickets are sold online but disappear within hours. Hotels are similarly booked up months beforehand. There are several economic underpinnings to this interaction. Comic-Con is not cheap, and so the buyer must have the economic means to purchase a ticket. Sociologically speaking, the average Comic-Con attendee may be of a certain class that can afford the expenses. Then there is the matter of navigating the ticket website, which may or may not be designed to help or hinder the potential attendee. Other economic factors involved in guest attendance include the sales of airline tickets, hotel bookings, restaurant reservations, car rentals, and many other industries that are able to feed off of the mania surrounding the convention.

The cultural forces and conditions involved in Comic-Con are similarly expansive, and are nearly inseparable from economic forces. Large media companies attend in order to build up a fanbase before releasing content. Small independent distributors and artists attend in order to sell their work or look for work. Many guests attend in order to purchase comics, artwork, figurines, and other items. However, there are other conditions of Comic-Con that are not as deeply enmeshed in profit margins. As I mentioned earlier, fanbases are created at Comic-Con to help a new television show or film gain support before its release date. The creation of a fanbase over the course of a weekend is a remarkable feat, and could not be accomplished without the involvement of celebrities, branding, social media, and well-constructed preview material. This process is the formation of fan culture within a contained space. Additionally, other fandoms come to Comic-Con in order to reinforce their loyalty and camaraderie.

Image: Cartoon Network takes over a restaurant for Comic-Con weekend.

Communication vs. Transmission

Steven Maras writes that “Debray casts doubts on a complete separation” between communication and transmission (Maras “On Transmission”). Further analysis of Comic-Con may support the idea that these concepts are folded within each other. The communication process of pop culture seems to overlap with the reification of pop culture over time. Pop culture manifests within many institutions and mediums (there may be some parallels to organized religion in that regard), but all of these converge on San Diego each summer. Comic-Con has become synonymous with, and symbolic of, the umbrella term “pop culture.” The convention’s semiotic power is rooted in its communicative and transmission processes.

Image: Exhibition Hall 2011

Is Comic-Con a Black Box of Pop Culture?

Comic-Con may function as a black box for someone who is unfortunately unable to attend. Guests enter the convention doors, and a cacophony of news and hype pours out into the virtual world. New films and shows are either lauded as the next hot cultural phenomenon, or they are declared dead on arrival. Celebrities make surprise appearances, and some clips may make the rounds within a few minutes of their appearance. Mainstream comic book characters are introduced or killed off within the course of a panel. All of this occurs in one place and over the course of one weekend. Even the attendees may not be aware of how news spreads of a free giveaway or disappointing panel. It seems as though the process of producing culture through Comic-Con is not clear. Non-attendees soak up the information and react to bad news accordingly. Attendees rush through the crowded hall without much knowledge of the convention’s inner workings. A more specific example of this potential black box occurred in 2010, when a new film called Scott Pilgrim vs. the World debuted at Comic-Con. The film was based on a series of fairly popular comic books, and it was a rousing success at Comic-Con. Rave reviews came in from those who saw it at the convention that weekend, and Comic-Con’s adoration of the movie seemed to be a good omen for its success. Unfortunately, Scott Pilgrim did very poorly at the box office and is now generally considered to be a cult film. Comic-Con’s failure in this case reveals the mysterious nature of its underlying processes. So many different forms of media make up the input and output of the convention that it may be difficult to pinpoint what exactly goes on over the course of that weekend.


Chandler, Daniel. “Processes of Mediation.” Processes of Mediation. N.p., n.d. Web.

Debray, Régis. Media Manifestos: On the Technological Transmission of Cultural Forms. London: Verso, 1996. Print.

Debray, Régis. Transmitting Culture. New York: Columbia UP, 2000. Print.

Foulger, Davis. “Models of the Communication Process.” Models of the Communication Process. N.p., n.d. Web.

Irvine, Martin. “Working With Mediology: From Theory to Analytical Method.” N.p.: n.p., n.d. N. pag. Web.

Maras, Steven. “On Transmission: A Metamethodological Analysis (after Régis Debray).” The Fibreculture Journal 12 (2008): n. pag. The Fibreculture Journal. Web.

Salkowitz, Rob. Comic-con and the Business of Pop Culture: What the World’s Wildest Trade Show Can Tell Us about the Future of Entertainment. New York: McGraw-Hill, 2012. Print.

Vandenberghe, Frédéric. “Régis Debray and Mediation Studies, or How Does an Idea Become a Material Force?” Thesis Eleven (2007): n. pag. Web.



Benjamin’s Work in the Age of Digital Reproduction


Sara Levine

Walter Benjamin’s “The Work of Art in the Age of Mechanical Reproduction” is remarkable not only for its historical significance in communication and media theory, but also for its own reproducible characteristics. Professor Irvine wrote in his presentation “Mediation and Representation: Plato to Baudrillard and Digital Media” that if we replace the terms “mechanical” and “technical” with “digital,” then the concerns addressed in Benjamin’s piece seem similar to modern-day concerns about digital reproduction. In other words, Benjamin’s theories continue to hold relevance for modern-day reproductions of work through digital means. For example, Benjamin’s concepts can be applied to the Google Art Project.

Virtual Gallery Tour

One of the main features of the Google Art Project is the ability to walk through museums’ floor plans. Similar to the “Street View” option on Google Maps, the Virtual Gallery Tour allows visitors to navigate through collections from a first-person perspective with directional controls. This produces a unique experience for users that can be linked with Benjamin’s concepts of aura and ritual. Benjamin was interested in the idea that a work of art loses its aura, or unique characteristics, when it is reproduced. The aura of a piece of art is inseparable from its ritualistic basis, whether that ritual is religious or secular. However, Benjamin was quick to point out that works of art produced through mechanical means (such as a photograph) have successfully removed this ritualistic aspect. In the Virtual Gallery Tour, the works of art are not being reproduced as static images. Instead, they are being reproduced within an environment that possesses a ritualistic aura. It might then be necessary to analyze the Google Art Project’s efforts to substitute for the realness of a museum in terms of what Baudrillard might consider a simulation. The meaning making involved in physically standing inside a museum may possess a uniqueness that cannot be successfully recreated through the Google Art Project. There are no other visitors in the room with you, you are not able to smell the slow decay of the paintings, and you will never have the experience of repeatedly asking security guards where the exit is located. Are these experiences necessary to absorb and create meaning from a work of art? It may depend on the piece of art and the person.


Artwork View

Another feature of the Google Art Project is Artwork View, in which a visitor may roll over a piece of artwork and zoom in on the details. This feature is reminiscent of the advent of the close-up, in which a filmmaker may fill the frame with one detail of an object. Benjamin wrote that this technological advancement served to heighten a person’s “apperception” (Benjamin XIII). Close-ups expand the viewer’s sense of space. Details of objects or movements that had before seemed mundane or nearly invisible (the tremor of a hand, scratches on a doorknob, a drop of condensation running down a glass, etc.) became sources of intense analysis. If these close-ups fill the screen, then the audience has no choice but to become aware of these new optical angles. The same could be said of the Artwork View function. This zoom-in feature allows users to expand their fields of vision and make note of details that might not have been perceptible while in the physical presence of the artifact. A visitor at a museum may be scolded for getting too close to an artifact, and our eyes do not have the optical zoom features of a camera. Consequently, the Artwork View option may have a similar effect on viewers as the close-up did on film audiences. This could result in a Google Art Project user developing a different interpretation of the piece than if she or he relied on her or his own eyesight to view the artifact.

Artwork Collection

The Artwork Collection function allows a user to “save” a piece of artwork to her or his “collection”. Consequently, the concept of a gallery is reproduced digitally so that any user can take on the role of an art collector. Similar to the Virtual Gallery Tour, the Artwork Collection may inadvertently remove the ritual value and/or uniqueness of an artifact. The artifact is digitally reproduced within a user’s collection, and subsequently removed from its physical place within a museum’s collection. This change in environment could be considered a paradigmatic alteration that may have a large impact on the ways in which the artwork is perceived by the user. Additionally, Google Art Project emphasizes the fact that users can personalize their own collections. We are encouraged to organize the artwork according to our own tastes. Therefore, users can transfer artworks out of their environments and then rearrange them or place them next to artworks that are physically located halfway around the world. This significantly alters the meaning making process involved in viewing the artwork.

A Piece of Art Reproduced in a Film Reproduced in a YouTube Clip

The following excerpt from the film Ferris Bueller’s Day Off serves as another example of reproduction; however, there are a number of processes occurring simultaneously. On the surface, the clip shows the three main characters visiting The Art Institute of Chicago, but it can also be viewed through several layers of reproduction and semiotic analysis. The artwork is reproduced through the film medium, which can manipulate space and time in order to emphasize certain details of the artwork. The film is then reproduced digitally and archived on YouTube. In order to keep this blog post at a reasonable length, I will list some thoughts that struck me while viewing this clip.

  • The camera lingers on certain paintings and groups of artifacts. The framing for each shot reproduces the art in such a way that the viewer’s perception of the art may be altered by the use of camera movement and close-up.
  • The sequence at 1:07 in which Cameron Frye stares at the Seurat painting deserves its own bullet point. The editing technique in this sequence makes use of the close-up, in which spatial perception is heightened. This heightened perception allows viewers to engage in meaning making. Narratively, these close-ups signify that Cameron identifies with the little girl in the painting. Additionally, viewers may find that the close-up of the painting reveals details about Seurat’s pointillist technique that had not been visible to them before.
  • The camera shakes slightly through each shot. This signals to the audience that they are watching a live action shot of motionless artifacts instead of still images.
  • It is also interesting to note that some shots contain people and others do not. Perhaps the shots without people are meant to evoke a personalized experience similar to the one in the Google Art Project.
  • Another shot that deserves its own bullet point is the one at 0:57, in which the three main characters are viewing artwork. They are framed artistically through the film medium. The vector lines within the composition of this shot seem to emphasize Ferris as the largest (and therefore most important) character on screen. Instead of focusing on the artwork, perhaps the audience is being asked to turn a critical eye on the critics.
  • I am able to point out specific shots because I paused and scrolled through the timeline of the clip multiple times. Audiences who watched this film when it was released in theaters did not have this option until it was released on VHS tape. However, VHS tapes did not contain scrub bars or preview images to make this process easier for the viewer. Benjamin wrote that “No sooner has his eye grasped a scene than it is already changed” (Benjamin XIV). This constant change has been slowed down through options such as timeline bars and buttons that rewind ten seconds of video. This may significantly alter the viewing experience.


Baudrillard, Jean. Simulacra and Simulation. Ann Arbor: University of Michigan, 1994. Print.

Benjamin, Walter, Hannah Arendt, and Harry Zohn. Illuminations. New York: Harcourt, Brace & World, 1968. Print.

Ferris Bueller’s Day Off. Dir. John Hughes. 1986. Film.

“Google Art Project.” Google Art Project. N.p., n.d. Web.

Irvine, Martin. “Mediation and Representation: Plato to Baudrillard and Digital Media.” Lecture.



Intertextual Rendez-Vous: Viewing The Triplets of Belleville from an American Perspective

by: Sara Levine
In order to study how audience members engage with various media forms, it may be necessary to draw on linguistic theories of text and intertextuality. A film, for example, would be referred to as the text. A text exists as a convergence of meanings (or signified meanings in the semiotic sense of the word) and is always “read” simultaneously with other texts. This intertextual process when applied to a medium such as film could expand outward from language, authorship, and genre to variations of sound, visual presentation, narrative structures, celebrity, etc. A text may have been produced with a certain subculture of addressees in mind. Once the text is disseminated, however, the author’s control over the meaning of her or his work is eclipsed by the audience’s reception. The Triplets of Belleville, created by Sylvain Chomet in 2003, is a film that seems to demand that viewers combine the various encyclopedic texts that they carry around with them in order to enjoy the film.

Visual Presentation: Animation
Animated films in America are, for the most part, relegated to the genre of children’s films. By 2003, many animation companies had turned to three-dimensional animation techniques, and 2D animation had fallen by the wayside. 2D animated characters were lively, colorful, and fairly innocent in order to reflect the characteristics of their young audience. Many Americans grew up with Disney films such as Snow White and the Seven Dwarfs, The Lion King, etc. that seemed to set codified standards for the animated feature film genre. Consequently, American audiences sat down to watch The Triplets of Belleville while comparing it with various other texts and animation codes. As Radford wrote, “You are not reading this text at random, but rather in conjunction with other texts that you have read or are familiar.” In this case, American audiences inevitably draw a sharp contrast between the text they are engaging with and the encyclopedic knowledge of texts that they carry with them as Americans. The animation in The Triplets of Belleville is unlike most animated styles favored by American production companies. Motion is not as exaggerated as in American cartoons, and there is a focus on small movements. The palette is not particularly bright or seemingly “happy.” In fact, there are adult situations within the narrative that would not be found in the majority of American animated feature films. The only animated segment that comes close to American animation is the very beginning, in which the audience is introduced to the titular Triplets of Belleville. The animation is reminiscent of the famous “Steamboat Willie” short produced by Disney in 1928, but features more mature content. This comparison made between the animation style of The Triplets of Belleville and American animated feature films may have led American audiences to view The Triplets of Belleville as more of a highbrow, artistic piece of work rather than a piece of mindless entertainment for children.
Music
The Triplets of Belleville relies heavily on its soundtrack in order to move the narrative forward. The music composed for the film draws from a wide variety of genres and combines musical stylings that are both familiar and unfamiliar to American audiences. There is an upbeat jazz rendition of “Belleville Rendez-vous”, the accordion music during the bicycle race, and the piece composed entirely of sounds when Mme Souza joins the Triplets in a performance. The jazz piece may combine an American audience’s knowledge of jazz as a remnant of history and as an indicator of nostalgia. The accordion music and other songs the characters play on phonographs may be meaningful in that they are representative of French culture to Americans. Again, the combinatorial nature of pairing distinctly non-American animation with non-American music may indicate to American audiences that this film is to be viewed as innovative. The piece of music that is made up entirely of sounds may require a great deal of intertextual processing. There is the interpretive process of recognizing the sounds, pairing them with the equipment that is producing the sound (newspaper, refrigerator, vacuum cleaner, etc.), recognizing the noises as music, and placing this music within the context of previous knowledge of music and its genres. Utilizing noise in order to create music is not a new phenomenon, but its combination with Chomet’s animation and storyline presents it as unfamiliar in this context to American audiences.

Chomet also uses music to indicate which characters are on screen. The villains of this story, for example, have their own theme song that returns every time these characters appear. This method of recall within the text allows the composers to play on variations of the villains’ theme. The tune will be recognizable to audiences, and they will be able to remember this theme the next time it plays in the movie. The villains may not even necessarily be in the shot for audiences to hear the music and know that the villains are somewhere in the vicinity.
Language, or Lack Thereof
The Triplets of Belleville is particularly invested in its music because there is a distinct and noticeable lack of dialogue or subtitles throughout the film. Some characters, like the Triplets, communicate in grunts and short noises made from the throat. Brief snatches of French can be heard from televisions and radios. Otherwise, the film is propelled through sound and music. The disappearance of dialogue helps to destroy the language barrier that otherwise may have impeded an American audience’s interpretive process while engaging with the text. However, this absence may be one of the film’s most noticeable aspects because most modern-day media forms rely on dialogue and/or subtitles.
Character Design
Character design in The Triplets of Belleville does not align with what American audiences are familiar with in terms of animation style. Heavy characters are massive and take up large sections of the screen, whereas the bicyclists are wiry and bony. Noses tend to protrude out of the face and are larger than the characters’ heads in some cases. It can be inferred, therefore, that these designs are not meant to convey the same amount of what could be considered “cuteness” as in American animated feature films. The exaggerations in size and placement of body parts affect the way in which the characters move across the screen. American audiences may therefore draw on other cultural texts, in terms of American animations but also the way people look in the physical world. Belleville seems to be a stand-in for New York City, as indicated by the rather large depiction of the Statue of Liberty holding a hamburger. Americans are drawn as almost grotesquely large people who waddle down the street, and the city is depicted as a crowded and claustrophobic setting. American audiences, therefore, will inevitably view these designs through their particular cultural norms and viewpoints about their own culture while they engage with the text.
Hero’s Journey
Although this film does not strictly adhere to Joseph Campbell’s concept of “The Hero’s Journey,” in which a young (usually male) hero must heed the call to face certain challenges on a journey to a magical or supernatural setting, the tenets are visible enough that American audiences may use it to draw meaning from the film’s narrative. In The Triplets of Belleville, Mme Souza must travel to Belleville in order to rescue her grandson from the sinister machinations of the French mafia. Many of the main aspects of “The Hero’s Journey” are present in the narrative, including a call to action, several dangerous obstacles, help from almost supernatural beings (the Triplets), and the actual journey to an unfamiliar setting. However, the gendered and ageist stereotypes of most of these types of narratives are done away with in order to present Mme Souza and the Triplets as the heroes of the film. The narrative structure is familiar to most American audiences, but it is combined with an elderly female character type in place of the young hero. The elderly are not typically featured as main characters in American film and television. In this case, however, their roles are subverted and combined with “The Hero’s Journey” narrative in order to create a story that is at once familiar and unfamiliar to American audiences*.
History, Geography, and French Culture
There are certain references within The Triplets of Belleville that American audiences may not pick up on because these references are intended for French audiences. The mania surrounding the Tour de France, for example, may not be ingrained in the American audience’s cultural encyclopedia. Similarly, certain character and setting designs may hold particular meaning for French audiences that does not carry over to American audiences. There may be, for example, a signified meaning for French people in the way the bicyclists are drawn and animated in an almost equine manner. Additionally, while American audiences may be well-versed in the presentation of nostalgia for the Jazz Age in New York City, they might be less informed about the changes in Parisian society and culture in the first half of the 20th century. The Triplets of Belleville, however, combines these histories and cultural references in order to tell a story that spans these two cities.
It is also important to note that Chomet may have borrowed or drawn upon various film and animation codes from other French films and animations that are not evident to American audiences. Consequently, the distribution of this film may have introduced certain codes and genres that seem new and innovative to American audiences. However, the rich and varied history of French culture and media production most likely had some amount of impact on Chomet’s work.

*This, however, has become less unusual with films such as Pixar’s Up.


Agger, Gunhild. “Intertextuality Revisited: Dialogues and Negotiations in Media Studies.” Canadian Journal of Aesthetics 4 (1999): n. pag. Web.

Barthes, Roland, and Stephen Heath. “From Work to Text.” Image, Music, Text. New York: Hill and Wang, 1977. N. pag. Print.

Radford, Gary P. “Beware of the Fallout: Umberto Eco and the Making of the Model Reader.” The Modern Word. N.p., n.d. Web.

The Triplets of Belleville. Dir. Sylvain Chomet. By Sylvain Chomet. 2003. DVD.

Barthes and Zombies

Sara Levine

Meaning-making processes take place both within and across media forms and genres. It may prove useful to deconstruct these intersecting symbolic systems and look at them individually before being able to draw definitive connections. For example, the “Zombies, Run!” iPhone/Android application draws on multiple meaning-making processes in order to immerse the user into the experience of the game.

What is “Zombies, Run!”?

“Zombies, Run!” is an interactive game for iPhone and Android phones developed by Six to Start and Naomi Alderman. The premise of the game is that the user is a survivor of the zombie apocalypse along with the other characters in the game. The user is taken through a series of “missions” that follow a storyline about a makeshift base called Abel Township. Characters instruct the user and talk to each other through the user’s headphones. The user must complete each mission by running for about a half hour or more. The user is referred to as “Runner 5”, and is part of a team of “runners” that collect supplies and other important artifacts for the survival of Abel Township. If the user turns on “zombie chases,” then she or he must increase her or his pace during certain moments in each mission in order to outrun zombies. The game uses audio signals to tell the user when zombies are close by. When characters are not speaking to the user, the game plays the user’s music. “Zombies, Run!” works well with Barthes’ cultural semiotics model in that it makes use of the second mythological layer of signification that Barthes explored in his writing.

The Zombie Genre
Barthes introduces a concept for semiotics that embodies the cultural component of any given sign. He describes the signification of myth as signs that are re-used by cultures across different texts and media forms. This “second-order meaning,” as Allen calls it, functions as another layer in the signifier’s meaning making. In the case of “Zombies, Run!”, the zombies are representative of the enemies to be defeated in the game. However, zombies are particularly symbolic to Western cultures. The zombie genre has existed for many years, and was most notably demonstrated through George A. Romero’s films as commentary on Western society. Zombie genre texts have since followed a certain code with shared characteristics: flesh-eating undead creatures created by a contagious epidemic. “Zombies, Run!” follows this code in order to facilitate both the storyline of the game and its function as a fitness app. The user is a survivor in what is generally termed “the zombie apocalypse” and must run from creatures that are well-known to many Westerners and fans of the zombie genre.

The Radio Play
Marcel Danesi made mention of the infamous radio play War of the Worlds in his writing. He used it as an example of “the simulacrum effect”, in which media and reality blur in such a way as to affect an audience’s perception. Some listeners, for example, became panicked while listening to Orson Welles’ retelling of the story and notified the police about an alien invasion. The format of the radio play has now been codified in such a way that “Zombies, Run!” is able to use it for narrative purposes. As the user runs, characters tell Runner 5 to run faster or that she or he is doing a great job, and sometimes they describe where Runner 5 is supposedly running in terms of the environment within the game. The user may not be entirely taken in by the story because she or he is familiar with the code of a radio play and aware of the blurred lines between the text and her or his reality. Another effect that “Zombies, Run!” employs is the sound of the zombies. The heavy breathing and growling of the zombies grows closer and closer if the user does not speed up as instructed. This audio effect, much like the simulation of the news broadcast in War of the Worlds, is supposed to employ the simulacrum effect in such a way as to motivate the user to increase her or his speed.

I/You Significance and Personalization
Another semiotic aspect of “Zombies, Run!” is its employment of “I” and “You”. The characters in the game address the user as “you” and “Runner 5,” which the user then interprets as a direct form of address. It transforms the user into an objectified character within the narrative story. This signifies the personalization effect of the game, in which the user is led to believe that the game is tailored specifically for her or him. In actuality, the game is mass produced and the characters call every user “you” and “Runner 5”. Another method of personalization is that when characters are not speaking, the app plays the user’s music. There is a “radio mode” as well, in which two DJs break up songs with commentary about the apocalypse and also introduce the user’s songs with general statements that can be applied to any type of song. This is a simulation of personalization, but users are encouraged to interact with this simulated experience in order to enjoy the game.

Video Game Codes Within “Zombies, Run!”
There is another cultural code that is used in “Zombies, Run!”. This code developed over years of video game history, and is employed within the world of “Zombies, Run!” through the use of mapping and item collecting. These are RPG (role-playing game) video game features. Games like The Legend of Zelda and Final Fantasy are some of the more popular and well-known games that have established this code through the RPG video game genre. “Zombies, Run!” users collect items as they run and are notified of these items by a voice in their earphones. After they have completed a mission, users drag and drop these items from their inventory into different sections within a virtual map of Abel Township. The designers of the game also recently introduced ZombieLink, which tracks a user’s personal running performance and displays a map of the user’s route. If users are familiar with the intertextual signification of these video game features, then the game becomes easier to navigate.

Barthes writes about the interpellation process of meaning-making. He uses the example of looking at a Basque house in Spain, and explains that “the concept…comes and seeks me out in order to oblige me to acknowledge the body of intentions which have motivated it and arranged it there as the signal of an individual history” (Barthes 123). The different historical and cultural processes behind “Zombies, Run!” seem to call out to its users in a similar fashion. The game hails its audience through cultural codes such as the zombie genre and the radio play format. It also directly addresses users through language indicators like “I” and “you”. These signified messages, embedded within cultural myths, may be imperceptible to users.


Allen, Graham. Roland Barthes. London: Routledge, 2003. Print.

Barthes, Roland, and Annette Lavers. Mythologies. New York: Hill and Wang, 1972. Print.

Danesi, Marcel. “Semiotics of Media and Culture.” The Routledge Companion to Semiotics. By Paul Cobley. London: Routledge, 2010. N. pag. Print.

“Zombies, Run!” Zombies, Run! N.p., n.d. Web.

The Semiotics of Sequential Art

by Sara Levine

Semiotic analysis provides an essential toolkit for conducting a close study of cultural products such as advertisements, film, photographs, and so on. Daniel Chandler’s Semiotics for Beginners, for example, contains an entire arsenal of concepts and ideas that can be applied to a number of cultural forms and genres. This application of semiotic technique may reveal what is denoted and connoted by the work, and messages that are intentionally and unintentionally communicated to the audience. So, how do we study the comic book art form through semiotics? The combination of text and images is a little overwhelming at first. However, this medium may seem less intimidating with the help of Scott McCloud and our semiotic toolkit.

Here is the selection I have chosen for this particular semiotic study:

It is a page taken from Greg Rucka and Matthew Southworth’s Stumptown series, published by Oni Press.

When we look at the page as a whole, the layout may be the most noticeable aspect. American comics have their own code for the layout of a page. This code dictates that the viewer read panels from left to right, and top to bottom. Additionally, most comics, as Scott McCloud points out in Understanding Comics, contain “gutters”: the blank spaces between bordered panels that separate one image from the next.

Southworth and Rucka seem to have broken away from this formatting technique. The syntagmatic structure of the panels is confusing at first, and if the viewer attempts to read from left to right and top to bottom, the page wouldn’t make much sense. However, if she or he had been reading the comic book from the beginning, then she or he would have started reading the page with it tilted 90 degrees clockwise. Then the reader would follow the panels from left to right until the second row of panels, at which point she or he would tilt the page 90 degrees counterclockwise. What is missing from the layout of the page? Paradigmatically speaking, there are no gutters or borderlines for the panels. In addition, the panels are all overlapping one another. The denoted message of these layout changes might signify Southworth’s particular art style, or quicker pacing in a high-stakes car chase scene. However, there is an underlying connotative message here as well regarding how the creators would like the audience to look at the page. It could be assumed that this connotation has something to do with how comic books as an art form should or could be read.

Positioning within Frame/Perspective
Composition within panels is another important aspect of the comic book art form. I won’t analyze every single panel composition here, but there are a couple of notable examples.

  1. The largest panel depicting the car chase, in which two cars and the top of a police car are visible, utilizes comic book code in the form of motion lines. Motion lines denote speed to a comic book reader. The connotations of this panel may include the placement of Dex’s (our main character) car. She is caught between fleeing the police force and pursuing the villains. This is also one of her dilemmas in the narrative of the story. It is a plotline usually associated with the detective genre.

  2. Another signifier throughout the panels on this page is perspective. The smaller panels lining the right side of the page may be the equivalent of quick cuts in a movie. There is a close-up of Dex’s confident, smiling face, a panel from the perspective of the villains’ truck, an even smaller one of Dex and Mim’s horror-filled gazes, and finally the two cars colliding. These jumps between perspectives denote the sequence of events, but beyond that they also build tension for the reader. They are meant to be read in quick succession, and show only quick glimpses of a moment in time.

Color/Art style
Color and art style can also function as signifiers to comic book readers. Some comic books are not printed in color, but Stumptown is awash with murky coloring. There are a lot of browns and grays on this page. The signified interpretation may at first seem like a style choice on the part of Southworth, but a connoted interpretation may have something to do with how Rucka and Southworth wanted to depict Dex’s world. Perhaps this is a nod to Portland (on which Stumptown is loosely based), or maybe Rucka wanted a grungy look that would complete the feel of a PI genre story. Another choice creators make is the use of color codes for certain characters. Superhero comics have used such codes for many years (i.e. Superman is associated with red and blue, Batman with black, etc.). Why red for the villains’ truck? Why a truck instead of any other car? The same could be asked of Dex’s car of choice. The deep magenta of the truck seems complementary to Dex’s green car. Consequently, perhaps the depiction of these two juxtaposed against each other serves to highlight the conflict between the characters driving the vehicles.

Text/Art Relationship
The separate symbolic meanings of text and image take on new meaning when they interact. There is more art than text on this particular page, but the interaction between text and artwork is vitally important to any comic book page. Scott McCloud categorizes several different types of text-image interactions. This list includes word specific, picture specific, additive, parallel, montage, and interdependent combinations. This page might fall under additive because the text serves as supplementary dialogue to the action unfolding on the page. Additionally, Southworth and Rucka actually tilt the text and word balloons along with the images in order to signify to readers how they should follow the story down the page. However, there may be another, connoted message involved in the tilted word balloon. I believe that this message is related to the very physical interaction that the reader is having with the medium. The reader is moving the images and helping to create the action on the page rather than simply staring at the panels. Rucka and Southworth intentionally tilt that speech bubble in order to augment the experience of tilting the page.

Intratextuality and Intertextuality
There are a few instances of intratextuality and intertextuality on this page as well. There is intratextuality in the knowledge a reader may or may not possess about how to read this page. A reader would not know to tilt the page unless she or he had been reading this comic book from the first page instead of opening it up to the current selection. An element of intertextuality remains buried in the narrative of the story. Greg Rucka ends every issue with a brief discussion of the role of the private investigator (PI) in American crime stories. He writes about how he constructed this story and its characters with the PI genre in mind, and makes references to other famous PI genre writers such as Raymond Chandler and Dashiell Hammett. Only a reader who is well-versed in the code associated with the PI genre may appreciate all of Rucka’s references.

The overall denotative message to the audience is that they are reading a car chase scene in which Dex is attempting to pursue the story’s villains while being trailed by the police. The argument can be made that this comic book also serves as a connotation referring to the American PI genre of storytelling. Perhaps that message may be about exploring the best elements of the genre with a woman in the lead role instead of the usual Sam Spade character. However, I do not think that this page is completely representative of that message. Instead, I would like to focus on the interaction between the reader and this selection. Most of the notable signifiers in this selection are related to the physical interaction that the reader has with this book. The act of reading any comic book is an interpretive and interactive experience. However, this page (and several others throughout this issue) highlights the physical work that a reader must perform in order to continue the story. Perhaps this movement signifies Rucka’s intention of connecting the reader with the story in a visceral manner. The turning of the book mimics the turning of the wheel as Dex drives circles around the villains in the red truck. The action seems more real to the reader if she or he is somehow involved. The majority of this book is the car chase scene, and so turning the book breaks up what could be monotonous action shots. There may be another connotation about comic books in contrast with digital copies and web comics. I am not sure how this page would be read on the iPad, but it doesn’t seem like it would evoke the same experience as holding the comic book pages in the reader’s hands and turning them along with the story. Interaction with a screen showing moving images in a comic is very different from interacting with the physical comic book medium.


Bal, Mieke. “Semiotics for Beginners.” On Meaning-making: Essays in Semiotics. Sonoma, CA: Polebridge, 1994. N. pag. Print.

Chandler, Daniel. “Semiotics for Beginners.” Semiotics for Beginners. N.p., n.d. Web.

Cobley, Paul. “Peirce’s Concept of the Sign.” The Routledge Companion to Semiotics and Linguistics. London: Routledge, 2001. N. pag. Print.

McCloud, Scott. Understanding Comics. [Northampton, MA]: Kitchen Sink, 1993. Print.

Peirce, Charles S., Nathan Houser, and Christian J. W. Kloesel. “What Is a Sign.” The Essential Peirce: Selected Philosophical Writings. Bloomington: Indiana UP, 1992. N. pag. Print.

Rucka, Greg, and Matthew Southworth. “The Case of the Baby in the Velvet Case.” Stumptown #4. Portland, OR: Oni Press, n.d. Print.

Saussure, Ferdinand De. “On the Nature of the Linguistic Sign.” Course in General Linguistics. New York: Philosophical Library, 1959. N. pag. Print.

Video Streaming Services as Media Artefacts

Sara Levine

Video streaming services such as Netflix, Amazon, and Hulu have re-configured the media of recorded film and broadcast television in order to move these forms of media to a new technological interface. The effects of this reconfiguration are wide-ranging, spanning social, economic, and ideological concerns.

What are the combinations of technologies and other conditions that make up this form of media?
Video streaming is not a self-contained media artefact. There is a history behind its development that involves many different socio-economic factors. Lisa Gitelman made the point that artefacts are “reflexive.” She used the example of papers from the Salem witch trials. The content of the papers holds great importance, but so do the physical components of ink and paper that make up these papers (Gitelman 20).

Figure 1

Much like the capitalist ventures surrounding the telegraph (Carey 4), video streaming is dominated by a few large companies that are vying for control over the market. Netflix was an early adopter of this technology. Netflix was originally set up as a subscription-based service that delivered DVDs (and eventually video games) to the home. There were no late fees or tedious trips to the video rental store involved. It wasn’t until 2007 that Netflix launched another feature that allowed subscribers to access movies over the Internet (Anderson 1). DVDs have not been rendered completely obsolete since then. Friedrich Kittler writes that old media is usually found elsewhere and re-purposed (Kittler “The History of Communication Media”). However, one outcome of Netflix’s rise in popularity was the decline of video rental stores such as Blockbuster (“Movies to Go”). Blockbuster has since launched its own service, but has not fared well against competitors. Hulu, another streaming service, opened to the public in 2008. It was originally a free service, but has since implemented a paid subscription service called Hulu+.

Figure 2

Netflix’s main competitor is Amazon, which offers Amazon Instant Video to its customers. Amazon Instant Video launched well after Netflix’s streaming service, which until then had faced little serious competition. Another interesting aspect of their relationship is that Netflix uses Amazon’s cloud services to host its content. Netflix streaming went down this past Christmas Eve because of problems with Amazon’s cloud computing service (Chen 1). There are several important immediate effects of the emergence of Netflix, Amazon Instant Video, and Hulu, and of the relationships between them. Competition is driving subscription rates up, there is heavy reliance on cloud computing (and therefore on Amazon), the playback technology for film and television has to be incorporated into this new digitization, and the streaming technology of these services is a new and unfamiliar tool to many of their users. There is also the sociological question of who is using video streaming and why viewers are migrating to this new technology.

Figure 3

How do space and time factor into the consumption of this media?
The abandonment of physical spaces such as Blockbuster indicates a shift in our concept of space and time. The use of the term “instant”, for example, has re-configured the idea of wait time and playback in regards to video and other entertainment. Video streaming services deliver content at a rapid-fire pace. People can easily catch up on movies and television that might not have been as readily available to them several years ago. Quick load times have now become a technological norm that we do not notice until it breaks down and gives us the “buffering” sign on our screens.
James Carey wrote about the telegraph’s overwhelming effect of freeing up communication in regards to geographical movement (Carey 3). Video streaming, by contrast, has created boundaries instead of removing them. Netflix is available in a select set of regions, including North America, South America, the UK, and Ireland. Amazon Instant Video is only available in the US. Hulu is available only to US and Japanese customers, though proxy servers have been set up so that viewers outside those countries can use Hulu illegally. These boundaries are set up in accordance with the companies’ wishes for the distribution of content, but they also inadvertently construct new concepts of space on the Internet. Similarly, international standards for content sharing and distribution must be taken into consideration. This results in the creation and/or re-configuration of other technologies, such as proxy servers, to get around these invisible boundaries. On the other side of these boundaries, streaming services are offering international content. Those who have access to these services are able to learn more about foreign cultures through film and television.

What are the social-ideological effects?
Video streaming services have aided the standardization of the interface for online video controls and features. Elizabeth Eisenstein discussed the concept of standardization in regards to typography (Eisenstein “Some Features of Print Culture”), and the same can be applied to video player interfaces (Fig. 4, 5). There is always a scrub bar through which the viewer can adjust the timecode of the video. There is a play and pause button, and an option for full-screen. Other features usually include HD streaming, dimming the lights around the player, and rewinding ten seconds of video. Most of these features have become uniform standards. These controls borrow conventions from video editing programs, and they encourage direct participation in and manipulation of video content on the part of the viewer.

Figure 4 and Figure 5

In the excerpts from McLuhan’s book, he wrote about Narcissus and narcosis. McLuhan was exploring the concept that we regard “gadgets” and other media technology as extensions of ourselves (McLuhan 42). Video streaming services take the idea of watching video content in the comfort of one’s home and expand its mobility. Characters and TV personalities became a part of our families because we invited them into our living rooms every evening. Now, however, we invite them onto our cell phones and computers. We can watch this video content at any moment and anywhere we can get service. This makes the content considerably more personal than it was previously.
McLuhan also wrote about hot and cold media. He labeled television as cool and movies as hot. The combination of the two on video streaming services is a complicated convergence of these concepts. Do video streaming services heat up the television medium, or cool down the film medium? It may be the latter, because McLuhan writes that “any hot medium allows of less participation than a cool one” (McLuhan 24). The movie is no longer on the big screen, and it now involves the viewer physically manipulating its timeline. On the other hand, this participation may not be a strong enough argument to re-configure film as a cool medium under the auspices of streaming services.

On a final note, video streaming services have not historically been in direct competition with broadcast and cable television. However, both Netflix and Hulu are starting to produce original content for their subscribers. It remains to be seen whether this will affect television and film distribution, or if consumers will simply use different media for different purposes without significant social and ideological change.

Works Cited

“Amazon Adds Movies to Streaming Service in New Challenge to Netflix.” AdAge. N.p., 04 Sept. 2012. Web.

Anderson, Nate. “Netflix Offers Streaming Movies to Subscribers.” Ars Technica. N.p., 16 Jan. 2007. Web.

Carey, James. “Technology and Ideology: The Case of the Telegraph.” Communication as Culture: Essays on Media and Society. Rev. ed. New York, NY and London, UK: Routledge, 1989.

Chen, Brian X. “‘The Cloud’ Challenges Amazon.” The New York Times. N.p., 26 Dec. 2012. Web.

Eisenstein, Elizabeth L. “Some Features of Print Culture,” from Elizabeth L. Eisenstein, The Printing Revolution in Early Modern Europe. Cambridge, UK: Cambridge University Press, 1983. Rev. ed. 2005.

Gitelman, Lisa. Always Already New: Media, History, and the Data of Culture. Cambridge, MA: The MIT Press, 2008. Excerpt from Introduction.

“ Opens to Public; Offers Free Streams of Hit TV Shows, Movies and Clips.” Hulu. N.p., 12 Mar. 2008. Web.

Irvine, Martin. “Media Theory: An Introduction”

Kittler, Friedrich. “The History of Communication Media,” C-Theory, 1996.

McLuhan, Marshall. “The Medium is the Message,” Excerpts from Understanding Media, The Extensions of Man, Part I, 2nd Edition; originally published, 1964.

“Movies to Go.” The Economist. N.p., n.d. Web.

“Wired 10.12: The Netflix Effect.” Conde Nast Digital, n.d. Web.


How Do Interdisciplinary Approaches to Cognitive Structures Modify the Communication Process?

By: Sara Levine 

Interdisciplinary approaches to studying our interactions with media and technology seem few and far between. Cognitive science, for the most part, is left out. Additionally, the sciences don’t always include important external factors. With this in mind, I would like to use Shannon’s model of the communication process and Foulger’s model of the communication process as tools for exploring theories on human cognition.

Fig. 1

Shannon’s model reminds me of John Searle’s “Chinese Room” thought experiment from Andy Clark’s “Mindware.” We can decode the characters easily, but “symbol manipulation alone is not enough” (Clark 34). There is a lot more going on in a communication system than a simple transmit-and-receive interaction (Fig. 1). The source and destination usually manifest in human form and therefore should demonstrate a few of the cognitive processes that occur within the human mind. Humans rely heavily on symbols and symbol-oriented language. We use metaphorical structures and mapping when interacting with transmitting and receiving devices. These devices may have a certain level of consciousness because they are programmed to respond to scripts and instructions as we use them. It is also important to note that the device’s design has an overwhelming impact on the way the human uses it and thinks about how it is used. Hollan, Hutchins, and Kirsh use the example of the devices in the cockpit of an airplane. The fuel gauge, for example, is treated by the pilot as both a symbol and as the fuel system that it represents (Hollan, Hutchins, and Kirsh 185). Finally, the “noise source” should be expanded to include environmental factors in the communication process.

Fig. 2

Let’s apply this infinitely more complicated model to a Skype call (Fig. 2). Person A calls Person B over Skype. They both enable video and start a conversation. There is a multitude of both hierarchical and parallel processes occurring at once. The conversation between Person A and Person B is based in English, which is a largely metaphorical language. The meaning-making processes within their minds are working to interpret sentences and to form ones in return. At the same time, Person A and Person B are interacting with Skype technology. They are navigating the Skype interface in order to facilitate a fluid communication experience. This requires Person A and Person B to interpret the symbolic structure of Skype, much like the pilots assign symbols and meaning to the devices in the cockpit of a jet. Person A and Person B’s computers are displaying their own level of consciousness in this interaction because they are following the software instructions of Skype as it responds to the situation at hand. The environment, WiFi connections, and other external factors are also at play throughout this example of communication. These revisions to the Shannon model are not final. There are probably many other cognitive and technical processes occurring that we may not be aware of.

Fig. 3

Foulger’s model is designed to be more applicable to media. However, it can also be modified to include several meaning-making concepts and theories (Fig. 3). Foulger does an excellent job of representing cognitive functions with words such as “imagine” and “interpret”. However, as Lakoff describes in “Conceptual Metaphor,” much of our communication is mapped onto other metaphors. Consequently, there is an entire structure of metaphorical language and symbolism that the Creator employs to distribute her or his message. The Consumer employs similar structures to interpret the message. In between those two actors is media, which functions as another symbolic architecture through which the two communicate. The cognitive processes and structures differ depending on the type of medium and on the technology that produces the media. Additionally, a multitude of external structures (social, cultural, embodied, etc.) affect this model as well.

Fig. 4

A more specific example of this could be an American romantic comedy film (Fig. 4). The creator enacts the communication process even before she or he has written the script because thought is an interpretive interaction. However, she or he eventually produces the story through a visual medium. In addition to the symbolic structure of the English language, she or he has also encoded the story through the structures embedded within both film and the romantic comedy genre. The couple has a “meet cute” at the beginning of the film, for example. The editing choices for each shot hold meaning in terms of focus, timing, and composition. Perhaps all “meet cute” scenarios are shot with similar pacing. Therefore, due to an indexical cognitive learning process, audiences know how to interpret this sequence of events as a “meet cute”. Some external factors may include the history of film, the history of romance as a genre, American culture shaping the cognitive processes of audience members, etc. 

These revisions are not intended as permanent alterations, but as an exploration into how cognitive theories and concepts might reflect on models of communication. My previous experience with studying communication processes was through the narrow lens of microsociology as described by Erving Goffman. He used theater metaphors to explain the seemingly mundane gestures and conversations of everyday life. Goffman used terminology such as “performance,” “front stage,” “backstage,” and “audience.” Consequently, the dramaturgical (as his theories are labeled) perspective may be another angle of approach for cognitive and communication processes. Goffman’s metaphors could even be studied under the cognitive lens as an internalized structure for shared cognition amongst sociologists. The potential for interdisciplinary exploration into the communication process seems to indicate that there is more to be discovered through further collaboration between multiple fields of research. 

Works Cited

Clark, Andy. Mindware: An Introduction to the Philosophy of Cognitive Science. New York: Oxford University Press, 2001.

Day, Ronald E. “The ‘Conduit Metaphor’ and the Nature and Politics of Information Studies.” Journal of the American Society for Information Science 51, no. 9 (2000): 805-811.

Deacon, Terrence W. The Symbolic Species: The Co-evolution of Language and the Brain. New York, NY: W. W. Norton & Company, 1998. Excerpts from chapters 1 and 3.

Foulger, Davis. “An Ecological Model of the Communication Process.” An Ecological Model of the Communication Process. N.p., n.d. Web.

Foulger, Davis. “Models of the Communication Process.” Models of the Communication Process. N.p., n.d. Web.

Goffman, Erving. The Presentation of Self in Everyday Life. Garden City, NY: Doubleday, 1959. Print.

Hollan, James, Edwin Hutchins, and David Kirsh. “Distributed Cognition: Toward a New Foundation for Human-computer Interaction Research.” ACM Transactions, Computer-Human Interaction 7, no. 2 (June 2000): 174-196.

Lakoff, George. “Conceptual Metaphor.” Excerpt from Geeraerts, Dirk, ed. Cognitive Linguistics: Basic Readings. Berlin: Mouton de Gruyter, 2006.

#whatwouldChomskydo: Language Structure in the Realm of New Media

Sara Levine

John Searle argued in “Chomsky’s Revolution in Linguistics” that Chomsky did not explore the relationship between language and communication. However, meaning seems to have become more difficult to separate from abstract notions of language as new forms of communication come into being. Writing code and using tags are just two forms of media that can be investigated as both expression and language.

The use of code and coding language borrows many components from the basic language structure described by Chomsky and other linguists. Code has rules, a lexicon of words, grammar, and semantics, and it makes use of context much like any sentence in the English language. For example, a line of code is constructed with a function, then an argument in parentheses, followed by a semicolon called the “terminator”. When put together in a certain order, these statements take on meaning in that they are issued as commands to the computer. The computer interprets this language and carries out the instructions (LeMasters “Computational Expression”). Here are a few lines of code for demonstration:


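A minimal sketch in Processing-style syntax can stand in here; the specific values and colors are hypothetical, chosen only to match the structure under discussion:

```
size(400, 400);            // function "size" with an argument setting the canvas dimensions
fill(0, 255, 0);           // function "fill" setting the drawing color to green
ellipse(200, 200, 50, 50); // draw an ellipse: center x, center y, width, height
```

Each line follows the same pattern: a function, an argument list in parentheses, and the semicolon terminator.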
Each statement can be broken down into segments much like the syntactic structure of a sentence. The functions are the words outside of the parentheses, and the arguments are the numbers inside the parentheses. If one semicolon is missing, the computer will refuse to run the program. In the last line of code, there is a function to draw an ellipse followed by an argument for that ellipse. The argument’s structure contains the coordinates for the center of the ellipse followed by the width and height. Code structure can become more complicated as coders add IF, ELSE, and OR statements. We can study lines of code based only on these components. However, that type of investigation would not yield everything there is to know about that line of code. Chomsky revolutionized the study of language through his account of its formal structure, but he seemed to neglect the semantic component of his theory (Searle). When studying the function of language structure within the use of code, it would also be important to study the semantics of the coding process. When those sample lines of code are read and carried out by the computer, they are translated into a program that the coder intended to create. The green circle it draws may serve a purpose of some sort for the coder. Taken further, other coders might borrow and “edit” that line of code to produce their own creations. This brings up the question of how to investigate the meaning of the code in relation to the interaction between two coders who may be simultaneously editing it.

Twitter trends from the night of January 29, 2013

Another form of media that can be mapped out according to the basics of language structure is the tag, or hashtag. Although they are rarely formed as full sentences, tags adhere to certain spatial and syntagmatic rules. Tags are often found at the end of tweets and at the bottom of blog posts. A hash mark must come before a word in order for it to be considered a tag. On Twitter, hashtags that contain multiple words do not have any spaces between them. For example, the tag “#MyLifeIn5Words” was trending on Twitter. The hash mark denotes the presence of a tag, and any characters between that mark and the next space constitute the tag itself. “#MyLifeIn5Words” is just an empty phrase without looking at all of the structural, semantic, and communicative work that goes into putting that particular hashtag at the end of a tweet. It is tied to the text within the tweet, the person who tweeted it, and the reasoning behind a person’s decision to tag their tweet with “#MyLifeIn5Words”. Similarly, Tumblr utilizes hashtags at the end of blog posts. Bloggers use this space to make sure that their posts show up in tags that any Tumblr user can search. If a blogger posts a picture of his or her dog and tags it “#puppy” followed by “#dog pictures” and “#I love my dog,” then anyone who searches for those terms in Tumblr will see the blogger’s post. The syntagmatic and spatial structures in Tumblr’s tagging system are different from Twitter’s because there can be spaces between words. Bloggers can therefore form entire sentences under one hash mark. Many bloggers also use the tags to provide running commentary. Sometimes they use hash marks to emphasize a phrase. For example, here is a screenshot taken from a post made about fanfiction:

There’s an Awful Lot of Gray to Work with. N.p., n.d. Web.

Someone reblogged a blog post, and added his or her commentary in the space reserved for tags. It is important to note that instead of using commas the blogger utilizes tags as a grammatical way to break up sentences. The hash mark signifies a pause when reading the tags. Once again, this new format for communication makes it difficult to separate the content of the tag from its context. Bloggers are using their knowledge of how a sentence works, but re-formatting it for the space under the main blog post.
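The two tagging grammars described above can be made concrete with a short parsing sketch. This Python snippet is purely illustrative (it is not either platform’s actual implementation): a Twitter-style tag runs from a hash mark to the next whitespace, while a Tumblr-style tag field treats each hash mark as the start of a new tag that may itself contain spaces:

```python
import re

def twitter_tags(text):
    # A Twitter-style tag is a '#' followed by everything
    # up to the next whitespace character.
    return re.findall(r"#(\S+)", text)

def tumblr_tags(tag_field):
    # In a Tumblr-style tag field, each '#' starts a new tag,
    # and a single tag may contain spaces.
    return [t.strip() for t in tag_field.split("#") if t.strip()]

print(twitter_tags("Loved tonight's episode! #MyLifeIn5Words"))
# → ['MyLifeIn5Words']
print(tumblr_tags("#puppy #dog pictures #I love my dog"))
# → ['puppy', 'dog pictures', 'I love my dog']
```

The difference in where a tag ends is exactly the syntagmatic distinction at issue: Twitter’s grammar uses whitespace as the tag boundary, while Tumblr’s uses the hash mark itself.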

Finally, there are also forms of media that deliberately destroy language structure in order to create meaning. Artists such as Pogo and The Gregory Brothers remix sounds and segments of dialogue in order to create music. These artists purposefully rearrange syntax, phonological structure, and semantics in order to produce a melody. Pogo takes short sound bites from movies and composes them into full-length songs. “Alice,” for example, features a selection of sounds from the movie Alice in Wonderland. Without knowing the context of the song, “Alice” sounds like vocals without lyrics. There is a certain structural component to the notes and composition of the piece that Pogo created, but the English language was sampled and remixed in order to construct the melody. These songs explore the idea of meaning conveyed through music. The abstract components of “Alice” by themselves do not represent the song’s artistry, message, or context. The Gregory Brothers use a similar method of remix, but they use phrases instead of sounds. Their “Auto-Tune the News” series made melodies out of phrases from politicians’ speeches and news anchors’ reports. “Bed Intruder Song,” which seems to be their most popular song to date, contains parts of a news report about a home invasion. Antoine Dodson’s commentary is auto-tuned and certain phrases are repeated in order to create a chorus for the song. The phonological structure, or how the language sounds, is fundamentally altered in order to create the song. The Gregory Brothers also change the syntax, but the auto-tune method directly affects the context of the news report. Antoine Dodson was not singing his report to the news team, but anyone who downloaded the song without context would not be aware of that. The process and culture of remix art bring in an entirely different perspective to how language is structured.

New media forms seem to possess the fundamental structures of language, which indicates that these structures are so heavily ingrained in our minds that they surface in nearly every form of communication. This brings up the fundamental question of whether there is an innate faculty that humans possess for language learning (Searle; Chomsky 120; Radford 7). The answer may lie in the study of these new forms of communication and of the structural components they all have in common.

Works Cited

Alice. Nick Bertke. YouTube. YouTube, 18 July 2007. Web.

BED INTRUDER SONG!!! (now on iTunes). Prod. Michael Gregory. Perf. Antoine Dodson and The Gregory Brothers. YouTube. YouTube, 31 July 2010. Web.

Bertke, Nick. “POGOMIX.” POGO Music Producer Remix Artist. N.p., n.d. Web.

Chomsky, Noam. “Form and meaning in natural languages.” Excerpt from Language and Mind, 3rd. Edition. Cambridge University Press, 2006.

Irvine, Martin. “Linguistics: Key Concepts”

LeMasters, Garrison. “CCTP 764: Computational Expression.” Georgetown University. 18 Jan. 2013. Lecture.

Radford, Andrew et al. Linguistics: An Introduction. 2nd ed. Cambridge, UK: Cambridge University Press, 2009.

Searle, John. “Chomsky’s Revolution in Linguistics,” The New York Review of Books, June 29, 1972.

“The Gregory Brothers.” The Gregory Brothers. N.p., n.d. Web.

There’s an Awful Lot of Gray to Work with. N.p., n.d. Web.

Fandom and the Communication Process

Sara Levine

Davis Foulger argues that Shannon’s model of the communication process functions well enough as an introductory model, but it may be time to turn to alternative models in order to accommodate newer forms of technology and media. He writes that Shannon’s model is too basic and abstract in light of how people selectively consume and interact with media today (Foulger 1). Additionally, Ronald Day concludes in his piece that the linear conduit model is reminiscent of a Cold War era way of thinking about communication (Day 10). One of the more recent phenomena that may require a more complicated diagram than Shannon’s is the rapid growth of fandom within the Internet sphere. Fandom is composed of a large, varied, and sometimes hierarchical network that does not function as a simple linear route through a transmitter. Foulger’s ecological diagram, laid out in “Models of the Communication Process” and “An Ecological Model of the Communication Process,” offers one such alternative.

Foulger goes on to explore alternative communication models that have been modified over the years since Shannon published his diagram. The ecological diagram attempts to fill in the missing aspects from older models that seem to demonstrate an “injection” of content into the consumer (Foulger 2). Foulger’s diagram, on the other hand, positions “creators” and “consumers” as the main actors within a process through which messages are conveyed and interpreted within media content (Foulger Fig. 6).

Tweets from actress Shay Mitchell and showrunner Marlene King

Twitter has opened up a channel of communication between media producers and consumers that had not existed previously. Writers, producers, and actors of a television show often “live-tweet” an episode along with fans as it airs. Fans may ask questions and convey their opinions of the show directly to the people who created it. These creators become aware of the culture of their particular fandom, and may even incorporate fans’ ideas and popular romantic couple portmanteaus. For example, every Tuesday night actress Shay Mitchell utilizes the hashtag “#PLLayWithShay” in order to interact with fans while they watch the newest episode of Pretty Little Liars. Occasionally the show’s creator, Marlene King, will join in and answer fans’ questions. Using the ecological model as a guide, Marlene King produces messages through the medium of television that fans then willingly watch and interpret. Fandom produces its own messages that reflect their interpretations of Pretty Little Liars, and the creators may then willingly consume and interpret those messages through Twitter. The two people involved – creator and consumer – develop a relationship (although not always a direct one, as Foulger is quick to point out) through these interactions. However, this cycle neglects activity within fandom.

A screenshot of a fanfiction archive, which allows users to peruse fan-written stories about their favorite shows

A screenshot taken from a tribute blog to a well-known fanfiction author

The ecological model posited by Foulger may be too simplified to apply to the communication that occurs within fandom. Foulger’s model demonstrates the intersection between people (creators and consumers), messages, language, and media. An arrow indicates the existence of “relationships,” but the model does not show how convoluted relationships among consumers (or fans, in this case) can become. Fandom often occupies what Stuart Hall would label a “negotiated position” toward the media it consumes (Hall 516). Fans may interpret and accept the messages conveyed by creators, but they also create their own forms of media in order to correct or explore messages that they find lacking in the creator’s content. For example, fans often believe that two characters should be romantically involved, or that there is not enough LGBTQ representation within a creator’s work.

These grievances can manifest as fanfiction, fanart, fanvids, and other fanworks. Other fans consume this fan media, and some fan creators have become prominent figures within their fandoms on the strength of a particularly well-produced piece of fanwork. The most successful fanfiction writers are those like E. L. James, who adapted her Twilight fanfiction for mainstream publication. Others are known simply by their author names on websites such as LiveJournal. These authors produce their own messages through media based on media that they had previously consumed alongside their fellow fans. They enter into a more complicated model of communication that is still connected to the process of consuming the original content. Some fanfiction writers even appear as featured guests at fandom conventions. Their work blurs the line between creator and consumer.

Screenshot of A Very Potter Musical

This is an entirely different communicative process, one that may take place without the direct knowledge of the original creators of the media content. For example, J. K. Rowling produced the Harry Potter book series, which was then voraciously consumed by her fans. They read the books and interpreted the messages conveyed through them. However, a considerable amount of media and other content is also produced within the Harry Potter fandom itself. A Very Potter Musical is one of the most popular of these fanworks and appears to have its own sub-fandom. The Musical’s creators are members of the Harry Potter fandom and fit into the ecological model as consumers of Rowling’s messages. However, they departed from the plot of the text and produced their own work based on her creation, which was then consumed and interpreted by other Harry Potter fans. It is unclear whether Rowling has ever consumed this piece of fan media, so the cycle from fan back to creator may or may not be completed in this instance. Perhaps, then, the ecological model could be expanded to account for fandom communication that occurs across mediums and may never actually return to the creator.
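The fandom communication described above can be sketched as a directed graph of message flows: in Shannon’s linear model every message travels a closed sender-to-receiver circuit, while here fan-produced messages circulate along paths that may never lead back to the original creator. A minimal illustrative sketch in Python (the node names and edges are my own assumptions for illustration, not data from the essay):

```python
# Hypothetical sketch: communication flows as a directed graph,
# where an edge A -> B means "messages flow from A to B".
from collections import deque

flows = {
    "Rowling":       ["HP fandom"],      # the books are consumed by fans
    "HP fandom":     ["AVPM creators"],  # the Musical's creators are themselves fans
    "AVPM creators": ["HP sub-fandom"],  # the fanwork is consumed by other fans
    "HP sub-fandom": [],                 # no documented route back to Rowling
}

def reaches(flows, start, target):
    """Breadth-first search: can messages starting at `start` reach `target`?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in flows.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# The creator's messages propagate outward through fandom...
print(reaches(flows, "Rowling", "HP sub-fandom"))  # True
# ...but the fan-to-fan circuit never closes back to the creator.
print(reaches(flows, "HP sub-fandom", "Rowling"))  # False
```

The asymmetry of the two reachability checks is the point: a transmission model assumes the second path exists, whereas the expanded ecological model would have to represent flows that remain entirely within fandom.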

Floridi writes in the first chapter of Information that we are entering a fourth revolution: our reality has become informational, and the divide between digital and analogue is quickly blurring (Floridi 12-18). Fandom may once have functioned only within physical spaces and through fanzines; now the rapid development of the “infosphere” has reorganized relationships both between fans and creators and among fans themselves. Foulger’s diagram reflects these changing relationships, but may not account for all of the communicative activity occurring within online fandoms. Taken a step further, the mixing of mediums throughout this communication process (from television to Twitter, or from film to fan-produced media) may have its own impact that requires further exploration.

Works Cited

Day, Ronald E. “The ‘Conduit Metaphor’ and the Nature and Politics of Information Studies.” Journal of the American Society for Information Science 51.9 (2000): 805-811. Print.

Floridi, Luciano. Information: A Very Short Introduction. Oxford: Oxford UP, 2010. Print.

Foulger, Davis. “An Ecological Model of the Communication Process.” N.p., n.d. Web.

Foulger, Davis. “Models of the Communication Process.” N.p., n.d. Web.

Hall, Stuart. “Encoding/Decoding.” 1973.