Author Archives: Ryan Leach

Remediating the Book: Affordances, Symbolic Capital, and Co-mediation of Print and E-Books

Ryan Leach

Abstract

Most scholarship tends to overemphasize the distinctions between e-readers and traditional print, presenting these digital technologies as a rupture from previous incarnations of the book. Although the computational substrate of e-readers creates new affordances, e-books still rely heavily on past media conventions, genres, and affordances, as well as on the institutionally constructed mediational position of the book-function, which pre-exists any particular instantiation of book-disseminating technology. Drawing on Foucault’s notion of the author-function, Debray’s concept of bi-directional mediation, Murray’s work on design affordances, Bourdieu’s theory of symbolic capital, Manovich’s take on remediation theory, and Hayles’s observations on the analog-digital continuum, this paper aims to show that (1) the book-function remains the same despite changes in substrate, (2) the computational substrate of the e-book allows for a new set of affordances, (3) some of print’s affordances and symbolic value resist remediation into digital interfaces, and (4) texts exist on an analog-digital continuum, continually switching from one state to the other. Such aims are substantiated through close analyses of the properties of electronic and print books and of the larger mediasphere in which they operate.

Main Text

In Inventing the Medium, Janet Murray draws attention to the problem of perceiving recent technologies as “new media,” as if the most observable quality of these forms of mediation were their novelty (8). And, indeed, the vast majority of research on computational technologies emphasizes their difference and distance from previous, usually analog, forms of mediation. However, much of the success of new technologies derives from the remediation of prior media conventions, genres, and affordances, and the socially and ideologically constructed positions always already in place before the introduction of newer media technologies. From this perspective, “new” media are in fact quite old and just as dependent upon specific social and institutional functions as all previous media. The “newness” of digital media derives from their existence on a computational substrate capable of simulating previous mediums (it’s a metamedium) through symbolic representation in binary code. This, in turn, enables a wide variety of Human-Computer Interactions (HCIs) through which previously fixed design features can be manipulated and individually curated. Through an analysis of the e-reader, this paper argues that digital remediations of the book rely on many of the same socio-cultural institutions as previous material incarnations of the author and book functions, and that these institutions continue to depend on the social and ideological functions of authorship and textuality, even and especially in the digital age. The introduction of e-books does not alter the bi-directional mediation between socio-cultural institutions and the book, but instead offers new means of interaction between reader and text. Nevertheless, in contrast to many speculations on the ability of the computer to absorb all mediums into a single metamedium, thus rendering analog media obsolete, print literature exists and will continue to exist alongside computational simulations in an analog-digital continuum due to the symbolic value socially ascribed to printed books and the unique affordances of the medium that resist digital remediation.

E-books did not erupt Athena-like from the head of Zeus (or Sin-like from Satan, as some critics would have it); they are socially developed technologies that fill a socio-ideologically predetermined position within our culture. There exists a book-function that operates similarly to Foucault’s author-function:

 …the “author-function” is tied to the legal and institutional systems that circumscribe, determine, and articulate the realm of discourses; it does not operate in a uniform manner in all discourses, at all times, and in any given culture; it is not defined by spontaneous attribution of a text to its creator, but through a series of precise and complex procedures; it does not refer, purely and simply, to an actual individual insofar as it simultaneously gives rise to a variety of egos and to a series of subjective positions that individuals of any class may come to occupy. (130-31)

Similarly, the book-function mediates, and is mediated by, legal and institutional systems, and it does not refer to any specific mediating technology, but gives rise to a series of mediational positions that any textual dissemination device can occupy. In discussing the impossibility of dissociating technology and culture, Debray relates how a system of practices, codes, rules, and expectations—in short, a culture—always precedes and creates the mediational position for the development and successful assimilation of any given technology (50). In addition, social changes typically viewed by technological determinists as caused by the sudden emergence of a particular technology are often already a part of the culture before said technology has been developed. For instance, Debray notes how changes in reading habits usually attributed to the invention of the printing press, such as reading the Bible individually, long predate Gutenberg’s invention. Furthermore, Debray describes the dynamic between media technologies and socio-cultural institutions as bi-directional:

The mediologists are interested in the effects of the cultural structuring of a technical innovation (writing, printing, digital technology, but also the telegraph, the bicycle, or photography), or, in the opposite direction, in the technical bases of a social or cultural development (science, religion, or movement of ideas). (“What is Mediology?”)

Therefore, one can say of the book-function that cultural structures provide a mediational space for book technologies and book technologies in turn reaffirm existing and emerging cultural structures. Or, in other words, institutions mediate books and books mediate institutions.

From this perspective, the introduction of e-books is hardly an abrupt caesura in the history of the book. Instead, e-books fill a mediational position already socially and ideologically circumscribed by various cultural institutions, and previously occupied by a number of other information dissemination technologies (codex, scrolls, etc.). In addition, they co-occupy this position alongside traditional printed books, relying on the same institutions (legal, educational, medical, cultural, etc.) to “circumscribe, determine and articulate” (Foucault’s phrasing) their position in the wider culture. Further, from the other direction, these institutions derive symbolic value (economic, cultural, and social) from the production, dissemination, and accumulation of books, in whatever form they may appear. Therefore, the introduction of e-books does not alter the author or book functions, which are thoroughly maintained by social and cultural institutions (i.e. not authors or books themselves); but, instead, merely presents a new substrate—for production, dissemination, and accumulation—that provides new means of interacting with texts.

Similarly, e-books remediate conventions, genres, and affordances of print publishing that have developed over hundreds of years, many of which arose before the invention of the printing press itself. As Debray notes, the process of developing these conventions can be traced back to at least the first century A.D., when the protobook, or codex, “precociously transferred graphic spaces from scrolled surface to portable volume, simultaneously enabling silent reading, marginal annotation, pagination, and new classifications first based on titles and then on authorship” (51). Thus, e-books are not so much a break from traditional notions of the book as they are an extension of the social, technical, and mediational practices that have developed over almost two millennia. As such, printed books function as what Murray terms “legacy media”: pre-digital media that are often taken for granted but form the basis from which digital simulations derive their organizational structure (12). In this way, e-books retain many of the genre conventions of print: title pages, colophons, frontispieces, tables of contents, forewords, prefaces, introductions, prologues, epilogues, afterwords, conclusions, glossaries, bibliographies, appendices, etc. Even electronic texts that exist on a single web page tend to maintain this sequence, so ingrained is it within our cultural expectations of the reading experience. In addition, e-books retain the segmentation into volumes, chapters, sections, as well as the same pagination and page layout:

[Image: juxtaposition of print and e-reader page layouts]

Both the print and electronic versions provide extra space at the top of each new chapter, larger headings for chapter titles and numbering, proportional margins, and paragraph indentations, and each uses fonts that were most likely developed for printing presses or typewriters. All of these conventions were socially developed over centuries of cultural transmission and across a wide variety of media technologies.

Additionally, e-readers remediate traditional reading practices, enabling interactors (Murray’s term) to “turn” and bookmark pages, inscribe marginalia, and highlight or underline text. While these holdovers might appear as skeuomorphic—that is, designed solely to ease the transition from print to digital reading through maintaining a similar appearance—they in fact maintain much more than that; they also simulate the printed book’s functionality through remediating the interactivity of the older medium. In “Tech-TOC: Complex Temporalities in Living and Technical Beings,” Katherine Hayles defines skeuomorphs as “details that were previously functional but have lost their functionality in a new technical ensemble.” However, these holdovers from the printed book have far from lost their functionality; instead, they recall Manovich’s account of the supposedly skeuomorphic nature of the computer desktop. In Software Takes Command, Manovich laments how the original Graphical User Interface (GUI) principles behind the design of the computer desktop have long been forgotten; instead, the files, folders, trash, etc. are perceived only as a means of making the user feel comfortable in a digital environment through replicating objects conventionally found in the average physical office space (101). Such a view overlooks the “intellectual origins of GUI,” which were deeply influenced by cognitive psychologist Jerome Bruner’s theories on enactive, iconic, and symbolic mentalities (98). While Manovich is right to suggest that the computational interface designed by Alan Kay engages all three mentalities, his enthusiasm for computers leads him to overshoot the mark by claiming that the utilization of all three mentalities is unique to the experience of using computational media (100). Books, too, employ enactive (page-turning, underlining, marginalia), iconic (visualizations in the form of graphs, pictures, cover art, etc.), and symbolic (various forms of symbolic representation, primarily written text) mentalities. Likewise, e-books remediate all of these features in an effort to retain the interactive legacies of print media, while also adding new functionalities enabled by computational interfaces.

In addition to remediating many of print’s affordances, the computational substrate of e-readers affords a new level of interactivity between reader and text. In contrast to previous mediums, “the building blocks used to make up the computer metamedium are different types of media data and the techniques for generating, modifying, and viewing this data” (Manovich 110; italics original). According to Manovich, there are two types of data manipulation: (1) “media creation, manipulation, and access techniques that are specific to particular types of data” and (2) “new software techniques that can work with digital data in general” (110-11; italics original). The new affordances offered by the digitalization of books generally fall into the latter category, which includes “‘view control,’ hyperlinking, sort, [and] search […]” (111), operations that are not specific to a particular type of data. In terms of view control, e-readers enable somewhat superficial personalization features, such as the ability to change font types and sizes, alter page color, and zoom in and out. Additionally, these devices typically provide hyperlinks in the table of contents to respective chapters and sections of the text, and the search function enables users to find instances of a specific word or phrase throughout a text, remediating (in both senses of the word) the previous function of the index. Of course, readers can copy and paste text into a variety of other documents, but, more interestingly, e-readers also offer the possibility of sharing highlighted passages via SMS, email, Twitter, Facebook, and other social networking sites (though, admittedly, I know of no one who uses this function), thus affirming the social nature of all reading against previous ideologies that situated reading as a private and individual act. Furthermore, as implied by the social media connectivity, the e-reader is connected to the Internet, allowing users to search the web for more information concerning highlighted words and phrases. This affords almost instantaneous access to a network of extended cognition and a widely distributed cultural encyclopedia. While extended cognition and the cultural encyclopedia are by no means products of internet technologies—similar to how the book-function precedes the technologies for book dissemination—the internet ensures faster access to more content than previous forms of media storage (libraries, for example). All of these affordances transcend the limitations of print due to the computational substrate of the e-reader, which creates the possibility of manipulating and individually curating previously fixed design features. However, print continues to exist; why?

Not only does the printed book offer certain affordances that resist digitalization (for the time being, at least), but the institutionalization of print also provides the medium with the symbolic capital and ideological value necessary to obviate the threat of obsolescence. This line of thinking runs counter to Manovich’s belief in the ability of computational devices to faithfully simulate prior mediums. As the book’s title implies, for Manovich, software takes command. In the first chapter, Manovich twice cites Kay and Goldberg’s claim that the computer is “’a metamedium’ whose content is ‘a wide range of already-existing and not-yet-invented media’” (105 and 82; italics original). In general, Manovich perceives the digital simulation abilities of computational devices as Remediation+, whereby the computer remediates the entire functionality of previous mediums, provides new ways of interacting with them, and anticipates the development of “not-yet-invented” media. This teleological narrative of media history, in which the computer serves as the ultimate telos, cannot explain the continued presence of pre-digital media, nor the ways by which analog and digital media forms interact in a broader socio-cultural mediasphere.

Although the computer can digitally simulate many of the functionalities of print (see above), it can remediate neither the physicality of the printed book nor the symbolic capital of print, which has accrued over hundreds of years. I use the term “physicality” to invoke Hayles’s distinction between the physical characteristics of a text and its materiality—that is, “the interaction of its physical characteristics with its signifying strategies” as an emergent and socially determined property (My Mother Was a Computer 103-4). Certainly, the materiality, too, changes upon digitalization, and not from material to immaterial, as some might suggest. E-books are still material; they merely offer a different type of materiality due to the nature of the computational interface. Here, I focus on physicality because the materiality varies too much from book to book (according to each text’s signifying strategies) to be of use in intermedial comparison—that is, between two different substrates: computational and paper.

The physicality of the printed book resists remediation into a digital environment. There is, as yet, no means by which to simulate the tactile sensations of paper or the “feel” of turning a page. In addition, a single printed book is usually lighter than an e-reader, less susceptible to damage, and cheaper, and thus safer to carry about town, as there is no need to worry about damage or theft. Although the search function on e-readers significantly improves on the index, paper books enable easier navigability when the reader cannot remember a specific word or phrase for which to search. For instance, I tend to remember visually (almost photographically) and spatially where a certain un-demarcated section appears on a page based on its location within the book as a whole (judged by the thickness of the connected pages), on the front or back of a page, and with regard to paragraph indentations and line spacing. Similarly, the note-taking affordances differ in a digital environment; marginalia are easily deleted and susceptible to disappearance in the case of data loss. The inscription of one’s thoughts on the page is thus less permanent and personal than with printed books.

Along with these physical properties, the symbolic capital of print cannot easily transfer to a computational interface. According to Bourdieu’s theory of symbolic capital, books function as cultural capital in the objectified state; they can be “appropriated both materially—which presupposes economic capital—and symbolically—which presupposes cultural capital” (50). Of course, e-books also satisfy this definition of objectified cultural capital; however, print books provide the unique affordance of physical display. As such, the physicality of the book signifies a socially determined level of cultural capital in the public arena, as well as in the private home, based on the materials that constitute it (whether it’s leather-bound or paperback, with gilded or normal pages, etc.) and on the cultural significance of the author or title printed on the cover (high/middle/low brow). In contrast, the material substrate of the e-reader is distinguished only by the manufacturer’s brand (iPad, Kindle, Nook) and design choices (colors, sizes, etc.), which—while certainly retaining symbolic values of their own—are not a reflection of the book itself. Hence the development of social networking sites, such as GoodReads, that enable readers to digitally disseminate what they’ve read or are currently reading to whomever might follow them, thus turning cultural capital into social capital. (Though, it must be said, one’s “followers” on GoodReads are always already one’s “friends” on Facebook.)

Despite the different affordances of e-readers and printed books, one must be wary of lapsing into, or actively constructing, an analog/digital binary. Instead of a binary relation, analog and digital cultural artifacts exist in an analog-digital continuum, constantly shifting from one substrate to the other. As Hayles points out, it is not a choice of either analog or digital but the “synergistic interaction” between the two (29). Within the mediasphere, cultural artifacts continually transfer between analog and digital states, such as when albums originally recorded as analog are digitized for computer storage and dissemination, and then later transferred back to analog as vinyl records to meet the demands created by current cultural trends. Similarly, all books written today have alternated between analog and digital states during the composition, editing, printing, and dissemination processes. It appears exceedingly unlikely that books will ever exist entirely in one state or the other.

Although e-readers and printed books provide some different affordances based on their respective substrates, e-books do not mark a radical departure from print. Instead, e-books rely on many of the genre conventions developed over centuries of bookmaking, both before and after the development of the printing press. In addition, the change of substrate from paper to computational has not altered the socio-ideological function of the book, which is institutionally circumscribed and always already in place before the development of any particular media technology. E-books are merely the latest instantiation in a long lineage of book dissemination technologies, and new developments, such as PaperTab, are already on the horizon. While digital technologies have managed to remediate many of the features of print, both computational and paper implementations of the book continue to exist in an analog-digital continuum. Contrary to grave speculations as to the death of print, from both nostalgic technophobes and naive techno-enthusiasts, print still exists and will continue to exist due to the medium’s particular affordances, conveyance of symbolic capital, and institutionally circumscribed mediational position.


Bibliography

Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. 1st edition. Cambridge, Mass.: The MIT Press, 2000. Print.

Bourdieu, Pierre. “The Forms of Capital.” Handbook of Theory and Research for the Sociology of Education. Ed. John Richardson. New York: Greenwood, 1986. Print.

Debray, Régis. Transmitting Culture. Trans. Eric Rauth. New York: Columbia University Press, 2004. Print.

Debray, Régis. “What is Mediology?” Le Monde Diplomatique. Aug 1999. Trans. Martin Irvine.

Foucault, Michel. “What is an Author?” Language, Counter-Memory, Practice: Selected Essays and Interviews. Ithaca, N.Y.: Cornell University Press, 1980. Print.

Hayles, N. Katherine. My Mother Was a Computer: Digital Subjects and Literary Texts. Chicago: University of Chicago Press, 2005. Print.

Hayles, N. Katherine. “Tech-TOC: Complex Temporalities in Living and Technical Beings.” Electronic Book Review. Open Humanities Press, 28 June 2012. Web. 30 Apr. 2015.

Irvine, Martin. “Working With Mediation Theory and Actor-Network Theory: From Mediological Hypotheses to Analytical Methods.”

Manovich, Lev. Software Takes Command. New York; London: Bloomsbury Academic, 2013. Print.

Murray, Janet H. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. 1st edition. Cambridge, Mass.: The MIT Press, 2011. Print.

Image

http://www.bookdesigntemplates.com/wp-content/uploads/Gallery-Electric-HD2.png


iBooks and Media Theories

Many of the theories discussed in Media Theory and Meaning Systems prove applicable to the iBooks app. Understanding the cultural function of the app requires knowledge of Bolter and Grusin’s remediation theory, Manovich’s application of that theory to digital simulation, skeuomorphism, the computer’s existence as a metamedium, Eco’s notion of the cultural encyclopedia, and the concept of the digital-analog continuum.

The iBooks app provides an obvious example of Bolter and Grusin’s remediation theory, in which new media are not entirely new; they remediate prior forms of media. From this perspective, iBooks remediates the printed book into a digital platform. However, as Manovich points out, new media are not simply a remediation of previous media forms; they also provide, even necessitate, new approaches to the old media. For instance, in iBooks, one can change the fonts and page color, digitally search for words in the text and compile recurrences, share passages over social media, and copy and paste passages into different textual environments. Therefore, digital simulation alters our approach to reading text.

In addition to these innovations, the iBooks interface simulates the appearance and functionality of the printed book, possibly to ease the transition from old to new media, as in the case of the skeuomorph. These holdovers from print media include the simulation of the bookshelf, page turning, bookmarks, notes/marginalia, and highlighting/underlining. According to the logic of the skeuomorph, retaining older, and thus familiar, elements serves to orient users in the digital environment during the transition phase; however, such elements are supposed to fade away as users become acquainted with the new system. This is evident in the case of the iBooks bookshelf, which once simulated the appearance of a physical, wooden bookshelf but now merely displays the books floating on the screen. Nevertheless, it is not clear at this stage whether or not some of these functions (note-taking, for instance) will ever be replaced.

Also, the iBooks interface prevents the reader from ever encountering a book in isolation. Even if the user has only downloaded one book, access to the digital bookstore ensures that this one book always exists in a network of other books. Although theories of intertextuality long predate book digitalization, iBooks reinforces the interrelation between texts through displaying them within the context of an expansive digital library. In addition, the app’s existence on a digital device providing access to a plethora of other apps—that is, on a metamedium—further connects the simulated books to an overarching cultural encyclopedia. For instance, the ability to highlight portions of text and search the internet for relevant information enables the user to connect the text with an externalized (and digitalized) cultural encyclopedia, instead of solely drawing on prior knowledge (this is also an example of the extended cognition capabilities provided by the app).

Nevertheless, printed books still exist. In fact, I only read printed books, and not (I hope) as the result of technophobia or retro-fetishism. The continued presence (for now, at least) of bookstores and publishing industries attests to the inability (for now, at least) of digital simulation to entirely supplant analog media. Instead, we exist in a digital-analog continuum.

Digitally Reproducing Kiefer’s (De)Compositions

In searching the Google Art Project, I encountered in the work of Anselm Kiefer the starkest contrast between an original, analog composition and its digital reproduction. In several ways, Kiefer’s compositions exaggerate certain elements innate to all analog visual art, particularly the importance of scale and the inexorable deterioration of the artwork. Through an analysis of Google Art’s digital reproduction of Kiefer’s Humbaba (2009), this post intends to examine how the massive scale and the use of already decaying matter in Kiefer’s work illustrate Benjamin’s and Malraux’s critiques of photographic reproductions. In addition, such techniques might function as a means of preemptively resisting remediation into the digital.

[Image: Anselm Kiefer, Humbaba (2009)]

Scale: As Malraux notes in The Voices of Silence, “There is another, more insidious, effect of reproduction. In an album or art book the illustrations tend to be of much the same size. Thus works of art lose their relative proportions” (Irvine 4). This is particularly evident in Google Art’s reproduction of Humbaba, in which not only is the composition reduced to roughly the same size as all other reproductions in the database, but the description itself fails to provide the artwork’s proportions.

[Screenshot: Google Art Project’s description of Humbaba]

As a result, the viewer must rely on previous experiences of Kiefer’s other compositions, or general cultural knowledge of Kiefer’s work, in order to imagine the probable grandness of its scale. While the shirt in the composition provides some means of assessing proportion, the size of this shirt is left indeterminate by both the image and the description (is it a man’s shirt? A doll’s? etc.).

Decay: In “The Work of Art in the Age of Its Technological Reproducibility,” Benjamin examines the inability of the reproduction to reproduce the context of an artwork (“In even the most perfect reproduction, one thing is lacking: the here and now of the work of art—its unique existence in a particular place” (253)), while also mentioning (somewhat briefly) the importance of the materiality of the artwork in assessing its authenticity: “This history includes changes to the physical structure of the work over time, together with any changes in ownership” (253). It’s this latter materiality that I would like to focus on with regard to Kiefer. Certainly, all analog visual art is subject to decay, but here we have compositions that readily exhibit their own decomposition through the intentional combination of elements already in the process of decay (dead leaves, rotting twigs, tattered cloth, etc.). In addition to providing sensory perceptions that resist digitalization (e.g. odors and textures), the accelerated dilapidation of a Kiefer composition ensures that the composition itself undergoes significant physical changes faster than most other forms of visual art. It is not merely that the meaning of the artwork changes based on situational or historical context—where it is in space (art museum, studio, gallery, internet database) and in time (intertextuality stipulates that the introduction of new artwork into the network alters the meaning of the old)—but also that the artwork itself observably changes at a rapid rate in comparison to, say, traditional oil-on-canvas paintings. This unique quality of Kiefer’s work is not digitalized, for the digital image never organically decomposes (although it would presumably not be impossible to simulate this, Google has yet to do it).

Certainly, we do not have to take the digital reproduction as a replacement for the original. In this sense, the digital reproduction serves to provide access to a global audience; it extends viewership to anyone with an internet connection. However, in the case of Kiefer’s work, is there any point in this extension of access? Two of the main characteristics of his composition—massiveness and decay—resist any attempts at digitalization (as my friend remarked in a conversation about this, “It’s only as massive and decaying as your screen”). What, then, is left? Is it even the same artwork in the digital context? Would a more thorough metatextual description alleviate these deficiencies? (Does not visual art also resist translation into the written word?) Adding to Bolter and Grusin’s remediation theory, might Kiefer’s work provide a case study as to how older mediums might compete with newer mediums not only through remediating the new, but perhaps through employing elements that actively resist remediation into newer formats?

Benjamin, Walter. “The Work of Art in the Age of Its Technological Reproducibility” (1936; rev. 1939).

Irvine, Martin. “André Malraux, ‘Le Musée Imaginaire (The Imaginary Museum).’”


Manovich and Remediation

In Software Takes Command, Manovich challenges Bolter and Grusin’s remediation theory, claiming that computers surpass the mere remediation of previous mediums. Instead, the computer is “‘a metamedium’ whose content is ‘a wide range of already-existing and not-yet-invented media’” (105; italics original). In addition, computers provide the ability to translate various mediums into other mediums (e.g. audio into visualizations) and to control the viewing of a medium’s content.

However, what are we to make of the continued presence of older media? Although Manovich revises remediation theory’s understanding of new media, he entirely overlooks the more interesting, or at least less intuitive, claim that older mediums remediate newer ones in an attempt to compete economically, culturally, and aesthetically. As the title of his book suggests, for Manovich, software takes command. Therefore, there is no such competition between old and new media; new media easily encompass the old and, further, add to it. Why, then, the continued existence of oil-on-canvas paintings, print, cable, vinyl records, AM/FM radio, etc.?

Although new media are capable of simulating and adding new perspectives to older media, it appears that some quality of older media may be lost in digitalization. Specifically, the meaning of text (in the broadest sense of the word) must change when it transitions into a digital medium. Manovich addresses this dynamic when he writes about hypertext and the various options for viewing, or otherwise experiencing, digitized media. As his quotation of Nelson suggests, “the philosophical consequences of all this are very grave” (80); hypertext “destabilizes the conventions of cultural communication” (81). Nelson remarks that hypertext may have “more teaching power” than previous means of dissemination, and perhaps it does. However, unless we are to perceive the continuation of old, mostly analog media as merely the remnants of the pre-digital past, and those who consume these media as merely conservative Luddites or retro fetishists, Manovich’s theory is incapable of answering why these technologies continue to exist and how they function within the broader media landscape.

If we are to take up Bolter and Grusin’s claim that these old mediums remediate the new in a competition for cultural and economic supremacy, we can shift the focus from how new media are more than the remediation of older media to how older media are possibly more than the remediation of anterior and posterior media. What happens when an older medium, say, literature, remediates digital technologies? Manovich goes some way toward examining the dynamic of translating a material sign into a digital sign, but how might we understand the reversal of this process?

Python, ANT, Performatives, and Computational Thinking

In (partially) learning Python, I was reminded of Latour’s network of human actors and nonhuman actants. Programming languages enable an ongoing communication between humans and objects, objects and objects, and humans and humans. Through the use of Python, one can issue commands to the computer and receive results, programs can send commands to other programs, and one can communicate with others who are also proficient in the programming language. In addition, one can specify the intended audience (human or machine) through the use of particular symbols. For instance, text enclosed in triple quotation marks (""") is not executed as an instruction; the interpreter treats it as an inert string (conventionally, a docstring), which other programmers can read in order to better understand the author’s choices or how the code functions.
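
As a minimal sketch of this division of audiences (my own illustration, not from the course materials), consider how Python separates machine-facing instructions from human-facing commentary:

    def transmit(message):
        """Docstring: an inert string addressed to human readers (and to
        documentation tools); it is stored with the function but never
        executed as an instruction."""
        # A comment: the interpreter discards everything after the hash mark.
        print(message)  # This line, by contrast, the machine carries out.

    transmit("Hello, network of actors and actants")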

While using Python and reading the articles for this week, it occurred to me that programming languages consist entirely of performatives, in the linguistic sense of the term. A performative is an utterance that performs the very action it states. For instance, if I were to say “I promise to learn to code,” I am also doing exactly that: promising to learn to code. In this way, language not only says or explains or details or means, but also performs actions. Programming languages appear designed in such a way that every utterance is also a performance. As Prof. Irvine remarks, bits include “not only data (numbers that mean things) but executable instructions (numbers that do things)” (4).
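
To make the analogy concrete, here is a small sketch (my own) of code as performative utterance: these statements do not describe actions so much as perform them when executed.

    # Saying is doing: this statement does not report that a binding exists;
    # running it performs the assignment and brings the name into being.
    promise = "I promise to learn to code"

    # Likewise, a def statement does not merely mention a function; executing
    # it creates the function object and binds its name.
    def keep(utterance):
        return utterance.replace("promise to", "am now able to")

    print(keep(promise))  # prints: I am now able to learn to code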

While the Python exercises assisted in understanding some of the concepts of the reading, they did not convince me of the benefits of applying computational theory to all aspects of the world and existence. For instance, Wing discusses how “computational thinking will have become ingrained in everyone’s lives when words like algorithm and precondition are part of everyone’s vocabulary” (34), and she seems to suggest that the adoption of these new(er) metaphorical understandings will be an improvement. So far, I am not entirely convinced. Why should we think of the choice between lines at a supermarket as “performance modeling for multi-server systems”? What is gained through the application of this new(er) metaphor set? We’ve discussed before the danger of the “mobile army of metaphors,” the way in which “truth” results in a metaphor losing its metaphoricity. However, none of the theorists for this week appear to treat these metaphors critically (or even to acknowledge their existence as metaphors). How cautious and critical should we be in employing computational metaphors, and what are the boundaries (if there are boundaries) of their usefulness?
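
For concreteness, here is a toy sketch (my construction, not Wing’s) of what the supermarket example looks like when taken literally as performance modeling for multi-server systems, under assumed arrival and service rates:

    import random

    random.seed(0)

    def mean_wait(num_servers, shared_line, num_customers=20000):
        """Average customer wait when everyone shares one line versus when
        each customer commits to a randomly chosen line."""
        free_at = [0.0] * num_servers  # time at which each server next frees up
        arrival = total_wait = 0.0
        for _ in range(num_customers):
            arrival += random.expovariate(1.0)               # ~1 arrival per time unit
            service = random.expovariate(1.2 / num_servers)  # total capacity ~1.2
            if shared_line:
                server = min(range(num_servers), key=lambda i: free_at[i])
            else:
                server = random.randrange(num_servers)       # pick a line blindly
            start = max(arrival, free_at[server])
            total_wait += start - arrival
            free_at[server] = start + service
        return total_wait / num_customers

    print("one shared line:", round(mean_wait(3, shared_line=True), 2))
    print("separate lines :", round(mean_wait(3, shared_line=False), 2))

The simulation does show the shared line winning, but whether recasting an everyday intuition as a computable model constitutes a gain in understanding is precisely the question at issue.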

Martin Irvine, An Introduction to Computational Concepts (introductory illustrated essay)

Jeannette Wing, “Computational Thinking.” Communications of the ACM 49, no. 3 (March 2006): 33–35.

Endlessly Tracing the Network

Although I tend to agree with Latour, I would like to use this post to question some of the possible implications of Actor-Network Theory.

How does one determine what belongs to the network? Or, rather, where does the network end? In my Rhetorical Ecologies class last semester, many of the theorists applied network theory to understanding the process of writing. Certainly, a written work is not the product of some autonomous agent; it does not shoot Athena-like from an individual author’s head. Instead, writing is a process involving a network of human actors and non-human actants, wherein agency is distributed across the network, albeit unevenly so. For example, this blog post is the result of me, theorists I’ve read, previous and current professors, and other students, but also the blog medium, my keyboard, the Internet in general, etc. However, it is unclear to me when we should stop tracing the network. One of the theorists from last semester went so far as to acknowledge her cat as an agent in her writing process. Others (briefly) delved into the realm of food and other bodily concerns in tracing the network. It appears that one could continue tracing the network indefinitely. How might we be capable of setting up some kind of agency threshold—that is, a level of agency below which actors are deemed irrelevant—without “transcending” the network?

Additionally, Latour defends ANT against charges of immorality, apoliticism, and moral relativism by claiming that he is not “indifferent to the possibility of judgement”; instead, he merely refuses “to accept judgements that transcend the situation” (“Technology is Society” 130). According to Latour, “one must first describe the network” before making diagnoses or decisions. However, if the tracing of the network is seemingly and perhaps actually endless, how do we ever manage to achieve the potential for judgement or decision or diagnosis? There needs to be some point at which it is decided that the network is sufficiently described, but (as stated earlier) how might we manage to designate such a point while remaining within the network and not imposing some limit from an external perspective? In ANT, where exactly do ethical or political considerations come into play?

Bruno Latour, “Technology Is Society Made Durable.” In A Sociology of Monsters: Essays on Power, Technology and Domination, edited by John Law, 103-31. London, UK; New York, NY: Routledge, 1991.

Understanding Media

According to Bolter and Grusin, as well as McLuhan, a medium can only be understood in its relation to other mediums. As in poststructuralist and Peircean semiotics, where the meaning of a sign is always made up of other signs, McLuhan remarks that “the ‘content’ of any medium is always another medium” (Understanding Media 3). This applies the concept of infinite semiosis to understanding media. As with words, we can only understand a medium in relation to other mediums, which in turn are only understandable in relation to still other mediums, ad infinitum.

This is what Bolter and Grusin refer to as “remediation,” and it is by no means the result of digitization, but instead a general theory for understanding all media, new and old. It is not only that newer media remediate (and reform) older media; “older media can also remediate newer ones” (55), an example of which might be Tosh.0, a television show (old media) that appropriates YouTube videos (new media). In this way, various media struggle for cultural, economic, and aesthetic dominance. They conclude, “No medium, it seems, can now function independently and establish its own separate and purified space of cultural meaning” (55).

As a result, it seems clear that we should study each medium in relation to both prior and succeeding mediums. One potential issue is that we continue to use linguistic metalanguages in order to understand all other mediums, thus granting privilege to the medium of language as the key to all other mediums. As McLuhan ends “Myth and Mass Media,” “For our experience with the grammar and syntax of languages can be made available for the direction and control of media old and new” (348). In other words, we can use the same tools of linguistic analysis in order to understand and therefore control other types of media. For instance, we tend not to think of music as having a syntax or grammar, but, like language, music is generative; a musical scale contains only a fixed number of notes, but with these notes one can create an infinite number of utterances. Similarly, the arrangement of the notes constitutes something like a syntax. Through such an analysis, we can come to a better understanding of how music (or any medium for that matter) functions, thus enabling us to direct the medium rather than it directing us.
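
The generativity claim can be made concrete with a toy sketch (my own, not McLuhan’s): a scale’s fixed alphabet of notes yields an unbounded space of possible utterances.

    from itertools import product

    C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]  # a fixed alphabet of seven notes

    def phrases(length):
        """Yield every possible phrase of the given length over the scale."""
        for combo in product(C_MAJOR, repeat=length):
            yield " ".join(combo)

    # Seven notes already generate 7**4 = 2401 distinct four-note phrases,
    # and since phrase length is unbounded, the space of utterances is infinite.
    print(sum(1 for _ in phrases(4)))  # prints: 2401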

However, does the study of linguistic media retain a higher methodological value for understanding other forms of media—that is, should we apply linguistic principles to the study of non-linguistic media? Does this not reinforce the centrality of linguistics, the same centrality that Bolter and Grusin criticize in relation to contemporary theory (57)? Should we instead strive to create a more inter-medial metalanguage: one that is equally relevant to all mediums?

For now, it appears that the use of linguistic analysis is the most pragmatic, but the development of a post-linguistic metalanguage might be necessary to understand the elements of other mediums that have no linguistic relation.

Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. 1st edition. Cambridge, Mass.: The MIT Press, 2000. Print.

McLuhan, Marshall. “Myth and Mass Media.” Daedalus 88.2, Myth and Mythmaking (1959): 339-48.  JSTOR. Web. 03 Mar. 2015.

McLuhan, Marshall. Understanding Media: The Extensions of Man. New York, NY: McGraw-Hill, 1964 (many subsequent printings and editions).

Communication Models and Social Theory

In developing communication models, our broader theories of societal organization both aid and constrain how we might perceive the transmission/communication of information. As Carey points out, there is a reciprocal relationship between models of communication and the ways in which we actually communicate: “Models of communication are, then, not merely representations of communication but representations for communication: templates that guide, unavailing or not, concrete processes of human interaction, mass and interpersonal” (14) and “Our models of communication, consequently, create what we disingenuously pretend they merely describe” (15). This creates a circularity by which the representations we create become the real upon which further representations are made (generally, ones that reinforce the earlier representations). As such, in representing society as “a network of power, administration, decision, and control” (16) we create society as a network of power, administration, decision, and control. This leads Carey to take a stance against communication models based on “power,” “trade,” and “therapy,” and instead to assert the importance of “aesthetic experience, religious ideas, personal values and sentiments, and intellectual notions” (16).

In so doing, Carey appears to stand against not only information theory’s “transmission model of communication” (developed for militaristic and commercial purposes), but also the reigning social and critical theories built on the foundations of Marxism (power and trade) and psychoanalysis (I assume this is what he means by “therapy”). While I am tempted to agree that these aspects might be emphasized to the exclusion of others, it’s almost as if Carey suggests that overlooking unequal “relations of property, production and trade” would simply make them go away; as if collectively pretending the world to be just would make it so. This brings up a problematic aspect of theorizing that the real is nothing other than our representations of it. Should we then go back to Plato’s Republic, banishing illogical representations in order to create a logical reality? Carey, of course, is a fan of representations Plato would find illogical, but it appears perfectly possible for someone with a more Platonic sensibility to suggest as much.

In addition, I’m conflicted over Carey’s emphasis on “aesthetic experience, religious ideas, personal values and sentiments” (16). While I’m sympathetic to the re-emergence of questions of aesthetic value, recently re-reading Benjamin’s “Work of Art in the Age of Mechanical Reproduction” raises some concerns over Carey’s elevation of the aesthetic over the political. For Benjamin, this was the essence of Fascism, aestheticizing political propaganda in such a way as to short-circuit critical reception (think “Triumph of the Will”). In addition, are these aspects of social life not entirely intertwined with the economic order? How might we develop a model of communication that includes all of these forces—social, economic, political, psychoanalytic? Current models appear too reductive. Might we instead draw from complexity theory and create a more ecological model of communication where these forces—sometimes converging, sometimes diverging—are depicted within a complex system, rather than the sender-message-receiver model?

Benjamin, Walter. Illuminations: Essays and Reflections. Ed. Hannah Arendt. Trans. Harry Zohn. English Language edition. New York: Schocken, 1969. Print.

James Carey, “A Cultural Approach to Communication” (from James W. Carey, Communication as Culture: Essays on Media and Society. Revised edition. New York and London: Routledge, 1989.)


This Heat “Makeshift Swahili”

After much deliberation, I’ve decided to analyze the semiotic structures of This Heat’s “Makeshift Swahili,” track eight of their second, and final, album Deceit (Rough Trade, 1981).


In assessing the shared cultural encyclopedia from which this track draws, one can access contextual meanings through both the synchronic and diachronic dimensions (Irvine 3). Synchronically, “Makeshift Swahili” arrives at the tail-end of British postpunk, following the death of Ian Curtis and the failure of Gang of Four to achieve mainstream recognition, and preceding the rise of MTV and New Pop (Calvert). Admittedly, postpunk is a fairly nebulous genre, encapsulating a variety of different bands that pushed the boundaries of the minimalist, three-chord progressions of early punk music to include influences from other genres and experimentations with form. This Heat’s placement far on the experimental end of this genre further complicates the problem of categorization. However, “Makeshift Swahili” exhibits some common tendencies of punk/postpunk, including aggressive vocals, treble-heavy guitar tones, and staccato strumming (think Gang of Four). In addition, the song elicits comparisons with prog rock—it’s clear that a fair amount of composition went into each of the song’s three sections, and the electronic organ that becomes prominent in the second section connects it pretty thoroughly to the prog rock sound. These clashing sound stacks of postpunk and prog rock enable “generative possibilities for variation and combination” (“Popular Music” 3)—in other words, they provide the ability for This Heat not to create something entirely new, but to remix already existing generic conventions in such a way as to add value and develop the possibility for future pathways in the meaning network (“Remix” 12).

From a diachronic perspective, “Makeshift Swahili” breaks down the conventions of Western pop music and sets the stage for succeeding genres (industrial, post-hardcore, and noise-rock). In contrast to the typical verse-chorus-bridge structuring of most pop music, “Makeshift Swahili” has three distinct yet equal parts, none of which are repeated (as in a chorus) or distinct from the rest of the song (as in a bridge). The first section features drones/feedback, discordant and off-key (but not arrhythmic) guitar riffs, a steadily increasing drum presence, an uncharacteristically smooth bass line, and screamed vocals. There is then an abrupt transition to the second section, which is significantly more melodic: the organ synths kick in, the vocals are no longer screamed but sung, everything is in time and seemingly in key. The third section is a muffled and more discordant version of the first. Here, the tempo accelerates dramatically, the staccato guitar work falls out of rhythm (or at least traditional notions of rhythm), and the screaming of discernible words turns into unintelligible yelling. It’s as if the song self-destructs into the drone from which it began.

In addition to the discordant and arrhythmic elements, this three-part structure challenges conventional notions of musical normalcy through the inclusion of the second section. Although the second section conforms to most Western musical standards in terms of timing and harmony (when I took music theory in high school, music was defined as harmony plus rhythm), it is (for me, at least) the most jarring part of the song. In this way, This Heat invert the polarized receptions of music and noise by making the most conventional section of the song seem the most strange or unsettling.

What comes after all this? The Cold War paranoia over seemingly imminent nuclear apocalypse that pervades the entire album suggests the band might not have expected much to follow this release. Nevertheless, the sound appears to have strongly influenced later incarnations of alternative and experimental music. The screaming leads the way to post-hardcore, the synths appear to foreshadow industrial music, and the inclusion of discordant sounds within a piece that still retains structure (albeit a somewhat unusual one) lays the foundations for noise-rock. In this way, “Makeshift Swahili” draws on the past, recombines previously existing elements, and develops the potential for a future response, as in Bakhtin’s dialogism.

Calvert, John. http://thequietus.com/articles/06379-this-heat-deceit-anniversary

Irvine, Martin. “Popular Music as Meaning System.”

Irvine, Martin. “Remix and the Dialogic Engine of Culture: A Model for Generative Combinatoriality.”