Author Archives: Joseph Potischman

Through the Wire: Extended Cognition, Memory, and the iPod (Joseph Potischman)


This paper analyzes the iPod system (device, software, accessories) as an artifact that enables extended cognition, with affordances that differ from those of the music technologies that came before it. It covers concepts important to understanding the differences between music and language processing, situates the iPod within the realm of other cognitive technologies, and examines how the iPod's organizational capabilities have altered the environment for music listening in the modern age.


Clark (1998, p. 13-15) wrote about artifacts of extended cognition as portable systems intertwined with our biological memory. The information stored in these systems must be useful to the user whenever they want to retrieve it. For some, music listening is thought of as innocuous: a frivolous pleasure, or a distraction with little impact. Walking through bustling city streets, we have become accustomed to seeing people with wires running into their ears, 'tuned out' from the rhythm of daily life. They are completely engaged in something else, and it is not clear what this type of musical immersion represents. If we re-classify music listening as an engagement with cultural memory, and the iPod as an artifact for extended cognition, perhaps we can build a stronger representation of the meaning making behind music listening.

Music and Language Processing

To build on the meaning of what music is, especially as it relates to the iPod, we must first define what it is not. While music and language share similarities in some aspects, in many more they strongly differ. The meaning elements of music can be described as “stacks of sound moving in time” (Irvine, 2016, p. 1). Just as watching a movie is not the same as experiencing the actions on screen, listening to music is not the same as experiencing the moments the musician signifies. Music instead serves as a conduit to personal memory, taking the themes expressed by someone else and ‘remediating’ them, seamlessly sliding the music into a new context to fit one’s own experience (Bolter & Grusin, 1999).

By continuously experiencing a specific musical style, listeners can attain a fluency in that style in the same way they understand language. However, even though one language can be translated into another, musical styles cannot. Instead, just as every individual learns the local language, they also learn the local variant of music, which functions more as a culturally specific meaning system (Irvine, 2016; Jackendoff, 2009, p. 195). For instance, Spanish can be translated into Czech, but a banda could not be translated into a polka. This does not mean that musical styles cannot permeate cultural boundaries. In fact, musical influence can diffuse across cultures and take on new uses, in the same way that banda music is derived from norteño music, which originated with the polka (Flores, 1992).

We also do not process music with the same logical directionality with which we process language. When we hear language, we first analyze the sound (phonology), then the words in the sound (lexicon) and the structure of those words (syntax), enabling us to parse what they mean so we can think about them (Jackendoff, 2002). This same process does not occur when listening to music, or at least not to instrumental music. Rather, we respond to the simultaneous sound stack immediately and try to parse out a semantic meaning within the context of our own cultural exposure to sound (Irvine, 2016, p. 2).

Why the iPod

This paper focuses specifically on the iPod because of its market dominance and the significant differences between its functionality and that of the portable music devices that existed before it. In 2008, approximately three million songs were sold per day through the iTunes store, capturing 83 percent of the market for all digital music sales. Even though there were cheaper MP3 players with similar interfaces, most of those devices, like the iRiver, have faded into obscurity, mainly because the iPod held such a dominant spot in the market (Sundie, Gelb, & Bush, 2008, p. 179). It also must be noted that the iPod combined with the iTunes software created the first complete web-connected portable music system (Sydell, 2009). While the iPod is certainly not the first music device, it is unique in that it combines the portability of devices like the Walkman and the cassette player with the totality of a record collection, within a closed system. It is then important to look at the iPod within the existing research on cognitive technologies.



Otto’s iPod

Clark and Chalmers (1998) wrote that active externalism helps explain how the environment we make decisions in helps drive our cognitive processes. Like their example of Otto’s notebook, the iPod can be thought of as an external artifact for extended cognition. Otto’s notebook helped him retrieve the locations of places he wanted to visit, so that all he had to do was flip through his book to find the directions he needed. The iPod enables Otto to scroll through his musical memories in the same way: anything he decides to save on his iPod will be available to him later.

The notebook acts as an indexical marker for the places Otto has already been. He can flip through the notebook, and all the experiences he has had at the locations he has visited become retrievable as well (Clark & Chalmers, 1998). This connects back to the concept of the iPod as an audio diary. When we listen to music we can go back to the moments in which we heard the music, but we can also go back to moments the music reminds us of (Bull, 2009). For instance, we might listen to a song because we heard it at a concert or a restaurant and we want to re-experience that moment. There is also a second level of meaning making that comes from the song’s actual content and its ability to create a mood, a quality which will be discussed later in this paper.

While finding the proper directions for where he wants to go is the ultimate reason for using the notebook, there are other externalities from offloading cognition into it. With his iPod, Otto no longer needs to remember all the musical experiences he’s had (though he likely could not, even if he wanted to). The iPod becomes the environment in which he can re-engage his cultural notebook. Just as the contents of Otto’s notebook are not simply records but also represent his work, the contents of his iPod are not just songs, but memories.

iTunes Library as a Sign Vehicle

“A sign is something by knowing which we know something more” – C.S. Peirce

The ability of individuals to manipulate signs and symbols changed with the popularization of the iPod. Rather than having a physical CD, LP, or record collection, every musical file a person owned would be indexed in their iTunes “library,” with text representing specific artists, albums, and songs. For this to work, the inner mind would have to be able to decide that these images represent music, but this interpretation is just a further representation (Barrett, 2013, p. 5).

Interacting with an iPod, the user undergoes the process of semiosis. Specifically, humans interpret sound recordings with meanings that are intersubjective and conform to a cultural category. The community that forms around these musical signs creates a dialogic culture (Stanford Encyclopedia of Philosophy, 2006). It is not a community of practice in which people produce music, but rather one where they re-produce it. In the earlier stages of its ubiquity, people would often ask to scroll through an iPod user’s library in public places. Mutual agreements on music could lead to friendship, while at the same time, possessing an iPod with contents considered culturally “un-cool” could allow the scroller to make unfair value judgements about the iPod owner (Levy, 2006, p. 147). In this same way, DJs use their turntables and records not to produce but to reproduce music; their selections are based on how well they relate to the musical identity of the crowd they are performing for (Katz, 2004, p. 115). If they pick music that fits within the context of the event, then they are successful; if not, the crowd will share its distaste.

Associated Indexing and the iPod

The human mind jumps from point to point, and computers enable this rapid thought association on screen (Bush, 1945, p. 9). Associated indexing is the ability to take each point in succession and tether it to the next. Where the memex would have accomplished this by storing articles in a desk, the iPod stores its content on a hard drive. Its computing power works in real time, so the user can scroll on the click wheel (fig. 1) while determining a sequential selection (Licklider, 1962). The iPod user can put on a song and let the content of the music guide them to their next selection, making free associations within the iPod’s stored memory. The click wheel serves the same purpose as the graphical user interface paired with the mouse, albeit with less autonomy (Engelbart, 1962). Users rely on their thumbs to scroll through their music library and press down to select, rather than pointing and clicking with an entire screen as their backdrop.




Here is a breakdown of the main affordances pertaining to the iPod, not all of which are unique:

  1. The iPod is portable

In 2006, Olympic snowboarder Hannah Teter tucked her iPod into her winter jacket and boarded her gold-medal-winning run to the tune playing in her earbuds. However, while there are many instances of well-known figures using an iPod, we are just as likely to see anyone running down the street with headphones in their ears, moving synchronized to the beat of the music and not the rhythm of the street (Levy, 2006; Bull, 2009, p. 85). This was much harder to do with the bulkier CD players that pre-dated the iPod. Although it was portable, the iPod would only enable playback if it was charged.

  2. The iPod is battery powered

In the top-right corner of the iPod’s screen is an icon representing a battery. The charge indicated in the battery displays to the user how much longer their iPod will last. All iPods were built with rechargeable lithium-ion batteries. These batteries have a life span of 8 to 12 hours per charge, depending on usage. Fully charging the battery from empty would take approximately 4 hours, so users needed to be aware of the status of their device (Apple, 2016). It is the dock connector interface that makes this charging possible, as users could plug their iPod into any standard outlet to charge. This is different from past music devices like phonographs and turntables, which were completely stationary, as well as cassette and CD players, which were mostly powered by non-rechargeable batteries. The dock connector was also multifunctional in that it could be plugged into a larger speaker system for playback.

  3. The iPod enables MP3/MPEG playback

As previously mentioned, the iPod was not the first portable system to play MP3 or MPEG files. However, the iPod coupled with the iTunes software and iTunes store created the first legal system for listening to and downloading songs from the internet (Sydell, 2009). At the time of its release, neither component was operable without the other, ensuring that any user fearful of punishment for copyright infringement would use the iPod (Sundie, Gelb, & Bush, 2008).

Steve Jobs brokered a deal with the major record companies to legally license and sell music through the iTunes store, thus creating a digital music store with most of the same capabilities as its local, physical iteration. This does not mean that iPod users could not violate copyright law, as they would often circumvent the iTunes store by downloading music from illegal services like Napster, LimeWire, or The Pirate Bay and uploading these files to their iPod (Knopper, 2013). This development was somewhat inevitable, as MP3 and MPEG files are non-rivalrous resources; consumption of these files by one person does not limit the consumption of another, and the sound does not degrade when copied (Katz, 2004, p. 163).

One of the trade-offs that listeners make when using the iPod is that some musical frequencies are drowned out by other sounds on a track. There is a loudness factor that is lost in MP3/MPEG listening. With traditional vinyl, the sound of a slammed piano key in a jazz piece will briefly cover the sound of the other instruments playing concurrently. However, when coding those sounds into a digital format, the background sounds are assigned fewer bits of data than the foreground sounds, and the listener hears less variability (Katz, 2004, p. 160). “Portable device audio decoding and amplifying technology [like the iPod] is not designed for music but for low-quality ‘functional’ sound,” and while this is true, the iPod fools our ears just the same (Irvine, 2016, p. 5).

  4. The iPod is automated

Whenever you press ‘play’, you receive an uninterrupted sequence of music because the iPod is an automated device. This self-operating principle means that when a track ends, the iPod does not stop playback (Denning & Martell, 2015). This is markedly different from past musical devices. The phonograph, turntable, cassette player, and CD player were all limited by the physical media they were playing. Vinyl records and cassette tapes have two sides, so when one side ends the listener must get up from what they are doing, go over to their phonograph (and later turntable, followed by cassette player), and flip the record or cassette over. With the popularization of CDs, users no longer had to flip from side to side, but they still had the problem of an interrupted music experience. iPod users can hear an uninterrupted stream of music provided their iPod is charged.

Organizational Capabilities of the iPod

According to Blichfeldt (2004), “social identity can be explained as the way we present and understand ourselves in relation to other people. In the same way that you need to organize the world around you, for it to give meaning, you also have to organize how you view yourself” (p. 43). If music acts as a cognitive system to tailor our surroundings to our desired psychological state of mind, then the ways in which we organize music can have a similar effect on our musical identity and mood.

There is a scene in the film High Fidelity (Frears, 2000) in which the main character, an audiophile, decides he is going to re-arrange every record in his collection autobiographically. If he wants access to a certain song, he is going to have to remember the album and the context in which he first heard it. He sits in a sea of vinyl trying to remember the order in which he listened to his music (figure 3). With the iPod system, it would take merely a click of the mouse to rearrange his library to reflect the order in which he first downloaded his music. While this scene is humorous, as intended, it presents an interesting take on the way we functionally think about music, and illustrates how the different affordances of an iPod and a record collection work in action.


Source: High Fidelity (2000)

When new symbols are inserted into a gallery, they recursively modify the course of the constructed history by changing the meaning of the symbols that come before them (Irvine, 2016). Just as museums function under this principle of recursive modification, so does media on an iPod. The organizing principles someone adopts when using their iPod will greatly influence their listening experience. If someone decides they are going to listen to music by bands that start with the letter M, they will have a very different experience from someone who decides to listen by genre.


Choosing a different organizing principle could also greatly influence the style of music a listener hears. With the ability to filter an entire library with just a few letters typed into a search bar, a divergent approach to music listening has emerged. This occurs when users search by mood, rather than artist, album, genre, etc. (Katz, 2004, p. 168). Type the word ‘cry’ or ‘tears’ into the iTunes search bar and one receives music that relates to sadness. Type in the word ‘blue’ and an entirely different shade of melancholy sounds appears.


Of course, listeners could decide to operate with no organizational principle at all. By using the shuffle feature, they can go through the entire contents of their iPod at random. You can go from the relaxed sounds of George Harrison to the cold trap music of Rick Ross, with no commonality between songs except that they are in the same library. The mood shift between the two is extremely abrupt, and further illustrates how music can create a personal soundscape but can also disrupt it. This is not possible with a record player, where music is ordered sequentially and the user must initiate playback by physically changing from vinyl to vinyl.
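The organizing principles discussed above (alphabetical order, genre, mood keywords, and shuffle) can be sketched in code. The library, field names, and keyword below are hypothetical illustrations of a text-indexed music collection, not Apple's actual data model:

```python
import random

# A hypothetical iTunes-style library: each track indexed by text metadata.
library = [
    {"artist": "George Harrison", "title": "My Sweet Lord", "genre": "Rock"},
    {"artist": "Rick Ross", "title": "B.M.F.", "genre": "Hip-Hop"},
    {"artist": "Muse", "title": "Starlight", "genre": "Rock"},
    {"artist": "Elvis Presley", "title": "Blue Christmas", "genre": "Holiday"},
]

# Principle 1: alphabetical by artist (e.g. to find every band starting with M).
alphabetical = sorted(library, key=lambda t: t["artist"])

# Principle 2: filter by genre.
rock_only = [t for t in library if t["genre"] == "Rock"]

# Principle 3: search by a mood keyword such as "blue" across titles.
melancholy = [t for t in library if "blue" in t["title"].lower()]

# No principle at all: shuffle plays the same songs in a random order.
shuffled = random.sample(library, k=len(library))
```

The point of the sketch is that each "listening experience" is just a different traversal of the same stored memory; the library's contents never change, only the organizing principle applied to it.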


Ultimately, music functions as an affect attached to the thoughts which words convey (Jackendoff, 2009). Certain types of instrumentation, a lonely harp plink or a violent cymbal crash, can direct a listener to a certain mood. Of course, most songs do not exist in a pure instrumental format, but are filled with signs and meanings. When we can curate those feelings instantaneously, we have more power to experience the world as we want to see it. In a series of interviews conducted by Bull (2009) on the iPod’s ability to individualize one’s immediate surroundings, one respondent said: “there is a song for every situation in my life, even if I might have forgotten about a certain time, person, or place, a song can trigger memories again in no time” (p. 87). If music is a way to capture memories in the form of an auditory diary, then the iPod is the most accessible device for memory retrieval in a closed system that humans have had.


With headphones firmly pressed around the ear, there is a paradox of isolation and intimacy (Bernstein, 2016). We are now able to create our own personalized soundscape anywhere we want, giving us the ability to engage in the dialogic culture in ways never before possible. At the same time, it cuts us off from reality, so we do not hear what’s going on around us. The first model of the Sony Walkman came with an orange button that let the wearer talk with a companion listening through the device’s second headphone jack, but the company phased this feature out (Levy, 2006, p. 212). The whole point of headphones, of portable music listening, is to mediate reality, not to connect with people. However, it would be too easy to say that this is separating us from others; as this paper has discussed, the act of music listening is to engage in a historic cultural dialogue. People have always tried to mediate their surroundings to fit their own narrative, and the iPod is just one of the latest in a long line of devices enabling them to do so.


Apple (2016). iPod battery FAQ.

Barrett, J. C. (2013). The archaeology of mind: It’s not what you think. Cambridge Archaeological Journal, 23(1). doi:10.1017/S0959774313000012

Bernstein, J. (2016). My headphones, my self. The New York Times.

Blichfeldt, M. F. (2004). Branding identity with Apple’s iPod: Constructing meaning and identity in a consumption culture by using technological equipment. The European Inter-University Association on Society, Science and Technology.

Bolter, J. D., & Grusin, R. A. (1999). Remediation: Understanding new media. Cambridge, Mass: MIT Press.

Bull, M. (2009). The auditory nostalgia of iPod culture. In K. Bijsterveld & J. Van Dijck (Eds.), Sound souvenirs: Audio technologies, memory and cultural practices, p. 83-93. Amsterdam University Press.

Bush, V. (1945). As we may think. The Atlantic.

Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), p. 7-19.

Denning, P. J., & Martell, C. H. (2015). Great principles of computing. Cambridge, Massachusetts: The MIT Press.

Engelbart, D. (1962). From augmenting human intellect: A conceptual framework. In N. Wardrip-Fruin & N. Montfort (Eds.), The new media reader, p. 93-108. Cambridge, Massachusetts: The MIT Press.

Flores, R. (1992). The corrido and the emergence of Texas-Mexican social identity. The Journal of American Folklore, 105(416), p. 166-182.

Frears, S. (2000). High Fidelity [Video File].

Irvine, M. (2016). The Grammar of Meaning Making: Sign Systems, Symbolic Cognition, and Semiotics, p. 1-48. Communication, Culture & Technology Program, Georgetown University.

Irvine, M. (2016). Popular music as a meaning system: The combinatorial elements in music’s meanings, p. 1-17. Communication, Culture & Technology Program, Georgetown University.

Irvine, M. (2016). (Meta)mediation, representation, and mediating institutions. Communication, p.5 -6. Culture & Technology Program, Georgetown University.

Jackendoff, R. (2002). Foundations of language. Oxford, UK: Oxford University Press.

Jackendoff, R. (2009). Parallels and nonparallels between language and music. Music Perception, 26(3), p. 195-204. doi:10.1525/MP.2009.26.3.195

Levy, S. (2006). The perfect thing: How the iPod shuffles commerce, culture, and coolness. Simon and Schuster.

Licklider, J. C. R. (1962). Man-computer symbiosis. In N. Wardrip-Fruin & N. Montfort (Eds.), The new media reader, p. 73-81. Cambridge, Massachusetts: The MIT Press.

Katz, M. (2004). Capturing sound. London, England: University of California Press.

Knopper, S. (2013). iTunes’ 10th anniversary: How Steve Jobs turned the industry upside down. Rolling Stone.

Stanford Encyclopedia of Philosophy. (2006). Peirce’s theory of signs.

Sundie, J. M., Gelb, B. D., & Bush, D. (2008). Economic reality versus consumer perceptions of monopoly. Journal of Public Policy & Marketing, 27(2), p. 178-181.

Sydell, L. (2009, December 22). The iPod: “A quantum leap in listening.” NPR Music. Retrieved October 6, 2016.


The Diner and The Donut Shop (Joe and Jameson)

In anthropology, the concepts of the sacred and the profane are used to show how cultures take specific aspects of life they deem more meaningful and separate them from the mundane tasks of everyday life. Sacred rituals and artifacts were ascribed a symbolic level not attributable to profane tasks and artifacts. In the physical world, museums have functioned as a sacred space for culture, presenting a curated set of artifacts which constitute a version of cultural memory, apart from the outside. The Google Cultural Institute is a new meta-interface providing a digital version of the sacred space for culture, delineated from Google’s profane web search.

When we talk about “Nighthawks” by Edward Hopper, what are we actually referring to? On the most basic level, it refers to the original, physical painting itself. Moving up a level, it can also refer to the original painting and all the physical copies, or replications, made of the original. Moving up yet another level, it can refer to photographic representations of the painting, since the painting is so distinct that we can, upon seeing it, recognize it as “Nighthawks.” Moving up even further, photographic representations of the painting (or, most likely, of copies of the painting) uploaded into the digital space constitute yet another interface for the original painting. When you search for “Nighthawks” on Google Images, you get thousands of results, including a wide selection of photographic representations of the painting, as well as iterations that range from re-interpretation to parody.

The former has a similar “likeness” to the original and can be immediately recognized as “Nighthawks.” We tend to think of it as merely being the thing it represents: if you come across it on Google Images, you will probably refer to it simply as “Nighthawks” (not “a digital representation of a photograph of a copy of…” etc.). The latter, on the other hand, may share some elements with the original painting but alters others. Even though it is not an exact replica or representation of the original painting, we still understand it as related, in some way, to “Nighthawks.” This is because it belongs to the cultural category of “Nighthawks,” in that it references and assumes an understanding of the artifact in question. The artistic, recognizable attributes are the same as in the original Hopper painting (and its many representations), even though other key elements are changed. These similar attributes are what allow us to understand it as an iteration belonging to this particular cultural category.


The order in which we look at these representations is also important in determining their meaning. In this way, how paintings are organized on a wall in a museum becomes another process of recursive modification. As your eyes pan from one painting to the next, the meaning of each painting is changed by each preceding image. This same process is observable in Google’s search interface: when you search for a painting, you get all the images that similar searches have surfaced through Google’s PageRank. When you search for “Nighthawks,” you get a replica of Edward Hopper’s painting, but you also get parodies of it. These parodies change the meaning of the painting because they denote “Nighthawks”’ cultural significance, without context as to what made it significant or where it stands in the artist’s canon. Other Edward Hopper works exist in the same cultural category as paintings related to “Nighthawks,” but they change the meaning of “Nighthawks” by expanding on the themes the artist was exploring, rather than marking “Nighthawks” as important in and of itself.

Parodies of Nighthawks:



Themes of Nighthawks:


Think of the different ways in which you would conceptualize “Nighthawks” after viewing each block of images. The meta-interface in which you view them will be the same, Google’s search engine, but the meanings elicited by each viewing will be very different. Seeing both gives a more elaborate view of artistic output and the many realms it can penetrate in our cultural consciousness. The Google Cultural Institute cannot replace museums as a way to experience art, but it can offer a substantially different context for looking at art, one that fulfills the more research-oriented aspects of what the computer system can do, from its early conception as the “Difference Engine” to Vannevar Bush and Alan Kay’s work to where we are now in the mediated present.


Durkheim, Emile. 1915. “Elementary Forms of the Religious Life.”

Irvine, Martin. 2016. “From Samuel Morse to the Google Art Project: Metamedia, and Art Interfaces.”

Irvine, Martin. 2016. “André Malraux, La Musée Imaginaire (The Museum Idea) and Interfaces to Art.”

Irvine, Martin. 2016. “The Museum and Artworks as Interfaces: Metamedia Interfaces from Velázquez to the Google Art Project.”

National Gallery of Art, Background on Samuel Morse’s painting, The Gallery of the Louvre.

Proctor, Nancy. 2011. “The Google Art Project.” Curator: The Museum Journal, March 2.

Those aren’t turtles…they’re bits!

Peirce’s triadic model enables a better understanding of communication as more than just sender and receiver. There are endless signs that can be used to communicate meaning, and when we fix them to an artifact we extend cognition. When we started to use digital media as the space to express ourselves, we treated them as we had other spaces, and Otto’s notebook (from Clark’s example) became Otto’s computer system. However, a notebook would never refresh to show new information, or link the information you want to other information you might be interested in. With Otto’s computer system, it’s all about association: building webs of interest from one idea. Otto might search for directions to the MoMA and save those directions; then his computer prompts him to look at other museums, and he saves those directions too. If Otto remembers that his goal is to get to the MoMA he’ll be fine, but if he builds his associative trail too far, he might forget where he was going.

This endless search for new signs creates information overload, and it’s a real problem right now. There’s a palpable, sometimes dark energy out there that I think a lot of people are feeling. It’s at its most powerful on the internet, where the sheer volume of information to be consumed is swallowing discourse. Not too long ago we had a fixed set of information: you’d get your daily paper and be limited to the facts on it. Now, we still represent this information in the same way, but it’s no longer fixed. Wegner differs from Turing and von Neumann in his assertion that there is greater richness to computation, as evidenced by the fact that classical machines cannot handle the passage of time during the act of computation. We have this same problem: as we try to digest new information, we cannot account for what is still developing.

Possibly one of the issues in bridging legacy media like the newspaper to a digital medium is that we’re trying to pour old wine into new bottles. There’s a space that needs to be allocated to communicate news, science, and culture, but maybe that space needs to be represented differently. These sites as they are designed now do not reflect the way they are being consumed, which is minute to minute, second by second. Murray makes the point that there is a better option: the GUI, for instance, helped design a better desktop, not by creating a layout that looks like a physical desk. We might need to change the media on which we receive information; otherwise we may be bogged down looking at one story and seeing it propped on the back of another, on the back of another, on the back of another, and we’ll be searching without understanding. Only it won’t be turtles all the way down, it will be turtles made of bits.

Reduce, Reuse, Remediate – Joe and Jameson

A particularly interesting quote from these readings is found in Bolter and Grusin’s piece on remediation: “In addressing our culture’s contradictory imperatives for immediacy and hypermediacy, this film demonstrates what we call a double logic of remediation. Our culture wants both to multiply its media and to erase all traces of mediation: ideally, it wants to erase its media in the very act of multiplying them” (Bolter and Grusin).

This is a fascinating concept, and even more relevant today as we accelerate towards a world that is, in a sense, at the same time both media-ful and media-less. Not only are different media combined (and “remediated”) in novel ways (text, video, audio, imagery, and the new forms that emerge from their combinations), but they are also becoming even more a part of our reality. They are no longer seen as mediating forces for and to the world, but exist as forces for and to the world. They are both everywhere and nowhere. They are not merely tools we use to express, capture, or understand something else; they are also the something else. They are ubiquitous yet invisible. This is the double logic of remediation: we want to access the “unmediated” meaning being represented “behind” the medium (the “object,” in Peircean terms), without the “mediation” in between us and the object. What this means is that we get an explosion of media while at the same time trying to limit any evidence that media exists.

Alan Kay discussed the new power that the DynaBook (and computers in general) could have on our education. His view of technology as a commodity was that the DynaBook could be given away for free, and only its content would be sold. Today we see the exact opposite, as our systems come with a hefty price but are pre-loaded with free software. It’s the software that enables remediation: we can represent endlessly, convert one message into another, and represent that conversion as a unique message. However, what we cannot do is properly denote the links in this trail of representation. For instance, to make a GIF is to take a sequence from a recording out of context and loop it, repurposing the old sequence with a new meaning. If you do not recognize the context of the GIF, you cannot comprehend the full message. A small tweak could enable a user to click on a GIF and be brought back to the full sequence or source material. We have this capability in research, where we mark our cognitive associations with citations so anyone reading our work can access the papers that influenced us. This type of linkage would be more difficult as it relates to music sampling; you can’t click on a soundwave, and digital streaming services do not show producer credits. Ultimately, it will be interesting to see whether new media built for short viewing will be able to credit sources, as original content becomes increasingly repurposed and devalued.

Alan Kay, “A Personal Computer for Children of All Ages.” Palo Alto: Xerox PARC, 1972.

Jay David Bolter and Richard Grusin, Remediation: Understanding New Media. Cambridge, MA: The MIT Press, 2000.

Vannevar Bush, “As We May Think.” The Atlantic, July 1945.

Associated Indexing and Memory – Joe

It would be very easy to say that there is a lot to unpack from these readings. So many ideas espoused in these papers exist today in modern computing, and while things did not progress exactly as these papers predicted (our computer systems are much more compact than a memex), they are still pretty incredible to read. For me, it was Bush’s concept of “associative indexing” that the memex would allow that provided the “wow” moment.

When Bush discusses the associative trail, I automatically thought of Zotero and RefWorks, where we can save our sources as we conduct research. He foresees that lawyers, physicians, and patent attorneys will all have their own trails, correlated to their specific expertise. Could he have imagined that all of their information would exist communally on the internet? That Douglas Engelbart’s patents would be known not only to patent attorneys, but to anyone who searches for them on Google?

Other media are limited by our ability to search them: with a physical book we have to flip through to the pages we want; a painting requires a trip to a museum. With a tablet or computer system, our only limits are what we can cognize and how much electricity we have. I type in any book section I want and it comes up, ditto any painting, and they aren’t saved on microfilm but exist digitally. These are cognitive artifacts that exist within a cognitive artifact, artifacts which we can blend together. This was one of my takeaways from the Conery piece, even if he did not intend it this way: we need to think of computing as more than code, more than programming, but as thinking. As researchers, using the right databases to extract the exact information we need is actually a form of computing. We create an associative trail as we think, and with every new bit we save, we extend our memory.
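Bush’s trail idea can even be sketched in code. This is a hypothetical toy of my own (all names invented, nothing from Bush’s paper), modeling a trail that extends every time we save a new association, the way Zotero or RefWorks chains sources:

```python
# Hypothetical toy model of Bush's "associative trail": a named,
# ordered chain of saved items. All names here are my own invention.
class Trail:
    def __init__(self, name):
        self.name = name
        self.items = []              # the associations saved so far

    def link(self, item):
        """Extend the trail with one new association."""
        self.items.append(item)
        return self                  # returning self lets links chain

trail = Trail("memory and computing")
trail.link("Bush 1945").link("Licklider 1960").link("Engelbart patent")
print(trail.items)   # ['Bush 1945', 'Licklider 1960', 'Engelbart patent']
```

Returning `self` from `link` is what lets one association lead directly into the next, a small echo of following a thought.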

One small thing that I keep thinking about as it applies to design thinking is: what if Xerox PARC hadn’t shared its work with aspiring entrepreneurs? We know that Apple basically took Engelbart’s GUI system and ran with it straight to the bank, and they did so with a closed system. Engelbart’s GUI and mouse made computers accessible, but they could have helped popularize an open system. If Apple hadn’t been the first to reach a critical mass, maybe we would all be hobbyists, adept with the more technical aspects of our computers. Engelbart, Bush, and Licklider predicted (and worked on) some of the most amazing developments of the 20th and 21st centuries, but could they have seen the Cubs winning the World Series? I’m not so sure.

Bush, Vannevar. 1945. “As We May Think.” The Atlantic.

Conery, John. 2010. “Computation Is Symbol Manipulation.” The Computer Journal 55, no. 7.

Irvine, Martin. 2016. “Introduction to Affordances and Interfaces: The Semiotic Foundations of Meanings and Actions with Cognitive Artefacts”.

Licklider, J.C.R. 1960. “Man-Computer Symbiosis.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 74–82. Cambridge, MA: The MIT Press, 2003.


Bringing it Back Home with Recursion

This is my third attempt to get myself to learn a programming language. The first time I tried was through Harvard’s edX computer science course, but as I was already enrolled in 5 other classes I couldn’t really get into it. The next was an introductory programming course taught by the head of Geneseo’s Computing and Information Technology department, but I had senioritis and the course was geared towards physics majors, so again it didn’t take. I hope the third time’s the charm; I really like Codecademy, and it’s helpful that I can structure the lessons myself. While working my way through the lessons, I kept thinking about one feature of linguistics and how it relates to coding: recursion.

The term has popped up before as the key feature of language that enables the production of a discrete infinity of sentence possibilities. We cycle through our lexicon and combine words and syntactic structures we know, but the recombinations are always unique. In programming, it allows us to call back functions we’ve already defined and apply them to new inputs. On the micro-level this allows us to create things like the ‘PygLatin’ generator in Codecademy’s Python tutorial: we create the function that can translate a word into Pig Latin, and then we can call it back on any word we want. I can’t really comprehend the type of code big data companies must be writing, but I can imagine how instrumental recursion must be to them. A possible (and simplistic) example: every time you publish a Yelp review of a cafe, their code takes all of the other reviews given to that cafe and folds yours into the average, and the same type of process happens for every review you post at any other cafe, making it easy to assign aggregate scores to businesses. Ultimately, if we did not have recursion, we would not be able to automate code, or at least our pattern matching would be far less efficient. However, despite all the praise I’ve heaped on recursion, I do have an issue with the term itself. We don’t only use it to bring back old code; we always add something new to it. Otherwise, there would be no reason to go back in the first place.
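To make this concrete, here is a minimal Python sketch of my own (not Codecademy’s actual lesson code), showing both plain function reuse and a genuinely recursive function, one that calls itself on a smaller input:

```python
def pig_latin(word):
    """Reuse: translate one lowercase word into (simplified) Pig Latin."""
    return word[1:] + word[0] + "ay"

def translate_all(words):
    """Recursion: this function calls itself on the rest of the list."""
    if not words:                          # base case: nothing left to do
        return []
    return [pig_latin(words[0])] + translate_all(words[1:])

print(translate_all(["pig", "latin"]))     # ['igpay', 'atinlay']
```

The base case is what keeps the self-reference from looping forever; each call handles one word and hands the remainder back to itself.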

Unrelated points:

  • Model Checking: Wing’s explanation of model checking as a system we can use across disciplines and professions was very interesting. When she explained it with the ATM metaphor, I felt like I really understood it. A model checker takes two inputs: a finite-state model of the system (the physical ATM) and a temporal logic property it should satisfy (being able to get my money). Bugs in the system itself (money doesn’t come out) surface as counterexamples. However, I feel like her use of the ‘counterexample’ doesn’t cover a wide enough range of externalities related to the system. Model checking, in my understanding of it, wouldn’t account for how the ATM has affected the way we carry (less) paper money, or a robber stealing your money at an ATM. These aren’t flaws in the model, but they wouldn’t exist without it.
  • This is more of a public statement: if anyone has made it to the end of this post, I’m going to try and code for an hour every other day so I can finally learn a programming language. If you see me around, ask me if I’ve been keeping up with Python, and if I haven’t, feel free to hit me.
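The ATM idea above can be caricatured as an exhaustive search over a finite-state model. The toy checker below is entirely my own construction with invented states (not Wing’s example code): it hunts for a counterexample trace where withdrawing ends in an error instead of dispensing:

```python
from collections import deque

# Toy finite-state model of an ATM; the 'error' state models the bug
# where money doesn't come out. States and transitions are invented.
TRANSITIONS = {
    "idle":     ["card_in"],
    "card_in":  ["withdraw", "idle"],
    "withdraw": ["dispense", "error"],
    "dispense": ["idle"],
    "error":    ["idle"],
}

def find_counterexample(start="idle"):
    """Breadth-first search; return the first path that reaches 'error'."""
    queue, seen = deque([[start]]), set()
    while queue:
        path = queue.popleft()
        state = path[-1]
        if state == "error":
            return path                  # the counterexample trace
        if state in seen:
            continue
        seen.add(state)
        for nxt in TRANSITIONS[state]:
            queue.append(path + [nxt])
    return None                          # property holds: no bad trace

print(find_counterexample())   # ['idle', 'card_in', 'withdraw', 'error']
```

Real model checkers verify temporal logic formulas over vastly larger state spaces, but the shape is the same: explore every reachable state and report the trace that breaks the property.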

Ticket to Ride…the cognitive express! (Joe and Jameson)

51 years ago, four young, intrepid, mop-headed boys went into a recording studio and played “Ticket to Ride.” To perform this track they first had to write it, a process in which they came together and each extended their creative, musical ideas into song form. Each of them had distinct cognitive abilities and talents which, when brought together in the group, interacted to develop the song as we know it. John Lennon’s songbook functions in the same way Otto’s notebook does. He could probably play the song from memory, but having musical notes written out and standardized enabled future reproductions. Anyone with the skills (linguistic competency) to read the notes could play the song, but no reproduction would ever sound the same. Of course, most people want to hear it from the source, and recording allows the extension of the performance into the future an infinite number of times. In effect, listening to the recording allows you to time travel to Abbey Road Studios and meet The Beatles.

The music itself conforms to the structure we expect of a pop song in terms of duration, instrumentation, key signature, tempo, chord progression, etc. They took the signifiers we know to be pop music from a long history of intersubjectively accessible meanings and generated a new iteration of them. We recognize “Ticket to Ride” as its own distinct, identifiable song, but understand it within the context of pop music forms. This allows their musical ideas, brought together in the form of a song, to be distributed in a cultural language that fans of the form will understand.

Today you cannot experience the Beatles’ music through the original lineup performing it for you. We rely on reproductions, whether other musicians re-interpreting it or recordings on vinyl, cassette, digital, etc. In active externalism, your environment drives your cognitive processes. If we wanted to purchase a Beatles album in person, we would go to a record store, but we would be limited by what it sells. If we go to a digital store, we can buy any Beatles album (assuming they have the rights). The digital music library offloads these physical recordings into the digital sphere. In other words, a pressing of a record and an mp3 have the same affordance, they play the same tune, but how we interact with them as external cognitive artifacts is completely different. To play the Beatles on vinyl is to be limited to how they sequenced their music; hitting shuffle in a digital library, we can listen to their entire archive instantaneously.


This whole process began with social cognition: one of us hummed “Ticket to Ride,” and, because of our shared repertoire of sign and symbol systems, the other could recognize it, even though it was a recreation of one distinct melody. As far as the lyrics go, we assumed the song to be literally about a train ticket, but apparently Lennon and McCartney can’t even agree on its meaning.

Zentropy and Inforgs

I didn’t come up with that title; it’s from an album and it has nothing to do with this post (but it is cool, and random, and I’m writing about entropy).

Entropy may have been von Neumann’s joking way of connecting concepts in thermodynamics to information, but when extrapolated to semiotics, its system of informer – informee – informant creates a model that is Peircean in its application. If I understand the concept as it relates to information: the more structured your communication, the smaller the chance of its misinterpretation, whereas a message with more entropy (randomness) has a larger chance of being misinterpreted. The random information adds noise to our message, and we somehow need to parse out what is accurate.
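Shannon’s measure makes this concrete: entropy is lowest when a message is fully predictable and highest when every symbol is equally likely. A small Python sketch of my own, estimating entropy from symbol frequencies:

```python
import math
from collections import Counter

def shannon_entropy(message):
    """Estimate bits per symbol from the message's symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    # log2(total/n) is the 'surprise' of a symbol that appears n times
    return sum((n / total) * math.log2(total / n) for n in counts.values())

print(shannon_entropy("aaaaaaaa"))   # 0.0 -- fully structured, no surprise
print(shannon_entropy("abcdefgh"))   # 3.0 -- maximally random for 8 symbols
```

The more random the message, the more bits each symbol carries, and the more room there is for noise and misinterpretation.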

I want to move on to inforgs, a Philip K. Dick-esque term for how individuals are becoming interconnected informational organisms. I would argue that we have always been this way, but it wasn’t until discourse moved online that we could comment in real time and with such rapidity. With our creation of online profiles on major social networking sites, we have gained the ability to create two separate personas: online and offline. Although code-switching in Hall’s use is usually applied to linguistics (bilingual speakers mixing the high language and the low language), it also helps explain online identities.

When we upload new pictures or share content, we decide what information is suitable as an avatar for our natural selves. However, something that is not touched on is our ability to create a completely fake persona; enter Weird Twitter, where personas are built from cultural references. There is also an identity that we do not get to build for ourselves: the ITentity. I once studied the use of RFIDs in school ID cards. Schools in large districts stopped tracking in-class attendance and instead relied on the derivative data from the chip to see if students were really on school grounds. Through our online interactions, we leave a constant trail of metadata and derivative data behind us; I will be interested in studying the models for how this information is used (and its limits).


Hall, Stuart. “Encoding, Decoding.” In The Cultural Studies Reader, edited by Simon During, 507-17. London; New York: Routledge, 1993.

Martin Irvine, “Introduction to the Technical Theory of Information.”

Luciano Floridi, Information, Chapters 1-4. PDF of excerpts.

Logical Directionality of Sight (and lots of Flags)

We are still working out whether language is the model or the main modeling system for other forms of meaning making (Irvine). While we attempt to work this out, we can take theories of language and see how they apply in other symbolic genres. Looking at Jackendoff’s model of the logical directionality of language perception, I wanted to test how it would work if applied to sight.

When we hear something, we first analyze the sound (phonology), then the words in the sound (lexicon) and the structure of those words (syntax), enabling us to parse what they mean so we can think about them. When we see something, for instance a flag, what is the order in which we uncover its meaning? While flags often carry ideas of distinct nationality, most are not particularly visually idiosyncratic. Exhibit A, the French flag:


The first thing we might process would be the flag’s spatial size. If we did not possess this cognitive ability, we would not be able to distinguish the flag from any of the other images in front of us, and would not make it to the next step of processing. Following Jackendoff’s rules, we simultaneously process the shape and color of the flag. If we could comprehend the French flag’s colors, but not the shapes within it, we might be fooled into thinking we were looking at a Russian flag:


If we could comprehend the French flag’s shape, but not its proper colors, we might believe we are looking at the Belgian flag:


Flags exist dialogically like everything else, and the tri-color vertical design could pay homage to the country from which its second most popular language derives. The last step would be to put all of this information together and evaluate it based on context. If an image identical to the French flag were displayed in an art gallery under the name Composition X, it could be seen as a really lazy attempt at De Stijl rather than as a key component of French national identity.
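The staged account above can be caricatured in code. This is purely my own toy sketch, not Jackendoff’s model: size is checked first, shape and color jointly pick a candidate, and context gets the final say:

```python
# Toy staged 'flag perception' pipeline; all data here is invented
# for illustration and greatly simplifies real visual processing.
FLAGS = {
    ("vertical",   ("blue", "white", "red")): "France",
    ("horizontal", ("white", "blue", "red")): "Russia",
    ("vertical",   ("black", "yellow", "red")): "Belgium",
}

def identify(size, orientation, colors, context="flagpole"):
    if size <= 0:                        # stage 1: no spatial extent, no sign
        return None
    candidate = FLAGS.get((orientation, tuple(colors)))  # stage 2: shape + color
    if context == "art gallery":         # stage 3: context reinterprets the sign
        return "Composition X"
    return candidate

print(identify(1, "vertical", ["blue", "white", "red"]))  # France
```

Swapping only the orientation or only the colors yields Russia or Belgium, and swapping the context turns the very same design into Composition X.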

Of course, there are other processes that need to be taken into consideration. A flag can exist physically in ways that words cannot, while also being unable to convey the depth of meaning that words can. A large French flag made out of cotton may inspire more reverence than a pixelated flag. Why do certain colors convey meaning? A feeling of “red” is a qualisign that means anger; feeling “blue” could indicate sadness. Tying it back to flags rather crudely, do the French and Russians exclaim they feel the “red, white, and blue” when they feel patriotic?


Martin Irvine, Selections from: Semiotics, Symbolic Cognition, and Technology: A Reader of Key Texts

Ray Jackendoff, Foundations of Language, selections on the “Parallel Architecture” model of language as a combinatorial system. Chap. 5.5, pp. 123-128; Chap. 7, pp. 196-200.

Semiotic Elements and Classes of Signs (Wikipedia)

words… combined

Whoa. Siri has its work cut out for it.


If someone asked me what a language is, I would probably start by telling them what language is not, as Pinker does. It is not thought; it is a way of expressing thought. As we learn more about connections between language and other forms of expressing thought (i.e., multimedia, film), this should hold true. When we think about the rules of language, for instance, lexical categories (finite) and their combinatorial nature (infinite), I assume we can apply them to other forms of expression. In music there are only so many notes you can play on any given instrument, but there are endless possibilities for how you can play those notes in a song. I haven’t gone further than basic googling, but it seems that childhood musical development is concurrent with (or at least overlaps) language development. I wonder if music is also structure-dependent like language.

When Jackendoff writes “the little star beside the big star,” his spatial structure model seems intuitively clear, although when he draws out the physical structure of a big star and a little star, I wonder if drawing two celebrities of unequal fame would also suffice. There are parts of all of the other structures that make sense, but as a whole the new terminology makes them more difficult. I never really thought before about the way plural suffixes have three different sounds (s, z, uhz), or how syllables have a nucleus that makes up the bulk of the syllable. I still have a hard time following how he maps the different structures out, but the structure itself provides a useful framework.

I’ve only taken one class on linguistics, and it was the last requirement I needed for my anthropology minor. It focused on social uses of language, although we started off by going over syntax, semantics, and pragmatics. We also discussed things that I assume we won’t be using here, like haptics (the use of touch in communication) and proxemics (location in communication), although their implementation can alter our perception of meaning. What’s interesting is that although we spent a lot of class time going over the Ebonics debate and how descriptivists would agree that it is sophisticated, we never went over Chomsky. His ideas seem to be integral to the idea of universal grammar. Jackendoff even mentions his merge rule (although he doesn’t say whether we should give it credence), where any word or phrase can be combined with any other word or phrase (I suspect this rule governs how startups choose their company names, and Dada art). How far did Chomsky take this rule?

Lastly, here’s Rick and Morty’s take on a universe where grammar evolved to have ‘shm’ as the prefix for everything (I wasn’t sure how to weld it to an example in the reading, but I know it belongs here):

Martin Irvine, Introduction to Linguistics and Symbolic Systems. 2015.

Ray Jackendoff, Foundations of Language: Brain, Meaning, Grammar, Evolution. New York, NY: Oxford University Press, USA, 2003.

Steven Pinker, Linguistics as a Window to Understanding the Brain. 2012.