The Semiotic Potential of Smartpen Technology (Rebecca Tantillo)


In this paper, I provide an overview of smartpen technology, using the Livescribe Echo Smartpen as my primary example, in order to trace the conceptual development of this technology as a tool for sign formation and abstraction. Using the conceptual frameworks of C.S. Peirce, Andy Clark, and David Chalmers, I demonstrate the ways in which the smartpen and smartpaper interface enables users to perform meaning-making through advanced forms of cognitive offloading. Ultimately, analyzing smartpen technology through these frameworks provides a deeper understanding of the unique semiotic functions afforded by the development of pen and paper into digital technologies.


The inscription of knowledge for functions ranging from personal use to general communication to overall cultural progression has long been achieved through the artifacts of pen and paper. As an interface, these tools allow for useful extensions of human cognitive functions, particularly by acting as external storage devices for recalling and sharing meaning. However, digital and computational technologies, such as the PC, tablet, or smartphone, which facilitate dynamic digital information processing, have in many ways surpassed the pen as tools for higher-level meaning-making and abstraction. Smartpen technology, however, which integrates digital and computational capabilities into its interface, gives users the ability to transform handwritten text into digital text for dynamic information processing. Thus, through an exploration of the smartpen and its functions, I will demonstrate the ways in which smartpen technology provides a technological interface for cognitive offloading and meaning-making that advances the semiotic capabilities of the culturally ingrained artifacts of pen and paper.

Disambiguation of Terms

There are multiple models of smartpens currently available; however, the Livescribe Echo Smartpen incorporates the most conceptually advanced technology at this time, so I will use it as my primary example in this paper. The Livescribe Echo Smartpen is a computing system that, combined with “dot-enabled paper” and Echo Desktop software, which can be run on any PC or smart device, comprises a broader interface system architecture. For the purposes of this paper, I will refer to these items and the concepts they represent with the following terms: “smartpen,” “smartpaper,” and “desktop software.”


Image Source: Livescribe Echo Gallery

De-Blackboxing Smartpen Technology

Like any ballpoint pen, the smartpen is just over six inches in length and contains a ballpoint ink cartridge. However, this is where the similarities between the two end. What the smartpen adds is an infrared camera, an OLED (organic light emitting diode) display, embedded microphone and speaker, audio jack, USB port, and, of course, a power button. These components are supported by internal flash memory and an ARM 9 processor and powered by a rechargeable lithium battery (Echo Smartpen Tech Specs). While the smartpen can be used to write standard text on any surface, its digital recording and computing functions are enabled when used in conjunction with smartpaper, which to the naked eye appears as a standard sheet of paper, but in actuality contains an intricate pattern of “microdots” across its surface. Finally, the system architecture of the smartpen is completed by a companion software program, which can be run on any PC or smart device (About Livescribe Dot Paper).


Image Source: Livescribe Echo Tech Specs

Within this system architecture and with these interface components, the smartpen allows the user to do much more than simply write. When powered on, the infrared camera allows the user to instantly record handwritten notes, scripts, or designs taken on smartpaper. The script or designs captured by this camera can be automatically synced with audio captured by the smartpen’s integrated microphone, which can then be played back through either the embedded speaker or through headphones attached to the standard audio jack (Echo Smartpen Tech Specs).

As mentioned, the smartpen works in conjunction with smartpaper, which comes in a variety of aesthetic and functional styles but is unique in that its surface is completely covered with “microdots” (100 micrometers in diameter) positioned across an invisible grid (About Livescribe Dot Paper). These microdots, spaced approximately 0.3 mm apart, form a larger pattern (see image). Each microdot represents a specific location within the pattern that is “activated” when contacted by the ink in the smartpen, allowing the infrared camera within the pen to capture and record exact patterns. This pattern recognition can occur while recording audio in order to sync the recorded audio with exact points of text.


Image Source: About Livescribe Dot Paper
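The way a pattern of dots can encode absolute position may be easier to grasp with a toy sketch. The actual Livescribe/Anoto encoding is proprietary; the displacement scheme and window size below are assumptions made purely for illustration.

```python
# Toy sketch of dot-position decoding (illustrative only; the real
# Livescribe/Anoto encoding is proprietary and more elaborate).
# Assumption: each microdot is nudged up/down/left/right from its
# nominal grid intersection, and each nudge encodes two bits.

NUDGE_BITS = {"up": 0b00, "right": 0b01, "down": 0b10, "left": 0b11}

def decode_window(nudges):
    """Fold a window of dot displacements into a single location code.

    A real pen's infrared camera reads a small window of dots at once;
    here we simply pack each dot's two bits into one integer.
    """
    code = 0
    for nudge in nudges:
        code = (code << 2) | NUDGE_BITS[nudge]
    return code

# Two different dot windows yield two different location codes, which
# is what lets the pen know exactly where it is on the page.
loc_a = decode_window(["up", "right", "down", "left"])
loc_b = decode_window(["left", "down", "right", "up"])
assert loc_a != loc_b
```

Because every window of dots is unique across the page (and, in Anoto's scheme, across many pages), the pen never needs an external reference point: the paper itself is the coordinate system.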

Of course, the recognition of a pattern on smartpaper and recorded audio would not be possible without the ability to digitally encode the content (Denning & Bell 470). The smartpen contains 2 GB of flash memory and an ARM 9 processor, powered by a rechargeable lithium battery. In addition to storing user-produced content, the smartpen’s memory houses its software, both for encoding data, such as content-recognition algorithms (Marggraff et al.), and for various bundled applications, such as simple and scientific calculators, audio replay commands, translation software, and so on.


Image Source: Livescribe Echo Gallery

In terms of connectivity, there are several options for storing and organizing content collected by the smartpen. If the user is positioned near a Bluetooth-enabled smart device, the content can be transferred wirelessly in real time, as the user writes on the smartpaper. Alternatively, the content can be stored in the smartpen’s own memory until the user is able to connect via Bluetooth and make the wireless transfer. Lastly, the smartpen’s USB port allows the user to dock the smartpen and connect to a PC in order to make a direct transfer.

Once the data from the smartpen has been transferred to a PC or smart device, the desktop software enables the user to perform multiple functions with the digitized text, images, and audio. Primarily, the software allows the user to reorganize individual sheets of paper into notebooks and reorder sheets within a notebook. In its digital form, the handwritten text also becomes searchable, allowing users to easily locate text within or even across notebooks. Furthermore, this software allows the user to transcribe the digital handwritten text into standard computer text for word processing. Sheets and notebooks, along with their synced audio, can easily be exported as PDFs and shared (Echo Smartpen Tech Specs).
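The cross-notebook search described above can be sketched in a few lines. The `Notebook` and `Page` structures here are hypothetical stand-ins, not Livescribe's actual data model, and transcribed pages are assumed to be plain strings.

```python
# Minimal sketch of searching transcribed notes across notebooks.
# Names (Notebook, Page) are illustrative, not Livescribe's API.

from dataclasses import dataclass, field

@dataclass
class Page:
    text: str          # transcribed handwriting
    audio_file: str    # path to the synced recording

@dataclass
class Notebook:
    title: str
    pages: list = field(default_factory=list)

def search(notebooks, query):
    """Return (notebook title, page index) for every page matching query."""
    hits = []
    for nb in notebooks:
        for i, page in enumerate(nb.pages):
            if query.lower() in page.text.lower():
                hits.append((nb.title, i))
    return hits

lectures = Notebook("Lectures", [Page("Peirce on semiosis", "s1.aac"),
                                 Page("Extended mind notes", "s2.aac")])
assert search([lectures], "semiosis") == [("Lectures", 0)]
```

The point of the sketch is simply that once handwriting is encoded as digital text, it inherits all the media-independent operations of text, search included.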

Fundamental Concepts

In order to understand the advanced potential that the smartpen and smartpaper interface contains as a cognitive technology, I will rely on the theoretical contributions of C.S. Peirce, Andy Clark, and David Chalmers. In particular, I will refer to Peirce’s concept of semiosis and the triadic nature of signs as a basis for understanding how meaning and abstractions are made (Irvine Grammar 17). In conjunction with Peirce’s theories, I will use the concept of “the extended mind” as developed by Andy Clark and David Chalmers to describe how the sign formation process, as a cognitive function, is extended into tools and artifacts (Clark & Chalmers).

Peirce’s Theory of Sign Formation

A sign is something by knowing which we know something more.

C.S. Peirce (Irvine Semiotics 16)

By “semiosis” I mean… an action, or influence, which is, or involves, a cooperation of three subjects, such as a sign, its object, and its interpretant…

C.S. Peirce (Irvine Semiotics 21)

To understand a sign, which may be simplified as something to which meaning is attributed and through which meaning is understood, one must understand how a sign is formed. According to C.S. Peirce, semiosis is the process through which meaning is made and interpreted within a sign. Furthermore, this process of meaning-making operates through a three-part relationship. In other words, for the meaning of any sign to be understood, there must be: 1) a physical manifestation of the sign to be interpreted, whether visual, audible, tactile, etc.; 2) an abstract concept or secondary physical object, which the physically manifested sign represents; and finally 3) an interpretant, or meaning, that is produced from the relationship between the physically manifested sign and the concept or object it represents (Irvine Semiotics 14-5).

By explaining this three-part relationship within the process of semiosis, Peirce reveals three fundamental concepts of meaning-making. First, the relationship between the physical manifestation of the sign to be interpreted and the abstract concept or secondary physical object reveals the interdependence of signs within larger sign systems in creating or understanding meaning. Second, as the interpretant is produced through the relationship between the physically manifested sign and the object or concept it represents, rather than contained within either of these entities, we are able to understand that the interpretant itself is a sign. Third, and perhaps most importantly, in order for meaning to be understood, a sign’s interpretant must be acknowledged. Thus, there must be an active agent outside of this system capable of both observing the physical manifestation of the sign and drawing its connection to the object or concept it represents in order to understand the meaning that the interpretant, or relationship between the two, conveys.

The Extended Mind Theory

Bearing in mind Peirce’s concept of semiosis, the most logical active agent capable of acknowledging an interpretant as a sign’s meaning is the human cognitive system. Though the specific process of human cognition remains an unobservable mystery (Barrett), we are able to observe the process of perceiving the physical manifestations of signs (i.e., viewing physical images or objects, listening to sounds, feeling materials, etc.), as well as the utilization of the meaning made with these signs (i.e., verbal, physical, or emotional responses).

However, the work of Andy Clark and David Chalmers in “The Extended Mind” offers a unique alternative interpretation of this process. In essence, Clark and Chalmers assert that the cognitive functions of the human mind are not restricted to the mind itself; rather, through the use of tools and artifacts, the human mind can be “extended.” For example, take the primary cognitive function of memory. When an individual writes down a telephone number instead of memorizing it, the individual is, in essence, using the paper as an external memory in which to store information rather than retaining it within their own memory.

This example reveals two notable implications of Clark and Chalmers’ theory. First, the process of extending cognitive functions to an artifact or tool is an implicitly “external,” “active,” and productive process. In other words, not only can cognitive functions occur in an environment outside of one’s mind through artificial cognitive agents, but they can produce meaning that can then be used or applied within other cognitive agents, either human or artificial. Second, this example reveals the idea of larger cognitive system architectures and coupling. In other words, through the extension of human cognitive functions, systems of interdependency are created between cognitive agents that enable more efficient and, ideally, more effective cognitive processes. Within these systems, the human cognitive agent couples with an artificial cognitive agent through which meaning can then be produced and used (Clark & Chalmers 7-9).

The Connection between These Concepts

The significance of Clark and Chalmers’ theories in the context of Peirce’s concepts of semiosis and sign formation is that when these processes are extended from the human mind into the artifacts and technologies that we use, we are able to create extremely powerful tools and networks of meaning-making. As Clark and Chalmers explain, these networks can be as simple as the coupling of one’s mind to a standard pen and paper interface (Clark & Chalmers 12-5). However, when the mind is coupled with more complex artifacts or more powerful technologies, the cognitive outputs become more complex and powerful as well, allowing for higher levels of meaning abstraction. With this in mind, we can begin to understand how significant the development of computing technologies as information processing systems has been for the progression of human thought.

In fact, even in the earliest conceptions of computer processing systems, such as Vannevar Bush’s “memex,” we find that much of the developmental inspiration for these systems stems from a need to store and communicate information in ways that would facilitate higher levels of meaning-making (Bush). Bush’s vision provided the foundation for much of the information processing technology that we have today. Other early computer designers advanced Bush’s vision even further. For example, Douglas Engelbart was responsible for revolutionary designs such as “hypertext,” a method for connecting multiple layers of digital content (Engelbart). Ivan Sutherland developed Sketchpad, which provided an early example of design-based programming through the use of a “light pen” (Sutherland). Also pivotal was Alan Kay, whose vision included mobilizing the personal computer interface into a singular portable device and introducing software applications within a user-programmed environment (Manovich 57).

These concepts, beginning with Peirce’s philosophical foundation and Clark and Chalmers’ explanation of how artifacts and technology facilitate higher levels of meaning-making through cognitive extension, provide the basis I will use to investigate the smartpen as a cognitive tool for meaning-making and abstraction.

The Cognitive Advantage of the Smartpen & Smartpaper Interface

Smartpen technology presents a modular system architecture consisting of three primary components: the smartpen itself, the smartpaper, and the desktop software (Baldwin & Clark 63). Both the smartpen and the smartpaper contain mechanical interfaces that can be manipulated by a user to produce analog data (Denning & Bell 470). For example, the ink of the smartpen is standard ballpoint ink, which can be used to write standard text on a standard piece of paper, while the smartpaper can be used as a standard notebook if written on with standard ink. The digital interface of the smartpen and smartpaper is created when the powered smartpen is used to input information on the smartpaper, bringing the ink cartridge into direct contact with the microdots contained in the smartpaper grid. The interface of the system software, on the other hand, offers a standard set of functions that depend on the device on which it is run, such as a PC, tablet, or even smartphone.

Like any artifact, the smartpen and smartpaper interface is a semiotic tool that facilitates the expansion of sign creation and interpretation. According to C.S. Peirce, there are three fundamental types of signs: icons, indices, and symbols. Iconic signs are those that can be interpreted through representations of likeness. Indices are signs that point to other signs, such as an arrow that indicates a direction or a car horn that redirects the attention of the individual who hears it. Most importantly, however, indices reveal the positioning of signs within a larger system of meaning. Implicit within the indication of the direction one must go are the other directions one must not go. Similarly, the sound of the horn reveals that there must be some active source making the sound. Thus, in recognizing an indexical sign, there are both explicit and implicit connections to other signs. Finally, symbols are signs in which the correlation between the sign itself and the object it represents is attributed rather than contained or apparent within the sign itself (Irvine Semiotics 18-9).

As an interface, the smartpen and smartpaper system is designed to facilitate indexical sign formation. In other words, the microdot grid that the smartpaper contains functions as a larger meaning system in which any added marks or script inherently become signs indexed to specific locations within the grid. Thus, the grid itself acts as a set of indexical signs through which users can develop additional signs of any type, whether iconic, indexical, or symbolic. What is unique, however, is the degree of control and precision over the digital formation of signs that this particular interface gives to users. Rather than relying on the predeveloped icons or symbols of a standardized PC or smart device interface, the user can design their own signs to represent meaning. This allows the user to work with signs that are clearer and more easily interpretable, which in turn may facilitate more powerful forms of cognitive offloading. In other words, if users are able to work with signs that they understand more intuitively, they are relieved of the cognitive burden of learning new signs and then recalling their meaning.

Furthermore, through the modular components of the smartpen system architecture, we can begin to break down the various layers of meaning that exist within each interface (Irvine Powerpoint Slide #69). Like any standard pen and paper interface, the smartpen and smartpaper interface is at its base a medium for abstracting the sign systems of handwritten icons or symbols. Through the addition of the smartpen’s camera and microphone, however, this interface is transformed into a medium for also abstracting the sign systems of sounds and images. This layer of abstraction is formed through the ability to “program” functions into handwritten text, creating a unique form of hyperlinked text. Specifically, the process of capturing and syncing handwritten text and recorded audio activates a specific location on the microdot grid. In doing so, it programs a command function for the smartpen’s camera. As a result, whenever the smartpen “taps” the handwritten text, it initiates the audio playback function of the pen. In addition to these user-created programs, the smartpen contains certain preprogrammed command functions as well. For example, users can control the smartpen’s utility functions by drawing a small cross that serves as a set of arrow controls.
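This tap-to-replay "paper hyperlink" can be sketched as a simple index: while recording, each pen stroke is logged with its grid position and the current audio offset; tapping a position later looks up the nearest stroke and returns its offset. All the names below are illustrative assumptions, since the actual firmware is not public.

```python
# Sketch of linking grid positions to audio offsets so that a tap
# on handwritten text replays the audio recorded while it was written.
# Illustrative only; not Livescribe's actual implementation.

import math

class SessionIndex:
    def __init__(self):
        self.strokes = []  # (x_mm, y_mm, audio_offset_sec)

    def log_stroke(self, x, y, audio_offset):
        """Record a stroke's grid position with the current audio offset."""
        self.strokes.append((x, y, audio_offset))

    def tap(self, x, y):
        """Return the audio offset of the stroke nearest the tap point."""
        if not self.strokes:
            return None
        nearest = min(self.strokes,
                      key=lambda s: math.hypot(s[0] - x, s[1] - y))
        return nearest[2]

idx = SessionIndex()
idx.log_stroke(10.0, 20.0, 3.5)    # a word written 3.5 s into the lecture
idx.log_stroke(40.0, 55.0, 120.0)  # a diagram drawn at the 2-minute mark
assert idx.tap(39.0, 54.0) == 120.0  # tapping near the diagram replays it
```

The sketch makes the semiotic point concrete: the handwritten mark becomes an index, and the grid coordinate is what binds the visual sign to its audible counterpart.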

The processing capabilities of the smartpen, combined with the microdot pattern of the smartpaper grid, elevate the medium of this interface even further, allowing users to add layers of abstraction within the sign systems of time and space. To explain: as the user activates microdots on the grid of the smartpaper, the information that is collected is organized in a sequential manner and “bookmarked” by the system processor (Pettersson & Ericson). Consequently, this sequential organization and bookmarking process creates sign instances that are catalogued by the specific time at which they were captured. Additionally, the microdot pattern contained within the smartpaper grid establishes a spatial sign system for identifying specific instances of location.

Of course, transferring notes from the smartpen and smartpaper interface to the desktop software interface allows for even more layers of meaning abstraction. Particularly notable, however, is the degree to which this transfer blurs the lines of separation between the artifacts of pen and paper and digital word processing. Even though the smartpen and smartpaper interface is digital on its own, the ability to automatically sync notes to the desktop software as users write allows users to see the instant encoding of analog information into its digital form. This provides users with a more transparent understanding of the combined interfaces (Murray 61). Within the desktop software, the user can then apply their tacit understanding of working within a desktop application to manipulate the digitally encoded information further. For example, the user can catalogue and rearrange sheets of digital information and even search through the text using the media-independent searching function of “Control F” (Manovich 122).

The Smartpen Grows Signs

Perhaps even more interesting is the advanced way in which smartpen technology effectively offloads the cognitive meaning-making process. By combining two sets of signs, one visual (the written script or design) and one audible (the recorded sound), a specific correlation between the two signs can be drawn to interpret meaning. In doing so, we observe the individual textual and audible signs merge into each other to form a new, more complex sign for further, higher orders of abstraction (Irvine Grammar 15).

Thus, as Peirce states, “the essential function of a sign is to render inefficient relations efficient” (Irvine Semiotics 16). By combining standard text signs with audible signs, the smartpen allows users to offload the cognitive function of remembering or recalling the contextual information about the written script or designs. This allows users to make more efficient use of various sign types, whether iconic, indexical, or symbolic, in order to take notes. More specifically, this allows the individual to store metainformation, or information about the nature of information (Floridi 31), in the form of an audible object sign. For example, if a user draws a diagram in order to depict a process that is being described aloud, the smartpen allows them to record and sync the process description to the diagram. In such an instance, the user has created a visual sign whose object, or the concept that the visual sign represents, is stored as metainformation that can be accessed by the user in order to aid in interpreting the sign.
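The idea of a visual sign that carries its own object as retrievable metainformation can be made concrete with a small sketch. The structure below is an illustration of the concept only, not anything in Livescribe's software.

```python
# Illustrative only: a handwritten mark whose spoken context travels
# with it as metainformation, available on demand for interpretation.

from dataclasses import dataclass

@dataclass
class SmartNote:
    ink: str        # the visual sign: what was drawn or written
    audio_clip: str # metainformation: the synced spoken explanation

    def interpret(self):
        """Pair the visual sign with its stored object to aid interpretation."""
        return f"{self.ink} (explained by audio: {self.audio_clip})"

diagram = SmartNote("flowchart of the water cycle", "lecture_03_0415.aac")
assert "water cycle" in diagram.interpret()
```

In Peircean terms, the `ink` is the sign's physical manifestation, the recorded audio stands in for its object, and the pairing the user retrieves is the raw material of the interpretant.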

Still, perhaps one of the most useful aspects of the smartpen’s meaning-making capabilities lies in its distributed cognitive functions, or the ability to distribute cognitive activity across human minds (Zhang & Patel). Through the smartpen’s desktop software, users can share information collected by the smartpen through various remediations of its original form, such as PDFs or emails (Livescribe(™)). This not only gives individuals the opportunity to share both written text and audio, but also acts as a form of individual significance parsing. For example, when the smart notes are shared, the recipient of the notes is able to listen to the recorded audio while viewing the text. This allows the recipient to acknowledge not only the interpretants that are developed by the actual audio or text, but also to make inferences about the interpretants that the individual who shared the notes originally acknowledged. Specifically, it allows the recipient to understand whatever hierarchy or organization the original user might have given to various aspects of this information.

This parsing process is in essence a step-by-step revelation of Peirce’s assertion that “all thought is in signs” and therefore “requires a time” in which each sign is actually interpreted, and through interpretation creates a new sign. Essentially, the smartpen creates a map of the “time” in which the audible sign is interpreted into a textual sign, creating a more complex sign that differs from simply the audible reading of a text. Rather than just restating the textual sign out loud by translating it into the audible sign system, the smartpen integrates the meaning of these two sign systems, adding to the contextual meaning of one sign the interpretation and context of the secondary sign (Irvine Semiotics 16).

Refining Smartpen Technology

Despite presenting unique opportunities for sign formation, the smartpen system architecture is not without its own usability issues. Primarily, as the smartpen and smartpaper interface is built on the culturally ingrained conventions of the standard pen and paper interface, the user is able to apply a relatively high degree of tacit knowledge in their mechanical, or analog, usage of the interface (Murray 61). However, to a certain degree this tacit knowledge also inhibits usability by masking the digital affordances of the interface. For example, the smartpen and smartpaper both resemble a standard pen and paper, except for certain design cues, such as the power button on the smartpen or the control panel at the bottom of the page (see image), that indicate their digital capabilities. Beyond the intuitive knowledge of using the pen as an instrument to make marks on the paper, performing the digital functions of the pen requires a certain degree of learned user competency. For example, the user must learn to turn on the pen prior to writing and then must learn the functions of each of the buttons on the smartpaper itself. Even more obscure, however, is the knowledge that the text itself contains additional information that can be accessed by tapping the text.


Image Source: Livescribe Faq: Using The Home Button

Of course, learning the functions of smartpaper involves a relatively simple learning curve. However, in order to ensure that the user properly initiates the digital components of the smartpen and smartpaper interface, the design of the smartpen itself could be improved, namely by incorporating a forcing function, or a design property that uses “the intrinsic properties of the representation to force a specific behavior” (Norman 34), requiring the user to power on the smartpen in order to write. Perhaps such a change would be as simple as adopting the retractable pen design, which forces the user to press a button in order to eject the tip of the ink pen; the smartpen could incorporate this design feature within the process of powering on the smartpen itself.

Furthermore, the body of the smartpen, which is similar in length to a standard pen, is much wider in circumference than a standard pen. The smartpen’s larger circumference is currently an unavoidable and necessary property of its design, as it results from the components that make the smartpen capable of digital mediation: the camera, microphone, processor, memory, etc. The smartpen’s size, of course, may present a relative degree of discomfort for some users in terms of writing with the pen. More importantly, however, it inevitably forces the user to adapt their grip on the smartpen, which may reduce the degree of precision that the user can achieve in controlling their handwriting or designs. While this issue does not necessarily affect the functional usability of the smartpen, it could diminish the pen’s creative and design affordances.

Perhaps the most significant design issue with smartpen technology, however, is connected to its larger system network. While the two interfaces, the smartpen and smartpaper on the one hand and the Echo desktop software on the other, can function independently with their own unique sets of affordances, the modular nature of this architecture is not entirely seamless in terms of integration and efficiency. For example, the process of converting handwritten text into digital text is extensive: information must be input into the notebook, then transferred to the software application, and finally converted within the software. As a result, in cases where the user simply wants digital text, the smartpen and smartpaper interface is drastically less efficient than simply writing within a PC or smart device word processing application.

Long-Term Potential

Despite these design issues, smartpen technology represents unique and valuable opportunities for sign formation and advanced cognitive offloading. In addition to successfully merging select functions of existing technologies in order to create unprecedented forms of mediation, smartpen technology’s greatest potential perhaps lies in the mobility it could allow users. Through the development of various technologies, such as the laptop PC, tablets, and smartphones, we can understand the importance of and demand for portable electronic devices that allow for advanced cognitive functions (Manovich 102-11). Unlike a PC or even a tablet, the smartpen is extremely portable, and its internal storage allows users to store information within the smartpen itself. Furthermore, unlike smartphones, which rely on keyboards that do not facilitate comfortable or practical extensive information input, the smartpen provides a mobile information processing device that is both more ergonomic and more practical for larger inputs of information.

Still, in order for smartpen technology to achieve its potential, at least two things would need to precede its widespread adoption. First, the current cost of the smartpen itself would need to become less prohibitive. As innovation with both smartpen and smartpaper technology continues and competing forms of the smartpen and smartpaper interface become available, the costs for this technology should decrease. More importantly, however, smartpaper, on which the smartpen’s digital capabilities depend, would need to become more standardized and widespread. Of course, this would require widespread acknowledgment of the benefits of smartpen technology in terms of its cognitive offloading capabilities in order to increase demand for this technology.

Thus, assuming that smartpen technology is embraced, it could alter the way that we understand and use the artifacts of pen and paper. The line between analog and digital will be further blurred, granting individuals new perspectives for how they might use these artifacts as tools for meaning-making and, consequently, new forms of cultural creativity and expression. When smartpen technology is viewed through a semiotic perspective of this type, we can understand the unique opportunities that pen and paper as digital technologies represent for creating, preserving, and sharing meaning and knowledge.

Works Referenced

“About Livescribe Dot Paper.” 2016. Concept. Accessed December 13.

Andersen, Peter Bogh. 2001. “What Semiotics Can and Can’t Do for HCI.” Knowledge-Based Systems 14: 419–24.

Baldwin, Carliss Y., and Kim B. Clark. 2000. Design Rules. Vol. 1. The Power of Modularity. Cambridge, Mass: MIT Press.

Barrett, John C. 2013. “The Archaeology of Mind: It’s Not What You Think.” Cambridge Archaeological Journal 23 (1): 1–17.

Bush, Vannevar. 1945. “As We May Think.” The Atlantic, July.

Clark, Andy, and David Chalmers. 1998. “The Extended Mind.” Analysis, Oxford University Press 58 (1): 7–19.

Denning, Peter J., and Tim Bell. n.d. “The Information Paradox.” American Scientist 100: 470–77.

Norman, Donald. 1991. “Cognitive Artifacts.” In Designing Interaction: Psychology at the Human-Computer Interface, 17–38. Cambridge University Press.

“Echo Desktop.” 2016. Accessed December 13.

“Echo Smartpen Tech Specs.” 2016. Accessed December 13.

Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: MIT Press, 2003.

Evans, David. 2011. Introduction to Computing: Explorations in Language, Logic, and Machines. August 19, 2011 edition. CreateSpace Independent Publishing Platform. Creative Commons Open Access.

Floridi, Luciano. 2010. Information: A Very Short Introduction. New York: Oxford University Press.

Irvine, Martin. 2016. Semiotics, Symbolic Cognition, and Technology Key Writings. Compiled and edited with commentary by Martin Irvine. Communication, Culture & Technology Program, Georgetown University.

———. 2016. The Grammar of Meaning Systems: Sign Systems, Symbolic Cognition, and Semiotics. Compiled and Edited with commentary by Martin Irvine. Communication, Culture & Technology Program, Georgetown University.

———. 2016. “Semiotics Foundations, 2: Media, Mediation, Interface, Metamedium.” PowerPoint Presentation.

Jackendoff, Ray. 2009. Foundations of Language: Brain, Meaning, Grammar, Evolution. Reprint. Oxford: Oxford Univ. Press.


Johnson, Jeff. 2014. Designing with The Mind in Mind: Simple Guide to Understanding User Interface Design Guidelines. Second edition. Amsterdam ; Boston: Elsevier, Morgan Kaufmann.

“Livescribe™ Connect™ Makes Handwritten and Spoken Information Easily Shareable with Facebook, Evernote®, Google™ Docs and Email – All from Paper; Livescribe Introduces the Affordable GBP99 2GB Echo Smartpen Starter Pack.” 2011. PR Newswire Europe Including UK Disclose.

“Livescribe Echo Gallery.” 2016. Accessed December 14.

Manovich, Lev. 2013. Software Takes Command: Extending the Language of New Media. International Texts in Critical Media Aesthetics. New York ; London: Bloomsbury.

Marggraff, J., E. Leverett, T.L. Edgecomb, and A.S. Pesic. 2013. Grouping Variable Media Inputs to Reflect a User Session: US 8446297 B2. Google Patents.

Moggridge, Bill. 2007. Designing Interactions. Cambridge, Mass: MIT Press.

Murray, Janet H. 2011. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, Massachusetts: The MIT Press.

Naone, Erica. 2016a. “Computing on Paper.” MIT Technology Review. Accessed December 7.

———. 2016b. “Taking Apart the Livescribe Pulse.” MIT Technology Review. Accessed December 7.

Pettersson, M.P., and P. Ericson. 2007. Coding Pattern: US 7175095 B2. Google Patents.

Sutherland, Ivan. “Sketchpad: A Man-Machine Graphical Communication System.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 109–126. Cambridge, MA: MIT Press, 2003.

Zhang, Jiajie, and Vimla L. Patel. 2006. “Distributed Cognition, Representation, and Affordance.” Pragmatics & Cognition 14 (2): 333–41.


Working Through Ideas…

Hi Dr. Irvine,

I am posting some notes and a general outline for what I am thinking. I expanded a bit on some of the points where I thought that the connections I’m drawing might not be immediately clear. I’ve also included a running bibliography.  Hopefully, you’ll be able to find some method in the madness. Thank you for your willingness to take a look at this. Any feedback you might have is definitely welcome!


Abstract – 

Working idea... The smartpen and smartpaper interface provides a more dynamic interface for end-user tailorability and mobility, and a potential paradigm shift in the hybridization of analog and digital interfaces.

In this paper I will provide an overview of smartpen technology, using the Livescribe Echo as my primary example, in order to trace the conceptual development of this technology as an important progression of symbolic/computational mediation. While this technology is currently in a relatively rudimentary state, I will argue that its refinement and integration could lead to more advanced forms of symbolic abstraction and cognitive offloading. Most importantly, I will make a case for the potential paradigmatic shift that technology of this type may hold for the ways in which we use and understand the culturally ingrained artifacts of pen and paper.

Technical Overview/Definition of Smartpen/paper

  1. Hardware
    1. Pen
    2. Infrared Camera & Audio Recording
    3. Bluetooth
    4. Paper Grid 
  2. Software
    1. Bundle
    2. Apps

Conceptual Overview

  1. Sign formation (Peirce – concepts / Andersen – application)
    1. Iconic significance – the smartpen allows for digital remediation of icons developed/identified by the user.
    2. Indexical significance – the smartpen software allows for unique restructuring/sequencing of handwritten notes. It also allows users to search for words in handwritten notes.
    3. Symbolic significance – the smartpen personalizes notes by digitizing things such as handwriting, which carries symbolic significance.
  2. Mobilization – (Moggridge pg. 191) – The smartpen is more portable than a laptop or tablet. It has built-in storage that allows users to use it when not connected via Bluetooth.
  3. Remediation/Meaning Stacks – (Irvine) – The smartpen syncs handwritten notes to the computer, where they can be transcribed into digital text. It also “instantly” hyperlinks or programs text on paper to perform command functions. For example, you can perform functions on the pen by drawing a small cross to use as arrow indicators. Also, text for which you have recorded audio will begin audio playback when tapped with the point of the pen on the paper!
    1. Signs grow (Irvine/Peirce?)
    2. abstraction? (Evans)
    3. recursion? (Evans)
  4. Sutherland’s sketchpad (Sutherland) – 
  5. Metamedium (Manovich)
    1. Smartpen as a simulation of prior media extended with new properties (Manovich 110)
    2. Hybridization (Manovich)
    3. Softwarization of Pen and Paper
      1. Encoding/Digitization – (Floridi?)
      2. Hypertext (Engelbart)


    1. Issues (Murray/product reviews)
      1. Size of pen
      2. Lack of integration
    2. Suggestions

Longterm Trajectory

  1. Symbolic potential
    1. Personalization and control of images/text
    2. Alan Kay’s vision
  2. Smart Textbooks?
  3. Integration/Tablet PCs – Moggridge (pg 198)
  4. Paradigm Shift for pen and paper

Running Bibliography

  • Andersen, Peter Bogh. 2001. “What Semiotics Can and Can’t Do for HCI.” Knowledge-Based Systems 14: 419–24.
  • Norman, Donald. 1991. “Cognitive Artifacts.” In Designing Interaction: Psychology at the Human-Computer Interface, 17–38. Cambridge University Press.
  • Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: MIT Press, 2003.
  • Evans, David. Introduction to Computing: Explorations in Language, Logic, and Machines. August 19, 2011 edition. CreateSpace Independent Publishing Platform, Creative Commons Open Access:
  • Floridi, Luciano. 2010. Information: A Very Short Introduction. New York: Oxford University Press.
  • Irvine, Martin. 2016. Semiotics, Symbolic Cognition, and Technology Key Writings. Compiled and edited with commentary by Martin Irvine. Communication, Culture & Technology Program, Georgetown University.
  • Irvine, Martin. 2016. “The Museum and Artworks as Interfaces: Metamedia Interfaces from Velázquez to the Google Art Project.” PowerPoint Presentation.
  • Johnson, Jeff. 2014. Designing with The Mind in Mind: Simple Guide to Understanding User Interface Design Guidelines. Second edition. Amsterdam ; Boston: Elsevier, Morgan Kaufmann.
  • “Livescribe™ Connect™ Makes Handwritten and Spoken Information Easily Shareable with Facebook, Evernote®, Google™ Docs and Email – All from Paper; Livescribe Introduces the Affordable GBP99 2GB Echo Smartpen Starter Pack.” 2011. PR Newswire Europe Including UK Disclose.
  • Manovich, Lev. 2013. Software Takes Command: Extending the Language of New Media. International Texts in Critical Media Aesthetics. New York ; London: Bloomsbury.
  • Moggridge, Bill. 2007. Designing Interactions. Cambridge, Mass: MIT Press.
  • Murray, Janet H. 2011. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, Massachusetts: The MIT Press.
  • Sutherland, Ivan. “Sketchpad: A Man-Machine Graphical Communication System.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 109–126. Cambridge, MA: MIT Press, 2003.

R^2 Does Google

Google Arts & Culture Case Study

(Link to the longer version – for our reference, but we’re sure you’re all eager to read more)

The Google Art project is an extremely ambitious undertaking. Like Diderot’s vision to collect, organize and present all of the world’s knowledge in one volume or Malraux’s attempt to chronicle all of art history (Irvine Malraux, 2), Google aims to collect, organize, and present all of the world’s art in one location, externalizing cognition and storing and transmitting knowledge. Google Arts & Culture is helping to further Engelbart’s vision of collective intelligence.

Google’s current interface seems like a placeholder interface for longer-term remediation, glimpses of which can be seen in the left-hand navigation, particularly the experiments section. There is reason to believe that as interest in this project increases, Google collects a full catalog of art with which it can experiment, and the project moves out of the beta phase, the interface will improve. Google does, after all, take an iterative approach to problem solving and product development.

The project is currently billed as a space where providers can store and organize digital versions of their catalogs, and users can explore these vast archives. Its “About” language advertises the space as an opportunity for the “culturally curious” user to “explore cultural treasures in extraordinary detail and easily share with your friends.” The pitch geared toward partner institutions, meanwhile, focuses on curation and cataloging: “We build free tools and technologies for the cultural sector to showcase and share their gems, making them more widely accessible to a global audience.” Google offers the platform as both a collection management system and a storytelling tool.

The feat of partnering with institutions, providing the technology (such as the Google Art Camera), and taking the time needed to capture each work of art should not be underestimated (Proctor). The current state of Google’s interface should be evaluated in the context of this broader, sweeping vision.

The undertaking is unprecedented in the art world, and seemingly elsewhere as well. The other current projects of this scale that immediately come to mind are also Google’s: the Google Book Project, Google’s Public Data Directory, Google’s Evolution of the Web, Google Moon, Google Mars, the Google Self-Driving Car Project, and the whole concept of its search engine. With the exception of Google’s Evolution of the Web project, none of these interfaces are terribly innovative in terms of remediation. The Google Arts & Culture project, in that sense, is the most aesthetically refined, perhaps reflecting the gravity of its content or the desires of the partners from various museums and cultural sites.

Currently, the primary interface is capable of mediating structural or formal levels of meaning (Irvine Week 13 Slide #76) as the “two-dimensional perceptible surfaces/substrate” that function “as symbol tokens and physical representations” (Irvine Week 13 Slide #64). Yet, how much semiotic intention went into the design process is unclear, and the interface itself is a bit of a black box. Who is choosing the slices and curating the content is not always transparent to the user, and Google provides no navigation instructions, so it is difficult for the user to quickly grasp the full functionality and extent of the platform, for instance.


What’s more, this content is presented in a design that is not innovative and does not dramatically remediate artifacts for the digital space (perhaps intentionally, if this is indeed merely a stop on the way to something more). Other museum sites, such as those of MoMA and the Met, have a similar appearance and similarly allow users to explore artifacts while playing up the encyclopedic affordances of the digital space. Comparing these sites to an actual encyclopedia’s website, such as Encyclopaedia Britannica’s, is instructive.


Some of Google’s tools are unique to the digital space and help shift the context, taking the user beyond what is possible in a physical museum. The zoom function, for example, lets you get up close and personal with Monet’s abstract brushstrokes that somehow come together to present a cohesive whole, or Klimt’s selectively applied, vibrant golds, seeing the hand of the artist at work in a way not possible in person.


Meanwhile, the 360-degree camera attempts to mediate the dynamic levels of meaning present in the physical space as well; you can browse and “walk through” museum spaces at your own pace, with the freedom to zoom at will and avoid social distractions and norms.

Yet, these efforts essentially just use new technological developments to play around the edges of existing standards (Murray, Manovich, Proctor).

Despite the current limitations of the primary interface, Google may be on a more innovative long-term remediation track. Specifically, the left-hand navigation of the site contains a number of interesting experiments that could free up more human brain space for meaning-making by offloading significant parts of the cognitive burden onto computing technology.

Still, these experiments seem quite technology- rather than user-centric at the moment. User agency is limited in the sense that Murray describes, potentially because this is just a beta version. Users can explore features, but they are exploring what partner institutions and invisible Google forces have organized and curated. They rely on modern-day versions of Malraux’s “great creator” as guides through the artwork and are not able to easily make their own connections (Irvine Malraux).

If the Google Art Project truly intends to flatten the hierarchy of art and provide unprecedented access to all users, users should have more control over the interface organization and what is done with the art. Of course, technological capabilities play a role as well—what users can do with the platform in part depends on their internet connection and computer hardware. But providing an interface that allows users to manipulate the platform more, and more easily, would also bring it more in line with Alan Kay’s and Douglas Engelbart’s visions of computing systems that can augment human intellect and aid in the learning process while pushing remediation further.

Perhaps this is ultimately what Google has in mind. We’ll have to wait and see.

Works Cited

Irvine, Martin. 2016. “From Samuel Morse to the Google Art Project: Metamedia, and Art Interfaces.”

Irvine, Martin. 2016. “The Museum and Artworks as Interfaces: Metamedia Interfaces from Velázquez to the Google Art Project.” PowerPoint Presentation.

Irvine, Martin. 2016. “André Malraux, La Musée Imaginaire (The Museum Idea) and Interfaces to Art.”

Manovich, Lev. 2013. Software Takes Command. New York: Bloomsbury Academic.

Murray, Janet H. 2011. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, Massachusetts: The MIT Press.

National Gallery of Art. 2011. “A New Look at Samuel F. B. Morse’s Gallery of the Louvre.” Pamphlet from the National Gallery of Art.

Proctor, Nancy. 2011. “The Google Art Project.” Curator: The Museum Journal, March 2.

Semiotics Reflection

I came to CCT with a broad interest in critical theory and media. When I registered for semiotics, I intended to somehow add it to my arsenal of theoretical assumptions. Looking back, I think I assumed it would expand my understanding of semantics, which I intended to use as a theoretical framework for whatever research projects I did while at CCT. Needless to say, that assumption was extremely naïve. Studying semiotics has completely shifted my worldview and given me a tool for framing practically everything I look at or every problem I attempt to understand.

For example, now when I think about the various forms of media I consume – film, music, literature, etc. – I analyze them not only in regard to the concepts and emotions I perceive, but also in regard to the technical structure of the images, sounds, or words they contain. Specifically, I found it fascinating to see how de Saussure’s insight on the arbitrary nature of signs (Irvine 10-1) connects directly to information theory, through which we understand that meaning is not carried by signals through media, but rather understood through our own cultural interpretations (Floridi 20-2).

Furthermore, this understanding, combined with the concept of cognitive offloading, has completely altered the way I view computation. Primarily, we are completely in control of the meaning that we make with these devices, rather than somehow subject to the perceived demands they place on us. In that regard, using these devices to store, transmit, and ultimately use information that can be translated into meaning and abstraction is infinitely valuable in terms of our overall social progress. If we reduce these devices to their operational functions alone, we deny ourselves their ultimate educational and even political significance. We speak through them, create experiences through them, and continue to develop new methods for doing both of these things through them.

While these concepts now seem apparent to me, I had not considered many of them prior to this class. Furthermore, I’m not sure that I would ever have come to understand or accept them without being brought through the process of learning the following theoretical constructs: semiotics, cognitive evolution, information theory, and design theory. With that background and perspective, I am able to trace the development of the technologies we use today through their intended designs and appreciate them for their educational and productive potential without indulging concerns about how they negatively affect our society. The contributions of designers such as Engelbart, Sutherland, and Kay alone make me excited about the technologies we have and the learning capabilities they hold.

For this reason, I now feel more empowered and even want to design not only digital technologies and programs but also analog artifacts, using concepts in the spirit of Alan Kay that emphasize education and usability. Furthermore, when I am presented with arguments that technology is changing our society in a manner we cannot control or predict, I want to counter them: we have brought ourselves to this point of technological development, and rather than resist the tools we have created, we should seek to improve them and create new tools with a humanistic focus such as Janet Murray presents in Inventing the Medium (Murray).

Floridi, Luciano. 2010. Information: A Very Short Introduction. Oxford: Oxford University Press.

Irvine, Martin. 2016. Semiotics, Symbolic Cognition, and Technology Key Writings. Compiled and edited with commentary by Martin Irvine. Communication, Culture & Technology Program, Georgetown University.

Murray, Janet H. 2011. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: The MIT Press.


I Guess I’m No Steve Jobs

When thinking about the history of computer design, I had always assumed that the primary goals were overtly technical, such as facilitating complex mathematics, decoding information, and so on. However, reading this week’s selections, I was surprised to learn how many of the early computer design concepts focused on the ideas of communication, organization, and efficiency. Most of the early conceptual models were trying to establish a centralized tool for storing, consolidating, and retrieving information in a manner that was easy for users to understand and access quickly.

However, what I found even more interesting is the way in which some of the fundamental design concepts, such as selection, recursion, and indexing, serve technical and practical functions at the same time. Last week we learned how, in coding, we can program variables and create indices. From these indices we can select data to build complex abstractions that, through recursive actions, allow us to perform various computational functions. Those actions are often hidden from our view, on the side of the interface with which we don’t always interact. However, between Bush’s description of selection (Bush), Sutherland’s description of recursion (Sutherland 118-9), and Engelbart’s description of indexing (Engelbart 99-100), I was able to see how we actually use these same concepts for organizing and manipulating data on the side of the interface with which we regularly interact.

As for continuing and improving computer design, the first thing that came to my mind is the mouse. On one hand, it forces users to perform an interaction that is relatively unnatural compared to the ways humans typically indicate things; moving a mouse is more like wiping down a surface than pointing at something. It is only through the visual interface on which the mouse’s corresponding arrow is displayed that we are able to understand the significance of the motion used to move the mouse. Our reliance on this correspondence is most clearly revealed in the moments when the arrow doesn’t respond to the user’s movement, causing them to do things such as pick up the mouse, turn it over, or click it repeatedly.

Rather than performing the odd motions that the mouse requires, I began to consider the possibility of using touchscreen technology, as some tablet PCs have already begun to do, since using one’s finger to manipulate objects displayed on a digital interface provides stronger affordance to users (Irvine 1-2). Specifically, I wondered: if touchscreens have already become the norm for cellular phone and tablet interface design, why have we been so slow to standardize this technology for PCs and render the mouse obsolete?

As far as I can tell, it seems to be a matter of precision, application development, and ergonomics. To begin, even though the overall constraints of using one’s finger versus a mouse to perform actions on a PC interface seem the same (Irvine 2), the pointer to which a mouse corresponds allows more precision than our fingertips can. In particular, because the mouse’s pointer functions as an internal component of the interface, it can be significantly and consistently more precise than our fingers, which are external tools acting on the interface. This could likely be remedied; however, in addition to improving the precision of touchscreen technology, applications would also require redesigns in order to facilitate touch activation more easily. A primary issue here is that there is no standardized finger size, while the pointer for a mouse is standardized. As a result, the dimensions of application buttons would need to be redesigned to accommodate users with large fingers, which might mean that the look of a PC interface would need to change significantly.

There is also the possibility that a mouse is simply more comfortable to use because it doesn’t require the user’s arm to be elevated to reach the screen. Rather, the user can rest his hand on the desk while using the mouse, which also stabilizes the hand. When I think about the design of the mouse in this sense, I’m not sure that it should become obsolete; touchscreen indication does not seem to be the ideal design evolution for the PC.

So how could interaction with the PC interface be improved? The next thought that comes to mind is voice commands. While I’m personally intrigued by the possibilities that advanced speech recognition holds for word processing, the potential it holds for interaction with the general PC interface seems more complicated. In particular, while it would surely relieve the user of performing functions with his hands, voice command control could impose an unwanted burden of learning when interacting with the computer interface. When using computer programs, we take for granted the large amount of data with which we interact but don’t really understand. For example, I know what all of the buttons in my word processor’s toolbar do, but I am not able to tell you what most of them are called. However, if I were to rely on voice commands to utilize any of these functions, I would be forced to learn their names in order to communicate them to the computer. That may not be too daunting within one program, but think of all the different and new programs I may want to use and the changes that may result from future updates. Utilizing an indicating tool, such as the mouse, allows me to understand and interact with these functions rather seamlessly without storing the additional information of their names.

Again, this issue could be solved by redesigning applications and internet browsers to facilitate voice commands. However, even if such redesigns were desirable, there are numerous social implications of using voice commands for PCs. In particular, the office environment would become incredibly chaotic without some sort of barriers to the various voice commands floating throughout the office. In an era in which we are finally moving away from the cubicle, it seems that such a development would actually inhibit the advances in business communication and collaboration that have recently been acknowledged as beneficial. In this sense, the overall efficiency gained through voice command features for PCs might be less than the efficiency gained through open work environments.

All of that is to say that I’m not sure how the PC interface could be redesigned, unless we could develop a way to truly achieve “Man-Computer Symbiosis” (Licklider) and control computers through some sort of unspoken cognitive functions. Or, perhaps, there’s a way to interact with the computer via visual cues, such as installing a sensor on the screen that can track the focus of one eye and then respond to blinks in the same way we use clicks. However, that would probably require a large number of additional buttons within programs to activate certain functions (I’m thinking of things like highlighting text with a cursor), so I don’t know whether it would be more efficient. And while it may free users from developing carpal tunnel, I’m not sure most people would trade that for a twitch!

Bush, Vannevar. 1945. “As We May Think.” The Atlantic, July.

Engelbart, Douglas. 1962. “Augmenting Human Intellect: A Conceptual Framework.” New Media Reader. Wardrip-Fruin, Noah, Nick Montfort, ed. 93–108. Cambridge, MA: The MIT Press, 2003.

Irvine, Martin. 2016. “Introduction to Affordances and Interfaces: The Semiotic Foundations of Meanings and Actions with Cognitive Artefacts”.

Licklider, J.C.R. 1960. “Man-Computer Symbiosis”. New Media Reader. Wardrip-Fruin, Noah, Nick Montfort, ed.. 74–82. Cambridge, MA: The MIT Press, 2003.

Sutherland, Ivan. 1962. “Sketchpad: A Man-Machine Graphical Communication System.” New Media Reader. Wardrip-Fruin, Noah, Nick Montfort, ed.. 109–26. Cambridge, MA: The MIT Press, 2003.

Coding and Computing

With my background in operations management, it didn’t take much for Jeannette Wing to convince me of how pervasive “computational thinking” is within my field. For example, Wing’s descriptions of computational thinking below could just as accurately describe operations management:

  • Computational thinking is thinking in terms of prevention, protection, and recovery from worst-case scenarios through redundancy, damage containment, and error correction.
  • Computational thinking is using heuristic reasoning to discover a solution. It is planning, learning, and scheduling in the presence of uncertainty.
  • Computational thinking involves solving problems, designing systems, and understanding human behavior… (Wing 34)

Of course, these excerpts describe very broad concepts. On a practical level, there are dozens of operations platforms that can be tailored to meet individual organizations’ needs by performing the operations these concepts describe. That said, even though the companies I worked for used such platforms, the tool we most often used was Excel.

Working with the Python tutorial reminded me of using Excel, except with a higher degree of tailorability and functionality. In Excel, the standard grid with lettered columns and numbered rows allows users to define cells in the same way that they can define variables in Python. Furthermore, Excel is equipped with a broad range of functions that can be used to do anything from organizing data sets to generating solutions to complex algorithms. In Python, it seems that you can program even more functions to change the nature of data and generate abstractions based on complex calculations. Based on this experience, it seems that while Excel offers various aids, such as auto-filled functions, that help users program computing abstractions, it is limited by the spatial constraints of the grid on which it functions. Python does not have similar restrictions, giving the user a much greater degree of control and customization. The only catch, in my mind, is that users have to learn and somehow remember the language in which Python programs are written, with all of its syntax rules.
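That comparison can be sketched in a few lines of Python (the variable names and figures here are my own invented example, not from the tutorial): where Excel ties values to grid cells like A1 and A2, Python ties them to freely chosen names, and a formula becomes a reusable function.

```python
# Values that would sit in grid cells in Excel live in named
# variables in Python, with no spatial grid constraining them.
unit_price = 4.50   # like cell A1
quantity = 12       # like cell A2
tax_rate = 0.08     # like cell A3

# The Excel formula =A1*A2*(1+A3), rewritten as a reusable function
def total_cost(price, qty, tax):
    return price * qty * (1 + tax)

order_total = total_cost(unit_price, quantity, tax_rate)
print(round(order_total, 2))
```

Unlike the Excel formula, which is pinned to specific cells, the function can be applied to any values anywhere in a program.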

While that task seems somewhat daunting to me, one thing that I did find more user-friendly about Python was the responsiveness of its error reporting. Initially, when reading about compilers and interpreters in Great Principles of Computing (Denning & Martell 92), I had trouble understanding the differences between the two. However, using Python I was able to understand how a compiler translates code once the user has completed it, rather than continuously translating it through an interpreter. Unlike Excel, Python was relatively specific in highlighting the exact error within the code that I had written. Excel, on the other hand, will either compute the procedure you’ve designed or simply tell you that it didn’t work, forcing the user to retrace his steps in order to find the error. That said, Excel does allow the user to toggle between automatic and manual recalculation, which works much like switching between an interpreter and a compiler. When building a system with multiple calculations over large amounts of data, you want the compiler-like mode because, as explained in Introduction to Computing, an interpreter, which is constantly translating, will execute functions more slowly than a compiler, which translates after an entire set of functions has been programmed (Evans 38-9).
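As an illustration of that specificity (my own sketch, not an example from the readings), Python compiles source code before running it and reports the exact line of a syntax error rather than a generic failure:

```python
# Python's compile step pinpoints the line of a syntax error
# instead of just reporting that the code "didn't work".
source = "total = (1 + 2"   # missing closing parenthesis
error_line = None
try:
    compile(source, "<example>", "exec")
except SyntaxError as err:
    error_line = err.lineno
print("Syntax error on line", error_line)
```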

On the whole, between building on my past experiences with Excel and the readings from this week, I thought that the Python tutorial was a helpful way of understanding computing concepts. However, one concept whose direct connection to coding I’m not sure I understood was stacking. I think I understand the concept in general as described in Introduction to Computing (Evans 24-5), but I don’t see how it works with coding. I would be interested in exploring this concept further by discussing it in class!
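If the stacking Evans describes is the call stack (my assumption), a short recursive function makes the connection to coding visible: each call is pushed onto the stack, and calls are popped off in reverse order as their results return.

```python
# Each recursive call pushes a frame onto the call stack; frames
# pop in reverse (last in, first out) as results are returned.
call_log = []

def factorial(n):
    call_log.append(f"push factorial({n})")
    result = 1 if n <= 1 else n * factorial(n - 1)
    call_log.append(f"pop factorial({n}) = {result}")
    return result

answer = factorial(3)
print(answer)
print(call_log)
```

The log shows factorial(3), factorial(2), and factorial(1) stacking up before any of them can return, which is the last-in, first-out behavior the term refers to.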

Peter J. Denning and Craig H. Martell, Great Principles of Computing. 2015. Cambridge, Massachusetts: The MIT Press.

David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines. Oct. 2011 edition. CreateSpace Independent Publishing Platform; Creative Commons Open Access.

Jeannette Wing, “Computational Thinking.” Communications of the ACM 49, no. 3 (March 2006): 33–35.


Digital Digital Get Down

In my mind, the digital encoding of the physical components of text makes sense, but I am struggling to understand how both image and sound can be digitally encoded. As a result, I am going to attempt to explain the process of digitally encoding text and hope that, in doing so, I can reason out the process of encoding of image and sound.

A 7-bit code can produce up to 2^7 = 128 distinct characters, since the largest 7-bit value is 64 + 32 + 16 + 8 + 4 + 2 + 1 = 127 (and zero counts as a value, too). Of course, that is enough numbers to cover all of the letters in the English language (both capital and lowercase), essential punctuation, and then some. Thus, each letter of the alphabet can be represented by a pattern of binary digits, which in turn drives the electrical signals that encode it as a digital signal. These individual signals can be combined and sequenced to produce strings of text that we can then interpret and understand via a digital interface.
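A quick sketch of this idea (my own example, using Python’s built-in ord(), chr(), and format(), which follow the standard ASCII numbering):

```python
# Each character maps to a number, and that number can be written as a
# 7-bit binary pattern -- the pattern a machine actually stores and sends.
for ch in "Hi!":
    code = ord(ch)               # character -> number (ASCII)
    bits = format(code, "07b")   # number -> 7-bit binary string
    print(ch, code, bits)

# The process reverses cleanly: bits -> number -> character.
print(chr(int("1001000", 2)))  # H
```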

In theory, I would assume that this process of assigning numbers to text can be replicated with images and sounds. However, I do not understand how the actual image units or actual sounds units are assigned different numbers. I suppose I could see how different shades of color can be assigned numbers that then are activated on the binary level. But that would mean that a computer or digital camera would have to be pre-programed to recognize an enormous amount of shades in color. Sound, in that sense, seems more plausible. Or at least musical notes, because they are defined and structured. – Wait, is this really how it works? Or am I completely off on this? The more I think about it, the more I can see how it could be, but it really blows my mind. Looking at Figure 3 from “The Information Paradox” (Denning 472), I can now see how a CD is just a sequence of bits that can be read to activate certain sounds, texts or images. …But it’s still kinda crazy!
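My hunch above can be sketched numerically. This is my own simplified illustration, not how any particular camera or CD format works in detail: an image can be stored as a grid of numeric color values, and a sound as a sequence of numeric amplitude measurements taken at regular intervals.

```python
import math

# Image: each pixel is a triple of numbers giving its red, green, and
# blue intensity (0-255). A camera doesn't "recognize" shades; it just
# measures light and records the numbers.
image = [
    [(255, 0, 0), (0, 255, 0)],      # a red pixel and a green pixel
    [(0, 0, 255), (255, 255, 255)],  # a blue pixel and a white pixel
]
print(image[0][0])  # (255, 0, 0)

# Sound: sample a wave's amplitude at regular intervals and round each
# measurement to a whole number. A CD does this 44,100 times per second;
# here we take just 8 samples of one cycle of a sine wave.
samples = [round(100 * math.sin(2 * math.pi * n / 8)) for n in range(8)]
print(samples)
```

So neither the camera nor the CD player needs to "know" anything in advance beyond the numbering scheme itself; everything else is measurement.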

As for meaning, at this point, it seems only natural that signs and messages don’t contain meaning, but rather we interpret and apply meaning through our prior knowledge and the context in which we experience the sign or message. In fact, I feel kind of absurd for not completely understanding this earlier in the course. That said, the two concepts from this week’s readings that really sealed the deal for me were Rocchi’s model of information and relativity (Denning 477) and Stuart Hall’s discussion of “profoundly naturalised” codes and signs (Hall 511-3). Rocchi’s model, which explains meaning as the interpreted association between sign and referent, alongside Hall’s explanation of certain interpretations as deeply ingrained cultural associations, helped answer my persistent question regarding how individuals can interpret the same meaning from an object if artifacts and signs themselves contain no meaning. What I was viewing before as meaning that resulted from transcendental truth, I can see now is simply a continuously reinforced association. That said, I still believe in universal truths, but I now see how those truths exist outside of the signs that we culturally assign them.

With that in mind, when we receive text messages there is no meaning in them. Rather, the messages are represented in a meaningful way, which allows us to interpret and understand them. Specifically, text messages rely heavily on the context and preexisting relationship between the individuals who send and receive them. For that reason, it can be relatively difficult to convey emotional or even ironic meanings via text message. To do so successfully, the sender and receiver must each understand, to a certain degree, the other’s mannerisms, speech tendencies and general personality in order to motivate meaning in the words and symbols they send. Even emojis become more meaningful if the relationship between the sender and receiver is more intimate.

In that sense, it is interesting to think about people’s online profiles as a map of meaning motivation. What are the different photographs and digital artifacts that they choose to represent themselves, and what meanings do individuals interpret through them? What are those meanings before you meet a person? How do they change once you get to know that person? I think that’s a pretty typical example of how we already understand this concept without knowing we understood it.

Denning, Peter and Tim Bell. “The Information Paradox.” From American Scientist, 100, Nov-Dec. 2012.

Hall, Stuart. “Encoding, Decoding.” In The Cultural Studies Reader, edited by Simon During, 507-17. London; New York: Routledge, 1993.

Dreams Complicate Things

Rebecca Tantillo

So I’d like to start off by saying that this week’s readings were my favorite thus far. But, while I found them extremely interesting, I am also pretty confused. So I hope anyone who reads this doesn’t get more confused.  That said, here we go!

An analogue clock is an iconic sinsign that displays a discrete qualitative account of time. The relationship of the sign to the interpretant on this level would be rhematic, because it represents what we understand as the standard assessment of time. The hands on a clock also make it an indexical sinsign that visually indicates a specific time of day by directing the thoughts of an individual toward the specific numbers that, in combination with each other, designate the time. In this sense, the clock’s interpreted relationship is a dicisign, because it points to the actual existence of a certain time of day. Lastly, based on consistent cultural reinforcement, the clock as a symbolic legisign has come to represent the idea of time in general. Of course, this symbolic understanding is the synthesis of various cultural understandings of what time is and how it is commonly represented, thus making this level of the clock’s interpretation an argument (Parmentier 16-8). I am certain that there are additional ways that an analogue clock acts as a sign, but I will use these interpretations at least as my starting point.

Understanding an analogue clock as a sign in that manner is relatively straightforward. I say relatively, only because I’m now going to attempt to interpret a mediated representation of a clock without hands from a dream, such as appears in the film Wild Strawberries by Ingmar Bergman. Since this is a rather ambitious undertaking, I am going to limit this analysis to the ways in which this specific representation can be understood as an icon, index and symbol. Please see the following clip for reference: Wild Strawberries – Video retrieved from YouTube (JuhaOutuinen).

Any sign must have its root in iconicity, because iconic representation forms the “ground” of any significance in a sign. Furthermore, according to the linguist John Lyons, “iconicity is ‘always dependent upon properties of the medium in which the form is manifest'” (Chandler 41). With these two ideas in mind, we must first understand how the medium of film works as an icon and, consequently, a sign. On the most basic level, the images and sounds that make up film are icons because they present “perceived resemblances” of a reality that the director intends to show (Chandler 40). These resemblances, of course, are false to a certain degree, because whatever is outside of the frame is not represented. However, the things within the frame are physically captured and relayed to the viewer, making film an indexical representation (Chandler 43). Alone, these images and sounds are iconic and indexical, but combined with the context of the film and any cultural or relevant knowledge the viewer may have to support their understanding of the film, they are symbolic.

Next, it is important to understand the representation of a dream. Because dreaming is an unconscious, unobservable state, any representation of a dream must be considered symbolic. There is no grounding to the concept of a dream, just as there is no physical manifestation of a dream outside of actually dreaming itself. Thus, when Bergman presents the dream sequence in the clip shown above, we must symbolically “buy in” to his iconic and indexical representation of a dream. Bergman helps lead us to do so by presenting the dream sequence immediately after a shot of the man featured in the dream lying in bed. He then uses indexical signifiers of stark light, unnerving silence, empty streets, surreal encounters, etc., leading the viewer to believe that the images are not meant to depict a reality, but rather a non-reality. So while, ultimately, Bergman’s representation of a dream may not match the viewer’s exact vision of a dream, the viewer is able to understand Bergman’s representation through connotative “perceived resemblances”. Once we understand how both film and representations of dreams can act as signs, we can then interpret how an analogue clock within a dream represented in a film may be understood.

To understand a faceless clock, we must acknowledge the clock as both an icon, which represents the quantification of time, and an index, which, when presented without hands, motivates the interpreter to view and consider all of the numbers represented on the face, rather than focusing on a combination indicated by the hands. The combination of the faceless clock’s iconic and indexical significance leads the viewer to search for a symbolic meaning represented by the clock, such as: there is no time, time cannot be counted, or, perhaps, someone’s time is up. The context of the film and what ultimately occurs following the dream sequence thus becomes infinitely important in understanding the meaning of the clock without hands that the professor sees in the dream. As a result, we can understand the infinite development of signs as symbols, as Dr. Irvine describes below:

A sign isn’t a static object, and meanings aren’t isolated events: meanings from symbolic activity are part of a continuum of thought that link an individual’s cognition to shared concepts, to the experienced world, and to others. Meaning is an open triadic process with one element of the structure, the interpretant, always unfolding new meaning. (Irvine 27)


Daniel Chandler, Semiotics: The Basics. 2nd ed. New York, NY: Routledge, 2007. Excerpts.

JuhaOutuinen (2011). Smultronstället – Ingmar Bergman – Wild Strawberries. Retrieved September 28, 2016.

Martin Irvine, “Introduction to Meaning Systems and Cognitive Semiotics”.

Richard J. Parmentier, Signs in Society: Studies in Semiotic Anthropology. Bloomington, IN: Indiana University Press, 1994. Excerpts.


Cuisine as a Language

Rebecca Tantillo

Based on the discussion of what constitutes a language from this week’s readings, I would like to propose that cuisine is a language. According to the Merriam-Webster Dictionary, cuisine is “a style of cooking, or food that is cooked in a particular way”. Based on this definition, you can see that cuisine, like a language, implies a set of rules or procedures that define its constituents. You could even alter this definition to apply to a spoken language by saying that a language is a style of speaking, or words that are spoken in a particular way. In addition, just as language is an essential method of human “meaning-making” (Irvine 3) using sounds, words, sentences, etc., cuisine is a method of “meaning-making,” albeit a different one (which will become clearer in my discussion below), using ingredients, techniques, and meals.

To understand the ways in which cuisine could be considered a language, I will attempt to classify the different levels of meaning that make up cuisine using the framework of Jackendoff’s “parallel architecture” of spoken language (Irvine 10). Of course, cuisine is not fundamentally based in sounds or phonetics the way spoken language is, but it is based on tastes and smells. Everything in the natural world has a particular taste and smell based on its chemical make-up. Whether we as humans perceive these tastes and smells, or like them, is subjective and directly related to our perception of the physical qualities that make up the food we consume (Kennedy). It is important to note this fact because the subjective appreciation of tastes and smells, in conjunction with the assessment of whether something is safe and physically possible to eat, is what ultimately determines the adoption of materials into a cuisine. Of course, the availability of resources and economic factors also play a role, but those are less relevant to this particular discussion.

Thus, that which is both edible and considered to have an appealing taste and smell can be classified as the natural ingredients that make up a cuisine. These natural ingredients are the minimal food values in cuisine, similar to the way that phonemes are the minimal sound values in spoken language (Irvine 5). In my mind, this particular part of the structure presents an incongruity in the comparison. Specifically, there are many natural ingredients that can be consumed alone, such as apples or even certain raw meats and seafood. Thus, I wonder whether certain phonemes, such as a short a, which is the same sound as the expression “ah”, function in the same way? Of course, the expression “ah” is not a complete sentence, but within context, it could express a complete thought. Regardless, the next phase, or the morphological level, occurs when natural ingredients are combined or altered to form base ingredients, such as wheat that is ground into flour or vanilla flavoring that is extracted from a vanilla bean. Natural ingredients can also be combined to make compound ingredients, similar to the way that compound words are made, such as when sugar and water are combined to make simple syrup or flour and water are combined to make dough. Following this logic, I suppose we could even say that grocery stores, or simply an inventory of the ingredients, including both agriculture and livestock, found in a specific region, serve as a lexicon for cuisine.

Next, the syntactic level of cuisine includes the various techniques, guidelines, and recipes that are used to prepare food. For example, to make a soufflé there are specific guidelines that must be followed and techniques that must be used in order to achieve the desired product. There are other more discretionary guidelines, such as the Italian rule that seafood should never be garnished with cheese (meaning no Parmigiano on your scampi :/ ). However, just as in spoken language, syntax guidelines are not always followed. These rules of syntax and techniques can be applied to the semantic level of cuisine, which is made up of specific dishes and recipes. These dishes can be understood outside of the context of a meal based on their components. For example, semantic elements of cuisine include classifications of dishes such as appetizers, main courses, sides, desserts, etc. These dishes on their own can be identified by the ingredients that they contain, and based on those ingredients we can understand how they may be combined to play a role in a larger meal. For example, we typically identify the main course by how substantial it is, oftentimes implying the incorporation of a protein or starch. Of course, how these determinations are made is ultimately culturally specific, but nonetheless, assuming a cuisine reaches a certain level of development, I imagine that some sort of classification on this level would occur.

The pragmatics of cuisine involves the combining of various dishes into a meal. In this sense, dishes act as codes and behavior cues, such as the idea that a main course is constituted by a protein, a starch, and a vegetable. Or, the idea that appetizers are eaten before the main course, while dessert is eaten after. Again, like the pragmatics of spoken language, the pragmatics of cuisine are determined intersubjectively (Irvine 6). Another element of cuisine pragmatics is presentation, which enhances the meaning and overall significance of a meal by eliciting knowledge and associations from the mind of the diner. Most often, diners make pragmatic connections to the specific cultures or geographical locations of cuisines, such as Mexican or Chinese food. Diners also commonly make emotional and sentimental associations, such as recalling a particular time or experience connected to a meal. These are just two general examples of the types of knowledge and associations that cuisine can elicit, but on a personal level the possibilities are endless.

Lastly, the discourse of cuisine is its tradition and continuing legacy. Cuisines are built on a rich history of expression using the edible materials that are readily available to us. Like a spoken language, cuisine is handed down culturally and continually built upon by the individuals who recognize it and use it daily. These traditions can be recorded and preserved through recipes and cookbooks, but they are often passed along through verbal communication and learning. Perhaps most relevant, though, is the extent to which cuisine gives humans “vast expressive power” (Pinker) by building on their instinct to eat and the knowledge they have acquired to create new expressions within a cuisine.

Having said that, and reflecting back on last week’s readings, I would almost dare to hypothesize that food, or rather the instinct to eat, could have played some role as the “Master Switch” for the development of the cognitive reasoning that has given humans the “Faculty of Language” (Irvine 2). I won’t go into that here, because based on my knowledge of the evolutionary process, I am not capable of doing much more than speculating on the topic. That said, I’m probably going to ask about it in class!


“Cuisine.” Merriam-Webster Dictionary, 2016. (Accessed 20 September 2016).

Martin Irvine, Introduction to Linguistics and Symbolic Systems: Key Concepts. 2015.

C. Rose Kennedy. The Flavor Rundown: Natural Vs. Artificial Flavors. Harvard University: Science in the News, 2015.

Steven Pinker. Linguistics as a Window to Understanding the Brain. 2012.

Andrew Radford, et al. Linguistics: An Introduction. 2nd ed. Cambridge, UK: Cambridge University Press, 2009. Excerpts.



Based on this week’s readings, it seems that there is essentially one major division amongst the various hypotheses seeking to explain how the capacity for symbolic processing developed within the human species. On one side of this division are hypotheses claiming that the capacity for symbolic cognition developed as a result of some degree of advanced brain development distinctive to the human species. On the other side are hypotheses asserting that the capacity for symbolic cognition developed from interactions with external and social stimuli and, therefore, developed in conjunction with various cultural phenomena, primarily communication (Barrett, 14-5).

Each set of hypotheses carries its own set of significant implications for understanding how humans understand and create symbolic meaning.  Those that assert that brain development preceded symbolic cognition, such as Steven Mithen’s concept of the general intelligence facility (Barrett, 3), rely on the theory that human symbolic cognition developed out of the necessity of representation or interpretation of individual conceptions of reality.  As a result, theories of this type are based in the idea that evolution provided humans with a certain degree of “instinctual knowledge,” (Deacon, 26) which implies that the human species is ultimately tied to some sort of behavioral determinism.

On the other hand, theories suggesting that symbolic cognitive abilities developed in conjunction with processes of communication, such as those presented by both Deacon and Donald, establish human symbolic activity as a primarily social and material process of learning (Donald). As Deacon suggests, the process of evolution is primarily one of learning and remembering (Deacon, 26), rather than instinctively knowing and representing. Thus, amongst these hypotheses, the concept of “instinctual knowledge” is unfounded, along with ideas of behavioral determinism. Consequently, the process of human evolution can be viewed as a social and relatively arbitrary phenomenon.

On a broader scale, the differences between these two groups of hypotheses forced me to consider the ultimate “randomness” or unpredictability of life. Oddly enough, in terms of technology, such contemplation actually provided me with a sense of control and reinforced the understanding that technological advances are primarily tools for our benefit. This reassurance stems from the thought that, within the randomness of events, we create a specific technology to address and facilitate our needs based on the situation that arises. While that observation may seem obvious, a more deterministic view of technological advancement ultimately leaves me feeling subjected to these developments and the ways in which they affect society. In other words, I can now see, at least in part, where my own inclinations toward Luddism originate.

Furthermore, this shift in thinking also makes me reconsider the way I understand general processes of technological design. Rather than resisting technological developments based on perceived or speculative negative social implications, it refocuses my attention onto the original societal “necessity” for which a technology may have been designed. Specifically, that means asking questions such as: how do certain technologies advance our symbolic processing, is there a specific symbolic need that a certain technology fulfills, and how will we as a species evolve to utilize these technologies? Of course, I’m not claiming that we should not consider the various ramifications that specific technological advancements may have on society, but contemplating technological advancement in terms of its symbolic significance and potential at least provides me with a new perspective from which I might ultimately draw more thoughtful conclusions.

Viewed in this light and in conjunction with Deacon’s discussion of how symbolic association is derived from indices and icons, I can now understand how any technological advancement, whether historic or recent, major or minor, is infinitely important and symbolically significant.  However interesting it might be to look back and track the course of technological development we have achieved thus far, it’s even more interesting to think of the enormous index of technology that we now have and the endless symbolic opportunities that it presents.  Basically, any single advancement is fair game for endless symbolic applications! (Deacon, 79-83)


John C. Barrett, “The Archaeology of Mind: It’s Not What You Think.” Cambridge Archaeological Journal 23, no. 01 (2013): 1-17.

Terrence W. Deacon, The Symbolic Species: The Co-evolution of Language and the Brain. New York, NY: W. W. Norton & Company, 1998.

Merlin Donald, “Evolutionary Origins of the Social Brain,” from Social Brain Matters: Stances on the Neurobiology of Social Cognition, ed. Oscar Vilarroya, et al. Amsterdam: Rodopi, 2007.