In this paper, I provide an overview of smartpen technology, using the Livescribe Echo Smartpen as my primary example, in order to trace the conceptual development of this technology as a tool for sign formation and abstraction. Using the conceptual frameworks of C.S. Peirce, Andy Clark, and David Chalmers, I demonstrate the ways in which the smartpen and smartpaper interface enables users to perform meaning-making through advanced forms of cognitive offloading. Ultimately, analyzing smartpen technology through these frameworks provides a deeper understanding of the unique semiotic functions afforded by the development of pen and paper into digital technologies.
The inscription of knowledge for functions ranging from personal use to general communication to overall cultural progression has long been achieved through the artifacts of pen and paper. As an interface, these tools allow for useful extensions of human cognitive functions, particularly in acting as external storage devices for recalling and sharing meaning. However, digital and computational technologies, such as the PC, tablet, or smartphone, which facilitate dynamic digital information processing, have in many ways surpassed the pen as tools for higher-level meaning-making and abstraction. Smartpen technology, by contrast, integrates digital and computational capabilities into the pen's own interface, giving users the ability to transform handwritten text into digital text for dynamic information processing. Thus, through an exploration of the smartpen and its functions, I will demonstrate the ways in which smartpen technology provides a technological interface for cognitive offloading and meaning-making that advances the semiotic capabilities of the culturally ingrained artifacts of pen and paper.
Disambiguation of Terms
There are multiple models of smartpens currently available; however, the Livescribe Echo Smartpen incorporates the most conceptually advanced technology at this time, so I will use it as my primary example in this paper. The Livescribe Echo Smartpen is a computing system that, combined with “dot-enabled paper” and the Echo Desktop software, which can be run on any PC or smart device, comprises a broader interface system architecture. For the purposes of this paper, I will refer to these items and the concepts they represent with the following terms: “smartpen,” “smartpaper,” and “desktop software.”
Image Source: Livescribe Echo Gallery
De-Blackboxing Smartpen Technology
Like any ballpoint pen, the smartpen is just over six inches in length and contains a ballpoint ink cartridge. However, this is where the similarities between the two end. What the smartpen adds is an infrared camera, an OLED (organic light emitting diode) display, embedded microphone and speaker, audio jack, USB port, and, of course, a power button. These components are supported by internal flash memory and an ARM 9 processor and powered by a rechargeable lithium battery (Echo Smartpen Tech Specs). While the smartpen can be used to write standard text on any surface, its digital recording and computing functions are enabled when used in conjunction with smartpaper, which to the naked eye appears as a standard sheet of paper, but in actuality contains an intricate pattern of “microdots” across its surface. Finally, the system architecture of the smartpen is completed by a companion software program, which can be run on any PC or smart device (About Livescribe Dot Paper).
Image Source: Livescribe Echo Tech Specs
Within this system architecture and with these interface components, the smartpen allows the user to do much more than simply write. When powered on, the infrared camera allows the user to instantly record handwritten notes, scripts, or designs taken on smartpaper. The script or designs captured by this camera can be automatically synched with audio captured by the smartpen’s integrated microphone, which can then be played back through either the embedded speaker or through headphones attached to the standard audio jack (Echo Smartpen Tech Specs).
As mentioned, the smartpen works in conjunction with smartpaper, which comes in a variety of aesthetic and functional styles, but is unique in that its surface is completely covered with “microdots” (100 micrometers in diameter) positioned across an invisible grid (About Livescribe Dot Paper). The positioning of these microdots, approximately 0.3 mm between each dot, forms a larger pattern (See image). Each microdot represents a specific location within the pattern that is “activated” when contacted by the ink in the smartpen, allowing the infrared camera within the pen to capture and record exact patterns. This pattern recognition can occur simultaneously while recording audio in order to sync the recorded audio with exact points of text.
Image Source: About Livescribe Dot Paper
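The decoding step described above can be sketched in miniature. The actual Anoto-style pattern Livescribe uses is proprietary and far more elaborate, so the encoding below is a toy assumption for illustration only: each microdot is nudged off its grid intersection in one of four directions, carrying two bits, and a small window of dots read by the infrared camera resolves to an absolute page position.

```python
# Toy sketch of dot-pattern position decoding (hypothetical encoding,
# not Livescribe's actual scheme). Each dot's displacement direction
# encodes two bits; a window of dots encodes an absolute grid position.

# Map each displacement direction to a 2-bit value.
DIRECTION_BITS = {"up": 0, "right": 1, "down": 2, "left": 3}

def decode_window(dots):
    """Interpret a row-major window of dot displacements as a base-4
    number encoding an absolute grid position (toy scheme)."""
    value = 0
    for direction in dots:
        value = value * 4 + DIRECTION_BITS[direction]
    return value

def window_to_xy(value, page_width_dots=1000):
    """Split the decoded value into (x, y) grid coordinates."""
    return value % page_width_dots, value // page_width_dots

# Example: a six-dot window read by the camera resolves to one position.
window = ["right", "up", "down", "left", "up", "right"]
position = window_to_xy(decode_window(window))
```

The essential point survives the simplification: because every window of dots is unique across the page, the camera never needs to see the whole sheet to know exactly where the pen tip is.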
Of course, the recognition of a pattern on smartpaper and recorded audio would not be possible without the ability to digitally encode the content (Denning & Bell 470). The smartpen contains 2 GB of flash memory and an ARM 9 processor, powered by a rechargeable lithium battery. In addition to storing user-produced content, the smartpen’s memory houses both the software for encoding data, such as content recognition algorithms (Marggraff et al.), and various other bundled smartpen applications, such as a scientific and simple calculator, audio replay commands, translation software, and so on.
Image Source: Livescribe Echo Gallery
In terms of connectivity, there are several options for storing and organizing content collected by the smartpen. If the user is positioned near a Bluetooth-enabled smart device, the content collected by the smartpen can be transferred wirelessly in real time, as the user inputs it onto the smartpaper. Alternatively, the content can be stored in the smartpen’s own memory until the user is able to connect via Bluetooth and make the wireless transfer. Lastly, the smartpen’s USB port allows the user to dock the smartpen and connect to a PC in order to make a direct transfer.
Once the data from the smartpen has been transferred to a PC or smart device, the desktop software enables the user to perform multiple functions with the digitized text, images, and audio. Primarily, the software allows the user to reorganize individual sheets of paper into notebooks and reorder sheets within a notebook. In its digital form, the handwritten text also becomes searchable, allowing users to easily locate text within notebooks or even across notebooks. Furthermore, this software allows the user to transcribe the digital handwritten text into standard computer text for word processing. Sheets and notebooks, along with their synched audio, can easily be exported as PDFs and shared (Echo Smartpen Tech Specs).
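The cross-notebook search function can be sketched as follows. The class and method names here are illustrative assumptions, not Livescribe's actual API; the sketch assumes handwritten strokes have already been transcribed to plain text.

```python
# Minimal sketch of cross-notebook text search over transcribed
# handwriting (illustrative names, not the Echo Desktop API).

class Notebook:
    def __init__(self, name):
        self.name = name
        self.pages = []          # transcribed text, one string per sheet

    def add_page(self, text):
        self.pages.append(text)

def search(notebooks, term):
    """Return (notebook name, page index) pairs whose transcribed
    text contains the search term, case-insensitively."""
    hits = []
    for nb in notebooks:
        for i, text in enumerate(nb.pages):
            if term.lower() in text.lower():
                hits.append((nb.name, i))
    return hits

biology = Notebook("Biology")
biology.add_page("Mitochondria are the powerhouse of the cell")
history = Notebook("History")
history.add_page("The printing press transformed knowledge storage")
results = search([biology, history], "knowledge")
```

What matters semiotically is the shift this enables: once the handwriting exists as encoded text, it participates in the media-independent operations of the digital environment rather than remaining locked to one physical sheet.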
In order to understand the advanced potential that the smartpen and smartpaper interface contain as a cognitive technology, I will rely on the theoretical contributions of C.S. Peirce, Andy Clark, and David Chalmers. In particular, I will refer to Peirce’s concept of semiosis and the triadic nature of signs as a basis for understanding how meaning and abstractions are made (Irvine Grammar 17). In conjunction with Peirce’s theories, I will use the concept of “the extended mind” as developed by Andy Clark and David Chalmers to describe how the sign formation process, as a cognitive function, is extended into tools and artifacts (Clark & Chalmers).
Peirce’s Theory of Sign Formation
A sign is something by knowing which we know something more.
C.S. Peirce (Irvine Semiotics 16)
By “semiosis” I mean… an action, or influence, which is, or involves, a cooperation of three subjects, such as a sign, its object, and its interpretant…
C.S. Peirce (Irvine Semiotics 21)
To understand a sign, which perhaps may be simplified as something to which meaning is attributed and through which meaning is understood, one must understand how a sign is formed. According to C.S. Peirce, semiosis is the process through which meaning is made and interpreted within a sign. Furthermore, this process of meaning-making unfolds through a three-part relationship. In other words, for the meaning of any sign to be understood, there must be: 1) a physical manifestation of the sign to be interpreted, whether visual, audible, tactile, etc.; 2) an abstract concept or secondary physical object, which the physically manifested sign represents; and finally 3) an interpretant or meaning that is produced from the relationship between the physically manifested sign and the concept or object it represents (Irvine Semiotics 14-5).
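The triadic relation can be modeled as a small data structure. This is an illustrative simplification of Peirce's account, and the names are mine, not Peirce's terminology mapped one-to-one; its one faithful feature is that the interpretant is itself a sign, available for further semiosis.

```python
# Peirce's triadic sign relation as a toy data structure
# (an illustrative simplification, not a formal semiotic model).

from dataclasses import dataclass

@dataclass
class Sign:
    representamen: str   # the physical manifestation (mark, sound, ...)
    object: str          # the concept or thing it stands for
    interpretant: str    # the meaning produced by their relation

    def reinterpret(self, new_object, new_interpretant):
        """The interpretant becomes the representamen of a new sign,
        continuing the chain of semiosis."""
        return Sign(self.interpretant, new_object, new_interpretant)

# Classic example: smoke as a sign of fire, whose interpretant then
# serves as the representamen of a further sign.
smoke = Sign("smoke on the horizon", "combustion", "there is a fire")
alarm = smoke.reinterpret("danger to the camp", "we should move")
```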
By explaining this three-part relationship within the process of semiosis, Peirce reveals three fundamental concepts of meaning-making. First, the relationship between the physical manifestation of the sign to be interpreted and the abstract concept or secondary physical object reveals the interdependence of signs within larger sign systems in creating or understanding meaning. Second, as the interpretant is produced through the relationship between the physically manifested sign and the object or concept it represents, rather than contained within either of these entities, we are able to understand that the interpretant itself is a sign. Third, and perhaps most importantly, in order for meaning to be understood, a sign’s interpretant must be acknowledged. Thus, there must be an active agent outside of this system capable of both observing the physical manifestation of the sign and drawing its connection to the object or concept it represents in order to understand the meaning that the interpretant, or relationship between the two, conveys.
The Extended Mind Theory
Bearing in mind Peirce’s concept of semiosis, the most logical active agent capable of acknowledging an interpretant as a sign’s meaning is our human cognitive system. Though the specific process of human cognition remains an unobservable mystery (Barrett), we are able to observe the process of perceiving the physical manifestations of signs (i.e. viewing physical images or objects, listening to sounds, feeling materials, etc.), as well as the utilization of the meaning made with these signs (i.e. verbal, physical, emotional, etc. responses).
However, the work of Andy Clark and David Chalmers in “The Extended Mind” offers a unique alternative interpretation of this process. In essence, Clark and Chalmers assert that the cognitive functions of the human mind are not restricted to the mind itself; rather, through the use of tools and artifacts, the human mind can be “extended.” For example, take the primary cognitive function of memory. When an individual writes down a telephone number instead of memorizing it, the individual is, in essence, using the paper as an external memory in which to store information rather than retaining it within their own memory.
This example reveals two notable implications of Clark and Chalmers’ theory. First, extending cognitive functions to an artifact or tool is an implicitly “external,” “active,” and productive process. In other words, not only can cognitive functions occur in an environment outside of one’s mind through artificial cognitive agents, but they can produce meaning that can then be used or applied within other cognitive agents, either human or artificial. Second, this example reveals the idea of larger cognitive or system architectures and of coupling. In other words, through the extension of human cognitive functions, systems of interdependency are created between cognitive agents that enable more efficient and, ideally, more effective cognitive processes. Within these systems, the human cognitive agent couples with an artificial cognitive agent through which meaning can then be produced and used (Clark & Chalmers 7-9).
The Connection between These Concepts
The significance of Clark and Chalmers’ theories in the context of Peirce’s concepts of semiosis and sign formation is that when these processes are extended from the human mind into the artifacts and technologies that we use, we are able to create extremely powerful tools and networks of meaning-making. As Clark and Chalmers explain, these networks can be as simple as the coupling of one’s mind to a standard pen and paper interface (Clark & Chalmers 12-5). However, when coupled with more complex artifacts or more powerful technologies, the cognitive outputs become more complex and powerful as well, allowing for higher levels of meaning abstraction. With this in mind, we can begin to understand how significant the development of computing technologies as information processing systems has been for the progression of human thought.
In fact, even in the earliest conceptions of computer processing systems, such as Vannevar Bush’s “memex,” we find that much of the developmental inspiration of these systems stems from a need to store and communicate information in ways that would facilitate higher levels of meaning-making (Bush). Bush’s vision provided the foundation for much of the information processing technologies that we have today, and other early computer designers progressed his vision even further. For example, Douglas Engelbart was responsible for revolutionary designs such as hypertext linking as a method for connecting multiple layers of digital content (Engelbart). Ivan Sutherland developed the Sketchpad, which provided an early example of design-based programming through the use of a “light pen” (Sutherland). Also pivotal was Alan Kay, whose vision included mobilizing the personal computer interface into a singular portable device and the introduction of software applications within a user-programmed environment (Manovich 57).
These concepts, beginning with Peirce’s philosophical foundation and Clark and Chalmers’ explanation of how artifacts and technology facilitate higher levels of meaning-making through cognitive extension, provide the basis I will use to investigate the smartpen as a cognitive tool for meaning-making and abstraction.
The Cognitive Advantage of the Smartpen & Smartpaper Interface
Smartpen technology presents a modular system architecture consisting of three primary components: the smartpen itself, smartpaper, and the desktop software (Baldwin & Clark 63). Both the smartpen and the smartpaper contain mechanical interfaces that can be manipulated by a user to produce analog data (Denning & Bell 470). For example, the ink of the smartpen is standard ballpoint ink which can be used to write standard text on a standard piece of paper, while the smartpaper can be used as a standard notebook if written on with standard ink. The digital interface of the smartpen and smartpaper is created when the powered smartpen is used to input information on the smartpaper by bringing the ink cartridge in direct contact with the microdots contained on the smartpaper grid. The interface of the system software, on the other hand, offers a standard set of functions that depend on the device on which it is run, such as a PC, tablet, or even smartphone.
Like any artifact, the smartpen and smartpaper interface is a semiotic tool that facilitates the expansion of sign creation and interpretation. According to C.S. Peirce there are three fundamental types of signs: icons, indices, and symbols. Iconic signs are those that can be interpreted through representations of likeness. Indices are signs that point to other signs, such as an arrow that indicates a direction or a car horn that redirects the attention of the individual who hears it. Most importantly, however, indices reveal the positioning of signs within a larger system of meaning. Implicit in the indication of the direction that one must go are the other directions that one must not go. Likewise, the sound of the horn reveals that there must be some active source making the sound. Thus, in recognizing an indexical sign, there are both clear and implicit connections to other signs. Finally, symbols are signs in which the correlation between the sign itself and the object it represents is attributed by convention rather than contained or apparent within the sign itself (Irvine Semiotics 18-9).
As an interface, the smartpen and smartpaper system is designed to facilitate indexical sign formation. In other words, the microdot grid that the smartpaper contains functions as a larger meaning system in which any added marks or script inherently become signs indexed to specific locations within the grid. Thus, the grid itself acts as a set of indexical signs through which users can develop additional signs of any type, whether iconic, indexical, or symbolic. What is unique, however, is the degree of control and precision over the digital formation of signs that this particular interface gives to users. Rather than relying on predeveloped icons or symbols of a standardized PC or smart device interface, the user can design their own signs to represent meaning. This allows the user to work with signs that are clearer and more easily interpretable, which in turn may facilitate more powerful forms of cognitive offloading. In other words, if the user is able to work with signs that they can understand more intuitively, this relieves the user of the cognitive burden of learning new signs and then recalling their meaning.
Furthermore, through the modular components of the smartpen system architecture, we can begin to break down the various layers of meaning that exist within each interface (Irvine Powerpoint Slide #69). Like any standard pen and paper interface, the smartpen and smartpaper interface is at its base a medium for abstracting the sign systems of handwritten icons or symbols. Through the addition of the smartpen’s camera and microphone, however, this interface is transformed into a medium for also abstracting the sign systems of sounds and images. This layer of abstraction is formed through the ability to “program” functions into handwritten text, creating a unique form of hyperlinked text. Specifically, the process of capturing and synching handwritten text and recorded audio activates a specific location on the microdot grid and, in doing so, programs a command function for the smartpen’s camera. As a result, whenever the smartpen “taps” the handwritten text, it initiates the audio playback function of the pen. In addition to these user-created programs, the smartpen contains certain preprogrammed command functions as well. For example, users can control the smartpen’s utility functions by drawing a small cross that then serves as a set of directional controls.
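The tap-to-replay mechanism described above can be sketched as a simple link table. The data layout is a hypothetical assumption, not Livescribe's actual implementation: while the user writes, each stroke's grid coordinates are logged against the elapsed time of the audio recording; tapping the text later looks up the nearest logged stroke and returns an audio offset to replay.

```python
# Sketch of handwritten-text "hyperlinks" to audio (hypothetical data
# layout): stroke positions on the microdot grid are keyed to the
# audio timeline as the user writes.

def record_stroke(link_table, x, y, audio_seconds):
    """Associate a stroke's grid position with the current audio time."""
    link_table[(x, y)] = audio_seconds

def tap(link_table, x, y, radius=2):
    """Return the audio offset linked to the stroke nearest the tap,
    within a small search radius; None if nothing was written there."""
    best = None
    for (sx, sy), t in link_table.items():
        d = ((sx - x) ** 2 + (sy - y) ** 2) ** 0.5
        if d <= radius and (best is None or d < best[0]):
            best = (d, t)
    return best[1] if best else None

links = {}
record_stroke(links, 120, 45, audio_seconds=33.5)   # written while audio at 33.5s
record_stroke(links, 300, 200, audio_seconds=90.0)
playback_offset = tap(links, 121, 44)               # tap near the first stroke
```

In Peircean terms, each entry of the link table is a machine-held index: a grid position that points to, and retrieves, a second sign in another medium.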
The processing capabilities of the smartpen combined with the microdot pattern of the smartpaper grid, elevate the medium of this interface even further, allowing users to add layers of abstraction within the sign systems of time and space. To explain, as the user activates microdots on the grid of the smartpaper, the information that is collected is organized in a sequential manner and “bookmarked” by the system processor (Pettersson & Ericson). Consequently, this sequential organization and bookmarking process creates sign instances that are catalogued by the specific time in which they were captured. Additionally, the microdot pattern contained within the smartpaper grid establishes a spatial sign system for identifying specific instances of location.
Of course, transferring notes from the smartpen and smartpaper interface to the desktop software interface allows for even more layers of meaning abstraction. Particularly notable, however, is the degree to which this transfer blurs the lines of separation between the artifacts of pen and paper and digital word processing. Even though the smartpen and smartpaper interface on its own is digital, the ability to automatically sync notes to the desktop software as users write allows users to see instant encoding of analog information into its digital form. In turn, this provides users a more transparent understanding of the combined interfaces (Murray 61). Within the desktop software, the user can then apply their tacit understanding of working within a desktop application to manipulate the digitally encoded information further. For example, the user can catalogue and rearrange sheets of digital information and even search through the text using the media-independent searching function of “Control F” (Manovich 122).
The Smartpen Grows Signs
Perhaps even more interesting is the advanced way in which smartpen technology effectively offloads the cognitive meaning-making process. In other words, by combining two sets of signs, one visual (the written script or design) and one audible (the recorded sound), a specific correlation between the two signs can be drawn to interpret meaning. In doing so, we observe the individual textual and audible signs merge into each other to form a new, more complex sign for further, higher orders of abstraction (Irvine Grammar 15).
Thus, as Peirce states, “the essential function of a sign is to render inefficient relations efficient” (Irvine Semiotics 16). By combining standard text signs with audible signs, the smartpen allows users to offload the cognitive function of remembering or recalling the contextual information about the written script or designs. This allows users to make more efficient use of various sign types, whether iconic, indexical, or symbolic, in order to take notes. More specifically, this allows the individual to store metainformation, or information about the nature of information (Floridi 31), in the form of an audible object sign. As explained, if a user draws a diagram in order to depict a process that is being described audibly, the smartpen allows them to record and sync the process description to the diagram. In such an instance, the user has created a visual sign whose object, or the concept that the visual sign represents, is stored as metainformation that can be accessed by the user in order to aid in interpreting the sign.
Still, perhaps one of the most useful aspects of the smartpen’s meaning-making capabilities lies in its distributed cognitive functions, or the ability to distribute cognitive activity across human minds (Zhang & Patel). Through the smartpen’s desktop software, users can share information collected by the smartpen through various remediations of its original form, such as PDFs or emails (Livescribe(™)). This not only allows individuals to share both written text and audio, but also acts as a form of individual significance parsing. For example, when the smart notes are shared, the recipient of the notes is able to listen to the recorded audio while viewing the text. This allows the recipient to acknowledge not only the interpretants that are developed by the actual audio or text, but also to make inferences about the interpretants that the individual who shared the notes originally acknowledged. Specifically, it allows the recipient to understand whatever hierarchy or organization the original user might have given to various aspects of this information.
This parsing process is in essence a step-by-step revelation of Peirce’s assertion that “all thought is in signs” and therefore “requires a time” in which each sign is actually interpreted, and which, by being interpreted, creates a new sign. Essentially, the smartpen creates a map of the “time” in which the audible sign is interpreted into a textual sign, creating a more complex sign that differs from a simple audible reading of a text. Rather than just restating the textual sign out loud by translating it into the audible sign system, the smartpen integrates the meaning of these two sign systems, adding to the contextual meaning of one sign the interpretation and context of the secondary sign (Irvine Semiotics 16).
Refining Smartpen Technology
Despite presenting unique opportunities for sign formation, the smartpen system architecture is not without its own usability issues. Primarily, as the smartpen and smartpaper interface is built on the culturally ingrained conventions of the standard pen and paper interface, the user is able to apply a relatively high degree of tacit knowledge in their mechanical or analog usage of the smartpen and smartpaper interface (Murray 61). However, to a certain degree this tacit knowledge also inhibits usability by masking the digital affordances of the interface. For example, the smartpen and smartpaper both resemble a standard pen and paper, except for certain design cues, such as the power button on the smartpen or the control panel at the bottom of the page (See image), that indicate their digital capabilities. Outside of the intuitive knowledge of using the pen as an instrument to make marks on the paper, however, performing the digital functions of the pen requires a certain degree of learned user competency. For example, the user must learn to first turn on the pen prior to writing and then must learn the functions of each of the buttons on the smartpaper itself. Even more obscure, however, is the knowledge that the text itself contains additional information that can be accessed by tapping the text.
Image Source: Livescribe Faq: Using The Home Button
Of course, learning the functions of smartpaper requires a relatively simple learning curve. However, in order to ensure that the user properly initiates the digital components of the smartpen and smartpaper interface, the design of the smartpen itself could be improved. Namely, it could adopt a design that incorporates a forcing function, or a design property that uses “the intrinsic properties of the representation to force a specific behavior” (Norman 34), requiring the user to power on the smartpen in order to write. Perhaps such a change would be as simple as adopting the retractable pen design, which forces the user to press a button in order to eject the tip of the ink pen; the smartpen could incorporate this design feature within the process of powering on the smartpen itself.
Furthermore, the body of the smartpen, which is similar in length to a standard pen, is much wider in circumference than a standard pen. The smartpen’s larger circumference is currently an unavoidable and necessary property of its design, as it results from the components that make the smartpen capable of digital mediation: the camera, microphone, processor, memory, etc. The smartpen’s size, of course, may present a relative degree of discomfort for some users in terms of writing with the pen. More importantly, however, it inevitably forces the user to adapt their grip on the smartpen, which may reduce the degree of precision that the user can achieve in controlling their handwriting or designs. While this issue does not necessarily affect the functional usability of the smartpen, it could diminish the pen’s creative and design affordances.
Perhaps the most significant design issue with smartpen technology, however, is connected to its larger system network. While the two individual interfaces, the smartpen and smartpaper on the one hand and the Echo desktop software on the other, can function independently with their own unique sets of affordances, the modular nature of this architecture is not entirely seamless in terms of integration and efficiency. For example, the process of converting handwritten text into digital text is extensive: information must be input into the notebook, then transferred to the software application, and finally converted within the software. As a result, in cases where the user simply wants digital text, the smartpen and smartpaper interface is drastically less efficient than simply writing within a PC or smart device word processing application.
Despite these issues in design, smartpen technology represents unique and valuable opportunities in sign formation and advanced cognitive offloading. In addition to successfully merging select functions of existing technologies in order to create unprecedented forms of mediation, smartpen technology’s greatest potential perhaps lies in the amount of mobility it could allow users. Through the development of various technologies, such as the laptop PC, tablets and smartphones, we can understand the importance and demand for portable electronic devices that allow for advanced cognitive functions (Manovich 102-11). Unlike a PC or even tablet, the smartpen is extremely portable and its internal storage allows users to store information within the smartpen itself. Furthermore, unlike smartphones, which rely on keyboards that do not facilitate comfortable or practical use for extensive information input, the smartpen provides a mobile information processing device that is both more ergonomic and practical for larger inputs of information.
Still, in order for smartpen technology to achieve its potential, at least two things would need to precede widespread adoption of this technology. Specifically, the current cost of the smartpen itself would need to become less prohibitive. However, as innovation with both smartpen and smartpaper technology continues and competing forms of the smartpen and smartpaper interface become available, the costs for this technology should decrease. More importantly, however, the standardization of smartpaper, on which the smartpen’s digital capabilities depend, would need to become more widespread. Of course, this would require widespread acknowledgment of the benefits of smartpen technology in terms of its cognitive offloading capabilities in order to increase demand for this technology.
Thus, assuming that smartpen technology is embraced, it could alter the way that we understand and use the artifacts of pen and paper. The line between analog and digital will be further blurred, granting individuals new perspectives for how they might use these artifacts as tools for meaning-making and, consequently, new forms of cultural creativity and expression. When smartpen technology is viewed through a semiotic perspective of this type, we can understand the unique opportunities that pen and paper as digital technologies represent for creating, preserving, and sharing meaning and knowledge.
“About Livescribe Dot Paper.” 2016. Accessed December 13. http://www.livescribe.com/en-us/faq/online_help/Maps/Common/CRef_SP_About_c_about-livescribe-dot-paper.html.
Andersen, Peter Bogh. 2001. “What Semiotics Can and Can’t Do for HCI.” Knowledge-Based Systems 14: 419–24.
Baldwin, Carliss Y., and Kim B. Clark. 2000. Design Rules. Vol. 1. The Power of Modularity. Cambridge, Mass: MIT Press.
Barrett, John C. 2013. “The Archaeology of Mind: It’s Not What You Think.” Cambridge Archeological Journal 23 (No. 01): 1–17.
Bush, Vannevar. 1945. “As We May Think.” The Atlantic, July. http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/.
Clark, Andy, and David Chalmers. 1998. “The Extended Mind.” Analysis, Oxford University Press 58 (1): 7–19.
Denning, Peter J., and Tim Bell. n.d. “The Information Paradox.” American Scientist 100: 470–77.
Norman, Donald. 1991. “Cognitive Artifacts.” In Designing Interaction: Psychology at the Human-Computer Interface, 17–38. Cambridge University Press.
“Echo Desktop.” 2016. Accessed December 13. https://www.livescribe.com/en-us/smartpen/echo/echo_desktop.html.
“Echo Smartpen Tech Specs.” 2016. Accessed December 13. https://www.livescribe.com/en-us/smartpen/echo/echo_desktop.html.
Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: MIT Press, 2003.
Evans, David. Introduction to Computing: Explorations in Language, Logic, and Machines. August 19, 2011 edition. CreateSpace Independent Publishing Platform, Creative Commons Open Access: http://computingbook.org/.
Floridi, Luciano. 2010. Information: A Very Short Introduction. New York: Oxford University Press.
Irvine, Martin. 2016. Semiotics, Symbolic Cognition, and Technology Key Writings. Compiled and edited with commentary by Martin Irvine. Communication, Culture & Technology Program, Georgetown University.
———. 2016. The Grammar of Meaning Systems: Sign Systems, Symbolic Cognition, and Semiotics. Compiled and Edited with commentary by Martin Irvine. Communication, Culture & Technology Program, Georgetown University.
———. 2016. “Semiotics Foundations, 2: Media, Mediation, Interface, Metamedium.” PowerPoint Presentation.
Jackendoff, Ray. 2009. Foundations of Language: Brain, Meaning, Grammar, Evolution. Reprint. Oxford: Oxford Univ. Press.
Johnson, Jeff. 2014. Designing with The Mind in Mind: Simple Guide to Understanding User Interface Design Guidelines. Second edition. Amsterdam ; Boston: Elsevier, Morgan Kaufmann.
“Livescribe(™) Connect(™) Makes Handwritten and Spoken Information Easily Shareable with Facebook, Evernote(R), Google(TM) Docs and Email – all from Paper Livescribe Introduces the Affordable GBP99 2GB Echo Smartpen Starter Pack.” 2011. PR Newswire Europe Including UK Disclose. http://proxy.library.georgetown.edu/login?url=http://search.proquest.com/docview/896962663?accountid=11091.
“Livescribe Echo Gallery.” 2016. Accessed December 14. https://www.livescribe.com/en-us/smartpen/echo/photo.html.
Manovich, Lev. 2013. Software Takes Command: Extending the Language of New Media. International Texts in Critical Media Aesthetics. New York ; London: Bloomsbury.
Marggraff, J., E. Leverett, T.L. Edgecomb, and A.S. Pesic. 2013. Grouping Variable Media Inputs to Reflect a User Session: US 8446297 B2. Google Patents. https://www.google.ch/patents/US8446297.
Moggridge, Bill. 2007. Designing Interactions. Cambridge, Mass: MIT Press.
Murray, Janet H. 2011. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, Massachusetts: The MIT Press. http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10520612.
Naone, Erica. 2016a. “Computing on Paper.” MIT Technology Review. Accessed December 7. https://www.technologyreview.com/s/409190/computing-on-paper/.
———. 2016b. “Taking Apart the Livescribe Pulse.” MIT Technology Review. Accessed December 7. https://www.technologyreview.com/s/409948/taking-apart-the-livescribe-pulse/.
Pettersson, M.P., and P. Ericson. 2007. Coding Pattern: US 7175095 B2. Google Patents. https://www.google.com/patents/US7175095.
Sutherland, Ivan. “Sketchpad: A Man-Machine Graphical Communication System.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 109–126. Cambridge, MA: MIT Press, 2003.
Zhang, Jiajie, and Vimla L. Patel. 2006. “Distributed Cognition, Representation, and Affordance.” Pragmatics & Cognition 14 (No. 2): 333–41.