By Rebecca N. White
Alan Kay helped revolutionize computing, but it was not quite the revolution he wanted. With his Dynabook in the 1970s, he aimed to teach children to program so they could experiment with and learn from personal computers, and to help humans’ thought processes adapt to the digital medium. What is the legacy of this interactive, educational vision?
I seek to answer this question by looking at Kay’s ideas in the context of C. S. Peirce’s meaning-making models and Janet Murray’s work on twenty-first-century interaction design, which is rooted in semiotic principles. Exploring Kay’s vision in this way sheds light on how technological developments interact with natural human meaning-making processes, revealing principles of digital design that augment humans’ cognitive processes and help technologies become societal conventions. For this project, I conducted a textual analysis of primary-source material from Alan Kay and Janet Murray within a Peircean framework.
While Kay’s educational ideas are evident in many of today’s technologies, a semiotic analysis reveals that Kay was perhaps pushing humans to be too computer-like too quickly. Interactions with computing systems must satisfy human expectations and meaning-making processes.
Using computing technology today is both a social and a personal experience. Video games driven by powerful graphics cards play on large, flat-screen monitors without lag, allowing users to spend a Friday night at home alone navigating complex simulations or connecting with gamers across the globe in massive online worlds. Touchscreen devices that fit in purses and pockets line up icons that are gateways to telephonic capabilities, web browsers, music streaming applications, and more in conventional grid layouts. A range of portable computers with full keyboards weigh under 3 pounds but can still deliver users to more externalized memories than they would ever need in their lifetimes. And with their personal computing innovations in the 1970s, Alan Kay and the Learning Research Group at Xerox PARC helped bring this world into being, providing the inspiration for technical developments for decades to come.
Kay’s widely implemented technical vision for the graphical user interface and more was driven by media, communication, and learning theories. Yet, a simple-sounding, non-technical idea that Kay put forward has not broadly caught on (alankay1 2016). An educational goal was at the center of the vision: to create a “metamedium” that could “involve the learner in a two-way conversation” (Kay and Goldberg). The aim was for users to be able to write their own programs, not just use prepackaged ones, so they could experiment and learn. The proposed device, called the Dynabook, was intended for “children of all ages” (Kay 1972), but Kay focused heavily on the potential for youth to learn by doing (Kay and Goldberg).
This was a truly bold vision. Kay sought to launch what he today calls the real computer revolution—the one in which humans’ thought processes adapt to the possibilities presented by the digital medium (alankay1 2016).
These ideas are part of a broader process of meaning making and knowledge building that C. S. Peirce has described. And Kay was building on a long legacy of innovations and ideas about augmenting human intelligence with computing systems, from Charles Babbage to Samuel Morse to Claude Shannon to Douglas Engelbart and J. C. R. Licklider.
Interaction designer Janet Murray is operating in the environment that this history made possible. It is an environment in which humans interact with computers, not just use them as tools, and in which digital design is focused on that interaction. It is a space in which people dabble with new frontiers of technology, such as virtual reality. It is an age in which ideas proliferate, and some exceed technical capabilities. Murray strives to add design structure to this at times chaotic interactive environment, with the aim of giving humans agency in interaction and amplifying their meaning-making processes.
To begin exploring these topics and the ways in which Kay’s educational ideas have developed over time, I asked the question: What is the legacy of Alan Kay’s interactive, educational vision for personal computing? I sought to answer this question by looking at Kay’s ideas in the context of C. S. Peirce’s meaning-making models and Janet Murray’s work on twenty-first-century interaction design, which is rooted in semiotic principles. This analysis has implications beyond tracing the Dynabook’s legacy. Exploring Kay’s ideas through semiotic models sheds light on how technological developments interact with natural human meaning-making processes. It reveals general principles of digital design that augment humans’ cognitive processes and help technologies become societal conventions. For this project, I drew on a textual analysis of primary-source material from Alan Kay and Janet Murray, conducted within a Peircean framework. To understand Kay’s influences, I also turned to the media theories of Marshall McLuhan and the work of Kay’s colleagues and predecessors, including Engelbart, Licklider, Vannevar Bush, and others, supplemented by my analysis of current technological developments.
Kay and Murray are united around the idea that digital devices should be designed in a way that helps humans build knowledge. Yet, the two diverge in approach. Murray’s focus is on humans and computing systems meeting in the middle—what the computer can do procedurally must match the user’s expectations, not the other way around. Kay too wanted devices and interfaces that matched the way humans make meaning, but he sought to make human thinking more procedural, to quickly adapt thought to the way computers process information. Additionally, Murray’s theories are firmly rooted in society’s communal process of making meaning, while Kay is focused on individual learning, often seeming to overlook the significance of collective processes.
The educational vision Kay put forward remains relevant, and his ideas are evident in many of today’s technologies. However, a semiotic analysis reveals that Kay was perhaps pushing humans to be too computer-like too quickly.
A Vision for Learning
The plans for personal computing developed at Xerox PARC required rethinking hardware and programming to build a computing system that was not just a passive recipient of information but an active participant in a creative process. Much of the technical vision was widely implemented. The graphical user interface and overlapping windows, object-oriented programming, and the use of icons in interfaces that we know today were all born at Xerox PARC (Kay and Goldberg 2003). Yet, Kay’s precise educational vision, which built on Seymour Papert’s and others’ work (Kay 2001), did not catch on as intended (Manovich 2013). More than forty years after it was first introduced, the plan to teach every child how to program and set him or her up with a digital teacher with which to experiment has not been widely adopted by industry or educational systems (alankay1, Greelish). Still, when these ideas are viewed in broader terms of human meaning making and knowledge building, aspects of Kay’s learning vision are apparent in many areas.
Augmentation and Communication
The idea that the human mind needs to be understood before designing interfaces motivated Kay’s thinking about computers and human interaction with them. He was present at the birth of user-interface and human-centered design, even if he was not its only parent. According to this way of thinking, humans are active users, not passive consumers. A computer isn’t just a tool, but rather a “metamedium” that combines other media and is an extension of thought (Kay 2001).
Humans have long used tools to extend their abilities and help them navigate the world, as well as for more symbolic purposes (Donald 2007, Renfrew 1999). And these processes were firmly rooted in an external and networked process of making and sharing meaning. Language, writing, and literacy allowed humans to store memories externally and transmit them to future generations, aiding knowledge building and cultural progress (Donald 2007). Humans extend their cognition to these systems, which Clark and Chalmers describe as “a coupling of biological organism and external resources.” Language is one tool that extends cognition in this way (Clark and Chalmers 1998, 18).
Kay operates in this spirit and in many cases has pointed out the importance of language, but he is also firmly situated in the realm of digital thinking—using computing systems that deal in abstract symbols that humans can understand and machines can execute to aid thinking. As he wrote, “language does not seem to be the mistress of thought but rather the handmaiden” (Kay 2001). For Peirce, like Kay, the linguistic and other symbolic systems are just one part of a broader system of logic and meaning-making processes.
With his focus on computing technologies to extend cognition, Kay was also building on the work of Douglas Engelbart, Ivan Sutherland, J. C. R. Licklider, and others (Kay 2004). Licklider termed this “man-computer symbiosis” (Licklider 1990). Sutherland developed the Sketchpad and light pen, which made graphical manipulation possible via an interface, creating a computing device with which humans could begin to be visually creative (Sutherland 2003). Engelbart developed foundational interface ideas and technology, such as the mouse, to change human behavior and augment human intelligence (Engelbart 2003). Often, Kay nods to these and other innovators. In one recent conversation, he described his aims in broader context: “We were mostly thinking of ‘human advancement’ or as Engelbart’s group termed it ‘Human Augmentation’ — this includes education along with lots of other things” (alankay1 2016).
The individual was Kay’s focus. He wanted to build a personal computer with which the user shared a degree of “intimacy.” In his conception, achieving that intimacy required users to be able to both read and write with the computer, to make it truly theirs. He sought to adapt computing devices to the way humans think while also changing the way humans think (Kay 2001).
At the center of Kay’s ideas were principles of communication and meaning making. He has often described a revelation he had when reading Marshall McLuhan’s work on media: the computer itself is a medium. It is a means for communicating information to a receiver that the receiver can then recover and understand. Kay took this further and interpreted McLuhan’s work as saying the receiver must become the medium in order to understand a message—an idea that would drive his conception of human-computer interaction as an intimate connection (Kay 2001). Referring to Claude Shannon, who pioneered a theory for conveying information in bits without noise, Kay recently put his general thoughts on process and meaning this way:
The central idea is “meaning”, and “data” has no meaning without “process” (you can’t even distinguish a fly spec from an intentional mark without a process).
One of many perspectives here is to think of “anything” as a “message” and then ask what does it take to “receive the message”?
People are used to doing (a rather flawed version of) this without being self-aware, so they tend to focus on the ostensive “message” rather than the processes needed to “find the actual message and ‘understand’ it”.
Both Shannon and McLuhan in very different both tremendously useful ways were able to home in on what is really important here. (alankay1 2016)
In the same discussion, Kay elaborated on the Shannon ideas: “What is ‘data’ without an interpreter (and when we send ‘data’ somewhere, how can we send it so its meaning is preserved?). . . . Bundling an interpreter for messages doesn’t prevent the message from being submitted for other possible interpretations, but there simply has to be a process that can extract signal from noise” (alankay1 2016). He tackles this idea of meaning from both a technical perspective (extracting signals from noise) and a human one.
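Kay’s point that data carries no meaning without a process can be made concrete with a small sketch (my illustration, not Kay’s): the very same bytes yield entirely different “messages” depending on which interpretive process receives them.

```python
# A toy illustration of "data has no meaning without process": the same four
# bytes produce three different "messages" under three different interpreters.
import struct

data = b"\x41\x42\x43\x44"

as_text = data.decode("ascii")          # interpret the bytes as ASCII characters
as_uint = struct.unpack(">I", data)[0]  # interpret them as a big-endian integer
as_float = struct.unpack(">f", data)[0] # interpret them as a 32-bit float

print(as_text)   # ABCD
print(as_uint)   # 1094861636
print(as_float)  # roughly 12.14
```

Nothing in the bytes themselves privileges one reading over another; the “message” only emerges once an interpreter is applied, which is the role Peirce’s interpretant plays in human meaning making.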
Beyond Shannon and McLuhan, this sounds much like Peirce’s triadic conception of the process of meaning making. In this model, a human makes meaning (an interpretant) by correlating an object (a learned concept) and a sign (a material and perceptible representation). Reception is key in this model as well. What are signs without an interpretant? There is no meaning without the human process of recognition and correlation (Irvine 2016a). Peirce also came to call his signs mediums—that is, interfaces to meaning systems, or instances of broader types (Irvine 2016b)—an interesting parallel to Kay’s revelation that the computer is a metamedium. Moreover, Peirce was very focused on process, but in a way slightly different from Kay. The process of meaning making with symbolic systems is dynamic and is always done in a broader societal and communal context. Communicated information can only be understood if the sender and the receiver are drawing from the same conventional understandings (Irvine 2016a). Kay does not seem to fully account for the communal aspect of these processes.
The Building Blocks of Learning
Working with this internal meaning-making framework, Kay drew heavily on ideas about the nature of children’s thought processes. He wanted a personal device that could match the meaning-making processes of children at their individual developmental levels (Kay 1972). The design of the computer interface needed to be tied to natural learning functions (Kay 2001). As Kay put it recently, “For children especially — and most humans — a main way of thinking and learning and knowing is via stories. On the other hand most worthwhile ideas in science, systems, etc. are not in story form (and shouldn’t be). So most modern learning should be about how to help the learner build parallel and alternate ways of knowing and learning — bootstrapping from what our genetics starts us with” (alankay1 2016). He demonstrated some of these learning techniques using modern technology in a 2007 TED Talk, available on TED’s website (the relevant clip begins at 12:15).
Kay’s primary influence when it came to cognitive development was Jerome Bruner, though he was also inspired by other developmental psychologists and Seymour Papert’s educational work with LOGO. Most influential in Kay’s interface efforts were Bruner’s descriptions of children’s stages of development and three mentalities—enactive, iconic, and symbolic. The enactive mentality involves manipulation of tangible objects; iconic or figurative involves making connections; and symbolic involves abstract reasoning. Additionally, Kay recognized that a human’s visual, symbolic, and other systems operate in parallel (Kay 2001).
According to Kay, a computing device needed to be designed to serve and activate all of these areas if it was to encourage learning and creativity, one of his overarching goals for children and adults alike. He sought to combine the concrete and the abstract. Based on these principles, he developed the motto “doing with images makes symbols” (Kay 2001).
Although Peirce’s conception of sign systems did not involve clear stages of development and he was not modeling internal cognitive processes, his ideas roughly correspond to Kay’s. The enactive mentality is about how humans interact with the material signs in their worlds. It is a tactile and action-oriented experience with the outside world that provides input to humans with which they then make meaning. The iconic mentality could map to two of Peirce’s signs: the iconic and the indexical, which represent and point to objects, respectively. The symbolic realm appears to be the same in both conceptions—abstractions and generalizations made from other signs. Kay’s motto, “doing with images makes symbols,” could thus be broadened to “doing with signs makes meaning.”
The computing device Kay envisioned would help users make their own generalizations and abstractions from digital symbols (Kay and Goldberg), which are themselves abstractions. This is the process of meaning making and knowledge building with signs that Peirce describes (Irvine 2016a). And in the Peircean sense, the computers and their parts are signs as well, the material-perceptible parts of humans’ symbolic thought processes (Irvine 2016b).
Central to Peirce’s conception is the dialogic nature of sign systems. That is, the individual process of meaning making is based on conventions shared with other humans (Irvine 2016a). In contrast, Kay focuses on the “interactive nature of the dialogue” between the human and the computing system, another symbolic actor in a sense (Kay and Goldberg). It is almost as if Kay views computers as on the same level as humans in terms of the symbolic dialogue. This thought process emerged clearly in a recent Q&A. A questioner asks specifically about tools to encourage the communal process of knowledge building, and Kay brings the conversation back around to individual human and human-computer processes (in addition to criticizing the interface he has to use to respond to the questioner) (alankay1 2016).
In this case, Kay’s conception is both in line and in conflict with Peirce’s. Particularly in more recent writings after the spread of the internet, it appears that Kay recognizes the communal network of human meaning making and extends it to computers. It is not just about augmenting human intellect and increasing creativity on a personal, internal level. Rather, those interactive processes stretch far beyond the coupling of one human with a personal computer. However, in the Peircean conception, a computer is not the same kind of symbolic actor as a human. Computing systems cannot make meaning. They can convey information with which humans can then make meaning due to their capacity for abstraction and generalization, but they do not make correlations in the same way.
The technical computing revolution began with these ideas. Focused on the human-computer dialogue, Kay set out to translate these principles into a personal computing vision. He and Goldberg envisioned a metamedium that could simulate and represent other media. The vision took the form of the Dynabook, though many of its components were incorporated into other computing devices as well.
Part of this development process involved conceptualizing the foundational concepts in terms of the affordances of the digital space, but it also entailed shifting the way users approached computing systems. As Kay put it, “The special quality of computers is their ability to rapidly simulate arbitrary descriptions, and the real [computer] revolution won’t happen until children learn to read, write, argue and think in this powerful new way” (Kay 2001). He wanted to alter the way people approach digital technologies, and still sees that as the aim. In a recent Q&A, he warned: “children need to learn how to use the 21st century, or there’s a good chance they will lose the 21st century” (alankay1 2016). Kay at times calls his educational vision the “service” conception of the personal computer.
Regardless of the terms applied, the idea was bold and far-reaching. As Kay wrote, “man is much more than a tool builder . . . he is an inventor of universes” (Kay 1972), and he sought to use computers to make the most of that potential. The intention was for humans, particularly children, to be able to program the system themselves and learn concepts by experimenting in the digital space. Kay described the idea and experiments with computing technology in detail in “A Personal Computer for Children of All Ages” and “Personal Dynamic Media,” co-written with Adele Goldberg.
The underlying, meaning-making principles on which Kay drew translated into physical design choices (Greelish 2016). To encourage creativity, and drawing on his understanding of the iconic mentality, Kay envisioned an interface that presented as many resources on the same screen as was feasible. To meet this need, he created overlapping windows using a bitmap display, presenting graphic representations that users could manipulate with a pointing device—a mouse. The Smalltalk object-oriented programming language was an outgrowth of Kay’s understanding of how people process messages, and it unified the concrete and abstract worlds “in a highly satisfying way” (Kay 2001). The language was intended to be easy to use so even children could create tools for themselves and build whatever they wanted to in the metamedium. Users could personalize a text editor to process linguistic symbols as they saw fit, or create music and drawing programs. The Dynabook itself was intended to be lightweight, portable, able to access digital libraries of collective knowledge, and able to store, retrieve, and manipulate data (Kay 2001, Kay 2004, Kay and Goldberg).
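The message-passing idea behind Smalltalk can be suggested with a brief sketch. The following fragment is in Python rather than Smalltalk, and the class and message names are hypothetical illustrations of mine, not Kay’s code; it mimics only the core idea that every object is a small computer that responds to messages according to its own internal process.

```python
# A minimal sketch of Smalltalk-style message passing: the object decides for
# itself how to interpret each incoming message, echoing Kay's view that
# meaning arises from process.
class TextEditor:
    def __init__(self):
        self.buffer = ""

    def receive(self, message, *args):
        # Look up this object's own handler for the message; an object that
        # lacks a handler reports that it does not understand the message,
        # as Smalltalk objects do.
        handler = getattr(self, "do_" + message, None)
        if handler is None:
            return f"TextEditor does not understand '{message}'"
        return handler(*args)

    def do_insert(self, text):
        self.buffer += text
        return self.buffer

    def do_show(self):
        return self.buffer

editor = TextEditor()
editor.receive("insert", "doing with images ")
editor.receive("insert", "makes symbols")
print(editor.receive("show"))  # doing with images makes symbols
print(editor.receive("fly"))   # TextEditor does not understand 'fly'
```

The design choice worth noticing is that no outside code reaches into the object’s state; everything happens through messages the object interprets itself, which is what made Smalltalk feel like a medium users could extend rather than a fixed tool.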
Where These Ideas Took Us
Although many of his technical conceptions are ubiquitous, Kay’s somewhat utopian vision of a world in which each child had an individualized computer tutor with which to experiment through programming did not take off. This was at least in part because Kay was bound by the technical capabilities of the day, not to mention the magnitude of the task of shifting bureaucracies and ingrained human processes built up over centuries.
The devices Kay described in his early papers had to first be created before they could be used widely to enact his service vision. This was, after all, a new medium. The seeds of his ideas grew out of existing conventions like editing, filing systems, drawing, and writing (Kay and Goldberg), so they were somewhat familiar to users and activated existing human knowledge. But the possibilities afforded by the digital space were just being probed when Kay was first writing (Kay 1972). Technologies that we today think of as commonplace and some that have not yet come to fruition were being invented back then. For example, Kay hypothesized that the technology could “provide us with a better ‘book,’ one which is active (like the child) rather than passive.”
And Kay always intended for the initial ideas to grow and evolve, summed up in the “finessing style of design” employed at Xerox PARC (Kay 2004). These ideas were not meant as the end-all-be-all. Kay and his cohort imagined that others would not just improve upon them but also produce new innovations.
Yet, Kay hinted that there could be problems with the metamedium conception as well. He and Goldberg doubted that a single device could be pre-programmed to accommodate all user expectations. It was better to allow the user to customize the device as they saw fit. Kay and Goldberg explained: “The total range of possible users is so great that any attempt to specifically anticipate their needs in the design of the Dynabook would end in a disastrous feature-laden hodgepodge which would not be really suitable for anyone” (Kay and Goldberg).
That is, in many ways, what happened. Today, computing devices are frequently used for passive consumption of other forms of media. Users do create with current computing systems, but that creativity is constrained by software that has been programmed by someone else. The ability to program machines remains the purview of those with specialized skills (Manovich 2013).
Kay, for one, is not satisfied with the way in which computing technology has evolved, and has bemoaned the lack of innovation and the current state of computing. To him, people are mostly tinkering around the edges of existing conventions rather than inventing for the future. Overall, there is not enough emphasis on the services side of his original ideas. Current programming languages remain too abstract and are not user-friendly enough; he would like to see languages that are scalable and easier for humans to use. Kay criticizes tablets and other systems that do not use pointing devices, which are necessary for the enactive mentality. He still thinks simulations are an important part of the computer revolution but is not satisfied with any, although he describes NetLogo as interesting. And he widely criticizes user-interface designs, particularly those from Apple, as not being easy enough for their users to manipulate and personalize (alankay1 2016, Kay 2004, Oshima et al. 2006).
Despite his specific criticisms, many of the general principles Kay put forward, which were based on his conception of thought and learning processes, can be seen today. For instance, design fields now exist that focus on user interfaces and human-computer interaction, a significant change in and of itself (Kay 2001). And interactive computation that can better predict emergent behaviors and better respond to humans’ mental models is an active area of research that could perhaps make it unnecessary to teach children programming in order to achieve Kay’s aims (Wegner 1997, Goldin et al. 2006).
Although most software is not open to programming by children, many educational tools have been and are being developed that draw on interactive principles pioneered by Kay and that can react to a learner’s needs. Duolingo, a language-learning app that was built using collective intelligence and adapts to users’ learning levels, is just one example. Broader initiatives to incorporate computation in early childhood exist as well. Code.org, backed by Google, Microsoft, Facebook, and others, seeks to make computer science accessible to all children. Active learning practices incorporate many of the principles Kay sought to foster using computing technologies (Center 2016). Jeannette Wing describes computing as the automation of abstractions and seeks to teach children how to think in this way (Wing 2006, Wing 2009). Ian Bogost argues that procedural literacy, based on computing processes, should be applied outside the realm of programming to teach people how to solve problems (Bogost 2005). The One Laptop Per Child initiative, which Kay mentions in the TED Talk above, seeks to give children the metamedia with which to experiment.
The list goes on, but perhaps most true to Kay’s vision is MIT Lifelong Kindergarten’s Scratch project. This is no surprise given that the Media Lab of which this project is a part was co-founded by Nicholas Negroponte, who was also influenced by Papert and worked with Kay and Papert on the One Laptop Per Child project (MIT Media Lab). Kay’s Squeak Smalltalk language formed the backbone of Scratch (Chen, Lifelong 2016b), which seeks to help “young people learn to think creatively, reason systematically, and work collaboratively — essential skills for life in the 21st century.” And it allows all users to program, create, and share their creations. Although anyone can use the platform, educators are encouraged to use it as a learning tool, and resources are provided to help teachers on that front (Lifelong 2016a). Thanks to the internet, this project can go directly to educators and students, rather than proponents having to navigate educational systems as would have been necessary in the 1970s.
Designing for the Networked, Metamedia World
Janet Murray is operating in this context, and encouraging others to think more like Kay did in the 1960s and ’70s. She takes the new form of representation Kay helped create, the metamedium, and lays out principles of design to maximize meaning and user (“interactor”) agency or interactivity. To do this, she says, designers should deconstruct projects into components and then rethink them in terms of the affordances provided by the digital space.
Murray draws on a range of different fields and acknowledges deep historical context, generally operating from a semiotic, Peircean perspective. Like Peirce, Murray is thinking abstractly and trying to build out a general model in a sense. Her model is of the digital design process, and she seeks to extract common, general principles that can be applied regardless of project. The model is a component of the broader meaning-making system described by Peirce.
Like Kay, Murray approaches computing systems as media and not tools. Media, she says, “are aimed at complex cultural communication, in contrast to the instrumental view of computational artifacts as tools for accomplishing a task” (Murray, 8). Stemming from this, she discourages use of the word “user” and the phrase “interface design,” as they are too closely related to tools (Murray, 10–11).
How she would prefer to describe these human-computer processes sounds much like Kay’s vision: “an interactor is engaged in a prolonged give and take with the machine which may be useful, exploratory, enlightening, emotionally moving, entertaining, personal, or impersonal” (Murray 2011, 11). This idea is, in a sense, broader than Kay’s, which was intended to be somewhat narrowly focused on children’s learning processes.
But at base, both attempt to tap into broader human processes of making meaning—the process described by Peirce in which a human forms an interpretant from an object and a sign/medium. Human beings, as members of the symbolic species, are unique in that they operate in the realm of abstractions and generalizations. They can provide computing systems with symbols that those systems can then execute—because humans have drawn on their symbolic capacities to build them that way. And humans can make meaning of the symbols that the systems return. Each new abstraction creates new meaning, building knowledge—which is the process of learning in a general, non-psychological sense.
The job of designers, according to Murray, is to use code to design digital artifacts that meet interactors’ needs and expectations, allowing them to form those new correlations—as Kay sought to do with his original designs. This involves using existing conventions in new ways, to signal certain meaning correlations to users (Murray, 16). Conventions allow humans to recognize patterns amid complexity and noise. Those patterns, or schema in the cognitive science sense, are built from experience (Murray, 17). Users must be able to make meaning and connections out of what they have in front of them; in Peirce’s terms, a system should not be so foreign that it prohibits users from extracting features and forming interpretants based on existing knowledge (Irvine 2016b).
The affordances of the digital medium help designers achieve these aims. An interactive system that is successful will create “a satisfying experience of agency” for the user by matching the digital medium’s procedural and participatory affordances—that is, the programmed behaviors of the system and the expectations of the users (Murray, 12). Kay developed one of the types of languages used to encode those behaviors—object-oriented programming.
Kay’s work also laid the groundwork for participatory affordances. Murray’s description of this topic takes those foundations for granted: “Because the computer is a participatory medium, interactors have an expectation that they will be able to manipulate digital artifacts and make things happen in response to their actions” (Murray, 55). That expectation is possible in part because of Kay’s original vision; this is essentially his “doing with images makes symbols.” Kay, however, sought to take this further, and to transfer more agency to users by allowing them to design their own programs to meet their knowledge-building needs.
There are also spatial and encyclopedic affordances of the digital medium. The former is about visual organization, and it builds off of what Kay initially created with the graphical user interface. This graphical organization involves the abstractions made up of bits of information that have come to signal particular meanings to users of computing systems, such as file folders, icons, and menus. Here too, as when Kay was designing the Dynabook, the focus is on meaning making and tapping into human thought processes: “Good graphic design matches visual elements with the meaning they are meant to convey, avoiding distraction and maximizing meaning” (Murray, 76). In Peirce’s terms, the perceptible signs (designs) correspond to objects, and humans correlate the two to make meaning. Murray argues, harking back to Shannon’s information theory, that designs should minimize noise so the interpreters can make maximum meaning.
The encyclopedic affordance, meanwhile, stems from the vast capacity of computing technology to store information that humans can retrieve and process. This enables cultural progress and collective knowledge building: these technologies can hold vast amounts of information for use over time, allowing many humans, now and in the future, to form interpretants from the same information. Kay anticipated this in his Dynabook conception, discussing the use of personal computers to access digital instances of books, or whole libraries of information, through the LIBLINK (Kay 1972). In 2015, he even wrote about the challenges of ensuring this wealth of externalized memory remains accessible to future generations (Nguyen and Kay). And he was reared in the culture of the Advanced Research Projects Agency (ARPA), which focused on “interactive computing” in a “networked world” (Kay 2004). Yet one area on which Kay spends little time is the dialogic, communal nature of meaning making; he remains focused on the individual experience.
This nature factors centrally into Murray’s thinking. She focuses on meaning making as not just an individual but also a social activity; humans interpret digital media based on both personal and collective experiences. Interaction with digital media, she says, necessarily involves interpretation of artifacts within broader cultural and social systems (Murray, 62). Interactors also use computing technology to access and interact with other people and broader cultural systems (Murray, 11). Drawing on this dialogic nature of the symbolic computing system, Murray calls for using existing media conventions to actively contribute to and develop the collective, or as she puts it, “to expand the scope of human expression” (Murray, 19).
This meshes with Peirce in many ways. According to his semiotic model, meaning is always communal, intersubjective, collective, and dialogic (Irvine 2016a, 2016b). Signs are the ways in which we communicate meanings to others, and those meanings are always made in the context of collective understanding, drawing on existing conventions so others may make their own correlations. Humans can communicate in ways members of their society understand because they can communicate in mutually agreed-upon symbols (Irvine 2016a). Digital technologies offer ways to externalize and share the meanings interactors make from these collective systems.
Still, intersubjectivity does not mean that the same signs lead all humans to make the same correlations. Interpretant formation is necessarily based on context, and each individual interprets a perceptible sign based on their individual experiences and perspectives on conventions, which can lead to the making of various meanings. In this sense, meaning is personal and dynamic. And Murray acknowledges that inventors of digital technologies cannot control the ways in which those artifacts will be interpreted or used:
The invention of a new form of external media augments our capacity for shared attention, potentially increasing knowledge, but also increasing the possibilities for imposing fixed ideas and behavior and for proselytizing for disruptive causes. Media can augment human powers for good or for evil; and they often serve cultural goals that are at cross purposes. (Murray, 40)
On this topic, one point of direct comparison between Murray and Kay relates to music. Murray references the pirating of music over the internet that began in the 1990s, which, among other effects, depressed CD sales. In contrast, Kay wrote in 1972 that “most people are not interested in acting as a source or bootlegger; rather, they like to permute and play with what they own” (Kay 1972). Kay expected that individual users would want a flexible computing device with which they could make their own meanings, but he underestimated the impact of networking and communal meaning processes. These computational artifacts have the power to alter the way humans behave, for ill and not just for good.
Often the deciding factors in this development are out of any individual’s control. Murray puts a fine point on this: “Cultural values and economic imperatives drive the direction of design innovation in ways that we usually take for granted, making some objects the focus of intense design attention while others are ignored altogether” (Murray, 28).
Today, Kay acknowledges this power to a degree. He consistently and fondly remembers his time at Xerox PARC as a somewhat utopian experience of all researchers working together toward a common vision and in the absence of market drivers (Kay 2004). He has struggled to find another place like that. With respect to current artificial intelligence, for instance, he commented that “the market is not demanding something great — and neither are academia or most funders” (alankay1 2016). Still, he persists in trying to control the outcomes and change the way people think.
It is clear that Murray and Kay are moving toward similar ends. Both are attempting to create digital technologies that tap into the human processes of making meaning and building knowledge, and to augment those processes. They argue for meeting users where they are—delivering on expectations and helping with the process of extraction and abstraction. Both also recognize that the new digital space provides new affordances, not least the opportunity to give users greater agency over devices, and requires rethinking how information is presented.
When it comes to broad brush strokes, Murray’s general design process sounds much like the process Kay undertook when thinking up the Dynabook. Murray’s basic recipe for digital design includes: “framing and reframing of design questions in terms of the core human needs served by any new artifact, the assembling of a palette of existing media conventions, and the search for ways to more fully serve the core needs that may lie beyond these existing conventions” (Murray, 19).
Similarly, Murray and Kay are both firmly oriented toward the future. The only time Murray mentions Kay directly in her book, in fact, is on this subject. She quotes him saying, “The best way to predict the future is to invent it” (Murray, 25). And the title of her book, Inventing the Medium, is essentially what Kay did in the 1970s.
Yet, Murray in some ways has a much broader scope than Kay. This is perhaps a counterintuitive thought given Kay’s truly revolutionary vision. Still, she is working in a much more complicated, networked computing environment than Kay was, and her goal is to fit anything that could be designed in the new digital space under the same set of umbrella principles. Her ideas are firmly rooted in broad, societal processes of meaning making, not just in the individual learning process. And she is exceeded in ambition by C. S. Peirce, who sought to produce a model of all meaning processes.
The implications of this are difficult to discern. But a more holistic view such as that taken by Murray could indeed help designers better meet human needs than one focused on individual goals, even if that approach does not impact the flow of history as spectacularly as Kay did. After all, humans are the symbolic species. Making new meaning is inborn and collective. The power of conventions should not be underestimated. And as Murray writes, the computer is a large and complicated “cultural loom, one that can contain the complex, interconnected patterns of a global society, and one that can help us to see multiple alternate interpretations and perspectives on the same information” (Murray, 21). Designing digital technologies today requires keeping the communal possibilities in mind.
Another difference has to do with agency. Murray frequently stresses the need for digital designs to match human meaning-making processes. Kay also stressed the need for computing technology to operate on the user’s level. Yet, Kay was actually trying to drastically change the way humans processed information as part of that symbiosis. With his vision to teach children programming so they could experiment with computer tutors, he was attempting to start a revolution in which humans made meanings with an entirely new set of abstractions that did not evolve organically from human processes but that was created by a small subset of experts. The new sign system did not emerge from the existing behaviors of a collective society but was rather imposed on broader society by a small culture. In a sense, his device was not meeting children on their playing field but was moving them to an entirely new country.
Murray speaks to this point when discussing situated action theory. She writes, referencing anthropologist Lucy Suchman, “Instead of asking the user to conform more fully to the machine model of a generic process, Suchman argues for an embodied and relational model, in which humans and machines are constantly reconstructing their shared understanding of the task at hand” (Murray, 62).
Society, broadly speaking, may now be approaching a point at which computing technologies can meet humans in this way, closer to their natural processes. That is, humans are becoming more accustomed to this new symbolic system—and its power—and technological developments are allowing computing systems to be more adaptable to human processes. This is not the disruptive, futuristic thinking both Kay and Murray call for, but evolution. Perhaps that is what it takes for long-term, deep changes in human behavior and meaning-making processes to happen.
In this vein, Kay has adapted his educational model to reflect developments, achieved and projected, in artificial agent technology. In 2016, he and his co-authors outlined a new plan for “making a computer tutor for children of all ages.” The team wants to leverage innovations in artificial intelligence to develop an interactive tutor that can observe and respond to students’ behaviors, without the student having to program the device’s activities (Ohshima et al. 2016).
Although all meaning making is intersubjective according to Peirce, there is also something to be said for the stress Kay puts on the individual experience. Members who share symbolic systems draw from the same conventions, but their experiences are personal, and the interpretants they form are individualized to some extent. Humans operating in the digital space also now expect to make use of its participatory affordances. In colloquial terms, they want control, and to be able to do their own thing.
Many technologies are now more customizable and responsive to individual desires to create and learn, although most have not reached the point Kay envisioned with his service vision. The aforementioned Scratch and NetLogo are examples. Amazon has opened its Alexa platform to outside developers, so users can build functionality for these devices to serve their own needs as well as commercial ones, and these apps can be shared with other users. Google likewise allows developers to create add-ons that bring new functionality to its apps. To amplify individual and collective meaning-making processes, still more flexibility is perhaps needed on this front.
Virtual and augmented reality technology, meanwhile, could completely change user interfaces once again. Although the technology today is often used just to play games and have fun, it could eventually revolutionize the way humans interact and make meaning with computing technologies.
When it comes to these technologies, Kay is in many ways both the base and the tip of the iceberg. His and others’ ideas drove, and continue to support, the development of what we know today as personal computing, and in formulating that vision, he helped unlock endless possibilities. But as Murray hints and Peirce demonstrates, there is a broader logic at play. Murray tries to tap into those broader meaning-making processes to push digital design forward one step, or giant leap, at a time.
In large part due to the technological access brought about by cheaper and smaller hardware components and the internet, there has, perhaps, not been one computer revolution of the sort Kay outlined but a multitude of smaller revolutions as humans have tried to catch up to the technological advancements. At least the two symbolic processors—the human and the computer—seem to be moving closer together in many ways, even if that change is slower than Kay might like.
In many forums, Kay has given credit to his colleagues at Xerox PARC for their roles in bringing this vision to life. However, for ease of reading, I have used only Kay’s name throughout this text unless he was a co-author on a publication.
alankay1. 2016. “Alan Kay Has Agreed to Do an AMA Today.” Hacker News. Accessed December 7. https://news.ycombinator.com/item?id=11939851.
“Amazon Developer Services.” 2016. Accessed December 18. https://developer.amazon.com/.
Bogost, Ian. 2005. “Procedural Literacy: Problem Solving with Programming, Systems, & Play.” Telemedium (Winter/Spring): 32–36.
Bolter, Jay David, and Richard Grusin. 2000. Remediation: Understanding New Media. Cambridge, MA: The MIT Press.
Bush, Vannevar. 2003. “As We May Think.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 35–47. Cambridge, MA: MIT Press.
Center for New Designs in Learning and Scholarship. 2016. “Active Learning.” Accessed December 18. https://commons.georgetown.edu/teaching/teach/.
Chen, Brian X. 2010. “Apple Rejects Kid-Friendly Programming App.” WIRED. April 20. https://www.wired.com/2010/04/apple-scratch-app/.
Clark, Andy, and David Chalmers. 1998. “The Extended Mind.” Analysis 58, no. 1: 7–19.
“Code.org: Anybody Can Learn.” 2016. Code.org. Accessed December 18. https://code.org/.
Deacon, Terrence W. 1998. The Symbolic Species: The Co-evolution of Language and the Brain. New York: W. W. Norton & Company.
“Develop Add-Ons for Google Sheets, Docs, and Forms | Apps Script.” 2016. Google Developers. Accessed December 18. https://developers.google.com/apps-script/add-ons/.
Donald, Merlin. 2007. “Evolutionary Origins of the Social Brain.” In Social Brain Matters: Stances on the Neurobiology of Social Cognition, edited by Oscar Vilarroya and Francesc Forn i Argimon, 215–222. Amsterdam: Rodopi.
Engelbart, Douglas. 2003. “Augmenting Human Intellect: A Conceptual Framework.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: MIT Press. Originally published in Summary Report AFOSR-3223 under Contract AF 49(638)-1024, SRI Project 3578 for Air Force Office of Scientific Research, Menlo Park, CA: Stanford Research Institute, October 1962.
Gleick, James. 2011. The Information: A History, a Theory, a Flood. New York: Pantheon.
Goldin, Dina, Scott A. Smolka, and Peter Wegner, eds. 2006. Interactive Computation: The New Paradigm. New York: Springer.
Greelish, David. 2016. “An Interview with Computing Pioneer Alan Kay.” Time. Accessed December 5. http://techland.time.com/2013/04/02/an-interview-with-computing-pioneer-alan-kay/.
Irvine, Martin. 2016a. “The Grammar of Meaning Systems: Sign Systems, Symbolic Cognition, and Semiotics.” Unpublished manuscript, accessed December 17. Google Docs file. https://docs.google.com/document/d/1eCZ1oAurTQL2Cd4175Evw-5Ns7c3zCxoxDKLgVE8fyc/.
———. 2016b. “A Student’s Introduction to Peirce’s Semiotics with Applications to Media and Computation.” Unpublished manuscript, accessed December 17. Google Docs file. https://docs.google.com/document/d/1F0mFTLC1HgYIOnzwoNrSa0Re7PVplUfSo_OmqSOMfXc/edit.
Kay, Alan. 2001. “User Interface: A Personal View.” In Multimedia: From Wagner to Virtual Reality, edited by Randall Packer and Ken Jordan, 121–131. New York: W. W. Norton. Originally published in 1989. Available at http://www.vpri.org/pdf/hc_user_interface.pdf.
———. 2003. “Background on How Children Learn.” VPRI Research Note RN-2003-002. Available at http://www.vpri.org/pdf/m2003002_how.pdf.
———. 2004. “The Power of Context.” Remarks upon being awarded the Charles Stark Draper Prize of the National Academy of Engineering, February 24. Available at http://www.vpri.org/pdf/m2004001_power.pdf.
———. 2007. “A Powerful Idea about Ideas.” Filmed March 2007. TED video, 20:37. Accessed December 7, 2016. https://www.ted.com/talks/alan_kay_shares_a_powerful_idea_about_ideas.
Kay, Alan C. 1972. “A Personal Computer for Children of All Ages.” Palo Alto, CA: Xerox Palo Alto Research Center.
———. 1977. “Microelectronics and the Personal Computer.” Scientific American 237, no. 3: 230–244.
Kay, Alan, and Adele Goldberg. 2003. “Personal Dynamic Media.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 393–404. Cambridge, MA: MIT Press. Originally published in Computer 10, no. 3 (March 1977): 31–41.
“Learn a Language for Free.” 2016. Duolingo. Accessed December 18. https://www.duolingo.com/.
Licklider, J. C. R. 1990. “The Computer as a Communication Device.” In Systems Research Center, In Memoriam: J. C. R. Licklider, 21–41. Palo Alto, CA: Digital Equipment Corporation. Originally published in Science and Technology (April 1968).
Lifelong Kindergarten Group at the MIT Media Lab. 2016a. “Scratch – Imagine, Program, Share.” Accessed December 18. https://scratch.mit.edu/.
———. 2016b. “Smalltalk – Scratch Wiki.” Last modified December 13. https://wiki.scratch.mit.edu/wiki/Smalltalk.
Manovich, Lev. 2013. Software Takes Command. New York: Bloomsbury Academic.
Maxwell, John W. 2006. “Tracing the Dynabook: A Study of Technocultural Transformations.” PhD diss., University of British Columbia.
McLuhan, Marshall. 1964. “The Medium Is the Message.” In Understanding Media: The Extensions of Man, 7–21. Cambridge, MA: MIT Press. Available at http://web.mit.edu/allanmc/www/mcluhan.mediummessage.pdf.
MIT Media Lab. 2016. “In Memory: Seymour Papert.” Accessed December 18. https://www.media.mit.edu/people/in-memory/papert.
Murray, Janet H. 2011. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: The MIT Press. http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10520612.
Nguyen, Long Tien, and Alan Kay. 2015. “The Cuneiform Tablets of 2015.” Paper presented at the Onward! Essays track at SPLASH 2015, Pittsburgh, PA, October 25. Available at http://www.vpri.org/pdf/tr2015004_cuneiform.pdf.
“One Laptop per Child.” 2016. Accessed December 18. http://one.laptop.org/.
Ohshima, Yoshiki, Alessandro Warth, Bert Freudenberg, Aran Lunzer, and Alan Kay. 2016. “Towards Making a Computer Tutor for Children of All Ages: A Memo.” In Proceedings of the Programming Experience Workshop (PX) 2016, 21–25. New York: ACM.
Renfrew, Colin. 1999. “Mind and Matter: Cognitive Archaeology and External Symbolic Storage.” In Cognition and Material Culture: The Archaeology of Symbolic Storage, edited by Colin Renfrew, 1–6. Cambridge, UK: McDonald Institute for Archaeological Research.
Sutherland, Ivan. 2003. “Sketchpad: A Man-Machine Graphical Communication System.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 109–126. Cambridge, MA: MIT Press. Originally published in American Federation of Information Processing Societies Conference Proceedings 23: 329–346, Spring Joint Computer Conference, 1963.
Wegner, Peter. 1997. “Why Interaction Is More Powerful Than Algorithms.” Communications of the ACM 40, no. 5: 80–91.
Wing, Jeannette. 2006. “Computational Thinking.” Communications of the ACM 49, no. 3: 33–35.
———. 2009. “Jeannette M. Wing – Computational Thinking and Thinking About Computing.” YouTube video, 1:04:58. Posted by TheIHMC. October 30.