There is a lot to say about how computing got to the metamedia stage, and about where it could be going next. Lev Manovich puts the focus on software as unlocking computing’s potential as metamedia. But humans, of course, wouldn’t be able to build computing systems or interact with them if they were not members of a symbolic species who can make new meaning from abstractions. Not only can we make meaning from symbols on a screen, but we can simultaneously make meaning from other symbols, from sounds to videos to facial expressions. The hybridization of multiple mediums would be, well, meaningless without those capabilities.
Working from that foundation, electrical engineers, mathematicians, computer scientists, and others moved from massive mechanical artifacts that took inputs and produced outputs to electricity-powered interactive computing systems that automatically feed outputs back into the system to produce new meanings. With the advent of high-level programming languages that humans can read and understand relatively easily, the process of writing programs for computers to execute became more efficient. Software for computing systems proliferated, allowing humans to offload some of their cognitive burden onto the machines.
Most notably, Bush, Sutherland, Licklider, Engelbart, and Kay advanced computer design by putting forth plans for interfaces and human-computer interaction that would support and augment human intellectual capabilities. In particular, Kay sought to establish the PC as a tool for learning. His vision was significant because it gave users, even children, the ability to manipulate programs to solve unexpected problems and develop new ideas and processes.
While Kay’s vision seemed clear, it is striking that our two mainstream commercial operating systems (Mac and Windows) remain closed to manipulation by ordinary users. Some software can be modified, but doing so requires programming knowledge that isn’t universally taught. Apps, and the relative ease with which they can be developed, are perhaps the closest current manifestations of Kay’s vision.
Though Kay’s learning concept was never standardized, each new kernel of information about the Dynabook and his other ideas made it clearer that he in many ways wrote the blueprint that developers would follow for decades. Many of his concepts have since been attempted or standardized: text editing, the mouse, graphical user interfaces, screen windowing, pressure-sensitive keyboards, and synthesizers that connect to PCs to make music.
A particularly transformative concept was Kay’s vision of personal dynamic media, which was designed to “hold all the user’s information, simulate all types of media within a single machine, and ‘involve the learner in a two-way conversation’” (Manovich 61). This could be viewed as an early description of various AI technologies available today, such as Amazon Echo or IBM’s Watson. Yet, as Manovich explains, it also applies more generally to the interactions with software that would come to transform the way we understand media.
Meanwhile, Sutherland with his Sketchpad prototype emphasized the need to interact with data in different dimensions. The division of his screen into four separate quadrants could be viewed as an early predecessor to the concept of hypermediacy. Engelbart’s concept of view control, which allowed users to switch between different views of data, shows the importance that he placed on the concept of user perspective and indicates his understanding of how to “layer” mediums.
However, Kay’s development of the graphical user interface, which provided a “desktop” on which different programs could be displayed and layered, is something that we truly take for granted when using modern computing devices. For instance, both Rebecca and Becky have many programs running simultaneously to process text, listen to music, send texts, manage emails, navigate multiple webpages, and more. We can toggle between the various windows and tabs with easy keyboard shortcuts and little thought, thanks to Kay’s design concepts.
Yet, both Rs independently arrived at the same idea for tweaking this concept: flattening the layered interface system. In a sense, Microsoft’s OneNote and Google Docs are headed in this direction. This could go further, for instance, by including a web browser as part of the word processing interface, so that users no longer have to switch between windows but instead have everything contained in the workspace in which they are operating. (Word has some internet search functionality, but the integration doesn’t go as far as we have in mind.) Eventually, all media software could be combined into one layer. This might be awkward to achieve, given current hardware limitations and the drive to make devices smaller, but it is not impossible. It could work well with larger fields of view, such as a virtual or augmented reality computing system that is not limited by display size. The goal would not be simply to play music in iTunes, edit movie clips in iMovie, or draft documents in Word and then put them all together. Rather, the point would be to let someone use these various forms of media in one software platform and thereby access them in a more integrated way.
These readings brought up a number of additional ideas for both of us, but for ease of reading, we’ll keep them brief here and discuss them in person. A common theme among Kay’s and others’ ideas was the development of interfaces better adapted to the human body: the chair with the keyboard built into it, for instance. This category also includes eliminating the keyboard altogether and using only voice or graphical input to interact with the computing system. This is an area that has not been fully explored but that would be of great benefit to all those with stiff necks and more. Innovation, and history in general, also seems to flow in rough cycles: innovation, then consolidation and refinement, then innovation again. It seems as if we might be ready for that next age of innovation in the computational world.
“Alan Kay — Doing with Images Makes Symbols.” Filmed 1987. YouTube video, 48:20. Posted by John DeNero, November 12, 2013. https://www.youtube.com/watch?v=kzDpfk8YhlE.
Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: The MIT Press, 2000.
Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: MIT Press, 2003.
Kay, Alan C. “A Personal Computer for Children of All Ages.” Palo Alto, CA: Xerox Palo Alto Research Center, 1972.
Kay, Alan, and Adele Goldberg. “Personal Dynamic Media.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 393–404. Cambridge, MA: MIT Press, 2003.
Licklider, J. C. R. “The Computer as Communication Device.” In Systems Research Center, In Memoriam: J. C. R. Licklider, 21–41. Palo Alto, CA: Digital Equipment Corporation, 1990.
Manovich, Lev. Software Takes Command. New York: Bloomsbury Academic, 2013.
Sutherland, Ivan. “Sketchpad: A Man-Machine Graphical Communication System.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 109–126. Cambridge, MA: MIT Press, 2003.