On Wikipedia’s disambiguation page for Interface, you can see many kinds of interface: user interfaces, hardware interfaces, biological interfaces, chemical interfaces, and social interfaces. There is even a kind of place in Northern Ireland called an Interface Area, where “segregated nationalist and unionist residential areas meet”[i], which intuitively reveals a core implication of interfaces: boundaries. I’d like to quote Professor Irvine’s introductory essay: “an interface is anything that connects two (or more) different systems across the boundaries of those systems”[ii].
In our symbol systems, basically anything can serve as an interface as long as it mediates between “social-cultural” systems. Ancient cave paintings are interfaces connecting people to their long-lost myths. The Bible is an interface to a vast system of culture and values. An abacus is an interface to an ancient calculation system. In computing devices, a user interface is the boundary between the user and the computing system.
Interaction is the function of interfaces: information flows through them. However, many interfaces are like one-way streets, allowing information to flow in only one direction. For example, an artwork is an interface to a meaning system, but many artworks only let information flow toward the audience. We have to stand behind a line when appreciating Van Gogh’s Starry Night. Yet artists are creating more and more interactive works, e.g., the pile of candy that visitors were free to take away, mentioned by Amanda in her post several weeks ago[iii].
So when I first read about how pioneers like Kay, Sutherland, Engelbart, and Bush explored improving man-machine interaction through computational interfaces, I was truly amazed. Bush proposed the memex, a model for storing, recording, and retrieving books and records[iv]. Engelbart envisaged a computer network that could augment human intellect. He also invented the mouse, which could point anywhere on a computer screen[v]. Licklider envisioned man-computer symbiosis for more effective man-computer interaction[vi], which Sutherland partially realized with Sketchpad: people could draw on a computer screen with a light pen, and the system could correct their drawings into perfect circles or rectangles, realizing true two-way interaction[vii].
Many of those pioneers’ visions have since been realized as computing power has grown. For example, CAD software and touchscreen devices are all rooted in concepts that Sketchpad proved. Mobile devices like cell phones, Kindles, and iPads closely resemble Kay’s Dynabook. The online file systems Engelbart envisioned, which let multiple users read and edit at the same time, have recently been implemented in tools such as Quip and Google Docs.
However, many paths remain underexplored. Here I will discuss three of them.
Knowledge Navigation System
The memex proposed by Vannevar Bush put forward a personal information-management and “knowledge navigation” system. I was surprised by how much cognitive workload could be offloaded onto this system, even though it was designed seventy years ago, before digitization. Even today, when everyone has their own computer(s) and multiple external hard disks, we haven’t built a highly efficient knowledge navigation system. Wikipedia may come close, but it can’t present your personal knowledge structure. In my view, a true knowledge navigation system should have the following properties:
- Portability. Cloud storage might be a good choice.
- Searchability. You can search any word, image, audio track, or even video and quickly retrieve everything relevant.
- Presentation of your knowledge structure. It could use methods like data visualization or library classification to present your knowledge structure and let you navigate your knowledge landscape both horizontally and vertically. In other words, you can zoom in and out on your “knowledge map” and see your knowledge at different scales, like a Google Maps for knowledge.
- Knowledge discovery. You can use it to discover new knowledge. Google Earth is a good example: when you zoom in on the Pacific Ocean, you see many islands, and depending on the layers you choose, clicking icons distributed on the map reveals various kinds of knowledge, such as the Wikipedia entry for an island, three-dimensional topography on land and undersea, and documentary videos of marine animals shot by the BBC or the Discovery Channel. If you switch to Mars in Google Earth, you can learn, for instance, what chemical and physical factors shaped a strange geological feature. Wikipedia’s hyperlinks are another good mechanism for knowledge discovery. This property is realized through hyperlinks and connections to huge online databases.
- Connection to other people’s knowledge systems. You can share knowledge with other people, navigate knowledge across your social network, and at the same time navigate your social network across the entire human knowledge landscape.
Ray Kurzweil, a futurist at Google, once predicted that within thirty years humans will be able to use nanobots in our brains to connect to the Internet and perform many astonishing functions[viii], such as downloading skill modules on demand, as in The Matrix. It sounds like hype, but it may be one way to realize “knowledge navigation.”
Virtual Personal Assistant
In The Computer as a Communication Device (1968), Licklider mentioned OLIVER (on-line interactive vicarious expediter and responder), proposed by Oliver Selfridge. OLIVER would be “a very important part of each man’s interaction with his online community.” According to Licklider, OLIVER was “a complex of computer programs and data that resides within the network” that could take care of many of your affairs without your personal attention. It could even learn through experience. This path is a classic “intelligence augmentation” method and has been explored in apps like Siri and Microsoft Cortana. Another example is Amy, a virtual assistant that can arrange your schedule using information drawn from your emails via natural language processing[ix].

CCing Amy on an email lets it arrange your schedule according to your time and location.
However, because the underlying algorithms are still primitive, this path has a long way to go.
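A glimpse of why scheduling from email is hard: even the first step, pulling a proposed day and time out of free-form text, is brittle. The sketch below is a crude regex stand-in for the natural language processing an assistant like Amy actually uses (the function name and patterns are my own illustration, not Amy’s API); it handles one fixed phrasing and nothing else, which is exactly the limitation real NLP has to overcome.

```python
# A deliberately crude sketch of meeting-slot extraction from email text.
# Real assistants use much richer NLP; this shows only the simplest case.
import re

DAY = r"(Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday)"
TIME = r"(\d{1,2}(?::\d{2})?\s*(?:am|pm))"


def extract_slot(email_body: str):
    """Return (day, time) if the email proposes one, else None."""
    # Allow up to 20 characters (e.g. " at ") between the day and the time.
    m = re.search(DAY + r".{0,20}?" + TIME, email_body, re.IGNORECASE)
    return (m.group(1), m.group(2)) if m else None


print(extract_slot("Could we meet Thursday at 3pm in the lab?"))
# → ('Thursday', '3pm')
print(extract_slot("Let's find a time soon."))
# → None
```

Ambiguity (“early next week?”), time zones, and counter-proposals are where a regex collapses and learned models become necessary.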
Metamedium
Alan Kay coined the term metamedium for the computer as a medium that can represent other media[ii]. He also envisioned a system whose software would let everyone, including children, program their own software as “creative tools”[x]. This path was exemplified in his Smalltalk project, but it is under-exploited today. Most computer owners, including me, don’t know how to program, and computing devices are mainly consumption devices. As we discussed in this week’s Leading by Design course, open-source software that frees us from locked-in systems like Microsoft Windows and OS X may be one way to realize Kay’s vision. Another way I can think of is to teach children to program with engaging tools such as LEGO and Minecraft, which might also be a commercially plausible approach.
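The “low floor” Kay aimed for can be suggested with a toy: a Logo-style interpreter where three-word commands move a pen around a grid. This is a hypothetical illustration of mine, not Smalltalk or any real teaching tool, but it shows how little machinery a child-friendly creative tool needs.

```python
# A toy Logo-style interpreter: a child writes "right 3" / "up 2" and the
# pen moves on a grid. Hypothetical example, not a real teaching tool.
def run(program: str):
    """Interpret 'up/down/left/right N' commands; return final (x, y)."""
    x = y = 0
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    for line in program.strip().splitlines():
        cmd, n = line.split()
        dx, dy = moves[cmd]
        x, y = x + dx * int(n), y + dy * int(n)
    return (x, y)


print(run("right 3\nup 2\nleft 1"))
# → (2, 2)
```

Tools like Scratch and Minecraft’s redstone follow the same principle: a tiny command vocabulary with immediately visible effects.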
References
[i] “Interface Area.” 2016. Wikipedia. https://en.wikipedia.org/w/index.php?title=Interface_area&oldid=738749718.
[ii] Irvine, Martin. n.d. “Introduction to Affordances and Interfaces: Semiotic Foundations.”
[iii] Amanda Morris. 2016. “Using the Piercian Model to Decode Artwork – Amanda | CCTP711: Semiotics and Cognitive Technology.” Accessed November 3. https://blogs.commons.georgetown.edu/cctp-711-fall2016/2016/09/28/using-the-piercian-model-to-decode-artwork-amanda/.
[iv] Bush, Vannevar. 1945. “As We May Think.” The Atlantic, July.
[v] Engelbart, D. C., and Michael Friedewald. n.d. Augmenting Human Intellect: A Conceptual Framework. [Fremont, CA: Bootstrap Alliance], 1997.
[vi] Licklider, J. C. R. 1960. “Man-Computer Symbiosis.” IRE Transactions on Human Factors in Electronics HFE-1 (1): 4–11. doi:10.1109/THFE2.1960.4503259.
[vii] Sutherland, Ivan. 1963. “Sketchpad: A Man-Machine Graphical Communication System.”
[viii] Kurzweil, Ray, and Kathleen Miles. 2015. “Nanobots In Our Brains Will Make Us Godlike.” New Perspectives Quarterly 32 (4): 24–29. doi:10.1111/npqu.12005.
[ix] “Testing Amy: What It’s like to Have Appointments Scheduled by an AI Assistant.” 2015. GeekWire. December 15. http://www.geekwire.com/2015/testing-amy-what-its-like-to-have-appointments-scheduled-by-an-ai-assistant/.
[x] Manovich, Lev. 2013. Software Takes Command. International Texts in Critical Media Aesthetics, volume 5. New York; London: Bloomsbury.