Humans in Control: Thoughts on the Past, Present and Future of HCI

Zach Schalk

In 1968, the world got a glimpse of the future. A young, nervous-sounding researcher sat in front of a computer workstation: a small visual monitor above a desktop, a typewriter keyboard in the middle, a smaller keypad with light-up buttons to the left and, to the right, a clunky black box the user could move to point at and interact with specific objects on the visual display. Presenting live before an audience of computer professionals at the Fall Joint Computer Conference in San Francisco, with a series of cameras recording the event, the researcher demonstrated the fruit of nearly a decade’s work by the Augmentation Research Center at the Stanford Research Institute in Menlo Park, California. The project being unveiled, NLS (the oN-Line System), was a groundbreaking computer system that introduced for the first time many of the computer interface features we are familiar with today: the point-and-click mouse (that clunky black box), a functional hypertext system dynamically linking files to one another within the computer’s memory, the ability to collaborate with other individuals on the same screen via networked computers and much more.

Douglas Engelbart was that young computer scientist, and the presentation is known as “The Mother of All Demos.” Despite its groundbreaking nature, “The Mother of All Demos” is largely forgotten outside the realm of computer science, and Douglas Engelbart is far from a household name. NLS was never commercialized, and by the time general consumers experienced many of the features developed for it, the entire paradigm of computing had shifted—from large, expensive computers shared by several users and available only at large institutions such as universities or corporations, to smaller (and less powerful) personal computers. However, the work completed at the Augmentation Research Center is undoubtedly some of the most influential research ever completed in the field of Human-Computer Interaction (HCI). Several of the researchers from the project would go on to develop the foundation of the personal computer paradigm at Xerox PARC, the influential Palo Alto research center, thus heavily influencing the personal computing revolution that began in the early 1980s (Grudin 2008). And Engelbart’s research agenda of using computers to augment human intelligence is still very much alive and well in research labs around the world today.

HCI Technology Timeline

This timeline shows the progression of work on foundational aspects of HCI. Source: Brad Myers, “A Brief History of HCI Technologies,” p. 46

In this essay, I aim to examine some of the foundational works in the HCI field. According to Jonathan Grudin, former Editor-in-Chief of ACM Transactions on Computer-Human Interaction and one of the leading voices in the field, HCI includes research in four main areas: human factors, information systems, computer science and library & information science (Grudin 2012). A comprehensive survey of HCI—including hardware development, input/output technologies, the software that brings it all together and the resulting new media produced by an ever-expanding pool of computer users—is impossible in a paper of this scope. This essay also leaves out a discussion of many of the early theorists responsible for developing the general-purpose computer—including Alan Turing, John von Neumann and others—and the information theorists whose work laid the foundation for the digitization of information—most notably Claude Shannon (although Shannon’s fingerprints can be found in this paper, as he advised Ivan Sutherland’s work on Sketchpad, which will be discussed at length). Instead, I will focus on the work of a select few of the field’s most influential thinkers, using their research as a lens through which to explore the early development of HCI in the first few decades following the invention of the digital computer. After reviewing this history, I will evaluate the field’s successes in reaching the goals set by these visionary thinkers. Finally, I will examine some current and emerging research trends that might fulfill some of the as-yet-unachieved goals, with the hope of maximizing consumer benefits as computers become increasingly pervasive throughout society.

The Early Days of HCI

Programming the ENIAC

In this image from the early days of computers, two operators work with the ENIAC. Source: http://www.columbia.edu/cu/computinghistory/eniac.html

Today, computers of nearly all shapes and sizes surround us. The average CCT student likely rarely spends a waking moment beyond arm’s reach of some sort of computational device. And with the realities of Moore’s Law—which stipulates that the number of transistors on an integrated circuit will double roughly every two years, but which can also be used to describe patterns of decreasing cost, size and availability of many technologies developed in the computer era (Grudin 2012)—our interaction with and uses of computers will only continue to grow in the coming years.
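
To make that pace concrete, consider a minimal sketch in Python. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) is an illustrative assumption, not a figure from this essay’s sources:

```python
# Moore's Law as arithmetic: transistor counts doubling every two years.
# The 1971 Intel 4004 baseline (~2,300 transistors) is an illustrative
# assumption used only to show how quickly the doubling compounds.
BASELINE_YEAR = 1971
BASELINE_TRANSISTORS = 2_300

def projected_transistors(year: int) -> float:
    """Project a transistor count under a strict two-year doubling schedule."""
    doublings = (year - BASELINE_YEAR) / 2
    return BASELINE_TRANSISTORS * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Under that schedule the count grows by a factor of 32 every decade, which is why a machine that once filled a room now fits in a pocket.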

At the dawn of the computer age, the universal machines that would come to dominate our lives were nearly unrecognizable by today’s standards, and by necessity humans interacted with these so-called “giant brains” in very different ways (Grudin 2012). They filled entire rooms, consumed enough electricity to power a small town and did little more than complete large computations. The vacuum tubes used by early computers were unreliable and expensive. In contrast to our current image of the lone programmer hacking away at a laptop to create the next great computer application, early computers required a team of practitioners to function. The programmers actually had very little interaction with the machines themselves. Instead, they worked in the abstraction of machine language, creating the punched cards or tape on which programs were recorded for input and decoding the printed results after the computer did its work. The grunt work of physically interacting with the computers was left to the operators:

Computer operators loaded and unloaded cards and magnetic or paper tapes, set switches, pushed buttons, read lights, loaded and burst printer paper, and put printouts into distribution bins. Operators interacted directly with the system via a teletype: Typed commands interleaved with computer responses and status messages were printed on paper that scrolled up one line at a time. (Grudin 2012)

The need for an improved manner of interface between human and computer was obvious. When the transition from vacuum-tube to solid-state, transistor-based computing made computers cheaper and more reliable, it also opened up new possibilities for human interaction. The machines no longer demanded a team of engineers just to function. While this was a great advance in the ease of computing, it only put more emphasis on the need for better systems to ease the pains of interaction so that computers could be useful to less savvy operators (Grudin 2012). Luckily, early visionaries in the HCI field had already started thinking about this problem.

When telling the history of HCI, Vannevar Bush and his vision of the Memex are a very good place to start. Bush was one of the most influential engineers of his day. An MIT professor, science advisor to Presidents Franklin Roosevelt and Truman, and director of the Office of Scientific Research and Development, Bush played a major role in the secret wartime development of the computer (Grudin 2012). Beyond his work on the practical development of the digital computer, Bush’s most lasting impact came in the form of an essay published by The Atlantic in 1945 called “As We May Think.” His primary concern was a problem to which we can all relate: how can anyone possibly stay informed amid the flood of relevant information being produced around the world? He believed that this problem could be solved in the computer age. In the essay, Bush describes his vision of a futuristic personal workstation—the Memex—that would allow trained professionals to better organize, catalogue, recall and share their work via an indexed and linked system of microfilm (Bush). While the essay never used the term, the Memex is widely considered to be the first described hypertext system (Baecker; Myers).

The system would allow workers to visually record their work, to interact with the machine via natural spoken language and physical keyboard, to easily recall any relevant memory and to share it if need be:

Consider a future device for individual use, which is a sort of mechanized private file and library…A Memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory. (Bush)

Ironically, Bush never succeeded in bringing this machine to life (though it is debatable whether that was ever really his intention), in large part because of unrealistic specifications that could have been addressed had he been aware of contemporary work in the field of library science:

Bush’s machines failed because he set overly ambitious compression and speed goals, ignored patent ownership issues, and most relevant to our account, was unaware of what librarians and documentalists had learned through decades of work on classification systems…Had he worked with them, Bush, an electrical engineer by training, might have avoided the fatal assumption that small sets of useful indexing terms could easily be defined and agreed upon. (Grudin 2012)

In short, the Memex was a victim of the exact problem the system was designed to solve.

The Memex

A conceptual drawing of the Memex. Source: http://www.wired.com/wiredenterprise/2012/12/social-media-history/#slideid-36962

While the Memex never existed as a functioning product, it had an enormous impact on the field of HCI as one of the first public visions of a computer system that could be useful as more than just a giant calculator. Bush’s vision influenced an entire generation of researchers who would go on to lay the foundation of HCI as a field of study. However, it would take nearly 20 years for researchers to bring many of his ideas—the functional workstation, hypertext system, a visual display, etc.—to life.

J.C.R. Licklider was another figure of great importance in the early days of HCI. A psychologist by training who worked as a faculty member at Harvard and MIT, Licklider brought a unique perspective to the early study of computing, a field dominated by engineers and mathematicians. After leaving MIT, Licklider would go on to support some of the most influential projects in early computing, first at Bolt, Beranek and Newman (BBN) and then as director of the Information Processing Techniques Office (IPTO) of the Department of Defense Advanced Research Projects Agency (ARPA, later DARPA) in the early 1960s, where his funding and advocacy laid the groundwork for the Internet’s forerunner, the ARPANET (Grudin 2012). While Licklider’s influence across the field of computer science can hardly be overstated, for the purposes of this essay his most relevant work is a paper published in 1960 called “Man-Computer Symbiosis.”

In his famous essay, Licklider outlines a research agenda that would drive a generation of HCI advances and that still resonates today. His goal was to effectively leverage the power of computers so as to augment human intellect in a manner that would open up new and creative avenues of problem solving:

One of the main aims of man-computer symbiosis is to bring the computing machine effectively into the formulative parts of technical problems…The other main aim is closely related. It is to bring computing machines effectively into processes of thinking that must go on in ‘real time,’ time that moves too fast to permit using computers in conventional ways. (Licklider)

He envisioned the computer as an error checker that could help efficiently guide a human through the problem-solving process, offloading the clerical or mechanical tasks that can distract the human thinker in order to free up the creative capacities of the human brain (Licklider). He clearly outlines why he considered a symbiosis between man and computer desirable:

…men are noisy, narrow-band devices, but their nervous systems have very many parallel and simultaneously active channels. Relative to men, computing machines are very fast and very accurate, but they are constrained to perform only one or a few elementary operations at a time. Men are flexible, capable of ‘programming themselves contingently’ on the basis of newly received information. Computing machines are single-minded, constrained by their ‘pre-programming.’ Men naturally speak redundant languages organized around unitary objects and coherent actions and employing 20 to 60 elementary symbols. Computers ‘naturally’ speak nonredundant languages, usually with only two elementary symbols and no inherent appreciation either of unitary objects or of coherent actions…Computing machines can do readily, well, and rapidly many things that are difficult or impossible for man, and men can do readily and well, though not rapidly, many things that are difficult or impossible for computers. That suggests that a symbiotic cooperation, if successful in integrating the positive characteristics of men and computers, would be of great value. (Licklider)

In order to attain this ambitious vision of symbiosis, Licklider outlined several areas in need of advancement over the contemporary status quo: improved processing speed; improved memory hardware and organization; solutions to the language problems limiting communication between humans and computers; and easier-to-use input and output equipment (Licklider). Researchers were ready to attack all of these problem areas, setting the stage for rapid advances in HCI during the 1960s and 1970s.

HCI Matures: Computer Graphics, GUI and the PC

The two decades that followed Licklider’s “Man-Computer Symbiosis” were an exciting time for advances in HCI. The computer, once little more than a room filled with vacuum tubes and flashing lights, had finally matured enough for researchers to begin working on practical applications that might bring Turing’s universal machine into everyday life. A new generation of researchers began to take the first practical steps toward realizing the visions of Bush and Licklider. Researchers like Ivan Sutherland, Douglas Engelbart and Alan Kay would forever change the way humans interacted with computers.

In 1963, Ivan Sutherland was a Ph.D. student developing his thesis at MIT’s Lincoln Laboratory. His research, supported by the U.S. Air Force and the National Science Foundation (NSF), resulted in the program Sketchpad, which Sutherland called “a man-machine graphical communication system” (Myers; Sutherland 1963). Sketchpad introduced early computer graphics and the direct manipulation interface, in which a pointing device is used to interact with visible objects on the computer screen (Myers). In the introduction to the thesis describing Sketchpad, Sutherland outlines his reasoning for why such a program was needed:

The Sketchpad system makes it possible for a man and a computer to converse rapidly through the medium of line drawings. Heretofore, most interaction between men and computers has been slowed down by the need to reduce all communication to written statements that can be typed; in the past, we have been writing letters to rather than conferring with our computers…The Sketchpad system, by eliminating typed statements…opens up a new area of man-machine communication. (Sutherland 1963)

While the system was difficult to use and never widely adopted (nor was it meant to be, as the system was specialized for the TX-2 machine made available to Sutherland at Lincoln Laboratory), Sketchpad had a sweeping impact on the field. Jonathan Grudin argues that Sutherland’s thesis “may be the most influential document in the history of HCI, launching computer graphics, taking influential steps to make computers ‘more approachable,’ and frankly describing the program’s successes and failures for the benefit of other researchers.” (Grudin 2012)
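
Sketchpad itself ran on the TX-2 with a light pen, but the essence of direct manipulation (point at a visible object, then move the object itself rather than typing a command about it) can be suggested in a short modern sketch. The following Python is a minimal, hypothetical illustration of the general technique, not a reconstruction of Sutherland’s program:

```python
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class Shape:
    """A visible object with a position and size on the display."""
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        """Hit test: does the pointer fall inside this shape?"""
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def pick(shapes: list[Shape], px: float, py: float) -> Shape | None:
    """Return the shape under the pointer, if any."""
    for shape in shapes:
        if shape.contains(px, py):
            return shape
    return None

def drag(shape: Shape, dx: float, dy: float) -> None:
    """Direct manipulation: move the selected object itself
    instead of issuing a typed command about it."""
    shape.x += dx
    shape.y += dy

# Point at a rectangle and drag it 10 units to the right.
scene = [Shape("rect", 0, 0, 50, 30)]
selected = pick(scene, 25, 15)
if selected:
    drag(selected, 10, 0)
    print(selected)  # Shape(name='rect', x=10, y=0, w=50, h=30)
```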

Sutherland’s influence on HCI didn’t end with his contribution of Sketchpad. In fact, he was just getting started. When J.C.R. Licklider left ARPA’s IPTO in 1964 to return to MIT, Sutherland was selected as his replacement (Myers). While at the IPTO, Sutherland furthered his research into computer graphics, laying the groundwork for future research into virtual and augmented reality. His influential 1965 essay “The Ultimate Display” described the contemporary limitations of input and output systems while also outlining the desirability of a hypothetical display that could take full advantage of the affordances offered by the computer medium: “A display connected to a digital computer gives us a chance to gain familiarity with concepts not realizable in the physical world. It is a looking glass into a mathematical wonderland.” (Sutherland 1965)

He proposes several features of what he calls a “kinesthetic display,” which would enrich the immersive possibilities of the computer display by encompassing more natural senses (for instance digitizing smell and tactile feedback in addition to sending audio and visual signals) along with gestural and eye-tracking interfaces to help make human interaction with computational space more natural. He ends the essay with a stunning vision of what can be possible in a total computational environment:

The ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming such a display could literally be the Wonderland into which Alice walked. (Sutherland 1965)

Sutherland would later explore some of these possibilities more practically as a researcher at Harvard University, where he created what is widely considered to be the first virtual reality/augmented reality system, a head-mounted display called the Sword of Damocles (McCracken).

Sword of Damocles

A demonstration of the Sword of Damocles head-mounted display system. Source: http://techland.time.com/2013/04/12/a-talk-with-computer-graphics-pioneer-ivan-sutherland/

While at IPTO, Sutherland also played a direct role in securing funding for Douglas Engelbart at the Stanford Research Institute (Grudin 2012). In 1962, Engelbart published his seminal paper “Augmenting Human Intellect: A Conceptual Framework,” in which he outlined the research agenda that would take practical steps to bring the concepts and aspirations expressed in Licklider’s “Man-Computer Symbiosis” to life. Eventually, his work would lead to the NLS system described at the beginning of this essay. In the paper, Engelbart fully embraces the ambition of his project: “By ‘augmenting human intellect’ we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.” (Engelbart) Engelbart acknowledges that his work follows in the footsteps of Bush’s Memex and seeks to improve upon its offerings while also bringing such a system to life. As mentioned earlier, the NLS built to meet Engelbart’s specifications included the first instance of functional hypertext “support[ing] creativity and problem-solving in teams” (Baecker); the first use of a mouse as a cheap input device to replace the more expensive and difficult-to-use light pen utilized in other systems such as Sketchpad; the first instance of windows to integrate text, graphics and video content on a single display; the first instance of networking computers for collaborative use by multiple operators across space; the first instance of a word processing application (including basic functions such as cut, copy and paste), laying the groundwork for all future work in this area; the first instance of “view control,” switching between multiple representations of the same data at the user’s command; and more (Myers; Grudin 2012; Manovich 72-75). More symbolically, Engelbart’s research marked a migration of important HCI research from its traditional home on the East Coast (in laboratories at MIT, Harvard, BBN and elsewhere) to the West Coast, which remains the center of computer technology development today (Grudin 2012).
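
NLS’s hypertext was a full structured-document system, far richer than any toy can convey, but the kernel of the idea (units of text holding named, traversable links to one another in memory) fits in a few lines. The following Python sketch is purely illustrative, with hypothetical names, and is not how NLS was built:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A unit of text that can link to other units: the kernel of hypertext."""
    title: str
    text: str
    links: dict = field(default_factory=dict)

    def link(self, label: str, target: "Node") -> None:
        """Create a named, traversable connection to another node."""
        self.links[label] = target

    def follow(self, label: str) -> "Node":
        """Jump across a link, the basic act of hypertext navigation."""
        return self.links[label]

# Hypothetical example: two research notes cross-linked in memory.
memo = Node("memo", "Augmenting human intellect...")
spec = Node("spec", "Display requirements...")
memo.link("see requirements", spec)
spec.link("background", memo)

print(memo.follow("see requirements").title)  # spec
```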

Like Bush’s Memex, which never came to fruition for various predictable reasons, Engelbart’s NLS soon faded into obscurity after the 1968 “Mother of All Demos.” While Engelbart was concerned with human factors in design, he was more interested in improving efficiency for skilled users than in systems designed for general use—a distinction that ultimately cost him funding: “Use of Engelbart’s systems required training. He felt that people should be willing to tackle a difficult interface if it delivered great power once mastered…His demonstration became something of a success disaster: DARPA was impressed and installed NLS, but found it too difficult.” (Grudin 2012) It would take another visionary researcher, years of development and yet another paradigm shift in computing technology before the general user could enjoy the benefits of a system that encompassed Engelbart’s vision of augmented intelligence.

In 1969, while Engelbart’s work at the Stanford Research Institute was winding down, Alan Kay was finishing his Ph.D. thesis at the University of Utah. Working with Ivan Sutherland, who had helped found the influential computer graphics department at the University of Utah in 1968 following his stints at the IPTO and Harvard University, Kay proposed the idea of overlapping windows as a way to manage different visual elements and applications on a computer display—furthering the ideas that Engelbart had pioneered with the NLS (Myers). After completing his degree, Kay moved on to the newly established Xerox PARC, where he directed the Learning Research Group. Over the next decade, the Learning Research Group pioneered further advances in the graphical user interfaces (GUIs) that Engelbart and Sutherland had made use of in the NLS and Sketchpad, refined the mouse as a practical input device and defined other key features of the personal computer paradigm (Manovich 57). In early 1973, these ideas came together in one of the first personal computers: the Xerox Alto (Myers). While not a terribly impressive machine, and never really marketed as a consumer product, the Alto was a big step in the direction of personal computing.
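
Overlapping windows are now so familiar that it is easy to forget they solve a concrete bookkeeping problem: deciding which window owns a click and in what order windows are drawn. A minimal sketch of that logic, with hypothetical names and no actual drawing, might look like this:

```python
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class Window:
    """An on-screen window occupying a rectangle of the display."""
    title: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

class Desktop:
    """Overlapping windows kept in back-to-front (z) order."""

    def __init__(self) -> None:
        self.z_order: list[Window] = []  # last element is frontmost

    def open(self, window: Window) -> None:
        self.z_order.append(window)

    def click(self, px: int, py: int) -> Window | None:
        """Route a click to the frontmost window under the pointer
        and raise it to the top, as overlapping-window GUIs do."""
        for window in reversed(self.z_order):
            if window.contains(px, py):
                self.z_order.remove(window)
                self.z_order.append(window)
                return window
        return None

desk = Desktop()
desk.open(Window("editor", 0, 0, 100, 80))
desk.open(Window("paint", 50, 40, 100, 80))  # overlaps the editor
print(desk.click(60, 50).title)  # "paint": the frontmost window wins the overlap
```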

Alan Kay's DynaBook

A conceptual drawing of Alan Kay’s DynaBook. Source: http://www9.georgetown.edu/faculty/irvinem/theory/Kay-Dynabook-OriginalPaper-PARC-1972.pdf

Kay’s most important contribution to the field of HCI would come a few years later, when he unveiled his vision of the DynaBook in an essay titled “A Personal Computer for Children of All Ages.” In the essay, Kay sets out to describe a tool that can augment and enhance learning for children—and ultimately be used for the benefit of everyone. He imagines a medium that is active, attention-grabbing and flexibly controlled by the user: “It can be like a piano: (a product of technology, yes), but one which can be a tool, a toy, a medium of expression, a source of unending pleasure and delight…and, as with most gadgets in unenlightened hands, a terrible drudge!” (Kay) The essay is filled with lighthearted asides and fanciful descriptions, giving it a unique charm, but it also portrays a powerful device that fits many of the conventions of personal computing we are all familiar with today.

As envisioned, the DynaBook is a portable, flexible and powerful personal computer that could be used for many different purposes—for consuming and producing multimedia; recording information via a keyboard, stylus or voice recognition; “what you see is what you get” (WYSIWYG) document editing; sharing information via removable physical memory or by connecting to the burgeoning networks in development at the time; and much more (Kay; Manovich 57). It easily handles all needed computation, allows its user to create rich media content and allows for the easy storage, recall and sharing of information. He ends the essay with various technical specifications and extrapolations from contemporary components, showing the seriousness of his intentions with the device and his belief that it could become a real consumer product (Kay).

In the book Remediation: Understanding New Media, authors Jay David Bolter and Richard Grusin describe the concept of remediation as the way in which one form of media can be represented in another form of media (Bolter and Grusin). This concept accurately describes what Kay and his colleagues created at Xerox PARC. Essentially, Kay’s group set out to create a “metamedium” that could represent all forms of existing media on one display (Manovich 65). While the new computational medium they created had many unique properties not found in pre-existing forms of media (unparalleled flexibility, dynamic linking between and within content, etc.), the beauty of the graphical user interface as conceived at Xerox PARC was that it represented familiar concepts on the computer display. Users could locate and interact with familiar visual icons that represented the programs they wished to use. Users didn’t have to understand the underlying computations to create and edit a text document or to make a painting. In short, Kay and his team were seeking to fulfill the promise of Sutherland’s Sketchpad by creating a graphical communication system between humans and computers that could be easily interpreted by the average user.

HCI Achievements and Unmet Goals

The history of HCI can largely be seen as a series of successes: theorists outline research agendas that open up new avenues for computer interaction; researchers achieve these goals and more; and consumers benefit. For instance, the Memex envisioned by Vannevar Bush, while undoubtedly revolutionary for its time, is surpassed in functionality and power by the laptop on which I am currently writing this essay. The ideas J.C.R. Licklider outlined in “Man-Computer Symbiosis” have largely been realized, enabled in part by exponential improvements in the processing speed, memory organization and memory capacity he highlighted as key roadblocks to fulfilling his vision. In fact, in many applications from word processing to CAD, the relationship between humans and computers is much as Licklider envisioned it should be half a century ago: humans supply creative inputs, computers error-check and the resulting work is better than it would have been had either acted independently. In theory, I should have no misspelled words in this essay thanks to Microsoft Word’s built-in spell checking. While Doug Engelbart’s research agenda was more detailed and his aims more ambitious, much the same can be said of his concepts of computer-augmented intellect. While we certainly still have much room to improve the efficiency and efficacy of our interactions with computers, compared to the era in which these visionaries worked, computing has vastly improved both the work and play environments for humans. Similar success can be claimed for Ivan Sutherland and Alan Kay. The proliferation of computer graphics and the GUI has forever changed the manner in which humans interact with computers. Direct manipulation systems have been pervasive since computers became a mainstream consumer product. And while the form factors of computers have continued to shift with the advent of mobile phones and tablets, in many ways we still live in the computing paradigm created by the vision of Alan Kay and his colleagues at Xerox PARC.
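
The spell-checking example is a neat miniature of Licklider’s division of labor: the machine performs the fast, mechanical check, and the human supplies the judgment. A toy sketch in Python (the tiny dictionary is purely illustrative):

```python
# A toy illustration of Licklider's division of labor in word processing:
# the computer mechanically flags suspect words; the human decides what,
# if anything, to change. The dictionary here is deliberately tiny.
DICTIONARY = {"humans", "supply", "creative", "inputs", "computers", "error", "check"}

def flag_suspect_words(text: str) -> list[str]:
    """Mechanical pass: return words not found in the dictionary."""
    return [
        word.strip(".,;:")
        for word in text.lower().split()
        if word.strip(".,;:") not in DICTIONARY
    ]

draft = "Humans supply createve inputs, computers error check."
for suspect in flag_suspect_words(draft):
    print(f"Possible misspelling: {suspect!r}")  # flags 'createve'
```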

Despite all this, there are still interesting research questions from these early HCI visionaries that remain unanswered today—or, if not unanswered, then answered in ways that clearly leave room for improvement. Changing technology leads to changing research questions (for instance, while the initial need for more operator-friendly interfaces is still relevant to HCI today, researchers no longer need concern themselves with punched cards or lever inputs), and many researchers have already moved on to questions unimagined by pioneers like Bush, Sutherland and Kay. Yet interesting unresolved questions remain that have great potential to benefit all computer users. In the past decade, some researchers have proposed using the concept of distributed cognition (“extend[ing] the reach of what is considered cognitive beyond the individual to encompass interactions between people and with resources and materials in the environment” (Hollan, Hutchins and Kirsh 175)) as a new theoretical framework from which to approach the research agenda introduced by pioneers such as Bush, Licklider and Engelbart. Proponents of this theory believe that the perspective of distributed cognition helps to break from the HCI paradigm that accentuated individual cognition (exemplified by the PC) and to allow for the creation of new, human-centered systems as we enter the age of ubiquitous computing (Hollan, Hutchins and Kirsh 175). In the field of artificial intelligence research, companies such as Google and Facebook are experimenting with an approach called “deep learning,” which uses layered artificial neural networks, loosely modeled on the brain, to better organize the flood of information available in today’s computational environments (Simonite). These and other emerging trends offer interesting possibilities for new improvements to the intellect augmentation research agenda.
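
For readers unfamiliar with the term, the “deep” in deep learning refers to stacking many simple layers of artificial neurons. A minimal sketch of one such layer follows; the random weights are placeholders (real systems learn theirs from data), and nothing here reflects Google’s or Facebook’s actual systems:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """One neural-network layer: a weighted sum followed by a nonlinearity (ReLU)."""
    return np.maximum(0.0, x @ weights + bias)

x = rng.normal(size=4)                           # a 4-number input signal
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # placeholder weights
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

hidden = layer(x, w1, b1)       # first layer of artificial "neurons"
output = layer(hidden, w2, b2)  # a deeper layer producing a 2-number summary
print(output)
```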

Another area of research with many possibilities for future gains involves the language used to communicate with computers. In one way or another, all the theorists featured in this essay sought to tackle this issue. From Ivan Sutherland’s pioneering work with Sketchpad to Alan Kay’s revolutionary Smalltalk programming language—an early language designed at Xerox PARC in which many of its groundbreaking media-creation programs were built—easing the process of creation in a computational environment has long been a focus of HCI research (Manovich 98-100). In “Man-Computer Symbiosis,” Licklider outlined what he called “The Language Problem” as follows: “In short: instructions directed to computers specify courses; instructions directed to human beings specify goals.” (Licklider) It is this gap that we are still attempting to bridge. While great progress has been made with the development of high-level languages and software applications that hide the grunt work of code behind a wall of visual representation, there is still much room for improvement in this area of HCI if the average user is to enjoy the full benefits of computational environments.
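
Licklider’s distinction can be made concrete with a toy contrast: low-level code spells out the course step by step, while a higher-level, declarative style states the goal and leaves the course to the machine. A small Python illustration (the task, summing even numbers, is arbitrary):

```python
numbers = [3, 8, 1, 6, 4, 7]

# Specifying the COURSE: tell the machine every step to take.
total = 0
for n in numbers:
    if n % 2 == 0:
        total = total + n

# Specifying the GOAL: state what is wanted and let the language
# runtime work out the steps.
total_declarative = sum(n for n in numbers if n % 2 == 0)

assert total == total_declarative == 18
```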

There is also the question of input and output systems, another area of focus shared by all the pioneers discussed in this paper. While we’ve come a long way from the days of interpreting flashing light bulbs or mechanically punched cards, our existing interactive interfaces leave much to be desired. For most consumers, computer interaction is still constrained by the tools developed or refined at Xerox PARC in the 1970s: a bit-mapped display, the QWERTY keyboard and a point-and-click mouse (Manovich 99). While the emerging mobile computing paradigm is quickly changing some of these dimensions, the input/output limitations described by Ivan Sutherland in his essay “The Ultimate Display” still largely apply today. However, researchers around the world are working to tackle many of these issues. An end to Sutherland’s longing for a better “looking glass into a mathematical wonderland” might be just around the corner.

Sutherland’s vision of an immersive world launched the field of virtual reality and augmented reality research that continues today. His pioneering work with the Sword of Damocles ushered in the head-mounted display paradigm. While head-mounted displays have largely been the stuff of research labs, they are beginning to break into the mainstream. Products such as Google Glass and the Oculus Rift are bringing augmented reality and virtual reality head-mounted displays into the hands of consumers for the first time (Parkin). Accessories such as the Xbox Kinect and the Virtuix Omni are opening up new avenues for natural gesture recognition to further enhance the virtual environment. While systems of these types have existed in research labs for years, their availability on the consumer market at reasonable price points may signal the beginning of another paradigm shift in how humans experience and interact with virtual space. Other unresolved issues pointed out by Sutherland, such as digitizing and reproducing the sense of smell, are also receiving new research interest (Digital Olfaction Society). At the same time, researchers such as Mark Weiser and Paul Dourish have pushed back against the pull of virtual worlds, instead nudging HCI in the direction of “embodied virtuality.” This concept seeks to create an environment in which computers melt into the background, forming the invisible infrastructure that supports human interaction (Weiser). Ultimately, both models of HCI research are worth pursuing, offering different affordances for different activities and uses.

Finally, recent advances in electrical input technology would make Vannevar Bush proud. Near the end of “As We May Think,” in a flash of often-overlooked insight far ahead of his time, Bush questions the necessity of mechanical information inputs:

In the outside world, all forms of intelligence whether of sound or sight, have been reduced to the form of varying currents in an electric circuit in order that they may be transmitted. Inside the human frame exactly the same sort of process occurs. Must we always transform to mechanical movements in order to proceed from one electrical phenomenon to another? (Bush)

New research in this area is leading to groundbreaking possibilities for input and output systems. Some consumer products, such as the MYO armband, are attempting to take advantage of this concept as a new means of computational control (Metz), and other research is exploring the ability of mental control over virtual constructs, though most experiments are in the very early stages (Young 2013). However, a far more interesting application can be found in the development of prosthetic limbs. Not only are patients in experiments now able to control prosthetic limbs in new and improved ways using their thoughts (Young 2012), but new techniques are also introducing the possibility that these artificial limbs might return sensations of touch to the patient (Humphries; Talbot). The implications for improving quality of life are breathtaking. I can only imagine that Bush would approve.

Conclusion

The field of HCI has taken great strides since its birth in the middle of the 20th century. The research undertaken by the discipline’s foundational thinkers such as Vannevar Bush, J.C.R. Licklider, Ivan Sutherland, Douglas Engelbart and Alan Kay has had a profound and lasting impact on society in the computer era. And there are many areas of the field—both old and emerging—left uncovered in this essay. My hope is that this exploration of important research topics, achievements and emerging trends has shed some light on the field of HCI—where it’s been and where it might go in the future.

Works Cited

Baecker, Ronald. “Themes in the early history of HCI—some unanswered questions.” ACM interactions 15.2 (2008): 22-27. <http://dl.acm.org/citation.cfm?doid=1340961.1340968>

Bolter, Jay David and Grusin, Richard. Remediation: Understanding New Media. Cambridge, MA: The MIT Press, 2000.

Bush, Vannevar. “As We May Think.” The Atlantic, July 1945. Web. <http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/>

Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework.” 1962. Web. <http://www.dougengelbart.org/pubs/augment-3906.html>.

Grudin, Jonathan. “A Moving Target—The Evolution of Human-Computer Interaction.” Human-Computer Interaction Handbook (3rd Edition), 2012. Web. <http://research.microsoft.com/en-us/UM/People/jgrudin/publications/history/HCIhandbook3rd.pdf>.

Grudin, Jonathan. “Why Engelbart wasn’t given the keys to Fort Knox: revisiting three HCI landmarks.” ACM interactions 15.5 (2008): 65-67. www.interactions.acm.org. Web. 2 Dec. 2013.

Hollan, James, Hutchins, Edwin, and Kirsh, David. “Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research.” ACM Transactions on Computer-Human Interaction 7.2 (June 2000): 174-196. <http://www9.georgetown.edu/faculty/irvinem/theory/Hollan-Hutchins-Kirsch-Distributed-Cog.pdf>

Humphries, Courtney. “Giving Prosthetics a Sense of Touch.” MIT Technology Review. 6 Oct. 2011. Web. <http://www.technologyreview.com/news/425664/giving-prosthetics-a-sense-of-touch/>.
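
Kay, Alan. “A Personal Computer for Children of All Ages.” Xerox Palo Alto Research Center, 1972. Web. <http://www9.georgetown.edu/faculty/irvinem/theory/Kay-Dynabook-OriginalPaper-PARC-1972.pdf>.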

Licklider, J.C.R. “Man-Computer Symbiosis.” IRE Transactions on Human Factors in Electronics HFE-1 (March 1960): 4-11. Web. <http://groups.csail.mit.edu/medg/people/psz/Licklider.html>

Manovich, Lev. Software Takes Command. New York: Bloomsbury, 2013. Print.

McCracken, Harry. “A Talk with Computer Graphics Pioneer Ivan Sutherland.” Time. 12 Apr. 2013. Web. <http://techland.time.com/2013/04/12/a-talk-with-computer-graphics-pioneer-ivan-sutherland/>.

Metz, Rachel. “An Armband Promises a Simpler Route to Gesture Control.” MIT Technology Review. 26 July 2013. Web. <http://www.technologyreview.com/news/517176/an-armband-promises-a-simpler-route-to-gesture-control/>.

Myers, Brad. “A Brief History of HCI Technologies.” ACM interactions 5.2 (1998): 44-54. <http://www.cs.cmu.edu/~amulet/papers/uihistory.tr.html>

Parkin, Simon. “Can Oculus Rift Turn Virtual Wonder into Commercial Reality?” MIT Technology Review. 7 Oct. 2013. Web. <http://www.technologyreview.com/news/519801/can-oculus-rift-turn-virtual-wonder-into-commercial-reality/>.

Simonite, Tom. “Facebook Launches Advanced AI Effort to Find Meaning in Your Posts.” MIT Technology Review. 20 Sept. 2013. Web. <http://www.technologyreview.com/news/519411/facebook-launches-advanced-ai-effort-to-find-meaning-in-your-posts/>.

Sutherland, Ivan. Sketchpad: A Man-Machine Graphical Communication System. Jan. 1963. Republished as University of Cambridge Computer Laboratory Technical Report UCAM-CL-TR-574, 2003. Web. <http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf>.

Sutherland, Ivan. “The Ultimate Display.” IPTO, ARPA, 1965. Web. <http://www.wired.com/beyond_the_beyond/2009/09/augmented-reality-the-ultimate-display-by-ivan-sutherland-1965/>

Talbot, David. “An Artificial Hand with Real Feelings.” MIT Technology Review. 5 Dec. 2013. Web. <http://www.technologyreview.com/news/522086/an-artificial-hand-with-real-feelings/>.

Weiser, Mark. “The Computer for the 21st Century.” Scientific American, September 1991: 94-104. Web. <http://wiki.daimi.au.dk/pca/_files/weiser-orig.pdf>

“Welcome to the 2nd DOS World Congress 2014.” Digital Olfaction Society. Web. 2 Dec. 2013. <http://www.digital-olfaction.com/>.

Young, Susan. “Brain Chip Helps Quadriplegics Move Robotic Arms with Their Thoughts.” MIT Technology Review. 16 May 2012. Web. <http://www.technologyreview.com/news/427939/brain-chip-helps-quadriplegics-move-robotic-arms-with-their-thoughts/>.

Young, Susan. “Monkeys Drive Two Virtual Arms with Their Thoughts.” MIT Technology Review. 6 Nov. 2013. Web. <http://www.technologyreview.com/view/521471/monkeys-drive-two-virtual-arms-with-their-thoughts/>.

Further Reading

Being Human: Human-Computer Interaction in the Year 2020 (2008) edited by Richard Harper, Tom Rodden, Yvonne Rogers, Abigail Sellen