Category Archives: Week 10

The Cultural Element (Katie)

This week’s readings helped frame computation as a stop on a historical continuum of cognitive artefacts in relation to affordances, meaning making, and interpretation. All of these elements revolve around culture, a point that has interested me since the beginning of this course. An artefact is a meeting point between the properties of a thing and the environment in which it exists. Mahoney, in particular, articulates this when he says “any meaning the symbols may have is acquired and expressed at the interface between a computation and the world in which it is embedded” (Mahoney, 129). The world that meaning is embedded in is created by humans (cognitive agents). In other words, symbols have meaning to us, not to computers, because symbols and the way we string them together express our representation of the world.

We derive meaning via an interface, which connects two systems by transcending the boundaries of those systems. As Irvine discusses, the interpretations we make come from the way we are socialized. Thus, affordances are “good” when they fulfill our expectations (Irvine, 3). Interfaces are created when we use perceptible, physical features to make meaning, which is reflective of our culture, values and intentions.

In this way, affordances convey meaning when they correctly communicate human intention through detectable features. As cognitive agents, we enact meaning through collective associations that we learn and apply. Computing can be understood as the intersection of human intention and the symbolic process.

Some further questions:

As technology continues to evolve, should we be aware of the levels of mediation that affect meaning making? What if a new “meta layer” forms?

How specifically does the history of computation inform us moving forward? Do we ever take steps “backward”? What is the relationship between past and present?

References

Engelbart, Douglas. 1962. “Augmenting Human Intellect: A Conceptual Framework.” Reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

Irvine, Martin. 2016. “Introduction to Affordances and Interfaces: The Semiotic Foundations of Meanings and Actions with Cognitive Artefacts.”

Mahoney, Michael S. 2005. “The Histories of Computing(s).” Interdisciplinary Science Reviews 30, no. 2: 119–35.

J.A.R.V.I.S – Carson

Computing as a Symbolic Process:

I found Mahoney’s point about computation as a symbolic process very interesting. He states, “Computation is about rewriting strings of symbols [and]… The symbols and their combinations express representations of the world” (p. 129). As we discussed last week, this is very apparent in programming languages: the symbols Mahoney refers to include functions, variables, for loops, and the other constructs of a programming language. This idea carries into Conery’s article. Conery notes the need for structure in computing: “clearly there must be some structure to the computation, otherwise one could claim any collection of random symbols constitutes a state” (p. 815). There need to be reason and order for there to be an output, and I believe that this requirement is what makes the process symbolic.
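Mahoney’s “rewriting strings of symbols” can be made concrete with a toy rewriting system. This is only an illustrative sketch (the rules and strings here are invented, not drawn from the readings): the symbols mean “addition” only because we, the cognitive agents, say so.

```python
# Toy sketch of computation as string rewriting: apply the first matching
# rule anywhere in the string, and repeat until no rule applies.
RULES = [
    ("1+1", "2"),   # these symbols only "mean" arithmetic by our convention
    ("2+1", "3"),
]

def rewrite(s, rules=RULES, max_steps=100):
    for _ in range(max_steps):
        for lhs, rhs in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)  # rewrite one occurrence
                break
        else:
            return s  # no rule matched: the computation halts
    return s

print(rewrite("1+1+1"))  # prints 3
```

The machine never “knows” it is adding; it mechanically rewrites strings, and the result acquires meaning only at the interface with the world we embed it in.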


J.A.R.V.I.S:

J.A.R.V.I.S. (Just A Rather Very Intelligent System), created by Tony Stark (aka Iron Man, aka Robert Downey Jr.), is an operating system with the ability to complete complex tasks and communicate using context.


I am not 100% sure if this is what Engelbart, Sutherland, Licklider, or Bush had in mind at the time for HCI, but I don’t think this concept would seem too out of reach for them. J.A.R.V.I.S. is the ultimate memex: not only can Tony Stark store all of his “memory,” but he can also communicate with J.A.R.V.I.S. to access those memories at a rapid pace. J.A.R.V.I.S. also fulfills some of the ideas in Licklider’s “Man-Computer Symbiosis”: there is a developed partnership between Tony Stark and J.A.R.V.I.S. in which they problem-solve together. Obviously J.A.R.V.I.S. is a fictional operating system…that I know of…but the functions presented do not seem unattainable.

Random thought:

Would Google’s function “Did you mean…” be an example of a kind of Man-Computer Symbiosis?
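As a toy illustration of what such a suggestion involves, a spell-corrector can be approximated with edit distance over a vocabulary. This is a sketch under loose assumptions, and the vocabulary is invented; Google’s actual system relies on query logs and statistical language models, not a tiny word list.

```python
# Illustrative "Did you mean…?" sketch: suggest the vocabulary word
# closest to the user's query by Levenshtein (edit) distance.
def edit_distance(a, b):
    # Classic dynamic-programming edit distance, kept to one rolling row.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete
                                     dp[j - 1] + 1,    # insert
                                     prev + (ca != cb))  # substitute
    return dp[-1]

def did_you_mean(word, vocabulary):
    return min(vocabulary, key=lambda v: edit_distance(word, v))

print(did_you_mean("symbioses", ["symbiosis", "symbols", "system"]))
# prints symbiosis
```

In Licklider’s terms, the human supplies the goal (a rough query) and the machine supplies the clerical correction, which does look like a small instance of symbiosis.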

References:

Bush, Vannevar. 1945. “As We May Think.” The Atlantic.

Conery, John S. 2002. “Computation Is Symbol Manipulation.” The Computer Journal 55, no. 7: 814–16.

Engelbart, Douglas. 1962. “Augmenting Human Intellect: A Conceptual Framework.” Reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

Licklider, J. C. R. 1960. “Man-Computer Symbiosis.”

Licklider, J. C. R. 1968. “The Computer as a Communication Device.”

Mahoney, Michael S. 2005. “The Histories of Computing(s).” Interdisciplinary Science Reviews 30, no. 2: 119–35.

Sutherland, Ivan. 1963. “Sketchpad: A Man-Machine Graphical Communication System.”

Fancy Dancy Phenomenological Trickery

For me, the biggest surprises in this week’s readings came from Mahoney: how difficult a task it is to talk about the history of computing, and how difficult to figure out where even to start. I barely have a grasp on how to define computing, let alone truly contextualize it in human history. But my two biggest takeaways were these ideas: 1) that computing history is the history of computing market demands and the people who were in charge of meeting those demands, and 2) the idea of designing computing processes as creating “operative representations.” The former worked out an understanding that the design of computer hardware and software is not a magical black box; it is a constant reimagining of computing capabilities to meet the changing demands of consumers, as well as an ongoing process of human innovation in computing capabilities. The latter, applying the process of semiosis, shows what computing adds to meaning making: a “meta layer,” as stated in Dr. Irvine’s introduction. It lets us reason with our meanings in a hyper-automated way, and in this sense, we can think of advances in computing capabilities as advances in cognitive capabilities.

I love all the neat stuff we’ve gotten to look at throughout this class. This week, the original design of the Memex and Engelbart’s patent for the computer mouse were especially neat-o. The idea of computing history as a history of the designers and engineers who made computers especially got my mind going when looking at Engelbart’s technological realizations of Bush’s post-war computing challenges and visions. Today’s computers carry all of the functionality that Bush’s Memex envisioned, but are designed so that we can even carry our devices with us. Vannevar would lose his mind if he could fiddle with an iPad.

Looking at the patent for the mouse helped me make the connection between design, interface, and the idea of a human-computer symbiosis. Up until this week, I always thought of the mouse pointer as an object that I was moving across the computer screen: that the sensor at the bottom of my mouse tracked movement, and that movement physically dragged the pointer along. What this course is showing me is that the mouse pointer is not an object but an array of pixels, and the manipulation of these pixels gives the effect that I am moving an object across the screen, when in fact messages from the mouse are changing the position of the array of pixels that looks like a mouse pointer (the shape of the pointer itself being a semiotic sign). This phenomenological, perceptual trickery is by design: the computer screen should feel like an extension of space, and the mouse pointer should be a metaphorical limb with which I navigate the digital interface, especially if the intention mirrors Vannevar Bush’s vision of how computing would integrate into human cognition. An older mouse design, the one with a ball in it, makes the trickery easier to see: though the pointer appears to travel diagonally across the screen, the ball’s rollers only report movement along the x and y axes, and the diagonal is composed from those two components (I’m not sure if this is the case with optical mice).
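The illusion described above can be sketched in a few lines. This is a loose, hypothetical model (the class and names are invented): the mouse reports x/y deltas, and the “pointer” is nothing but a clamped position where the system redraws a glyph of pixels.

```python
# Sketch: the pointer "object" is really just a position updated by deltas.
class Pointer:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.x, self.y = 0, 0

    def move(self, dx, dy):
        # The mouse sends only x and y deltas; the system clamps the new
        # position to the screen and redraws the pointer glyph there.
        self.x = max(0, min(self.width - 1, self.x + dx))
        self.y = max(0, min(self.height - 1, self.y + dy))

p = Pointer(1920, 1080)
p.move(10, 5)          # a "diagonal" motion is just simultaneous x and y deltas
print((p.x, p.y))      # prints (10, 5)
```

Nothing travels across the screen; the coordinates change, and our perception supplies the moving object.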

References

Bush, Vannevar. “As We May Think.” Atlantic, July 1945.

Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

Irvine, Martin. “Introduction to Affordances and Interfaces: Semiotic Foundations.”

Licklider, J. C. R. “Man-Computer Symbiosis.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 73–82. Cambridge, MA: The MIT Press, 2003.

Mahoney, Michael S. “The Histories of Computing(s).” Interdisciplinary Science Reviews 30, no. 2 (June 2005): 119–35.

 

Memory Supplements and Cylons (Becky)

Bush, Sutherland, and Engelbart are all discussing ways that humans can interact with existing and future technology. And each is discussing ways to bridge the gap between humans and computing systems—that is, developing interfaces. But the differences in and progression of their approaches are fascinating.

Particularly interesting to look at is the way in which the authors propose to extend cognition. Bush’s Memex device seems to be all about offloading information, organizing memories in a receiving device. These memories can be linked and stored, but the ideas can’t be manipulated. In Sutherland’s Sketchpad, graphical manipulation is possible via the interface—the device is a more active participant, so to speak, in the process of meaning making. Engelbart, meanwhile, extends these ideas even further and wants to change human behavior to build a sort of symbiotic system of human-computer interaction and augmented human intelligence.

Bush describes the Memex as a “memory supplement.” A piece of furniture, the technology is meant to be integrated as seamlessly as possible into humans’ surroundings. The human user does all the thinking and processing, and then the products are stored in the Memex as images or sounds (in miniature!) that can be recalled. They can be sent to others and loaded into other Memex devices to share the information wealth. This is essentially the process of saving and sharing files today.

The Memex user is responsible for establishing trails and remembering codes, which seem to be somewhat unnatural even if they’re mnemonic. Perhaps that is why we ended up with icons that look like folders and software that can establish the trails for us (and that is more organized around “goals” as Licklider describes). But the general concept of linking information has withstood the test of time. (As has, perhaps, Bush’s “roomful of girls”/secretaries in the form of Romney’s “binders full of women” and others’ colorful phrases.)
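Bush’s trails, in which the user ties items together under a mnemonic code and later replays the chain, can be sketched as a tiny data structure. This is a loose illustration: the class, methods, and document titles are invented (the “bows” example only paraphrases the bow-and-arrow research scenario in Bush’s essay).

```python
# Sketch of associative trails: each named trail is an ordered chain of
# documents, and any document may appear on many different trails.
from collections import defaultdict

class MemexTrails:
    def __init__(self):
        self.trails = defaultdict(list)

    def link(self, code, document):
        # The user "ties two items together" under a mnemonic trail code.
        self.trails[code].append(document)

    def follow(self, code):
        # Replaying a trail returns the items in the order they were joined.
        return list(self.trails[code])

m = MemexTrails()
m.link("bows", "Article on the Turkish bow")
m.link("bows", "Notes on elasticity of materials")
print(m.follow("bows"))
```

Modern bookmarks, citation managers, and hyperlinks all follow this same shape: a human-chosen association layered over stored items.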

Wearable tech is a not-fully-realized projection of Bush’s ideas. Technology is becoming more seamlessly integrated into human behavior and the environment as devices become smaller. The GoPro looks close to the “little lump larger than a walnut” that Bush describes a “camera hound” wearing in the future. And thanks to ubiquitous smartphones with cameras on their backs, the capability to record life in still photos as Bush imagines is standard these days. But there is much more to do along this path.

Bush’s scientist of the future and a GoPro with a head mount

With the Sketchpad, or the computerized Etch A Sketch, we start to see the beginnings of interfaces that are more a part of the cognitive process as opposed to memory storage devices. Many of the descriptions seem as if they could be explaining interactions today, with Photoshop, for instance. The ideas of recursive functions and creating instances of master versions have certainly spanned the decades and been deeply ingrained in today’s software. And tablets seem to be descendants of the Sketchpad and light pen ideas.

Etch A Sketch from CC BY-SA 3.0

Meanwhile, if I’m understanding correctly, Engelbart builds on Bush’s linking trails and describes technology that performs in a way that is similar to the way humans make meaning and manipulate symbols—a non-serial conceptual structure. Yes, the technology he describes offloads information and stores it. But it also helps humans in the process of making meaning, if humans can make little changes to their MOs.

Many of the ideas that Engelbart describes sound eerily familiar. Parts of his process of working with statements could easily be explaining today’s word processing. His clerk seems like it could be describing today’s software or hardware; I can’t quite tell which. And I wonder if Engelbart (and Licklider) could have imagined where the networking concepts have led. Yet the way the augmented architect operates, by using a pointer and then moving his hand over the keyboard, is a bit cryptic: is this describing what we do today with a mouse and a keyboard, or something else that hasn’t been adopted (97)?

Taking a broad view, Engelbart appears to be making the case for even closer integration between humans and computers. Bush too wondered about a more direct path for transferring information, though in a different context. What is the natural extension of these trains of thought? Augmented reality? Neural implants? Battlestar Galactica–style cylons?

The development of human society (Roxy)

Let us begin with the development of human beings. There are two clues for analyzing the history of mankind. The first is “extension,” or “affordance.” According to Marshall McLuhan, the development of science and technology depends on the continuous substitution of our extensions. When we move on wheels and rockets, we extend our legs; when we wash with washing machines, we extend our hands; when we watch TV, we extend our eyes.

The second clue is “interaction.” Nowadays, interface “has a narrow range of meaning as a shared boundary across which two separate components of a computer system exchange information,” as shown in the first Wikipedia entry for interface. From an etymological perspective, the word in 1874 meant “a plane surface regarded as the common boundary of two bodies.” In its broadest sense, and also in modern usage, an “interface is anything that connects two (or more) different systems across the boundaries of those systems.”

Basically, interfaces have existed since the dawn of mankind. From the moment we used language to substitute for gesture, we could feel the interface between the language system and the gesture system; the sensory systems of human beings could interact with each other. From the moment we started to draw patterns on pottery, we could perceive the interface between the aesthetic system and the practical system; external systems that fulfill different human needs could interact with each other. We then applied the interface to the computational system. We also interact with other species: in nomadic civilization, people interacted with sheep, horses, and dogs; in agricultural civilization, with rice and wheat; in industrial civilization, with oil and coal. And now, in the information era, with the help of WIMP (windows, icons, menus, and pointer devices), we can detect the interface between the human system and the computational system. Human beings and machine systems can interact. In the future, we will create more interfaces; for example, through the recently heated project “The Reality Editor,” we can read the interface between the computational system and the material system.

“The Reality Editor is a new kind of tool for empowering you to connect and manipulate the functionality of physical objects. Just point the camera of your smartphone at an object built with the Open Hybrid platform and its invisible capabilities will become visible for you to edit.” Drag a virtual line from one object to another and create a new relationship between those objects. With this simplicity, you are able to master the entire scope of connected objects. This tool helps users maximize their strengths, such as spatial coordination, muscle memory, and tool-making. For example, it could let you turn off the light in your bedroom without having to stand up and walk to the switch.

The interactions between human and human, human and object, and human and information will decide how this society operates. In the next step, I think computing may focus on the small things, the details. Critical details can make the difference between a friendly experience and traumatic anxiety; a problem cannot be fully solved while its details remain unhandled. With the Internet, people are tagged. More specific targets mean more segments of the market, which require better optimization of detail.

Question:

What is the difference between the HCI and interaction design?

 

 

Having a conversation with the computer: where are we on the continuum between human and computer? – Ruizhong Li

Interacting with computers, we humans are on a continuum between ourselves and the machines. According to Licklider’s depiction of “Man-Computer Symbiosis,” enabling effective man-computer interaction requires humans and computers to work on the same display surface. In that case, before we reach the point when humans and computers can literally work collaboratively, where is the human’s position on the continuum?

It is interesting to learn that Licklider’s effort to clarify “man-computer symbiosis” is considered a way to humanize computing. We are getting the computer adapted to human thinking. However, in the process of keeping the computer attuned to humans, humans adapt to the computer as well. Extrapolating from this, it is true that humans function as agents in the development of technology, but technologies also evolve according to their innate mechanisms. The role humans play is to take advantage of technology and, according to human needs, to decide which technologies we will make heavy use of in the next decade. In this process, humans benefit from the affordances provided by technologies, but we also cannot escape their constraints.

As we look back into history, it is not hard to find that many computing technologies were developed during the post-war period. We cannot neglect the social context when we recount the history of computing. In the post-war period, there were growing demands for accuracy and efficiency in computing. The atomic blasts that ended World War II had a lasting impact on post-war scientific research; the eagerness for nuclear technology created the need to develop basic computing technology. The most important features of Sketchpad when it was invented in the 1960s were performing rapid, accurate calculation and modeling architectural designs: both features were related to preparation for the arms race.

But looking at Sketchpad’s technology today, it is widely employed in our daily life, like the handwriting system on our mobile phones and children’s eco-friendly sketchpads.


It seems that using Sketchpad technology for these daily things is like employing a steam engine to crack a nut. Is it? I think it is a necessary “step backward” for the technology. The old and new uses share similar mechanisms but are differentiated by their usage: a mature technology is used to find radically new ways of using that technology. It reminds me of Gunpei Yokoi’s philosophy: lateral thinking with withered technology.

“Withered technology” in this context refers to a mature technology that is cheap and well understood. “Lateral thinking” refers to finding radical new ways of using such technology. Yokoi held that toys and games do not necessarily require cutting-edge technology; sometimes, expensive cutting-edge technology can get in the way of developing a new product.

This is exactly what is going on between humans and technology. If we slow down and look at the way we have come, humans have been driven by social needs all along. That is why we are in a passive position in the development of technology. We do not have enough time to understand computers, so we lock the secret of how the computer works in a little box. We are going too fast to understand what is going on in the world. Using Sketchpad for daily purposes is a sign that we could change our way of thinking and be ready to embrace another breakthrough.

References

Bush, Vannevar. 1945. “As We May Think.” The Atlantic.

Conery, John. 2010. “Computation Is Symbol Manipulation.” The Computer Journal 55, no. 7.

Irvine, Martin. “Introduction to Affordances and Interfaces: Semiotic Foundations.”

Licklider, J. C. R. 1960. “Man-Computer Symbiosis.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 74–82. Cambridge, MA: The MIT Press, 2003.

Mahoney, Michael S. 2005. “The Histories of Computing(s).” Interdisciplinary Science Reviews 30, no. 2: 119–35.

Interfaces, Relationships, Symbols – Lauren Neville

This week I have grasped a much better understanding of the meaning behind interfaces and their evolution alongside digital computers. My understanding of the definition best came from Herbert Simon’s writings, “An artifact can be thought of as a meeting point—an “interface” in today’s terms—between an “inner” environment, the substance and organization of the artifact itself, and an “outer” environment, the surroundings in which it operates. If the inner environment is appropriate to the outer environment, or vice versa, the artifact will serve its purpose.” In this respect, I am able to understand an interface on a computer as more than simply graphics and instead as a substrate for conceptual organization. Interfaces also act as platforms allowing the mind to build these concepts through the interface design.

Engelbart wrote about the notion of interfaces in the form of filing systems. This is not a new system; it has worked effectively for thousands of years. Computation then allowed tasks to happen faster and information to take up less space. Because of this, the interface platform is both modeled after traditional filing systems and altered to utilize the affordances of computation. Written information on a computer is often stored in a document that resembles a piece of paper. That document is then stored in file folders, often organized alphabetically or by date. All of these interface systems exist so that humans feel conceptually organized. Computers, on the other hand, do not actually store written information in a folder; they store it wherever there is space, in the form of binary numbers.
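The gap between the folder metaphor and actual storage can be sketched with a toy “disk.” All names and numbers here are invented for illustration: the interface presents one path, while the bytes are scattered across non-contiguous blocks and only an index knows where they are.

```python
# Toy illustration: "/Documents/essay.txt" looks like one object in a
# folder, but its bytes live in whatever blocks happen to be free.
BLOCK_SIZE = 4

class ToyDisk:
    def __init__(self):
        self.blocks = {}      # block number -> chunk of bytes
        self.index = {}       # path -> ordered list of block numbers
        self.next_free = 0

    def save(self, path, data):
        numbers = []
        for i in range(0, len(data), BLOCK_SIZE):
            self.blocks[self.next_free] = data[i:i + BLOCK_SIZE]
            numbers.append(self.next_free)
            self.next_free += 7   # pretend free blocks are scattered
        self.index[path] = numbers

    def load(self, path):
        # The "document" is reassembled from scattered blocks on demand.
        return b"".join(self.blocks[n] for n in self.index[path])

disk = ToyDisk()
disk.save("/Documents/essay.txt", b"symbols all the way down")
print(disk.load("/Documents/essay.txt"))
```

The folder and document are interface-level signs; the storage beneath them obeys an entirely different logic.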

Dr. Irvine writes in his introduction, “The conceptual metaphors of interface and medium have deep roots. The philosopher-scientist and founder of semiotics, C. S. Peirce, defined a sign (and sign clusters) as a medium because sign structures enable cognitive agents to go beyond the physical and perceptible properties of representations to their interpretations (values, meanings), which are not physical properties of representations themselves. Perceptible (and remembered) representations only become signs when an agent supplies…”

This nod to Peirce has helped me make some of the conceptual leaps between semiotics, computation, and design. I am beginning to understand that each symbol we interact with is in fact a series of complex relationships to our knowledge of the past, our cultural standards, and our cognitive organization. The standard graphical user interface that is so often what comes to mind when one thinks of interfaces is actually a complex network of relationships to each other as well. Each of the sign vehicles on a computer screen make up a symbol system similar to our alphabet. The meanings behind them have deep roots in our analog system.

If we take the filing system example again: when we see a folder on our computer screen, it acts as an icon referencing a physical folder. What I am curious about is how many new symbol systems made on computers do not reference analog experiences and culture, since computational user interface design is a fairly young field of symbol-system making (30–40 years) compared to the roots of the alphabet or library filing systems. I wonder whether, as time moves on, interface and graphical user interface design will move further toward the creation of symbol standards.

Engelbart, “Augmenting Human Intellect: A Conceptual Framework.” First published, 1962. As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

Martin Irvine, “Introduction to Affordances and Interfaces: Semiotic Foundations.”

Mahoney, Michael S. “The Histories of Computing(s).” Interdisciplinary Science Reviews 30, no. 2 (June 2005): 119–35.

Associative Indexing and Memory – Joe

It would be very easy to say that there is a lot to unpack from these readings. So many ideas espoused in these papers exist today in modern computing, and while things did not progress exactly as these papers predicted (our computer systems are much more compact than a memex), they are still pretty incredible to read. For me, it was Bush’s concept of the “associative indexing” that the memex would allow that provided the “wow” moment.

When Bush discusses the associative trail, I automatically thought of Zotero and RefWorks, where we can save our sources as we conduct research. He foresees that lawyers, doctors, and patent attorneys will all have their own trails, correlated to their specific expertise. Could he have imagined that all of their information would exist communally on the internet? That Douglas Engelbart’s patent is known not only to patent attorneys but to anyone who searches for it on Google?

Other media are constrained by our ability to search within them: a physical book requires flipping to the pages we want; a painting requires a trip to a museum. With a tablet or computer system, our only limits are what we can cognize and how much electricity we have. I can type in any book section I want and it comes up, ditto any painting, and they aren’t saved on microfilm but exist digitally. These are cognitive artifacts that exist within a cognitive artifact, artifacts which we can blend together. This was one of my takeaways from the Conery piece, even if he did not intend it this way: that we need to think of computing as more than code, more than programming, but as thinking. As researchers, using the right databases to extract the exact information we need is actually a form of computing. We create an associative trail as we think, and with every new bit we save, we extend our memory.

One small thing that I keep thinking about as it applies to design thinking is what would have happened if Xerox PARC hadn’t shared its work with aspiring entrepreneurs. We know that Apple basically took Engelbart’s GUI system and ran with it straight to the bank, and they did so with a closed system. Engelbart’s GUI and mouse made computers accessible, but they could have helped popularize an open system. If Apple hadn’t been the first to reach a critical mass, maybe we would all be hobbyists, adept with the more technical aspects of our computers. Engelbart, Bush, and Licklider predicted (and worked on) some of the most amazing developments of the 20th and 21st centuries, but could they have seen the Cubs winning the World Series? I’m not so sure.

Bush, Vannevar. 1945. “As We May Think.” The Atlantic.

Conery, John. 2010. “Computation Is Symbol Manipulation.” The Computer Journal 55, no. 7.

Irvine, Martin. 2016. “Introduction to Affordances and Interfaces: The Semiotic Foundations of Meanings and Actions with Cognitive Artefacts.”

Licklider, J. C. R. 1960. “Man-Computer Symbiosis.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 74–82. Cambridge, MA: The MIT Press, 2003.


“The two most significant events in the 20th century: Allies win the war…and this.”

I found this week’s readings to be some of the most fascinating we’ve done so far.

One of my main takeaways from the Mahoney reading is that the history of computational and software development was not singular and fixed, but rather collective and malleable. As Mahoney himself says, “The computer thus has little or no history of its own. Rather, it has histories derived from the histories of the groups of practitioners who saw in it, or in some yet to be envisioned form of it, the potential to realise their agendas and aspirations” (Mahoney 119). The various “communities of computing” present in the formative years of machine-based computing, such as the science and engineering community, the data processing community, the management science community, and the industrial engineering community, all have their fingerprints on our modern computational systems. Each had unique expectations and needs of the computer, and as such, the product we have now is an amalgamation of these distinct cultures.

I was interested in how we got from the relatively esoteric and community-based uses of computers to the general-purpose PCs we have now. A key step in this process seems to be the universalization of computational affordance via the evolution of GUI design. Going from Vannevar Bush’s Memex to Douglas Engelbart’s work at the SRI labs, to the innovations that came out of Xerox PARC, we can see an active thread linking much of our contemporary conception of computing to designs decades in the making. But while Xerox PARC may have been the incubation chamber for much of our modern GUI design, such as the WIMP UI that we all know and love, companies like Apple and Microsoft played a crucial role in taking these breakthroughs and disseminating them to the public at large.

As for the development of interfaces and interactions, I was stunned by the side-by-side comparison of the writer-scribe Jean Mielot in his 15th century library and Ivan Sutherland demonstrating the Sketchpad in 1963 (Irvine 7-8). Yet again, we see that the gods didn’t gift us these technologies; rather, they are extrapolations and advances in an existing technological lineage. The mediatory role of interfaces is something I find highly fascinating. In fact, to bring it back to semiotics, one can think of interfaces as languages that allow us to communicate with the underlying technology. A good UI designer always keeps this concept in mind. Realizing good interface design also requires knowledge of contextual use. What is the ultimate purpose of the underlying technology? Is it to read text, like Ramelli’s 16th century “Book Wheel” and Bush’s Memex? That context has a particular affordance history that will inform the design of that technology’s interface.

Cubs are one out away! 

Thinking of a high-profile interface development in recent times, my mind goes to Google Glass.

Ubiquitous computing was Google’s goal in producing the headset, and this context informed their interface design. The product was a commercial failure, but one thing the readings have taught me is that timing is a crucial component of public adoption. Many of the historical antecedents to the current interface designs didn’t catch on in their time. But they played a crucial role in laying down the framework that successfully adopted interfaces have utilized. So even though right now everyone isn’t walking around with Google’s sleek, metallic eyeglasses, one day we may be.

If ubiquitous computing is to be the dominant computing context, then I believe the future of interface design will be shaped around seamless, recognition-based non-events. Augmented reality devices are also a distinct possibility, and will require a whole new set of design principles as we learn to rethink not only traditional computational affordance, but the affordance of everyday objects we map computational abilities onto. Diversification of design is another concept that should play a large role, especially as computers become less physically constrained, and more multifarious (think IoT). This shifting paradigm will require a new interface and interaction framework, but perhaps we will look to the past to help define our future.

CUBS WIN! I’M GOING TO BED! 

References

  1. Mahoney, Michael S. “The Histories of Computing(s).” Interdisciplinary Science Reviews 30, no. 2 (June 2005): 119–35.
  2. Engelbart, Douglas C. 1962. “Augmenting Human Intellect: A Conceptual Framework.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.
  3. Bush, Vannevar. “As We May Think.” The Atlantic, July 1945.
  4. Sutherland, Ivan. 1963. “Sketchpad: A Man-Machine Graphical Communication System.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 109–125. Cambridge, MA: The MIT Press, 2003.
  5. Irvine, Martin. “Introduction to Affordances and Interfaces: Semiotic Foundations.”

Unfulfilled Intelligence Augmentation Visions – Jieshu

In Wikipedia’s disambiguation page for Interface, you can see many interfaces—user interface, hardware interface, biological interface, chemical interface, and social interface. There is even a place in Northern Ireland called an Interface Area, where “segregated nationalist and unionist residential areas meet”[i], which intuitively reveals what interfaces imply—boundaries. As Professor Irvine puts it in his introductory essay, “an interface is anything that connects two (or more) different systems across the boundaries of those systems”[ii].

In our symbol systems, basically anything can serve as an interface as long as it mediates between social-cultural systems. Ancient cave paintings are interfaces connecting people to their long-lost myths. The Bible is an interface to a huge system of culture and value. An abacus is an interface to an ancient calculation system. In computing devices, a user interface is the boundary between the user and the computing system.

Interaction is the function of interfaces: information flows through them. However, many interfaces are like one-way streets, allowing information to flow in only one direction. An artwork, for example, is an interface to a meaning system, but many artworks only let information flow toward the audience—we have to stand behind a line when we appreciate Van Gogh’s Starry Night. Artists, though, are creating more and more interactive works, e.g. the pile of candies that visitors were free to take away, mentioned by Amanda in her post several weeks ago[iii].

So when I first read about how pioneers like Kay, Sutherland, Engelbart, and Bush explored improving man-machine interaction through computational interfaces, I was truly amazed. Bush proposed the memex as a model for storing, recording, and retrieving books[iv]. Engelbart envisaged a computer network that could augment human intelligence, and he invented the mouse, which could point anywhere on a computer screen[v]. Licklider envisioned a man-computer symbiosis for more effective man-computer interaction[vi], which Sutherland partially realized with Sketchpad: it let people draw on a computer screen with a light pen and could correct freehand drawings into perfect circles or rectangles, achieving true two-way interaction[vii].
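Sketchpad’s correction of freehand drawings into ideal shapes can be illustrated with a toy sketch. This is only a hypothetical approximation of the idea, not Sutherland’s actual constraint-satisfaction method: it snaps a shaky stroke to a circle by taking the centroid as the center and the mean distance to it as the radius.

```python
import math

def snap_to_circle(points):
    """Snap a freehand stroke to an idealized circle (toy approximation:
    center = centroid of the points, radius = their mean distance to it)."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    r = sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)
    return (cx, cy), r

# A shaky, roughly circular stroke around (0, 0) with radius about 1.
stroke = [(math.cos(t / 10) * (1 + 0.05 * (-1) ** t),
           math.sin(t / 10) * (1 + 0.05 * (-1) ** t))
          for t in range(63)]
center, radius = snap_to_circle(stroke)  # center near (0, 0), radius near 1
```

The interface idea is the interesting part: the machine interprets the user’s imperfect gesture as a sign of an intended ideal form.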

Many of those pioneers’ visions have since been realized as computing power has grown. CAD software and touchscreen devices are all rooted in concepts that Sketchpad proved. Mobile devices like cell phones, Kindles, and iPads closely resemble Kay’s Dynabook. The online file systems Engelbart envisioned, which allow multiple users to read and edit at the same time, have only recently been implemented in services such as Quip and Google Docs.

However, many paths remain unexplored. Here I will discuss three of them.

Knowledge Navigation System

The memex Vannevar Bush proposed put forward a personal information management and “knowledge navigation” system. I was surprised by how much cognitive workload could be offloaded onto this system, even though it was designed seventy years ago, before digitization. Even today, when everyone has their own computer(s) and multiple external hard drives, we haven’t built a highly efficient knowledge navigation system. Wikipedia may come close, but it can’t present your personal knowledge structure. In my view, a true knowledge navigation system should have the following properties:

  • Portability. Cloud storage might be a good choice.
  • Searchability. You can search any word, image, soundtrack, or even video and quickly get everything relevant.
  • Presentation of your knowledge structure. It could use methods like data visualization or library classification to present your knowledge structure and allow you to navigate your knowledge landscape both horizontally and vertically. In other words, you could zoom in and out on your “knowledge map” and see your knowledge at different scales—a Google Maps for knowledge.
  • Knowledge discovery. You can use it to discover new knowledge. Google Earth is a good example: when you zoom in on the Pacific Ocean, you see many islands, and depending on the layers you choose, clicking the icons distributed on the map reveals various kinds of knowledge—Wikipedia entries about an island, three-dimensional topographies both on land and undersea, and documentary videos of marine animals shot by the BBC or the Discovery Channel. If you zoom in on Mars in Google Earth, you can learn, for instance, what chemical and physical factors shaped a strange geographic feature. The hyperlinks in Wikipedia are also a good vehicle for knowledge discovery. This property would be realized through hyperlinks and networked connections to huge online databases.
  • Connect to other people’s knowledge system. You can share knowledge with other people, navigate knowledge on your social network, and at the same time navigate your social network on the entire human knowledge landscape.
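Two of these properties—searchability and the zoomable “knowledge map”—can be sketched with a toy tree of topics. Everything here is a hypothetical illustration, not an existing system:

```python
# A minimal sketch of a personal knowledge structure as a tree of topics,
# supporting search and map-like "zooming" to different scales.
class Topic:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def search(self, term):
        """Return the names of all topics whose name contains the term."""
        hits = [self.name] if term.lower() in self.name.lower() else []
        for child in self.children:
            hits.extend(child.search(term))
        return hits

    def zoom(self, depth):
        """'Zoom out' to a given depth, like scales on a map of knowledge."""
        if depth == 0:
            return self.name
        return {self.name: [c.zoom(depth - 1) for c in self.children]}

knowledge = Topic("Semiotics", [
    Topic("Interfaces", [Topic("User interface"), Topic("Interface area")]),
    Topic("Symbol systems", [Topic("Writing"), Topic("Computation")]),
])
```

Here `knowledge.zoom(1)` shows only the top-level areas, while `knowledge.search("interface")` finds every topic mentioning interfaces, wherever it sits in the hierarchy.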

Ray Kurzweil, a futurist at Google, once predicted that within thirty years human beings will use nanobots in their brains to connect to the Internet and perform many astonishing functions[viii], such as downloading skill modules on demand, as in The Matrix. It sounds like hype, but it may be one way to realize knowledge navigation.

Virtual Personal Assistant

In “The Computer as a Communication Device” (1968), Licklider mentioned OLIVER (on-line interactive vicarious expediter and responder), proposed by Oliver Selfridge. OLIVER would be “a very important part of each man’s interaction with his online community”—“a complex of computer programs and data that resides within the network” that could take care of many of your matters without your personal attention. It could even learn through experience. This path is a very typical intelligence-augmentation method, and it has been explored in apps like Siri and Microsoft Cortana. Another example is Amy, a virtual assistant that can arrange your schedule by drawing information from your emails with natural language processing[ix].


Cc’ing Amy on an email allows it to arrange your schedule according to your time and location.

However, because such algorithms are still in their early stages, this path has a very long way to go.
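A toy sketch can illustrate the kind of parsing an assistant like Amy performs: pulling a proposed meeting time out of an email and turning it into a calendar entry. The email text and the pattern are hypothetical, and real assistants use natural language processing rather than a single regular expression:

```python
import re
from datetime import datetime

def extract_meeting(email_text):
    """Find a phrase like 'meet on November 10 at 3pm' and return a
    calendar entry (year hardcoded to 2016 for this toy example)."""
    match = re.search(
        r"meet(?:ing)?\s+on\s+(\w+ \d{1,2})\s+at\s+(\d{1,2})\s*(am|pm)",
        email_text, re.IGNORECASE)
    if not match:
        return None
    date_part, hour, meridiem = match.groups()
    hour24 = int(hour) % 12 + (12 if meridiem.lower() == "pm" else 0)
    when = datetime.strptime(f"{date_part} 2016 {hour24}", "%B %d %Y %H")
    return {"title": "Meeting", "when": when}

event = extract_meeting("Hi Jieshu, can we meet on November 10 at 3pm?")
```

The gap between this brittle pattern matching and an assistant that genuinely understands context is exactly why the path remains largely unexplored.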

Metamedium

Alan Kay coined the term metamedium for the computer as a medium that can represent other media[ii]. He also envisioned a system whose software would allow everyone, including children, to program their own software as “creative tools”[x], a path exemplified in his Smalltalk project. It remains under-exploited today: most computer owners, including me, don’t know how to program, and computing devices are mainly consumption devices. As we discussed in this week’s Leading by Design course, open-source software that frees us from lock-in systems like Microsoft Windows and OS X may be one way to realize Kay’s vision. Another is to teach children to program with engaging tools such as LEGO and Minecraft, which might be a commercially plausible approach.


References

[i] “Interface Area.” 2016. Wikipedia. https://en.wikipedia.org/w/index.php?title=Interface_area&oldid=738749718.

[ii] Irvine, Martin. n.d. “Introduction to Affordances and Interfaces: Semiotic Foundations.”

[iii] Amanda Morris. 2016. “Using the Piercian Model to Decode Artwork – Amanda | CCTP711: Semiotics and Cognitive Technology.” Accessed November 3. https://blogs.commons.georgetown.edu/cctp-711-fall2016/2016/09/28/using-the-piercian-model-to-decode-artwork-amanda/.

[iv] Bush, Vannevar. 1945. “As We May Think.” The Atlantic, July.

[v] Engelbart, D. C., and Michael Friedewald. Augmenting Human Intellect: A Conceptual Framework. Fremont, CA: Bootstrap Alliance, 1997.

[vi] Licklider, J. C. R. 1960. “Man-Computer Symbiosis.” IRE Transactions on Human Factors in Electronics HFE-1 (1): 4–11. doi:10.1109/THFE2.1960.4503259.

[vii] Sutherland, Ivan. 1963. “Sketchpad: A Man-Machine Graphical Communication System.”

[viii] Kurzweil, Ray, and Kathleen Miles. 2015. “Nanobots In Our Brains Will Make Us Godlike.” New Perspectives Quarterly 32 (4): 24–29. doi:10.1111/npqu.12005.

[ix] “Testing Amy: What It’s like to Have Appointments Scheduled by an AI Assistant.” 2015. GeekWire. December 15. http://www.geekwire.com/2015/testing-amy-what-its-like-to-have-appointments-scheduled-by-an-ai-assistant/.

[x] Manovich, Lev. 2013. Software Takes Command. International Texts in Critical Media Aesthetics, vol. 5. New York; London: Bloomsbury.