I Guess I’m No Steve Jobs

When thinking about the history of computer design, I’ve always just assumed that the primary goals were overtly technical, such as facilitating complex mathematics, decoding information, and so on. However, reading this week’s selections I was surprised to learn how many of the early computer design concepts were focused on the ideas of communication, organization and efficiency. It seemed that most of the early conceptual models were trying to establish a centralized tool for storing, consolidating and retrieving information in a manner that was easy for users to understand and access quickly.

What I found even more interesting, though, is the way some of the fundamental design concepts, such as selection, recursion and indexing, serve technical and practical functions at the same time. Last week we learned how, in coding, we can define variables and create indices; from those indices we can select data and build complex abstractions that, through recursive operations, perform various computational functions. Those operations are usually hidden from view, on the side of the interface we rarely interact with. But between Bush’s description of selection (Bush), Sutherland’s description of recursion (Sutherland 118-9), and Engelbart’s description of indexing (Engelbart 99-100), I could see how we use these same concepts to organize and manipulate data on the side of the interface we interact with every day.
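To make that connection concrete, here is a toy sketch in Python of all three concepts working together. The record names and structure are entirely my own invention, meant only to echo Bush’s idea of linked trails, not any actual system:

```python
# A toy "memex" index: selection retrieves a record by name, and
# recursion follows the links from one record to the next.
records = {
    "bush": {"title": "As We May Think", "links": ["engelbart"]},
    "engelbart": {"title": "Augmenting Human Intellect", "links": ["sutherland"]},
    "sutherland": {"title": "Sketchpad", "links": []},
}

def select(keyword):
    """Selection: pull one record out of the index."""
    return records.get(keyword)

def follow_trail(keyword, seen=None):
    """Recursion: walk a trail of links, printing each title once."""
    seen = seen if seen is not None else set()
    if keyword in seen or keyword not in records:
        return
    seen.add(keyword)
    print(records[keyword]["title"])
    for link in records[keyword]["links"]:
        follow_trail(link, seen)

print(select("bush")["title"])   # selection via the index
follow_trail("bush")             # recursion across the whole trail
```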

As for continuing and improving computer design, the first thing that came to my mind is the mouse. On one hand, it forces users to perform an interaction that is relatively unnatural compared to the ways humans typically indicate things: moving a mouse is closer to the motion of wiping down a surface than to pointing at something. It is only through the visual interface, where the mouse’s corresponding arrow is displayed, that we understand the significance of the motion used to move the mouse. Our reliance on this correspondence is most clearly revealed in the moments when the arrow fails to respond to the user’s movement, prompting them to pick up the mouse, turn it over, or click it repeatedly.

Rather than performing the odd motions the mouse requires, I began to consider the possibility of using touchscreen technology, as some tablet PCs have already begun to do, since using one’s finger to manipulate objects displayed on a digital interface provides stronger affordances to users (Irvine 1-2). Specifically, I wondered: if touchscreens have already become the norm for cell phone and tablet interface design, why have we been so slow to standardize this technology for PCs and render the mouse obsolete?

As far as I can tell, it comes down to precision, application development and ergonomics. To begin, even though the overall constraints of using one’s finger versus using a mouse to act on a PC interface seem the same (Irvine 2), the pointer to which a mouse corresponds allows more precision than our fingertips do. Because the mouse’s pointer functions as an internal component of the interface, it can be significantly and consistently more precise than our fingers, which are external tools acting on the interface. This could surely be remedied; however, in addition to improving the precision of touchscreen technology, applications would also need to be redesigned to facilitate touch activation. A primary issue here is that, unlike the mouse’s standardized pointer, there is no standardized finger size. The dimensions of application buttons would therefore need to be redesigned to accommodate users with large fingers, which might mean that the look of a PC interface would change significantly.
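A rough way to picture the precision gap: a pointer reports a single pixel, while a fingertip lands as a contact patch. The sketch below is only illustrative; the button sizes and the 20-pixel fingertip radius are assumptions I made up for the example:

```python
# Hit-testing sketch: a mouse pointer is a single point, while a
# finger is better modeled as a circular contact patch.

def pointer_hits(button, x, y):
    """A pointer hits if its one-pixel hotspot lands inside the button."""
    bx, by, w, h = button
    return bx <= x <= bx + w and by <= y <= by + h

def finger_hits(button, x, y, radius=20):
    """A finger 'hits' if any part of its contact patch overlaps the
    button; radius=20 px is an invented stand-in for a fingertip."""
    bx, by, w, h = button
    nearest_x = max(bx, min(x, bx + w))   # closest point on the button
    nearest_y = max(by, min(y, by + h))   # to the center of the touch
    return (x - nearest_x) ** 2 + (y - nearest_y) ** 2 <= radius ** 2

left = (100, 100, 8, 8)    # two tiny 8x8-pixel targets,
right = (110, 100, 8, 8)   # two pixels apart
print(pointer_hits(left, 104, 104), pointer_hits(right, 104, 104))  # True False
print(finger_hits(left, 109, 104), finger_hits(right, 109, 104))    # True True
```

The fingertip’s contact patch covers both tiny targets at once, so the interface cannot tell which was meant; targets would have to grow before touch could replace the pointer, which is exactly the redesign problem described above.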

There is also the possibility that a mouse is simply more comfortable to use because it doesn’t require the user’s arm to be elevated to reach the screen. Instead, the user can rest his hand on the desk while using the mouse, which also stabilizes his movements. When I think about the design of the mouse in this sense, I’m not sure that it should become obsolete; or rather, touchscreen indication doesn’t seem to be the ideal design evolution for the PC.

So how could interaction with the PC interface be improved? The next thought that comes to mind is voice commands. While I’m personally intrigued by the possibilities that advanced speech recognition holds for word processing, its potential for interaction with the general PC interface seems more complicated. In particular, while it would surely relieve the user of performing functions with his hands, voice command control could impose an unwanted burden of learning. When using computer programs, we take for granted the large amount of data we interact with but don’t really understand. For example, I know what all of the buttons in my word processor’s toolbar do, but I am not able to tell you what most of them are called. If I were to rely on voice commands, I would be forced to learn the names of these functions in order to invoke them. That may not be too daunting within one program, but think of all the different and new programs I may want to use, and the changes that may result from future updates. An indicating tool such as the mouse lets me understand and interact with these functions rather seamlessly, without storing the additional information of their names.
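The burden becomes obvious once you notice that a voice interface is, at bottom, a lookup from spoken names to functions: an action whose name you don’t know is simply unreachable. A minimal sketch, with command names I invented for the example:

```python
# A voice interface reduces to a mapping from spoken names to actions;
# a function whose name the user doesn't know cannot be invoked.

def toggle_bold():
    print("toggling bold")

def apply_small_caps():
    print("applying small caps")   # easy to spot as a toolbar icon,
                                   # but how many users know to *say* it?

commands = {
    "bold": toggle_bold,
    "small caps": apply_small_caps,
}

def voice_command(utterance):
    action = commands.get(utterance.lower().strip())
    if action is None:
        print(f"no command named {utterance!r}")  # the learning burden
    else:
        action()

voice_command("bold")                                  # works: name known
voice_command("the A with the little letters button")  # fails: name unknown
```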

Again, this issue could be addressed by redesigning applications and internet browsers to facilitate voice commands. However, even if such redesigns were desirable, voice command for PCs carries numerous social implications. In particular, the office environment would become incredibly chaotic without some sort of barrier between the various voice commands floating throughout the office. In an era in which we are finally moving away from the cubicle, such a development would actually inhibit the advances in business communication and collaboration that have recently been acknowledged as beneficial. In this sense, the overall efficiency gained through voice command features for PCs might be less than the efficiency gained through open work environments.

All of that is to say that I’m not sure how the PC interface could be redesigned, unless we could develop a way to truly achieve “Man-Computer Symbiosis” (Licklider) and control computers through some sort of unspoken cognitive functions. Or perhaps there’s a way to interact with the computer via visual cues, such as installing a sensor on the screen that tracks the focus of one eye and responds to blinks the same way we use clicks. However, that would probably require a large number of additional buttons within programs to activate certain functions (I’m thinking of things like highlighting text with a cursor), so I don’t know whether it would be more efficient. And while it might free users from developing carpal tunnel, I’m not sure most people would trade that for a twitch!
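For what it’s worth, the translation layer itself is easy to imagine. Here is a sketch of how gaze and blink events might be mapped onto pointer events; the sensor, the event tuples and their names are all invented for illustration:

```python
# Sketch: translate a stream of ("gaze", x, y) and ("blink",) events
# from a hypothetical eye-tracking sensor into ordinary click events.

def translate(events):
    cursor = (0, 0)
    for event in events:
        if event[0] == "gaze":
            cursor = (event[1], event[2])   # gaze moves the cursor
        elif event[0] == "blink":
            yield ("click", *cursor)        # a blink stands in for a click

stream = [("gaze", 300, 180), ("blink",), ("gaze", 640, 400), ("blink",)]
for click in translate(stream):
    print(click)   # ("click", 300, 180) then ("click", 640, 400)
```

Notice that a blink, unlike a mouse button, has no held state, so press-and-drag gestures like highlighting text would indeed need some other mechanism, which is where all those extra buttons come in.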

Bush, Vannevar. 1945. “As We May Think.” The Atlantic, July.

Engelbart, Douglas. 1962. “Augmenting Human Intellect: A Conceptual Framework.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

Irvine, Martin. 2016. “Introduction to Affordances and Interfaces: The Semiotic Foundations of Meanings and Actions with Cognitive Artefacts.”

Licklider, J.C.R. 1960. “Man-Computer Symbiosis.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 74–82. Cambridge, MA: The MIT Press, 2003.

Sutherland, Ivan. 1963. “Sketchpad: A Man-Machine Graphical Communication System.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 109–26. Cambridge, MA: The MIT Press, 2003.