Category Archives: Week 9

Humans and Interfacing with Interfaces

  • In our culture, there has recently appeared a “symbol-manipulation artifact” [the computer] of such power that we feel certain it will do much to “extend man’s intellect.” (Memo, Dec. 1960)

In studying the evolution of the internet, we learn how a military experiment during the Cold War evolved into the general-purpose technology (internet protocols) that we now use to communicate globally. This alone provides a sense of scale for projecting some of the factors behind the conceptual design steps that continue to enable computer design to develop beyond its early forms.

In the past, I thought of the computer interface (on my mobile phone, for example) as the parts of the phone that enabled me to communicate with it: mostly as software, and more specifically as ‘apps’. To me, the keyboard was simply the interface with which I could compute (read and write) data, and without which texting on my mobile would not be possible. This week’s readings reveal the underlying ideological factors that shape the design of computer interfaces in many applications, specifically military, government, and business applications. Humans are and will always remain an integral part of the overall design of the computer, even before a computer is designed or built. How humans evolve, and particularly how they communicate, is a significant factor in the design concept and evolution of computers. I think of this relationship as one in which computers are designed to be used by humans in the same sense that humans are designed to use computers.

  • The idea of ‘augmentation’ means organizing access to all forms of symbolic representation and expression by creating the software and hardware interface tools that enable composing and interpreting information.

In describing some of the conceptual steps that enable computer design, I focus on two key factors: human language and embodiment.

Consider avatar technology, for example: a simulated figure embodying the human, applicable in video games, movies, internet forums, virtual assistants, and so on, as a modern-day application of ‘Augmenting Human Intellect’. We can juxtapose this application with the brilliant happenstance Doug Engelbart revealed in adopting the cathode ray tube, the display technology of early television screens, for computing. Technology can then be perceived as supporting human problem-solving simply by communicating with humans to aid symbolic thought and “problem-solving”.

One of the biggest (misconceived) concerns of recent times comes in the form of a big question: “Is artificial intelligence taking jobs away from humans?” This concern has shaped the use of computers for automating processes and other technological applications in modern working contexts. Against this misconception, human language stands as a unique identifier of how and why computers were never designed to take anything away from humans, but rather to take from humans the information (like language) necessary to support and augment human problem-solving capabilities.

  • “In the context of our computational and software screen metaphors, a computer device interface is a design module for enabling semiotic inter-actions with software and transformable representations for anyone taking up the role of the cognitive interpreting agent.”

To see how computers went from being ‘number crunchers’ to becoming more ‘general purpose’ for information processing via graphical interfaces, consider how Alan Kay describes the user interface as a ‘second stage development’ of the operating system (OS) we see in computers. The user interface makes the computer more efficient for the human user by means of what we define as embodiment. Considerations like how we read, interact at a distance, and calculate are translated into physically perceptible features that come together to shape the programming of computers. This is why computers were perceived in the 1960s-70s to have some intelligence, even though we now know they may never become intelligent beyond what we program them to do.

Computer interaction augments mentalities

Jun Nie

Nowadays, most people play the role of passive users, taking the graphical user interface for granted because of the sustained effort to decrease the cost of learning while increasing the capability of computers. As the biggest beneficiaries, users can master methods of manipulation based on experience carried over from the physical world as quickly as possible, and create content in a democratized environment, guiding innovation along multiple dimensions. However, the intellectual origins of the GUI have been forgotten beneath this commercially successful paradigm.

At the beginning, the designers intended to boost humankind’s capacity to deal with complex problems and to augment the human intellect during the process of interacting with computers. As a metamedium simulating the functions of various media and adding new properties, the computer integrates different ways of interacting, which can help users exercise their mentalities by thinking through symbols, actions, and images. The computer also provides a platform for experiment and innovation; as an open-ended machine, its potential for future development is endless because of its modular design principles. When physical substance limits interaction, simply changing existing software or writing new software lets the computer modify itself to satisfy complicated working demands, as the sketch below illustrates. These design principles should be reclaimed and implemented in our current systems, and especially stressed among users.
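A minimal sketch of this idea (my own illustration in Python, not drawn from the readings; all names are hypothetical): the machinery itself stays fixed, but installing new software gives the same machine behaviors it did not have before.

```python
# A toy "metamedium": fixed machinery whose abilities come from loaded software.

class Metamedium:
    def __init__(self):
        self.programs = {}

    def install(self, name, program):
        # "Writing new software" extends what the same machine can do.
        self.programs[name] = program

    def run(self, name, *args):
        return self.programs[name](*args)

machine = Metamedium()
machine.install("typewriter", lambda text: text)      # simulates an old medium
machine.install("reverse", lambda text: text[::-1])   # a property no typewriter had
print(machine.run("typewriter", "hello"))  # -> hello
print(machine.run("reverse", "hello"))     # -> olleh
```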

Beyond using the computer as a tool for producing content, everyone is encouraged to understand the language of the medium and to contribute to creating new structures and technologies. The free-software and open-source software movements give users the opportunity to use, copy, study, and change software freely. Whether we make good use of the resources and opportunities computers provide depends on the will and personal orientation of each user. We can choose to follow the lead of the experts and remain passive users, or we can try to use the media properties of the computer in the process of interaction to exercise our mentalities and promote technological innovation to the greatest extent; I think the latter was the original intention of the GUI designers.

References:
Martin Irvine, Introduction to Symbolic-Cognitive Interfaces for Computer Systems: History of Design Principles
Lev Manovich, Software Takes Command, pp. 55-106
Bill Moggridge, ed., Designing Interactions. Cambridge, MA: The MIT Press, 2007.

Concepts behind and ahead of computer development

Many visions and concepts preceded the development of the computer and were far more ambitious than what the computer is today. The main conceptual leap enabling computer design to develop beyond earlier designs was to treat computational technologies as powerful symbolic engines for abstraction and representation. The two major research and development labs of the 1960s-70s represented two major directions in “human-computer interface” design principles. One was to turn the computer into a comprehensive media machine dealing with multiple media in a single device; the “metamedium” metaphor is fantastic because it embraces both existing media and media not yet invented. The other was to augment human intellect, whose ultimate end is to enhance and expand human intellectual abilities and creativity through processes of symbolic representation. These visions led to screens that can display and manipulate symbols for users. What is fascinating is that although these concepts seemingly had the same consequences for computer development, they actually opened different possibilities. And the “time-sharing” concept, which later became networked computing, created many possibilities for interpersonal communication and cooperative work. All in all, the greatest conceptual leap was that screens and displays could be designed to take input as instructions back into the computing process, rather than serving as passive representational substrates; the sketch below traces this loop. The concept contributed not only to the invention of interactive interfaces in the 1960s-70s, but also to many new functions in today’s smartphones, such as customized content (which takes our inadvertent usage habits as input).
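A minimal sketch of that loop (my own illustration; the names are hypothetical): what appears on the display is a representation of program state, and what the user does with that representation re-enters the computation as a new instruction.

```python
# A toy interactive loop: display -> user input -> instruction -> new display.

state = {"count": 0}

def render(state):
    # The "screen": a symbolic representation of the current state.
    return f"count = {state['count']}  (enter '+', '-', or 'quit')"

def interpret(command, state):
    # The user's action on the display re-enters the computing process.
    if command == "+":
        state["count"] += 1
    elif command == "-":
        state["count"] -= 1
    return state

while True:
    command = input(render(state) + " > ")
    if command == "quit":
        break
    state = interpret(command, state)
```

Rather than printing a result and halting, the program holds its state open for the next instruction, which is exactly what a passive representational substrate cannot do.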

Many concepts remain unfulfilled even today. Engelbart’s concept of “view control” can be seen in much software today, such as Word and PowerPoint, and it has become more dynamic: on some video platforms, users can even choose to watch only the cuts featuring a specific character. However, it is still hard to turn a representation in one medium into another medium, and the “chain of views” is entirely unfulfilled. It could be implemented today: on a video platform we could switch between the video view and the script view, and in a to-do-list app we could switch to a view of the likely locations of our tasks (the sketch below shows the underlying principle). Another unfulfilled concept is Kay’s vision of a programming environment that provides users with already-written general tools they can reprogram for customized needs, making their own creative tools. This is still not possible for most users because today’s computers and apps are black boxes. To achieve it, we would need to make computer programming an open-source system, but that could conflict with copyright in many cases. Additionally, Nelson’s vision of hypertext and hypermedia is very experimental, and it goes far beyond the hyperlink. His emphasis on complexity and interconnectivity, and on breaking up conventional units for organizing information, is consistent with post-modernity. Some platforms are experimenting with this idea through interactive videos, such as Black Mirror: Bandersnatch. But Nelson’s vision still holds many possibilities, like allowing users to choose and create the sequence of a storyline. And I believe those possibilities could become more vital and realistic with the advent of 5G technology.
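The principle behind “view control” can be stated compactly (a sketch of my own, with hypothetical data): the underlying structure is stored once, and each “view” is just a different rendering of it, so switching views never alters the content.

```python
# One data model, multiple interchangeable views of the same content.

document = {
    "title": "Week 9 Notes",
    "items": ["metamedium", "view control", "hypertext"],
}

def outline_view(doc):
    # A condensed, one-line view of the structure.
    return doc["title"] + ": " + ", ".join(doc["items"])

def full_view(doc):
    # An expanded view; the data itself is untouched.
    lines = [doc["title"]]
    lines += [f"  {i + 1}. {item}" for i, item in enumerate(doc["items"])]
    return "\n".join(lines)

for view in (outline_view, full_view):
    print(view(document))
```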

References:

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces for Computer Systems: History of Design Principles

Lev Manovich, Software Takes Command, pp. 55-106, on the background for Alan Kay’s “Dynabook” metamedium design concept.

The Development of Computers

(I accidentally did the readings a week early; here is last week’s piece with some added thoughts at the end that discuss new ideas.) Early computer design was essentially human brain design. Computers have long been modeled to recreate the way humans process information and calculate. I’m reminded of the film “Hidden Figures,” which features the groups of women who worked as computers for NASA, in charge of all the calculating; as machines begin to take over, one character becomes adept at using and running a machine computer so that the process becomes more efficient.

Even in the building of modern computers and laptops, the way one interacts with the interfaces comes down to how humans interact with their office spaces, as mentioned in one of this week’s readings: “In 1974 Tim Mott was an outsider at Xerox PARC, working for a Xerox subsidiary on the design of a publishing system. He describes how the idea of a desktop came to him as part of an “office schematic” that would allow people to manipulate entire documents, grabbing them with a mouse and moving them around a representation of an office on the screen. They could drop them into a file cabinet or trashcan, or onto a printer. One of the objects in the office was a desktop, with a calendar and clock on it, plus in- and out-baskets for electronic mail.”

The computer, whether human or machine, is by its very nature a way of interpreting information; therefore, since we as people have ways of calculating things (equations, signals, using language and codes to send messages, etc.), this must be reflected in the mass creation of physical computers. As Professor Irvine discusses in his essay: “In the context of our computational and software screen metaphors, a computer device interface is a design module for enabling semiotic inter-actions with software and transformable representations for anyone taking up the role of the cognitive interpreting agent.”

This idea of ‘agency’ is vital for opening up computing to the general public. Tech companies had to give people agency by mirroring their thought patterns technologically, allowing human beings to interact with a machine the way they would with other physical, non-technical artifacts. One must be able to manipulate, identify, and understand the information presented on a screen in order to truly interact with it and prove the computer a reasonable object for use.

Another point from Manovich concerned remediation, a concept I was unfamiliar with prior to the reading. It makes me question even the use of ‘computer’ as the identity of the remediation machine. Given the vast network of products, from smart watches to AirPods to smart home devices, the idea of the computer or PC interface as the only form of remediation device must be altered slightly; perhaps the phrase ‘computational devices’ would be better suited. As we have broadened the idea of what can constitute a computer, we have also redefined remediation.

By Eish Sumra

References:

Martin Irvine, Computer Interface Design Concepts: Major Historical Developments

Bill Moggridge, ed., Designing Interactions. Cambridge, MA: The MIT Press, 2007.

Martin Irvine, Computing with Symbolic-Cognitive Interfaces for All Media Systems: The Design Concepts that Enabled Modern “Interactive” “Metamedia” Computers

Computation-culture as a loop

In its preliminary stages, computing required specific training sessions and templates for humans to manipulate symbols, which means it reflected the intellectual output of only limited groups of people. Even then, however, computers were never meant to replace humans as problem-solvers, but to serve as tools that aggrandize human abilities and intellectual capacity. Since it would be somewhat absurd to expect every person, with varied computational backgrounds and from different cultural contexts, to excel at computing, a more realistic approach was presented: programming computers to understand the options people give them, without having to “dumb” the computers down.

Interfaces nowadays are unprecedentedly user-friendly, and the notion of an interface is no longer circumscribed to hardware components; in a broader, more abstract sense, it incorporates human interpretation, culture, and intellect into the system. Humans are meant to be the actors who directly manipulate symbolic signs, which seems to be the trend for technologies of every kind in this era.

Although symbol interpretation was implemented in computer engineering after the “great conceptual leap” of the 1960s and 1970s, the concept itself was not a brand-new invention. In the macro sense of “computers” (artifacts that process signals and give feedback), televisions were invented well before personal computers. Back then, people, as interpreting agents, had already begun to use graphics, language, and video as symbols for cognitive interpretation of what was presented. But it was the implementation of symbol-manipulating software that promised a future in which symbolic input is accessible to the larger population. Through the symbolic orders that carry part of human culture from human actors to computers, the free will to utilize symbols returns to humans, handing back the power of inputting and interpreting.

Through these processes, humans better enhance their intellect and establish new rules and cultures for the real world, granting novel opportunities for future development in both computation and cognitive-symbolic systems. For humans, the greatest leap these concepts (“Augmenting Human Intellect,” the graphical user interface, etc.) brought about is probably the acceleration of the loop of culture construction, or even its re-establishment.

References:

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces for Computer Systems: History of Design Principles

Lev Manovich, Software Takes Command, pp. 55-106, on the background for Alan Kay’s “Dynabook” metamedium design concept.

What Makes It Simple?

Simplicity is a commonly recognized characteristic of a user-friendly product. If a product has useful functions and an elegant exterior design but cannot be used without conscious thought, it is not a product of the User Centered Design concept, which ensures usability through well-developed processes and principles. The design process includes: 1. Observation; 2. Ideation; 3. Prototyping; 4. Testing. A good product’s interaction with the user should be clear and easy enough to lower the threshold for using it, and in turn make it possible for the product to achieve business success.

However, simplicity as a business goal is only a small fraction of its huge positive effects on human society and culture. Simplicity is also a prerequisite for digital products that serve as substrates for humans’ abstract semiotic and cognitive systems. The abstract meanings in our minds need to be reflected in substantial physical structures in order to be preserved, so that we can pass our knowledge from generation to generation. These physical structures also ease our “memory job,” giving us more room to explore new ideas, although we may become more and more reliant on these substrates for our meaning systems, which conversely limits our creativity. What if digital books, like the Kindle or books on websites, were complex, and users needed to read an instruction manual in order to read books on them? Many users would choose paper books, and the remediation of the book would not be as influential as it is nowadays.

For designers, engineers, and computer scientists, simplicity is not simple at all. There are tons of principles, patterns, and hints to ensure its realization, and a long history of their development. Why is making a product simple so difficult? From my perspective, one reason is that the product itself is not simple. In some people’s opinion, simple objects come from simple designs. Although that might be true of some traditional artifacts, even the “simplest” digital products are built through sophisticated design, with help from predecessor products. In a “simple,” small audio memo application, we find the encoding and decoding processes elaborated in information theory: a transducer transforms the human voice into binary code to be stored in the virtual memo, roughly as the sketch below shows.
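A rough sketch of that encoding step (my own illustration, not any app’s actual code; the constants are assumptions): sample the sound wave, quantize each sample to a fixed number of levels, and store the results as binary integers.

```python
import math

SAMPLE_RATE = 8000   # samples per second (assumed, telephone quality)
LEVELS = 256         # 8-bit quantization (assumed)

def sample_tone(freq_hz, seconds):
    # Stand-in for the microphone transducer: a pure tone instead of a voice.
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE) for t in range(n)]

def quantize(samples):
    # Map each sample in [-1, 1] to an integer in [0, LEVELS - 1].
    return [round((s + 1) / 2 * (LEVELS - 1)) for s in samples]

voice = sample_tone(440, 0.001)            # eight samples of an A4 tone
stored = quantize(voice)                   # what actually reaches memory
print([format(v, "08b") for v in stored])  # the "binary code" in the memo
```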

In my opinion, the design concept that enabled “computers” to become more than big calculators is abstraction. The calculation of numbers in an early computer is concrete, but the operations on our cognitive representations in today’s computers are abstract. This is quite counterintuitive at first, since we tend to think that the more precise and concrete a thing is, the more easily it can be implemented, and that abstraction will mess things up. As mentioned in the book Great Principles of Computing, “abstraction is one of the most fundamental powers of the human brain. By bringing out the essence and suppressing detail, an abstraction offers a simple set of operations that apply to all the cases.” Abstraction is a tool that helps us transcend physical limitations and stimulates our imagination. How can our voice and face appear in a cell phone thousands of miles away? Because of abstraction: the phone abstracts (encodes) our voice and face and quantizes them into forms fit for presentation on the pixel screen of another cell phone. The sketch below makes the “simple set of operations” concrete.
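Here is a small sketch of Denning and Martell’s point as I read it (my own hypothetical code): one pair of operations, encode and decode, applies to every medium, while each medium’s concrete detail is suppressed behind the abstraction.

```python
from abc import ABC, abstractmethod

class Codec(ABC):
    # The abstraction: every medium offers the same two operations.
    @abstractmethod
    def encode(self, signal): ...
    @abstractmethod
    def decode(self, data): ...

class TextCodec(Codec):
    def encode(self, signal):
        return signal.encode("utf-8")   # text's concrete detail
    def decode(self, data):
        return data.decode("utf-8")

class GrayscaleCodec(Codec):
    def encode(self, signal):
        return bytes(signal)            # pixel brightness values 0-255
    def decode(self, data):
        return list(data)

def roundtrip(codec: Codec, signal):
    # Written once against the abstraction; it works for all the cases.
    return codec.decode(codec.encode(signal))

print(roundtrip(TextCodec(), "hello"))
print(roundtrip(GrayscaleCodec(), [0, 128, 255]))
```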

Simplicity sometimes stems from difficulty, and that is the history of the development of digital products.

 

References:

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles (essay). Read Parts 1-2.

Bill Moggridge, ed., Designing Interactions. Cambridge, MA: The MIT Press, 2007. Excerpts from Chapters 1 and 2: The Designs for the “Desktop Computer” and the first PCs.

Peter J. Denning and Craig H. Martell. Great Principles of Computing. Cambridge, MA: The MIT Press, 2015, chapters 7 (Memory) and 9 (Design). [Concluding background on computer system design.]

 

The Computer: From a Big Calculator to a Metamedium

Two conceptual leaps, “augmenting human intellect” and “cognitive design,” together with the introduction of the GUI in the 1980s, helped the computer evolve from a “big calculator” into a metamedium, a human symbol manipulator, and a problem solver.

The main design concept of the computer began with a “numerical and logical processing” machine. The first working stored-program computer, EDSAC, was huge and immobile. Its programs ran in a linear, uninterruptible sequence, and the user interface to the machine was its instruction set: the only way people could control the computer was by writing programs for it, with data and numbers represented in binary. The computer of that era was relatively passive, since it had little interactivity with humans; I see it as simply a data-processing machine. If I had been a nontechnical user at the time, I would never have thought of having any connection with such a machine, since it was neither user-friendly nor smart.

The further development of the computer, especially the development of the graphical user interface, makes the concept of “computers as big calculators” seem too limiting, because calculating is just one facet of the computer’s problem-solving function. The GUI not only preserves the original concept of an interface, anything that physically connects different parts of a system, but also enhances human interactivity with the computer. It works as a two-way mediator (input and output) between the human and the computer by enabling us to “delegate, extend, and off-load some processes of human symbolic cognition and agency to software.” It takes input from the human: for example, we type words with the keyboard and control the software with the mouse. We can further manipulate the input by changing the font, arrangement, and colors of those words through the interface. The interface displays the functions of computer software such as automatic spelling correction, hyperlinks, the movement of a manuscript, and so on. In the case of the hyperlink, the GUI imitates the process of reading and the affordances of the book, a human artifact, and of the library. Sections of a book are connected physically by paper and glue; books are connected together by the library. The hyperlink borrows this pattern of physical connection to link information together, as the sketch below suggests. It saves us the troublesome parts of doing research, so that we can acquire the information we need at a glance instead of going through all the sources and filtering them. That is also how the computer helps augment human intellect.
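A minimal sketch of that pattern (my own illustration; the page names are made up): documents are nodes, and links play the role of the paper, glue, and library shelving that once connected them.

```python
# Documents as nodes, hyperlinks as the connections between them.

pages = {
    "reading":   {"text": "On symbol systems...",        "links": ["engelbart", "kay"]},
    "engelbart": {"text": "Augmenting Human Intellect.", "links": ["kay"]},
    "kay":       {"text": "The Dynabook metamedium.",    "links": []},
}

def follow(name, depth=0, seen=None):
    # Traverse links the way a reader jumps between connected sources.
    seen = seen if seen is not None else set()
    if name in seen:
        return
    seen.add(name)
    print("  " * depth + f"{name}: {pages[name]['text']}")
    for target in pages[name]["links"]:
        follow(target, depth + 1, seen)

follow("reading")  # one glance gathers what book-by-book lookup would scatter
```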

The development of the GUI enables humans to communicate with the computer through a feedback circle of information. It helps the computer accept various kinds of commands from humans, helps humans make use of the hidden details and functions of the computer, and better connects the software inside the computer system.

References:

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces for Computer Systems: History of Design Principles

Lev Manovich, Software Takes Command, pp. 55-106, on the background for Alan Kay’s “Dynabook” metamedium design concept.

Peter J. Denning and Craig H. Martell. Great Principles of Computing. Cambridge, MA: The MIT Press, 2015, chapters 7 (Memory) and 9 (Design).

Human Symbol Manipulation and User Interface Building

Xueying Duan

The early computer was designed for human problem-solving, especially mathematical and logical processing. Although the device itself was big, it could deal with only a single task at a time and ran its orders in a linear sequence. Gradually, graphical elements were introduced, allowing users to interact with the machine through a displayed interface; this is what the definition of the GUI captures, and it yields different interaction experiences pointing to the same function. The “desktop” of the modern computer is a typical interface. The word desktop originally referred to the surface of a desk, which people use to manage personal belongings and multiple tasks. Considering the user’s demands and the affordances of this object, we arrive at the prototype of the modern computer we are now used to, which combines screen, memory, mouse, keyboard, and so on into an integrated device that lets the user interact across its layers. What interests me is the concept of the symbol manipulation system designed to let individuals transform symbolic ideas into machines. Human thinking is itself a kind of symbol manipulation, so people can put their intelligence into a machine to achieve some intention or action. As computer science develops, several methods now enable digital users to become creators using specific computer languages or software, opening what was once limited to computer scientists to everyone in the same virtual world. Human creativity and our cultural symbols can often lead us to new creations. And even when we believe we are sharing the same principles across user interfaces, significant distinctions remain, owing to different physical structures and perceptions.

Speaking of the idea of the “user interface,” I have been trying to think of an application or website that can customize its interface or functions according to users’ preferences or demands, and social platform services come to mind. Unlike a dedicated piece of software that can be arranged freely by the user’s own design, social media like Twitter or Instagram allow users to customize their following lists in order to receive what they prefer to watch. Traditional media like CNN and the Guardian mostly follow the pattern of cramming information into their audiences rather than letting users pick what they would like to see according to their personal interests. Social media, to some extent, fulfill one’s will to design one’s own content on top of a pre-existing program. Another product is Apple’s iOS system, which allows users to customize their dock, desktop, and gestures to promote a more fluent user experience. Although I currently cannot come up with a specific piece of software whose modules the user can freely reposition or repurpose, social media and iOS share the same design principle: the flexible tendency to decompose what used to be a whole system into different modules and then reconnect them via specific interfaces and, finally, as Manovich said, create a cultural medium.

References:

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces for Computer Systems: History of Design Principles (essay).

Lev Manovich, Software Takes Command, pp. 55-106, on the background for Alan Kay’s “Dynabook” metamedium design concept.

Bill Moggridge, ed., Designing Interactions. Cambridge, MA: The MIT Press, 2007. Excerpts from Chapters 1 and 2: The Designs for the “Desktop Computer” and the first PCs.

The magic of computer design

Zijing Wang

The computer has become an indispensable part of human society. We quickly take for granted that the computer was born to enhance daily human life, yet it took a long time to turn the computer into a cognitive artifact.

Doug Engelbart brought out the concept of “Augmenting Human Intellect.” He classified the human capabilities that can be enhanced into four areas: artifacts, language, methodology, and training. He believed that a computer should support the intrinsic abilities of human thought and extend humans’ capabilities in solving problems. In this way, the computer is not only a calculating machine but a symbol system that serves human cognitive interpretation. People and computers work as an integrated whole to deal with everyday problems. This concept broke down the barrier separating computers from ordinary people and created the possibility of designing an easier-to-understand computer.

To make the computer more accessible to the public, Engelbart also insisted that interaction between people and computers should happen in real time, so that users immediately see the results of manipulating the black box behind the machine. To that end he invented the mouse. It is hard to imagine today how difficult it would be to operate a computer without the cursor.

Later, in the 1970s, Alan Kay and his research group tried to turn the computer into a personal medium. They achieved this goal by simulating existing media together within the computer. Kay regarded the computer as a two-way tool that provides people with functions the non-digital versions of media do not have. He believed that simulation is a critical feature of the computer, which should be able to add many other functions to existing media. As in the example Lev Manovich mentions, when we use a word processor we can delete text or change its format, as the sketch below plays out. This makes the computer more comfortable for non-experts to use.
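A small sketch of that added property (my own, hypothetical): software that simulates an old medium, writing on a page, while adding something no paper ever offered, an undo.

```python
class Page:
    # Simulates a sheet of paper, plus an "undo" that paper cannot offer.
    def __init__(self):
        self.text = ""
        self.history = []

    def type(self, more):
        self.history.append(self.text)  # the new property: remembered states
        self.text += more

    def undo(self):
        if self.history:
            self.text = self.history.pop()

page = Page()
page.type("Computers are calculators.")
page.undo()                             # impossible with ink on paper
page.type("Computers are metamedia.")
print(page.text)
```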

The design principles of the graphical user interface also contributed greatly to the popularization of the computer. The Xerox Star first used the window design in 1981. Designers think of the user interface as a system connecting the displayed part with the invisible part, and the design of the interface requires a transparent representation of the symbolic engines beneath it. From the keyboard to the touch screen, the interface has made the computer an everyday object in daily life.

The magic of advancing the computer lies in putting human symbolic-cognitive capabilities at the center. No matter what our computers become, they will always serve human cognitive needs.

References:

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces for Computer Systems: History of Design Principles

Lev Manovich, Software Takes Command, on the background for Alan Kay’s “Dynabook” metamedium design concept.

Bill Moggridge, ed., Designing Interactions. Cambridge, MA: The MIT Press, 2007.

Design, Remediation, and Semiotics

Through the course of this week’s readings, my mind kept returning to two distinct but related questions. First, does the intentionality of figures like Alan Kay, Doug Engelbart, and others in designing computing systems to remediate “real world” commonplaces such as the desktop exempt them from the pattern of media theory, first observed by Marshall McLuhan, that “the content of the new medium is the old medium”? Second, are computational systems, and particularly commercial computational systems, unique among human artifacts in that the remediating action always involves symbols remediating other symbols, and not only stylistic imitations (as Manovich describes with the Gutenberg Bible imitating manuscripts or cinema imitating the theater)?

Regarding the first question, several of this week’s readings, including the Manovich and Moggridge excerpts, discussed the process of making cultural computing viable through the use of iconic remediations like the desktop metaphor and the GUI display. These histories also emphasized the role of the inventors in providing these breakthroughs in human-centered design for computers. And while these innovations certainly ought not to be trivialized or overlooked, they seem to carry with them a fact that is simultaneously self-evident and ontologically limiting: that computers remediate the way they do because they were designed to do so. On a certain level, this is irrefutably true. The actualized remediations that the digital citizen interacts with every day owe the debt of their existence to Stu Card, Larry Tesler, Doug Engelbart, and so on. But none of these figures invented remediation for computational media. In fact, the problem with formulating these innovations so as to equate them with remediation is that it obscures the fact that even before computers enlisted the help of icons for mainstream acceptance and use, they were already remediating something. Isn’t the digital substrate itself a remediation of Boolean logic? Doesn’t the command line interface remediate the syllogism? The point I am trying to make is that it seems unfair to imply, even unintentionally, that these men initiated remediation in the history of computing. They simply initiated the version of remediation that is actualized and recognizable to the mainstream computer user.

This leads me to my second question. There are, as noted in the cases of Boolean logic and the command line interface, instances where existing phenomena seem to be remediated by the computer in the same way that, say, the theater was remediated by filmmaking. However, a greater number of remediated objects on the computer must first go through a process of being represented by other symbols and finally reconstructed (as is the case with icons, the GUI, etc.). This seems categorically different from what traditionally occurred in remediation. In the case of the cinema or the printed book, the cultural practices of the old medium influenced the cultural practices of the new medium, which could then continue those practices or change them as society saw fit. In the case of digital iconicity, however, the old medium is semiotically constructed so as to communicate something to the user. In other words, its old cultural practices are insignificant, and the representation of the old medium is significant only as a device of communication to the user. Hence skeuomorphs like the floppy disk in the save icon, or the postage stamp for email. The difference between these two types of remediation seems implicit in our reading, but perhaps needs further working out.

Works Cited:

Lev Manovich (2013) Software Takes Command, New York, NY: Bloomsbury

Marshall McLuhan (1964) Understanding Media. Cambridge, MA: The MIT Press

Bill Moggridge, ed. (2007) Designing Interactions. Cambridge, MA: The MIT Press