Qi Wang Week 10

After this week’s reading, I found the evolution of the interface very interesting. The earliest model was batch processing: a series of operations run on the computer without manual intervention, entirely non-interactive (everything on the card was fixed). Strictly speaking, it was a processing procedure: the computer processed data stored on punch cards rather than responding to individual manipulation. If you wanted to create a data file, or a program that could use that data on another computer, the only way was through punch cards. Then came the early GUI. Douglas Engelbart proposed a new idea, “Augmenting Human Intellect,” which rejected the notion that computers could only solve basic mathematical problems. His new system featured a mouse, a bitmapped screen, and hypertext, all of which laid the foundation for modern desktop operating systems. Next came Xerox PARC (the Palo Alto Research Center). Inspired by Engelbart’s demonstration, Xerox researchers developed the WIMP model in the early 1970s. Its windows, icons, menus, and mouse pointer remain in use today.

Building on the reading, the professor noted that humans’ semiotic ability answers the question of why we need an interface. He explained that human sign systems must have a physical substrate in order for humans to recognize patterns and understand the meanings, values, and intentions of signs (Irvine). This substrate is the “interface.” After 1960, there was a great leap in computer system design: the screen was no longer used merely to display the results of processing; it became a channel for working with the system. In this “two-way” system, the human’s intentions and responses become another form of input that gives instructions to the system. As shown in the graph of the symbolic process cycle in interactive computing system design, the human’s action feeds back into the system and commands the computer to produce further output (Irvine).

Human-computer interaction today largely means graphical user interfaces: Windows, macOS, iOS, and Android are all GUI-based. At this stage, users mostly rely on their hands and eyes to give commands to the computer and to receive its output. As the computer accepts user input in more diversified forms, it also transmits output in different forms. For example, users can not only tap the screen with a finger to provide input but also issue instructions by voice. Likewise, the output presented to users has expanded from plain text to charts, menus, graphs, and other forms.


References:

Martin Irvine, “From Symbol Processing & Cognitive Interfaces to Interaction Design: Displays to Touch Screens as Semiotic Interfaces.”

Martin Irvine, “Computing with Symbolic-Cognitive Interfaces for All Media Systems: The Design Concepts That Enabled Modern ‘Interactive’ ‘Metamedia’ Computers.”