Week 10 – Yanjun Liu

“Interactive system and interface designs begin with understanding the nature of our symbolic capacities and symbolic systems, and why we always need material representational “interfaces” for “interacting” with symbolic forms that we create, receive, and interpret in a community of other interpreters.” (Irvine, 2020)

Just as Professor Irvine notes in the article, the nature of our meaning-interpretation process shapes how we design interfaces and establish human-computer interaction. Take the “compiler” and “interpreter” of a computer system as an example: they are created to process symbolic input from the human side and translate it into instructions the machine can execute.
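To make that point concrete, here is a minimal sketch in Python (my own illustration, not from the reading): a toy interpreter that parses a human-readable arithmetic expression into a symbol tree and then evaluates it. The names `interpret` and `OPS` are hypothetical helpers invented for this sketch.

```python
# A toy interpreter: human-side symbols -> structure -> machine actions.
import ast
import operator

# Map symbolic operator nodes to executable machine operations.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def interpret(node):
    """Walk the parsed symbol tree and evaluate each node."""
    if isinstance(node, ast.Expression):
        return interpret(node.body)
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](interpret(node.left), interpret(node.right))
    if isinstance(node, ast.Constant):
        return node.value
    raise ValueError(f"Unsupported symbol: {node!r}")

expression = "3 + 4 * 2"                  # symbolic input from the human side
tree = ast.parse(expression, mode="eval") # parsing: symbols -> structure
print(interpret(tree))                    # interpretation: structure -> 11
```

The two steps mirror the division of labor in the paragraph above: parsing handles the symbolic form, and interpretation carries out the machine-executable "orders."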

Meanwhile, the computer screen can also be regarded as a symbol processor, because it displays both the input and the output in visual forms that human eyes can perceive.

“The layer or level for HCI/UX design is in what we call the “user facing” or “presentation” layer in a PC or device. These design principles now enable a user as “agent” to become the conductor of the “orchestration,” that is, giving instructions and choosing directions for the software to follow in a two-way dialogic interface:

system representation + human agent interpretation → human agent response as directed action with an interface → re-representation from system.” (Irvine, p. 5)
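The loop quoted above can be sketched as a small, runnable program (my own illustration; the function names are hypothetical, not from the essay): the system presents its state, the human agent issues a directed action, and the system re-represents the result on the next pass.

```python
# A minimal sketch of the two-way dialogic interface described above.
def render(state):
    """System representation: turn internal state into a visible symbol string."""
    return f"[document: {' | '.join(state) if state else '(empty)'}]"

def dialogic_loop():
    state = []                                # the system's internal model
    while True:
        print(render(state))                  # representation -> interface
        action = input("add <word> / undo / quit> ").strip()  # human response
        if action == "quit":
            break
        elif action == "undo" and state:
            state.pop()                       # directed action changes state
        elif action.startswith("add "):
            state.append(action[4:])          # ...and is re-represented next pass

if __name__ == "__main__":
    dialogic_loop()
```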

Let’s take Adobe Premiere as an example of how symbols are built into software and take on new meanings through interaction between the human and the software, following the loop above.

I open a video in Adobe Premiere, wanting to create a new video with a new storyline and new background music. I drag the video onto the editing track and start cutting, pasting, deleting, and attaching music and filters; then I assemble all the components and click the export button, and the software generates a “2.0” version of the video that is automatically saved on my computer.

In this process, I am generating new meanings by interacting with the software (Adobe Premiere) and using the functions it offers inside the application.
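As a hypothetical sketch of that workflow (this is not Adobe's actual API; the classes and names here are invented for illustration), the editing track can be modeled as a data structure that the user's directed actions update, with the export step as the system's re-representation:

```python
# Hypothetical model of a timeline: clips on a track -> exported "2.0" video.
from dataclasses import dataclass, field

@dataclass
class Clip:
    name: str
    start: float   # seconds into the source video
    end: float

@dataclass
class Track:
    clips: list = field(default_factory=list)

    def cut_and_add(self, name, start, end):
        """Cutting/pasting: the user's directed action updates the model."""
        self.clips.append(Clip(name, start, end))

    def export(self, music=None):
        """The button click: the system re-represents the edits as output."""
        total = sum(c.end - c.start for c in self.clips)
        soundtrack = f" + music '{music}'" if music else ""
        return f"video 2.0: {len(self.clips)} clips, {total:.1f}s{soundtrack}"

track = Track()
track.cut_and_add("intro", 0.0, 5.5)
track.cut_and_add("scene2", 12.0, 20.0)
print(track.export(music="lofi.mp3"))
```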

My question: It seems that humans have invented computers and computer peripherals based on our own physiological features: screens for eyes, keyboards and mice for hands, and audio equipment for ears. If humans did not have the physiological structure we know today but had, for example, eyes on the soles of their feet and hands on their necks, would we have designed our “computers” differently?


Reference

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles (introductory essay), 2020.