Category Archives: Week 10

Qasim Week 10

Computer systems are a complex part of how we communicate with our devices. We also call the complexity of these systems a black box: a system that we continue to decode through many layers, redefining and reconceptualizing language and human interaction. We interact with these devices across multiple spectrums. One is visual: is the content on the screen appealing, and are we able to internalize it? This is where user interface and user experience design methods come in. We also have audio forms that make it easier for us to connect. Specifically, Siri and Alexa act as liaisons between the computer and the human on a much more personalized level. These systems (Siri, for example) are so complex that we are seeing an evolution in how they communicate with us: the voice is more human, they use more casual lingo, and they are sometimes a bit smart.

Data is transferable in how we socialize and in how socialization itself becomes data. As we remediate the representation of images, we are also learning how to do so digitally. Computing needs to be a medium conceptualized through accessible data, whether that is imagery, audio, or, for some people, touch. Combining data types into one interface is efficient, and the complexity lies not only in the programming but in how it appeals to us aesthetically.

Brad Myers delves into other ways we will communicate with computers, and one that struck me was “gesture recognition”. With this in place, we can greatly broaden who is able to use computers. Human intention is applied to interface and user design as we become more creative with these systems. Other modes Myers believes will shape how we communicate with computers, and ways we already do, include multimedia, three-dimensionality, and computer-supported work. Imagining these systems feels far-fetched, and they are hard to think of when we have become so accustomed to what we use now. It almost feels like imagining a new color.

Chutong, Week 10

At the beginning, the computer was just for calculating. The idea of “Augmenting Human Intellect” expanded the ability of the computer as well as our understanding of what a machine can do. Now we have information interaction between human and machine, and we can intuitively understand these interactions through the interface (based on physically perceptible images, sound, etc., and human symbolic cognition). The inputs and outputs, the sockets, and the screens are all parts of the interface.
 
PS:
The section on symbols in the reading reminds me of a website about the evolution of icons: https://historyoficons.com/. We can see how people use different images to represent the same meanings.
 
Example: WeChat
 
WeChat’s logo is a very typical chat-application logo. Like iMessage and WhatsApp, it has a speech bubble inside (a form with a long history in print and comics).
 
Understanding the icons:
The main page of WeChat is quite clear even if you don’t read Chinese. The first icon below refers to messages; the second refers to the contact list; the third refers to Moments (like Twitter), and this icon looks like a compass (similar to the logo of Apple’s browser, Safari); and the fourth is for profile settings. Understanding these icons is based on our previous experience and shared cognitive knowledge in our community. For example, the reason we can distinguish our own profile from the contact list is that we usually use three or more lines as an image index for “list,” so we can quickly understand why they designed the icon that way.
How we sense all the elements on our screen:
Images visible to the human eye need pixels to appear on the screen. Each pixel is made up of three colors, red, green, and blue, arranged densely on the screen, which together present any graphic with different color values. The screen is composed of many pixels, and (in an LCD) each one has red, green, and blue filters behind it. Each filter blocks the white backlight except for a single color, so the white light passing through the three filters is broken down into red, green, and blue rays that enter the eye. Because a pixel is extremely small and the filters are so close together, we cannot distinguish the three beams of light when they reach our eyes. In other words, the mixing of light in the human eye is what gives a pixel its “color.”
A complete path from an input device to the information displayed for human eyes goes like this: (application/input device) data and instructions → CPU → graphics card driver → graphics card → display on your screen → eyes.
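A toy sketch in Python may make the pixel idea concrete (the 4×3 “screen” and function names below are invented for illustration, not any real graphics API): an image is just a grid of pixels, and each pixel stores three subpixel intensities that the eye mixes additively.

# A toy 4x3 "screen": each pixel is three subpixel intensities (R, G, B), 0-255.
WIDTH, HEIGHT = 4, 3
framebuffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, r, g, b):
    """Store the red, green, and blue intensities for the pixel at (x, y)."""
    framebuffer[y][x] = (r, g, b)

# Additive mixing: equal red, green, and blue appear white to the eye;
# red plus green with no blue appears yellow, and so on.
set_pixel(0, 0, 255, 255, 255)  # white
set_pixel(1, 0, 255, 255, 0)    # yellow
set_pixel(2, 0, 0, 0, 255)      # blue

for row in framebuffer:
    print(row)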
 
Question 1: 
The reading about touch screens satisfied my curiosity about how a touch screen works. Now I can understand why fingers work well on a touch screen while gloves don’t. But I still have questions. In a lot of experiments that I’ve done (as a Tap Tap Fish fan, I’ve tried all kinds of materials on the touch screen to save my time), I’ve found that oranges work but pens don’t, even though a pen can be conductive, and pointy conductive objects in general don’t work. Does this mean that the touchscreen sensor is also affected by the contact area? Can I read this as “the orange has a large contact area on the screen, so it can effectively register a touch, while the pen tip is too sharp, so it can’t”? If yes, why does the Apple Pencil work?
 
Question 2: 
People always say Apple has a closed system, and the term appears in the reading as well, which says the Macintosh PC is “the beginning of black-boxed, closed software systems” (Irvine, p. 10). Could you explain more about what a closed system means in computer science? Because if I hadn’t studied this course, a Microsoft PC would also be a black box for me; it would also be a closed box full of mysteries.
 
 
References:
Crash Course Computer Science: Technical Background on PCs and GUI Interfaces

Qi Wang Week 10

After this week’s reading, I found the evolution of the interface very interesting. The earlier model was batch processing: a series of operations in a program run on the computer without manual intervention, and it is non-interactive (everything on the card is fixed). Strictly speaking, it is a processing procedure: data stored on punch cards is processed, rather than individually manipulated. If you wanted to create a data file or program and use the data on other computers, the only way was to use punch cards. Then came the early GUI. Douglas Engelbart proposed a new idea, Augmenting Human Intellect, which rejected the notion that computers can only solve basic mathematical problems. His new system had a mouse, a bitmapped screen, and hypertext, all of which laid the foundation for the development of modern desktop operating systems. Next came Xerox PARC (Palo Alto Research Center). Inspired by Engelbart’s demonstration, Xerox researchers developed the WIMP model in the early 1970s. It has windows, icons, menus, and a mouse pointer, and it is still in use today.

Based on the reading, the professor explains that the human semiotic ability answers the question of why we need an interface. He writes that human sign systems must have a physical substrate for humans to recognize the patterns and understand the meaning, values, and intention of signs (Irvine), and this substrate is the “interface.” After 1960 there was a big leap in computer system design: the computer screen was no longer used just to display the results of processing; it became a channel for working with the system. In this “two-way” system, the human’s intention and response are another input that gives instructions to the system. Just as the graph of the symbolic process cycle in interactive computing system design shows, the human’s action goes back to the system and commands the computer to make further output (Irvine).
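A minimal sketch in Python may help make this cycle concrete. Every name here is invented for illustration; the point is only the loop of system representation, human response, and re-representation that Irvine describes.

# A minimal sketch of the "two-way" cycle: all names are invented for illustration.

def render(state):
    """System representation: present the current state to the human."""
    print("[screen]", state)

def update(state, action):
    """The system takes the directed action and produces a re-representation."""
    return state + [action]

state = ["welcome"]
while True:
    render(state)                      # system representation
    action = input("your action> ")    # human interpretation -> directed action
    if action == "quit":
        break
    state = update(state, action)      # re-representation from the system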

Human-computer interaction now happens mostly through graphical user interfaces: Windows, macOS, iOS, and Android are all graphical user interfaces. At this stage, users mostly use their hands and eyes to give input to the computer or take in output from it. Just as the computer receives the user’s input in diversified forms, it also transmits different kinds of output to the user. For example, users can not only tap on the screen with a finger to provide input, they can also use audio to give instructions. Likewise, output to the user has changed from a single text form to charts, menus, graphs, and other forms.

 

Reference:

Martin Irvine, From Symbol Processing & Cognitive Interfaces to Interaction Design: Displays to Touch Screens as Semiotic Interfaces.
Martin Irvine, Computing with Symbolic-Cognitive Interfaces for All Media Systems: The Design Concepts that Enabled Modern “Interactive” “Metamedia” Computers.

From app to app

As I was trying to ‘decode’ the writing questions for this week and going through the material, I was trying to think of a good example of a software feature that represents a symbolic-cognitive function we basically take for granted. I realized that the concept of “space” in an interface is something quite abstract when it comes to our phones and computers. I was on my phone switching from one app to another without comprehending what I was actually doing, but I was most definitely expecting the apps to open one after the other, or switch from one to the next, in no time. We have taken this movement from app to app or one tab to the next, opening and closing programs, as if it were no big deal. But in reality what we are doing is switching from one understanding of a specific room, space, or concept to the next, as if we were going from one physical store to another, or from one room of our house to another. Each store in town and each room in your house has a different use/meaning/understanding that you or your community have attributed to it. Similarly, every app represents something else: one is a game, another is an online store, another holds your stocks. We are changing from one space to the next because that is what we also do in real life. You have to go to a different place to see your doctor than you would to buy a piece of furniture. Different spaces and different rooms satisfy different needs and wants, the same way each app we swap between does.

C.S. Peirce explains how a sign structure is basically a medium that “enables cognitive agents to go beyond the physical and perceptible properties of representations to their uses for interpretations which are not physical properties of representations themselves” (Irvine). We have an understanding of what it means to switch between apps, and when you have made that switch, let’s say from your Canvas app to Instagram, you have most likely, completely unconsciously, also switched your head-space, your behavior, your goals, and your expectations, because each app represents and stands for something else. There is also a lot to be said about the symbolic representation of each app and the meaning behind its specific shape, frame, coloring, etc. Even that has become so instilled in us that whenever a software update changes an icon, most of us are left shocked by the new appearance of the app because of our previous association with its past appearance, e.g. Google changing all its icons, or going from the older Instagram icon with the brown Polaroid camera to the current one (and everything in between).

We use technology for a more simplified version of our lives, where everything becomes easier because everything truly is so easily accessible with just simple movements of our fingertips. However, these versions exist because we have taken much more complicated physical, real-life representations and decoded them into an electronic, computational form, because they mean something to us: “All symbolic forms from speech to images in any medium must have perceptible features that ‘afford’ consistent inferences for recognizing the sign patterns that can be correlated with the meanings and uses understood by a community” (Irvine). And so we have taken this concept of a physical space and a physical human action and turned them into a digital world that unfolds itself without being second-guessed. You don’t really sit and think, “hmm, I wonder how I’m going to get to this app…” or “I wonder how long it is going to take me to get to the library app…”. In non-digital life, we would probably be thinking, “hmm, what time do I need to leave my house to go to the clothing store and then the library? What if there is an accident on the way?” We have mastered this interface of being able to switch locations and tasks so fast, without realizing that in that moment we are also mentally going from one concept to the next: from being in school on Zoom to shopping for food on Amazon Fresh, representations of what would be a physical classroom with all of its associations and an actual physical grocery store.

 

 

References

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles 

Martin Irvine, From Cognitive Interfaces to Interaction Designs with Touch Screens

Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012. Excerpts from Introduction and Chapter 2.

 

Fordyce Week 10

Human-Computer Interaction (HCI) is a field that only continues to grow. Computing systems began as calculating machines, but with development they became symbol processors with highly advanced user interfaces that allow a more fluid relationship between the user and the computer’s input/output. The way we use computers today has made them near extensions of our physical beings. In Myers’s article, he writes, “Even the World Wide Web is a direct result of HCI research: applying hypertext technology to browsers allows one to traverse a link across the world with a click of the mouse” (Myers). Computing systems are what allow the information to exist, but HCI research supplies the developments in computation that make it easily accessible to users. Without HCI, most of us wouldn’t be able to access that information. Myers helps delineate the conceptual leaps and “explosive growth” that have occurred in the HCI field – he provides a useful timeline for the development of the everyday things we have become extremely accustomed to:

[Timeline graphic from Myers’s article]

This graphic is useful for understanding which core elements of computing systems were developed as HCI concepts. Something as simple as the “direct manipulation interface” – where we use the mouse to move objects on a screen (in other words, being able to grab objects, move them, and change their size) – was something that had to be developed as an interaction design concept. Everything seems so obvious now because of the ease with which we do it, but these weren’t originally obvious concepts to develop. Most of what I did to create this very post originated as some kind of interaction concept.
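As a rough illustration of direct manipulation, here is a minimal sketch using Python’s built-in tkinter toolkit (the square, its size, and its color are arbitrary choices for the example): pressing on the shape and dragging maps the pointer’s movement directly onto the on-screen object.

# A small sketch of "direct manipulation": drag the square and it follows the mouse.
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=300, height=200)
canvas.pack()

# The on-screen object the user manipulates directly.
square = canvas.create_rectangle(20, 20, 70, 70, fill="steelblue")

def drag(event):
    # Re-center the square on the pointer: hand movement maps onto the object.
    canvas.coords(square, event.x - 25, event.y - 25, event.x + 25, event.y + 25)

# While the left mouse button is held down over the square, follow the pointer.
canvas.tag_bind(square, "<B1-Motion>", drag)
root.mainloop()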

As Irvine explains in his paper, “the history of designs for books, libraries, and techniques for linking one section of text to another, or one book to another… have long histories, and many of the early concepts and techniques underlie our more recent concepts for hypertext and hyperlinking in GUI interfaces” (Irvine). It’s interesting to compare our modern computational practices with historical means of communication and idea sharing – much of it is rooted in the same concepts, only today it exists in a more machine-like, efficient form. Computation-specific symbolic cognition (how we understand symbols, generally speaking) has become a kind of learned language because of how second nature it now feels.

 

 

References

Brad A. Myers, “A Brief History of Human-Computer Interaction Technology,” Interactions 5, no. 2 (March 1998): 44-54.

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles (intro essay).

Week 10 – Yanjun Liu

“Interactive system and interface designs begin with understanding the nature of our symbolic capacities and symbolic systems, and why we always need material representational “interfaces” for “interacting” with symbolic forms that we create, receive, and interpret in a community of other interpreters.” (Irvine, 2020)

Just as Professor Irvine mentions in the article, the nature of our meaning-interpretation process determines how we design interfaces and establish human-computer interaction. Take the “compiler” and “interpreter” of a computer system as an example: they are created to process input information from the human side and translate it into instructions the computer can carry out.
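As a toy illustration of that translation (the mini-language below is invented for this example, not any real instruction set), an interpreter can be thought of as a program that reads symbols meaningful to humans and carries out the machine operations they stand for:

# A toy "interpreter" for an invented mini-language: it reads human-readable
# symbols and turns them into operations the machine actually performs.

def interpret(line):
    op, *args = line.split()
    nums = [float(a) for a in args]
    if op == "ADD":
        return sum(nums)
    if op == "MUL":
        result = 1.0
        for n in nums:
            result *= n
        return result
    raise ValueError("unknown instruction: " + op)

print(interpret("ADD 2 3 4"))  # symbols typed by a human -> 9.0
print(interpret("MUL 2 5"))    # -> 10.0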

Meanwhile, the computer screen can also be regarded as a symbol processor because it displays both the input and output in visual ways that can be captured by human eyes. 

“The layer or level for HCI/UX design is in what we call the “user facing” or “presentation” layer in a PC or device. This design principle now enables a user as “agent” to become the conductor of the “orchestration,” that is, giving instructions and choosing directions for the software to follow in a two-way dialogic interface:

system representation + human agent interpretation → human agent response as directed action with an interface → re-representation from system.” (Irvine, p. 5)

Let’s take Adobe Premiere as an example to explain how symbols can be built into software and given new meanings through interaction between the human and the software, following the cycle above.

I open a video in Adobe Premiere, and I want to create a new video with a new story line and new background music by editing it. So I drag the video into the editing track and start cutting, pasting, deleting, and attaching music as well as filters to it; then I put all the components together and click the export button, and the software generates a 2.0 version of the video that is automatically saved on my computer.

In this process, I am generating new meanings by interacting with the software (Adobe Premiere) and using the functions it offers inside the application.
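A rough sketch of that interaction, with invented names rather than Adobe’s actual API, might model each editing action as an operation added to a timeline, with “export” applying them all to produce the new version:

# Invented names, not Adobe's API: each action appends an operation to a
# timeline, and "export" applies them all to produce the 2.0 version.
timeline = []

def cut(start_sec, end_sec):
    timeline.append(("cut", start_sec, end_sec))

def add_music(track):
    timeline.append(("music", track))

def add_filter(name):
    timeline.append(("filter", name))

def export(source):
    """Combine the source video with the accumulated edits into a new artifact."""
    return {"source": source, "operations": list(timeline), "version": "2.0"}

cut(0, 30)
add_music("new_background.mp3")
add_filter("warm_tone")
print(export("my_video.mp4"))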

My question: It seems that humans have invented computers and computer peripherals based on their own physiological features, such as screens for eyes, keyboards and mice for hands, and audio equipment for ears. So if humans did not have the physiological structure we know today, but had, for example, eyes on the soles of their feet and hands on their necks, would we have designed our “computers” differently?

 

Reference

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles (intro essay).