Category Archives: Week 10

Interaction of ‘Face ID’ and other unlocking features

Perhaps one of the biggest technological leaps, in both hardware and software, has been in phone security. We have graduated from unlock buttons, to PIN codes, to fingerprint sensors, to unlocking a device with nothing more than our unique facial features. Hardware has had to keep pace with these changes, with sensors, touchpads, and the like being employed for security purposes. Software, in turn, has had to learn to memorize and detect individual features in order to verify that the person attempting to sign in is the device's owner.

Face ID, which has become Apple's primary mode of unlocking the iPhone, illustrates many elements of the 'symbolic process cycle' outlined in our readings. The user's natural facial appearance serves as the 'representation'; the software then interprets it as 'the user is indeed the one attempting to access the phone', and the corresponding software action is unlocking the device. A toy sketch of this cycle appears below.
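
A toy version of that interpretive loop, in Python, might look like the following. Everything here is invented for illustration (the feature vectors, function names, and match threshold); it is a sketch of the cycle, not Apple's actual system.

```python
# Toy sketch of the Face ID "symbolic process cycle":
# sensor data (representation) -> interpretation -> software action.
# All values and names are hypothetical, not Apple's implementation.

MATCH_THRESHOLD = 0.95  # assumed similarity cutoff

def similarity(a, b):
    """Cosine similarity between two toy feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def attempt_unlock(scanned, enrolled):
    score = similarity(scanned, enrolled)   # interpretation step
    if score >= MATCH_THRESHOLD:            # "this is indeed the user"
        return "unlocked"                   # corresponding software action
    return "locked: enter passcode"         # fallback keeps user in control

enrolled = [0.80, 0.10, 0.60]               # stored at enrollment
scan = [0.79, 0.12, 0.61]                   # captured at the lock screen
print(attempt_unlock(scan, enrolled))       # -> "unlocked"
```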

The design of this feature also reflects many of the 'golden rules' we read about. There is consistency in how one must present oneself to the phone's camera: an unobstructed, head-on view. There is universal usability, with no age restriction or discriminatory practice excluding anyone from the feature. There is informative feedback, with the visual padlock animation and the opening of the phone's screen to the home page. The feature also keeps users in control of their actions and their hardware.

References:

Martin Irvine, From Cognitive Interfaces to Interaction Designs with Touch Screens.

Ben Shneiderman’s Eight Golden Rules for Interface Design

Design Interaction: Amazon App as a Shopping Medium

MEDIUM: “Material is an adaptable system of guidelines, components, and tools that support the best practices of user interface design. Backed by open-source code, Material streamlines collaboration between designers and developers, and helps teams quickly build beautiful products.” – Google

On our mobile phones, the Amazon App opens onto a user interface that presents the shopper with a simulated market experience. Goods are arranged by department to make sorting easy and fast, with a live banner running ads from popular vendors, including Amazon's direct marketing for Amazon Prime features, discount codes, prepaid shopping vouchers, coupons, and so on. There is also a shopping bag, which is more or less the trolley/cart of a physical store, and a checkout page dedicated to reviewing items and processing payments, just like the cashier stand at a store. Vendors also have dedicated pages, like storefronts, where products are listed and adequately described so that a buyer can get a real sense of what he or she might be getting. Some storefronts use video for this and even accept third-party videos in their review sections to describe products in even more depth.

Some shopping behaviors unique to humans are also taken into consideration, as the App allows for window shopping without the commitment of making a purchase. Buyers can also 'save items for later' in case buying isn't an option at the moment or certain contemplations arise while using the App; at a later time, they can simply check out the saved items without having to search for them all over again. A minimal sketch of this pattern appears below.
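
As a minimal sketch of that pattern, assuming a simple two-list model (the class and method names are hypothetical, not Amazon's code):

```python
# Toy sketch of the cart / "save for later" pattern described above.
# Names are invented for illustration, not Amazon's API.

class ShoppingSession:
    def __init__(self):
        self.cart = []       # items committed to checkout
        self.saved = []      # window-shopped items kept for later

    def save_for_later(self, item):
        if item in self.cart:
            self.cart.remove(item)
        self.saved.append(item)       # kept, no re-searching needed

    def move_to_cart(self, item):
        if item in self.saved:
            self.saved.remove(item)
            self.cart.append(item)

session = ShoppingSession()
session.cart.append("desk lamp")
session.save_for_later("desk lamp")   # contemplation arises mid-shop
session.move_to_cart("desk lamp")     # later: check out without searching
print(session.cart)                   # ['desk lamp']
```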

For all of these interactions to be possible, the App makes heavy use of varying text, audio, and visual media forms, all built in and accessible through the App's graphical user interface. Video, audio, text, and photo formats follow standardized designs that allow for interoperability within the App and outside it (e.g., a review video can be played via a third-party application like an external media player on the device, or embedded links in the product description can redirect to a YouTube page). In building the App, we can observe how the designers of the Amazon graphical user interface follow sets of universally laid-down principles, theories, and guidelines to accommodate these media forms while customizing the shopping experience for users based on human cognitive and behavioral factors.

Below are some translations of standard design principles into visible features of the Amazon App:

PRINCIPLE | FEATURE
STYLE: Standardize task sequences. | The checkout sequence is uniform for all users.
STYLE: Descriptive links. | Links are labeled in each category (e.g., 'Programs and Features', 'Deal of the Day').
ACCESSIBILITY: Predictable pages. | The App suggests products based on items viewed on each page (e.g., 'Related to items you've viewed').
DISPLAY: Flexibility for user control of data display. | A private view of the shopper's personal profile allows for more personalization of storefronts within the App.

These and other basic design principles combine with a degree of direct manipulation and customization (e.g., simple buttons, reversible actions, meaningful visual metaphors) to make the overall shopping experience on the graphical user interface feel seamless and automated, making the Amazon App a relatively user-friendly one.

This user-friendliness might also help explain the many successes of Amazon as a company in the e-commerce/marketplace business, which, according to CBS News, recently reported one billion US dollars in net worth. It is evident, though, that considerations for disabled users, the visually impaired for example, are not presently accommodated by the Amazon App's graphical user interface design.

References:

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles. Part 3.

Ben Shneiderman, Catherine Plaisant, et al. Designing the User Interface: Strategies for Effective Human-Computer Interaction. 6th ed. Boston: Pearson, 2016.

Ben Shneiderman, Eight Golden Rules for Interface Design. University of Maryland.

Hybrid Media, Affordance and Previous Design Research

Xueying Duan

For this week, the introduction of hybrid media helps me understand the black box of product design. Hybrid media can be seen as a mix of various media forms: it enables multiple function modules to be combined in one integrated application and to serve the overall operation. It is like taking the application apart into several layers. The first layer, the interface, is the external layer that presents us with the final graphic presentation. The second layer may contain the GPS function, the photo-shooting function, the payment function, and so on. Below that sit the large database, the algorithms, and the deeper programs, down to binary numbers. This kind of hierarchy, and the overlaps between layers, is present at every stage of an electronic product's development. A simple sketch of such layering appears below.
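
Here is one way to picture that layering in code, with invented module names standing in for a real app's architecture:

```python
# Toy sketch of the layering described above: an interface layer that
# delegates to function modules, which in turn sit on a data layer.
# All names are illustrative, not any real application's design.

class DataLayer:                          # deeper layer: storage/algorithms
    def lookup(self, key):
        return {"last_position": (38.9, -77.0)}.get(key)

class GPSModule:                          # middle layer: one function module
    def __init__(self, data):
        self.data = data
    def current_position(self):
        return self.data.lookup("last_position")

class Interface:                          # external layer: what users see
    def __init__(self, gps):
        self.gps = gps
    def render_map(self):
        lat, lon = self.gps.current_position()
        print(f"Drawing map centered at ({lat}, {lon})")

app = Interface(GPSModule(DataLayer()))
app.render_map()                          # only this layer is ever visible
```

Because each layer only talks to the one beneath it, any layer can be redesigned, or black-boxed, without the user ever noticing.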

For a long time, many companies have been working to optimize the affordances of a single device. Just look at the screen of a smartphone: every upgrade aims to fit more pixels into a limited space. The same goes for the camera. HUAWEI recently released a smartphone with a super-sensing and telephoto camera, which shows not only the improvement of camera design but also the development of screen displays. Although I don't entirely agree with the obsession with upgrading camera precision and variety, it does represent the technological tendency to maximize affordances.

Moreover, given the ambitions of internet companies and the intensity of market competition, many homogeneous products have appeared that share similar functions but differ in their design details. For me and for some designers, the process of launching a new product or new function is a process of communicating with your consumers. The focus should be on how to keep consumers interested in learning and accepting your application, and how to let them explore your product painlessly while following your principles. Consider the interfaces of two similar apps, Booking and Ctrip. First of all, when I was taking screenshots of the two apps, I noticed that Ctrip (a Chinese platform) cannot switch its display language to English, which I find a little inconvenient.

Comparing their welcome pages: Booking presents choices like destination and dates precisely in the middle of the screen, while Ctrip, because it integrates so many functions in one app, requires users to find the hotel-booking module among many options. In the next few steps, both offer very good filtering of results, by type or brand for example. But at the checkout stage, Booking usually charges exactly the price presented on the results page, while on Ctrip I sometimes find that the price at checkout differs slightly from what was shown earlier. Moreover, every time I make a booking on Booking during a trip, it pre-fills the search box with my booked dates and suggests destinations or routes for the available days. Although these suggestions are not always precise or useful, I can still see the innovation and the better user experience.

I also noticed that research on gesture recognition was first performed in corporations, rather than in universities, where most of the other innovations took place first. My explanation is that, in the manufacture of electronic devices, gesture recognition arose when corporations were trying to improve the user experience of early electronic devices, whereas university labs, funded in part by government, tended to focus more on the basic building blocks of the whole industry, like the mouse, windows, and so on. I am still a little confused about the big gaps between these fields of technology and about the difference between corporate research and commercial products, and I would very much like to hear more explanation of how those distinctions between categories arose.

References:

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles (essay). Read Part 3.

Martin Irvine, (New Intro) From Cognitive Interfaces to Interaction Designs with Touch Screens.

Brad A. Myers, “A Brief History of Human-Computer Interaction Technology,” Interactions 5, no. 2 (March 1998): 44-54.

Interface design principles of Google

zijing

Interaction design has a significant impact on the ways users interact with computers. When users' expectations match the actual behavior of the computer, a pleasant experience of agency emerges. Interaction design should develop interactive systems that arouse positive reactions, so that users feel relaxed and comfortable and enjoy the experience. In the following paragraphs, I will apply some of Murray's principles to Google.

One of the design principles Murray mentions is transparency. She thinks an excellent interface should draw users' attention to the primary function of the system naturally rather than compulsorily. As the picture shows, after opening Google, users cannot ignore the search bar at the center of the page. The search bar also gives clear direction on how to use it: 'Search Google or type a URL.'

Another design principle is multiple instantiation. The interactive interface is not a fixed object; it should offer various choices for users to explore. The shortcuts below the search bar accommodate different users' habits: they can add their most-visited websites under the search bar, which enhances efficiency. Moreover, in the lower right corner there is a Customize button. By customizing colors and themes, users can create a unique experience.

Last but not least is visuality. In the upper right corner of the page, we see four buttons. It is easy for users to understand their different functions because each icon depicts the function it stands for. The button showing nine dots follows common sense in users' expectations: everyone knows it means "more." More importantly, all the icons are designed after physical objects. An envelope represents Gmail, a sheet of paper represents Google Docs, and a camera represents Google Meet. All these designs reflect the principle of visuality.

References:
Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles
Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.
Brad A. Myers, “A Brief History of Human-Computer Interaction Technology,” Interactions 5, no. 2 (March 1998): 44-54.

Interactive interface and touch screen

The touch screen plays a crucial role in the success of interactive surfaces. We touch it and it responds; we take such interaction for granted. Touch screen designs, which are based on the x/y coordinate pixel map of the screen, can be grouped into two types. Resistive screens react to pressure on the screen; they appear in some old Nokia phones, the Nintendo DS Lite handheld game console, the electronic signature pads of delivery companies, some in-car screens, and so on. I find their reaction rather slow and lacking in sensitivity. Capacitive screens, on the other hand, react to changes in the voltage passing through the invisible wires under the screen; they are used in the iPhone, mall navigation kiosks, and the like. Although a capacitive screen reacts quickly and accurately to our touch, it does not work with objects that do not conduct charge the way our fingers do. When water drops on the screen, or when our hands are wet, the screen becomes slow and less sensitive to touch. With a capacitive screen we deliver our information by means of gesture: the grid of wires senses the change in the electrostatic field of the touched area, triggered by our "input," and transmits it to the microcontroller, which then functions as a translator of information, passing the touch location on to the inner working parts of the device. A toy sketch of this locating step appears below.
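
Here is a toy version of that locating step; the baseline, threshold, and grid values are invented, and real controllers scan row and column lines with far more signal processing:

```python
# Toy sketch of how a capacitive controller might locate a touch:
# scan a grid of sensing lines, find where the reading deviates most
# from the untouched baseline, and report that (x, y) cell to software.

BASELINE = 100          # assumed untouched reading per cell
TOUCH_THRESHOLD = 20    # assumed minimum change that counts as a touch

readings = [
    [101,  99, 100, 100],
    [100, 135, 142, 101],   # a finger distorts the field around here
    [100, 128, 137, 100],
    [ 99, 100, 101, 100],
]

def locate_touch(grid):
    best, best_delta = None, TOUCH_THRESHOLD
    for y, row in enumerate(grid):
        for x, value in enumerate(row):
            delta = abs(value - BASELINE)
            if delta > best_delta:
                best, best_delta = (x, y), delta
    return best                    # None means no touch detected

print(locate_touch(readings))      # -> (2, 1)
```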

The pixel grid of the touch screen performs two-way representation and interaction. The location of an icon within the pixel grid serves as an index for our action: where we can find the app, where we should point while using it, and what the direction and meaning of our gestures are. This design principle for motion is applied in video apps such as YouTube. When you are playing a video, a double tap on the right side of the screen means fast-forward and a double tap on the left means rewind; swiping from right to left jumps to the next video, and vice versa; by scrolling up and down, we can control the volume. The gestures are all monitored inside the grid. The pixel grid thus guides both users and software: it provides an index for users' actions, transmits users' commands to the software, and helps the device identify the task, as in the sketch below.
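
A minimal sketch of that gesture-to-command mapping, with invented names and a hypothetical screen width (this is not YouTube's actual event-handling code):

```python
# Toy sketch of mapping gestures on the pixel grid to video commands.

SCREEN_WIDTH = 1080   # assumed pixel width of the display

def handle_gesture(gesture, x=0):
    if gesture == "double_tap":
        # which half of the grid was tapped decides the command
        return "fast_forward" if x > SCREEN_WIDTH / 2 else "rewind"
    if gesture == "swipe_left":
        return "next_video"
    if gesture == "swipe_right":
        return "previous_video"
    if gesture in ("scroll_up", "scroll_down"):
        return "adjust_volume"
    return "ignore"

print(handle_gesture("double_tap", x=900))   # -> "fast_forward"
print(handle_gesture("swipe_left"))          # -> "next_video"
```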

Besides hand gestures, there are other forms of input. Take the example of initiating a search for information: we can type keywords into a search box (as in most applications). In the Taobao app, many other elements can serve as input: we can upload photos of the item we want (input) to trigger a search, or we can first copy the "special command" (input) for a product shared by other users in other apps, and the search is automatically triggered when we open the Taobao app. In the Spotify app, we can scan a "Spotify Code" (input) to search for or share a specific song, and we can also use voice commands to ask it to play a certain song, much as with Siri. Just as touching the screen triggers a change in capacitance, each of our different forms of input triggers the "translator" of the app, which then transmits our command to the device. I am not sure exactly how that process works, but the sketch below shows the general shape.
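
One guess at the general shape, with every "translator" stubbed out (real apps would use image recognition, code scanning, or speech-to-text at that step):

```python
# Toy sketch of several input forms converging on one search action.
# The translate() stubs stand in for the real recognition machinery.

def translate(input_type, payload):
    if input_type == "keyword":
        return payload                        # already a text query
    if input_type == "photo":
        return "query recognized from image"  # stub for image search
    if input_type == "share_code":
        return "query decoded from code"      # stub for pasted commands
    if input_type == "voice":
        return "query transcribed from speech"
    raise ValueError("unknown input form")

def search(input_type, payload):
    query = translate(input_type, payload)    # every form becomes a query
    print(f"Searching for: {query}")

search("keyword", "desk lamp")
search("photo", "IMG_0042.jpg")               # e.g., a product photo
```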

Question: What kind of touch screen does the Kindle use? It seems to function somewhere between these two kinds of screens: it does not work as sensitively and smoothly as smartphone screens, but it works better than resistive screens in that it allows multitouch.

Reference:

Martin Irvine, (New Intro) From Cognitive Interfaces to Interaction Designs with Touch Screens.

Design Guidelines and Principles in App Lifesum

Lifesum is a weight-control app that records daily food intake and exercise. Users can track the calories they have eaten and burned, along with their intake of carbohydrates, fat, and protein. They can also set a goal weight, and Lifesum will help them control their intake and trace the change in their weight. Weight control is always boring and even painful, but this well-designed app eases users' pain and resistance through its design.

Some principles in Lifesum:

There are five buttons (icons) at the bottom of the app, each representing a different page. When you are on the Diary page, the Diary button is green and the others are gray (except the "Plus" button, which is always green). This is an indication of location, and the design illustrates the principle of the conceptual model: long experience with applications and websites gives users the conceptual model that if one tab or button has a different color from the others, it indicates where the user is. Every time the user switches pages, the button for the current page turns green and the others become gray. This gives feedback to the user: you have switched the page successfully. Every function in this app is visible, and every function maps to a button or a link; nothing in Lifesum is out of expectation. The sketch below illustrates the highlighting behavior.
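
A toy sketch of that highlighting logic, with invented names (not Lifesum's actual code): switching pages changes one piece of state, and every button's color is derived from it.

```python
# Toy sketch of tab highlighting as location feedback.

TABS = ["Diary", "Plans", "Recipes", "Me"]

def render(active_tab):
    for tab in TABS:
        color = "green" if tab == active_tab else "gray"
        print(f"[{tab}: {color}]", end=" ")
    print("[Plus: green]")    # the Plus button is always green

render("Diary")   # feedback: only the Diary button turns green
render("Plans")   # switching pages moves the highlight
```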

Some guidelines in Lifesum:

The record function is located on two different pages; however, there is only one record subpage where users enter their weight, daily intake, and exercise. This design not only ensures that users can easily find the functions they need, but also ensures consistency in data entry: if users found themselves recording on different pages, it would cause confusion. In addition, the designers gave the app consistent colors, layout, terminology, and fonts.

All the terms in this app are easy to understand: they are the same as our everyday words. The "Diary" button is for your records; "Plans" is for weight-control plans; "Me" holds personal information. Anyone can use it intuitively, and this intuitive design style guarantees usability. On the Recipes page, thumbnail pictures serve as both buttons and previews, which is a very effective form of navigation and interaction, just like shopping in a supermarket: you find what you need or what looks great and pick it up directly. Every action in this app is reversible, which means that whenever you record a wrong number, or choose a recipe and quickly decide you don't like it, you have unlimited chances to correct or change it. Lifesum offloads memory for the people who record with it, and its design also helps offload short-term memory: it is safe to say you don't need to memorize anything in this app. The whole design ensures that users can work fluently without memorizing; there are no complicated functions that need explanation and no terms unfamiliar to users. A minimal sketch of the reversible-record idea follows.
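
A minimal sketch of the reversible-record idea, assuming a simple history list (illustrative only, not Lifesum's implementation):

```python
# Toy sketch of reversible data entry: every recorded value can be
# undone, so a mistaken entry costs the user nothing.

class WeightLog:
    def __init__(self):
        self.entries = []    # the offloaded "memory"

    def record(self, weight):
        self.entries.append(weight)

    def undo(self):
        if self.entries:
            return self.entries.pop()   # reverse the last action

log = WeightLog()
log.record(70.5)
log.record(705)        # a typo: mistaken entry
log.undo()             # reversible: corrected with no penalty
log.record(70.5)
print(log.entries)     # [70.5, 70.5]
```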

Design principles and guidelines are useful for UX designers and product designers in elaborating their designs. However, a satisfactory design also needs user personas, iterative usability testing, and so on. In a word, design principles and guidelines are important, but they alone cannot guarantee a satisfactory design.

References:

Ben Shneiderman, Catherine Plaisant, et al. Designing the User Interface: Strategies for Effective Human-Computer Interaction. 6th ed. Boston: Pearson, 2016. Excerpts.

Ben Shneiderman’s Eight Golden Rules for Interface Design (on one page)

Apple Developer: Human Interface Design Guidelines

The Affordance of X/Y Coordinates

From early cave paintings and ancient writing to television and computational media, humanity has continually demonstrated a proclivity to abstract our symbolic capacities onto two-dimensional substrates. The application of this insight by Doug Engelbart and others laid the foundation for any number of advances in interface design, from the two-dimensional arrangement of pixels in the graphical user interface, to the x/y coordinates that a technology like the mouse and cursor depends upon, to touchscreen design.

The principle, however, that computational interfaces in their current iteration consist of screens built from rows of "picture elements," or pixels, which respond to programmed signals indicating color, brightness, and position, easily recedes from conscious thought when using even the most basic interfaces. The WordPress interface used for this course, for example, functions primarily as a text-input program. Yet as students write their weekly reflections, one can only assume that it is only on the rarest occasion that they consider how every keystroke signals the computer and the website interface, triggering a complex sequence of commands that finally results in "lighting up" the set of pixels correlated with the letterform they typed. Conversely, when one presses the backspace key, one does not really "erase" the previous character; one simply triggers the command communicating that the correlated pixels ought to return to the background color (white in this case). The same goes for functions like bold, italics, text alignment, and even hyperlinks. While hyperlinks may trigger other affordances as well, the immediate, human-perceptible change (i.e., black text becoming blue, underlined text) is nothing more than a sequence of commands correlating to tiny elements of light arranged on an x/y axis. The toy sketch below makes the point concrete.
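
As a toy demonstration of that point (the 8-by-8 grid and the crude letterform are invented), "typing" and "erasing" a character are both just repainting commands:

```python
# Toy sketch: a keystroke lights up a glyph's pixels; backspace simply
# repaints the same cells in the background color.

WHITE, BLACK = 0, 1
screen = [[WHITE] * 8 for _ in range(8)]    # tiny pixel grid

GLYPH_I = [(2, 3), (3, 3), (4, 3), (5, 3)]  # (row, column) cells of an "I"

def draw(cells, color):
    for y, x in cells:
        screen[y][x] = color                # repaint each listed cell

def show():
    for row in screen:
        print("".join("#" if p else "." for p in row))

draw(GLYPH_I, BLACK)    # keystroke: light up the letterform's pixels
show()
draw(GLYPH_I, WHITE)    # backspace: the cells return to background
```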

The difficulty in interface design, to which I find myself continually returning, is that the more accustomed people become to the abstracted designs of an interface, the less they have to think about the black-boxed system of physical phenomena that ultimately underlies the entire process. Certainly, in cases like pixels, which concern only the most basic and fundamental level of interface design, the stakes are relatively low. In many cases, however, designed interfaces that hide the physicality of computing only lead to confusion about the limits of computational media and promote a perception that the digital is the same as the magical.

Works cited:

Martin Irvine, From Cognitive Interfaces to Interaction Designs with Touch Screens