Author Archives: Wanyi Huang

Augmented Reality – Interactive and Immersive Design

Introduction:

We have probably all witnessed the applications and possibilities Augmented Reality has brought about in many aspects of daily life, such as entertainment and gaming. But Augmented Reality is more than a BeautyCam filter or cutting fruit in Fruit Ninja; its applicability extends to practical fields including medicine, the military and education. Although we might have unintentionally encountered some common applications of AR already, we are not necessarily aware that they are AR-based, because the terminology seems elusive and abstract: what exactly is being augmented? What are the ways of augmenting? What is the ultimate purpose of the augmentation?

The definitions of AR vary, but in essence they all point to one characteristic: Augmented Reality can be perceived as a medium in which digital information overlaps with the physical environment (Craig, 2013). In Understanding Augmented Reality: Concepts and Applications, Craig proposed that “the ultimate goal of augmented reality is to provide the user with a view of the surroundings enriched by virtual objects”. Indeed, humans have been modifying the surrounding conditions of reality to make living easier since day one. However, it was not until the emergence of the Information Age that the majority of these alterations shifted from securing survival to gaining as much information as possible. Today, digital computers allow enormous amounts of information to be retrieved, stored and manipulated at speed. Traces of this can easily be found in AR applications. To take the simplest example, digital maps allow us to gain information about a place where we are not physically present; while using the application, we comprehend that place faster than we could by actually travelling there.

In his 1962 work Augmenting Human Intellect: A Conceptual Framework, Engelbart defined “augmenting human intellect” as increasing the capability to approach a complex problem and to gain the comprehension needed to suit particular needs, so as eventually to resolve that previously intractable problem. Seen through this connection, the ultimate goal of AR is to challenge and redefine the existing reality and to derive corresponding solutions to emerging problems. In this ongoing process, not only the amount of information is augmented, but also human intellect.

In this paper, I will discuss two of the essential design principles developers adopt to improve the usability of Augmented Reality applications. Sorted by hardware device, software base, applicable field and so on, the possible applications and settings are practically infinite, depending on human initiative and the underlying technology. For this reason, this paper will focus only on the mobile augmented reality (MAR) experience.

1.1 Interaction design

In any interactive design, it takes computer intelligence to form a platform and human intelligence to comprehend it. Perhaps we can compare this experience to watching a movie: however compelling the lighting, angles and tones of the set may be, it is the viewer's interpretation of how what they observe constructs meaning in a real-world situation that allows them to understand the story the film intends to convey.

1.2 Elements to interact

Although AR is designed to be interactive, this process is not always visible. It is hard to be fully aware of the interactions going on in space and time; for instance, it remains ambiguous to most people what “reality” is being augmented and what the virtues of the augmentation are. To better understand AR and take an active role in the interactive process, one must determine what there is to interact with and the underlying design techniques that enable it.

The definition of Interaction Design (IxD) is abstract, yet the term is largely self-explanatory: to interact successfully, both product and user need to contribute their share of effort. As Gillian Crampton Smith proposed, and as Kevin Silver later extended, Interaction Design consists of five dimensions: 1) words, 2) visual representations, 3) physical objects/space, 4) time, and 5) behavior. The first four dimensions encompass what products and services (digital or non-digital) have to offer, while the fifth dimension (behavior) stresses the user's side of the interface; in this respect, users are encouraged to realize their goals and objectives as far as possible by using the products.

1.2.1 Words

History and culture endow characters and letters with specific meanings. In IxD, words serve as one of the essential elements for improving usability. As with any other application, a successful AR application should provide enough words to explicate the instructions and elucidate the usage, allowing users to understand what the next step is and what goals can be achieved with the application. At the same time, the wording should be concise enough to clarify the objectives rather than overwhelming the user with information.

1.2.2 Visual Representations

“Humans are visual animals,” and this statement holds true in the context of using applications. In line with the first element, visual representations furnish applications with cognitive symbolism. For instance, we have long understood that hands can be used to grab and drop objects, so when the cursor turns into a hand shape, we know the targeted files can be moved to almost any other spot on the screen. In short, affordances that suit the intended usage are appreciated in an application. Much of the time, instead of giving out wordy instructions, simply presenting a button-like representation enhances usability through the suited affordance. (See Fig. 1)

(Figure 1. In Apple's Measure app, words serve as descriptions and cinemagraphs are presented to indicate the movements the application can recognize. Visual representations such as images and videos deliver instant instructions to users.)

1.2.3 Physical Objects/Space

The third dimension takes context into account: the physical environment within which users interact with the product. Since this paper mainly discusses design principles for Augmented Reality applications on mobile devices, the object (the device serving as the virtual window through which users experience the product) is a mobile device such as a laptop or smartphone, and the range of space (the physical environment in which users use the product) can be as broad as the desired enhancement of the environment allows. (See Fig. 2)

(Figure 2. Real-time maps allow users to garner information about anywhere on the map; this can be done on almost any digital device and in almost any location, since it is a personal context.)

1.2.4 Time

This dimension can be interpreted as the time users spend interacting with the application. Users receive feedback (audible or visual) from the application and, over time, participate in a complete interactive process with it. Reasonably timed feedback from the product is crucial to this dimension, as users gain further instruction and information from the feedback to their actions, and the steps of the interaction unfold accordingly. The amount of time depends on the capability of the specific application and the depth of the goals users intend to achieve. (See Fig. 3)

(Figure 3. Amazon's Augmented Reality function allows users to view products on the intended surface before purchase. The products move around the room with the motion of the user's fingers, and when a product is placed, the device vibrates to indicate that it has “dropped” onto the surface. When no surface is detected by the device, words and visual representations appear on the screen explaining the error that occurred.)

1.2.5 Behavior

In relation to the user interface, behaviors are considered the range of actions users conduct to interact with the product, including operation, presentation and reaction (Kevin Silver, 2007). In IxD, the first four dimensions integrate at this step, shaping users' behaviors (i.e., the predefined possibilities or constraints of command) and encouraging users to create a personalized experience.

1.3 Summary

Computers are a participatory medium (Murray, 2012). As a computational system, augmented reality is interactive by nature. The level and quality of the interaction depend on both the computer and the human side of the interface: how effectively an application presents its ideas and provides clues to users. The first three dimensions (words, visual representations, physical objects/space) can be directly improved in the design process, while the last two (time, behavior) are bound up with the user; they are influenced by, but not straightforwardly altered through, technical modifications. However, if the elemental designs are successful (e.g., timely feedback), users can be guided toward positive and compelling interactions with the application.

2.1 Immersion technology

Most forms of media engage only certain senses of the human body: we can read a book or listen to the radio. Within the framework of Augmented Reality (AR), however, engaging only the eyes, ears and hands would not achieve an optimal experience for users. An immersive user experience is what differentiates AR from other forms of media (Craig, 2013).

As far as the current development of AR applications on mobile devices is concerned, to assert that AR provides complete immersion would be somewhat unrealistic. Unlike Virtual Reality (which has its own limitations, e.g., locational mobility), Augmented Reality leaves users connected to the physical world, meaning that the environment it shapes has sensory boundaries. Applications and settings of AR can be infinite, depending on human initiative and the underlying technology; full immersion, however, has yet to be achieved.

Notwithstanding the efforts in progress, total immersion in most AR applications has not been successfully registered in the physical world, owing both to limitations in design techniques and to human factors (Aukstakalnis et al., 2016). So far, there are few academic works and pragmatic studies on AR's total immersion. Whether total immersion in MAR is a realistic goal remains in question; nevertheless, the existing literature discusses feasible means by which immersion in AR applications can be enhanced by improving the organizational/modular system (hardware and software components). Given that this paper discusses only MAR, the ensuing discussion will center on the software layer.

2.1.1 Sensory design

Visual design covers a range of standards in software components such as image processing and recognition. Studies have indicated that humans garner most of their information (80-85%) through the visual system, far more than through the other senses (Politzer, 2015). The eyes receive light reflected from objects, which then stimulates the cognitive system in the brain to process and recognize those objects. For this reason, visual design is a pivotal factor in AR immersion: it determines whether users are able to believe in the digitized environment while experiencing little maladjustment. (See Fig. 4 and 5)

As Figs. 4 and 5 show, the digitized information (the animated character and the spider) is a virtual layer overlapping the actual environment captured by the camera. Because the figures do not blend seamlessly into the real-world environment, the degree of immersion (how fully users suspend disbelief in the virtual environment) is low compared to applications that take in-depth simulation into consideration (e.g., brightness and contrast similar to the actual environment). (See Fig. 6)

(Figure 6)

In contrast, Civilisations AR, an application launched by the BBC, packs more pixels into a given screen area, improving the quality and authenticity of the digitized information and thus telling a more compelling story.

Other improvements to the software layer can also generate a more immersive experience for users, such as feedback timing, which affects latency in human-computer interaction (HCI).

2.2 Limitations

To achieve immersive goals, developers must consider improvements in both hardware (e.g., head-up displays, two-handed panels) and software components. Given the limited scope of this paper and the few instances and cases available in existing business and academic fields, the hardware side could not be developed comprehensively here. For lack of space, other feasible software improvements (e.g., audio effects, object recognition) were also not presented. Finally, since users' interaction plays a vital role in creating the experience, human factors also need to be taken into account: because AR is essentially a hybrid image of the digital and the physical, users may choose to process certain information but not all of it (Bolter et al., 2013).

3.1 Discussion

As discussed in the previous sections, augmented reality is both an interactive and a partially immersive experience. Although the question of total immersion in AR remains unsolved, the subject itself is designed to be heuristic: in the AR design process, the main concern is not user experience alone, but the challenges AR can pose to reality and the solutions that can be derived from them. With further development, AR has the potential to become ubiquitous. Advanced immersion design (images, audio, feedback) relates to interaction design and provides users with a better AR experience; conversely, interaction design helps users develop a more refined experience of immersion.

———————–

References:

Engelbart, D. C., and Michael Friedewald. Augmenting Human Intellect: A Conceptual Framework. Fremont, CA: Bootstrap Alliance, 1997. Print.

Interaction Design Foundation. The Encyclopedia of Human-Computer Interaction, 2nd ed. https://www.interaction-design.org/literature.

Sziebig, Gabor. “Achieving Total Immersion: Technology Trends behind Augmented Reality – A Survey.” 2009.

Fischer, Jan, D. Bartz, and W. Strasser. “Stylized Augmented Reality for Improved Immersion.” Proceedings of IEEE Virtual Reality (VR 2005), 2005, pp. 195-325. doi:10.1109/VR.2005.71.

Jacobs, Marco, Livingston, Mark, and State, Andrei. “Managing Latency in Complex Augmented Reality Systems.” Proceedings of the 1997 Symposium on Interactive 3d Graphics. ACM, 1997. 49–ff. Web.

Aukstakalnis, Steve. Practical Augmented Reality: A Guide to the Technologies, Applications, and Human Factors for AR and VR. Old Tappan, NJ: Addison-Wesley Professional, 2016.

Dunleavy, Matt. “Design Principles for Augmented Reality Learning.” TechTrends: Linking Research and Practice to Improve Learning 58.1 (2014): 28–34. Web.

Murray, Janet H. Inventing the Medium : Principles of Interaction Design as a Cultural Practice . Cambridge, Mass: MIT Press, 2012. Print.

Craig, Alan B. Understanding Augmented Reality: Concepts and Applications. Waltham, MA: Morgan Kaufmann / Elsevier, 2013.

Bolter, Jay David, Maria Engberg, and Blair MacIntyre. “Media Studies, Mobile Augmented Reality, and Interaction Design.” Interactions 20, no. 1 (January 2013): 36–45. https://doi.org/10.1145/2405716.2405726.

Choudary, Omar et al. “MARCH: Mobile Augmented Reality for Cultural Heritage.” Proceedings of the 17th ACM International Conference on Multimedia. ACM, 2009. 1023–1024. Web.

The Underlying Technology Communications

When we try to find a website, we typically do it in one of two ways: 1) type the Uniform Resource Locator (URL) directly into the address bar, or 2) go to Google and search for the domain name (YouTube, Amazon, etc.).

Based on the readings and video introductions for this week, both ways of opening a webpage essentially amount to sending a request to a server and then receiving a response. URLs are structured in a hierarchical fashion. For instance, when I try to locate the official website of Amazon, “http” names the protocol the computers use to interpret and communicate, but I realized I did not have to type it in explicitly to reach the correct website. That is because the browser makes assumptions about the port number, which has its own assigned task and is associated with the IP address and the protocol type of the communication. As the standard port assignments show, port 80 is in charge of the Hypertext Transfer Protocol on the Web. If we do not type in http://, the browser automatically assumes port 80, since it is the standard port for HTTP, and thus directs us to the correct page.
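
To make this concrete, here is a minimal sketch (my own illustration, not from the readings) of how a browser-like client could fall back to the standard port when a URL omits the scheme or the port:

```python
# Sketch: choosing a port the way a browser would (assumed, simplified behavior).
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}  # well-known port assignments

def resolve_port(url: str) -> int:
    """Return the port a browser-like client would use for this URL."""
    parts = urlsplit(url if "//" in url else "http://" + url)  # assume http if no scheme given
    if parts.port is not None:              # an explicit port always wins
        return parts.port
    return DEFAULT_PORTS[parts.scheme]      # otherwise fall back to the standard port

print(resolve_port("amazon.com"))              # 80  (http assumed)
print(resolve_port("https://amazon.com"))      # 443
print(resolve_port("http://amazon.com:8080"))  # 8080
```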

The DNS is responsible for handling our request for a certain website. If we input “amazon.com”, our computer asks a server for the address of the website. If that server has not heard of it, it forwards the request toward the servers responsible for the “.com” domain, asking which server handles “amazon.com”. Once the address is found, the data returns in the form of HTML code; our computer receives and reassembles it, and the graphical interface of the website or application appears on our screen. Because of the DNS, although we do not always have to type “http”, we do need to get the domain name right. No IP address for “amazon.gov” will be found, because there is no Amazon under the “.gov” domain, unless they create one in the future.
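
As a simple illustration of the DNS step (my own sketch using Python's standard library, not a description of Amazon's actual infrastructure):

```python
# Sketch: ask the resolver for the IP address behind a domain name.
import socket

try:
    ip = socket.gethostbyname("amazon.com")   # the recursive lookup is handled by the OS/resolver
    print("amazon.com resolves to", ip)
except socket.gaierror:
    # This is what happens for a name that is not registered, e.g. "amazon.gov".
    print("no address found")
```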

HTML code controls how a webpage appears to users: where the images are placed, how wide a frame is, and so on. The text is contained as part of the code, but other files, such as videos and images, come with their own URLs. Every piece of information is transmitted across the internet in data packets, and our computer reconstructs the information when the packets are received. In this fault-tolerant process, the packets do not have to follow the same path or arrive at exactly the same time. Images and videos, however, are retrieved more slowly, especially when they contain more data and the network is congested. This explains why, when we open a page, we sometimes see the text while the images are still loading.
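
A minimal sketch of this point (my own example; example.com is just a stand-in page): the HTML arrives first, and every image is a separate resource with its own URL that the browser fetches afterwards in its own requests.

```python
# Sketch: fetch a page's HTML and list the image URLs it refers to.
from html.parser import HTMLParser
from urllib.request import urlopen

class ImageCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":                           # each <img> points to its own URL
            self.images.extend(value for name, value in attrs if name == "src")

html = urlopen("http://example.com").read().decode("utf-8", errors="replace")
parser = ImageCollector()
parser.feed(html)
print(parser.images)   # each of these would be fetched in its own requests/packets
```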

Computation-culture as a loop

During its preliminary stages, computing required specific training sessions and templates for humans to manipulate symbols, which means it was a process that reflected the intellectual output of only a limited group of people. Even so, computers were never meant to replace humans as problem-solvers; they were meant to be tools that extend human abilities and intellectual capacity. Considering that it would be somewhat absurd to expect every person, whatever their computational background or cultural context, to excel at computing, a more realistic approach was presented: programming computers to understand the options that people give them, without having to “dumb” the computers down.

The interfaces of today are unprecedentedly user-friendly. The notion of an interface is no longer circumscribed to connections between components; in a broader and more abstract way, it incorporates human interpretation, culture and intellect into the system. Humans are meant to be the actors who directly manipulate symbolic signs, which seems to be the trend for technologies of any kind in this era.

Although symbol interpretation was implemented in computer engineering after the “great conceptual leap” of the 1960s and 1970s, the concept itself was not a brand-new invention. In the broad sense of “computers” (artifacts that process signals and give feedback), televisions were invented long before personal computers. Back then, people, as interpreting agents, already used graphics, language and video as symbols and performed cognitive interpretation based on their presentation. But it was the implementation of symbol-manipulating software that promised a future in which symbolic input is accessible to the larger population. By carrying the symbolic orders that represent part of human culture from human actors into the computer, the freedom to use symbols returns to humans, handing back the power of inputting and interpreting.

Through these processes, humans further enhance their intellect and establish new rules and cultures for the real world, which grant novel opportunities for future development in both computation and cognitive-symbolic systems. For humans, the greatest leap these concepts (“Augmenting Human Intellect”, the graphical user interface, etc.) brought about is probably the acceleration of the loop of cultural construction, or even re-establishment.

References:

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces for Computer Systems: History of Design Principles

Lev Manovich, Software Takes Command, pp. 55-106, on the background for Alan Kay's “Dynabook” metamedium design concept. Excerpts in PDF.

Some basic “take aways”

Looking back at the time since the invention of computers, it is easy to accept that the computer has been the most powerful and influential invention of all time. The opinion is so common that hardly anybody thinks about why and in what ways it affects our daily life; sometimes we simply shrug it off. It is an ingenious invention hidden in plain sight. We have been taught to understand a computer as a laptop or desktop, an artifact with a keyboard and a screen, but we never think of a microwave or a car as a computer. Indeed, if we define a computer as an artifact that receives external signals, processes the abstract information, and then gives out results, we gain a new view of what computers are.

A computer receives a signal, which triggers an impulse that activates trillions of procedural steps; these procedures are automated by computational algorithms, through preset programs and software, eventually providing feedback in views humans can interpret. That is why, even though we do not know how to code or program, and do not know how to put a computer together piece by piece, we can still operate one. But the most powerful thing is the internal processes: by understanding more details of these systems through computational thinking, we could take a more active role as problem-solvers instead of passively being subject to the constraints of the artifacts we are given.

What fascinated me was the information/coin-toss metaphor. I wonder what the information stands for specifically. Why does it require a yes-or-no answer? What result follows from the answers? Evans explains in his book that bits of information are the essential steps by which the system processes and provides a precise outcome, reducing uncertainty to the lowest level. But not all steps are necessary: by choosing questions whose binary answers are equally likely, the steps can be simplified and the time needed can be reduced.
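
A small worked example of this idea (my own sketch, not Evans's): when each yes/no question splits the remaining possibilities in half, identifying one item among N equally likely options takes about log2(N) questions.

```python
# Sketch: yes/no questions as bits of information, illustrated with binary search.
import math

def questions_needed(n_items: int) -> int:
    return math.ceil(math.log2(n_items))       # each balanced question halves the uncertainty

def guess(secret: int, low: int, high: int) -> int:
    """Find `secret` in [low, high] by asking 'is it in the lower half?' repeatedly."""
    asked = 0
    while low < high:
        mid = (low + high) // 2
        asked += 1
        if secret <= mid:                       # answer: yes
            high = mid
        else:                                   # answer: no
            low = mid + 1
    return asked

print(questions_needed(8))                # 3 questions suffice for 8 equally likely items
print(guess(secret=5, low=0, high=7))     # the secret is found after 3 questions
```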

I do not have a specific question about this week's readings; however, I do feel there are many key terms we need to understand before we can sort out the logic and the connections. Can we discuss some of these key terms in class and how they are interpreted in a computational context? Apart from that, I would also like to learn more about the mathematics behind the information tree and its possible depth, as it is a bit intricate to read and digest on my own.

False Claims or Predictions?

After putting “technology effects” into the Google search bar, the algorithm-driven results popped up on my page as always. As I skimmed through the titles, most accused modern technologies of having detrimental effects on human society and humanity, while some more moderately pointed out that people should be aware of the negative effects of media technologies despite the positive things they have brought about. Judging from the information enabled and presented by modern technologies themselves, one could easily be led into the dreadful thought that humanity will be destroyed, or at least severely damaged, by the technologies we have invented, and that human intelligence will soon be the biggest victim of its own success.

On this note, there seems to be a prevailing opinion that societal problems are largely determined by the emergence and rapid growth of technology. As far as can be superficially implied, this school of criticism rests on three assumptions: (1) modern technology is an independent antithesis of human society, (2) technology has gained absolute power that overrides human culture, and (3) the mediations of media were designed to enhance humanity.

For starters, to see technology as the antithesis of the human risks isolating technology from the context of humanity. On the contrary, the term “technology effects” only makes sense within the environment of human culture. As Dr. Irvine articulated in the intro video, technologies should not be perceived as an autonomous force, for the obvious reason that without the foundations developed by humans and assimilated as part of human culture, technological formats would not be what they are today, and might affect human culture in other ways we could never have discovered. Although threats are posed in some respects, modern media technologies should still be seen as one side of the societal coin, not as a separate coin like a penny set against a dime.

The second assumption is essentially a corollary of the first. Just as people seek social values within a social or cultural context, technology is not defined solely by what it actually does for or to society; its standards and values are confined by pre-existing social and cultural norms. That is to say, human culture endows media technology and its visible forms (e.g., digital artifacts) with meaning; culture is not owned by technology.

Last and most importantly, there is no question that most mediations of media were designed to benefit humans; however, saying that they were built to enhance humanity is an overstretch. Further, to borrow the argument of A Philosophy of Technology, even if the initial intents were designed as elaborately as claimed, how they actually play out is usually an uncontrollable factor.

Now we are left with these conundrums: How do we keep the elusive factors under control? Can technologies be designed from the initial stages to serve human society as well as to enhance what human culture merits? And if these questions cannot be answered, will the last question be: “Is it possible to force societal norms to conform to the emergence of technologies?”

References:

Martin Irvine, Intro to Media and Technical Mediation

Pieter Vermaas, Peter Kroes, Ibo van de Poel, Maarten Franssen, and Wybo Houkes. A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems. San Rafael, CA: Morgan & Claypool Publishers, 2011.

Wikipedia page: Social Dualism. https://en.wikipedia.org/wiki/Social_dualism

The hidden archaeology

One of the concepts the weekly readings illustrated is that the historical human activity of making symbolic tools, and the process of passing on how to use them from preceding generations to succeeding ones, provide firm ground for humans' later symbolic-cognitive intelligence. Human beings keep rearranging the information and skills generated by their ancestors and creating new artifacts, inadvertently yet somehow inevitably perpetuating this pattern of indirect action.

Whether the purpose is to cater to humans' craving for convenience or simply to make more profit, artifacts are never plainly artificial components put together by automated machines. As Michael Cole pointed out in his excerpt, artifacts are nourished by both their material and their conceptual nature. This becomes apparent when we start to question, in retrospect, the most common artifacts we are conversant with: why are mobile phones the size of a hand? Why is the shape a compressed cuboid rather than pre-existing parts joined together? Interestingly, the answers to these questions are also ideological outputs that we take for granted. We can no longer see the world as it was before these questions were contemplated earlier in history, just as we can no longer see the world as we saw it as infants. In a word, we cannot un-know the knowns. We have been so immersed in the artifacts carried along by human culture, from the language we acquire to the physical black boxes we manipulate daily, that it is sometimes easy for us to overlook their ideal side.

Unraveling the ideal side of artifacts beneath their physical instruments is as vital to the development of the artifacts themselves as it is to the gradual modification of human culture. From the very beginning, humans created tools mainly to deal with impending situations. Blades, grindstones and fishing tools, for example, were invented out of the dire need for survival; as humans evolved, or more specifically, as human thought started to engage with deeper concerns, intentions surpassed immediate needs and moved in a more future-oriented direction. From “how to make fire to roast a dead serpent” to “how to take photos and make phone calls with one device”, the impact of modern artifacts has culminated in changing human behaviors, including interpersonal interactions and intrapersonal reflections, which are inseparable constituents of human culture; the efforts of humankind ought not to be forgotten.

By creating and renovating artifacts, humans create a culture unlike that of any other creature. Now the process of cultural modification seems unprecedentedly fast, because cognition through artifacts snowballs: the more tools we maneuver, the more tasks we can accomplish. That said, this is a picture-perfect conception only if the actions we perform on the world converge with the evaluations we eventually make, if every move of ours represents a cognitive and conscious decision, and if those evaluations are not so heavily based on insatiable human desires. We stand at one end of the rope and pull; from the systemic perspective, we are manipulating the artifact to reach our purposes at the other end. But from a personal view, is it, in fact, the other way around?

References:

Michael Cole, On Cognitive Artifacts, From Cultural Psychology: A Once and Future Discipline. Cambridge, MA: Harvard University Press, 1996. Connected excerpts.

Donald A. Norman, “Cognitive Artifacts.” In Designing Interaction, edited by John M. Carroll, 17-38. New York, NY: Cambridge University Press, 1991. Read pp. 17-23.

Independency vs. interdependency

As Carliss Y. Baldwin and Kim B. Clark indicate in their volume The Power of Modularity, there is room for innovation even when the basic structure stays the same. When we take the evolution of the iPhone over the past decade as an example, this statement rings true. No generation of the iPhone has been a complete makeover of the last one, which would come at a cost; rather, by improving individual modules, each generation maximizes the functions of the independent systems and of the artifact working as a whole, and therefore seems renovated. In my opinion, this shows what interdependency means to a device and its further development: even though the modules within stay relatively independent, certain adjustments can improve the utility of the whole to the extent the system was designed to achieve.

If modules were instead designed to be tightly interconnected with several others, each performing unique functions, both the process and its ramifications could be problematic. In the production process, when two or more modules are intertwined so that one module can multitask, it not only complicates the procedures and increases the costs of software maintenance but also, very likely, demands more interfaces as more modules are connected, jeopardizing the modules' independence. It much resembles the electrical circuits in a household: if the circuits are wired in series rather than in parallel, then when one lightbulb goes out, the whole house is simultaneously out of light.

Waze, a GPS navigation application available on mobile devices, exemplifies independence as well as interdependence between the modules hidden inside the black-boxed artifact. Like other navigation apps, Waze provides basic real-time locating and navigating services, including searching for destinations, estimating drive time, and automatic rerouting when there is a more efficient alternative. While using the app, you can lock your screen if you feel distracted; this cuts power to the module that controls the screen, yet the application does not go into hibernation but keeps running and collecting real-time data. The “history” option, related to the storage module, also processes data when you type an address into the search bar, provided power and network connectivity are available. With the audio indicator, and without having to stare at the screen, you can get an idea of the remaining distance before you should turn right or left, or of road construction and traffic jams ahead. Audio remains responsive even when the screen is locked, which could not be achieved if the modules involved were highly dependent on one another. Perhaps the most fascinating feature of the application is its crowdsourcing approach to gathering information from users: users can easily report a traffic accident or a police trap, and the steps are as simple as they can get, with no descriptions or commands needed. By tapping the orange button in the lower right and selecting one of the icons that pop up, the data is collected and shown on every other user's application interface.

By making modules interconnected, the sum of their functions, or the power of reorganization, can be significantly enhanced. Lines should be drawn, however, around each module and how much it interacts with the others, so that this “individualism” can best serve the system and consolidate usability and testability.
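
A minimal sketch of this principle (my own illustration, not from Baldwin and Clark, and not Waze's actual code): modules stay independent and communicate only through narrow interfaces, so one module can be switched off or swapped out without taking the others down.

```python
# Sketch: independent modules composed behind narrow interfaces.
class ScreenModule:
    def __init__(self):
        self.on = True
    def lock(self):
        self.on = False                 # the screen goes dark ...

class GPSModule:
    def current_position(self):
        return (38.9076, -77.0723)      # placeholder coordinates

class AudioModule:
    def announce(self, message):
        print("AUDIO:", message)

class NavigationApp:
    """The app composes the modules but never reaches inside them."""
    def __init__(self):
        self.screen = ScreenModule()
        self.gps = GPSModule()
        self.audio = AudioModule()
    def guide(self):
        position = self.gps.current_position()      # ... yet GPS and audio keep working
        self.audio.announce(f"In 200 m, turn right (at {position})")

app = NavigationApp()
app.screen.lock()
app.guide()   # navigation and audio continue even though the screen module is "off"
```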

 

References:

Carliss Y. Baldwin and Kim B. Clark, Design Rules, Vol. 1: The Power of Modularity. Cambridge, MA: The MIT Press, 2000. Excerpts; read chapters 1 and 3 for this week.

Lidwell, William, Kritina Holden, and Jill Butler. Universal Principles of Design. Revised ed. Beverly, MA: Rockport Publishers, 2010. [Selections: read Affordances, Hierarchy, Mental Model, and Modularity for this week.]