Author Archives: Banruo Xiao

How Can You Mute Your Voice on iPhone?


Banruo Xiao


The purpose of this paper is to de-blackbox the mute function embedded in nearly every social media application, a feature users tap many times a day on their smartphones. The paper aims to show that the one-click effect is neither easy nor simple; on the contrary, it is a complicated process in which the Internet, software, and hardware work together to achieve the result. To that end, the paper addresses the Internet, and then the software and hardware inside a smartphone, to explain how each functions and how they work together.


Technology often brings surprise and excitement to its users. It is hard to imagine how people communicated over long distances before cell phones and the Internet were created. Now we have smartphones, and the iPhone is a typical example. Numerous applications, following the creation of the smartphone, are designed to make the Internet more convenient for users, who do not even need to pay to make online phone calls through social media applications. Technology can do more than that: during a call, a person can mute his or her own voice while the call continues, with a single tap on the mute button. It is remarkable that one can still hear the recipient's voice while nobody can hear one's own. This paper focuses on the mute function that social media applications embed, and how it technically works from the designer's point of view. The paper is divided into two parts. The first part discusses how users make online phone calls through the Internet; the second examines how the function of muting one's voice with a single tap is technically achieved on the iPhone.

How can people make online phone calls?

It is common sense that we are now in a digital age. The Internet connects devices together, and users can acquire and share information and communicate through it. It seems that we are all "on" the Internet. From the point of view of computer scientists and Internet developers, however, the Internet is designed as a complex system containing multiple layers and various modules. The layers and modules work together to provide a user-friendly interface for people who know nothing about computer or network design, like me. In other words, the design of the Internet is far more complex than users imagine. Although the Internet is a product of complex design thinking, it follows many universal principles.

  • How the process works and the definition of some key terms

To allow users to communicate online, whether sending messages or making voice and video calls, networked devices on the Internet rely on protocols, the agreed-upon methods for sending and receiving data packets (Irvine, 2018). The Transmission Control Protocol (TCP) and the Internet Protocol (IP) are the two most important communication protocols. TCP breaks information up into pieces called packets and reassembles the packets into the original message. IP is responsible for ensuring that the data packets are sent to the right destination. The Internet works primarily "end to end" to make sure data packets are sent and received correctly from one connecting point to another (Gralla & Troller, 2006). For this reason, the Internet is also known as a packet-switched network. To understand and interpret the protocols, a device must have a socket, or TCP/IP stack, in software.
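The split-and-reassemble behavior described above can be illustrated with a short, hypothetical sketch (real TCP also handles acknowledgments, retransmission, and flow control, all omitted here):

```python
import random

# Toy sketch of TCP-style packetization (illustrative only): the message is
# split into numbered packets, which may arrive out of order, and the
# sequence numbers let the receiver reassemble the original bytes.

def packetize(message: bytes, size: int):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Sort packets by sequence number and rejoin the payload."""
    return b"".join(chunk for _, chunk in sorted(packets))

msg = b"Hello over the Internet"
packets = packetize(msg, 5)
random.shuffle(packets)              # the network may deliver out of order
assert reassemble(packets) == msg    # the receiver restores the message
```

Sequence numbers are what make reassembly possible regardless of arrival order, which is why TCP can treat the network below it as unreliable.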

The technique that allows users to make online phone calls is Voice over Internet Protocol (VoIP), which uses TCP/IP to deliver voice messages. With VoIP, the process of making an online call appears simple. One speaks into the microphone attached to the device; the VoIP software transforms the voice signal into digital data and compresses it for easier delivery over the Internet. The compressed, digitized voice signal is then broken down into packets, which are routed to the IP voice gateway nearest the destination. The gateway takes the voice packets and, by reassembling, decompressing, and converting them back to their original form, sends the voice signal on through the normal Public Switched Telephone Network. The recipient can listen through speakers and a sound card, or through an earphone connected to the device via a USB port (Gralla & Troller, 2006). Currently, most smartphones adopt Voice over Long Term Evolution (VoLTE), which uses VoIP to achieve network communication (Elnashar, El-Saidny, & Mahmoud, 2017).
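The stages just described (digitize, compress, packetize, reassemble at the gateway) can be sketched as a toy pipeline. This is a hypothetical stand-in for the real machinery: actual voice codecs and gateways are far more sophisticated, and zlib here is only a placeholder for a codec.

```python
import zlib

def digitize(samples):
    """Quantize analog amplitudes in [-1, 1] to 8-bit values."""
    return bytes(int((s + 1) * 127.5) for s in samples)

def compress(data):
    """Stand-in for a voice codec's compression step."""
    return zlib.compress(data)

def packetize(data, size=16):
    """Break the compressed signal into numbered packets."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def gateway_receive(packets):
    """At the gateway: reassemble, then decompress back to digital voice."""
    data = b"".join(chunk for _, chunk in sorted(packets))
    return zlib.decompress(data)

samples = [0.0, 0.5, -0.5, 1.0, -1.0]     # a few analog amplitudes
digital = digitize(samples)
packets = packetize(compress(digital))
assert gateway_receive(packets) == digital  # voice survives the round trip
```

Each function corresponds to one stage in the VoIP description above; the point is that the stages are independent and composable, which is exactly the modularity discussed in the next section.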

  • A broader image of the process: Modularity, layering and Internet’s extensibility and scalability

The complete process of making an online phone call reflects several Internet design principles, the first being that the Internet is not a single integrated product. Behind the interface we usually see, many layers and modules, each with its own objects and purposes, work together to form the whole. Modularity and layering shape the architecture of the Internet; the idea behind these principles is to keep the components independent while letting them work together efficiently.

According to Barbara van Schewick, modularity employs abstraction, information hiding, and a strict separation of concerns to make the Internet more user friendly. More specifically, modularity separates visible information from hidden information: users need only the visible information to fulfill their purposes, while designers can access the hidden information to develop their modules. In this case, from the user's side, the only available actions are opening an application and calling someone. The TCP/IP and VoIP machinery is hidden, while the application designer knows how to work with it.

Layering, in turn, is a special form of modularity that constrains the dependencies among modules. A lower layer can only interact with its neighbors and provide service to the layer above it, while higher layers are protected from changes in lower layers. Layering helps reduce the complexity of the network, and the end-to-end argument determines where functionality is placed across the layers. In this case, TCP and IP are two layers that work separately, yet they cooperate when the voice signal needs to be sent to its destination.
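A minimal sketch, with invented class names, of how strict layering constrains dependencies: each layer talks only to the layer directly below, wrapping the data it is given, so any one layer can be swapped out without touching the others.

```python
# Hypothetical layered stack: each layer holds a reference only to the layer
# below it and simply wraps (encapsulates) whatever the layer above hands down.

class PhysicalLayer:
    def send(self, bits):
        return f"wire({bits})"          # lowest layer: put bits "on the wire"

class IPLayer:
    def __init__(self, lower):
        self.lower = lower              # knows only its neighbor below
    def send(self, packet):
        return self.lower.send(f"ip[{packet}]")   # add addressing header

class TCPLayer:
    def __init__(self, lower):
        self.lower = lower
    def send(self, data):
        return self.lower.send(f"tcp[{data}]")    # add sequencing header

stack = TCPLayer(IPLayer(PhysicalLayer()))
assert stack.send("voice") == "wire(ip[tcp[voice]])"
```

Because `TCPLayer` never touches `PhysicalLayer` directly, the physical medium could be replaced (copper, fiber, radio) with no change to the layers above, which is the protection from lower-layer changes described above.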

Modularity and layering also give the Internet room to grow. The ability to add modules and layers without limit addresses two design problems: scalability (how the design scales to an unlimited number of connections) and extensibility (how new modules and layers are added to the common architecture). As long as the protocols function correctly, these two problems no longer need to be a concern.

Explaining the mute function (how users act)

Once the process of making an online phone call is clear, it is fairly straightforward to understand how the mute function works. Basically, the mute function behaves like an on/off toggle switch. From the user's side, a simple tap on the mute icon on the screen automatically turns off the microphone embedded in the phone. Based on the process explained in the previous section, no more voice signal needs to be digitized and compressed, so the following steps simply never happen. From the developer's point of view, however, the whole process is not that simple: many questions need to be answered before the "one click effect" is achieved. For example, how is it possible to touch the screen to turn on the mute function at all?
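The on/off toggle behavior can be sketched in a few lines (hypothetical names; a real client would stop the capture pipeline rather than drop frames, but the user-visible effect is the same):

```python
# Toy sketch of mute as a toggle: while muted, captured audio frames are
# simply never handed to the encode-and-send pipeline described earlier.

class Call:
    def __init__(self):
        self.muted = False
        self.sent = []                  # frames that reached the network

    def toggle_mute(self):
        self.muted = not self.muted     # the one-click on/off switch

    def on_microphone_frame(self, frame):
        if not self.muted:              # muted: drop the frame, send nothing
            self.sent.append(frame)

call = Call()
call.on_microphone_frame("hello")
call.toggle_mute()
call.on_microphone_frame("secret")      # dropped while muted
call.toggle_mute()
call.on_microphone_frame("back")
assert call.sent == ["hello", "back"]
```

Note that incoming audio is untouched by this state, which is why the muted caller can still hear the recipient.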

  • How to actually achieve a mute status on a smartphone

Before addressing that question, this part will first de-productize a smartphone. Listing the components of a smartphone will show how each part cooperates with the others to satisfy the user's demand.

  • The components of a smartphone

The first obvious component is the display, the interactive interface that lets users interact with the device. Today there are mainly two types of display: one based on LCDs and the other based on LEDs. According to Apple's official website, the newest iPhone has an LCD-based display, meaning that the light users see is generated by a backlight on the other side of the display shining through a set of filters (FOSSBYTES, 2017). The next component is the battery; in most brands of smartphone, it is a built-in rechargeable lithium-ion battery.

Perhaps the most important item in a phone is the "system on a chip," or SoC, which comprises the CPU, GPU, the LTE modem used for communication, the display processor, the video processor, and other bits of silicon that turn it into a functional system. Apple's own chipsets use ARM's system architecture.

In addition, each device contains Random Access Memory (RAM) and storage memory. RAM works with the CPU to increase processing efficiency and extend battery life, while the storage memory, available in various capacities, is used for internal storage. On the outside, all smartphones come with rear-facing and front-facing cameras, each comprising up to three main parts: the sensor for detecting light, the lens, and the image processor.

Five main sensors also allow a smartphone to provide its touch-enabled functionality. They are: “

  1. Accelerometer: Used by applications to detect the orientation of the device and its movements, as well as allow features like shaking the phone to change music.
  2. Gyroscope: Works with the Accelerometer to detect the rotation of your phone, for features like tilting phone to play racing games or to watch a movie.
  3. Digital Compass: Helps the phone to find the North direction, for map/navigation purposes.
  4. Ambient Light Sensor: This sensor is automatically able to set the screen brightness based on the surrounding light, and helps conserve battery life. This would also explain why your smartphone’s brightness is reduced in low-light environments, so it helps to reduce the strain on your eyes.
  5. Proximity Sensor: During a call, if the device is brought near your ears, it automatically locks the screen to prevent unwanted touch commands.” (FOSSBYTES, 2017)

Indeed, there are far more components inside an iPhone than there is space to list here. Other elements crucial to the mute function include three microphones, the earpiece speaker, the lower speaker enclosure, the top speaker assembly, and board chips containing the gigabit LTE transceiver, the modem, the WiFi/Bluetooth module, and the touch controller.

  • Touch screen and how it works with other parts to achieve the mute function

Beyond these, the most directly relevant component is probably the touch screen. To allow users to issue touch commands, the touch screen includes a layer of capacitive material. The iPhone's capacitors are arranged according to a coordinate system. “Its circuitry can sense changes at each point along the grid. In other words, every point on the grid generates its own signal when touched and relays that signal to the iPhone's processor. This allows the phone to determine the location and movement of simultaneous touches in multiple locations” (How the iPhone Works, 2007). The touch screen detects touch in two ways: mutual capacitance and/or self-capacitance. “In mutual capacitance, the capacitive circuitry requires two distinct layers of material. One houses driving lines, which carry current, and the other houses sensing lines, which detect the current at nodes. Self-capacitance uses one layer of individual electrodes connected with capacitance-sensing circuitry. Both of these possible setups send touch data as electrical impulses” (How the iPhone Works, 2007). Later versions of the iPhone combine the capacitive touch-sensing layer and the LCD display layer into a single layer.
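The grid behavior described in the quotation can be sketched as follows (illustrative values only; a real touch controller also filters noise and interpolates between nodes):

```python
# Toy model of the capacitive grid: each node reports a capacitance change,
# and every node whose change crosses a threshold counts as a touch point,
# which is what makes simultaneous multi-touch detection possible.

def touched_points(grid, threshold=0.5):
    """Return (row, col) of every node whose signal exceeds the threshold."""
    return [(r, c)
            for r, row in enumerate(grid)
            for c, value in enumerate(row)
            if value > threshold]

signal = [
    [0.0, 0.1, 0.0],
    [0.9, 0.0, 0.0],   # a touch near node (1, 0)
    [0.0, 0.0, 0.8],   # a simultaneous touch near node (2, 2)
]
assert touched_points(signal) == [(1, 0), (2, 2)]
```

Because every node generates its own signal, two fingers produce two separate grid readings rather than one averaged location, matching the multi-touch behavior described above.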

The iPhone's processor and the software on the logic board interpret input from the touch screen. The capacitive material sends raw touch-location data as electrical impulses to the processor, and the processor calls on software located in memory to interpret the raw data as commands and gestures. The interpretation process analyzes the size, shape, and location of the affected area to determine which gesture the user made, combining the physical movement with information about which application the user was using and what that application was doing. The processor may then send commands to the screen and to other hardware. In the mute function's case, when a user on a call tries to mute his or her own voice through an application, the processor follows the steps above and sends the command to turn off the microphone. Meanwhile, other hardware, including the RAM, LTE transceiver, WiFi/Bluetooth module, and modem, is already at work completing the process of transferring the Internet signal discussed in the first part of this paper. In general, on the hardware side, the processor on the logic board is the most important component, handling all the steps required to fulfill the command to mute the voice.
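A hedged sketch of that interpretation step, with invented names rather than Apple's actual APIs: raw touch coordinates are hit-tested against on-screen controls, and the matching control's command is dispatched to the hardware.

```python
# Hypothetical dispatch step: the software checks which on-screen control
# contains the touch point, then invokes that control's hardware command.

def make_button(name, x, y, w, h, command):
    return {"name": name, "rect": (x, y, w, h), "command": command}

def dispatch(touch, buttons, hardware):
    """Hit-test a (x, y) touch against buttons; run the matching command."""
    tx, ty = touch
    for b in buttons:
        x, y, w, h = b["rect"]
        if x <= tx < x + w and y <= ty < y + h:
            hardware[b["command"]]()    # e.g. tell the mic driver to stop
            return b["name"]
    return None                         # touch landed on no control

mic = {"on": True}
hardware = {"toggle_mic": lambda: mic.update(on=not mic["on"])}
buttons = [make_button("mute", 100, 400, 80, 80, "toggle_mic")]

assert dispatch((120, 430), buttons, hardware) == "mute"
assert mic["on"] is False               # the microphone was switched off
```

The real pipeline also accounts for gesture shape and application state, as described above, but the core idea of mapping a screen location to a command is the same.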

  • Design principles and concepts: affordance (the icon), interface (touch screen and others), modularity, computational thinking

The whole process at work in the iPhone illustrates several design principles and concepts. For example, the icon for the mute function, applied almost universally by every social application that embeds the function, clearly signals that no more speaking will be transmitted. In design terms, according to Martin Irvine (2018), something affords an action or a certain interpretation when its use seems to be an "obvious" inference; such an artefact leaves visual cues about how to use it. In fact, the "obvious" inference never arises automatically. Instead, it is a product of socialization, and humans come to understand it through social learning. The touch screen, which enables users to operate their smartphone, can likewise be seen as an interface, defined as anything that connects two different systems across their boundaries. In this case, the touch screen is the interface connecting the user and the smartphone; more than that, it is an interactive interface, allowing users to interact with the device. Furthermore, the idea of modularity applies here as well: each component of the iPhone runs individually, but they all work together to carry out the command.



Overall, the process of muting one's voice involves Internet protocols that digitize, compress, and send the voice signal to its destination. At the same time, by relying on the touch screen as an interactive interface, the smartphone lets users simply touch the display, connecting the processor, microphone, speaker, and software so that they complete the process together. The most fascinating point is that Internet design and hardware design share similar design principles, implying that universal design principles build a solid foundation for any technological design.


Apple iPhone 7 Teardown. (n.d.). Retrieved from

Elnashar, A., El-Saidny, M. A., & Mahmoud, M. (2017). Practical Performance Analyses of Circuit-Switched Fallback and Voice Over LTE. IEEE Transactions on Vehicular Technology, 66(2), 1748–1759.

Gamet, J. (n.d.). iPhone 4: Finding the Hidden Hold Button. Retrieved from

Gralla, P., & Troller, M. (2007). How the Internet Works (8th ed.). Indianapolis, IN: Que Pub.

Irvine, M. (2018). "Introduction to Affordances and Interfaces."

Irvine, M. (2018). The Internet: Design Principles and Extensible Futures (Why Learn This?).

M. P. (2017, November 03). Inside the iPhone X: First teardown reveals two batteries. Retrieved from

van Schewick, B. (2012). Internet Architecture and Innovation. Cambridge, MA: The MIT Press. Excerpt from Chap. 2, "Internet Design Principles."

Wilson, T. V., Chandler, N., Fenlon, W., & Johnson, B. (2007, June 20). How the iPhone Works. Retrieved 13 December 2018.


Weekly Writing for Week 12


Banruo Xiao

To listen to a song on Apple Music, a user needs two steps: first, typing the name of the song in the search bar; second, choosing the right result and playing it. The mobile application provides great convenience for users. However, there are many more steps behind the interface that designers must build into the service, and Ron White discusses these steps in detail.

When someone types the name of a song and searches for it, the server, which stores the Web page as an HTML text file, responds to the browser's request. The HTML text file is a collection of markup that includes the URLs of sound files. The server sends the HTML document back to the Internet address of one's browser, and that document in turn points to the sites that will send the sound files to one's mobile application. The files are stored in the iPhone's cache, which enables the browser to retrieve them.
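As an illustrative sketch (hypothetical markup, not Apple Music's actual page format), the client can extract the sound-file URLs embedded in the returned HTML before requesting them:

```python
import re

# Toy extraction of audio-file URLs from an HTML document. Real browsers
# parse the full document tree; a regular expression is only a stand-in.

def extract_audio_urls(html: str):
    """Find sound-file URLs referenced by src attributes (toy pattern)."""
    return re.findall(r'src="([^"]+\.mp3)"', html)

page = '<html><body><audio src="https://example.com/song.mp3"></audio></body></html>'
assert extract_audio_urls(page) == ["https://example.com/song.mp3"]
```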

Next, streaming, a technology used with a variety of players and audio/video formats, allows the application to play the file as soon as the first bytes arrive. Streaming uses the User Datagram Protocol (UDP) to send files over the Internet. A protocol is the set of rules governing how two computers connect to each other, how they break data up into packets, and how they synchronize sending the packets back and forth. With streaming, a small sound file called a metafile tells the web browser to launch the right audio player (a plug-in). The audio player connects to the audio server and tells it how fast the Internet connection is; based on that speed, the audio server chooses the appropriate version of the song. When the sound file arrives at one's device over UDP, the system decompresses and decodes it and sends the results to a buffer, a small portion of RAM that holds a few seconds of sound. When the buffer fills up, the audio player processes the data through the sound card. Now one can listen to the song on Apple Music.
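The buffering step can be sketched as follows (a hypothetical model with illustrative sizes): arriving chunks fill a small buffer, and playback starts only once the buffer holds a few seconds of sound, smoothing over uneven network delivery.

```python
from collections import deque

# Toy stream buffer: playback does not begin until a startup threshold of
# chunks has accumulated, so brief network hiccups don't interrupt the song.

class StreamBuffer:
    def __init__(self, start_threshold=3):
        self.chunks = deque()
        self.start_threshold = start_threshold
        self.playing = False

    def on_packet(self, chunk):
        self.chunks.append(chunk)
        if len(self.chunks) >= self.start_threshold:
            self.playing = True          # enough buffered: start playback

    def next_chunk(self):
        """Hand the oldest buffered chunk to the player, if playing."""
        return self.chunks.popleft() if self.playing and self.chunks else None

buf = StreamBuffer()
buf.on_packet("c1")
buf.on_packet("c2")
assert buf.next_chunk() is None          # still buffering
buf.on_packet("c3")
assert buf.next_chunk() == "c1"          # playback has started
```

This is why a stream takes a moment to start but then plays smoothly: the few buffered seconds absorb variation in how fast UDP packets arrive.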

Weekly Writing for Week 11


Banruo Xiao

According to Barbara van Schewick, some computers and devices are "on" or "attached to" the network, while others are "in" it. She writes that "computers 'on' the network support users and run application programs." Users can use the services of the network to communicate with one another, to surf the Internet, and to send and receive emails; these are all examples of what it means to be "on the Internet." In general, computers on the network are the endpoints of data.

"Computers 'in' the network form or implement the network," van Schewick continues. They are hidden behind the network, connecting the computers attached to it; they include the cable modem termination systems that give users access to the Internet, as well as the routers that network providers use to forward Internet data from one physical network to another. Computers in the network carry the flow of data, while computers attached to the network are its origins and destinations.

Indeed, the Internet is not an integrated product. Instead, many layers and modules, each with its own objects and purposes, work behind the interface we usually see to form the whole Internet. Modularity, layering, and the end-to-end arguments shape the architecture of the Internet, and the idea behind these principles is to keep the components independent while letting them work together.

Modularity employs abstraction, information hiding, and a strict separation of concerns to make the Internet more user friendly. More specifically, modularity separates visible information from hidden information: users need only the visible information to fulfill their purposes, while designers can access the hidden information to develop their modules.

Layering, in turn, is a special form of modularity that constrains the dependencies among modules. A lower layer can only interact with its neighbors and provide service to the layer above it, while higher layers are protected from changes in lower layers. Layering helps reduce the complexity of the network, and the end-to-end argument determines where functionality is placed in each layer.

Overall, the design of the Internet is a complex work composed of modules, layers, and the functions within each layer. Although it seems complicated, the components actually work together to reduce complexity and make the network stable.

Weekly Writing for Week 10


Banruo Xiao

Janet Murray's reading summarizes several key terms we encountered in previous weeks. She also raises many new ideas and definitions to further explain interaction design and affordance.

Interaction Design

One of the most interesting points Murray makes in this part is that designers are often engaged in a process of refinement. Every design develops out of an immature medium. Through improvements in technology and a deeper understanding of culture and of people, the immature medium, whether an online platform or a mobile application, becomes more user friendly. Instagram is an apt example. Compared with the original version, the interface of the current Instagram is clear: almost every function is represented by a well-known icon, and a new user probably does not need to learn how to use it. It is user friendly because the designer imitates real-life human gestures and behavior and creates a similar environment online. For example, when users want to exit the current page, they no longer need to tap a close icon; they simply swipe the page to the right, and it immediately disappears from the interface. The improvement is significant because it successfully imitates a human behavior, closing a book, to signal that the user is going to stop reading the page.


Meanwhile, Janet Murray emphasizes throughout her book the four affordances of a computer: encyclopedic, spatial, procedural, and participatory. The computer is an encyclopedic medium because it contains and transmits vast amounts of information that humans can access. It is a spatial medium because it creates virtual spaces for users to navigate. It is a procedural medium because of its ability to represent and execute conditional behavior. It is a participatory medium because it allows users to interact with it and with other users. Instagram again serves as a proper example. Users keep updating information on Instagram, which helps it become an informational platform. Instagram lets users navigate from their personal pages to others'. It is also quite stable and offers many possibilities for users to interact with the application. As a social network platform, it is obviously participatory, whether in human-computer interaction or human-to-human interaction.

A successful platform can be a symbol that assembles many academic definitions and scholarly outcomes, and Instagram is one such symbol.

Weekly Writing for Week 9


Banruo Xiao

The history of the technology world is much like the process of sculpting a piece of stone. Each scientist, engineer, and developer works on one specific part, and together they create a masterpiece. From a giant machine to an interactive artifact, the deeper engineers and developers dig into the world of technology, the more hidden treasures they find.

According to Martin Irvine, the computing machine was from the start closely related to the idea of the interface, which connects components together and presents them on one screen. Doug Engelbart created the concept of a symbol-manipulation system to help people solve problems with the interface and the computer. Stu Card then advanced the idea that to design is to understand people: instead of focusing only on developing hardware, people should pay more attention to the user's side. The mouse, the desktop, and many other advanced technologies were all developed from one idea, namely how to make people's interaction with the machine smoother.

At the same time, Lev Manovich spends a whole chapter discussing how the medium and the metamedium helped the development of the computer and made it more interactive for nontechnical users. Many forms of media, such as text, image, music, and video, make it possible for the computer to provide more functions and solutions to people with various needs. Indeed, media provide more ideas and possibilities to both designer and user. The programming environment, which Manovich describes as a metamedium, can even help developers solve problems at different layers at the same time.

Bill Moggridge mentions in his paper that each conceptual idea should be tested many times to see whether it works for most users. Doug Engelbart's demo illustrates this testing of ideas in pursuit of better design. In fact, the ideas of the interface, symbol manipulation, interaction, and the metamedium all came out of a thousand rounds of trying. The capability of the computer now goes far beyond the military and its other early uses, an accomplishment that should be credited to designers' persistent attempts and their ability to discover and solve problems.


Weekly Writing for Week 8


Banruo Xiao

Looking through the history of computing, I feel that each development follows a human need of its time. The process of data input and output is simply a way to transmit signals through circuits and bytes and to spread the information to everyone who can access it. The computer, in this sense, is no more than a book; the only difference is that we can all interactively make changes.

GitHub is a really interesting website. On the one hand, it is a product of programming languages and computation. On the other hand, it is a platform that brings developers together to work on code (open source). The finalized work (the information, represented as a symbol sequence everyone can read) and the raw data (the details of how developers transmit the data) appear on the same page.

Website visitors can directly see the progress of a complete project on the web page, and they can find the information they need because the design of the site leads them to results through text, graphics, and a search bar. If the whole page were full of code without computation, even developers (GitHub's main audience) could hardly decode it. Computation thus has a unique significance: it helps users understand the symbols first and then satisfy their needs.

In addition, thinking back to my experience of learning Python, I somehow feel that a programming language is an interpreter that helps me and the computer understand each other and solve problems together. The language is not something human beings can hardly understand; instead, each line can be read clearly. By adding a few constructs, such as "+" and "if...then...," developers can have the computer run the computation and turn the data into something that appears on the interface we usually see. In other words, the significance of a programming language is to give the computer instructions that lead to a result.

Although computation and programming languages have their own unique shining points, in some sense they are merely tools, just like matches and knives, that people use to satisfy their demands.

Weekly Writing for Week 7


Banruo Xiao

Technology opponents often complain that our lives are occupied by phones and computers. People send emails and search for information on their computers, and we text our friends and do many fun things on our smartphones. Yet almost nobody, except computer engineers, really knows what the computer and the phone are doing behind the screen.

Basically, the computer works in a way similar to the radio and the telegraph: it transmits signals. Shannon describes this process in his theory of information, in which information can be transmitted and received without depending on the information's meaning. For example, when I type this blog post on my laptop, the laptop does not have to understand what each word means in order to process it. Whether the action is typing, copying, or pasting, the system encodes and decodes the information accurately. In fact, according to Shannon, as cited by Peter J. Denning and Tim Bell, the computation process is more like an endless series of yes-or-no choices; each piece of information is transformed between input and output channels, with nothing tied to the actual meaning of the content.

Indeed, does information really contain meaning? We read text, see images, and watch videos to acquire information, and we understand them in the ways we have learned since childhood. I am Chinese and can understand English. If someone asked me to read a piece of paper in Russian, would that text really have meaning for me? No; it would be nothing more than a bunch of strange symbols. The content itself, then, has no inherent meaning. People give meaning to each word and each gesture in order to form a society. In this sense, the telegraph, the radio, the phone, and the computer can certainly work without learning the meaning behind each symbol; all the information relies on people's minds to interpret it.

From the computation system's point of view, it has its own language, machine code, in which to "talk." That people do not understand the electronic signals does not mean the signals have no meaning: the computation system knows its next step by receiving a signal or processing code. In other words, the interaction between human and machine is connected by two parallel languages, and the computation system acts as an interpreter that lets each side do its own work.

Weekly Writing for Week 6


Banruo Xiao

In this week's readings, affordance is described as a social and/or cultural product. It can be seen as a norm guiding designers to think about how a product or service can be convenient for users; at the same time, people rely on affordances to learn social behavior.

Consider the affordances of a book. First, readers do not need to learn how to open it: the behavior of opening and closing can be subconscious, since only one side can be flipped. Second, the other side of the book is designed to be solid and supportive, so the reader can hold it. Inside, the text usually starts from the left side, following people's reading habits; in Taiwan, however, the text starts from the right side in a vertical layout, fitting the reading behavior of local readers. Furthermore, the size and the font of the text are designed so that people can read it clearly, and the nature of the paper implies that people can take notes in the blank areas. It is also easy to pick up and carry a book, because its size and weight are designed for hands and for people's carrying capacity.

However, the book has its own constraints. For example, it is hard to search for a specific paragraph, to preserve paper sources, or to share them, because the properties of paper limit these actions. With developing technology, hard copies are gradually being replaced by electronic sources. People can carry thousands of books in a book-sized tablet such as an iPad or Kindle, and no longer need to stay in the library and check out books to do research. In a word, the interface has changed from paper to a combination of pixels and bits. Yet the affordances of a tablet share many similarities with those of a physical book. For example, the reader still flips to reach the next page, and the layout still follows local readers’ patterns. They are similar because people are used to reading a book through these behaviors; a new pattern would only become a constraint.

The difference is that behaviors such as taking notes, flipping pages, and highlighting sentences do not act directly on the screen of the tablet. Instead, they are transformed and interpreted by computing code and the operating system. Indeed, affordance is defined as a relationship between interface and agency. No matter how the interface or the agency changes, some universal standards of affordance remain. There may be minor changes, but at least for a book, whether printed on paper or shown on a pixel-based screen, the affordances are basically the same.
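As an illustration of how code mediates these behaviors, the following Python sketch (all names here are invented, not any real e-reader’s API) shows how a tablet’s software might translate a physical swipe into the book-like affordance of turning a page:

```python
# A minimal sketch of a tablet translating gestures into page turns.
# The swipe itself does nothing physical; code interprets it.
class EReader:
    def __init__(self, total_pages):
        self.total_pages = total_pages
        self.page = 1

    def handle_gesture(self, gesture):
        # The operating system reports a gesture; this handler decides
        # what book-like behavior it corresponds to.
        if gesture == "swipe_left" and self.page < self.total_pages:
            self.page += 1
        elif gesture == "swipe_right" and self.page > 1:
            self.page -= 1
        return self.page

reader = EReader(total_pages=200)
reader.handle_gesture("swipe_left")   # page 2
reader.handle_gesture("swipe_left")   # page 3
reader.handle_gesture("swipe_right")  # back to page 2
```

The familiar affordance survives, but only because a layer of code deliberately reproduces it.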

Weekly Writing for Week 5


Banruo Xiao

Instead of saying “Hi, I’m a human being,” most people will probably introduce themselves by name when first meeting someone. From my perspective, a name is a mediation that represents who I am. People can get an initial sense of someone’s family from the last name and can even guess one’s nationality. A great deal of personal information hides behind a simple name. More than that, one can present a name in different ways, through voice or through text. Extending from this simple example, I would like to take YouTube as an example and try to de-blackbox it.

Regis Debray (1999) introduces a four-step method for understanding combinatoriality, which is the structure and technology behind the interface and the system model underlying mediology. Following his method, first, I can recognize that the YouTube web page is the interface of the whole system.

Second, the whole interface can be segmented into elements. It has links, video titles, thumbnail images, and functions that lead to a new web page or a new interface. More specifically, different texts have separate meanings. For example, a text can be a link to a new web page, or it can be a comment left by viewers of the video. An image can be a still frame from a video, and it is also the link to that video. Symbols carry even more meanings, and each symbol has its own function.

Third, an element can itself be an interface to a network. For example, the symbol “×” means closing a window; behind the symbol are lines of code running on a node. In fact, everything we see on YouTube is created line by line in code. The words we read are not typed directly onto the website: when we leave a message, the words are processed and transmitted by code.
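The relationship between a visible symbol and the code behind it can be sketched in a few lines of Python (a toy dispatch table of my own invention, not YouTube’s actual implementation):

```python
# A symbol on screen is only a trigger; its real effect is a handler
# registered in code. Clicking "×" dispatches to that handler.
handlers = {}

def on(symbol):
    """Register a function as the handler for a UI symbol."""
    def register(func):
        handlers[symbol] = func
        return func
    return register

@on("×")
def close_window():
    return "window closed"

# Simulate the user clicking the "×" symbol.
result = handlers["×"]()
print(result)  # window closed
```

The user never sees the handler; the symbol stands in for it, which is precisely what makes the interface a black box.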

Thinking beyond the YouTube platform, the meaning of the cross symbol “×” is plentiful. Besides representing the closing of a web page, it can also mean that something is false or finished. People navigating a web page do not need to learn how to close it, because the symbol already carries a meaning that tells viewers “this is not what I want.”

This is just a very straightforward way to think systematically about the platform. There are many extensions that individuals can create for their own use; Google Developers offers many examples. The details behind the platform are complicated, but the idea is simple: a user-friendly website is composed of many elements, element combinations, and cultural implications.

Weekly Writing for Week 4


Banruo Xiao

I still remember how hard it was for my family to drive to an unknown place before GPS navigation was created. We had a map but could hardly follow it, since no one could pay full attention to it while driving. We followed the street signs, but the road was sometimes under construction and we had to take a detour. We could ask passers-by, but would get lost when no one was on the road.

The creation of online navigation platforms such as Google Maps saves us. The only thing I need to do is type in the name of the destination. It automatically chooses an appropriate route, and a voice prompt guides me in the right direction. Image and language are two of the oldest symbolic techniques that people use to understand and to communicate. Google Maps transforms the whole real world into a symbolic depiction of the relationships between elements in space. The voice prompts Google adopted provide easy guidance and keep the user from having to stare at the screen.
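Route choice of this kind is classically modeled as a shortest-path search over a graph of roads. The Python sketch below uses Dijkstra’s algorithm on a tiny invented road network; the place names and distances are illustrative only, and real Google Maps routing is far more sophisticated:

```python
import heapq

# A toy road graph: each edge weight is a distance in km (invented values).
roads = {
    "home":    {"main_st": 2, "back_rd": 7},
    "main_st": {"home": 2, "mall": 4},
    "back_rd": {"home": 7, "mall": 1},
    "mall":    {"main_st": 4, "back_rd": 1},
}

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: return (total distance, path) from start to goal."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return float("inf"), []

print(shortest_route(roads, "home", "mall"))  # (6, ['home', 'main_st', 'mall'])
```

The navigator’s “appropriate route” is, at its core, a symbolic computation like this one: the real world reduced to nodes, edges, and weights.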

According to Michael Cole (1996), Google Maps, as an artifact, becomes a mediator and changes the interaction between users and the real world. From a personal view, Google Maps changes the user’s task from searching for the right way to following its guidance (Donald A. Norman, 1991). It is also a distributed cognitive process coordinating people and their immediate environment (James Hollan, Edwin Hutchins, and David Kirsh, 2000).

From a theoretical perspective, I can explain the design of online navigation platforms with dozens of academic definitions. For people in the real world, however, such a platform truly changes the way we interact with the external environment. In fact, all tools and technologies, from the first compass to today’s AR-embedded navigation applications, help people adapt better to their surrounding environment.