How Can You Mute Your Voice on iPhone?


Banruo Xiao


The purpose of this paper is to de-blackbox the mute function embedded in the social media applications that users tap many times a day on their smartphones. The paper aims to show that the "one-click" effect is neither easy nor simple. On the contrary, it is a complicated process in which the Internet, software, and hardware work together to achieve the result. Following this purpose, the paper addresses the Internet, the software, and the hardware in a smartphone separately, explaining how each functions and how they work together.


Technology often brings surprise and excitement to users in their daily lives. It is hard to imagine how people talked to each other over long distances before the cell phone and the Internet were created. Now we have the smartphone, of which the iPhone is a typical example. Numerous applications, following the creation of the smartphone, are designed to make it more convenient for users to access the Internet. Users do not even need to pay to make online phone calls through social media applications. Technology can do more than that: when someone makes a call, he or she can mute his or her own voice while the call continues, with a single tap of the mute button. It is remarkable that one can hear the recipient's voice while no one can hear one's own. This paper focuses on the mute function that social media applications embed and how it technically works from the designer's point of view. The paper is divided into two parts. The first part discusses how users can make online phone calls through the Internet. The second part examines how the function of muting one's voice with a single tap is technically achieved on the iPhone.

How can people make online phone calls?

It is common sense that we are now in a digital age. The Internet connects devices together; users can acquire and share information and communicate through it. It seems that we are all on the Internet. However, from the point of view of computer scientists and Internet developers, the Internet is designed as a complex system containing multiple layers and various modules. The layers and modules work together to provide a user-friendly interface for people who know nothing about computer or network design, like me. In other words, the design of the Internet is far more complex than users think. Although the Internet is a product of complex design thinking, it follows many universal principles.

  • How the process works and the definition of some key terms

To allow users to communicate online, whether by sending messages or making voice and video calls, networked devices on the Internet rely on protocols, which are the methods for sending and receiving data packets (Irvine, 2018). The Transmission Control Protocol (TCP) and the Internet Protocol (IP) are the two most important communication protocols. TCP breaks information and messages into pieces called packets and reassembles the packets into the original information. IP is responsible for ensuring that data packets are sent to the right destination. The Internet works primarily "end to end" to make sure data packets are sent and received correctly from one connecting point to another (Gralla & Troller, 2006). For this reason, the Internet is also known as a packet-switched network. To understand and interpret the protocols, devices must have a socket or TCP/IP stack software.
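The split-and-reassemble idea behind TCP can be sketched in a few lines of code. This is a toy illustration, not real TCP (actual segments carry byte offsets, checksums, and acknowledgements); the function names are invented for the example.

```python
import random

def split_into_packets(message: str, size: int) -> list[tuple[int, str]]:
    """Break a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Reorder the packets by sequence number and rebuild the message."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = split_into_packets("Hello over the Internet!", size=5)
random.shuffle(packets)        # the network may deliver packets out of order
print(reassemble(packets))     # -> Hello over the Internet!
```

Even when the packets arrive out of order, the sequence numbers let the receiving end put the original message back together, which is the core of what TCP provides.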

The technique that allows users to make online phone calls is Voice over Internet Protocol (VoIP), which uses TCP/IP to deliver voice messages. With VoIP, the process of making an online call is straightforward. The caller speaks into the microphone attached to the device. The VoIP phone transforms the voice signal into digital data and compresses it for easier delivery over the Internet. The compressed, digitized voice signal is then broken down into packets, which are sent to the IP voice gateway nearest to the destination. The gateway takes the voice packets and, after combining them, uncompressing the data, and converting it back to its original form, sends the voice signal through the normal Public Switched Telephone Network. The recipient can listen through speakers and a sound card, or through an earphone connected to the device via a USB port (Gralla & Troller, 2006). Currently, most smartphones adopt Voice over Long Term Evolution (VoLTE), which uses VoIP to achieve network communication (Elnashar, El-Saidny, & Mahmoud, 2017).
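The VoIP steps above (digitize, compress, packetize, then combine at the gateway) can be walked through schematically. Every function here is a stand-in: real systems use audio codecs like G.711 or Opus and RTP packets, so this is only a sketch of the pipeline's shape.

```python
def digitize(analog_samples):
    """Sample the voice signal into integers (stand-in for an ADC)."""
    return [round(s * 127) for s in analog_samples]

def compress(samples):
    """'Compress' by keeping every other sample (a deliberately toy codec)."""
    return samples[::2]

def packetize(data, size=4):
    """Break the compressed stream into numbered packets."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def gateway_receive(packets):
    """The IP voice gateway: combine packets back into one stream."""
    return [x for _, chunk in sorted(packets) for x in chunk]

voice = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]   # a fake analog waveform
stream = gateway_receive(packetize(compress(digitize(voice))))
print(stream)
```

The point of the sketch is the ordering of the stages, not the signal processing: each stage hands a transformed stream to the next, exactly as the paragraph above describes.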

  • A broader image of the process: Modularity, layering and Internet’s extensibility and scalability

The complete process of making an online phone call reflects numerous Internet design principles, and the rule of thumb is that the Internet is not a single integrated product. Indeed, many layers and modules, each serving different objects and purposes, work behind the interface we usually see to form the whole Internet. Modularity and layering shape the architecture of the Internet; the idea behind these principles is to make the components more independent while still allowing them to work together efficiently.

According to Barbara van Schewick, modularity employs abstraction, information hiding, and a strict separation of concerns to make the Internet more user-friendly. More specifically, modularity separates visible information from hidden information: users only need to see the visible information to fulfill their purpose, while designers can access the hidden information to develop their modules. In this case, on the user's side, the only visible actions are opening an application and calling someone. The TCP/IP and VoIP machinery is hidden, and only the application designer needs to know how to work with it.

At the same time, layering is a special form of modularity that constrains the dependencies among modules. A lower layer can interact only with its neighbors and provides services to the layer above it, while higher layers are protected from changes in lower layers. Layering helps reduce the complexity of the network, and the end-to-end argument determines where the functionality of each layer belongs. In this case, TCP and IP are two layers that work separately, yet they work together whenever a voice signal needs to be sent to its destination.
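The neighbor-to-neighbor constraint can be made concrete with a toy protocol stack in which each layer only adds or strips its own header and hands the payload to the layer next to it. The header strings are illustrative, not real TCP/IP formats.

```python
def app_send(message: str) -> str:
    """The application talks only to its neighbor, TCP."""
    return tcp_send(message)

def tcp_send(payload: str) -> str:
    """TCP adds its header and hands the segment down to IP."""
    return ip_send("TCP|" + payload)

def ip_send(segment: str) -> str:
    """IP adds its header; the result is what goes 'on the wire'."""
    return "IP|" + segment

def ip_receive(packet: str) -> str:
    """The receiving side strips each header in reverse order."""
    return packet.removeprefix("IP|")

def tcp_receive(segment: str) -> str:
    return segment.removeprefix("TCP|")

wire = app_send("hello")
print(wire)                              # -> IP|TCP|hello
print(tcp_receive(ip_receive(wire)))     # -> hello
```

Because the application only ever calls `tcp_send`, the IP layer could be rewritten without the application noticing, which is the sense in which higher layers are protected from changes in lower ones.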

Modularity and layering also give the Internet room to grow. The ability to add unlimited modules and layers solves two design problems: scalability (how the design scales to an unlimited number of connections) and extensibility (how new modules and layers can be added to the common architecture). As long as the protocols function correctly, neither problem needs further concern.

Explaining the mute function (how users act)

Once the process of making an online phone call is clear, it is straightforward to understand how the mute function works. Basically, the mute function works like an on/off toggle switch. On the user's side, simply tapping the mute icon on the screen automatically turns off the microphone embedded in the phone. Based on the process explained in the previous section, no further voice signal needs to be digitized or compressed, so the subsequent steps end automatically. From the developer's point of view, however, the whole process is not that simple. Many questions need to be answered before reaching the "one-click" effect. For example, how is it possible for a touch on the screen to turn on the mute function?
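The on/off toggle described above can be sketched as a tiny state object: one tap flips the microphone state, and while muted, no voice data enters the pipeline at all. The class and method names (`Call`, `toggle_mute`) are invented for illustration, not an actual iOS API.

```python
class Call:
    def __init__(self):
        self.mic_on = True   # a call starts unmuted

    def toggle_mute(self):
        """One tap on the mute icon flips the microphone state."""
        self.mic_on = not self.mic_on

    def capture_packet(self, voice):
        """When muted, no voice signal is digitized, compressed, or sent."""
        return voice if self.mic_on else None

call = Call()
call.toggle_mute()                    # user taps the mute icon
print(call.capture_packet("hello"))   # -> None (nothing enters the pipeline)
call.toggle_mute()                    # tapping again unmutes
print(call.capture_packet("hello"))   # -> hello
```

The key design point is that muting cuts the pipeline off at its first stage: because nothing is captured, every downstream step (digitizing, compressing, packetizing) ends automatically, just as the paragraph describes.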

  • How to actually achieve a mute status on a smartphone

Before addressing that question, this part will first de-productize a smartphone. Listing the components of a smartphone will show how each part cooperates with the others to satisfy the user's demand.

  • The components of a smartphone

The first and most obvious component is the display, an interactive interface that enables users to interact with the device. Today there are two main types of display: one based on LCDs and the other based on LEDs. According to Apple's official website, the newest version of the iPhone has an LCD-based display, meaning that the light users see is generated by a backlight on the other side of the display shining through a set of filters (FOSSBYTES, 2017). The next component is the battery. In most brands of smartphone, the battery is a built-in rechargeable lithium-ion battery.

Inside the phone, perhaps the most important item is the "system on a chip," or SoC, which comprises the CPU, the GPU, the LTE modem used for communication, the display processor, the video processor, and the other bits of silicon that turn it into a functional system. Apple's own chipset uses ARM's system architecture.

In addition, each device contains Random Access Memory (RAM) and storage memory. RAM works with the CPU to increase processing efficiency and extend battery life, while the storage memory, which comes in various capacities, is used for internal storage. On the outside, all smartphones come with rear-facing and front-facing cameras, each comprising up to three main parts: the sensor for detecting light, the lens, and the image processor.

There are also five main sensors that allow a smartphone to provide its touch-enabled functionality. They are: “

  1. Accelerometer: Used by applications to detect the orientation of the device and its movements, as well as allow features like shaking the phone to change music.
  2. Gyroscope: Works with the Accelerometer to detect the rotation of your phone, for features like tilting phone to play racing games or to watch a movie.
  3. Digital Compass: Helps the phone to find the North direction, for map/navigation purposes.
  4. Ambient Light Sensor: This sensor is automatically able to set the screen brightness based on the surrounding light, and helps conserve battery life. This would also explain why your smartphone’s brightness is reduced in low-light environments, so it helps to reduce the strain on your eyes.
  5. Proximity Sensor: During a call, if the device is brought near your ears, it automatically locks the screen to prevent unwanted touch commands.” (FOSSBYTES, 2017)

Indeed, there are far more components inside an iPhone than there is space to list here. Some other elements crucial to the mute function include the three microphones, the earpiece speaker, the lower speaker enclosure, the top speaker assembly, and board chips containing the gigabit LTE transceiver, the modem, the WiFi/Bluetooth module, and the touch controller.

  • Touch screen and how it works with other parts to achieve the mute function

Beyond these, the component most obviously related to the mute function is probably the touch screen. To allow users to issue touch commands, the touch screen includes a layer of capacitive material. The iPhone’s capacitors are arranged according to a coordinate system. “Its circuitry can sense changes at each point along the grid. In other words, every point on the grid generates its own signal when touched and relays that signal to the iPhone’s processor. This allows the phone to determine the location and movement of simultaneous touches in multiple locations” (How the iPhone Works, 2007). The touch screen detects touch in two ways: mutual capacitance and/or self-capacitance. “In mutual capacitance, the capacitive circuitry requires two distinct layers of material. One houses driving lines, which carry current, and the other houses sensing lines, which detect the current at nodes. Self-capacitance uses one layer of individual electrodes connected with capacitance-sensing circuitry. Both of these possible setups send touch data as electrical impulses” (How the iPhone Works, 2007). Later versions of the iPhone combine the capacitive touch-sensing layer and the LCD display layer into a single layer.
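The grid sensing described above can be reduced to a toy model: each (row, column) node reports a capacitance change, and the node with the strongest change is taken as the touch location. Real touch controllers interpolate across neighboring nodes and track many touches at once; this sketch shows only the basic idea.

```python
def locate_touch(grid: list[list[float]]) -> tuple[int, int]:
    """Return the (row, col) of the node with the largest signal change."""
    return max(
        ((r, c) for r in range(len(grid)) for c in range(len(grid[0]))),
        key=lambda rc: grid[rc[0]][rc[1]],
    )

# Simulated capacitance readings: a finger near the center node.
readings = [
    [0.0, 0.1, 0.0],
    [0.1, 0.9, 0.2],
    [0.0, 0.1, 0.0],
]
print(locate_touch(readings))   # -> (1, 1)
```

Because every node generates its own signal, the same scan can find several strong nodes at once, which is how the screen supports simultaneous touches in multiple locations.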

The iPhone’s processor, together with software on the logic board chip, interprets input from the touch screen. The capacitive material sends raw touch-location data as electrical impulses to the processor, and the processor asks software located in memory to interpret the raw data as a command or gesture. The interpretation process analyzes the size, shape, and location of the affected area to determine which gesture the user made, combining the physical movement with information about which application the user was using and what that application was doing. The processor may then send commands to the screen and other hardware. In the mute function’s case, when a user on a call taps the mute icon in an application, the processor follows the steps above and sends a command to turn off the microphone. At the same time, while the user is on the call, other hardware, including the RAM, LTE transceiver, WiFi/Bluetooth module, and modem, is also functioning to complete the process of transferring the Internet signal discussed in the first part of this paper. In general, on the hardware side, the processor on the logic board chip is the most important component, handling all the steps required to fulfill the command to mute the voice.
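The interpretation step above can be sketched as a hit test: raw touch coordinates are matched against the on-screen controls, and a tap that lands inside the mute icon produces a "turn the microphone off" command. All names and coordinates here are invented for the example; real gesture recognition also weighs the touch's size, shape, and movement over time.

```python
# A hypothetical on-screen mute button, described by its bounding box.
MUTE_BUTTON = {"x": 40, "y": 300, "width": 60, "height": 60}

def hit_test(touch_x: int, touch_y: int, control: dict) -> bool:
    """Does the touch fall inside the control's bounding box?"""
    return (control["x"] <= touch_x < control["x"] + control["width"]
            and control["y"] <= touch_y < control["y"] + control["height"])

def handle_touch(touch_x: int, touch_y: int, mic_on: bool) -> bool:
    """Interpret a tap and return the new microphone state."""
    if hit_test(touch_x, touch_y, MUTE_BUTTON):
        return not mic_on    # the processor commands the mic to flip
    return mic_on            # the touch was elsewhere; nothing changes

print(handle_touch(60, 320, mic_on=True))   # -> False (mic turned off)
print(handle_touch(5, 5, mic_on=True))      # -> True (tap missed the icon)
```

Note that the same raw coordinates mean different things in different applications: the mapping from a touch to a command depends on what the current application has drawn at that location, which is why the software's knowledge of the application state is part of the interpretation.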

  • Design principles and concepts: affordance (the icon), interface (touch screen and others), modularity, computational thinking

The whole process at work in the iPhone demonstrates several design principles and concepts. For example, the icon for the mute function, used by almost every social application that embeds the function, clearly represents that no more speaking or talking will be transmitted. In design terms, according to Martin Irvine (2018), something that affords an action or a certain interpretation, where its use seems to be an "obvious" inference, can be called an affordance: an artefact leaves visual cues about how to use it. In fact, the "obvious" inference never comes automatically; it is a product of socialization, and humans understand it through social learning. At the same time, the touch screen that enables users to operate their smartphone can be seen as an interface. An interface is anything that connects two different systems across their boundaries. In this case, the touch screen is the interface connecting the user and the smartphone; more than that, it is an interactive interface that allows users to interact with the device. Furthermore, the idea of modularity also applies: each component of the iPhone runs individually, but they work together to carry out the command.



Overall, the process of muting one's voice involves Internet protocols that digitize, compress, and send the voice signal to its destination. At the same time, by relying on the touch screen as an interactive interface, the smartphone lets users simply touch the display, connecting the processor, microphone, speaker, and software so that they work together to complete the process. The most fascinating thing is that both Internet design and hardware design share similar design principles, implying that universal design principles build a solid foundation for any technological design.


van Schewick, B. (2012). Internet Architecture and Innovation. Cambridge, MA: The MIT Press. Excerpt from Chap. 2, “Internet Design Principles.”

Elnashar, A., El-Saidny, M. A., & Mahmoud, M. (2017). Practical performance analyses of circuit-switched fallback and voice over LTE. IEEE Transactions on Vehicular Technology, 66(2), 1748–1759.

M. P. (2017, November 3). Inside the iPhone X: First teardown reveals two batteries. Retrieved from

Gamet, J. (n.d.). iPhone 4: Finding the hidden hold button. Retrieved from

Gralla, P., & Troller, M. (2007). How the Internet works (8th ed.). Indianapolis, IN: Que Pub.

Wilson, T. V., Chandler, N., Fenlon, W., & Johnson, B. (2007, June 20). How the iPhone works. Retrieved December 13, 2018.

Irvine, M. (2018). “Introduction to Affordances and Interfaces.”

Irvine, M. (2018). The Internet: Design Principles and Extensible Futures (Why Learn This?).

(n.d.). Apple iPhone 7 Teardown. Retrieved from