Unlocking Facial Recognition – The design and implications of Facial Recognition and how privacy may or may not be compromised by using it. 

Eish Sumra, Leading By Design Final Project, December 2019


Facial recognition is both an innovative and intimidating feature growing in prevalence on smartphones. Most newly released phone models include some form of facial recognition or fingerprint sensor to unlock the device. This raises many questions about how these systems are designed, both as interactive elements of hardware and software and in how they handle the data they need to function correctly. To establish these processes we must de-black-box the product, looking specifically at Apple's popular Face ID: how it works, what ecosystems of data it collects and creates, and what implications it carries for privacy rights. Then, by drawing on commentary from technology writers and rhetoric from a prominent court case, a clear narrative on the burden of protection can be built to discover which party (the user, the company or the government) is responsible for the security of a person's information.


In the past decade alone, how we unlock our smartphones has changed as much as the hardware itself. From simple two-button unlocking systems to passcodes, to 'connect the dots' patterns, to fingerprint sensors, to Apple's prominent 'Face ID', the idea of phone privacy has redefined how we interact with our devices. Originally, pressing buttons was our only means of keeping a phone locked; now, through elaborate hardware/software interaction, companies have created interactive ways to open your phone and, supposedly, protect it from others.

However, as impressive and future-forward as facial recognition and fingerprint sensors may seem, the information needed to make them work is deeply personal, and if stolen or shared inappropriately it could end up with external parties who can use it for other means. While there are few mainstream stories of anyone exploiting these processes, that does not necessarily mean we are all safe from cyber misuse. Additionally, these seemingly exciting features require our most personal and individual information, fingerprints and facial identification, to be shared with a company or a product. This in itself is a potentially dangerous precedent for tech companies to set: encouraging the sharing of irreplaceable information in order to use a basic and necessary function of our phones. These processes are optional, and one may disable facial recognition or fingerprint sensing; however, the vast majority of smartphone users still use one or both functions, perhaps without truly understanding what information is being shared and how their personal data is being used, not to mention the implications this has on their own levels of privacy.

This paper investigates both Face ID and Touch ID, how they work and the implications of the mechanisms at play. Little is understood by consumers about what happens when they unlock their phones, yet the information being used is important and specific to each person. By reaching into the black box of phone security and unpacking the various levels of design and technical systems, we can compare the safety of both functions and discover how using them affects users. Using information shared by Google and Apple, we can establish the journey of our personal data and compare the accessibility of such data.

Once the mechanisms are understood we can break them down and look at the design flaws and security implications of both. Then we can ask the question: "Who is responsible when it comes to our privacy on smartphones?" Do governments and policymakers have to ensure boundaries are put in place to protect consumers? Is it the duty of phone manufacturers to stick to their promises of secure usage, or is the burden on us, the general public, to consistently scrutinize these functions and the companies behind them, and avoid or use the devices depending on our own moral position on privacy? With three key players in this field, it can be unclear who, if anyone, is at fault when private information is shared. However, the lines become clear once the mechanisms of these functions are truly understood and the commitments of tech companies are evaluated accordingly. In my research I hope to lay out the narrative as it pertains to the simple act of unlocking a phone, though this debate is wide-ranging and includes other applications and other smart devices such as the Amazon Echo or Google Home. Privacy is perhaps one of the biggest issues of our time: the more connected we are with one another, the more exposed we are to threats, hacking and exploitation. Our human interactions are becoming ever more interwoven with technology, from our speech patterns and conversations being heard by smart speakers to footage of private property being captured by smart security cameras. I hope to lay out what our privacy expectations should be and how we can ensure the safety of billions of people online.

How does Face ID Work? 

Face ID is set up using the front camera, which projects over 30,000 infrared dots onto your face, tracking the undulations and specificities of your profile. You move your head in a circular motion, up and down, so the dots can cover as much surface area as possible, building a digital picture of your individual face. The infrared map of your face is then translated into a mathematical representation for the device. What makes Face ID particularly interesting is its ability to detect your face even when you are wearing a hat, scarf or sunglasses, or have grown a beard.
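The core step described above, turning a scanned map of the face into a mathematical representation that can be compared against the enrolled one, can be sketched very loosely as follows. Apple's actual algorithm is proprietary; this toy Python version simply flattens a grid of depth samples into a normalized vector and compares it to the enrolled template with a similarity threshold, purely to illustrate the idea:

```python
import math

def depth_map_to_vector(depth_map):
    """Flatten a 2-D grid of depth samples into a normalized feature
    vector. A toy stand-in for the real mathematical representation."""
    flat = [d for row in depth_map for d in row]
    norm = math.sqrt(sum(d * d for d in flat)) or 1.0
    return [d / norm for d in flat]

def matches(template, candidate, threshold=0.99):
    """Compare an enrolled template to a new scan via cosine similarity."""
    similarity = sum(a * b for a, b in zip(template, candidate))
    return similarity >= threshold

enrolled = depth_map_to_vector([[1.0, 1.2], [0.9, 1.1]])
same_face = depth_map_to_vector([[1.0, 1.2], [0.9, 1.1]])   # matches
other_face = depth_map_to_vector([[2.0, 0.1], [0.1, 2.0]])  # rejected
```

The threshold captures the trade-off the rest of this paper discusses: set it too loose and strangers unlock the phone; set it too strict and a hat or beard would lock the owner out.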

Face ID is just one of many facial recognition systems currently in use by phone manufacturers. Each operates using different technology, but all share the same goal: to translate the contours of a head into a mathematical code that can be understood by an operating system. As a phone function it is impressive both in its specificity in identifying different faces and in its ability to continually update and modify the mathematical mapping, allowing natural or frivolous changes to one's appearance to happen without affecting the phone's ability to recognize identity.
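The continual updating mentioned above can be pictured as gradually blending each newly accepted scan into the stored template. The real update policy is not public; this is only a sketch of the general technique (an exponential moving average over feature vectors), with the blend rate `alpha` chosen arbitrarily:

```python
def update_template(template, new_scan, alpha=0.1):
    """Blend a freshly accepted scan into the stored template so gradual
    changes in appearance (beard growth, aging) are absorbed over time.
    Illustrative only: Apple's actual update policy is not published."""
    return [(1 - alpha) * t + alpha * s for t, s in zip(template, new_scan)]

# Each successful unlock nudges the template toward today's appearance.
template = [1.0, 0.0]
template = update_template(template, [0.0, 1.0])  # drifts slightly
```

Because each update is small, a single unusual scan barely moves the template, but a slow change in appearance accumulates over many unlocks.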

As a design concept, I have been drawn to facial recognition because it has become standard across many disciplines, not just personal technology. In airports it is used in immigration lines; in some countries it is used through closed-circuit cameras on streets; and it appears in everyday software such as Apple's Photos app, which identifies faces and collects pictures into albums of the specific people who crop up repeatedly in one's camera roll. It is also an intensely personal entity to involve in technological processes, one which, depending on your views, can be a great way to ensure privacy for your device, or be vulnerable to exploitation. Using our class materials, we can evaluate facial recognition as a piece of interactive design. Measured against Ben Shneiderman's 'Eight Golden Rules of Interface Design', the Face ID example of facial recognition stands up to many of the tests.

The first rule is "Strive for consistency"; the rule states: "consistent sequences of actions should be required in similar situations." Naturally, Apple has integrated Face ID into multiple uses, such as downloading an app, opening an app or paying for items using Apple Pay. The interface is the same each time and the function works with similar speed and exacting results in each form of use. The second rule is "Seek universal usability", with Shneiderman going on to say: "Recognize the needs of diverse users and design for plasticity, facilitating the transformation of content. Novice to expert differences, age ranges, disabilities, international variations, and technological diversity each enrich the spectrum of requirements that guides design." Once again, Face ID allows all users to access their phones using the software. This is because mapping infrared dots on an object doesn't discriminate; it is literally a map of lines, shapes, and depth, which means that regardless of aging skin, the color of your skin, or the uniqueness of your appearance, the sensors should be able to map, and therefore respond to, whatever is put in front of them. The information released about Face ID also states that: "Accessibility is an integral part of Apple products. Users with physical limitations can select "Accessibility Options" during enrollment. This setting doesn't require the full range of head motion to capture different angles and is still secure to use but requires more consistency in how you look at your iPhone or iPad Pro.

Face ID also has an accessibility feature to support individuals who are blind or have low vision. If you don’t want Face ID to require that you look at your device with your eyes open, you can open Settings > General > Accessibility, and disable Require Attention for Face ID. This is automatically disabled if you enable VoiceOver during the initial set up.” This inclusion of accessibility in design thinking further strengthens the function's interactivity.

Rules three and four are "Offer informative feedback" and "Design dialogs to yield closure". With Face ID, the system tells you to move closer or hold your phone at a different angle, through direct verbal communication or through simple signs such as the 'shaking' of the padlock icon, showing the phone isn't receiving the information it needs to open. The 'dialog' of the function is clear: the padlock icon appears on the lock screen when the screen is woken by a physical movement or touch; the padlock opens when the user's face is identified by the sensor; then the screen tells you to 'swipe up' because your phone is unlocked and you can access your apps. It is a simple set of processes, but one consistent with good design.

Rule five is an obvious one: "Prevent errors", something Apple in particular has worked hard to do, and many other manufacturers are attempting the same. The rule states: "As much as possible, design the interface so that users cannot make serious errors; for example, gray out menu items that are not appropriate and do not allow alphabetic characters in numeric entry fields." Facial recognition services should not respond to non-human entities, nor should they respond in any way to a human who is not the user themself. Face ID prevents this by ensuring that an eye or iris is detected in the mapping of a face before the phone will open. The final three rules are "permit easy reversal of actions," "keep users in control" and "reduce short-term memory load." All three are met easily: you can lock your phone using a button, only the user can (or should) be able to open the phone using their face, and no knowledge of the function is needed beyond the general awareness that the phone can be opened with one's face. This design feature presents as a well-designed, simple-to-navigate function, and so is a strong addition to any smartphone from a pure usability standpoint.
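The "prevent errors" rule described above, combined with the Require Attention setting from Apple's accessibility notes, amounts to a simple gating condition on top of face matching. The function and parameter names below are hypothetical, used only to make the logic explicit:

```python
def should_unlock(face_matches, eyes_open, require_attention=True):
    """Even a matching face is rejected unless open eyes are detected,
    so a photograph or a sleeping owner's face should not unlock the
    device. require_attention=False models the accessibility setting
    that disables the attention check. Illustrative logic only."""
    if require_attention and not eyes_open:
        return False
    return face_matches
```

The design choice is visible in the signature: attention is checked before the match result is even consulted, so the stricter condition always wins unless the user has explicitly opted out.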

Is Facial Recognition Safe? 

The burning question around facial recognition is not whether it is an interesting feature, or whether it works correctly. Facial recognition can be argued to be a much-needed form of protection for one's device, far stronger than a passcode that can be observed by others or figured out through other methods. The phone is, in essence, protected by the fact that your face belongs to you alone. However, the clear questions are: what happens to the data your phone collects? How vulnerable is the system of facial replication? Most importantly, can anyone access the digital map created of a user's face?

Companies have thought long and hard about how to design facial recognition on personal devices and how to ensure the information gathered doesn’t make a user susceptible to malicious intent. Apple has released a comprehensive guide to how Face ID specifically protects your data. One key passage which sticks out is the following: 

“Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else. Only in the case that you wish to provide Face ID diagnostic data to AppleCare for support will this information be transferred from your device. And even in this case, data isn’t automatically sent to Apple; you can first review and approve the diagnostic data before it’s sent.”

What is key here is the idea of consent. The user can review and approve any data sent to Apple; beyond this, the data does not leave the device, nor is it uploaded to any cloud system. This is a surprising and ultimately impressive design decision by Apple. I went into my investigation assuming that Apple collected data externally in order to help its software run better; it does not. Instead, the software ensures that Face ID's ability to learn from data is contained within the confines of the device, with no interaction with any other system in the family of Apple products or mechanisms. The company maintains a separation between operations instead of actively pooling the data we give our devices into a centralized control system (such as the cloud).

The same rules apply for non-Apple applications, with the privacy report stating that: “Within supported apps, you can enable Face ID for authentication. Apps are only notified as to whether the authentication is successful. Apps can’t access Face ID data associated with the enrolled face.”
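The boundary described in that passage, where apps learn only whether authentication succeeded and can never touch the underlying face data, is essentially an encapsulation pattern. The sketch below uses hypothetical names (this is not Apple's API) to show how an interface can expose a boolean while keeping the template private:

```python
class BiometricGate:
    """Keeps the enrolled face template private; calling apps receive
    only a boolean result, mirroring the claim that apps are notified
    only of success or failure. Names and matcher are illustrative."""

    def __init__(self, template):
        self._template = list(template)  # never exposed to calling apps

    def authenticate(self, scan, threshold=0.99):
        # Toy similarity check on pre-normalized vectors.
        similarity = sum(a * b for a, b in zip(self._template, scan))
        return similarity >= threshold
```

An app calling `gate.authenticate(scan)` learns only True or False; nothing in the interface returns or leaks the stored template, which is the software analogue of Face ID data staying on the device.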

Some writers and technology experts have questioned how far Apple can go to protect users, with author Jake Laperruque writing an opinion piece for Wired magazine arguing that Face ID could become a weapon for mass surveillance. Laperruque says that: “Apple doesn’t currently have access to the faceprint data that it stores on iPhones. But if the government attempted to force Apple to change its operating system at the government’s behest—a tactic the FBI tried once already in the case of the locked phone of San Bernardino killer Syed Rizwan Farook—it could gain that access. And that could theoretically make Apple an irresistible target for a new type of mass surveillance order.” The author goes on to say that: “To many these mass scans are unconstitutional and unlawful, but that has not stopped the government from pursuing them. Nor have those concerns prevented the secretive FISA Court from approving the government’s requests, all too often with the public totally unaware that mass scans continue to sift through millions of Americans’ private communications.”

Despite his concerns, Laperruque notes that Apple has been a fierce protector of privacy rights, and that the problems could arise from governments using the data, or gaining access to it, through coercion. He argues this should be the focus of any public hesitation about facial recognition: “The public should demand that Congress rein in the government’s ever-growing affinity for mass scan surveillance. Limiting or outlawing the controversial Upstream program when the authority it’s based on expires this December would be an excellent start, but facial recognition scans may soon be as big a component of mass surveillance, and the public need to be ready.”

So, in order to be trusted, companies such as Apple have made sure to promote their belief in user security and to keep fighting for the interests of consumers rather than those who wish to exploit the data. Google is another company that has openly committed to advancing its recognition software so that it cannot be exploited. One trade-off the firm made was for speed, when it decided to let its software respond to faces with closed eyes. This could, in theory, allow a third party to gain access to a phone by holding it up to the user while they are sleeping. The backlash from customers was swift, however, and with its most recent smartphone, the Pixel 4, Google worked on adapting the software to protect against such instances. The Guardian newspaper covered this shift and reported that: “Google has announced an update that will offer a more secure option. ‘We’ve been working on an option for users to require their eyes to be open to unlock the phone, which will be delivered in a software update in the coming months,’ it told technology website The Verge. ‘In the meantime, if any Pixel 4 users are concerned that someone may take their phone and try to unlock it while their eyes are closed, they can activate a security feature that requires a pin, pattern or password for the next unlock.’ Google’s initial decision was based on a tradeoff between speed and security, with the company focusing more on speed than Apple had when it launched its competing system in 2017 alongside the iPhone X.”

This is an encouraging turn of events, with Google conceding that privacy is more important than operational speed or a comparative advantage over other smartphones.


Facial recognition can be an intimidating function for anyone to use. We place the most identifiable part of ourselves in the hands of a company, without much say in how they use that data, only whether they use it. While multiple experiments have exposed chinks in the software's armor, this is in no way enough to diminish the extraordinarily specific readings it provides the phone. Certainly, it is easy to see why using your face is a better way to ensure your phone works only in your own hands, compared to numerical entry or pattern recognition. Our face is not something that can be easily replicated, nor is it something one can 'figure out' like a date of birth or a preferred set of digits. As a design concept it is well thought out and well executed, and it is a very natural way to open a phone, since you are usually facing your phone when you try to open it. Apple, Google and other phone providers have acted well to make sure that the data isn't shared or stored in the cloud; it exists only on the physical device, and any updates or modifications to the data stay in the same ecosystem. Only information about how well the devices respond to faces is shared with companies, so they can evaluate the efficacy of the feature. Privacy is maintained because the data is designed to exist in the single module of a phone rather than in the vast online systems which connect our phones to each other and to the digital world. The feature works in tandem with the phone's operations but does not depend on any operation that takes place outside the hardware.
It is natural to fear the loss of such information; however, these companies have taken it upon themselves to write clear rubrics to ensure that users feel safe. This stands in contrast to social media companies, who regularly hide information regarding security from the public in order to freely collect masses of data which they can use to further exploit users' behavior and online lives.

Additionally, another win for individual privacy came earlier this year. While at a federal and international level there are no mandated laws protecting civilians from governmental or public institutions insisting on using one's information to open a phone, in California there was a ruling to the contrary. Forbes reported that: “A California judge has ruled that American cops can’t force people to unlock a mobile phone with their face or finger. The ruling goes further to protect people’s private lives from government searches than any before and is being hailed as a potential landmark decision.”

The article goes on to state that: “But in a more significant part of the ruling, Judge Westmore declared that the government did not have the right, even with a warrant, to force suspects to incriminate themselves by unlocking their devices with their biological features. Previously, courts had decided biometric features, unlike passcodes, were not “testimonial.” That was because a suspect would have to willingly and verbally give up a passcode, which is not the case with biometrics. A password was therefore deemed testimony, but body parts were not, and so not granted Fifth Amendment protections against self-incrimination.” This is a powerful step forward for individual rights of usage. 

What we can gather from the world slowly adapting to these new phone features is that the rights of users are of paramount importance to tech companies: without the trust of consumers, their business plans would be defunct and their products widely susceptible to skepticism and criticism, two huge barriers to sales. It is therefore in the interest of these firms to ensure privacy and security when handling data, and to make sure the general public is aware of its rights. Furthermore, legal institutions have begun to link these features to the right to privacy and constitutional law, extending the influence of technology into national legal systems.

The burden is, and should always be, placed on the manufacturers, whose influence on the world only seems to grow more and more powerful. Their products have become so integral to modern life that many can forget, and do forget, the implications some functions have on their individual rights and their right to privacy. We must ensure that people are educated correctly about such devices and the mechanisms at play; however, the responsibility to protect users lies with the tech companies that utilize facial recognition in these products. Our legal system must hold them to account while also ensuring that no governmental or corporate actor can gain access to our data through coercion or theft, providing consumers with an extra level of protection (at least in liberal democracies such as the U.S.). Yet, as the California court case shows, the company in question must fight for the rights of its users and ensure that its devices do not leave anyone exposed to negative forces. Facial recognition as a way of unlocking a phone seems like an innocent feature, and an impressive one as well. In the wrong hands, however, it could be a dangerous weapon. All liberal actors must work together to build a framework of information and protection that allows this feature to continue being used while putting the needs and rights of the user first.


Brewster, Thomas. “Feds Can’t Force You To Unlock Your IPhone With Finger Or Face, Judge Rules.” Forbes, Forbes Magazine, 14 Jan. 2019, www.forbes.com/sites/thomasbrewster/2019/01/14/feds-cant-force-you-to-unlock-your-iphone-with-finger-or-face-judge-rules/#3d2666d342b7.

Laperruque, Jake. “Apple’s FaceID Could Be a Powerful Tool for Mass Spying.” Wired, Conde Nast, 14 Mar. 2018, www.wired.com/story/apples-faceid-could-be-a-powerful-tool-for-mass-spying/?mbid=social_twitter.

Apple. “About Face ID Advanced Technology.” Apple Support, 29 Oct. 2019, support.apple.com/en-gb/HT208108.

Shneiderman, Ben. “The Eight Golden Rules of Interface Design.” Ben Shneiderman, www.cs.umd.edu/users/ben/goldenrules.html.

Pieter Vermaas, Peter Kroes, Ibo van de Poel, Maarten Franssen, and Wybo Houkes. A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems. San Rafael, CA: Morgan & Claypool Publishers, 2011.

Ron White, “How the World Wide Web Works.” From: How Computers Work. 9th ed. Que Publishing, 2007.

Richard N. Langlois, “Modularity in Technology and Organization.” Journal of Economic Behavior & Organization 49, no. 1 (September 2002): 19-37.