Google Glass is not an “invention” that came from nowhere. By looking at its modular structure, the last several steps of Google Glass’s “evolution” can be depicted. After understanding the more detailed process of design evolution at the levels of symbol selection and syntax use, the logical system of Google Glass can be discussed. Further, this finding can be brought into a broader context to discuss the semantics of Google Glass. Two approaches are utilized: first, the framework of affordances is used to understand the human-computer interface; second, the epistemology of software studies is borrowed to discuss Google Glass’s human-computer-culture interface.
Google Glass has been available to ordinary consumers since April 15, 2014. It is a wearable computer with an optical head-mounted display (OHMD) developed by Google (Miller, 2013). Some people might think of Google Glass as a radical technology. However, it is not an “invention” that came from nowhere. To explain the existence of Google Glass, evolution is a powerful concept. The term evolution does not necessarily mean that Darwinism should be applied to the field of technology. However, similar to the explanation of biological evolution, it is crucial first to recognize the “family relationship” between Google Glass and many other technologies by discovering their common assemblies. Then we obtain basic units with which to discuss its evolution mechanisms. But unlike genetic evolution, a natural phenomenon usually treated as a value-free process, design evolution is intertwined with human will, the value of which is usually interpreted and evaluated within a certain context.
Combination might be a key to figuring out the realistic mechanisms of the invention and evolution of technology (Arthur, 2009). The evolution of Google Glass can also be viewed as such a process of structural deepening. In this article, I will examine the last few steps of the evolution that made it become Google Glass and try to understand the meanings behind that evolution.
To unite the aforementioned words (assemblies, mechanisms, combination, etc.) into one conceptual system, the General Definition of Information (GDI) can be adopted as an operational standard. According to GDI, information is an entity made of data with certain rules; that is, information equals data, or symbols, plus syntax, and it should comply with the semantics of the chosen system (Floridi, 2010).
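To make this definition concrete, a small sketch can help. The compass-heading “system” and all names below are my own illustration, not part of Floridi’s formalism: a datum counts as information only when it complies with the syntax of a chosen system.

```python
# A minimal, hypothetical sketch of the General Definition of Information (GDI):
# information = data + syntax, interpreted under the semantics of a chosen system.

def is_well_formed(data, syntax):
    """Data counts as information only if it complies with the given syntax."""
    return syntax(data)

# Example "system": compass headings must be numbers in the range [0, 360).
heading_syntax = lambda d: isinstance(d, (int, float)) and 0 <= d < 360

print(is_well_formed(90, heading_syntax))       # True: a well-formed datum
print(is_well_formed("north", heading_syntax))  # False: raw data, not information here
```

The same datum could be information in one system and mere noise in another; the syntax function stands in for the rules of whatever system is chosen.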
I think there are at least two advantages of the General Definition of Information that make it suitable for studying the material practice of design. First, it does not strip information and symbols from their material carrier. In general terms, the following definition of symbol has been proposed: a symbol is an energy evoking and directing agent (Campbell, 2002). In this sense, Google Glass can be analyzed as symbols composed by syntax at different levels. Second, it can help us clarify the narrative. Google Glass can be de-blackboxed and studied at three different levels:
- The basic symbols that make up the modules: the selected components and the source of variation in the evolution.
- The syntax that builds the new design structure to achieve the evolution: the selection mechanisms and the operators that change the structure (Baldwin & Clark, 2000).
- The semantic aspect of how we can evaluate and interpret Google Glass: the selection criteria, the affordances of this technology, and other social implications.
Lev Manovich points out that a new media object has the same modular structure throughout (Manovich, 2001). The concept of modularity contains two basic principles of technology, combination and recursiveness, that Arthur brings up in The Nature of Technology (Arthur, 2009). He also notes that a technology and its assemblies should all supply a functionality and be executable (Arthur, 2009).
In Google Glass, elements are assembled into larger-scale objects but continue to maintain their separate identities (Manovich, 2001). We can rephrase this: the symbols at different levels are packaged as objects and manipulated by new syntax.
2. The stage of symbol selection
The design does not come entirely from the minds of designers; it is also largely confined by the design parameters available to choose among. What is inside the design structure of Google Glass? Baldwin and Clark built an effective framework to answer this question from different aspects. There are three categories we can use to fully address the design information: architecture, interfaces, and protocols and standards (Baldwin & Clark, 2000). Now let us tear down Google Glass to see what symbols, or modules, are utilized.
2.1 Architecture
Architecture specifies what modules will be part of the system and what their roles will be (Baldwin & Clark, 2000). Google Glass is not a single-tiered system formed directly from evenly distributed small units. Looking closer at Google Glass, we find that it is made up of many assemblies that are relatively independent. On the first level of the hardware teardown, we find these basic assemblies: the main logic board, display assembly, battery, speaker, touchpad, etc. These assemblies are themselves technologies, whose functions are distinguished from those of the adjacent assemblies.
We can easily find that most of these assemblies were not created just for Google Glass. Almost all of them have existed widely around us for a long time. The microphone is produced by Wolfson and has long been used in smartphones; the touchpad is produced by Synaptics and is similar to the touchpads of laptop computers; on the main logic board, the WiFi/Bluetooth modules are ordinary parts supplied by Universal Scientific Industrial Corp. If we go further into the core chips, there is also nothing novel: for example, the ROM is provided by SanDisk and the RAM by Elpida Memory, as in many other digital devices.
However, the display assembly is one exception that looks novel to me. I continued to open up this part. This assembly is itself combined from more subassemblies, creating a recursive structure from the macroscopic to the microscopic level. I found the gyroscope sensors, gravity sensors, and accelerometers supplied by InvenSense Inc., and the light sensor supplied by LITE-ON IT. They are all mature parts that have existed in the market for a long time.
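The recursive structure described above can be sketched as a module tree. The grouping below is a simplification of the teardown in the text, not an official bill of materials; walking the tree recursively surfaces the lowest-level assemblies, mirroring the macroscopic-to-microscopic de-blackboxing.

```python
# A hedged sketch of Google Glass's recursive modular structure.
# The part list is simplified from the teardown in the text.

glass = {
    "Google Glass": {
        "main logic board": {"ROM": {}, "RAM": {}, "WiFi/Bluetooth module": {}},
        "display assembly": {
            "OHMD": {},
            "gyroscope sensor": {},
            "accelerometer": {},
            "light sensor": {},
        },
        "battery": {},
        "touchpad": {},
    }
}

def leaves(tree):
    """Walk the module tree recursively, yielding the lowest-level assemblies."""
    for name, subtree in tree.items():
        if subtree:
            yield from leaves(subtree)
        else:
            yield name

print(sorted(leaves(glass)))
```

Each inner node is itself a technology, so the same walk applies at any depth: this is the recursiveness Arthur describes.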
The most unique part of the display assembly is the optical head-mounted display (OHMD), which uses a PBS, a partially reflecting beam splitter. It allows the information displayed on LEDs to be reflected onto a partially reflective mirror. Through this mirror, users can see the real scene and the computer-generated information at the same time. A very similar technology, the head-up display (HUD), has been developed in the military field for a long time; similar optical devices can even be traced back to before World War II. Now this technology is becoming common in aircraft and several business jets, presenting to the pilot a picture that overlays the outside world (Norris, Thomas, Wagner, & Forbes Smith, 2005). The use of an OHMD most similar to Google Glass is the iOptik, developed by the US Department of Defense and Innovega for military use (Anthony, 2012). These two organizations cooperated to successfully shrink the OHMD to a very small size.
2.2 Interfaces
Interfaces are defined as the information generated between different modules. They are largely determined by the architecture described above. By looking between these selected units, detailed descriptions of how the different modules interact can be found (Baldwin & Clark, 2000).
In contrast to attempts to define the whole body of information provided by a device, the practice of studying the interfaces among various modules gives us a more specific picture of “remediation”. The interfaces reflect the activities of translating, refashioning, and reforming other information, at the levels of both content and form (Bolter & Grusin, 1999). We can find that in Google Glass the functions and arrangements of the interfaces share many common features with the smartphone. Generally speaking, the designers of Google Glass did not introduce substantial variation into the old information exchange pattern. In other words, the various forms of media in Google Glass had already been hybridized by the smartphone.
The main differences are the interfaces inside the display assembly, as described above. By combining and rearranging this module, the designers realized their intention of providing a new experience of human-computer interaction, which influences the affordances discussed later.
2.3 Integration protocols and testing standards
The examination of the interfaces naturally leads to the next questions: How is the information between those interfaces transferred? What are the protocols and standards that allow designers to assemble the system?
Standards are utilized at each interface of every module. To name a few: Advanced Audio Coding (AAC), established by ISO and IEC, governs the audio transmission; the Transmission Control Protocol/Internet Protocol (TCP/IP), first implemented on the Advanced Research Projects Agency Network (ARPANET), links Google Glass into the backbone of the Internet; and the 802.11 standard, proposed by the Institute of Electrical and Electronics Engineers (IEEE), makes the information exchange between different devices wireless.
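The way these standards cooperate can be pictured as encapsulation: each layer wraps the payload produced by the layer above it. The sketch below is purely illustrative; the header labels stand in for the real AAC/TCP/IP/802.11 frame formats, which are far more intricate.

```python
# An illustrative sketch of protocol layering (encapsulation): each standard
# wraps the payload handed down from the layer above. Header contents are
# placeholders, not real TCP/IP or 802.11 frame structures.

LAYERS = ["TCP", "IP", "802.11"]

def encapsulate(payload, layers=LAYERS):
    for layer in layers:
        payload = {"header": layer, "payload": payload}
    return payload

frame = encapsulate("AAC-encoded audio")
print(frame["header"])             # outermost layer: 802.11
print(frame["payload"]["header"])  # then IP, then TCP, then the audio data
```

Each layer only needs to understand its own header and treat the rest as opaque cargo, which is precisely what lets independently governed standards (ISO/IEC, IEEE, the TCP/IP community) compose into one working device.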
All these standards and protocols embody rich histories. Their development has been molded by countless technological, cultural, and social forces, which create the infosphere of all the actors related to Google Glass. We can find that almost all of these standards have been applied in Google’s smartphones, which indicates that Google Glass is placed in a design space similar to that of the smartphone. The study of standards and protocols unfolds a view of how a technology can work as a node of a network linking different social systems, which will be discussed later.
3. The stage of syntax use
The structural fact of modularity separates technologies into functional groupings and also simplifies the process of design. But how is modularity achieved? This is a question of syntax, which manipulates the symbols abstracted from the subsystems. The structural similarity of technologies implies that Google Glass can be viewed as having a “family relationship” with the smartphone and other technologies. Google Glass, as a hybrid descended from other technologies, is not made by loosely connecting them; the precedent technologies are hybridized in a deeper way. In hybrid media, what come together are the languages of previously distinct media (Manovich, 2013). New syntax appears to exchange properties and create new structures.
Baldwin and Clark categorize the means of creating new modular structures into six modular operators. In complex adaptive systems, operators are actions that change existing structures into new structures in well-defined ways (Baldwin & Clark, 2000). We are already aware of the close connection between the smartphone and Google Glass. Although there are influences from other technologies, for analytical convenience the task structure of the smartphone will be chosen as the old structure on which the operators act.
The following modular operators (Baldwin & Clark, 2000) are implemented to achieve the new task structure of Google Glass:
- Augmenting—adding a new module to the system:
As Arthur claims, a technology is usually organized around a central principle or essential idea that allows it to work; in practice this means that a technology consists of a main assembly: an overall backbone of the device that executes its base principle (Arthur, 2009). One of the most apparent new modules added to the old structure is the display assembly. In this sense, augmenting with the display assembly creates the key feature of Google Glass.
- Splitting a design (and its tasks) into modules:
The display assembly is expected to run complicated functions. To achieve that, it is further split into subsystems with independent goals, such as sensing the ambient environment or overlaying computer-generated information on the real scene. To achieve the latter goal, the optics can be further divided into an illumination region and a viewing region (Wiki, 2014).
- Substituting one module design for another:
The display assembly can also be viewed as a substitution for the smartphone display. We will discuss how this substitution influences the technology’s affordances.
- Excluding a module from the system:
The smartphone uses a keyboard or touchscreen for input. However, the frame of Google Glass does not have enough space to contain them, and their presence would upset the balance of the frame. The keyboard and touchscreen therefore have to be excluded from the system. (A touchpad is added to the system to compensate for part of the loss.)
- Porting a module to another system:
The designers wanted Google Glass to provide functions based on users’ locations. However, for certain reasons, they did not add a GPS module to Google Glass, but ported it to the linked smartphone. This porting creates a special relationship between Google Glass and the cellphone, which will be elaborated later.
- Inverting to create new design rules:
This operator describes the action of taking previously hidden information and “moving it up” the design hierarchy. However, in the case of Google Glass, I did not find any once-hidden module made visible.
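The operators applied above can be summarized in a short sketch. Representing a design structure as a set of module names is my own simplification of Baldwin and Clark’s framework; the module names follow the text.

```python
# A hedged sketch of Baldwin & Clark's modular operators, applied to a
# smartphone-like design structure to derive Google Glass, as in the text.

phone = {"logic board", "battery", "speaker", "touchscreen", "keyboard",
         "display", "GPS"}

def augment(design, module):        # Augmenting: add a new module
    return design | {module}

def exclude(design, module):        # Excluding: remove a module
    return design - {module}

def substitute(design, old, new):   # Substituting: swap one module for another
    return (design - {old}) | {new}

def port(design, module, host):     # Porting: move a module to another system
    host.add(module)
    return design - {module}

linked_phone = set()
glass = substitute(phone, "display", "display assembly (OHMD)")
glass = exclude(exclude(glass, "touchscreen"), "keyboard")
glass = augment(glass, "touchpad")
glass = port(glass, "GPS", linked_phone)

print(sorted(glass))
print(linked_phone)   # location sensing now lives on the paired smartphone
```

Splitting and inverting are omitted: splitting would act inside the display assembly (a nested structure this flat set does not model), and, as noted above, no inversion was found in Google Glass.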
From a humanities perspective, the design of digital objects is a cultural practice (Murray, 2011). But the cultural practice cannot be fully understood without first treating design as a material practice. The use of modular operators is a process of realizing a task structure, a list of tasks that need to be done to make a new technology. The task structure is isomorphic with the design structure, which comprises a list of design choices (Baldwin & Clark, 2000). After seizing these design choices, we can continue to analyze the selection criteria and the goals of the designers reflected by them.
4. The stage of semantics creating
Now we have answered the question of how the structural changes are made. However, examining the combinations of various physical units is just a starting point for studying design evolution. The next questions can be: how is Google Glass designed to become meaningful in human society? Who gave it the syntax of its evolution?
Very often in the world of technology, changes at one level must be accommodated by changes at a different level (Arthur, 2009). In this section, we can catch a glimpse of how the cultural practice of design and the material practice of design correspond. By operating on preexisting design parameters, the functions of precedent media are remediated. I will discuss how Google Glass adds new semantics to our environment from two aspects.
4.1. Affordances: about the human-computer interface
By making choices of design parameters and operators, Google Glass is placed in a design space close to the smartphone. What is the positional relation of Google Glass and the smartphone in the design space? Does Google Glass exploit any property beyond the old technologies? This question can be studied by introducing the concept of the four affordances. Affordances can be simplified as “action possibilities”; Donald Norman defined them as the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used (Norman, 2002).
4.1.1. Procedural Affordance
The procedural affordance measures a technology’s ability to represent and execute conditional behaviors (Murray, 2011). It is hard to compare the procedural affordances of Google Glass and the smartphone across their whole range. However, by de-blackboxing them according to the modular structure, multiple systems can be examined separately in terms of whether they contribute to exploiting the affordance. The addition of the touchpad obviously contributes to the procedural affordance, because in conditions where the voice command device cannot function effectively, the touchpad can be used to give clearer executable instructions.
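The conditional behavior just described can be sketched as follows. The command names and gesture vocabulary are hypothetical, not Google Glass’s actual API; the point is only the fallback logic that the touchpad adds.

```python
# A toy sketch of the conditional behavior discussed above: when a voice
# command cannot be recognized (e.g. in a noisy environment), the touchpad
# supplies an unambiguous executable instruction. All names are hypothetical.

def interpret(voice_input, touchpad_gesture=None):
    known_commands = {"ok glass take a picture": "CAMERA",
                      "ok glass get directions": "NAVIGATE"}
    command = known_commands.get(voice_input)
    if command is None and touchpad_gesture is not None:
        # fall back to the touchpad's clearer input
        command = {"tap": "SELECT", "swipe": "NEXT_CARD"}.get(touchpad_gesture)
    return command or "NO_OP"

print(interpret("ok glass take a picture"))    # voice path succeeds
print(interpret("<unintelligible>", "swipe"))  # touchpad fallback engages
```

Adding the touchpad module widens the set of conditions under which the device can still execute a definite behavior, which is exactly the gain in procedural affordance claimed above.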
4.1.2. Encyclopedic Affordance
The encyclopedic affordance reflects a technology’s ability to contain and transmit information. As mentioned in the section on modular operators, if we treat the smartphone as the old design structure, the exclusion of the touchscreen gives Google Glass an inferior encyclopedic affordance, since currently the display on the Glass cannot present extensive media formats and genres. It is an apparent shortcoming of Google Glass. In addition, its limited storage capacity also impairs the encyclopedic affordance. However, Google Glass is usually used with the aid of a smartphone; if we treat them as a whole system, the introduction of Google Glass actually enhances the procedural and encyclopedic affordances of the whole system. In addition, “in the wild”, a technology is rarely fixed. It constantly changes its architecture, adapts, and reconfigures as purposes change and improvements occur (Arthur, 2009). As with many other technologies, enhancing the display and expanding storage are the directions of Google Glass’s continuing design evolution.
4.1.3. Participatory Affordance
Digital media are participatory in allowing an interactor to manipulate, contribute to, and have an effect upon digital content and computer processing (Murray, 2011). The concept of the participatory affordance is closely related to two values of interface design: being intuitive and being transparent.
Most ordinary users do not ask for the things they want before seeing them, just as babies cannot describe their needs but can immediately point to something they want when the object comes into view. Technology is supposed to meet this kind of unconscious expectation.
Google Glass does not provide wordy guidance to teach users its various functions. Look at the menu that Google Glass displays: all the options are presented with simple icons. Google Glass presents users with functions that they seem to naturally understand how to use. Murray says the designer must script both sides so that the actions of humans and machines are meaningful to one another (Murray, 2011). However, this script is not provided only by designers. Since intuitions about the world are often based on repeated experience (Murray, 2011), people had learned to understand many symbols long before the designers of Google Glass used them.
On the menu, some icons imitate the appearance of old media we have rich experience with, such as the camera and the clock; they lead users to expect to take photos and check the time with them. Some intuitions are built more recently, such as users’ interpretation of the magnifier icon: it does not promise higher magnification but means “looking up”. Similarly, although the symbol “64°” might mean a measurement of angles or other things in a different context, users of Google Glass usually connect it only to the weather. These connections happen so naturally because users are already familiar with these metaphors from other digital devices, especially smartphones. We are already inhabitants of the infosphere that Google Glass has just entered. Although Google Glass is unconventional in a certain sense, it still inherits established conventions to a large extent to build effective human-computer interaction.
The aforementioned examples also point to another related design value that Google Glass achieves: transparency. One example can illustrate how the addition of the display assembly promotes the participatory affordance.
The aforementioned head-mounted display (OHMD) allows users to see information provided by the computer without looking away from their usual viewpoints. The compass application on a smartphone retains the image of a compass, but Google Glass erases that image and preserves only the concept of a compass. When opening Google Compass on the Glass, users see two short red lines in the middle of the screen and notice characters passing between them as they turn their heads. This minimalist design is informative enough for users. As explained before, most of us already possess knowledge of a precedent medium, the compass; we can naturally imagine that we stand at the origin of a rectangular coordinate system, as presented on an ordinary compass. The red lines straightforwardly tell us the relationship between the direction we face and the coordinate system built inside our cognitive system.
Similarly, Google Glass takes over the functions of the clock, camera, thermometer, and many other media we use for information exchange, but erases their physical bodies. Highly related to the augmenting of the display assembly, another advantage of Google Glass is that the display reduces the access time of information. When the time between cognition and action is very small, we hardly recognize the media between the environment and us; we feel the interface is an extension of the self.
4.1.4. Spatial Affordance
I think the spatial affordance is closely related to the participatory affordance, because the sense of space is a kind of mental model based on the relationship between users’ cognitive systems and what the media present. This mental model helps us make sense of the world. The computer interface acts as a code which provides its own model of the world, its own logical system, or ideology (Manovich, 2001). What is the logical system of space that Google Glass provides?
Gibson’s affordances capture a fundamental aspect of human perception and cognition: much of the information needed for perception and action is in the environment as invariants that can be picked up directly (Zhang & Patel, 2006). The direct collection and processing of information in our environment provides the human sense of “real space”. Google Glass’s semitransparent display makes it possible for users to gain the feeling of picking up computer-processed data from the real scene. To return to the example of Google Compass, the letter “N” will always appear in front of the wearer when she faces north, and the “N” will gradually move to the right when she turns her head to the west. For users, it seems that the “N” is an invariant existing beside us. Another example is the turn-by-turn directions provided by Google Navigation: if users follow them to the same destination every time, the same visual direction will always appear at the same corner. These spatial metaphors create interaction patterns consistent with the real environment, which leads users into an augmented reality. In Google Glass’s spatial logic, computer-generated information is mapped onto the real world.
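The compass behavior described above reduces to a small piece of arithmetic. The field of view and screen width below are assumed values for illustration, not Google Glass’s actual specifications; what matters is that a world-fixed label’s screen position is the wrapped difference between its bearing and the wearer’s heading.

```python
# A back-of-the-envelope sketch of the spatial mapping described above:
# a compass label such as "N" stays fixed in world space, so its horizontal
# position on the display depends on the wearer's current heading.

def screen_offset(label_bearing, heading, fov_degrees=30.0, screen_width=640):
    """Return the label's x-offset in pixels from screen center, or None
    if the label lies outside the (assumed) field of view."""
    delta = (label_bearing - heading + 180) % 360 - 180  # wrap to [-180, 180)
    if abs(delta) > fov_degrees / 2:
        return None                                       # off-screen
    return delta / fov_degrees * screen_width

print(screen_offset(0, 0))    # facing north: "N" sits at screen center
print(screen_offset(0, 350))  # turning west: "N" drifts to the right
```

Because the offset is recomputed from the head orientation every frame, “N” behaves like a Gibsonian invariant anchored in the environment rather than an image painted on the screen.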
4.2 Software epistemology of Google Glass: about the human-computer-culture interface
Manovich uses the term “cultural interfaces” to describe the human-computer-culture interface: the ways in which computers present cultural data and allow us to interact with them (Manovich, 2001).
He also provides five principles of new media as general tendencies of a culture undergoing computerization (Manovich, 2001). The first two principles are on the material level: numerical coding and modular organization. They can be studied by discussing the symbols and syntax of the design structure. The third and fourth principles are automation and variability, which can be achieved on the basis of the first two. I think automation and variability are highly related to the design affordances discussed above. The last, fifth principle, cultural transcoding, aims to describe the most substantial consequence of media’s computerization (Manovich, 2001). Manovich thinks that, in general, new media can be thought of as consisting of two distinct layers, the “cultural layer” and the “computer layer”, and that these two layers are being composited together. He encourages a “conceptual transfer” from the computer world to culture. I think the practice of de-blackboxing Google Glass provides such a view, examining cultural categories and concepts through the computer’s ontology.
We started by looking at the different modules and their interrelationships, then understood the genres of data created and transferred among them; further, we can begin to see the cultural, social, and economic forces that shape each module.
People gradually agree that “all intellectual work is now ‘software study’” (Manovich, 2001). Software is used to create and access media objects and environments, and it enables the whole global information society. The design evolution of Google Glass not only translates the functions of precedent media but also mediates other cultural forms in a larger scope. The agency of many social institutions is also involved in the design and use of Google Glass.
By opening up Google Glass, we gained a glimpse of the assemblies on the first several layers, together with the protocols and standards utilized to combine them and transfer data. We gain a good view of the Google partners and subsidiaries that make those semi-finished products; we also come to know the organizations and institutions related to the standards and protocols.
I think the design value of transparency can also explain how these social institutions are hidden from us. We no longer need to buy applications from different suppliers and connect them to other hardware ourselves, as when playing video games from a hard drive. Looking at the modular structure of Google Glass, we can notice that the activities of the different suppliers are also packaged and hidden from us in a pattern similar to the modular structure. Likewise, the protocols and standards are hidden behind the various pieces of remediated information.
Being aware of Google Glass’s modular structure offers another insight. With this new vision unfolded, we can easily find that Google Glass accesses an established system similar to that of Google’s smartphones. Google maintains partnerships with many companies that produce fittings or provide services for its smartphones. Google Glass, as an actor, borrows to a great extent the network already linked by Google’s smartphones. In other words, by borrowing the existing physical design parameters, Google Glass inherits not only many features of the smartphone but also the networks in which it is located.
Especially considering that there is no GPS module in Google Glass and it has to rely on the smartphone to access and distribute location-related information, Google Glass and the smartphone are in some sense like conjoined twins. In the network, the smartphone has already built close relationships with third parties, including social media websites, news organizations, educational institutions, etc. Many of them, such as Facebook or the New York Times, have easily transplanted their applications from the smartphone to Google Glass.
Google Glass is locked into the space of a software universe that already exists. From this view, it is more like a node that links other actors and agencies than an independent tool that can only deal with information in fixed ways. But Google Glass, as a newcomer to the network, still brings more complexity to the layer that mediates all areas of contemporary societies. Google Glass might prove its potential to increasingly enhance our “interfacing” to various cultural data; it might also create some surprising impacts in the network, such as influencing educational or legal systems. As an actor in the software universe, it will also carry on the ambitious mission of this network: to carry on the new media revolution, the shift of all of our culture to computer-mediated forms of communication (Manovich, 2001).
Like other technologies, Google Glass is descended from earlier technologies, and the mechanism of “heredity”, the detailed connection that links the present to the past, is worth studying (Arthur, 2009).
I think unfolding a technology through the framework of its modular structure is a good approach to discussing the evolution of that technology. By opening up Google Glass and examining the functional components hidden behind its novel-looking appearance, we obtain a structural fact about Google Glass. The modular structure can serve as a clue for interpreting and evaluating Google Glass. Because of its complexity, it is more practical to assess the change of affordances based on the addition or exclusion of particular modules. Also, the relationships among different modules further reflect the social networks linked by Google Glass in the software universe.
Anthony, S. (2012). US military developing multi-focus augmented reality contact lenses | ExtremeTech. ExtremeTech. Retrieved May 10, 2014, from http://www.extremetech.com/computing/126043-us-military-developing-multi-focus-augmented-reality-contact-lenses
Arthur, W. B. (2009). The nature of technology: What it is and how it evolves. Simon and Schuster.
Baldwin, C. Y., & Clark, K. B. (2000). Design rules: The power of modularity (Vol. 1). MIT Press.
Bolter, J. D., & Grusin, R. (1999). Remediation: Understanding new media. Cambridge, MA: The MIT Press.
Campbell, J. (2002). Flight of the wild gander: The symbol without meaning. California: New World Library. p. 143.
Floridi, L. (2010). Information: A very short introduction. Oxford University Press.
Gouyet, J. (1996). Physics and fractal structures. Paris/New York: Masson Springer.
Mandelbrot, B. B. (1983). The fractal geometry of nature. Macmillan.
Manovich, L. (2001). The language of new media. MIT press.
Manovich, L. (2013). Software takes command (Vol. 5). A&C Black.
Miller, C. C. (2013). “Google searches for style”. The New York Times. Retrieved May 8, 2014.
Murray, J. H. (2011). Inventing the medium: principles of interaction design as a cultural practice. The MIT Press.
Norris, G., Thomas, G., Wagner, M., & Forbes Smith, C. (2005). Boeing 787 Dreamliner: Flying Redefined. Aerospace Technical Publications International.
Womack, M. (2005) Symbols and Meaning: A Concise Introduction. California: AltaMira Press.
Zhang, J., & Patel, V. L. (2006). Distributed cognition, representation, and affordance. Pragmatics & Cognition, 14(2).
Google Glass. (n.d.). Wikipedia. Retrieved May 11, 2014, from http://en.wikipedia.org/wiki/Google_Glass#cite_note-US20130070338-25