
Interpreting the Evolution of Google Glass in the Conceptual Framework of Modularity

Tianyi Cheng

Abstract

Google Glass is not an “invention” that came from nowhere. By looking at its modular structure, the last several steps of Google Glass’s “evolution” can be depicted. After understanding this process of design evolution in more detail, at the levels of symbol selection and syntax use, the logical system of Google Glass can be discussed. This finding can then be brought into a broader context to discuss the semantics of Google Glass. Two approaches are utilized: first, the framework of affordances is used to understand the human-computer interface; second, the epistemology of software studies is borrowed to discuss Google Glass’s human-computer-culture interface.

1. Introduction:

Google Glass has been available to ordinary consumers since April 15, 2014. It is a wearable computer with an optical head-mounted display (OHMD) developed by Google (Miller, 2013). Some people might think of Google Glass as a radical technology. However, it is not an “invention” that came from nowhere. To explain the existence of Google Glass, evolution is a powerful concept. The term evolution does not necessarily mean that Darwinism should be applied to the field of technology. However, similar to explanations of biological evolution, it is crucial first to recognize the “family relationship” between Google Glass and many other technologies by discovering their common assemblies. Then we can obtain basic units with which to discuss its evolutionary mechanisms. Unlike genetic evolution, a natural phenomenon usually treated as a value-free process, design evolution is intertwined with human will, whose value is usually interpreted and evaluated within a certain context.

Combination might be a key to figuring out realistic mechanisms of the invention and evolution of technology (Arthur, 2009). The evolution of Google Glass can also be viewed as such a process of structural deepening. In this article, I will try to examine the last few steps of the evolution that produced Google Glass and to understand the meanings behind that evolution.

To unite the aforementioned terms (assemblies, mechanisms, combination, etc.) into one conceptual system, the General Definition of Information (GDI) can be adopted as an operational standard. According to GDI, information is an entity made of data structured by certain rules; in other words, information is equal to data or symbols plus syntax, and it should comply with the semantics of the chosen system (Floridi, 2010).
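To make GDI's formula concrete, here is a minimal sketch of my own (not from the source): a datum only counts as information when it is well-formed under a chosen rule set (syntax) and interpretable in the chosen system (semantics). The message format and rule names are assumptions for illustration.

```python
# Illustrative sketch: information = data + syntax, complying with a chosen semantics.
SYNTAX = {"TEMP"}                                            # hypothetical allowed message types
SEMANTICS = {"TEMP": "air temperature in degrees Fahrenheit"}  # what a well-formed datum means

def is_information(message: dict) -> bool:
    """A datum counts as information only if it is well-formed and meaningful."""
    well_formed = message.get("type") in SYNTAX and "value" in message
    meaningful = message.get("type") in SEMANTICS
    return well_formed and meaningful

print(is_information({"type": "TEMP", "value": 64}))  # True: data + syntax + semantics
print(is_information({"value": 64}))                  # False: data without syntax
```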

The stack of General Information Theory


I think there are at least two advantages of General Information Theory that make it suitable for studying the material practice of design. First, it does not strip information and symbols from their material carriers. In a general sense, the following definition of a symbol is proposed: a symbol is an energy evoking and directing agent (Campbell, 2002). In this sense, Google Glass can be analyzed as symbols that are composed by syntax at different levels. Second, it can help us clarify the narrative. Google Glass can be de-blackboxed and studied on three different levels:

  • The basic symbols that make up the modules: the selected components and the source of variation in the evolution.
  • The syntax that builds the new design structure to achieve the evolution: the selection mechanisms and the operators that change the structure (Baldwin & Clark, 2000).
  • The semantic aspect of how we can evaluate and interpret Google Glass: the selection criteria, the affordances of this technology, and other social implications.

Lev Manovich points out that a new media object has the same modular structure throughout (Manovich, 2001). The concept of modularity contains two basic principles of technology, combination and recursiveness, that Arthur brings up in The Nature of Technology (Arthur, 2009). He also mentions that a technology and its assemblies should all supply a functionality and be executable (Arthur, 2009).

In Google Glass, elements are assembled into larger-scale objects but continue to maintain their separate identities (Manovich, 2001). We can rephrase this: the symbols on different levels are packaged as objects and manipulated by new syntax.

The modular structure in the language of General Information Theory


2. The stage of symbol selection

The design does not come entirely from the minds of designers; it is also largely confined by the design parameters that can be chosen among. What is inside the design structure of Google Glass? Baldwin and Clark built an effective framework to answer this question from different aspects. There are three categories we can use to fully address design information: architecture, interfaces, and protocols and standards (Baldwin & Clark, 2000). Now let us tear down Google Glass to see which symbols, or modules, are utilized.

Teardown image source: http://www.catwig.com/google-glass-teardown/

2.1 Architecture

Architecture specifies which modules will be part of the system and what their roles will be (Baldwin & Clark, 2000). Google Glass is not a single-tiered system formed directly by evenly distributed small units. By looking closer at Google Glass, we find that it is made up of many assemblies that are relatively independent. On the first level of the hardware teardown, we find these basic assemblies: main logic board, display assembly, battery, speaker, touchpad, etc. These assemblies are themselves technologies, whose functions are distinguished from those of adjacent assemblies.

The general architecture of Google Glass


We can easily see that most of these assemblies were not created just for Google Glass. Almost all of them have been widely present around us for a long time. The microphone is produced by Wolfson, whose parts have long been built into smartphones; the touchpad is produced by Synaptics and is similar to the touchpads in laptop computers; on the main logic board, the WiFi/Bluetooth modules are ordinary parts supplied by Universal Scientific Industrial Corp. If we go further into the core chips, there is also nothing novel: the ROM is provided by SanDisk and the RAM by Elpida Memory, as in many other digital devices.

However, the display assembly is one exception that looks novel to me, so I continued to open up this part. This assembly is itself combined from further subassemblies, creating a recursive structure running from the macroscopic to the microscopic level. I found the gyroscope, gravity sensor, and accelerometer supplied by InvenSense Inc., and the light sensor supplied by LITE-ON. They are all mature components that have existed on the market for a long time.
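A small sketch can make this recursive, module-within-module structure visible. The component names below follow the teardown described above, but the exact grouping and nesting depth are my assumptions, not Google's design files.

```python
# Illustrative module tree: assemblies contain subassemblies, which contain components.
glass = {
    "main logic board": ["CPU", "RAM (Elpida)", "ROM (SanDisk)", "WiFi/Bluetooth module"],
    "display assembly": {
        "sensors": ["gyroscope", "accelerometer", "light sensor"],
        "optics": ["LED illumination", "beam splitter (PBS)", "semitransparent mirror"],
    },
    "battery": [],
    "speaker": [],
    "touchpad": [],
}

def walk(module, depth=0):
    """Print the module hierarchy, showing the recursive structure of the device."""
    if isinstance(module, dict):
        for name, sub in module.items():
            print("  " * depth + name)
            walk(sub, depth + 1)
    else:
        for name in module:
            print("  " * depth + name)

walk(glass)
```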

The structure of the optic in Google Glass


The most unique part of the display assembly is the optical head-mounted display (OHMD), which uses a PBS, a partially reflecting beam splitter. It allows the information displayed on the LEDs to be reflected onto a partially reflective mirror. Through this mirror, users can see the real scene and the computer-generated information at the same time. A very similar technology called the head-up display (HUD) has been developed in the military field for a long time; similar optical devices can even be traced back to before World War II. Today this technology is becoming common in aircraft and several business jets, presenting the pilot with a picture that overlays the outside world (Norris, Thomas, Wagner, & Forbes Smith, 2005). The use of an OHMD most similar to Google Glass is the iOptik, developed by the US Department of Defense and Innovega for military use (Anthony, 2012). These two organizations cooperated to successfully shrink the OHMD to a very small size.

2.2 Interfaces

The interfaces are defined as the information generated between different modules. They are largely determined by the architecture described above. By looking between these selected units, detailed descriptions of how the different modules interact can be found (Baldwin & Clark, 2000).
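As a rough illustration of what "looking between the units" can mean, here is a sketch of my own in which an interface is recorded as the kind of information one module passes to another; the module names follow the teardown above, while the signal descriptions are assumptions.

```python
# Sketch: interfaces as (source module, destination module, information exchanged).
interfaces = [
    ("touchpad",         "main logic board", "swipe/tap events"),
    ("microphone",       "main logic board", "audio stream"),
    ("main logic board", "display assembly", "rendered frames"),
    ("main logic board", "speaker",          "decoded audio"),
]

def neighbors(module: str):
    """List what a given module sends and receives across its interfaces."""
    sends = [(dst, what) for src, dst, what in interfaces if src == module]
    receives = [(src, what) for src, dst, what in interfaces if dst == module]
    return sends, receives

print(neighbors("main logic board"))
```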

A sketch of some interfaces in Google Glass


In contrast to views that try to define the whole of the information provided by a device, the practice of studying the interfaces among its various modules gives us a more specific depiction of “remediation”. The interfaces reflect the activities of translating, refashioning, and reforming other information, at the levels of both content and form (Bolter & Grusin, 1999). We can find that in Google Glass, the functions and arrangements of the interfaces share many features with those of the smartphone. Generally speaking, the designers of Google Glass did not introduce substantial variation to the old pattern of information exchange. In other words, the various forms of media in Google Glass had already been hybridized by the smartphone.

The main differences are the interfaces inside the display assembly, as described above. By combining and rearranging this module, designers realize their intention of providing a new experience of human-computer interaction, which influences the affordances that will be discussed later.

2.3 Integration protocols and testing standards

The examination of the interfaces naturally leads to the next questions: How is the information between those interfaces transferred? What are the protocols and standards that allow designers to assemble the system?

A sketch of standards and protocols related to Google Glass


Standards are utilized at each interface of every module. To name a few: Advanced Audio Coding (AAC), established by ISO and IEC, governs audio transmission; the Transmission Control Protocol/Internet Protocol (TCP/IP), first implemented on the Advanced Research Projects Agency Network (ARPANET), links Google Glass into the backbone of the Internet; and the 802.11 standard, proposed by the Institute of Electrical and Electronics Engineers (IEEE), makes the information exchange between different devices wireless.
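The point can be sketched as a simple mapping from each kind of interface to the standard said to govern it. The mapping below only reflects the examples named in this essay (plus the Bluetooth module mentioned in the teardown); the idea that an exchange without a shared standard simply cannot happen is the thing being illustrated.

```python
# Sketch: each interface is governed by a pre-existing, externally defined standard.
standards = {
    "audio encoding":     "AAC (ISO/IEC)",
    "internet transport": "TCP/IP",
    "wireless link":      "IEEE 802.11 (WiFi)",
    "device pairing":     "Bluetooth",
}

def unsupported(required: list[str]) -> list[str]:
    """Return the interfaces for which no shared standard is available."""
    return [name for name in required if name not in standards]

print(unsupported(["audio encoding", "wireless link", "cellular modem"]))
# ['cellular modem'] -> no listed standard, so that exchange is not part of the design
```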

All these standards and protocols embody rich histories. Their development has been shaped by countless technological, cultural, and social forces, which create the infosphere of all the actors related to Google Glass. We can find that almost all of these standards have already been applied in Google’s smartphones, which implies that Google Glass is placed into a design space similar to that of the smartphone. The study of standards and protocols unfolds a view of how a technology can work as a node of a network linking different social systems, which will be discussed later.

3. The stage of syntax use:

Modularity, as a structural fact, separates technologies into functional groupings and also simplifies the process of design. But how is modularity achieved? This is a question of syntax, which manipulates the symbols abstracted from the subsystems. The structural similarity of technologies implies that Google Glass can be viewed as having a “family relationship” with the smartphone and other technologies. Google Glass, as a hybrid descended from other technologies, is not made by loosely connecting them; the precedent technologies are hybridized in a deeper way. In hybrid media, what come together are the languages of previously distinct media (Manovich, 2013). New syntax appears to exchange properties and create new structures.

Baldwin and Clark categorize the means of creating new modular structures into six modular operators. In complex adaptive systems, operators are actions that change existing structures into new structures in well-defined ways (Baldwin & Clark, 2000). We are already aware of the close connection between the smartphone and Google Glass. Although there are influences from other technologies, for the sake of analytical convenience the task structure of the smartphone will be chosen as the old structure on which the operators act.

The following modular operators (Baldwin & Clark, 2000) are implemented to achieve the new task structure of Google Glass (a small illustrative sketch follows this list):

  • Augmenting—adding a new module to the system:

As Arthur claims, a technology is usually organized around a central principle or essential idea that allows it to work; in practice this means that a technology consists of a main assembly, an overall backbone of the device that executes its base principle (Arthur, 2009). One of the most apparent new modules added to the old structure is the display assembly. In this sense, augmenting the structure with the display assembly creates the key feature of Google Glass.

  • Splitting a design (and its tasks) into modules:

The display assembly is expected to run complicated functions. In order to achieve that, it has to be further split into subsystems with independent goals: sensing the ambience, and overlaying computer-generated information on the real scene. To achieve the latter goal, the optics are further divided into an illumination region and a viewing region (Wiki, 2014).

  • Substituting one module design for another:

The display assembly can also be viewed as a substitute for the smartphone display. We will discuss how this substitution influences the technology’s affordances.

  • Excluding a module from the system:

The smartphone uses a keyboard or touchscreen to achieve input capability. However, the frame of Google Glass does not have enough space to contain them, and their presence would upset the balance of the frame. The keyboard and touchscreen therefore have to be excluded from the system. (A touchpad is added to the system to partly compensate for the loss.)

  • Porting a module to another system:

Designers want Google Glass to provide functions based on the user’s location. However, for certain reasons, they did not add a GPS module to Google Glass, but ported it to the linked smartphone. This porting creates a special relationship between Google Glass and the cellphone, which will be elaborated later.

  • Inverting to create new design rules:

This operator describes the action of taking previously hidden information and “moving it up” the design hierarchy. However, in the case of Google Glass, I did not find any once-hidden modules becoming visible.
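The sketch below is my own illustration of these operators as simple set operations on a stylized "smartphone" module set; the module names and the resulting set are assumptions chosen to mirror the list above, not Baldwin and Clark's formal notation.

```python
# Baldwin & Clark's modular operators, sketched as operations on a set of modules.
def augment(modules, new):          return modules | {new}
def exclude(modules, old):          return modules - {old}
def substitute(modules, old, new):  return (modules - {old}) | {new}
def split(modules, old, parts):     return (modules - {old}) | set(parts)
def port(modules, old, host):       # move a module to another, linked system
    host.add(old)
    return modules - {old}
# (The sixth operator, inversion, is omitted: the essay finds no example of it in Glass.)

smartphone = {"touchscreen", "keyboard", "display", "GPS", "camera", "battery"}
phone_host = set()

glass = augment(smartphone, "display assembly")
glass = substitute(glass, "display", "OHMD optics")
glass = exclude(exclude(glass, "touchscreen"), "keyboard")
glass = augment(glass, "touchpad")                 # partial compensation for the exclusion
glass = port(glass, "GPS", phone_host)             # location sensing stays on the phone
glass = split(glass, "OHMD optics", ["illumination region", "viewing region"])

print(sorted(glass))
print(phone_host)   # {'GPS'}
```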

From a humanities perspective, the design of digital objects is a cultural practice (Murray, 2011). But the cultural practice cannot be fully understood without first treating the design as a material practice. The use of modular operators is a process of realizing a task structure, a list of tasks that need to be done to make a new technology. The task structure is isomorphic with the design structure, which comprises a list of design choices (Baldwin & Clark, 2000). After grasping these design choices, we can continue to analyze the selection criteria and the designers’ goals that they reflect.

4. The stage of semantics creation

We have now answered the question of how the changes to the structure are made. However, discussing the combinations of various physical units is just a starting point for examining the design evolution. The next questions can be: how is Google Glass designed to become meaningful in human society, and who gave it the syntax of its evolution?

Very often in the world of technology, changes at one level must be accommodated by changes at a different level (Arthur, 2009). In this section, we can get a glimpse of how the cultural practice and the material practice of design correspond. By operating on preexisting design parameters, the functions of precedent media are remediated. I will try to discuss how Google Glass adds new semantics to our environment from two aspects.

4.1. Affordances: about the human-computer interface

By making these choices of design parameters and operators, Google Glass is placed into a design space close to that of the smartphone. What is the positional relation of Google Glass and the smartphone in the design space? Does Google Glass exploit any property that old technologies do not? This question can be studied by introducing the concept of the four affordances. Affordances can be simplified as “action possibilities”; Donald Norman defined them as the perceived and actual properties of a thing, primarily those fundamental properties that determine just how the thing could possibly be used (Norman, 2002).

4.1.1. Procedural Affordance

The procedural affordance measures a technology’s ability to represent and execute conditional behaviors (Murray, 2011). It is hard to compare the procedural affordances of Google Glass and the smartphone across their whole range. However, by de-blackboxing them according to the modular structure, the multiple subsystems can be examined separately in terms of whether or not they contribute to exploiting the affordance. The addition of the touchpad obviously contributes to the procedural affordance, because in conditions where the voice command device cannot provide effective functions, the touchpad can be used to give clearer executable instructions.
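A tiny sketch of my own can make "conditional behavior" concrete: fall back from voice to the touchpad when the condition (ambient noise) makes voice unreliable. The noise threshold is an invented number, not a Glass specification.

```python
# Procedural affordance as encoded conditional behavior: choose the input channel by context.
def choose_input(ambient_noise_db: float, noise_threshold_db: float = 70.0) -> str:
    """Pick the input channel based on a condition sensed in the environment."""
    if ambient_noise_db > noise_threshold_db:
        return "touchpad"   # voice commands are unlikely to be recognized in loud settings
    return "voice"

print(choose_input(55.0))   # 'voice'
print(choose_input(82.0))   # 'touchpad'
```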

4.1.2. Encyclopedic Affordance:

The encyclopedic affordance reflects a technology’s ability to contain and transmit information. As mentioned in the section on modular operators, if we treat the smartphone as the old design structure, the exclusion of the touchscreen gives Google Glass an inferior encyclopedic affordance, since the display on the Glass currently cannot present extensive media formats and genres. This is an apparent shortcoming of Google Glass. In addition, its limited storage capacity also impairs the encyclopedic affordance. However, Google Glass is usually used with the aid of a smartphone; if we treat them as a whole system, the introduction of Google Glass actually enhances the procedural and encyclopedic affordances of the whole system. In addition, “in the wild” a technology is rarely fixed; it constantly changes its architecture, adapts, and reconfigures as purposes change and improvements occur (Arthur, 2009). As with many other technologies, enhancing the display and expanding storage is the direction of Google Glass’s continuing design evolution.

4.1.3. Participatory Affordance:

Digital media are participatory in allowing an interactor to manipulate, contribute to, and have an effect upon digital content and computer processing (Murray, 2011). The concept of the participatory affordance is closely related to two values of interface design: being intuitive and being transparent.

  • Intuitive:

Most ordinary users do not ask for the things they want to own before seeing them, just as babies cannot describe their needs but can immediately point to something they want when the object comes into view. Technology is supposed to meet this kind of unconscious expectation.

Google Glass does not provide wordy guidance to teach users how to use its various functions. Look at the menu that Google Glass displays: all the options are presented as simple icons. Google Glass presents users with functions that they seem to naturally understand how to use. Murray says the designer must script both sides so that the actions of humans and machines are meaningful to one another (Murray, 2011). However, this script is not provided only by designers. Since intuitions about the world are often based on repeated experience (Murray, 2011), people had adapted to understanding many of these symbols long before the designers of Google Glass used them.


On the menu, some icons imitate the appearance of old media we have rich experience with, such as the camera and the clock; they lead users to expect to take photos and check the time with them. Some intuitions are built more recently, such as users’ interpretation of the magnifier icon: it does not promise higher magnification, but means “looking up” something. Similarly, although the symbol “64°” might mean a measurement of angles or something else in a different context, users of Google Glass usually connect it only to the weather. These connections happen so naturally because users are already familiar with these metaphors from other digital devices, especially smartphones. We are already inhabitants of the infosphere that Google Glass has just entered. Although Google Glass is unconventional in a certain sense, it still inherits established conventions to a large extent to build effective human-computer interaction.

  • Transparent

The aforementioned examples also point to another related design value that Google Glass achieves: transparency. One example can illustrate how the addition of the display assembly promotes the participatory affordance.


The aforementioned optical head-mounted display (OHMD) allows users to see information provided by computers without looking away from their usual viewpoints. The compass application on a smartphone retains the image of a compass, but Google Glass erases that image and preserves only the concept of a compass. When opening Google Compass on the Glass, users see two short red lines in the middle of the screen, and they notice direction characters pass between them as they turn their heads. This minimalist design is informative enough for users. As explained before, most of us already possess the knowledge of a precedent medium, the compass; we can naturally imagine that we stand at the origin of a coordinate system, as presented on an ordinary compass, and the red lines straightforwardly tell us the relationship between the direction we face and the coordinate system built inside our cognitive system.
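The logic of that compass strip can be sketched in a few lines: which direction labels are drawn between the red lines depends only on the wearer's heading. The 30-degree field of view and the label spacing are assumptions for illustration, not Glass's actual parameters.

```python
# Sketch: compute which compass labels fall inside the display's field of view.
LABELS = {0: "N", 45: "NE", 90: "E", 135: "SE", 180: "S", 225: "SW", 270: "W", 315: "NW"}

def visible_labels(heading_deg: float, fov_deg: float = 30.0):
    """Return the labels visible for a given head direction, with their offsets."""
    out = []
    for bearing, label in LABELS.items():
        # signed angular difference between the label's bearing and where the user faces
        diff = (bearing - heading_deg + 180) % 360 - 180
        if abs(diff) <= fov_deg / 2:
            out.append((label, diff))   # diff < 0 means drawn left of the red center lines
    return out

print(visible_labels(0))     # [('N', 0.0)]   facing north
print(visible_labels(350))   # [('N', 10.0)]  'N' drifts right as the head turns toward the west
```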

Similarly, Google Glass takes over the functions of the clock, camera, thermometer, and many other media we use for information exchange, but erases their physical bodies. Closely related to the augmentation by the display assembly, another advantage of Google Glass is that the display reduces the access time of information. When the time between cognition and action is very small, we hardly recognize the medium between the environment and us; we feel the interface is an extension of the self.

4.1.4. Spatial Affordance:

I think the spatial affordance is closely related to the participatory affordance, because the sense of space is a kind of mental model built on the relationship between users’ cognitive systems and what the media present. This mental model helps us make sense of the world. The computer interface acts as a code that provides its own model of the world, its own logical system, or ideology (Manovich, 2001). What is the logical system of space that Google Glass provides?

Gibson’s affordances capture a fundamental aspect of human perception and cognition: much of the information needed for perception and action is in the environment as invariants that can be picked up directly (Zhang & Patel, 2006). The direct collection and processing of information in our environment provides humans’ sense of “real space”. Google Glass’s semitransparent display makes it possible for users to gain the feeling of picking up computer-processed data from the real scene. To return to the example of Google Compass, the letter “N” always appears in front of the wearer when she faces north, and it gradually moves to the right as she turns her head toward the west. For users, it seems that the “N” is an invariant existing beside us. Another example is the turn-by-turn directions provided by Google Navigation: if a user follows them to the same destination every time, the same visual direction will always appear at the same corner. These spatial metaphors create interaction patterns that are consistent with the real environment, which leads users into an augmented reality. In Google Glass’s spatial logic, computer-generated information is mapped onto the real world.

4.2 Software epistemology of Google Glass: about the human-computer-culture interface

Manovich uses the term “cultural interfaces” to describe the human-computer-culture interface: the ways in which computers present cultural data and allow us to interact with it (Manovich, 2001).

He also provides five principles of new media as general tendencies of a culture undergoing computerization (Manovich, 2001). The first two principles operate on the material level: numerical coding and modular organization. They can be studied by discussing the symbols and syntax of the design structure. The third and fourth principles are automation and variability, which can be achieved on the basis of the first two. I think automation and variability are highly related to the design affordances discussed above. The last, fifth principle, cultural transcoding, aims to describe the most substantial consequence of media’s computerization (Manovich, 2001). He thinks that, in general, new media can be thought of as consisting of two distinct layers, the “cultural layer” and the “computer layer”, and that these two layers are being composited together. Manovich encourages a “conceptual transfer” from the computer world to culture. I think the practice of de-blackboxing Google Glass provides such a view: examining cultural categories and concepts from the computer’s ontology.

We started by looking at the different modules and their interrelationships, then understood the genres of data created and transferred among them; from there, we can begin to see the cultural, social, and economic forces that shape each module.

People gradually agree that “all intellectual work is now ‘software study’” (Manovich, 2001). Software is used to create and access media objects and environments, and it enables the whole global information society. The design evolution of Google Glass not only translates the functions of precedent media, but also mediates other cultural forms in a larger scope. The agency of many social institutions is also involved in the design and use of Google Glass.

By opening up Google Glass, we gained a glance at the assemblies on its first several layers, together with the protocols and standards utilized to combine them and transfer data. We gain a good view of the partners and subsidiaries of Google that make those semi-finished products, and we also come to know the organizations and institutions related to the standards and protocols.

I think the design value of transparency can also be used to explain why these social institutions are hidden from us. We no longer need to buy applications from different suppliers and connect them to other hardware by ourselves, as we once did when playing video games from a hard drive. Looking at the modular structure of Google Glass, we can notice that the activities of different suppliers are also packaged and hidden from us in a pattern similar to the modular structure. Likewise, the protocols and standards are hidden behind the various pieces of remediated information.

Being aware of Google Glass’s modular structure offers another insight. With this new vision unfolded, we can easily find that Google Glass accesses an established system similar to that of Google’s smartphone. Google keeps partnerships with many companies that produce fittings or provide services for its smartphones. Google Glass, as an actor, borrows to a great extent the network already linked by Google’s smartphone. In other words, by borrowing the existing physical design parameters, Google Glass inherits not only many features from the smartphone but also the networks in which it is located.

Especially considering that there is no GPS module in Google Glass and that it has to rely on the smartphone to access and distribute location-related information, Google Glass and the smartphone are, in some sense, like conjoined twins. In the network, the smartphone has already built close relationships with third parties, including social media websites, news organizations, educational institutions, etc. Many of them, such as Facebook or the New York Times, have easily transplanted their applications from the smartphone to Google Glass.

Google Glass is locked into a software universe that already exists. From this view, it is more like a node that links other actors and agencies than an independent tool that can only deal with information in fixed ways. Still, Google Glass, as a newcomer to the network, adds a little more complexity to the layer that mediates all areas of contemporary societies. Google might prove its potential to increasingly enhance our “interfacing” with various cultural data; it might also create some surprising impacts in the network, such as influencing educational or legal systems. As an actor in the software universe, it will also carry on the ambitious mission of this network: to carry on the new media revolution, the shift of all of our culture to computer-mediated forms of communication (Manovich, 2001).

5. Conclusion

Google Glass, like other technologies, is descended from earlier technologies; the mechanism of “heredity”, the detailed connection that links the present to the past, is worth studying (Arthur, 2009).

I think unfolding a technology within the framework of its modular structure is a good approach to discussing its evolution. By opening up Google Glass and examining the functional components hidden behind its novel-looking appearance, we obtain a structural account of Google Glass. The modular structure can serve as a clue for interpreting and evaluating it. Because of the complexity of Google Glass, it is more practical to assess changes in affordances based on the addition or exclusion of particular modules. Also, the relationships among different modules further reflect the social networks linked by Google Glass in the software universe.

References

Anthony, S. (2012). US military developing multi-focus augmented reality contact lenses | ExtremeTech. ExtremeTech. Retrieved May 10, 2014, from http://www.extremetech.com/computing/126043-us-military-developing-multi-focus-augmented-reality-contact-lenses

Arthur, W. B. (2009). The nature of technology: What it is and how it evolves. Simon and Schuster.

Baldwin, C. Y., & Clark, K. B. (2000). Design rules: The power of modularity (Vol. 1). MIT Press.

Bolter, J. D., & Grusin, R. (1999). Remediation: Understanding new media. Cambridge, MA: The MIT Press.

Campbell, J. (2002). Flight of the Wild Gander:- The Symbol without Meaning. California: New World Library. p. 143.

Floridi, L. (2010). Information: A very short introduction. Oxford University Press.

Gouyet, J (1996). Physics and fractal structures. Paris/New York: Masson Springer.

Mandelbrot, B.B. (1983). The fractal geometry of nature. Macmillan. Retrieved 1 February 2012.

Manovich, L. (2001). The language of new media. MIT press.

Manovich, L. (2013). Software takes command (Vol. 5). A&C Black.

Miller, Claire Cain (2013). “Google Searches for Style”. The New York Times. Retrieved 8 May 2014.

Murray, J. H. (2011). Inventing the medium: principles of interaction design as a cultural practice. The MIT Press.

Norris, G., Thomas, G., Wagner, M., & Forbes Smith, C. (2005). Boeing 787 Dreamliner—Flying Redefined. Aerospace Technical Publications International.

Norman, D. A. (2002). The design of everyday things. Basic Books.

Womack, M. (2005) Symbols and Meaning: A Concise Introduction. California: AltaMira Press.

Zhang, J., & Patel, V. L. (2006). Distributed cognition, representation, and affordance. Pragmatics & Cognition, 14(2).

Google Glass. (n.d.). Wikipedia. Retrieved May 11, 2014, from http://en.wikipedia.org/wiki/Google_Glass#cite_note-US20130070338-25

Is Fractal a Good Metaphor for Google Glass?

Tianyi Cheng

Notes:

A fractal is a mathematical set that typically displays self-similar patterns (Gouyet, 1996). The concept of fractal includes the idea of a detailed pattern repeating itself (Mandelbrot, 1983). In Manovich’s The language of New Media, he points out that just as a fractal has the same structure on different scales, a new media object has the same modular structure throughout (Manovich, 2001).


However, is fractal a necessary metaphor? Can we just replace it with the simpler concept—layers? Because different layers don’t need to share self-similar patterns and can have parallel relations. Although Manovich provides us with some examples, none of them is detailed enough to show the fractal pattern on different scales of one object. If there exists fractal structure, what is the pattern that different scales share? I think Google Glass can be a dynamic case to deblackbox.

I assume Manovich’s use of fractal is accurate. The concept of the fractal also contains two basic principles of technology, combination and recursiveness, that Arthur brings up in The Nature of Technology (Arthur, 2009). He also mentions that a technology and its assemblies should all supply a functionality and be executable (Arthur, 2009). In the General Definition of Information (GDI), information is also treated as a reified entity, something that can be manipulated (Floridi, 2010). If we treat all elements of Google Glass as stuff constituting information, GDI can supply the primary framework for de-blackboxing.
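A quick sketch of my own can make the layers-versus-fractal contrast concrete: a flat list of layers has no nesting, while a fractal-like description is recursive, with every module itself made of modules. The example structures are invented for illustration.

```python
# Flat "layers" versus a recursive, self-similar module description.
layers = ["applications", "OS", "hardware", "network"]          # parallel, not nested

fractal = ("Google Glass", [
    ("display assembly", [("optics", []), ("sensors", [])]),
    ("main logic board", [("CPU", []), ("memory", [])]),
])

def depth(node) -> int:
    """A fractal-like structure has depth; a flat layer list does not."""
    _, children = node
    return 1 + max((depth(c) for c in children), default=0)

print(len(layers))      # 4 layers, all on one level
print(depth(fractal))   # 3 levels of modules within modules
```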


According to GDI, information is made of well-formed data that is meaningful; in other words, information is equal to data or symbols plus syntax, and it should comply with the semantics of the chosen system (Floridi, 2010). I think another advantage of GDI is that it does not strip information and symbols from their material carriers. In a general sense, the following definition of a symbol is proposed: a symbol is an energy evoking, and directing, agent (Campbell, 2002). In this sense, Google Glass can be analyzed as symbols that are composed by syntax at different levels. Symbols on different levels are packaged as objects and manipulated by new syntax. Elements are assembled into larger-scale objects but continue to maintain their separate identities (Manovich, 2001).

“The Stack” shows the different layers between users and the physical materials of the network. The interface simulates the way humans view the world. However, as the layers go deeper, things are presented in a way that is more distant from natural language.

When applying the model of the “stack” to analyze Google Glass, I met some problems. The “stacks” do not seem to reflect the fractal structure; they are all layers that look relatively independent, and this model separates hardware and software. Also, I am confused about the position of the “network”. If the network means the connection system made of many different computers, it should exist at all the levels of hardware, OS, and applications, not only at the bottom material level. But I think this conflict can be solved if we rebuild the “stack” in a fractal mode. By applying a new level of syntax, we can see how actors on different scales create the network. I try to further merge the “stack” into the fractal structure. Hardware should also be de-blackboxed and made to correspond to the functions that are shown through software. Applications can obviously be viewed as objects, and can be treated as symbols. I am not sure about the OS: it might be treated either as objects or as syntax (as a programming language), depending on the definition. I feel the interface is more a metaphor than an object or syntax; it reflects an interactive relationship between two objects.

A technology is not a largely self-sufficient and fixed structure, but is subject to occasional innovations. So the next questions are: how does the fractal pattern inside Google Glass grow? How are cultural and social concepts built into the syntax that manipulates symbols on different levels to create Google Glass? Conversely, how do its functions and the conventions of HCI transcode our concepts? I will further consider these questions in my final project, borrowing more concepts from this semester, such as remediation, hypermedia, Actor-Network Theory, augmented reality, affordance, etc.

Works Cited

Manovich, L. (2001). The language of new media. MIT press.

Gouyet, J (1996). Physics and fractal structures. Paris/New York: Masson Springer.

Mandelbrot, B.B. (1983). The fractal geometry of nature. Macmillan. Retrieved 1 February 2012.

Campbell, J.(2002). Flight of the Wild Gander:- The Symbol without Meaning. California: New World Library. p. 143.

Womack, M. (2005) Symbols and Meaning: A Concise Introduction. California: AltaMira Press.

Arthur, W. B. (2009). The nature of technology: What it is and how it evolves. Simon and Schuster.

Floridi, L.(2010) Information: A very short introduction. Oxford University Press.

 

Bibliography

Manovich, L. (2001). The language of new media. MIT press

This book provides me with the main topic for the final project. In this book, Manovich discusses five principles of new media and introduces a metaphor, the fractal, to describe the principle of modularity. He also emphasizes the importance of software studies rather than media studies; the former is a method of studying new media that considers its material base and combines knowledge of computing. My final project will approach Google Glass from this perspective.

Floridi, L.(2010) Information: A very short introduction. Oxford University Press.

This book provides basic concepts in information theory, such as the elements of information and the flow of information. I will use some of these concepts as a base in my final project. In this book, information is discussed from several different perspectives, including mathematics and science. It also discusses the philosophy of information and information ethics by considering the broader social background. I think those discussions offer me a good starting point for considering human-computer interaction.

Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1), 335-346.

This is a profound article, which discusses the important social and cognitive problem of symbol grounding. I will not dig too deeply into questions related to epistemology, but this article provides me with a framework for symbol systems.

White, R., & Downs, T. (2007). How computers work. Que Corp.

It is a definitive guide to the basic knowledge of computer science. It introduces almost every component of hardware found inside PCs, from transistors to processors. It also has in-depth explanations of home networking, the Internet, PC security, how networks of mobile devices operate, etc. Equipped with this knowledge, I gained a clearer approach to de-blackboxing Google Glass.

Other Sources

Kindberg, T., Barton, J., Morgan, J., Becker, G., Caswell, D., Debaty, P., … & Spasojevic, M. (2002). People, places, things: Web presence for the real world. Mobile Networks and Applications, 7(5), 365-376.

Zook, M. (2009). How does software make space? Exploring some geographical dimensions of pervasive computing and software studies.

Starner, T., Mann, S., Rhodes, B., Levine, J., Healey, J., Kirsch, D., … & Pentland, A. (1997). Augmented reality through wearable computing. Presence: Teleoperators and Virtual Environments, 6(4), 386-398.

Google Glass. (2014, April 22). In Wikipedia, The Free Encyclopedia. Retrieved 09:46, April 22, 2014, from http://en.wikipedia.org/w/index.php?title=Google_Glass&oldid=605382389

Google Glass and the Ethics of Information

Tianyi Cheng

Google Glass is finally for sale to the public. It is a good time to take a look at what we can see with it. Google Glass is sending us a new package of information. However, by unpacking it, I do not think the information is very different from what we received before at the levels of symbols, syntax, and message.


Combination might be a key to figuring out realistic mechanisms of the invention and evolution of technology (Arthur, 2009). If we took apart Google Glass, we would find assemblies and subassemblies that can be found somewhere else. The chips, consisting of innumerable transistors, run the Boolean logic that presents the sign system, or symbols, just as computers started doing more than half a century ago. Chips are just becoming more tightly packed, in keeping with Moore’s Law. This observation explains half of the reason why these smart glasses could not appear decades earlier, although humans already had the idea. Code is written as rules for assembling symbols at the syntax level; it helps the Glass put information into fixed formats and transfer data with established protocols. The ambient sensor, proximity sensor, and motion sensor were around us long before; the touchpad, small-sized camera, and voice commands are not strange either. These and other feedback devices work with the Internet and satellites, receiving and generating messages that combine symbols and syntax. These messages accord with existing measures from authorities, industrial pacts, or social traditions. Google Glass is locked into a standardized world.
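A rough worked example (my own numbers, for illustration only) shows why packing density matters here: if transistor density doubles roughly every two years, a chip small and light enough for a glasses frame was simply not available decades earlier.

```python
# Back-of-the-envelope Moore's Law calculation (doubling period is an assumption).
def density_growth(years: float, doubling_period_years: float = 2.0) -> float:
    """Factor by which transistor density grows over a span of years."""
    return 2 ** (years / doubling_period_years)

print(round(density_growth(20)))   # ~1024x denser than a chip from 20 years earlier
```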

But it is still a new technology. It makes changes at the semantic level by rearranging many human and non-human actors and relinking the networks. Google Glass shows us an augmented reality, and it is also intertwined with ethical problems in reality. One main reason that Google Glass has stirred controversy is the ethics of augmented information, which can be viewed from the semantic level. I think morality can be a topic related to distributed cognition, which is also influenced by external cognitive artifacts and activities in concrete situations (Zhang & Patel, 2006). The book Information: A Very Short Introduction provides an interesting model for de-blackboxing Google Glass from the perspective of ethics. As an actor existing in the infosphere, the Glass brings us a more immediate relationship with once-hidden information.


Considering information-as-a-resource ethics, the previously mentioned sensors, GPS systems, and Internet connection remediate information gathered in the past and present it to us together with new information in real time. It has the potential to let us see a department’s information from outside the school buildings. Socrates argued that a well-informed agent is more likely to do the right thing (Floridi, 2010). However, more information does not necessarily mean that we deal with it justly. A better prediction might also let us pursue more self-interested goals, and we might even make more prejudiced judgments by seeing others’ public profiles. Also, Socrates lived in an era in which people were not aware of infoglut. Google Glass might let second-hand information heavily influence our direct experience. It is worth considering how to create more effective filters that provide only certain types of information.
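As a minimal sketch of the kind of filter suggested above (my own example, not a Glass feature), only a whitelisted subset of incoming information is surfaced to the wearer; the category names are assumptions.

```python
# Filter sketch: drop second-hand information the user has not opted into.
ALLOWED_TYPES = {"navigation", "calendar"}     # hypothetical user-chosen categories

def filter_feed(items: list[dict]) -> list[dict]:
    """Keep only items whose type the wearer has explicitly allowed."""
    return [item for item in items if item.get("type") in ALLOWED_TYPES]

feed = [
    {"type": "navigation", "text": "Turn left in 200 ft"},
    {"type": "social",     "text": "A stranger's public profile"},
]
print(filter_feed(feed))   # only the navigation card remains
```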

From the perspective of information-as-a-product ethics, people wearing Google Glass can also produce information with its cameras and voice command devices. The information can be sent via Bluetooth (connecting to other devices) or WiFi (connecting to the Internet). As information producers, we may be subject to constraints while being able to take advantage of opportunities (Floridi, 2010). Within the constraints of the current technology, we cannot edit video or pictures before sending them directly from Google Glass. So one interesting topic can be: can we lie with Google Glass? Does it force us to tell the truth? Will Google Glass become an important information source on social media or in courts, and be considered to have better accountability and liability than information sent from iPhones or PCs?

The third aspect is information-as-a-target ethics; it includes a human actor’s respect for, or breach of, someone’s information privacy or confidentiality, for example hacking (Floridi, 2010). However, I do not think Google Glass has a very good capacity to gain unauthorized access to private information systems. Of course, it can take pictures without consent, but this activity happens at a visible level, which can be more easily regulated than hacking with code. Conversely, it can be harder to notice that the camera on your Google Glass has been hacked. Currently, the Glass cannot be used in a very personalized way and can only execute certain applications unless it is linked to other devices. In this respect, I think Google Glass owners are more likely to be victims than victimizers.

In addition, I do not think this model represents the whole story. For example, the GPS microchip allows Google Glass to determine its location via satellite signals; the location is not merely a product or resource, but information shared in the infosphere.

Google Glass is not widely used currently, and the innovation of technology is socially and culturally rooted, so its mediation functions need further observation. Although it is mature enough to go to market, it might still be at a new starting point in its recursive process.

  Works Cited

Arthur, Brian. The Nature of Technology: What It Is and How It Evolves. NY: Free Press, 2009.

Floridi, Luciano. (2010). Information: A very short introduction. Oxford University Press.

Google glass. (2013, March 16). In Wikipedia, The Free Encyclopedia. Retrieved 10:35, April 15, 2014, from http://en.wikipedia.org/w/index.php?title=Google_glass&oldid=544563843

Zhang, J., & Patel, V. L. (2006). Distributed cognition, representation, and affordance. Pragmatics & Cognition, 14(2).

The Field Work of Ambient Computing

Tianyi

I explored several blocks close to the Foggy Bottom Metro Station.


Here is a photo I took from outside a dry cleaner. It was the first time I had run into this cleaner, but I could get much information about it through the Internet by seeing the ratings and reviews on websites. In this case, a newcomer to the city will not feel she lacks too much past experience of the city; she can get information from content-sharing platforms created by other customers. It is one kind of urban sensing: crowd-sourcing (Nabian & Ratti). Cities do not unfold in front of people merely with their current look; they are also displayed on a timeline, extending into the past as well as the future. By scanning the posted QR code, I could download coupons to my cellphone. I also learned that the discount will end in August. This cleaner also accepts payment by GW student card, which I think can be regarded as a kind of viral sensing. This method of payment did not build new infrastructure, but simply connected the POS, the banking system, and campus card services. It provides convenience by sensing consumers’ identities and completing the payment at the discounted price at the same time.

On the door of a coffee shop next to the cleaner, more small posters can be found. One of them looks like a bottle: it means this coffee shop is a partner of TapIt, an association that promotes a bottle-less lifestyle. They have bound many cafés into a network that lets those who want water find those willing to provide it. People can find the locations of TapIt partners on an online map. It expands each citizen’s perceived sphere of responsibility from the domestic space to the space of the city, which might result in a more responsible urbanity (Nabian & Ratti). Another poster informs people that there is a security system inside. I went to the website of the security company. It provides security services by utilizing existing equipment (another example of viral sensing). It is claimed to be a system that can deal with various kinds of threats, and users can operate it with a smartphone.


Some databases have been created to let people make better sense of service providers. People who borrow rental bikes might encounter the problem that they cannot find docks at a bike station when they want to return the bike. If we cannot create more docks immediately, riders can at least distribute themselves better across stations. A mobile app shows many pie charts indicating the locations and numbers of bikes and docks; the data might be collected by sensors built into the bike stations. It enables people to make better decisions. Similarly, with another app, people can input their expected arrival and departure times and compare the parking fees nearby. Once-hidden information now becomes publicly accessible.
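The decision the app supports can be sketched in a few lines; the station names, distances, and dock counts below are invented for illustration.

```python
# Sketch: choose a return station from the kind of availability data the app displays.
stations = {
    "21st & I St":   {"free_docks": 0, "distance_m": 120},
    "23rd & E St":   {"free_docks": 5, "distance_m": 400},
    "New Hampshire": {"free_docks": 2, "distance_m": 650},
}

def best_return_station(stations: dict) -> str:
    """Nearest station that still has at least one free dock."""
    usable = {name: s for name, s in stations.items() if s["free_docks"] > 0}
    return min(usable, key=lambda name: usable[name]["distance_m"])

print(best_return_station(stations))   # '23rd & E St'
```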

The control system of the traffic light better displays an ambient computing system’s capability for actuation. People can touch the sensor to tell the system that they want to cross, and the traffic light will “decide” whether or not to change based on several factors in the context. Intelligent and assistive devices provide a mechanism by which AmI systems can execute actions and affect the system users (Cook, Augusto & Jakkula, 2009). However, I cannot say that these ambient computing devices I saw have intelligence. AmI is considered a new challenge for AI and the next step in AI’s evolution (Ramos, Augusto & Shapiro, 2008). However, I doubt that the maturing ambient computing is generated from AI. These systems still do not have the ability to make creative decisions as humans do; at this stage, they do not distinguish themselves from complicated operating systems that run well-designed algorithms.
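A toy sketch of the actuation logic described above makes the point that this is rule-following rather than intelligence; the thresholds and inputs are my assumptions, not the actual controller's.

```python
# Context-based actuation: weigh a pedestrian's button press against current conditions.
def should_switch_to_walk(button_pressed: bool, seconds_since_last_walk: float,
                          cars_waiting: int) -> bool:
    """Decide whether to give pedestrians the signal, using simple contextual rules."""
    if not button_pressed:
        return False
    if seconds_since_last_walk < 30:      # do not switch again too soon
        return False
    return cars_waiting < 10              # hold off only when traffic is heavy

print(should_switch_to_walk(True, 90, 3))   # True
print(should_switch_to_walk(True, 10, 3))   # False
```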


The last picture I want to put here is one I took at Healy Hall. The poster encourages people to come back to the “real world”. However, I do not think technology is leading us toward a sort of “fake reality”, although maybe we will never walk outside Plato’s cave and see reality. Putting that argument aside, I think ambient computing is not creating a virtual reality, but just provides us with information in unconventional ways.

Works Cited

Cook, D. J., Augusto, J. C., & Jakkula, V. R. (2009). Ambient intelligence: Technologies, applications, and opportunities. Pervasive and Mobile Computing, 5(4), 277-298.

Nabian, Nashid & Ratti, Carlo, “The City to Come,” in Innovation: Perspectives for the 21st Century(OpenMind), https://www.bbvaopenmind.com/en/article/the-city-to-come/.

Ramos, C., Augusto, J. C., & Shapiro, D. (2008). Ambient intelligence—The next step for artificial intelligence. Intelligent Systems, IEEE, 23(2), 15-18.

SixthSense, Hiding the Blackbox

Tianyi Cheng

It is too early to say that we already have everything we need at our fingertips. There is always a time lag between the creation of commands in our minds and the execution of those commands. This time lag, together with the physical input and output components of computers, makes us clearly aware that there is a blackbox in front of us. This process of execution will become more instant, and the input and output interfaces are going to be combined. Pranav Mistry at MIT’s Media Lab has developed “SixthSense”, a technology that lets you use your fingers to speak your mind immediately. Making commands by gesturing, people will not even notice there is a blackbox between themselves and the output of information processing.

At this stage, the SixthSense prototype comprises a pocket projector, a mirror, and a camera. The hardware components are coupled in a pendant-like wearable mobile device. Both the projector and the camera are connected to the mobile computing device in the user’s pocket. The projector projects visual information onto surfaces, while the camera recognizes and tracks the user’s hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers at the tips of the user’s fingers using simple computer-vision techniques (SixthSense).
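A hedged sketch of the marker-tracking idea (not Mistry's actual code) shows how simple the basic step can be: find the centroid of a colored fingertip marker in one camera frame via color thresholding with OpenCV. The HSV range is an assumption that would need tuning, and this ignores multiple markers and gesture recognition.

```python
# Sketch: locate the largest red blob (a colored fingertip marker) in a camera frame.
import cv2
import numpy as np

def find_marker(frame_bgr):
    """Return (x, y) of the largest red blob, or None if no marker is visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 120, 80]), np.array([10, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

# Usage: grab one frame from a webcam and print the marker position, if any.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
print(find_marker(frame) if ok else "no camera frame")
```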


This technology tries to move all programs and file storage to the cloud, which makes powerful devices wearable. The very idea of the “screen” might change soon: SixthSense can turn all surfaces into “screens” and merge the input and output interfaces. One can watch a video on a newspaper’s front page, navigate a map on the dining table, or take a photograph simply by making a framing gesture, as in our childhood fantasies. It aims to remove distraction, to allow users to focus on the task at hand rather than the tool (Murray, 61). The metamedium is created by seemingly adding one layer onto the ordinary world and linking everything together. But for Manovich, “SixthSense” might not be revolutionarily different from the Turing machine; it is still on the way to general-purpose simulation. It can handle “virtually all of its owner’s information-related needs” (Manovich, 68).

Another exciting thing is that people can change the components of the SixthSense device and create their own apps with it. Instructions for how to develop a personal device can be found on this website: https://code.google.com/p/sixthsense/ To a certain degree, SixthSense will never be a terminal product; it will always be by design and always leave room for not-yet-invented media. I think it provides us with a new way to see “deep remixability”: the remix can go even deeper than what we currently do with new media. We now choose from various established algorithms to generate remixed works, but we might soon be able to remix algorithms as well. Users could execute information with algorithms they create themselves, which can better simulate the way we deal with everyday problems (we do not borrow established algorithms to run our lives).

Andy Clark views language itself as a form of mind-transforming cognitive scaffolding (Clark, 44). Other interesting considerations can be: although SixthSense provides higher cognitive functions, it still changes our language at a very basic level. Will we further develop the gestures that we overlooked after developing spoken language? How will spoken language and gestures cooperate in future HCI? If the computer, viewed as a medium rather than a tool, does not teach us a new language, how does it influence our language system?

 

Works Cited

Clark, Andy. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. New York, NY: Oxford University Press, 2008.

Manovich, Lev. Software Takes Command. New York: Bloomsbury Academic, 2013.

Murray, Janet. “Affordances of the Digital Medium.” Inventing the Medium. Cambridge: MIT Press, 2012.

“SixthSense – a Wearable Gestural Interface (MIT Media Lab).” SixthSense – a Wearable Gestural Interface (MIT Media Lab). N.p., n.d. Web. 02 Apr. 2014.

Individual Creation and Collective Reproduction

Tianyi Cheng

Benjamin’s thought is enlightening because he puts art and cultural production against a grand background and criticizes them on different levels. However, when reading his articles, I got the feeling that the narrative might be too grand in certain respects. Benjamin seems to draw a line between fine art and pop culture, which cannot always be differentiated clearly, especially nowadays. He viewed repeatability as “the stripping of the veil from the object” and “the destruction of the aura” (Benjamin, 2008). By using the metaphor of the “aura”, he viewed traditional and elite artworks as better forms. I am not sure whether there is a contradiction between this attitude and his political position: he criticized the ideology of the bourgeoisie while showing an individualist and elitist spirit in his taste in art. His idea of the “aura” has a romanticist’s nostalgia to me.

One meaning of the term “reproduce” that Benjamin uses implies that the creators of artworks deliberately make a certain number of copies of their productions. However, I think this concept cannot be easily applied to the age of Web 2.0. When an Internet user creates an artwork online, he or she usually cannot decide the number of copies of the work; it might not even be proper to count copies of online works. Even though the work can be displayed on many other users’ screens, the creator might still view it as a single creation that is merely viewed by the many audiences who access the pages. There is no clear process of duplication, and how the artwork spreads depends largely on audiences’ clicks. The public is not such a passive receiver of cultural productions. Also, a digital creation does not have a unique existence in a place. Benjamin’s emphasis on the context of place is attractive but sounds like Luddism nowadays.

Composer Eric Whitacre led a virtual choir of singers from around the world. He talks through the creative challenges of making music powered by YouTube, and unveils the first 2 minutes of his new work, “Sleep”.

However, this “back to the old times” inclination reminds me of a book called You Are Not a Gadget: A Manifesto, written by Jaron Lanier, who coined the term “virtual reality”. Lanier takes a different perspective from Benjamin and Bourdieu, who criticized the maintenance of ideological discourses in social fields such as culture and media, which Bourdieu called collective misrecognition (Irvine). Where these two scholars want to decode how the value of art is encoded, Lanier thinks that not much meaningful value has been newly encoded into online art activities.

Lanier doubts the tendency toward collective creation in the age of Web 2.0. He once placed great hope in the new media environment for creation. However, after the first ten years of this century had passed, he commented with frustration that no new creative pattern of music had emerged that could represent the 21st century (I do not know whether he has commented on Eric Whitacre). Also, most spoofs and remixes are inferior. He sets “peer production” apart from work that is truly original and blames online conformity for the lack of creativity. To him, the Cloud is threatening ingenuity; real individuality and independent creativity are always the root of the variety of ideas, artworks and digital productions. For me, this argument is more convincing than the conception of “aura” as a support for traditional patterns of creation.

Works Cited

Benjamin, W. (2008). The Work of Art in the Age of Its Technological Reproducibility, and Other Writings on Media. Harvard University Press. pp. 255-256.

Irvine, Martin. “Cracking the Art Value Code: Thinking with Bourdieu.” Web.

Lanier, Jaron (2010). You Are Not a Gadget: A Manifesto. New York: Knopf.

 

Wireless Charging Bowl

The discussions on mediation and hybrid media in the readings mainly focus on software and visual technologies. I happened to see a new device that might offer another aspect from which to see mediation. Since a “deep remix” can operate at multiple levels simultaneously (Manovich, 2001), a linked system can be built not only on the visual level but also on the infrastructure level.

It is always annoying to carry chargers or rush for sockets. What is more, the number of wearable devices is growing, and it has become a bothersome task to match all the devices with their charging cables. Many people have a habit of dropping their cellphone, iPod or camera in one place when they arrive home. It would be fortunate if the place where we put all our devices could also charge them for us. Intel recently introduced a bowl with this potential capability. Right now, the wireless charging bowl is not smart enough to charge all the devices we own; it merely pairs with Intel’s newly announced smart headset, which charges automatically as soon as it is dropped into the bowl. But the company has an ambitious plan to expand this technology so that the bowl can accommodate a wider array of devices, including phones, tablets, and Ultrabooks (Bell).


People might think it is just an Intel thing; however, it is a new standard. Device manufacturers would need to adapt their designs to support the bowl, and it would not take long to modify products to support the system (Lanxon). The Alliance for Wireless Power (A4WP) is a not-for-profit organization formed in 2012. It includes leading brands from a wide range of industries, including consumer electronics, mobile services, wireless technology, automotive, furniture, software and more. They are working cooperatively to build a global wireless charging ecosystem based on Rezence™ technology (“About A4WP”). Rezence technology makes it possible to charge multiple devices simultaneously without precise placement.

Even though it is mainly a hardware innovation, if we deblackbox (actually, deblackbowl) the wireless charging bowl, we can still find the double logic of mediation. We produce chargers and sockets not to use them directly; they have no interfaces, and we simply rely on them to maintain our interaction with other digital devices. Since they are not the ultimate purpose of our media use, they should be erased. The material “redundancies” are becoming invisible, and the media will be more transparent. These are all attempts to achieve immediacy by ignoring the presence of the medium (Manovich, 2001). We are also trying to erase the boundaries between different media, although we tried hard to multiply them before. This hypermediacy also supports immediacy: people can grab a digital device and use it as naturally as a primitive human grabbing a stone tool.

Methods discovered in one medium provide metaphors that contribute new ways to think about notions in other media (Kay, 1977). The black bowl is just a starting point. The technology will be used to turn almost any surface into a wireless charging surface capable of powering any Rezence-enabled mobile device. By examining the specifics and understanding how “dead” things can be mediated into the new networks of modern life, one big step of technological progress can be unfolded. Our notion that some things are alive and others are lifeless might soon be fundamentally changed. In the near future, our furniture, household necessities and other devices will become more responsive and have deeper interactions with us.

Works Cited

“About A4WP.” Rezence. Web. 19 Feb. 2014. http://www.rezence.com/alliance/about-a4wp

Bell, Lee. “CES: Intel’s wireless charging bowl replaces your cereal with gadgets”. The Inquirer. Web. 19 Feb. 2014. http://www.theinquirer.net/inquirer/news/2321682/intels-wireless-charging-bowl-replaces-your-cereal-with-gadgets

Lanxon, Nate. “Making Nike’s FuelBand charge wirelessly in Intel’s Smart Bowl took one hour”. Wired.co.uk. Web. 19 Feb. 2014. http://www.wired.co.uk/news/archive/2014-01/08/intel-smart-bowl-mike-lee

Kay, A., & Goldberg, A. (1977). Personal dynamic media. Computer, 10(3), 31-41.

Manovich, L. (2001). The Language of New Media. MIT Press. pp. 11, 15.

 

“Objects” Creation — One Pattern in the Evolution of Programming Languages

Tianyi Cheng

This week, I tried to savor the process of using a programming language and noted the difference between the Python language and my thinking pattern, and between the ways computers and humans solve problems. I want to use the problem of rounding numbers as an example. We do rounding in everyday life; it is one of the simplest kinds of information processing in our brain. However, I found myself stuck when teaching the computer to do rounding in Python. Basically, I think the way I solve this problem follows a pattern like this:

What is the first digit after the “.”? If it is one of “5, 6, 7, 8, 9”, then round up: add 1 to the number before the “.”. If it is one of “0, 1, 2, 3, 4”, then round down: keep only the number before the “.”. Maybe I can create a loop to let the computer compare the digit after the “.” with “0–9”, then let it go through an “if… else…” section to decide whether to round up or down. However, this is not similar to my thinking pattern. I do not use a loop to solve this problem; I can easily combine the look of a string with the number it symbolizes. For me, number and string are just two properties of one thing. For computers, however, the properties of a single thing are processed separately.
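A minimal Python sketch of this digit-comparison pattern (my own illustration, assuming a positive input with a fractional part; the function name is made up for this example) might look like:

```python
def round_by_first_digit(x):
    """Round a positive number by looking at the first digit after the decimal point."""
    whole, _, fraction = str(x).partition(".")   # treat the number as a string of symbols
    first_digit = fraction[0] if fraction else "0"
    if first_digit in "56789":                   # round up: add 1 to the part before "."
        return int(whole) + 1
    return int(whole)                            # round down: keep only the part before "."

print(round_by_first_digit(3.7))  # 4
print(round_by_first_digit(3.2))  # 3
```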


The most direct way the computer takes is different from mine. In that algorithm, the computer adds 0.5 to every input “x” and takes the integer part of the result. I think this process reflects the original intention of rounding: “0.5” marks the halfway point of the “distance” between two integers, and what the computer does is move “x” forward by 0.5 units to test whether it reaches the first integer larger than “x”.
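A sketch of this second approach, again assuming positive inputs:

```python
def round_half_up(x):
    """Move x forward by 0.5 and keep the integer part (positive inputs only)."""
    return int(x + 0.5)

print(round_half_up(3.7))  # 3.7 + 0.5 = 4.2 -> 4
print(round_half_up(3.2))  # 3.2 + 0.5 = 3.7 -> 3
```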

Interestingly, it seems that I simply skip over this series of logical processes and generate the output directly. I noticed that I tend to follow certain rules that directly link two objects and simplify the relationship between them. So the hardest part of using the Python language is not learning the grammar itself, but creating a model that matches the computer’s “thinking pattern”. To do this, I need to move my eyes away from “objects” and unfold the interrelationships among them.

“The Stack” shows the different layers between users and the physical materials of the network. The interface simulates the way humans view the world. However, as the layers go deeper, things are presented in a way that is more distant from natural language.

Before the von Neumann architecture was created, programs were not stored inside the machine; engineers had to rearrange the hardware to run a new program. At this level, the relationships inside the black box were exposed. Even after that, there was still a period when engineers had to shoulder arduous work, dealing with endless “1”s and “0”s that recorded slight changes in the machines. Then programming languages were developed. On a slightly higher level is assembly language, which supplies step-by-step instructions for the processor to carry out (White & Downs, 2007). On the higher end, languages such as C and Java allow programmers to write in something that more closely parallels English (White & Downs, 2007). Complicated relationships are enclosed within various functions, and those functions can be called from other systems without clarifying how they work inside. The grouping of several individual sub-steps into a larger step is an example of abstraction (Conery, 2010). To me, the principle of functional abstraction (Hillis, 2013) leads the computer to imitate human thinking patterns.
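As a small, self-contained illustration of this grouping of sub-steps (my own sketch, not code from the cited texts), the caller of the larger function below never has to know how the rounding inside it works:

```python
def round_half_up(x):
    """Sub-step: shift a positive number forward by 0.5 and truncate."""
    return int(x + 0.5)

def round_all(values):
    """Larger step built from the sub-step; callers need not know what happens inside."""
    return [round_half_up(v) for v in values]

print(round_all([1.2, 2.5, 3.8]))  # [1, 3, 4]
```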

Creating new functions and naming them with English words is just one aspect. At the same time, by symbolizing series of relationships, programmers can create more “objects”. The creation of the Wolfram Language also followed this law of the evolution of programming languages. I think this new language is revolutionary because we no longer need to teach the computer the difference between “string”, “number” or other data types; different features of one symbol are combined together.
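The Wolfram Language itself is outside the scope of a short example, but a rough Python sketch (my own analogy, with illustrative figures, not Wolfram code) can suggest what it means for different features of one symbol to be bundled together:

```python
class City:
    """A toy 'symbol' whose different features live together in one object."""
    def __init__(self, name, population, lat, lon):
        self.name = name                  # behaves as text
        self.population = population      # behaves as a number
        self.location = (lat, lon)        # behaves as a point on a map

    def __str__(self):
        return self.name                  # used wherever text is expected

    def __int__(self):
        return self.population            # used wherever a number is expected

paris = City("Paris", 2_140_000, 48.86, 2.35)
print(f"{paris} has about {int(paris):,} residents at {paris.location}")
```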

The “names” of capital cities can work as text, numbers, locations on a map, diagrams and data storing other information. They can be called in functions without clarifying the data type.

I think our brain has a similar pattern of “closure”. We perceive the world by letting subtle information pass through our eyes, ears, hands and other sensory organs. Special processes written in our brain integrate the information and create a model of the world by mapping objects. Some questions are generated by this reflection: Is the brain’s “information hiding”, which encloses relationships into objects, an evolutionary advantage? Can this function be connected to the definition of intelligence? Or is “ignoring relationships” merely a phenomenon caused by our use of language?

Works Cited

Conery, J. S. (2010). Ubiquity symposium “What is computation?”: Computation is symbol manipulation. Ubiquity, 2010(November), 4.

Hillis, D. (2013). The Pattern on the Stone. Hachette UK. p. ix.

White, R., & Downs, T. (2007). How Computers Work. Que Corp. pp. 95-96.

 

Deblackboxing the Ruler

Tianyi Cheng

A ruler is never regarded as containing high technology. But as a medium, it carries necessary messages that are communicated to construct social order, and it indicates that material technologies and symbolic forms are not separate domains. The ruler bridges the observer and the observed, creates the relationship between interpreting and using, and makes innumerable human creations compatible. The marks on rulers bring humans’ own standpoints into investigations, and by making diagrams with rulers people also carry out the objectification of abstract ideas. But all of these are just activities of communication; rulers also transmit, and that deserves “thick description”.

The two natures of transmission, the technical device and the organic device, can be clearly found in this simple rectangular tool. Rulers’ textures change with the adoption of new materials, and their sizes vary for different purposes. Technology is just one aspect of the process of mediation. From the organic aspect, the symbolic system that was later mediated on the ruler can be traced back to before the birth of the ruler; the object of transmission does not preexist the mechanism of its transmission (Debray, 1999). The earliest known unit of measurement is the cubit, widely used in ancient Egypt, Mesopotamia and India from at least around 2800 BC. A common cubit was the length of the forearm and hand, along with certain other lengths on the human body. These units of measurement assign numbers or other symbols to the material world. The inch, foot and yard evolved from these units through a complicated transformation. Although the process is not yet fully understood, rulers using different measurement systems became cornerstones of technology, economics, and law in different societies (Pedhazur & Schmelkin, 1991).

Mediation takes place as the milieu changes. Conflicts between different measurements grew as the world started to be linked together. In 1668, the English philosopher John Wilkins proposed a universal measure with a decimal-based unit of length. In 1675, the Italian scientist Tito Livio Burattini used the phrase “metro cattolico” to denote a standard unit of length derived from a pendulum. Scientists kept looking for a more precise definition of the meter. In the wake of the French Revolution, the French Academy of Sciences suggested a basic unit of length equal to one ten-millionth of the distance between the North Pole and the Equator, to be called the mètre, and 1 centimeter was defined as 0.01 meter. The definition has been periodically refined to reflect growing scientific knowledge. Since 1983, the meter has been defined as “the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second” (17th General Conference on Weights and Measures, 1983). I believe this is a good example of interactions across systems.
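As a quick check of the arithmetic behind these two definitions (my own illustrative sketch, using the approximate 10,000 km pole-to-equator figure that the original proposal assumed):

```python
from fractions import Fraction

# 1983 definition: light travels one meter in 1/299,792,458 of a second.
c = 299_792_458                         # speed of light in meters per second (exact by definition)
t = Fraction(1, 299_792_458)            # the time interval named in the definition, in seconds
print(c * t)                            # 1 -> exactly one meter

# 1790s proposal: one ten-millionth of the pole-to-equator distance (~10,000 km).
quadrant_in_meters = 10_000 * 1000
print(quadrant_in_meters / 10_000_000)  # 1.0 -> again one meter
```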

Copies of the International Prototype were distributed from France to many nations in the 1880s, which provided an accurate standard for manufacturers. Here, a broad actor-network was immobilized by way of “translation” (Restivo, 2010). The historical event constitutes transmission as “duty and obligation, in a word, culture” (Debray, 2004). The English and the Americans resisted this culture. Although the complicated international situation made the American government reluctant to make the metric system compulsory, the fundamental definitions of length and mass in the U.S. were based on metric units, as stipulated by the Mendenhall Order of 1893. But recognition does not necessarily translate into practical use. Several factors “locked in” the U.S. and slowed its adoption of the metric system: converting technical drawings and operations manuals for complex equipment with many parts can take thousands of man-hours, and cultural and social-psychological reasons also led to the failure of Congress to make the metric system mandatory in all 50 states (Harris, 2011). In the U.S., a ruler is commonly marked in both the metric system and the U.S. Customary System.

Works Cited

Debray, Régis. (2004). Of Tools and Angels. Theory, Culture & Society, 21(3). p. 5.

Debray, Régis. (1999). “What is Mediology?” Le Monde Diplomatique: 32. Trans. Martin Irvine.

Harris, William. “Why isn’t the U.S. on the metric system?” 04 October 2011. HowStuffWorks.com. <http://science.howstuffworks.com/why-us-not-on-metric-system.htm> 04 February 2014.

Pedhazur, Elazar & Schmelkin (1991). Measurement, Design, and Analysis: An Integrated Approach (1st ed.). Hillsdale, NJ: Lawrence Erlbaum Associates. pp. 15–29. ISBN 0-805-81063-3.

Restivo, Sal (2010). “Bruno Latour: The Once and Future Philosopher.” Entry in The New Blackwell Companion to.

“17th General Conference on Weights and Measures (1983), Resolution 1.”

Lenin Coat, Cheong-sam and Semiology

Tianyi Cheng

The Lenin coat was especially popular in China during the 1950s. It is a variant of the open-collared, double-breasted suit. A similar kind of suit was common in Europe and had become a conservative choice during the first half of the 20th century, but it was a new fashion when it first entered China during the Second Sino-Japanese War and the Chinese Civil War. Russians do not call this kind of coat a “Lenin coat”; the term was coined by the Chinese because Lenin wore this kind of coat during the October Revolution. If we merely take the image of the Lenin coat as a signifier, Chinese and Russians share a similar first-order system (Allen, 42). But the second-order semiological system (Allen, 43) is significantly different. In Russia, people did not especially relate this coat to the Bolshevik spirit, but the Chinese elevated the image to show respect for a certain ideology. I guess that in some historical period, some people wanted to emphasize this layer of meaning and used the word “Lenin” to name the coat, which in turn changed the word’s first-order system. The word “Lenin” is usually related to “revolution” and “communism”, and this sign is borrowed to indicate the myth of the coat. Ironically, although what these people did can be viewed as “demystification”, because they pointed out the implication of this clothing pattern, this “demystification” actually strengthened the relationship. A new sign gained this ground and became more popular.

Lenin Coat

Cheong-sam

What is interesting is that the Lenin coat also signified feminism to a certain degree. It gradually became more popular among women than among men. Women felt honored when they dressed in this unisex way, which might mean that they could share half of men’s work. Comparatively, the cheong-sam, which enjoyed popularity during the period of the Republic of China (1912-1949), was regarded as feminine. Also, the cheong-sam became a hit while the Nationalists governed China, so it is also linked to “bourgeois” and “capitalism”. Some further equivalences can be found; clothes knot different ideologies together: “capitalist ≈ bourgeois ≈ nationalist ≈ feminine ≈ weak ≈ conservative”, and similarly, “socialist ≈ proletariat ≈ communist ≈ feminist ≈ strong ≈ revolutionized”. Clothes are instilled with universal and natural values. The grave of the cheong-sam was prepared long before the coming of the Cultural Revolution (1966-1976), when these dresses were widely burnt and disappeared from the streets. Through the process of meaning making, people mediate the present to the past and to the future (Irvine, 6). The grave of the cheong-sam was dug in the soil of the sign system.

I think Barthes’ idea that fashion is tyrannical makes sense. However, I disagree that the fashion system is based on the decisions of a very small number of editors and consultants (Allen, 51). Larger social and cultural factors have a strong impact on fashion, and the way people relate concepts and clothes keeps reproducing fashions. Also, the myth of the fashion system serves not only to speed up consumption but also to create broader social phenomena.

Works Cited

Allen, G. (2003). Roland Barthes. Psychology Press.

Irvine, Martin (2012). “The Grammar of Meaning Making: The Human Symbolic Faculty, Semiosis, and Cognitive Semiotics” Web.