Restored Reality: Google Contact Lens

Google Contact Lens is a smart contact lens project announced by Google on 16 January 2014. The project aims to assist people with diabetes by continuously measuring the glucose levels in their tears. However, the original idea when Google announced the project was to design an augmented reality device for healthy people, and even for the blind.


By adopting a modular design philosophy, many pre-existing technologies could be embedded in the lens, such as a camera, sensors, and a screen. An embedded camera could capture moments the user chooses, and the footage could be saved for later analysis. The camera would also follow the user’s gaze and zoom in and out, revealing detail that natural human vision cannot resolve. Sensors could likewise be embedded in the lens, enabling features such as facial recognition and visual search. In addition, the camera and sensors could offer thermal imaging and night vision, letting users greatly extend their vision in extreme conditions. The sensor could be a light sensor, pressure sensor, temperature sensor, or electrical-field sensor, which might give people a “sixth sense” of sorts. With a miniature screen embedded in the lens, users could visualize important information, such as a target they are looking for or an emergency; for example, the lens could highlight an approaching car to prevent an accident. Equipped police officers could track their targets without wearing heavy gear. The camera would sit below the user’s pupil so as not to obstruct their view, and the control circuit could be linked to the camera and sensors wirelessly or by wire, allowing users to control the device by gazing, blinking, or finger gestures.
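To make the modular idea concrete, here is a minimal sketch of how firmware for such a lens might treat each capability as a self-contained, swappable module. Everything in it is hypothetical (the module names, the readings, the `Lens` class itself); nothing about Google’s actual hardware or software is public.

```python
from abc import ABC, abstractmethod


class SensorModule(ABC):
    """One self-contained capability that can be plugged into the lens."""

    @abstractmethod
    def read(self) -> dict:
        """Return this module's current measurement or observation."""


class GlucoseSensor(SensorModule):
    def read(self) -> dict:
        # Placeholder value; a real sensor would sample tear fluid.
        return {"glucose_mg_dl": 92}


class NightVisionCamera(SensorModule):
    def read(self) -> dict:
        # Placeholder descriptor; a real camera would return imagery.
        return {"night_vision_frame": "low-light frame captured"}


class Lens:
    """The lens knows nothing about any specific sensor.

    It only aggregates whichever modules are installed, so adding a
    capability means installing another module, not redesigning the system.
    """

    def __init__(self) -> None:
        self.modules: list[SensorModule] = []

    def install(self, module: SensorModule) -> None:
        self.modules.append(module)

    def snapshot(self) -> dict:
        reading: dict = {}
        for module in self.modules:
            reading.update(module.read())
        return reading


lens = Lens()
lens.install(GlucoseSensor())
lens.install(NightVisionCamera())
print(lens.snapshot())  # {'glucose_mg_dl': 92, 'night_vision_frame': ...}
```

The point of the sketch is precisely the modular one: each feature is a smaller, self-contained system, so the glucose prototype and the augmented reality vision can share one platform.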

This idea is a perfect example of a transparent interface: one that erases itself, so that the user is no longer aware of confronting a medium but instead stands in an immediate relationship to its contents. The device would combine three essential features of new media: immediacy, hypermediacy, and remediation. Even though the prototype was aimed only at assisting people with diabetes by constantly measuring glucose levels, once the hardware exists, the software can be updated quickly. On the nature of new technology, Manovich highlighted that, “None of the new media authoring and editing technologies we associate with computers are simply a result of media ‘being digital.’ The new ways of media access, distribution, analysis, generation, and manipulation all come from software.” Manovich’s idea is close to Bolter and Grusin’s account of new media: immediacy, hypermediacy, and remediation did not begin with the introduction of digital media; we can identify the same process throughout the last several hundred years of Western visual representation.

All of this reminds me of the modular design philosophy. According to Baldwin and Clark (2006), modularity means breaking a whole system into smaller, self-contained subsystems. Arthur (2009) pointed out that “supporting any novel device or method is a pyramid of causality that leads to it”: different workers borrow prototypical designs from one another. As users interact with the device further, they generate more and more “discrete pieces.” New media clearly “remediate” and “hypermediate” pre-existing media, and the embedded features follow a lineage that can be traced back to the Renaissance. My question, however, is this: as technology develops and some disabled people can restore or even expand their abilities, should that count as innovation driven by the development of hardware?

References:

Brian Arthur. The Nature of Technology: What It Is and How It Evolves. New York, NY: Free Press, 2009.

Jay David Bolter and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: The MIT Press, 2000.