It is too early to say that we already have everything we need at our fingertips. There is always a time lag between the formation of commands in our minds and their execution. This lag, together with the physical input and output components of computers, makes us acutely aware that a black box stands between us and the machine. Execution is becoming more instantaneous, however, and input and output interfaces are merging. Pranav Mistry at MIT's Media Lab has developed "SixthSense," a technology that lets you speak your mind immediately with your fingers. Issuing commands by gesture, people will hardly notice that a black box sits between them and the output of information processing.
At this stage, the SixthSense prototype comprises a pocket projector, a mirror, and a camera, coupled in a pendant-like wearable device. Both the projector and the camera are connected to a mobile computing device in the user's pocket. The projector projects visual information onto surfaces, turning them into interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of colored markers on the tips of the user's fingers using simple computer-vision techniques (SixthSense).
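The marker-tracking step described above can be sketched as a toy color-threshold tracker. Everything here is an illustrative assumption rather than the actual SixthSense implementation: the marker color, the tolerance value, and the frame represented as a grid of RGB tuples are all hypothetical stand-ins for a real video pipeline.

```python
# Toy sketch of colored-marker fingertip tracking: threshold a frame
# for pixels near an assumed marker color, then report their centroid
# as the fingertip position. Not the actual SixthSense code.

TARGET = (255, 0, 0)   # hypothetical red marker cap, in RGB
TOLERANCE = 60         # per-channel color tolerance

def is_marker(pixel):
    """A pixel belongs to the marker if every channel is near the target."""
    return all(abs(p - t) <= TOLERANCE for p, t in zip(pixel, TARGET))

def track_marker(frame):
    """Return the (row, col) centroid of marker-colored pixels, or None.

    `frame` is a 2-D grid of RGB tuples, standing in for one video frame.
    """
    hits = [(r, c)
            for r, row in enumerate(frame)
            for c, pixel in enumerate(row)
            if is_marker(pixel)]
    if not hits:
        return None
    row_mean = sum(r for r, _ in hits) / len(hits)
    col_mean = sum(c for _, c in hits) / len(hits)
    return (row_mean, col_mean)

# A 3x3 "frame" with a single reddish marker pixel in the center.
frame = [[(0, 0, 0)] * 3 for _ in range(3)]
frame[1][1] = (250, 10, 5)
print(track_marker(frame))  # centroid of the one marker pixel: (1.0, 1.0)
```

A real system would run this per frame on camera input and map centroid trajectories to gestures; robust trackers typically threshold in HSV space rather than raw RGB, but the principle is the same.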
This technology aims to move programs and file storage to the cloud, making powerful devices wearable. The very idea of a "screen" may soon change: SixthSense can turn any surface into a screen and merge input and output interfaces. One can watch a video on a newspaper's front page, navigate a map on the dining table, or take a photograph simply by making a framing gesture, as in our childhood fantasies. It aims to remove distraction, allowing users to focus on the task at hand rather than the tool (Murray, 61). The metamedium is created by seemingly adding a layer onto the ordinary world and linking everything together. For Manovich, though, SixthSense might not be revolutionarily different from the Turing machine; it is still on the path toward general-purpose simulation, handling "virtually all of its owner's information-related needs" (Manovich, 68).
Another exciting aspect is that people can change the components of the SixthSense device and create their own apps for it. Instructions for building a personal device can be found at https://code.google.com/p/sixthsense/. To a certain degree, SixthSense will never be a finished product; it will always remain in design and leave room for not-yet-invented media. I think it offers a new way to see "deep remixability": the remix can go even deeper than what we currently do with new media. We now choose among established algorithms to generate remixed works, but we might soon be able to remix the algorithms themselves. Users could process information with algorithms of their own making, which would better mirror the way we handle everyday problems (we do not run our lives on borrowed, ready-made algorithms).
Andy Clark views language itself as a form of mind-transforming cognitive scaffolding (Clark, 44). Other interesting considerations follow: although SixthSense supports higher cognitive functions, it also changes our language at a very basic level. Will we further develop the gestures we have overlooked since developing spoken language? How will spoken language and gesture cooperate in future HCI? If the computer, viewed as a medium rather than a tool, does not teach us a new language, how does it influence our language system?
Clark, Andy. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. New York: Oxford University Press, 2008.
Manovich, Lev. Software Takes Command. New York: Bloomsbury Academic, 2013.
Murray, Janet. “Affordances of the Digital Medium.” Inventing the Medium. Cambridge: MIT Press, 2012.
“SixthSense – a Wearable Gestural Interface (MIT Media Lab).” SixthSense – a Wearable Gestural Interface (MIT Media Lab). N.p., n.d. Web. 02 Apr. 2014.