Category Archives: Week 11

SixthSense, Hiding the Blackbox

Tianyi Cheng

It is too early to say that we already have everything we need at our fingertips. There is always a time lag between the creation of commands in our minds and the execution of those commands. This time lag, together with the physical input and output components of computers, makes us clearly aware that a blackbox exists in front of us. But execution is becoming more instantaneous, and input and output interfaces are going to be combined. Pranav Mistry at MIT’s Media Lab has developed “SixthSense,” a technology that lets you speak your mind immediately with your fingers. By making commands through gestures, people will not even notice that there is a blackbox between themselves and the output of information processing.

At this stage, the SixthSense prototype comprises a pocket projector, a mirror, and a camera, coupled in a pendant-like mobile wearable device. Both the projector and the camera are connected to a mobile computing device in the user’s pocket. The projector projects visual information onto surfaces, enabling them to serve as interfaces, while the camera recognizes and tracks the user’s hand gestures and physical objects using computer-vision based techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers at the tips of the user’s fingers using simple computer-vision techniques (SixthSense).
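The fingertip tracking described above can be approximated with off-the-shelf computer-vision tools. The sketch below is not the actual SixthSense code (which is available at the project repository); it is a minimal illustration of colored-marker tracking, assuming OpenCV 4 in Python, a webcam at index 0, and a red marker cap. The HSV threshold values are illustrative and would need calibration for real lighting conditions.

```python
import cv2
import numpy as np

# Illustrative HSV range for a red fingertip marker (assumption; real values
# depend on lighting and the marker colour and would need calibration).
LOWER_RED = np.array([0, 120, 70])
UPPER_RED = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)  # assumed camera index

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Threshold the frame in HSV space to isolate the coloured marker.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)

    # Take the largest blob as the fingertip and report its centroid.
    # (OpenCV 4 returns two values from findContours.)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        M = cv2.moments(largest)
        if M["m00"] > 0:
            cx = int(M["m10"] / M["m00"])
            cy = int(M["m01"] / M["m00"])
            cv2.circle(frame, (cx, cy), 8, (0, 255, 0), 2)
            # A gesture recogniser would consume (cx, cy) over time here.

    cv2.imshow("marker tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Feeding the stream of marker coordinates into a gesture classifier, and pairing it with a projector for output, is the basic loop the prototype builds on.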


This technology tries to move all programs and file storage to the cloud, which makes powerful devices wearable. The very idea of the “screen” might change soon: SixthSense can turn any surface into a “screen” and merge input and output interfaces. One can watch a video on a newspaper’s front page, navigate a map on the dining table, or take a photograph simply by making a framing gesture, as in our childhood fantasies. It aims to remove distraction and to allow users to focus on the task at hand rather than the tool (Murray, 61). The metamedium is created by seemingly adding one layer onto the ordinary world and linking everything together. But for Manovich, SixthSense might not be revolutionarily different from the Turing machine; it is still on the way toward general-purpose simulation, able to handle “virtually all of its owner’s information-related needs” (Manovich, 68).

Another exciting thing is that people can change the components of the SixthSense device and create their own apps for it. Instructions for building a personal device can be found at https://code.google.com/p/sixthsense/. To a certain degree, SixthSense will never be a finished product; it will always remain in design and always leave room for not-yet-invented media. I think it gives us a new way to see “deep remixability”: the remix can go even deeper than what we currently do with new media. We now choose from various established algorithms to generate remixed works, but we might soon be able to remix the algorithms as well. Users could process information with algorithms they create themselves, which better simulates the way we deal with everyday problems (we do not borrow established algorithms to run our lives).

Andy Clark views language itself as a form of mind-transforming cognitive scaffolding (Clark, 44). Other interesting considerations follow: although SixthSense supports higher cognitive functions, it also changes our language at a very basic level. Will we further develop the gestures that we have overlooked since developing spoken language? How will spoken language and gestures cooperate in the HCI of the future? If the computer, viewed as a medium rather than a tool, does not teach us a new language, how does it influence our language system?

 

Works Cited

Clark, Andy. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. New York: Oxford University Press, 2008.

Manovich, Lev. Software Takes Command. New York: Bloomsbury Academic, 2013.

Murray, Janet. “Affordances of the Digital Medium.” Inventing the Medium. Cambridge: MIT Press, 2012.

“SixthSense – a Wearable Gestural Interface (MIT Media Lab).” SixthSense – a Wearable Gestural Interface (MIT Media Lab). N.p., n.d. Web. 02 Apr. 2014.

Invisible and Unmediated Meta-Interfaces: Simulation and Implementation of the Physical World

Estefanía Tocado

As Lev Manovich affirms, the permanent extendibility of the computer metamedium has generated a software epistemology and a computerized society (337).  In a society where everyone owns a smartphone, a laptop, or a tablet, the use of these gadgets is fully integrated into our everyday life, any time and anywhere.  Therefore, as Janet Murray states, participation in digital media increasingly means social participation (56).  That can be clearly seen in the development of social spaces such as Facebook, as well as in other spaces such as digital newspapers or online shopping.  The need for a digital artifact that grants us access to that virtual world also lets us participate interactively in our society.  It seems to me that one of the reasons Apple products have been so widely popular, besides their great marketing campaign and polished design, is that they make the interface and its manipulation appear intuitive, unmediated, invisible, and transparent.  Janet Murray affirms that the computer is a participatory medium where users expect to manipulate digital artifacts and make things happen in response to their actions (55).

What makes Apple products so attractive is that this apparent conceptual inertia is achieved through the properties of the media software and their interfaces, the very tools that make it possible for the user to access, navigate, create, and modify media documents (Manovich 335).  This accessibility and impression of a transparent medium rest on the fact that the computer metamedium contains two different types of media: simulations of previous physical media extended with new properties (iBooks) and new computational media (3D).  Apple laptops, iPads, and iPhones rely heavily on simulating previously culturally integrated systems, such as a book or a calendar, and extending them with additional media-specific software that allows the user to perform more actions in the virtual medium than he would be able to do in the “real world.”  Another interesting feature is that the user can share calendar data externally with other gadgets through iCloud, or internally with other programs.  Murray affirms that computational artifacts do not exist as fixed entities but as changing and altered sets of bits governed by conditional rules (53).  These sets of rules perform numerous combinatorial processes, but the user, most of the time, still has the feeling that the medium has disappeared.  Therefore, as Bolter and Grusin point out, computer designers seek to provide a virtual experience that is “interfaceless,” where no recognizable tools appear, so the user moves through the virtual space as naturally as he would in the physical world (23).  I would suggest that Apple has taken this approach to its highest degree, moving away from an engineering- and function-driven interface toward a natural and logically deduced connection between the human being and the digital artifact, generating the illusion of immediacy and transparency.

Works Cited

Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT Press, 2000.

Manovich, Lev. Software Takes Command. New York: Bloomsbury, 2013.

Murray, Janet. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.

The Rise of the Meta-Media

As each day goes by, the boundaries between user and media are disappearing.  We are part of the rise of a new kind of media.  The “meta-media” have started popping up in our lives and becoming part of our daily routine.  We now use voice commands, hand gestures, or tactile interfaces to do things that used to be as simple as changing the channel with a remote.  As stated in the text “Supersizing the Mind”: “My iPhone is not my tool, or at least it is not wholly my tool. Parts of it have become parts of me” (Clark, Foreword, x).


Video: Samsung Smart TV Commercial

Today’s society wants and needs its devices to do most of the work for it.  We need to have the weather, the stock market, communication tools, videos, and messages.  We are somehow conceding some of our brain power and leaving it all to the machines.  “Automation makes digitalisation possible, but if it immeasurably increases the power of the mind (as rationalisation), it can also destroy the mind’s knowledge (as rationality)” (Stiegler, 12).

 

It is as if we wanted the boundaries between media and user to disappear.  We believe that these devices are “revolutionary” because they completely eliminate the “middle man.”  Who needs a remote control to change the channel?  What is the best way to play video games?  Samsung’s Smart TV and Xbox’s Kinect are clear examples of how computing and software may change our lives forever, or at least alter the way we perceive them.


Xbox’s Kinect Announcement

This computer-user revolution would not be possible without new advancements in computing.  We are constantly seeking ways to improve our quality of life: “We do not just self-engineer better worlds to think in. We self-engineer ourselves to think and perform better in the worlds we find ourselves in” (Clark, 59).

 


LG’s refrigerator with an Internet connection.

It is true that computers and improvements in computing technology have helped us greatly; however, things have to be functional in order to achieve success with consumers.  As Janet Murray states in “Inventing the Medium”: “In approaching interaction design as a cultural practice our aim is always to make an object that is satisfying in itself and that advances the digital medium by refining or creating the conventions that best exploit these four affordances: encyclopedic, spatial, procedural and participatory” (Murray, 53).

What she points out is that things need to be functional, appealing, and easy to use for the end user.  Technologies may be powerful and life-changing, but they need to fulfill certain parameters so the end user can integrate them into his or her life.  Without this logic, the technology might remain a mysterious black box that no one knows how to use.

The real question here is: how much are we sacrificing for an easier life?  What is the real price of all this comfort if we are starting to lead an automated life where everything is done for us?  It is a fact that we constantly create external scaffolding to simplify our cognitive tasks (Hollan, 192).  But what if that scaffolding fails, or is not as safe as we thought?  Are we ready, as individuals and as a society, to live in a world without the constant use of technology?

Works Cited

Clark, Andy. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. New York: Oxford University Press, 2008.

Hollan, James, Edwin Hutchins, and David Kirsh. “Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research.” ACM Transactions on Computer-Human Interaction 7.2 (2000): 174-196.

Murray, Janet. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.