Author Archives: Yifan Wu

Interacting with Outdoor Advertisements

“I feel drawn to experiment with ways that technology can interact with notions of intimacy, because so much of technology is done in a way that’s very cold and has such an opposite effect.” Jaron Lanier


This paper is a multi-case study. Its main argument is that when digital interfaces are applied in the field of outdoor interactive advertisements (shortened to OIA in this paper), the usual principles of interaction design for new-media interfaces do not always fit. As metamedia, these ads use new technologies and strategies to combine pictures, videos, and even holograms to create novelty, hijack attention, and enhance the user experience. Although called interactive by name, most OIA are designed as closed systems. They are not necessarily intuitive and transparent the way other interactive machines are. They are often designed with an option to share. They narrow the audience's options and use the three elements of advertising to direct the process. They sometimes offer treats as feedback. And they follow the rules of semiotic and cultural practice.

This paper briefly introduces the basic concepts of advertising design and some of the technologies used in OIA, then uses multiple cases to illustrate the interface design values of ordinary interactive machines and of OIA.

OIA take many forms. In this paper I concentrate only on the kind that contains digital interfaces and interacts with humans rather than with the environment. Unless specifically pointed out, "OIA" in this paper refers to ads of this kind.

2. Introduction

In the movie Minority Report (2002), OIA have evolved into personalized, detailed ads built on holograms and cloud-shared information. The billboard can call you by name at first sight, pull up your shopping history, and recommend new items based on your preferences. These personal ads may seem far ahead of us, but Peter Schwartz, chairman of the Global Business Network, thinks differently: "This is not too distant future. This is imminent." It is never too bold to imagine the future, and I personally agree that the advertising technology and interaction depicted in the movie will soon be realized.

Nowadays, as more and more non-traditional forms of outdoor advertisement are transformed into new media, the interactivity of ads has also attracted a lot of attention. Will it work? How will it work? These questions have haunted advertisers from day one. As the technologies have matured, many ideas for interactive outdoor advertisements have been realized. Research shows that under high-involvement conditions, experienced users hold more positive attitudes during interaction than inexperienced users. This suggests that in OIA, although creating a wholly novel experience to hijack attention and promote sharing is vital, retrospection and remediation also matter. As outdoor advertising moves from traditional media to metamedia based on time and space, how to design the interfaces of OIA becomes a valuable question to ask.

3. New Media Remediated in OIA

The application of new media in the field of OIA is a worldwide trend, yet not the only choice. It should be granted that traditional media without digital applications can also work their magic perfectly when done right, as many successful cases in the market show. In this paper, I focus on the kind of OIA perceived as time-based metamedia containing digital interfaces. As metamedia, these ads use new technologies to combine pictures, videos, and even holograms to create novelty, hijack attention, and enhance the user experience. Although quite different from traditional ads, OIA media remediate the old in many ways and will continue doing so in the future. The technologies and applications listed below are the practical foundation of the interface design discussed afterward.

3.1 Technology support

Realizing even the simplest OIA requires some sort of algorithm and, beyond that, digital interaction design. With new media used in outdoor advertising, OIA create novel and entertaining interactions thanks to the following technologies. These technologies differentiate the new media from the old static ones, letting audiences feel something new and fresh while staying not too far from the experience of traditional media.

3.1.1 Eye Tracking

Eye tracking is a methodology that allows researchers to detect where a user is looking and, further, to map the path and hot spots of the user's eye movements. We can also measure how long the user looks at something. When the concern is identifying elements in a visual scene, such as an interactive application, the "point of regard" (POR) technique (Young and Sheena, 1975) is used to measure the orientation of the eye in space. These technologies are widely used as research methods in many fields, including psychology and marketing, and they can also be used in OIA. The most common use is data gathering: by collecting and analyzing data on audience attention, advertisers receive feedback to improve the creativity and interactivity of their advertising. With eye tracking becoming more and more mature, there is no doubt that a new situation has been created for OIA, and more accurate delivery of outdoor ads may well be realized.
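To make the data-gathering use concrete, here is a minimal Python sketch, not taken from any real tracker SDK, of how raw gaze samples might be aggregated into a hot-spot map and a dwell time; the grid size, sampling rate, and coordinates are all invented for illustration.

```python
from collections import Counter

def hotspot_map(gaze_samples, cell=100):
    """Bucket raw (x, y) gaze samples into a coarse grid; the count
    per cell approximates how much attention that region received."""
    counts = Counter()
    for x, y in gaze_samples:
        counts[(x // cell, y // cell)] += 1
    return counts

def dwell_seconds(gaze_samples, region, cell=100, hz=60):
    """Time spent inside one grid cell, given the tracker's sampling rate."""
    return hotspot_map(gaze_samples, cell)[region] / hz

# 120 samples on the headline area, 60 on the product shot, at 60 Hz:
samples = [(50, 50)] * 120 + [(250, 250)] * 60
print(dwell_seconds(samples, (0, 0)))  # 2.0 seconds on the headline cell
```

Feeding such aggregates back to advertisers is the "data gathering" use described above: the hot spots show which design elements actually captured attention.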

3.1.2 Face and Gender Recognition

Face recognition comprises face detection and expression detection. In OIA, these technologies have high value for the interactions they can realize. At minimum, an ad can change based on the number of faces it detects, as in London's Women's Aid poster shown below: with each additional face recognized, the bruise on the model's face fades piece by piece, calling attention to domestic violence against women.

The use of face and expression detection can also drive digital image simulation and thereby satisfy the user's need for self-identification, creating an OIA with high entertainment value. Moreover, a camera with software can now detect whether a passer-by is male or female; this gender recognition can be used in OIA to adjust the ad on the digital screen accordingly. Similar technologies include age, race, and possibly identity recognition. As the technology grows closer to that of Minority Report, concerns about privacy protection will surely arise, and they will influence people's willingness to interact with OIA.
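The face-count behaviour of the Women's Aid poster can be sketched as a simple mapping from the detector's output to the bruise layer's opacity. The function name and the ten-face threshold below are my own illustration, not the campaign's actual parameters:

```python
def bruise_opacity(faces_detected, faces_to_heal=10):
    """Opacity of the bruise overlay as a function of how many faces
    the camera currently detects: fully visible with no one looking,
    fading toward zero as more people look. The threshold of ten
    faces is an invented parameter."""
    return max(0.0, 1.0 - faces_detected / faces_to_heal)

for n in (0, 5, 10):
    print(n, bruise_opacity(n))  # 1.0, then 0.5, then 0.0
```

In a deployed system a face detector would supply `faces_detected` each frame; the point of the sketch is only that the visible ad state is a direct function of the detection count.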

3.1.3 Skeleton Tracking

Like face recognition, skeleton tracking can extract information about the audience by analyzing 3D images. It calculates the position and movement of human bones and can therefore simulate that movement on a digital screen.

3.1.4 Near Field Communication and Technologies to Link Devices

Near field communication (NFC) is a high-frequency radio technology used between digital devices for contactless, point-to-point data exchange. First developed by Sony and Philips, it is now commonly installed in mobile devices such as smartphones. Similar technologies like Bluetooth are also widely applied in OIA. Normally, one of these technologies lets audiences connect their smartphones to the OIA and receive "information" in return. In the example below, by holding the phone to the cued spot, the device connects over NFC and receives a song from the playlist.
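A tap of this kind can be modelled as a one-payload, point-to-point exchange. The sketch below is illustrative only; the class, the playlist contents, and the payload format are invented, not a real NFC stack:

```python
class NfcSpot:
    """A cued tap point on the billboard. Each tap is a point-to-point
    exchange: the spot pushes one payload (here, a track URI) to the
    phone that touched it, cycling through the campaign playlist."""
    def __init__(self, playlist):
        self.playlist = list(playlist)
        self.taps = 0

    def tap(self):
        track = self.playlist[self.taps % len(self.playlist)]
        self.taps += 1
        return {"type": "audio/track", "uri": track}

spot = NfcSpot(["track-one", "track-two"])
print(spot.tap()["uri"])  # track-one
print(spot.tap()["uri"])  # track-two
```

The design point is that the billboard side needs no session or pairing state beyond the tap itself, which is what makes NFC suit this kind of casual street interaction.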

3.2 The application of ad design elements

In advertising, design is mainly constructed from three basic elements: image, text, and color composition. Coded into the application, these three elements become the core of the visual design, conveying information deeper and further than themselves; in that sense they can also be perceived as a semiotic system within visual design. Unlike traditional ads, OIA use media that can exert these elements dynamically while still remediating print media.

3.2.1 From Static to Dynamic

Transitioning from traditional media such as magazines, the internet, and bus stop billboards, OIA use new media that can apply dynamic footage, which not only has widespread commercial value and prospects but also opens a space for advertisers' creativity. Thanks to the content of new media, the three elements in OIA take on abundant transformations and compositional effects. The speed at which images and text change, their movement paths, and dynamic color can all be used to attract attention (and further, to lead that attention to a specific cued place).

Take the London Women's Aid ad mentioned above: the OIA uses seemingly static images as part of the ad, but once it detects faces in front of it, the image starts changing by slowly fading the bruise. Another part of the ad is the huge text, which stays static and so makes the change on the model's face all the more evident. Meanwhile, the ad chooses a dark, solid color as the background, making the single change more prominent. Combined, these three elements give OIA a more novel way to attract and direct attention.

Audiences can also be led to shift their attention from place to place thanks to the dynamic elements, a feature that traditional media can hardly realize.

3.2.2 From Presenting to Interacting

Traditional outdoor advertisements are made to present, showing unchanging information to all audiences, whereas OIA use new media to convey different information to different audiences through interaction. The design elements of the advertising change in speed, scale, position, and color according to external input, which makes OIA far more advanced and novel than video billboards that merely present time-based footage.

On the one hand, the dynamic elements are mostly activated by external input, that is, human interaction. In the same Women's Aid ad, the image on the billboard starts to change only when the faces of passers-by are detected. In other words, the remediation of the video billboard adds a switch to the new medium, allowing the OIA's design elements to change in response to stimulation from the external world. One may call this switch "interaction." On the other hand, through interaction, OIA let the audience decide which composition of elements they are shown. In OIA, the audience has the choice not to interact, while in old media and traditional outdoor ads the advertising is always ongoing. In other words, the switch is in the audience's hands, not the advertisers'. From the audience's perspective, the ads are not just static background: they listen and seemingly change "at their will." This is a feeling traditional media cannot elicit.

3.3 The application of the five senses

Advanced technologies give advertisers more methods for achieving creativity, sometimes including immersive experiences across the five senses: vision, audition, olfaction, touch, and gustation. Traditional media can also engage the five senses; vision and audition have been used since day one of advertising. Even smell, touch, and taste can be realized with traditional media, as in the example below, where PlayStation 2 stuck bubble wrap to a bus board, letting audiences burst bubbles while waiting for the bus.

New media allow OIA to apply more complex interactions involving the five senses, but the underlying purpose is generally the same: to let audiences not only see and hear the ad but touch, smell, and taste it, as in real life. Unlike traditional media that involve smell, touch, and taste, new media let OIA appear personally designed, activated only when external input arrives. Without that activation, the ad behaves just like a traditional one. In the example below, the smell of barbecue comes out only when the audience taps their card as cued.

To conclude, the new media of OIA remediate traditional media in the three ways shown above: technologies, design elements, and the five senses. Their specific applications are elaborated in the following chapter.

4. Interface Design in OIA

4.1 Interactive Machine Interface Design

4.1.1 From and Based on the Turing Machine

The Turing machine, the abstract model behind the first electronic digital computers, is a closed, non-interactive system that can only carry out sophisticated computations: once the algorithm starts running, the machine stops receiving external input and keeps operating until the computation halts. Interactive machines, by contrast, can be engaged by third parties during operation; they can learn from the outside world and adapt to experience. The results an interactive machine yields depend on unpredictable external actions, which endows the machine with a smartness the algorithm inside a Turing machine lacks. According to Beaudouin-Lafon, human-computer systems were the first truly interactive systems. The Sketchpad designed by Ivan Sutherland used a light pen to input commands and engaged a cathode ray tube (CRT) display; this pioneering technology can still be seen today, for example in the iPad. To this day, interaction remains the core feature of graphical interface design.

Also, building on the Turing machine, scholars and scientists have proposed new ideas about the interactivity that could be applied to machines. Van Leeuwen and Wiedermann offer a new formulation of interactive computing: the interactive Turing machine (ITM), as they call it, captures the essence of computing while evolving into an interactive system, and this new model of computation is "in theory" more powerful than classical systems. Goldin et al. extend the Turing machine with dynamic stream semantics, calling the result the persistent Turing machine (PTM). A PTM carries out a normal computation each time it reads an input from its input tape and ends that computation once it produces an output on its output tape; an additional worktape retains its content from one computation to the next. Goldin's PTM model falls into the general class of interactive transition systems, although in my view it is more expressive than interactive. With this sequential feature, a PTM can yield results based on a computation that remains an ongoing process.
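The persistence that distinguishes a PTM from a classical Turing machine can be sketched as a loop in which a worktape value survives between macro-steps. This is a toy model of the idea, not Goldin et al.'s formal construction:

```python
def ptm_run(step, inputs, worktape=None):
    """Drive a persistent-Turing-machine-style loop: each macro-step
    maps (input, worktape) -> (output, new worktape), and the worktape
    content survives from one computation to the next."""
    outputs = []
    for token in inputs:
        out, worktape = step(token, worktape)
        outputs.append(out)
    return outputs, worktape

# A toy step function: answer each query with a running count of
# queries seen so far, so each output depends on the whole interaction
# history, not just the current input -- behaviour a classical,
# single-shot Turing machine computation cannot exhibit.
def counting_step(token, tape):
    tape = (tape or 0) + 1
    return f"{token}:{tape}", tape

outs, tape = ptm_run(counting_step, ["a", "b", "a"])
print(outs)  # ['a:1', 'b:2', 'a:3']
```

The same input token "a" yields different outputs at different points in the stream, which is exactly the history-dependence the PTM's worktape provides.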

4.1.2 Principles of Interactive Design

Janet Murray provides valuable principles for interaction design. In her view, since human users in human-computer interaction are unreliable, the designer should stay ahead of them; in other words, the principles of interactive interface design treat users as toddlers.

First, the interface shall be intuitive. The patterns of operation and the semiotic meanings of icons and indicator signals should draw on our conscious expectations about digital behavior and on our experience and subconscious knowledge of the world. Murray gives the example of the trashcan icon: if a user sees an icon that looks like a real-life trashcan, it is logical and certain to associate it with the function of "deleting." It is vital to provoke an "intuitive" response that serves a function similar to the one in the real world. Also, given the complexity of semiotic reference, one icon can logically refer to several meanings according to the conventions that govern our engagement with the world and with media; designers should therefore be able to distinguish among the many possible conventions when designing interactive machine interfaces.

Second, the interface shall be transparent. The most evident reading of transparency is "providing immediate feedback," which in Murray's opinion is a better design value than intuitiveness. Immediacy makes the interactive machine more similar to the electronic devices we are familiar with. Just as a switch turns on a light immediately, the mouse pointer, light pen, or touch device should let users see changes immediately after each operation. This helps users who encounter a new interface learn it faster through familiar interaction patterns.

As she also notes, "a robust digital media process should pay attention to the values of all the relevant design disciplines and media traditions." Interface design being a cultural practice, designers should always bear in mind to consciously and deliberately exploit users' conventions and pre-existing knowledge.

In conclusion, intuitiveness and transparency are the two main features of interactive machines, and realizing both requires insight into cultural tradition.

4.2 Closed Systems in OIA

Technically, the media of OIA can hardly be called "interactive systems." As illustrated above, most OIA are not open systems; instead, they are modeled by algorithms and directly yield the coded outputs once activated by inputs. No matter how interactive they may look from the outside, deep down the interaction is determined by a closed, non-interactive system that shuts out the external world once it receives its inputs and can neither learn from the outside nor adapt to experience. They are still called interactive advertisements, however, as long as they respond to the audience's operations. So although technically misnamed, the OIA's interactive system and its interface create novelty in the crowd and bring interaction to the audience, and their design values and principles are quite different from those of the interactive media presented above.
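The closed character of such systems can be made explicit in a few lines: a table fixed at design time maps every anticipated input to a coded output, and nothing the audience does ever changes that table. Event and response names below are invented for illustration:

```python
# The anticipated triggers and their coded responses, fixed at design time:
RESPONSES = {
    "face_detected": "fade_bruise",
    "button_pressed": "drop_angel",
}

def closed_oia(event):
    """One step of a closed-system OIA: output depends only on the
    coded table, never on interaction history, and unanticipated
    events are simply ignored."""
    return RESPONSES.get(event, "no_reaction")

print(closed_oia("face_detected"))  # fade_bruise
print(closed_oia("someone_waves"))  # no_reaction: outside the script
```

Contrast this with an open system, where the response function itself would be updated by what arrives from the environment.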

4.2.1 Not Necessarily Intuitive and Transparent

After hijacking attention, the OIA has to keep the audience interested until they make the right interaction. Under these circumstances, intuitiveness and transparency are vital. Furthermore, by applying the three design elements and the five senses to OIA, immediate feedback can be designed in limitless variations.

On the one hand, it is important that the interface design follow the audience's conventions and experience. They should not only be able to know what to do but know it immediately, by virtue of pre-existing experience. Take the bus stop OIA made for Guardians of the Galaxy, which remediates the old radio: audiences know right away that the jack is for earphones, and from early experience they expect to hear music after plugging in.

On the other hand, intuitiveness and transparency are sometimes deliberately avoided in OIA. As an advertising strategy, ads often create suspense or appeal to shock to make a strong impression and thereby improve brand recall, and OIA do the same. So sometimes the ultimate result of the audience's engagement is delayed with visual effects to build suspense and reinforce artistic appeal; sometimes the feedback does not meet the audience's expectations at all, creating a moment of surprise or shock. In the Nestlé Contrex OIA in Paris, for example, the audience at first knows only that they are meant to ride the bicycles. Once they pedal, a beam is emitted from the bike, but no one knows what happens next until the lights on the wall converge and transform into a show. Although the beam appears right after the audience's engagement, it simulates an electric current far slower than the real speed of electrons. In this scenario, the audience may sense that the result comes from their interaction, but the feedback is deliberately slowed down; the whole execution works against the value of transparency.

Furthermore, some OIA expect passers-by to "discover" the feedback instead of showing it intuitively, thereby creating surprise and making an impression. In the example shown below, a bus stop OIA combined augmented reality for Axe's global falling-angel campaign. With only a bottle of Axe shown on the billboard, the audience cannot know what to do or what will happen when they press the icon. In fact, when they press the button, an angel falls to the ground, walks toward them, waves, and says hi. This sequence matches no pre-existing convention in the user's head; since the audience cannot trace any direction from previous experience or conventions about the world, the value of intuitiveness is completely abandoned in this kind of OIA.

4.2.2 Shareable

In recent years, mobile applications and devices have gradually changed people's consumption habits. The conventional view is that the consumer behavior pattern has changed from AIDMA to AISAS: the most significant change is that consumers have stopped being passive receivers of information and become active searchers. This shift makes "sharing" more and more important. OIA cost much more money and time per audience member than other mass media, so being shareable becomes economical and beneficial.

One strategy is the "personally designed advertisement." These ads are activated by a single stimulus, giving the audience the illusion that the ad was designed just for them. Such an ad momentarily captures the audience's full attention, and the fantasy and self-identification the interactive design provides not only attach to the brand or product but also arouse the desire to share. In the OIA below, the ad lets audiences create superhero selfies and adjust them slightly. After they are absorbed in this interaction, the OIA offers an option to share: you can share directly through the interface, whether to your own device or to friends. Audiences may also choose to take a selfie with the OIA billboard itself, since seeing one's own face on a street billboard is, to some extent, gratifying to the ego.

Another strategy, augmented reality, provides a similar opportunity to share. Even when there is no direct sharing option on the OIA itself, the unique experience encourages audiences to record the moment and share it. In other words, the value of "shareable" in OIA media design is two-sided: one side is providing a sharing option on the interface (as in the example above); the other is satisfying the audience's self-identification so that the desire to share arises spontaneously. Many augmented reality OIA have been very successful for these reasons, such as Axe's Falling Angel campaign, the National Geographic campaign, and the Hugs for Health bus stop ad. One common point: in these campaigns' videos you can see many audience members taking pictures of themselves with the OIA after joining the fun, no matter the country or the kind of location.

4.2.3 Directing the Process by Giving the Only Option

All interfaces include some degree of guidance directing users to operate the system. However, as metamedia with extremely transparent and simple affordances, OIA interfaces exhibit this guidance very evidently, more saliently than truly interactive interfaces with open systems. With a closed system, the audience's options and responses are often narrowed down to a single one. People should understand what the OIA wants them to do at first sight, which requires the interface's directions to be wrapped up in the design elements elaborated above.

Using text to lead the audience is the most common and efficient way. The Women's Aid ad uses "Look At Me," which takes up half the billboard, to guide the audience's sight; in such cases the text must be conspicuous enough at first sight, and the same requirement applies to the image. Color composition can also cue the interaction in OIA. The Economist once made an OIA with a big light bulb hanging over a solid red background; the bulb was fitted with a motion sensor so that when someone walked under it, it suddenly lit up. The color composition was concise and terse, laying stress on the key part of the ad: the light bulb.

More complex directions combine the three elements. In the OIA below, the interface direction consists of two parts. One is the text and the camera icon, which direct the audience to touch the icon to start taking a selfie; the countdown number at the lower left also limits the time the audience has to get ready. The other is the image and color composition, which mark out the area the audience should place themselves in. If the directions constituted by the three advertising elements are not followed, the interaction fails.

The truth is that these directions reduce interactivity by excluding other options. Audiences are given a single direction, which becomes a predictable external input. In real interactive machine design, designers also direct users' operations, but not to this extent. In open systems, inputs are completely uncontrollable, so the machine can learn and update itself through interaction; we can therefore assume that OIA using closed systems, which remove the unpredictable agents, can never update themselves. By directing the process, the OIA gives audiences the only option, thereby ensuring that every interaction is exactly, and without accident, the same.

4.2.4 Treats as Feedback

Unlike traditional advertisements, some OIA are specially designed to offer treats as feedback on the audience's actions: when audiences complete the whole operation directed by the designers, treats are offered in return. The treats can be virtual, like the personally designed selfie presented before, or real, like the one below; they may or may not relate to the brand being promoted. These automatic giveaways are coded as part of the algorithm, so they continue until the stock runs out. In the OIA below, sugar-free Coca-Cola is distributed as a treat if the audience shouts "YES" at the voice activator. Here the directed shout can be seen as the external input, and the output, or feedback, is the giveaway of a bottle of the drink being promoted: the treats are real and related to the brand.
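The giveaway loop just described can be sketched as follows. The trigger-word matching and the stock size are illustrative assumptions, not the actual Coca-Cola implementation:

```python
class TreatDispenser:
    """Treats-as-feedback: a matched voice trigger releases one item
    until the stock runs out, modelled loosely on the Coca-Cola
    'YES' machine described above."""
    def __init__(self, stock, trigger="YES"):
        self.stock = stock
        self.trigger = trigger

    def hear(self, utterance):
        # Crude voice activation: any utterance containing the
        # trigger word counts, while stock remains.
        if self.trigger in utterance.upper() and self.stock > 0:
            self.stock -= 1
            return "dispense"
        return "idle"

machine = TreatDispenser(stock=2)
print([machine.hear(u) for u in ["yes!", "no", "YES", "yes"]])
# ['dispense', 'idle', 'dispense', 'idle'] -- stock exhausted by the end
```

The last "yes" gets nothing because the coded stock counter, not the audience, decides when the feedback stops, which is precisely the closed-system character of these ads.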

Some traditional outdoor ads also use this strategy of giving away treats. Compared with digital OIA, however, they must either station staff nearby to control the scene and create a promotional environment, or leave the installation alone to work its magic by itself; in either case, the treats are not feedback on the audience's interaction. In these traditional ads, the giveaway can likewise relate to the brand, like the La Place restaurant ad that gave away fresh fruit, or to the timing or place, like the Wilkinson razor ad that gave away roses on Valentine's Day. But traditional outdoor ads can only give away real treats to achieve the purpose of publicity.

4.3 Open Systems in OIA

Despite the prevalence of the closed systems discussed above, some advertisers do use open systems in OIA interfaces. It should be granted that an open system does not guarantee the ad will be more interactive or more effective than one using a closed system. However, with technology developing so fast, and people always fond of the new and tired of the old, OIA with closed systems will soon be at every corner; without novelty and freshness, how long can such OIA keep their advantage? The use of open systems in OIA is therefore a promising attempt. Who knows: one day you might be able to chat with an advertisement on the street.

4.3.1 IIMs

Wegner describes interactive identity machines (IIMs) as interactive machines that output their inputs immediately, without transformation. As he stresses: "Though IIMs are not inherently intelligent, they can behave intelligently by replacing intelligent inputs from the environment." In OIA, many designers take advantage of the intelligent yet simple character of IIMs, relaying the environment directly to achieve interactive results. Typically, this kind of OIA installs a video player on site and broadcasts a live scene from another site: audiences are filmed by a camera, and the footage is sent straight back to the place being filmed. Two groups of agents in two places are connected by a screen whose only function is to transfer real-time images. In the OIA shown below, the bus stop billboard is actually a huge screen broadcasting real-time images of an old grandpa; audiences interact with the OIA interface, but are in fact interacting with the man behind the camera.

Other examples include Talking Tunas and Mattel's Draw Something. IIMs like these make OIA seem more interactive than those with closed systems: they take in inputs from the unpredictable outside world, which gives them richer behavior than closed systems have.
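The identity-machine behaviour behind these campaigns reduces to a relay that outputs its input stream unchanged, which a short sketch makes plain:

```python
def identity_machine(stream):
    """Wegner's interactive identity machine: relay each input
    unchanged. Any apparent intelligence comes entirely from the
    environment -- here, the person being filmed at the far end."""
    for frame in stream:
        yield frame  # pass each camera frame straight to the billboard

camera_feed = ["wave", "smile", "point at bus"]
billboard = list(identity_machine(camera_feed))
print(billboard)  # exactly the input: ['wave', 'smile', 'point at bus']
```

The machine itself does nothing clever; the richness of the interaction comes from the human at the other end of the link, exactly as Wegner's description suggests.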

4.3.2 Touch-screen Video Games

Another kind of OIA with an open system is the kind that embeds games. The modes of use vary with the kind of game embedded, but they are normally simple and similar. With interactions like a video game's, they rely to some extent on the audience's previous gaming experience and common sense. The games are normally designed as touch-screen games, so responsiveness is salient; in this case, transparency and intuitiveness are the principles that matter.

Game OIA remediate real touch-screen video games but are placed in public, and their interactivity varies from ad to ad. In the Surf Life Saving OIA, the audience can touch only limited areas, and the icons have just two modes, on and off; it is easy to see that algorithms still sit behind the interface and the system is not very open. However, by providing more than one choice, these ads simulate early video games and thereby create a more interactive experience than ads that leave the audience only countable options. Some OIA, like the Adobe EchoSign digital sign-off game, provide different levels of difficulty; some, like the San Jose Earthquakes shooting game, let audiences shoot in any direction they want; some, like the Pepsi puzzle game, let the audience move the pieces wherever they want, and even add a gravity effect that makes the experience more like an interactive pad.
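The limited "on/off icon" interactivity of an ad like the Surf Life Saving OIA can be modelled as a fixed set of toggles; the icon names below are invented for illustration:

```python
class TouchIcons:
    """A game panel in the Surf Life Saving style: a fixed set of
    touchable icons, each a simple on/off toggle. Touches anywhere
    outside the scripted areas do nothing at all."""
    def __init__(self, icons):
        self.state = {name: False for name in icons}

    def touch(self, icon):
        if icon in self.state:
            self.state[icon] = not self.state[icon]
            return self.state[icon]
        return None  # the closed system ignores unscripted input

panel = TouchIcons(["flag", "whistle"])
panel.touch("flag")
print(panel.state)  # {'flag': True, 'whistle': False}
```

Even with several icons, every reachable state is enumerable in advance, which is why such games remain closed systems however game-like they feel.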

OIA with open systems invite deeper interaction than those with closed systems. They are designed to cope with the unpredictable external world, and the principles of intuitiveness and transparency are valued more than in closed systems. The "shareable" and "treats as feedback" features also still apply.

I am not assuming that open systems are more effective than closed ones. Just as new media did not eliminate traditional media, OIA with closed systems can be as effective and provide interaction no less impressive than those with open systems. I believe that as more and more technology is applied to OIA, there will always be novelty on the street that keeps hijacking attention, and those OIA may be designed with either open or closed systems.

4.4 Semiotic and Cultural Practice

Advertising is a field with close ties to semiotics and culture, so every piece of work should also be seen as a practice in this field and follow its rules. This matters for design because designers must have a deep understanding of references in the symbolic world, both to avoid taboos and to design OIA that convey information properly.

In Peirce's table of interpretant levels, people first take in perceptual information and then recognize it as patterns. In most cases, traditional outdoor ads that cannot capture full attention stay at this layer, because they work only as background and are not specifically attended to. But OIA are designed for full engagement with their audiences. As metamedia, OIA may contain every channel in the table (language, music components, image components, film/video). The three basic elements of advertising listed before can be viewed as the signifying features of these media: the audience can read the text as a part and understand what to do next, or combine the text and images as a whole to understand how to perform a better interaction. Furthermore, most advertisements are coded within a cultural environment. In The Economist's light bulb ad presented above, the composition of a solid background and an impending light bulb does not carry a simple meaning like "the light is on"; rather, by placing the bulb above a person's head and lighting it up, it carries the semantic meaning of "inspired" or "bright." The cultural encyclopedia not only helps designers create novel ideas within shared symbolic societies but also helps them avoid taboos: Axe's Falling Angel campaign could not be placed in a Muslim society, not because people there could not understand it, but because within their shared cultural encyclopedia it would be inappropriate.

5. Conclusion

In this paper I mainly discussed one kind of OIA: those that have digital interfaces and interact with humans. Although most of the OIA listed above are designed as closed systems, in audiences' minds they are interactive enough, and they influence their affect, cognition, and behaviour. Scholars have discussed what principles the interface of an interactive machine should follow; OIA, however, have values of their own.

As described, the values of intuitiveness and transparency are not mandatory, given the need to produce surprise and suspense in OIA. In those that embed games, however, the interface should still be designed to be intuitive and transparent. Secondly, OIA are well positioned to be designed with an option to share and with treats as feedback. Also, because of the closed system, most OIA can only offer the audience simple options to operate, so designers should be able to direct the process on the interface. Finally, as a cultural and semiotic practice, OIA design should also follow the values of proper symbols and cultural patterns.

Works Cited

Arbab, Farhad. “Computing and Interaction.” In Interactive Computation, 9–23. Springer, Berlin, Heidelberg, 2006.

Bergstrom, Jennifer Romano, and Andrew Schall. “What Is Eye Tracking?” In Eye Tracking in User Experience Design. Morgan Kaufmann, 2014.

Bierma, Nathan. “‘Minority Report’ Stirs Questions over What Advertising Will Be in the Future.” Knight Ridder Tribune Business News; Washington. June 25, 2002.

Cassidy, Anne. “Interactive Outdoor Advertising.” Campaign; Teddington, March 7, 2008, 14.

Duchowski, Andrew T. “Eye Tracking Techniques.” In Eye Tracking Methodology, 49–57. Springer, Cham, 2017.

Goldin, Dina Q., Scott A. Smolka, Paul C. Attie, and Elaine L. Sonderegger. “Turing Machines, Transition Systems, and Interaction.” Information and Computation, Special Issue Commemorating the 50th Birthday Anniversary of Paris C. Kanellakis, 194, no. 2 (November 1, 2004): 101–28.

Irvine, Martin. “Peirce ++.” December 7, 2017.

Leeuwen, Jan van, and Jiří Wiedermann. “Beyond the Turing Limit: Evolving Interactive Systems.” In SOFSEM 2001: Theory and Practice of Informatics, 90–109. Lecture Notes in Computer Science. Springer, Berlin, Heidelberg, 2001.

Liu, Yuping, and L. J. Shrum. “A Dual-Process Model of Interactivity Effects.” Journal of Advertising 38, no. 2 (2009): 53–68.

Wang, Jing. “Characteristics and Application of Interactive Outdoor Advertising Design Creativity in the Context of New Media.” Hubei Institute of Fine Art, 2016.

Wegner, Peter. “Why Interaction Is More Powerful Than Algorithms.” Communications of the ACM 40, no. 5 (May 1997): 80–91.

Some thoughts on the Google “Art Camera”

I'd admit that I'm no fan of art. But I like Van Gogh's The Starry Night. My love for The Starry Night actually came from my silly fondness for a TV show called Doctor Who. Van Gogh and The Starry Night have become a semantic type in our shared cultural encyclopedia, a "prototype" in Peirce's terms…

…Anyway! That's why I searched for this artifact as soon as I opened the Google "Art Camera" website. AND I WAS AMAZED. What magical work has been done by the "gigapixel capture process"! By zooming in, I can even see the texture of the canvas Van Gogh once painted on. The pigment, the brushwork, the cracks… For someone like me who knows nothing about art, this is astonishing. This is probably the most intuitive demonstration of "the medium is the message." I have seen The Starry Night in my textbook, in news pictures, and on TV, but none of those gave me goosebumps like this. From my perspective, the remediation is realized through how it changes the way people view artifacts. Normally we see paintings as 2D artifacts, but through the Google Art Camera they go beyond 2D. The detail of the gigapixel scan creates a feeling of being personally at the scene; the gigapixel-level magnifier serves perfectly as an extended mind. All of this makes this rendition of the painting unique. As one "token" of its "type," The Starry Night on the Google Art Camera is endowed with new messages through the medium.

Google's Art Camera project gives ordinary people the opportunity to view artifacts in detail without going to the museum. Although Prof. Irvine stressed that Malraux did not intend in his text for artifacts to go beyond the walls, the re-presented artifact is definitely doing so as a symbolic token thanks to this project. However, I am a bit concerned about several things and can't find the answers in the readings.

  1. When looking at the encyclopedic system of the Van Gogh Museum, I can see that the arrangement can be based on popularity, time, and even color, which is good. However, real museums tend to arrange artifacts in their own ways. In Google Art Camera, I can only choose between viewing artifacts one by one in gigapixel detail, or going on a blurry interior tour via the 3D camera. The gallery painting mentioned in the reading, like Morse's, as metamedia or hypermedia, is designed for "encoding, transmitting ideas and for communication." The Google Art Camera aims at the same purpose, but somehow goes against the interface principles described by Janet Murray. Isn't presenting a whole gallery in the same way as presenting one artifact a bit unintuitive and nontransparent? Why has the encyclopedic structure of Google Art jumped so far from Morse's gallery painting?
  2. Furthermore, after seeing the ultra-HD versions of the paintings, I wanted to see what they do with statues. And I was disappointed. Totally unlike a painting, a statue is actually the best object for a 3D camera. But in fact, I can only use the magnifier to view one side of a three-dimensional statue.
  3. I also clicked through the street views of Hong Kong on Google Art (Hong Kong, the Electric City: The Neons of Hong Kong). The photos and the video have the feel of the old Hong Kong films that spread to foreign cultures back then, as in Blade Runner and Ghost in the Shell. This reminds me of Andy Clark's comment on culturally embedded cognition: "culture provides us with intellectual tools that enable us to accomplish things that we could not do without them, but can also blind us to other ways of thinking." Did the Google Art Project hire people from Hong Kong to do this part? Or did they just assign it to some random Western photographer and copywriter? I'm inclined to suspect the latter.
  4. Besides, the 3D street view was roughly made, changing from rainy to sunny and from day to night within just two steps. This careless production not only distorts the real Hong Kong but also does great damage to the user's experience. Could it be that the Google Art Project has been developing too fast to focus on quality?

    After all, I'm still very excited to get to know this project and can't wait to see it develop. Maybe someday I can use the gigapixel scanning camera (yes, I'm really impressed by it) and the 3D camera (maybe VR) to view the Forbidden City in Beijing.

Trying to Understand Interactive Advertising with Interaction Design

Interactive Advertising: the kinds of advertising that use online or offline interactive media to communicate with consumers and to promote products, brands, services, public service announcements, and corporate or political groups. ("Interactive Advertising," Wikipedia, accessed September 26, 2017)

One case to work through myself: Misereor's PlaCard, the "Social Swipe" advertisement

As a Symbolic Practice

At first, interactive advertisements seem nothing like IIMs, interactive identity machines, which output their input immediately without transforming it (Wegner, "Why Interaction Is More Powerful Than Algorithms"): they are interactive, yet the outcome is not a direct copy of the input but a programmed response produced by algorithms. People "talk" to the machine through all sorts of interfaces, and the machine gives corresponding feedback. Even though different inputs get different responses, the response is far different from the input. Then it occurred to me that this is exactly where symbolic practice joins the game.

When one interacts with the machine, everything one does can be interpreted as symbolic practice. The card-swiping gesture is not just a metaphor of freeing and feeding; in the computer program, it refers to donating money (an input). And the image of people being freed and fed is not just a metaphor that someone somewhere is helped by your donation; it refers to money donated (an output). In fact, this algorithm hides under the interactive interface so that the machine can be transparent and intuitive to humans. This works for both of the main physical symbol systems: for the computer, the input (money donating) maps to the output (money donated); for the human mind/brain, the input (cutting ropes and bread) leads to the immediate response (rope and bread being cut).
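The hidden mapping can be sketched as a tiny dispatch function. This is purely a hypothetical illustration, not Misereor's actual code: the function names, the fixed donation amount, and the animation strings are all my own assumptions.

```python
def handle_swipe(scene, donations):
    """Process one card swipe: record a fixed donation (the input,
    as the computer sees it) and return the symbolic feedback
    animation the audience sees (the output)."""
    donations.append(2.0)  # assumed: a fixed 2-euro charge per swipe
    if scene == "rope":
        return "play: rope is cut, hands are freed"
    elif scene == "bread":
        return "play: bread is sliced, person is fed"
    else:
        return "play: generic thank-you message"
```

For the computer, the chain is literally swipe in, donation out; the "cutting" exists only in the feedback layer, where the shared symbolic meaning does its work.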

As a Cultural Practice

Continuing from the preceding part: if an interactive identity machine simply outputs its input immediately, why do people think otherwise? Why do people feel that by swiping a card they directly help people in need?

Successful interactive advertising must be transparent, which means providing immediate feedback. However, letting customers see the feedback as soon as they interact requires precise and creative design. All interactive advertisements strive for this, either by using extremely simplified affordances or by achieving it symbolically within a shared culture. This case uses the latter, endowing the action of card swiping and the images of rope and bread being cut with symbolic meaning. Because of the culture shared by the designer and the customer, these symbolic references are processed completely in the human mind, unconsciously, giving the illusion of transparency.

Successful interactive advertising should also be intuitive. In my view, this is even more vital than transparency. In Inventing the Medium: Principles of Interaction Design as a Cultural Practice, Murray indicates that intuitive design should draw on our unconscious expectations of how things behave, built from experience and from our subconscious model of the world. This means interaction design will only work if the machine responds as people expect. In this case, people unconsciously expect that swiping a card in the manner of using a knife should produce a response simulating what happens after a knife is used. And the design meets their expectations.

(What surprises me is that people are always surprised, even though it is the exact response they are waiting for.)

Janet Murray also mentioned that a design should provoke a distinct response so that people can distinguish it from the many other possible conventions. However, this is what interests me most. In the field of advertising, this ambiguity, or pun, is exactly what is vital. It relies on people making different references through different possible conventions. Swiping a card is a gesture of paying in everyday experience; in this case, it is also a gesture of cutting, as with a knife. Both conventions are processed. This seems to go against the principles of interaction design… so I honestly don't know how to explain this part.

TO CONCLUDE: consciously and subconsciously, the order of this interactive process, which takes place in the extended mind, is:

  1. Card swiping—Cutting the rope & bread—Money donating—Money donated—Rope & Bread being cut—People are helped.
  2. human action—the symbolization of this human action—input—output—the symbolization of the respond—the real respond to human action

Both line 1 and line 2 describe the process by which humans interact with cognitive-symbolic artifacts. A collective process indeed.

Interfaceless Interface and Remediation

Well… maybe I really should wait until this week to talk about 3D manipulation on 2D virtual displays, or Pixar's graphical interface revolution… But anyway, I found something new so provocative that I can't wait to discuss it.

Interfaceless interface

“…no recognizable electronic tools-no buttons, windows, scroll bars, or even icons as such. Instead the user will move through the space interacting with the objects ‘naturally’, as she does in the physical world.” (Jay D. Bolter and Richard Grusin, Remediation: Understanding New Media, 2000)

This Iron Man, Jarvis-like, transparent, interfaceless interface would definitely be my proposal for what I would like to see in our PCs and mobile devices. To realize it, Engelbart's "view control" would have to be implemented in virtual reality. Also, immediacy is vital, and realizing immediacy requires high-speed data transmission. (I read somewhere that once 5G networks are deployed, data transmission speeds will leap so far that caching and image delay will no longer be a problem. So this future is actually foreseeable, fortunately.)

Right now, we can simulate this future using VR glasses, which in my view are a highly hybrid medium. In Manovich's words, "Medium = algorithm + a data structure." The trend now is to hide the algorithm as deeply as possible so that only useful data is displayed to users, in beautifully graphical ways. Here's a VR shopping commercial released last year. It seems to be an example of how far the so-called "interfaceless interface" has been realized so far.

BTW, I would also propose that the way SQUID is used in Strange Days be implemented someday. Sending messages directly to people's cortex seems like the next step after VR technology.


“…the remediation of one medium in another.” (Lev Manovich, Software Takes Command, 2013)

This is my favorite part this week. By "remediation," Bolter and Grusin referred to new media simulating, or re-mediating, older media, and so it is with the computer. How does that work? They wrote that "the goal of computer graphics is to do as well as, and eventually better than, the painter or even the photographer." Remediation can be understood as the logic of new media in this case.

Remediation is also different from simulation. The Memex, the Dynabook, and the Kindle are all new media compared to hard-copy books. Throughout the development of PCs and personal reading devices along this trail, inventors have exhausted their abilities to make the e-book experience feel like the real-book experience. The color, the light, the page-turning display, the note-taking function… all the features of today's e-book devices were invented step by step to simulate the real book. A sardonic commercial released by Ikea is a perfect example of my point.

In this commercial, Ikea playfully suggests that using a real book simulates using an e-book, although in fact it is the other way around.


Remediation or simulation, either way, human beings are somewhat nostalgic in this field. That might be exactly why we look forward to an "interfaceless interface": that way, interacting will feel natural, like the physical world we have been interacting with for God knows how long.

Manipulate 3D object on 2D interface

From the code-breaking Turing machine, Bush's storing-and-consulting Memex, Licklider's multi-access online interactive community, Sutherland's Sketchpad and light pen, Engelbart's vision of modern software, and Alan Kay's mockup device, to, finally, Steve Jobs's epochal iPad: this long, long list traces how humans worked out, step by step, the logical function and the graphical interaction of interfaces. Interfaces have been in my life for so long that I have taken them for granted. This week's reading helps me reexamine the magic this familiar black box has been performing all along.

Of all those pioneers, Ivan Sutherland's work is the one that surprises me most. I had studied the history of the Disney and Pixar animation studios enough to notice that before Pixar, Hollywood animation was dominated by Disney with its refined 2D hand-painting system called rotoscoping, which allowed painters to draw on the basis of a real scene. But Pixar's engineer Ed Catmull brought SOMEONE's work into the film industry and built cutting-edge software called RenderMan, which allowed artists to draw and design directly on the computer. This technology launched the world's first 3D animated feature, Toy Story, and set off the third revolution in the industry. And now I know who that "SOMEONE" was. It was Sutherland who allowed people to draw with, and on, the computer, an invention so important that it all but laid the foundation of modern people's tech habits. In his paper, Sutherland used the "light pen" to input semiotic graphics into the computer. Isn't that light pen now our fingers?

Speaking of 3D, there seem to be many paths still to be realized in the future. CAD, Nuke, and the more common Adobe After Effects are three pieces of software I know of that work with 3D. In my opinion, interfaces involving 3D functions are much more complex than the 2D processing described in the readings so far. Manipulating a three-dimensional box on a two-dimensional board requires not only the symbolic forms that constitute and create interfaces; it also requires more of the human extended mind to work its magic. On the interface of the 3D display system, there is an "X-Y-Z position indicator" used to change your point of view and see the overall layout of the subject. This indicator gives the user an intuitive view of the scene that cannot be seen with the naked eye. Using this indicator, clicking and dragging the mouse from left to right rotates the object clockwise around the Y-axis, and dragging from bottom to top rotates it around the X-axis, which is quite different from an X-Y position indicator. This indicator literally works as a "third eye" for the human. Yet all these 3D operations can still be performed with the mouse.
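The drag-to-rotate mapping described above can be sketched in a few lines of Python. This is a hypothetical illustration of the general technique, not the actual code of CAD, Nuke, or After Effects; the `sensitivity` parameter and function names are my own assumptions.

```python
import math

def rotate_y(p, angle):
    """Rotate a 3D point around the Y-axis by `angle` radians."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def rotate_x(p, angle):
    """Rotate a 3D point around the X-axis by `angle` radians."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (x, c * y - s * z, s * y + c * z)

def orbit(point, dx_pixels, dy_pixels, sensitivity=0.01):
    """Map a 2D mouse drag to a 3D rotation: a horizontal drag spins
    the object around the Y-axis, a vertical drag around the X-axis."""
    point = rotate_y(point, dx_pixels * sensitivity)
    point = rotate_x(point, dy_pixels * sensitivity)
    return point
```

The point is that the interface collapses one missing axis into a convention: two screen-space deltas stand in symbolically for three-space rotations.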

Apart from this indicator, which shows the producer's view, there is another view in this 3D display system called the camera. It shows what will be rendered out, the spectator's view. Taking this project as an example, the view we see now is the producer's view, from the left side of the real scene, and it can be altered with the "X-Y-Z position indicator." When the video is rendered out, however, the spectator's view is the one from the camera at the bottom left of the display.

BTW, the blue indicator means moving the camera backwards, in this case towards the picture rather than towards the back of the computer. Without proper indices, one cannot understand the operating rules of this 3D system the way one does in a 2D system.

Operating a 3D object on a 2D board is an interesting yet complex symbolic task. The complexity arises because the actual operating system lacks a Z-axis: we can't yet lift the mouse into the air to simulate 3D operation. As a result, some of the affordances are not as intuitive as in a 2D system, so I'm looking forward to a more simplified interface than today's. (Maybe we will be able to lift our mice, fingers, or light pens into the air, who knows?)


Edit, the next day:

This morning I came across an interesting video, Tilt Brush, showing how to operate 3D objects on a 3D interface. In this case, the Tilt Brush invented by Google VR upgrades the "light pen" in Sutherland's mind by endowing it with the ability to work in real 3D space. I was wrong before: we now literally can "lift our mouse into the air to simulate 3D operation."

Logic of Machine and Tolerance of Human

After completing only about 20 percent of the Python course on Codecademy, I have a more intuitive understanding of how different computing languages are from natural languages. The most obvious difference is tolerance. With their unique syntax, the rules for communicating with machines seem extremely strict to me. Computing languages are utterly intolerant. Even a blank space means something: if I miss a space when coding "spam " + "and " + "eggs", the software will not read my mind and will simply produce the wrong result. Moreover, even the errors have to be coded for, probably using if, elif, and else. Because machines can't process ambiguity the way we humans do with natural language, instructions have to be explicit to execute. To me, logical artificial languages like Python seem to be the exact reflection of the machine's logical mind. To run mathematically or electronically based programs, machines are meant to be logical, and intolerant.
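A minimal sketch of the point: inside a string literal, even a space is a character like any other, and every "tolerant" branch has to be spelled out explicitly. The example strings are from the Codecademy exercise; the `describe` function is my own illustration.

```python
# A space inside a string literal is a character like any other:
with_spaces = "spam " + "and " + "eggs"    # "spam and eggs"
without_spaces = "spam" + "and" + "eggs"   # "spamandeggs"

# Even the "forgiving" path has to be coded explicitly:
def describe(meal):
    """Classify a meal string; anything unanticipated falls
    through to the else branch, because the machine cannot
    improvise a response on its own."""
    if meal == "spam and eggs":
        return "a proper breakfast"
    elif meal == "spamandeggs":
        return "missing its spaces"
    else:
        return "unknown meal"
```

The machine never objects to the missing spaces; it just faithfully produces something I didn't mean, which is its own kind of intolerance.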

On the contrary, humans are much more tolerant than machines. In fact, we tend to describe an intolerant human who is indifferent, highly logical, and cold-blooded as a robot (an analogy that sounds more reasonable after learning a bit of Python). Wing, in her short paper, claims that the fundamental question computational thinking confronts is: What is computable? Or, what can humans do better than computers, and what can computers do better than humans?

I stumbled across an article this morning called "Excuse me, you are fired." In it, the author cited scholars' opinions on jobs being taken over by robots. Previously, it seemed more than convincing to me that one day all our jobs would be taken over by robots. However, this article lists features that keep a job safer from automation: 1) negotiation or sophisticated communication skills; 2) helping or assisting others with a genuine and sincere heart; 3) coming up with original ideas with an aesthetic and creative mind. In contrast, if your job requires 1) skills that can be easily grasped through training; 2) repetitive work that needs experience rather than deliberation; or 3) squeezing into small workspaces with little need to keep up with current affairs, then you are at higher risk of being replaced by a machine.

Telephone salesperson and hotel or accommodation manager/owner rank as the highest- and lowest-risk jobs for automation, at 99% and 0.4% respectively. With these two jobs, we can easily see the difference.

Hence one can see how closely computational thinking is related to our livelihoods. The jobs safer from automation are in fields where thoughts are harder to compute. Randomness, tolerance, and minds that dare to think outside logic are common in those fields (probably even required in the third kind of job). These are the jobs we do better than computers. The higher-risk jobs are in fields that require the full implementation of computable logic. In those fields, logic is vital, and people joke about working like robots. In those fields, computers might do better than humans, whether in the future or right now.

However, this is not to say that the safer jobs don't require computational thinking. In my opinion, things might be exactly the opposite. In the high-risk jobs, where most work is computable, humans act like computers but seldom think like one; after all, humans will never exceed computers at computing itself. In the fields safer from automation, however, we act nothing like logical computers, thanks to the tolerance of humanity. Humans need to think more like computers in those fields precisely because their work doesn't fundamentally require it. So to me, the safer the job, the more computational thinking will help.

(PS: The article "Excuse me, you are fired" that I read is in Chinese, but its argument comes from a comprehensive analysis of data supplied by Michael Osborne and Carl Frey of Oxford University. See the source: "The Future of Employment: How Susceptible Are Jobs to Computerisation?")

(PPS: The BBC makes this long paper by Osborne and Frey more readable, using the exact principles of "computational thinking." See also the BBC website to find out your job's risk: "Will a robot take your job?")

Where will the edge of the distributed mind end?

(These weeks' readings got me excited because I can't stop thinking about the corresponding possibilities of distributed minds in sci-fi films, and how they reflect our information theory, what is technologically doable, and what is ethically acceptable.)

I finally understand why Professor Irvine said my writing on implanted technology would soon meet its viable philosophy. The biometric lock, the photographic contact lenses, the implanted headset… all those devices fall under the term "cognitive technology." From this perspective, our anthropological development is surely a path of inventing and absorbing different tools that become external extensions of the brain. Our cognitive technologies develop faster and faster, and we can achieve more cognitive accomplishments with the help of this environmental support. So I assume it is fair to say that our distributed mind has become wider and wider along the path.

Then I try to fit some examples into active externalism. A twin Yifan in the Paleolithic Age had the same mind as I do in 2017. Without systematic language, however, she couldn't express her ideas properly to her clansmen. Without a calculator, she couldn't work out the square root of three. Not to mention that without a smartphone, she couldn't communicate with friends on the other side of the university, as I am doing right now. Her extended cognition was limited, but the cognitive capability of her brain was the same as mine.

Then consider another twin Yifan in… 2049-ish. Probably the interface in Iron Man will have become real. With such realistic affordances, she can do complex cognitive tasks by easily offloading her cognition onto literally interactive interfaces, and do simple cognitive tasks more casually with high-level affordances. Unlike this twin, I use relatively indirect external representations, like clicking a "delete" button with the mouse to delete something, while the twin Yifan can "grab" a file with her real hand and "throw" it at the dustbin as she does in her daily routine, or just wave her finger and bounce the file away. The functions of our cognitive technology keep broadening: visualizing, simulating lived experience, simplifying. One word for all of it: "affordancing."

See "Iron Man excerpts of interface affordances" on

This kind of external cognitive artifact in the Iron Man movies, as far as I'm concerned, reflects our human wish to design an interface that is no interface at all. My opinion stays the same as in previous weeks: the best vision of affordance will be one in which the individual mind and the artifact are so combined that design might reach two entirely different destinations. On the one hand, offload all internal representations externally; in that case, memory, knowledge, and even belief could be seen and altered by anyone authorized, which would very likely produce more problems than solutions, as shown in the sci-fi show Black Mirror. On the other hand, upload all external representations internally, endowing humans with the power of computers and wiring every individual mind together, as shown in an episode of another sci-fi show, Doctor Who. These two extreme cases will never work; I only use them to point out the possible directions of AI interface design.

At last, I feel like asking a rather useless (or philosophical) question: where exactly will the future definition of the distributed mind find its edge? If an AI developed a mind of its own and passed the Turing Test, would we still rely on it to retrieve information, like Inga and Otto? And would it still remain a distributed coupled system with the human brain? We viewed Blade Runner weeks ago, so take it as an example: suppose our cognitive technology (the replicant) has developed its own cognition; does it still belong to us as a (cognitive) tool?

I also really struggled with the idea of cognitive ethnography. What exactly does it mean, and how does it apply to the extended mind and distributed cognition? Could we talk about it in class, maybe with some examples?

Power of “Uncertainty”

To begin with, I want to start with certainty. In "A Cultural Approach to Communication," the author regards communication to humans as water to fish. In fact, I think we might be unaware of the logic of communication, but we are definitely well aware of its power, so well that we take every advantage of it. Information has always been powerful to some extent. In the early days, literacy and communication were treated as privileges of the nobility. Not only were ordinary citizens deprived of the right to read and write; punishments for sinners (found in almost all cultures) involved cutting out the tongue or ears and gouging out the eyes, which in my view might symbolically refer to cutting off someone's channel for receiving or giving information. Then came the Middle Ages, when French became the favorite "alphabet" of upper-class society, and communication became harder and harder between nobles and commoners. Not to mention nowadays, when I, a "digital immigrant" in Floridi's terms, am more than overwhelmed by the flood of information. The times favor those who can get hold of the information.

Then, in this week's reading, I reached the topic of "uncertainty."

In Shannon's view, "uncertainty" is the data deficit whose value lies in the known possibilities. A coin in my fist can create 2 possible outcomes, two coins can create 4, three coins 8… "Information can be quantified in terms of decrease in data deficit." So, apart from the Bar-Hillel-Carnap Paradox, which indicates that an impossible instance carries maximal information, "uncertainty" does have the power to be more informative than I previously thought.

Cryptography has a lot of charm because of the uncertainty the "riddle" produces, from the ciphers used in war to the famous plot in BBC's modern Sherlock: "I am _ _ _ _locked."

China has an old poem which, translated word for word, would be "Right now, no sound at all is better than any sound." This line is now used to describe circumstances in which no word is needed. Originally, it described the ending of a pipa (a traditional Chinese instrument) performance, when the music is so captivating that even in a brief pause, the silence creates more imagination in the audience's mind. This can be seen in many musical performances, where the silence between passages is far more informative. Film directors also like to use this kind of charm produced by "uncertainty": a few seconds of black or static frames easily produce suspense.

The most impressive scene in the TV show Person of Interest is when the scientist is teaching his AI to play chess, and the AI relies too much on probability calculation, spending a long time predicting before its first move. The scientist says: "Each possible move represents a different game… there are more possible games of chess than there are atoms in the universe. No one could possibly predict them all, even you. Which means that the first move can be terrifying… but it also means that if you make a mistake, there is a nearly infinite number of ways to fix it, so you should simply relax and play."

I like this bit of dialogue because it has a lot in common with the logic behind "uncertainty." In chess, the uncertainty contains an amount of data even an AI can't calculate. I used to fear uncertainty precisely because of the nearly infinite information it could produce. But these lines offer a different angle: the power of uncertainty is an advantage not only for the message (the informer) but also for myself (the informee).

How a logo works as a symbol

We are no strangers to commercial logos. Or should I say, we can't avoid them even if we want to. Logos are everywhere, using their own characteristics to help brands lure customers (well, not directly, but that's where their final purpose lies). We easily recognize a logo as a symbol, for it is a conventional agreement embedded by the brand in the memories of all viewers. We see it frequently enough to remember it, as if we "learn" it. Once our brain has linked the image with a brand name or a kind of product, and we no longer take it as a random lump of color, the symbolic work is done.

Being a symbol, a logo is intersubjective. Even though a green-and-white two-tailed siren easily refers to Starbucks, some people will instantly interpret this physical vehicle into the concept of coffee, while I think of the smell and noise in the shop before I even reach the word Starbucks.

When associated with the Peircean decomposition of sign classes, I find that logos can work as symbols grounded in icon, index, and symbol, separately or in combination. Take Target's logo for example: the target-like circle colored in red and white is apprehended as an icon, referring to the concept of a shooting target through the viewer's past experience. The shooting target is then linked to the brand Target through the viewer's present experience and future expectation, which gives it indexical and symbolic features. Then, with the lettering underneath (I sometimes assume the words in logos give the symbol phonemic features), the whole logo works as a symbolic sign that has an icon to "express the information" and an index to "indicate the object to which this information pertains". As "part of a shared cultural encyclopedia", few modern US citizens will see the white-and-red circle and think it means merely a shooting target rather than a supermarket chain. However, put the identical circle in a rural mountain village and the references can be totally different, since it is not part of that culture; there the circle might work only in an iconic way.

One kind of logo interests me most: the kind that seems to consist of nothing more than font deformation. Like Google, Sony, Lenovo, H&M, Zara, Calvin Klein… They are just letters put together. Logos like these look very much like sinsigns (tokens), signs whose "accidents of existence" make them signs, mere replicas. However, they are not. The logo of Google is not six letters combined in a specific font with specific colors. From my point of view, it ceases to be a "word" and becomes an "image" when displayed like this. By which I mean, when I write down "Google" with these identical features, I'm not writing; I'm drawing. All of that comes from the conventional meaning we have endowed on this combination.

A logo can be very concise and complex at the same time. Even one as simple as Google's is not easy to reproduce identically. Recently a group conducted a test of people's recollection of famous brands' logos, and the results turned out to be quite funny to some extent. As the results show, people actually can't recall the minutiae of even the simplest logo. (The full test results and larger pictures were posted on a Weibo page.)


When I looked at these false drawings, I found that people seem to remember the iconic, indexical, and symbolic features of a logo to the same degree. In my previous understanding, iconic apprehension helped viewers remember most of a logo. However, the test shows that people simply remember the most conspicuous part, whether it is a symbol or an icon. In fact, the feature with the highest accuracy rate seems to be the logo's color. Does that mean that when a new brand designs a logo and wants to improve its first-recall rate, it only has to choose the right color? This is a very interesting thought to me, and I wish to do more research on it.

Sign language

I know little about sign language, but it kept appealing to me while reading, and I found myself trying to understand whether sign language is a natural language, a kind of gesture language, or both.

At first, I'd very much like to think of sign language as a natural language.

In the Semiotic Matrix, I didn't see a system in which I could place gesture language. But when I assume sign language is a kind of natural language, I find that it fits every property, feature, and function labeled "Y", which seems to indicate that sign language must be a natural language.

Furthermore, Steven Pinker used sign language acquisition by deaf infants as the very crux of his explanation of why language is innate. In The Language Instinct he wrote, "when deaf infants are raised by signing parents, they learn sign language in the same way that hearing infants learn spoken language." Unlike other gestural signs such as flag signals, sign language is far more sophisticated and seems to have a consistent one-to-one match between spoken language and sign gesture.

Finally, sign language has very strict combinatory and generative rules of its own, which makes it fit UG and qualify as a systematic language, like English or Chinese. Common gesture, by contrast, even when it can be understood by people, does not count as sign language, or as language at all. Without strict rules (syntax), gestures are just random lip reading and gesture mimicking.

However, part of me still has problems with this idea.

First, after the first week of this course, reading Peirce's model, I recognized sign language as an iconic sign, or at least an indexical sign. Even after this week's reading, I still think sign language is not like the other natural languages we deal with every day, but more like a (Rhematic) Iconic Sinsign or a Dicent (Indexical) Sinsign (Wikipedia, "Semiotic Theory of Charles Sanders Peirce"). In this view, sign language is not "fundamentally arbitrary or purely conventional", but "resembles the signified" (Daniel Chandler, Semiotics: The Basics), as shown in the pictures below.

Second, natural language is a collection of different languages. If we count sign language as one of the natural languages, then what are ASL (American Sign Language) and CSL (Chinese Sign Language)? Dialects? Apparently not, since they differ not only phonologically but also syntactically and lexically. Shown below are the similarities and differences in number signs between ASL and CSL.

Which brings me to the third problem: sign language seems to fail to fit Jackendoff's parallel architecture, as well as his processing architecture. Lacking the auditory interfaces, does it require a new structure for the processing system? Or new interface rules that don't apply to hearing people? Would the processing system become hierarchical, or remain parallel?