Category Archives: Final Projects

Identifying Cultural References in Persona 5's UI Design, and How They Immerse the Gaming Experience

Abstract

Persona 5 is a Japanese role-playing game developed by Atlus and released worldwide in 2017. After its release, it received acclaim from critics and gamers alike, especially for its user interface (UI) design. Using Persona 5 as a case study, this essay explores the cultural references embedded in its UI through the Peircean model of semiotics, and further investigates how cultural references in UI design can add to gamers' immersive experience by drawing on the concept of hybridization, the affordances of digital media, and general studies of user interface design.

Introduction

User interface (UI) design is an emerging, multidisciplinary field within human-computer interaction, drawing on semiotics, graphic design, and cognitive psychology. A UI allows users to give instructions and provides the means for them to interact with computers. This implies that the effectiveness of UI design relies heavily on users' ability to recognize an interface's functions, which in turn presupposes socio-cultural factors in how users interface with machines.

Video games are naturally one of the fields that depend on high-quality UI design to elevate the gaming experience. As gamers are immersed in completely artificial settings, cultural background plays an integral role in how they perceive and recognize the game world. This essay discusses how developers integrate cultural factors into game UI design and how this affects the gaming experience, using Persona 5 as the main example.

Why Persona 5? An introduction and justification

As Sony PlayStation describes it,

“Persona 5 is a game about the internal and external conflicts of a group of troubled high school students – the protagonist and a collection of compatriots he meets in the game’s story – who live dual lives as Phantom Thieves…Ultimately, the group of Phantom Thieves seeks to change their day-to-day world to match their perception and see through the masks modern society wears.” (PlayStation)

Persona 5's success is well attested by its sales and critical reception. Its publisher, Atlus, announced that by December 2017 Persona 5 had sold two million copies across digital and physical formats, far surpassing the gross sales of its predecessors, Persona 3 and Persona 4 (Newstex). The well-known gaming outlet Game Informer gave it a rating of 93 out of 100, commenting that "You become a resident of Persona 5 the more you play it, and it has the rare ability to transport in a way few games can" (Game Informer). This top-notch sense of immersion is what draws gamers to dive in and empathize with the story and gameplay, and a huge part of it comes from the UI design, which corresponds perfectly with other aspects of the game. Another prominent video game outlet, Polygon, claimed that the game is "sleek and meticulously polished, from the gameplay to the menu UI."

It is worth mentioning that many opinion pieces about Persona 5's UI design have spoken highly of the popular culture influences it draws on. Khan, a columnist on Medium, has detailed some of the popular-culture design choices in Persona 5. As a renowned video game specifically acclaimed for its UI design, and one that uses many pop-cultural references, Persona 5 can grant us insight into how cultural references are implemented in UI design and how their efficiency can be explained (Khan).

In-depth analysis of pop culture references in Persona 5

Further insight into Persona 5's UI design cannot be obtained without examining Persona 5 as a whole. The plot of Persona 5 revolves around the figure of the rogue, a concept running through the name of the protagonist's coalition, the main storyline, and the core in-game fighting mechanics. In the game, being phantom thieves is the hidden occupation of the protagonist and his companions in the underworld, and their ethos is to fight injustice: by stealing the "treasure," a concretized desire of corrupted individuals, from the "palace" formed by their distorted desires in the cognitive world. Only by theft, rather than by killing cognitive individuals inside the palace, can villains be made to confess their crimes and rehabilitate instead of dying of psychosis. Accordingly, the underworld costume of the protagonist is a typical rogue design: a long coat, long pants, red gloves and, most significantly, a white mask, translating romanticism into a polished, determined protagonist look (Blaustein, 26).

Another consistent idea in the theme of Persona 5 is punk. Following the spirit of the "phantom thief" idea, Atlus intentionally added punk influences to its art concept to flaunt the free, edgy persona of the main characters. In the official artbook, The Art of Persona 5, the lead artist explains that the protagonist was designed to "demonstrate the punk-like attitude." When players control him, his appearance and actions bring out the inner punk of those participating in the game, conveying a sense of rebellion against repressive social orders (Blaustein, 21).

The application of these two cultural references can be observed by relating the game's visual elements to the rogue figure and the punk attitude. The references are implemented by embedding associated elements that serve as symbols in the visual design. As a methodology for unveiling the relationship between cultural references and visual design, the Peircean model of semiotics can be applied to better interpret the relationship between icons and meanings.

Several essential terms within the Peircean model are practical for conducting a visual analysis of Persona 5's UI. In the Peircean model, a sign that stands for something is the representamen; an inference made from the initial sign is an interpretant; and the object is that which lies beyond the sign. Three kinds of relationship can be observed between a representamen and its object or interpretant; however, only the icon/iconic relationship, in which the representamen is perceived to resemble the interpretant or object, will be used here as the conceptual foundation for interpreting the cultural factors beneath Persona 5's UI (Chandler, 18).

The image below, the main menu of Persona 5, will be used to analyze the semiotic relationship between the two major cultural references, rogue and punk, and the design in terms of color scheme, underlying patterns, and typography.

First, the color scheme of Persona 5's UI deeply reflects punk culture. In a panel discussion, Atlus revealed the secret behind Persona 5's UI design: for each game in the Persona series, a main color set is chosen to identify the game. For Persona 5, black and red were chosen to represent the passion and energy of the characters.

To express this idea consistently, Persona 5 avoids other colors except for HP/MP elements, making the black/red duo as distinctive as possible (Siliconera).

This design, as shown in the picture below, is very effective in recalling the punk spirit of the last century. The color scheme is consistent with what was prevalent among '80s punk rock bands, from the Dead Kennedys to Black Flag to the Misfits (Kim). Persona 5's UI is therefore a representamen of past punk icons, while punk culture is the object reflected in Persona 5's ideology. The semiotic relationship between Persona 5's UI and punk rock indicates the heavy influence it received from pop culture, and the fact that Persona 5's UI is so reminiscent of punk culture indicates how readily the audience can interpret and accept the cultural reference.

Second, some of the basic patterns, such as the stars in the background, exemplify a hypnotic, trickster-like quality that resonates with the "phantom thieves" spirit. To recognize what this repetitive, monotonous pattern represents, we can trace its immediate object, which can be found in optical illusion. Optical illusion refers to the discrepancy between what people see and what is actually there, caused by certain perceptual phenomena (Bach). Typical illusion-prone images have dream-like, repetitive yet distorted patterns, as in the photo below. These strongly resemble the stars lying in the background of Persona 5, which remind the audience of the game's cognitive-psychology backstory and the deceptive nature of the protagonist's deeds. Optical illusion thus serves as an immediate object of Persona 5's UI, which in turn becomes an icon for illusion and trickery.

Last, the typography is also a direct reflection of the punk spirit, employing irregular fonts and English menu text all at once. The lettering is highly reminiscent of '80s punk fanzines (Khan). The picture below, a fan-made Sex Pistols zine, shows how deeply Persona 5 has been influenced by the style of punk fanzines: the irregular, intrusive font is nearly inherited from the "Sex Pistols" lettering. These nostalgic punk fanzines are also immediate objects of Persona 5's UI, which lets users immediately relate the interface to the relevant cultural schema and recognize the reference.

It is notable that in Peirce's theory, the basis of an iconic relationship between representamen and object is perceived resemblance. This means that to establish an icon, a person must actively perceive the sign through the lens of past experience and prior knowledge. Such a foundation cannot exist without socio-cultural influence, which further attests to the cultural factors at work in Persona 5's UI (Chandler, 38).

How Does It Immerse the Gaming Experience?

The active employment of cultural factors in Persona 5 can also shed light on how this approach to UI design affects gamers' experience. While some of the positive reactions from gamers are easy to detect, this essay strives to reveal more insight into the positive gaming experience by drawing on hybridization and user interface design theories to offer a conceptual explanation.

The positive gaming experience may come first from the encouragement of sociability. This idea stems from the affordances of digital media, which generally encourage more social participation. To achieve this, computers have affordances that invite people to interact in a transparent way. As Murray states, "A large part of digital design is selecting the appropriate convention to communicate what actions are possible in ways that human interactor can understand" (Murray, 48). An appropriate convention can be an accessible icon or representamen that cues a specific, intended audience to become conscious of and actively participate in the social conversation. In the case of Persona 5, people who identify as enthusiastic punk lovers can accurately detect the punk conventions implied by Persona 5's UI design, and are thus further motivated by their gaming experience and more likely to share that experience within the community.

Another insight comes from the hybrid nature of digital media and the user interface, which offers more diverse experiences and perceptions for the audience. The development of new media technology enables more pre-set data and pre-recorded models to be stored and hybridized into new forms of presentation, forming "novel combination of media types of new species" (Manovich, 44). Through the reinvention and recombination of different media elements, new ways of representation can be discovered, which in turn enrich and diversify the media space. Meanwhile, it is necessary to note that "the hybrid do not necessarily have to involve a 'deep' reconfiguration of previously separate media languages and/or the common structures of media objects" (Manovich, 189). This means that referencing and incorporating pre-existing media objects into the design of new media work also counts as hybridization and can enrich the meanings and representations generated across different media forms. Although punk culture and illusion-prone imagery are well-established elements, a UI derived from these two cultural phenomena can still form an exclusive, stylish design that fits the context of Persona 5 and, to a degree, transcends the traditional notion of UI, since it has the potential to develop into a narrative. As Khan claims, "UI and UX are a part of storytelling, and can reinforce a product's overall narrative and themes. Designers are storytellers" (Khan). When UI becomes part of the game's narrative, it has the potential to stand out as a media object that expresses distinctive value and brands itself, rather than being generic and easily absorbed into other grand narratives.

The previous analysis leads to what Manovich regards as insights about good graphical user interface design. First, "the new interface should also make use of emotive and iconic mentalities" (Manovich, 98). Cultural references are powerful in mobilizing these two mentalities, as punk culture exemplifies: people are reminded of a turbulent youth that is reckless, free, motivated and righteous. Specifically, in Persona 5's UI, the sharp contrast between black and red is effective in eliciting an intense reaction from users, as indicated by the object it reflects, punk culture, with its connotations of reckless, free, righteous youth. The unique design of the UI also offers users a new experience of discovery. When fascinated by the impeccably stylish UI of Persona 5, people are potentially inspired by it and thus embark on a new journey of discovery. Manovich makes this clearer in his statement, "It is successful because it was designed to help them think, discover, and create new concepts using not just one type of mentality but all of them together" (Manovich, 219). This shows that good UI design can deepen the experience, emotion and motivation of users and audiences; furthermore, in the case of Persona 5, the emotional and iconic capabilities afforded by the design enable users to spontaneously search for new information and enrich the Persona 5 context.

Conclusion

Persona 5's UI design refers extensively to punk and rogue culture, and its implementation rests on the iconic relationship between these cultural references and the game's color scheme, visual patterns, and typography. The success and efficacy of culturally referential UI design in immersing gamers can be explained by digital media's affordance of social participation, the enrichment brought by hybridization, and the principle that user interfaces should be designed to be emotive and iconic.

References

"Persona 5 Hits Another Milestone, Sells 2M Copies Worldwide." Chatham: Newstex, 2017. Retrieved from http://proxy.library.georgetown.edu/login?url=https://search.proquest.com/docview/1970687796?accountid=11091

Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.

Dover, Shane. “The Punk Spirit of ‘Persona 5’ – A Look at Persona Through Punk Culture.” Goomba Stomp, 25 Apr. 2017, www.goombastomp.com/persona-5-punk-culture/.

Kim, Matthew. “The Japanese Punk of Persona 5 Is Its Most Defining Trait.” Polygon, Polygon, 17 Apr. 2017, www.polygon.com/2017/4/17/15328360/persona-5-japanese-punk.

“Atlus Reveals The Design Secrets Behind Persona 5’s Distinctive UI.” Siliconera, 13 Nov. 2017, www.siliconera.com/2017/11/13/atlus-reveals-design-secrets-behind-persona-5s-distinctive-ui/.

Bach, Michael. “Optical Illusions & Visual Phenomena.” Rotating Face Mask, www.michaelbach.de/ot/index.html.

“The UI and UX of Persona 5 – Ridwan Khan.” Ridwan Khan, Ridwan Khan, 25 Apr. 2017, ridwankhan.com/the-ui-and-ux-of-persona-5-183180eb7cce.

Daniel Chandler, Semiotics: The Basics. 2nd ed. New York, NY: Routledge, 2007. Excerpts.

Lev Manovich, Software Takes Command, pp. 55-239

“Persona 5.” Playstation, www.playstation.com/en-us/games/persona-5-ps4/.

“Persona 5 Review – The Triumph Of Thievery.” Game Informer, www.gameinformer.com/games/persona_5/b/playstation4/archive/2017/03/29/persona-5-review-game-informer.aspx.

Blaustein, Jeremy. The Art of Persona 5. DK/Prima Games, a Division of Penguin Random House LLC, 2017.

The meaning behind computer systems as sign systems: a semiotic perspective

Abstract

Computers are powerful tools, and today we can hardly imagine completing our daily tasks without them. But how did we come to use programming languages in computers? What are the fundamental concepts that led to the idea of artificial languages, and how do they connect to our use of natural language? This paper discusses the adoption of a sign-theoretic perspective on knowledge representation, its applications in human-computer interaction, and the fundamental devices of recognition. It proposes studying computer systems as sign systems from a semiotic perspective, based on Peirce's triadic model, and looks at an application, the General Problem Solver, an early work in computerized knowledge representation.

Introduction

As a computer science student, I have seen how powerful the art of programming can be, but what fascinates me is the representation behind the different programming languages we use. In digital computers, the user's input is transmitted as electrical pulses with only two states, on or off, represented as a 1 or a 0, and this sequence of 0s and 1s constitutes the "computer's language." But how did something so simple in concept, just two states, become so fundamental? We have to take a step back and consider the principles and interactions that led to this idea, the most important being human-computer interaction and the meaning behind signs and symbols.

Computer Systems as Sign Systems

This is where I turn to semiotics, the study of signs and symbols, to understand the meaning behind these representations. Andersen presents semiotics as a framework for understanding and designing computer systems as sign systems. Different semiotic methods can be applied to different levels of computer systems, but I will focus on a particular perspective, one that treats computer systems as targets of interpretation. I am interested in looking at programming and programming languages as a process of sign creation, and at the semiotic approach behind it. It is interesting to view computer systems as signs and symbols whose main function is to be perceived and interpreted by a group of users. Andersen suggests that when you think of computer systems through the lens of semiotics, they are no longer ordinary machines. Rather, they are symbolic machines constructed and controlled by means of signs. The interface of a system is an example of a computer-based sign, and using the system involves interpreting and manipulating text and pictures. Underneath the interface there are other signs as well. The system itself is specified by a program text or a language, which is itself a sign. The execution of the program requires a compiler, whose main function is to transform code written in one programming language into another; that makes the compiler a sign too. If we continue with this approach, passing through different layers of the system, we encounter more signs, from the operating system to assembly code to machine code.

Semiotic theories

There are many kinds of semiotic theories when it comes to defining the concept of a computer-based sign and a computer system.

1. The Generative paradigm

This paradigm was founded by Noam Chomsky in 1957. Generative grammar focuses on the individual language user, not on the social process of communication, and treats a language as a rule-defined set of sentences. Halliday explains why this is not a good approach:

“A language is not a well-defined system,  and cannot be equated with “the set of grammatical sentences”, whether that set is conceived as finite or infinite. Hence a language cannot be interpreted by rules defining such a set. A language is a semiotic system…-what I have often called a “meaning potential”…Linguistics is about how people exchange meanings by “languaging.” (Halliday 1985).

2. The Logic Paradigm

This paradigm was founded by Frege, but it only became a linguistic theory with the logical grammars of Richard Montague (1976). It consists in translating natural language sentences into logical formulas on which rules of inference can operate, which is why the theory has become an integrated part of computer science. One way to construct such a system is to represent knowledge as logical statements, translate queries into logical formulas, and let the system try to prove the query from the knowledge base. Nowadays we can connect this theory to the idea of a "neural network." By that I mean you build a system to achieve a certain goal; say you write a program that goes through image files and selects the images in which dogs appear. You can feed data to the system, the more the merrier, and train it to find the images you are looking for. But the problem with this kind of approach is that it is not a linguistic approach; it describes factual rather than linguistic behavior. If we rely only on logic and facts, we defeat the purpose of understanding sign representation.
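As a rough illustration of this logic-paradigm idea (a hypothetical sketch of my own, not drawn from any of the cited sources), the small Java program below stores knowledge as simple facts and if-then rules and answers a query by forward chaining until the query is proved or nothing new can be derived. All names are invented for illustration; it requires Java 16 or later for records.

import java.util.*;

// Knowledge as facts and rules; queries answered by forward chaining.
public class TinyLogicSystem {
    // A rule: if all premises are known, the conclusion becomes known.
    record Rule(List<String> premises, String conclusion) {}

    public static void main(String[] args) {
        Set<String> facts = new HashSet<>(List.of("socrates_is_human"));
        List<Rule> rules = List.of(
            new Rule(List.of("socrates_is_human"), "socrates_is_mortal")
        );

        String query = "socrates_is_mortal";
        System.out.println(query + " provable? " + proves(facts, rules, query));
    }

    // Keep applying rules until no new fact is derived, then check the query.
    static boolean proves(Set<String> facts, List<Rule> rules, String query) {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule r : rules) {
                if (facts.containsAll(r.premises()) && facts.add(r.conclusion())) {
                    changed = true;
                }
            }
        }
        return facts.contains(query);
    }
}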

3. The Object-Oriented paradigm

There is a relation between the concept of object-oriented programming and semiotics: the system is seen as a model of the application domain, with classes, objects and their interactions mirroring that domain. These concepts are also characteristic of a semantic analysis that goes back to Aristotle and his idea of a hierarchy of classes, also known as the Porphyrian tree (Eco, 1984).
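A toy Java sketch of this parallel, with an invented domain of my own, shows how a class hierarchy mirrors the Porphyrian tree's hierarchy of kinds:

// A small class hierarchy modeling an application domain, echoing the
// Porphyrian tree's nested classes. The domain and names are illustrative only.
public class DomainModel {
    static abstract class LivingBeing { abstract String describe(); }
    static class Animal extends LivingBeing {
        String describe() { return "an animal"; }
    }
    static class Human extends Animal {
        String describe() { return "a rational animal"; }
    }

    public static void main(String[] args) {
        LivingBeing socrates = new Human();
        System.out.println("Socrates is " + socrates.describe());
    }
}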

4. Peirce's triadic process

When talking about computers and semiotics, one person must without doubt be mentioned for his incredible work: Charles Sanders Peirce (1839-1914). As Irvine mentions, Peirce is without question the most important American philosopher of the mid-19th and early-20th century. His background was deeply interdisciplinary: Peirce was a scientist, mathematician, cartographer, linguist and a philosopher of language and signs. He commented on George Boole's work on "the algebra of logic" and Charles Babbage's models for "reasoning machines," both of which are fundamental to the logic used in today's computing systems.

Peirce's account of meaning-making, reasoning and knowledge is a generative process, often described as a triadic relation, grounded in human sign systems and all the different levels of symbolic representation and interpretation. This process is explained in Martin Irvine's formulation, quoting Peirce: "A Sign, or Representamen, is a First which stands in such a genuine triadic relation to a Second, called its Object [an Object of thought], as to be capable of determining a Third, called its Interpretant, to assume the same triadic relation to its Object in which it stands itself to the same Object" (Irvine).

Peirce's triadic model (Irvine's representation, 2016)

Although many fundamental computer science principles rely on binary states, Peirce showed that the human social-cognitive use of signs and symbols is a process that can never be binary: it is never simply science and facts versus arts and representations. Rather, understanding symbols and signs is a process that covers everything from language and mathematics to scientific instruments, images and cultural expressions.

The Peircean semiosis

As Irvine suggests, the Peircean semiotic tradition provides an open model for investigating the foundations of symbolic thought and the necessary structures of signs at many levels of analysis:

  • the generative meaning-making principles in sign systems (individually and combined with other systems like in music and cinema/video),
  • the nature of communication and information representation in interpretable patterns of perceptible symbols,
  • the function of physical and material structures of sign systems (right down into the electronics of digital media and computing architectures),
  • the symbolic foundations of cognition, learning, and knowledge,
  • how a detailed semiotic model reveals the essential ways that art forms and cultural genres are unified with scientific thought and designs for technologies by means of different ways we use symbolic thought for forming abstractions, concepts, and patterns of representation,
  • the dialogic, intersubjective conditions of meaning and values in the many lived contexts and situations of communities and societies.

Computational semiotics

Now that we have gone through the different paradigms and semiotic theories that help us understand computers as systems, and have been introduced to Peirce's process, we can take a closer look at the field of computational semiotics and its applications today.

Computational semiotics is an interdisciplinary field that draws on research in logic, mathematics, computation, natural language studies, cognitive science and semiotics. A common theme across these disciplines is the adoption of a sign-theoretic perspective on knowledge representation. Many of its applications lie in the field of human-computer interaction and in fundamental devices of recognition.

Tanaka-Ishii, in her book Semiotics of Programming, makes the point that computer languages have their own type of interpretative system, external to the interpretative system of natural languages. That is because human beings do not think in machine language, and all computer language expressions are meant to be interpreted by machines. Computer languages are the only existing large-scale sign system with an explicit, fully characterized interpreter external to the human interpretative system. Therefore, applying semiotics to computer languages can contribute to the fundamental theory of semiotics. Of course computation is different from human interpretation, and the interpretation of artificial languages differs from that of natural languages, but understanding the semiotic problems in programming languages leads us to consider the problems of signs in general.

Many of the concepts and principles of computer programming have derived from technological needs, without explicit reference to the context of human thought. An example of this is the paradigm of object-oriented programming, which we saw earlier.

Let’s take a look at a simple program written in Java that calculates the area of a triangle given three sides:

Java program calculating the area of a triangle
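The original screenshot of the program is not reproduced here, so the following is a reconstruction of the kind of code described, with class and variable names of my own: the user is prompted for three sides and the area is computed with Heron's formula.

import java.util.Scanner;

// Reads three side lengths and computes the triangle's area with Heron's formula.
public class TriangleArea {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.print("Enter side a: ");
        double a = input.nextDouble();
        System.out.print("Enter side b: ");
        double b = input.nextDouble();
        System.out.print("Enter side c: ");
        double c = input.nextDouble();

        double p = (a + b + c) / 2.0;                              // half the perimeter
        double area = Math.sqrt(p * (p - a) * (p - b) * (p - c));  // Heron's formula

        System.out.println("Area of the triangle: " + area);
    }
}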

At a glance, this is simple code for people whose field of study is computer science, but even if that is not the case, you can understand it. Why?

Because the mathematical principles and the formula for finding the area of a triangle still apply.

Following Heron's formula for the area of a triangle when the lengths of all three sides are known, we have:

Let a,b,c be the lengths of the sides of a triangle. The area is given by:

Area = √(p(p − a)(p − b)(p − c))

where p is half the perimeter, that is, p = (a + b + c)/2

Even if you do not know how to code, just by looking at the code and reading the words you can guess what the program does. The same formula is applied in our code, where the user is prompted to enter the three sides and the area of the triangle is calculated. This shows how powerful the meaning behind symbols and signs is, and how important they are, especially in a field like computing.

Let’s take a look at an early work in computerized knowledge representation.

General Problem Solver by Allen Newell, J. C. Shaw and Herbert A. Simon.

The earliest work in computerized knowledge representation focused on general problem solvers such as the General Problem Solver (GPS), a system developed by Allen Newell, J. C. Shaw and Herbert A. Simon in 1959. Any problem that can be expressed as a set of well-formed formulas (WFFs) or Horn clauses, and that constitutes a directed graph with one or more sources (i.e., axioms) and sinks (i.e., desired conclusions), can in principle be solved by GPS. Proofs in predicate logic and Euclidean geometry are prime examples of the domains to which GPS applies. It was based on Simon and Newell's theoretical work on logic machines. GPS was the first computer program to separate its knowledge of problems (rules represented as input data) from its strategy for solving problems (a generic solver engine).

The major features of the program are:

  1. The recursive nature of its problem-solving activity.
  2. The separation of problem content from problem-solving techniques as a way of increasing the generality of the program.
  3. The two general problem-solving techniques that now constitute its repertoire: means-ends analysis, and planning.
  4. The memory and program organization used to mechanize the program (noted only briefly, since there is no space to describe the computer language, IPL, used to code GPS-I).

GPS, as the authors explain, grew out of an earlier computer program, the Logic Theorist, which discovered proofs of theorems in the sentential calculus of Whitehead and Russell and is closely tied to the subject matter of symbolic logic. The Logic Theorist led to the idea behind GPS: the simulation of the problem-solving behavior of human subjects in the psychological laboratory. The human data were obtained by asking college sophomores to solve problems in symbolic logic while "thinking out loud" as much as possible as they worked.

The structure of GPS
GPS operates on problems that can be formulated in terms of objects and operators. As the authors explain, an operator is something that can be applied to certain objects to produce different objects. Objects can be characterized by the features they possess and by the differences that can be observed between pairs of objects. Operators may be restricted to certain kinds of objects, and some operators take several objects as inputs and produce one or more objects as output (Simon).

Using this idea, writing a computer program can itself be described as a problem in these terms: the objects are possible contents of the computer memory, and the operators are computer instructions that alter the memory content. A program is a sequence of operators that transforms one state of memory into another, and the programming problem is to find such a sequence when certain features of the initial and terminal states are specified.
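To make this idea concrete, here is a minimal, hypothetical Java sketch of my own (not the original GPS, which was written in IPL, and using plain breadth-first search rather than GPS's means-ends analysis): it searches for a sequence of operators that transforms an initial state into a goal state.

import java.util.*;
import java.util.function.UnaryOperator;

// Finds a sequence of operators transforming an initial state into the goal
// state, by breadth-first search over operator applications (illustrative only).
public class OperatorSearch {
    record Step(int state, List<String> operatorsApplied) {}

    public static void main(String[] args) {
        // Objects are states (here just integers); operators transform them.
        Map<String, UnaryOperator<Integer>> operators = Map.of(
            "double",  x -> x * 2,
            "add-one", x -> x + 1
        );
        int initial = 1, goal = 10;

        Deque<Step> frontier = new ArrayDeque<>(List.of(new Step(initial, List.of())));
        Set<Integer> seen = new HashSet<>(List.of(initial));

        while (!frontier.isEmpty()) {
            Step current = frontier.pollFirst();
            if (current.state() == goal) {
                System.out.println("Plan: " + current.operatorsApplied());
                return;
            }
            for (var entry : operators.entrySet()) {
                int next = entry.getValue().apply(current.state());
                if (next <= 100 && seen.add(next)) {       // bound the search space
                    List<String> plan = new ArrayList<>(current.operatorsApplied());
                    plan.add(entry.getKey());
                    frontier.addLast(new Step(next, plan));
                }
            }
        }
        System.out.println("No operator sequence found.");
    }
}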

In order for GPS to operate within a task environment, it needs several main components:

  1. A vocabulary for talking about the environment in which it operates.
  2. A vocabulary dealing with the organization of the problem-solving process.
  3. A set of programs defining the terms of the problem-solving vocabulary in terms of the vocabulary for describing the task environment.
  4. A set of programs applying the terms of the task-environment vocabulary to a particular environment, such as symbolic logic, trigonometry, algebra, or integral calculus.

Let's take a look at the executive organization of GPS. With each goal type, a set of methods is associated for achieving that goal.

GPS solved simple problems such as the Towers of Hanoi, a famous mathematical puzzle.

The GPS paradigm eventually evolved into the Soar architecture for artificial intelligence.

Conclusion

When looking at computers from a semiotic perspective, we can understand the meanings behind the system and the different layers that make it up. From the interface of a system down to machine code, we find signs and symbols all the way down. It is interesting to see the correlations and dependencies between artificial languages, such as the programming languages we use, and our use of natural language. Humans, as cognitive beings, have the advantage of a symbolic faculty and many sign systems. This helps us make mental relations between perception and thought, and with today's advanced technology we now speak of virtual and augmented reality.

Looking at Peirce's model of meaning-making, we can interpret different levels of symbolic representation and understand that the world we live in and the technology we use can never be purely binary; rather, it is a dynamic environment with many interrelated processes.

When looking at computers as systems, we have to keep in mind the technical structure of the system, its design, implementation and function. But we also have to attend to the semiotic process: when examining the theoretical framework of a system, we should also consider concepts for analyzing the signs and symbols that users interpret in their context of work. From the semiotic approach, the interfaces of a system are not separate from its functionality.

By approaching our digital and computational life from the semiotic design view, and by understanding that semiotics, in its standard conceptualization, is a tool of analysis, we can see that we live in a media continuum that is always already hybrid and mixed, and that everything computational and digital is designed to facilitate our core human symbolic-cognitive capabilities. By using semiotics as a tool of analysis we can turn computers into a rich medium for human-computer interaction and communication.

References

Andersen, Peter B. “Computer Semiotics.” Scandinavian Journal of Information Systems, vol. 4, no. 1, 1992

Andersen, P.B. A Theory of Computer Semiotics, Cambridge University Press, 1991

Clark, Andy and David Chalmers. “The Extended Mind.” Analysis 58, no. 1, January 1, 1998

De Souza, C.S., The Semiotic Engineering of Human-Computer Interaction, MIT Press, Cambridge, MA, 2005

Eco, U., A theory of Semiotics. The MacMillian Press, London, 1977

Gudwin, R.; Queiroz J. (eds) – Semiotics and Intelligent Systems Development – Idea Group Publishing, Hershey PA, USA, 2006

Halliday, M. A. K., Language as Social Semiotic. The Social Interpretation of Language and Meaning, Edward Arnold, London 1978

Halliday, M. A. K., System and Function in Language, Oxford University Press, Oxford, 1976

Hollan, James, Edwin Hutchins, and David Kirsh. “Distributed Cognition: Toward a New Foundation for Human-computer Interaction Research.” ACM Transactions, Computer-Human Interaction 7, no. 2, June 2000

Hugo, J. “The Semiotics of Control Room Situation Awareness”, Fourth International Cyberspace Conference on Ergonomics, Virtual Conference, 15 Sep – 15 Oct 2005

Irvine, Martin, “The Grammar of Meaning Making: Signs, Symbolic Cognition, and Semiotics.”

Irvine, Martin,  “Introduction to Linguistics and Symbolic Systems: Key Concepts”

Mili, A., Desharnais, J., Mili, F., with Frappier, M., Computer Program Construction, Oxford University Press, New York, NY, 1994

Newell, A., A guide to the general problem-solver program GPS-2-2. RAND Corporation, Santa Monica, California. Technical Report No. RM-3337-PR, 1963

Newell, A.; Shaw, J.C.; Simon, H.A., Report on a general problem-solving program. Proceedings of the International Conference on Information Processing, 1959

Norvig, Peter,  Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. San Francisco, California: Morgan Kaufmann. pp. 109–149. ISBN 1-55860-191-0, 1992

Peirce, Charles S. From “Semiotics, Symbolic Cognition, and Technology: A Reader of Key Texts,” collected and edited by Martin Irvine.

Ray Jackendoff, Foundations of Language: Brain, Meaning, Grammar, Evolution. New York, NY: Oxford University Press, USA, 2003

Renfrew, Colin. “Mind and Matter: Cognitive Archaeology and External Symbolic Storage.” In Cognition and Material Culture: The Archaeology of Symbolic Storage, edited by Colin Renfrew, 1-6. Cambridge, UK: McDonald Institute for Archaeological Research, 1999.

Rieger, Burghard B, A Systems Theoretical View on Computational Semiotics. Modeling text understanding as meaning constitution by SCIPS, in: Proceedings of the Joint IEEE Conference on the Science and Technology of Intelligent Systems (ISIC/CIRA/ISAS-98), Piscataway, NJ (IEEE/Omnipress), 1998

Sowa, J, Knowledge representation: logical, philosophical and computational foundations, Brooks/Cole Publishing Co. Pacific Grove, CA, USA, 2000

Tanaka-Ishii, K.  “Semiotics of Programming”, Cambridge University Press, 2010

Wing, Jeannette “Computational Thinking.” Communications of the ACM 49, no. 3, March 2006

Intelligent Personal Assistant and NLP

“Alexa, what’s the weather like today?”

As intelligent personal assistants begin to play a more significant role in our daily lives, conversation with a machine is no longer science fiction. But few ever bother to ask: how did we get here? All the intelligent personal assistants – Siri, Cortana, Alexa… – were they inevitable, or did they just happen to turn out this way? And, in the end, what enables us to communicate with a machine?

Any intelligent personal assistant can be considered a complex system. From the software layer to the hardware layer, a working intelligent personal assistant is the collective effort of many components, both tangible and intangible.

Though a functioning intelligent personal assistant is the product of a bigger structure, the most intuitive part, from a user's perspective, is the back-and-forth procedure of "human-machine interaction." At the current stage, most of the technology companies that offer intelligent personal assistant services are trying to make their products more "human-like." This, again, is an entire project consisting of big data, machine learning (deep learning), neural networks and other disciplines related to, or beyond, artificial intelligence. But on the user-facing end, there is one subsystem we need to talk about: natural language processing (NLP).

What is NLP?

When we decompose the conversational flow between individuals, a three-step procedure emerges. The first step is receiving the information: generally, our ears pick up the sound waves that are generated by some kind of vibration and transmitted through the air.

The second step is processing the information. The acoustic signal that was received is matched against existing patterns in your brain and assigned corresponding meanings.

The third step is outputting information. One disseminates the message by generating an acoustic signal via transducers so that it can be picked up by the other end, keeping the conversation flowing.

When it comes to "human-machine interaction," NLP follows a similar pattern, imitating the three-step procedure of inter-human communication. By definition, NLP is "a field of study that encompasses a lot of different moving parts, which culminates in the 10 or so seconds it takes to ask and receive an answer from Alexa. You can think of it as a process of roughly 3 stages: listening, understanding, and responding" (Kim, 2018).

To handle the different stages of this procedure, Alexa was designed as a system with multiple modules. For the listening part, one of the "front-end" modules picks up the acoustic signal with its sensors upon hearing voice commands or "activation phrases."

This module is connected to the internet through wireless technologies so that it can send information to the back end for further processing.

Understanding can also be thought of as the processing part: speech recognition software takes over and helps the computer transcribe the user's spoken English (or other supported languages) into corresponding text. This procedure is the tokenization of the acoustic wave, which is not a self-contained medium; through this transformation, waves are turned into tokens and strings that the machine can handle. The ultimate goal of this analysis is to turn the text into data. Here comes one of the hardest parts of NLP: natural language understanding (NLU), considering "all the varying and imprecise ways people speak, and how meanings change with context" (Kim, 2018). This brings in the entire linguistic side of NLP, as NLU "entails teaching computers to understand semantics with techniques like part-of-speech tagging and intent classification — how words make up phrases that convey ideas and meaning" (Kim, 2018).

All of this happens in the cloud, in a process that also parallels how the human brain functions when dealing with natural language.

When a result is reached, we come to the final stage: responding. This is an inverse of natural language understanding, since data is turned back into text. Once the machine has an outcome, two more steps remain. One is prioritizing, which means choosing the data most relevant to the user's query; this leads to the second step, reasoning, the process of translating the resulting concept into a human-understandable form. Lastly, "Once the natural-language response is generated, speech synthesis technology turns the text back into speech" (Kim, 2018).
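As a rough illustration only, the Java skeleton below sketches the listen-understand-respond flow just described. Every method is a stub with invented names and canned values; none of this reflects Amazon's actual implementation.

// A hypothetical skeleton of the listen → understand → respond flow.
public class AssistantPipeline {

    public String handleUtterance(byte[] acousticSignal) {
        String text = transcribe(acousticSignal);   // listening: speech → text
        String intent = understand(text);           // understanding: text → intent/data
        String answer = respond(intent);            // responding: prioritize + reason
        return synthesize(answer);                  // text-to-speech (stubbed as text)
    }

    private String transcribe(byte[] signal) { return "what's the weather like today"; }
    private String understand(String text)  { return "weather_query(today)"; }
    private String respond(String intent)   { return "It is sunny and 70 degrees."; }
    private String synthesize(String reply) { return reply; }

    public static void main(String[] args) {
        System.out.println(new AssistantPipeline().handleUtterance(new byte[0]));
    }
}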

Now that we have a basic picture of the NLP procedure, we can return to the questions raised at the beginning: why is the NLP architecture of an intelligent personal assistant designed in this way?

Consider the transducer part of the system. This seems intuitive at first glance: a sensor acting as a transducer is the equivalent of human ears, picking up the acoustic waves as needed. But design questions arise here: what would be the ideal form of housing for an intelligent personal assistant?

Since Siri was introduced to the world as a built-in function of the iPhone, it had to fit into a compact mobile device with a screen and only two microphones. This increased portability and flexibility at the cost of reliability.

It is natural for humans to distinguish useful information from background noise. In everyday conversation, we consciously pick up the acoustic waves relevant to our own conversation and not to others.

When this is applied to the human-machine interaction scenario, error prevention is the direction to go: "rather than just help users recover from errors, systems should prevent errors from occurring in the first place" (Whitenton, 2017). With the development of speech recognition technology, errors in NLU have dropped dramatically. "But there's one clear type of error that is quite common with smartphone-based voice interaction: the complete failure to detect the activation phrase. This problem is especially common when there are multiple sound streams in the environment" (Whitenton, 2017).

To tackle this problem, Amazon built Alexa dedicated hardware, the Echo, which puts voice interaction first. "It includes seven microphones and a primary emphasis on distinguishing voice commands from background noise" (Whitenton, 2017).

NLP and Linguistics

Why is this so important? “Meaning is an event, it happens in the process of using symbols collectively in communities of meaning-making – the meaning contexts, the semantic networks and social functions of digitally encoded content are not present as properties of the data, because they are everywhere systematically presupposed by information users” (Irvine, 2014)

As the very first step in human-machine interaction, the primary condition on the machine side is the ability to properly receive the message from the human side. At the same time, context is very important in human-machine interaction. The purpose of NLP is to generate an experience that is as close as possible to inter-human communication. As every conversation needs a starting point, a responsive intelligent personal assistant "requires continuous listening for the activation phrase" (Whitenton, 2017) so that it can be less intrusive – in the case of Alexa, one does not need to carry it around or follow any fixed steps to "wake up" the system. The only necessity is a natural verbal signal ("Alexa") to trigger the conversation.

After the assistant acquires the needed information, the whole "black box" that lies beneath the surface starts functioning. As mentioned above, an intelligent personal assistant first sends all the data to the back end. Since language is about coding "information into the exact sequences of hisses and hums and squeaks and pops that are made" (Pinker, 2012), machines then need the ability to recover the information from the corresponding stream of noises.

We can look at one possible methodology that machines use to decode natural language.

Consider part-of-speech tagging, or syntax. A statistical speech recognition model can be used here to "converts your speech into a text with the help of prebuilt mathematical techniques and try to infer what you said verbally" (Chandrayan, 2017).

This approach takes the acoustic data and breaks it down into short intervals, e.g. 10–20 ms. "These datasets are further compared to pre-fed speech to decode what you said in each unit of your speech … to find phoneme (the smallest unit of speech). Then machine looks at the series of such phonemes and statistically determine the most likely words and sentences to spoke" (Chandrayan, 2017).

Moving forward, the machine looks at each individual word and tries to determine its word class, tense, and so on, since "NLP has an inbuilt lexicon and a set of protocols related to grammar pre-coded into their system which is employed while processing the set of natural language data sets and decode what was said when NLP system processed the human speech" (Chandrayan, 2017).
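As a toy illustration of this lexicon idea, the hypothetical Java sketch below tags each word of an utterance by looking it up in a small hand-built lexicon of my own; real systems use statistical or neural taggers and far larger lexicons with grammatical protocols.

import java.util.*;

// Tags each word by lexicon lookup; unknown words fall back to "UNKNOWN".
public class LexiconTagger {
    private static final Map<String, String> LEXICON = Map.of(
        "alexa",   "NOUN",
        "what",    "PRONOUN",
        "is",      "VERB",
        "the",     "DETERMINER",
        "weather", "NOUN",
        "like",    "PREPOSITION",
        "today",   "ADVERB"
    );

    public static void main(String[] args) {
        String utterance = "Alexa what is the weather like today";
        for (String word : utterance.toLowerCase().split("\\s+")) {
            String tag = LEXICON.getOrDefault(word, "UNKNOWN");
            System.out.println(word + " -> " + tag);
        }
    }
}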

Now that we have the foundation for decoding language by breaking it down, what is the next step? Extracting the meaning. Again, meaning is not a property but an event; in that sense, meaning is not fixed – it changes all the time.

In interpersonal communication, it feels natural to constantly refer to context and spot subtle differences.

For now, though, most intelligent personal assistants are "primarily an additional route to information gathering and can complete simple tasks within set criteria" (Charlton, 2017). This means they do not fully understand the user and their intentions.

For instance, when we ask someone for the price of a flight ticket, the response – beyond the actual price – could be a question about whether we are going to a certain place or need a price alert for that flight. But we cannot really expect these kinds of follow-up responses from an intelligent personal assistant.

So let's go back to interpersonal communication: how do we come up with follow-up responses in the first place? We reason and deduce empirically, interconnecting things that could be relevant – such as the intention to go somewhere and the act of asking the price of certain flight tickets. When we hold machines to the same expectation, they must, on the one hand, conduct a reasoning process similar to ours in order to draw the conclusion; on the other hand, they need a pool of adequate empirical resources to draw it from. The point is that the empirical part can vary from individual to individual, which means the interaction pattern needs to be personalized on top of some general reasoning.

In this sense “Google Assistant is probably the most advanced, mostly because it’s a lot further down the line and more developed in terms of use cases and personalization. Whereas Alexa relies on custom build ‘skills’, Google Assistant can understand specific user requests and personalize the response.”  (Charlton, 2017)

This is not something to be built overnight but rather a long-term initiative: “The technology is there to support further improvements; however, it relies heavily on user adoption … The most natural improvement we expect to see is more personalization and pro-active responses and suggestions.” (Charlton, 2017)

Now that the machine has the "artificial language" in hand, the next step is to translate this language into "meaningful text which can further be converted to audible speech using text-to-speech conversion" (Charlton, 2017).

This is relatively easy work compared to the natural language understanding part of NLP: "The text-to-speech engine analyzes the text using a prosody model, which determines breaks, duration, and pitch. Then, using a speech database, the engine puts together all the recorded phonemes to form one coherent string of speech" (Charlton, 2017).

Intelligent Personal Assistant as Metamedium

But when you look into the way many answers are generated, the computer (in the case of the intelligent personal assistant, cloud computing) functions as a metamedium. This is significant in at least two ways.

To begin with, as a metamedium, the intelligent personal assistant "can represent most other media while augmenting them with many new properties" (Manovich, 2013). In the specific case of Alexa, the integration of hardware and software, and the synergy it produces, is significant.

Sensors, speakers, wireless modules, the cloud … all of these elements can fulfill specific tasks by themselves. But by combining them, the new architecture not only achieves goals that could never be accomplished by any individual component; the components, in turn, are granted new possibilities, such as sensors that, empowered by software, can distinguish specific sounds from ordinary background sound.

Another important aspect is the chemistry generated by the interplay of the individual components. In the case of the intelligent personal assistant, one possibility is data fusion. In Software Takes Command, Manovich describes it as follows: "another important type of software epistemology is data fusion – using data from different sources to create new knowledge that is not explicitly contained in any of them" (Manovich, 2013).

This could be a very powerful tool in the evolution of the intelligent personal assistant: "using the web sources, it is possible to create a comprehensive description of an individual by combining pieces of information from his/her various social media profiles making deductions from them" (Manovich, 2013). This idea is in line with the vision of an intelligent personal assistant that is more personalized and proactive. If an intelligent personal assistant is granted proper access to user information and the user is willing to communicate with it, the system can advance rapidly. The advantage of an NLP-capable intelligent personal assistant as a metamedium is thus its ability to combine information from both ends (users and social media platforms) in order to reach better decisions.

At the same time, as users become one of the media sources depicting the big picture of user personas, they also benefit from the procedure: "combining separate media sources could also give additional meanings to each of the sources. Considering the technique of the automatic stitching of a number of separate photos into a single panorama" (Manovich, 2013).

The intelligent personal assistant, upon receiving input from users via NLP, can be both a mirror and a dictionary for them: it reflects users' characteristics and enhances the user experience, owing to its nature as a metamedium.

Another question that the metamedium character of the intelligent personal assistant can answer is "why would we need such a system?" Looking back at the trajectory of technological development, we notice that the evolution of HCI and the "metamedium" ecology around the computer is largely a history of the mutual education of computer and human.

Before we got used to smartphones with built-in cameras, people questioned the necessity of the idea: why would I need a phone that takes pictures? But now we use phones as our primary photographic tools and even handle much of our media production on them. Using smartphones for photo and video editing did not happen until the smartphone as a platform absorbed the camera as a component and hardware development gave the platform the capability to do so. This trend has, to a great extent, driven the popularity of social networks like Instagram and Snapchat.

A similar story applies to the intelligent personal assistant. When Siri, the first mainstream intelligent personal assistant, was released back in 2011, the criticisms it received ranged from requiring stiff user commands and lacking flexibility to missing information on nearby places and failing to understand certain English accents. People doubted the necessity of a battery-draining service like this on their phones. Now, after seven years of progress, we not only see a boom in intelligent personal assistants, we have grown used to them as well, especially in certain scenarios: when you are cooking and want to set a timer or pull up a recipe, or when you are driving and want to start the navigation app. An NLP-capable intelligent personal assistant is by far the best solution to these former dilemmas.

In a market research conducted by Tractica, “unique active consumer VDA users will grow from 390 million in 2015 to 1.8 billion worldwide by the end of 2021. During the same period, unique active enterprise VDA users will rise from 155 million in 2015 to 843 million by 2021.  The market intelligence firm forecasts that total VDA revenue will grow from $1.6 billion in 2015 to $15.8 billion in 2021.” (Tractica, 2016)

(VDA refers to Virtual Digital Assistants)

Systems Thinking

After this brief discussion of the intelligent personal assistant with a focus on NLP, it is a good time to touch on an important principle for dealing with such systems. We have spent most of the paper talking about NLP and have barely scratched the surface of what NLP really is. Yet NLP is only a subsystem of the intelligent personal assistant architecture, which is itself only one manifestation of a larger discipline, artificial intelligence.

So, when talking about intelligent personal assistants or NLP, we cannot regard them as isolated properties, ignoring the pervasive connections among systems and subsystems and their interdependence: "systems thinking is non-reductionist and non-totalizing in the methods used for developing explanations for causality and agency: nothing in a system can be reduced to single, independent entities or to other constituents in a system" (Irvine, 2018).

This requires us to put both the intelligent personal assistant and NLP into context. The intelligent personal assistant is the result of the joint work of many subsystems like NLP, and NLP itself is built on the foundation of its own subsystems. None of these units could have achieved what we have now on its own.

After all, graphite and diamond both consist of carbon, differing only in the structural arrangement of the element, yet they end up with totally different characters. When we look at a single point, we simply miss the whole picture.

Conclusion

The intelligent personal assistant is a great representation of artificial intelligence in the sense that it creates a tangible platform for humans to interact with. Within it, NLP as a subsystem provides the intelligent personal assistant with the tools to communicate naturally with its users.

In de-blackboxing NLP, we looked at both its software and hardware layers, following a step-by-step pattern of listening, understanding, and responding. Across these layers and steps, all the components, including transducers, the cloud, and voice recognition software, work both independently and collectively to generate the "natural communication" that we experience in real life.
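To make the listening–understanding–responding pattern concrete, here is a minimal, purely illustrative Python sketch. The function names and the keyword matching are hypothetical stand-ins, not the architecture of any actual assistant such as Siri or Alexa; a real system would rely on microphones and transducers, cloud speech-to-text, statistical NLP models, and speech synthesis rather than these placeholders.

```python
# A minimal sketch of the listen -> understand -> respond pattern described above.
# All names and the keyword matching are hypothetical placeholders.

def listen(audio: bytes) -> str:
    """Stand-in for the transducer + speech-recognition layer: audio -> text."""
    # Here we simply pretend the audio has already been transcribed.
    return audio.decode("utf-8")

def understand(utterance: str) -> str:
    """Stand-in for the NLP layer: map free-form text to an intent label."""
    text = utterance.lower()
    if "alarm" in text:
        return "set_alarm"
    if "navigate" in text or "directions" in text:
        return "start_navigation"
    return "unknown"

def respond(intent: str) -> str:
    """Stand-in for the response layer: turn an intent into a spoken reply."""
    replies = {
        "set_alarm": "Okay, your alarm is set.",
        "start_navigation": "Starting navigation now.",
        "unknown": "Sorry, I didn't catch that.",
    }
    return replies[intent]

if __name__ == "__main__":
    print(respond(understand(listen(b"Please set an alarm for me"))))
```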

On the methodological side, we regarded the Intelligent Personal Assistant as a metamedium in analyzing its ability and potential to evolve and transform. We also touched upon the basic linguistic elements used in designing the processes of NLP. Finally, the complexity and systems-thinking approach was brought in to frame the Intelligent Personal Assistant and NLP as each being both a self-contained entity and a part of a larger architecture.

 

References

1: Kim, Jessica. "Alexa, Google Assistant, and the Rise of Natural Language Processing." Lighthouse Blog, 23 Jan. 2018, blog.light.house/home/2018/1/23/natural-language-processing-alexa-google-nlp.

2: Whitenton, Kathryn. “The Most Important Design Principles Of Voice UX.” Co.Design, Co.Design, 28 Apr. 2017, www.fastcodesign.com/3056701/the-most-important-design-principles-of-voice-ux.

3: Irvine, Martin. “Key Concepts in Technology, Week 4: Information and Communication.” YouTube, YouTube, 14 Sept. 2014, www.youtube.com/watch?v=-6JqGst9Bkk&feature=youtu.be.

4: Pinker, Steven. “Steven Pinker: Linguistics as a Window to Understanding the Brain.” YouTube, YouTube, 6 Oct. 2012, www.youtube.com/watch?v=Q-B_ONJIEcE.

5: Chandrayan, Pramod. "A Guide To NLP : A Confluence Of AI And Linguistics." Codeburst, Codeburst, 22 Oct. 2017, codeburst.io/a-guide-to-nlp-a-confluence-of-ai-and-linguistics-2786c56c0749.

6: Charlton, Alistair. “Alexa vs Siri vs Google Assistant: What Does the Future of AI Look like?” Gearbrain, Gearbrain, 27 Nov. 2017, www.gearbrain.com/alex-siri-ai-virtual-assistant-2510997337.html.

7: Manovich, Lev. Software Takes Command. vol. 5;5.;, Bloomsbury, London;New York;, 2013.

8: Tractica. “The Virtual Digital Assistant Market Will Reach $15.8 Billion Worldwide by 2021.” Tractica, 3 Aug. 2016, www.tractica.com/newsroom/press-releases/the-virtual-digital-assistant-market-will-reach-15-8-billion-worldwide-by-2021/.

9: Irvine, Martin. “Media, Mediation, and Sociotechnical Artefacts: Methods for De-Blackboxing.” 2018.

Self-Presentation on LinkedIn: From the Perspective of Mediation

Shuqui Liu

Abstract

LinkedIn, as the most popular and trusted social media platform for professional careers, exerts an increasingly indispensable influence on building social networks and applying for jobs. This paper focuses on the role LinkedIn plays in that process from the perspective of mediation. Technically speaking, LinkedIn is an innovative remediation of previous media and technologies; it affords more scenarios and allows bilateral communication online. As a meta-medium representing other media, LinkedIn uses personal portraits, online resumes, and the homepage as tokens that convey professionalism and reliability based on default social consensus. Online presentation on LinkedIn is essentially a simulation of authenticity, and self-presentation on LinkedIn must confront a serious question: who is the audience?

Key Words: Self-Presentation; LinkedIn; Social Media; Mediation


Introduction

LinkedIn, launched in 2003, is the largest online social medium focused mainly on professional networking, job seeking, and recruitment (Girard & Fallery, 2010). It targets those who need to find jobs or employees. It is a platform both for employers to promote themselves and post job openings and for applicants to present their resumes and leave positive impressions. In addition, people use LinkedIn to build new social networks and gain more social resources; for example, they search for people who work in their ideal companies or industries and message them via LinkedIn to gather useful information.

Driven by these unequivocal purposes, users of LinkedIn play the roles in which they expect to be seen in order to achieve their occupational goals. In other words, they are engaging in self-presentation, "the use of behaviors to communicate some information about oneself to others" (Baumeister, 1982). In the words of Erving Goffman in The Presentation of Self in Everyday Life, people are performing "impression management" on a front stage, which here is LinkedIn. According to Goffman, a self is a dramatic effect that emerges from the scenes being presented (Goffman, 1965).

Though interaction on social media is still carried out by humans, as the whole interactive framework changes, the skills and consequences of how individuals interact and present themselves obviously change as well. Because of the virtuality of social media, all information expressed on it is symbolic and meaningful, and it can be differentiated from performance in real life (Hogan, 2010). At the same time, it is precisely because social media is equipped with abundant symbols and meanings that it has become a popular platform for individuals to show themselves.

Extant research on self-presentation concentrates mainly on Facebook, Twitter, and Instagram, which cover a broad range of people's lives. By comparison, fewer papers address segmented or vertical social media, such as LinkedIn in the professional field. Moreover, the majority of studies focus on psychological or social factors influencing self-presentation, or place it in the context of social constructionism. Therefore, this essay starts from the perspective of social media design, combining media theory, semiotics, and information theory to explain the roles LinkedIn plays in the formation and demonstration of users' performative selves. That is to say, it is necessary to de-blackbox LinkedIn as a social medium in the context of self-presentation. What design principles does LinkedIn follow? What are its characteristics as a profession-oriented social medium? How does it mediate the performative self?

Understanding LinkedIn as a Social Medium

Tracing the term "medium" back to its origin, it means "in the middle" or "in between" (Irvine, 2018). A medium connects two or more things and establishes mutual relationships, such as between carriers and content, or signifiers and signifieds. As technology develops, forms of media evolve along with it, from paper, telegraphs, radio, and television to the Internet and beyond. Various media forms result in various systems of expression, transmission, and consumption. Thus, de-blackboxing LinkedIn is necessary to better understand it as a unique social medium that is relatively new compared with preceding media.

Affordance. Prior to the emergence of LinkedIn, there were already strong needs to hunt both for jobs and for personnel, and other intermediate actants emerged to bear the same duties. Some newspapers and magazines devote a whole page to recruiting advertisements, and offline career fairs or meet-and-greet events are necessary as well. Nevertheless, these existing approaches have disadvantages. For instance, a newspaper advertisement is a linear transmission of information, lacking sufficient communication between employers and applicants. Career fairs are limited by time and space, which constrains their scale or increases the cost of searching for matched candidates.

With LinkedIn, by contrast, these drawbacks diminish to a degree, because it enables bilateral communication and transcends the limits of time and space, relying on the ubiquity of the Internet. It affords more scenarios than conventional approaches serving the same functions. The concept of affordance was introduced by perceptual psychologist James Gibson and used by Donald Norman to describe the action possibilities perceivable by an actor (Norman, 1999). In this case, LinkedIn is the actor that takes over the responsibilities of newspapers and career fairs.

From the perspective of self-presentation, LinkedIn provides two specific affordances for users. First, we can exhibit or demonstrate our digital imprint (Tufekci, 2008). That is to say, unless we delete or hide our information on our own initiative, our homepage is accessible to all visitors all the time. Second, self-presentation on LinkedIn often lacks specific context, so information is fragmented and isolated, without the underpinning of an understandable environment. Some scholars have called this a "cue-reduced environment" (Walther, 1996), and others have called it "context collapse" (Marwick & Boyd, 2011).

Remediation. With the rise of multiple online social communication platforms, what we usually call "social media", such as Facebook and Twitter, increasing numbers of segmented and professional social media have emerged, LinkedIn among them. LinkedIn is designed to simulate or re-mediate earlier forms of media that existed long before it, such as paper resumes, blogs, and instant-messaging tools. In this sense, there is nothing new about LinkedIn. It does not invent anything but merely finds an innovative way to organize modules we already have and make them better serve its purpose. As a result, drawing on knowledge and usage principles already ingrained in people's cognitive systems, LinkedIn is very user-friendly; even a new user can become familiar with it in a short time. Moreover, its remediation of previous media forms also turns it into a meta-medium that represents other media, such as writing, images, videos, and communication tools.

LinkedIn’s Design and Its Career-Oriented Meanings

Based on the discussion above, LinkedIn acts as a social medium bearing multiple media forms and functional modules. Why does it adopt these media and functions? How do they influence personal self-presentation? Charles Sanders Peirce's trichotomy of signs can be applied to explain this.

Personal Portraits. On LinkedIn, the most direct way to showcase oneself and leave a strong, positive impression on others is the personal photo. A professional and attractive photo can catch people's attention instantly and spark their interest at first glance, in the absence of any other background information. Usually, those who want to make an appealing impression have photos taken specifically for job seeking.

There are a couple of characteristics of such photos. First of all, most people wear formal attire or professional uniforms in their pictures. Dressing decently makes people look more reliable and intelligent, even though it is hard to tell how strong the correlation is; it works more like a default setting. People sometimes follow social norms unconsciously. In the process of socialization, such as receiving education and being exposed to mass media, concepts like business formal, business casual, and casual are implanted in people's minds. The relations between dress codes and appropriate occasions were not established naturally but emerged from a long history of fashion development and social agreement. In Peirce's terms, these are symbolic tokens built upon social consensus. Aside from the dress code, facial expressions and background images are also strictly, if implicitly, regulated (Tifferet & Vilnai-Yavetz, 2017). The majority of people choose smiling or serious expressions instead of exaggerated or hilarious ones, and background images are usually solid colors or meaningful representamens.

Given these latent norms, an apparent difference between personal images on LinkedIn and on other common social media is that photos on LinkedIn are objects that are supposed to convey professionalism and reliability, and the meanings behind them are socially constructed. By contrast, users can use whatever photos they like as personal images on other social media, because in that case they are free to reveal their personalities and uniqueness.

Online Resume. The resume is the main arena for self-presentation on LinkedIn. It condenses all of a job applicant's relevant background information onto one page, which can play a decisive role in whether an employer is interested in you. There is a popular saying that "an employer doesn't spend more than five seconds scanning a resume." It may be a little exaggerated, but it sheds light on how important it is to present oneself well via the resume.

A resume is not a creative invention at all. However, putting resumes on a social medium and making them available to everyone is another story. The online resume is the most popular and useful module of LinkedIn. On the one hand, it lets users present themselves: they sift through all the experiences they have had and try to pick out a few highlights, listing their educational and work experience, skills and honors, volunteer history, and other interests in order to meet the requirements an ideal job sets. On the other hand, headhunting firms and human resources departments are able to gain a basic understanding of possible employees efficiently. Compared with searching and browsing resumes online, traditional recruitment methods are more likely to miss talented people.

In this dual process, the symbolic meanings of resumes are revealed. Educational experience shows how intelligent or hard-working an applicant is; work experience reflects competence; skills show whether one is equipped with the essentials a position requires. Nothing is meaningless, because potential employers' attention is limited. All of this information rests on a default social contract that whatever is mentioned on a resume takes priority when deciding whether one is qualified.

Homepage. Technically speaking, the homepage on LinkedIn is similar in appearance to its counterparts on Facebook and Twitter. It is a subsidiary platform for exhibiting oneself, allowing you to post texts, photographs, and videos that anyone following you can see.

Though the social media mentioned above share analogous functions, they are placed in distinctive contexts and therefore carry different meanings and purposes. Self-presentation on common social media varies greatly depending on what kind of image one wants to present. On LinkedIn, by contrast, the goal is quite clear: everyone wishes to demonstrate an enterprising and responsible self. In consequence, nearly all the content posted on one's page is carefully selected and organized for the potential audience.

For instance, someone might upload a couple of photographs of working overtime to show how diligent he is. In the terms of information theory, the photographs here are encoded with hidden meanings. However, the audience might decode these photos into a completely different meaning, namely that the person is inefficient. What distorts information in the process of transmission is noise; anything that interferes with the accuracy of transmission, such as cultural barriers and personal understandings, can be regarded as noise. Precisely because the purpose here is so clear, any misunderstanding can be disastrous, and no one is willing to risk damaging his career. Thus, in order to avoid the distortion of information, most people on LinkedIn adopt relatively conservative strategies to promote themselves. That is to say, what they post usually carries clear expressions and direct connotations.

Characteristics of Self-Presentation on LinkedIn

Essentially, as stated above, self-presentation on LinkedIn is a self-promotion campaign. LinkedIn acts as an online social platform where users reconstruct and renovate their self-images to establish a social network and hunt for jobs. It resembles a traditional interview, but the most distinctive difference is that LinkedIn is like a comprehensive interview unbound by time and place, so applicants have no choice but to wear masks and perform professionally whenever they use it. Once you log in to your LinkedIn account, the curtain opens and the show begins.

Some previous studies examine self-presentation on social media from the perspectives of perfectionism, narcissism, and personality traits. Undoubtedly, these aspects also affect how individuals present themselves on LinkedIn, but they do not have the strongest explanatory power here. Self-presentation on common social media arises from needs for self-fulfillment and self-appreciation: those who immerse themselves in self-presentation usually set high standards for themselves and attempt to achieve their ideal images via social media. Self-presentation on LinkedIn, however, comes from a certain pressure, because people need to expand their social resources and lay the foundations for a future career. Even if you are not outgoing or articulate, professional pressure still urges you onto the stage. Therefore, Erving Goffman's dramaturgical theory may be more appropriate for the case of LinkedIn.

Simulation and Authenticity. In early studies of self-presentation, Goffman spotlights interpersonal interaction through linguistic and non-linguistic symbols in face-to-face communication in real life, which can be compared to a live show. For example, a waiter smiling at his customers does not imply that he really welcomes them; it is just a universal ritual. He is playing the role of a waiter, so he has to smile.

In the circumstances above, self-presentation fades out within seconds and, on most occasions, cannot be recorded or reproduced by cameras. Its counterpart on LinkedIn is different: unless the subject deliberately deletes it, the data can remain for quite a long time. Though there are differences between what Goffman described and what we experience on LinkedIn, the concepts he provided can still be applied to online communication.

From Goffman's perspective, self-presentation is a daily performance. On LinkedIn, however, it comes closer to an exhibition. Bernie Hogan argues that self-presentation on the Internet has evolved from performance to self-exhibition (Hogan, 2010). There are two layers to discuss here. First and foremost, the whole process of simulation goes through at least three procedures: categorizing facts about oneself, sorting them by importance, and making decisions. Less crucial information, or something equally critical that the owner wants to conceal, will be omitted; a job applicant can hide a recent failed project and show only what he succeeded in, so an HR manager might judge his competence inaccurately. Beyond that, in the absence of context, such as the coordinates of time and space, the result can be vague or even misleading for the audience. That is why Benjamin argued that the "sphere of authenticity is outside the technical [sphere]" (Benjamin, 1936). Although Benjamin put forward this concept in discussing the reproduction of artworks, it helps us understand the insurmountable gap between one's image on LinkedIn and oneself. Lacking actual, direct interaction, the aura is eroded. Posting information on LinkedIn can be seen as a kind of reproduction of the self; no matter how much information a job applicant gives, it is still hard to know him genuinely.

Audience Isolation and Imagined Audience. Daily self-presentation must be placed within certain boundaries; in other words, people adopt different strategies to manage their impressions in different contexts. The reason context exerts such an indispensable effect is that it involves role-playing and an imagined audience. Everyone plays multiple roles in the world, such as a father, a citizen, a worker, and a Marvel fan. It is difficult for one person to play more than one role at the same time, because the requirements and expectations are sometimes completely different. Each role has its corresponding responsibilities and, accordingly, personality. People design a set of behavioral patterns for a certain group of audience members, and if anyone else interrupts the balance, the situation becomes awkward. That is what Goffman calls "audience isolation."

But audience isolation has two premises. First, the audience must be identifiable. In the case of LinkedIn, however, who will visit your CV is beyond your control; it could be an HR manager, an old friend, or even a stranger. The audience of your homepage is no longer identifiable but merely a kind of "imagined audience." Second, the audience must be isolable. On LinkedIn you can adjust the privacy settings to screen visitors, which is workable, but the costs in time and energy are high, and doing so precludes some potential social resources from getting in touch with you. As a consequence, users tend to choose the tactic of the "lowest common denominator," performing neutrally and conservatively to avoid any possible trouble.

Conclusion

LinkedIn, as the most popular and trusted social media platform for professional careers, exerts an increasingly indispensable influence on building social networks and applying for jobs. For the sake of self-promotion, the majority of users selectively present glorified selves on it. This paper has focused on the role LinkedIn plays in that process from the perspective of mediation.

First, technically speaking, LinkedIn itself is an innovative remediation of previous media and technologies, such as blogs, CVs, and instant-messaging tools. Before LinkedIn emerged, the needs to hunt for jobs and talent already existed but were met by other forms, like career fairs and recruitment advertisements in newspapers. Compared with these earlier forms, LinkedIn affords more scenarios and allows bilateral communication online. Although these forms remain today, LinkedIn has assumed their roles to some extent.

As a meta-medium representing other media, LinkedIn has many modules designed for self-presentation. This paper emphasizes three of them: personal portraits, online resumes, and the homepage. Applying Peirce's semiotic theories, these three modules are all tokens that convey professionalism and reliability based on default social consensus, but they also have nuanced differences rooted in their own characteristics, and they function as a whole to set up a positive self-image.

Extant studies mainly engage with Facebook, Twitter, and Instagram, so it is worthwhile to examine what is unique about LinkedIn. This paper has briefly engaged with Erving Goffman's dramaturgical theory. Online presentation on LinkedIn is essentially a simulation of authenticity, which inevitably leaves a gap between it and the real person or real interaction; in Benjamin's words, the aura fades away in this process. Beyond that, self-presentation on LinkedIn must confront a serious question: who is the audience? Because the audience becomes unpredictable and therefore uncontrollable on LinkedIn, users are more inclined to perform neutrally and safely.

References

Baumeister, R. F. (1982). A self-presentational view of social phenomena. Psychological Bulletin, 91(1), 3e26. https://doi.org/10.1037/0033-2909.91.1.3.

Erving Goffman (1965). The Presentation of Self in Everyday Life

Girard, A., & Fallery, B. (2010). Human resource management on the Internet: New Perspectives. Journal of Contemporary Management Research, 4(2), 1e14.

Hogan, B. (2010). The presentation of self in the age of social media: Distinguishing performances and exhibitions online. Bulletin of Science, Technology & Society,30(6), 377-386.

Martin Irvine (2018). Introduction to the Technical Theory of Information. https://drive.google.com/file/d/0Bxfe3nz80i2GblNSalRIS2N5R28/view

Martin Irvine (2018). Introduction to Signs, Symbolic Cognition and Semiotics. Retrieved 2018, May 2, from http://faculty.georgetown.edu/irvinem/CCTP748/

Jakub Macek (2013). More than a desire for text: Online participation and social curation of content.

Joe Cox, Thang Nguyen, Andy Thorpe, Alessio Ishizaka, Salem Chakhar, Liz Meech (2018). Being seen to care: The relationship between self-presentation and contributions to online pro-social crowdfunding campaigns. Computers in Human Behavior.

Norman, D. A. (1999). Affordance, conventions, and design. interactions, 6(3), 38-43.

Marwick, A. E., & Boyd, D. (2011). I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society, 13(1), 114-133.

Kitrina Douglas & David Carless (2008). Nurturing a Performative Self. Forum: Qualitative Social Research, 9(2)

Walter Benjamin (1936). The Work of Art in the Era of its Technological Reproducibility.

Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3-43.

Sigal Tifferet, Iris Vilnai-Yavetz (2017). Self-presentation in LinkedIn portraits: Common features, gender, and occupational differences.

The Interpretation of the Usage of Technology in the Art Works of Nam June Paik

Pioneer of Video Art

Nam June Paik, a Korean-American artist, is famous for appropriating the analog television set as an art object. As one of the first artists to establish video as a serious artistic medium in the 1960s, he is regarded as a pioneer in this field and has been called "the father of video art". He was also one of the first artists to break down the barriers between art and technology.

Nam June Paik was involved in Fluxus, an international art movement of the 1960s. Fluxus artists challenged the authority of museums and "high art" and wanted to bring art to the masses. Influenced by Zen Buddhism, their art often involved the viewer, used everyday objects, and contained an element of chance. So, even though Paik's artworks are still installed in museums like traditional fine art, his broad use of the television, one of the most popular everyday objects and one of the most influential things in human life at that time, makes his works stand out from the solemn statues of the human body and the historical oil paintings hanging on the walls.

Just take a quick look at a list of some of his popular works named directly after the TV:
TV Cello (1964)
Magnet TV (1965)
TV Bra for Living Sculpture (1968)
TV Buddha (1974)
TV Garden (1974)

To me, what is so impressive in his work is his idea of actively erasing the boundary between technology and art. In this research, we will try to interpret how Nam June Paik applies technology in his artworks and whether such collaborations work well in the museum and on other remediating platforms, or rather, institutions.

Semiotic Interpretation of the Art Works of Nam June Paik

To think about the role of technology in Nam June Paik's works, one thing that needs to be clarified is the definition of "medium," which we will use a lot in the following discussion but which might not feel so clear. Here, by medium we mean the physical substances an artist uses to create an artwork. Take oil painting as an example: it is easy to see that both the oil pigment used and the canvas drawn on are media (the plural of medium) of the painting. However, a gel medium like impasto, which can "thicken a paint so the artist can apply it in textural techniques"[1] (Esaak, 2018), is also regarded as a medium of art.

In this section, a detailed case study of one of Nam June Paik's most famous works, Electronic Superhighway: Continental U.S., Alaska, Hawaii (1995), will be the core thread running through the whole discussion, while brief analyses of several of his other interesting works will be brought in to better explain specific ideas. Instead of examining the institutions that remediate the artwork, that is, those that offer a "space" for audiences to access it, or the reproductions made of it, we will focus on the technologies as media relevant to the artwork itself, which become part of the interface of the artwork's symbolic system and join in the process of meaning creation. In order to study video technology more directly, we will focus on the combination of installation art and video art, in which the technologies are more physically tangible.

Briefly De-Blackboxing Electronic Superhighway Physically and Symbolically

[Pic Lost]

Nam June Paik, Electronic Superhighway: Continental U.S., Alaska, Hawaii, 1995

Electronic Superhighway may be Nam June Paik's most famous work. According to the introduction from the Smithsonian American Art Museum, where the work is now exhibited, it is a video installation of approximately 15 x 40 x 4 feet. Fifty-one channels of video are installed, along with one closed-circuit television feed, and each screen shows video clips with both images and sound. The bright colorful lines are neon lights, also electronically controlled, and steel and wood are used in the construction of the work as well.

By outlining the shape of the United States and the boundaries between neighboring states, and by grouping the televisions by state, the work fuses politics and art. The boundary line between two political areas is a typical token in politics; here it is not only relevant to art but can also be read as echoing the network of interstate "superhighways" that economically and culturally unified the continental U.S. in the 1950s.

Besides, these physical contours, with the same clips playing on the screens within each area, can also be regarded as marking the separation of different cultures inside America. The grouped television screens in each area show different video clips which, at least from Nam June Paik's perspective, represent the most typical character or the most interesting thing about that state. For example, the state of Iowa, "where each presidential election cycle begins, plays old news footage of various candidates, while Kansas presents the Wizard of Oz."[2]

Electronic Superhighway is actually a meta-media installation, as it also has sounds playing along with the videos. In this huge combination of mixed, dazzling images and sounds, we see a reflection of modern life filled with all kinds of images, sounds, and information, driven by the development of mass communication media, especially the development of television and the advance of the "information superhighway." Isn't it an advance announcement of the explosion of information?

One more interesting fact about the clips shown on the TV monitors is that all of them were collected and edited by Nam June Paik himself. Video technology kept developing from the 1960s to the 1980s, allowing artists to edit moving images far more quickly than by recording them on film and then working on them: film required time for negatives to be developed, but with the new technology it became possible to edit images in "real time." In 1969, Paik even created his own video synthesizer with the Japanese engineer Shuya Abe.[3]

One of his interesting works from this period is TV Garden (1974), a single-channel video installation with color television monitors and live plants. Different images and sounds play on each monitor, and in this way a strange harmony is expressed within an enclosed space in the museum.

[Pic Lost]

Nam June Paik, “TV Garden” (detail), 1974/2000, single-channel video installation with color television monitors and live plants; color, sound, Solomon R. Guggenheim Museum, New York. (Copyright Nam June Paik Estate)

Watch a video to gain a better sense of this work.

Another striking, "more real-time" work is the famous Good Morning, Mr. Orwell (1984), the first international satellite installation artwork, seen as a rebuttal to George Orwell's dystopian vision in his novel 1984. Linking WNET TV in New York and the Centre Pompidou in Paris live via satellite, a reading by Allen Ginsberg in New York was mixed live with a Beuys action taking place in Paris. Even though there were still technical problems, such as the satellite connection between the United States and France repeatedly cutting out, Nam June Paik said that "the technical problems only enhanced the 'live' mood"[4], which from my perspective is quite an ANT (Actor-Network Theory) style of thought that we will discuss later.

The Symbolic Meaning of Technology Itself

Electronic Superhighway itself is no doubt an interface to a cultural meaning system, and within this system, or network, it functions as a node in a network of relations.[5] Going one step further and deconstructing this big token into many smaller tokens, the technology contained in those smaller tokens is not just a physical constituent but also carries its own symbolic meaning, which contributes to the meaning system.

Taking one television monitor of the installation as a token, without considering the images or sounds, it is, as one of the media of this artwork, not only a physical medium for displaying video but also a carrier of symbolic meanings that becomes a crucial part of the symbolic system that makes this artwork work.

From a macroscopic viewpoint, when "de-blackboxing" human life in the United States in the 1990s, there will certainly be a node for video technology, or, more specifically, the development of television. As John Law has said, "social and the technical are embedded in each other"[6]; this not only means that we cannot explore human society without studying the "hows" of relational materiality, but also reminds us that when considering the technological elements in artworks, we should place them in their specific social situation and time period. This one television monitor can be taken as a token of the television technology of its period.

All the televisions in this artwork are analog TVs, which corresponds strictly to the social reality that digital television did not become a mass-produced consumer product until the late 1990s and the beginning of the 21st century. We can hardly find another medium that was so influential within the family unit in a specific period of time.

From a relatively micro perspective, the television monitor is the core part of the whole artwork as a complex interface to its meaning system, through which the audience communicates with the artwork and with the idea the artist wants to deliver through the installation.

Based on this definition, it may seem that in the past it was only the physical characteristics of a material that were made use of in creating artworks. Following John Law's idea, however, every material has its own social symbolic meanings by being a node in the network of the whole society. By applying these meanings, an artist can trigger spiritual resonance in people who once experienced, or are experiencing, that kind of lifestyle. In this way, the TV monitors, the material that is the representamen of video technology, become a powerful unit interface for delivering Nam June Paik's thoughts about the American experience of life in the "shadow" of mass communication led by television. From my perspective, the symbolic meaning of a specific material, or technology, becomes part of its nature: once it is adopted by human society, it gets involved in social development, and its connections and interactions with other agencies in society give birth to its social meaning. From this standpoint, in Nam June Paik's artworks the media are not simply physical materials but materials carrying social meanings.

Fusion of Cultural Elements

Looking at Nam June Paik's works from the perspective of culture, apart from his pioneering thoughts on modern life as influenced by mass media, especially video technology and the "information superhighway," another impressive feature is the fusion of different cultural elements in his art.

Scientific Experiment and Installation Art

The first artwork of Nam June Paik's that I encountered was a not especially famous one named Magnet TV.

[Pic Lost]

Nam June Paik, “Magnet TV,” 1965, television set and magnet, black and white, silent, Whitney Museum of American Art, New York.

What actually happens here is that when we put a magnet on top of an analog television and then power it on, moving images appear, as can be seen in the picture. Nam June Paik installed this interesting phenomenon in the museum and made it a piece of art. Isn't it more like a scientific experiment than installation art?
Watching a video of this piece, in which the lines, or color blocks, move, may help convey the effect better.

Actually, even without an academic background in science or engineering, many of Nam June Paik's works show a great many scientific and engineering elements. This is closely related to a trend in the art world beginning in the 1960s: inspired by new technologies and thinking hard about how these technologies related to modern life, many artists cooperated with engineers to create their works, so that they did not just focus on traditionally defined "art creation" but also took part in engineering work, just as mentioned earlier regarding Paik's collaboration with the engineer Shuya Abe on his video synthesizer.

East and West

Nam June Paik was a Korean-American. He was born in Korea and studied in Japan for a long time, but he created most of his remarkable works in video art after joining the Western art community. This remixed cultural background gives many of his works an Eastern flavor built with entirely Western-originated technologies like television, video editing, and projection. Having an Eastern cultural background myself, I became very interested in these works.

Ommah is a "one-channel video installation on 19-inch LCD monitor"[7] with a silk robe. The name of the work, "Ommah," is a Korean word meaning mother. A television displaying images of Korean people is covered by a traditional Korean-style coat. Watching television is regarded as a family activity, and the importance of the mother in a family is apparent. Such a cultural collision really resonates, especially with Korean people.

Another really famous work of his is the Buddha series.

[Pic Lost] 1974, closed circuit video installation, bronze sculpture

[Pic Lost] 1982, closed circuit video installation, bronze sculpture

[Pic Lost] 1989, closed circuit video installation, bronze sculpture

[Pic Lost] 1997, closed circuit video, stone sculpture,soil

There are four versions in this series, created in 1974, 1982, 1989, and 1997. Although the layouts of the closed-circuit video setups and the Buddha statues differ, the concept of combining Western technology with Eastern religious thought persists throughout. Through such combinations, Paik established a "connection between Buddhist beliefs concerning the reincarnation of all living beings and the electronic reproduction of what is always the same."[8]

There are also some classic cultural remixes that many artists, past and present, like to explore and that appear in Nam June Paik's works as well. For example, in Electronic Superhighway we can see a combination of politics and culture, and in TV Garden a combination of the ideas of nature and human society.

Technology and the Mediating Institution

Looking at the Fluxus movement, from my perspective there are two key ways in which "high art" was brought down to the masses. The first is to move artworks out of the museum, an institution whose very name is bound up with high art. As Malraux said, "Museums and schools are the main mediators of, and interfaces to, art history and to the knowledge of the cultural category of 'art' itself."[9] Here, by "art," I think this refers mostly to "high art."

I would summarize this first way as working on the mediating institutions. The second way is the use of "mass materials" like televisions, projections, and other new technologies, which can be regarded as working on the interface of the art piece itself. Since the materials involved in the second way were discussed above, in this section I would like to briefly explore how the mediating institutions work on Nam June Paik's pieces.

Malraux also knew that the modern — and postmodern — museum inherited a cultural motive for collecting works from diverse cultures and histories, and then presenting collections as a coexisting totality or unity with an underlying idealized history.

In the Smithsonian American Art Museum, where Electronic Superhighway is exhibited, there are also many other brilliant artworks on display, for example American Indian portraits, postmodern-style statues, and a wall hung with a collection of U.S. license plates. Each artwork interacts with the others, and all the works are remediated within the museum.
A relatively enclosed space was built for our case-study installation, but if people have just visited the comparatively serious portrait shows or the statues beforehand, an even more striking feeling may be generated. Take myself as an example: even though I know little about twentieth-century American life, the moment I stood in front of these TV assemblies and light-emitting diodes, surrounded by the different moving images and sounds of the installation, I felt drawn into a specific context. The images and sounds might not be familiar to me as a foreigner, but the way they are organized state by state still helped me make sense of them.

The wall behind the installation is not really modern in style but a quite classical Western architectural design. When looking at the artwork, it is hard for me to ignore the walls and pillars with Western-style arabesques from the twentieth century and even earlier.

One interesting experience I would like to mention is that while I was standing in this dazzling area, a father next to me was telling his daughter about the clips playing on the Virginia screens. I can imagine that, with such "real experience" up on the wall, the visit becomes much more moving.

The museum is like a huge mediating machine in which different artworks are remediated and interact with one another. On entering the machine, a person becomes involved in the process of remediation. With different life experiences and different modes of interpreting the works, each person receives different information from the same artwork; the arrangement of each artwork's location, and the other visitors present while one watches a specific work, may also "edit" the interpretation.

In general, based on my personal experience, the cultural background of the subject (the audience), the time, the people nearby, and the neighboring artworks around the object can all be variables in the mediating process and the interpretation of meaning.

But what about watching Electronic Superhighway at home?

There are photos not only of Electronic Superhighway but of many of Nam June Paik's other pieces all over the Internet, but by looking at these digitized images, collections of pixels on the screens of our electronic devices, can we know enough about the real works? I was shocked by the sound generated by the installation when I arrived at the real Electronic Superhighway. One gains a much better sense of the beginning of the information explosion triggered by the expansion of mass media into daily life and the "information superhighway" (Paik, 1965) when standing in that area with mixed, disordered sounds entering the ear from all directions.

What about watching a video of it on YouTube? Consider two such videos, taken by different people.

 

I have to admit that for people who have not had, or will not have, the chance to reach the real installation at the Smithsonian, such videos do offer a sense of what the installation is about, and most of its basic characteristics, the setting, the light, and the sounds, are all shown. But to me, it is still different from the in-person experience of standing in that space. Apart from all the "noise" generated by the remediating process, the chromatic aberration, the poor sound quality, and so on, the Electronic Superhighway presented in these videos has already been edited by whoever took them.

Thinking about it further, even though one has not yet been produced, would a VR tour work better? I guess yes. But would it be able to take the place of a real museum experience? I myself vote no.

In all the online mediating institutions I know of, the initiative available to the audience and the realism of the effect are still limited by the technologies of simulation. To what extent can we simulate a real experience in the museum? A further, more foundational question: why does the real museum experience matter? Are digital simulation technologies trying to imitate a museum experience, or just working out the best way to mediate the artworks and give the audience access and experience? We have to admit that some relatively new genres of art, such as digital art, are generated by and created on the basis of new digital technologies; for these works the dialogic process works best on electronic screens, and there is no possibility of, or need for, taking them into a real physical space.

Beyond the technological issues, the collective attachment to the traditional museum experience also hinders people's acceptance of digital mediating institutions, since for most people the very fact that the real museum is located there means something in the field of art.

To me, in the case of Nam June Paik's works, since most of his artworks still have a tangible, audible physical body, the limited simulation technologies cannot yet match the dialogic context generated by museums.

Citations

[1] Esaak, Shelley. “What Is the Definition of ‘Medium’ in Art?” ThoughtCo, Mar. 23, 2018, thoughtco.com/medium-definition-in-art-182447.

[2] “Nam June Paik, Electronic Superhighway: Continental U.S., Alaska, Hawaii.” Khan Academy, www.khanacademy.org/humanities/ap-art-history/global-contemporary/a/paik-electronic-superhighway.

[3]“TateShots: Nam June Paik.” Tate, www.tate.org.uk/context-comment/video/tateshots-nam-june-paik.

[4] Media Art Net. “Media Art Net | Paik, Nam June: Good Morning, Mr. Orwell.” Medien Kunst Netz, Media Art Net, 3 May 2018, www.medienkunstnetz.de/works/goog-morning/.

[5] Chandler, Daniel. Semiotics: The Basics. Routledge, Abingdon, Oxon;New York, NY;, 2017.

[6] Turner, Bryan S. The New Blackwell Companion to Social Theory. Wiley-Blackwell, Chichester, West Sussex, United Kingdom; Malden, MA, USA, 2009: P141-158

[7] “Ommah.” Art Object Page, www.nga.gov/collection/art-object-page.150881.html.

[8] Dieter Daniels in: Heinrich Klotz (ed.), Contemporary Art, exhib. cat., Museum for Contemporary Art / Center of Art and Media, Karlsruhe, 1997, p. 204

[9] Irvine, Martin, Malraux and the Musée Imaginaire: (Meta)Mediation, Representation, and Mediating Institutions

More References

Daniel Chandler, Semiotics: The Basics. 2nd ed. New York, NY: Routledge, 2007.

Martin Irvine, “Introduction to Signs, Symbolic Cognition, and Semiotics: Part I.”

Martin Irvine, “Applying Semiotic Concepts, Models, and Methods.”

Martin Irvine, “The Museum and Artworks as Interfaces: Metamedia Interfaces from Velásquez to the Google Art Project”

Useful Websites

http://www.medienkunstnetz.de/s

https://www.artsy.net/artist/nam-june-paik

https://www.washingtonpost.com/blogs/going-out-guide/post/father-of-video-art-nam-june-paik-gets-american-art-museum-exhibit-photos/2012/12/12/c16fa980-448b-11e2-8e70-e1993528222d_blog.html?utm_term=.5125696eb7cc

 

Han Ideographs In The Unicode Standard

Abstract

Nowadays the Internet environment has become multilingual, so a single standard encoding system that enables the exchange of electronic text is necessary. The Unicode Standard is the basis of software that can function all around the world, and it provides the underpinning for the World Wide Web and today's global business environment. Chinese characters, which belong to the Han ideographs, used other encoding systems before the Unicode Standard. However, those systems have several disadvantages and are not suitable for today's multilingual world. The Unicode Standard not only solves these problems but also helps non-English languages travel online in a globalized environment.

Keywords: Unicode, multilingual Internet, Han Ideographs, globalization, Chinese characters


1.Introduction

The Unicode Standard is the universal character encoding standard of the computing industry for written characters and text. Unicode resolves the fragmentation of the multilingual Internet: it defines a consistent way of encoding multilingual text that enables the exchange of text data in a multilingual environment and creates the foundation for global software.

Unicode is the basis of software that can be used and function all around the world; it is required by new Internet protocols and implemented in all modern operating systems. As the universal standard, Unicode aims to unify the many hundreds of conflicting ways of encoding characters and replace them with a single, universal standard.

Compared with ASCII (the American Standard Code for Information Interchange), Unicode characters are represented in one of three encoding forms: a 32-bit form (UTF-32), a 16-bit form (UTF-16), and an 8-bit form (UTF-8). The Unicode Standard is code-for-code identical with the International Standard ISO/IEC 10646.
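As a small, purely illustrative sketch of these three encoding forms, the snippet below encodes a single Han character with Python's built-in codecs; the character chosen here ("汉", U+6C49) is an example of my own, not one discussed in the text.

```python
# Illustrative only: one Han character, "汉" (U+6C49), in the three
# Unicode encoding forms named above, using Python's standard codecs.

ch = "汉"
print(hex(ord(ch)))                  # 0x6c49   -> the code point U+6C49
print(ch.encode("utf-8").hex())      # e6b189   (three 8-bit code units)
print(ch.encode("utf-16-be").hex())  # 6c49     (one 16-bit code unit)
print(ch.encode("utf-32-be").hex())  # 00006c49 (one 32-bit code unit)
```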

The Unicode Standard has many advantages. With it, the information technology industry has replaced proliferating character sets with data stability, global interoperability and data interchange, simplified software, and reduced costs. Unicode character encoding treats alphabetic characters, ideographic characters, and symbols equivalently, which means they can be used in any mixture and with equal facility. The universality of the Unicode Standard is also reflected in the fact that it is sufficient not only for modern communication in the world's languages but also for representing the classical forms of many languages. Moreover, the Unicode Standard is more efficient and flexible than previous encoding systems; it satisfies the needs of technical and multilingual computing and encodes a broad range of characters for all purposes, including worldwide publication.

At the same time, however, the Unicode Standard also has limitations. As the Internet was emerging as a global phenomenon, commentators often noted that it appeared to be a primarily English-language domain, and it is often argued that while minority languages are given an online voice by Unicode, the context is still one of Western power. Besides, the Unicode Standard does not encode idiosyncratic, personal, novel, or private-use characters, nor does it encode logos or graphics. Consequently, the Unicode Standard continues to respond to new and changing encoding requirements and to scholarly needs; to preserve world cultural heritage, important archaic scripts are encoded as consensus about their encoding develops.


2.The History of Chinese Encoding System

2.1 ASCII and Its Disadvantages for Chinese Characters

When computers store letters, they encode them as numbers in binary form. If another computer wants to put those letters on a screen, it converts the numbers back into letters. The computer does this by consulting a map, which tells it, for example, that the code number 97 represents the letter 'a'. Originally based on the English alphabet, ASCII is a 7-bit code that encodes 128 specified characters as seven-bit integers, as an ASCII chart shows. Ninety-five of the encoded characters are printable: these include the digits 0 to 9, the lowercase letters a to z, the uppercase letters A to Z, and punctuation symbols.
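The number-to-letter mapping described above can be seen directly with Python's built-in ord() and chr(), which follow the ASCII/Unicode mapping; this is only a demonstration of the mapping, not part of the ASCII specification itself.

```python
# Illustrative only: the code number 97 <-> the letter 'a'.

print(ord("a"))             # 97  -> the code number stored for 'a'
print(chr(97))              # 'a' -> converting the number back into a letter
print(bin(ord("a")))        # 0b1100001 -> fits comfortably in 7 bits
print("a".encode("ascii"))  # b'a' (a single byte with value 97)
```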

ASCII is quite sufficient for writing text in English. However, it causes problems for languages with extra letters, symbols, or accents. Therefore, different countries began developing their own encoding systems.

2.2 GB2312-80, GBK and GB18030

Chinese, written in a non-Latin script, is known for its encoding problems. Before the Unicode Standard existed, three encoding standards were used in different parts of China. The mainland Chinese standard, 'GB2312-80', encodes 6,763 simplified Chinese characters. The 'Big5' encoding is used in Taiwan and encodes about 8,000 traditional Chinese characters used there. The 'HKSCS' encoding is used in Hong Kong and also covers traditional Chinese characters. However, Big5 and HKSCS are two different encoding systems.

These three encoding systems all extend ASCII. In them, one Chinese character, whether simplified or traditional, is represented by two bytes, so they remain backward-compatible with ASCII. However, the three systems are not compatible with one another, so it is almost impossible to display GB and Big5 text in the same system, and therefore nearly impossible to see simplified and traditional Chinese characters on the same screen.
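This incompatibility is easy to demonstrate with Python's built-in gb2312 and big5 codecs. The character "中" used below is my own example: the same character gets different byte values under the two systems, so bytes written under one encoding turn into mojibake when read under the other.

```python
# Illustrative only: the character "中" under GB2312 versus Big5.

ch = "中"
gb = ch.encode("gb2312")
big5 = ch.encode("big5")
print(gb.hex())    # d6d0  (GB2312 bytes for 中)
print(big5.hex())  # a4a4  (Big5 bytes for 中)

# Reading the GB2312 bytes as if they were Big5 yields a different
# character entirely (classic mojibake), rather than 中.
print(gb.decode("big5", errors="replace"))
```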

Another problem with GB2312-80 is that it contains so few Chinese characters that the scripts of China's ethnic minorities are not included. The bigger problem is that Chinese characters did not have an encoding space entirely of their own: most computers already used ASCII to store English characters, and some software used the extended byte values to draw symbols. When such software ran on a Chinese system, those symbols could be mistaken for Chinese characters and cause trouble. Also, when a sentence mixed Chinese and English characters, the system could be confused about whether a given byte belonged to ASCII or to GB2312-80.

The GBK character set was defined in 1993 as an extension of GB2312-80, and it also includes the characters of GB13000.1-93 by using code points left unused in GB2312. GBK can be used in operating systems such as Windows and Linux. GB18030 is a superset of GBK that adds more characters, including thousands of characters for China's ethnic minorities. However, nowadays no operating system uses GB18030 directly.
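As a small sketch of the difference in coverage, the snippet below uses Python's codecs to show that GB18030 can represent a minority-script character that GB2312 cannot encode at all; the Tibetan letter chosen here is my own example, not one from the text.

```python
# Illustrative only: GB18030 covers scripts (here Tibetan) that GB2312
# cannot encode at all.

tibetan_ka = "\u0f40"  # ཀ TIBETAN LETTER KA

print(tibetan_ka.encode("gb18030").hex())  # a four-byte GB18030 sequence

try:
    tibetan_ka.encode("gb2312")
except UnicodeEncodeError as err:
    print("GB2312 cannot encode it:", err)
```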

2.3 The Usage of The Unicode Standard

The consensual solution to the problem of encoding has been provided by the Unicode Consortium, whose website declares: 'Unicode provides a unique number for every character, no matter what the platform, no matter what the program, no matter what the language.' In other words, the Unicode Standard provides one universal, enormous code chart that includes all scripts and alphabets instead of requiring each to have its own. It offers a standardized way of encoding documents in every language and provides a unified representation for every single character. That is to say, the Unicode Standard solves the problems of GB2312-80 described above and provides a universal encoding for all Chinese characters, whether simplified or traditional, and whether used by the Han people or by China's ethnic minorities.
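To illustrate the "unique number for every character" idea, the sketch below prints the code points of a simplified character and its traditional counterpart side by side; the pair "汉/漢" is my own example. Because each has its own code point, both can coexist in one string and one file, which the GB/Big5 split made difficult.

```python
# Illustrative only: simplified 汉 (U+6C49) and traditional 漢 (U+6F22)
# coexisting in one Unicode string, with one consistent encoding for both.

mixed = "汉漢"
for ch in mixed:
    print(ch, f"U+{ord(ch):04X}")

print(mixed.encode("utf-8").hex())
```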


3.The History of Chinese Characters

Chinese characters, unlike those of alphabetic languages, are not formed from letters or combinations of letters representing the sounds of the Chinese language. Rather, they are constructed symbols used to convey meanings as well as sounds that indicate meaning (Yin, 2006). According to Chinese legend, Cangjie, the historian of the Yellow Emperor, created the original Chinese characters around 2650 BCE based on the shapes of the sun, the moon, the footprints of animals, and so on.

The history of Chinese characters can be divided into two major periods: ancient writing and modern writing. There are six major writing styles associated with these two periods.

Initially, in the Shang Dynasty (1711-1066 BC), oracle bone script was the form of Chinese characters inscribed on tortoise shells and animal bones. The oracle bone script of the late Shang appears pictographic, as does its contemporary, the Shang writing on bronzes. Later, in the Zhou Dynasty (1066-256 BC), characters were cast or inscribed on bronze bells and vessels; this is called bronze inscription. Compared with bronze inscription, oracle bone script is clearly greatly simplified, and rounded forms are often converted to rectilinear ones; this is thought to be due to the difficulty of engraving the hard, bony surfaces, compared with the ease of writing in the wet clay of the molds from which the bronzes were cast.

Towards the end of the Zhou Dynasty, the Qin state began to use bamboo strips and pieces of silk as writing media and developed a new script, the Seal Script. After Qin conquered the other six states, unified China and established the Qin Dynasty, the Seal Script was decreed the official standard of writing for the whole country. By this time all characters were roughly square in shape, and the positioning of characters and the complexity of their forms had become consistent. Small Seal Script has also been proposed for inclusion in Unicode.

However, the seal scripts were time-consuming and cumbersome to write, so a more concise, easier-to-write script was needed. In the Han Dynasty (206 BC – 220 AD), the Clerical Script therefore became the officially approved formal way of writing. The largest change from seal script to clerical script was that the clerical script dropped the pictorial appearance of Chinese characters almost completely and laid the foundation of the structures of modern Chinese characters.

Since the clerical script, the basic structure of Chinese characters has not changed. The strokes, however, have undergone two major changes: regularization and normalization. From the late Han Dynasty to 1955, the strokes of Chinese characters became smoother and straighter than those of the clerical script. The regularized clerical script is clearer and easier to read and write; it became widespread, was used for everyday communication, and remained the standard of Chinese writing for more than 1,800 years. In the first three and a half decades of the 20th century, a special government organization, first called the Committee for Chinese Language Reform and later the National Language Commission, began to normalize Chinese characters to make them systematic, simplified and standardized. In 1955, to systematize Chinese characters, the 'List of First Group of Standardized Form of Variant Characters' was officially published and 1,027 character variants were eliminated. The number of strokes in 2,235 characters was systematically reduced, and the forms of characters for printing type and the stroke order were standardized and normalized.

From oracle bone script to the normalized clerical script, Chinese characters have shifted from visualization towards symbolization. In Ferdinand de Saussure's terms, the graphic form and the meaning of a Chinese character correspond to signifier and signified. For the earliest characters, the graphic itself conveyed the specific meaning, which is how oracle bone script first developed. There are three forms of relationship between signifier and signified: symbolic, iconic and indexical. Initially, the relationship between the graphic forms and meanings of Chinese characters was iconic. As time went on, however, the characters were so modified and normalized that their forms became less and less similar to what they mean. Building on earlier characters, the standardized and normalized characters also encode abstract cultural notions and embody an increasingly symbolic relationship between graphic form and meaning.

As for their form, Chinese characters are monospaced, and each character takes the same vertical and horizontal space regardless of how simple or complex its particular form is. This is related to the history of Chinese printing and typographical practice. The earliest Chinese printing, woodblock printing, was invented in the Tang Dynasty. Woodblock printing accelerated the transmission of words and knowledge; however, all the characters on a page had to be carved on a single woodblock, so one small mistake could cause great trouble. To address this, movable type was invented by Bi Sheng in the Song Dynasty, with each character placed in a square cell. For alphabetic scripts, movable-type page setting was quicker than woodblock printing, and metal type pieces were more durable and the lettering more uniform, leading to typography and fonts. The kinds of glyphs used to depict characters in the Han ideographic repertoire of the Unicode Standard allow users to select the font most appropriate for a given locale.
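
As a side note not made in the original sources, the square-cell behaviour survives in Unicode's character properties: Han ideographs carry the East Asian Width value 'W' (wide), while basic Latin letters are narrow. A small illustrative check with the standard library:

```python
import unicodedata

# Han ideographs are classed as wide ('W'); basic Latin letters as narrow ('Na').
for ch in ("汉", "字", "A"):
    print(ch, unicodedata.east_asian_width(ch))
```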


4. The Introduction of Han Ideographs

4.1 What Han Ideographs Are and the Necessity of Han Unification

The Unicode Standard contains a set of unified Han ideographic characters used in the written CJK languages. The term 'CJK', standing for Chinese, Japanese and Korean, describes the languages that currently use Han ideographic characters. The term Han, derived from the Chinese Han Dynasty, refers generally to traditional Chinese culture. Traditionally, the script was written vertically from right to left; in modern usage, however, it is usually written horizontally from left to right. Han ideographs are logographic characters, which means that each character represents a word, not just a sound. Han characters developed from pictographic and ideographic principles, and they can also be used phonetically.

The full CJK repertoire is very large: the number of distinct ideographs may approach or exceed 100,000. Apart from the forms of Chinese characters that changed as they came to be used in other countries such as Japan and Korea, there are currently two main varieties of written Chinese: 'simplified Chinese', used in mainland China and Singapore, and 'traditional Chinese', used predominantly in Hong Kong, Macau, Taiwan and other overseas Chinese communities. Converting between simplified and traditional Chinese is a complex process because a single simplified character may correspond to multiple traditional characters. For example, the simplified character U+53F0 台 corresponds to U+6AAF 檯, U+81FA 臺 and U+98B1 颱.
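
The one-to-many mapping can be verified directly from the code points; the snippet below is a small illustrative check, not part of the original argument.

```python
import unicodedata

# One simplified character, three separately encoded traditional counterparts.
print("台", f"U+{ord('台'):04X}")
for ch in ("檯", "臺", "颱"):
    print(ch, f"U+{ord(ch):04X}", unicodedata.name(ch))
```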

Moreover, vocabulary differences have arisen between Mandarin as spoken in mainland China and in Taiwan. For example, 旅游 (lǚ yóu) in mainland China and 观光 (guān guāng) in Taiwan both mean 'tourism'. Consequently, merely converting the character content of a text from simplified Chinese to the appropriate traditional characters, or vice versa, is insufficient, and the mapping between traditional and simplified characters is not one-to-one. The vast majority of Chinese characters, however, are the same in both simplified and traditional Chinese.

The character repertoires of simplified and traditional Chinese largely correspond to each other, and China's official encoding standards give each form its own unique code. There are two national standards in mainland China, GB2312-80 and GB12345-90; the former represents simplified Chinese while the latter represents traditional Chinese. Similarly, the Unicode Standard encodes a number of simplified and traditional forms as distinct characters, such as U+8AAC 説 (shuō) and U+8BF4 说 (shuō). Where the simplified and traditional forms exist as different encoded characters in the Unicode Standard, each should be used as appropriate.
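
That these really are distinct encoded characters, rather than two glyphs for one code point, can be checked in the same illustrative way:

```python
import unicodedata

# 説 (traditional) and 说 (simplified) are separate abstract characters in Unicode.
for ch in ("\u8aac", "\u8bf4"):
    print(ch, f"U+{ord(ch):04X}", unicodedata.name(ch))
```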

Chinese is a language with many spoken forms that share a single written form. The spoken forms other than Mandarin are called dialects, and some of them are mutually unintelligible, effectively distinct languages. Cantonese, used in Hong Kong and Macau, for example, differs greatly from spoken Mandarin, although the two share the same written form. Apart from the dialects, the standard form of written Chinese derived from classical Chinese is called literary Chinese; although it is no longer spoken in everyday life, it can still be seen in print and online. Given this complexity, the ideographic repertoire of the Unicode Standard is sufficient for all but the most specialized texts of modern Chinese, literary Chinese and classical Chinese. For the dialects, the current repertoire should be adequate for many, but not all, written texts.

4.2 The Unicode Standard Defined How Characters Are Interpreted Based On Context

The difference between identifying a character and rendering it on screen or paper is crucial for understanding the Unicode Standard's role in text processing. The character identified by a Unicode code point is an abstract entity, so it is important to distinguish among the notions of 'character', 'glyph' and 'grapheme'.

A character is the smallest component of written language that has semantic value; it is an abstract concept rather than a particular way of drawing something. Letters are characters, and so are digits, punctuation marks and many symbols. The mark actually made on screen or paper is called a glyph: a glyph is the visual representation of a character, and most or all glyphs are mapped to characters via a table in the font. A grapheme, finally, is the smallest abstract unit of meaning in a writing system, anything that functions as a character in a specific language's written tradition.
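
A minimal sketch of the character/grapheme distinction, using a Latin example because it is the simplest case and assuming only the standard library: one user-perceived grapheme can be built either from a single precomposed character or from a base character plus a combining mark.

```python
import unicodedata

combined = "e\u0301"       # 'e' + COMBINING ACUTE ACCENT: two characters
precomposed = "\u00e9"     # 'é' as one precomposed character
print(len(combined), len(precomposed))                         # 2 vs 1 code points
print(unicodedata.normalize("NFC", combined) == precomposed)   # same grapheme: True
```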

The Unicode Standard does not define glyph images; it defines how characters are interpreted, not how glyphs are rendered. The appearance of characters on screen is the responsibility of software or hardware rendering engines, and the Unicode Standard does not specify the precise shape, size or orientation of on-screen characters. Consequently, the successful encoding, processing and interpretation of text requires an appropriate definition of the useful elements of text and of the basic rules for interpreting it.

For many centuries, written Chinese was accepted as the written standard throughout East Asia; the influence of Chinese characters on other East Asian languages is comparable to the influence of Latin on Western languages. Over the centuries, however, the evolution of character shapes and semantic drift have changed the original forms and meanings. For example, the Chinese character '汤' (tāng) originally meant 'hot water' but now means 'soup' in Chinese, whereas 'hot water' remains the primary meaning in Japanese and Korean, and 'soup' appears only in more recent borrowings from Chinese, such as 'soup noodles'. Still, the identical appearance and the similarities in meaning are dramatic, and more than justify the concept of a unified Han script that transcends language.

There is some concern that the different meanings a single character carries in different countries will lead to confusion. Computationally, however, Han characters are usually combined to 'spell' words, and their interpretation depends on context, so it is neither practical nor productive to encode each meaning separately. There are two reasons for this.

First, the meaning of a word may not be evident from its constituent characters; characters must be combined to spell words. For example, the character '矛' (máo) means spear and the character '盾' (dùn) means shield, but the compound '矛盾' (máo dùn) means contradiction in Chinese (see Figure 4-1).

Figure 4-1. Han Spelling

Second, the computer requires context to distinguish the meanings of the words represented by coded characters, since one word may have different meanings in different contexts. For example, the word '杜鹃' (dù juān) may refer, depending on its context, to the rhododendron, a kind of plant, or to the cuckoo, a kind of bird (see Figure 4-2).

Figure 4-2. Semantic Context for Han Characters

4.3 The Rationales of Han Unification

Han unification is an effort to map the multiple character sets of the CJK languages into a single set of unified characters. The same Han root character may have different visual representations in traditional Chinese, simplified Chinese, Japanese and Korean. For example, the first stroke of '户' (hù) is drawn in three different ways; these three visually distinct forms can be unified under the same code because they share the same root character.

One important motivation for Han unification, then, is the desire to limit the size of the Unicode character set. Before the Unicode Standard, different countries used different, mutually incompatible encoding systems, so characters that evolved from the same root character could not be put into correspondence with one another. The Unicode Standard takes on the responsibility of solving this problem.

According to the Unicode Consortium, one rationale of Han unification is the Source Separation Rule: if two ideographs are distinct in a primary source standard, they are not unified. In other words, Unicode assigns characters different codes whenever the abstract meaning changes; for Han unification, characters are unified not by their appearance but by their definition or meaning. In general, if two ideographs are unrelated in historical derivation, they are also not unified. For example, '日' and '曰' have two different codes because they are historically unrelated, although they look similar.
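
The 日/曰 example can be confirmed from the character data (another small check offered only as an illustration):

```python
import unicodedata

# Similar-looking but historically unrelated ideographs keep separate code points.
for ch in "日曰":
    print(ch, f"U+{ord(ch):04X}", unicodedata.name(ch))
```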

To deal with the use of different graphemes for the same unified Han sememe, Unicode relies on several mechanisms. The first is to treat the difference simply as a font issue: different fonts may be used to render Chinese, Japanese or Korean, with the user's environment settings determining which glyph to use. This, however, can cause confusion in multilingual text. The second mechanism is the concept of variation selectors, which are treated as combining characters with no associated diacritic or mark; combined with a base character, they signal that the two-character sequence selects a grapheme variation, i.e., a variation of the base abstract character. Such a two-character sequence can easily be mapped to a separate single glyph. Since the Unicode Standard has assigned 256 separate variation selectors, it can assign 256 variations to any Han ideograph, which is sufficient for variations specific to one language or another and enables the encoding of plain text that includes such grapheme variations.
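
A variation sequence is ordinary plain text: a base ideograph followed by a variation selector. The sketch below builds one such sequence; 辻 (U+8FBB) is used because it is commonly cited as having registered ideographic variants, but whether a distinct glyph actually appears depends entirely on the font and the rendering engine.

```python
# A base ideograph plus VARIATION SELECTOR-17 (U+E0100), the first of the 240
# ideographic variation selectors that supplement the 16 at U+FE00..U+FE0F.
base = "\u8fbb"                 # 辻
vs17 = "\U000E0100"             # VARIATION SELECTOR-17
sequence = base + vs17          # two code points, one intended grapheme

# A font that registers this variation sequence may render a distinct glyph;
# fonts that do not simply ignore the selector.
print([f"U+{ord(c):04X}" for c in sequence])
```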

Han unification has caused considerable controversy, particularly among the Japanese public, who, with the nation's literati, have a long history of protesting the culling of historically and culturally significant variants. Small differences in graphical representation become problematic when they affect legibility or belong to the wrong cultural tradition, and the widespread use of Unicode could make it difficult to preserve such small distinctions. Much of the controversy surrounding Han unification rests on the distinction between glyphs, as defined in Unicode, and the related but distinct idea of graphemes: Unicode assigns code points to abstract characters (graphemes), as opposed to glyphs, which are particular visual representations of a character in a specific typeface.

4.4 CJK Unified Ideographs Blocks

The Han script includes 87,882 unified ideographic characters defined by the national, international and industry standards of China, Japan, Korea, Vietnam and Singapore. Because of the large size of the Han ideographic repertoire, and because of the particular problems the characters pose for standardizing their coding, this character block description is more extended than that for other scripts and is divided into several subsections. The blocks themselves are the result of Han unification.

Table 4-1. Blocks Containing Han Ideographs

Block | Range | Comment
CJK Unified Ideographs | 4E00-9FFF | Common
CJK Unified Ideographs Extension A | 3400-4DBF | Rare
CJK Unified Ideographs Extension B | 20000-2A6DF | Rare, historic
CJK Unified Ideographs Extension C | 2A700-2B73F | Rare, historic
CJK Unified Ideographs Extension D | 2B740-2B81F | Uncommon, some in current use
CJK Unified Ideographs Extension E | 2B820-2CEAF | Rare, historic
CJK Unified Ideographs Extension F | 2CEB0-2EBE0 | Rare, historic
CJK Compatibility Ideographs | F900-FAFF | Duplicates, unifiable variants, corporate characters
CJK Compatibility Ideographs Supplement | 2F800-2FA1F | Unifiable variants
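
For readers who want to check a character against Table 4-1, a small helper is sketched below. The ranges are copied from the table; the function itself is only an illustration, not something defined by the Unicode Standard.

```python
# Map a character to the Han ideograph block (per Table 4-1) that contains it.
CJK_BLOCKS = [
    (0x4E00,  0x9FFF,  "CJK Unified Ideographs"),
    (0x3400,  0x4DBF,  "CJK Unified Ideographs Extension A"),
    (0x20000, 0x2A6DF, "CJK Unified Ideographs Extension B"),
    (0x2A700, 0x2B73F, "CJK Unified Ideographs Extension C"),
    (0x2B740, 0x2B81F, "CJK Unified Ideographs Extension D"),
    (0x2B820, 0x2CEAF, "CJK Unified Ideographs Extension E"),
    (0x2CEB0, 0x2EBE0, "CJK Unified Ideographs Extension F"),
    (0xF900,  0xFAFF,  "CJK Compatibility Ideographs"),
    (0x2F800, 0x2FA1F, "CJK Compatibility Ideographs Supplement"),
]

def han_block(ch: str) -> str:
    """Return the Table 4-1 block containing ch, if any."""
    cp = ord(ch)
    for lo, hi, name in CJK_BLOCKS:
        if lo <= cp <= hi:
            return name
    return "outside the Han ideograph blocks"

print(han_block("汉"))             # CJK Unified Ideographs
print(han_block("\U00020000"))     # CJK Unified Ideographs Extension B
```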

Conclusion

The Unicode Standard has many advantages over previous encoding systems and plays an important role in a globalized environment. It takes the history of Chinese characters into consideration and contains a set of unified Han ideographic characters used in written Chinese, Japanese and Korean. Because of the large size of the Han ideographic repertoire, the Han ideographs are divided into several blocks according to the rules of Han unification. The Unicode Standard and the Han ideographs thus contribute greatly to the communication of Chinese culture in a multilingual, globalized environment.


Bibliography

  1. Allen, J. D., Anderson, D., Becker, J., Cook, R., Davis, M., Edberg, P., … & Jenkins, J. H. (2012). The Unicode Standard (Version 6).
  2. Bates, E. (2014). The emergence of symbols: Cognition and communication in infancy. Academic Press.
  3. Cheng, C. C. (1973). A synchronic phonology of Mandarin Chinese (Vol. 4). Walter de Gruyter.
  4. Culler, J. D. (1986). Ferdinand de Saussure. Cornell University Press.
  5. Gillam, R. (2002). Unicode demystified: a practical programmer’s guide to the encoding standard. Addison-Wesley Longman Publishing Co., Inc..
  6. Hardie, A. (2007). From legacy encodings to unicode: The graphical and logical principles in the scripts of south asia. Language Resources and Evaluation, 41(1), 1-25. doi:10.1007/s10579-006-9003-7
  7. John, N. A. (2013). The construction of the multilingual internet: Unicode, Hebrew, and globalization. Journal of Computer‐Mediated Communication, 18(3), 321-338.
  8. Unicode Consortium. (1997). The Unicode Standard, Version 2.0. Addison-Wesley Longman Publishing Co., Inc..
  9. Unicode Consortium. (1991). The Unicode Standard: Worldwide Character Encoding. Addison-Wesley Longman Publishing Co., Inc..
  10. Yin, J. J. (2006). Fundamentals of Chinese Characters (Han Zi Ji Chu). New Haven: Yale University Press.

The Tokenization of Advertising Instances (Final Project)

Wenxi Zhang

Abstract:

As one of the most important and longest-lasting components of branding strategy, advertising, which incorporates symbolic cognition at multiple levels and is mediated in various forms and substrates, operates in a complicated socio-technical system.

The purpose of this paper is to show that contemporary advertising follows the Peircean conceptual framework of type and token, where tokenization (i.e., creating perceptible representations), though medium-specific and variable, always happens under the umbrella of an invariant type (i.e., the theme of the advertisement that the advertisers intend to convey and correlate with the brand). By applying the conceptual frameworks of C. S. Peirce, Roland Barthes, Mikhail Bakhtin, Iain MacRury and Lev Manovich, I explain how tokenization instantiated in terms of the type is remediated across different physical media and the cultural encyclopedia. I use the Coca-Cola Company's advertising strategy under the campaign "Taste the Feeling", launched in 2016, as a case study, analyzing how the campaign presents itself distinctly on various physical media, how it incorporates the spirit of the Rio Olympics into the brand, and how it adapts the structure of the advertisement to different countries or regions (in this paper, China and North America).

 1. Introduction:

Using the Coca-Cola Company's advertising strategy since it launched "Taste the Feeling" as its campaign and theme in 2016 as a case study, I explore how advertisements, as tokens of the type "Taste the Feeling", are remediated via multiple physical media (e.g., television, outdoor, online new media) and by other movements or events (e.g., the Olympics), and how the type according to which they are instantiated remains invariant and is reinforced in the process. Further, since audiences as semiotic agents have different cultural memories and knowledge that affect their interpretation of semiotic representations, I discuss how the influence of the cultural encyclopedia, as a hierarchical tree structure, is embedded in the advertising process.

The main thesis of this essay, therefore, is that while in advertising the tokenization of an advertisement can be re-tokened, re-instanced and re-mediated across different media (not just physical media but every agency that mediates the tokenization), the type, or general genre of the advertising (i.e., the theme per se), remains invariant and is reinforced by the unlimited instantiations of tokenization.

In the first part of the paper, I select three advertisements on three different physical media: an outdoor street billboard, a television commercial and an online software application. I use C. S. Peirce's triadic model of cognitive correlations and his conceptual framework of the three modes of signs to analyze how audiences select and perceive their features and conduct multi-level interpretations. I also apply Mikhail Bakhtin's idea of dialogism, whereby the message receiver's interpretation is always involved in a dialogue with others, in this case the physical context in which the advertisement is perceived, and I draw on Lev Manovich's account of new media as interactive and of how such interactivity is involved in the generation of meaning as an ongoing dynamic event.

In the second part of the paper, I use the Rio 2016 Olympic campaign as a context. By studying how Olympic-specific advertisements were launched differently in China and North America, I combine C. S. Peirce's conceptual framework with Roland Barthes' legacy, which still applies well to contemporary media, to analyze how multi-level interpretation is formed under the umbrella of the cultural encyclopedia, how the spirit of the Rio Olympics serves as an intertextual discourse that enters into dialogue with "Taste the Feeling" as Coca-Cola's theme, and how the advertisements combine these concepts. I show that even riding the wave of the Rio Olympics, the theme "Taste the Feeling" (i.e., the type) remains invariant no matter how the tokenization is customized.

2. The tokenization across different media.

Coca-Cola started a new round of designing and distributing advertisements upon launching the new campaign "Taste the Feeling" in 2016. It made maximal use of different media, including traditional media such as television, cinema and outdoor advertising, as well as new media based on computing and the internet.

"Coca-Cola is one brand with different variants, all of which share the same values and visual iconography. People want their Coca-Cola in different ways, but whichever one they want, they want a Coca-Cola brand with great taste and refreshment," says de Quinto, Coca-Cola's Chief Marketing Officer (Coca-Cola Announces New 'One Brand' Marketing Strategy and Global Campaign, 2016). "Taste the Feeling" therefore first relates audiences to the feeling of the moment when they drink a Coca-Cola, i.e., delicious and refreshing, which requires advertisers to simulate that user experience of the product accurately. Moreover, feeling itself is interpreted differently by different people and in different scenes; in other words, media presentations are scene-specific, and the tokenization, i.e., the advertisement instantiated from the concept "Taste the Feeling", must also be designed for different platforms. In the following subsections, I select a TV commercial, an outdoor street billboard and a customization-supported advertising app for analysis. As Irvine notes, meaning is always Remix+: the emergence of meaning goes through a combinational and dialogic process in which we select syntactically possible units in contexts of prior symbolic relations and encyclopedic values and re-contextualize the selected units by embedding them in the compositional structure of a new expression. In other words, we are not only interpreting the advertisement itself but also interacting with the surrounding context, every element of which can affect our interpretation of the token (Irvine, 2014, pp. 21, 29). However, no matter how the structure of the tokens changes in different situations, the core concept, "Taste the Feeling", is invariant.

I designed a diagram to illustrate how the representation of the concept, i.e., the object (invariant) in the figure, is remediated across different physical media, and how the interpretation by humans as semiotic agents of the perceptible representation of the object is influenced by the physical context.

Figure 1. Tokenization being medium specific

2.1 Street billboard on a commercial street

– Taste the Feeling of pursuing fashion and being relaxed through shopping.

Figure 2. The Coca-Cola Festival Bottle. (n.d.).

Above is an advertisement presented on a billboard on a commercial street. At its top right corner are the Coca-Cola logo and the "Taste the Feeling" slogan. The textualization of this advertisement is realized through a picture of a beautiful blonde girl holding a bottle of Coca-Cola; she is stylishly dressed, her nails are neatly polished, and she turns her head slightly to the left and laughs happily. Presented on a commercial street, this advertisement mixes iconic and symbolic signs. Since one way advertisers appeal to consumers is by depicting people in the advertisement who resemble those consumers, or the ideal that consumers aspire towards, the girl in this advertisement is an idealized average of the many female potential consumers whom the advertisers assume are likely to be shopping or hanging out on the commercial street (MacRury, 2008, p. 173). The sign is also symbolic: by perceiving the girl's style of dress, consumers can easily make connotative links to beauty and health (via her bronze skin and well-defined body shape). The surrounding environment contributes to the interpretation as well: the shopping malls, fashionable clothes and make-up all represent the consumer's pursuit of beauty, and the experience of shopping itself represents a feeling of leaving stress behind for an afternoon to enjoy shopping and pursue beauty. Thus, by simulating the user's feeling of taking a break while shopping and enjoying their own beauty, the advertisers instantiate a specific scene-based token that still represents the core concept, i.e., Taste the Feeling (the feeling of being beautiful, fashionable, modern, healthy, relaxed, etc.).

2.2 Television commercial

– Fewer limitations on time and more space for representing multiple kinds of feelings.

As another kind of traditional medium, television nevertheless presents advertisements differently from street billboards (to be sure, the street billboards I mean here are traditional ones rather than the elaborate versions on, for example, Times Square), which present visually static textualizations. TV commercials, instead, are mostly motion-picture sequences accompanied by sound effects or music (most often a theme song). While a TV commercial has more room to represent the core concept than a static traditional outdoor commercial, the greater challenge is to organize the frames well, to match image, sound and text simultaneously, and at the same time to make sure the concept is clearly conveyed. In this case, the structure of the token, i.e., the advertisement presented on television as a medium, is remediated.

This is a screenshot of a one-minute TV commercial that Coca-Cola published in March 2016 (Taste the Feeling, 2016). Coca-Cola also adds an explanatory line to emphasize what the commercial is trying to convey: there's a Coke for every feeling. The advertisement uses a sequence of scenes, each fitted with a text following a general pattern, "A with B"; in the next scene the B of the previous scene becomes A', and a B' is generated to match the moving image (e.g., strangers with fire → fire with Coca-Cola).

The TV commercial, to some extent, integrates multiple street billboards: the feelings of different people in different scenes are all simulated and put together. The "A with B" pattern is a good example of a symbolic chain, since according to C. S. Peirce, signs yield interpretants expressible in further signs in unlimited and open-ended chains or networks; this pattern exploits the temporal flexibility of the TV commercial and can therefore convey as many tokens of feeling as time allows (Irvine, n.d., p. 19). Moreover, since advertisement texts are constructed with visual cues that imply an endless chain of meaning from which the viewer can choose some elements and ignore others, the audience's interpretation of each scene is guided by the textual narratives (Danesi, 2013, p. 473).

The sound effects are also elements of the token's structure (the TV commercial) that are remediated by television as a medium. At the beginning of the commercial, the scene displayed is the whole process of pouring Coca-Cola into a glass containing ice, accompanied by the sound of liquid hitting the glass and pieces of ice knocking against each other. The whole scene can be read as a reproduction of the prototype, i.e., customers performing the same process (pouring the Coke) in reality, and such a structure, which combines sound with a sequence of moving images, is not feasible in other media such as street billboards or traditional print. What remains unchanged, however, is the concept "Taste the Feeling": each scene in the commercial's sequence (hanging out with friends, having a crush on a stranger, playing games with family, etc.), however authentically it simulates the specific feelings of customers in different situations, still comes back to the core concept that customers can taste and experience the feeling Coca-Cola brings them.

2.3 Interactive advertising app

– Taste the feeling of customizing your meta-feeling

"Taste the Feeling" launched at a time when computing and internet-based technology was rapidly maturing. New media, as a convergence of two separate historical trajectories, computing and media technologies, is thus a necessary interactive platform for reaching contemporary potential customers (Manovich, 2001, p. 20). Unlike traditional media, where the order of presentation is fixed, the user can now interact with a media object (Manovich, 2001, p. 49). The tokenization of the feeling to be instantiated can therefore be remediated and redefined through interactive media.

Figure 3. Yi Se Lie Ke Kou Ke Le Hu Dong Guang Gao Pai-Zi Ding Yi Hu Wai Guang Gao (2013).

Above is a real-time customizable advertising app launched in Israel. Users can imagine whoever or whatever fits in the blank of the sentence "Share a Coke with __". When customers walk closer to the screen, the customized advertisement is displayed.

This new-media advertisement thus provides multiple tokens of feeling that customers can taste and relate to the taste of Coca-Cola. The literal message of the sentence hints at the feeling of sharing a Coke with friends, lovers, family or strangers. The textualization incorporates the purpose of advertising: attracting customers, guiding them towards a positive attitude to the product, turning that attitude into purchasing behavior, stimulating repeat behavior and eventually leading them to persuade the people around them to purchase as well. By correlating Coca-Cola with the feeling of sharing, the advertisers embed that correlation in customers' memory.

New media, by being interactive, remediate the tokenization of "Taste the Feeling". Instead of guiding the user's interpretation through a fixed presentation, they put the user into a dialogue with the advertisement in which meaning, as an event, is mutually stimulated. Customers can thus also experience the feeling of defining and producing the feeling themselves, in other words, a meta-feeling.

3. Taste the Feeling under the umbrella of the cultural encyclopedia

– Coke incorporating all kinds of feelings at the Olympics

Advertising is one of the most important ways of turning a product into a brand. In the mid-20th century, Roland Barthes brought the idea of myth to the table, arguing that myth is a type of speech defined by its intention much more than by its literal sense (Barthes, 2006, p. 265). While in language the sign is arbitrary and unmotivated, mythical signification is never arbitrary; it is always partly motivated and unavoidably contains some analogy (Barthes, 2006, p. 266). In "Rhetoric of the Image" he distinguished the literal message, i.e., the product of denotation, from the cultural message, i.e., connotation, whereby, guided by the intention of the information transmitter, the reader or receiver chooses some of the perceptible information and ignores the rest (Barthes, n.d., p. 516).

Barthes's ideas have been widely used in multiple fields, including politics, film and advertising, both in his time and in contemporary media. To convey the idea of a brand, the negotiation between brand maker and brand user is crucial, and a main purpose of such brand communication, according to Thellefsen, is creating symmetry (Thellefsen et al., 2013, p. 486).

The following diagram shows the structure to be followed in pursuit of creating symmetry.

Figure 4. Creating Symmetry between brand maker and brand user. (Thellefsen et al., 2013, p.486).

The intersection node of the triadic model indicates a shared cultural background, or memory, between the target community and the brand maker's intention and expectation (Thellefsen et al., 2013, p. 486). Since context and background knowledge are crucial here, Barthes' earlier ideas, which mostly drew on Saussure's model (the signifier and signified), are better incorporated into C. S. Peirce's model, which adds a third element, the interpretant: the sense made of the sign, the acknowledgement that correlates the representamen and the object. Peirce's model provides a dynamic continuum of meaning generation operating under the cultural encyclopedia, i.e., a system of culturally organized meanings and values, codes, genres and symbolic associations (Irvine, 2014, pp. 20, 25).

In the following paragraphs, I use Coca-Cola's advertising strategy during the 2016 Rio Olympics as a case study, focusing on two points. First, by comparing differences in the structure of the advertisements, such as the use of images and of extra textual narratives as tokens, in America and China, I analyze how the advertisers anticipate and engage the target community's interpretation across multiple levels, i.e., the generation of immediate, dynamic and final interpretants (Irvine, n.d., p. 1). Second, I demonstrate that the discourse formed around the spirit of the Olympics becomes an intertextual one: together with Coca-Cola's theme since 2016, "Taste the Feeling", it generates a dialogue that deeply influences how tokens of the theme are instantiated, and I show how the spirit of the Olympics remediates the tokenization representing "Taste the Feeling" while the theme itself remains invariant across different tokens during this special period. For analyzing customers' interpretation processes, Dr. Irvine's diagram outlining semiotic models for kinds and levels of interpreted meaning is a useful model to apply (Irvine, n.d., p. 1).

Figure 5. Peirce’s model of material-cognitive correlations for semiotic substrates. (Irvine, n.d., p.1).

3.1 “此刻是金”, the Chinese advertisement during the Rio 2016 Olympics

– Coca-Cola China embracing Chinese audiences’ feelings about the Rio Olympics

“此刻是金”, literally "gold is this moment", is the slogan defined for the marketing strategy in China during the 2016 Rio Olympics. The literal message conveyed by "gold" in this case is the gold medal and success, supplied by the context of the Olympic period; "this moment", on the other hand, relates customers to every moment of their daily lives. By correlating "gold" with "this moment", the advertisers start from the moments of success in everyday life that people can easily relate to and bring the spirit of the Olympics to customers no matter how ordinary they think they are (the advertisers also intend to convey that whether you are a celebrity or an ordinary person, you always have something in common: the ordinary but meaningful and special moment).

To understand how the advertisement could stimulate symmetry through negotiation with customers, it is important to understand what matters to Chinese audiences. In Chinese culture, family and collectivism are two of the most important ideologies. The cultural knowledge that staying close to one's family, friends and colleagues, helping each other, valuing cooperation and being supportive and caring are ethical is embedded in everyone's cultural memory, instilled from the first day of school. Relating those successful but ordinary moments to family and friends is therefore an easy way to stimulate emotional symmetry among Chinese audiences.

Figure 6. Ke Kou Ke le Sheng Wen Ci Ke Shi Jin. (2016).

The picture is part of a series on the theme "Gold is this moment", each item featuring three scenarios and a text illustration. In the first section, the instances we immediately perceive are the Coca-Cola logo; the Olympic mark rendered as five parallel lines, each in one of the colors of the Olympic rings; the main image, in which a girl smiles while hugging a man, behind whom another man is clapping; the textual narrative, which says "gold is the tears of your proud family"; and two further images, one of a man holding something tightly and gazing at it, the other of two women hugging head to head. From these physical substrates we form our first level of symbolic recognition (in Dr. Irvine's terms, the recognitions generated at this level are the immediate interpretants): this is a Coca-Cola advertisement related to the Olympics; the person hugging the girl is family, and the man clapping behind her is congratulating her on her success; with the help of the textual narrative, the two other images can be related to the same pattern, i.e., someone's success and the proud family behind that person (Irvine, n.d., p. 4). The output of our first code-correlation selections then enables us to move to the next level of interpretation, which, in Barthes's terms, is the process of connotation generating the cultural message: we select additional code correlations to larger genres and meaning categories, and the cultural and social values we live with and experience every day, deeply embedded in our cultural memory, enable us to understand what the advertisers intended to convey, what ideology lies behind it and how it relates to the brand itself (Barthes, n.d., p. 155; Irvine, n.d., p. 4). Because China is strongly influenced by Confucian culture, in which the family is a crucial institution that treasures harmony and morality, the three scenarios remind many Chinese people of their own memories: parents getting up early every morning to cook breakfast, taking them to school, interest classes, auditions and competitions, or staying up late with them while they prepared for big exams. Everyday narratives such as "no matter who you are, how bitter your life tastes, or how hard you struggle to pursue your dream, your family is always behind you, supporting you and never leaving you, and your success or failure never belongs to you alone" are also very common in China and are regarded as “小确幸” (i.e., ordinary but real happiness). Chinese audiences can thus interpret the advertisement to mean that the spirit of the Olympics is not just success itself but also the family behind you, an interpretation that easily invokes their emotional symmetry and relates the value of Coca-Cola as a brand to this cultural ideology.

The second section works in a similar way, with the textual narrative "gold is the encouragement behind success" and a main image in which the pianist Lang Lang practices piano with his tutor behind him. Here the advertiser intends to convey the cultural ideologies that a good student should honor and respect the teacher and that a good teacher should treat students as if they were the teacher's own children. Coca-Cola in this case extends the meaning of gold to virtue and goodness and attaches the brand to it.

The third textual narrative, "gold is the gesture of cheering up among brothers" (to be sure, "brothers" in China does not necessarily mean siblings; it can also refer to good friends, colleagues, etc.), provides a hint that guides the reader's interpretation of the three images: in the main image, the man behind the camera gives a thumbs-up to his "brother"; in the second picture, co-workers gather to celebrate their project; and in the third, a man (we cannot tell directly from the image, but the narrative indicates it is a man) rides a motorbike in his helmet. This section is a contemporary token of collectivism as a cultural ideology in which people value the virtue of working, or simply hanging out, as a team.

These sections use both celebrities and ordinary people as characters, showing the successful moments of ordinary people alongside the ordinary moments of successful people. This refers to another cultural ideology, or virtue, in China, where people believe in and treasure ordinary people for being hardworking and successful in their own fields. By locating the intersection between the Olympic connotation of success and each customer's own moments of working hard towards success, Coca-Cola establishes a firm bond with Chinese audiences during the Rio Olympic period.

3.2 “THATSGOLD”, the advertisement launched in North America during the Rio 2016 Olympics

– Taste the unbeatable feeling of Rio in North America

Figure 7. Coca-Cola North America athletes Ashton Eaton and Alex Morgan feature in the #ThatsGold campaign. (2016).

Figure 8. How Coca-Cola Is Activating Its #ThatsGold Campaign for Rio 2016. (2016).

Above are two advertisements launched across America during the Rio 2016 Olympics. The two follow a similar structural pattern: a Coca-Cola logo at the top left corner, five parallel lines in the colors of the Olympic rings, the slogan THATSGOLD with a textual narrative at the bottom right corner, and images of athletes in different scenarios (there is a slight difference in that the first advertisement adds the overall theme "Taste the Feeling" below the logo, but since all the advertisements of the Olympic period are designed under that umbrella, this difference can be ignored in the following explanation).

The first advertisement offers us immediate physical instances: two athletes (Alex Morgan, a key member of the U.S. Women's National Team that won the 2015 FIFA Women's World Cup, and Ashton Eaton, gold medalist in the men's decathlon at the London 2012 Olympic Games) holding Coca-Cola in glass bottles with big smiles, a bright sunny day, grass, and the Olympic rings behind them (Coca-Cola Goes for Gold in Rio 2016 Olympic Games with Global #ThatsGold Campaign, 2016). The textual narrative, "unbeatable taste", is a pun referring both to the taste of being unbeatable in competition as a champion and to the unique taste of Coca-Cola as a beverage.

The second advertisement takes a swimming pool as its background, where Nathan Adrian, the "Fastest Man in the Pool", who has amassed three Olympic gold medals and seven World Championship golds since 2008, holds a glass bottle of Coca-Cola and celebrates with another swimmer after finishing a training session or race, sending up a pearly spray (Coca-Cola Goes for Gold in Rio 2016 Olympic Games with Global #ThatsGold Campaign, 2016). The textual narrative "A refreshing finish" refers both to the feeling of finishing a swim and to the refreshment of drinking Coca-Cola.

Unlike the Chinese series, in which the advertisers select both celebrities (e.g., Lang Lang, Yang Sun, Ting Zhu) and ordinary people as characters, the campaign launched in North America features an elite group of five Team USA athletes and hopefuls plus an Olympic legend, who together have won more than 20 Olympic medals, including nine gold: Alex Morgan, Ashton Eaton, Tatyana McFadden, Nathan Adrian, Leo Manzano and Nastia Liukin (Coca-Cola Goes for Gold in Rio 2016 Olympic Games with Global #ThatsGold Campaign, 2016). Since these athletes are well known in America, American audiences can immediately select them as features, refer at the first level to energy, refreshment and championship as the literal message, and then generate connotations of patriotism and nationalism as the cultural message.

The following table compares the North American and Chinese versions of the advertisement series in terms of Peirce's model of material-cognitive correlations for semiotic substrates. (To be sure, I explained the previous cases with the concepts of denotation and connotation, which are based on Saussure's static sign model, simply to correspond with some of the sources I refer to; I am still applying Peirce's triadic model to them.)

First level correlation: R (only visual elements are analyzed due to space limitation, but textual narratives are also crucial) → O → I. Second level correlation: R' → O' → I' (the next level would correlate the I' to Coca-Cola).

China: R = parents-children; teacher-student; co-workers. O = family affection, special moments of ordinary people. I = immediate interpretant. Second level: familism, collectivism → Chinese culture.

North America: R = athletes; sunshine; swimming pool. O = energy, brightness, refreshment, championship. Second level: nationalism, patriotism → American culture.

Table 1: Comparison between the Chinese version and the North American version.

3.3 Taste the Feeling of the Olympics: making the invariant theme special under the Olympic campaign

How do advertisers enable people to relate Coca-Cola immediately to "Taste the Feeling"? According to Lencastre, brand makers turn a product into a brand by adding an augmented identity (e.g., slogans, labels, mascots, iconic signs) on top of the core identity and actual identity of the product itself (Lencastre, 2013, pp. 493-494). Advertising thus follows the Peircean principles of firstness, secondness and thirdness, where thirdness is the repeated interpretation in space and time of the relationship between the immediate stimuli and the objects, a process of recognition moving from short-term to long-term memory; in other words, repetition makes the product correspond immediately to the brand and the brand to its value profile (Thellefsen et al., 2013, p. 565). Through branding and advertising, brand makers and advertisers want to create an emotional filter for the product, intended to create an emotional state in the brand user that is mirrored in the product, so that the product becomes a cognitive-symbolic habit of interpretation (Thellefsen et al., 2013, p. 563). For Coca-Cola as a product since 2016, the value profile the company wants to build is every kind of feeling customers could imagine, presented in the slogan "Taste the Feeling". The propaganda team therefore emphasizes that every moment, for everyone, can be made special by drinking Coca-Cola; certain periods, spaces, events and movements all become parts of "every moment for everyone". "Taste the Feeling", in Peirce's terms, is the invariant type that can be instantiated infinitely across different scenarios. The Rio Olympic campaign is thus the setting for one token of the type, in which the spirit of the Olympics itself serves as a medium that remediates audiences' interpretation of the feeling of tasting Coca-Cola.

Figure 9. The tokenization of “Taste the Feeling” under Rio Olympic 2016 campaign.

Above is my explanation of how advertisers instantiate "Taste the Feeling" as a type, with Rio 2016 as a context for generating tokens, and relate its value to Coca-Cola both as product and as brand. Taking Coca-Cola and the Olympics as two objects of analysis, the advertisers stand in the potential customers' position to generate multi-level interpretations based on these two objects and the concepts extended from them, i.e., the series of signs running from Coca-Cola's "Taste the Feeling" and from the spirit of Rio 2016. While customers retrieve memories for interpretation that rely on the socio-cultural context they live in, advertisers need to define socio-culturally specific values as symbols that may stimulate emotional symmetry among customers and simulate their feelings as prototypes. For different countries or regions (here, China and North America), advertisers therefore generate different elements that relate audiences to their context-specific interpretations of the Olympic spirit and of the feelings involved (to be sure, this does not happen purely sequentially: the different levels of signs of Coca-Cola on the left and of the Olympics on the right actually form a network, in which each node can be a source of otherness that enters intertextually into the dialogue and affects the reader's perception and interpretation of other nodes simultaneously and dynamically). Finally, among those signs, advertisers are able to find the intersection from which they can generate a new augmented identity around Coca-Cola under the umbrella of the Olympic campaign.

4. Conclusion:

Whenever you drink a bottle of Coca-Cola, you are never merely drinking, or tasting, the Coke itself. Every single part of the propaganda chain behind it is highly symbolic.

Through the Coca-Cola example, I demonstrate that the invariant type in advertising can be instantiated infinitely across different media (to be sure, the media here include not only physical media such as television and print but anything that can mediate and remediate the tokenization, including different scenarios, movements, historical-socio-cultural contexts, etc.).

The first part of the paper demonstrates that different physical media remediate the tokenization of an invariant type; in the case of Coca-Cola, the invariant type is "Taste the Feeling", which can be instantiated infinitely. I explained that the advertisers simulate the feeling customers would experience in a medium-specific way and realize the theme "Taste the Feeling" through various means and platforms. However the structures of the tokenization are remediated across physical media, the intention to correlate the feeling of drinking Coca-Cola with every special moment customers experience is unchanged, indeed repeated and re-emphasized.

In the second part of the paper, I use the 2016 Rio Olympics as a context that remediates the tokenization of the advertisement to incorporate the Olympic spirit. Applying C. S. Peirce's triadic model, which can be further expanded to dynamic, multi-level interpretation, I treat the cultural encyclopedia as an umbrella that cannot be ignored in customers' interpretation of the advertisement. I compare the advertisements launched in North America and China by analyzing how cultural values and memories are imbued in the different versions under the Olympic campaign. I conclude that cultural context and a specific campaign are also media that remediate, re-tokenize and re-instantiate the tokenization of the theme, yet "Taste the Feeling" is always the type, unchanged no matter how the advertisements vary.

Advertising, of course, is just one element of the whole brand wheel, which runs from the name, logo and slogan-jingle to the textualization that puts the actual advertisement on the table, and finally to the design of the product's packaging (Danesi, 2013, p. 465). The design of every "department" on this brand wheel can be considered a tokenization that can be instantiated without limit, following the general rule of the type. One limitation of this paper is that there is not enough space to discuss tokenization across the whole brand wheel. Besides, I use only space (China versus North America) as the variable in discussing how multi-level interpreted meanings are generated under the cultural encyclopedia; since cultural context not only varies across regions but also changes dynamically over time, exploring how such tokenization evolves along a timeline would make the research more comprehensive.

Further, the translation of slogans in different countries could be explored in depth in future research. From phonology and morphology, through the management of lexicon and syntax, to semantics and pragmatics constrained by cultural context (e.g., a given idiom works differently depending on the country or region), all are worth exploring in future work.

Work cited

  1. Bolter, J.D & Grusin, R. Remediation: Understanding New Media. Cambridge, MA: The MIT Press, 2000.
  2. Barthes, R. (2006). Myth today. Cultural theory and popular cultural: a reader, 3, 293-302.
  3. Barthes, R. (n.d.). Rhetoric of the Image. Retrieved 2018, May 2, from https://faculty.georgetown.edu/irvinem/theory/Barthes-Rhetoric-of-the-image-ex.pdf
  4. C. (2016, March 31). Taste the Feeling – Sri Lanka (English). Retrieved from https://www.youtube.com/watch?v=5FsnuHf7vFA
  5. Chandler, D. (2017). Semiotics: The Basics (2nd ed.). New York, NY: Routledge.
  6. Coca-Cola Goes for Gold in Rio 2016 Olympic Games with Global #ThatsGold Campaign. (2016, July 13). Retrieved from https://www.coca-colacompany.com/press-center/press-releases/coca-cola-goes-for-gold-in-rio-2016-olympic-games-with-global-thatsgold-campaign
  7. Coca-Cola Announces New ‘One Brand’ Marketing Strategy and Global Campaign. (2016, January 19). Retrieved from https://www.coca-colacompany.com/stories/taste-the-feeling-launch
  8. Danesi, M. (2013, September). Semiotizing a product into a brand. Social Semiotics,23(4), 464-476. doi:10.1080/10350330.2013.799003
  9. The Coca-Cola Festival Bottle. (n.d.). Retrieved from http://alexandruvasile.com/portfolio/coca-cola-festival-bottle/
  10. Yi Se Lie Ke Kou Ke Le Hu Dong Guang Gao Pai-Zi Ding Yi Hu Wai Guang Gao (2013). Retrieved from http://iwebad.com/case/2259.html
  11. Ke Kou Ke Le Sheng Wen “Ci Ke Shi Jin”, Ao Yun Jing Shen Yi Chu Ji Fa. (2017, July 9). Retrieved 2018, May 2, from http://ytsports.cn/news-10704.html?id=62
  12. How Coca-Cola Is Activating Its #ThatsGold Campaign for Rio 2016. (2016, August 22). Retrieved from https://www.portada-online.com/2016/08/08/how-coca-cola-activates-the-rio-2016-thatsgold-campaign/
  13. Irvine, M. (2018). Introduction to Signs, Symbolic Cognition and Semiotics. Retrieved 2018, May 2, from http://faculty.georgetown.edu/irvinem/CCTP748/
  14. Irvine, M. (2018). A Student’s Guide to Mikhail Bakhtin: Key Terms and Main Theories. Retrieved 2018, May 2, from http://faculty.georgetown.edu/irvinem/CCTP748/
  15. Irvine, M. (2014). Remix and the Dialogic Engine of Culture: A Model for Generative Combinatoriality. The Routledge Companion to Remix Studies (E. Navas et al, Eds). New York, NY: Routledge, 15-42.
  16. Lencastre, P. D., & Côrte-Real, A. (2013, September). Brand response analysis: A Peircean semiotic approach. Social Semiotics,23(4), 489-506. doi:10.1080/10350330.2013.799005
  17. MacRury, I. (2008). Advertising. New York, NY: Routledge.
  18. Manovich, L. (2001). The language of new media. Cambridge, MA: MIT Press.
  19. MacRury, I. (2008). Advertising. New York, NY: Routledge.
  19. Thellefsen, T., & Sørensen, B. (2013). Negotiating the meaning of brands. Social Semiotics, 23(4), 477-488. doi:10.1080/10350330.2013.799004
  20. Thellefsen, T., Sørensen, B., & Danesi, M. (2013). A note on cognitive branding and the value profile. Social Semiotics, 23(4), 561-569. doi:10.1080/10350330.2013.799010

draft & ongoing proposal — Wency

Research questions:

  1. The development of advertising content over time: how does it reflect the perception and interpretation of that content by different communities across time and space? Study the different levels of semiosis and meaning generation, from perception, feature extraction, pattern recognition, and pattern mapping (token into type), through syntactic combination and semantic frames, to the cultural encyclopedia and dialogue.

Analyze objects and possible keywords:

  • Peirce’s triangle: object, representamen, interpretant;
  • Three modes of signs: symbolic, iconic, indexical;
  • Meaning is not something stable; it is an event, generated through dialogue and shaped by the macro cultural encyclopedia over time.
  • Barthes’s mythology, connotation.
  • Through studying slogans and images/patterns: type-token relations, syntax and grammar, semantics.

– From emphasizing function to addressing personalized and emotional needs: how does such mythology work?

– How does Coca-Cola tie its concept to particular events (e.g., the Olympics)? (How do we correlate the connotations of two objects?)

– Persuasion: how does sugar + water become differentiated and stand out among other beverages?

– How do these advertisements simulate the perception of drinking the product?

– How does the cultural encyclopedia affect people’s perception and interpretation?

EX: the barriers of idiom translation and the ambiguity of language across countries; different spokespersons across different periods.


  2. Advertising on different media: e.g., why does Louis Vuitton not advertise on YouTube?


Analyze objects and possible keywords:

  • Reproduction, remediation. Digital media/meta-medium
  • People’s connotations about digital media and mass communication → aura, prestige, wealth, scarcity.
  • Interface/channel for information exchange; both users and producers become transmitters of information (e.g., QR codes).
  • Socio-technical system, the precondition of digital advertising and the socio-economic-political-cultural drive behind it.


– How does digital media, acting as an interface, affect the interaction between the user and the brand (does it enhance it? speed it up? allow more data collection and more data types?), compared with traditional marketing?

– semiotics behind brand

– semiotics behind the different media platforms that present the advertisement (accessible? approachable? mass perception?). EX: advertisements embedded in dramas, reality shows, films, YouTube channels, Instagram, search-engine advertising, etc.

– the perception of the same content on different platforms.

– affordance


Weeks to be explored:

Weeks 4, 5, 6, 9, 10, 12, and 13

  • Language: our modeling system for meaning & symbolic thought
  • The grammar of meaning-making: sign systems & symbolic cognition
  • Applying semiotic methods to media: how meaning systems work
  • Media theory: from medium to mediation
  • From medium and mediation to de-blackboxing socio-technical systems
  • From computing machines to digital media interfaces & metamedia
  • Mediation, representational media & metamedia to the Google art project.

Ongoing bibliography:

  • Steven Pinker, “How Language Works.” Excerpt from: Pinker, The Language Instinct: How the Mind Creates Language. New York, NY: William Morrow & Company, 1994: 83-123.
  • Bennett, P., & McDougall, J. (Eds.). (2013). Barthes’ Mythologies today: Readings of contemporary culture.
  • Danesi, Marcel. “Semiotizing a Product into a Brand.” Social Semiotics 23, no. 4 (September 2013): 464–76.
  • Lencastre, Paulo de, and Ana Côrte-Real. “Brand Response Analysis: A Peircean Semiotic Approach.” Social Semiotics 23, no. 4 (September 2013): 489–506.
  • MacRury, Iain. Advertising. London; New York: Routledge, 2008.
  • Thellefsen, Torkild, and Bent Sørensen. “Negotiating the Meaning of Brands.” Social Semiotics 23, no. 4 (September 2013): 477–88.
  • Thellefsen, Torkild, Bent Sørensen, and Marcel Danesi. “A Note on Cognitive Branding and the Value Profile.” Social Semiotics 23, no. 4 (September 2013): 561–69.
  • Berger, A. A. (1997). Narratives in popular culture, media, and everyday life. Sage.