Category Archives: Assignments

Intelligent Personal Assistant and NLP

“Alexa, what’s the weather like today?”

As Intelligent Personal Assistants begin to play a more significant role in our daily lives, conversation with a machine is no longer science fiction. But few ever bother to ask: how did we get here? Were the Intelligent Personal Assistants – Siri, Cortana, Alexa and the rest – inevitable, or did they just happen to turn out this way? And ultimately, what enables us to communicate with a machine?

Any Intelligent Personal Assistant can be considered a complex system. From the software layer to the hardware layer, a working intelligent personal assistant is the collective effort of many components – both tangible and intangible.

Though a functioning intelligent personal assistant unit is the product of a larger structure, the most intuitive part, from a user’s perspective, is the back-and-forth procedure of “human-machine interaction”. At the current stage, most technology companies that offer intelligent personal assistant services are trying to make their products more “human-like”. This – again – is an entire project consisting of big data, machine learning (deep learning), neural networks, and other disciplines related to or beyond Artificial Intelligence. But on the front-facing end, there is one subsystem we need to talk about: natural language processing (NLP).

What is NLP?

When decomposing the conversation flow between individuals, a three-step procedure seems to be the common pattern. The first step is receiving the information: generally, our ears pick up sound waves that are generated by some kind of vibration and transmitted through the air.

The second step is processing the information. The acoustic signal that was received is matched against existing patterns in the brain and thereby assigned corresponding meanings.

The third step is the output of information. One disseminates the message by generating an acoustic signal that can be picked up by the other party, keeping the conversation flowing.

When it comes to “human-machine interaction”, NLP follows a similar pattern, imitating the three-step procedure of inter-human communication. By definition, NLP is “a field of study that encompasses a lot of different moving parts, which culminates in the 10 or so seconds it takes to ask and receive an answer from Alexa. You can think of it as a process of roughly 3 stages: listening, understanding, and responding.”

To handle the different stages of this procedure, Alexa was designed as a system with multiple modules. For the listening part, one of the “front-end” modules picks up the acoustic signal with a sensor upon hearing voice commands or “activation phrases”.

This module is connected to the internet via wireless technologies so that it can send information to the back-end for further processing.

Understanding, also referred to as the processing stage, begins when speech recognition software takes over and helps the computer transcribe the user’s spoken English (or another supported language) into corresponding text. This is, in effect, a tokenization of the acoustic wave, which is not a self-contained medium: through this transformation, certain waves are turned into tokens and strings that a machine can handle. The ultimate goal of this analysis is to turn the text into data. Here comes one of the hardest parts of NLP: natural language understanding (NLU), considering “all the varying and imprecise ways people speak, and how meanings change with context” (Kim, 2018). This brings in the entire linguistic side of NLP, as NLU “entails teaching computers to understand semantics with techniques like part-of-speech tagging and intent classification — how words make up phrases that convey ideas and meaning.” (Kim, 2018)
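The two NLU techniques Kim names – tokenization and intent classification – can be sketched in toy form. This is a minimal illustration, not how Alexa actually works: the intent labels and keyword sets below are invented for the example, and real assistants use trained statistical models rather than keyword overlap.

```python
# Toy sketch of two NLU steps: tokenization and keyword-based
# intent classification. The intents and keywords are invented
# for illustration; production systems use learned models.

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return text.lower().replace("?", "").replace(",", "").split()

# Hypothetical intent lexicon: keywords that hint at each intent.
INTENT_KEYWORDS = {
    "weather_query": {"weather", "rain", "sunny", "forecast"},
    "set_alarm": {"alarm", "wake", "remind"},
}

def classify_intent(tokens):
    """Pick the intent whose keyword set overlaps the tokens most."""
    scores = {
        intent: len(keywords & set(tokens))
        for intent, keywords in INTENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

tokens = tokenize("Alexa, what's the weather like today?")
print(classify_intent(tokens))  # weather_query
```

Even this toy version shows why NLU is hard: “wake me up at seven” contains no literal keyword for an alarm beyond “wake”, and any paraphrase outside the keyword list falls through to “unknown”.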

This all happens in the cloud, which also loosely parallels how the human brain functions when dealing with natural language.

When a result is reached, we come to the final stage – responding. This is roughly the inverse of natural language understanding, since the data is turned back into text. Once the machine has the outcome, there are two more efforts to make. One is prioritizing: choosing the data most relevant to the user’s query. This leads to the second effort, reasoning: translating the resulting concept into a human-understandable form. Lastly, “Once the natural-language response is generated, speech synthesis technology turns the text back into speech.” (Kim, 2018)
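The two “efforts” above – prioritizing and reasoning – can be sketched as picking the candidate answer that best matches the query, then rendering it as a human-readable sentence. The candidate data and the overlap scoring are made up for this illustration; they stand in for whatever retrieval and ranking a real back-end performs.

```python
# Toy sketch of the responding stage: prioritize candidate answers
# by keyword overlap with the query, then phrase the winner as a
# sentence. All candidate data below is invented for illustration.

def prioritize(query_tokens, candidates):
    """Return the candidate whose keywords best match the query."""
    return max(
        candidates,
        key=lambda c: len(set(c["keywords"]) & set(query_tokens)),
    )

def phrase(candidate):
    """'Reasoning' step: turn the chosen data into readable text."""
    return candidate["template"].format(**candidate["data"])

candidates = [
    {"keywords": ["weather", "today"],
     "template": "Today will be {condition} with a high of {high}.",
     "data": {"condition": "sunny", "high": "72F"}},
    {"keywords": ["time"],
     "template": "It is {time}.",
     "data": {"time": "3:00 PM"}},
]

best = prioritize(["weather", "today"], candidates)
print(phrase(best))  # Today will be sunny with a high of 72F.
```

The text produced by `phrase` is what a speech synthesis engine would then turn back into audio.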

Now that we have some basic understanding of the NLP procedure, we can return to the question raised at the beginning: why is the NLP architecture of an Intelligent Personal Assistant designed this way?

Consider the transducer part of the system first. It might seem quite intuitive at first glance: a sensor acting as a transducer is the equivalent of the human ear, picking up acoustic waves as needed. But design questions are involved here: what would be the ideal form for the housing of an Intelligent Personal Assistant?

Since Siri was introduced to the world as a built-in function of the iPhone, it had to fit in a compact mobile device with a screen and incorporated only two microphones. This increased portability and flexibility at the cost of reliability.

It is natural for a human to distinguish useful information from background noise. In everyday conversation, we consciously pick up the acoustic waves that are relevant to our own conversation and filter out the rest.

Applied to the human-machine interaction scenario, error prevention is the direction to go: “rather than just help users recover from errors, systems should prevent errors from occurring in the first place.” (Whitenton, 2017) With the development of speech recognition technology, errors in NLU have dropped dramatically. “But there’s one clear type of error that is quite common with smartphone-based voice interaction: the complete failure to detect the activation phrase. This problem is especially common when there are multiple sound streams in the environment” (Whitenton, 2017)

To tackle this problem, Amazon built Alexa its own dedicated hardware – the Echo – which puts voice interaction first. “It includes seven microphones and a primary emphasis on distinguishing voice commands from background noise” (Whitenton, 2017)

NLP and Linguistics

Why is this so important? “Meaning is an event, it happens in the process of using symbols collectively in communities of meaning-making – the meaning contexts, the semantic networks and social functions of digitally encoded content are not present as properties of the data, because they are everywhere systematically presupposed by information users” (Irvine, 2014)

As the very first step in human-machine interaction, the primary requirement on the machine side is the ability to properly receive the message from the human side. At the same time, context is very important when discussing human-machine interaction. The purpose of NLP is to generate an experience as close as possible to inter-human communication. As every conversation needs a starting point, a responsive Intelligent Personal Assistant “requires continuous listening for the activation phrase” (Whitenton, 2017) so that it can be less intrusive – in the case of Alexa, one does not need to carry it around or follow any fixed steps to “wake up” the system. The only necessity is a natural verbal signal (“Alexa”) to trigger the conversation.

After the assistant acquires the needed information, the whole “black box” that lies beneath the surface starts functioning. As mentioned above, an Intelligent Personal Assistant first sends all the data to the “back-end”. Since language codes “information into the exact sequences of hisses and hums and squeaks and pops that are made” (Pinker, 2012), machines then need the ability to recover the information from the corresponding stream of noises.

We can look at one possible methodology that machines resort to in decoding natural language.

Part-of-speech tagging – or syntax. A statistical speech recognition model can be used here to “[convert] your speech into a text with the help of prebuilt mathematical techniques and try to infer what you said verbally.” (Chandrayan, 2017)

This approach takes the acoustic data and breaks it down into short intervals, e.g. 10–20 ms. “These datasets are further compared to pre-fed speech to decode what you said in each unit of your speech … to find phoneme (the smallest unit of speech). Then machine looks at the series of such phonemes and statistically determine the most likely words and sentences to spoke.” (Chandrayan, 2017)

Moving forward, the machine looks at each individual word and tries to determine its word class, tense, and so on. As “NLP has an inbuilt lexicon and a set of protocols related to grammar pre-coded into their system which is employed while processing the set of natural language data sets and decode what was said when NLP system processed the human speech.” (Chandrayan, 2017)
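The “inbuilt lexicon and set of protocols” can likewise be sketched as a tiny lookup table with crude fallback rules. The mini-lexicon, the tag set, and the suffix rules are toy assumptions for illustration; production taggers are statistical models trained on tagged corpora.

```python
# Toy lexicon-based part-of-speech tagger. The word list and the
# suffix fallback rules are invented for illustration; real taggers
# are statistical (trained on hand-tagged corpora).

POS_LEXICON = {
    "the": "DET", "a": "DET",
    "weather": "NOUN", "alarm": "NOUN",
    "is": "VERB", "set": "VERB",
    "sunny": "ADJ", "nice": "ADJ",
}

def pos_tag(tokens):
    """Tag each token from the lexicon, with naive suffix fallbacks."""
    tags = []
    for tok in tokens:
        if tok in POS_LEXICON:
            tags.append((tok, POS_LEXICON[tok]))
        elif tok.endswith("ing") or tok.endswith("ed"):
            tags.append((tok, "VERB"))   # crude morphological guess
        else:
            tags.append((tok, "NOUN"))   # default open-class guess
    return tags

print(pos_tag(["the", "weather", "is", "sunny"]))
# [('the', 'DET'), ('weather', 'NOUN'), ('is', 'VERB'), ('sunny', 'ADJ')]
```

Note how quickly the fallback rules fail (“set” is also a noun; “red” is not a verb): this is exactly the ambiguity that pushes real systems toward statistics rather than fixed protocols.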

Now that we have the foundation for decoding the language – by breaking it down – what is the next step? Extracting the meaning. Again, meaning is not a property but an event. In that sense, meaning is not fixed – it changes all the time.

In interpersonal communication, it feels natural to constantly refer to the context and spot the subtle differences.

But for now, the typical Intelligent Personal Assistant “is primarily an additional route to information gathering and can complete simple tasks within set criteria” (Charlton, 2017). This means they do not fully understand users and their intentions.

For instance, when we ask someone the price of a flight ticket, the response – besides the actual price – could include questions like “if you are going to a certain place or if you need a price alert for that flight”. But we cannot really expect this kind of follow-up from an Intelligent Personal Assistant.

So, let’s go back to inter-personal communication – how do we come up with follow-up responses in the first place? We generalize and deduce empirically, interconnecting things that could be relevant – such as the intention to go somewhere and the act of asking the price of certain flight tickets. When we have similar expectations of machines, then on one hand, they have to conduct a reasoning process similar to ours to draw the conclusion. On the other hand, they need a pool with an adequate amount of empirical resources to draw that conclusion from. The point is that the empirical part can differ between individuals – which means the interaction pattern needs to be personalized on top of some general reasoning.

In this sense “Google Assistant is probably the most advanced, mostly because it’s a lot further down the line and more developed in terms of use cases and personalization. Whereas Alexa relies on custom build ‘skills’, Google Assistant can understand specific user requests and personalize the response.” (Charlton, 2017)

This is not something to be built overnight but rather a long-term initiative: “The technology is there to support further improvements; however, it relies heavily on user adoption … The most natural improvement we expect to see is more personalization and pro-active responses and suggestions.” (Charlton, 2017)

Now that the machine has this “artificial language” in hand, the next step is to translate it into “meaningful text which can further be converted to audible speech using text-to-speech conversion”. (Charlton, 2017)

This seems to be relatively easier work than the natural language understanding part of NLP. As “The text-to-speech engine analyzes the text using a prosody model, which determines breaks, duration, and pitch. Then, using a speech database, the engine puts together all the recorded phonemes to form one coherent string of speech.” (Charlton, 2017)
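The engine “putting together all the recorded phonemes” is concatenative synthesis, which can be sketched in miniature. The unit table below uses placeholder strings standing in for recorded audio samples, and the whole table is invented for illustration; real engines store thousands of recorded units and smooth the joins using the prosody model.

```python
# Toy sketch of concatenative text-to-speech: look up a stored
# "unit" for each phoneme and join them into one stream. The unit
# strings stand in for audio samples; all entries are invented.

UNIT_DB = {"HH": "[hh]", "AY": "[ay]", "B": "[b]", "IY": "[iy]"}

def synthesize(phonemes, pause="."):
    """Concatenate stored units; a prosody model would set real
    breaks, duration, and pitch instead of a fixed trailing pause."""
    units = [UNIT_DB[p] for p in phonemes]
    return "".join(units) + pause

print(synthesize(["HH", "AY"]))  # [hh][ay].
```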

Intelligent Personal Assistant as Metamedium

But if you look at the way many answers are generated, the computer (in the case of the Intelligent Personal Assistant, this is cloud computing) acts as a metamedium. This is significant in at least two ways.

To begin with, as a metamedium, the Intelligent Personal Assistant “can represent most other media while augmenting them with many new properties” (Manovich, 2013). In the specific case of Alexa, the integration of hardware and software, as well as the synergy this integration produces, is significant.

Sensors, speakers, the wireless module, the cloud … each of these elements can fulfill specific tasks on its own. But by combining them, the new architecture not only achieves goals that could never be accomplished by any individual component; the components, in turn, gain new possibilities. For example, sensors empowered by software can distinguish specific sounds from ordinary background noise.

Another important aspect is the chemical reaction generated by the interplay of all the individual components. In the case of the Intelligent Personal Assistant, one such possibility is data fusion. In Software Takes Command, Manovich offers the following description: “another important type of software epistemology is data fusion – using data from different sources to create new knowledge that is not explicitly contained in any of them.” (Manovich, 2013)

This could be a very powerful tool in the evolution of the Intelligent Personal Assistant: “using the web sources, it is possible to create a comprehensive description of an individual by combining pieces of information from his/her various social media profiles making deductions from them” (Manovich, 2013). This idea is in line with the vision of a more personalized and proactive Intelligent Personal Assistant. If an Intelligent Personal Assistant were granted proper access to user information, and the user were willing to communicate with it, the system could advance rapidly. So the advantage of an NLP-capable Intelligent Personal Assistant as a metamedium is its ability to combine information from both ends (users and social media platforms) so as to reach better decisions.

At the same time, as users become one of the media sources in depicting the big picture of user personas, users also benefit from this procedure: “combining separate media sources could also give additional meanings to each of the sources. Considering the technique of the automatic stitching of a number of separate photos into a single panorama” (Manovich, 2013)

The Intelligent Personal Assistant, upon getting input from users via NLP, can be a mirror and a dictionary to its users at the same time: it both reflects users’ characteristics and enhances the user experience, owing to its nature as a metamedium.

Another question that can be answered by the metamedium side of the Intelligent Personal Assistant is “why would we need such a system?”. Looking back at the trajectory of technological development, we can see that the evolution of HCI and the “metamedium” ecology around the computer is pretty much a history of the mutual education of computer and human.

Before we got used to smartphones with built-in cameras, people questioned the necessity of the idea: why would I need a phone that takes pictures? But now we are so used to phones as our primary photographic tools that we even handle a great part of media production on them. Again – using smartphones for photo and video editing did not happen until the smartphone as a platform digested the camera as a proper unit and hardware development gave the platform the capabilities to do so. And this trend has – to a great extent – contributed to the popularity of social networks like Instagram and Snapchat.

A similar story applies to the Intelligent Personal Assistant. When Siri – the first mainstream Intelligent Personal Assistant – was released back in 2011, the criticisms it received ranged from requiring stiff user commands and lacking flexibility to missing information on certain nearby places and failing to understand certain English accents. People doubted the necessity of having such a battery-draining service on their phone. Now, after seven years of progress, not only do we see a boom in Intelligent Personal Assistants, we have gotten used to them as well – especially in certain scenarios, like when you are cooking and want to set an alarm or pull up a recipe, or when you are driving and want to start the navigation app. An Intelligent Personal Assistant with NLP capability is – by far – probably the best solution to these once-awkward situations.

In a market research conducted by Tractica, “unique active consumer VDA users will grow from 390 million in 2015 to 1.8 billion worldwide by the end of 2021. During the same period, unique active enterprise VDA users will rise from 155 million in 2015 to 843 million by 2021.  The market intelligence firm forecasts that total VDA revenue will grow from $1.6 billion in 2015 to $15.8 billion in 2021.” (Tractica, 2016)

(VDA refers to Virtual Digital Assistants)

Systems Thinking

After this brief discussion of the Intelligent Personal Assistant with a focus on NLP, it is a good time to touch upon an important principle for dealing with it. We spent most of this paper talking about NLP and barely scratched the surface of what NLP really is. Yet NLP is only a subsystem in the Intelligent Personal Assistant architecture, which itself is only one representation of a larger discipline – Artificial Intelligence.

So, when talking about the Intelligent Personal Assistant or NLP, we cannot regard them as isolated properties, ignoring the universal connections among systems and subsystems and their interdependence: “systems thinking is non-reductionist and non-totalizing in the methods used for developing explanations for causality and agency: nothing in a system can be reduced to single, independent entities or to other constituents in a system.” (Irvine, 2018)

This requires us to put both the Intelligent Personal Assistant and NLP into context. The Intelligent Personal Assistant is the joint work of many subsystems like NLP, and NLP itself is built on the foundation of its own subsystems. None of the units here could have achieved what we have now on their own.

After all, graphite and diamond both consist of carbon, just arranged in different structures, yet they end up with totally different characters. When we look only at a single point, we simply miss the whole picture.


The Intelligent Personal Assistant is a great representation of Artificial Intelligence in the sense that it creates a tangible platform for humans to interact with. Within it, NLP as a subsystem provides the Intelligent Personal Assistant with the tools to communicate naturally with its users.

In de-blackboxing NLP, we looked at both its software and hardware layers, following the step-by-step pattern of listening, understanding, and responding. Across these layers and steps, all the components – including transducers, the cloud, and voice recognition software – work both independently and collectively to generate the “natural communication” that we experience in real life.

As for methodology, we regarded the Intelligent Personal Assistant as a metamedium in analyzing its ability and potential to evolve and transform. We also touched upon the basic linguistic elements used in designing the processes of NLP. Finally, complexity and the systems thinking approach were brought in to emphasize the Intelligent Personal Assistant and NLP as both self-contained entities and parts of a larger architecture.



1: Kim, Jessica. “Alexa, Google Assistant, and the Rise of Natural Language Processing.” Lighthouse Blog, 2018,

2: Whitenton, Kathryn. “The Most Important Design Principles Of Voice UX.” Co.Design, Co.Design, 28 Apr. 2017,

3: Irvine, Martin. “Key Concepts in Technology, Week 4: Information and Communication.” YouTube, YouTube, 14 Sept. 2014,

4: Pinker, Steven. “Steven Pinker: Linguistics as a Window to Understanding the Brain.” YouTube, YouTube, 6 Oct. 2012,

5: Chandrayan, Pramod. “A Guide To NLP: A Confluence Of AI And Linguistics.” Codeburst, 22 Oct. 2017,

6: Charlton, Alistair. “Alexa vs Siri vs Google Assistant: What Does the Future of AI Look like?” Gearbrain, Gearbrain, 27 Nov. 2017,

7: Manovich, Lev. Software Takes Command. Bloomsbury, London/New York, 2013.

8: Tractica. “The Virtual Digital Assistant Market Will Reach $15.8 Billion Worldwide by 2021.” Tractica, 3 Aug. 2016,

9: Irvine, Martin. “Media, Mediation, and Sociotechnical Artefacts: Methods for De-Blackboxing.” 2018.

The Interpretation of the Usage of Technology in the Art Works of Nam June Paik

Pioneer of Video Art

Nam June Paik, a Korean-American artist, is famous for appropriating the analog television set as an art object. As one of the first artists to establish video as a serious artistic medium in the 1960s, he is regarded as a pioneer of the field and has been called “the father of video art”. He was also one of the first artists to break the barriers between art and technology.

Nam June Paik was involved in Fluxus, an international art movement of the 1960s. Fluxus artists challenged the authority of museums and “high art” and wanted to bring art to the masses. Influenced by Zen Buddhism, their art often involved the viewer, used everyday objects, and contained an element of chance. So, even though Paik’s artworks are still installed in museums like traditional fine art, his broad use of one of the most popular everyday objects – and one of the most influential things in human life in those days – the television, makes his works stand out from the serious statues of the human body and the historical oil paintings hanging on the wall.

Just have a quick look at a list of some of his popular works named directly after TV:
TV Cello (1964)
Magnet TV (1965)
TV Bra for Living Sculpture (1968)
TV Buddha (1974)
TV Garden (1974)

To me, what is so impressive in his work is his idea of actively erasing the boundary between technology and art. In this research, we will try to interpret how Nam June Paik applies technology in his artworks, and how such collaborations work in the museum and on other remediating platforms, or, to say, institutions.

Semiotic Interpretation of the Art Works of Nam June Paik

To think about the role of technology in Nam June Paik’s works, one thing that needs to be clarified is the definition of “medium”, which we will use a lot in the following discussion but which might not feel so clear. Here, by medium we mean the physical substances an artist uses to create an artwork. Take oil painting as an example: it is understandable that both the oil pigment used and the canvas drawn on are media (the plural of medium) of the painting. However, a gel medium like impasto, which can “thicken a paint so the artist can apply it in textural techniques”[1] (Esaak, Shelley, 2018), is also regarded as a medium of art.

In this section, a detailed case study of one of Nam June Paik’s most famous works, Electronic Superhighway: Continental U.S., Alaska, Hawaii (1995), will form the core content. Along the way, brief analyses of more of his interesting works will be brought in to explain specific ideas. Instead of examining the institutions that remediate the artwork – those offering a “space” for audiences to access it – or its reproductions, we will work on the technologies as media relevant to the artwork itself, which become part of the interface of the artwork’s symbolic system and join in the process of meaning creation. To study video technology more directly, we will focus mainly on the collaboration of installation art and video art, where the technologies are more physically reachable.

Briefly De-Blackboxing Electronic Superhighway Physically and Symbolically

[Pic Lost]

Nam June Paik, Electronic Superhighway: Continental U.S., Alaska, Hawaii, 1995

Electronic Superhighway might be one of Nam June Paik’s most famous works. According to the Smithsonian American Art Museum, where the work is now exhibited, it is an approximately 15 x 40 x 4 foot video installation comprising fifty-one channels of video and one closed-circuit television feed. On each screen, video clips play with both images and sound. The bright colorful lines are custom-shaped neon lights, and steel and wood are also used in the construction of the work.

By outlining the shape of the United States and the boundaries between neighboring states, and by grouping the televisions state by state, the work presents a fusion of politics and art. The boundary line between two political areas is a typical token of politics. It is not only relevant to art, but can also be associated with the network of interstate “superhighways” that economically and culturally unified the continental U.S. in the 1950s.

Besides, these physical contours, with the same clips displaying on the screens within each area, can also be read as a separation of the different cultures inside America. The grouped television screens in each area show different video clips which, at least from Nam June Paik’s perspective, represent the most typical character or the most interesting thing about the state. For example, the state of Iowa, “where each presidential election cycle begins, plays old news footage of various candidates, while Kansas presents the Wizard of Oz.”[2]

Electronic Superhighway is actually a meta-media installation, as it also has sound playing along with the videos. In this huge collaboration of mixed and dazzling images and sounds, we can see a reflection of modern life filled with all kinds of images, sounds, and information, driven by the development of mass communication media – especially the development of television and the advance of the “information superhighway”. An announcement of the coming explosion of information is present here, isn’t it?

One more interesting fact about the clips shown on the TV monitors is that all of them were collected and edited by Nam June Paik himself. Video technology kept developing from the 1960s to the 1980s, allowing artists to edit moving images more quickly than by recording them on film and working on the footage afterwards. It took time for negatives to be developed, but with the new technology it became possible to edit images in “real time”. In 1969, Paik even created his own video synthesizer with the Japanese engineer Shuya Abe.[3]

One of his interesting works from this period is TV Garden (1974). It is a single-channel video installation with color television monitors and live plants. Different images and sounds play on each monitor, and in this way, in an enclosed space in the museum, a strange harmony is expressed.

[Pic Lost]

Nam June Paik, “TV Garden” (detail), 1974/2000, single-channel video installation with color television monitors and live plants; color, sound, Solomon R. Guggenheim Museum, New York. (Copyright Nam June Paik Estate)

Watch a video to gain a better sense of this work.

Another attractive work that is “more real-time” is the famous Good Morning, Mr. Orwell (1984), the first international satellite installation artwork. It is seen as a rebuttal to George Orwell’s dystopian vision in his novel 1984. Linking WNET TV in New York and the Centre Pompidou in Paris live via satellite, a reading by Allen Ginsberg in New York was mixed live with a Beuys action taking place in Paris. Even though there were technical problems – the satellite connection between the United States and France kept cutting out – Nam June Paik said that the technical problems only enhanced the “live” mood[4], which from my perspective is quite an ANT (Actor-Network Theory) style of thought that we will discuss later.

The symbolic meaning of technology itself

Electronic Superhighway is without doubt an interface to a cultural meaning system, and in this system, or network, it functions as a node in a network of relations.[5] Going one step further and deconstructing this big token into many smaller tokens, the technology contained in some of the small tokens is not just a physical constituent but also carries its own symbolic meaning that contributes to the meaning system.

Taking one television monitor of the installation as a token, without considering the images or sounds, it, as one of the media of this artwork, is not only a physical medium to display the video, but also carries symbolic meanings that become a crucial part of the symbolic system that makes this artwork work.

From a macroscopic viewpoint, when “de-black-boxing” human life in the 1990s United States, there will certainly be a node for video technology – here, more specifically, the development of television. As John Law said, “social and the technical are embed in each other”[6]; this not only means that we cannot explore human society without studying the “hows” of relational materiality, but also reminds us that when considering the technological elements in artworks, we should put them into their specific social situation and time period. This one television monitor can be taken as a token of the television technology of its period.

All the televisions in this artwork are analog TVs, and this strictly corresponds to the social reality that digital television did not become a consumer product and enter mass production until the late 1990s and the beginning of the 21st century. We can hardly find another medium that was so influential within the family unit in a specific period of time.

From a relatively micro perspective, the television monitor is the core part of the total artwork as a complicated interface to its meaning system – the point where the audience communicates with the artwork and with the idea the artist wants to deliver through the installation.

Based on such a definition, it seems that in the past it was the physical characteristics of materials that were made use of in creating artworks. However, following John Law’s idea, every material owns its social symbolic meanings by being a node in the network of the whole society. By applying these meanings, which can trigger spiritual resonance in those who once experienced, or are experiencing, that kind of lifestyle, the TV monitors – the material that is the representamen of video technology – become a powerful unit interface to deliver Nam June Paik’s thoughts about the American life experience in the “shadow” of mass communication led by television. From my perspective, the symbolic meaning of a specific material, or technology, is part of its natural property: once it is introduced into human society, it gets involved in social development, and its connections and interactions with other agencies in society give birth to its social meaning. From this standpoint, in Nam June Paik’s artworks the media are not simply physical materials but materials carrying social meanings.

Fusion of Cultural Elements

To discuss Nam June Paik's works from a cultural perspective: besides his pioneering thoughts on the new modern life experience shaped by mass media, especially by video technology and the "information superhighway", another impressive characteristic of his art is the fusion of different cultural elements.

Scientific Experiment and Installation Art

The first artwork of Nam June Paik's that I encountered was a less famous one named Magnet TV.

[Pic Lost]

Nam June Paik, “Magnet TV,” 1965, television set and magnet, black and white, silent, Whitney Museum of American Art, New York.

What actually happens here is that when we put a magnet on top of an analog television and then turn the set on, moving images appear like those seen in the picture. Nam June Paik installed this interesting phenomenon in the museum and made it a piece of art. Isn't it more like a scientific experiment than an installation artwork?
Watching a video of this piece, in which the movement of the lines, or rather color blocks, can be seen, might help one get a better sense of it.

Actually, even without an academic background in science or engineering, Nam June Paik produced many works that show scientific and engineering elements. This is closely related to a trend in the art world that started in the 1960s: inspired by new technologies, and having many thoughts on how these technologies were associated with modern life, many artists cooperated with engineers to create their works. In this way, these artists did not just focus on traditionally defined "art creation" but also took part in engineering work.

East and West

Nam June Paik was a Korean American. He was born in Korea and studied in Japan for a long time, while he created most of his amazing works in the field of video art after joining the Western artists' community. Such a remixed cultural background gives many of his works an Eastern aroma carried by thoroughly Western-born technologies like television, video editing and projection. Having an Eastern cultural background myself, I became very interested in these works.

Ommah is a "one-channel video installation on 19-inch LCD monitor"[7] with a silk robe. The name of this work, "Ommah", is a Korean word meaning mother. A television displaying images of Korean people is covered by a traditional Korean-style coat. Watching television is regarded as a family activity, and the importance of the mother in a family is apparent. Such a cultural collision really makes sense, especially to Korean people.

Another really famous work of his is the Buddha series.

[Pic Lost] 1974, closed circuit video installation, bronze sculpture

[Pic Lost] 1982, closed circuit video installation, bronze sculpture

[Pic Lost] 1989, closed circuit video installation, bronze sculpture

[Pic Lost] 1997, closed circuit video, stone sculpture, soil

There are four different versions in this series, created in 1974, 1982, 1989 and 1997. Although the layouts of the real-time projectors and the Buddha statues differ, the concept of combining Western technology with Eastern religious thought is everlasting. Through such combinations, Paik established a "connection between Buddhist beliefs concerning the reincarnation of all living beings and the electronic reproduction of what is always the same"[8].

There are also some classic cultural remixes that many artists, past and present, like to show in their works, and these appear in Nam June Paik's works as well. For example, in Electronic Superhighway we can see a collaboration of politics and culture; in TV Garden, a collaboration of the ideas of nature and human society.

Technology and the Mediating Institution

When looking at the Fluxus movement, from my perspective there are two key approaches in the process of bringing "high art" down to the masses. The first is to move artworks out of the museum, an institution whose very name was born together with high art. As Malraux said, "Museums and schools are the main mediators of, and interfaces to, art history and to the knowledge of the cultural category of 'art' itself." By "art" here, I think he mainly refers to "high art"[9].

In this way, I would summarize the first approach as working on the mediating institutions. The second approach refers to the use of "mass materials" like televisions, projections or other new technologies; it can be regarded as working on the interface of the art piece itself. Since the material, that is, the second approach, has already been discussed above, in this section I would like to briefly explore how mediating institutions work on Nam June Paik's pieces.

Malraux also knew that the modern — and postmodern — museum inherited a cultural motive for collecting works from diverse cultures and histories, and then presenting collections as a coexisting totality or unity with an underlying idealized history.

In the Smithsonian American Art Museum, where Electronic Superhighway is exhibited, many other brilliant artworks are also on show, for example American Indian portraits, postmodern-style statues, and a wall hung with a collection of United States license plates. Each artwork interacts with the others, and all the works are remediated within the museum.
A relatively closed space was built for our case-study installation, but if people have just visited the relatively serious portrait shows or the statues beforehand, a more striking feeling might be generated. Take myself as an example: even though I know little about 20th-century American life, the moment I stood in front of these TV collections and light-emitting diodes, surrounded by the different moving images and sounds of the installation, I felt I had become involved in a specific context. The images and sounds might not be familiar to me as a foreigner, but the way they are organized by the states of the USA still conveyed that context.

The wall in the background of the installation is not in a really modern style but in a quite classic Western architectural design. When looking at the artwork, it is hard for me to ignore the walls and pillars with Western-style arabesques of the 20th century and even earlier.

One interesting experience I would like to mention here: while I was standing in this dazzling area, a father standing next to me was telling his daughter about the clips playing on the screens of Virginia. I can imagine that with such "real experience" on the wall, the visit would be much more moving.

The museum is like a huge mediating machine in which different artworks are remediated while also interacting with each other. Once entering the machine, a person becomes involved in the process of remediation. With different life experiences and different modes of interpreting the works, each visitor receives different information from the same artwork; different arrangements of the artworks, and the other visitors present when one is watching a specific work, might also "edit" the interpretation.

In general, based on my personal experience, the cultural background of the subject (the audience), the time, the people around, and the neighboring artworks around the object can all be variables in the mediating process and in the interpretation of meaning.

But what about watching Electronic Superhighway at home?

There are photos not only of Electronic Superhighway but also of many other Nam June Paik pieces all over the Internet. But by looking at these digitized images, collections of pixels on the screens of our electronic devices, can we know enough about the real works? I was shocked by the sounds generated by the installation when I arrived at the real Electronic Superhighway. One gains a much better sense of the start of the explosion of information triggered by the expansion of mass media in daily life and the "information superhighway" (Nam June Paik, 1965) when standing in that area, with mixed and disordered sounds coming into the ear from all directions.

How about watching videos of it on YouTube? Watch these two videos taken by different people.


I have to admit that for people who have not had, or do not have, the chance to see the real installation at the Smithsonian, these videos do offer a sense of what the installation is about, and most of its basic characteristics, the setting, the light and the sounds, are all shown. But to me, it is still different from the in-person experience of standing in the area. Apart from all the "noises" generated by the remediating process, the chromatic aberration, the poor sound quality and so on, the Electronic Superhighway presented in these videos has already been edited by whoever took them.

To think about it further: though one has not yet been produced, would a VR tour work better? I guess yes. But would it be able to replace a real museum experience? I myself vote no.

Based on all the online mediating institutions that I know of, the initiative of the audience and the fidelity of the presented effect are still limited by the technologies of simulation. To what extent can we simulate a real museum experience? But a further, or more foundational, question is: why does the real museum experience matter? Are these digital technologies trying to imitate a museum experience, or simply trying out the best way to mediate artworks and let the audience access and experience them? We have to admit that some relatively new genres of art, such as digital art, are generated by and created on top of new digital technologies; for these works, the dialogic process works best on electronic screens, and there is no possibility or need to carry them into a real physical space.

Beyond these technological matters, the collective mindset about the traditional museum experience will also block people's acceptance of digital mediating institutions, since to most people the very existence of the real museum itself means something in the field of art.

To me, on the topic of Nam June Paik's works, since most of his artworks still have touchable and hearable physical bodies, the limited simulation technologies still cannot gain an advantage in producing the dialogic context generated by museums.


[1] Esaak, Shelley. “What Is the Definition of ‘Medium’ in Art?” ThoughtCo, Mar. 23, 2018,

[2] “Nam June Paik, Electronic Superhighway: Continental U.S., Alaska, Hawaii.” Khan Academy,

[3]“TateShots: Nam June Paik.” Tate,

[4] Media Art Net. “Media Art Net | Paik, Nam June: Good Morning, Mr. Orwell.” Medien Kunst Netz, Media Art Net, 3 May 2018,

[5] Chandler, Daniel. Semiotics: The Basics. Routledge, Abingdon, Oxon;New York, NY;, 2017.

[6] Turner, Bryan S. The New Blackwell Companion to Social Theory. Wiley-Blackwell, Chichester, West Sussex, United Kingdom; Malden, MA, USA, 2009, pp. 141-158.

[7] “Ommah.” Art Object Page,

[8] Dieter Daniels, in: Heinrich Klotz (ed.), Contemporary Art, exhib. cat., Museum for Contemporary Art / Center of Art and Media, Karlsruhe, 1997, p. 204.

[9] Irvine, Martin, "Malraux and the Musée Imaginaire: (Meta)Mediation, Representation, and Mediating Institutions."

More References

Daniel Chandler, Semiotics: The Basics. 2nd ed. New York, NY: Routledge, 2007.

Martin Irvine, “Introduction to Signs, Symbolic Cognition, and Semiotics: Part I.”

Martin Irvine, “Applying Semiotic Concepts, Models, and Methods.”

Martin Irvine, “The Museum and Artworks as Interfaces: Metamedia Interfaces from Velásquez to the Google Art Project”

Useful Websites