
Intelligent Personal Assistant and NLP

“Alexa, what’s the weather like today?”

As Intelligent Personal Assistants begin to play a more significant role in our daily lives, conversation with a machine is no longer science fiction. But few ever bother to ask: how did we get here? Were the Intelligent Personal Assistants we know – Siri, Cortana, Alexa – inevitable, or did they just happen to turn out this way? And, in the end, what enables us to communicate with a machine at all?

Any Intelligent Personal Assistant can be considered a complex system. From the software layer to the hardware layer, a working Intelligent Personal Assistant is the collective effort of many components – both tangible and intangible.

Though a functioning Intelligent Personal Assistant unit is the product of a bigger structure, the most intuitive part, from a user’s perspective, is the back-and-forth procedure of “human-machine interaction”. At the current stage, most of the technology companies that offer Intelligent Personal Assistant services are trying to make their products more “human-like”. This – again – would be an entire project consisting of big data, machine learning (deep learning), neural networks, and other disciplines related to or beyond Artificial Intelligence. But on the front-facing end, there is one subsystem we need to talk about – natural language processing (NLP).

What is NLP?

When decomposing the conversation flow between individuals, a three-step procedure seems to be the common pattern. The first step is to receive the information: generally, our ears pick up sound waves that are generated by some kind of vibration and transmitted through the air.

The second step is to process the information. The acoustic signal that was received is matched against existing patterns in the brain and thereby assigned its corresponding meanings.

The third step is the output of information. One disseminates the message by generating an acoustic signal via transducers so that it can be picked up by the other party and keep the conversation flowing.

When it comes to “human-machine interaction”, NLP follows a similar pattern by imitating the three-step procedure of inter-human communication. By definition, NLP is “a field of study that encompasses a lot of different moving parts, which culminates in the 10 or so seconds it takes to ask and receive an answer from Alexa. You can think of it as a process of roughly 3 stages: listening, understanding, and responding.”

In order to handle the different stages of this procedure, Alexa was designed as a system of multiple modules. For the listening part, one of the “front-end” modules picks up the acoustic signal with its sensors upon hearing voice commands or “activation phrases”.

This module is connected to the internet via wireless technologies so that it can send information to the back end for further processing.
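To make the listening stage concrete, below is a minimal Python sketch of such a front-end loop: it watches an audio stream for an activation phrase and then forwards the captured utterance to a back-end service. The keyword detector and the back-end URL here are invented placeholders for illustration, not Amazon’s actual implementation.

import json
import urllib.request
from typing import Iterable, List

BACKEND_URL = "https://example.com/asr"  # hypothetical back-end endpoint

def detect_wake_word(frame: bytes, keyword: bytes = b"alexa") -> bool:
    """Stand-in for a real keyword spotter; here we just look for a marker."""
    return keyword in frame.lower()

def stream_after_wake_word(frames: Iterable[bytes]) -> List[bytes]:
    """Collect the audio frames that follow the activation phrase."""
    captured, awake = [], False
    for frame in frames:
        if not awake:
            awake = detect_wake_word(frame)
        else:
            captured.append(frame)
    return captured

def send_to_backend(frames: List[bytes]) -> None:
    """Ship the captured audio to the cloud for speech recognition."""
    payload = json.dumps({"audio": [f.decode("latin-1") for f in frames]}).encode()
    request = urllib.request.Request(
        BACKEND_URL, data=payload, headers={"Content-Type": "application/json"})
    # urllib.request.urlopen(request)  # left disabled: the endpoint is only illustrative

if __name__ == "__main__":
    fake_stream = [b"...background noise...", b"Alexa", b"what's", b"the", b"weather"]
    print(stream_after_wake_word(fake_stream))  # [b"what's", b'the', b'weather']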

Understanding can also be referred to as the processing part: speech recognition software takes over and helps the computer transcribe the user’s spoken English (or another supported language) into corresponding text. This procedure is, in effect, the tokenization of an acoustic wave that is not a self-contained medium: through this transformation, sound waves are turned into tokens and strings that a machine can handle. The ultimate goal of this analysis is to turn the text into data. Here comes one of the hardest parts of NLP: natural language understanding, considering “all the varying and imprecise ways people speak, and how meanings change with context” (Kim, 2018). This brings in the entire linguistic side of NLP, as NLU “entails teaching computers to understand semantics with techniques like part-of-speech tagging and intent classification — how words make up phrases that convey ideas and meaning.” (Kim, 2018)
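As a toy illustration of this stage, the sketch below tokenizes a transcript into strings the machine can handle and then maps the tokens to an intent with a deliberately simple keyword rule. Real assistants use trained statistical models; the intent lexicon here is invented purely for illustration.

import re
from typing import Dict, List

def tokenize(transcript: str) -> List[str]:
    """Turn the recognized text into lower-cased word tokens."""
    return re.findall(r"[a-z']+", transcript.lower())

INTENT_KEYWORDS: Dict[str, List[str]] = {  # hypothetical intent lexicon
    "GetWeather": ["weather", "rain", "temperature"],
    "SetAlarm": ["alarm", "timer", "remind"],
}

def classify_intent(tokens: List[str]) -> str:
    """Pick the intent whose keywords overlap most with the utterance."""
    scores = {intent: sum(word in tokens for word in keywords)
              for intent, keywords in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Unknown"

if __name__ == "__main__":
    tokens = tokenize("Alexa, what's the weather like today?")
    print(tokens)                    # ['alexa', "what's", 'the', 'weather', 'like', 'today']
    print(classify_intent(tokens))   # GetWeather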

This all happens in the cloud, which in a way also simulates how the human brain functions when dealing with natural language.

Once a result is reached, we come to the final stage – responding. This is roughly the inverse of natural language understanding, since the data is turned back into text. Now that the machine has an outcome, there are two more efforts to make. One is prioritizing, which means choosing the data most relevant to the user’s query; this leads to the second effort, reasoning, which here refers to translating the chosen result into a human-understandable response. Lastly, “Once the natural-language response is generated, speech synthesis technology turns the text back into speech.” (Kim, 2018)
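A minimal sketch of this responding stage, again with invented data and templates: the structured results coming back from the back end are prioritized, the best one is rendered into a sentence from a template, and the sentence is handed to a (stubbed-out) speech synthesizer.

from typing import Dict, List

TEMPLATES = {
    "GetWeather": "Right now in {city} it is {condition} and {temp} degrees.",
}

def prioritize(results: List[Dict]) -> Dict:
    """Keep the result judged most relevant to the user's query."""
    return max(results, key=lambda r: r["relevance"])

def generate_response(intent: str, result: Dict) -> str:
    """Fill the intent's template with the chosen result's fields."""
    return TEMPLATES[intent].format(**result["data"])

def speak(text: str) -> None:
    """Placeholder for speech synthesis; a real system would call a TTS engine."""
    print(f"[TTS] {text}")

if __name__ == "__main__":
    results = [
        {"relevance": 0.4, "data": {"city": "Washington", "condition": "cloudy", "temp": 55}},
        {"relevance": 0.9, "data": {"city": "Washington", "condition": "sunny", "temp": 62}},
    ]
    speak(generate_response("GetWeather", prioritize(results)))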

Now that we have some basic recognition of the NLP procedure, we can go back to the questions raised at the beginning: what is the point of designing the NLP architecture of an Intelligent Personal Assistant in such a way?

We could start with the transducer part of the system. This might seem quite intuitive at first glance: a sensor acting as a transducer is the equivalent of the human ear, picking up acoustic waves as needed. But design questions are involved here: what would be the ideal form for the housing of an Intelligent Personal Assistant?

Because Siri was introduced to the world as a built-in function of the iPhone, it had to fit into a compact mobile device with a screen and incorporate only two microphones. This increased portability and flexibility at the cost of reliability.

It is natural for a human to distinguish useful information from background noise. In a daily conversation flow, this means we consciously pick up the acoustic waves that are relevant to our own conversation but not others.

When this is applied to the human-machine interaction scenario, error prevention is the direction to go: “rather than just help users recover from errors, systems should prevent errors from occurring in the first place.” (Whitenton, 2017) With the development of speech recognition technology, errors in NLU have dropped dramatically. “But there’s one clear type of error that is quite common with smartphone-based voice interaction: the complete failure to detect the activation phrase. This problem is especially common when there are multiple sound streams in the environment” (Whitenton, 2017)

To tackle this problem, Amazon built Alexa its own dedicated hardware – the Echo – which puts voice interaction as its top priority. “It includes seven microphones and a primary emphasis on distinguishing voice commands from background noise” (Whitenton, 2017).
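Why do extra microphones help? A toy illustration in Python: averaging the channels (delay-and-sum beamforming in spirit, with the delays ignored here) reinforces the voice signal that all microphones share, while uncorrelated background noise partially cancels out. The signal and noise below are synthetic, and a real array would also estimate the inter-microphone delays.

import math
import random
from typing import List

def average_channels(channels: List[List[float]]) -> List[float]:
    """Average sample-by-sample across the microphone channels."""
    return [sum(samples) / len(samples) for samples in zip(*channels)]

def rms(signal: List[float]) -> float:
    """Root-mean-square level, a rough measure of loudness."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

if __name__ == "__main__":
    random.seed(0)
    voice = [math.sin(2 * math.pi * 200 * t / 16000) for t in range(1600)]   # the shared voice signal
    mics = [[v + random.gauss(0, 0.8) for v in voice] for _ in range(7)]     # 7 noisy channels
    combined = average_channels(mics)
    print("residual noise, single mic:", round(rms([m - v for m, v in zip(mics[0], voice)]), 3))
    print("residual noise, 7 averaged:", round(rms([c - v for c, v in zip(combined, voice)]), 3))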

NLP and Linguistics

Why is this so important? “Meaning is an event, it happens in the process of using symbols collectively in communities of meaning-making – the meaning contexts, the semantic networks and social functions of digitally encoded content are not present as properties of the data, because they are everywhere systematically presupposed by information users” (Irvine, 2014)

As the very first step in human-machine interaction, the primary condition on the machine side is the ability to properly receive the message from the human side. At the same time, context is very important in discussing human-machine interaction. The purpose of NLP is to generate an experience that is as close as possible to inter-human communication. As every conversation needs a starting point, a responsive Intelligent Personal Assistant “requires continuous listening for the activation phrase” (Whitenton, 2017) so that it can be less intrusive – in the case of Alexa, one does not need to carry it around or follow any fixed steps to “wake up” the system. The only necessity is a natural verbal signal (“Alexa”) to trigger the conversation.

After the assistant acquires the information it needs, the whole “black box” that lies underneath the surface starts functioning. As mentioned above, an Intelligent Personal Assistant first sends all the data to the “back end”. Since language is about coding “information into the exact sequences of hisses and hums and squeaks and pops that are made” (Pinker, 2012), machines then need the ability to recover the information from the corresponding stream of sounds.

We can look at one possible methodology that machines resort to in decoding natural language.

Part-of-speech tagging – or syntax. A statistical speech recognition model can be used here, one that “converts your speech into a text with the help of prebuilt mathematical techniques and try to infer what you said verbally.” (Chandrayan, 2017)

This approach takes the acoustic data and breaks it down into short intervals, e.g. 10–20 ms. “These datasets are further compared to pre-fed speech to decode what you said in each unit of your speech … to find phoneme (the smallest unit of speech). Then machine looks at the series of such phonemes and statistically determine the most likely words and sentences to spoke.” (Chandrayan, 2017)
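The framing step described here can be sketched in a few lines of Python: a raw audio signal is sliced into consecutive 20 ms windows, the units a recognizer would compare against its acoustic models when guessing phonemes. The sine-wave “signal” is synthetic, and real systems would also apply windowing functions and overlapping frames, which are omitted.

import math
from typing import List

SAMPLE_RATE = 16_000                          # samples per second, a common rate for speech
FRAME_MS = 20                                 # frame length in milliseconds
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000    # 320 samples per frame

def split_into_frames(signal: List[float]) -> List[List[float]]:
    """Break the signal into consecutive fixed-length frames."""
    return [signal[i:i + FRAME_LEN]
            for i in range(0, len(signal) - FRAME_LEN + 1, FRAME_LEN)]

if __name__ == "__main__":
    one_second = [math.sin(2 * math.pi * 440 * t / SAMPLE_RATE) for t in range(SAMPLE_RATE)]
    frames = split_into_frames(one_second)
    print(len(frames), "frames of", FRAME_LEN, "samples each")   # 50 frames of 320 samples each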

Moving forward, the machine looks at the individual words and tries to determine the word class, the tense, and so on, as “NLP has an inbuilt lexicon and a set of protocols related to grammar pre-coded into their system which is employed while processing the set of natural language data sets and decode what was said when NLP system processed the human speech.” (Chandrayan, 2017)
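Off-the-shelf NLP libraries ship exactly this kind of pre-built lexicon and grammar model. As one example, NLTK (one such library) can tokenize a sentence and tag each word’s part of speech in a couple of lines; the download calls fetch its tokenizer and tagger models on first run, and the exact resource names can vary slightly between NLTK versions.

import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("What's the weather like today?")
print(nltk.pos_tag(tokens))
# e.g. [('What', 'WP'), ("'s", 'VBZ'), ('the', 'DT'), ('weather', 'NN'),
#       ('like', 'IN'), ('today', 'NN'), ('?', '.')]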

Now that we have the foundation for decoding the language – by breaking it down – what is the next step? Extracting the meaning. Again, meaning is not a property but an event. In that sense, meaning is not fixed – it changes all the time.

In inter-personal communication, it feels natural to constantly refer to the context and spot the subtle differences.

But for now, most Intelligent Personal Assistants are “primarily an additional route to information gathering and can complete simple tasks within set criteria” (Charlton, 2017). This means they do not fully understand the user and the user’s intentions.

For instance, when we ask someone for the price of a flight ticket, the response – besides the actual price – could include questions such as whether we are going to a certain place or whether we need a price alert for that flight. But we cannot really expect these kinds of follow-up responses from an Intelligent Personal Assistant.

So, let’s go back to inter-personal communication – how do we come up with follow-up responses in the first place? We reason empirically, interconnecting things that could be relevant – such as the intention to go somewhere and the act of asking the price of certain flight tickets. When we hold a similar expectation of machines, then on one hand they have to conduct a reasoning process similar to the one we use to draw the conclusion; on the other hand, they need a pool with an adequate amount of empirical resources to draw the conclusion from. The point is that the empirical part can differ from person to person – which means the interaction pattern needs to be personalized on top of some general reasoning.

In this sense “Google Assistant is probably the most advanced, mostly because it’s a lot further down the line and more developed in terms of use cases and personalization. Whereas Alexa relies on custom build ‘skills’, Google Assistant can understand specific user requests and personalize the response.”  (Charlton, 2017)

This is not something to be built overnight but rather a long-term initiative: “The technology is there to support further improvements; however, it relies heavily on user adoption … The most natural improvement we expect to see is more personalization and pro-active responses and suggestions.” (Charlton, 2017)

Now that the machine has the “artificial language” in hand, the next step is to translate this language into “meaningful text which can further be converted to audible speech using text-to-speech conversion”. (Charlton, 2017)

This seems to be relatively easier work compared to the natural language understanding part of NLP, as “The text-to-speech engine analyzes the text using a prosody model, which determines breaks, duration, and pitch. Then, using a speech database, the engine puts together all the recorded phonemes to form one coherent string of speech.” (Charlton, 2017)
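The concatenation idea in that quote can be pictured with a toy sketch: each phoneme is looked up in a speech database of recorded units and the units are stitched into one waveform. The two-phoneme “database” below is invented, and the prosody model that adjusts duration and pitch is left out entirely.

from typing import Dict, List

SPEECH_DB: Dict[str, List[float]] = {        # hypothetical recorded unit per phoneme
    "HH": [0.1, 0.2],                        # "hi" -> HH AY
    "AY": [0.3, 0.4, 0.3],
}

def synthesize(phonemes: List[str]) -> List[float]:
    """Concatenate the recorded unit for each phoneme into one waveform."""
    waveform: List[float] = []
    for phoneme in phonemes:
        waveform.extend(SPEECH_DB[phoneme])
    return waveform

if __name__ == "__main__":
    print(synthesize(["HH", "AY"]))          # [0.1, 0.2, 0.3, 0.4, 0.3]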

Intelligent Personal Assistant as Metamedium

But when you look into the way many of the answers are generated, the computer (which, in the case of the Intelligent Personal Assistant, means cloud computing) functions as a metamedium. This is significant in at least two ways.

To begin with, as a metamedium, the Intelligent Personal Assistant “can represent most other media while augmenting them with many new properties” (Manovich, 2013). In the specific case of Alexa, the integration of hardware and software, as well as the synergy brought about by that integration, is significant.

Sensors, speakers, the wireless module, the cloud … all of these elements can fulfill specific tasks by themselves. But by combining them, the new architecture not only achieves goals that could never have been accomplished by any of the individual components; the components, in turn, gain new possibilities – like sensors that, empowered by software, become able to distinguish specific sounds from ordinary sounds.

Another important aspect is the chemical reaction generated by the interplay of all the individual components. In the case of the Intelligent Personal Assistant, one of the possibilities is data fusion. In Software Takes Command, Manovich gives the following description: “another important type of software epistemology is data fusion – using data from different sources to create new knowledge that is not explicitly contained in any of them.” (Manovich, 2013)

This could be a very powerful tool in the evolution of the Intelligent Personal Assistant: “using the web sources, it is possible to create a comprehensive description of an individual by combining pieces of information from his/her various social media profiles making deductions from them” (Manovich, 2013). This idea is in line with the vision of an Intelligent Personal Assistant that is more personalized and proactive. If an Intelligent Personal Assistant were granted proper access to user information, and the user were willing to communicate with it, the system could advance rapidly. So the advantage of an NLP-capable Intelligent Personal Assistant as a metamedium is its ability to combine information from both ends (users and social media platforms) so that it can come up with better decisions.

At the same time, as users become one of the media sources depicting the big picture of their own personas, users also benefit from this procedure: “Combining separate media sources can also give additional meanings to each of the sources. Consider the technique of the automatic stitching of a number of separate photos into a single panorama” (Manovich, 2013).

The Intelligent Personal Assistant, upon getting input from users via NLP, could be a mirror and a dictionary for its users at the same time: it both reflects users’ characteristics and enhances the user experience, owing to its nature as a metamedium.

Another question that can be answered from the metamedium side of the Intelligent Personal Assistant is “why would we need such a system?” Looking back at the trajectory of technological development, we can notice that the evolution of HCI and the “metamedium” ecology around the computer is pretty much a history of the mutual education of computer and human as well.

Before we got used to smartphones with built-in cameras, people questioned the necessity of the idea: why would I need a phone that can take pictures? But now we are so used to using phones as our primary photographic tools that we even handle a great part of media production on them. Again – using smartphones for photo and video editing is something that didn’t happen until the smartphone as a platform absorbed the camera as one of its units and hardware development gave the platform the capability to do so. And this trend might have – to a great extent – led to the popularity of social networks like Instagram and Snapchat.

A similar story applies to the Intelligent Personal Assistant. When Siri – the first mainstream Intelligent Personal Assistant – was released back in 2011, the criticisms it received ranged from requiring stiff user commands and lacking flexibility to missing information on certain nearby places and failing to understand certain English accents. People doubted the necessity of having such a battery-draining service on their phone. Now, after seven years of progress, not only do we see a boom in Intelligent Personal Assistants, we have gotten used to them as well – especially in certain scenarios, like when you are cooking and want to set a timer or pull up a recipe, or when you are driving and want to start the navigation app. An Intelligent Personal Assistant with NLP capability is – by far – probably the best solution to these once-awkward dilemmas.

In a market research conducted by Tractica, “unique active consumer VDA users will grow from 390 million in 2015 to 1.8 billion worldwide by the end of 2021. During the same period, unique active enterprise VDA users will rise from 155 million in 2015 to 843 million by 2021.  The market intelligence firm forecasts that total VDA revenue will grow from $1.6 billion in 2015 to $15.8 billion in 2021.” (Tractica, 2016)

(VDA refers to Virtual Digital Assistants)

Systems Thinking

After this brief discussion of the Intelligent Personal Assistant with a focus on NLP, it is a good time to touch upon an important principle for dealing with the Intelligent Personal Assistant. We spent most of this paper talking about NLP and barely touched a fraction of what NLP really is. Yet NLP is only a subsystem of the Intelligent Personal Assistant architecture, which itself is only one representation of a larger discipline – Artificial Intelligence.

So, when talking about the Intelligent Personal Assistant or NLP, we cannot regard them as isolated properties, ignoring the pervasive connections among systems and subsystems as well as their interdependence: “systems thinking is non-reductionist and non-totalizing in the methods used for developing explanations for causality and agency: nothing in a system can be reduced to single, independent entities or to other constituents in a system.” (Irvine, 2018)

This requires us to put both the Intelligent Personal Assistant and NLP into context. The Intelligent Personal Assistant is the result of the joint work of many subsystems such as NLP, and NLP itself is built on the foundation of its own subsystems. None of the units here could have achieved what we have now on its own.

After all, graphite and diamond both consist of carbon; they are just different structural arrangements of the same element, yet they end up with totally different characters. When we look only at a single point, we simply miss the whole picture.

Conclusion

The Intelligent Personal Assistant is a great representation of Artificial Intelligence in the sense that it creates a tangible platform for a human to interact with. Within this setting, NLP as a subsystem provides the Intelligent Personal Assistant with the tools to communicate naturally with its users.

In de-blackboxing NLP, we looked at both the software and hardware layers, following the step-by-step pattern of listening, understanding, and responding. Across the different layers and steps, all the components – including transducers, the cloud, and voice recognition software – work both independently and collectively to generate the “natural communication” that we experience in real life.

On the methodology side, we regarded the Intelligent Personal Assistant as a metamedium in analyzing the ability and potential it possesses to evolve and transform. We also touched upon the basic linguistic elements used in designing the processes of NLP. Finally, the complexity and systems-thinking approach was brought in to emphasize that the Intelligent Personal Assistant and NLP are each both a self-contained entity and a part of a larger architecture.


References

1: Kim, Jessica. “Alexa, Google Assistant, and the Rise of Natural Language Processing.” Lighthouse Blog, 23 Jan. 2018, blog.light.house/home/2018/1/23/natural-language-processing-alexa-google-nlp.

2: Whitenton, Kathryn. “The Most Important Design Principles Of Voice UX.” Co.Design, 28 Apr. 2017, www.fastcodesign.com/3056701/the-most-important-design-principles-of-voice-ux.

3: Irvine, Martin. “Key Concepts in Technology, Week 4: Information and Communication.” YouTube, 14 Sept. 2014, www.youtube.com/watch?v=-6JqGst9Bkk&feature=youtu.be.

4: Pinker, Steven. “Steven Pinker: Linguistics as a Window to Understanding the Brain.” YouTube, 6 Oct. 2012, www.youtube.com/watch?v=Q-B_ONJIEcE.

5: Chandrayan, Pramod. “A Guide To NLP: A Confluence Of AI And Linguistics.” Codeburst, 22 Oct. 2017, codeburst.io/a-guide-to-nlp-a-confluence-of-ai-and-linguistics-2786c56c0749.

6: Charlton, Alistair. “Alexa vs Siri vs Google Assistant: What Does the Future of AI Look like?” Gearbrain, 27 Nov. 2017, www.gearbrain.com/alex-siri-ai-virtual-assistant-2510997337.html.

7: Manovich, Lev. Software Takes Command. Bloomsbury, London and New York, 2013.

8: Tractica. “The Virtual Digital Assistant Market Will Reach $15.8 Billion Worldwide by 2021.” Tractica, 3 Aug. 2016, www.tractica.com/newsroom/press-releases/the-virtual-digital-assistant-market-will-reach-15-8-billion-worldwide-by-2021/.

9: Irvine, Martin. “Media, Mediation, and Sociotechnical Artefacts: Methods for De-Blackboxing.” 2018.

Museum or shelter

Is there a right way to engage with art and history? This is the question I have after walking through the readings of the week. Or, to be more specific – is the museum the “right way” to approach art and history?

I love museums and I have been to some of the major ones, from the general – the Met, the Louvre – to those with a focus – MoMA, the WWII museum – to some of the “specialties” – the spy museum, the mob museum. Different museums have different characteristics, different curation logics, and different ways to engage their audiences. As much as I enjoyed many of them, I never stop questioning: is this the right, if not the best, way to arrange all the artifacts and create such a space for us to access all the “points of interest”?

This is a complicated question. Museums are not for everyone in the first place. Most museums are tied to a geographic location; one needs to overcome barriers of time and space to access a certain museum, not to mention the things inside the museum – they carry another layer of barrier, the intellectual barrier. Everything has to work within a context – this is especially true of art and historical objects. This is why many museums invest heavily in providing audiences with more background information. This can be achieved through guides (audio devices or human guides), a restored virtual space (a Chinese temple, for instance), and the most common practice – a brief introduction (a brochure or small panels on the wall).

There are many other ways to enhance the sense of presence in a museum – the ultimate goal is to help audiences understand how the self-contained artifact in front of them connects to a larger context and is thereby transformed into something different and unique. So why do we deprive these objects of their original context and put them in a place like a museum? One could argue this is about economies of scale – it would be impossible for one to access art and history at such a scale without gathering things into a museum. But that is to say the idea of the museum itself grew out of a kind of “value shelter”.

So, when I saw the Google Art Project, my reaction was: this is a new layer in the entire hierarchy of accessing art and history. Everything can be a carrier of art and history; some carriers are richer than others for all kinds of reasons. We preserve them and study them so that we can gain a better understanding of the comprehensive context – the context of our world. The museum is a collective effort and representation – an effort to increase the density of the facts, and a representation of specific art and history. But it is never the best way – it is a way, and that is it. Google Art is a way. You could still argue that people will stop going to museums now that they have access to high-quality virtual experiences online, and this could be true. But chances are that people who were not interested in art or history will get their first lesson online and want to explore afterward. After all, some exposure is better than zero exposure.

Collective mutation, individual evolution

Before diving into the materials for this week, I did have a hard time figuring out why the computer would be regarded as a “metamedium”. I probably still couldn’t say I’m 100% accurate on the topic, but I do feel the power the computer has in absorbing all kinds of media and mediums, simulating and transforming them, and integrating them to form synergies, while giving them back new possibilities and potential. It’s like an incubator and a black hole at the same time, and that’s the truly fascinating part about the computer and its “meta” characteristic.

Here’s one of my favorite quotes of the week: “Another important type of software epistemology is data fusion— using data from different sources to create new knowledge that is not explicitly contained in any of them. For example, using the web sources, it is possible to create a comprehensive description of an individual by combining pieces of information from his/ her various social media profiles and making deductions from them. Combining separate media sources can also give additional meanings to each of the sources. Consider the technique of the automatic stitching of a number of separate photos into a single panorama, available in most digital cameras. Strictly speaking, the underlying algorithms do not add any new information to each of the images (i.e., their pixels are not modified). But since each image now is a part of the larger panorama, its meaning for a human observer changes. The abilities to generate new information from the old data, fuse separate information sources together, and create new knowledge from old analog sources are just some techniques of software epistemology.”

This is really a great indication of the vitality that the computer, and all the architectures built on top of it, can bring to the areas and disciplines involved.

From my perspective, the evolution of HCI and the “metamedium” ecology around the computer is also a history of the mutual education of computer and human. Before we got used to smartphones with built-in cameras, people questioned the necessity of the idea: why would I need a phone that can take pictures? But now we are so used to using phones as our primary photographic tools that we even handle a great part of media production on them. Again – using smartphones for photo and video editing is something that didn’t happen until the smartphone as a platform absorbed the camera as one of its units and hardware development gave the platform the capability to do so. And this trend might have – to a great extent – led to the popularity of social networks like Instagram and Snapchat.

This helical trajectory also reflects the power of the idea of the “metamedium”: it empowers, and it gets empowered.

Think about what you want, not what’s allowed

The two major takeaways from the readings and other materials this week are the common ground that computational thinking shares with mathematical thinking and engineering thinking, and, more importantly, the fact that even though they share many patterns and visions, each has developed its own characteristics on top of the elements taken from the others.


As for the intersection between computational thinking and mathematical thinking, it is easy to understand that, since mathematics serves as the foundation of computational thinking, there is a strong presence of the mathematical way of thinking within it. But one essential difference is that “when we execute our solutions on a machine or as human we are constrained by the physics of the machine – we can’t represent or reason about all the integers because we are only representing a finite number of them in our machine.”


As for the distinction between computational thinking and engineering thinking: when we are dealing with computational thinking, we are talking about building a program – you are building a system, engineering an artifact, so it is reasonable to borrow from the disciplines of engineering. But since we – the humans – are the ones who define the system, there is one thing in particular we can manipulate: the software. In software, you can do anything. You can build virtual worlds that define their own laws of nature or laws of physics, because it is a virtual world and you can invent your own rules. This means you are not constrained by the physical world.


The way I take this is to think beyond the tangible layer of the computer. The hardware is getting fancier every day – faster processors, bigger memory – but that’s not the core of computational thinking, not even close. We should think not in terms of what the hardware in hand allows us to do, but of what we need and what we want to do, and then try to make the hardware support that.


Be aware of the complexity

The topic this week is really a wake-up call for any linear and simplified mindset when dealing with systems – “Systems thinking is non-reductionist and non-totalizing in the methods used for developing explanations for causality and agency: nothing in a system can be reduced to single, independent entities or to other constituents in a system.”

I recently looked into the popular “McKinsey methodology”, of which the MECE principle is probably the most representative piece. As the name suggests, MECE stands for “Mutually Exclusive, Collectively Exhaustive”. When applying this framework to any problem, the “MECE principle suggests that all the possible causes or options be considered in solving these problems be grouped and categorized in a particular way. Specifically, all the information should be grouped into categories where there is no overlap between categories (mutually exclusive) and all the categories added together covers all possible options (collectively exhaustive).” (Cheng, 2011)

Now I have two main critiques of this principle:

1: The very foundation of this principle is the assumption that all the factors within the system interact through linear causal relationships.

2: It assumes the system is stable and transparent.

For the first point, it is more likely that the system functions not through the sole interaction between two single factors, but through a collective reaction in which many entities, subsystems, and agencies within the system work together. Each of them can be cause and effect at the same time. Any attempt to simplify the process ends up as a partial and incomplete map of what is going on in the system.

For the second point, the principle understates the difficulty of de-blackboxing the system. The system is a dynamic concept, which means that when we look at the perceptible “representations and interface cues and conventions and the results of processes returned”, we are looking at the outcome of a dynamic process. That is to say, there are uncertainties in this process, so it is hard to assert that we have exhausted all the possibilities.

A good example: when a part of India had a serpent problem (too many serpents), the government announced that anyone who killed a serpent would be rewarded. But after a while, people started to raise serpents themselves in order to collect the reward for the dead bodies. The government soon found out about this and abolished the policy, which led people to release the serpents they had been raising. The result? More serpents than when the policy was initiated.

The government in this scenario used simple causal reasoning: the number of serpents would decrease as long as they were being killed. But something else happened along the way and led to a different result.

Graphite and diamond both consist of carbon; they are just different structural arrangements of the same element, yet they end up with totally different characters. When we look only at a single point, we simply miss the whole picture.

The next president? Facebook

You may think that what you see every day is random. Whether you like it or not, you probably never really look into the reason you are receiving certain information – but you probably should, at least starting now.

According to the NYT, “Federal regulators and state prosecutors are opening investigations into Facebook. Politicians in the United States and Europe are calling for its chief executive, Mark Zuckerberg, to testify before them. Investors have cut the value of the social networking giant by about $50 billion in the past two days.” It all comes down to the same question: whether this dominant player in social networking mishandled users’ data.

We live in the “information age”, and social networks seem to be a primary source of information for many, as some features of platforms like Facebook do seem to facilitate the flow of information. We post and re-post, tweet and re-tweet. We like the articles we support and share the emotion with a like-minded crowd.

The problem is that most users are not aware of the algorithms that run behind the scenes. The result can be a self-fulfilling prophecy: you see what you want, and only what you want.

How accurate could reverse-engineering be, given the data and traces you leave on social networks? The answer: accurate enough to influence your decisions and help determine the result of a presidential election.

As for YouTube, the idea is primarily to generate more possibilities by integrating more functions into a single place.

It is a medium, as information flows over YouTube. All the files it contains, regardless of format, make it a medium.

It is a platform, so that people can gather and share things; this further opens the possibility of remediation. The more content there is, the bigger the chance that users will generate remixed outcomes.

Context, you would need, and respect it, you will

“Meaning is not a property that lies in the data or bits; it’s an event that happens at the same time as we pick up and decode the encoded data or bits.”

It’s hard for people to really look at “communication”, as so many things are just taken for granted. “Too much information – we call our era the information age and complain about information overload. As social beings, there are few moments in a day that don’t involve communication and interaction with others in language and other symbolic media” (Irvine, 2014). There are different kinds of mediums, starting from the most basic one we began with – air. Sound waves and vibrations are transmitted via air and picked up by our ears.

Our eyes are another main receiver of information. Before language even existed, we communicated with body language and probably gestures. Later, written text became an important carrier of information. We learned how to recognize words and extract the information encoded in texts.

Now we have all kinds of digital mediums – videos, movies, music – and they can be converted into various forms. Though we are still dealing with them mainly through our ears and eyes, the richness of these media and mediums provides more stimulation to our sensory organs than ever before. We tend to combine different layers of sense together to gain a better perception.

One thing that did touch me is the concept of “noise”. As an acoustic concept, noise clearly means the discordant sound that interferes with the musical sound. But it can be widened to all kinds of factors that prevent us from getting the intended message. We have always assumed that with better mediums and better technology we have a better capability to deliver information. The answer is yes and no: while successful delivery in an “advanced medium” can achieve a better result, we have to keep in mind that it also requires more external factors to support the technology.

The possibilities in fusion

On Chinese New Year’s Eve (February 15th), I went to a special cornerstone event at the Kennedy Center for the Performing Arts, which featured world-renowned artist and UNESCO Global Goodwill Ambassador Tan Dun. The highlights of the night were Guan Xia’s Hundred Birds Flying Toward the Phoenix, with traditional suona soloist Wenwen Liu, and Tan Dun’s own Triple Concerto for Piano, Violin and Violoncello: Crouching Tiger, Hidden Dragon.

Tan’s works often incorporate audiovisual elements; use instruments constructed from organic materials, such as paper, water, and stone; and are often inspired by traditional Chinese theatrical and ritual performance.

On that particular night, I was exposed to the combination of the suona and the symphony orchestra. On top of that, water was also used as a source of sound in Crouching Tiger, Hidden Dragon.

My questions coming out of this would be: how does a music genre influence and shape artists and their works, and how do the artists feed back into that genre?

What are the possibilities and inspirations in adding unfamiliar, “alien” sounds to a piece of music?

How would a certain adaptation change the language of a musical work?

Yang’s note

“[A] sign is something by knowing which we know something more… [A]ll our thought and knowledge is by signs” (Peirce, CP 8.332)

Human history is a history of constant effort to communicate more efficiently; at the same time, we de-blackbox the communication procedure and everything that constitutes it in order to better understand ourselves.

But as Irvine said, “we have to unlearn some things, (re)define words in more precise ways, and learn the vocabulary of the discipline to apply the concepts and learn new things.” That is to say, “In talking about meaning structures as signs, or, rather, sign functions, sign processes, symbolic functions or symbolic activity, we’re not talking about, or modeling problems on, things like street signs, logos, advertising, or everyday things that we often call “signs.” Likewise, by “symbol” we don’t mean the common usage of the term for religious or other cultural signs with some kind of “inner” or “hidden” meaning, or “special characters” like the symbols used in mathematics”

Beyond this, the part that I’m interested in is the “recursive” property of sign systems: “we can always go “meta” and describe what goes on in any system of signs with sets of signs from the same system”.

In my understanding, the procedure of human communication, regardless of its form (face to face, text, email, phone call, video, etc.), rests on the mechanism of encoding and decoding messages transmitted via all kinds of mediums. That is why metalanguage can be so important.

According to media richness theory, face-to-face conversation would be the most effective way to send and receive information, but even under those circumstances, information loss and misunderstanding seem inevitable – not to mention when we use other mediums. That is why we keep striving for better affordances in our communication tools. New symbols and meaning systems keep coming up on different media platforms: we have emojis for social networks, and camera languages like the close-up. Then we develop a parallel system for perceiving them. This is the mechanism that maintains the proper flow of information in this society.

You see now, you see the future

Some people will probably never realize what a powerful tool they are carrying around: their language. This is my feeling coming out of the materials for this week. As Jackendoff notes at the beginning of his book, people tend to have a relatively narrow and shallow understanding of linguistics as a discipline, so they “don’t recognize that there is more to language than this, so they are unpleasantly disappointed when the linguist doesn’t share their fascination.”

Both the readings and the video inspired me to look at language from perspectives I would never have bothered with before: from the distinctions among language, written language, grammar, and thought, to how children acquire language in the first place. The true complexity of language doesn’t lie in its functional aspects (they are important, for sure), but in the subtler mechanisms and components inside this black box.

Now I can’t help but think about my favorite science fiction film from 2016 – Arrival. I won’t talk too much about it from a filmmaking point of view, but rather about some of its ideas and its setting.

To begin with, language is where everything starts. In order to communicate with the aliens and avoid an unnecessary war, a linguist is sent to make contact with them. I traveled to Mexico this past winter break; in the southeastern part of the country lies the Yucatán Peninsula, but the name itself comes from a misunderstanding: when the Spanish first reached the place, they asked the local Maya what it was called. The answer was “Yucatán”, which means “I don’t understand”.

Another highlight of the movie is its exploration of the Sapir–Whorf hypothesis. The linguist in the movie gains a different perception of time after she acquires the alien language. This is definitely a dramatized portrait, but “Whorf argued that because the Hopi [the Native American group he was studying] have verbs for certain concepts that English speakers use nouns for, such as thunder, lightning, storm, noise, that the speakers view those things as events in a way that we don’t. We view lightning, thunder, and storms as things. He argued that we objectify time, that because we talk about hours and minutes and days as things that you can count or save or spend.” So we can see some traces of linguistic ideas behind the movie.

References:

1: Ray Jackendoff, Foundations of Language: Brain, Meaning, Grammar, Evolution. New York, NY: Oxford University Press, USA, 2003.

2: Steven Pinker, “Linguistics as a Window to Understanding the Brain” (video).