Category Archives: Week 4

How do babies learn language?

As Irvine mentions in his presentation, language isn't just a box of words to string together; it has built-in rules for combining words into grammatical phrases, sentences, and complex sequences. Language is a system made of subsystems, and each of these subsystems, or layers, plays an important role in the whole architecture. Let's take a look at these different subsystems.

Phonology: The system of speech sounds in a language, and the human ability to make spoken sounds in a language.

Morphology: Minimal meaning units, the first level of mapping between spoken sounds and meaning units.

Lexicon: The "dictionary" or vocabulary of a language: words understood as minimal meaning units marked with syntactic (grammatical) and lexical functions (word-class function).

Syntax: Syntax (or, more loosely, grammar) describes the rules and constraints for combining words into phrases and sentences (combinatorial rules) that speakers of any natural language use to generate new sentences and to understand those expressed by others.

Semantics: How speakers and hearers understand the words in a sentence unit.

Pragmatics: The larger meaning situation: codes, knowledge, speech acts, and kinds of discourse surrounding and presupposed by any individual expression.

The process of learning language is natural, and we are all born with the ability to learn it. Take babies as an example: research suggests that children develop their language skills in three stages.

The first stage is learning sounds. In the first couple of months after birth, babies can make and hear sounds from many different languages. They then learn which phonemes belong to the language they are acquiring and which don't. The ability to recognize and produce those sounds is called "phonemic awareness," which is important for children learning to read.

The second stage is learning words. At this stage, children essentially learn how the sounds in a language go together to make meaning. As Dr. Bainbridge explains, this is a significant step because everything we say is really just a stream of sounds. To make sense of those sounds, a child must be able to recognize where one word ends and another one begins. These are called “word boundaries.” However, children are not learning words, exactly. They are actually learning morphemes, which may or may not be words.

Stage three is learning sentences. During this stage, children learn how to create sentences. That means they can put words in the correct order.

Of course, the rate at which language develops is affected by many factors, but what this shows is that from birth our brains are capable of learning a language; then, by studying it and learning its grammar, we can produce more complex sentences.

It is even more interesting to think about the process of learning more than one language, and research shows that the younger we are, the easier it is to learn additional languages. From my personal experience, I started learning English in the third grade. To really understand a language and become fluent in it, there are four skills to work on: listening, speaking, reading, and writing.

Practicing and repeating these different skills helped me to learn the language faster.

To conclude, language is a complex system made of all these different subsystems, and understanding each part helps us use it correctly and meaningfully.

Reference:

Bainbridge, Carol. "How Do Children Learn Language?" Verywell Family, 4 December 2017. https://www.verywellfamily.com/how-do-children-learn-language-1449116

Irvine, Martin. "Introduction to Linguistics and Symbolic Systems: Key Concepts."

Jackendoff, Ray. Foundations of Language: Brain, Meaning, Grammar, Evolution. New York, NY: Oxford University Press, 2003.

Pinker, Steven. "How Language Works." Excerpt from: Pinker, The Language Instinct: How the Mind Creates Language. New York, NY: William Morrow & Company, 1994: 83-123.

Word-Chain Devices, Foreign Language Learning and Artistic Appreciation

After learning about word-chain devices (Markov models), I realized that I have actually been teaching Chinese under their influence without being conscious of it. A problem most of my students face is that they have accumulated very few adjectives and adverbs in Chinese, which becomes an obstacle when they write compositions or try to express their feelings. Since they have already gained a sense of the basic syntax rules of Chinese, they know how to combine words selected from each column into a grammatical Chinese sentence. The problem is that one of the columns necessary for the combinatorial system is almost empty. To help them improve their ability to express opinions as soon as possible, what I have been doing recently is offering them three pairs of antonyms expressing positive and negative attitudes per week. In this way, every time they want to express an attitude, they can search the "positive expression column" or "negative expression column" in their mind, select the proper word, and continue the combination.
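The column-selection routine described above can be sketched as a tiny word-chain generator. This is only an illustrative sketch in Python; the English word lists are invented placeholders standing in for the actual Chinese vocabulary columns:

```python
import random

# One list per "column" of the word-chain device. The two attitude
# columns mirror the positive/negative antonym pairs described above.
subjects = ["The movie", "This book", "Her idea"]
linking = ["is", "seems", "feels"]
positive = ["fascinating", "delightful", "inspiring"]   # "positive expression column"
negative = ["tedious", "disappointing", "confusing"]    # "negative expression column"

def express(attitude: str) -> str:
    """Assemble a grammatical sentence by picking one word from each column."""
    column = positive if attitude == "positive" else negative
    return f"{random.choice(subjects)} {random.choice(linking)} {random.choice(column)}."

print(express("positive"))
print(express("negative"))
```

Each call walks left to right through the columns, exactly the way a word-chain device strings a sentence together one slot at a time.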

As a native Chinese speaker, drawing on my personal experience of learning English and Korean, it seems that the less skilled I am in a language, the more obviously and typically the word-chain device works in my mind. When carrying on a conversation in Korean, I usually retrieve a "grammar formula" in my mind, select the proper words from different lists in my mental lexicon, and then assemble them following the formula, that is, the grammatical rules. When speaking or writing in Chinese, I do search my mental lexicon for the best vocabulary for my expression, but the word-chain device, or whatever more complex assembling model is at work, operates unconsciously.

By following syntactic rules, people can not only create sentences for daily communication but also create grammatically correct yet meaningless sentences; as Pinker puts it, "sentences can make no sense but can still be recognized as grammatical" (Pinker, 1994). Language as a medium for conveying information reaches different audiences with different contents. How Mark Twain parodied romantic descriptions of nature reminds me of art appreciation in human society. If I submitted John Ashbery's poem "These Lacustrine Cities" as my own work in the writing test of a language course, I could hardly pass the exam. However, when the poem is taught in a literature class as one of the most famous works of the outstanding postmodernist poet Ashbery, with the help of the professor, everyone in the classroom seems to get what the writer is talking about.

The separation between semantics and pragmatics helps me analyze this phenomenon in artistic appreciation and criticism. Marcel Duchamp held the opinion that art will do what art will do: no matter what object is put on exhibition, the people standing around it will interpret it in an artistic way. To analyze this by analogy with semantics and pragmatics, take Duchamp's Bicycle Wheel as an example. When such a wheel is seen on the road or in Wikipedia, most of us would recognize it simply as part of a bicycle, in a "context-free" way, something like the "semantic" meaning of the object as an ordinary industrial product. However, when it is put in an art gallery with a name card of the famous French artist Marcel Duchamp next to it, the audience moves into the context of modern art, and the wheel is now regarded as a great absurdist work, an icon expressing sarcasm about the over-separation of high art and mass art. The information carried by the object is now decoded in a typically "pragmatic" way.
Should we put ourselves into a professional artistic context in order to appreciate and criticize artworks, or should we just enjoy them in our personal contexts? From my perspective, once we are told that an object is an "artistic work," we get involved in the artistic context unconsciously: the wheel is no longer a wheel, and the urinal is now a fountain.

References

Steven Pinker, “How Language Works.” Excerpt from: Pinker, The Language Instinct: How the Mind Creates Language. New York, NY: William Morrow & Company, 1994: 83-123.

Martin Irvine, “Introduction to Linguistics and Symbolic Systems: Key Concepts”

You see now, you see the future

Some people may never realize the powerful tool they carry around: their language. That was my feeling after going through the materials for this week. As Jackendoff notes at the beginning of his book, people tend to have a relatively narrow and shallow understanding of linguistics as a discipline, so that they "don't recognize that there is more to language than this, so they are unpleasantly disappointed when the linguist doesn't share their fascination."

Both the readings and the video inspired me to look at language from perspectives I would never have bothered with before, from the distinctions among language, written language, grammar, and thought, to how children acquire language in the first place. The true complexity of language doesn't lie in its functional aspects (important as they are), but in the subtler mechanisms and components inside this black box.

Now I can't help thinking about my favorite science fiction film from 2016, "Arrival." I won't say much about it from a cinematic point of view, but rather about some of its ideas and setting.

To begin with, language is where everything starts. In the movie, in order to communicate with the aliens and avoid an unnecessary war, a linguist is sent to make contact with them. I traveled to Mexico this past winter break; in the southeastern part of the country is the Yucatán Peninsula, but its name is said to come from a misunderstanding: when the Spanish first reached the place, they asked the local Maya what it was called, and the answer was "Yucatán," which means "I don't understand."

Another highlight of the movie is its exploration of the Sapir-Whorf hypothesis. The linguist in the movie gains a different perception of time after she acquires the alien language. This is certainly a dramatized portrait, but "Whorf argued that because the Hopi [the Native American group he was studying] have verbs for certain concepts that English speakers use nouns for, such as thunder, lightning, storm, and noise, the speakers view those things as events in a way that we don't. We view lightning, thunder, and storms as things. He argued that we objectify time, because we talk about hours and minutes and days as things that you can count or save or spend." So we can see traces of linguistic ideas behind the movie.

References:

1: Ray Jackendoff, Foundations of Language: Brain, Meaning, Grammar, Evolution. New York, NY: Oxford University Press, 2003.

2: Steven Pinker, "Linguistics as a Window to Understanding the Brain."

Discrete Infinity of Language

The discrete infinity of language refers to unlimited productivity from finite means, a major design feature of language (Irvine, 2014). Discreteness means that the boundaries between linguistic symbols are clear. Since linguistic symbols are discrete, a chain of them can be segmented part by part until the smallest units are identified. For example, "You are hungry" can be divided into the subject "you," the verb "are," and the adjective "hungry."

The order of these three linguistic symbols can be changed, and the meaning of the new sentence will be totally different. If I swap the subject and the verb, the new sentence becomes "Are you hungry?" In this case, I am asking a question instead of stating a fact. This is one significant difference between human beings and animals. The signals animals use to communicate, such as the dance of bees or the calls of chimpanzees, are continuous and cannot be segmented. The linguistic symbols of human beings, however, are discrete, and they can be reused again and again in combination with other symbols.

Consequently, humans can express infinite thoughts with limited linguistic symbols. Language is the essential means of human communication and meaning-making, and linguistic meaning emerges from a whole communication environment. Human beings can create and understand new contexts. Because of the discrete infinity of language, it is necessary to take context into account to understand linguistic meaning correctly. The same sentence in different environments or contexts can have significantly different meanings. For example, when "She is hot" is said in summer, it may mean the girl feels very hot in the weather; when "She is hot" is said at the beach, it may be a compliment on her appearance. Consequently, not only can different combinations of linguistic symbols lead to infinite meanings, but different environments shape those meanings as well.

Although specific meanings are unlimited, they are bounded by human-scale limits such as personal knowledge, time, and memory (Irvine, 2014). This brings a new requirement for language, namely creativity. In the video presentation Language and the Human Brain, Pinker describes language as words, rules, and interfaces. Among these, the rules include syntax, morphology, and phonology. One important quality is that the rules support creativity, the ability to produce and understand new language. Rules allow for open-ended creativity, including the expression of unfamiliar meanings and the production of vast numbers of combinations. That is consistent with the discrete infinity of language: language has the potential to develop infinitely.
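The open-ended creativity of finite rules can be made concrete in a few lines of code: one recursive embedding rule ("X thinks that S") turns a single core sentence into sentences of unbounded length. This is only a toy sketch; the speaker phrases are invented:

```python
# A finite rule set that generates unboundedly many sentences:
# "infinite use of finite means" via recursive embedding.
speakers = ["Mary thinks that", "John says that", "everyone knows that"]

def embed(depth: int) -> str:
    """Wrap the core clause in `depth` layers of clausal embedding."""
    s = "you are hungry"
    for i in range(depth):
        s = f"{speakers[i % len(speakers)]} {s}"
    return s

for d in range(3):
    print(embed(d))
# you are hungry
# Mary thinks that you are hungry
# John says that Mary thinks that you are hungry
```

Any bound on sentence length can be exceeded by embedding one level deeper, which is exactly the sense in which a discrete combinatorial system is infinite.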

An extension of the discrete infinity of language is that ambiguous sentences can carry different meanings within the same phrase structure.

Journalists say that when a dog bites a man that is not news, but when a man bites a dog that is news (Pinker, 1994).

That is an interesting case of the discrete infinity of language. In fact, ambiguous sentences are not rare, in either spoken or written language, and this reflects one of the marvels of language: even a change in how words are combined can lead to totally different meanings.

Bibliography:
1. Irvine, M. (2014). Introduction to Linguistics and Symbolic Systems: Key Concepts.
2. Pinker, S. (2007). The language instinct: How the mind creates language. New York: Harper Perennial Modern Classics.

Reflections on How Language Works

This week, the assigned readings together introduced some basic concepts of linguistics that aim to demystify how language works. Chomsky's sentence "Colorless green ideas sleep furiously," an example of language's capability of producing syntactically sound but semantically incoherent strings, is mentioned several times by different authors. Pinker uses it to examine the old interpretation of language as a word-chain device, a mechanism with the potential to create infinitely many sequences through combinatorial rules.

From this reflection, I recalled a computer program someone created as a prank, which randomly picks words from a given word list to automatically compose prose resembling postmodern or contemporary poetry. How this program operates epitomizes the word-chain device's way of using language. Nevertheless, the way people laugh at its output indicates that people are able to derive some meaning from seeming nonsense, and this strikes me as somewhat paradoxical given the premises of linguistics. Even speaking for myself, when I first saw Chomsky's sentence, I actually attempted to grasp a meaning from it, although according to Pinker the sentence is utterly uninterpretable. I initially thought the statement was an artistic articulation using rhetorical devices to express an implicit meaning.
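A minimal version of such a prank program can be sketched in a few lines of Python: random words dropped into a fixed syntactic frame (the shape of Chomsky's sentence: Adj Adj Noun Verb Adverb) come out grammatical but semantically incoherent. The word lists here are invented for illustration:

```python
import random

# Slots of a fixed syntactic frame, modeled on
# "Colorless green ideas sleep furiously".
adjectives = ["colorless", "silent", "square", "liquid"]
nouns = ["ideas", "theorems", "mornings"]
verbs = ["sleep", "argue", "dissolve"]
adverbs = ["furiously", "politely", "upward"]

def nonsense() -> str:
    """Fill each slot at random: grammatical form, incoherent meaning."""
    a1, a2 = random.sample(adjectives, 2)
    return (f"{a1.capitalize()} {a2} {random.choice(nouns)} "
            f"{random.choice(verbs)} {random.choice(adverbs)}.")

for _ in range(3):
    print(nonsense())
```

Every output is syntactically well formed, yet, as with the poetry generator, readers will still try to squeeze a meaning out of it.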

Therefore, I speculate that any sentence structure, if grammatically correct, is able to generate meaning for an audience. This is probably related to the unique ability of the human species to associate lexical items with subjects, and the full acquisition of language enables anyone to interpret intangible subjects.

And yet, as a non-native speaker, my opinion cannot be generalized to everyone whose mother tongue is English. I wonder whether the derivation of meaning is related to social evolution, so that how people master a language is constantly changing. Since our faculty of language always links to subjects, the discovery and invention of new artifacts will undoubtedly enrich our cognition of the outer world. Our use of language will probably also be affected by our surroundings, subject to social and cultural changes.

In addition, mass media also potentially affect the usage of language. The proliferation of creativity in mass media has stimulated people's ability to encode an infinite mass of messages from finite units, which in turn sharpens the audience's ability to decode more easily and inventively once exposed to a cornucopia of information.

References:

Andrew Radford, et al. Linguistics: An Introduction. 2nd ed. Cambridge, UK: Cambridge University Press, 2009. Excerpts.

Steven Pinker, “How Language Works.” Excerpt from: Pinker, The Language Instinct: How the Mind Creates Language. New York, NY: William Morrow & Company, 1994: 83-123.

Language and Thoughts – Shuqi

What interests me most is the claim in the lecture that "language is not thought itself, but a way of expressing thought." Pinker argued that babies and other mammals can communicate without speech, and that kinds of thinking, such as music and images, go on without language. These explanations make sense to me, but they start me thinking about whether language affects people's ways of thinking. If so, how does this process happen? Are the bonds between language and thought strong or not?

When I looked up materials, it was unsurprising to find that linguists have argued over this issue for a long time. One of the most prevalent hypotheses is the Sapir-Whorf hypothesis, which covers two kinds of relationship between language and thought. The first is linguistic determinism, which holds that language plays a decisive role in people's thoughts. This view, however, has been largely discredited: people with different mother tongues obviously share a large number of common perceptions, emotions, and principles, and bilinguals, especially those who learned both languages from birth, do not feel divided simply because of the two languages they command. The second part of the hypothesis is linguistic relativity, which holds that the structures of languages, such as grammar, semantics, and phonology, can influence people's ways of thinking or behaving profoundly and unconsciously. This hypothesis is still controversial in academia (at least according to the limited materials I found). In a classic experiment, speakers of languages that emphasize "east, south, north, and west" were inclined to use those words to describe directions, while speakers more familiar with "left and right" used those two words more frequently. Hence, this experiment might support the hypothesis to some extent.

However, it is still tricky to determine whether ways of thinking affect the evolution of language, or the reverse; it is the question of which came first, the chicken or the egg. Language is the product of human beings' desire to express, communicate, and share, and it is built upon specific cultural contexts. When there was a need, a corresponding word emerged to satisfy it. Language is a meaning-making system and a mediating agency that helps people express themselves. The routine way we say or write something is that we come up with our thoughts, organize them logically, choose appropriate words and grammar, and then express them. Therefore, if a thought doesn't exist, the words or expressions for it will not develop either.

Based on my experiences, I am more in favor of the view that languages are just means of expressing thoughts. I am trilingual in Mandarin, Cantonese, and English, and I am learning French and know a little Japanese. On most occasions, when I learn a foreign language, the first step is translation: only by translating unfamiliar words into familiar ones can I understand their meanings. Nevertheless, as my studies deepen, it becomes easy to see that translation does not always work. For instance, a translated joke can fall completely flat, and a translated poem can lose its rhyme and aesthetic pleasure. Inevitably, some latent meanings are eclipsed in the process of translation.

From my perspective, the majority of the gaps in translation between languages result from social and cultural context. People growing up in a specific social environment are cultivated, or socialized, into a specific culture, and thus develop specific thoughts about things such as religion, nature, art, and sports. In this sense, it is because of differences in social environment that people have different thoughts, and based on those thoughts, people create their own language systems.

To sum up, I doubt that language shapes people's ways of thinking, and I would need more experiments to underpin this point of view. My opinion might be conservative and traditional: thoughts come from daily life and culture rather than from languages, and they drive the birth and development of languages.


Reference:

Pinker, Steven (2012). Language and the Human Brain. https://www.youtube.com/watch?v=Q-B_ONJIEcE

Cardiff (2013). Twelve Weeks to Learn Linguistics. https://www.douban.com/note/322570822/

Whorf, B. L. (1956). "The Relation of Habitual Thought and Behavior to Language." In Carroll, J. B. (Ed.), Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf.

What is language and a further thinking of human being and AI–Wency


Starting from the question of what language is, Professor Pinker provides a clear explanation in his video, where he mentions three components: words, rules, and interfaces (Pinker, 2012). Using the example of "duck," Pinker points out that words embody the arbitrariness of the sign. If we go back to Chandler's work in week 2, where he mentioned the first mode of the relationship between signifier and signified (i.e., the symbolic), we can see that Pinker's explanation matches this concept well (Chandler, 2007, p. 36). For me, this seems to be the starting point from which language creates a distinction between human beings and other species: by going beyond resemblance or direct connection, human beings are able to develop an arbitrary, indirect connection between their minds and the objects they perceive.

The second component mentioned by Pinker is rules, including phonology, morphology, and syntax. An interesting point, going beyond sound (phonology) and the generation of complex words (morphology), is the human ability to make "infinite use of finite media" (Pinker, 1994, p. 87). In other words, in a discrete combinatorial system like language, an unlimited number of completely distinct combinations can be built from a finite number of words and rules (p. 84). This human capacity, which Pinker connects in the video to Chomsky's account of how children form questions from declarative sentences, is more likely achieved by an invisible superstructure described by phrase-structure grammar, which stands like a tree (earlier work tended to use the word-chain machine to explain these infinite properties, but that model turns out to be unable to handle complex cases) (Pinker, 1994, p. 101). This tree structure also appears later when Pinker discusses how human beings understand and interpret the words they hear. But how is this tree formed in a four-year-old child's mind? This brings us to developmental linguistics (Radford, 2009, p. 1). One major position (though many debates remain) is Chomsky's innateness hypothesis, according to which the language faculty is innate, biologically endowed within the human brain (p. 7). This can also be described as the FOL (faculty of language), the natural human cognitive capability that enables anyone to learn a natural language (Irvine, 2018, p. 7). Pinker, in his video, also mentions the interesting hypothesis that children might have an endowed universal grammar covering all the categories of language in their minds (Pinker, 2012). Though these hypotheses are not all confirmed today, they can still help us distinguish human language-acquisition capacities from those of other species. Besides, the concept of universal grammar, if established, could be used to decide whether any symbol system counts as a language, that is, whether it satisfies the structural features common to all languages (Irvine, 2018, p. 7).
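The contrast between a word-chain machine and a phrase-structure grammar can be made concrete in code. Below is a miniature grammar, invented for illustration and not taken from Pinker, expanded top-down into the labeled bracketing that linguists use to draw trees; the single-entry lexicon means the object noun phrase repeats "the dog," which is harmless for showing the structure:

```python
# A miniature phrase-structure grammar: each symbol rewrites to a
# sequence of sub-symbols, producing a tree rather than a flat chain.
grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"]],
}
lexicon = {"Det": "the", "N": "dog", "V": "likes"}

def expand(symbol: str) -> str:
    """Recursively expand a symbol into a bracketed (labeled) tree."""
    if symbol in lexicon:                 # terminal: emit the word
        return lexicon[symbol]
    children = grammar[symbol][0]         # apply the (only) rule
    return "[" + symbol + " " + " ".join(expand(c) for c in children) + "]"

print(expand("S"))
# [S [NP the dog] [VP likes [NP the dog]]]
```

Unlike a word chain, which only knows which word may follow which, the recursion builds nested phrases, the invisible superstructure the paragraph above describes.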

Now, looking back at Professor Irvine's introduction, we have covered the first four layers (phonology, morphology, lexicon, and syntax) that enable language to work. A common mistake here is to overestimate the interdependence between syntax and semantics; after all, meaning has its own characteristic combinatorial structure, one that is not simply "read off" syntax (Jackendoff, 2003, p. 427). This brings us to the third component of language according to Pinker, language as an interface, which enables us to understand what people are saying and to convey information during conversations. I find this part very exciting because it opens a further discussion of why it is hard for computers to understand language and what this means for the future of AI. In the example "The dog likes ice cream," we can describe the process roughly as follows:
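One way to sketch the parsing of this example is a toy recursive-descent parser in Python. This is a simplified illustration, not a model of the actual human parser: the lexicon dictionary plays the role of the stored word-category "database," and the recursion stack plays the role of short-term memory holding incomplete phrases:

```python
# Toy lexicon: the "database" of word categories consulted during parsing.
LEXICON = {"the": "Det", "dog": "N", "ice": "N", "cream": "N", "likes": "V"}

def parse(words):
    """Parse a word list into a nested (phrase, ...) tree, S -> NP VP."""
    pos = 0
    def peek():
        return LEXICON.get(words[pos]) if pos < len(words) else None
    def eat(cat):
        nonlocal pos
        assert peek() == cat, f"expected {cat} at {words[pos:]}"
        word = words[pos]; pos += 1
        return word
    def NP():
        if peek() == "Det":                    # e.g. "the dog"
            return ("NP", eat("Det"), eat("N"))
        parts = [eat("N")]                     # bare noun(s), e.g. "ice cream"
        while peek() == "N":
            parts.append(eat("N"))
        return ("NP", *parts)
    def VP():
        return ("VP", eat("V"), NP())
    tree = ("S", NP(), VP())
    assert pos == len(words), "unparsed words remain"
    return tree

print(parse("the dog likes ice cream".split()))
# ('S', ('NP', 'the', 'dog'), ('VP', 'likes', ('NP', 'ice', 'cream')))
```

Each open phrase (an NP or VP still waiting for its parts) must be held in memory until it is completed, which is exactly where the computational burdens discussed below come in.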

It seems that the first four layers can be understood together as a database that the human mind, acting as a parser, consults. When a sentence comes in, we hold its elements in (mostly short-term) memory while consulting the database to determine each word's category, where the word belongs in the surface tree, and what else is needed to complete the structure. Here Pinker points out two interesting computational burdens: memory and decision making (Pinker, 1994, p. 201). It might not be a bad idea to illustrate how human beings and computers fare differently under these two burdens.

  1. Human beings

  2. Computers


As for human beings, innate biological limitations make it hard for us to hold many items in short-term memory (Pinker, 1994, p. 205); moreover, in the later example of the short "onion" sentence, the human problem is not only the amount of memory but the ability to keep a particular kind of phrase in memory. In this respect, computers seem to have a large advantage: inside the hardware architecture, each module is governed by a sophisticated control unit, and the machine's operation is far more reliable than the unstable, biology-based human brain. The computer also has a hard disk to store long-term data, while for human beings, reaching the "hard disk" of long-term memory in our brains is much more time-consuming and difficult than it is for computers.

Fortunately, human beings seem to perform much better than computers when it comes to decision making. Pinker argues that computers find it hard to make a decision under ambiguous conditions (Pinker, 1994, p. 109). Here I want to go back to the distinction between automatic and autonomous, based on my admittedly limited understanding. While human beings are able to selectively take in the relevant elements of their context and feed them into their parser, this seems much harder for computers. From my perspective, the reason can be divided into two aspects: consciousness and unconsciousness. Many people argue that the computer in the earliest Turing tests was merely playing an imitation game: the machine took the content provided by the previous speaker as input and, using an established algorithm, made subtle modifications to the existing elements of the sentences so that the imitation resembled a real conversation. Now scientists are investing more and more in machine learning and deep learning to make computers increasingly aware of the larger context, so that they can respond in an ever more human-like way. Unconsciousness, on the other hand, seems to be a huge bottleneck in the development of AI. If we look back at the very beginning of Pinker's video, that may be the mystery of language, and it plays out in the last three layers, which according to Professor Irvine are more abstract than the first four: semantics, pragmatics, and discourse and text linguistics (Irvine, 2018, p. 6).


Reference:

  1. Chandler, D. (2007). Semiotics: The Basics. New York, NY: Routledge.
  2. Irvine, M. (2018). Introduction to Linguistics and Symbolic Systems: Key Concepts.
  3. Jackendoff, R. (2003). Foundations of Language: Brain, Meaning, Grammar, Evolution. New York: Oxford University Press.
  4. Pinker, S. (1994). How Language Works. In The Language Instinct: How the Mind Creates Language. New York: William Morrow & Company.
  5. Pinker, S. (2012, October 6). Linguistics as a Window to Understanding the Brain. Retrieved from https://www.youtube.com/watch?v=Q-B_ONJIEcE
  6. Radford, A. (2009). Linguistics: An Introduction. Cambridge, UK: Cambridge University Press.