What is language? And further thoughts on human beings and AI – Wency

Starting from the question of what language is, Professor Pinker provides a clear explanation in his video, where he identifies three components: words, rules, and interfaces (Pinker, 2012). Using the example of “duck,” Pinker points out that words embody the arbitrariness of the sign. If we go back to Chandler’s work from week 2, where he describes the symbolic mode of the relationship between signifier and signified, we can see that Pinker’s explanation matches this concept well (Chandler, 2007, p. 36). To me, this seems to be the starting point from which language creates a distinction between human beings and other species: by going beyond resemblance or direct connection, human beings are able to develop an arbitrary, indirect connection between their minds and the objects they perceive.

The second component mentioned by Pinker is rules, including phonology, morphology, and syntax. An interesting point, going beyond sound (i.e., phonology) and the generation of complex words (i.e., morphology), is the human ability to make “the infinite use of finite media” (Pinker, 1994, p. 87). In other words, in a discrete combinatorial system like language, an unlimited number of completely distinct combinations can be built from a finite number of words and rules (p. 84). While Pinker in the video cites Chomsky’s observation of how children form questions from declarative sentences, this capacity is more likely achieved by an invisible superstructure that can be described by phrase structure grammar, which stands like a tree (earlier studies tended to use the word-chain machine to explain the infinite property, but it turns out to be unable to handle complex constructions) (Pinker, 1994, p. 101). This tree structure also appears later, when Pinker explains how human beings understand and interpret the words they hear. But how is this tree formed in a four-year-old child’s mind? This brings us to the discussions of developmental linguistics (Radford, 2009, p. 1). One major argument (though many competing arguments remain) is Chomsky’s innateness hypothesis, which holds that the language faculty is innate, biologically endowed within the human brain (p. 7). This can also be described as the FOL (faculty of language), the natural human cognitive capability that enables anyone to learn a natural language (Irvine, 2018, p. 7). Pinker, in his video, also mentions an interesting hypothesis that children might have an endowed universal grammar that applies to all categories of language in their minds (Pinker, 2012). Though these hypotheses and viewpoints are not all confirmed today, they can still help us distinguish human capacities to acquire language from those of other species. Besides, the concept of universal grammar, if established, could be used to decide whether any candidate system is a language, that is, whether it satisfies the structural features common to all languages (Irvine, 2018, p. 7).
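
To make the idea of a discrete combinatorial system concrete, here is a minimal sketch in Python (the toy grammar, vocabulary, and function names are my own illustration, not Pinker’s or Chomsky’s). Because the NP rule can contain a PP, and the PP rule contains another NP, the finite rule set below can generate unboundedly many distinct sentences:

```python
import random

# A toy phrase structure grammar: a finite set of rules and words that can
# nevertheless yield unboundedly many sentences, because NP can contain a PP
# and PP contains another NP (recursion).
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],
    "PP":  [["P", "NP"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["dog"], ["cat"], ["park"]],
    "P":   [["near"], ["behind"]],
    "V":   [["likes"], ["chases"]],
}

def generate(symbol="S"):
    """Expand a symbol top-down, picking one rule at random at each step."""
    expansion = random.choice(GRAMMAR[symbol])
    words = []
    for sym in expansion:
        if sym in GRAMMAR:          # nonterminal: keep expanding
            words.extend(generate(sym))
        else:                       # terminal: an actual word
            words.append(sym)
    return words

for _ in range(3):
    print(" ".join(generate()))
# e.g. "the dog near a park chases the cat" -- finite media, infinite use
```

The recursion in the rules, not the size of the vocabulary, is what produces the infinity: each pass through NP → Det N PP re-enters the grammar, just as the branches of Pinker’s tree re-enter it.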

Now, looking back at Professor Irvine’s introduction, we have worked through the first four layers (phonology, morphology, lexicon, and syntax) that enable language to work. A common mistake here is to overestimate the interdependence between syntax and semantics; after all, meaning has its own characteristic combinatorial structure, one that is not simply “read off” of syntax (Jackendoff, 2003, p. 427). We can now turn to the third component of language according to Pinker, language as an interface, which enables us both to understand what people are saying and to convey information during conversations. I find this part very exciting because it opens a further discussion of why it is hard for computers to understand language and how this bears on the future of AI. For the example The dog likes ice cream, we can probably describe the process as follows:

It seems that the first four layers can be understood together as a database that the human mind, acting as a parser, refers to. When a sentence comes in, human beings store its elements in memory (mostly short-term) while consulting the database to determine each word’s category, where in the surface tree the word belongs, and what else is needed to complete the structure. Here Pinker points out two interesting computational burdens: memory and decision making (Pinker, 1994, p. 201). It might not be a bad idea to illustrate how human beings and computers fare differently under these two burdens.
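
Before turning to that comparison, here is a minimal sketch of the “lexicon as database, mind as parser” picture for The dog likes ice cream (the toy lexicon and recursive-descent routines are my own illustration, assuming a bare S → NP VP grammar):

```python
# A minimal sketch of the "lexicon as database, mind as parser" picture.
# The lexicon and grammar here are a toy illustration, not Pinker's model.

LEXICON = {  # the "database": word -> category
    "the": "Det", "dog": "N", "ice": "N", "cream": "N", "likes": "V",
}

def parse_sentence(words):
    """S -> NP VP. Consume words left to right, holding partial structure
    in a short 'memory' (the recursion stack) until the tree is complete."""
    np, rest = parse_np(words)
    vp, rest = parse_vp(rest)
    assert not rest, f"unparsed words left over: {rest}"
    return ("S", np, vp)

def parse_np(words):
    head = []
    if words and LEXICON[words[0]] == "Det":     # NP -> Det N+
        head.append("Det:" + words[0])
        words = words[1:]
    while words and LEXICON[words[0]] == "N":    # allow compounds: "ice cream"
        head.append("N:" + words[0])
        words = words[1:]
    return ("NP", head), words

def parse_vp(words):
    verb = "V:" + words[0]                       # VP -> V NP
    np, rest = parse_np(words[1:])
    return ("VP", verb, np), rest

print(parse_sentence("the dog likes ice cream".split()))
```

Note where the memory burden lives even in this tiny program: parse_sentence must hold the unfinished S open on the call stack while parse_np and parse_vp do their work.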

  1. Human beings
  2. Computers

As for human beings, an innate biological limitation makes it hard to hold many items in short-term memory (Pinker, 1994, p. 205). Moreover, in the later example of the short “onion” sentence, the human problem is not just the amount of memory but, rather, the ability to keep a particular kind of phrase in memory while another phrase of the same kind is still open. In this respect, computers seem to have a large advantage: inside the hardware architecture, each module is tightly coordinated by a sophisticated control unit, and deterministic machine-level instructions are far more dependable than the unstable, biologically based human brain. The computer also has a hard disk to store long-term data, whereas for human beings, reaching the “hard disk” of long-term memory in their brains is much more time-consuming and difficult than it is for computers.
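
A small sketch of why this particular burden is easy for machines (the “onion” template below is my own construction in the spirit of Pinker’s center-embedded examples): unfinished phrases of the same kind pile up last-in, first-out, which is exactly what a stack handles:

```python
# Center-embedded "onion" sentences: each subject NP interrupts the one
# before it, so the parser must stack up unfinished phrases of the SAME kind.
def onion(depth):
    """Build 'the dog [the cat [the rat bit] chased] barked' to a given depth."""
    nouns = ["dog", "cat", "rat", "flea"]
    verbs = ["barked", "chased", "bit", "annoyed"]
    subjects = [f"the {nouns[i % 4]}" for i in range(depth)]
    # The verbs come back out in reverse order: last-in, first-out -- a stack.
    return " ".join(subjects + [verbs[(depth - 1 - i) % 4] for i in range(depth)])

for d in (1, 2, 3):
    print(d, "pending NPs:", onion(d))
# A computer tracks depth 50 as easily as depth 2; human parsing
# reportedly breaks down once two or three phrases are left pending.
```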

Fortunately, human beings seem to perform much better than computers when it comes to decision making. It is argued that a computer finds it hard to make a decision under ambiguous conditions (Pinker, 1994, p. 109). Here I want to go back to the distinction between the automatic and the autonomous, based on my admittedly limited understanding. It seems that while human beings are able to selectively assimilate the relevant elements of their context and feed them into their parser, this is much harder for computers. From my perspective, the reason can be divided into two aspects: consciousness and unconsciousness. Many people argue that in the earliest Turing tests the computer was merely playing an imitation game: it took the content provided by the previous person as input and, using established algorithms, made subtle modifications to the existing elements of the sentence, so that the imitation resembled a real conversation. Today, scientists are investing more and more in machine learning and deep learning to make computers increasingly conscious of the larger context, on the basis of which they can respond in more and more human-like ways. Unconsciousness, on the other hand, seems to be a huge bottleneck in the development of AI. If we look back at the very beginning of Pinker’s video, this may be the mystery of language, and it arises in the last three layers, which are more abstract than the first four according to Professor Irvine: semantics, pragmatics, and discourse and text linguistics (Irvine, 2018, p. 6).
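
To illustrate the decision burden, here is a minimal sketch using the standard textbook ambiguity I saw the man with the telescope (my own choice of example, not the one in Pinker’s chapter): both parse trees below are grammatically legal, and nothing in the syntax alone tells a parser which one the speaker meant:

```python
# PP-attachment ambiguity: "I saw the man with the telescope."
# Both bracketings are grammatical; choosing between them requires context.
sentence = "I saw the man with the telescope"

readings = [
    # Reading 1: the PP modifies the verb -- the seeing used the telescope.
    ("S", ("NP", "I"), ("VP", "saw", ("NP", "the man"),
                        ("PP", "with the telescope"))),
    # Reading 2: the PP modifies the noun -- the man has the telescope.
    ("S", ("NP", "I"), ("VP", "saw",
                        ("NP", "the man", ("PP", "with the telescope")))),
]

for i, tree in enumerate(readings, 1):
    print(f"reading {i}: {tree}")
# A human discards one reading before even noticing it; a parser that only
# has the grammar must either guess or carry both analyses forward.
```

This is exactly where human context-sensitivity pays off: we resolve the ambiguity effortlessly from what we know about the situation, while a purely syntactic parser has no basis for the decision.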


References:

  1. Chandler, D. (2007). Semiotics: The Basics. New York, NY: Routledge.
  2. Irvine, M. (2018). Introduction to Linguistics and Symbolic Systems: Key Concepts.
  3. Jackendoff, R. (2003). Foundations of Language: Brain, Meaning, Grammar, Evolution. New York, NY: Oxford University Press.
  4. Pinker, S. (1994). How language works. In The Language Instinct: How the Mind Creates Language. New York, NY: William Morrow & Company.
  5. Pinker, S. (2012, October 6). Linguistics as a Window to Understanding the Brain [Video]. Retrieved from https://www.youtube.com/watch?v=Q-B_ONJIEcE
  6. Radford, A. (2009). Linguistics: An Introduction. Cambridge, UK: Cambridge University Press.