NLP and Google Translate

NLP is an interdisciplinary field that combines knowledge from linguistics, computer science, and artificial intelligence. It has two main categories. The first is Natural Language Understanding, which concerns how a computer can pull meaningful information out of a cluster of words and categorize it, like Gmail recognizing spam emails and moving them to the trash for us. The second is Natural Language Generation, which is more complicated, since it is also responsible for understanding the context and extracting the key information from it. The overall purpose is to make the machine understand humans' natural language, so it can build a connection that allows interaction between humans and machines.

Several methods have been used to achieve this goal. Morphology allows the computer to learn from different word roots and categorize words based on shared structure. Distributional semantics learns the meaning of words based on how often they appear together in the same sentence or context. There are also encoder-decoder models, where a computer encodes a sentence into a numerical representation and decodes it into an output sentence. However, since there are too many key details for the machine to remember, recurrent neural networks (RNNs) emerged, where the computer learns just a representation of each word and makes predictions from it. For example, whenever I use Google Docs and repeatedly write about a single topic, it automatically provides suggestions, usually phrases or terms that are highly relevant to what I'm writing about. Right now, since I keep writing about AI-related topics, Google Docs gives me suggestions like "computer science", "machine learning", and "natural language processing" whenever I type something starting with "machine", "computer", or "natural", even when I didn't intend to type those words.
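The distributional idea behind suggestions like these can be sketched in a few lines: count which word tends to follow each word in a body of text, then suggest the most frequent continuation. This is a minimal toy sketch with made-up example data, not Google's actual system, which uses neural networks trained on far more text.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the documents a suggestion system learns from.
# (Illustrative data only.)
corpus = (
    "machine learning is part of computer science "
    "machine learning uses natural language processing "
    "computer science studies machine learning"
).split()

# Distributional step: count which word follows each word in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def suggest(word):
    """Suggest the most frequent continuation seen after `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(suggest("machine"))   # -> learning
print(suggest("computer"))  # -> science
```

Even this crude frequency count reproduces the behavior described above: after typing "machine", the most likely continuation in this corpus is "learning".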

Since the computer can't understand human natural language directly, it is necessary to transform those characters into a language the computer can understand, namely numbers like 0 and 1. Phrase structure rules are the grammar for computers, and a parse tree can tag each word's part of speech and reveal how the sentence is structured, which allows the computer to access, process, and respond to the information more efficiently. So how does Google Translate work, exactly? Word-to-word translation is not enough, because natural language requires a reasonable sequence of words to be understood. So the first step is to encode the natural language into numbers that the computer can operate on, which is done by a vector mapper. Then, in order to get the language we want, the broken-up sentence has to be assembled into a whole again: we simply reverse the whole process, from the recurrent neural network back through the vector mapper. Throughout this process, the neural network allows the computer to learn patterns from a massive number of real examples based on actual conversations and writings.
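The encode-then-reverse step above can be illustrated with a minimal sketch of the vector-mapper idea: assign each word an integer ID, turn a sentence into numbers, and reverse the mapping to get words back. This is only a toy illustration; a real translation system maps words to learned vectors and runs a neural network between the encoding and decoding steps, which is omitted here.

```python
# Toy sentence to encode (illustrative example, not real training data).
sentence = "natural language requires a reasonable sequence".split()

# Build the vocabulary: one integer ID per distinct word.
word_to_id = {}
for word in sentence:
    if word not in word_to_id:
        word_to_id[word] = len(word_to_id)
id_to_word = {i: w for w, i in word_to_id.items()}

# Encode: characters the computer can't operate on become numbers it can.
encoded = [word_to_id[w] for w in sentence]
print(encoded)  # -> [0, 1, 2, 3, 4, 5]

# Decode: reversing the process reassembles the sentence, word by word.
decoded = " ".join(id_to_word[i] for i in encoded)
print(decoded)  # -> natural language requires a reasonable sequence
```

In an actual system, the IDs would be replaced by learned vectors, and the "reverse" direction would produce words in the target language rather than the original sentence.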

Questions: How does the computer "understand" the grammar, exactly? Or does it not understand at all, and just find patterns after all?



CS Dojo Community. (2019, February 14). How Google Translate works – the machine learning algorithm explained! [Video]. YouTube.

CrashCourse. (2017). Natural language processing: Crash Course Computer Science #36 [Video]. YouTube.

CrashCourse. (2017). Machine learning & artificial intelligence: Crash Course Computer Science #34 [Video]. YouTube.

Machine translation. (2021). In Wikipedia.

Natural language processing. (2021). In Wikipedia.

A neural network for machine translation, at production scale. (2016, September 27). Google AI Blog.