Category Archives: Week 7

Phonemes From Our Phones: Natural Language Processing

Imagine writing a step-by-step list of instructions for understanding language. That’s the daunting task that computer scientists face when developing algorithms for Natural Language Processing (or NLP).

First of all, there are roughly 150,000 words in the English language (Oxford English Dictionaries, n.d.), and many of them are synonyms or words with multiple meanings. Think of words like “back” or “miss,” which would be maddening for an English learner to untangle: “I went back to the back of the room and lay on my back,” or “Miss, even though you missed the point, I will miss you when you’re gone.”

After parsing through all those words and their associated meanings and variations, there arises the issue of dialects. English in Australia sounds different from English in Ireland, which sounds different from English in Canada. Moreover, even within a country there can be multiple dialects: in the United States, consider how differently people from Mississippi sound compared to people from Michigan, or people from Massachusetts compared to people from New Mexico. This blog post by internet linguist Gretchen McCulloch dives into some of these issues and raises another interesting point: how do we teach computers to read, pronounce, and/or understand abbreviations and the new forms of English specific to internet communication, such as “lol,” “omg,” and “smh”?

Other issues, such as tone and inflection, can drastically change the meaning of a sentence when it is spoken aloud. I found one example from the Natural Language Processing video from Crash Course Computer Science especially powerful: they take the simple sentence “She saw me” and change its meaning three times just by altering the inflection (Brungard, 2017):

“Who saw you?” … “She saw me.”

“Who did she see?” … “She saw me.”

“Did she hear you or see you?” … “She saw me.”

I want to take a brief moment to appreciate the Crash Course Computer Science video series. That series takes extremely dense and complex topics and packages them into brief, comprehensive, lighthearted videos with delightfully animated (and often topical) visual aids and graphics. I will undoubtedly be returning to them for many more computer science-related quandaries. 

Anyway, all of these obstacles that make natural language processing difficult to program (vocabulary, synonyms, dialects, inflection, tone, etc.) change from language to language. So designing for Spanish or Chinese or Arabic will involve many of the same obstacles as English, while also presenting new hurdles unique to each language and its particular nuances. Luckily for us, companies like Google are rolling out huge language models like BERT, “with 24 Transformer blocks, 1024 hidden layers, and 340M parameters,” that are capable of processing (and, in effect, “learning”) billions of words across multiple languages and “even surpassing human performance in the challenging area of question answering” (Peng, 2018). This helps explain why “talking robots” like Siri and Alexa have become less creepy-sounding, more efficient, and much more popular in recent years.
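
To make the scale of these models a little more concrete, here is a tiny, hedged sketch of querying a pretrained BERT model through the open-source Hugging Face transformers library. This is an assumption for illustration, not the exact system Peng describes, and the model name and sentence are just examples:

```python
from transformers import pipeline

# Load a pretrained BERT and ask it to fill in a masked word.
# (Downloads the model on first run; "bert-base-uncased" is a smaller
# sibling of the 340M-parameter BERT-large mentioned above.)
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for guess in fill_mask("I went back to the [MASK] of the room."):
    print(guess["token_str"], round(guess["score"], 3))  # most likely words and their scores
```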

Obviously, NLP is a huge undertaking for computer scientists, and there is still plenty of work to be done before computers can consistently, efficiently, and seamlessly understand and interact with human language. But with the sheer amount of language and linguistic data available online now (and increasing at an exponential rate), we may look back on this conversation in 5-10 years and laugh. And the computers might laugh with us.

 

References

Brungard, B. (2017). Natural Language Processing [Video] (Vol. 36). PBS Digital Studios. Retrieved from https://www.youtube.com/watch?v=fOvTtapxa9c

How many words are there in the English language? (n.d.). Oxford English Dictionaries. Retrieved from https://en.oxforddictionaries.com/explore/how-many-words-are-there-in-the-english-language/

McCulloch, G. (2017). Teaching computers to recognize ALL the Englishes. Retrieved from https://allthingslinguistic.com/post/150556285220/teaching-computers-to-recognize-all-the-englishes

Peng, T. (2018, October 16). Best NLP Model Ever? Google BERT Sets New Standards in 11 Language Tasks. Medium. Retrieved from https://medium.com/syncedreview/best-nlp-model-ever-google-bert-sets-new-standards-in-11-language-tasks-4a2a189bc155

Using Google Translate for Pidgin English

At a basic level, language translation tools like Google Translate are designed to encode content from one language into vectors, identify the corresponding words in another language through an attention mechanism, and then decode the vectors into the desired language. This process seems efficient for translating simple sentences of a certain length, but it calls into question the effectiveness of translation programs on complex sentences. By complex sentences, I do not mean longer sentences with sophisticated vocabulary; I mean satire, comedy, or even sarcasm. Translation of complex sentences is heavily dependent on machine learning and the training of attention mechanisms. Training broadens the database of semantic, syntactical, and contextual information available to the designers of translation tools. Ultimately, end-to-end systems like these have high performance requirements, which in turn demand a lot of processing power to operate. It can also be assumed that the high level of power needed to operate these systems impedes their flexibility in processing new information and languages.

One example of this may be when translation applications are introduced to different dialects of an already existing language, like Pidgin English. Google Translate obviously has a robust database of the English language and its larger dialects, like American English or British English. However, Pidgin English exists across a variety of cultural groups, each with its own syntax. Since volume is likely a driver of the depth of database knowledge used for translation, does all pidgin get categorized as one “language” and get translated accordingly? In this instance, clustering, a common machine learning technique, would likely be used for the efficiency of the language processing tool, but it would hinder the tool’s accuracy in translation. By clustering Pidgin English as one language, the language processing tool would have more data to assess during attention-mechanism matching. It is unclear what other machine learning techniques could be used to manage small languages like the dialects of Pidgin English and increase the accuracy of Pidgin English translation. I also wonder how translation apps would manage languages like Esperanto, which has semantic roots in the Romance languages but is a constructed language that does not belong to any linguistic family. The semantic, syntactical, and contextual information exists for the different languages that were used to develop Esperanto, but how can the attention mechanism analyze the rules from different languages simultaneously?
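
To make the clustering question more concrete, here is a toy sketch using scikit-learn. The sentences are invented stand-ins rather than a real pidgin corpus, and a production system would of course work at a far larger scale:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented example sentences: two pidgin-style greetings, two standard-English ones.
sentences = [
    "how you dey",
    "wetin dey happen",
    "good morning to you",
    "how are you today",
]

vectors = TfidfVectorizer().fit_transform(sentences)   # turn each sentence into a vector
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # e.g. [0 0 1 1] -> the tool lumps "similar-looking" text into one bucket
```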

Daniel Jurafsky and James H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 2nd ed. (Upper Saddle River, N.J: Prentice Hall, 2008). Selections.

Thierry Poibeau, Machine Translation (Cambridge, MA: MIT Press, 2017). Selections.

How Google Translate Works: The Machine Learning Algorithm Explained (Code Emporium). Video.

Natural Language Processing + Google Translate

Language translation is more complex than a simple word-for-word replacement method. As seen in the readings and videos for this module, translating a text into another language needs more context than a dictionary can provide. This “context” in language is known as grammar. Because computers do not understand grammar, they need a process by which they can deconstruct sentences and reconstruct them in another language in a way that makes sense. Words can have several different meanings and also depend on their position within a sentence to make sense. Natural Language Processing addresses this problem of complexity and ambiguity in language translation. The PBS Crash Course video breaks down how computers use NLP methods.

Deconstructing sentences into smaller pieces that could be easily processed:

  • In order for computers to deconstruct sentences, grammar is necessary
  • Development of Phrase Structure Rules which encapsulate the grammar of a language

Using phrase structures, computers are able to construct parse trees

*Image retrieved from: https://www.youtube.com/watch?v=fOvTtapxa9c

Parse trees: link every word with a likely part of speech + show how the sentence is constructed

  • This helps computers process information more easily and accurately (a toy parser sketch follows below)
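
As a rough illustration of phrase structure rules and parse trees, here is a toy sketch using the NLTK library with a tiny made-up grammar, far smaller than anything a real system would use:

```python
import nltk

# A made-up mini grammar: phrase structure rules for a three-word sentence.
grammar = nltk.CFG.fromstring("""
  S  -> NP VP
  NP -> 'she' | 'me'
  VP -> V NP
  V  -> 'saw'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse(['she', 'saw', 'me']):
    print(tree)   # (S (NP she) (VP (V saw) (NP me)))
```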

The PBS video also explains that this is how Siri is able to deconstruct simple voice commands. Additionally, the speech recognition apps with the best accuracy use deep neural networks.

Looking at how Google Translate’s neural network works, the Code Emporium video describes a neural network as a problem solver. In the case of Google Translate, the neural network’s job, or problem to solve, is to take an English sentence (the input) and turn it into a French translation (the output).

As we learned from the data structures module, computers do not process information the way our brains do; they process information using numbers (vectors). So the first step will always be to convert the language into the computer’s language. For this particular task, a Recurrent Neural Network is used (a neural network designed for sequences such as sentences).

Step 1. Take English sentence and convert into computer language (a vector) using a recurrent neural network

Step 2. Convert vector to French sentence (using another recurrent neural network)

Image retrieved from: https://www.youtube.com/watch?v=AIpXjFwVdIE
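
As a rough, hedged sketch of those two steps, the toy PyTorch code below wires up an untrained encoder and decoder with made-up sizes. Real systems are vastly larger, are trained on huge corpora, and feed the embedding of each predicted word back into the decoder:

```python
import torch
import torch.nn as nn

EN_VOCAB, FR_VOCAB, HIDDEN = 1000, 1000, 64   # toy vocabulary and vector sizes

embed = nn.Embedding(EN_VOCAB, HIDDEN)
encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)   # Step 1: English -> vector
decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)   # Step 2: vector -> French
to_french = nn.Linear(HIDDEN, FR_VOCAB)

english_ids = torch.tensor([[4, 27, 301]])           # a 3-word English sentence as word IDs
_, sentence_vector = encoder(embed(english_ids))     # the whole sentence squeezed into one vector

# Decode a few French words, one at a time, starting from the sentence vector.
step_input = torch.zeros(1, 1, HIDDEN)
hidden = sentence_vector
for _ in range(3):
    step_output, hidden = decoder(step_input, hidden)
    word_id = to_french(step_output).argmax(dim=-1)
    print(word_id.item())       # an (untrained, essentially random) French word ID
    step_input = step_output    # simplified: a real decoder feeds back the predicted word
```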

According to research from a 2014 paper on neural machine translation, the Encoder-Decoder architecture pictured above works best for medium-length sentences of 15-20 words (Cho et al.). The Code Emporium video tested the LSTM-RNN encoder method on longer sentences and found that the translations did not work as well. This is due to the lack of complexity in this method: recurrent neural networks use past information to generate the present information. The video gives the example:

“While generating the 10th word of the French sentence it looks at the first nine words in the English sentence.” The recurrent neural network is only looking at the past words, not the words that come after the current word. In language, both the words that come before and the words that come after are important to the construction of a sentence. A bidirectional neural network is able to take both into account.

Image retrieved from: https://www.youtube.com/watch?v=AIpXjFwVdIE

Bidirectional neural network (looks at the words that come before and after a word) vs. standard recurrent neural network (only looks at the words that come before it)
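
A quick sketch of the difference, with toy sizes and untrained weights: a bidirectional recurrent layer produces, for every word, a summary of what comes before and what comes after, which is why its output is twice as wide:

```python
import torch
import torch.nn as nn

sentence = torch.randn(1, 5, 16)            # (batch, 5 words, 16-dim embeddings)

uni = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
bi = nn.GRU(input_size=16, hidden_size=32, batch_first=True, bidirectional=True)

uni_out, _ = uni(sentence)
bi_out, _ = bi(sentence)
print(uni_out.shape)  # torch.Size([1, 5, 32]) - each word summarizes only what came before
print(bi_out.shape)   # torch.Size([1, 5, 64]) - forward and backward summaries concatenated
```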

Using the BiDirectional model – which words (in the original source) should be focused on when generating the translation?

Now the translator needs to learn how to align the input and output. This is learned by an additional unit called an attention mechanism, which determines which French words should be generated from which English words.
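
Here is a minimal sketch of the idea behind an attention mechanism, using simple dot-product scoring. This is an assumption for illustration; the mechanisms in the actual papers use small learned networks to compute the scores:

```python
import numpy as np

def attention(decoder_state, encoder_states):
    scores = encoder_states @ decoder_state          # one score per source word
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax -> alignment weights
    context = weights @ encoder_states               # weighted mix of the source words
    return weights, context

encoder_states = np.random.randn(5, 8)   # 5 English words, each an 8-dim state
decoder_state = np.random.randn(8)       # state while generating one French word
weights, context = attention(decoder_state, encoder_states)
print(weights.round(2))   # the weights show which English words the French word "attends" to
```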

This is the same process that Google Translate uses – on a larger scale

Google Translate Process & Architecture / Layer Breakdown

Image retrieved from video: https://www.youtube.com/watch?v=AIpXjFwVdIE

The English sentence is given to the encoder, which turns the sentence into a vector (each word gets assigned numbers); an attention mechanism is then used to determine which English words to focus on as each French word is generated; finally, the decoder produces the French translation one word at a time (focusing on the words determined by the attention mechanism).

Works Cited

CrashCourse. Data Structures: Crash Course Computer Science #14. YouTube, https://www.youtube.com/watch?v=DuDz6B4cqVc.
CrashCourse. Machine Learning & Artificial Intelligence: Crash Course Computer Science #34. YouTube, https://www.youtube.com/watch?v=z-EtmaFJieY.
CrashCourse. Natural Language Processing: Crash Course Computer Science #36. YouTube, https://www.youtube.com/watch?v=fOvTtapxa9c.
CS Dojo Community. How Google Translate Works – The Machine Learning Algorithm Explained! YouTube, https://www.youtube.com/watch?v=AIpXjFwVdIE.
Thierry Poibeau, Machine Translation (Cambridge, MA: MIT Press, 2017). Selections.

Deblackboxing Deep Learning Machine Translation

Major AI players in this domain, such as Google and Facebook, are moving toward deep learning. Deep learning approaches are especially promising because they learn from data and are not fixed to any specific task. Deep learning architectures such as deep neural networks have been applied to many fields, including natural language processing.

A deep learning machine translation system is simply composed of an “encoder” and a “decoder.” The encoder converts a sequence into a vector, a kind of language the computer can understand, and the decoder converts the vector back into a sequence, the output perceivable by users. The encoder and decoder are typically long short-term memory recurrent neural networks (LSTM-RNNs). The advantage of the LSTM-RNN is that it is good at dealing with long sentences: it handles the complexity of a sentence by associating a word with the other words in the input. The main characteristic of the deep learning translation approach is that it is based on the hypothesis that words appearing in similar contexts may have a similar meaning. The system thus tries to identify and group words appearing in similar translational contexts in what are called “word embeddings” (Poibeau, 2017). In other words, this approach understands a word by embedding it in its context, which enhances the accuracy of translation.
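
As a toy illustration of word embeddings, the vectors below are made up; real systems learn vectors with hundreds of dimensions from data, but the intuition is the same: words used in similar contexts end up pointing in similar directions.

```python
import numpy as np

# Invented 4-dimensional "embeddings" for three words.
embeddings = {
    "cat":   np.array([0.9, 0.1, 0.0, 0.3]),
    "dog":   np.array([0.8, 0.2, 0.1, 0.4]),
    "table": np.array([0.0, 0.9, 0.8, 0.1]),
}

def cosine(a, b):
    # Cosine similarity: close to 1 when two vectors point the same way.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(round(cosine(embeddings["cat"], embeddings["dog"]), 2))    # high similarity
print(round(cosine(embeddings["cat"], embeddings["table"]), 2))  # low similarity
```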

I would like to use English-Chinese translation as an example. Translating Chinese can often be tricky because it has an entirely different writing system and grammar, and one character can have several different meanings and pronunciations. To address this, Google’s neural machine translation relies on eight-layer LSTM-RNNs to produce translations that are more accurate with respect to context.

Also, there is an interesting term called the “attention mechanism” that is used in deep learning machine translation. By learning from a large amount of data, this model learns to decide which parts to focus its attention on while generating output from one language to another. This mechanism then helps to align the input with the output; in other words, it helps to make sure that the output represents the main meaning of the input. However, not every language has the same word order as English, and this raises the difficulty of alignment. English-Japanese translation is a good example here, since Japanese has quite a different syntax.

Deep learning is a good tool for translation, but it is not perfect. It can still make significant errors that a human translator would never make, like mistranslating proper names or rare terms, and translating sentences in isolation rather than considering the context of the paragraph or page. So there is still a long way to go.

 

References

Ethem Alpaydin,  Machine Learning: The New AI. Cambridge, MA: The MIT Press, 2016.

Thierry Poibeau, Machine Translation (Cambridge, MA: MIT Press, 2017).

How Google Translate Works: The Machine Learning Algorithm Explained (Code Emporium). Video.

 

Google Translate: Bi-directional Recurrent Neural Network

I often use Google Translate when reading, but I had no idea how it works until this week’s study. Google Translate was long based on something called “statistical machine translation,” and more recently on neural machine translation. The principles and procedures behind it are not magic, but a series of training and processing steps based on statistics.

Machine translation was long regarded as inaccurate and full of mistakes until recent years, with the development of machine learning. In fact, machine translation is not easy at all. Translation requires fully understanding the sentence to be translated and having an even better knowledge of the target language (Machine Translation, 62). One of the biggest challenges is that language itself is ambiguous; different people may understand the same sentence in different ways. For instance, when I once translated one of Trump’s tweets about the recent government shutdown, I was puzzled by his meaning. In the tweet, he said, “Every nation has not only the right but the absolute duty to protect its borders and its citizens. A nation without borders is not a nation at all. Without borders we have the reign of chaos, crime, cartels and believe it or not coyotes.” The word “coyote” can be understood in two ways: one is its literal meaning (the animal), the other is a person who smuggles migrants across the border (a kind of metaphor). I did not know which one to choose for the translation.

There are two components in Google Translate’s architecture: an encoder and a decoder. First, the sentence to be translated is converted into a vector (a series of numbers readable by computers) with the help of a bi-directional recurrent neural network; this is the encoding process. Second, that vector is converted into a translated sentence with another recurrent neural network; this is the decoding process. Between the encoder and the decoder there are 8 layers of LSTM-RNNs with residual connections between layers, plus some tweaks for accuracy and speed. In the process, Google Translate continuously identifies the best possible alignment and finds correspondences at the word level by learning patterns from thousands of translation examples. For this reason, Google Translate is also described as example-based translation.
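
As a rough sketch of what “layers with residual connections” means, the toy PyTorch snippet below stacks recurrent layers and adds each layer’s input back to its output. This is a simplified illustration under my own assumptions, not Google’s actual implementation:

```python
import torch
import torch.nn as nn

class ResidualLSTMStack(nn.Module):
    def __init__(self, size, num_layers=8):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.LSTM(size, size, batch_first=True) for _ in range(num_layers)
        )

    def forward(self, x):
        for layer in self.layers:
            out, _ = layer(x)
            x = x + out  # residual connection: add the layer's input to its output
        return x

sentence = torch.randn(1, 6, 64)              # (batch, words, feature size)
print(ResidualLSTMStack(64)(sentence).shape)  # torch.Size([1, 6, 64])
```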

Karen Hao, “The Technology Behind OpenAI’s Fiction-Writing, Fake-News-Spewing AI, Explained,” MIT Technology Review, February 16, 2019.

Thierry Poibeau, Machine Translation (Cambridge, MA: MIT Press, 2017). Selections.

https://blog.statsbot.co/machine-learning-translation-96f0ed8f19e4

Difficulty in Machine Translation, From English to Korean

The two latest language translation technologies are based on statistics or artificial neural networks: Statistical Machine Translation and Neural Machine Translation. Statistical Machine Translation requires an enormous amount of data, while Neural Machine Translation uses a large neural network and deep learning, which make context-sensitive translation possible.

Korean, along with Chinese and Japanese, is often grouped with the Han-ideograph (CJK) writing systems. English and Korean are significantly different in terms of structure, such as the distribution of subjects, the word order, the forms of verbs, and so on. English uses a Subject-Verb-Object structure, while Korean uses a Subject-Object-Verb structure. For example,

Besides, omitting subjects in Korean creates confusion in comprehending the meaning. For example,

As you can see, ‘영희가’ can be omitted in Korean, which can cause a problem in understanding the sentence. Consequently, in order to avoid this, the context of the discourse needs to be closely considered, and this requirement poses a challenge for machine translation.

Another difficulty is that Korean speech is divided into polite and impolite forms, depending on who you are talking to; this is extremely important in Korean, since using the wrong form seems quite rude. The differences between the polite and impolite forms are complicated. The politest and most formal form of speech ends in ‘십니다’, while a less polite and formal form ends in ‘~요’. These two polite forms are used when talking to elders, superiors, or people you are not familiar with. For example, ‘I listen to music’ in Korean is ‘음악을듣습니다’ or ‘음악을들어요’ when speaking in a polite form. However, when you talk to a friend, a subordinate, or people younger than you, the same English sentence will be translated as ‘음악을듣다’ instead, which is an informal and impolite form of speech in Korean. The choice of polite or impolite form is totally dependent on the context; sometimes a younger person can still talk to an older person in an impolite form, if they are close friends or the older person agrees to it.

Besides, the usage of the 1st person and 2nd person is different in the polite and impolite forms of speech in Korean. For example,

When I input ‘Do you eat lunch’ into Google Translate, the translation is quite impolite, and it would be very rude to ask an elder this way. If you watch some Korean dramas, you might notice that the word ‘너’ (neo), which means ‘you’, is commonly used when one looks down on others, and it is quite rude. ‘I’ and ‘You’ are different in the polite and impolite forms of speech in Korean: ‘I’ is ‘저’ in the polite form and ‘나’ in the impolite form, while ‘You’ is ‘당신’ in the polite form and ‘너’ in the impolite form.

Also, there are some differences depending on gender. In Korean, people seldom directly call an elder by name, even if the age gap is small. For example, a 13-year-old girl must call a 14-year-old girl ‘sister’, or else it will be rude. The words ‘older sister’ and ‘older brother’ are used by both boys and girls in English; however, Korean uses different words depending on the speaker’s gender. If a girl calls her older sister, she must say ‘언니’. If a boy calls his older sister, he must say ‘누나’. If a girl calls her older brother, she must say ‘오빠’. If a boy calls his older brother, he must say ‘형’.

These differences significantly add to the complexity and difficulty in machine translation from English to Korean. And here is a video talking about some problems with machine translation from English to Korean and why Korean Machine Translation is terrible.

 

Reference:

Teller, V. (2000). Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition. Computational Linguistics, 26(4), 638-641.

Kim, S., & Lee, H. (2017). A Study on Machine Translation Outputs: Korean to English Translation of Embedded Sentences. 영어영문학, 22(4), 123–147.

From Natural Language to Natural Language Processing

By Linda Bardha

Language impacts our everyday lives. Language helps us express our feelings, desires, and queries to the world around us. We use words, gestures, and tone to portray a broad spectrum of emotion. The unique and diverse methods we use to communicate through written and spoken language are a large part of what helps us bond with each other.

Technology and research in the fields of linguistics, semiotics, and computing have helped us communicate with people from different countries and cultures through translation tools.

The process of learning a language is natural, and we are all born with the ability to learn one, but a language isn’t just a box of words that you string together, run through a database or dictionary, and “magically” translate into another language. There are rules for combining words into grammatical phrases, sentences, and complex sequences. Language is a system made of subsystems, and each of these subsystems/layers plays an important role in the whole architecture. This might seem like a process we often overlook, since we don’t think about it when we speak or write. But that is not the case when we use tools for translation, which is why there are issues when you translate text from one language to another through a tool like Google Translate. Since each language has its own set of rules and grammar, a correct and cohesive translation is harder to achieve.

As the video on Google Translate explains, neural networks help with this translation process.

What do we mean by the term Neural Network?

In the field of computer science, an artificial neural network is a classifier. In supervised machine learning, classification is one of the most prominent problems: the aim is to sort objects into classes. As Jurafsky and Martin explain, the term “supervised” refers to the fact that the algorithm is previously trained with “tagged” examples for each category (i.e., examples whose classes are made known to the network) so that it learns to classify new, unseen examples in the future. In supervised machine learning, it is important to have a lot of data available to train on. As Poibeau points out in his book, a text corpus is necessary for machine translation. With the increasing amount of translations available on the Internet, it is now possible to directly design statistical models for machine translation. This approach, known as statistical machine translation, is the most popular today. Robert Mercer, one of the pioneers of statistical translation, proclaimed: “There is no data like more data.” In other words, for Mercer as well as followers of the statistical approach, the best strategy for developing a system consists in accumulating as much data as possible. These data must be representative and diversified, but as these are qualitative criteria that are difficult to evaluate, it is the quantitative criterion that continues to prevail.

Another important part of successful statistical machine translation is the combination of machine learning algorithms and natural language processing. Machine learning in the context of text analytics is a set of statistical techniques for identifying parts of speech, entities, sentiment, and other aspects of text. The techniques can be expressed as a model that is then applied to other text (supervised machine learning). It can also be a set of algorithms that work across large sets of data to extract meaning, which is known as unsupervised machine learning. Unlike supervised machine learning, unsupervised machine learning refers to statistical techniques for getting meaning out of a collection of text without pre-training a model. Some are very easy to understand, like “clustering,” which just means grouping “like” documents together into sets called “clusters.” These can be sorted by importance using hierarchical clustering, from the bottom up or the top down.
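
A small sketch of the supervised side of this, using scikit-learn with invented training sentences and labels (real systems train on far more data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set of "tagged" examples.
texts = ["great movie, loved it", "terrible plot, awful acting",
         "wonderful and moving", "boring and bad"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)                         # training on the tagged examples
print(model.predict(["what a wonderful film"]))  # classifies a new, unseen example
```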

Now that we know that the process of translation requires a corpus, the incorporation of neural networks, and statistical techniques, it needs one more component to complete the process: Natural Language Processing. Natural Language Processing (NLP) broadly refers to the study and development of computer systems that can interpret speech and text as humans naturally speak and type it. Human communication is frustratingly vague at times; we all use colloquialisms, abbreviations, sarcasm, or irony when we speak, and sometimes even make spelling mistakes. All of these make computer analysis of natural language difficult. There are three main components of a given text that need to be understood in order for a good translation to happen:

  1. Semantic Information – which is the specific meaning of each individual word.
  2. Syntax Information – which is the set of rules, principles, and processes that governs the structure of a sentence.
  3. Context Information – which is understanding the context in which a word, phrase, or sentence appears (a small tagging sketch follows this list).
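
As a small sketch of pulling syntactic and part-of-speech information out of a sentence, here is an example using the spaCy library (it assumes the small English model, en_core_web_sm, has been downloaded separately):

```python
import spacy

# Load a small pretrained English pipeline and analyze one sentence.
nlp = spacy.load("en_core_web_sm")
doc = nlp("She saw me at the back of the room.")

for token in doc:
    # each word, its part of speech, and its grammatical role relative to its head word
    print(token.text, token.pos_, token.dep_, "->", token.head.text)
```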

Now that we have looked at these concepts and how they work, let’s take a look at the design principles and mechanisms of Neural Machine Translation.

  • Translates whole sentences at a time, rather than just piece by piece.
  • This is possible because of the end-to-end learning system built on Neural Machine Translation, which basically means that the system learns over time to create better, more natural translations.
  • NMT models use deep learning and representation learning.
  • NMT requires a lot of processing power, which is still one of its main drawbacks. The performance and time requirements are even greater than for statistical machine translation. However, according to Moore’s law, processing power should double every 18 months, which again offers new opportunities for NMT in the near future.

While NMT is still not perfect, especially with large technical documents, it shows progress compared to the previous approach, Phrase-Based Machine Translation (PBMT), which is one mode of Statistical Machine Translation. The attention now is on how to improve it even more, and one suggestion is to look at “hybrid” models, which use both of the previous methods. The efficiency of each depends on a number of factors, such as the language used and the available linguistic resources (such as corpora).

References:

Daniel Jurafsky and James H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 2nd ed. (Upper Saddle River, N.J: Prentice Hall, 2008). Selections.

Thierry Poibeau, Machine Translation (Cambridge, MA: MIT Press, 2017). Selections.

How Google Translate Works: The Machine Learning Algorithm Explained (Code Emporium). Video.

Barak Turovsky. Found in translation: More accurate, fluent sentences in Google Translate. (Nov. 15, 2016) found at https://blog.google/products/translate/found-translation-more-accurate-fluent-sentences-google-translate/

Seth Redmore. Machine Learning vs. Natural Language Processing, Lexalytics (Sep. 5, 2018) found at https://www.lexalytics.com/lexablog/machine-learning-vs-natural-language-processing-part-1

United Language Group. Statistical Vs. Neural Machine Translation found at https://unitedlanguagegroup.com/blog/statistical-vs-neural-machine-translation/

 

 

Machine Translation, from Statistical to Neural Era

Tianyi Zhao

Translation applications have become increasingly popular as globalization accelerates. From merely functioning as word-translation dictionaries to translating paragraphs and idioms, machine translation has been widely applied as language resources grow richer and technology rapidly evolves. Machine translation has become a significant field of computer science, computational linguistics, and machine learning. Beginning with rule-based systems, machine translation has advanced to statistical and neural approaches, the two prevalent ones today. However, as deep learning develops, neural networks are gradually replacing statistical machine translation.

Statistical Machine Translation

Figure 1. Statistical Machine Translation Pipeline

(Source: https://www.researchgate.net/figure/Basic-Statistical-Machine-Translation-Pipeline_fig2_279181014)

Statistical machine translation uses predictive algorithms to teach a machine to translate from a parallel bilingual text corpus. The machine leverages what it has been taught, the already-translated texts, to predict translations of foreign-language input. It is data-driven, needing only a corpus of the source and target languages. However, word or phrase alignment breaks sentences down into independent words or phrases during translation: a word cannot be considered and translated until the previous one is finished. Besides, corpus collection is costly in time and effort. The statistical approach cannot stay predominant, because “[it] consists for the most part in developing large bilingual dictionaries manually” (Poibeau, 139). Additionally, the translation results may have only a superficial fluency that can cause misunderstanding.

Neural Network Machine Translation

Figure 2. Neural Machine Translation

(Source: https://www.morningtrans.com/welcome-to-the-brave-new-world-of-neural-machine-translation/)

Neural machine translation is a more advanced approach than the statistical one. It is loosely inspired by the neural networks of the human brain: information is passed through different “layers” of processing before output. Compared to the statistical approach, neural machine translation does not require explicit alignment between the languages. Instead, it “attempts to build and train a single, large neural network that reads a sentence and outputs a correct translation.” (Bahdanau, 1) It is an encoder-decoder model, in which the source sentence is encoded into a fixed-length vector from which a decoder generates a translation. With deep learning techniques applied, neural machine translation can teach itself to translate based on statistical models. According to Ethem Alpaydin, the process of neural machine translation starts with multi-level abstraction over lexical, syntactic, and semantic rules. Then a high-level abstract representation is extracted, and the translated sentence is generated through “decoding where we synthesize a natural language sentence” in the target language “from such a high-level representation.” (Alpaydin, 109) It draws on context to find more accurate words and automatically adjusts toward more natural, syntactically smoother, and more readable sentences.

All in all, although statistical machine translation is still prevailing, it will be superseded by the emerging neural networks. I believe neural machine translation will be the near future, because it has the advantages of quality and speed, which are precisely the true values of machine translation.

 

Works Cited

Alpaydin, Ethem. Machine Learning: The New AI. The MIT Press. 2016.

Kaplan, Jerry. Artificial Intelligence: What Everyone Needs to Know. Oxford UP, 2016.

Juan Migual, Alexander, Necip Fazil Ayan. “Transitioning entirely to neural machine translation.” Facebook Code, Aug. 3, 2017. https://code.fb.com/ml-applications/transitioning-entirely-to-neural-machine-translation/

Poibeau, Thierry. Machine Translation. MIT Press, 2017.

Hao, Tianyong, et al. “Natural Language Processing Empowered Mobile Computing.” Wireless Communications and Mobile Computing, vol. 2018, Hindawi, 2018, p. 2, doi:10.1155/2018/9130545.

Bahdanau, Dzmitry, et al. “Neural Machine Translation by Jointly Learning to Align and Translate.” arXiv.org, Cornell University Library, arXiv.org, May 2016, http://search.proquest.com/docview/2079082715/.

“Statistical vs. Neural Machine Translation.” United Language Group.

Google Translate’s Next Level Neural Machine Translation System

Google Translate’s system most likely works through extensive classification algorithms covering all the languages it supports. Classification is a type of algorithm that categorizes the features of data and stores them for both machine learning and retrieval once the application is in use.

It was stressed in the Crash Course Machine Learning video that conceptualizing the process of machine learning, and how fast AI truly computes machine-learning translation, is nearly impossible because of how sophisticated it is. This is evident in how Google Translate’s user interface responds to users in almost real time, much faster than having to whip out a translation book. As defined in the Crash Course videos, machine learning is a set of techniques that sits inside the even more ambiguous goal of artificial intelligence. That set of techniques is made up of various components that contribute to both machine learning and intelligence in AI. Users input a set of strings consisting of numbers, letters, or punctuation. Such a string is stored as an array of characters, ultimately binary numbers in memory, so that when a command accesses a certain string it goes straight to that binary code. Structs are compound data structures that go beyond single numbers and simple data: they store several pieces of data together (think of a group), which can then be fed into the AI system and read back out. Google Translate’s system likely processes requests in queues, which work in “first-in, first-out” fashion, in contrast to stacks, which work from the top down (“last-in, first-out”). Additionally, Google Translate uses artificial neural networks to take in what the user is typing and output the desired answer. What we don’t see, or pay much mind to, are the hidden layers between the input and output that organize the input, classify it, and produce the output properly.
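
As a toy illustration of the queue-versus-stack distinction mentioned above (illustration only, not Google’s actual internals):

```python
from collections import deque

requests = deque()                  # queue: first-in, first-out
requests.append("translate 'hola'")
requests.append("translate 'bonjour'")
print(requests.popleft())           # -> "translate 'hola'" (the earliest request comes out first)

history = []                        # stack: last-in, first-out
history.append("page 1")
history.append("page 2")
print(history.pop())                # -> "page 2" (the most recent item comes out first)
```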

In Google’s article on its neural network for machine translation, it is interesting to delve into the concept of phrase-based machine translation and how that technology developed into the Google Neural Machine Translation system, which is still phrase-aware but reads more colloquially than a purely phrase-based system. The difference between phrase-based translation and the Google Neural Machine Translation system is that the Google version scans and classifies each word being translated and then matches it to a weighted distribution over the most relevant words in the target language.

Ambiguity vs. Predictability in ML and Google

Annaliese Blank

Some key themes for this week are grammar and online translation of language. My goal for this week was to unpack this more and see how machine translation and Google Translate work. I use Google Translate all the time, and I wanted to see how it operates since I rely on its Spanish translations a lot for my travels. I have been to many parts of Mexico and Argentina where I specifically used Google Translate to build a foundation before staying with my host families. I took Spanish from third grade onward, and even during high school Google Translate was at its peak of popularity at my school. It seemed like the perfect solution to so many problems when other websites or textbooks just couldn’t get the job done well enough. The key word here is enough: when we unpack it, Google Translate does so much more than a simple translation. It does a grammar check and a conversational check, and it makes sure the translation is the correct one for the region you’re in, since some areas don’t use the same version of Spanish.

To further this, I really enjoyed the machine learning reading and especially wanted to make connections to machine learning here. All of this got me thinking about translation. A question I’d like to raise is: what exactly is translation, and how can we understand the process, perhaps through technology other than Google, like machine translation more broadly? What are the criteria?

In the Machine Learning piece, Martin and Jurafsky helped me gather some fundamentals for my inquiries. When we de-black-box this, we can see that there is no perfect way to translate something, especially since, as I mentioned before, the “perfect” translation doesn’t exist across all locations because language is not “universal.” They say, “Technical texts are not translated in the same way as literary texts. A specific text concerns a world that is remote from the world of the reader in the target language. The translator has to choose between staying close to the original text or making use of paraphrasing to ensure comprehension. The tone and style of the text are highly subjective” (Machine Learning, pg. 19). This got me thinking: how can we trust machine translation or Google Translate so much if it is impossible to achieve 100% accuracy? Where does this trust reside?

Some other areas I found really interesting were the discussions of morphology and syntax. Morphology deals with the structure of words, and syntax governs how sentences are designed. For computing, or machine translation, this is really hard to do because of the one thing they mentioned most: AMBIGUITY, high amounts of uncertainty. From what I have read and gathered, this still seems to be a main limitation of online translation and could remain a problem for Google Translate in the future; this problem may never be fully fixable. How does ambiguity exist if predictability prevails?

After watching these videos and Crash Course episodes, I feel I have gathered a better understanding of what Google Translate does and how language can be modified in machine learning and coding. Coding, in its own way, is its own language.

And finally, to describe the levels of a technology for this week, I wanted to continue with Google Translate. In the last Google video they explain the tap-to-translate feature: there is a button where you can translate a message sent to you, speak in voice-over or type out a message in English, and have it sent back in the designated language. It can be any phrase up to 5,000 characters. The translation options range across English, Spanish, French, Italian, Russian, Chinese, Arabic, etc. Once the translation is complete, you can send it, save it, or drag it wherever you need it into a different app. This is most definitely the least complex NLP use case and is very easy to manage. What happens here is that any of these languages are translated based on their original entry in English and then re-configured into the pre-set language of the user’s preference. For AI, this is a game changer because the translations become predictable once the machine has already learned the appropriate inputs. This really helped me understand how the translation process works. Having pre-set features for things like syntax goes hand in hand with producing the best translated result. All of this has now led to Google’s most recent upgrade, “translate as you type,” where, as I mentioned before, the pre-set features of the system heighten this predictability and make translation much easier for anyone.

I wanted to take this a step further, so I looked up how Google Translate works in other ways, such as translating handwriting, creating your own verbal phrases, slowing down the pronunciation of certain phrases, and connecting your style of language from Facebook Messenger to the translation tab. More details can be found here: https://www.express.co.uk/life-style/science-technology/976492/Google-Translate-how-to-use-Google-Translate-how-accurate

I feel I have gathered a better sense of machine learning and Google Translate in relation to AI, but I still feel stuck on artificial vs. natural systems.

Data Structure Crash Course:

  • Arrays- values stored in memory
  • Indexes
  • Strings- arrays of characters
  • Null characters
  • Matrix – array of arrays (3 total)

Machine Learning & Artificial Intelligence:

  • Algorithms give computers the ability to learn from the data and allow the ability to give and make decisions
  • Input layer, Output layer, Neuron Layer

How Google Translate Works:

  • language translations- word by word
  • curated data base to help translate pairs
  • tokens- smallest units of language
  • grammar- defines ordering of tokens
  • syntax analysis- does the structure look correct?
  • Semantic analysis- meaning, does this sentence make sense in context?
  • Neural network – component that learns to solve problems and allows the network to learn patterns and data
  • This helps with the translation process
  • Encoder – Decoder architecture – the pathway to insert vectors that carry out translations

Daniel Jurafsky and James H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 2nd ed. (Upper Saddle River, N.J: Prentice Hall, 2008).

https://en.wikipedia.org/wiki/Google_Neural_Machine_Translation

https://translate.google.com/intl/en/about/

https://www.youtube.com/watch?v=AIpXjFwVdIE

https://www.youtube.com/watch?v=DuDz6B4cqVc

https://www.youtube.com/watch?v=z-EtmaFJieY

 

https://learningenglish.voanews.com/a/eight-things-you-didnt-know-google-translate-could-do/4097446.html