'Google AI Forum' is a monthly event organized by Google that offers an opportunity to study artificial intelligence and machine learning in more depth, with clear explanations and examples, in the AI-first age.
The first session was led by Park Yeong-chan, a Google tech lead and software engineer, who gave an overview of AI and machine learning.
To begin with, artificial intelligence (AI) is a field of science and technology that makes things smarter by combining various computer science techniques. Park Yeong-chan noted that some describe it as a distant dream that will take hundreds of years to realize, while others talk about it taking over tedious tasks from humans. Its sub-concept, machine intelligence, is a machine-assisted technique for solving problems on specific topics; it is characterized by specialization in a small number of subjects, just as 'AlphaGo' plays only 'Go (Baduk)'.
Machine learning is a technique that allows a machine to train itself from examples, instead of having every operation typed into a program one by one. Its methods have been developed over the past 30 to 40 years. Neural networks, which mimic the brain's actual nerve cells, build up knowledge as millions or billions of artificial neurons pass the information each receives on to other neurons. In this process, the neurons form several layers, each layer learning from the information transmitted by the layer below it; this is called deep learning. Because the patterns learned at each layer feed into the next, the network's highest layer ends up learning very abstract patterns.
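The layered idea described above can be sketched in a few lines of Python. This is a minimal illustration, not anything from the lecture: a tiny two-layer network in which each layer transforms the pattern passed up from the layer below, with sizes and weights chosen arbitrarily for the example.

```python
# A minimal two-layer network sketch: each layer transforms the pattern
# passed up from the layer below. Sizes and weights are illustrative.

def relu(v):
    # A simple activation: keep positive signals, drop negative ones.
    return [max(0.0, x) for x in v]

def dense(v, weights):
    # One layer: each output neuron sums the weighted inputs it receives.
    return [sum(x * w for x, w in zip(v, neuron)) for neuron in weights]

# Layer 1: 3 inputs -> 2 hidden neurons; Layer 2: 2 hidden -> 1 output.
W1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
W2 = [[2.0, 1.0]]

def forward(x):
    h = relu(dense(x, W1))   # lower layer: simple patterns
    return dense(h, W2)      # higher layer: a more abstract combination

print(forward([1.0, 2.0, 3.0]))
```

In a real deep learning system the weights are not fixed by hand but adjusted automatically from examples, which is the training process the talk goes on to classify.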
Currently, three main methods are used in machine learning.
First, supervised learning, the most commonly used method, repeats the learning process on labeled data about a specific situation until the answers settle. It is highly accurate and an effective way to learn, but because it requires a lot of data, better results come with more data about the situation in question.
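As a toy illustration of this repeat-until-the-answers-settle loop (an example of ours, not one from the talk), a perceptron can learn the AND function from four labeled examples by repeatedly comparing its prediction to the given answer and nudging its weights:

```python
# A toy supervised-learning sketch: a perceptron adjusts its weights on
# labeled examples (the AND gate) until its answers settle.
# The learning rate and epoch count are illustrative choices.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                       # repeat the learning process
    for x, label in examples:
        error = label - predict(x)        # supervision: compare to the label
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

The key property the talk highlights is visible even at this scale: the quality of the result depends entirely on having labeled examples of the situation to learn from.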
Next, unsupervised learning derives answers by grouping similar items together while examining all of the data, without any labels or sample answers. It is used mainly in experimental areas that humans do not yet understand, or where labeled samples are hard to obtain.
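The "connecting similar things" idea can be illustrated with k-means clustering, one common unsupervised method (our choice of example; the talk does not name a specific algorithm). The data and starting centers below are made up:

```python
# A tiny unsupervised-learning sketch: k-means groups similar points
# together with no labels at all. Data and cluster count are illustrative.

points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centers = [points[0], points[3]]          # arbitrary starting centers

for _ in range(10):
    # Assign each point to its nearest center (grouping similar things).
    groups = [[], []]
    for p in points:
        i = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
        groups[i].append(p)
    # Move each center to the mean of its group.
    centers = [sum(g) / len(g) for g in groups]

print(sorted(round(c, 2) for c in centers))  # [1.0, 8.07]
```

No labels were given; the structure (two groups of similar values) emerges from the data itself, which is what makes this family of methods useful where answers are unknown.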
Finally, reinforcement learning differs slightly from the two methods above. The machine learns by itself from the results of repeated, initially random actions, without being given specific information. It is the most difficult of the three and sees little practical use so far, but research on it is the most advanced.
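A minimal sketch of this learn-from-the-results-of-random-acts idea is tabular Q-learning (our illustrative choice; the talk does not name an algorithm). An agent in a five-cell corridor starts out acting randomly, receives a reward only at the rightmost cell, and gradually learns to prefer moving right:

```python
import random

# A toy reinforcement-learning sketch (tabular Q-learning): the agent
# begins with random actions and learns only from the resulting rewards,
# with no labeled answers. All parameters are illustrative.
random.seed(0)

n_states = 5                               # corridor cells 0..4; reward at 4
q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]: 0=left, 1=right
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):                       # many episodes of trial and error
    s = 0
    while s != n_states - 1:
        # Mostly follow current knowledge, sometimes act at random.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: q[s][x])
        s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Update the estimate from the observed result alone.
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

# After learning, the agent prefers moving right toward the reward.
print([max((0, 1), key=lambda a: q[s][a]) for s in range(n_states - 1)])
```

Nothing told the agent that "right" was correct; the preference emerged purely from repeating actions and observing their outcomes.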
Park Yeong-chan noted that Google currently applies machine learning to Gmail spam filtering, speech recognition, photo search, image recognition, and automatic translation. Explaining why artificial intelligence has recently drawn so much attention despite decades of research, he said, "It is because research has progressed at a faster pace, and the results of machine learning research have emerged thanks to rapidly developing computing infrastructure, cheaper storage, and the appearance of new deep learning models." He added that even with the same research, those with more data, better models, or more computation come out ahead, so AI technology is expected to grow faster than ever in the near future and to show results one by one.
In the second session, Mike Schuster, a Google research scientist, introduced 'Google Neural Machine Translation', a technology to which AI is applied, via a video lecture.
Explaining why Google has focused its efforts on translation services, Schuster said that 50% of the content on the Internet is in English, yet only 20% of the world's population can speak English. In other words, translation must improve to make information more accessible and to bridge communication across countries, which is why Google pays so much attention to translation services.
Today, Google translates over 140 billion words and more than one billion sentences through this service. About 500 million people actively use Google's translation services each month, and its 103 supported languages cover 99% of all online users.
In particular, 'Google Neural Machine Translation', released in September 2016 and applied to eight language pairs in November, differs from conventional phrase-based machine translation, which splits a sentence into words and phrases. It translates the whole sentence at once, understands the context to arrange the most appropriate translation, and produces translations close to natural sentences that follow grammatical rules.
It includes an end-to-end learning system that improves translation quality by learning from millions of examples, and demonstrations confirmed that quality improved for some language pairs after 'Google Neural Machine Translation' was introduced. When an English sentence was translated into Korean and then back into English, conventional phrase-based machine translation was not very accurate, whereas 'Google Neural Machine Translation' produced a highly accurate result.
In fact, comparisons of translation-quality improvements showed large accuracy gains when translating into English from Korean, Turkish, and Chinese. Partly as a result, traffic for English-Korean translation on Android has risen by up to 50% over the past two months.
At the same time, 'Google Neural Machine Translation' adopts 'Zero-Shot Translation', which handles translation between multiple languages in a single system. This not only improved translation quality but also, through multilingual training, enabled translation between language pairs that were never trained. For example, knowledge of English-Korean and English-Japanese translation enabled Korean-Japanese translation, a pair the system had not been trained on.
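One way such a single multilingual system can be steered is by prepending a target-language token to the input sentence, so one shared model serves every pair. The sketch below illustrates only that input convention; the function and token names are our own assumptions for illustration, not Google's actual API:

```python
# A sketch of a multilingual input convention for a single shared model:
# a target-language token prepended to the source sentence tells the model
# which language to produce. Token format and function are illustrative.

def prepare_input(sentence, target_lang):
    # e.g. "<2ja>" asks the single shared model for Japanese output.
    return f"<2{target_lang}> {sentence}"

# Trained pairs: English<->Korean, English<->Japanese.
print(prepare_input("Hello", "ko"))       # "<2ko> Hello"
# Zero-shot pair never seen in training: Korean -> Japanese.
print(prepare_input("안녕하세요", "ja"))    # "<2ja> 안녕하세요"
```

Because the same parameters handle every request, knowledge learned on the trained pairs can carry over to a pair the system never saw, which is the zero-shot effect described above.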
Moreover, research on a common representation, in which sentences with the same meaning are expressed in similar ways regardless of language, revealed that sentence data with similar meanings gather together in clusters.
As a result of this learning, Schuster said, translation speed improved in about two months from an average of roughly 10 seconds per sentence to an average of 0.2 seconds.
He also spoke about the prospects for further improvement in 'Google Neural Machine Translation'. "Humans can easily translate numbers and dates that machine translation does not yet get right," he said. "Machine translation also sometimes mistranslates short or rarely used sentences, and proper nouns such as personal names and brand names. However, a group of experts is working day and night to solve these problems, so you will be able to see 'Google Neural Machine Translation' develop steadily."
Copyright ⓒ Acrofan All Rights Reserved