Artificial Intelligence (AI) has been a hot topic lately.
It seems you can’t read through any major news outlet without bumping into at least one or two cover stories on what AI means for the future, not to mention the leaps and bounds in progress it has already made.
Computer automation systems will completely change the scope of what’s possible and how humans interact with machines, these articles tell us. From digital assistants like Amazon Echo and Windows Cortana to cars that drive us to work, the more we hear about AI, the more it seems we’re right on the cusp of a life almost completely controlled by machines.
And to a certain extent, these predictions aren’t too far off. Some of the hype surrounding AI is reasonable. The advancements made in recent years are genuinely impressive, and in the language industry, all eyes are on Machine Translation (MT) and what AI could mean for the future of translation. In particular, there’s been a lively discussion about whether Neural Machine Translation (NMT) will eliminate the need for human translators. And, thanks to mainstream media coverage of Google’s new NMT system, many have come to believe that a near future without translators is a strong possibility.
But this vision of a translator-less future isn’t as certain as it’s made out to be. MT has definitely come a long way, but the new technology is in no position to completely replace humans.
An Improvement, But Not A Paradigm Shift
In November, Google announced it had implemented NMT into Google Translate, a move that drew widespread attention from industry pros and the media. Why was it such a big deal? Google claimed it had dramatically increased the accuracy of translations by moving to a Deep Neural Network model and away from its statistical translation processes.
NMT is able to examine a sentence in its entirety when translating, remembering the first piece of the sentence as it finishes decoding the last word. Statistical MT, or phrase-based MT, can only focus on chunks or “phrases” of a sentence at a time.
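To make that difference concrete, here is a minimal, purely illustrative sketch in Python (it is not a description of Google’s system): `phrase_based_translate` looks up fixed chunks one at a time with no memory of earlier chunks, while `neural_style_translate` reads the whole source sentence into one context before anything is emitted. The phrase table, the example sentence, and both function names are invented for illustration.

```python
# Toy illustration only: contrasts phrase-based lookup with a model that
# carries context across the whole sentence. Not a real MT system.

PHRASE_TABLE = {          # hypothetical English -> Spanish phrase pairs
    "the bank": "el banco",
    "of the river": "del rio",
    "is steep": "es empinada",
}

def phrase_based_translate(sentence: str) -> str:
    """Translate fixed chunks one at a time, with no memory of earlier chunks."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Greedily match the longest known phrase starting at position i.
        for length in (3, 2, 1):
            chunk = " ".join(words[i:i + length])
            if chunk in PHRASE_TABLE:
                out.append(PHRASE_TABLE[chunk])
                i += length
                break
        else:
            out.append(words[i])  # unknown word passes through untranslated
            i += 1
    return " ".join(out)

def neural_style_translate(sentence: str) -> str:
    """Sketch of the NMT idea: read the entire sentence into one context
    representation before emitting any target words."""
    context = []
    for word in sentence.lower().split():
        context.append(word)          # every earlier word stays "in memory"
    # A real decoder would generate target words conditioned on `context`;
    # here we only show that the full sentence is available at decode time.
    return f"<decode conditioned on all {len(context)} source words>"

if __name__ == "__main__":
    s = "The bank of the river is steep"
    print(phrase_based_translate(s))   # chunk by chunk, no shared context
    print(neural_style_translate(s))   # whole-sentence context
```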
Google’s new NMT system was far more accurate than statistical methods, the company said, and reached near “human accuracy.” But, as the market research firm Common Sense Advisory (CSA) points out, there are ways in which Google’s claims can be misleading.
Earlier this month, CSA posited that the methods for testing the accuracy of Google’s NMT had some flaws. Namely, the humans at Google who evaluated the NMT results were non-translators, CSA says.
And if these evaluators were not professional translators accustomed to working with a translation company, but merely fluent speakers of the target language, they wouldn’t be the best judges of what makes for an LSP-ready translation.
“…fluency is not enough to guarantee full understanding of translation issues and this method tends to privilege fluent – natural sounding – translation over accurate translation,” CSA states. The method also did not report how many errors the MT output contained or how severe they were.
“Accordingly, Google has demonstrated that the NMT output is more fluent than the older system and that it is almost at the level of bad or preliminary human translation, but not that it is on the level with good quality human translation,” the CSA article reads.
Teaching Computers Common Sense
The declaration that Google’s new MT system reaches the level of human accuracy has been taken as truth by many. As good as this sounds, it’s far from the case.
One issue that impedes MT’s ability to translate correctly is that it processes text one sentence at a time. This means errors occur when context from a previous sentence is needed to correctly translate a subsequent segment.
Add this to the fact that MT is not yet able to pick up on nuance or metaphor found in human speech, and there is a big margin for error.
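A small sketch makes the sentence-by-sentence problem easier to see. Everything here is hypothetical: `translate_sentence` stands in for any engine that only ever sees one sentence, and the French output simply illustrates a gender-agreement choice that engine cannot make without the previous sentence.

```python
# Illustration of the cross-sentence context problem described above.
# `translate_sentence` is a stand-in for any sentence-level MT engine.

def translate_sentence(sentence: str) -> str:
    """Hypothetical engine: sees one sentence, nothing before or after it.
    With no antecedent in view, it has to guess how to render 'It'."""
    if sentence.startswith("It was closed"):
        # French needs gender agreement: 'Elle' if 'It' refers to la banque
        # (bank), 'Il' if it refers to le magasin (store). Seen in isolation,
        # this sentence gives the engine no way to know which.
        return "Il/Elle était fermé(e) pour le jour férié."
    return f"<translation of: {sentence}>"

def naive_document_translate(document: str) -> list[str]:
    """Split into sentences and translate each one independently,
    discarding the context that links them together."""
    sentences = [s.strip() + "." for s in document.split(".") if s.strip()]
    return [translate_sentence(s) for s in sentences]

if __name__ == "__main__":
    text = "She went to the bank. It was closed for the holiday."
    for line in naive_document_translate(text):
        print(line)
```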
As this article in the Economist points out, computer scientists can use data to train machines, but they can’t train common sense. In particular, computers lack the knowledge of real-world situations needed to make sense of text that may be ambiguous.
The article uses the example of a public sign on a fountain that reads, “This is not drinking water.” A human would understand the sign to mean he or she shouldn’t drink the water; however, a computer may take the phrase to mean that the fountain is not committing the action of drinking water.
AI’s shortcomings are apparent in digital assistants like Siri. When you ask your iPhone to find a restaurant, it doesn’t have the “intelligence” to show only eateries that are open. And even when Siri has correctly understood your question, the suggested locations may have been closed for years.
The human brain is an organ that scientists do not yet fully understand; we are unable to decode all of the intricate processes that allow us to gather and analyze vast amounts of information at once. If we still can’t figure out all of the brain’s complexities, how can we deliberately build a machine that replicates them?
An Added Capability
At least for the time being, MT solutions have nowhere near the capacity to replace human translators. Until developers are able to create NMT models that match the human brain in common knowledge, nuance recognition, and the ability to think on their feet, humans will continue to be a necessary part of the translation process.
If anything, as MT has continued to develop, it has become better at assisting humans in translating documents. As the Economist article puts it, the “translator of the future” will not be an MT system but a human being acting as a quality control officer. This person will be responsible for reviewing MT-translated text and identifying what needs revision.
New MT models will change the way translators do their job, but they will by no means put linguists out of work. Where MT falls short, translators will continue to fill in the gaps.