If you ask professional translators what they think of machine translation, they might say: “You might get an accurate translation sometimes, but it won’t replace human translation anytime soon.”
It’s a fact: algorithm-based translations rely on translation memories and focus on translating words. Human translators focus on meaning. They have a context, a frame of reference, that machines just don’t have. Here are two stories illustrating just that.
Algorithms are still a long way from giving accurate translations
The problem algorithms – and human translators – face when a word has several meanings is choosing the correct one. It’s worse for phrases and even worse yet for idiomatic expressions.
Some mistakes are funny. Some aren’t. At least not for this man in Israel who posted a picture of himself with the caption “Good morning” in Arabic. The man went to work, reached his construction site, took the picture and posted it to Facebook. Nothing ominous so far, except for Facebook’s automatic translation into English and Hebrew.
According to the newspaper Haaretz, the proprietary algorithms rendered the caption as “hurt them” in English and “attack them” in Hebrew. That was enough to worry some users, who forwarded a copy of the post to the authorities.
Police arrested the man but thankfully realized their mistake a few hours later, after seeing the bulldozer in the picture. No Arabic speaker had been consulted to check the translation before the arrest.
The words used by the original poster, it turned out, were a local form of Arabic with no official transliteration in Facebook’s system, and only a single letter separated the local version of “Good morning, everyone” from “hurt them”. In translation, small details do make a big difference.
Machines still lack a human element before being able to give accurate translations
Facebook algorithms are not the only ones struggling with translation accuracy. In January 2016, Google Translate users had an unfortunate experience with the software in an already tense diplomatic context. If you used it to translate from Russian to Ukrainian, the algorithms translated “Russian Federation” into “Mordor”, the middle-earth region occupied by the Dark Lord Sauron and the orcs.
But the incident did not stop there. The translation for “Russians” became “occupants”, and Serguey Lavrov became a “little sad horse”. Screenshots rapidly spread across the internet.
According to Google, an incorrect text analysis was to blame.
Indeed, Google’s servers use statistics, rather than common sense, to find their translations: they analyse bilingual documents and websites and keep the most commonly used translations. This is another reason why automated algorithms have yet to give reliably accurate translations.
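To see why a purely statistical approach can go wrong, consider a toy sketch of frequency-based translation choice. The corpus, phrases, and translations below are invented for illustration; real systems work on far larger data and more sophisticated models, but the core weakness is the same: the most common translation wins, and rarer senses are never chosen.

```python
from collections import Counter, defaultdict

# Invented parallel corpus: (source phrase, observed translation) pairs.
corpus = [
    ("good morning", "buenos días"),
    ("good morning", "buenos días"),
    ("good morning", "buen día"),
    ("bank", "banco"),    # financial institution (the majority sense)
    ("bank", "banco"),
    ("bank", "orilla"),   # river bank -- the minority sense
]

def build_translation_table(pairs):
    """For each source phrase, keep only its most frequently seen translation."""
    counts = defaultdict(Counter)
    for src, tgt in pairs:
        counts[src][tgt] += 1
    return {src: c.most_common(1)[0][0] for src, c in counts.items()}

table = build_translation_table(corpus)
print(table["good morning"])  # -> buenos días
print(table["bank"])          # -> banco: "orilla" is frequent enough to exist,
                              #    but never frequent enough to be chosen
```

A system like this has no notion of context: “bank” in a sentence about rivers still comes out as “banco”, just as a one-letter dialect variation can tip a phrase into an entirely different, more common entry.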