Updated: May 12, 2021
A new article on translation website Slator explores some intriguing news: Professional translators may prefer human-generated translations, but when it comes to the average person, machine translation wins the day.
The article cites a Google study in which professional translators were given a vast selection of translated documents to evaluate. Translations performed by humans or very high-level AI outranked the others.
But this may not be so for the “everyman”. Author Seyma Albarino points out that, according to other studies, crowd workers (non-professional, uncertified freelancers who take on translation jobs) often have difficulty determining whether a translation was produced by a human or by advanced AI.
Additionally, while there’s plenty of debate about whether the most important quality of a translation is fluency (readability) or accuracy, for crowd workers what seems to matter most is clarity: they show a marked preference for literal translations, which machines tend to produce.
We often think that studies and research are supposed to give us answers. In some cases, they can lead to more questions. For me, at least, that’s certainly the case here.
For one thing, the studies don’t seem to have considered different types of translation and the different needs of translation clients.
For instance, what about readers faced with a translation of a literary text? Machines still have difficulty with nuances like figurative language, idioms, and humor. It’s difficult to imagine that any type of non-human translation would measure up in this field.
Or take healthcare and pharma translation, where accuracy isn’t just important; it’s literally vital. How would a patient or healthcare provider react to a machine translation versus a human one? The question brings to mind an article we posted a few weeks ago, where we looked at how machine-generated medical translation remains risky, no matter how advanced AI has become. This is especially true for less common languages, which don’t offer enough source material for machine translation systems to learn from.
Another question these translation preference studies raise is how a particular translation process might affect results. For example, medical product and packaging documentation goes through a complex system of translation, back translation, and other checks, while a commercial or general translation wouldn’t follow the exact same procedure. Would the final results of these different procedures appeal differently to professional translators and to crowd workers?
One thing all of these questions bring to light is that, in an ideal scenario, a translation could be checked both by professional translators and by semi-professionals or laypersons (depending on the field, the industry jargon in the document, and so on). This could make the translation process longer and more costly, but it may be worth keeping in mind.
We’re still not sure exactly what roles humans and machines will play when it comes to language. Even so, these studies don’t contradict where translation seems to be headed: a world where human translators use machines to help with everything from the easily automated parts of a translation project to ensuring that industry-specific terms are translated consistently.
In other words, it’s not a competition with a clear winner, but a future where humans and machines work together to produce the best possible translations.
Contact Our Writer – Alysa Salzberg