It's terrible and sad. Even more so because AI still gets things wrong all the time.
I'm a simultaneous interpreter, and it's a bloodbath out there. Partly because anyone who needs a translator or an interpreter is, by definition, unable to verify the accuracy of the translation/interpretation - they can only tell if it's smooth, believable, and such. And AIs are great at being believable even when they make shit up…
I tested a few local models to see how complete and recent their training data is. I want to use them to check whether Company A at address xyz is the same as Company B at address xyz.1. I asked them about recent events and found a lot of gaps. So I asked for the roster of the 1992 Dream Team. Very mixed results. OpenAI's model got 11/12 players correct but absolutely insisted that Christian Laettner was not the 12th player. I went back and forth with it to see if I could get it to accept my knowledge as is. It wouldn't. I'm terrified about what happens when these AI bots have the ability to update Wikipedia in order to make the facts match their incomplete training data.
You don't have to be very good at a language to know when a translation is horrible. I'm not very good at Spanish and I can do better than machine translations.
What most people managing translations don't get is that they are essentially using the tools that translators use, but skipping the value-adding step.
I’ve been doing translation as a side gig for years. Lately I’ve been doing some translations for an NGO that deals with addiction management, of which I’m part.
The materials have a lot of nuance, and the translator needs to understand it to properly convey the concepts.
The usual translation process is to feed the original into machine translation software, then work with both versions side by side in a translation management tool, which makes human editing and proofing faster and easier, to achieve the best result.
Last time, someone in the organization, who is monolingual, decided to do a handbook translation with ChatGPT or something like that. They then gave the result to a colleague and me.
The resulting translation was exactly what we expected.
The problem was that some bilingual people were shown the results and reported that they were amazing, without realizing they were reacting to the wow factor rather than the accuracy, especially because they had not done a critical side-by-side comparison.
My colleague and I did the editing work and were paid less, but the end result was the usual translation quality.
The commissioning person at the org boasted that AI translation was great, as if it made our work unnecessary, to collect their brownie points.
TLDR: translation has used machine translation as a first step for a long time, with results edited and polished by humans. Ignorant decision makers are skipping that crucial step, getting sub-par results, oblivious to the fact.
My company thankfully still employs simultaneous interpreters for meetings and has one translator on staff, I think at least in part because of how bad translation tools can be for EN <> JA.
I'd expect that any translation requiring zero mistakes, where the translator bears official responsibility, won't be hurt by this.