How Large Language Models Improve Document-Level AI Translation – slator.com

In an era when large language models (LLMs) are reshaping AI translation, two recent studies have emerged with strategies to tackle document-level machine translation (MT). One from the University of Zurich introduces a method that treats document-level translation as a conversation. Instead of translating a whole document in one go or splitting it into isolated segments, […]
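The excerpt stops short of the mechanics, but the conversational framing suggests translating segment by segment while keeping earlier source/target pairs in the chat history. Below is a minimal sketch of that idea, assuming a generic chat-completion interface; the chat_complete helper is a placeholder, not the Zurich authors' code.

```python
# Minimal sketch: document-level translation framed as a conversation. Earlier
# source/target pairs stay in the chat history, so later segments can reuse
# earlier terminology and pronoun choices. `chat_complete` is a stand-in for
# whatever chat-completion API is actually used.

def chat_complete(messages: list[dict]) -> str:
    """Placeholder LLM call; returns a dummy string so the sketch runs."""
    last_user = messages[-1]["content"]
    return f"<translation of: {last_user.removeprefix('Translate: ')}>"

def translate_document(segments: list[str], src: str = "English", tgt: str = "German") -> list[str]:
    messages = [{
        "role": "system",
        "content": f"You translate {src} into {tgt}. Keep names, terminology, "
                   f"and pronouns consistent with earlier turns.",
    }]
    translations = []
    for segment in segments:
        messages.append({"role": "user", "content": f"Translate: {segment}"})
        translation = chat_complete(messages)
        # Keeping the assistant turn in the history is what makes this
        # "document-level": each new segment sees the translations so far.
        messages.append({"role": "assistant", "content": translation})
        translations.append(translation)
    return translations

print(translate_document(["Dr. Lee arrived late.", "She apologized to the team."]))
```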
How France’s Inria Aims to Improve AI Translation for Low-Resource Languages – slator.com

Large language models (LLMs) have significantly improved AI translation for high-resource languages, but performance remains uneven for low-resource languages (LRLs). In a March 6, 2025 paper, researchers Armel Zebaze, Benoît Sagot, and Rachel Bawden from Inria, the French National Institute for Research in Digital Science and Technology, introduced Compositional Translation (CompTra), an LLM-based approach designed […]
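The excerpt cuts off before describing how CompTra works, so the decompose-translate-recompose loop below is only one plausible reading of "compositional translation," sketched under assumptions rather than taken from the paper; the llm helper is a placeholder for whatever model call the authors actually use.

```python
# Hedged sketch of a compositional translation loop; an assumption about what
# "compositional" might involve, not the Inria authors' algorithm.

def llm(prompt: str) -> str:
    """Placeholder for an LLM completion call; plug in a real model here."""
    raise NotImplementedError

def compositional_translate(sentence: str, src: str = "English", tgt: str = "Swahili") -> str:
    # 1. Break the sentence into short, simpler phrases.
    phrases = [p for p in llm(
        f"Split this {src} sentence into short phrases, one per line:\n{sentence}"
    ).splitlines() if p.strip()]
    # 2. Translate each phrase in isolation; shorter inputs are assumed to be
    #    easier for the model when the target language is low-resource.
    phrase_pairs = [(p, llm(f"Translate to {tgt}: {p}")) for p in phrases]
    # 3. Translate the full sentence, with the phrase translations provided as
    #    in-context examples to ground vocabulary choices.
    demos = "\n".join(f"{s} -> {t}" for s, t in phrase_pairs)
    return llm(f"Using these phrase translations:\n{demos}\n\nTranslate to {tgt}: {sentence}")
```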
Unbabel Tackles Metric Bias in AI Translation – slator.com

In a March 11, 2025 paper, Unbabel introduced MINTADJUST, a method for more accurate and reliable machine translation (MT) evaluation. MINTADJUST addresses metric interference (MINT), a phenomenon where using the same or related metrics for both model optimization and evaluation leads to over-optimistic performance estimates. The researchers identified two scenarios where MINT commonly occurs and […]
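As a rough, self-contained illustration of the interference effect described above (not MINTADJUST itself): if candidates are selected with the same metric later used for reporting, the reported score inherits the selection bias, while an independent metric does not.

```python
# Toy illustration of metric interference (MINT), not Unbabel's MINTADJUST
# code. Two independent random "metrics" stand in for noisy quality
# estimators; selecting the best candidate by one of them inflates that
# metric's reported score relative to the independent one.
import random

random.seed(0)
candidates = [f"candidate {i}" for i in range(16)]

score_select = {c: random.random() for c in candidates}   # metric used to pick outputs
score_report = {c: random.random() for c in candidates}   # independent evaluation metric

best = max(candidates, key=score_select.get)

print("selection metric on chosen output:", round(score_select[best], 3))   # max of 16 draws: biased upward
print("independent metric on chosen output:", round(score_report[best], 3)) # unbiased draw
```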
Alibaba Says Large Reasoning Models Are Redefining AI Translation – slator.com

In a March 14, 2025 paper, researchers from Alibaba’s MarcoPolo Team explored the translation capabilities of large reasoning models (LRMs) like OpenAI’s o1 and o3, DeepSeek’s R1, Anthropic’s Claude 3.7 Sonnet, and xAI’s Grok 3, positioning them as “the next evolution” in translation beyond neural machine translation (NMT) and large language models (LLMs). They explained […]
How to Balance Cost and Quality in AI Translation Evaluation – slator.com

As large language models (LLMs) gain prominence as state-of-the-art evaluators, prompt-based evaluation methods like GEMBA-MQM have emerged as powerful tools for assessing translation quality. However, LLM-based evaluation is expensive and computationally demanding, consuming vast numbers of tokens and incurring significant API costs. Scaling evaluation to large datasets quickly becomes impractical, raising a key question: […]
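A back-of-the-envelope sketch of why this gets expensive at scale: the prompt template, the ~4-characters-per-token heuristic, and the per-token prices below are illustrative assumptions, not GEMBA-MQM's actual prompt or any provider's real pricing.

```python
# Rough cost model for prompt-based MT evaluation in the spirit of GEMBA-MQM.
# Template, token heuristic, and prices are illustrative assumptions only.

PROMPT_TEMPLATE = (
    "You are an expert translation evaluator. Identify MQM errors "
    "(accuracy, fluency, terminology, style) in the translation.\n"
    "Source ({src_lang}): {source}\nTranslation ({tgt_lang}): {target}\nErrors:"
)

def rough_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)

def estimate_cost(pairs: list[tuple[str, str]],
                  usd_per_1k_input: float = 0.005,
                  usd_per_1k_output: float = 0.015,
                  output_tokens_per_segment: int = 150) -> tuple[int, float]:
    total_in = sum(
        rough_tokens(PROMPT_TEMPLATE.format(src_lang="en", tgt_lang="de",
                                            source=s, target=t))
        for s, t in pairs
    )
    total_out = output_tokens_per_segment * len(pairs)
    cost = total_in / 1000 * usd_per_1k_input + total_out / 1000 * usd_per_1k_output
    return total_in + total_out, cost

# Example: 100k segment pairs of ~200 characters each, one evaluation pass.
pairs = [("s" * 200, "t" * 200)] * 100_000
tokens, usd = estimate_cost(pairs)
print(f"~{tokens:,} tokens, ~${usd:,.0f} for a single evaluation pass")
```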
How Microsoft Wants to Address Gender Bias in AI Speech Translation – slator.com

Gender bias in speech translation (ST) systems has long been a concern for researchers and users alike. In a January 10, 2025 paper, researchers from Microsoft Speech and Language Group presented their approach to addressing speaker gender bias in large-scale ST systems. The researchers identified a persistent masculine bias in ST systems, even in cases […]
Slator Pro Guide: AI in Interpreting – slator.com

Slator’s Pro Guide: AI in Interpreting is a must-have for interpreting service and solutions providers, offering a concise snapshot of the latest applications of AI and large language models (LLMs) in interpreting. This Pro Guide will get you up to speed on the value that AI can add to your company and the new interpreting […]
How Apple Wants to Fix Hallucinations in AI Translation – slator.com

In a January 28, 2025 paper, Rajen Chatterjee and Sarthak Garg from Apple, along with Zilu Tang from Boston University, presented a framework for mitigating translation hallucinations in large language models (LLMs). According to the researchers, “this is among the first works to demonstrate how to mitigate translation hallucination in LLMs.” They explained that LLM-based […]