In a recent Slator article, Alibaba highlighted the transformative potential of large reasoning models in AI translation.
Building on this, a new study by researchers at the University of Mannheim and the University of Technology Nuremberg examines whether reasoning-enabled large language models (LLMs) are also better at evaluating AI translation quality.
“Most existing work leverages non-reasoning LLMs, leaving open the question of whether reasoning LLMs offer further benefits,” the researchers said.
They tested two reasoning models — OpenAI’s o3-mini and DeepSeek-R1 — comparing them against their non-reasoning counterparts to assess whether reasoning capabilities improve alignment with human judgments.
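"Alignment with human judgments" is typically measured by correlating the scores an LLM assigns to translated segments with the scores human annotators gave the same segments. Below is a minimal illustrative sketch of that idea, assuming a generic 0-100 quality scale and made-up scores; it is not the study's actual protocol, prompts, or data.

```python
# Hypothetical sketch: measuring how well LLM-assigned quality scores align
# with human judgments at the segment level. Scores are illustrative only.
from scipy.stats import kendalltau, pearsonr

# Assumed human scores and LLM-predicted scores for the same five segments.
human_scores = [92, 78, 65, 88, 40]
llm_scores = [90, 70, 72, 85, 35]

# Pearson captures linear agreement; Kendall's tau captures rank agreement,
# the correlation commonly reported in machine translation meta-evaluation.
pearson_r, _ = pearsonr(human_scores, llm_scores)
tau, _ = kendalltau(human_scores, llm_scores)

print(f"Pearson r:   {pearson_r:.3f}")
print(f"Kendall tau: {tau:.3f}")
```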
The results were mixed. OpenAI’s o3-mini performed well, consistently outperforming its non-reasoning counterpart (GPT-4o-mini). In contrast, DeepSeek-R1 — despite being explicitly trained for reasoning — fell short of DeepSeek V3, its non-reasoning counterpart.
“The relatively poor performance of DeepSeek-R1 across evaluation tasks warrants closer examination,” the researchers noted, suggesting that the gap may be due to insufficient multilingual training or a lack of task-specific fine-tuning.
This did not appear to be the case for OpenAI’s o3-mini. Its strong performance indicates that the model may incorporate training elements particularly suited for AI translation evaluation.
Reasoning Is Not Enough
According to the researchers, the findings indicate that reasoning alone is not enough. Instead, its effectiveness depends significantly on model architecture and implementation.
“This architecture-dependent performance suggests that reasoning capabilities alone do not guarantee improved evaluation quality, but rather that the implementation and specific post-training approach for enhancing reasoning capabilities matters significantly,” they said.
Given that most reasoning models are extremely large — and therefore difficult to deploy in practice — the researchers also explored whether smaller, distilled variants could offer similar evaluation performance at a lower computational cost.
They found that distilled models can retain much of the full models’ evaluation strength. A 32B-parameter distilled version of DeepSeek-R1 came close to the full model’s performance, while a smaller 8B variant showed a sharp drop, underscoring the trade-off between model size and evaluation capability.
“Effective distillation of evaluation relevant reasoning requires sufficient model capacity, with smaller distilled models potentially losing critical capabilities required for nuanced evaluation,” they explained.
The researchers described their work as “the first systematic evaluation of reasoning-based LLMs” for AI translation evaluation.
“Our findings reveal that the relationship between reasoning capabilities and evaluation performance is more nuanced than initially hypothesized,” they concluded, adding that future work should focus not just on adding reasoning capabilities, but on aligning reasoning strategies with the specific demands of AI translation evaluation tasks.
Authors: Daniil Larionov, Sotaro Takeshita, Ran Zhang, Yanran Chen, Christoph Leiter, Zhipin Wang, Christian Greisinger, and Steffen Eger