In a March 21, 2024 paper, researchers from the University of Sydney, China University of Petroleum, Nanyang Technological University, and JD Explore Academy introduced a two-stage fine-tuning technique to address the off-target translation issue.

The off-target translation issue occurs when a large language model (LLM) generates translations that deviate from the intended language direction or task instructions. This issue is particularly common in zero-shot translation settings, where the model is required to translate between language pairs that were not explicitly seen during training.

As the researchers explained, LLMs customized for translation tasks have demonstrated remarkable translation capabilities, even rivaling commercial machine translation (MT) systems trained with supervision. However, off-target translation remains an “unsolved problem,” particularly for low-resource languages.

To mitigate the off-target issue and boost translation performance, the researchers introduced a two-stage fine-tuning approach designed to strengthen the instruction-following capability of LLMs, and in particular their adherence to the specified translation direction.

In the first stage, the LLM is fine-tuned with a maximum likelihood estimation (MLE) loss on a multilingual translation dataset. The MLE loss measures how likely the correct output is given the input and the model parameters; minimizing it during training pushes the model toward translations that match the provided instructions and target outputs. According to the researchers, this initial fine-tuning aims to unlock the basic translation capabilities inherent in the LLMs.
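For readers curious about the mechanics, the sketch below shows what stage-one fine-tuning with an MLE (cross-entropy) loss could look like in practice. It assumes a Hugging Face-style causal LM; the model name, prompt template, and `mle_loss` helper are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of stage-one fine-tuning with the standard MLE
# (cross-entropy) loss. Model name, prompt template, and helper are
# illustrative assumptions, not details from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "huggyllama/llama-7b"  # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def mle_loss(instruction: str, target: str) -> torch.Tensor:
    """Cross-entropy on the target tokens only; prompt tokens are masked out."""
    prompt_ids = tokenizer(instruction, return_tensors="pt").input_ids
    target_ids = tokenizer(target, return_tensors="pt",
                           add_special_tokens=False).input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=-1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[-1]] = -100  # -100 = ignored by the loss
    return model(input_ids=input_ids, labels=labels).loss

# One step of a training loop would then be:
loss = mle_loss(
    "Translate the following text from English to German.\n"
    "Input: Hello world.\nOutput:",
    " Hallo Welt.",
)
loss.backward()
```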

In the second stage, they created instruction-conflicting samples (i.e., samples where the provided instructions conflict with the actual content or task that needs to be performed) by replacing the language translation directions with incorrect ones within the provided instructions. These samples introduce scenarios where the model must navigate conflicting information.
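As an illustration, samples along these lines could be constructed as in the sketch below; the prompt template and language list are assumptions, not taken from the paper.

```python
# A minimal sketch of building instruction-conflicting samples: the target
# language named in the instruction is swapped for a random wrong one,
# while the reference translation is left unchanged.
import random

LANGUAGES = ["English", "German", "French", "Chinese", "Romanian"]

def make_conflicting_sample(src_lang, tgt_lang, source, reference):
    wrong_tgt = random.choice([l for l in LANGUAGES if l != tgt_lang])
    instruction = (
        f"Translate the following text from {src_lang} to {wrong_tgt}.\n"
        f"Input: {source}\nOutput:"
    )
    # The reference is still in the original target language, so it now
    # contradicts the corrupted instruction.
    return instruction, reference

inst, ref = make_conflicting_sample("English", "German",
                                    "Hello world.", "Hallo Welt.")
```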

During this phase, an additional unlikelihood loss is introduced to train the model on these instruction-conflicting samples. By incorporating this extra loss function, the model is encouraged to assign lower probabilities to incorrect translations. This process helps the model learn to handle conflicting instructions, thereby improving its ability to follow the correct language translation directions and generate accurate translations in zero-shot scenarios.
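The sketch below illustrates one common form of token-level unlikelihood loss, which penalizes the model for assigning high probability to the reference tokens under the corrupted instruction. It reuses the model and tokenizer from the first sketch; the exact formulation in the paper may differ.

```python
# A minimal sketch of a token-level unlikelihood loss. On instruction-
# conflicting samples the model is trained to assign *low* probability to
# the mismatched reference, i.e. to minimize -log(1 - p(token)) rather
# than the usual -log p(token).
import torch
import torch.nn.functional as F

def unlikelihood_loss(instruction: str, target: str) -> torch.Tensor:
    prompt_ids = tokenizer(instruction, return_tensors="pt").input_ids
    target_ids = tokenizer(target, return_tensors="pt",
                           add_special_tokens=False).input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=-1)
    logits = model(input_ids=input_ids).logits
    # Position t predicts token t+1, so the logits for the target tokens
    # start one step before the target span and stop one short of the end.
    tgt_logits = logits[:, prompt_ids.shape[-1] - 1 : -1, :]
    log_probs = F.log_softmax(tgt_logits, dim=-1)
    token_logp = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    # -log(1 - p), clamped for numerical stability when p is near 1.
    return -torch.log((1.0 - token_logp.exp()).clamp(min=1e-6)).mean()
```

In a full training loop, a loss like this would typically be computed only on the conflicting samples and added to the MLE loss on clean samples, so the model retains its translation ability while learning to suppress outputs that contradict the instruction.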

The researchers applied this technique to fine-tune the LLaMA model and conducted experiments across 16 language translation directions to evaluate its effectiveness. 

The results revealed significant reductions in the off-target translation ratio: -92.2% on the IWSLT benchmark and -29.9% on the WMT benchmark. The researchers noted that this translated into notable gains in translation quality, with average improvements of +23.0/+12.4 BLEURT and +5.2/+6.1 SacreBLEU on the IWSLT/WMT datasets, respectively.

The researchers plan to release the code and models on GitHub.

Authors: Changtong Zan, Liang Ding, Li Shen, Yibing Zhan, Weifeng Liu, and Dacheng Tao
