In a September 6, 2024, paper, Tejas Deshpande and Nidhi Kowtal from the Pune Institute of Computer Technology, along with Raviraj Joshi from the Indian Institute of Technology Madras, introduced Chain-of-Translation Prompting (CoTR), a new prompting technique designed to improve the performance of large language models (LLMs) for low-resource languages.
The researchers explained that multilingual LLMs struggle to process input sentences (i.e., the actual text the LLM has to work on) in low-resource languages due to the limited data available for training or fine-tuning. As a result, “speakers of low-resource languages are frequently excluded from the benefits of advanced NLP technologies,” said the researchers, emphasizing the need for new techniques to close this gap.
To address this challenge, they explored new prompting strategies that leverage the multilingual translation abilities of LLMs and introduced CoTR.
CoTR restructures the traditional prompt: the input sentence is first translated from the low-resource language into a higher-resource language, such as English, where LLMs typically perform better. The LLM then performs the NLP task, such as sentiment analysis or text generation, on the translated text, optionally retranslating the output back into the original language. “All these steps are specified in a single prompt,” the researchers emphasized.
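To make the structure concrete, the sketch below shows what such a single CoTR prompt could look like for Marathi sentiment analysis. The template wording, the `build_cotr_prompt` helper, and the example sentence are illustrative assumptions; the paper specifies the three-step structure, not this exact text.

```python
# Minimal sketch of a single-prompt CoTR template for Marathi sentiment
# analysis. The wording is a hypothetical reconstruction: the paper
# describes the structure (translate, then perform the task, all in one
# prompt), not the exact prompt text.

COTR_TEMPLATE = """\
You will be given a sentence in Marathi.
Step 1: Translate the sentence into English.
Step 2: Determine the sentiment of the English translation
        (Positive, Negative, or Neutral).
Step 3: Respond with only the final sentiment label.

Marathi sentence: {sentence}
"""

def build_cotr_prompt(sentence: str) -> str:
    """Fill the single CoTR prompt with the Marathi input sentence."""
    return COTR_TEMPLATE.format(sentence=sentence)

if __name__ == "__main__":
    # The assembled prompt is sent to the LLM as one request; translation
    # and task execution both happen inside that single generation.
    print(build_cotr_prompt("हा चित्रपट खूप छान आहे."))
```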
CoTR can be applied to various tasks, including sentiment analysis, hate speech classification, subject classification, and text generation.
The researchers tested CoTR on Marathi, an Indic language with a significant speaker base but insufficient digital and linguistic resources, making it a challenge for NLP models to handle.
To validate CoTR’s effectiveness, they compared it against standard prompting methods across four tasks: sentiment analysis, hate speech detection, news categorization, and news headline generation. The evaluation spanned models of different sizes, including GPT-4o, GPT-4o Mini, Llama 3.1 405B, and Gemma-9B.
They found that translating the Marathi input sentence into English and then performing the task using a single prompt yielded superior results compared to directly processing Marathi text with a standard prompt. CoTR consistently outperformed standard prompting strategies across a variety of models and datasets.
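For contrast, a standard-prompting baseline of the kind the authors compare against would ask the model to work on the Marathi text directly. The sketch below is again a hypothetical template, not the paper’s exact wording:

```python
# Hypothetical standard-prompt baseline for comparison: the model is
# asked to perform the task directly on the Marathi text, with no
# intermediate translation step inside the prompt.
STANDARD_TEMPLATE = """\
Determine the sentiment of the following Marathi sentence
(Positive, Negative, or Neutral). Respond with only the label.

Marathi sentence: {sentence}
"""
```

Because only the prompt changes between the two setups, while the model, input sentence, and task stay fixed, the comparison isolates the effect of the in-prompt translation step.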
“The results underscore the potential of translation-based prompting strategies to significantly improve multilingual LLM performance in low-resource languages,” the researchers said.
They also noted that the most significant performance gains using CoTR were observed with smaller models, such as Llama 3-8B.
The researchers highlighted that their work “significantly contributes to multilingual NLP by demonstrating the potential of translation-based prompting strategies, particularly with a single prompt, to enhance NLP performance in low-resource languages.”
Looking ahead, they plan to combine CoTR with Chain-of-Thought prompting to further improve NLP accuracy for low-resource languages. “Together, these strategies should create a robust framework that improves model performance and reliability in Marathi NLP tasks,” they said.