
The sun shone brightly as industry leaders gathered for a series of presentations and panels at SlatorCon Silicon Valley in Menlo Park on September 5, 2024. Anna Wyndham, Slator Head of Research, moderated the first afternoon panel session, titled “The Language AI Stack.”

In this panel, Phrase’s CEO, Georg Ell, and Uber’s Head of Globalization, Hameed Afssari, engaged in a lively discussion about the practical applications of large language models (LLMs) and their transformative impact on the localization industry, framing language AI as a technology stack.

The language AI stack, concurred the two leaders, is a concept encompassing the computing power behind LLMs, the applications built on those models, and the services layer that facilitates their use in the enterprise.

The language AI stack addresses various aspects of LLMs in localization, including machine translation (MT), workflow optimization, and linguistic asset management. At Phrase, the infusion of LLM capabilities into its signature platform has been a game-changer, explained Ell.

This SaaS language technology platform has evolved significantly in recent years, and LLMs are being used for automated scoring, routing, and improvement of translations, leading to increased efficiency and quality, added Ell.

The CEO went on to explain how, in a typical localization workflow, the system automatically selects the most suitable MT engine, either from aggregated third-party options or customer-provided ones, and then leverages existing language assets to perform a translation.

The translation is subsequently scored, and if the score meets a predefined threshold, the translation is published. Otherwise, it undergoes automated language quality assessment (LQA). Significant errors trigger a human review, while minor issues or low-priority content might be automatically corrected.
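As a rough illustration of the routing step Ell described, the short Python sketch below publishes a segment when its automated score clears a threshold, escalates it to human review when automated LQA flags significant errors, and auto-corrects minor issues. Every name, threshold, and severity scale in it is an assumption made for the sake of the example, not Phrase’s actual API.

```python
# A minimal sketch of the scoring-and-routing logic described above. The
# threshold, severity scale, and function name are illustrative assumptions,
# not Phrase's actual product behavior or API.

PUBLISH_THRESHOLD = 0.90          # assumed score needed to auto-publish
MAJOR_SEVERITY = 3                # assumed severity level that forces human review


def route_translation(score: float, issue_severities: list[int]) -> str:
    """Decide what happens to a machine-translated segment.

    `score` is an automated quality estimate in [0, 1]; `issue_severities`
    holds the severity levels found by automated LQA when the score is low.
    """
    if score >= PUBLISH_THRESHOLD:
        return "publish"                                   # score clears the bar: ship it
    if any(sev >= MAJOR_SEVERITY for sev in issue_severities):
        return "human_review"                              # significant errors: escalate
    return "auto_correct"                                  # minor or low-priority: fix automatically


if __name__ == "__main__":
    print(route_translation(0.95, []))                     # -> publish
    print(route_translation(0.70, [4]))                    # -> human_review
    print(route_translation(0.70, [1, 2]))                 # -> auto_correct
```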

The entire process then feeds back into improving language assets, which in turn refines the initial MT engine selection for future translations. The Phrase Platform also leverages a diverse range of MT engines, including customer-built and partner-built solutions, to optimize the translation process.

“The nice thing here is that we aggregate machine translation from many third parties, but customers can bring their own, they can build their own, and partners can bring their own… and then incorporate those into our platform as well. So we’re really aiming to build a very open platform that is friendly for LSPs to build on top of, as well as enterprises to make use of,” said Ell about how the process works.

The Language AI Stack: Customer Experience

As a Phrase customer, Uber has experienced firsthand the benefits of integrating LLMs into its localization workflow. Afssari highlighted the importance of maintaining quality at every stage of the content lifecycle and emphasized the collaborative effort between Uber and Phrase to achieve this.

The Uber executive also noted the increasing use of fully automated translation at Uber, with 85% of strings now being translated without human intervention. 

This shift was possible thanks to a combination of automated quality checks, such as Phrase Quality Performance Scores (Phrase QPS) and an in-house automated Multidimensional Quality Metrics (MQM) framework, alongside rigorous human review and audit processes.
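Uber’s in-house framework has not been published, but the sketch below shows how an MQM-style score could decide whether a string joins the 85% that ship without human intervention. The severity weights (minor = 1, major = 5, critical = 10 penalty points) and the pass threshold are commonly cited MQM conventions used here as assumptions, not Uber’s or Phrase’s actual numbers.

```python
# Rough illustration of gating fully automated publishing on an MQM-style
# score. Weights, threshold, and error categories are assumptions.

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}
PASS_THRESHOLD = 0.95             # assumed minimum score for no-touch publishing


def mqm_score(errors: list[tuple[str, str]], word_count: int) -> float:
    """Normalized MQM-style score from (category, severity) error annotations."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for _category, severity in errors)
    return max(0.0, 1.0 - penalty / word_count)


if __name__ == "__main__":
    # A 200-word string with one minor terminology error and one major
    # accuracy error: penalty = 1 + 5 = 6, score = 1 - 6/200 = 0.97 -> passes.
    errors = [("terminology", "minor"), ("accuracy", "major")]
    score = mqm_score(errors, word_count=200)
    print(f"score = {score:.2f}, publish without review: {score >= PASS_THRESHOLD}")
```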

With the rapid advancements in LLM technology, the potential applications in localization are vast, remarked Afssari. Among the possibilities, he identified quality scoring and post-editing as two areas with the most significant business impact.

Afssari also announced that Uber now offers quality evaluation, human review, and audit of LLM and MT output as additional services through its scaled solutions.

Over the AI Horizon

Another topic of discussion was the competitive landscape in the LLM space, dominated by a few major players like OpenAI and Google. Ell acknowledged the challenges faced by smaller, less well-funded companies in this arena and emphasized the importance of choosing the right model for specific use cases. 

Ell also highlighted the growing trend towards open-source solutions, driven by companies like Meta, which could level the playing field and foster greater innovation.

The increased attention that localization is receiving from the C-suite is another noteworthy trend. Ell attributed this shift to growing recognition of localization’s strategic importance for today’s global businesses and to the transformative potential of LLMs.

In this regard, Afssari also stressed the importance of proactive communication and education to ensure that localization is viewed not just as an operational function but as a strategic asset that can drive business value. 

Afssari also said that he envisions a future where MT, enhanced by LLMs, becomes the first line of content creation, followed by further LLM-enabled refinement and quality assessment.

Looking ahead, both Ell and Afssari see significant opportunities for language service providers (LSPs) and localization professionals. Ell encouraged LSPs to reposition themselves as multilingual content providers and actively participate in the growing ecosystem around platforms like Phrase.

Overall, the conversation underscored the importance of collaboration, continuous learning, and a willingness to embrace change to unlock the full potential of language AI and drive real value through localization.
