Georg Ell, Phrase CEO, and Simone Bohnenberger-Rich, PhD, Chief Product Officer, addressed SlatorCon Remote June 2024 attendees to discuss what AI-enabled automation can achieve in localization quality assurance (LQA). The Phrase executives answered questions about integrating AI into different offerings and the kinds of benefits to be gained through automation.

To get the discussion started, Slator asked Ell how Phrase is helping its clients address changes in content generation and localization, especially at massive scale.

The CEO explained that there is great interest in the way content generation is changing and how AI is enabling these changes.

To Ell, such changes include the not-too-distant possibility of hyper volumes of internet content being streamed in real time, highly customized for each user. In his estimate, about 20% of localization clients have begun to automate, amid a sense of urgency to handle mushrooming content volumes and eventually reach hyper-automation.

Hype vs Reality in AI Innovation

Picking up on the theme of massive content volumes, Bohnenberger-Rich added that the hype around what AI can do is changing expectations around localization costs, namely doing a lot more for a lot less money. “Sometimes we see that localization teams get reduced budgets in anticipation that GenAI will just lead to cost reductions,” Bohnenberger-Rich added. “That’s a little bit [of] what I call blind enthusiasm.”

Bohnenberger-Rich followed by saying that teams eventually realize it takes a lot more than generative AI and large language models (LLMs) to achieve real automation, cut costs, and solve an actual use case.

The CPO explained that Phrase has been “implementing large language models and AI where they generate the most value based on what they can do with a workflow component and an advanced analytics component. That gives you a lot of intelligence around what you can do and when.”

Addressing the complexity of the quality assurance (QA) component in localization, Bohnenberger-Rich also described how key innovations in QA at Phrase can remove a lot of that complexity and eliminate unnecessary human interventions. 

This begins with a change in the way that human intervention in QA is decided, shifting from assumptions about subject or language pair difficulty, for example, to an automated and systematized assessment of quality levels. That is part of what Phrase’s Quality Performance Score (Phrase QPS) is able to accomplish, explained the CPO.

Bohnenberger-Rich also touched on the issue of trusting that an automated QA process will perform equally well at any scale. She explained that automating such a highly subjective process requires cutting through that subjectivity with consistency. It also requires the product to show transparently what low and high quality are, so that clients in turn understand why the AI is making certain decisions, she added.
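
To illustrate the kind of routing such a score makes possible, consider the minimal sketch below. It is an assumption-based example, not Phrase's actual QPS logic: the score_segment stub, the 80-point threshold, and the routing labels are hypothetical placeholders for wherever a real quality-estimation model and tuned thresholds would sit.

    from dataclasses import dataclass

    @dataclass
    class Segment:
        source: str
        target: str

    def score_segment(segment: Segment) -> float:
        # Hypothetical quality estimator returning a score in [0, 100].
        # A real system would call a trained quality-estimation model here.
        return 87.5  # placeholder value

    def route_segment(segment: Segment, review_threshold: float = 80.0) -> str:
        # Segments scoring at or above the threshold bypass human review;
        # everything else is flagged as an exception for a linguist.
        score = score_segment(segment)
        return "auto-publish" if score >= review_threshold else "human-review"

    # Example: with the placeholder score of 87.5, this prints "auto-publish".
    print(route_segment(Segment(source="Hello, world", target="Hallo, Welt")))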

The Opportunities for LSPs

Georg Ell spoke about how language service providers (LSPs) can implement automation to increase revenues as they help their clients localize more content. LSPs can serve as technology partners because many buyers lack the knowledge to decide which direction to take with AI localization automation.

Ell stressed the need to look at the language industry as being part of an ecosystem, where Phrase, for example, is a language technology company. In that ecosystem, the data and experts that LSPs have can help improve LLMs, so “bringing those two things together is key to unlocking the value of large language models because you need to customize them to specific use cases,” said the CEO.

The CEO added that, in the context of very high volumes, certain business models will need to change as pressure mounts to do much more in far less time, including rethinking where human intervention happens in QA. At the same time, LSPs will have many options for implementing those changes, he said.

The End of Sequential QA Cycles

Bohnenberger-Rich explained how multiple QA categories can be automated within many different QA loops. Referring to how LLMs can be trained and fine-tuned with assets like high-quality translation memories, she noted that certain QA categories within those loops can be trusted to full automation, though it all depends on the use case.

The CPO added that, depending on the needs of each use case, parts of the QA process can be more or less automated. She described the process as starting with the use case, the problem that needs to be solved, and working backwards to assign automated steps along the way.

“Customers can determine, depending on the use case and the need, how far they want to move the needle towards speed, scalability, and automation, and for what type of assets they prefer an exception loop,” remarked Bohnenberger-Rich.
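
One way to picture that choice is as a per-use-case configuration, sketched below. The field names, QA categories, and thresholds are illustrative assumptions rather than Phrase's product schema; they simply show how a customer-facing setting could dial automation up for high-volume content and keep an exception loop for sensitive assets.

    from dataclasses import dataclass, field

    @dataclass
    class QAWorkflowConfig:
        # Illustrative per-use-case QA settings; all names are assumptions.
        use_case: str
        auto_checks: list = field(default_factory=list)       # QA categories trusted to automation
        exception_assets: list = field(default_factory=list)  # asset types routed to a human loop
        review_threshold: float = 80.0                        # minimum score for auto-publish

    # Hypothetical profiles: high-volume support content leans on automation,
    # while legal content keeps a stricter threshold and a human exception loop.
    support_profile = QAWorkflowConfig(
        use_case="support-articles",
        auto_checks=["terminology", "placeholders", "fluency"],
        review_threshold=75.0,
    )

    legal_profile = QAWorkflowConfig(
        use_case="legal-contracts",
        auto_checks=["placeholders"],
        exception_assets=["contracts", "disclaimers"],
        review_threshold=95.0,
    )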

At Phrase, teams have taken customer feedback into account to fine-tune the QPS model, added Ell. Ideally, and possibly as a next step, an automated QA system would let customers choose how rigid QA should be according to the use case and type of content.


