

For aspiring young changemakers who grew up watching TED Talks, it may be hard to believe that the nonprofit has been around since 1984.

Over the past four decades, the organization’s focus has expanded from technology and design to just about every topic under the sun, with the goal of eventually making talks available in every viewer’s preferred language.

Beginning in 2009, a corps of 200 dedicated volunteers transcribed, translated, and subtitled presentations, known as TED Talks, into their native languages for friends and family. Their grassroots efforts, while impressively impactful, eventually highlighted the need for a cost-effective option for dubbing at scale.

“TED’s commitment to accessibility drives us to innovate,” explained Helena Batt, Director of Localization at TED Conferences, speaking at SlatorCon Silicon Valley 2024.

TED Talks, once a niche offering, have now captured close to 35bn views, with about 70% of those taking place outside the United States, primarily in non-English-speaking countries.

“Over the years, we’ve heard from many of our viewers in these regions who clearly want to experience TED Talks in their own language,” Batt said. “Viewing habits vary widely, and some regions prefer dubbing over subtitles.” 

TED’s initial exploration of synthetic dubbing was not an instant success. Batt, who described the attempts as “less than perfect,” noted that the challenge of balancing vocal features, accurate lip-sync, and emotional nuance often resulted in dubs “that felt awkward and disconnected from the original.”

“Actually, one of our speakers put it bluntly: ‘It just doesn’t sound like me,’” Batt recalled.

‘Creative License’ for Translators

That feedback eventually prompted TED to partner with Panjaya, which helped TED take a “huge leap forward” in synthetic dubbing.

The process starts with AI capturing a speaker’s unique voice and rhythm, then transcribing and translating the speech. Next, the AI generates dubs in multiple languages. The final step is syncing the audio with the speaker’s lip movements.
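
At a high level, that pipeline can be pictured as four chained stages. The sketch below is purely illustrative: every function in it is a hypothetical stub, since neither TED nor Panjaya has published its actual tooling.

```python
# Illustrative four-stage dubbing pipeline. All functions are hypothetical
# stubs that just trace the data flow; the real Panjaya API is not public.
from dataclasses import dataclass

def clone_voice(video: str) -> str:
    return f"voice-profile({video})"          # stub: capture voice and rhythm

def transcribe(video: str) -> str:
    return f"transcript({video})"             # stub: speech-to-text

def translate(text: str, target: str) -> str:
    return f"[{target}] {text}"               # stub: machine translation

def synthesize_dub(text: str, voice: str) -> str:
    return f"audio({text}, {voice})"          # stub: voice-matched synthesis

def align_lips(video: str, audio: str) -> str:
    return f"dubbed({video}, {audio})"        # stub: lip-sync the new audio

@dataclass
class DubbedTalk:
    language: str
    video: str

def dub_talk(video: str, languages: list[str]) -> list[DubbedTalk]:
    voice = clone_voice(video)                # step 1: voice and rhythm
    transcript = transcribe(video)            # step 2: transcription
    dubs = []
    for lang in languages:
        text = translate(transcript, target=lang)        # step 2: translation
        audio = synthesize_dub(text, voice=voice)        # step 3: generate dub
        dubs.append(DubbedTalk(lang, align_lips(video, audio)))  # step 4: sync
    return dubs
```

A real system would replace each stub with a model call, but the data flow (one voice profile and transcript feeding per-language synthesis and lip-sync) matches the steps described above.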

But AI alone “wasn’t enough. We knew we needed the human touch to bring that emotion back into the talks,” Batt said. 

TED currently works with about 70,000 volunteer translators, many of whom have translated for the nonprofit for years and are well versed in the TED Talks format.

Translators typically step in during the process of generating dubs and can make changes to the actual text used for dubbing. They also have some creative license to add pauses or change a speaker’s pitch, for example, to help a talk “truly resonate in their language.” 
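
To illustrate that human-in-the-loop step, a segment-level record might carry the AI draft alongside the fields a translator is allowed to override. This is a hypothetical sketch, not TED’s actual schema:

```python
# Hypothetical per-segment record for the translator review pass.
from dataclasses import dataclass

@dataclass
class DubSegment:
    start: float                   # segment start, seconds into the talk
    end: float                     # segment end, seconds
    ai_text: str                   # machine translation used for the dub
    human_text: str | None = None  # translator's rewrite, if any
    pause_after: float = 0.0       # extra pause the translator inserts
    pitch_shift: float = 0.0       # pitch adjustment in semitones, e.g. -1.0

    def final_text(self) -> str:
        """Prefer the translator's edit over the AI draft."""
        return self.human_text or self.ai_text
```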


Of course, “just” adding humans and walking away can sometimes cause more problems than it solves. Translating dubs is a distinct skill, and ensuring translation quality is challenging when translations are crowdsourced.

The established process for subtitling, refined over a period of years, has multiple translators (usually two) check and review each other’s work. TED is currently exploring a similar approach for AI dubbing.

“We’re experimenting with using our existing library of SRTs that are really created for subtitles and bringing that into the dubbing process and seeing if that helps us reduce the time for translators to refine the dubbings, or if we have them use AI-generated dubbings or content that’s […] already adjusted to the narration of the talks, the pauses, et cetera,” Batt said. “We’re seeing different results in different languages, and so it’s a continuous learning process.”
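
The SRT-reuse experiment Batt describes is easy to picture: an existing subtitle file already carries timed, human-reviewed translations that can seed the dubbing draft. Below is a minimal sketch of parsing basic SRT cues into reusable segments; TED’s real tooling is not public, and this handles only the plain SRT layout.

```python
# Parse basic SRT subtitle cues into timed segments that could seed a dub.
import re
from dataclasses import dataclass

@dataclass
class SubtitleCue:
    start: float  # seconds
    end: float    # seconds
    text: str

TIMESTAMP = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+)")

def to_seconds(stamp: str) -> float:
    h, m, s, ms = TIMESTAMP.match(stamp.strip()).groups()
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000

def parse_srt(content: str) -> list[SubtitleCue]:
    cues = []
    for block in content.strip().split("\n\n"):
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed blocks
        start, _, end = lines[1].partition(" --> ")
        cues.append(SubtitleCue(to_seconds(start), to_seconds(end),
                                " ".join(lines[2:])))
    return cues
```

Each cue’s start and end times can then pre-align the dub draft with the talk’s existing pauses, which is the time saving Batt hopes to measure.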

TED launched its first AI-adapted multilingual talks earlier in 2024, piloting an initiative with select talks in a limited number of languages to test for viability and popularity with viewers. 

The targeted regions included Brazil, Germany, France, Spain, and Italy, which were identified as markets where native language content and dubbing were widely preferred. 

But everywhere AI dubbing was tested, Batt said, establishing and following ethical practices was a must. In Batt’s experience, transparency builds trust. On the speaker side, this meant securing full consent from presenters and involving them in the process early on; for viewers, TED clearly labeled AI-adapted content as such. 

The results from the launch speak for themselves: Views surged by 115%; video completions more than doubled; and in the first month alone, AI-adapted content captured over 12m views and 0.5m interactions on social media. 

Batt tempered expectations a bit: “Some challenges definitely remain, at least for now, like improving lip-sync quality, refining accents, managing — or, I would say, mitigating — AI’s unpredictability, and also ensuring that the content stays true to its initial intent, while also giving our translators that creative agency.”

At the same time, Batt said that TED is looking forward to bringing in more diverse speakers, expanding its partnership with Panjaya to cover more languages, and supporting a deeper presence in regions such as India and China. The ultimate goal is advancing AI to the point of developing more human-like and natural output. “For us, this is just the beginning,” she added.


