In 2023, the Hollywood actor strike paralyzed a number of localization activities in film, TV, and gaming productions, among others, but allowed dubbing work to continue. The July 2024 SAG-AFTRA strike, by contrast, requires members to stop working altogether in video games specifically, as covered by the union’s “Interactive Media Agreement.” 

“Localization for foreign video games covered under the Interactive Localization Agreement (ILA) is struck work,” says a statement published by the union. 

The list of services now off-limits to members includes acting; singing; “voice acting, including performing sound-alike voice services;” and “authorizing the use of your voice or likeness (which includes integration or reuse of work already performed)” in video games. 

The latest strike highlights a key point of contention: the use of AI in voice acting. SAG-AFTRA aims to protect its members from being replaced by AI technology, just as game studios are ever more tempted by the potential of AI dubbing to lower costs and shorten production timelines.

We asked readers if they think the video game actor strike will be a net positive for voice actors, and over a third (36.8%) believe it will be. A group of about the same size (34.2%) thinks it is probable, while almost a quarter (23.7%) find it unlikely and the rest (5.3%) think it will not be.

Flickileaks

“Iyuno is aware of a recent security issue, involving unauthorized access to confidential content. Protecting our clients’ confidentiality and ensuring the security of their content is our highest priority,” read the statement made by the media localization company about a security breach resulting in the leak of unreleased content.

According to a “What’s on Netflix” news article, hackers targeted the Netflix post-production partner and obtained localized versions of “at least nine Netflix original series and movies.” Netflix apparently got to work immediately to remove the leaked content and secure its systems, but not before clips, footage, and complete episodes of several series had made their way to social networks and torrent sites.

Some in the anime community voiced their discontent about the leak on social media, begging the hackers and those circulating the files to stop sharing.

With the Iyuno content leak bringing the vulnerabilities of content security for language service providers (LSPs) to the forefront, we wanted to know how important readers think confidentiality is in their area of language services and technology. The vast majority of respondents (73.3%) think it is absolutely critical.

A fifth (20.0%) of readers consider confidentiality important, and a small group (6.7%) does not see it as important.

Everybody is Using AI Left and Right… Right?

Language AI, especially machine translation (MT) tools, is reaching higher levels of democratization. People from all backgrounds can now perform multilingual speech and text conversion and generation using websites and consumer-grade applications.

Consumer-oriented language AI is bringing technologies like automated captions, machine dubbing, speech translation, and AI text generation to desktops and mobile devices, from multilingual communication apps to web plugins. Not too concerned about end quality, these consumers are mostly trying to communicate in other languages as best the AI allows, usually making it clear that it was AI that translated the text or voice. 

But before consumers could easily use language AI, individual content creators were already experimenting with it in commercial ways, allowing them to reach broader audiences on video platforms and websites. Things like automated transcription services, subtitles, and captions are increasingly affordable and accessible. Many are also free, up to certain volumes.

Language AI is expanding localization capabilities at unprecedented levels for everyone, so we asked Slator readers whether their friends and acquaintances outside the language industry use language AI technologies. Respondents believe that a little over a third (35.4%) use AI sometimes and another third (31.3%) use it often. They also find that about a quarter (27.1%) have probably at least tried it, and the rest (6.2%) have never tried it.

News Media and AI Translation

An analysis of articles on the AI vs. human translators topic, conducted by Slator in July 2024, highlighted some recurrent themes, with about a quarter of the sampled publications describing human translators as “indispensable.” 

Citing concerns about losing the human touch in creative, culturally accurate translations as AI translation becomes pervasive, a significant portion of media coverage praises the abilities of human translators, contrasting them with AI’s obvious limitations, such as hallucinations and incorrect terminology. 

A larger segment of coverage (40%) portrays the decline of translators’ livelihoods as inevitable, citing job losses and layoffs due to AI. The narrative tends to rely on some familiar framings, such as declining demand for human translation, AI as a threat to translator jobs, and even speculation about the end of foreign language education.

We asked readers how accurate they think major media outlets’ translation-related AI coverage is, and over a third (36.8%) answered that it is not very accurate. Two groups are almost evenly split between those who believe it is somewhat accurate (23.7%) or very accurate (21.1%), while the rest (18.4%) find it to be consistently wrong.


