LOQUATICS NEWS READER


A group of researchers from Amazon’s AWS posted a tutorial on the AWS Machine Learning Blog titled “Video auto-dubbing using Amazon Translate, Amazon Bedrock, and Amazon Polly.” This is the latest in a series of detailed how-tos published by AWS that describe how to use various Amazon technologies.

The blogs grouped under the AWS Blog Home offer abundant free resources for users willing to focus on Amazon's language processing technologies. Amazon's free content is prolific, and some of its highly specific tutorials (along with free datasets) could carry an entire stream of tasks, enough to build a full Amazon-stack app.

For this latest blog post on machine dubbing, AWS researchers collaborated with MagellanTV, a documentary streaming service, and Mission Cloud. Mission Cloud is a cloud services management company that acts as a consultancy on behalf of AWS, recommending solutions executed on Amazon technologies.

MagellanTV reached out to Mission Cloud for an affordable automatic dubbing solution. In the tutorial, the researchers begin by describing Mission Cloud's recommendation: a cost-effective solution for automatically dubbing videos for the streaming platform using Amazon Translate, Amazon Bedrock, and Amazon Polly.

Cascade Processing

The blog post describes a cascade process: video captions are first machine-translated using Amazon Translate and then post-edited with Amazon Bedrock.

For the video captions, users can use something as simple as a spreadsheet to collect the audio inputs and then upload the file to an Amazon Simple Storage Service (Amazon S3) bucket. This triggers the whole process of obtaining a dubbed video file and a translated caption file, beginning with machine translation (MT) using Amazon Translate.
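As an illustration of the caption-collection step, a spreadsheet export could be parsed into timed segments before translation. This is a minimal sketch; the CSV column names (start, end, text) and the segment format are assumptions, not details from the tutorial:

```python
import csv
import io

# Hypothetical caption spreadsheet: start/end times in seconds plus the
# spoken text, one row per caption segment.
SAMPLE = """start,end,text
0.0,2.5,Welcome to the documentary.
2.5,6.0,Our story begins in the Atlantic Ocean.
"""

def load_captions(csv_text):
    """Parse a caption CSV into a list of (start, end, text) segments."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [(float(r["start"]), float(r["end"]), r["text"]) for r in rows]

segments = load_captions(SAMPLE)
print(segments[0])  # (0.0, 2.5, 'Welcome to the documentary.')
```

Each segment would then be sent through machine translation, with the timing data retained for the later dubbing and synchronization steps.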

Amazon Bedrock (which offers foundation models from multiple AI companies, not just Amazon, through a single API) is then used to improve MT output quality and to help synchronize the audio and video automatically, the post continues.
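The post-editing step amounts to prompting a foundation model to refine each draft translation. A minimal sketch of how such a prompt might be composed, assuming a hypothetical prompt format and helper function not taken from the tutorial:

```python
def build_postedit_prompt(source, draft_translation, max_seconds):
    """Compose a post-editing prompt asking a foundation model to refine a
    machine translation while keeping it speakable within the time slot."""
    return (
        "You are a subtitle post-editor.\n"
        f"Source caption: {source}\n"
        f"Draft translation: {draft_translation}\n"
        f"Rewrite the draft so it is fluent and speakable in at most "
        f"{max_seconds:.1f} seconds. Return only the revised translation."
    )

prompt = build_postedit_prompt(
    "Welcome to the documentary.", "Bienvenido al documental.", 2.5
)
print(prompt.splitlines()[0])  # You are a subtitle post-editor.
```

The resulting prompt string would be sent to a Bedrock-hosted model; constraining the length of the revised translation is one way such a pipeline can keep dubbed audio aligned with the video.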

The next stage involves a human review using Amazon Augmented AI, says the post, followed by the generation of synthetic voices for the video using Amazon Polly. Throughout, AWS Step Functions orchestrates the individual process steps, which run on AWS Lambda or AWS Batch.
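The synchronization idea, fitting synthesized speech into each caption's time slot, can be sketched with Polly's SSML prosody controls. The helper functions and the 80–120% clamp below are illustrative assumptions, not the tutorial's actual logic:

```python
def prosody_rate(estimated_seconds, slot_seconds):
    """Pick an SSML <prosody rate="..."> percentage so synthesized speech
    roughly fits the caption's time slot (clamped to a natural range)."""
    pct = round(100 * estimated_seconds / slot_seconds)
    return max(80, min(pct, 120))  # avoid unnaturally slow or fast speech

def to_ssml(text, estimated_seconds, slot_seconds):
    """Wrap translated text in SSML with a rate chosen to fit the slot."""
    rate = prosody_rate(estimated_seconds, slot_seconds)
    return f'<speak><prosody rate="{rate}%">{text}</prosody></speak>'

# Speech estimated at 3.0s must fit a 2.5s slot, so it is sped up to 120%.
print(to_ssml("Bienvenido al documental.", 3.0, 2.5))
```

The SSML string would then be passed to Polly's speech synthesis API, with the orchestration of all these steps handled by a Step Functions state machine.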

For the full tutorial, go here.
