LOQUATICS NEWS READER


Chip manufacturer and AI powerhouse NVIDIA — the third-most valuable company in the world at the time of writing — has announced a suite of microservices to help developers integrate generative AI into apps, specifically for machine translation (MT) across 30 languages, plus transcription and text-to-speech capabilities.

“Microservice” refers to a specific way of structuring applications. Traditionally, all of an application’s functionality is bundled into a single, tightly integrated unit (a monolith). Microservice architecture, by contrast, segments applications into independently deployable modules.

The microservice approach permits simultaneous work on an application’s different components, which speeds up development and allows updates to roll out individually, without impacting the entire application.
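The contrast can be sketched in a few lines of Python. This is a minimal illustration, not NVIDIA's actual architecture: the two service names and their endpoints are hypothetical, and each stdlib HTTP server stands in for a service that could be packaged, deployed, and updated on its own.

```python
# Minimal sketch of the microservice idea: two independently deployable
# services, each of which could run as its own process. Service names and
# endpoints are hypothetical, not NVIDIA's actual APIs.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

def make_handler(respond):
    """Build a request handler around a service-specific response function."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps(respond(self.path)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # silence per-request logging
            pass
    return Handler

# Two independent services; each one can be redeployed without the other.
translate_srv = HTTPServer(("127.0.0.1", 8801),
                           make_handler(lambda path: {"service": "translate"}))
tts_srv = HTTPServer(("127.0.0.1", 8802),
                     make_handler(lambda path: {"service": "tts"}))

for srv in (translate_srv, tts_srv):
    threading.Thread(target=srv.serve_forever, daemon=True).start()

# A client application composes the independent services over HTTP.
a = json.load(urlopen("http://127.0.0.1:8801/"))
b = json.load(urlopen("http://127.0.0.1:8802/"))
print(a["service"], b["service"])  # prints: translate tts

translate_srv.shutdown()
tts_srv.shutdown()
```

Because the client only depends on each service's HTTP interface, either server could be swapped out or updated independently, which is the property the microservice approach is after.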

“The microservices architecture is particularly well-suited for developing generative AI applications due to its scalability, enhanced modularity and flexibility,” NVIDIA explained in a July 2024 blog post.

NVIDIA NIM, a set of accelerated inference microservices, runs AI models on NVIDIA GPUs anywhere (i.e., in the cloud, in a data center, or on a local workstation). NVIDIA’s microservices span a range of fields and use cases, from healthcare and data processing to retrieval-augmented generation (RAG).



In March 2024, NVIDIA launched dozens of “enterprise-grade generative AI microservices,” followed shortly by the June 2024 release of NVIDIA ACE generative AI microservices “to accelerate the next wave of digital humans.” 

The NVIDIA ACE suite included NVIDIA Riva, GPU-accelerated multilingual speech and translation microservices for automatic speech recognition, text-to-speech, and machine translation. According to NVIDIA’s materials, one ideal use case is “transform[ing] chatbots into engaging, expressive multilingual assistants and avatars.”

Similarly, NVIDIA’s latest announcement highlights several possibilities for integrating multilingual voice capabilities into apps, such as customer service bots, interactive voice assistants, and multilingual content platforms.

The latest NVIDIA blog post walks readers through performing basic inference tasks directly through their browsers using interactive speech and translation model interfaces in the NVIDIA API catalog.

Large language models (LLMs), which have powered some of the most recent and impactful advances in MT, benefit from microservices. LLMs require significant computational resources, and a microservice architecture makes it possible to scale those resource-intensive components efficiently while minimizing the impact on the rest of the system.
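The scaling point can be made concrete with a toy sketch. Here only the (hypothetical) MT model workers are replicated behind a round-robin balancer, while a lightweight formatting component keeps a single instance; the string-reversal "model" is a stand-in for a real, GPU-hungry LLM.

```python
# Sketch of scaling one resource-intensive component independently.
# The MT workers here are hypothetical stand-ins for heavy model replicas;
# the lightweight formatter stays a single instance.
from itertools import cycle

class MTWorker:
    """Stand-in for a replica of a resource-intensive translation model."""
    def __init__(self, name):
        self.name = name

    def translate(self, text):
        # Reversing the string stands in for running a real MT model.
        return f"[{self.name}] {text[::-1]}"

# Scale out: three replicas of the heavy component behind a round-robin balancer.
workers = cycle([MTWorker(f"mt-{i}") for i in range(3)])

def formatter(s):
    """Lightweight component; one instance is enough, so it is not replicated."""
    return s.upper()

results = [formatter(next(workers).translate(t)) for t in ["hola", "bonjour"]]
print(results)  # prints: ['[MT-0] ALOH', '[MT-1] RUOJNOB']
```

Adding or removing `MTWorker` replicas changes capacity for the expensive step without touching the formatter, which is the independent-scaling property the paragraph above describes.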

NVIDIA continues to pursue research in the field of MT, most recently with a September 20, 2024 paper on EMMeTT: Efficient Multimodal Machine Translation Training. Its ongoing focus on improving language technology and specifically language AI is notable considering that the company’s bread and butter, developing and manufacturing chips, is several degrees removed from translation, transcription, and text-to-speech capabilities.

It also pits NVIDIA against a number of heavy hitters in the tech space, including IBM, Microsoft Azure, and AWS, which calls itself “the most complete platform for microservices.”
