OpenAI has announced that it has raised USD 6.6bn in its latest funding round — reaching a valuation of USD 157bn — making it one of the most valuable private companies in the world.
Venture capital firm Thrive Capital led the round and sweetened the deal by promising a further USD 1bn in investment if the AI firm meets revenue expectations next year. Additional investors included Microsoft, Apple, Nvidia, and Khosla Ventures.
Commenting on the funding, the company stated: “Every week, over 250 million people around the world use ChatGPT to enhance their work, creativity, and learning. Across industries, businesses are improving productivity and operations, and developers are leveraging our platform to create a new generation of applications. And we’re only getting started.”
“We aim to make advanced intelligence a widely accessible resource. […] By collaborating with key partners, including the U.S. and allied governments, we can unlock this technology’s full potential,” it added.
At its Developer Day on October 1, 2024, the company moved a step closer to unlocking the technology’s potential by launching its Realtime API, which lets developers build speech-to-speech applications without chaining multiple models that first transcribe speech, then reason over the resulting text, and finally generate audio via text-to-speech.
The result is reduced latency for speech-to-speech output, and “more natural conversational experiences”.
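The latency gain comes from collapsing three sequential model calls into one. A minimal sketch of the traditional chained pipeline the Realtime API replaces is below; all three stages are hypothetical stubs, not real OpenAI SDK calls, and in production each would be a separate model invocation (and often a separate network round trip) whose latencies add up.

```python
def transcribe(audio: bytes) -> str:
    """Stage 1 (stub): speech-to-text. In a real pipeline this would
    call a speech recognition model."""
    return "what is the capital of france"

def reason(text: str) -> str:
    """Stage 2 (stub): text-in, text-out language model."""
    return "The capital of France is Paris."

def synthesize(text: str) -> bytes:
    """Stage 3 (stub): text-to-speech. Returns placeholder bytes
    standing in for generated audio."""
    return text.encode("utf-8")

def chained_voice_turn(audio_in: bytes) -> bytes:
    # Three sequential stages; a single speech-to-speech model,
    # as in the Realtime API, collapses these into one step,
    # cutting end-to-end latency per conversational turn.
    return synthesize(reason(transcribe(audio_in)))

audio_out = chained_voice_turn(b"\x00\x01")  # fake input audio
print(audio_out.decode("utf-8"))
```

The sketch only illustrates the architecture being replaced; the real stages stream audio rather than passing complete buffers.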
No Simultaneous Interpreting Just Yet
The development comes five months after OpenAI first announced real-time speech translation in ChatGPT. However, when prompted, ChatGPT told Slator it “doesn’t have the capability to listen and interpret simultaneously”, adding that “instead I focus on consecutive interpretation.”
“Simultaneous interpretation, where I listen and speak at the same time, is a complex task that requires real time audio processing. My design is more suited to processing input, then generating a response, rather than doing both simultaneously,” the AI added. The test was run on an iPhone using OpenAI’s latest advanced voice mode on GPT-4o.
While interpreting was not demoed at DevDay this week, the company has confirmed that it is piloting real-time interpreting with the State of Minnesota’s Enterprise Translation Office, suggesting that the functionality may soon be available.