OpenAI is rolling out enhancements to ChatGPT's Advanced Voice Mode (AVM) feature for paid subscribers, promising more natural and human-like interactions, alongside a new real-time AI speech translation capability.
AVM leverages natively multimodal models, specifically GPT-4o, which are engineered to directly “hear” and generate audio. “Just ask Voice to translate between languages, and it will continue translating throughout your conversation until you tell it to stop or switch,” according to the Release Notes from June 7, 2025.
In a little more than a year, OpenAI has gone from announcing real-time speech translation with a live demo in May 2024, to releasing the foundational technology behind advanced voice interactions in September 2024, to the voice enhancements and live speech translation capability just announced.
The “Realtime API,” designed to enable developers to build speech-to-speech experiences, was also introduced in October 2024, and new voices were added in April 2025.
Live Speech Translation
AVM’s live speech translation flow in ChatGPT already has some users sounding off on X about its strengths, like @JeffreyJonah5, who commented: “This changes everything if you’re traveling abroad. ChatGPT can now stay in translation mode no reset needed, just talk. The new voice sounds way more human. More emotion, better pacing. It’s getting scary good.”
The upgraded AVM is now available to all paid ChatGPT users in the Plus, Team, Enterprise, and Edu tiers. Users can access the feature by tapping the Voice icon within the message composer.
While this is an important update to the AVM feature, OpenAI does acknowledge some known limitations, including occasional minor decreases in audio quality and infrequent hallucinations that produce unintended sounds, such as ads or simply gibberish.
At the time of publication, Slator was able to corroborate that the AVM translation feature works only on the mobile interface, that it switches between language learner mode and conversation mode even after being prompted for translation, as reported by some early users on X, and that it sometimes simply stays silent and does not translate.
OpenAI states that it is actively working to resolve these issues. In the meantime, the company has updated its Frequently Asked Questions page with more information about the feature.