OpenAI introduces five new voices for ChatGPT, enhancing conversational abilities and adding custom instructions for a more personalized experience.
OpenAI has started rolling out its highly anticipated “Advanced Voice” feature for select users of ChatGPT. This new addition allows for a more intuitive and human-like conversation experience. The release is targeted at Plus and Team users, who will receive access over the course of the week.
One standout feature is the system’s ability to apologize in over 50 languages, a playful nod to the delays in launching the update. Advanced Voice Mode enhances ChatGPT’s performance, offering smoother, faster, and more natural interactions. It is built on GPT-4o and incorporates major conversational improvements.
Among the new features are five additional voices named Arbor, Maple, Sol, Spruce, and Vale, complementing the pre-existing voices: Breeze, Juniper, Cove, and Ember. These new voices aim to make exchanges more engaging, allowing users to interrupt and switch topics mid-conversation.
OpenAI is also bringing custom instructions and “memories” to the voice experience with this update. Users can set preferences for the chatbot, and it can retain key information from previous conversations to better serve individual needs.
However, the feature is not without limitations. OpenAI acknowledged that it may not yet perform well with in-car Bluetooth or speakerphone systems, and background noise can cause interruptions. Additionally, some users have noted that the new voices may not live up to expectations compared with the withdrawn “Sky” voice, which was scrapped following a legal dispute.
The “Sky” voice sparked controversy when actress Scarlett Johansson expressed her shock at how closely it resembled her own. Johansson revealed she had been approached in 2023 to lend her voice to the project but ultimately declined. Despite the controversy, OpenAI maintained that the voice was never intended to imitate hers and that any resemblance was coincidental.