OpenAI’s New ChatGPT Voice Assistant: A Delayed Debut
OpenAI has begun making its eagerly awaited voice assistant feature available to paid ChatGPT users. After a four-month delay, the company is rolling out the advanced voice mode to ChatGPT Plus subscribers and ChatGPT Team users, with Enterprise and Edu plans to follow next week.
Initially unveiled at a product launch event in May, the voice assistant was showcased responding quickly to both written and visual prompts in a spoken voice. However, concerns about safety and potential misuse delayed its widespread release.
In July, a small number of ChatGPT Plus customers were granted early access to the voice assistant, subject to certain restrictions. OpenAI emphasised that the assistant would not mimic the speech patterns of specific individuals and that filters were in place to prevent the generation of copyrighted audio content.
Despite the expanded rollout, the new voice assistant still falls short of some of the capabilities originally demonstrated. The computer-vision feature, which allowed the chatbot to provide spoken feedback on dance moves using a smartphone camera, is currently unavailable.
To enhance the user experience, OpenAI has introduced five additional voices, bringing the total number of options to nine. The new voices carry arboreal names such as Arbor, Spruce, and Maple.
As OpenAI continues to refine and expand the capabilities of its voice assistant, it remains to be seen how this new feature will impact the way users interact with AI-powered chatbots.