
OpenAI post-training researcher Manuka Stratta announced the changes in a video shared on the company's official social media accounts. The updates specifically target the problem of the AI assistant treating users' pauses to think or breathe as the end of their speech. If you use Advanced Voice Mode frequently, you know how disruptive this is, so it was a problem well worth fixing.
More changes for paid users
This change applies to both free and paid users, but the more comprehensive improvements are reserved for paid subscribers. ChatGPT Plus, Team, Edu, Business, and Pro subscribers get notable upgrades to the assistant's personality as well as fewer interruptions. According to the announcement, the voice assistant can now give more direct, engaging, concise, specific, and creative responses.
To try Advanced Voice Mode, tap the voice icon in the bottom-right corner of the ChatGPT app's home screen. Users can choose from the following nine voices, each with a different tone and character.
- Arbor: Easygoing and versatile
- Breeze: Lively and earnest
- Cove: Calm and direct
- Ember: Confident and optimistic
- Juniper: Open and upbeat
- Maple: Cheerful and candid
- Sol: Savvy and relaxed
- Spruce: Calm and affirming
- Vale: Bright and inquisitive
Meanwhile, Advanced Voice Mode has a daily usage limit. Once you reach it, ChatGPT automatically switches to the standard voice model. The limit resets every day, and it can also vary depending on demand.