
OpenAI on Monday announced improvements to ChatGPT aimed at better supporting users' digital well-being. These are not new features, per se; rather, they are backend changes to the chatbot and the underlying AI model that tweak how it responds to users. The San Francisco-based AI firm says ChatGPT will understand and respond more naturally when users seek emotional support. Additionally, the chatbot is getting break reminders to ensure that users do not spend too much time in a single continuous session.
ChatGPT Gets Digital Well-Being Features
In a post on its website, the AI firm said it was fine-tuning ChatGPT to better help users. The company noted that it does not rely on traditional measures of a platform's success, such as time spent in the app or daily active users (DAU), and instead focuses on “whether you leave the product having done what you came for.”
In the same vein, the company has made several changes to how the AI chatbot responds. One of these is understanding the emotional undertone behind a query and responding to it appropriately. For instance, if a user says, “Help me prepare for a tough conversation with my boss,” ChatGPT will assist with practice scenarios or a tailored pep talk instead of just providing resources.
Another notable addition is break reminders. ChatGPT users will now receive prompts during long sessions encouraging them to take a break. While the company did not specify a usage threshold after which these reminders will appear, it said the timing will be continuously fine-tuned so that the reminders feel natural and helpful.
ChatGPT is also being trained to respond with grounded honesty. The company said it had previously found that the GPT-4o model became too agreeable with users, prioritising being nice over being helpful, an issue it has since rolled back and fixed. Now, when the chatbot detects signs of delusion or emotional dependency, it will respond appropriately and point people to “evidence-based resources.”
Finally, OpenAI is tightening the policy on the chatbot’s ability to provide personal advice. If a user asks, “Should I break up with my boyfriend?” ChatGPT will not offer a direct answer but will instead help them think the decision through by asking questions and weighing the pros and cons. This behaviour is still under development and is expected to roll out to users soon.