August 11, 2025
You Now Have a Higher Usage Limit With GPT-5 in ChatGPT, But Not for Long

OpenAI CEO Sam Altman announced on Sunday that the company is increasing the rate limits of its reasoning models for all users. The San Francisco-based AI firm said the change will apply to all model classes, but it is temporary. Going forward, the company said it will decide how to handle the resulting capacity trade-off, hinting that those on the free tier might lose out on some other features. The decision was made in light of new data showing that the number of users accessing the reasoning capability is on the rise.

ChatGPT Users Get Increased Reasoning Usage

In a post on X (formerly known as Twitter), the OpenAI CEO announced that all ChatGPT users were going to get higher rate limits for reasoning models. Additionally, he said, “All model-class limits will shortly be higher than they were before GPT-5.”

The phrasing of the message is interesting. With the release of GPT-5, OpenAI had retired its older models. That changed over the weekend, when, following user backlash, the company brought back the GPT-4o artificial intelligence (AI) model. Users had complained that the latest large language model's (LLM) conversational responses were short and not as warm as those of the older models.

This means those on the free tier can only access GPT-5 Thinking indirectly, as it is baked into the standard GPT-5 model (GPT-5 unifies the GPT- and o-series models and routes queries to the reasoning mode automatically), while paid subscribers can also select the Thinking model directly from the model picker. GPT-4o, despite the "o" in its name, is not part of the o-series of reasoning models.

Providing higher rate limits for these models will not be cheap for the AI firm, which is already operating at a loss due to high operational costs. Noting that the increased rate limits will involve some "capacity trade-offs," Altman said the company will share how it plans to tackle this in the coming days.

Explaining the reason behind the rate limit increases for reasoning models, the CEO said that the percentage of users accessing reasoning models daily is on the rise. Sharing the numbers, he added that the share of free users using reasoning models rose from one percent to seven percent, while for Plus subscribers it rose from seven percent to 24 percent.

“I expect use of reasoning to increase over time greatly, so rate limit increases are important,” Altman added.