
Gemini and ChatGPT both participated in this year's International Mathematical Olympiad (IMO) and achieved gold medal-level scores. Google DeepMind highlighted that its artificial intelligence (AI) chatbot officially entered the competition and solved five out of six problems while following the competition's rules, without any human intervention. OpenAI, on the other hand, used an experimental research model for the test, and its results were evaluated independently. The San Francisco-based AI firm says the scores were finalised after unanimous consensus among the evaluators.
Gemini and ChatGPT Score 35/42 at the 2025 IMO
In separate posts on X (formerly known as Twitter), Google DeepMind CEO Demis Hassabis and OpenAI's Member of Technical Staff Alexander Wei announced that their models achieved gold medal-level scores at the 2025 IMO. Both Gemini and ChatGPT solved five out of six problems and scored 35 out of 42 marks, which is considered enough for a gold medal. While Google used its Gemini Deep Think model for the competition, OpenAI used an unnamed experimental research model for the Olympiad.
The IMO is one of the longest-running annual mathematics competitions for school students. It was first held in Romania in 1959, and at present, students from more than 100 countries participate. The competition focuses on mathematical proofs rather than problems with numerical answers. This means participants have to use logic, various mathematical theorems, and their broader mathematical knowledge to construct a proof. The quality of each proof is then graded by evaluators, and participants are awarded marks accordingly.
Hassabis said Gemini was able to operate end-to-end in natural language, producing mathematical proofs directly from the problem descriptions within the 4.5-hour time limit. The enhanced Gemini Deep Think model will first be made available to select testers and mathematicians, and later rolled out to Google AI Ultra subscribers.
According to a TechCrunch report, OpenAI also participated in the competition, but not officially. The company is said to have hired three former IMO medalists, who understood the grading system, as third-party evaluators. The AI firm reportedly then shared the scores with the IMO. Wei, in his post, highlighted that the scores were announced only after the evaluators reached unanimous consensus.
In a separate post, Hassabis indirectly called out OpenAI for not following the official rules and the lengthy verification process that the IMO had asked AI labs to follow. He also hinted that OpenAI had announced its results prematurely on Friday, saying, "we didn't announce on Friday because we respected the IMO Board's original request that all AI labs share their results only after the official results had been verified by independent experts & the students had rightly received the acclamation they deserved."