In a landmark event for artificial intelligence, cutting-edge generative AI models from Google and OpenAI achieved gold-level scores at the prestigious 2025 International Mathematical Olympiad (IMO), held this month in Queensland, Australia. Despite this milestone, human contestants still outperformed the machines: five students earned perfect scores, something neither AI model could achieve.
Google announced that an advanced version of its Gemini chatbot solved five out of six complex mathematics problems, earning 35 out of a possible 42 points, the minimum required for a gold medal. This is a significant leap from last year, when its model managed only a silver-medal score at the IMO in Bath, UK. Notably, Gemini completed the problems within the 4.5-hour competition time, a major improvement over the two to three days required previously.
OpenAI’s experimental reasoning model also matched Gemini’s 35-point score. According to OpenAI researcher Alexander Wei, the model’s performance marked the achievement of “a longstanding grand challenge in AI” at what he described as “the world’s most prestigious math competition.”
The IMO sets six high-difficulty problems, each scored out of seven points, for a maximum total of 42. Contestants must be under 20 years old and represent the top mathematical talent globally. This year's event featured 641 participants from 112 countries.
Despite the AI models' strong showing, humans retained the upper hand. Approximately 10 percent of human participants earned gold medals, with five achieving flawless 42-point scores, an accomplishment neither AI model matched.
IMO president Gregor Dolinar praised the AI-generated solutions as "astonishing," noting they were often clear, precise, and easy to follow. However, he emphasized caution. "It is very exciting to see progress in the mathematical capabilities of AI models," he said, while also pointing out that organizers could not verify the amount of computing power used or the extent of any human assistance.
As AI continues to inch closer to human-level problem-solving in mathematics, this year’s IMO shows that, for now, young human minds remain the gold standard.