OpenAI has introduced two new AI models, o3 and o4-mini, describing them as its most advanced yet in reasoning and multimodal capabilities. The announcement, made on Wednesday via a post on X, comes just days after the rollout of the GPT-4.5 API for developers.
According to OpenAI, these models can “agentically” use and integrate all available tools within ChatGPT, including web browsing, Python coding, image analysis, file interpretation, and image generation. In essence, the models can work through complex tasks by deciding when to invoke each tool and chaining the results across modalities.
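For developers, the same tool-use behavior is exposed through OpenAI's API. The snippet below is a minimal sketch using the official Python SDK's Responses API; the prompt is illustrative, and the web-search tool shown here is one example of the tools the models can call on their own.

```python
# Minimal sketch: tool-augmented reasoning with o4-mini via the
# OpenAI Python SDK (Responses API). The prompt is illustrative;
# tool availability may vary by account and API version.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o4-mini",
    tools=[{"type": "web_search_preview"}],  # the model decides whether to browse
    input="Summarize this week's announcements about OpenAI's o-series models.",
)

# The model chooses mid-reasoning whether to invoke the tool,
# then folds the results into its final answer.
print(response.output_text)
```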
OpenAI CEO Sam Altman praised the new models on X, stating, “o3 and o4-mini are out! They are very capable. o4-mini is a ridiculously good deal for the price.” He also announced the release of Codex CLI, a new open-source coding assistant designed to run locally on users’ computers. This tool aims to enhance developer experience by harnessing the powerful coding abilities of o3 and o4-mini.
The o3 model is positioned as the premium option, excelling in domains such as programming, mathematics, science, and visual reasoning, and aimed at users with complex, multi-layered tasks. o4-mini, by contrast, is a cost-effective alternative with much higher usage limits, making it suited to high-throughput workloads where fast, capable reasoning is key.
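In practice, that split suggests routing work between the two models. As a hedged sketch (the routing heuristic below is invented for illustration, not an OpenAI recommendation), the o-series models also accept a reasoning-effort setting that trades depth against latency and cost:

```python
# Illustrative routing between o3 and o4-mini; the complex_task flag
# and the heuristic are hypothetical. The reasoning-effort setting
# trades depth of reasoning against latency and cost.
from openai import OpenAI

client = OpenAI()

def answer(prompt: str, complex_task: bool) -> str:
    # Deep, multi-step work goes to o3; high-volume queries to o4-mini.
    model = "o3" if complex_task else "o4-mini"
    effort = "high" if complex_task else "low"
    response = client.responses.create(
        model=model,
        reasoning={"effort": effort},
        input=prompt,
    )
    return response.output_text

print(answer("What is 17 * 24?", complex_task=False))
```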
Looking ahead, Altman confirmed that a more advanced o3-Pro model is in development and will be made available to ChatGPT Pro users in the coming weeks.
Despite the innovation, OpenAI continues to grapple with model naming confusion. Altman humorously acknowledged this on X, saying, “How about we fix our model naming by this summer… until then, everyone gets a few more months to make fun of us (which we very much deserve).”
Nonetheless, the o3 and o4-mini releases signal a strong step forward for OpenAI as it continues to push the boundaries of AI utility, performance, and accessibility.