OpenAI, the Microsoft-backed leader in AI software, announced the launch of GPT-4o Mini, a smaller, more cost-effective AI model designed to make advanced AI technology more accessible and less energy-intensive. This strategic move aims to broaden OpenAI’s customer base amid fierce competition from industry giants like Meta and Google.
GPT-4o Mini is priced competitively at 15 cents per million input tokens and 60 cents per million output tokens, making it over 60% cheaper than its predecessor, GPT-3.5 Turbo. Despite its smaller size, GPT-4o Mini boasts impressive capabilities, outperforming the GPT-4 model in chat preferences and achieving a remarkable 82% on the Massive Multitask Language Understanding (MMLU) benchmark. This score surpasses Google’s Gemini Flash and Anthropic’s Claude Haiku, which scored 77.9% and 73.8% respectively.
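The per-token pricing above translates directly into per-request costs. The following sketch, assuming the quoted prices of $0.15 per million input tokens and $0.60 per million output tokens, estimates the cost of a hypothetical workload (the function name and token counts are illustrative):

```python
def gpt4o_mini_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated GPT-4o Mini API cost in US dollars, using the
    announced rates: $0.15 / 1M input tokens, $0.60 / 1M output tokens."""
    return input_tokens / 1_000_000 * 0.15 + output_tokens / 1_000_000 * 0.60

# e.g. a batch job consuming 2M input tokens and producing 500k output tokens:
print(f"${gpt4o_mini_cost(2_000_000, 500_000):.2f}")  # → $0.60
```

At these rates, even token volumes in the millions stay well under a dollar, which is the affordability point the announcement emphasizes.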
Olivier Godement, OpenAI’s Head of Product API, emphasized the importance of affordability in democratizing AI technology: “For every corner of the world to be empowered by AI, we need to make the models much more affordable. I think GPT-4o Mini is a really big step forward in that direction.”
The new model supports text and vision inputs through the application programming interface (API), and OpenAI plans to expand support to text, image, video, and audio inputs and outputs in the future. This makes GPT-4o Mini an attractive option for companies with limited resources looking to integrate generative AI into their operations.
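For developers, accessing the model through the API is a standard chat-completion request. A minimal sketch, assuming the official `openai` Python SDK (v1.x) and an `OPENAI_API_KEY` environment variable; the prompt text is illustrative:

```python
import os

# Minimal chat-completion request for the new model; the model name comes
# from the announcement, the prompt content is illustrative.
request = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "user", "content": "Summarize generative AI in one sentence."},
    ],
}

# Issue the call only when the `openai` SDK and an API key are available,
# so this sketch can be read (and run) without credentials.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # official v1.x SDK

    client = OpenAI()
    response = client.chat.completions.create(**request)
    print(response.choices[0].message.content)
```

Swapping a larger model for GPT-4o Mini is typically just a change to the `model` field, which is part of what makes the smaller model easy to adopt for high-volume tasks.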
ChatGPT’s Free, Plus, and Team users can access GPT-4o Mini starting today, with enterprise users set to gain access next week. This rollout is part of OpenAI’s broader strategy to make AI technology faster and more affordable for developers building applications.
OpenAI has not disclosed the exact size of GPT-4o Mini but stated it is comparable to other small models like Llama 3 8B, Claude Haiku, and Gemini 1.5 Flash. The company claims GPT-4o Mini is faster, more cost-efficient, and smarter than these leading small models.
The introduction of GPT-4o Mini reflects OpenAI’s commitment to maintaining its leadership in the AI market while addressing the growing demand for affordable, high-performance AI solutions. As smaller models continue to improve, they are becoming increasingly popular among developers for high-volume, simple tasks due to their speed and cost efficiencies.