OpenAI has introduced GPT-4o mini, a smaller version of its latest AI model, GPT-4o. The scaled-down model is designed to be faster and more affordable than the full version, catering to developers who want to incorporate AI into their projects but lack the budget for heavier AI-related costs.
GPT-4o mini currently supports text and images, with video and audio capabilities planned for the future. OpenAI says it is more than 60% cheaper than GPT-3.5 Turbo, the company's previous smallest model. GPT-4o mini also outperforms other small models on MMLU, an industry benchmark for reasoning.
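For developers weighing the model, here is a minimal sketch of a text request through OpenAI's chat completions API using the Python SDK. The `gpt-4o-mini` model identifier is the publicly documented name; the prompt and printed output are illustrative assumptions, not a prescribed integration.

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# Illustrative request: a short text prompt sent to the smaller model.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Summarize this support ticket in one sentence."},
    ],
)

# Print the model's reply from the first returned choice.
print(response.choices[0].message.content)
```

The same call pattern works with the larger GPT-4o model by swapping the model name, which is part of what makes the mini version an easy drop-in for cost-sensitive workloads.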
This development from OpenAI marks a significant step in making AI technology more accessible and cost-effective for smaller developers looking to integrate AI into their applications or websites. With text and image understanding available today and audio and video support planned, GPT-4o mini promises a versatile model suited to applications across many fields.
The introduction of GPT-4o mini is a positive move toward democratizing AI technology and making it available to development teams of all sizes. As OpenAI continues to refine its models, we can expect further advances in both the capabilities and the accessibility of AI technology.