OpenAI's o3 and Other AI News That Will Shape AI in 2025
Here's the best AI news you may have missed at the end of 2024.
OpenAI, Google, and other companies made a flurry of announcements at the end of 2024 that will shape AI in 2025.
We got new and improved models, tools finally released to the public, ChatGPT updates, and more! Let's dive into everything these last weeks of 2024 brought us and how to make the most of them.
o3 & o3-mini – AGI?
OpenAI saved what is likely the most important announcement of the “12 days of OpenAI” for last (day 12): the unveiling of the new frontier models, o3 and o3-mini.
The o3 model is a major leap forward, excelling in areas like math and science and surpassing its predecessor, o1, in both capabilities and response quality. Impressively, it even outperforms humans in some competitions.
On the ARC-AGI benchmark, o3 scored above 85%, the threshold typically cited as average human performance, breaking "new ground in the Arc-AGI space." The milestone also underscores the growing partnership between OpenAI and the ARC team, with a formal collaboration set for 2025.
What sets o3 apart is its ability to infer solutions to new problems based on prior examples. This represents a bold shift in OpenAI’s vision—transforming ChatGPT from a passive tool into an active agent capable of independent reasoning and action.
In essence, we’re inching closer to achieving true AGI. However, OpenAI remains cautious, choosing not to label it as such. Even François Chollet (creator of ARC-AGI) has said that there's still a fair number of easy ARC-AGI-1 tasks that o3 can't solve (some of the type "easy for humans and hard for AI") like the one below.
While these models aren’t being released just yet, public safety testing for researchers is already underway.
OpenAI o1 and o1 Pro Mode available in ChatGPT
The first day of the “12 days of OpenAI” kicked off with a lot of excitement as OpenAI introduced its latest update. Sam Altman and his team launched o1 and o1 Pro Mode, which they described as a direct response to feedback from many of us ChatGPT users. These updates take the initial features of o1-preview and elevate them—making the model smarter, faster, and more capable, especially in its multimodal abilities, which boost both performance and accuracy.
The chart below highlights the performance of these models in tests like math competitions, coding challenges, and the GPQA Diamond benchmark.
According to human evaluations, the updated o1 made major errors 34% less often than o1-preview while responding about 50% faster. But beyond the numbers, OpenAI's message is clear: "Look, I'm now far more reliable than my earlier version."
Probably what surprised everyone the most was the price: ChatGPT Pro, the subscription that includes o1 Pro Mode, costs $200/month. I'll admit, that price tag might seem steep at first glance; I thought the same. But the features it offers are undeniably impressive, especially for solving complex problems.
Run Python code in ChatGPT (Canvas update)
Since OpenAI first introduced Canvas in ChatGPT, I’ve seen it as one of the most user-centric and customizable tools they’ve released. The update on day 4 takes it to the next level, making Canvas an essential tool for programmers.
With Canvas, you can quickly edit and refine text, getting suggestions, explanations, and deeper insights as needed. On top of that, you can now run Python code directly within the same workspace, with outputs that include both text and visual graphs. Check how it works in my video below.
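To give you an idea, here's the kind of short script you could paste into Canvas and run there. The script and its numbers are purely illustrative (my own made-up example, not anything from OpenAI):

```python
# A toy script of the kind you could run inside ChatGPT's Canvas:
# it computes simple summary statistics and prints them as text output.
# (Canvas can also render plots; this example sticks to text.)
from statistics import mean, median

# Hypothetical benchmark scores (made-up numbers for illustration)
scores = [71, 83, 90, 88, 76, 95]

print(f"mean:   {mean(scores):.1f}")
print(f"median: {median(scores):.1f}")
print(f"max:    {max(scores)}")
```

Once the script is in Canvas, you run it in place and the output appears alongside the code, so you can ask ChatGPT to fix or refine it without leaving the workspace.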
Another exciting addition is Canvas's integration with custom GPTs. Now you can create GPTs tailored to specific topics or tasks, with Canvas built in from the start.
Sora finally released (but it’s not as good as Google’s new Veo-2)
Day 3 was focused on Sora, a tool many of you are already familiar with. Sora allows users to generate videos from text prompts, and this latest release brings some exciting upgrades.
Videos can now be created in 1080p resolution, with durations extended to up to 20 seconds. This applies across widescreen, vertical, and square formats. On top of that, users can explore curated community feeds, offering inspiration for their next visual projects. Another key improvement is Sora’s enhanced safety measures. OpenAI has introduced safeguards to promote transparency and reduce the risk of misuse, ensuring a more secure environment for everyone.
Although there was some initial excitement about Sora, Google's Veo-2 soon made it look like a fairly basic tool.
Just compare this video generated with Sora
with this video made with Veo-2
Many video comparisons like this were published on X, and they didn't make Sora look good.
Geo-2 isn’t available to the public, though, while Sora is included with the standard ChatGPT Plus subscription. However, for those looking for increased usage limits or higher resolutions—up to 10 times more—the Pro subscription offers an upgraded Sora experience.
ChatGPT and Apple Intelligence
The integration of ChatGPT into iOS for iPhone, iPad, and macOS is an update many may have seen coming—but it’s no less exciting.
Let’s start with Siri. You can now use Siri to directly ask ChatGPT to handle tasks on your behalf, making it easier than ever to get things done. In the example below, I asked Siri to generate a poem using ChatGPT and put it directly into the Apple Notes app and then I asked to add an image.
Another standout feature from this release is the enhanced writing tool, which lets users create and summarize documents, including PDFs, through ChatGPT. On top of that, the new Visual Intelligence functionality allows you to explore objects using the iPhone’s camera.
These features are seamlessly connected across devices, creating a smooth workflow that boosts both productivity and creativity.
ChatGPT Video finally released (but it was overshadowed by Google’s Gemini 2.0)
The addition of video functionality to Advanced Voice Mode takes ChatGPT to a whole new level. You can now turn on your camera to start a live video and even share your screen in real-time during conversations with ChatGPT.
This update feels like an early Christmas gift for ChatGPT users. It was a smart move by OpenAI, allowing responses to be generated in real-world, interactive scenarios—literally bringing the tool into a more dynamic environment.
Sadly for OpenAI, ChatGPT video was overshadowed by Gemini 2.0. In the latest Gemini update, you can also start a live video call and share your screen with Gemini. For this, you only need a free Google account, and after testing it, I was just amazed! (just check my video below)
Some minor updates to ChatGPT
Here are the latest minor updates to ChatGPT that you should know about. These aren't as big as the previous announcements, but they're still relevant if you're a ChatGPT user.
Day 7 brought a very nice update: Projects. When starting a new project in ChatGPT, I often struggled to keep track of it. Chats would get buried under dozens of other conversations, and finding the right one later was a nightmare. With this update, OpenAI has introduced smart folders, making it easy to organize your conversations in ChatGPT.
You can now create a dedicated folder for each project and even customize it. If you have other related chats, you can move them into the same folder, giving you a clear and organized workspace.
On top of that, you can add files (PDF, TXT, DOC, etc.) directly into your project folders, as well as custom instructions to tailor the way ChatGPT responds within the project.
On day 8, ChatGPT Search got an update. In case you don't know, a few months ago OpenAI introduced Search for paid users, allowing ChatGPT to access real-time information and pull answers directly from the web. With this update, Search has been taken to the next level. It's now faster, better optimized for mobile devices, and you can even search while talking to ChatGPT, thanks to its integration with Advanced Voice Mode. The best part? Search is now free for all registered users.
One feature worth highlighting is the clean integration of location-based searches, which display maps with specific locations directly in ChatGPT.
While this might not seem revolutionary to some—since similar searches can easily be done on Google—I really like how OpenAI is creating an ecosystem where all these tools are conveniently accessible inside the ChatGPT interface.
On day 10, ChatGPT got a phone number that you can call in the USA. Since ChatGPT launched on the web about two years ago, OpenAI has made it clear that one of its goals was to bring this AI to our phones. That vision is now a reality, with ChatGPT available through phone calls and WhatsApp.
If you’re in the US, you can call ChatGPT directly by dialing 1-800-CHATGPT (242-8478). You can also chat with ChatGPT on WhatsApp by adding the same phone number to your contacts. This is clearly aimed at users who may not be tech-savvy but want a quick, easy way to interact with AI.
Finally, on day 11, there was an announcement centered on ChatGPT’s desktop applications, with a focus on macOS. The goal is for ChatGPT to gradually evolve into an autonomous agent, capable of performing tasks independently on your behalf.
The update adds support for more note-taking and coding apps, including Apple Notes, Notion, Quip, and Warp.
Another key addition is the “Talk to Apps” feature, which allows you to interact with applications using Advanced Voice Mode. This feature is perfect for live debugging in terminals, brainstorming ideas in documents, or getting feedback on speaker notes.
For now, Windows users will need to wait a bit longer, as this update hasn’t rolled out for their platform yet.
Final Thoughts
The final weeks of 2024 were intense in the world of AI. There were many announcements, some more impactful than others, depending on who you ask. That said, the o3 announcement stood out to me as a clear indicator of OpenAI's future direction.
It’s no coincidence that the word agent was a recurring theme throughout these updates. We’re steadily moving past the era of slow, incremental AI progress and edging closer to the highly anticipated AGI.
Is this good news? At this point, I can only remain cautiously optimistic. This momentum won’t stop simply because some might wish it would. That said, as I’ve written in previous articles, it’s important not to get swept up in the excitement too easily. Staying critical and thoughtful will be key as we navigate this technological revolution.