I've spent the last 48 hours trying to figure out what the heck OpenAI o1 is. Between the big claims and the technical details, I've found myself in a whirlwind of thoughts about the future of AI. Here's what I got out of it, and why I think you should care, too.
OpenAI o1, a new chapter in the evolution of AI
When OpenAI announced its new o1 model (previously code-named Q* and then Strawberry), the first thing I thought was, “Here we go again with the hype.” But the more I dug into the details, the more I realized that we could be looking at something really interesting.
OpenAI o1 isn't just an update of GPT-4. It’s a whole new approach to how AI models “think.” And yes, I put “think” in quotes because, well, it’s complicated.
The concept of “private thought”
The most intriguing feature of OpenAI o1 is what the folks at OpenAI call a “private chain of thought.” Basically, the model takes time to “think” before responding. Sound familiar? It should, because that’s exactly what we humans do when we tackle a complex problem, working through it step by step.
Imagine asking a friend to solve a physics problem. They probably wouldn’t spit out the answer right away, but would spend a few moments thinking, maybe muttering to themselves. Well, OpenAI o1 does pretty much the same thing, but silently and much, much faster.
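To make the idea concrete, here is a deliberately toy sketch of “thinking before answering.” This is not OpenAI's actual implementation (o1's reasoning chain is hidden and not exposed through the API); the function names and the arithmetic task are invented purely for illustration.

```python
def solve_directly(a, b, c):
    """A 'direct' model: blurts out the answer in one shot."""
    return a * b + c


def solve_with_private_chain(a, b, c):
    """A 'reasoning' model: builds a private chain of intermediate
    steps, then returns only the final answer. The chain itself
    never leaves the function, mimicking o1's hidden reasoning."""
    chain = []
    product = a * b
    chain.append(f"Step 1: multiply {a} by {b} -> {product}")
    answer = product + c
    chain.append(f"Step 2: add {c} -> {answer}")
    # The chain stays private; the caller sees only the answer.
    return answer


print(solve_with_private_chain(3, 4, 5))  # 17
```

Both functions return the same answer here; the point is only that the second one spends visible effort on intermediate steps before committing to it, which is roughly what o1 does at a vastly larger scale.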

From the Math Olympics to the Chemistry Lab
OpenAI says o1 can compete in the International Mathematical Olympiad and in Codeforces programming competitions. Not bad for a “machine,” right? But there’s more. Apparently, it can emulate the skills of PhD students in physics, chemistry, and biology, with success rates reaching 83% on benchmarks where o1's predecessor, GPT-4, scored 13%.
Now, before you start worrying about your academic future, let’s remember that we are talking about very specific benchmarks. The real question, rather, is: how does all this translate into the real world?
The AGI Question: Are We Really Closer?
Ah, AGI (Artificial General Intelligence). The Holy Grail of AI. I've talked about it many times here on Near Future. OpenAI has made it their mission, but are we really any closer with o1?
My short answer is: probably not. At least not yet.
Despite its impressive progress, OpenAI o1 still appears to be as error-prone and prone to hallucination as its predecessors. OpenAI CEO Sam Altman himself made a veiled reference to “stochastic parrots,” a term used to describe language models that appear to understand but are actually just replicating patterns. We're not there yet, Polly.
OpenAI o1 from theory to practice: the challenges of implementation
I repeat: it's one thing to excel in academic benchmarks, it's another to function in the real world. As Nvidia's Jim Fan pointed out, applying OpenAI o1 to real-world products will be “much more difficult than beating academic benchmarks.”
It's a bit like the difference between winning at chess and navigating traffic in a big city. Both require intelligence, but of a very different kind.

There is something else important to consider: o1 will make us rethink the entire AI infrastructure. Traditionally, most of the computing power was spent on training the model. With o1, we are seeing a shift toward inference, that is, the compute the model spends while it reasons over each new query.
It's as if we're going from an AI that memorizes a lot of information to one that knows how to use that information to reason about new things. And that's a big leap.
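One well-known way to see why spending more compute at inference time can help is the “self-consistency” idea: sample an unreliable answerer several times and take a majority vote. The simulation below is a hypothetical toy (the error rate, task, and function names are all invented for illustration), not a description of how o1 itself works.

```python
import random
from collections import Counter


def noisy_answer(true_answer, error_rate, rng):
    """Simulate one inference pass of an unreliable model:
    sometimes right, sometimes off by a small amount."""
    if rng.random() < error_rate:
        return true_answer + rng.choice([-2, -1, 1, 2])
    return true_answer


def answer_with_budget(true_answer, samples, error_rate=0.4, seed=0):
    """Spend more inference compute: sample the model several
    times and return the majority-vote answer."""
    rng = random.Random(seed)
    votes = [noisy_answer(true_answer, error_rate, rng) for _ in range(samples)]
    return Counter(votes).most_common(1)[0][0]


# Accuracy over 200 trials, with a budget of 1 sample vs 25 samples.
one_shot = sum(answer_with_budget(10, 1, seed=s) == 10 for s in range(200))
voted = sum(answer_with_budget(10, 25, seed=s) == 10 for s in range(200))
print(one_shot, voted)
```

Even though each individual pass is wrong 40% of the time, the majority vote over 25 passes is almost always right: the same model gets much more reliable simply by being given more time to “think.”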
OpenAI o1: Should We Ultimately Be Excited or Cautious?
As is often the case with great technological innovations, the answer is: both. OpenAI o1 is undoubtedly a significant step forward in the field of AI. Its reasoning capabilities are impressive and could pave the way for truly revolutionary applications.
On the other hand, we are still far from AGI. And there are still many open questions about how this technology will perform in the real world, outside of controlled benchmarks.
The future is… complicated
OpenAI o1 is redefining what we think is possible with AI. It’s pushing the boundaries of not just what machines can do, but how they do it. After all, true intelligence isn’t just about solving problems, it’s about asking the right questions.
What do you think? Are you ready for a future where machines “think” before speaking? Or do you prefer the more direct approach of current models? The debate on what “thinking” really means has only just begun. And perhaps, this is the most valuable contribution that OpenAI o1 is already offering us.