The Slow Path to AI Self-Improvement
How recursive improvement will likely happen slowly, and then all at once.
Every now and then, you hear this idea come up about AI reaching some kind of tipping point. The theory goes that once AI gets smart enough, it'll be able to improve itself, which leads to even more improvements, and then boom - superintelligence. Or maybe it hits a wall, who knows.
But I've been thinking a lot about this lately, and I don't think that's how it's going to play out. What we're actually seeing suggests something much more gradual - and in my opinion, more interesting.
What We're Actually Seeing
Take development tools, for instance. Aider, which I keep bringing up because it's probably my favorite AI programming tool, does something really interesting: the project tracks exactly how much of its code is written by AI versus by humans. Plot that data over time and you see a fascinating trend: the AI's contribution to the codebase keeps growing, with the AI writing 70-80% of the new code in recent releases.
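Aider publishes these numbers itself, but the underlying idea is simple enough to sketch. Here's a minimal, hypothetical version in Python: it assumes AI-authored commits can be identified by an "(aider)" suffix on the git author name (Aider's default attribution), and the release tags are invented for the example. This is an illustration of the measurement, not Aider's actual script.

```python
# A hypothetical sketch of measuring the AI-written share of new code.
# Assumes AI commits carry an "(aider)" suffix on the author name;
# the release tags below are made up for illustration.
import subprocess

def added_lines(rev_range: str, author: str = "") -> int:
    """Sum lines added in a git revision range, optionally filtered by author regex."""
    cmd = ["git", "log", "--numstat", "--pretty=", rev_range]
    if author:
        cmd.insert(2, f"--author={author}")
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    total = 0
    for line in out.splitlines():
        fields = line.split("\t")  # numstat lines: "added<TAB>deleted<TAB>path"
        if len(fields) == 3 and fields[0].isdigit():  # "-" marks binary files
            total += int(fields[0])
    return total

rev_range = "v0.50.0..v0.51.0"  # hypothetical release tags
ai_lines = added_lines(rev_range, r"\(aider\)")
all_lines = added_lines(rev_range)
if all_lines:
    print(f"AI-written share of new code: {ai_lines / all_lines:.0%}")
```

Run inside any repo where AI commits are tagged that way, it prints one number per release - which is exactly the kind of metric that makes this trend visible at all.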
Now, this isn't the kind of self-improvement people usually talk about. The underlying AI models aren't actually getting better by themselves; there's still a human in the loop, giving the model instructions to apply changes to its own "future version". But it's like watching the very first baby steps of tools learning to improve themselves, even if humans are still very much guiding the process.
Where This Gets Interesting
The really fascinating part is how all these improvements feed into each other. As models get better - faster, smarter, more reliable - the tools built with them get better too. These better tools then help us develop and improve things faster, including new AI models.
Think about what's happening with the newest models. They're not just getting bigger - they're getting noticeably better at reasoning, generating high-quality code, and understanding complex problems. This means development tools can integrate them more effectively, speeding up the whole process of building and improving software - including AI systems themselves.
It's Not About a Single Breakthrough
I don't think we're ever going to see that dramatic moment where AI suddenly becomes capable of improving itself. Instead, what we're probably going to see is this gradual acceleration (sketched as a toy model below the list) where:

- Tools get incrementally better at helping us develop AI
- Development cycles get shorter
- We generate better training data
- Each improvement makes the next one a bit easier
- The whole process slowly picks up speed
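To make the "slowly, and then all at once" shape concrete, here's a toy simulation of that compounding loop. Every number in it is an arbitrary illustrative choice, not an estimate of anything real; the point is only that a small feedback on the rate of improvement produces a curve that looks flat for a long time and then takes off.

```python
# A toy model of compounding improvement - not a forecast.
# Each cycle improves capability, and better capability slightly
# raises the rate of the *next* improvement. All coefficients are
# arbitrary illustrative choices.
capability = 1.0
rate = 0.02       # baseline improvement per cycle
feedback = 0.05   # how much each cycle speeds up the next one

for cycle in range(1, 101):
    capability *= 1 + rate  # apply this cycle's improvement
    rate *= 1 + feedback    # better tools make the next cycle faster
    if cycle % 20 == 0:
        print(f"cycle {cycle:3d}: capability {capability:12.3e}, rate {rate:.3f}")
```

For the first few dozen cycles almost nothing seems to happen; by the end the curve goes nearly vertical. That's the whole argument in a dozen lines.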
Sure, we'll probably hit walls along the way. There are always limitations to what technology can do. But what's exciting is that this process is already happening. Not with some dramatic breakthrough, but with simple tools getting better at improving themselves, one commit at a time.
Beyond Just Programming
And it's not just about programming tools improving themselves. We're starting to see AI models becoming more capable across all sorts of fields. OpenAI's o1 model shows promising reasoning capabilities that could eventually help accelerate scientific discovery. Google DeepMind's AlphaProof and AlphaGeometry 2 reached silver-medal standard on International Mathematical Olympiad problems.
While we're still in very early days - these models are far from replacing mathematicians or scientists - we're seeing the beginnings of something interesting. As models get better at reasoning, problem-solving, and specialized tasks, they might help accelerate progress in their own development. Future versions of models like o1 could help researchers explore better model architectures, or mathematical insights from improved geometry-focused models might lead to more efficient training methods.
Looking Ahead
Some of these improvements might seem small on their own - a tool that can write more of its own code, models that can handle larger codebases, better synthetic data generation. But together, they're slowly changing how we develop and improve AI systems. And as these systems get better at helping with scientific research and mathematical proofs, they might even help us figure out better ways to build AI. We're at the start of this journey, and while progress might be gradual, it's fascinating to watch it unfold.
The path to more advanced AI probably isn't going to be as dramatic as some people think. Instead of waiting for some magical self-improvement threshold, we're watching a gradual evolution where each improvement lays the groundwork for the next. And personally, I think that's way more interesting than the sci-fi version.