Has DeepSeek Burst the AI Bubble or Just Raised the Stakes?
Falling costs, rising competition, and the compute war
A financial market shock: record single-day value destruction in the semiconductor and tech sectors.
The stock market has experienced its biggest single-day loss in history for semiconductor and tech stocks, triggered by the alleged bursting of the AI bubble following the release of DeepSeek R1, an open-source reasoning model from China. Although the model has been available for nearly a week, the current market turmoil seems to be the result of a snowball effect, fueled by media coverage showcasing its strong performance against OpenAI's o1 models and Meta's Llama. Adding to the disruption, DeepSeek is completely free to use, except for API access, which is still significantly cheaper than its competitors' offerings.
In just a few days, NVIDIA's stock alone has plummeted 17%, wiping out approximately $589 billion in market value, the largest single-day loss ever recorded for one company. However, I believe many analysts are jumping to conclusions, and in some cases misinterpreting the situation surrounding DeepSeek's launch.
DeepSeek R1 is China's first reasoning-based AI model, reaching a level on par with proprietary models like OpenAI's o1 series. Unlike traditional language models such as GPT-3, GPT-4, Claude, or Gemini, which generate a response in a single pass, these new models improve their answers the more computation time they are given at inference, a paradigm known as Test-Time Compute (TTC). OpenAI pioneered this approach with o1-preview, o1, and o1 Pro.
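The core trade-off behind Test-Time Compute can be illustrated with a toy best-of-n search: sample several candidate answers, score them with a verifier, and keep the best one, so that spending more compute at inference buys a better result. This is only a minimal sketch of the idea; o1-style models actually use long internal chains of thought rather than this explicit sampling loop, and `solve_once` here is a made-up stand-in, not any real model call.

```python
import random

def solve_once(rng):
    """Hypothetical single reasoning attempt: returns a candidate answer
    and a verifier score (here, distance to the known true answer, 0)."""
    candidate = rng.gauss(0, 1)       # pretend sampled answer
    score = -abs(candidate)           # closer to the true answer is better
    return candidate, score

def best_of_n(n, seed=0):
    """Spend more inference-time compute (larger n), keep the best attempt."""
    rng = random.Random(seed)
    attempts = [solve_once(rng) for _ in range(n)]
    return max(attempts, key=lambda t: t[1])

# With the same seed, the n=64 run sees a superset of the n=1 run's
# samples, so its selected answer can only be as good or better.
_, score_small = best_of_n(1)
_, score_large = best_of_n(64)
```

The monotone improvement with n is the essence of the paradigm: answer quality becomes a dial you turn with compute at inference time, not only with training.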
Interestingly, DeepSeek had already hinted at this shift back in December, when it introduced DeepSeek V3, a non-reasoning language model (one that responds in a single pass), similar to GPT-4. The key factor, however, was not its competitiveness, free availability, or open-source nature, but its remarkably low reported training cost. That efficiency gave DeepSeek the foundation to build R1, a reasoning model that now competes with state-of-the-art systems from private Western companies like OpenAI.
Moreover, DeepSeek has continued its strategy of offering models at an extremely low cost, or entirely free in the open-source ecosystem. Alongside R1, it has released a family of distilled models, smaller models trained to imitate the larger one, which many users have downloaded and tested, confirming that the reasoning paradigm offers real gains over what ChatGPT has provided so far.
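Distillation, in its classic formulation, means training the small model on the large model's softened output distribution instead of hard labels, so the student inherits the teacher's ranking of plausible outputs. The sketch below shows just that soft-target mechanism on made-up logits; DeepSeek's actual distillation pipeline differs in detail (it fine-tunes smaller models on R1-generated outputs), so treat this as the textbook idea, not their recipe.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature -> smoother."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Illustrative teacher logits over a tiny 4-token vocabulary.
teacher_logits = [4.0, 2.5, 1.0, -1.0]

hard_label = softmax(teacher_logits, temperature=1e-6)  # collapses to ~one-hot
soft_label = softmax(teacher_logits, temperature=2.0)   # keeps relative rankings
```

The soft label still puts the most mass on the teacher's top choice, but the probability it assigns to runner-up tokens is exactly the extra signal a student model learns from.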
If we consider that all of this happened during a pivotal week for the future of AI in the United States, one in which Trump began his new presidential term alongside top tech leaders and the Stargate project was unveiled, the impact becomes even more significant.
The arrival of this Chinese model as a lower-cost alternative has raised questions among investors about how DeepSeek managed to train its most advanced model for a reported $5.5 million, a figure said to cover only the final training run, while models like OpenAI's GPT-4 reportedly required over $100 million. This has sparked concerns that the traditional AI business model, which relies on large-scale investments in expensive hardware, could be on the verge of a major shift, or even collapse.
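The headline figure can be reconstructed as back-of-the-envelope arithmetic. DeepSeek's V3 technical report cites roughly 2.788 million H800 GPU-hours for the final training run, and the cost estimate assumes a $2-per-GPU-hour rental price; both the price and the exclusion of research, ablations, and prior experiments are caveats on the number.

```python
# Back-of-the-envelope reconstruction of the reported training cost.
# GPU-hours come from DeepSeek's V3 technical report; the rental price
# is the report's assumption, and the total excludes research costs.
gpu_hours = 2_788_000         # reported H800 GPU-hours, final run only
price_per_gpu_hour = 2.00     # assumed rental cost in USD

training_cost = gpu_hours * price_per_gpu_hour
print(f"${training_cost / 1e6:.2f}M")  # about $5.58M
```

Note that this is the marginal cost of one training run on rented hardware, which is why comparing it directly with all-in figures quoted for models like GPT-4 can mislead.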
Companies that have poured billions into data centers packed with high-performance GPUs—such as NVIDIA, Google, Microsoft, and Amazon—could face serious challenges if DeepSeek proves that its approach is not only more efficient but also more scalable at a fraction of the cost.
Optimizing AI Product Scaling with Minimal Resources
In AI, one widely accepted reality over the past few years is that the economic value of any single model is short-lived. If we look at models like GPT-4, they quickly become outdated as improved, iterated versions with additional training take their place. We've moved from GPT-4 to GPT-4 Turbo, then to GPT-4o, and so on—each version replacing the last. This constant cycle means models are always evolving and becoming obsolete.
At the same time, the growing number of AI competitors has driven up training costs, forcing companies to keep pushing forward while leaving behind models that were once cutting-edge. In other words, no single model holds lasting value—there’s always another iteration on the horizon. The same will likely happen with DeepSeek R1 in the coming months, as a more advanced version inevitably takes its place.
Over the past two years, AI has seen relentless progress, with increasingly powerful models launching one after another. We see the hype around each new release, analyze its benchmarks, and compare its performance—only to do it all over again when the next model drops.
So where does the real value lie?
It’s not in the individual models being trained but in the ability to scale through more advanced training. Improving a model requires access to high-performance data centers, where large-scale experimentation can happen in parallel, allowing researchers to test different approaches and refine the next generation of models.
This is where synthetic data becomes crucial—using computational power to generate training data internally, which can then be used to develop even better models. As a result, the demand for computing power is growing in two key areas: first, to run more complex experiments with increasingly powerful models, and second, to create larger and more refined synthetic datasets. In the end, the companies with the greatest access to computing resources will have a major advantage in training cutting-edge AI systems.
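The generate-and-filter loop described above can be sketched in a few lines: a model proposes examples, an automatic verifier discards the flawed ones, and only verified examples enter the next training set. Everything here is a toy stand-in (the "teacher" is a random arithmetic generator with a simulated 20% error rate, not a real model), but the structure mirrors how verifiable synthetic data pipelines work.

```python
import random

def generate_example(rng):
    """Hypothetical 'teacher' pass: propose a problem and a candidate
    answer. Some generations are wrong, as real model outputs are."""
    a, b = rng.randint(0, 99), rng.randint(0, 99)
    answer = a + b
    if rng.random() < 0.2:            # simulate a flawed generation
        answer += rng.choice([-1, 1])
    return {"question": f"{a}+{b}", "answer": answer}

def verify(example):
    """Cheap automatic check; only verified examples are kept."""
    a, b = map(int, example["question"].split("+"))
    return a + b == example["answer"]

def build_synthetic_dataset(n, seed=0):
    """Generate n candidates, keep the verified subset for training."""
    rng = random.Random(seed)
    candidates = (generate_example(rng) for _ in range(n))
    return [ex for ex in candidates if verify(ex)]

dataset = build_synthetic_dataset(1000)
```

The compute bill shows up twice, exactly as the text argues: once to generate the candidate pool, and again when the filtered dataset feeds the next round of training.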
Still, many might ask: how did DeepSeek manage to train a model as advanced as those from companies that have spent significantly more? Does this mean massive investment isn’t actually necessary? Not exactly. What it really shows is that companies with deeper resources will continue to push the limits, training even more powerful models. At its core, AI development is a race, and those with the most compute power will use it to stay ahead.
The idea of a company offering free, open-source models, undercutting costs, and disrupting existing business models isn’t new. Mark Zuckerberg has taken a similar approach with Llama, and we’ve seen the same thing happen with Stable Diffusion.
When Stable Diffusion was released, many believed that DALL·E 2 represented the peak of AI image generation and that running such models required enormous computing power—something only companies like OpenAI could afford. But Stable Diffusion proved that high-quality image generation could run on consumer GPUs, sparking widespread adoption of the technology.
On one side, large tech companies have used their resources to train even more advanced models. On the other, open-source AI has given smaller companies and individual users more access than ever before. Many businesses have taken advantage of this, even if they don’t train their own models. Companies like Freepik and Krea, for example, don’t develop AI models themselves but have built successful platforms by offering AI-powered services to users. This shift has allowed more players to enter the AI space, changing the industry’s competitive landscape.
In absolute terms, the launch of models like Stable Diffusion and later Flux hasn't shrunk the market or reduced the demand for computing power. In fact, more computing resources are dedicated to image generation today than in late 2022 or 2023. This follows the Jevons paradox, a concept Satya Nadella recently referenced. The paradox observes that when a technology becomes more efficient, it lowers barriers to entry and becomes more accessible; counterintuitively, that same efficiency increases total consumption of the underlying resource, because far more people end up using it.
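The paradox has a simple quantitative condition: total resource use rises when demand is elastic, that is, when falling per-task cost grows usage faster than efficiency shrinks the resource needed per task. The toy model below makes that explicit; the demand curve and every number in it are illustrative assumptions, not measurements of the AI market.

```python
def total_compute(efficiency, base_demand=100.0, elasticity=1.5):
    """Toy Jevons model. Cost per task falls as 1/efficiency, demand grows
    as cost**(-elasticity), and total compute is tasks served times
    compute per task. All parameters are illustrative assumptions."""
    cost_per_task = 1.0 / efficiency
    tasks = base_demand * cost_per_task ** (-elasticity)  # elastic demand
    compute_per_task = 1.0 / efficiency                   # efficiency gain
    return tasks * compute_per_task

before = total_compute(efficiency=1.0)  # baseline
after = total_compute(efficiency=2.0)   # 2x efficiency, yet more total compute
```

With elasticity above 1, doubling efficiency raises total compute (here from 100 to about 141); with elasticity below 1, the same efficiency gain would shrink it, which is the regime the market panic implicitly assumed.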
Because of this, the declining cost of language and reasoning models shouldn’t be seen as a drop in demand. The real driver behind AI infrastructure remains computing power, specifically NVIDIA GPUs, which still dominate the market. Unless strong competitors emerge, NVIDIA’s position will remain solid. After all, making AI training and deployment cheaper only increases adoption, driving even more reliance on the hardware that powers it.
While DeepSeek is already having an impact on companies like OpenAI, this shift wasn't entirely unexpected. Sam Altman has previously said that intelligence would eventually become a commodity: AI would be everywhere and widely accessible at a low price. He also pointed out that the real competitive advantage would be access to computing power. With that in mind, OpenAI has taken an active role in Project Stargate, aligning with its long-term strategy. However, the massive investment required for such initiatives has to come from somewhere, which is why OpenAI has structured its products as paid services, making them commercially viable.
But the arrival of DeepSeek has introduced an unexpected twist—a wave of media hype and investor panic, something that wasn’t seen with Llama or Anthropic’s models. What’s happening now is that, for the first time, many users are realizing they can run open-source AI models for free, rather than paying OpenAI’s $20 monthly subscription. On top of that, OpenAI hasn’t done the best job of communicating the advantages of reasoning-based models, leaving many users unaware of what makes them valuable.
A Boost in the AI Race
At this stage, it’s clear that big tech companies with vast resources and large-scale computing infrastructure aren’t at risk. In fact, their access to compute power is what gives them a competitive edge. The rise of open-source Chinese AI models will likely drive broader AI adoption, but companies like OpenAI and Anthropic could feel pressure from competitors offering similar capabilities for free. Still, this kind of competition is ultimately good for the market—it won’t slow down AI development or signal a tech bubble collapse. Instead, it will act as a catalyst, accelerating progress and making AI models more advanced and widely accessible.
In the short term, DeepSeek may struggle with limited access to high-performance chips and GPUs. But this scarcity could actually drive innovation, pushing the company to find more efficient ways to improve its models. As a result, its focus will likely shift toward architectural advancements and optimization, rather than simply relying on brute-force hardware scaling.
In the end, computing power and energy resources will be the deciding factors in shaping this technological shift. The emergence of new AI competitors isn’t a sign of a collapsing bubble—it’s a step toward the future that’s unfolding right in front of us.