ai bubble go brr or nah
I’m sure you’ve heard of the economic “vicious cycle” at play. Billions of dollars are being poured into architecture, data cleaning and storage, and training, all while a select subset of companies pass money back and forth among themselves, growing richer. Most real-world examples work something like this:
- OpenAI needs GPUs, the chips needed to run large language models, so they invest $100 into NVIDIA, a chip-making company.
- NVIDIA then invests $100 back into OpenAI in the name of supporting research built on NVIDIA’s products. So, OpenAI gets GPUs for a net price of $0, NVIDIA gets the funding it needs to keep manufacturing GPUs, and because the same money just circulates between the two, nothing has any real internal worth to either company, resulting in extremely inflated prices for consumers.
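To make the circularity concrete, here is a toy tally of the example above (the $100 figures are illustrative, not real deal terms): the same dollars net out to zero, yet each company records an inflow.

```python
# Toy tally of the circular deal above: the same $100 is recorded as an
# inflow by both companies, yet neither ends up with more money than before.
deals = [
    {"from": "OpenAI", "to": "NVIDIA", "amount": 100},  # GPU purchase
    {"from": "NVIDIA", "to": "OpenAI", "amount": 100},  # investment back
]

net = {"OpenAI": 0, "NVIDIA": 0}
inflows = {"OpenAI": 0, "NVIDIA": 0}

for d in deals:
    net[d["from"]] -= d["amount"]
    net[d["to"]] += d["amount"]
    inflows[d["to"]] += d["amount"]   # each company books the money coming in

print(net)      # {'OpenAI': 0, 'NVIDIA': 0}     -> nobody is actually richer
print(inflows)  # {'OpenAI': 100, 'NVIDIA': 100} -> yet both report $100 flowing in
```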
So, with 2026 around the corner, everyone is holding their breath to see if the big beast can keep feeding itself. And as always, the truth is a little more complicated than a yes or a no, as are the implications.
By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. — Eliezer Yudkowsky, American AI researcher and writer
definitions
I’ll start by defining what an economic bubble is, and draw parallels seen so far to the AI movement. An economic bubble typically follows six stages:
- Displacement: The initial phase, where a new innovation captures investor and consumer attention.
- Boom: Stock prices begin a steady ascent, initially supported by real statistics illustrating the value of the innovation - efficiency gains and productivity increases. The innovation attracts media attention and new investors. Startups pop up around the world, raising money at million- and billion-dollar valuations.
- Euphoria: A period of intense speculation where valuations accelerate rapidly. Buying into the new innovation is now primarily driven by the “Fear Of Missing Out” (FOMO), rather than the innovation’s value.
- Profit-Taking: Some savvy investors start selling to lock in gains, leading to increased volatility of company valuations, though overall optimism may remain high.
- Stagnation and Panic: All of a sudden, the new innovation just isn’t as “cool,” useful, or necessary as it once was. Widespread loss of confidence in the market triggers rapid, forced selling and a sharp decline in prices (the bubble “bursts”). Startups around the world are either acquired or go bankrupt as large players vertically assume their niches.
- Depression and Stabilization: The market finds its bottom, and valuations return to more reasonable levels that reflect the actual utility of the product, leaving behind necessary infrastructure that new, more sustainable innovations can utilize.
So far, we are beginning to see signs of stages 1 through 4. ChatGPT’s unveiling at the end of 2022 caused the entire industry to hold its breath in apprehension, and GPT-3.5 Turbo was unparalleled in its utility, resembling the “Displacement” stage. Michael Burry, the famed short seller, has bet against AI software and infrastructure companies, and Gary Marcus, a prominent AI researcher from well before the gen-AI hype train, has been openly calling the market overvalued - behavior resembling the “Profit-Taking” stage. And we have certainly been living in stage 3, “Euphoria,” hearing about teenagers (some of my fellow classmates, even) raising millions of dollars at billion-dollar valuations on their own spins on AI agents.
Fundamentally, economic bubbles pop as a result of drastically overvalued technology. It is a mark of modern capitalism that a community can reflect on a given thing being valued more than its actual worth and decide all at once to stop investing in it. Prior bubbles, such as the housing bubble and the .com bubble, reflected this. People spent $100 in 1995 ($212 today) for a 2-year .com URL registration, and then eventually realized that, because there was no scarcity of digital real estate, charging a week’s worth of groceries for a customized link was lowkey pointless.
It is this phenomenon that people worry has transferred to AI. To those who argue the AI bubble will pop, the cost of investing in the movement’s growth has become disproportionately high relative to the actual return on that investment.
what even is the bubble
However, to consider whether the bubble will pop, I think we should view it not from an economic standpoint, but from an intellectual standpoint. Largely, economic bubbles are due to a new invention shaking up the scene and generating new avenues of revenue, and they pop when people realize that the valuation does not match the value. I propose defining an intellectual bubble as the overinflation of an innovation’s perceived value relative to its actual power to drive innovation. That is, we are hyping up AI with more than just money. This is not only a monetary problem, but a problem of poor prioritization for companies, naive optimism about what scale can replace, and a growing reluctance to invest ourselves in understanding, judgment, and human responsibility.
The reason our intellectual bubble grows strained is not that we are placing disproportionate emphasis on genAI as an invention; rather, we are placing disproportionate emphasis on scaling genAI as a viable and sustainable solution. The following are problems that I believe will be solved neither by genAI in its current stage nor by scaling.
- We are running out of data. Elon Musk has publicly commented on how xAI has run out of tweets to train its LLMs on, and we are approaching the ceiling of high-quality data. It’s no longer enough to train on text alone, and increasing the size and breadth of our models won’t improve hallucination rates or provide better information. We are slowly hitting a ceiling on what can be achieved through knowledge of language alone, and traditional language-generation AI is not enough to create an intrinsic understanding of concepts, even with human-like data.
- Speaking of human-like data, a notoriously essential part of human learning that current genAI fails to replicate is continual learning—learning over time. Humans aggregate observations over time to formulate understanding, while AI trains on an entire sum of data at once. Unfortunately, scale only makes an incremental learning approach harder to find, which in turn makes it hard to remove or adjust model biases baked in during training. Humans learn incrementally and with far less data than AI, and models of intelligence should mimic this, relying on prototypical generalization to “fill in the gaps” until those gaps can be filled by data (see the sketch after this list).
- Additionally, humans can recall specific examples that produced their generation of ideas—models only retain a fuzzy image of things they were trained on, so they cannot explain why they’ve arrived at an understanding. What you see when you prompt ChatGPT to explain its solution is a recollection of times it’s seen solutions be explained, not the actual flow of reasoning that spurs the idea. This is notably why hallucinations are not solved by scaling up. Grounding in fuzzy summaries rather than real, structured evidence results in the tendency to make up things that are only probably correct.
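As a toy illustration of the continual-learning point above - a minimal sketch, not a description of how any production LLM is actually trained - here is the difference between refitting a simple prototype on the whole dataset at once and revising it one observation at a time:

```python
import numpy as np

# Batch view: the model sees the whole dataset at once, and any bias baked in
# at this stage is frozen until a full retrain.
def batch_prototype(data: np.ndarray) -> np.ndarray:
    return data.mean(axis=0)

# Incremental view: the prototype is revised one observation at a time, so new
# evidence (or corrections) can shift it without retraining on everything.
def incremental_prototype(data: np.ndarray) -> np.ndarray:
    proto = np.zeros(data.shape[1])
    for n, x in enumerate(data, start=1):
        proto += (x - proto) / n   # running-mean update
    return proto

rng = np.random.default_rng(0)
observations = rng.normal(size=(1000, 8))
assert np.allclose(batch_prototype(observations), incremental_prototype(observations))
```

The two arrive at the same prototype here, but only the incremental path gives you a hook for adjusting the model as new observations (or corrections) arrive.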
In addition to the above problems, scaling LLMs yields steadily diminishing gains, and the cost of building expansive datacenters is being felt by the environment. GPT-3 required 1,287 megawatt-hours of electricity to train (as estimated by scientists at Google and UC Berkeley), and GPT-5 is, by some estimates, roughly 300 times larger. 1,287 megawatt-hours is enough to power about 120 U.S. homes for a year, and as modern AI’s footprint grows beyond the training stage into everyday use, there’s no telling how much more power it will consume.
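For a sense of scale, the 120-homes figure checks out with back-of-the-envelope arithmetic; the household number below is an assumption, roughly the EIA’s average of about 10,700 kWh per U.S. home per year:

```python
# Back-of-the-envelope: how many average U.S. homes could GPT-3's training
# energy have powered for a year? The household figure is an assumed average
# (~10,700 kWh per home per year); the 1,287 MWh figure is the estimate cited above.
gpt3_training_mwh = 1_287
kwh_per_home_per_year = 10_700
homes_for_a_year = gpt3_training_mwh * 1_000 / kwh_per_home_per_year
print(round(homes_for_a_year))   # ~120 homes
```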
conclusions
So, with all that being said, will the AI bubble pop? Economically, I don’t think so. NVIDIA, Google, OpenAI, Anthropic, and most other large-scale AI companies will continue to be self-supporting for years to come. However, I do think the intellectual bubble is sure to pop in 2026. People will realize that the AI hype train we all chase is not all we make it out to be. The valuations of large companies will take a hit the longer they show nothing new and promising. The problems I laid out above will remain unsolved by the approaches currently in place.
Ultimately, however, all these large players have multiple widely-used products that will continue to be widely used until a better solution is found, and for the time being, these companies are best poised to build those better solutions. I think the companies that will feel it worst are startups - built as one-trick ponies exemplifying the best that LLMs and genAI have to offer, they are exposed firsthand to the many flaws of genAI and will easily be replaced by bigger companies that vertically assume their services.
Where will the startup world migrate to? My assumption is that they will move to the field of symbolic AI—building systems with interpretability, determinism, and true understanding of discrete concepts in mind. The data representations we wield will be more structured than a cloudy point in n-dimensional space, and when we ask AI why it makes decisions, we will be able to point at the training data that has resulted in the decision-making. Additionally, I think there will be a lot of exploration in smart robotics - the ability to generalize outside of a trained domain will be extremely important in giving properties of intelligence to robots.
To me, the genAI bubble bursting will be a good thing. We will always be chasing intelligence, whether by trying to replicate it or to accumulate it. However, our goal should be to never stagnate in our approach to achieving intelligence. Though scale is certainly a promising answer (it does explain some of why humans can operate so much more capably than other animals), it’s not the only thing that matters. Humans learn continuously, building on millions of years of evolution and on the experience of others, and our learning is structured by very physical bounds. In order to truly achieve things like AGI and global generalization, we need to understand the fundamental thinking processes of humans and give the same structure to artificial intelligence. Agentic AI—designing workflows for our LLMs to scaffold their skills—is a good next step, as it provides some genuinely sequential structure (a minimal sketch follows below), but it is by no means a new baseline we should cruise on for a year and a half.
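To be concrete about that sequential structure, here is a minimal sketch of an agentic workflow; `call_llm` and the step prompts are hypothetical placeholders, not any particular vendor’s API:

```python
# Hypothetical stand-in for a model call; swap in whatever LLM client you use.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def run_workflow(task: str) -> dict[str, str]:
    """Plan -> draft -> critique -> revise: the structure lives in the workflow, not the weights."""
    out = {"task": task}
    out["plan"] = call_llm(f"Break this task into numbered steps:\n{task}")
    out["draft"] = call_llm(f"Carry out this plan:\n{out['plan']}")
    out["critique"] = call_llm(f"List factual or logical problems in this draft:\n{out['draft']}")
    out["final"] = call_llm(
        "Rewrite the draft below, addressing the critique.\n"
        f"Draft:\n{out['draft']}\nCritique:\n{out['critique']}"
    )
    return out
```

The model weights never change; the scaffolding around them supplies the sequential structure.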
I’m particularly excited about the promise of neurosymbolic methods - “neuro” like the neural networks we currently use, and “symbolic” like the original methods produced by the AI movement in the 1980s. Neural methods use high-dimensional vectors as their representation, while symbolic methods leverage structured representations with discrete relations. For anyone interested, I highly recommend this article (https://arxiv.org/abs/2202.12205), which gives a breakdown of the many ways neural and symbolic methods can be combined. Rather than being caught up in generalizability, the original AI movement was mostly cognitive psychology and computer science meshing to create rule-based frameworks that worked as humans did. Although they were rigid in what they did and oftentimes not parallelizable, those systems could provide answers with reasoning grounded in the data they had been trained on, and they were extremely interpretable. By pairing the strict guidelines and structure of symbolic systems with the fuzzy matching and information gains of neural representations, we can achieve something truly novel. A recent paper of mine, Hierarchical Semantic Retrieval with Cobweb, leverages neural representations within a symbolic model to showcase prototype-driven organization for information retrieval. We need to build the acquisition of structure into our methods of learning, so that they resemble the way humans acquire understanding of the world around us.
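To give a flavor of that pairing - a toy sketch only, not the Cobweb algorithm from the paper, and with a placeholder in place of a real embedding model - prototypes here act as symbolic concepts built from neural vectors, and every answer can point back to the exact examples that formed it:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder neural encoder; in practice this would be a sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

class PrototypeNode:
    """A symbolic concept: an averaged (neural) vector plus the exact examples behind it."""
    def __init__(self, label: str):
        self.label = label
        self.vector = np.zeros(64)
        self.examples: list[str] = []

    def add(self, text: str) -> None:
        self.examples.append(text)                        # keep provenance, not a fuzzy summary
        n = len(self.examples)
        self.vector += (embed(text) - self.vector) / n    # incremental prototype update

def classify(text: str, nodes: list[PrototypeNode]) -> tuple[str, list[str]]:
    q = embed(text)
    best = max(nodes, key=lambda node: float(q @ node.vector))
    return best.label, best.examples                      # an answer plus the data that explains it
```

The point is not this particular data structure; it is that the symbolic layer keeps discrete, inspectable provenance while the neural layer handles fuzzy similarity.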
I’m interested to see where all of this goes and what 2026 holds - LLMs are the first stage in a path of necessary steps to achieving intelligence greater than our own, and I can only hope that as this intellectual bubble comes to a close, we continue to discover mechanisms of intelligence by observing human ingenuity.
- Karthik