At GMO, we have always defined a bubble as a two-standard-deviation divergence of the price of any asset class above its long-term real price trend. The U.S. stock market has now been in bubble territory for a prolonged period. Sooner or later, the bubble will burst, and prices will return to their long-term trend.
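The two-sigma criterion can be made concrete. Below is a minimal sketch, not GMO's actual methodology: it fits a log-linear (constant real growth) trend to a series of real prices by ordinary least squares and flags a bubble when the latest price sits more than two standard deviations of the residuals above that trend. The function name and the detrending choice are illustrative assumptions.

```python
import math

def two_sigma_bubble(real_prices):
    """Return True if the latest real price is >2 sigma above a
    log-linear trend fitted to the whole series (illustrative only)."""
    n = len(real_prices)
    xs = list(range(n))
    ys = [math.log(p) for p in real_prices]  # log prices -> linear trend

    # Ordinary least-squares fit of log price on time.
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x

    # Residuals from trend, and their standard deviation (sigma).
    residuals = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
    sigma = math.sqrt(sum(r * r for r in residuals) / n)
    return residuals[-1] > 2 * sigma

# A steady 2%-per-period real price trend over 50 periods:
trend = [100 * 1.02 ** t for t in range(50)]
print(two_sigma_bubble(trend[:-1] + [trend[-1] * 1.5]))  # 50% above trend -> True
print(two_sigma_bubble(trend[:-1] + [trend[-1] * 0.5]))  # below trend -> False
```

In practice the trend would be estimated on real (inflation-adjusted) prices over many decades, and the sigma on data up to, not including, the period being tested; this sketch keeps everything in one pass for brevity.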
In our study of over 300 two-sigma bubbles across all asset classes for which we had good history, we identified a few genuine paradigm shifts in resources (why wouldn't there be, since all are finite?) including in oil after 1970, as well as in developing markets, such as India, as their economies became more capitalist. But in large, developed equity markets, two-sigma bubbles had always broken and retreated all the way to the pre-existing trend. We saw this play out in the U.S. equity market in 1929, 1972, and 2000, and in the U.S. housing market (a three-sigma bubble, the largest in any important U.S. asset class) in 2006.
But unlike every bubble before it, we have yet to see this from the December 2021 peak in the U.S. equity market, despite all the classic signs of a historic bubble top: crazy speculation, such as meme stocks, followed by a collapse of the most speculative stocks; and significant outperformance of quality stocks against the market. Yes, there was a handsome bear market in 2022 right on cue (-25% in the S&P 500, -35% in growth stocks, and almost -50% in the Magnificent 7), but this quite painful bear market was nipped in the bud, so to speak, by the introduction of ChatGPT in November 2022.
AI is a fast-moving and awe-inspiring invention that seems highly consequential—even world-changing—to almost everybody, myself included. Its arrival acted on the then-deflating market like the ignition of a second stage on a multi-stage rocket. The rule from history is that great technological innovations lead to great bubbles, especially when the technology is so obviously impressive—the science-fiction dream of a computer that can talk to you fluently, and that anyone with a phone can access anytime—that absolutely everybody wants to put their money in. This pattern of highly visible and self-evidently significant innovations leading to market euphoria, then to over-investment, and thus to severe market decline has repeated again and again throughout history, from railways to electricity to radio to the internet. AI is perhaps the most visibly impressive innovation of the last 100 years, possibly of a magnitude equal to the railways of the 19th century. It should not be surprising that it appears to be moving through the same pattern, both rapidly and powerfully.
AI is certainly still an immature technology. Possibly the biggest problem with current large language models (LLMs) is that their mistakes, or ‘hallucinations’, are so plausible. When we searched the scientific literature on toxicity and fertility, the models made up multiple reasonable-sounding studies with agreeable and convenient results, complete with realistic-looking citations, and mixed them in among real scientific papers. They make exactly the kind of errors that you would not catch at a casual glance, or indeed, errors you would prefer not to catch. But current LLMs are, one hopes (for investors, one must certainly hope), just an opening phase in AI progress—just as similarly amazing innovations in the past had bubbles and busts before reaching a maturity in which, after digging out of their earlier collapses, they often surpassed the wildest dreams of the early speculators. If AI can begin to make advances in biotechnology, materials, and energy, or even to start improving itself, the future could be very interesting indeed.
As to what happens if (or when) AI becomes far more intelligent than us, the recent book, If Anyone Builds It, Everyone Dies (Yudkowsky and Soares 2025), is a chilling and very persuasive read. The authors’ analogies and detailed examples serve as powerful reminders of our innate optimistic bias. And optimism is perhaps the most central and critical of our biases. It may indeed be that optimism has been a real help to our survival and success as a species. Now, however, with the last saber-toothed tiger long gone, our optimism causes us humans to have trouble with bubbles and financial crises, as well as other unpleasant but powerful developments, such as climate change and toxicity. When in doubt, we assume the best—despite history telling us repeatedly that things have turned out badly all too often. Over the whole of history, for example, more technologically advanced civilizations have crushed less advanced ones mercilessly and often casually, as if the damage were all incidental. And not just human history, but biological history too: no invasive species has yet made friendly allowances for other species. I’m sorry, Dave (as HAL said), I’m afraid that’s the way it is.
On the other hand, in the last century or so, as we have advanced scientifically and culturally, we have come to value other groups, tribes, and even species more and more. So there is a chance that an intelligence even more advanced than ours may follow us in this regard. As wishful thinkers, most of us will certainly expect that. But if we were much more intelligent, we would surely have handled our long-term problems better. Even with just the intelligence and self-interest we have, we should have done a far better job of preserving our commons—clean air, clean water, fertile soil, and 2.1 healthy children per family. AIs may have some chance of concluding that it is we, the humans, who are the biggest danger to life on earth! And they would be quite justified in doing so.
But as far as risks like that go, markets very seldom even try to predict important change. In complete contrast, markets extrapolate today's conditions into the distant future. Thus, in a deep recession, the market's PE ratio is extremely low. Consider the December 1974 bear market bottom, at 7.5 times earnings: not only were then-current earnings crushed, but the recovery implied by that PE was remarkably slow, despite a long history of quite rapid mean reversion to the contrary. Conversely, in economic booms with peak earnings, high PE ratios price in continued, abnormally strong long-term economic growth. So in October 1929, wonderful earnings were multiplied by the then-record PE of 21 times, not matched until 1972, which had similar peak earnings. More recently, in March 2000, we saw an all-time record PE of 35 times, multiplying record earnings once again. And here we are today, with investors extrapolating record earnings, continued rapid advances in AI technology, and a strong recent economy, discounting the future as if these conditions were guaranteed forever.
What is more likely, instead, is that when investor confidence sooner or later reaches its limits, the deflating of the AI bubble will lead to a major stumble for the economy, a plunge in profits, and a severe decline in valuations. For now, though, the key signs of a major bubble top—a collapse of the most speculative stocks, pronounced outperformance of quality stocks, and usually, a slowing of the rate of rise of the broad market—are not yet evident.
This article was republished with permission.
Jeremy Grantham is a Co-founder, Chairman and Long-term Investment Strategist of GMO. The views expressed are those of the author and are subject to change at any time based on market and other conditions. This is not an offer or solicitation for the purchase or sale of any security and should not be construed as such. References to specific securities and issuers are for illustrative purposes only and are not intended to be, and should not be interpreted as, recommendations to purchase or sell such securities.