The Green Vision for AI - Part 1: Hype, Hallucinations and Hyperscalers
How we ended up in the AI Bubble.
“To keep warming to 1.5 degrees, countries must cut emissions by at least 45 per cent compared to 2010 levels.” United Nations
“…electricity demand from data centres worldwide is set to more than double by 2030 to around 945 terawatt-hours (TWh), slightly more than the entire electricity consumption of Japan today” International Energy Agency
We are now in an age of limits, with only a short window left to determine whether we reach some form of sustainability or lock in collapse. In that context, the investment of a globally significant amount of time, money and resources (physical and human) in the AI bubble is a huge mistake. It is actively damaging society and the environment globally, while taking focus and resources away from the changes we actually need.
In Part 1, I’ll give a tour of the origins and shape of this bubble, to provide the basis for understanding, in Part 2, why the AI bubble will shift even more wealth and power into the US tech industry elite, while making it much harder to achieve a socially just and environmentally sustainable world. Part 3 will then look at the consequences for the UK of the Labour government’s embrace of AI, and Part 4 will set out an alternative Green Vision for AI.
Hype
“In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today.” Sam Altman, CEO OpenAI.
A brave new world is on the horizon, powered by a rapidly advancing new technology deployed by visionary companies led by heroic leaders. If we remove the blockers, ignore the Luddites, and invest the resources, it will deliver a new utopia. The hype is stratospheric, the money is pouring in, and, sure, the practical benefits so far have been somewhat limited, but that will all change very soon - just keep the faith!
That technology is Artificial Intelligence (AI), or more specifically, Large Language Models (LLMs). In the last 5 years, LLM AI has grown rapidly, to reach the point where the likes of Sam Altman can predict that AI will transform the world, and organisations like OpenAI (still technically a non-profit) have reached nominal valuations in the hundreds of billions of dollars, despite making vast losses.
For those of us with a more cynical view, this has all the signs of a bubble - unreasonable expectations set by boosters in industry and the media, attracting hot money chasing outsize future returns, resulting in companies burning through cash which they claim is needed to reach scale and profitability, while delivering little of immediate value.
Hallucinations
LLMs are amazing - you just type in a question and out pops a convincing, detailed response, which could include pictures, graphs, code, maps, references, even video. It’s no wonder that first impressions (for many people, of ChatGPT) wowed us.
But once you dig a bit deeper, you hit some issues. Maybe you try following up an academic reference…and find it doesn’t exist. Maybe you ask it a specific question - about maths or geography, say - and get a strange response. So what’s going on?
The fundamental problem (to put it very simply) is that an LLM doesn’t actually know anything. It has no concept of facts, only patterns, so it makes a (very effective) guess at how to respond based on your input and the patterns it can match that input to. While a matched pattern might encode a specific fact relevant to the input, often the model generates a plausible but not factual response, known as a hallucination.
This is acceptable if you just want to know something fairly general and are not truly relying on the detailed output being correct - getting an AI summary of an article to decide whether it is worth reading in detail is fine, as is a generic bit of “art” - but relying on it to answer specific questions or provide specific facts is asking for trouble. At the moment, these concerns are dismissed with claims that future models, with better algorithms and more training data, will turn this around - that we are on the path to a true Artificial General Intelligence (AGI), or even Artificial Superintelligence (ASI), with talk of whether such systems might need to be given their own “human” rights. Yet in reality accuracy rates are getting worse, with no clear route to overcoming this “feature”.
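The “patterns, not facts” point above can be illustrated with a toy sketch. The model below is not a real LLM - it is a drastically simplified bigram generator over an invented three-sentence corpus - but it shows the key mechanism: output is driven purely by how often word patterns occur, with no notion of truth anywhere in the system.

```python
import random

# Invented toy corpus - purely illustrative, not real training data.
corpus = ("the capital of france is paris . "
          "the capital of france is lovely . "
          "the capital of spain is madrid .").split()

# Count which word follows which (a crude stand-in for LLM training).
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(word, length=6):
    """Continue from `word` by sampling observed continuations."""
    out = [word]
    for _ in range(length):
        options = bigrams.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# The model may well complete "the capital of france is" with
# "lovely" rather than "paris" - both are equally common patterns,
# and it has no way to prefer the factual one.
print(generate("the"))
```

Scaled up by many orders of magnitude, with far richer patterns, this is why an LLM’s output is fluent and usually plausible, but only factual when the matched pattern happens to encode a fact.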
Hyperscalers: the Power (and Water and Emissions and Waste) behind the Bubble
It is questionable whether existing software-based services are truly sustainable on a lifecycle basis, but until recently the Hyperscalers - the main source of the tech industry’s direct environmental impact through the operation of their data centres - were confident that they were on track for net zero. For example, just two years ago, Microsoft was “committed to achieving zero carbon emissions and waste by 2030” (tellingly, Microsoft has recently deleted the original web page).
But Hyperscalers (or at least their data centre divisions) exist to build and operate data centres (and hardware manufacturers exist to manufacture the kit that fills them), so in our unsustainable infinite growth economy, the only metrics they truly value are the massively increased revenue and profit that a doubling of data centre capacity will generate.
So when LLMs came along, everything changed. Gone was the expectation of steady, manageable, growth. With tens of billions of dollars flowing into LLMs and their supporting hardware and infrastructure, forecasts were revised upwards, to a doubling of global data centre capacity in just the next 5 years. There was no way the Hyperscalers were going to miss out on this bonanza, so environmental targets were unofficially ditched.
The Bubble
Why now
The AI industry is not new (it was part of my degree in the 90s), and while its use was increasing, by the start of the 2020s it was still mainly restricted to important, but relatively niche, applications. The current bubble has resulted from the combination of the development of LLMs, the tech industry’s business model, and the particular circumstances of the tech industry in 2020s.
To understand how all this came together, we first need to step back c. 20 years, to a time when the hangover from the Dotcom Bubble had recently passed, but there were not yet any oligopolistic web-based companies, many markets were badly served, and many opportunities had not even been imagined, never mind exploited. Venture capital financiers stepped back in, eventually funding thousands of start-ups, this time with a ruthless focus on identifying and pushing those that could scale (or be bought out by a larger competitor). The roll-out of fast broadband, smartphones, cloud computing and software-as-a-service more generally enabled the growth of oligopolies and even near-monopolies, particularly in social media, retail, search, and cloud services. This capture of revenues from advertising, media, retail and business-to-business by a small number of tech companies turned them into global behemoths, with their primary owners becoming some of the richest people the world has ever seen.
These companies had each built their own “moat” (consisting of network effects, strong Intellectual Property Rights (IPR), and barriers to exit from their services) that has made it very difficult for customers to leave them, and hence for competitors to challenge them. Supine governments have been unwilling to tackle the resulting oligopolies and monopolies, allowing them to lock in immense ongoing revenue and profits.
By the early 2020s, markets were saturated, and there had been few truly ground-breaking recent products (look at the hype and then failure of Virtual Reality, the Metaverse, self-driving cars, etc.). For the tech industry and the venture capitalists, these were major problems - their twin mantras of growth and bleeding-edge innovation no longer seemed to have any foundation, and their claims of exceptionalism were further undermined by evidence of their ruthless business practices (long gone, for example, was Google’s “Don’t be evil” motto).
Artificial Intelligence had long been the “nuclear fusion” of IT - a potentially game-changing but perennially delayed technology without a clear route to mass adoption. LLM AIs seemed to overcome this, by offering something intuitive that seemed to have huge potential. With the venture capitalists lacking any other major new tech growth areas, the tech industry needing to spend its vast (under-taxed, oligopolistic) profits on something, and the Hyperscalers and hardware manufacturers needing a reason to build and equip more and bigger data centres, they were all keen to boost LLM AI as the next big thing.
The alignment of these drivers has resulted in immense amounts of money and resources being consumed over the last 5 years to design, train and deploy bigger and bigger LLMs, hosted in more and larger data centres utilising more and more cutting-edge AI processors.
Why a bubble
Almost all software products benefit from massive economies of scale. Once you have made the investment in developing a product like Google Search, Facebook or SalesForce, then provided you scale your user base, the revenue from each transaction easily outweighs the cost of delivering it (a tweet, an electronic invoice, a sale on a shopping site), the initial investment is easily recouped, and the rest is pure profit.
While the largest products will still need significant data centre infrastructure to process all these transactions, this is still a tiny cost per transaction compared to the revenue generated, which is how (when allowed to operate as oligopolies) the largest tech firms generate tens of billions of dollars of profit each year.
LLMs operate fundamentally differently. They require an order of magnitude more computational power to process each transaction than traditional software products. LLMs also need to be trained up front, with each generation requiring more and more investment to achieve that - again very different from traditional software products that just need incremental improvement.
This is why, even for expensive LLM services, losses go up the more they are used (the inverse of the traditional software business model, or indeed of any business model capable of turning a profit). There is currently no foreseeable route for the LLM AI industry to break even, never mind make a return on the ever-increasing capital being thrown at it.
Even if we were to assume that, rather than pursuing ever more bloated models, the industry focused on efficiency and hence could turn a profit on each transaction, it would still need to develop products that people and businesses would want to buy. The LLM companies claim that Agentic AI, “autonomous systems that can make decisions and perform tasks without human intervention”, could replace lots of staff while generating better outputs. Sounds great, until we consider the hallucination problem. LLMs just cannot be trusted. If an organisation relies on an LLM, and it gives the wrong price, hallucinates personal information, or provides made-up “facts”, then the organisation is going to be liable.
This is why, despite all the hype, very few actual LLM use cases have been successfully implemented by business or by the public sector. Eventually, enough investors will realise this, a tipping point will be reached, and the AI bubble will burst (the first hints that this might happen soon have started to appear).
Next: AI Bubble Impacts in an Age of Limits
We’ve had economic bubbles on and off for the best part of four centuries, so what makes the AI bubble particularly bad from a Green perspective? Sure, bubbles cause problems when they burst and a bunch of unwise investors lose their money, but we clear up the wreckage, salvage what was genuinely useful, and move on - so why worry too much about this one?
In Part 2, I will look at why the scale, timing and impacts of the AI bubble are particularly important in determining whether we reach some form of sustainability or lock in collapse. We will see that, as it is currently constituted, the AI bubble is aimed at shifting even more wealth and power into the US tech industry, and ultimately to the tech billionaires, while simultaneously making it much harder to achieve a socially just and environmentally sustainable world.