Let's get real about Britain's AI status
Britain is already a world leader in artificial intelligence. Sort of.
It has the third most developed start-up ecosystem for AI. The UK secured more investment last year than the whole of the EU combined. British universities are surpassed only by American and Chinese institutions for their research in this critical domain.
But matching the US and China pound for pound isn’t feasible. Over the past decade China has had nearly five times more private investment in AI than the UK; the US, 15 times more.
And the purchasing power of their companies in the AI race is at another level: Meta is planning to purchase 350,000 NVIDIA AI chips – 70 times the UK Government’s entire order. Four Chinese companies (Baidu, ByteDance, Tencent and Alibaba) have each placed orders for AI chips larger than the UK’s entire ten-year semiconductor budget.
As we’ve seen with Rishi Sunak’s tiggerish enthusiasm for AI, the foundations and political will are there to grow even further. But the UK risks losing ground, even to more direct competitors such as France, which recently overtook the UK as the European leader for private investment into generative AI start-ups.
For Britain’s rhetoric to match reality, two key things are essential: data and compute. Britain has the third largest data pool, but it is growing at nearly half the speed of Germany and France. And the compute available in Britain has slumped from third to tenth globally over the last two decades.
In our last Future Frontiers post we argued that we can’t lead in all aspects of every frontier technology, and that we need to make decisions on the public goods we supply and the private enterprise we enable. AI is no exception.
1. Finding a niche
Let’s start with AI safety, an area the Prime Minister has brought into the spotlight. Unless AI is trusted, any advancements will be curtailed. And clearly AI could severely impact our national security in the wrong hands. But the direct economic benefits of leading on safety are weak.
Those benefits hinge on the emergence of a safety market, in which companies develop tools and processes to evaluate AI models. That market is unlikely to emerge, as many AI developers already have in-house safety evaluation capabilities.
The Government’s hopes of building expertise through the AI Safety Institute risk scuppering any private market. Ministers mandating safety evaluations for a wide range of AI models would also risk a chilling effect on business investment.
Safety evaluations are not a wise use of public resources. Evaluations are complicated and there is no consensus on how to best capture the complexity of AI capabilities. Nor is the AI Safety Institute best placed to evaluate models.
Instead the AI Safety Institute should focus on drawing up and leading on global standards, along with conducting basic AI research.
Noble work, but not particularly lucrative.
So what should Britain’s niche be? The UK should lead on foundation models: highly capable AI systems trained on extensive data. They underpin general-purpose services like ChatGPT, as well as more specialised applications in drug discovery, automated vehicles, robotics and video games.
Foundation models have huge economic potential. They’re already thought to be saving one billion years of research time by predicting protein structures and discovering new materials for next-generation electronics.
And one model can serve as the backbone for hundreds of other AI models tailored to different tasks – a domino effect of innovation.
This is where the UK can develop an edge. Though the UK’s eight foundation models are dwarfed by the US total, it remains a European leader – in 2023 it was home to more than Germany, France and Spain combined.1 And while there will inevitably be a small set of dominant, proprietary foundation models, having a diverse set of smaller foundation models is vital to seize AI's potential.
Foundation models also propel another British strength: generative AI, which creates novel content like text, images, videos and music. The US has by far the largest number of start-ups in this area, with 240. But again, the UK is far ahead of other European countries, with 36 start-ups.
What are the barriers?
Britain has an advantage on generative AI, but other nations are catching up.
France overtook the UK last year as Europe’s top destination for venture capital funding in generative AI start-ups with Germany narrowly behind, driven in large part by Paris-based Mistral and Germany’s Aleph Alpha.2
Britain therefore needs more home-grown foundation models. The more models are built, the greater the opportunity for AI-powered advancements. Any country relying on a small handful of foundation models risks constraining innovation: each model has its own inherent shortcomings, which cap what can be built on top of it. To put it simply: more models, fewer boundaries.
To do this, we must break down the costly barriers AI builders face. The cost of training and running AI models is extraordinarily high. OpenAI’s GPT-4 model powering ChatGPT cost close to $100 million to train, and large foundation model builders can spend 80% of their capital on compute resources alone.
Once you consider the cost of experimenting with models, both the successes and failures, the true price tag is likely even higher. Tackling access to data and access to compute is crucial for reducing costs.
2. Investing in public goods
Digging out the data
If there’s one thing that requires urgent Government attention, it’s data – the fuel of AI and an increasingly competitive global asset. Yet the world is on track to exhaust high-quality language data before 2026.
Data access could also become more restrictive, making AI development more burdensome. Copyright concerns, and potential changes to the law, could tighten access to readily available data.
Britain has an enormous advantage on data, with the third largest data pool in the world. But its growth is among the slowest – French and German data pools are growing at nearly twice the speed. Other nations are doing more to unlock data, with the EU launching its Common European Data Spaces and America its National AI Research Resource.
Crucially, Britain is sitting on a goldmine of unique data sources, like the UK Biobank and NHS data. Yet much of it remains illegible to AI, despite the Government’s promise to improve it.
DeepMind attempted to embed AI across the NHS, but the project was later scaled down to a task management app due to data issues. “You can’t generate an AI recommendation from data held on pen and paper”, DeepMind explained, before the app was eventually abandoned.
The Government isn’t yet doing enough to tackle the problem. The AI Research Resource (AIRR) is focused on building supercomputers while overlooking data. Unique, well-curated data is key to a competitive edge. Our recommendation is simple, but potentially transformative.
The Government should establish a British Library for Data – a centralised, secure platform to collate high-quality data for scientists and start-ups.
The library should work with public services to make their data AI-ready and bring Government-held datasets together. It should include language and multimodal data with robust privacy-preserving mechanisms.
The library should be open to contributions from archives, universities, and private companies. Starting with NHS data, the library would create a potent resource for AI advancement, particularly for new powerful, tailored foundation models.
The UK does not compute
At the same time, the Government must focus on compute – the computational power to build and run AI models.
We’re not only failing to keep pace, but falling far behind. Britain has plummeted down the global league tables on compute, from third in the world in 2005 to tenth in 2022.
The Government knows this is a problem. It has committed £1.5 billion to improve the public provision of compute resources for researchers and industry.
However, the powerful exascale computer, absorbing £900 million of this fund, sits outside the AIRR and will be focused primarily on broader modelling and simulation projects, not AI-specific workloads.3
The problem isn’t just computational power, but access to it. British start-ups say access to cost-effective compute is a critical barrier. And the number of UK university-based research teams working on large-scale AI models has dropped from around ten in the mid-2010s to none by 2022.
Despite this problem, the UK’s efforts to improve access to compute have been pretty underwhelming. Since the AIRR’s launch over a year ago, it’s unclear how the UK’s AI sector will be able to access the Government’s new supercomputers.
Only in the last Budget did Jeremy Hunt announce the Government’s intention to publish a plan on how compute access for researchers and “innovative companies” will be managed. But a realistic implementation plan is still needed.
The Government should launch a new ‘AI Catalyst’ scheme to streamline access to national compute resources.
The scheme would provide start-ups and researchers with a one-stop platform to view the real-time availability of, and request access to, compute resources across the Government’s supercomputers.
The AIRR should give start-ups and scientists a clear pathway to access national supercomputers, such as Isambard-AI and Dawn. It should build on the US National AI Research Resource, which provides researchers with access to national compute resources, and the EU’s AI Factories – but improve on their implementation plans to give users the confidence of sustained support.
While part of the AIRR offering, access to the AI Catalyst would be administered by each supercomputer facility based on clear criteria that start-ups and researchers would need to meet.
Creating this AI Catalyst would signal that the UK is serious about supporting the growth of start-ups, incentivise more AI-first companies, and bolster investors’ confidence. It would also improve the research output and international standing of UK research institutions.
3. Unlocking private enterprise
What politicians definitely shouldn’t do
The biggest risk to Britain’s AI hopes is a knee-jerk regulatory reaction from politicians. So far the Government has struck the right balance, taking a liberal approach to AI regulation that doesn’t stifle innovation.
Last year’s AI White Paper carefully focused on empowering existing regulators to issue guidance and regulate the use of AI within their industries; it didn’t argue for creating new bodies. Because of this approach, Britain can attract people who find the more restrictive regulatory regimes in America and the EU too burdensome to comply with.
But there have been worrying signals that the UK’s pro-business approach is wobbling. The Government's response to the White Paper hinted at potential binding regulation of foundation models, with reports that legislation with more stringent controls may be coming.
That would be a major mistake. Instead the UK should continue to pursue a streamlined approach to AI regulation, fostering investment and innovation. It should look to seize on its competitive advantage to be more agile than the US and the EU.
What next?
AI is one of the Government’s five priority technologies in which the UK can clearly gain strategic advantage. By unlocking its data wealth and expanding access to compute, Britain has a real opportunity to lead on AI, with a laser-focus on foundation models to power the AI sector.
But that relies on politicians refraining from heavy-handed regulation and investing in public goods where necessary. Next up from Onward’s Future Frontiers project, we’ll be looking at quantum technologies and where the UK’s focus should be.
1. A model is linked to a specific country if any of the authors of the paper introducing it is affiliated with an institution based in that country. When authors of a model are from multiple countries, the model can be counted several times.
2. The graph shows only VC funding. The displayed funding raised by Aleph Alpha excludes a $375 million grant.
3. According to conversations with those working on this project.