UK Prime Minister launches new AI Safety Institute

A new global hub based in the UK and tasked with testing the safety of emerging types of AI has been backed by leading AI companies and nations, as the world’s first AI Safety Institute launches today (2 November).

After four months of building the first team inside a G7 Government able to evaluate the risks of frontier AI models, it has been confirmed today that the Frontier AI Taskforce will evolve into the AI Safety Institute, with Ian Hogarth continuing as its Chair. The External Advisory Board for the Taskforce, made up of industry heavyweights from national security to computer science, will now advise the new global hub.

The Institute will carefully test new types of frontier AI before and after release to address their potentially harmful capabilities, exploring the full range of risks, from social harms such as bias and misinformation to the most unlikely but extreme risks, such as humanity losing control of AI completely. In undertaking this research, the AI Safety Institute will work closely with the Alan Turing Institute, the national institute for data science and AI.

In launching the AI Safety Institute, the UK is continuing to cement its position as a world leader in AI safety, working to develop the most advanced AI protections of any country and giving the British people peace of mind that the countless benefits of AI can be safely captured for generations to come.

World leaders and major AI companies have today expressed their support for the Institute as the world’s first AI Safety Summit concludes. From Japan and Canada to OpenAI and DeepMind, the collective backing of key players will strengthen international collaboration on the safe development of frontier AI – putting the UK in prime position to become the home of AI safety and lead the world in seizing its enormous benefits.

Leading researchers at the Alan Turing Institute and Imperial College London have also welcomed the Institute’s launch, alongside representatives of the tech sector in TechUK and the Startup Coalition.

Already, the UK has agreed two partnerships to collaborate on AI safety testing, with the US AI Safety Institute and with the Government of Singapore, two of the world’s biggest AI powers.

As well as deepening the UK’s stake and influence in this transformative technology, the Institute will advance the world’s knowledge of AI safety, with the Prime Minister committing to invest in its safe development for the rest of the decade as part of the Government’s record investment in R&D.

The launch of the AI Safety Institute marks the UK’s contribution to the collaboration on AI safety testing agreed by world leaders and the companies developing frontier AI at a session in Bletchley Park this afternoon.

New details revealed today, as governments from across the globe gathered for a second day of talks, set out the body’s mission: to prevent surprise to the UK and humanity from rapid and unexpected advances in AI. Ahead of powerful new models expected to be released next year, whose capabilities may not be fully understood, its first task will be to quickly put in place the processes and systems to test them before they launch, including open-source models.

From research that informs UK and international policymaking to technical tools for governance and regulation, such as the ability to analyse the data used to train these systems for bias, the Institute’s work will see the government take action to make sure AI developers are not marking their own homework when it comes to safety.

Researchers are already in place to lead the Institute’s work and will be given access to the compute needed to support it. This includes the new AI Research Resource, an expanding £300 million network that will include some of Europe’s largest supercomputers and increase the UK’s AI compute capacity thirtyfold.

It follows the UK Government’s announcement yesterday of additional investment in Bristol’s “Isambard-AI” and a new computer called “Dawn” in Cambridge, which researchers will be able to access simultaneously to advance their work on AI safety. The AI Safety Institute will have priority access to these cutting-edge supercomputers to help develop its programme of research into the safety of frontier AI models and to support government with this analysis.

It comes as government representatives were joined earlier today by the CEOs of leading AI companies and a number of civil society leaders to discuss the year ahead and consider what immediate steps are needed, by countries, companies, and other stakeholders, to ensure the safety of frontier AI.

As the final day of talks comes to a close at Bletchley Park, the AI Safety Summit has already laid the foundations for frontier AI safety to be an enduring international discussion, with South Korea set to host next year.