The UK AI Safety Summit 2023: The top 5 discussion points you need to know
By Oliver Toomey, Rachel Griffith, Hannah Duke
24 Nov 2023 | 5 minute read

At the beginning of November, one hundred political, technology and business leaders gathered at Bletchley Park, the birthplace of modern computing, for the global AI Safety Summit. The first summit of its kind, it aimed to bolster a global policy consensus on keeping AI applications safe for humans. We've compiled a list of the top discussion points in the wake of the event:
1. The 'Bletchley Declaration' is largely symbolic but signals that global leaders are prepared to take appropriate action on the potential harm posed by AI
Signed by twenty-eight countries at the Summit, including China, Saudi Arabia and India, the international accord affirms that AI should be developed and used in a manner which is 'human-centric, trustworthy and responsible'.
The central theme of the Bletchley Declaration is international cooperation on AI rules. Whilst not binding, the declaration records the signatories' common understanding of the risks posed by the most advanced 'Frontier AI': highly capable general-purpose models. It also notes particular concerns in domains such as 'cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation'.
We will be watching closely to see how the Bletchley Declaration shapes the signatories' responses to AI risks in practical terms, whether that's in the form of guidance notes, sector-based regulations or cross-cutting legislation. Either way, the declaration could be said to provide a basic framework for accountability, and the increased global scrutiny that it represents provides an opportunity for businesses and organisations to consider their existing and potential AI applications.
2. The UK's pro-innovation approach to AI regulation is complex but if effectively navigated presents significant opportunities to UK businesses
Organisations have an opportunity to harness the power of new AI applications to, for example, improve operational efficiency, deliver effective training or fine-tune quality control functions. In the UK, the government has published a White Paper which proposes a uniquely 'pro-innovation' policy framework governing the 'use' of AI based on five overarching principles:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
While some businesses have understandably called for greater clarity on the UK's position, these publications suggest that the UK intends to take an iterative approach, and will avoid heavy-handed legislation which could stifle innovation. Regulators also plan to issue practical guidance, tools and resources (including risk assessment templates) to organisations over the next year to help them implement the five principles, and legislation may be introduced to ensure regulators consider the principles in a consistent way.
The government's stance remains at odds with that of the EU, which continues to push forward the highly comprehensive EU AI Act targeting specific and discrete use cases of AI. The US, on the other hand, made headlines by publishing a sweeping new Executive Order (EO) on AI safety, which directs federal agencies to set new standards on AI safety and security. As the UK tests and iterates its own approach over the coming months and years, and regulators such as the ICO and CMA respond by issuing their own guidance and tools, the regulatory environment looks set to become more complex.
We continue to monitor developments closely, and are well placed to act as a sounding board for businesses looking to navigate this complicated and developing regulatory landscape. We can also advise on how the current UK regulatory framework deals with the immediate challenges posed by AI models. In particular, the UK GDPR already governs the use of personal data by AI, and businesses need to ensure, and be able to show, that an AI model's decision-making processes do not discriminate against individuals in breach of the Equality Act 2010.
3. Building the UK's most powerful new supercomputer in Bristol is a milestone for the South West's technology sector
On day one of the Summit, the UK Department for Science, Innovation and Technology announced that it was tripling the £100m investment outlined in the 2023 spring budget to deliver a dedicated UK AI Research Resource (AIRR). The AIRR will be a new national supercomputer research facility comprising a cluster of advanced computers for AI research, including the UK's fastest supercomputer – the 'Isambard-AI' – which will be based at the National Composites Centre in Bristol. The investment will also be used to connect Isambard-AI to a newly announced Cambridge supercomputer called 'Dawn'.
This represents a huge leap forward in AI computational power in the UK. When it comes online later in 2024, Isambard-AI will be one of the most powerful AI systems open to science and new technology anywhere. Its presence in Bristol will bring world-class innovators, academics, researchers and cutting-edge technology to the South West, a talent influx which will cement the city's status as the leading UK tech 'hub' outside London.
4. "Marking their own homework": leading AI developers have agreed to work with Governments on testing new frontier models
At the Summit, Rishi Sunak launched a new 'AI Safety Institute' which will test the safety of frontier AI models before and after they are released, and analyse risks from social harms such as misinformation or loss of control. This comes ahead of the expected release of further powerful frontier models in late 2024, including 'Gemini', Google's next-generation large language model that will enable users to develop apps using natural language prompts, and GPT-5, OpenAI's rumoured successor to GPT-4. The AI Safety Institute will be given priority access to the compute required to support its analysis and research via the AIRR network. Clearly, this signals that whilst AI applications are already ubiquitous in businesses and society, new frontier models are less predictable and pose a much greater risk of harm.
5. The UK remains an attractive jurisdiction in which to develop AI capabilities
There is concern amongst AI actors that global regulatory overreach could suppress some of the huge potential benefits of AI. Whilst uncertainty in the UK's regime can make it difficult for businesses to manage AI compliance, the overall principle of UK AI governance is pro-innovation, and as such the UK remains the jurisdiction of choice for many AI actors, developers and researchers. The UK also hosts several important AI companies (in particular, Google DeepMind), excellent universities supported by a new £118m 'AI skills package', and new best-in-class AI computing infrastructure. This combination of factors means the UK remains an attractive location for businesses to innovate their AI capabilities in a fast-changing landscape.
The regulatory and commercial issues posed by AI are complex and fast-changing, whilst offering unprecedented and immediate opportunities for businesses to transform their operations. Our specialist lawyers can support you in navigating these challenges. Please get in touch with us to find out more about how we can assist.