The AI Summit was a promising start – but momentum must be maintained

November 09, 2023

The UK’s much anticipated AI Safety Summit saw twenty-eight governments reach a consensus that risks posed by systems on the ‘frontier’ of general purpose AI need to be addressed.

Given the frenetic pace of AI development, and the huge resources behind it, this consensus is much-needed progress. But it is just the first step.  

Billions of dollars have been invested in creating AI systems such as OpenAI’s GPT-4, which can serve as a coding assistant, or draft emails and essays. Without appropriate safeguards, however, such a system can also tell you how to build a bomb from household materials.

Experts fear that future iterations might be capable of aiding bad actors in large-scale cyberattacks, or designing chemical weapons.

GPT-4 is perhaps not overly concerning at the moment. But if we consider the impressive leap in capability from its predecessor to the current model, and project that trajectory forward, things start to feel scary.

The techniques underlying AI have been shown to scale: more data and computing resources applied to bigger models yield ever more capable AI. With more money and better techniques, we will continue to see rapid advances.

However, these AI systems are often opaque and unpredictable. New iterations have unexpected abilities that are sometimes uncovered only months after release.

Companies like Google DeepMind and OpenAI are testing and designing safeguards for their models, but not every company is putting in the same degree of work, and it’s unclear if even the most safety-conscious actors are doing enough.

Just before the Summit, the UK government released an ambitious 42-point outline for best practice policies that frontier AI companies should be following. I was part of a team of researchers that conducted a rapid review of whether the six biggest AI companies met these standards.

While all companies were committed to research on AI safety, none met all the standards, with Meta and Amazon getting lower ‘safety grades’. There were several best practices that no company met, including prepared responses to worst-case scenarios, and external scrutiny of the datasets used to train AI. 

With technology this powerful, we cannot rely on voluntary self-regulation. National bodies and frameworks will be vital, especially in the countries housing frontier AI developers.

Regulators need expertise and the power to monitor and intervene in AI – not just approving systems for release, but overseeing each stage of development, as happens with new medicines.

International governance is equally important. AI is global: from world-spanning semiconductor supply chains and data needs, to transnational use of frontier models. Meaningful governance of these systems requires domestic and international regulators working in tandem.

Source: University of Cambridge
