At the Seoul AI Safety Summit on Tuesday, major technology companies including Microsoft, Amazon, and OpenAI reached a landmark international agreement on AI safety.
As part of the agreement, firms from several nations, including the United States, China, Canada, the United Kingdom, France, South Korea, and the United Arab Emirates, will make voluntary commitments to ensure the safe development of their most advanced AI models.
Where they have not already done so, AI model developers have committed to publishing safety frameworks outlining how they will measure the risks of their frontier models, such as preventing bad actors from misusing the technology.
These frameworks will include “red lines” for tech firms, defining the kinds of risks posed by frontier AI systems that would be deemed “intolerable.” Such risks include, but are not limited to, automated cyberattacks and the threat of biological weapons.
To respond to such extreme scenarios, the firms said they plan to implement a “kill switch” that would halt development of their AI models if they cannot guarantee mitigation of those risks.
“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” Rishi Sunak, the Prime Minister of the United Kingdom, said in a statement on Tuesday.
“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” he added.
Tuesday’s agreement builds on commitments made last November by companies developing generative AI software.
The companies have pledged to take feedback on these thresholds from “trusted actors,” including their home governments as appropriate, before publishing them ahead of the next planned AI summit, the AI Action Summit in France, in early 2025.
The agreements reached on Tuesday apply only to so-called frontier models. The term refers to the technology underpinning generative AI systems such as OpenAI’s GPT family of large language models, which powers the popular ChatGPT chatbot.
Since ChatGPT was first released to the public in November 2022, regulators and technology leaders have grown increasingly concerned about the risks posed by powerful AI systems capable of generating text and visual content on par with, or better than, what humans can produce.
The European Union has sought to rein in unrestricted AI development with its AI Act, which the EU Council approved on Tuesday.
The United Kingdom, by contrast, has not proposed formal AI legislation, opting instead for a “light-touch” approach in which existing regulators apply current laws to the technology.
The government recently said it would consider legislating for frontier models in the future, but has not committed to a timeline for introducing formal laws.