U.S. AI Safety Institute Signs Agreements with Anthropic and OpenAI on AI Safety Research, Testing, and Evaluation
Gaithersburg, Maryland — Today, the U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements with Anthropic and OpenAI that enable formal collaboration on AI safety research, testing, and evaluation.
Each company’s Memorandum of Understanding establishes a framework for the U.S. AI Safety Institute to receive access to major new models from that company, both before and after their public release. The agreements will enable collaborative research on how to evaluate capabilities and mitigate safety risks.
“Safety is the lifeblood of revolutionary technical progress. Now that these agreements are in place, we are excited to start our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “While these agreements are only the beginning, they represent a significant turning point in our efforts to help responsibly steward AI’s future.”
Working closely with its partners at the U.K. AI Safety Institute, the U.S. AI Safety Institute also plans to provide Anthropic and OpenAI with feedback on potential safety improvements to their models.
The U.S. AI Safety Institute builds on NIST’s more than 120-year legacy of advancing measurement science, technology, standards, and related tools. Evaluations conducted under these agreements will further NIST’s work on AI by facilitating deep collaboration and exploratory research on advanced AI systems across a range of risk areas.
Building on the Biden-Harris administration’s Executive Order on AI and the voluntary commitments made to the administration by leading AI model developers, evaluations conducted under these agreements will help advance the safe, secure, and trustworthy development and use of AI.
About the U.S. AI Safety Institute
The U.S. AI Safety Institute, housed within the Department of Commerce at the National Institute of Standards and Technology (NIST), was established following the Biden-Harris administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Its mission is to advance the science of AI safety and mitigate the risks posed by advanced AI systems. It is tasked with developing the testing, evaluations, and guidelines that will accelerate the safe development of artificial intelligence in the United States and around the world.