European Union member states finalized the world’s first significant law regulating artificial intelligence on Tuesday, as organizations around the world scramble to impose restrictions on the technology.
The EU Council announced that it has passed the AI Act, a pioneering piece of regulatory legislation that establishes extensive guidelines for artificial intelligence technology.
“The adoption of the AI Act is a significant milestone for the European Union,” Mathieu Michel, Belgium’s minister of state for digitization, said in a statement on Tuesday.
“With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” Michel stated.
The AI Act takes a risk-based approach to artificial intelligence, meaning that different applications of the technology are regulated differently according to the level of risk they pose to society.
The law outright bans AI applications whose risk is deemed “unacceptable.” These include so-called “social scoring” systems that rank people based on aggregated and analyzed data, predictive policing, and emotion recognition in the workplace and schools.
High-risk AI systems include autonomous vehicles and medical devices, which are assessed based on the risks they pose to people’s health, safety, and fundamental rights. The category also covers AI applications in finance and education, where algorithms may embed bias.
US Big Tech Firms in the Spotlight
According to Matthew Holman, a lawyer at law firm Cripps, the rules will have far-reaching repercussions for any person or company developing, deploying, using, or reselling AI in the EU, with U.S. tech firms particularly affected.
“The EU AI Act is unlike any law anywhere else on earth,” Holman stated. “It creates for the first time a detailed regulatory regime for AI.”
“U.S. tech giants have been watching this developing law closely,” Holman stated. “There has been a lot of funding into public-facing generative AI systems which will need to ensure compliance with the new law that is, in some places, quite onerous.”
The European Commission will have the authority to fine companies that violate the AI Act up to 35 million euros ($38 million) or 7% of their annual global revenue, whichever is higher.
The change in EU law follows OpenAI’s November 2022 launch of ChatGPT. Officials recognized at the time that existing regulation lacked the depth required to address the sophisticated capabilities of emerging generative AI technology and the risks associated with the use of copyrighted material.
A Long Path to Implementation
The law places tight constraints on generative AI systems, which the EU refers to as “general-purpose” AI. These include requirements to comply with EU copyright law, transparency disclosures about how the models are trained, routine testing, and adequate cybersecurity protections.
However, Dessi Savova, a partner at Clifford Chance, predicts that it will be some time before these regulations become effective. The restrictions on general-purpose systems will not be implemented until 12 months after the AI Act goes into effect.
Even then, commercially available generative AI systems, such as OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot, are granted a “transition period” of 36 months from the day the Act takes effect to bring their technology into compliance with the rules.
“Agreement has been reached on the AI Act—and that rulebook is about to become a reality,” Savova told CNBC via email. “Now, attention must turn to the effective implementation and enforcement of the AI Act.”