The EU AI Act has been officially approved by a vote in the European Parliament, in a move that will radically affect how AI is used not only in Europe but around the world.
Covering everyone from developers to deployers, the act will establish the “first legal framework on AI”, giving EU regulators and oversight bodies greater powers of governance and scrutiny.
While other countries and regions have been flirting with regulatory possibilities, the EU has taken the step and, in doing so, has set an important precedent for organizations globally, according to Enza Iannopollo, principal analyst at Forrester.
“The adoption of the AI Act marks the beginning of a new era of AI and its importance cannot be overstated,” Iannopollo said.
The act works under what regulators call a “risk-based approach,” which requires AI systems to be assessed as minimal, limited, or high risk. To be high risk, an AI system must pose a threat to safety, livelihoods, or fundamental rights.
Systems that exceed even these categories can be deemed to present “unacceptable risks” and are therefore prohibited outright.
This puts enterprise-level AI users and developers firmly in the crosshairs, as the EU clearly identifies areas of employee recruitment, credit scoring, financial services and critical infrastructure as potentially high-risk applications.
High-risk platforms will face the highest level of scrutiny stipulated under the law before they can hit the open market.
They will need to undergo appropriate risk and compliance assessments, ensure the highest quality of training data sets to minimize risk and discrimination, and activity will need to be recorded clearly and consistently to ensure traceability.
The EU itself is also entitled to detailed documentation to complete compliance assessments, while companies implementing AI will be able to access “clear and adequate” information about the systems they use.
“The goal is to allow institutions to exploit AI fully, in a more secure, reliable and inclusive way,” Iannopollo said.
EU AI Act: How will companies be affected?
Once businesses have absorbed the impact of the earlier-than-expected vote (originally scheduled for a month later), they should be looking to ensure a clear path to compliance with the new rules.
Broadly speaking, the act will impact AI developers at the source by ensuring that high-risk systems meet strict transparency requirements and guidelines.
However, big-name AI companies like OpenAI will obviously be held accountable, and by extension, any enterprise-level customers deploying tools based on other companies' models will also have to bow to increased scrutiny.
Although many organizations are unlikely to use AI systems subject to an outright ban, Gartner analyst Nader Henein told ITPro, the prospect of a €35 million fine will surely make many companies nervous.
“Someone is going to have to stand up in front of the board and say we are not in the line of fire, because the size of the liability is simply unacceptable,” Henein told ITPro.
What will be most important for organizations to understand, Henein added, is the weight of their AI-use decisions. Whether or not a company builds AI itself, it will still retain some level of responsibility for its use.
“Where organizations have a bit of a misunderstanding is that not only are you responsible for the AI you build, you're also responsible for the capabilities you buy,” Henein said.
“The implementer has a lot of responsibility; ultimately, you are building this into your organization and letting it loose on your data,” he added.
What companies can do to prepare
Henein highlighted several issues that companies should take into account in light of the act.
The first is speed. Unlike the GDPR, which gave organizations roughly two years to comply, the EU AI Act will enforce its rules on prohibited AI systems after only six months.
Organizations will have to get a handle on regulations “pretty quickly,” Henein said, and be aware of their timelines.
Companies should also evaluate two key aspects of their AI use: a discovery process for existing systems, and a planning process for future adoption.
Henein pointed out that countless vendors have been incorporating AI capabilities into their SaaS and on-premises platforms for quite some time without organizations “asking questions.”
Now, these companies will have to work out what their current use of AI through those platforms actually consists of, even if this is the first time they have been aware of it.
At the same time, companies will need to be careful about the future adoption of AI and ensure that it complies with new guidelines.
“You need to put controls in place… to make sure you understand when AI-driven capabilities are adopted and evaluate them accordingly,” Henein said.
How will the EU AI Act affect UK businesses?
Given its scope, the mandate of the EU AI Act will not be confined to the EU’s physical territory, and it will inevitably force companies outside Europe to reach a level of compliance, Iannopollo said.
“The extraterritorial effect of the regulations, the heavy fines, and the pervasiveness of the requirements throughout the ‘AI value chain’ mean that most global organizations using AI must (and will) comply with the act,” she said.
The United Kingdom will not be spared from this reality. Any UK company seeking to do business internationally, Iannopollo said, will have to comply with the EU act in the same way as its counterparts in the United States or Asia.
UK businesses might have felt the impact less suddenly had the Conservative government managed to advance its own AI legislation.
There is currently an AI Bill in the UK, due for its second reading in March, although the industry has voiced the need for a faster process.
Even OpenAI investor Microsoft has called for stronger regulation in the UK, expressing the need for clearer safety frameworks.
“Despite the aspiration to become the 'center of AI regulation', the UK has so far produced little in terms of effectively mitigating AI risks,” Iannopollo said.
“For a start, UK businesses will have to deal with two very different regulatory environments.”
“Over time, at least some of the work that UK companies do to comply with the EU AI Act will become part of their overall AI governance strategy, regardless of UK-specific requirements, or the lack thereof.”
The United States has been similarly slow compared to the EU, with only a blueprint for an AI bill of rights intended to guide AI policy rather than provide enforceable legal barriers.
“Whether we like it or not, with this regulation the EU sets the ‘de facto’ standard for trustworthy AI, risk mitigation and responsible AI. All other regions can only catch up,” Iannopollo added.