The United Kingdom is beginning to develop stricter legislation on generative AI, a departure from its previous light-touch approach, according to two people briefed on the plans.
According to the Financial Times, people briefed on the matter said new UK AI legislation could require companies that "develop the most sophisticated models" to share information with the government.
One of the two sources said this legislation may require companies that develop models to share their algorithms with the government and provide evidence of their safety testing practices.
They added that the Department for Science, Innovation and Technology (DSIT) is "developing its thinking" on legislation related to AI and generative AI, while the other said the impending rules would apply to the large language models (LLMs) behind applications, rather than the applications themselves.
Reports of the changes in approach contrast sharply with the UK government's stance on AI regulation so far, which has not set out hard and fast requirements for AI companies to share model details.
In particular, Prime Minister Rishi Sunak confidently backed a pro-innovation approach to AI in October 2023, saying the government would not "rush to regulate."
Many in the industry, including technology powerhouse Microsoft, have since called for more clarity and scope in the UK's regulatory framework for AI. These tentative developments, although unconfirmed, could be a step toward easing industry concerns.
"After months of moving back and forth on a position to legislate AI, it will be a relief for many businesses to finally have clarity on where the UK is heading," Matt Worsfold, partner at Ashurst Risk Advisory, told ITPro.
Changes to UK AI legislation have not been officially confirmed.
A departure from the UK's AI plans to date
The UK has been reluctant to clamp down on AI, citing the sector's agility as central to its approach. Michelle Donelan, Secretary of State for Science, Innovation and Technology, has previously stated that the UK government's aim is to ensure that unnecessary barriers are not placed on businesses and that the UK remains a "world leader in both safety and in AI development."
There has been disagreement within parliament over the speed and intensity of AI regulation, with a House of Lords committee suggesting that the UK could miss out on the 'AI gold rush' if it regulated too heavily or acted too cautiously.
An AI Regulation Bill, currently at committee stage in the House of Lords, would lead to the introduction of regulatory principles in the areas of safety, transparency, and governance, as well as the creation of a body called the "AI Authority".
The UK government's most substantial statement on generative AI to date came in the form of its AI white paper, A pro-innovation approach to AI regulation, published in August 2023. This paper suggested a sector-specific approach rather than a general legislative framework for the technology, while recognizing that new AI regulation may be necessary in the future.
The proposed changes, if made, would therefore mark a shift towards a more hands-on government approach to AI in the UK. If these regulations are passed, entire AI models would come under scrutiny rather than individual use cases, and certain companies would be required to disclose proprietary model information.
How the UK handles AI regulation compared with the EU AI Act is a significant factor. Worsfold told ITPro it would be worth paying attention to whether the UK starts to mirror EU legislation.
“Given that many UK companies will be caught in the extraterritorial reach of the EU AI Act, it will be interesting to see to what extent the UK aligns or follows a principles-based approach,” Worsfold said.
The suggestion that the UK government could mandate developers of sophisticated models to share information about their algorithms certainly seems to reflect the EU's current position. The EU AI Act obliges companies to maintain transparency and guidelines around AI, especially when it comes to systems considered high risk.
Those who develop the models considered most risky to citizens' rights will be forced to carry out risk assessments, maintain a dialogue with regulators, and provide guarantees on the quality of data used to train AI. UK regulators have already kicked off research into the use of data for AI training, with the Information Commissioner's Office (ICO) set to issue guidance on the legality of generative AI training under the UK GDPR.