UK and US agree to collaborate on developing safety tests for AI models

The UK government has formally agreed to work with the United States on developing tests for advanced artificial intelligence models. The Memorandum of Understanding, which is a non-legally binding agreement, was signed on April 1, 2024 by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo (Figure A).

Figure A

US Commerce Secretary Gina Raimondo (left) and UK Technology Secretary Michelle Donelan (right). Image: UK Government

Both countries will now “align their scientific approaches” and work together to “rapidly accelerate and iterate robust sets of assessments for AI models, systems and agents.” This action is being taken to uphold commitments made at the first-ever Global AI Safety Summit last November, where governments around the world accepted their role in safety testing the next generation of AI models.

What AI initiatives have the UK and US agreed on?

With the MoU, the UK and US have agreed on how they will build a common approach to AI safety testing and share their developments with each other. Specifically, this will involve:

  • Developing a shared process to evaluate the safety of AI models.
  • Conducting at least one joint testing exercise on a publicly accessible model.
  • Collaborating on technical AI safety research, both to advance collective knowledge of AI models and to ensure new policies are aligned.
  • Exchanging personnel between the respective institutes.
  • Sharing information about all activities carried out at the respective institutes.
  • Working with other governments on developing AI standards, including safety.

“Through our collaboration, our institutes will gain a better understanding of artificial intelligence systems, conduct more robust evaluations, and issue more rigorous guidance,” Secretary Raimondo said in a statement.

SEE: Learn how to use AI for your business (TechRepublic Academy)

The MoU primarily relates to advancing the plans drawn up by the AI Safety Institutes (AISIs) in the UK and US. The UK's research facility was launched at the AI Safety Summit with three main objectives: evaluating existing AI systems, conducting foundational AI safety research, and sharing information with other national and international actors. Companies including OpenAI, Meta and Microsoft have agreed for their latest generative AI models to be independently reviewed by the UK AISI.

Similarly, the US AISI, formally established by NIST in February 2024, was created to work on the priority actions outlined in the AI Executive Order issued in October 2023; these actions include developing standards for the safety and security of AI systems. The US AISI is supported by an AI Safety Institute Consortium, whose members include Meta, OpenAI, NVIDIA, Google, Amazon and Microsoft.

Will this lead to regulation of AI companies?

While neither the UK nor the US AISI is a regulatory body, the results of their combined research are likely to inform future policy changes. According to the UK government, its AISI “will provide critical insights for our governance regime,” while the US facility will “develop technical guidance that will be used by regulators.”

The European Union is arguably still one step ahead, as its landmark AI Act was approved by the European Parliament on March 13, 2024. The legislation outlines measures designed to ensure that AI is used safely and ethically, including rules covering facial recognition and transparency, among other AI standards.

SEE: Most cybersecurity professionals expect AI to impact their jobs

Most of the big tech players, including OpenAI, Google, Microsoft and Anthropic, are based in the US, where there are currently no strict regulations that could restrict their AI activities. The October Executive Order provides guidance on the use and regulation of AI, and positive steps have been taken since its signing; however, it is not legislation. The AI Risk Management Framework finalized by NIST in January 2023 is also voluntary.

In fact, these big tech companies are largely responsible for regulating themselves, and last year they launched the Frontier Model Forum to establish their own “guardrails” to mitigate AI risk.

What do AI and legal experts think about safety testing?

AI regulation should be a priority

The formation of the UK's AISI was not a universally popular way to control AI in the country. In February, the chief executive of Faculty AI, a company involved with the institute, said that developing robust standards may be a more prudent use of government resources than trying to vet every AI model.

“I think it's important that you set standards for the rest of the world, rather than trying to do everything yourself,” Marc Warner told The Guardian.

Tech law experts hold a similar view when it comes to this week's MoU. “Ideally, countries' efforts would be much better spent developing strict regulations rather than research,” Aron Solomon, legal analyst and chief strategy officer at legal marketing agency Amplify, told TechRepublic in an email.

“But the problem is this: few legislators (I would say, especially in the US Congress) have a deep enough understanding of AI to regulate it.”

Solomon added: “We should be exiting rather than entering a period of necessary deep study, in which policymakers truly collectively understand how AI works and how it will be used in the future. But, as highlighted by the recent US debacle in which lawmakers attempted to ban TikTok, they, as a group, don't understand the technology, so they are not well positioned to regulate it intelligently.

“This leaves us in the difficult situation we find ourselves in today. AI is evolving much faster than regulators can regulate. But to defer regulation in favor of anything else right now is to delay the inevitable.”

In fact, as the capabilities of AI models constantly change and expand, the safety tests conducted by the two institutes will need to do the same. “Some bad actors may try to bypass testing or misapply dual-use AI capabilities,” Christoph Cemper, CEO of prompt management platform AIPRM, told TechRepublic in an email. Dual use refers to technologies that can be used for both peaceful and hostile purposes.

Cemper said: “While testing may highlight technical security issues, it does not replace the need for guidance on ethical, policy and governance issues… Ideally, the two governments will view testing as the initial phase of an ongoing collaborative process.”

SEE: Generative AI may increase the global ransomware threat, according to a National Cyber Security Centre study

Research needed for effective AI regulation

While voluntary guidelines may not be enough to prompt real change in the activities of tech giants, hardline legislation could stifle progress in AI if it is not properly considered, according to Dr. Kjell Carlsson.

The former ML/AI analyst and current head of strategy at Domino Data Lab told TechRepublic in an email: “Today, there are areas related to AI where harm is a real and growing threat. These are areas such as fraud and cybercrime, where regulation usually exists but is ineffective.

“Unfortunately, few of the proposed AI regulations, such as the EU AI Act, are designed to effectively address these threats, as they mainly focus on commercial AI offerings that are not used by criminals. As such, many of these regulatory efforts will harm innovation and increase costs, while doing little to improve actual safety.”

Many experts therefore think that prioritizing research and collaboration is more effective than rushing through regulations in the UK and US.

Dr Carlsson said: “Regulation works when it comes to preventing established harms in known use cases. Today, however, most use cases for AI have yet to be discovered and almost all of the harm is hypothetical. Instead, there is an incredible need to research how to effectively test, mitigate risk, and ensure the safety of AI models.

“As such, the establishment and funding of these new AI Safety Institutes, and these international collaborative efforts, are an excellent public investment, not only for ensuring safety, but also for fostering the competitiveness of businesses in the US and the UK.”
