Significant development in artificial intelligence regulation
While the agreement marks a significant advance for AI regulation, it is non-binding and mostly includes general recommendations, such as monitoring AI systems for misuse, protecting data from manipulation, and vetting software vendors. Still, it is notable that so many countries have signed on to a shared set of principles for securing artificial intelligence systems.
US Cybersecurity and Infrastructure Security Agency Director Jen Easterly said in a statement that the guidelines represent “an agreement that security is the most important thing to do at the design stage.”
The 18 countries that signed the new guidelines include, in addition to the USA and the UK, Germany, Italy, South Korea, Norway, Japan, France, Canada, New Zealand, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore. The agreement addresses questions about how to prevent AI technology from being compromised by hackers and includes recommendations such as releasing models only after appropriate security testing.
It does not address thornier questions about appropriate uses of AI or how the data that fuels these models is collected. Europe, however, is ahead of the USA on artificial intelligence regulation, and lawmakers there are currently preparing AI rules. France, Germany and Italy also recently reached an agreement on how AI should be regulated, supporting “mandatory self-regulation through codes of conduct” for AI models.