Artificial intelligence will be safer
According to the document released by the White House, tech firms have agreed to eight proposed measures regarding safety, security and social responsibility. These measures include allowing independent experts to test AI models for potentially harmful behavior, investing in cybersecurity, and encouraging third parties to report vulnerabilities. Meanwhile, Microsoft (with Bing), Google (with Bard) and OpenAI (with ChatGPT) dominate the industry with their AI chatbot tools. These principles are intended to serve as a stopgap until the US Congress passes artificial intelligence legislation.
To address societal risks, including biases and inappropriate uses, companies will thoroughly research the implications of their technology. To ensure the reliability and security of artificial intelligence, companies will work collaboratively with governments. In addition, AI-generated audio and visual content will be watermarked to curb abuse and misinformation, so that realistic AI-generated visuals cannot be passed off as genuine. The eight commitments are as follows:
- Allow independent experts to test models for harmful behavior
- Invest in cybersecurity
- Collaborate with third parties to discover vulnerabilities
- Focus on societal risks, including biases and inappropriate uses
- Prioritize research on the societal risks of artificial intelligence
- Share trust and safety information with other companies and the government
- Watermark audio and visual content to help identify it as AI-generated
- Use state-of-the-art AI systems, known as frontier models, to tackle society's biggest problems
“These commitments that companies have chosen to undertake immediately underline three principles that must be fundamental to the future of AI and mark a critical step towards the development of responsible AI,” the White House said in a statement.
On the other hand, this agreement is completely voluntary. It can be taken as a sign of goodwill, but it also shows that lawmakers are struggling to keep up with the rapid pace of AI development. Both the US and the EU are currently working on comprehensive artificial intelligence legislation.