It’s no secret that the development of artificial intelligence brings with it many security risks. Although governing bodies are working to introduce regulations, for now it is largely up to the companies themselves to take action. The latest step in this direction came when Anthropic, Google, Microsoft and OpenAI jointly created the Frontier Model Forum, a body focused on safe and responsible development of frontier AI models.
The Forum plans to establish an advisory committee, a charter and funding. Its stated priorities include advancing AI safety research, identifying best practices, working closely with policymakers, academics, civil society and other companies, and supporting efforts to develop AI applications that can help tackle society’s biggest challenges.
The members will reportedly focus on the first three of those goals over the coming year. As for membership, the announcement lists the qualifications required to join, such as a demonstrated commitment to both developing leading-edge models and making them safe.
The establishment of the Forum follows a recent voluntary safety agreement between the White House and leading AI companies, including the four behind this new initiative. The commitments made there include security testing by outside experts and watermarking AI-generated content.