Four tech giants have announced that they have established a safety group for artificial intelligence.

While the security risks posed by artificial intelligence continue to be a subject of discussion, Anthropic, Google, Microsoft and OpenAI have teamed up to form the Frontier Model Forum.

It’s no secret that the development of artificial intelligence brings with it many security risks. Although governing bodies are working to introduce regulations, for now it is largely up to the companies themselves to take action. The latest step in that direction came when Anthropic, Google, Microsoft and OpenAI jointly created the Frontier Model Forum, a body focused on safe and responsible artificial intelligence development.

The Forum plans to establish an advisory committee, charter and funding. It also focuses on advancing AI security research, identifying best practices, working closely with policymakers, academics, civil society and companies, and fostering efforts to create AI that can help tackle society’s biggest challenges.

The members will reportedly work on the first three of these goals over the coming year. As for membership, the announcement lists the qualifications required for participation, such as a clear commitment to producing leading-edge models and to keeping them safe.

The forum’s establishment follows a recent safety agreement between the White House and leading artificial intelligence companies, including the four behind this new initiative. The measures they committed to include testing by outside experts and watermarking AI-generated content.
