
The future of artificial intelligence is shaping up: Microsoft, Google and OpenAI join forces

Major players in artificial intelligence, including Microsoft, Google, and OpenAI, are taking important steps to develop more responsible and secure technologies. Leading AI companies recently visited the White House to discuss those steps.
In recent months, the major AI companies gathered at the White House to discuss the steps to be taken. Vice President Kamala Harris and President Joe Biden held meetings with industry leaders, emphasizing the need for accountability. Now those moves are paying off.

Artificial intelligence will be safer

According to the document released by the White House, tech firms have adopted eight proposed measures concerning safety, security, and social responsibility. These measures include allowing independent experts to test AI models for potentially harmful behavior, investing in cybersecurity, and encouraging third parties to detect vulnerabilities. Meanwhile, Microsoft with Bing, Google with Bard, and OpenAI with ChatGPT dominate the industry with their artificial intelligence tools. Additionally, these principles are set to remain in effect only until the US Congress passes an artificial intelligence law.

To address societal risks, including biases and inappropriate uses, the companies will research the implications of their systems thoroughly. For the reliability and security of artificial intelligence, they will work collaboratively with governments. In addition, AI-generated audio and visual content will be watermarked to prevent abuse and misinformation, limiting the harm that realistic AI-generated visuals can cause. The eight commitments are as follows:

  1. Allow independent experts to test models for harmful behavior
  2. Invest in cybersecurity
  3. Collaborate with third parties to discover vulnerabilities
  4. Focus on societal risks, including biases and inappropriate uses
  5. Prioritize research on the social risks of artificial intelligence
  6. Share trust and safety information with other companies and the government
  7. Watermark audio and visual content to help identify material produced by artificial intelligence
  8. Use state-of-the-art AI systems, known as frontier models, to tackle society’s biggest problems

“These commitments, which the companies have chosen to undertake immediately, underline three principles that must be fundamental to the future of AI and mark a critical step towards the development of responsible AI,” the White House said in a statement.

On the other hand, this agreement, or memorandum of understanding, is entirely voluntary. It can be read as a sign of goodwill, but it also shows that lawmakers are struggling to keep up with the rapid pace of AI development. Both the US and the EU are currently working on comprehensive artificial intelligence laws.
