Google has made one of the most significant changes to its artificial intelligence principles since first publishing them in 2018. As noted by The Washington Post, the internet giant has removed language pledging that it would not "design or deploy" AI tools for use in weapons or surveillance technology. The previous guidelines included a section titled "Applications we will not pursue," which no longer appears in the current version of the document.
The document now contains a section titled "Responsible Development and Deployment." In it, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights."
These new commitments are far broader and vaguer than the clear, explicit pledges the company included in its AI principles until just a month ago. For example, the company previously said it would not design AI for use in weapons or in other technologies whose principal purpose or implementation is to cause, or directly facilitate, injury to people. On surveillance, it said it would not develop technology that violates "internationally accepted norms."
In a blog post published by Google, DeepMind CEO Demis Hassabis and James Manyika, Google's senior vice president for research, labs, technology and society, pointed to the emergence of artificial intelligence as a "general-purpose technology" to explain the policy change. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," the pair wrote. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." They added that Google will continue to stay consistent with widely accepted principles of international law and human rights, always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks.
Google first published its AI principles in 2018 following the backlash over Project Maven, a controversial contract with the US Department of Defense under which Google was to develop AI software to analyze drone footage. Thousands of Google employees signed a petition against the contract, and a number resigned from the company in protest over the agreement with the government.