
Why does Elon Musk find it dangerous and says ‘take a break from artificial intelligence’?

Elon Musk and a group of tech industry executives are asking OpenAI to stop training AI systems more powerful than its latest model, GPT-4. Why? What do they consider dangerous? Here are the details...

OpenAI, backed by Microsoft, has released the fourth version of its artificial intelligence program GPT (Generative Pre-trained Transformer). This cutting-edge technology promises applications in many fields, from human-like conversation to composing songs and summarizing long documents. Musk, along with a group of artificial intelligence experts and technology industry executives, has called for a six-month pause in the development of AI systems stronger than GPT-4, the new version of the model behind ChatGPT developed by OpenAI, on the grounds that such systems pose potential risks to society and humanity.

Headquartered in Massachusetts, the Future of Life Institute campaigns for the responsible and ethical development of artificial intelligence. The open letter, published by the institute and signed by more than 1,000 people including Elon Musk, requested that advanced artificial intelligence research be halted until security protocols for such systems are developed, implemented, and audited by independent experts.

“Contemporary AI systems are now becoming competitive with humans in common tasks, and we must ask ourselves the question: Should we let machines fill our information channels with propaganda and falsehoods? Should we automate all the work? Should we develop non-human minds that can eventually outnumber, outsmart, and replace us? Should we risk losing control of our civilization?” the letter asks.

The institute urges all AI labs to “immediately pause training of AI systems more powerful than GPT-4 for at least 6 months.” GPT-4, launched earlier this month, is considered far more advanced than its predecessor, GPT-3.

The viral AI chatbot ChatGPT has amazed researchers with its ability to generate human-like responses to user prompts. By January, just two months after its launch, ChatGPT had amassed 100 million monthly active users, making it the fastest-growing consumer app in history.

The technology was trained on vast amounts of data from the internet and has been used to produce texts ranging from poems in the style of William Shakespeare to song lyrics and court orders.

However, according to CNBC’s report, AI ethicists are expressing concerns about possible abuses of the technology, such as plagiarism and misinformation.

In the letter, tech leaders and academics say AI systems with human-competitive intelligence pose “profound risks to society and humanity.” “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” the letter states.

Musk has previously said he thinks AI represents one of the “biggest risks” to civilization.

The Tesla and SpaceX CEO co-founded OpenAI with Sam Altman and others in 2015, but left OpenAI’s board in 2018 and no longer holds a stake in the company.
