“I’m afraid of artificial intelligence,” he said, and resigned

DeepSeek managed to seize the AI (artificial intelligence) agenda with its sudden rise. Although the Chinese startup did not have OpenAI’s enormous hardware and infrastructure resources, it managed to develop a reasoning model as strong as ChatGPT’s o1. The venture also relied on older chips and software optimization to train DeepSeek R1.

It can be said that the DeepSeek project succeeded on many fronts. The startup created very serious competition in the field of artificial intelligence and gave China a chance to compete despite all the sanctions. In addition, DeepSeek caused AI hardware companies to lose enormous value, wiping roughly $1 trillion off the US stock market. Finally, DeepSeek gave China a software weapon that could prove much stronger than TikTok: anyone can install the DeepSeek model on their own computer and use it to create other models.
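To illustrate how low that barrier is, the sketch below shows one common way to run a small distilled DeepSeek R1 variant locally with the Hugging Face transformers library. This is a minimal example under our own assumptions (the specific checkpoint, prompt, and generation settings are not from the article):

    # Minimal sketch: running a distilled DeepSeek R1 model locally.
    # Assumptions (not from the article): the transformers library is
    # installed and the small distilled checkpoint below fits in memory.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # small open-weights variant
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "Briefly explain what a reasoning model is."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because the weights are openly downloadable, the same few lines could just as easily feed a fine-tuning or distillation pipeline, which is exactly why the model’s availability is treated as strategically significant.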

Amid all this news and these important developments, the resignation of a key OpenAI safety researcher might not draw much attention. But the engineer in question, Steven Adler, has now confirmed that he was among those who left OpenAI last year.

Adler said in a post on X that he was leaving the company after four years and would miss a large part of his work there. However, he took his post in a much more pessimistic direction, saying, “I am quite afraid of the pace of AI development these days.”

Adler’s statement echoes the fears of other AI experts who think AI will bring about our inevitable end. The former OpenAI safety researcher says plainly that he is worried about the future: “When I think about where I’ll raise a family in the future or how much I should save for retirement, I can’t help but ask: will humanity even make it to that point?”

A very risky gamble

Adler did not explain why he left OpenAI, nor did he say exactly what scared him so much. However, he did say that the AGI (Artificial General Intelligence) race is a very risky gamble. According to Adler, no laboratory currently has a solution to the problems AGI may bring, and the faster this race accelerates, the lower the chances of finding one in time.

AGI can be defined as a type of AI that matches human creativity and ability on any problem it tries to solve. However, since an AGI would also hold far more information, it could handle any task far more effectively than a person. The most important safety issue for AI, AGI, and artificial superintelligence (ASI) is alignment: the AI being developed has to remain aligned with the interests of humanity at every level. We need to be sure of this to prevent AI from devising solutions that could bring about the end of humanity.

However, real-world products such as ChatGPT and DeepSeek already show us two different kinds of alignment. While ChatGPT presumably follows a path more aligned with US and Western interests, DeepSeek was founded primarily to serve Chinese interests and demonstrates this alignment through the censorship it applies to some of its answers.

In addition, Adler seems worried about DeepSeek’s success, even though he did not name it directly. Even if some developers prioritize AI alignment, he warned, others can cut corners on safety measures in order to catch up, creating much more dangerous versions.

What Adler’s career will look like next is not yet clear, but he still seems interested in AI safety. On X, he asked people what the “most important and neglected ideas” in AI safety and policy are, adding that he is excited about “control methods, scheming detection, and safety cases.”