Statements From the Father of Artificial Intelligence That Will Worry Even Those Who Have No Concerns

Yoshua Bengio. You may be hearing this name for the first time, but he laid the foundations of the work that made today's most popular artificial intelligence tools, such as ChatGPT, Google Bard and Midjourney, possible; he is widely called the father of artificial intelligence. His latest statements are the latest proof that worries about the future of artificial intelligence are not just the stuff of science fiction movies.

Generative artificial intelligences that produce output based on what you write, say and request suddenly entered our lives. We were instantly fascinated by what we saw and quickly got used to it. Even here at Webtekno, we now produce the visuals for some of our content entirely with artificial intelligence; it almost goes without saying.

This is where the real danger begins. Although tools such as ChatGPT and Midjourney are restricted so that they cannot be used for malicious purposes, there are also artificial intelligences built on the same technology that are developed precisely with malicious intent. Think of it this way: just two years ago, mobile apps that turned people's photos into nude images were making headlines; today, a single passport photo is enough for a malicious person to generate a nude image of whoever they want. Of course, that is only the social dimension of the problem. When it comes to weapons and bad intentions in war, the waters get much muddier.

Of course, the concerns are not limited to these. For example, although Boston Dynamics robots are produced without weapons, there are people who buy them and mount their own weapon systems on them, or even build armed robots from scratch:

So in the near future, when someone says "Kill that person," will these robots go and do it?

Artificial intelligence does not even need a robotic body to do such things… Could it have consequences as dire as nuclear war or pandemics, or even threaten the very existence of humanity?

In the face of these possibilities, we have always reassured ourselves by saying, "That's still a looong way off!" Yet we now see that software which learns on its own from everything we have produced so far can surprise us all, even with answers that are, for now, not very human-like. If it can do that, who can guarantee that an unethical robot will not one day execute a kill command?

If the atomic bomb dropped on Hiroshima were to be dropped today, there would no longer be any need for an airplane, a pilot to fly it, or, most importantly, a person to press the button that releases the bomb. An unethical artificial intelligence would be enough for all of it!

Of course, we do not know the answers to these questions. And if it seems strange to you that we don't, consider this: even the father of artificial intelligence does not know the answers to these questions…

You may be seeing Yoshua Bengio for the first time. Yet he is the father of the systems we now call "generative artificial intelligence," which scan and learn from the texts and images humanity has produced to date and try to give you original answers. Bengio's work in the 1990s and early 2000s forms the basis of tools such as ChatGPT and Midjourney.

Of course, there are dozens of scientists and engineers as influential and successful as Bengio, but his views nonetheless shed light on the future of artificial intelligence.

‘We don’t know how to guarantee the safety of humanity!’

In a blog post on his personal website addressing the potential problems that may come with the development of artificial intelligence, Bengio turns at one point to the issue of safety:

“We don’t yet know how to make an AI tool controllable and thus guarantee the safety of humanity! Still, we – myself included – are on our way to building such systems.”

“It is painful to face the idea that we may have contributed to something that could be massively devastating.”

Referring to the dangers of this situation, which everyone who develops and uses artificial intelligence systems would rather not think about, and to the psychology behind that avoidance, Bengio said: “It is difficult to digest such thoughts. It is difficult because accepting the logical conclusions that follow from these thoughts means questioning our own role, the value of our work, and our own sense of worth.”

“Most of us would like to see ourselves as someone with a fundamentally good and virtuous purpose, contributing to something beneficial to the community or society of which we are a member. Having a positive self-image makes us feel good about our work and gives us the motivation and energy to move forward. It is painful to face the idea that we may have contributed to something that could be massively devastating.”

“Human nature will lead us to seek comfort in more reassuring arguments rather than face the full horror of such possibilities. Discussing the benefits of AI is not enough to offset the fact that AI carries consequences as dire as nuclear war and pandemics, or even existential risk.”
