Did the bot that chats with people today make an enemy of its own inventor?
Or, on the contrary, was its inventor, Joseph Weizenbaum, not its enemy, but an enemy of the technology it represented?
ELIZA was a simple natural-language-processing program.
Joseph Weizenbaum, who developed the world’s first chatbot, ELIZA, in MIT’s laboratories, was confronted with striking facts about human nature while exploring the boundaries of artificial intelligence. The bot caught keywords in the user’s sentences and converted them into questions.
Much like a conversation with ChatGPT today: when a user said “life is very bad,” it could keep the dialogue going by asking “Why do you feel bad?” When Weizenbaum started this experiment, his goal was to observe how people interact with a machine.
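The keyword-and-template mechanism described above can be sketched in a few lines. This is a minimal illustration, not Weizenbaum’s original implementation (which was written in MAD-SLIP); the patterns and responses here are invented for this sketch.

```python
import re

# Illustrative keyword -> response-template rules, loosely in the spirit of
# ELIZA's DOCTOR script. These specific patterns are invented for this
# sketch, not taken from Weizenbaum's original code.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\blife is (.+)", re.IGNORECASE), "Why do you say life is {0}?"),
]

def respond(sentence: str) -> str:
    """Return a reflective question for the first matching keyword pattern."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            # Echo the captured fragment back at the user as a question.
            return template.format(match.group(1).strip(" .!?"))
    # Fallback when no keyword matches; ELIZA's scripts had such defaults too.
    return "Please tell me more."

print(respond("Life is very bad."))  # -> Why do you say life is very bad?
```

The point of the sketch is how little is going on: there is no understanding, only pattern matching and text substitution, which is exactly what made users’ emotional reactions so striking to Weizenbaum.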
However, ELIZA’s influence reached unexpected dimensions.
DOCTOR, the program’s most famous script, mimicked the reflective listening techniques of psychotherapist Carl Rogers. Users opened up about their feelings to ELIZA as if they were talking to a therapist across the screen.
This disturbed Weizenbaum. The fact that people placed their trust in software with no real understanding led him to question the ethical dimension of artificial intelligence. The phenomenon he called the “ELIZA effect” describes how the meaning people attribute to technology can turn into a dangerous illusion.
After ELIZA’s success, Weizenbaum underwent a radical transformation in his career.
In his 1976 book Computer Power and Human Reason, he emphasized the ontological difference between human and machine. According to him, computers could only “calculate”; human “judgment” was fed by experience, moral values, and historical context.
For all these reasons, Weizenbaum began to describe artificial intelligence’s intrusion into human domains of life as a “moral deviance.”
After he voiced these views, many declared Weizenbaum an enemy of technology.
Weizenbaum’s predictions are back on the agenda with large language models such as ChatGPT. Today, people form emotional bonds with these tools, confide their personal problems to them, and even ask them to tell fortunes.
On social media channels, one can see countless trends along the lines of “Ask ChatGPT, you won’t believe the response it gives.” In this climate, it is easy to understand why Weizenbaum thought the way he did.
According to Weizenbaum, humanity loses its own humanity when it submits to technology instead of governing it. He supported this claim with a striking example: a soldier who never sees the people he kills with a drone may become more desensitized, because the distance reduces his sense of conscientious responsibility.
In fact, he was not against technological progress; he was against its human costs.
The inventor, guided by the idea that “computers should not tell us what to do; they should only show us how to do it,” was among the first to see the dark potential of artificial intelligence, even though he himself created ELIZA.
Now, when chatting with ChatGPT, we remember Weizenbaum’s words:
“The fact that we can do something does not mean that we should do it.”