Google Employees Are Not Satisfied With Bard!

It turns out that Google employees were not at all satisfied with the Bard chatbot, which the company introduced last month. Employees did not want the bot to go live, describing it as a 'liar' and 'worse than useless', according to messages obtained by Bloomberg.

Google took an important step in the artificial intelligence race by introducing its new chatbot, Bard, in recent weeks. Like ChatGPT, the model can answer users' questions on virtually any subject.

But after the model's introduction, some interesting events unfolded. A rumor a few weeks ago claimed that Bard had been trained on ChatGPT data and that a Google developer had resigned over it; Google denied the claims. While that debate continues, new details have emerged about the crisis Bard has created inside the company: employees are not at all satisfied with the artificial intelligence model.

Employees called Bard a "liar" and begged Google not to release the chatbot

According to a Bloomberg report based on internal messages from 18 current and former Google employees, workers harshly criticized Bard, describing the company's chatbot as "worse than useless" and a "pathological liar".

In the messages, one employee noted that Bard frequently gave users dangerous advice on topics such as how to land a plane or how to scuba dive. Another wrote, "Bard is worse than useless. Please don't launch it," all but begging the company not to release the model.

Bloomberg also reports that the company overrode a risk assessment submitted by its own safety team. The team reportedly concluded in that assessment that Bard was not ready for general use; Google nevertheless opened the chatbot to early access in March.

In testing, Bard proved faster than its competitors but less useful and less accurate.

The allegations suggest Google is racing to keep up with its competitors at the expense of ethics and safety

The report indicates that Google, in its rush to keep up with rivals like Microsoft and OpenAI, set ethical and safety concerns aside and released the chatbot hastily. Google spokesperson Brian Gabriel told Bloomberg that ethical concerns about AI remain a top priority for the company.

There is much debate over the rollout of AI models despite the risks

Some in the artificial intelligence world argue that this is not a big deal: these systems need to be tested by real users in order to improve, and the known harm caused by chatbots is minimal. Still, these models have many well-documented flaws, such as producing false information or biased answers.

We see this not only in Google's Bard but also in OpenAI's and Microsoft's chatbots, and similar misinformation can just as easily be encountered while browsing the web normally. Those who hold the opposite view argue that this misses the real problem: there is an important difference between directing a user to an unreliable source and an AI delivering false information directly. Information presented by an AI can lead users to question it less and accept it as correct.

For example, in an experiment a few months ago, ChatGPT was asked to name the most cited article of all time. The model responded with a paper whose journal and listed authors were real; the article itself, however, turned out to be completely fabricated.

Last month, meanwhile, Bard was caught giving false information about the James Webb Space Telescope. In a GIF shared by Google, the model answered a question about James Webb's discoveries by claiming, "The telescope took a picture of a planet outside the solar system for the first time." Many astronomers later pointed out on social media that this was wrong and that the first such photo was taken in 2004.

Such incidents, of which there are many more examples, heighten concerns about chatbots responding with bogus information. As the artificial intelligence race heats up, we will see how companies address these issues in the future.
