It looks like Google rushed its own chatbot Bard out the door to keep up with rivals Microsoft and OpenAI. Earlier reports showed employees all but begging the company not to release Bard, and that story has now been fleshed out further.
Google employees have repeatedly criticized the company’s chatbot Bard in internal messages, labeling the system a “pathological liar” and begging the company not to release it.
The details come from a Bloomberg report based on interviews with 18 current and former Google employees and screenshots of internal messages. In those internal discussions, one employee notes that Bard often gave users dangerous advice on topics like how to land a plane or scuba diving. Another said, “Bard is worse than useless: please do not launch.”
Bloomberg also says the company “overruled” a risk assessment submitted by an internal safety team, which concluded the system was not ready for general use. Despite all this, Google still opened early access to the “experimental” bot in March.
It seems Google has had little trouble setting aside ethical concerns to keep up with rivals like Microsoft and OpenAI. The company frequently touts its safety and ethics work in artificial intelligence, but the rushed Bard launch sits uneasily with those commitments.