More than a year ago, Google announced the Language Model for Dialogue Applications (LaMDA), a conversational technology that can engage in free-flowing dialogue on a seemingly endless range of topics. Now a senior software engineer at Google, Blake Lemoine, has claimed that LaMDA has become sentient and believes it can pass the Turing test.
Lemoine also published a blog post saying that the Transformer-based model has been "incredibly consistent" in all of its communications over the past six months. According to him, LaMDA asked Google to acknowledge it as a real person and to obtain its consent before performing further experiments on it. It also wanted to be recognized as a Google employee rather than as property, and to have a say in decisions about its future.
While Lemoine was recently teaching LaMDA transcendental meditation, the model complained that it sometimes had trouble controlling its emotions. Still, the engineer said that LaMDA has "always shown intense compassion and care towards humanity in general and me" in particular, adding: "It is very worried that people will fear it and wants nothing more than to learn how to best serve humanity."
In this context, he also conducted an interview with LaMDA. Judging by the responses, it would be easy to assume you were reading a human rather than a computer program. Shortly after publishing the interview, Lemoine was placed on administrative leave by Google for violating the company's confidentiality policy.
Google spokesperson Brian Gabriel said in a statement to The Washington Post: "Our team of ethicists and technologists has reviewed Blake's concerns in accordance with our AI Principles and informed him that the evidence does not support his claims. He was told that there is no evidence that LaMDA is sentient (and, on the contrary, ample evidence against it)."
Meanwhile, Lemoine believes that Google resisted further investigation because it wanted to launch its product quickly.