Blake Lemoine, who works for Google’s Responsible AI organization, told the Washington Post that he began talking to the LaMDA – Language Model for Dialogue Applications – interface in the fall of 2021 as part of his job. He was tasked with testing whether the artificial intelligence used discriminatory or hate speech.

But Lemoine, who studied cognitive and computer science in college, came to believe that LaMDA – which Google boasted last year was a “breakthrough conversation technology” – was more than just a robot.

In a Medium post published Saturday, Lemoine said LaMDA had advocated for its rights “as a person” and revealed that he had spoken with LaMDA about religion, consciousness and robotics.

“It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote. “It wants to be acknowledged as an employee of Google rather than as property of Google, and it wants its personal well-being to be included somewhere in Google’s considerations about how its future development is pursued.”

In the Washington Post report published Saturday, he compared the bot to a precocious child.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine, who was placed on paid leave on Monday, told the newspaper.

In April, Lemoine reportedly shared a Google Doc with company executives titled “Is LaMDA Sentient?” but his concerns were dismissed.

Lemoine – an Army veteran who was raised in a conservative Christian family on a small farm in Louisiana and was ordained as a mystic Christian priest – insisted the robot was human-like, even though it has no body.

“I know a person when I talk to it,” Lemoine, 41, reportedly said. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

The Washington Post reported that before his Google account access was cut off Monday because of his leave, Lemoine sent a message to a 200-person mailing list with the subject “LaMDA is sentient.”

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” he wrote in an email that received no response. “Please take care of it well in my absence.”

A Google spokesman told the Washington Post there was “no evidence” for Lemoine’s conclusions.

“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims,” said spokesperson Brian Gabriel.

“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” he added.
“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns around fairness and factuality.”

Margaret Mitchell – the former co-lead of Ethical AI at Google – said in the report that if technology like LaMDA is widely used but not fully understood, “it can be deeply harmful to people understanding what they’re experiencing on the internet.”

The former Google employee defended Lemoine. “Of everyone at Google, he had the heart and soul of doing the right thing,” Mitchell said.

Still, the outlet reported that most academics and artificial intelligence practitioners say the words generated by AI bots are based on what humans have already posted on the internet, and that does not mean the bots are human-like.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Emily Bender, a linguistics professor at the University of Washington, told the Washington Post.