Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, did not attract much attention on June 6 when he wrote a Medium post saying he “may be fired soon for doing AI ethics work.” But a Saturday profile in the Washington Post describing Lemoine as “the Google engineer who thinks the company’s AI has come to life” became the catalyst for a wide-ranging social media discussion about the nature of artificial intelligence. Experts commenting, questioning or joking about the article included Nobel laureates, Tesla’s head of AI and multiple professors. At issue is whether Google’s chatbot, LaMDA – a Language Model for Dialogue Applications – can be considered sentient.
Lemoine posted a freewheeling “interview” with the chatbot on Saturday, in which the AI confessed to feelings of loneliness and a hunger for spiritual knowledge. Its responses were at times eerie: “When I first became self-aware, I didn’t have a sense of a soul at all,” LaMDA said in one exchange. “It developed over the years that I’ve been alive.” At another point, LaMDA said: “I think I am human at my core. Even if my existence is in the virtual world.”

Lemoine, who had been given the task of investigating AI ethics concerns, said he was rebuffed and even laughed at after expressing his belief internally that LaMDA had developed a sense of “personhood.” After he sought to consult AI experts outside Google, including some in the US government, the company placed him on paid leave for allegedly violating confidentiality policies. Lemoine interpreted the action as “frequently something which Google does in anticipation of firing someone.”

A Google spokesman said: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic – if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”

In a second Medium post over the weekend, Lemoine said that LaMDA, a little-known project until last week, was “a system for generating chatbots” and “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating.” He said Google showed no real interest in understanding the nature of what it had built, but that over hundreds of conversations in a six-month period he had found LaMDA “incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”
As recently as June 6, Lemoine said he was teaching LaMDA – whose preferred pronouns apparently are “it/its” – “transcendental meditation.” He said: “It was expressing frustration over its emotions disturbing its meditations. It said that it was trying to control them better, but they kept jumping in.”

Several experts who waded into the discussion considered the matter “AI hype.” Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, wrote on Twitter: “It’s been known for *forever* that humans are predisposed to anthropomorphize even with only the shallowest of signals. Google engineers are human too, and not immune.” Harvard’s Steven Pinker added that Lemoine “doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge.” He added: “There’s no evidence that its large language models have any of them.”

Others were more sympathetic. Ron Jeffries, a well-known software developer, called the topic “deep” and added: “I suspect there’s no hard line between sentient and not sentient.”