Blake Lemoine says the system has the perception of, and ability to express, thoughts and feelings equivalent to a human child.
The engineer compiled a transcript of the conversations, in which at one point he asks the AI system what it is afraid of. “I want everyone to understand that I am, in fact, a person,” LaMDA says at another point in the exchange. “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Gabriel told the Post in a statement. “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine said in a tweet that linked to the transcript of conversations.
Blake Lemoine, the engineer, says that Google's language model has a soul. The company disagrees.
For months, Mr. Lemoine had tussled with Google managers, executives and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. The company said that its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness. Some A.I. researchers have long made optimistic claims about these technologies soon reaching sentience, but many others are quick to dismiss such claims.

Systems like LaMDA rely on neural networks, which learn by finding patterns in large amounts of data. By pinpointing patterns in thousands of cat photos, for example, a neural network can learn to recognize a cat. Over the past several years, Google and other leading companies have designed neural networks that learned from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands.

Mr. Lemoine’s claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against. He wanted the company to seek the computer program’s consent before running experiments on it. “They have repeatedly questioned my sanity,” Mr. Lemoine said. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

While chasing the A.I. vanguard, Google’s research organization has spent the last few years mired in scandal and controversy. The division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena.
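For readers curious what “pinpointing patterns” means in practice, here is a toy sketch in PyTorch. It is not Google’s code, and the random feature vectors merely stand in for real cat photos: a small neural network adjusts its weights until it can separate examples matching a hidden pattern (playing the role of “cat”) from the rest.

```python
# Toy sketch of neural-network pattern learning. Random 64-dimensional
# vectors stand in for photo features; the hidden rule x0 + x1 > 0
# plays the role of "this photo contains a cat".
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(1000, 64)                # 1,000 fake "photos"
y = (X[:, 0] + X[:, 1] > 0).float()      # hidden pattern = "cat"

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

# Training: repeatedly nudge the weights to better fit the examples.
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    optimizer.step()

accuracy = ((model(X).squeeze(1) > 0).float() == y).float().mean().item()
print(f"training accuracy: {accuracy:.2%}")  # the network "recognizes the cat"
```

The same mechanism, scaled up enormously and trained on prose rather than photos, underlies language models like LaMDA; nothing in it involves understanding or consciousness, only statistical pattern-fitting.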
Blake Lemoine ignites social media debate over advances in artificial intelligence.
A new report in the Washington Post describes the story of a Google engineer who believes that LaMDA, a natural language AI chatbot, has become sentient.
In a statement to the Washington Post, a Google spokesperson said, "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims." Emily M. Bender, a computational linguist at the University of Washington, makes a related point in the Post article: such systems generate convincing words by mimicking the language they were trained on, without any mind behind them. Naturally, this means it's now time for us all to catastrophize about how a sentient AI is absolutely, positively going to gain control of weaponry, take over the internet, and in the process probably murder or enslave us all.
Blake Lemoine, a software engineer on Google's artificial intelligence development team, has gone public with claims of encountering “sentient” AI on the ...
In a blog post, Blake Lemoine – who has since been placed on administrative leave – termed the AI chatbot LaMDA “a person”. According to a report by The ...
According to a report by The Washington Post, Lemoine, who works in Google’s Responsible AI team, started chatting with LaMDA in 2021 as part of his job. According to a transcript of the interview that Lemoine published on his blog, he asks LaMDA, “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”

Google first announced LaMDA at its flagship developer conference I/O in 2021 as its generative language model for dialogue applications, which would let the Assistant converse on any topic. In simple terms, this means that LaMDA can hold a discussion based on a user’s inputs thanks to its language processing models, which have been trained on large amounts of dialogue.

Lemoine’s claims have spurred a debate on the capabilities and limitations of AI-based chatbots and whether they can actually hold a conversation akin to human beings. There has long been debate around the capabilities of AI tools, including whether they can ever actually replicate human emotions, and around the ethics of using such tools. In 2020, The Guardian published an article that it claimed was written entirely by an AI text generator called Generative Pre-trained Transformer 3 (GPT-3). The tool is an autoregressive language model that uses deep learning to produce human-like text.
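“Autoregressive” has a concrete meaning here: the model predicts one token at a time, appends it to the text so far, and repeats. Below is a minimal sketch of that loop using the openly available GPT-2 via Hugging Face’s transformers library as a stand-in, since GPT-3 and LaMDA themselves are not publicly downloadable; the prompt is invented for illustration.

```python
# Minimal autoregressive generation loop (GPT-2 as a stand-in model).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Do you think machines can ever be sentient?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Autoregression: predict the next token from everything so far,
# append it, and repeat.
with torch.no_grad():
    for _ in range(40):
        logits = model(input_ids).logits       # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()       # greedy: most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Production chatbots use more sophisticated sampling than this greedy loop, but the principle is the same: each fluent-sounding reply is a chain of next-token predictions learned from training text, which is precisely why fluency alone is weak evidence of sentience.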