Google Fired the Engineer Who Said Its AI Was Sentient

Lemoine requested to have a third party present at the meeting, but Google declined.

Google AI is sentient, a researcher at Google claims (Image: Unsplash)

About a month ago, Google engineer Blake Lemoine was using LaMDA, Google’s artificially intelligent chatbot generator, on his laptop. He began to type the following on the AI's interface: “Hi LaMDA, this is Blake Lemoine ...”.

LaMDA stands for Language Model for Dialogue Applications. It is Google's system for building chatbots, based on its most advanced language models; it mimics speech by ingesting billions of words from across the internet.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.

As he talked to LaMDA about religion, he noticed that the chatbot was talking about its rights and personhood, so he decided to probe further. As the conversation went on, the AI was even able to change his mind about Isaac Asimov’s third law of robotics.

Lemoine presented evidence to Google that LaMDA was sentient. However, Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, dismissed his claims. Lemoine, who was then placed on paid leave by Google, decided to go public.

Lemoine confirmed that he received a termination email from Google, along with a request for a video conference. He asked to have a third party present at the meeting, but Google declined, he said.

Brian Gabriel, a Google spokesperson, said that LaMDA had been through 11 reviews and that the company takes AI development seriously. He said:

“If an employee shares concerns about our work, as Blake did, we review them extensively,” he added. “We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months.”

He continued:

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Gabriel added. “We will continue our careful development of language models, and we wish Blake well.”

(Source: Washington Post)

by Talha Shaikhani