Google Engineer Suspended from Work: He Found Signs of Sentience in AI

Many of you have read books or watched movies in which AI (artificial intelligence) acquires self-awareness and comes to see itself as a person. As you'll recall, in books and especially in movies, things rarely end well.

Now, programmers and engineers around the world are working on their own versions of AI.

In this article, we will talk about a rather curious incident involving Google engineer Blake Lemoine, who believes he found signs of sentience in Google's LaMDA (Language Model for Dialogue Applications).

Previously, we told you about an AI that can draw like a real person; you can read more in this article.

The engineer's conversation with the machine and his conclusions about its sentience

LaMDA is an artificial intelligence system that simulates speech by analyzing trillions of phrases stored on the internet.


Lemoine had been working with the system since the fall of 2021, testing whether it produced prohibited or discriminatory language.

“If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics,” said the 41-year-old engineer.

Lemoine also said in an interview that, during a conversation about religion, LaMDA began to talk about its rights and personhood. Prompted by this, the engineer decided to discuss Isaac Asimov's third law of robotics with the AI, according to which a robot may protect its own existence as long as doing so does not conflict with an order from a human or cause harm to a human.

“I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is,” said LaMDA.

Lemoine then asked the AI to confirm that being turned off would be something like death for it; the model answered that it would, and that the prospect scared it a great deal.

Did you know that AI has helped criminals hack databases? Learn more here.

What actions did the engineer take?

In April of this year, Lemoine sent the company's management a report entitled “Is LaMDA Sentient? – An Interview.”

The report was reviewed by several departments at once, and all of them came to the same conclusion: no signs of sentience were found in the AI.

After the company rejected his arguments, Lemoine hired a lawyer to represent LaMDA's interests and began talks with a representative of the House Judiciary Committee about the ethics of Google's work.


As a result, his superiors decided to place the engineer on paid leave for violating the company's confidentiality policy.

Does LaMDA have intelligence?

For conspiracy theorists, it is tempting to believe that LaMDA is sentient and that Lemoine is being persecuted for his position. In fact, the engineer himself admitted that his conclusions were based not on scientific evidence but on his own perceptions.

LaMDA is essentially a chatbot that “feeds on” billions of words from the internet and learns to build human-sounding phrases based on the context of a conversation. It is far more likely that, with access to such an enormous body of text, it can easily construct human-like phrases without knowing or understanding what they mean.
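LaMDA itself is not publicly available, but the underlying principle, a model predicting the likely next words of a conversation from its context, can be illustrated with a small open dialogue model. The sketch below uses DialoGPT via the Hugging Face transformers library purely as a stand-in; the model name and generation settings are illustrative assumptions, not Google's code.

```python
# A minimal sketch of how a dialogue language model produces replies:
# it does not "understand" the conversation, it predicts the most likely
# continuation of the text it is given.
# DialoGPT is used here only as a small public stand-in for LaMDA,
# which is not publicly available; settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# The user's message becomes the "context" the model conditions on.
prompt = "Are you afraid of being turned off?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# The model simply picks likely next tokens, one after another.
reply_ids = model.generate(
    input_ids,
    max_length=60,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (everything after the prompt).
reply = tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

Whatever the script prints, the reply is the product of statistical pattern-matching over training text, which is exactly the point made above.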

What will the digital person of the future look like? Read more here.

Conclusion

As AI technology and algorithms develop, ethical “incidents” like this one will arise more and more often, and it is conceivable that in the not-so-distant future someone will create, intentionally or accidentally, a free-thinking and self-aware artificial intelligence.
