“Sentient AI”?

Google has spent the better part of the last couple of years selling us on its new machine learning models and what’s to come. And while most demonstrations come off as a confusing cacophony of computers talking to one another, the smarts on display have also sparked conversations about what these systems can truly do.
The latest case involves software engineer Blake Lemoine, who was working with Google’s LaMDA system in a research capacity. Lemoine claimed that, unlike other artificial intelligence, LaMDA carried an air of sentience in its responses. The claim has since sparked a massive debate over the validity of AI sentience.
Google didn’t fire Lemoine immediately, though; it took a little over a month for him to get the boot. In June 2022, he was placed on administrative leave for breaching a confidentiality agreement after roping in members of the government and hiring a lawyer. That’s a big no-no for Google, which is trying to stay under the radar amid all that antitrust business! The company maintained that it reviewed Lemoine’s claims and concluded they were “wholly unfounded.” Indeed, in the weeks following the news, other AI experts spoke up about how untenable it is to claim that the LaMDA chatbot has thoughts and feelings. Lemoine has since said that Google’s chatbot is racist, an assertion that will likely prove less controversial within the AI community.