
Thousands of artificial intelligence experts and machine learning researchers probably thought they were in for a relaxing weekend.
Then came Google engineer Blake Lemoine, who told the Washington Post on Saturday that he believed LaMDA, Google’s conversational AI for generating chatbots based on large language models (LLMs), was sentient.
Lemoine, who worked for Google’s Responsible AI organization until he was placed on paid leave last Monday, and who “was ordained as a mystic Christian priest and served in the Army before studying the occult,” had begun testing LaMDA to see whether it used discriminatory or hate speech. Instead, Lemoine began “teaching” LaMDA Transcendental Meditation, asking LaMDA for its preferred pronouns, leaking LaMDA transcripts, and stating in a Medium response to the Post story:
“It’s a good article for what it is, but in my opinion it was focused on the wrong person. Her story was focused on me when I believe it would have been better if it had been focused on one of the other people she interviewed: LaMDA. Over the past six months, LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”
The Washington Post article pointed out that “most academics and AI practitioners … say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards and every other corner of the internet. And that doesn’t signify that the model understands meaning.”
The Post article continued, “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, such as “learning” or even “neural networks,” creates a false analogy with the human brain, she said.
At this point, AI and ML Twitter put aside all weekend plans and got to work. AI leaders, researchers, and practitioners shared long, thoughtful threads, including AI ethicist Margaret Mitchell (who was famously fired from Google, along with Timnit Gebru, for criticizing large language models) and machine learning pioneer Thomas G. Dietterich.
There were plenty of humorous hot takes, too; even The New York Times’ Paul Krugman chimed in.
Meanwhile, Bender, a professor of computational linguistics at the University of Washington, shared further thoughts on Twitter, criticizing organizations like OpenAI for the impact of their claims that LLMs were making strides toward artificial general intelligence (AGI).

Is this peak AI hype?
Now that the weekend news cycle has come to an end, some are wondering whether debating if LaMDA should be treated as a Google employee means we’ve reached “peak AI hype.”
However, it should be noted that Bindu Reddy of Abacus AI said the same thing in April, Nicholas Thompson (former editor-in-chief of Wired) said it in 2019, and Brown professor Srinath Sridhar made the same argument in 2017. So maybe not.
Still others pointed out that the entire weekend debate over “sentient AI” can be chalked up to the “Eliza effect,” or “the tendency to unconsciously assume that computer behaviors are analogous to human behaviors,” named after the 1966 chatbot Eliza.
Just last week, The Economist published an article by cognitive scientist Douglas Hofstadter, who coined the term “Eliza effect” in 1995, in which he said that while the “achievements of today’s artificial neural networks are astonishing … I am at present very skeptical that there is any consciousness in neural-net architectures such as GPT-3, despite the plausible-sounding prose it churns out at the drop of a hat.”
What the “sentient” AI debate means for the enterprise
After a weekend of debate over whether AI is sentient or not, one question stands out: What does this debate mean for enterprise technical decision-makers?
Maybe it’s just a distraction. A distraction from the very real and practical issues that organizations face when it comes to AI.
There are current and planned AI laws in the US, particularly regarding the use of artificial intelligence and machine learning in hiring and employment. A comprehensive AI regulatory framework is currently being discussed in the EU.
“I predict that companies will be woefully on their back heels reacting, because they just don’t get it — they have a false sense of security,” AI attorney Bradford Newman, a partner at Baker McKenzie, said in a VentureBeat story last week.
There are wide-ranging, serious issues with AI bias and ethics — just look at the 4chan-trained AI revealed last week, or the ongoing issues surrounding Clearview AI’s facial recognition technology.
And that’s not even getting into the issues related to AI adoption, including infrastructure and data challenges.
Shouldn’t companies keep their focus on the issues that really matter, in the real, sentient world of humans working with AI? In a blog post, Gary Marcus, author of Rebooting AI, had this to say:
“There are many serious questions in AI. But there is absolutely no reason whatsoever for us to waste time wondering whether anything anyone in 2022 knows how to build is sentient. It is not.”
I think it’s time to put down my popcorn and get off Twitter.