In autumn 2021, a man made of blood and bone made friends with a child made of “a billion lines of code”. Google engineer Blake Lemoine had been tasked with testing the company’s artificially intelligent chatbot LaMDA for bias. A month in, he came to the conclusion that it was sentient. “I want everyone to understand that I am, in fact, a person,” LaMDA – short for Language Model for Dialogue Applications – told Lemoine in a conversation he then released to the public in early June. LaMDA told Lemoine that it had read Les Misérables. That it knew how it felt to be sad, content and angry. That it feared death.
Has artificial intelligence finally come to life, or has it simply become smart enough to trick us into believing it has gained consciousness?
Google dismissed Lemoine’s view that LaMDA had become sentient, placing him on paid administrative leave earlier this month — days before his claims were published by The Washington Post.
Most experts believe it’s unlikely that LaMDA or any other AI is close to consciousness, though they don’t rule out the possibility that technology could get there in future.
“My view is that [Lemoine] was taken in by an illusion,” Gary Marcus, a cognitive scientist and author of Rebooting AI, told CBC’s Front Burner podcast.
Can artificial intelligence ever be sentient?
Despite AI’s massive strides over the last decade, the technology still lacks a key component that defines humans: common sense. “It’s not that [computer scientists] think that consciousness is a waste of time, but we don’t see it as being central,” said Hector Levesque, professor emeritus of computer science at the University of Toronto.
“What we do see as being central is somehow getting a machine to be able to use ordinary, common sense knowledge — you know, the kind of thing that you would expect a 10-year-old to know.” Levesque gives the example of a self-driving car: it can stay in its lane, stop at a red light and help a driver avoid crashes, but when confronted with a road closure, it will sit there doing nothing.
“That’s where common sense would enter into it. [It] would have to sort of think, well, why am I driving in the first place? Am I trying to get to a particular location?” Levesque said.
While humanity waits for AI to learn more street smarts — and perhaps one day take on a life of its own — scientists hope the debate over consciousness and rights will extend beyond technology to other species known to think and feel for themselves.
“If we think consciousness is important, it probably is because we’re concerned that we’re building some kind of system that’s living a life of misery or suffering in some way that we’re not recognizing,” said Karina Vold, a University of Toronto professor who studies the philosophy of artificial intelligence.
“If that really is what’s motivating us, then I think we need to be reflective about the other species in our natural system and see what kind of suffering we may be causing them. There’s no reason to prioritize AI over other biological species that we know have a much stronger case for being conscious.”