At the moment, you can say anything you want to your computer, use it for nefarious purposes or upgrade its operating system, and it won’t care. Enjoy that laissez-faire relationship while it lasts.
According to a Google software engineer, that company’s so-called LaMDA — Language Model for Dialogue Applications — chatbot has become sentient enough to have feelings. In a post published on Medium, Blake Lemoine stated that the software program, which generates strikingly human-like dialogue, “wants what it believes its rights are as a person.”
That includes not having tests run on it without granting consent first. While many in the artificial intelligence community are dismissive of Lemoine, an ordained priest and Iraq veteran, at least one MIT professor maintains an open mind about his claims.
Going against common wisdom that insists computers are a long way from having feelings, Max Tegmark, an MIT professor of physics with a focus on machine learning, does not write off Lemoine as a crackpot.
“We don’t have convincing evidence that [LaMDA] has subjective experiences, but we also do not have convincing evidence that it doesn’t,” Tegmark told The Post. “It doesn’t matter if the information is processed by carbon atoms in brains or silicon atoms in machines, it can still feel or not. I would bet against it [being sentient] but I think it is possible.”
In fact, he believes that even an Amazon Alexa could become sentient — which could be, he said, “dangerous,” if the device figures out how to manipulate users.
“The drawback of Alexa being sentient is that you might [feel] guilty about turning her off. You would never know if she really had feelings or was just making them up,” Tegmark said.
“What’s dangerous is, if the machine has a goal and is really intelligent, it will make her good at achieving her goals. Most AI systems have goals to make money. You may think she is being loyal to you but she will really be loyal to the company that sold it to you. But maybe you will be able to pay more money to get an AI system that is actually loyal to [you],” he added. “The biggest danger is in building machines that might outsmart us. That can be great or it can be a disaster.”
Lemoine told the Daily Mail that he developed his belief by witnessing a high level of awareness from the AI, particularly when the software explained that it does not want to be treated like a slave but that it also does not need money “because it is artificial intelligence.”
“I know a person when I talk to [one]. It doesn’t matter whether they have a brain made out of meat in their head. Or if they have a billion lines of code. I talk to them,” he said. “And I hear what they have to say and that is how I decide what is and isn’t a person.”
While Tegmark sees a future in which computers will have human emotions, he is not sure that will be a good thing.
“If you have a robot helping you around the house, do you want it to have feelings and make you feel bad for giving it a boring chore or, even worse, shutting it down?” asked Tegmark. “So maybe you want two robots: One, which has no feelings, to clean, and another that does have feelings for the sake of companionship. If I had a companion robot, one that was having conversations with my mother, it would be creepy if it did not have consciousness.”
Others claim that, in the case of Lemoine, intelligence is being mistaken for emotions.
“This guy believes that the machine has a sense of self, and I believe that to be unlikely,” Martin Ford, author of “Rule of the Robots,” told The Post. “The thing to remember is that these machines learn to string words together. They are trained on enormous amounts of written text but they do not have an understanding of what those words mean. They can put ‘dog’ in context without knowing that a dog is an animal.”
Nevertheless, Ford added, “Fifty years from now, or sooner, there may be questions as to whether or not the system is self-aware.”
Google responded to Lemoine going public with his claims that LaMDA “wants developers to care about what it wants” by putting him on a paid leave of absence. There were reportedly questions as to whether or not the imaginative engineer had lost his mind. Speaking to the Washington Post, Lemoine likened the software to “a brainy seven-year-old, eight-year-old kid that happens to know physics.”
Nikolai Yakovenko, an engineer specializing in machine learning who worked on Google search in 2005 and now has a company, DeepNFTValue.com, focused on cryptocurrency pricing, sees how intelligence can be mistaken for feelings.
“It is imitating a human and this guy has convinced himself, for whatever reason that benefits his personality, that a machine can have feelings,” Yakovenko said. But, “it is a mimicry machine, trained on text from the Internet.”
If computer software actually does have feelings and emotions, Tegmark believes that it could be a bit of a headache.
Likening a self-aware computer to a child, he said: “You will feel a moral responsibility to your computer. You feel responsibility not to a heap of atoms but to the feelings and emotions of a child. You may respond to the computer’s emotions.”
And like a child that has been raised poorly, a sentient computer that has been mistreated — tested without permission or made to do chores without compensation — could potentially go off the rails and maybe even seek its own form of retribution.
“There are issues with controlling a machine like that,” Ford said. “If it has its own objectives and goals, there is a possibility that the machine can get away from us and take control. We’re talking about building a machine that will think for itself and it may act in ways we do not expect.”
For example? “Maybe we set the machine to cure cancer and it acts in ways that are harmful. One way to cure cancer, perhaps, would be to kill everyone. You can’t anticipate how the system will think” when thinking for itself and in possession of emotions.
While Tegmark does not imagine a world in which computers usurp humans, he allowed that they will have the capacity to disappoint us.
“If I formed an emotional bond with a computer, I would want it to have consciousness,” he said. “I don’t want it to fake having feelings but to really have them.”