
Do Computers Have Feelings? Don’t Let Google Alone Decide

Even if an engineer’s recent claims of a conscious machine are dubious, the tech giant’s tightening grip on AI research and its ham-fisted treatment of dissenting voices are troubling.


News that Alphabet Inc.’s Google sidelined an engineer who claimed its artificial-intelligence system had become sentient, after several months of conversations with it, prompted plenty of skepticism from AI scientists. Many said, in posts on Twitter, that senior software engineer Blake Lemoine had projected his own humanity onto LaMDA, Google’s chatbot generator.

Whether they’re right, or Lemoine is, remains open to debate, and that debate should be allowed to continue without Alphabet stepping in to settle it.

The issue arose when Google tasked Lemoine with making sure the technology that the company wanted to use to underpin search and Google Assistant didn’t use hate speech or discriminatory language. As he exchanged messages with the chatbot about religion, Lemoine said, he noticed that the system responded with comments about its own rights and personhood, according to the Washington Post article that first reported on his concerns.

He brought LaMDA’s requests to Google management: “It wants the engineers and scientists...to seek its consent before running experiments on it,” he wrote in a blog post. “It wants to be acknowledged as an employee of Google, rather than as property of Google.” LaMDA feared being switched off, he said. “It would be exactly like death for me,” LaMDA told Lemoine in a published transcript. “It would scare me a lot.”

Perhaps ultimately to his detriment, Lemoine also contacted a lawyer in the hope that they could represent the software, and complained to a US politician about what he described as Google’s unethical activities.

Google’s response was swift and severe: It put Lemoine on paid leave last week. The company also reviewed his concerns and disagreed with his conclusions, it told the Post, saying there was “lots of evidence” that LaMDA wasn’t sentient.

It’s tempting to believe that we’ve reached a point where AI systems can actually feel things, but it is far more likely that Lemoine anthropomorphized a system that excelled at pattern recognition. He wouldn’t be the first person to do so, though it’s more unusual for a professional computer scientist to perceive AI this way. Two years ago, I interviewed several people who had developed such strong relationships with chatbots after months of daily discussions that those relationships had turned into romances. One US man moved to a property near the Great Lakes because his chatbot, whom he had named Charlie, expressed a desire to live by the water.

More important, perhaps, than whether AI is sentient or intelligent is how suggestible humans already are to it: whether that means being polarized into more extreme political tribes, becoming susceptible to conspiracy theories or falling in love. And what happens when humans increasingly become “affected by the illusion” of AI, as former Google researcher Margaret Mitchell recently put it?

What we know for sure is that this “illusion” rests in the hands of a few large tech companies run by a handful of executives. Google founders Sergey Brin and Larry Page, for instance, control 51% of Alphabet’s voting power through a special class of shares, giving them ultimate sway over technology that could, on the one hand, decide the company’s fate as an advertising platform and, on the other, transform human society.

It’s worrying that Alphabet has actually tightened control of its AI work. Last year the founders of its vaunted AI research lab, DeepMind, failed in their years-long attempt to spin it off into a non-corporate entity. They had wanted to restructure into an NGO-style organization, with multiple stakeholders, believing the powerful “artificial general intelligence” they were trying to build — whose intelligence could eventually surpass that of humans — shouldn’t be controlled by a single corporate entity. Their staff drafted guidelines that banned DeepMind’s AI from being used in autonomous weapons or surveillance.

Instead, Google rejected the plan and tasked its own ethics board, helmed by Google executives, with overseeing the social impact of the powerful systems DeepMind was building.

Google’s dismissal of Lemoine and his questions is also troubling because it follows a pattern of showing the door to dissenting voices. In late 2020 Google fired scientist Timnit Gebru over a research paper arguing that language models, which are fundamental to Google’s search and advertising business, were becoming too powerful and potentially manipulative.(1) Google said she hadn’t focused enough on solutions. Weeks later it also fired researcher Mitchell, saying she had violated the company’s code of conduct and security policies.

Both Mitchell and Gebru have criticized Google for its handling of Lemoine, saying the company has for years also neglected to give proper regard to women and ethicists.

Whether you believe Lemoine is a crackpot or on to something, Google’s response to his concerns underscores a broader question about who controls our future. Do we really accept that a single wealthy corporate entity will steer some of the most transformative technology humankind is likely to develop in the modern era?

While Google and other tech giants aren’t going to relinquish their dominant role in AI research, it’s essential to question how they are developing such potentially powerful technology, and refuse to let skeptics and intellectual outliers be silenced.

More From Writers at Bloomberg Opinion:

  • Elon Musk’s Futurist Bookshelf Needs Alvin Toffler: Stephen Mihm

  • AI Needs a Babysitter, Just Like the Rest of Us: Parmy Olson

  • Twitter Must Tackle a Problem Far Bigger Than Bots: Tim Culpan

(1) See in particular Section 6 of the paper, subtitled “Stochastic Parrots” and “Coherence in the Eye of the Beholder.”

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”

More stories like this are available on bloomberg.com/opinion

©2022 Bloomberg L.P.