Why you should pay attention to Google's AI controversy

No, computers aren't "sentient." But they are getting eerily powerful, and that's worthy of our attention.

Let’s start this post with some raw truth: Google has not created robots that are sentient (or, put another way, have feelings).

But don’t cast your worries aside just yet.

The allegation that Google has made a sentient robot came up after Blake Lemoine, a Google engineer, claimed he had profound discussions with the tech company's artificial intelligence system LaMDA (Language Model for Dialogue Applications). 

Lemoine was reportedly placed on paid leave Monday for violating Google’s confidentiality policy. He told The New York Times he shared his allegations with a U.S. senator’s office the day before he was suspended. Several days earlier, he published online what he said was a conversation he had with LaMDA.

All this can sound very dystopian. The countless films depicting robot takeovers all feature some point at which robots gain humanlike consciousness before enslaving or waging war on the planet. 

I’ll explain why that’s not quite what’s happening here. But regardless, one thing is obvious: Scientists are developing supercomputers at a rapid pace. We’re all still grappling with how we relate to these machines and whether we should think of ourselves as their users or their cohabitants on this planet. 

“I think this technology is going to be amazing. I think it’s going to benefit everyone,” Lemoine told The Washington Post about LaMDA. “But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”

I don’t subscribe to Lemoine’s belief that computers can be sentient, but I do agree that the technology that gives us this impression is extremely powerful and should be more widely available. 

What Lemoine and others describe as “sentience” is, essentially, ultrafast computing. (I promise, you needn’t be a nerd like me to understand it.)

Take a look at this video, for example. This is a demonstration of how quickly devices will be able to process information when they’re powered by 5G, the high-speed network that many future products will use.

Note how the computer balances the ball quickly when it uses the super-fast network. When it does this, we don’t say the computer is sentient. We acknowledge that it’s processing information rapidly to make the surface flat.
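If you’re curious what that kind of control logic looks like, here’s a bare-bones sketch in Python. It is not the code behind any real demo, and every number in it is invented for illustration; it’s just the generic loop such systems run: measure where the ball is, nudge the surface, repeat many times per second.

```python
# A minimal feedback loop of the kind behind ball-balancing demos:
# measure the ball, tilt the surface to push it back toward center,
# and repeat many times per second. Faster, lower-latency updates mean
# steadier balance, but the loop never "knows" anything about balls.
ball_position = 0.30   # meters from center (invented starting point)
ball_velocity = 0.0    # meters per second
time_step = 0.01       # seconds between updates (100 corrections per second)

for _ in range(500):   # simulate 5 seconds of corrections
    # React to where the ball is and how fast it is moving.
    tilt = -4.0 * ball_position - 2.0 * ball_velocity
    # Crude stand-in for the physics: tilting accelerates the ball.
    ball_velocity += tilt * time_step
    ball_position += ball_velocity * time_step

print(round(ball_position, 3))  # near 0.0: the ball has been steered to center
```

Quick, repeated corrections can look uncannily capable without anything resembling awareness.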

Computers that “talk” to us do the same thing with speech. The most powerful ones process tons of information in an instant: our tone of voice, the rhythm of our speech, our accents and the things we say. 

They do this in seconds and spit out what can sound to us like conversational, human speech. (Thanks, Siri!)
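To see how far plain pattern-matching can go toward sounding conversational, here’s a toy sketch in Python. It bears no resemblance to how LaMDA or Siri actually work under the hood, and the sample text and names are made up for the example; it simply strings words together based on which word followed which in a small sample.

```python
import random
from collections import defaultdict

# A toy "chatbot": learn which word tends to follow which word in a tiny
# sample of text, then chain those statistics together into a reply.
# There is no understanding here, only fast lookups over observed patterns.
sample_text = (
    "i am happy to talk with you today "
    "i am glad you asked me that question "
    "i like to talk about what makes people happy"
).split()

follows = defaultdict(list)
for word, next_word in zip(sample_text, sample_text[1:]):
    follows[word].append(next_word)

def generate_reply(seed="i", max_words=10):
    """Build a reply by repeatedly picking a word seen after the last one."""
    words = [seed]
    for _ in range(max_words):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate_reply())  # prints a short, vaguely conversational string of words
```

Systems like LaMDA are enormously more sophisticated than this, trained on vastly more text with far richer statistics and enormous computing power, but the leap is one of scale, not of consciousness.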

That’s not sentience — that’s high-speed computing.

And the difference might not seem like much, but it’s actually quite important. Seeing computers as machines of our creation (and not people) gives us greater responsibility for how these devices are used. Approaching tech with that mindset gives us room to make change. That’s what former Google employee Timnit Gebru set out to do in 2020 when she raised alarms about the ways AI systems like the company’s can discriminate against certain groups.

It’s fanciful to think of robots as our equals. But it’s also dangerous to think they’re autonomous and operating outside our influence.