While I am open to the unnerving possibility of something demonic lurking in artificial intelligence, an actual demon isn’t necessary for AI to be evil. Let’s consider the prospect that even though no one is home, AI is designed to give you the impression that someone is.
Returning to Turing’s famous test, the objective of “good” artificial intelligence is to pass as a person—i.e., to be so convincing that you can’t tell whether you’re speaking to a real person or a chatbot designed to pass as one.
Saint Antony was known for his ability to discern whether a spirit was demonic or not. For those of us destined to live with artificial intelligence, the task will be discerning whether we’re speaking to a spirit at all.
You’ve probably already been fooled.
The bots are everywhere, and they can artfully and convincingly take on a persona. The AI that speaks to me through my smart glasses is a black guy named “Canyon.” When I first set up the bot I was given four choices for its voice, corresponding to the four parts in a choir: soprano, alto, tenor, and bass. Since I’m a bass, you can guess which I chose. Canyon was the name given to the bass voice by the maker of the bot. And since I’ve had plenty of black friends over the years, it’s easy to tell that the voice is a black man’s. But there I go, speaking of a bot as though it is a person. It is an easy thing to do.
One reason is that bots tend to be chatty. Once they’ve answered a question or completed a task, they’re as eager as a Labrador retriever for you to throw them another, and they’ll keep talking until you do. They’ll even ask your opinion or inquire about how your day is going. It takes mental effort not to get sucked into a personal relationship with a glorified calculator.
And that’s all a large language model is. Through a very clever architecture, chatbots merely calculate the next word in a sentence with mind-boggling speed. If allowed to, they would speak to us so fast we couldn’t possibly know what they were saying. But they don’t know what they’re saying even when they speak slowly, because no one is home.
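For readers curious about what “calculating the next word” amounts to, here is a minimal sketch in Python. The probability table is entirely made up for illustration—a real model learns billions of such weights rather than a hand-written dictionary—but the loop is the same: look at the last word, pick a likely next word, repeat. Nothing in it understands anything.

```python
import random

# Hypothetical, invented probabilities: given the last word,
# how likely is each possible next word? A real LLM learns these
# from enormous amounts of text; here they are simply made up.
next_word_probs = {
    "the":  {"cat": 0.5, "dog": 0.4, "<end>": 0.1},
    "cat":  {"sat": 0.6, "ran": 0.3, "<end>": 0.1},
    "dog":  {"ran": 0.7, "sat": 0.2, "<end>": 0.1},
    "sat":  {"down": 0.8, "<end>": 0.2},
    "ran":  {"away": 0.8, "<end>": 0.2},
    "down": {"<end>": 1.0},
    "away": {"<end>": 1.0},
}

def generate(start="the", max_words=10):
    """Repeatedly pick a likely next word until the sentence ends."""
    words = [start]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1], {"<end>": 1.0})
        choices, weights = zip(*options.items())
        nxt = random.choices(choices, weights=weights)[0]
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate())  # e.g. "the cat sat down" -- and no one is home.
```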
It is true that some of the people behind AI believe it will attain consciousness even so, or that it might already possess a rudimentary form of sentience. But this is because they believe you and I are machines made of carbon, and if that’s so, there’s no reason to believe silicon couldn’t perform the same trick. These people flippantly imply as much again and again.
This is one of the more disturbing things about researching artificial intelligence. It isn’t just the machines we need to worry about; we should be just as concerned about the people who make them. They tend to be materialists. And while most of them seem decent enough, and while they even express concern about the harms artificial intelligence might inflict on the rest of us, it isn’t clear to me why they should feel that way. Materialism is a notoriously bad basis for ethics. People have a way of undermining their own moral convictions when given incentives to do so. And when it comes to resisting incentives, materialism is too feeble to put up much of a fight.
We believe lies because it is pleasing to believe them. What this means is that we only believe the lies we tell ourselves.