
Heading Off The Robots

Ottawa journalist John Robson warns that once machines become intelligent enough to tell human beings precisely what we’re good for, we might not like the answer.

John Robson
10 minute read

In a recent Convivium article, Timothy DeVries tells readers not to worry about artificial intelligence. But it didn’t work. In fact, I’m even more worried because he addressed all the wrong concerns.

DeVries rejects fears “that there is something about the intelligence of artificial intelligence which escapes our ability to control it” and says we are too ready to recoil at the concept of “artificial”. AI is just an extension of its creators, apparently. But, alas, we are fallible and much that we create is not what we expected or wanted.

I’m not sure I get his point about artificiality. It’s true that when people go on vacation, it’s almost always to see nature, and older architecture that blends into its landscape instead of defying it. But the problem with AI isn’t the A. It’s the I. AI is getting smarter than we are, which creates grave problems even if it doesn’t run amok HAL-style.

I must admit that at times I had difficulty following DeVries’ argument, including his contention that “artificial intelligence, in its indifference to the conditions and context of thought, appears to push for its own objectives, which appear to be decidedly different than those of anyone who thinks that intelligence, in general, is something which serves the common good”.

Obviously, I hope good things will serve the common good. But that’s not my definition of good, because what matters ultimately is the individual. I am not a Communist, and God gave us individual souls, not a world soul or some such. And it’s certainly not my definition of intelligence.

I don’t “think intelligence, in general, is something that serves the common good” or fails to. I think it’s something smart. And the computers are getting smart, and strong too, as robots march, spring, and flip forward.

Like physical strength, the virtues of mental strength depend entirely on the use to which it is put. There’s a bad habit of insisting that Hitler, being evil, must have been ugly and stupid as well. Sadly, he was a genius and possessed of remarkable courage and charisma; if he had not been, we would not know his name. You may remember the shock and horror with which Ransom, in C.S. Lewis’s Perelandra (a.k.a. Voyage to Venus, second book in the trilogy), realizes that the demon inhabiting Weston’s body “regarded intelligence simply and solely as a weapon, which it had no more wish to employ in its off-duty hours than a soldier has to do bayonet practice when he is on leave. Thought was for it a device necessary to certain ends, but thought in itself did not interest it.”

If this assessment is accurate, it implies that there’s nothing about being smart that necessarily steers us toward good. God may be Truth, and Christ Truth incarnate. But if we do not want Truth, then intelligence becomes instrumental cleverness, not wisdom, and so much the worse for being a good thing perverted or simply accidentally aimed in the wrong direction. Life is hard.

DeVries also declares, as though it were self-evident, that “Artificial intelligence is, by definition, the kind of intelligence that isn’t what it appears to be.” Again, I beg to differ while hoping I understand. 

Artificial intelligence is the kind that is created by a machine, not by an organic brain. Otherwise we’d have to classify actor Victor Mature as an example of artificial intelligence since he lived by his father’s maxim that “As long as people think you’re dumber than you are, you’ll make money.” Indeed, we’d have to classify all clever deceit as “artificial intelligence”. But it’s not, though it is the fallen kind.

At another point DeVries says “the critic holds that artificial intelligence is only a representation of intelligence. If the critic’s assertion holds, then artificial intelligence is not intelligence at all.” Here, I frankly don’t know who or what he’s talking about.

If you watch AlphaZero play chess or, worse, play chess against it, you don’t think it’s not intelligent. You think it’s way smarter than you. (BTW I don’t claim to be much good at chess, but if you don’t know what the Panov-Botvinnik Attack is, trust me on this example.)

When the “engines” calculate variations, they reach the same conclusions you do, as far as you can follow them. Ask one to carry out a simple checkmate, say king and queen against king, and it will do it exactly the way you would. Or possibly me. Or Magnus Carlsen, if he’s more efficient than I am. You just can’t follow it very far.

It’s intelligence, all right. Which is exactly the problem. Or rather, the two-fold problem.

First, AI might run amok. Because it genuinely is smarter than us, its reasoning increasingly leaves us in the dust. In chess, for instance, the machines 40 years ago were just bad: tactically weak and strategically blind. Then they got tactically scary, but the best humans could still defeat them because the machines didn’t understand the demands of the position, and by the time tactics erupted they were losing in ways a strong grandmaster could calculate. As Bobby Fischer once put it, “Tactics flow from a superior position.”

Unfortunately, by the mid-1990s, as the chips got faster and the programming more clever, computers’ tactical prowess made their strategic deficiencies on the chessboard unimportant. By now, the good computers are not just so much better than the best humans that there’s nothing interesting about playing them; they increasingly approach chess strategy in ways we don’t understand.

The old principles of sound opening play and transition to the middlegame are discarded because the machines’ stupendous calculating ability has turned quantity into quality. There are still a handful of positions humans understand better, mostly endgames where the winning strategy takes so long to bear fruit that it’s over the calculation horizon. But fewer and fewer. And they’re getting smarter faster and faster while we ask “…um…duh…where did I put my glasses?”

Oh well, we can always play Go or Bridge, right? Wrong. They are smarter than us. So much smarter that we can’t follow. As I mentioned in a Mercatornet piece on this subject in January, “Alpha Zero, the world chess champion that taught itself to play in under a day, also just won a ‘protein folding competition.’ They have slipped the leash.” And since they are also increasingly making decisions about investments, insurance, cyber-defence and so forth, it matters in our lives. 

They don’t just make better decisions. They make decisions that are better in ways we can’t understand, leaving us helpless. We certainly can’t out-argue them. And once they’re put into those agile headless robots, we won’t be able to win the argument by switching them off either if they don’t agree with us that they should be switched off.

Which they might not. 

As they get smarter, stronger, and more connected, they may start to reason about matters we didn’t ask them to, and reach conclusions we don’t like and can’t really understand. DeVries seems to assume this familiar sci-fi trope isn’t a realistic possibility because our intentions are good. Supposedly there will be “an alliance, a commitment, or fidelity to the person who uses it.” As with, say, the atomic bomb? Or online porn? Or even music videos? 

There are so many examples of our inventions turning on us that you’d almost call it the rule not the exception. Especially as the technology gets more complex, though you could be crushed by a runaway ox cart or burn your hand in a fire.

Here it is important to remember that, as our intellect is fallible, we don’t always devise a sound way to get what we intend. Indeed, I’d say it’s true more often than not. Worse, as our will is corrupt, our intentions are often not as pure as we suppose, including the element of pride that drives the development of AI.

Okay. Suppose for purposes of argument we wave all that off. Even if the robots become our masters they will continue to do for and to us the sorts of things we wanted them to or thought we did. They will never decide that life is suffering and save us from the slings and arrows by putting us down painlessly in our sleep. They won’t even faithfully let a totalitarian regime win a world war and then control everybody all the time through smart surveillance. It’s still a horror story because transhumanism isn’t human. And what worries me isn’t flaws in the algorithms. It’s their success.

To me the core metaphysical problem is that AI renders us increasingly obsolete. First the factory workers. Then the office workers. Then the professors, doctors and poets. It’s all well and good to go gaga at the birdlike box stackers rapidly eliminating unskilled work. But has it occurred to their inventors that, as with software, the better it gets the less need there is for anyone to design improvements?

In the original Star Wars, there’s a scene where Han Solo scoffs at Luke’s early light-sabre proficiency against little training bots, saying “Good against remotes is one thing. But good against the living…?” and leaves it hanging. Supposedly the spark of life would always give an intuitive edge even in physical combat. But will it really? How long will it be before Boston Dynamics’ scary headless robots, which can already do back flips, win fights against humans as easily as modern software, even on an indifferent computer, crushes us? And before computer novelists, as well as programmers, outperform our best efforts?

Already the design of theme parks and movies, and I’m starting to think pop music as well, seems to be the work of programmers not visionaries, behaviourist data-miners not artists. And we are not richer for it spiritually.

When I say AI renders us obsolete, I don’t just mean in the narrow sense that we won’t have jobs and will have to take up hobbies for personal satisfaction even though our domestic robot can draw, weave, cook and throw pots better than we can. (I won’t even mention sexbots.) I mean in the most profound metaphysical sense. 

Even if the machines continue to serve “the common good” and individual good in the sense of performing functions recognizably similar to those we originally programmed in, they won’t serve the common or individual good, because man, at least fallen man, is meant to labour to overcome adversity. We enjoy relaxation because it is not idleness or uselessness. What shall life be like if all activity is one, the other or both?

DeVries says that all technology “is artificial, in a way which serves to augment human nature. A cell phone enables us to do more and to expand our presence – and to do it naturally, which is to say, within the realm of human understanding and to our comfort.” But AI is liable not to enable us to do more or expand our presence, but to shrink our presence by preventing us from doing anything. Which is not a consummation devoutly to be wished.

To return to the Middle Ages, my favourite historical period: a scythe was all well and good, and more power and money to the guy who made a better one. But scythes wore out and broke and had to be replaced, and there was not just paid work but dignity in making and repairing them.

Software, on the other hand, does not wear out. And while computers do, they are increasingly manufactured by robots. Soon the robots will be manufactured by robots too. And then they will be designed by robots.

I think some of us have waved off the nightmare scenario far too blithely, given that as AI develops, it increasingly designs itself. Just because it initially does so according to parameters we originally laid down, there is no reason to assume that at some point the internal logic will not escape our control, or even our understanding, and take it in directions we neither grasp nor enjoy. Possibly directions we would accept, through logic, if we could follow it; possibly not. But it won’t matter, because they will be thinking and acting faster than we can understand or control.

As the late Stephen Hawking warned, “a superintelligent AI will be extremely good at accomplishing goals, and if those goals aren’t aligned with ours, we’re in trouble.”

So what then guarantees that we will understand or like what they’re doing, or find fulfilment in whatever they leave to us, even if we avoid the dystopian scenario where they decide we’re better off dead because technological obsolescence has filled us with anomie? There are already humans who reason their way to thinking life isn’t worth living for all kinds of reasons including our impact on climate. The machines might reach a similar conclusion.

They might also manage to lose all our money in the stock market or start a nuclear war by mistake rather than as a Terminator-style move to get rid of us. But even if they don’t, they might improve the world we live in until it becomes intolerable.

The fundamental question is “What is Man for?” To be sure, most people seem to get it wrong much of the time, and I don’t think software engineers generally outperform the rest of us on this one. But I still wouldn’t ask a machine if I were you. There’s no reason to think we will like the answer even if we understand it. Especially given how likely it is to be a grim, or cheerful, version of: “Nothing.”

