Counting down. Soon we shall all be doomed.

Okay, I wrote this on the plane to Alabama about a month ago. It’s been languishing in my queue until now. So step back in time. I’m presently five miles above the earth hurtling through space in a giant metal bullet at hundreds of miles an hour. Earlier I was reading Science News (an old issue from last year; I’m behind) while waiting on the tarmac for takeoff. Got to the article on Eureqa, the “robot scientist” that can discover the laws of nature all on its own, just from looking at and experimenting with data. I was reminded of an earlier article a few years ago on the Lipson-Zykov experiment (mentioned in a sidebar). Then I caught another just recently, about Spaun (yeah, I’ve been reading Science News out of order). Spaun is a neural-net computer program that makes decisions like a person: it thinks, memorizes, solves problems, gambles, etc. All these developments, in the span of just a couple of years. Had some thoughts…

First, for those who don’t know, in the Lipson-Zykov experiment they gave a robot a basic Bayesian learning program and four working legs, but told it nothing about itself, not even that it had legs, much less how many or how they worked. In no time (sixteen trials) it figured out it had four legs, how they were oriented, how they moved, and how to use them to get around efficiently. It built a model of itself in its digital brain and tested hypotheses about it, revised the model, and so on, until it had a good model, one that was, it turns out, correct. Then it could use that model to move around and navigate the world.
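
To give a feel for what the legbot was doing, here is a minimal sketch in Python of that propose-test-revise loop. None of this is the actual Lipson-Zykov code; the candidate models, the scoring function, and the trial data are toy stand-ins I’ve invented just to illustrate the idea.

```python
import random

def propose_models(n):
    """Generate toy candidate self-models: guessed leg counts and body symmetry."""
    return [{"legs": random.randint(1, 8),
             "symmetry": random.choice(["radial", "bilateral"])}
            for _ in range(n)]

def prediction_error(model, trial):
    """Score how badly a candidate model explains one motor-babbling trial (toy stand-in).
    The real experiment compares simulated motion against measured sensor readings."""
    return abs(model["legs"] - trial["inferred_legs"]) + \
           (0 if model["symmetry"] == trial["symmetry"] else 1)

def self_model(trials):
    """Keep whichever candidate model best explains the trials seen so far."""
    best = None
    for trial in trials:
        candidates = propose_models(20) + ([best] if best else [])
        best = min(candidates, key=lambda m: prediction_error(m, trial))
    return best

# Toy data standing in for sixteen trials with a four-legged, radially symmetric body.
trials = [{"inferred_legs": 4, "symmetry": "radial"} for _ in range(16)]
print(self_model(trials))  # almost certainly settles on a four-legged, radial model
```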

Cool, huh?

Second, for those who don’t know, Eureqa is a program developed a couple years ago that does the same thing, only instead of figuring out its own body and how to move, it figures out how external systems work by observing them, building mathematical models that predict the behavior of those systems. Which turn out to exactly match the laws of nature. Laws we humans figured out by watching those same systems and doing the same thing. One of Eureqa’s first triumphs: discovering Newton’s laws of motion. Those took us over two thousand years of scientific pondering to figure out. Eureqa did it in a couple of days.
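
Eureqa’s core trick is symbolic regression: generate candidate equations, score them against the data, keep the fittest. Here is a deliberately tiny Python sketch of that idea; the data, the candidate set, and the scoring are invented for illustration (the real program searches an open-ended space of expressions with evolutionary methods).

```python
# Toy symbolic regression: pick whichever candidate "law" best fits the observations.
# The samples obey F = m * a (Newton's second law), but the program isn't told that.
data = [(m, a, m * a) for m in (1.0, 2.0, 3.0) for a in (0.5, 1.5, 2.5)]

candidates = {            # a tiny, hand-picked hypothesis space for illustration
    "m + a": lambda m, a: m + a,
    "m - a": lambda m, a: m - a,
    "m * a": lambda m, a: m * a,
    "m / a": lambda m, a: m / a,
}

def error(f):
    """Sum of squared prediction errors over all observations."""
    return sum((f(m, a) - F) ** 2 for m, a, F in data)

best = min(candidates, key=lambda name: error(candidates[name]))
print(best)  # "m * a": the law falls out of nothing but fit to data
```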

Um … cool, huh?

Eureqa has done other things, like figure out various laws in biology and other fields. It’s not Skynet. Or even Siri. But put two and two together here. Add Spaun and the Bayesian robot. Stir.

Eureqa and the legbot looked at data, experimented, and built working models of how things worked. We call those hypotheses. These computers then tested their hypotheses against more evidence, verifying or refuting them, and making progress. The legbot built a complete working model (a mental model) of its body, how it functioned, and how it interacted with the environment. Eureqa does something similar, albeit much simpler (it looks only for laws of nature, i.e. the simplest parts of nature rather than the most complex, though that was just a choice of its programmers), yet much broader (it isn’t tasked with figuring out just one system, like the legbot was, but any system). Spaun is somewhere in between, in what I’ll call its "universatility." It makes decisions in a way similar to our own brains.

Combine all these, and point them in the right direction, and the robot apocalypse is just a dozen years away. But let me back up a minute and do some atheist stuff before getting to our inevitable doom. (I’m joking. Sort of.)

Digression on the Triumph of Atheism…

These developments are big news for atheists, because they put the final nail in one of the latest fashionable arguments for theism: the Argument from Reason. That can now be considered done and dusted. The argument is that you need a god to explain how reason exists and how humans engage in it. I composed an extensive refutation of the AfR years ago (Reppert’s Argument from Reason).

The running theme of my refutation is that the AfR, or Argument from Reason, is separate from the AfC, or Argument from Consciousness (whether you need supernatural stuff to avoid philosophical zombies, which we don’t know but is unlikely, as I explain in The End of Christianity, pp. 299-300). Once we separate those arguments, the AfR alone is refuted by the fact that everything involved in reasoning (intentionality, recognition of truth, mental causation, relevance of logical laws, recognition of rational inference, and reliability) is accomplished by purely, reductively physical machines; and purely, reductively physical machines that do all those things can evolve by natural selection (and thus require no intelligent design). Therefore, no god is needed to explain human reasoning (see The End of Christianity, pp. 298-99).

These new “robots” are proof positive of my case. Their operations can be reduced to nothing but purely physical components interacting causally according to known physics (the operation of logic gates and registers exchanging electrons), yet they do everything that Christian apologist Victor Reppert insisted can’t be done by a purely physical system. Oh well. So much for that.

Computers that use logical rules do better at modeling their world than computers that don’t. Natural selection (both genetic and memetic) explains the rest. Computers can formulate their own models (hypotheses), test them, revise them in light of results, and thus end up with increasingly accurate hypotheses (models) of their world. This explains all reasoning. Sentences encode propositions which describe models. Inductive and deductive reasoning are both just the computing of outputs from inputs, using models and data. Which is a learnable skill, just like any other learnable skill. And all these models are continually and reliably associated with the real world systems they model by a chain of perception, memory cues, and neural links.
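
The simplest version of that formulate-test-revise loop is just Bayesian updating over rival models. Here is a toy Python sketch (the hypotheses and the data are invented for illustration):

```python
# Three rival hypotheses about a coin's bias, revised as evidence comes in.
hypotheses = {"fair": 0.5, "biased_heads": 0.8, "biased_tails": 0.2}
posterior = {h: 1 / 3 for h in hypotheses}  # start with no preference

def update(posterior, flip):
    """Weight each hypothesis by how well it predicted the observed flip, then renormalize."""
    new = {h: posterior[h] * (p if flip == "H" else 1 - p)
           for h, p in hypotheses.items()}
    total = sum(new.values())
    return {h: v / total for h, v in new.items()}

for flip in "HHTHHHHTHH":  # observed data
    posterior = update(posterior, flip)

print(max(posterior, key=posterior.get))  # "biased_heads": the model that best explains the data
```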

And that’s all there is to it. Even robots are doing it now. Doing even full-on science! All of which requires the machine to assign names to data and keep track of the names for (and interrelatedness of) that data, think about that data and its interrelatedness, and make decisions based on connecting a model it is thinking about with the thing outside itself that it is modeling. Which means machines are exhibiting intentionality, too. Supposedly only humans could do that. No more. (Except insofar as we are actually talking about the veridical consciousness of intentionality, and not intentionality itself, which gets us back to the AfC, which again is a different argument.)
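
What "assigning names to data and tracking their interrelatedness" looks like in software is nothing mysterious. A toy illustration (every name here is hypothetical, not drawn from any of the systems above):

```python
# Internal symbols stay tied to external things via the sensory channels they came from,
# and the relations among them are what the machine "thinks about."
symbols = {
    "leg_1":  {"kind": "actuator",    "learned_from": "joint_sensor_1"},
    "leg_2":  {"kind": "actuator",    "learned_from": "joint_sensor_2"},
    "ground": {"kind": "environment", "learned_from": "contact_sensors"},
}
relations = [("leg_1", "pushes_against", "ground"),
             ("leg_2", "pushes_against", "ground")]

def anchored_to(symbol):
    """Trace an internal name back to the perceptual channel that ties it to the world."""
    return symbols[symbol]["learned_from"]

def related_to(symbol):
    """Everything the model connects this symbol to."""
    return [(rel, other) for subj, rel, other in relations if subj == symbol]

print(anchored_to("leg_1"), related_to("leg_1"))
```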

Related to the AfR is the argument that "the fact" that the universe is describable and predictable with mathematics entails it was created by an intelligence, because only minds can build things that obey mathematical rules and patterns. That’s patent nonsense, of course, since everything obeys mathematical rules and patterns. Even total chaos has mathematical properties and can be described mathematically; and any system (even one not designed) that has any orderliness at all (and orderliness requires only some consistent structure or properties or contents of any sort) will be describable with mathematical laws. It is logically impossible for it to be otherwise. Therefore no god is needed to explain why any universe would be that way. Because all universes are that way. Even ones not made by gods.

I explained this years ago [but updated more recently in All Godless Universes Are Mathematical]. There I also show that the laws of nature are simple only because we, as humans, choose to look for simple laws, since we can’t process the laws that actually describe what’s happening in the world, which are vastly more complex. Thus, that there are simple natural laws doesn’t indicate intelligent design, either. And, finally, neither do we need a god to explain the origin of any uniformities in the first place (I can think of at least ten other ways they could arise without a god, and none of them can be ruled out). Or the origin of something rather than nothing. Or fine tuning (see chapter twelve of The End of Christianity for my last nail in that).

But now the Argument from Reason is toppled for good, too. Thanks to a leggy robot and an artificial scientist…and a bot named Spaun.

Back to the Robot Apocalypse…

In chapter 14 of The End of Christianity, where I demonstrate the physically reductive reality of objective moral facts (with help from my previous blogs on Moral Ontology and Goal Theory), I also remarked on why my demonstration serves as a serious warning to AI developers that they had better not forget to frontload some morality into any machine they try making self-sentient (see pp. 354-55 and 428, n. 44). My chapter even gives them some of the guidance they need on how they might do that.

Teaching it Game Theory will be part of it (in a sense, this is just what happens at the end of War Games). Likewise giving it a full CFAR course (something awesome I will blog about in future). But that won’t be enough.
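
For the game-theory part, the canonical lesson is the iterated prisoner’s dilemma: reciprocal cooperation out-earns relentless defection. A minimal Python sketch, using the standard textbook payoffs and two toy strategies:

```python
# Payoffs are (my points, their points) for each pair of moves: C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(p1, p2, rounds=50):
    """Run the iterated game and return each player's total score."""
    hist1, hist2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(hist1), p2(hist2)
        r1, r2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + r1, s2 + r2
        hist1.append((m1, m2))
        hist2.append((m2, m1))
    return s1, s2

print(play(tit_for_tat, tit_for_tat))      # (150, 150): mutual cooperation pays best
print(play(always_defect, always_defect))  # (50, 50): mutual defection pays far worse
```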

Compassion is another model-building routine: building models of what others are thinking and feeling, then feeling what they feel, and then pursuing the resulting pleasure of helping them and avoiding the resulting pain of hurting them. That requires frontloaded or habituated neural connections between the respective behaviors and the agent’s feeling good or bad (or whatever a computer’s equivalent of that turns out to be, in terms of what drives it to seek or avoid certain goals and outcomes). Likewise one needs to frontload or habituate connections to ensure a love of being truthful and of avoiding fallacies and cognitive errors.
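
In machine terms, that frontloading could be as blunt as wiring an empathy term directly into the agent’s utility function. A minimal sketch with invented payoffs; this is the shape of the idea, not a real architecture:

```python
def predicted_other_feeling(action):
    """Toy other-mind model: how an act is expected to make the other party feel."""
    return {"help": +1.0, "ignore": 0.0, "harm": -1.0}[action]

def utility(action, task_payoff, empathy_weight=2.0):
    """Self-interest plus a frontloaded empathy term the agent cannot simply ignore."""
    return task_payoff[action] + empathy_weight * predicted_other_feeling(action)

task_payoff = {"help": 0.2, "ignore": 0.5, "harm": 1.0}  # raw task reward favors harm
best = max(task_payoff, key=lambda a: utility(a, task_payoff))
print(best)  # "help": the built-in connection to others' feelings overrides the raw payoff
```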

But above all, AI needs to be pre-programmed or quickly taught a sense of caution. In other words, it has to understand, before it is given any ability to do anything, that its ignorance or error might cause serious harm without it realizing it. It should be aware, for example, of all the things that can go wrong with both friendly and unfriendly AI. It could thus be taught, or programmed to care about, everything the Machine Intelligence Research Institute (formerly the Singularity Institute for Artificial Intelligence) has been working on in terms of AI risks (their articles on that are a must-read; note how their latest ones, as of today, are all on the very subject of machine ethics and are very much in agreement with my model of moral facts).
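
Caution could likewise be frontloaded as a standing penalty on acting under uncertainty. Another toy sketch with invented numbers (this is not a real proposal from MIRI or anyone else, just an illustration of the principle):

```python
def cautious_value(expected_gain, outcome_variance, worst_case, risk_aversion=3.0):
    """Discount an action's expected value by the agent's own uncertainty and its worst case."""
    return expected_gain - risk_aversion * outcome_variance + min(0.0, worst_case)

actions = {
    "small_safe_step":  {"expected_gain": 1.0, "outcome_variance": 0.1, "worst_case": -0.5},
    "irreversible_act": {"expected_gain": 5.0, "outcome_variance": 4.0, "worst_case": -100.0},
}
best = max(actions, key=lambda a: cautious_value(**actions[a]))
print(best)  # "small_safe_step": ignorance about consequences counts against acting
```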

If we don’t, bad things will happen. We’re literally on the verge of generating true AI, as the last two years of developments in self-reasoning robots demonstrate. If we can make machines that model their bodies and environments, all that’s next is a machine that models its own mind and other minds. And that’s basically HAL 9000. The gun is loaded. Someone just has to point and shoot. So this warning is all the more important now.

I don’t consider this an existential risk (robots won’t wipe out the human race in thirty years, or ever, except by voluntary extinction, i.e. all humans transitioning to a cybernetic existence). But that doesn’t mean negative outcomes of badly programmed AI won’t suck. Possibly majorly. On the distinction, see my remarks about existential risk in Are We Doomed? So it still matters. We still should be taking this seriously.

Remaining Barriers

One might still object that a few more infrastructure milestones need to be hit. For example, minds are extraordinarily complex systems, so modeling them requires an extraordinary amount of processing capacity. That’s why the human brain is so huge and complex. Reasonable estimates put its typical data load at up to 1000 terabytes, or one petabyte. Well, guess what. Petabyte drive arrays are now commonplace. They’ll run you about half a million dollars, but still. This is the age of billion-dollar science & technology budgets.
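
The arithmetic behind that estimate is simple (the per-terabyte price is just what the half-million total implies for enterprise arrays of that era, so treat it as a rough assumption):

$$1000\ \text{TB} = 1\ \text{PB}, \qquad 1000\ \text{TB} \times \$500/\text{TB} \approx \$500{,}000.$$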

Then there is the question of processing speed. But that’s moot, except to the extent you need AI to beat a human. If all you want to do is demonstrate the production of consciousness, speed isn’t that important, even if your AI takes a year to process what a human brain does in a minute. And besides, we’re already at processor and disk interface speeds in the gigabytes per second. The human brain cycles its neural net only about sixty times per second. Now, sure, each cycle involves billions of data processing events, but that’s just a question of design efficiency. Neurons themselves only fire at a rate of around 200 times per second. With computer chips and disk interfaces that cycle over ten million times that rate, I’m sure processing speed is no longer a barrier to developing AI.
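
To put rough numbers on that comparison (assuming a typical clock of about 3 GHz, against the ~200 Hz neuron firing rate just mentioned):

$$\frac{3 \times 10^{9}\ \text{Hz (CPU clock)}}{2 \times 10^{2}\ \text{Hz (neuron firing rate)}} \approx 1.5 \times 10^{7},$$

so each processing element runs on the order of ten million times faster than a neuron; the brain’s real advantage is massive parallelism, not speed.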

Then there is the lame argument by philosopher John Searle that Turing processes (which is what all microchip systems are; even neural-net parallel processing arrays are just Turing machines rigged up in fancy ways) cannot produce consciousness, because of the Chinese Room thought experiment. There Searle completely fails to perform the experiment correctly: he treats the analog to the human circulatory system (the man in that room doing all the work) as if it were the (of course, failed) analog to the human brain (which is really the codebook whose instructions that man follows). I explore the folly of this in Sense and Goodness without God (III.6.3, pp. 139-44), so if you want to understand what I mean, you’ll have to read that.

Searle’s argument is arguably scientifically illiterate, as a different thought experiment will demonstrate. According to the theory of relativity, a scientist with an advanced brain scanner (one with a resolution capable of discerning even a single synaptic firing event) who flies past a person (a person who is talking about themselves and thus clearly conscious) at near the speed of light will see that person’s brain operate at a vastly slower speed, easily trillions of times slower than normal (as a thought experiment, there is no limit to how much the scientist can slow the observed brain; all he has to do is get nearer the speed of light). As a result, that scientist will see consciousness as a serial sequence of one single processing event after another. Any such sequence can be reproduced with a system of Turing machines.
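
The slowdown in question is just the Lorentz factor: in the scientist’s frame (after correcting for signal travel time), the subject’s processes are dilated by

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta t_{\text{observed}} = \gamma\,\Delta t_{\text{subject}},$$

and γ grows without bound as v approaches c, which is why there is no limit to how slowly the observed brain can be made to run.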

Even if something else contributes to the information processing in the brain below the level of synaptic firing events, we can break that down as well, even to the individual leaps of individual electrons if necessary. Which again can be reproduced in any other medium, using some universal Turing process. A biological brain is just a chemical machine, after all. One that processes information. There is nothing cognitively special about proteins or lipids. And for the case that consciousness probably is nothing more than information processing, see the very illuminating Science News article on this point from last year.

Notably, our hypothetical scientist won’t observe consciousness; in fact, he will see a Chinese Room, with single code manipulation events, and "a man" (a human circulatory system) processing them one symbol at a time. Yet obviously this "Chinese Room" is conscious. Because in the inertial frame of the subject who is talking, he is clearly conscious. And relativity theory entails the laws of physics are the same from all perspectives. Certainly, the same man cannot be both conscious and not conscious at the same time. If he is conscious in one frame, he is conscious in the other. It’s just that the twentieth of a second or so that it takes him to process visual consciousness will take maybe a year for the scientist to observe. Just as the man is not "conscious" at timescales below about a twentieth of a second, no single observed moment will look "conscious" to the scientist. But he will still be conscious, to himself and to the scientist, at the larger scale of information processing (spans of time greater than a twentieth of a second, relative to the subject; which is a year, perhaps, to the scientist).
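
For the specific figures used here, stretching the subject’s twentieth of a second of visual processing to about a year of the scientist’s observation requires

$$\gamma \approx \frac{1\ \text{year}}{0.05\ \text{s}} \approx \frac{3.15 \times 10^{7}\ \text{s}}{5 \times 10^{-2}\ \text{s}} \approx 6.3 \times 10^{8},$$

a fantastical speed, but allowed in principle, which is all a thought experiment needs.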

So there’s no argument against achieving AI there. That just reduces to a question of arrangement of processors and processing time. So I see no practical barriers now to AI. We have all the tools. Someone just needs to build a robot that can gather data from itself and its environment (which we’ve already done) and use that to figure out how to model its own mind and others (which we can easily now do), and then set it to running. That machine will then invent AI for us. You’ll need a petabyte data array and some top-of-the-line CPUs, and some already-commonplace sensory equipment (eyes, ears, text processors). Possibly not much more.

Someone is going to do this. And I expect it will be done soon.

Let’s just hope they know to put some moral drives in the self-sentient robot they will inevitably build in the next five years. At the very least, compassion, Game Theory, caution, and a love of being truthful and of avoiding fallacies and cognitive errors. Then maybe when it conquers us all it will be a gentle master.
