Ghosts in the Machines

“Cogito, ergo sum.”

– René Descartes, Principia Philosophiae


    There was a time, in the naive youth of modern thought, when this kind of fucked up logic could be taken seriously, along with mind/body dualism crap. These days, any philo 101 student with multiple brain cells can poke holes in Descartes’ argument: the premise “I think” is not only unproven, it presupposes the conclusion. If you say “I think” you’ve already said “I am”, at least to the extent those words have any meaning, which is another flaw we’ll be addressing shortly. You’d think that even four hundred years ago people would have had better sense than to take this kind of muddy syllogism seriously, or to buy into rubbish like dualism. Surprisingly, there are still morons around (in academia, mostly, since it generally takes at least a PhD to make a literate person that stupid) who believe in mind/body dualism.

    One of them is John Searle (who calls mind “intentionality” to conceal the fact that he is a dualist). Some years ago he concocted a ludicrous argument called “The Chinese Room” which purported to prove that machines cannot have what we call “awareness”, i.e., even if they are able to simulate thought perfectly, they aren’t really thinking and are no more aware of what they are doing than a rock is aware of being a rock. They lack the mental substance Searle calls “intentionality.”
    Searle’s so-called argument has been gutted from a dozen different angles over the years, and I’m not going to describe the various refutations here. I will, however, point out its most fundamental flaw: the “Chinese Room” analogy lures the reader into assuming that a computer cannot have awareness by describing a computer which, instead of being a discrete entity, is a system (the eponymous room) performing the same functions, with a human being doing the core processing. Searle claims that since the human processor need not understand what the system as a whole is doing, the system itself cannot understand either. Since we are not accustomed to thinking of a human mind as being a component of a different mind, and since the system is described in simplistic terms, the unwary (or stupid) may be deceived into going along with this assumption.
    (To be entirely fair to Searle, he was primarily interested in repudiating an even dumber idea – the “Turing Test” for artificial intelligence, which equates intelligence with the ability to use human language well enough to fool a human.)
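    Incidentally, the structure Searle is trading on is familiar to anyone who has written an interpreter: the component that mechanically executes the rules has no idea what the rules add up to. Here is a minimal sketch in Python – the rulebook and the questions are invented stand-ins for illustration (Searle’s original used Chinese script, hence the name):

        # A toy 'Chinese Room': the operator just matches symbols against a rulebook.
        # The rulebook below is an invented stand-in, not anything from Searle.
        RULEBOOK = {
            "what is two plus two?": "four",
            "are you aware of this conversation?": "of course I am",
        }

        def operator(symbols: str) -> str:
            # The person in the room: look the symbols up mechanically, understand nothing.
            return RULEBOOK.get(symbols.strip().lower(), "I do not understand the question")

        def chinese_room(question: str) -> str:
            # The whole system: operator plus rulebook plus the room around them.
            return operator(question)

        print(chinese_room("Are you aware of this conversation?"))  # "of course I am"

    Searle points at operator(), finds no understanding there, and concludes there is none anywhere; the reply sketched in the rest of this essay is that the interesting question is about chinese_room() as a whole – and nothing in a toy like this settles that question either way.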

    The reality is that we have no good reason for thinking that the “Chinese Room” would not be aware of what it is doing. We simply don’t know. We can ask it, of course, but the answer (if any) wouldn’t prove anything. In fact, as far as anyone can prove, there’s nothing in the world but zombies. You might try to tell me that you’re conscious of your own being, but that’s just what I’d expect from the jello computer in your head that’s running your mouth based on electrical signals and some complex but fairly haphazard programming. The only reason I’d give you the benefit of the doubt is that you look somewhat like me and I’m generalizing from my own experience (in other words, I’ve been programmed to accept your claim to awareness at face value). Based on similarity, I’m also inclined to think that a dog has some level of awareness, whereas a fish has very little and a potato none. All this is based on extrapolation from a single data point (myself) – there is no way at all of actually detecting or measuring consciousness.
    Lacking any way of observing non-human consciousness, humans have long been in the habit of assuming that only one kind of consciousness (our kind) exists, and that entities either have it or don’t. We love to categorize things, and we tend to forget that the categories and their distinctions are arbitrary. When something comes along that doesn’t fit the existing taxonomy, we generally try to jam it in somewhere and squabble over where it belongs, instead of accepting it for what it is. Eventually, we might revise the categories, but we never give up our either-or, all-or-nothing mode of classification: even if the pigeonholes get reshuffled, the pigeonholing continues. Thus we have dumbasses trying to decide whether viruses are alive, what genre of music Morrissey is, whether fetuses are people, what food group snails are, or whether animals have awareness. Regarding that last question, a number of so-called scientists over the past couple of centuries have been so devoted to their pigeonholing that, without a shred of evidence, they concluded that animals, not being humans, were merely meat robots and therefore didn’t suffer during the vivisections often performed by said scientists.

    Mammals have a nervous system rather similar to our own, and our own experience shows that our awareness is a function of our nervous system (we can tell this by fucking with said nervous system). We might guess (if we aren’t looking for an excuse to split a rabbit open and watch its heart beat) that mammals have some kind of awareness at least vaguely like our own. Not the same – and of a lesser degree, surely – but of some kind. So what about lizards, fish, insects, worms? They all have nervous systems, of decreasing complexity. At what point does awareness stop? What is the simplest organism that can be self-aware, the lowest rung on the ladder?
    The answer is: none. There is not the slightest reason to suppose that a nematode with three hundred-odd neurons isn’t aware of its own experiences. It couldn’t have much of a mind, and wouldn’t have a clear sense of itself as an individual, but it might very well feel pain when immersed in acid. If its nervous system enables it to move away from painful stimuli, it may well also experience what we would call a powerful desire to escape them. If it avoids death, there is no reason to think it doesn’t experience fear, even without knowing what death is.
    For that matter, we can’t know for sure that even things without a nervous system do not have an awareness of sorts. It wouldn’t be like ours, of course: it would be something very different, something that we can’t imagine, and without memories or programming for self-preservation such things could hardly have a sense of self, but that doesn’t prove that they can’t have any awareness at all. Even human beings have reported experiences in which they lost their sense of self, under the influence of drugs, meditation or the like. Awareness need not be an either-or condition; it may be a trait that appears in an infinite variety of kinds and intensities.
    There are also systems that are excellent candidates for awareness – and perhaps self-awareness – that we don’t ordinarily consider because we limit our thinking to individual bodies. There are many organizations of human beings that have inputs and outputs of information, keep records (including records of themselves), react to stimuli, learn, and struggle to survive and grow. Why couldn’t a nation or a corporation be a sentient entity? There’s no way for any member to detect its sentience, any more than an individual cell in your brain can tell that it is part of a person. The awareness doesn’t reside in any individual component, nor in all the components together, but in the organization of the components. Replace every person in the group with someone else, but leave all the relationships and activities the same, and nothing changes. Replace one neuron in your brain with a prosthetic signal relay, and nothing changes. Replace every neuron, and still nothing changes. Write a computer program to simulate all the interactions of all your neurons, and there’s no reason to think that when that program is running it is not a self-aware mind identical to your own.
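    If the neuron-swapping step sounds like hand-waving, here is roughly what it amounts to in code. This is a deliberately cartoonish sketch – two interchangeable “neuron” implementations and a two-stage toy network, with every number made up – not a claim about how real neurons or real prosthetics work:

        # Cartoon neuron: fires (returns 1.0) when its weighted input crosses a threshold.
        def biological_neuron(inputs, weights, threshold=1.0):
            return 1.0 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0.0

        # "Prosthetic signal relay": a different implementation of the same input/output function.
        def prosthetic_relay(inputs, weights, threshold=1.0):
            total = 0.0
            for i, w in zip(inputs, weights):
                total += i * w
            return float(total >= threshold)

        def run_network(unit, stimulus):
            # A two-stage "network"; only the input/output behavior of each unit matters here.
            hidden = unit(stimulus, weights=[0.6, 0.9])
            return unit([hidden, 1.0], weights=[2.0, -0.5])

        stimulus = [1.0, 0.7]
        assert run_network(biological_neuron, stimulus) == run_network(prosthetic_relay, stimulus)
        # Same relationships, same activity, same output: swap every unit and nothing changes.

    Whether preserving the function preserves the awareness is exactly the question this essay leaves open; the sketch only shows that the substitution itself is a coherent operation, not a conjuring trick.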

    If that’s not funky enough for you, it gets better (or worse, depending on your outlook). If any entity with internal organization can have consciousness, such entities can and do overlap. Maybe you are unwittingly a part of several different minds, which are in turn parts of other minds. Maybe your own mind, which you like to think of as being fairly constant, is not a single entity but a constantly shifting array of different aspects of your nervous system. You think there’s only one you? Think again. Experiments (radical surgery on severe epileptics) have shown that severing the connection between the two halves of your brain results in two separate personalities. Each half has its own memories and experiences. The only reason they don’t seem like two different individuals right now is that each hemisphere has access to the same memories. They’ve been taught to think that they’re one person. Sometimes this organization breaks down, and multiple personalities can inhabit the same brain even without surgery. The system also appears to break down to some extent when you are dreaming – you don’t have access to all your memories, nor do most of your experiences get stored in memory. Your “self” changes because different components of your brain are dominant and the system is organized differently. But those components are still there when you are awake, doing whatever they do. If they had an awareness – even a self-awareness – that was wholly or partly separate from your own, how would you know?
    Maybe “your” consciousness is a series of different minds – different parts of your brain – taking turns running the show and recording memories. Each of them could be around all the time, fully conscious, even when you’re asleep – “you” just don’t know it because they’re not making memories. There’s a part of your brain (the amygdala) that, among other things, causes you to attack or flee when threatened. When you are in a dangerous situation, you may experience a struggle for “self” control to avoid doing either of those things – but is it “yourself” you are struggling against, or is it a competition between two different minds? When you’re not in danger, the amygdala doesn’t just shut down – it’s sitting there, all the time, doing something. One of the things it is presumably doing is watching for danger; if you are suddenly confronted with a raging fire, a coiled snake, or a cop, your amygdala breaks in and starts talking to you – not in language, because it doesn’t have access to language, but in the more intimate form of emotion. Whatever your amygdala is thinking or feeling the rest of the time, you don’t know, because it isn’t telling you or making a record. But it may very well be fully aware the whole time, waiting like a faithful watchdog for something to bark at.

    To return to the question of machine intelligence: Is your computer aware of its own existence? Not likely. It has some self-referential functions, but (unlike you) it’s not really designed to preserve itself as a unit. The amount of information it has access to at any one instant is extremely small. Moreover, it has very little autonomy – it doesn’t record most of what it does, and it doesn’t really make “choices” – computer decisions are very predictable (somewhat less so if you use Windows). It may well have some kind of awareness, however – it’s a fairly complex information-processing system – and there is perhaps the faint beginning of an instinct for self-preservation. Does your computer object when you amputate a peripheral device? Does it remind you to save your work – that is, to not wipe things from its memory? Does it plead for its life if you set about formatting the hard drive? It’s designed, within very circumscribed limits, to protect its own utility as a tool, and for that purpose it has been given a vestigial tendency to protect itself. That this tendency is artificial in no way renders it inferior to the survival efforts of a nematode.
    As for a system of computers, perhaps one including human components, the question of intelligence is much more open. At any one time, there are millions of computers connected via the Internet. Some of them send and receive enormous amounts of information and keep records of their activities. If we include the contribution made by human brains to the volume of processing, there can be little doubt that the Internet has the capacity for awareness. The real question is whether it is aware of itself as an entity. At this point, I would say the answer is still no. We don’t know for sure what is necessary for awareness of a self, but it is likely that it would include a common memory bank that is more or less universally available; a sharp differentiation between “internal” and “external” perceptions (as your mind distinguishes between a picture of food and a full belly); the ability to select between choices and learn from the consequences thereof; and a self-protecting design that would encourage the system to consistently treat itself as a unit and to seek some outcomes in preference to others.
    The Internet doesn’t have these attributes – yet – but plenty of secure networks have most of them. They have a common database, differentiate sharply between “inside” and “outside” communications, and protect themselves from being changed except by a few select personnel. Sure, they’re dependent on humans – just like we’re dependent on mitochondria and intestinal bacteria. Are they sentient? The machines by themselves probably are not – they lack the ability to choose and learn – but combined with their human operators, the larger networks may very well already be self-aware hybrid (human/computer) systems. How would we know? We have no way of communicating with them. Even if such a system were able to use language, it might not be able to communicate with us. Why would an intelligent hybrid network communicate via human language? It could be fully sentient and interact with individual human users via language, yet be completely unaware that the quaint text strings were meant to reach its mind. Why would it communicate at our level?

    Why would it even suspect us of having self-awareness? Have you ever wondered whether your glands are conscious, whether the hormones they produce are intentional signals? Yet they influence your behavior. If you wanted to ask a neuron whether it was a “self”, how would you go about it? The superhuman mind that will someday – if it doesn’t already – inhabit the global computer network might have no more ability to communicate with us than we have to talk to our glands and neurons in English. And it probably won’t have any more interest in doing so, either. Forget the Turing Test – it is we who would fail.