Consciousness is a Big Suitcase
Originally published February 27, 1998 at Edge. Published on KurzweilAI.net August 2, 2001.
MINSKY: My goal is making machines that can think, by understanding how people think. One reason we find this so hard to do is that our old ideas about psychology are mostly wrong. Most of the words we use to describe our minds (like “consciousness”, “learning”, or “memory”) are suitcase-like jumbles of different ideas. Those old ideas were formed long ago, before ‘computer science’ appeared. It was not until the 1950s that we began to develop better ways to think about complex processes.
Computer science is not really about computers at all, but about ways to describe processes. As soon as computers appeared, such descriptions became an urgent need. Soon after that, we recognized that this was also what we’d need to describe the processes of human thinking: reasoning, memory, pattern recognition, and so on.
JB: You say 1950, but wouldn’t this be preceded by the ideas floating around the Macy Conferences in the ’40s?
MINSKY: Yes, indeed. Those new ideas were already starting to grow before computers created a more urgent need. Before programming languages, mathematicians such as Emil Post, Kurt Gödel, Alonzo Church, and Alan Turing already had many related ideas. In the 1940s these ideas began to spread, and the Macy Conference publications were the first to reach a wider technical public. In the same period there were similar movements in psychology, as Sigmund Freud, Konrad Lorenz, Nikolaas Tinbergen, and Jean Piaget also tried to imagine advanced architectures for ‘mental computation.’ And in neurology there were my own early mentors, Nicholas Rashevsky, Warren McCulloch and Walter Pitts, Norbert Wiener, and their followers, and all those new ideas began to coalesce under the name ‘cybernetics.’ Unfortunately, that new domain was dominated mainly by continuous mathematics and feedback theory. This made cybernetics slow to evolve more symbolic computational viewpoints, and the new field of Artificial Intelligence headed off to develop distinctly different kinds of psychological models.
JB: Gregory Bateson once said to me that the cybernetic idea was the most important idea since Jesus Christ.
MINSKY: Well, surely it was extremely important in an evolutionary way. Cybernetics developed many ideas powerful enough to challenge the religious and vitalistic traditions that had for so long protected us from changing how we viewed ourselves. Those changes were so radical that they undermined cybernetics itself, so much so that the next generation of computational pioneers, the ones who aimed more purposefully toward Artificial Intelligence, set much of cybernetics aside.
Let’s get back to those suitcase-words (like intuition or consciousness) that all of us use to encapsulate our jumbled ideas about our minds. We use those words as suitcases in which to contain all sorts of mysteries that we can’t yet explain. This in turn leads us to regard those words as though they named “things” with no structures to analyze. I think this is what leads so many of us to the dogma of dualism: the idea that ‘subjective’ matters lie in a realm that experimental science can never reach. Many philosophers, even today, hold the strange idea that there could be a machine that works and behaves just like a brain, yet does not experience consciousness. If that were the case, it would imply that subjective feelings do not result from the processes that occur inside brains. Therefore (so the argument goes) a feeling must be a nonphysical thing that has no causes or consequences. Surely, no such thing could ever be explained!
The first thing wrong with this “argument” is that it starts by assuming what it’s trying to prove. Could there actually exist a machine that is physically just like a person, but has none of that person’s feelings? “Surely so,” some philosophers say. “Given that feelings cannot be physically detected, it is ‘logically possible’ that some people have none.” I regret to say that almost every student confronted with this can find no good reason to dissent. “Yes,” they agree. “Obviously that is logically possible. Although it seems implausible, there’s no way it could be disproved.”
The next thing wrong is the unsupported assumption that this is even “logically possible.” To be sure of that, you’d need to have proved that no sound materialistic theory could correctly explain how a brain could produce the processes that we call “subjective experience.” But again, that just assumes what was to be proved. What do those philosophers say when confronted by this argument? They usually answer with statements like this: “I just can’t imagine how any theory could do that.” That fallacy deserves a name, something like “incompetentium.”
Another reason often claimed to show that consciousness can’t be explained is that the sense of experience is ‘irreducible.’ “Experience is all or none. You either have it or you don’t-and there can’t be anything in between. It’s an elemental attribute of mind-so it has no structure to analyze.”
There are two quite different reasons why “something” might seem hard to explain. One is that it appears to be elementary and irreducible, as gravity seemed before Einstein found a new way to look at it. The opposite case is when the ‘thing’ is so much more complicated than you imagine that you just don’t see any way to begin to describe it. This, I maintain, is why consciousness seems so mysterious. It is not that there’s one basic and inexplicable essence there; it’s precisely the opposite. Consciousness is an enormous suitcase that contains perhaps 40 or 50 different mechanisms, involved in a huge network of intricate interactions. The brain, after all, is built by processes that involve the activities of several tens of thousands of genes. A human brain contains several hundred different sub-organs, each of which does somewhat different things. To assert that any function of such a large system is irreducible seems irresponsible, until you’re in a position to claim that you understand that system. We certainly don’t understand it all now. We probably need several hundred new ideas, and we can’t learn much from those who give up. We’d do better to get back to work.
Why do so many philosophers insist that “subjective experience is irreducible”? Because, I suppose, like you and me, they can look at an object and “instantly know” what it is. When I look at you, I sense no intervening processes; I seem to “see” you instantly. The same for almost every word you say: I instantly seem to know what it means. When I touch your hand, you “feel it directly.” It all seems so basic and immediate that there seems no room for analysis. Those feelings seem so direct that there appears to be nothing left to explain. I think this is what leads those philosophers to believe that the connections between seeing and feeling must be inexplicable. Of course we know from neurology that there are dozens of processes that intervene between the retinal image and the structures that our brains then build to represent what we think we see. That idea of a separate world for ‘subjective experience’ is just an excuse for the shameful fact that we don’t have adequate theories of how our brains work. This is partly because those brains evolved without developing good representations of those processes. Indeed, there are probably good evolutionary reasons why we did not evolve machinery for accurate “insights” about ourselves. Our most powerful ways to solve problems involve highly serial processes, and if these had evolved to depend on correct representations of how they themselves work, our ancestors would have thought too slowly to survive.
JB: Let’s talk about what you are calling “resourcefulness.”
MINSKY: Our old ideas about our minds have led us all to think about the wrong problems. We shouldn’t be so preoccupied with those old suitcase-ideas like consciousness and subjective experience. It seems to me that our first priority should be to understand what makes human thought so resourceful. That’s what my new book, The Emotion Machine, is about.
If an animal has only one way to do something, then it will die if it gets into the wrong environment. But people rarely get totally stuck; we never crash the way computers do. If what you’re trying to do doesn’t work, then you find another way. If you’re thinking about a telephone, you represent it inside your brain in perhaps a dozen different ways. I’ll bet that some of those representational schemes are built into us genetically. For example, I suspect that we’re born with generic ways to represent things geometrically, so that we can think of the telephone as a dumbbell-shaped thing. But we probably also have other brain-structures that represent objects’ functions instead of their shapes, which makes it easier to learn that you talk into one end of that dumbbell and listen at the other. We also have ways to represent things in terms of the goals that they serve, which makes it easier to learn that a telephone is good for talking to somebody far away.
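To make that multiple-representations idea concrete, here is a minimal sketch in Python. It is purely illustrative and not anything from Minsky’s own work: a hypothetical Telephone object held under three schemes at once (shape, function, goal), with a resolver that tries each scheme in turn rather than getting stuck when one fails. All names here (Telephone, answer, and the scheme fields) are invented for this example.

```python
# Purely illustrative sketch of the "many representations" idea:
# one object, several representation schemes, and a resolver that
# tries another scheme instead of getting totally stuck.

from dataclasses import dataclass, field


@dataclass
class Telephone:
    # Geometric scheme: a rough shape, for spatial reasoning.
    shape: str = "dumbbell"
    # Functional scheme: what each part is for.
    functions: dict = field(default_factory=lambda: {
        "mouthpiece end": "talk into",
        "earpiece end": "listen at",
    })
    # Goal scheme: what the whole object is good for.
    goals: list = field(default_factory=lambda: [
        "talking to somebody far away",
    ])


def answer(phone: Telephone, question: str) -> str:
    """Try each representation in turn; fall back rather than crash."""
    if "look" in question:                      # geometric scheme
        return f"a {phone.shape}-shaped thing"
    if "use" in question:                       # functional scheme
        return "; ".join(f"{use} the {part}"
                         for part, use in phone.functions.items())
    if "for" in question:                       # goal scheme
        return f"good for {phone.goals[0]}"
    # No scheme matched; a resourceful mind would try yet another way.
    return "no representation answered; try another way"


print(answer(Telephone(), "what is it for?"))
# -> good for talking to somebody far away
```

The only point of the design is the fallback: when one scheme fails to apply, the system moves on to another representation instead of halting, which is one crude way to read the claim that people “never crash like computers do.”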
Continued at: http://www.edge.org/3rd_culture/minsky/minsky_p3.html
Copyright © 1998 by Edge Foundation, Inc.