Live Moderated Chat: Are We Spiritual Machines?
July 24, 2002
Originally published by the International Society for Complexity, Information, and Design on July 19, 2002. Published on KurzweilAI.net July 24, 2002.
ISCID Moderator
Welcome, everyone. ISCID is pleased to have several of the contributors to the new book Are We Spiritual Machines? as guest speakers in today’s chat. Before we announce the speakers, I would like to give everyone a heads-up on the protocol for today’s chat.
The public (that is most of you) can type in questions and submit them. The questions will not automatically be displayed. Rather, they will be sent to a moderator who will then select questions for everyone to view. The guest speakers will then have the opportunity to respond to the questions that have been selected. When the guest speakers have finished their comments, the moderator will approve another question. This cycle will continue until 5PM Eastern. When submitting a question, please indicate which guest speaker it is addressed to. Please stay on topic and be as brief and concise as possible.
Right now it looks like Ray K. and Jay Richards are the only two guest speakers logged on.
I will go ahead and introduce them and the others we are expecting.
Bill Dembski
I’m here as well.
ISCID Moderator
Tom Ray, who authored the chapter "Kurzweil’s Turing Fallacy," is not able to be with us today. He did, however, want to say a few words. Here they are:
Sorry that I can’t join the chat. I’m sleeping in Kyoto, Japan. But I have prepared a few comments for those who are interested. You can read them at: http://www.his.atr.co.jp/~ray/richards.html
Our first guest is Ray Kurzweil. Ray is an inventor, entrepreneur, and author. His books include The Age of Intelligent Machines and The Age of Spiritual Machines: When Computers Exceed Human Intelligence. The recently published Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI is a collection of critiques and responses regarding Ray's vision of Strong Artificial Intelligence.
Welcome, Ray.
Ray Kurzweil
Glad to "be" here.
ISCID Moderator
The next guest is Jay Wesley Richards. Jay received his Ph.D. in philosophy and theology from Princeton Theological Seminary. He is the editor of the collection Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI. Jay is currently finishing up a book with astronomer Guillermo Gonzalez entitled The Privileged Planet. He is also a senior fellow and program director at the Discovery Institute in Seattle, Washington.
ISCID Moderator
William A. Dembski is associate research professor in the conceptual foundations of science at Baylor University. He holds a Ph.D. in both philosophy (University of Illinois at Chicago) and mathematics (University of Chicago), as well as an M.Div. from Princeton Theological Seminary. Bill is the author and editor of several books, including The Design Inference and No Free Lunch, which deal with his work in information and probability theory. Bill's contribution to Are We Spiritual Machines? is a chapter entitled "Kurzweil's Impoverished Spirituality."
ISCID Moderator
Ok. It looks as if, right now, Ray may be the only one who is actually "here" with us.
ISCID Moderator
So let's get started.
Danpech
While we can have the idea that there are things outside ourselves that are aware only because we ourselves are aware, it is granted by nearly everyone as objective fact that there are things outside oneself that do not possess awareness (that do not feel, either sensorily or emotionally; that do not think; etc.). In my opinion, central to the problem of multiple realizability is the question of how, in the first place, we each get the very idea that there are things that are not aware. How do you, personally, know that there are things that are not aware?
Ray Kurzweil
I don’t know that absolutely. Maybe I’m the only person who’s conscious. That’s consistent with a dream. Even by common wisdom, there seem to be both people and objects in my dream that are outside myself, but clearly they were created in myself and are part of me; they are mental constructs in my own brain. Or maybe every other object is conscious. I’m not sure what that would mean, as some objects may not have much to be conscious of. It’s hard to even define what each object or thing is that might be conscious, as there are no clear boundaries. Or maybe there’s more than one conscious awareness associated with my own brain and body. There are plenty of hints along these lines with multiple personalities, or people who appear to do fine with only half a brain (either half will do). We appear to be programmed with the idea that there are "things" outside of our self, and some are conscious, and some are not. That’s how we’re constructed. But it’s not entirely clear that this intuitive conception matches perfectly to ultimate ontology.
Micah Sparacio
Ray, what are your thoughts on the Frame Problem? Do you see it as a challenge that requires a "paradigm shift" in software before human level intelligence is attainable?
Ray Kurzweil
The concept of frames as discussed by Minsky and others has a variety of interpretations, but generally refers to issues of context in a knowledge base that incorporates such concepts as inheritance of features. And yes, I do think a paradigm shift is required from this kind of expert system methodology. We’ll never get to anything with the suppleness and subtlety of human intelligence with expert system rules. That’s not how human intelligence works.
Although the work on a system such as Cyc is impressive and worthwhile, it’s clear that this type of approach will not create human-level intelligence. Human intelligence works through pattern recognition, which uses a paradigm combining massive parallelism, chaotic self-organization, and holographic-like methods of information storage.
Human thinking is very slow; typical interneuronal connection reset times are 5 milliseconds, so we don't have time to think through complex sequential rules in real time. But we do have on the order of 100 trillion connections working more or less simultaneously to provide powerful and subtle pattern recognition capabilities. Pattern recognition approaches have softer edges to their expertise. We need to bring pattern recognition methodologies to domains such as natural language understanding.
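A rough Python sketch of the arithmetic behind this point: the per-connection rate follows from the 5-millisecond figure above, while the comparison processor speed is an illustrative assumption, not a figure from the discussion.

```python
# Back-of-the-envelope contrast: slow, massively parallel neural hardware
# vs. fast, serial digital hardware. Rates here are illustrative assumptions.

NEURAL_CONNECTIONS = 1e14   # ~100 trillion interneuronal connections
RESET_TIME_S = 5e-3         # ~5 ms per connection "transaction"
CPU_CLOCK_HZ = 1e9          # a hypothetical 1 GHz serial processor

brain_ops_per_s = NEURAL_CONNECTIONS * (1 / RESET_TIME_S)   # ~2e16
print(f"Brain (parallel): ~{brain_ops_per_s:.0e} connection-ops/s")
print(f"CPU (serial):     ~{CPU_CLOCK_HZ:.0e} ops/s")
```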
Brandon Watson
Your replies to your critics often seem to involve what might be called a moderate skepticism about other consciousnesses. That is, you seem to proceed in something like this way: There is no objective test for consciousness; so it cannot be decided with certainty whether any given thing is conscious, although we do assume it in certain cases based on analogy with ourselves; so we can concern ourselves entirely with the objective correlates of consciousness; we can build functionally equivalent versions of these; so we will reach a point at which our reasons for calling the computer conscious will be as good as our reasons for calling anything else conscious. Do you regard your conclusions as dependent on this line of thought? Do we have any knowledge of consciousness beyond its subjectivity?
Ray Kurzweil
I do believe that the line of thinking that you aptly summarize is more than just a philosophical sleight of hand. We really do have problems penetrating the subjective reality of other entities. Our inability to do that with other humans is behind much human conflict. Humans feel deeply the suffering of their friends and allies, and easily discount or dismiss the comparable experience of their enemies.
How about animals? Basic disagreement on whether or not animals are capable of subjective experience, e.g., suffering, or whether they are just operating by "instinct," i.e., like a simple machine, lies behind our disputes on animal rights.
This issue will be even more profound and difficult to resolve with machines. On the one hand, one might argue that machines are less like humans than animals because at least animals are biological and have many similar organs and structures and behaviors. On the other hand, since at least some machines will be based on reverse engineering of humans, these machines may be more similar to humans and human behavior than animals.
Also relevant here is a rather important slippery slope. We're already replacing portions of our biological neurons with machine equivalents. It's early yet in this process, but it will ultimately pick up speed. No one says that the current cyborgs among us (e.g., people with cochlear implants, or with the recently FDA-approved neural implant for Parkinson's disease) are not fully human. There's no clear line between these first steps in replacing biological circuits with nonbiological ones and a fully nonbiological equivalent of the human brain.
Iain Strachan
How do computers get to develop their own smart software, when doing so likely involves complex decisions that, for a formal mathematical system such as a computer, amount to NP-hard problems?
Ray Kurzweil
We use Bayesian nets in one of our projects, and at the risk of oversimplification, we can think of it as a clever way to get probabilities into a network that provides more flexibility than rule-based knowledge bases. I don’t believe that humans solve problems in real time involving multiple variables using anything like a mathematically appropriate solution.
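As a minimal sketch of what "getting probabilities into a network" can buy over hard if-then rules, consider a single cause-and-evidence pair updated by Bayes' rule; the structure and numbers below are invented for illustration and are not from any of Kurzweil's projects.

```python
# A two-node toy "network": cause C -> evidence E, with invented probabilities.
P_C = 0.01               # prior probability of the cause
P_E_GIVEN_C = 0.9        # P(evidence | cause)
P_E_GIVEN_NOT_C = 0.05   # P(evidence | no cause)

# Bayes' rule: P(C | E) = P(E | C) * P(C) / P(E)
p_e = P_E_GIVEN_C * P_C + P_E_GIVEN_NOT_C * (1 - P_C)
p_c_given_e = P_E_GIVEN_C * P_C / p_e

# Unlike a rule that either fires or doesn't, the answer is a graded belief.
print(f"P(cause | evidence) = {p_c_given_e:.3f}")   # ~0.154
```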
We apply a pattern recognition type of methodology to our real-time estimates. Think of a child catching a fly ball. At some point, she senses that she should take a few steps back, and then raises her hand in about the right place (maybe). But she is not computing all the differential equations to do this, even in an unconscious way. We appear to have an ability to develop models of how certain curves will evolve, an ability that relies on our self-organizing methods and therefore requires training and experience. That's why she won't do a very good job of catching the ball until she becomes experienced at it. It's not a direct computational process; it's an ability to apply certain curve-fitting capabilities that approximate expected behavior.
And humans are not particularly optimal at doing this. The ability of a well-designed "quant" investment system to anticipate short-term market trends better than human analysts (something I've been working on) is a good example of the limitations of human trend or expectation analysis. Most human decision making is based on very flawed and incomplete models. It's machines that can combine this type of anticipatory modeling with more powerful mathematical techniques when appropriate.
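The fly-ball example above can be caricatured in a few lines of Python: fit a familiar curve shape to noisy early observations and extrapolate, rather than integrate equations of motion. The trajectory and noise level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" early flight: height y at horizontal positions x, with noise.
x_seen = np.linspace(0, 10, 8)
y_true = -0.05 * (x_seen - 15) ** 2 + 11.25       # an arbitrary parabola
y_seen = y_true + rng.normal(0, 0.1, x_seen.shape)

# "Experience" = knowing thrown balls trace parabolas: fit a degree-2 curve.
coeffs = np.polyfit(x_seen, y_seen, deg=2)

# Extrapolate to where the fitted curve returns to ground level (y = 0).
roots = np.roots(coeffs)
landing = max(r.real for r in roots)
print(f"Predicted landing near x = {landing:.1f}")   # true answer: x = 30
```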
Keep in mind that our 10^11 neurons and 10^14 synapses are characterized by a genome with only about 6 billion bits, roughly 800 million bytes, which after routine compression is only about 24 million bytes, half of which characterizes the brain. We get from this very small amount of genomic information (which specifies the brain's initial conditions) to a brain that contains much more information, because the genome specifies stochastic processes (e.g., semi-random wiring in certain regions according to certain constraints) followed by a self-organizing process using what is essentially an evolutionary process (i.e., learning).
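Working through the arithmetic in those figures (the compression ratio and the half-for-the-brain share are Kurzweil's estimates, taken as given here):

```python
genome_bits = 6e9
genome_bytes = genome_bits / 8            # 750 million bytes, i.e. roughly
                                          # the ~800 million cited above
compressed_bytes = 24e6                   # after routine compression
brain_spec_bytes = compressed_bytes / 2   # ~12 million bytes specify the brain

synapses = 1e14                           # ~100 trillion connections
# Even at a conservative one byte of state per connection, the adult brain
# holds vastly more information than its genetic specification:
print(f"Brain state exceeds its genomic spec by "
      f"~{synapses / brain_spec_bytes:.0e}x")   # ~8e6
```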
Chris
Science does very well as long as observers and observed phenomena are assumed to be spatiotemporally separate, and as long as physical observables are taken as "given," but can be seen to fall flat when it comes to putting the relationship between subjects and objects on a logical basis. In fact, in its adherence to principles of scientific objectivity, science excludes subjectivity from the theorization process as a form of "contamination." Might it therefore be possible that the ontological medium of reality, whatever that may be, imposes "hidden constraints" on emergence, and that these constraints differentiate between natural and mechanical processes with respect to the consciousness attribute? What basis is there for assuming otherwise?
Ray Kurzweil
It is a good insight that there is a separation between the objective reality of science and the subjective reality that is consciousness. That is why there is room for philosophy and religion (as a form of philosophy) to address questions that fall outside the domain of objective observation and analysis that is the province of science. However, the "natural" world is observable by objective methods, can be understood and modeled through objective methods, and, most importantly, its methods can be recreated using objective methods, i.e., technology. In my own field of pattern recognition, we use biologically inspired paradigms, such as evolutionary algorithms, neural nets, etc. These are far from perfect models, but as the pace of reverse engineering (i.e., understanding the principles of operation) of the human brain picks up speed (as it is), our biologically inspired methods will become closer analogues to the methods we find in nature.
ISCID Moderator
Any questions for our other guests (Jay Richards, William Dembski)?
Nic
Machines cannot lie. A person can. Do you suppose a test for consciousness is the capacity to lie? In other words, machines are analytic domains while the conscious mind is not an analytic domain.
Ray Kurzweil
The Turing test is very specifically a test of lying. The machine has to lie that it is a human. There is already a Turing test competition, which my organization is entering (for the designation of "most human" machine, as we are not yet close to "passing" a Turing test). But there’s no reason why a machine cannot lie. Of course, successfully lying takes a lot of intelligence, actually more intelligence than most humans have ("Oh what a tangled web we weave….")
Iain
I'd like to come back to one of Ray's comments about pattern recognition. Working with neural nets is an interest of mine, and I have long thought that pattern recognition is a key component of intelligence. But surely it is only one small part. One might get a creative idea by noting the similarity to another known idea (e.g., in "brainstorming"). But the pattern recognition must also be supplemented by an intelligent putting together of patterns, rather than just recognizing similarity. What kind of techniques do you see being developed for that?
Jay Richards
Perhaps instead of focusing on consciousness, we could consider freedom. One of the most readily apparent realities about ourselves as persons is our capacity for free choice. We experience that capacity directly, at least much more directly than we experience, say, the external world. We choose between alternatives, and although we are shaped by our context, biology, etc., we have the sense that these don’t always determine our choices. This capacity, and not simply "consciousness," seems to me to be an essential property of intelligence. But it is very difficult to see how a computer, no matter how advanced, which is governed by algorithms (and perhaps randomizing functions), could ever enjoy such freedom. I’m not sure what Ray’s view is on this, but I know some advocates of Strong AI just deny the existence of libertarian freedom. But my direct experience of possessing such freedom seems much more certain than any theory that might deny its existence.
Ray Kurzweil
We do have an ability for sequential logical ("rational") thinking that apparently is a relatively recent evolutionary development (in biological brain development).
But most of our thinking is pattern-recognition-based. That’s how a human plays chess. We don’t have much time to do real-time sequential analysis the way a machine can.
Chris
Let's talk about the medium of emergence of consciousness. Phenomena must be appropriate to the media in which they occur. For example, sound can occur only in a medium capable of supporting longitudinal (compression-rarefaction) waves, and winged flight can occur only in a gaseous atmosphere capable of providing the necessary lift. It follows that there must be a relationship between the emergence of consciousness and the medium in which it occurs. Specifically, this emergence must involve a process parameterized by attributes of the medium. Unfortunately, science is unable to characterize the overall medium of reality with respect to the subjectivity attribute; in fact, it is unable to characterize the process of attribution itself in any but the most superficial of ways. What is it that actually relates objects and their attributes…some kind of glue? Don't we need a complete account of this relationship in order to relate the attribute of consciousness to various systems and devices?
Ray Kurzweil
Jay has raised an important issue: free will, which is closely related to consciousness, the apparent sense of making free choices. It’s as hard to pin down objectively as, well, subjectivity. My point is that we will encounter machines whose complexity and depth of processing is indistinguishable from that of humans, with all their anguished decision making. Are they actually conscious? Are they actually deploying free will? Or just appearing to? Is there a difference between such appearance and reality? Because of the slippery slope argument I alluded to above, and for many other reasons, I believe we will accept nonbiological intelligence as human, i.e., conscious, i.e., responsible for its own free will decisions. But that’s a political and psychological prediction, not necessarily an ontological one.
tim
As a follow-up to Jay Richards's last comment on freedom: where does "motivation" factor in?
Sulu
Jay, I do not believe even humans have free will in the way you describe. If the universe is deterministic (that is, if the state of the universe is known at one time, physics will allow us to predict the state at any other time), then so are humans. The only distinction we currently make between humans and deterministic machines is that the complexity of humans prevents any other human from having perfect enough knowledge to predict future behaviour. It is possible that in the future, machines will be complex enough to view humans as predictable and deterministic.
Ray Kurzweil
To respond first to Chris: I think you've articulated another way of saying that there is a barrier between the objective world of science and the subjective issue of consciousness. Some people go on to say that because the issue of consciousness is not scientific, it is therefore not real, or an "illusion." That's not my view. One can say that it is the most important question. It underlies, for example, morality, and, to the extent that our legal system is based on morality, our legal system as well. We treat crimes that cause the suffering of a conscious entity differently from crimes that merely damage "property." In fact, you can damage property if you own it. We only punish damaging property because some other conscious person cares about it.
With regard to "motivation," this is another high-level attribute, like intuition and creativity, that is an inherent characteristic of entities with the complexity and depth of organization of humans. We don't yet have machines of that complexity, but we will. Often, a human can successfully predict the "free will" decisions of others, if that first human has a complete enough understanding of the cultural model of the other.
Micah Sparacio
Could you discuss how stasis plays a role in your Law of Accelerating Returns? How does one get beyond stasis to a true "paradigm shift"?
Ray Kurzweil
By stasis, do you mean the approaching of an asymptote? If so, that invariably happens with any paradigm. "Moore's Law" (the shrinking of transistors on an integrated circuit) will approach such an asymptote when we run out of room on two-dimensional circuits (we'll then go to the third dimension). Invariably, as we approach the limits of a particular paradigm, pressure builds up to create the next. We see that now with the increased intensity of work on three-dimensional molecular computing. Moore's Law was not the first but the fifth paradigm to provide exponential growth in computing.
Generally speaking, the new paradigm already exists before the old one dies. Transistors had a niche market before vacuum tubes reached their limit. But once engineers could no longer shrink vacuum tubes any further and still maintain the vacuum, the superior ability of transistors to maintain ongoing price-performance growth took over.
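A toy sketch of this paradigm-succession dynamic: each paradigm follows an S-curve that flattens toward its asymptote, and its successor starts from a higher base, so the overall envelope keeps climbing. All parameters below are arbitrary assumptions for illustration.

```python
import math

def logistic(t, midpoint, ceiling):
    """One paradigm's S-curve: slow start, rapid growth, asymptote."""
    return ceiling / (1 + math.exp(-(t - midpoint)))

for t in range(0, 50, 5):
    # Five successive paradigms; each ceiling 100x the last (an assumption).
    capability = sum(logistic(t, midpoint=10 * i, ceiling=100 ** i)
                     for i in range(1, 6))
    print(f"t={t:2d}  total capability = {capability:,.1f}")
```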
Chris
The spacetime intervals between machine components stand for causal independence (within spatial cross-sections). This highlights the basic distinction between the causal independence of mechanical components and the coherence of consciousness, i.e. the parallel or "simultaneous" mutual causal connectedness of spatially separate components. Reality is constructed in such a way that spatial separations occupy a lower rung of its ontological lattice; to build a conscious machine, one would have to find a means of overcoming the componential independence of classical reality. Far more likely is that the unity attribute is inherited from a point up the lattice, and ultimately, from the global identity of the lattice, which is distributed over spacetime and everywhere implicit. But this means that unitary consciousness is, in a sense, derived from the "unitary consciousness" of the universe. So the question is, how do you propose to create that unity from an assemblage of parts, and an associated set of laws?
Jay Richards
In response to Sulu: I would assume that if one assumes that the universe is deterministic (though I don't think predictability is the same as determinism), then one will deny the existence of free will. But such determinism is hardly self-evident, even if lots of folks say it is. My point is that we experience our capacity for freedom directly and, as a result, should be much more certain of its reality than of any theory that entails determinism. Moreover, even determinists, when they're not in a philosophical mode, presuppose the existence of such freedom when they make moral judgments about the actions of others. If some theory from, say, psychology or biology or physics makes such freedom impossible, then so much the worse for that theory. Our experience and knowledge of our own freedom will always be more certain than our certainty of any such theory. This is relevant to strong AI, because it seems to me that if we had good reason to think that an AI, in say, 2059, is exercising freedom (how we recognize freedom is a matter of debate), then we would have good reason for inferring that the AI is exercising real intelligence.
kuebler
It doesn't seem that being able to predict someone else's behavior denies that they have "free will." Knowing someone's preferences does not give you any more insight into whether they are acting freely or are "determined." Is there any way in which a notion of free will can be incorporated into a deterministic algorithm?
Ray Kurzweil
Chris, you have a particular ontological model in mind. But even granting your assumptions, there's nothing inherent in biological systems that cannot be emulated with nonbiological systems, and no clear boundary between biology and non-biology. As our nonbiological systems become more biologically inspired, and as we merge more and more with our increasingly biological-like technology, such a merged entity would have the same characteristics as a biological human. It would share, then, in this unity of consciousness.
Fredkin's and Wolfram's work on cellular automata shows that even simple processes may be deterministic yet unpredictable by any process that is not fully equivalent in complexity and duration to the actual process. So deterministic does not necessarily imply predictable.
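Wolfram's rule 30 makes this concrete: the update rule fits on one line and is fully deterministic, yet the only known way to learn the pattern's future is to run the computation itself. A minimal sketch:

```python
WIDTH, STEPS = 63, 20
cells = [0] * WIDTH
cells[WIDTH // 2] = 1                 # a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    # Rule 30: new cell = left XOR (center OR right), with wraparound edges.
    cells = [cells[(i - 1) % WIDTH] ^ (cells[i] | cells[(i + 1) % WIDTH])
             for i in range(WIDTH)]
```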
Leonid_Andreev
How would you comment on the idea that, starting at a certain level of information processing, the information field itself turns into an information processor (as was earlier discussed on Brainstorms)? In this event, the term "spiritual machine" may literally be understood as a result of biological evolution. In other words, spirit processes spirit. If this is true, then reverse engineering of consciousness should be absolutely impossible.
Ray Kurzweil
In response to Jay: I agree that free will is apparent to me, just as my consciousness is apparent to me. But it may not be apparent to others. I assume that others are conscious (or have free will), and others may assume it of me, and this may seem obvious, but this obviousness (and this shared assumption) breaks down as we leave shared human experience (e.g., with animals, and now with the emerging debate on machines).
Jay Richards
There seem to be three different categories: freedom (or agency), determinism, and randomness. It’s easy to imagine a computer "acting" according to the latter two. It’s very difficult to see how a computer could actually exhibit freedom. In fact, it’s difficult to see how we exhibit freedom. We, however, experience the capacity for freedom directly, so any adequate theory should accommodate its existence. Thus, if we had really good reasons for thinking that an AI exhibited freedom, I would conclude that we had reason to think there was something more going on than mere determinism and randomness.
Ray Kurzweil
Further response to Jay:
This freedom is only apparent to myself. There’s no objective way of demonstrating it, so there won’t be a clear way to distinguish, say, Ray Kurzweil from a very accurate simulation.
Bill Dembski
Ray, I’d like you to speak to the role of the first person within your view of computation. In your review of Wolfram’s book, you approvingly cite Marvin Minsky’s "society of mind," in which an intelligence may result from a hierarchy of simpler intelligences. But that sounds like the first person dissolves in a sea of modularity, and modularity is exactly what you seem to want to avoid. How is it possible within your view of computation to avoid thinking of the first person as anything but an illusion (Hume’s bundle of associations)? And if it is possible, what computationally grounds the first person perspective?
Ray Kurzweil
Response to Bill:
A human intelligence is obviously more than a loose association of processes; we do have a very coherently organized identity. But it's far from perfectly coherent. It can be pretty incoherent at times, and we have all kinds of different forces and feelings churning inside us in some chaotic pattern. But there is a coherent organization to a human; it's not a loose association.
Sulu
First-year philosophy students are taught that liberty of desire (getting what we want) is compatible with determinism, but not liberty of spontaneity (the possibility that things could have been otherwise than they were). However, it is stronger to say that our morals and legal code depend not only on consciousness but on free will of the second kind (liberty of spontaneity), and even if we can't reconcile it with determinism, it is humanly infeasible to live without either determinism (the basis of our science) or free will of the second kind (that we have a "soul" that is independent of physical laws). This is similar to many other problems, like justifying induction; we can't guarantee that reality will stay the same, but it is humanly impossible to live without this assumption. We attribute to luck or chance phenomena about which we do not have complete information. It is possible that a machine will be able to live without induction and be able to eliminate the idea of "chance."
Brandon Watson
For Dembski: You brought up the issue of the role of the first person (and have just recently today posted on the subject on the discussion site for this book). Would you briefly summarize your own view on this subject, particularly as it relates to Kurzweil’s patternism and strong distinction between the objective correlates of consciousness and consciousness in its subjectivity?
Jay Richards
Ray may be correct that there will be no "explicit" way of distinguishing his freedom from that of a very good simulation. That is, we may not be able to come up with a list of necessary and sufficient criteria that establishes that the entity is free rather than just a good simulation. But it may be that we have an intuitive capacity of some sort that cannot be made fully explicit, which, when properly functioning, allows us to discern the existence of a free agent. I think this is normally how we do recognize freedom: directly and intuitively, rather than deductively or inferentially.
Ray Kurzweil
Response to Jay: We may very well have intuitive capabilities that we have not yet identified or articulated, but they do take place in our brains and bodies, characterized as they are by a genome with only about 24 million bytes of compressed information, aided, of course, by humanity's exponentially growing knowledge base, a knowledge base accessible to nonbiological intelligences as well. I don't think a suitably advanced AI will be distinguishable from human intelligence. And there won't be a clear boundary: we are already putting machines in our bodies and brains, and that will accelerate. There are already four major conferences on bioMEMS (biological microelectromechanical systems), work aimed at putting a first generation of nanobots in our bloodstream. These will evolve into increasingly intelligent machines, meaning that merging our brains with our machines will not require surgery.
Chris
I understand, but rather than making assumptions, I’m trying to get around certain assumptions already in place. For example, classical spacetime has an ontological-qua-geometric lattice structure with dimensions matching those of Minkowski space. The vertical (time) dimension of this lattice corresponds to logical induction in the regressive direction and logical deduction in the progressive direction. Progressing down the lattice corresponds to moving forward in time, and involves a horizontal (spatial) expansion which accommodates the expanding spatial distinctions among the contents of spacetime and is thus characterized by entropy. In this sense, the arrow of time is differentiative, and where (forward) time is innately differentiative in the classical context, the establishment of true unity constitutes a violation of entropy. Any deterministic complexity emerging in the course of machine assembly and operation is merely a function of distributed dynamical laws intersecting in specific arguments which remain spatially separate throughout (and thus causally independent in any spatial cross-section), despite being subject to identical localistic laws. What, if anything, would make you think that this particular kind of complexity could generate unitary consciousness? And if you consider "unitary" consciousness beside the point, then can you at least account for the fact that human consciousness is introspectively and volitionally unitary, while there is simply no way to effect this in a machine? I.e., on what basis are machines being compared to human beings possessing unitary, volitional consciousness with access to volition, emotion, qualia, and other "subjective" properties?
Ray Kurzweil
Response to Chris:
There is no way to access "qualia" directly; we can only access neurological correlates (i.e., objective correlates) of subjective experiences. And there is no clear way to experience that link without being the entity yourself. Consider that we will have machines that are as complex as humans, and indeed based on a thorough reverse engineering of the principles of operation of humans.
Bill Dembski
Responding to Brandon: My own view is that intelligence is a primitive notion and that it finds its expression only through the first person. It is persons that interpret patterns that are embodied in matter. Materialism is content with matter and patterns but cannot make sense of the personal in anything but a reductive way. That's why I asked Ray about how to ground the first-person perspective computationally. It's interesting that the etymology of "person" is thought to derive either from the Etruscan "phersu" (meaning mask) or the Latin "personare" (meaning to sound through). In either case, the person takes on a derivative and, dare I say it, "emergent" role, not something fundamental.
Nic
Does a superposition of both Yes/No open the door to what could be called free will?
Ray Kurzweil
Response to Nic:
Well, people have been inspired by the apparent duality of quantum mechanics to suggest links to consciousness and free will.
ISCID Moderator
Last chance to get your questions in.
Sulu
Jay’s freedom seems to be completely freedom of desire; this is still freedom in that we get what we want, but our desire in the first place is determined. It is also empirically determined as in "I want to move my hand; I see my hand move; therefore I control my hand and get what I want; therefore I have free will (of the first kind)." I don’t see any difference between this and the phenomenon of a machine fulfilling its professed desires. Intuition is not needed; or if there is intuition involved, it is also a determined process verified by experience that machines could implement.
nanci
Then how would you address the phenomenon of clairvoyance? Is that just another type of brainwave to be constructed, so to speak, for a computer?
Ray Kurzweil
Sulu's argument is an old one (which is not to criticize it). We can introduce some randomness into both humans and machines, but that, as Jay points out, still falls short of the conception of free will. It's almost impossible to articulate Jay's concept of free will except in the first person. But we will experience most machines (as we do most people) in the third person.
Paul Kisak
Asymptotes are the proof that God does not trust us ;-) I have had experience with pattern recognition models that base their design, or computational ceiling, on 10^30 bits per second, which many believe approximates the brain's capacity. Correspondingly, approximations have been made based on the numbers that Ray alluded to earlier (10^11 neurons and 10^14 synapses, though I think the synapse count is higher now, on the order of 10^16), yielding an approximate 10^16 bits-per-second limit. The discrepancy is several orders of magnitude. One model that I have read attempts to explain this gap by building on the discovery that all cells have hollow protein microtubules, and proposes that it is these structures that facilitate a quantum mechanical computing capability. QUESTION: Have you heard of this theory, and if yes, what is your opinion of it?
Jay Richards
Just to be clear, I have been referring to what Sulu called "liberty of spontaneity." I think it is subjectively clear to us that (at least sometimes) we can choose between mutually exclusive alternatives, that things could be otherwise than they are.
Ray Kurzweil
Response to Paul Kisak:
The history of this is interesting. Roger Penrose suggested a possible link between quantum computing (computing using quantum-ambiguous "qubits") and the foundation of consciousness. It was pointed out to him that the computational structures in neurons were too large to do quantum computing, so he came up with the microtubules as structures fine enough to do quantum computing. His objective is to show that machines cannot match humans because of this quantum computing capability. There are some problems with this. First of all, no one has ever shown any evidence of quantum computing taking place in the microtubules, nor any capability of humans that requires quantum computing. Moreover, even if humans do perform quantum computing in the microtubules or elsewhere, that would not bar quantum computing from machines. We're already making progress on quantum computing in machines; researchers claim to have a 7-qubit quantum computer.
ISCID Moderator
Well, it looks like it is just about 5PM here on the East Coast. ISCID would like to thank Ray Kurzweil, Jay Richards, and William Dembski for their participation in today’s chat. If you would like to continue chatting after the event, feel free to move over into the General Discussion room.
Ray Kurzweil
Thanks, I enjoyed the dialogue. Look forward to reading it over again.
ISCID Moderator
Thank you, Ray.
ISCID Moderator
It was a pleasure to have you with us.
ISCID Moderator
The transcript of this discussion will be posted on ISCID next week.
Ray Kurzweil
Thanks for having me. Goodbye for now.
Copyright © by International Society for Complexity, Information, and Design, July 2002. Used with permission.