Response to Fortune Editors’ Invitational
July 11, 2001
Originally written in July 2001 for presentation in August 2001. Published on KurzweilAI.net July 11, 2001.
Once upon a time we committed ourselves to putting a man on the moon. What kind of similar commitment should we make now? What’s the “moon shot” of the 21st century?
Create technology that combines the strengths of human and machine intelligence and implement it in both our machines and in ourselves.
We biological humans excel in our powers of pattern recognition as well as in our emotional intelligence. Our ability to recognize and respond appropriately to emotion is the most complex and subtle thing we do. Machine intelligence has salient advantages of its own, for example the ability to share knowledge instantly, speed (electronics is already 10 million times faster than our interneuronal connections), and capacity.
Today, human intelligence is restricted to a mere 100 trillion connections in the brain. Although our computers are still millions of times less powerful than the human brain, the basic architecture of our biological nervous system is fixed, whereas the price-performance of computation is expanding exponentially. The cross-over point is only decades away.
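A back-of-the-envelope sketch shows why "decades" is the right order of magnitude. The millionfold gap is the figure cited above; the doubling times are illustrative assumptions for the sketch, not figures from this essay:

```python
import math

# Rough sketch: how long until exponentially improving computers close
# a fixed gap with the biological brain?
#
# Assumptions (illustrative only):
#   - today's machines trail the brain by roughly a factor of one million
#   - computational price-performance doubles every 1 to 2 years

gap = 1_000_000  # factor by which machines trail the brain today

for doubling_time_years in (1.0, 1.5, 2.0):
    doublings_needed = math.log2(gap)  # about 20 doublings to close a 10^6 gap
    years = doublings_needed * doubling_time_years
    print(f"doubling every {doubling_time_years} yr -> cross-over in ~{years:.0f} years")
```

Under these assumptions the gap closes in roughly 20 to 40 years, which is what puts the cross-over within a few decades rather than centuries.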
Moreover, the software of human-level intelligence is not hidden from us. We are also making exponential gains in scanning, modeling, and reverse engineering the human brain, which will be the ultimate source for the “methods” of intelligence.
We will ultimately have the opportunity to combine the rich, diverse, and flexible powers of human intelligence with the knowledge sharing, speed, and capacity of machine intelligence. What form will this take? The answer is many different forms. One mode will be fully nonbiological entities with human-like qualities. The more interesting prospect will be expanding our own thinking through intimate connection with machine intelligence.
This undertaking will be the result of the ongoing exponential growth of computation and communication, the continuing shrinking of technology, and our accelerating understanding of the human brain and nervous system. This should not be a NASA-style government project, but rather should reflect the ongoing interplay of private enterprise with a panoply of academic and government research institutions.
As the world heads down its current path, what should we fear most?
Technology amplifies both our creative and destructive natures. Our lives are immeasurably better today than they were 100 years ago, but the twentieth century also witnessed a great amplification of our means for destruction.
Most powerful and potentially pernicious is self-replication. We already have self-replication in the medium of nuclear processes, and we are on the threshold of widely available means for creating bioengineered pathogens. In a couple of decades, we’ll also have the ability to create self-replicating nonbiological entities in which key features are measured in nanometers. Following that will be the emergence of nonbiological intelligence smart enough to invent its own next generation, suggesting a runaway phenomenon not clearly under the control of biological humanity. Twenty-first-century technologies will be billions of times more powerful than those of the twentieth century, and there are myriad downside scenarios that we can already envisage.
Calls to relinquish potentially dangerous technologies such as nanotechnology are not the answer. For one thing, nanotechnology is not a unified field, but rather the inevitable end result of a broad trend toward miniaturization that pervades most areas of technology. We could scarcely stop the emergence of nanotechnology without “relinquishing” virtually all technology development. Moreover, the dangerous technologies represent the same knowledge as the beneficial ones. For example, the same biotechnologies that will save millions of lives from cancer and other diseases in the years ahead embody precisely the know-how that can potentially empower a terrorist to create a new pathogen.
Although the risks are real, I believe that maintaining a free and open society is our best route to developing effective countermeasures. Serious attempts to relinquish broad areas of knowledge will only drive them underground, where the less responsible practitioners (i.e., the terrorists) will have all the expertise.
What issue or issues will most define our future?
- Who or what is human? The exponential growth of information-based technologies (computation, communications, varied biotechnologies, human brain scanning and reverse engineering), combined with the exponential shrinking of the size of technology, will blur the line between human and machine.
- How can we avoid the grave new dangers of technology while reaping its profound benefits? Most observers use linear extrapolation to estimate future time frames, but this ignores the exponential nature of progress. We are currently doubling the paradigm-shift rate every decade. This raises the issue of how we can reliably anticipate the impact of technology so as to emphasize the promise while avoiding the peril.
- How do we reinvent social institutions when people rarely die? The human life span is also growing exponentially through multiple biotechnology revolutions (genomics, proteomics, therapeutic cloning, rational drug design, and others), to be followed a couple of decades hence by human body and brain augmentation through nanotechnology. Within ten years, we’ll be adding more than a year every year to human life expectancy. Many issues will arise as all of our human traditions need to be rethought.