Turing’s Prophecy
August 8, 2001
Originally presented June 25, 1995. Published on KurzweilAI.net August 8, 2001.
I would like to start by quoting a fellow student of computer science, MIT’s Edward Fredkin:
“Humans are okay. I’m glad to be one. I like them in general, but they’re only human. . . .Humans aren’t the best ditch diggers in the world, machines are. And humans can’t lift as much as a crane. . . .It doesn’t make me feel bad. There were people whose thing in life was completely physical—John Henry and the steam hammer. Now we’re up against the intellectual steam hammer. . . .So the intellectuals are threatened, but they needn’t be. . . .The mere idea that we have to be the best in the universe is kind of far fetched. We certainly aren’t physically. There are three events of equal importance. . . .Event one is the creation of the universe. It’s a fairly important event. Event two is the appearance of Life. Life is a kind of organizing principle which one might argue against if one didn’t understand enough—it shouldn’t or couldn’t happen on thermodynamic grounds. . . .And third, there’s the appearance of artificial intelligence.”
Part I: The Emperor of China
As we consider the sweep of technology history, one phenomenon that stands out is Moore’s law. Moore’s law is the driving force behind a revolution so vast that the entire computer revolution to date represents only a minor ripple of its ultimate implications.
Moore’s law states that computing speeds and densities double every 18 months. In other words, every 18 months we can buy a computer that is twice as fast and has twice as much memory for the same cost. Remarkably, this law has held true for more than a hundred years, from the mechanical card-based computing technology of the 1890 census, to the relay-based computers of the 1940s, to the vacuum tube-based computers of the 1950s, to the transistor-based machines of the 1960s, to all of the generations of integrated circuits since. If you put every calculator and computer since 1890 on a logarithmic chart, it makes an essentially straight line.
Computer memory, for example, is about 16,000 times more powerful for the same unit cost as it was 20 years ago. Computer memory is about 100 million times more powerful for the same unit cost as it was in 1948, the year I was born. If the automobile industry had made as much progress in the past 47 years, a car today would cost one hundredth of a cent, and would go faster than the speed of light.
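The arithmetic behind those multiples is easy to check; here is a minimal Python sketch that computes the doubling period each figure implies, assuming steady exponential growth over the interval:

```python
import math

# Doubling period implied by the memory-improvement multiples quoted above,
# assuming steady exponential growth over each interval.
def implied_doubling_months(multiple: float, years: float) -> float:
    return years * 12.0 / math.log2(multiple)

print(implied_doubling_months(16_000, 20))       # 1975-1995: about 17 months
print(implied_doubling_months(100_000_000, 47))  # 1948-1995: about 21 months
```

Both figures come out within a few months of the canonical 18-month period, which is all the argument requires.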
Dr. Gordon Moore, a co-founder of Intel and later its chairman and CEO, first observed this phenomenon in the mid-1960s, when he estimated that the doubling occurred every twelve months. A decade later he revised the figure to roughly every two years; the 18-month period commonly cited today reflects the combined effect of increasing density and increasing speed.

There are more than enough new computing technologies being developed to assure a continuation of Moore’s law for a very long time. Moore’s law is actually a corollary of a broader law I like to call Kurzweil’s law on the exponentially quickening pace of technology that goes back to the dawn of recorded history, but that’s another speech.
The implications of this geometric trend can be understood by recalling the legend of the inventor of chess, and his patron, the emperor of China. The emperor had so fallen in love with his new game that he offered the inventor a reward of anything he wanted in the kingdom.
“Just one grain of rice on the first square, your Majesty.”
“Just one grain of rice?”
“Yes, your majesty, just one grain of rice on the first square. And two grains of rice on the second square, four on the third square, and so on.”
Well, the emperor immediately granted the inventor’s seemingly humble request.
One version of the story has the emperor going bankrupt, because the doubling of grains of rice for each square ultimately equaled 18 million trillion grains of rice. At ten grains of rice per square inch, that’s about half a billion square miles of rice fields, or twice the surface area of the Earth, oceans included.
Another version has the inventor losing his head.
It’s not yet clear which outcome we’re headed for. But we should gain some useful perspective if we step through the chessboard from the first square.
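Before we start stepping through the board, the legend’s arithmetic is worth checking; a minimal Python sketch, assuming the same ten grains of rice per square inch and taking the Earth’s surface as roughly 197 million square miles:

```python
# Grains of rice on the chessboard: 1 on square 1, doubling each square.
first_half = 2 ** 32 - 1    # squares 1 through 32: about 4.3 billion grains
full_board = 2 ** 64 - 1    # all 64 squares: about 18 million trillion grains

SQ_INCHES_PER_SQ_MILE = 5280 ** 2 * 144
rice_field_sq_miles = full_board / 10 / SQ_INCHES_PER_SQ_MILE  # 10 grains per sq inch

print(f"first half of the board: {first_half:,} grains")
print(f"full board:              {full_board:,} grains")
print(f"rice field needed:       {rice_field_sq_miles:,.0f} square miles "
      f"({rice_field_sq_miles / 197_000_000:.1f}x the Earth's surface)")
```

Note the first-half total of about four billion grains; we will come back to it.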
Square One: World War II
“The German aircraft is the best in the world. I cannot see what we could possibly calculate to improve on.”
With these words, the chief engineer of the German Aircraft Research Institute of the Third Reich abruptly withdrew his institute’s funding of Konrad Zuse’s Z-3, the world’s first operational programmable (digital) computer (despite the Allies’ subsequent attempt to rewrite history and credit IBM and Harvard’s Mark I with that distinction). Although he had secured his place in history, this German tinkerer’s innovation remained obscure and had little influence on either the course of the war or the future of computation.
By 1940 Hitler had the mainland of Europe in his grasp, and England was preparing for an anticipated invasion. The British government had a more prescient view of the value of computation and organized its best mathematicians and electrical engineers, under the intellectual leadership of Alan Turing, with the mission of cracking the German military code. It was recognized that with the German air force enjoying superiority in the skies, failure to accomplish this mission (whose intelligence product was code-named ULTRA) was likely to doom the nation. In order not to be distracted from their task, the group worked in the tranquil setting of Bletchley Park, about 40 miles northwest of London, with the machines themselves engineered at the Post Office Research Station in Dollis Hill, a suburb north of London.
The British government had obtained a working model of the German “Enigma” coding machine in 1938 from a still unheralded hero of World War II, a young Jewish Polish engineer named Richard Lewinsky, who had worked briefly in eastern Germany helping to assemble the device. Coded orders sent by radio from the German high command were easily intercepted, but to decode these messages every combination of the positions of the Enigma machine’s coding wheels needed to be evaluated. Turing and his colleagues constructed a series of machines in 1940 from telephone relays, which they called Robinson after the popular cartoonist W. Heath Robinson, the British counterpart of Rube Goldberg. The group’s own Rube Goldberg device succeeded brilliantly and provided the British with a transcription of nearly all significant Nazi messages. As the Germans continued to add to the complexity of their code (by adding additional coding wheels to their Enigma machine), Turing and his associates replaced Robinson’s electromagnetic intelligence (which took about three tenths of a second to add two numbers) with an electronic version called Colossus, built in 1943 from 2,000 radio tubes. Colossus was 1,500 times faster than Robinson (a jump spanning several of the early squares on the chessboard, and itself a dramatic example of Moore’s law); it and nine similar machines running in parallel provided an uninterrupted flow of decoded military intelligence to the Allied war effort.
Use of this information required supreme acts of discipline on the part of the British government. When informed by ULTRA that Coventry was to be bombed, Churchill ordered that the city not be warned, and that no civil defense steps be taken, lest preparations arouse German suspicions that their code had been cracked. The information provided by Robinson (the world’s first operational special purpose computer) and Colossus was used only with the greatest discretion (for example, to guide critical British ships past the German U-boats), but the cracking of Enigma was enough to enable the Royal Air Force to win the Battle of Britain.
Thus fueled by the exigencies of war, and drawing upon a diversity of intellectual traditions, a new form of intelligence emerged on Earth.
Turing’s Ruminations
The similarity of the computational process to the human thinking process was not lost on Turing, and he is credited with having established much of the theoretical foundation of computation and of the application of this new technology to the emulation of intelligence.
In his classic 1950 paper, “Computing Machinery and Intelligence,” published in the journal Mind, Turing predicted that by early in the next century society would simply take for granted the pervasive intervention of intelligent machines in all phases of life, and that “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
Turing went on to lay out an agenda that would in fact occupy the next half century of advanced computer research by describing a series of goals in game playing, decision making, natural language understanding, translation, theorem proving, and, of course, encryption and the cracking of codes. He wrote (with his friend David Champernowne) the first chess-playing program.
He also defined the Turing Machine, a theoretical model of a computer, which continues to form the basis of modern computational theory. The Turing Machine led to the discovery of “intelligent” mathematical functions such as the “Busy Beaver,” a remarkable function that actually requires higher levels of intelligence to solve for higher operands, a kind of mathematical IQ test. Human intelligence, for example, tops out at about busy beaver of 30.
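To make the idea concrete, here is a minimal Turing machine simulator in Python, shown running the two-state, two-symbol busy beaver champion, which halts after six steps leaving four ones on the tape; finding the champion machine for even a few more states becomes extraordinarily hard, which is the sense in which the function serves as a mathematical IQ test:

```python
# A minimal Turing machine simulator. RULES encodes the 2-state, 2-symbol
# "busy beaver" champion: (state, symbol read) -> (write, move, next state).
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(rules, start="A", max_steps=1000):
    tape, head, state, steps = {}, 0, start, 0
    while state != "HALT" and steps < max_steps:
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())

steps, ones = run(RULES)
print(f"halted after {steps} steps with {ones} ones on the tape")  # 6 steps, 4 ones
```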
Perhaps best known of Turing’s intellectual contributions is the Turing Test, presented as a means for determining whether or not a machine is intelligent. The test involves a human judge being unable to tell the difference between a computer and a human “foil” by communicating with each through written (e-mail-like) messages. The communication takes place over terminal lines to prevent the judge from being prejudiced against the computer for lacking a warm and fuzzy appearance. It should be noted that while an entity “passing” the Turing Test signifies the possession of human-level intelligence, the converse does not necessarily hold. Some observers ascribe a high level of intelligence to certain species of animals such as dolphins and whales, but neither is in a position to pass the Turing Test (they have no fingers, for one thing). Turing predicted that a computer would pass his test by early in the next century.
As a person, Turing was unconventional and extremely sensitive. He had a wide range of unusual interests ranging from the violin to morphogenesis (the differentiation of cells). There were public reports of his homosexuality, which greatly disturbed him, and he died at the age of 41, a suspected suicide.
Part II: The Hard Things were Easy
In the 1950s (the sixth through eleventh squares of the chessboard), progress came so rapidly that some of the early pioneers felt that mastering the functionality of the human brain might not be so difficult after all. In 1956, Carnegie Mellon’s Allen Newell, J.C. Shaw and Herbert Simon created a program called Logic Theorist (and a later version called General Problem Solver in 1957) which used recursive search techniques to solve problems in mathematics. It was able to find proofs for many of the key theorems in Whitehead and Russell’s Principia Mathematica (a three-volume treatise published in 1910-1913 that revolutionized mathematics by recasting it in terms of set theory), including, for one theorem, a proof more elegant than the one Whitehead and Russell themselves had provided. In one trial run, Logic Theorist found proofs for 38 of the Principia theorems, solving half of these in under a minute each, and accomplishing the rest in times ranging from one to forty-five minutes each, running on a JOHNNIAC, a 1955 computer with a 50 microsecond addition time. These early successes led Simon and Newell to say in a 1958 paper entitled “Heuristic Problem Solving: The Next Advance in Operations Research,” “There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until—in a visible future—the range of problems they can handle will be coextensive with the range to which the human mind has been applied.” The paper goes on to predict that within ten years (that is, by 1968) a digital computer would be the world chess champion. A decade later, an unrepentant Simon predicted that by 1985, “machines will be capable of doing any work that a man can do.” Perhaps Simon was intending a favorable comment on the capabilities of women, but predictions of this sort, decidedly more optimistic than Turing’s, resulted in extensive criticism of the nascent AI field.
During the 1960s, the academic field of AI began to flesh out the agenda that Turing had described in 1950, with encouraging or frustrating results depending on your point of view. MIT’s Thomas G. Evans’ 1963 Analogy program could solve IQ test geometric-analogy problems, and Daniel G. Bobrow’s 1964 program Student could solve algebra problems from natural English language stories. These two programs reportedly could outperform most high school students. The field of expert systems was initiated with Stanford University Professor Edward A. Feigenbaum’s DENDRAL, begun in 1965, which could answer questions about chemical compounds. Work at Stanford, later moved to CMU by Raj Reddy, produced impressive early results in speech recognition, including a 600 word vocabulary speech recognizer running on a PDP-11. Richard D. Greenblatt’s 1966 chess program broke into the national chess ranks with respectable tournament ratings, and received a lot of publicity when it defeated the noted AI critic Hubert Dreyfus (see below). And natural language understanding got a boost with Terry Winograd’s landmark 1970 Ph.D. thesis SHRDLU, which could understand any meaningful English sentence, so long as you talked about colored blocks.
The notion of creating a new form of intelligence on Earth emerged, with an intense and often uncritical passion, simultaneously with the electro-mechanical and electronic computers on which it was to be based. The unbridled enthusiasm of the field’s early pioneers also led to extensive criticism for the inability of these early programs to react intelligently in a variety of situations. Some critics, most notably the existentialist philosopher, phenomenologist and Berkeley professor Hubert Dreyfus, predicted in a 1965 RAND Corporation memorandum entitled “Alchemy and Artificial Intelligence” (which he subsequently expanded into a 1972 book entitled What Computers Can’t Do) that machines would never match human levels of skill in areas ranging from the playing of chess to the preparing of university lectures.
It turned out that the problems we thought were difficult—solving mathematical theorems, playing respectable games of chess, reasoning within domains such as chemistry and medicine—were easy, and the multi-thousand through multi-hundred-thousand instructions-per-second computers of the 1950s and 1960s (the sixth through eighteenth squares of the chessboard) were often adequate to provide satisfactory results. What proved elusive were the skills that any five-year-old child possessed: telling the difference between a dog and a cat, or tying one’s shoelaces. Indeed, our robots are still unable to tie shoelaces, which is perhaps one reason that they still don’t wear shoes.
Invisible Species
In the early 1980s, we arrived at the twenty-fifth square and saw the early commercialization of artificial intelligence with a wave of new AI companies forming and going public. A number of these companies concentrated on a powerful but inherently slow (at least in those days) interpretive language that had been popular in academic AI circles called LISP. The commercial failure of LISP and the “AI” companies that emphasized it (e.g., Symbolics, Lisp Machines, Inc.) created a backlash in the financial community. The field of AI started shedding its constituent disciplines, and companies in natural language understanding, language translation, character recognition, speech recognition, robotics, machine vision and other areas originally considered part of the AI discipline now shunned association with the field’s label. One entrepreneur, Ray Kurzweil, did attempt to counter that trend when he named his new speech recognition company Kurzweil AI in 1982.
Machines with sharply focused intelligence nonetheless became increasingly pervasive. By the early 1990s, we saw the infiltration of our financial institutions by systems using such computerized techniques as neural networks, Markov modeling, time-delay embedding and fractal modeling. Not only were the stock, bond, currency, commodity, and other markets managed and maintained by computerized networks, but the majority of buy-and-sell decisions were initiated by software programs that used these techniques on increasingly extensive on-line databases (e.g., Midland Global Markets’ 16-gigabyte database in 1993 recorded every moment of trading in the Standard and Poor’s 500 and several other markets for the prior 10 years). The 1987 stock market crash was blamed on the rapid interaction of trading programs: computer-generated sales pushed prices down, which triggered more automated selling, which caused a selling spiral. This synthetic selling panic developed in a matter of minutes and resulted in a half trillion dollars of lost value. Suitable modifications to these algorithms, including automated shut-downs of trading in specific stocks and/or entire markets, have managed to avoid a repeat performance.
Since 1990, your electrocardiogram (ECG) has come complete with the computer’s own diagnosis of your cardiac health. Intelligent image processing programs have enabled doctors to peer deep into our bodies and brains, and computerized bioengineering technology has enabled drugs to be designed on biochemical simulators.
The handicapped have been particularly fortunate beneficiaries of the age of intelligent machines. Reading machines have been reading to blind and dyslexic persons since 1976, and speech recognition and robotic devices have been assisting the hands-impaired since 1985.
By 1993, the value of knowledge as the primary cornerstone of wealth and power had become clear in a variety of domains. Software companies were growing rapidly and enjoyed impressive price-to-earnings ratios (e.g., Lotus: 60-1, Oracle: 42-1, Powersoft: 52-1, Sybase: 41-1, Electronic Arts: 44-1, not to mention Microsoft: 25-1), while the traditional computer hardware companies were in decline. Intel might be thought to have been an exception to this, but chip design (particularly of proprietary value-added chips such as microprocessors) is properly regarded as a form of software.
It was in the military that the public saw perhaps the most dramatic display of the changing values of the age of knowledge. We saw the first effective example of the increasingly dominant role of machine intelligence in the Gulf War of 1991. The cornerstones of military power from the beginning of recorded history through most of the 20th century—geography, manpower, firepower, and battle-station defenses—were largely replaced by the intelligence of software and electronics. Intelligent scanning by unstaffed airborne vehicles; weapons finding their way to their destinations through machine vision and pattern recognition; intelligent communications and coding protocols; and other manifestations of the information age began to rapidly transform the nature of war.
With the increasingly important role of intelligent machines in all phases of our lives—military, medical, economic and financial, political—it was thus odd to keep reading articles with titles such as “Whatever Happened to Artificial Intelligence?” This was a phenomenon that Turing had predicted: that machine intelligence would become so pervasive, so comfortable, and so well integrated into our information-based economy that people would fail to even notice it.
It reminds me of people who walk in the rain forest and ask, “Where are all these species that are supposed to live here?” when there are a hundred species of ant alone within 50 feet of them. Our many species of machine intelligence have woven themselves so seamlessly into our modern rain forest that they are all but invisible.
Turing offers an explanation of why we would fail to acknowledge intelligence in our machines. In 1947, he wrote: “The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behavior we have little temptation to imagine intelligence. With the same object, therefore, it is possible that one man would consider it as intelligent and another would not; the second man would have found out the rules of its behavior.”
I am also reminded of Elaine Rich’s definition of artificial intelligence as the “study of how to make computers do things at which, at the moment, people are better.”
Part III: The Middle of the Chessboard
Returning to our story of the emperor and the inventor, there is one aspect that we should take particular note of. It was fairly uneventful as the emperor and the inventor went through the first half of the chessboard (remember the inventor had asked for one grain of rice on the first square, two grains on the second square, four on the third square, and so on). After 32 squares, the emperor had given the inventor about 4 billion grains of rice. That’s a reasonable quantity of rice—it’s about one tenth of a square mile’s worth—and the emperor did start to take notice.
But the emperor could still remain an emperor, and the inventor could still retain his head. It was as they headed into the second half of the chessboard, that at least one of them got into trouble.
So where did we stand in 1995? Well, there had been about 32 doublings of performance since Zuse’s and Turing’s relay-based computers of the early 1940s. So where we stood in 1995 is that we had just finished the first half of the chessboard. And indeed people were starting to take notice.
But as we headed into the rest of the 1990s and the next century, we were heading into the second half of the chessboard. And that is where things started to get interesting.
We realized in 1995 that we had failed to explore the third dimension in chip design. Chips circa 1995 were flat, whereas our brain is organized in three dimensions. We live in a three dimensional world, so why not use the third dimension?
Improvements in the late 1990s in semiconductor materials, including the development of superconducting circuits that do not generate heat, enabled the development of chips, or I should say cubes, with many layers of circuitry, which, when combined with smaller component geometries, assured the continuation of Moore’s law over the next three decades (by the end of which we would obtain ten doublings of performance by creating integrated cubes with one thousand layers of circuitry, and another eight doublings by reducing line widths to 40 nanometers).
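Those doubling counts follow from simple geometry; a short sketch, assuming density scales with the number of layers and with the inverse square of the line width, and taking a mid-1990s line width of roughly 0.6 microns as the starting point:

```python
import math

# Doublings implied by the 3-D chip roadmap described above, assuming density
# scales with layer count and with 1/(line width)^2; the 0.6-micron starting
# geometry is an assumed mid-1990s figure.
layer_doublings = math.log2(1000)              # ~10 doublings from 1,000 layers
width_doublings = math.log2((600 / 40) ** 2)   # ~7.8 doublings from 600 nm to 40 nm

print(f"stacking: {layer_doublings:.1f} doublings, "
      f"finer line widths: {width_doublings:.1f} doublings")
```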
The Virtual Book
The nature of books and other written documents underwent several transformations during this period. In 1995 and 1996, written text began to be created on voice-activated word processors. I brought with me a demonstration of a circa-1995 voice-activated computing system called Kurzweil VOICE for Windows. This is a program that ran on an ordinary personal computer running Microsoft’s Windows operating system.
By 1997, Moore’s law resulted in the advent of paperless books with the introduction of portable and wireless displays that had the resolution (300 to 600 spots per inch), contrast qualities (at least 100:1), lack of flicker, viewing angle, and quality of color (24 bits per pixel) of high quality paper documents. Unlike the Graphical User Interface (GUI), invented at Xerox PARC in 1973 but then fumbled commercially by Xerox, PARC’s breakthrough in display technology did produce substantial licensing revenues that boosted Xerox’s bottom line during the late 1990s.
Books, magazines and newsletters no longer required a physical form, and Ray Kurzweil’s FUTURESCOPE became one of the first commercially successful virtual newsletters. A FUTURESCOPE editorial pointed out, however, that despite paperless publishing and the so-called paperless office, the use of paper continued to increase nonetheless. American use of paper for books, magazines, and other documents grew from 850 billion pages in 1981 to 2.5 trillion pages in 1986 to six trillion pages in 1996. But this figure would prove to be a peak, and by the end of the decade, paper documents would be back down to under 3 trillion pages.
The nature of a document also underwent substantial change and now routinely included voice, music, and other sound annotations. The graphic part of documents became more flexible: fixed illustrations turned into animated pictures. Documents included the underlying knowledge to respond intelligently to the inputs and reactions of readers (e.g., the “PAM” (Please Ask Me) technology, a joint venture of the Gartner Group and Kurzweil Applied Intelligence introduced in 1998, which responded to spoken natural language questions pertaining to the content of written documents). The “pages” of a document were no longer necessarily ordered sequentially; the World Wide Web and other hyperdocuments became capable of forming intuitive patterns that reflected the complex web of relationships among ideas.
Despite these innovations in the nature of a document, the art of creating written text remained a key objective of the educational process. It became widely recognized that written language was a powerful form of information compression and a prerequisite to the effective communication of ideas. With advances in natural language understanding software, the first truly useful grammar and style checkers (in which the majority of comments generated were actually pertinent) became available in 1997.
The information superhighway, first widely discussed in the early 1990s, was originally expected to take twenty years to build because of the massive new fiber-optic infrastructure required. By 1994, we realized that we only needed to bring the new fiber-optic infrastructure to within a mile of its final destination. Communication to the home and office for this critical “last mile” was able to use the existing coaxial cable network, which was also capable of providing ten-billion-bit-per-second point-to-point communication, at least for short distances (but for the last mile, that’s all we needed). New wireless and satellite communications using frequencies in the microwave range and higher also became available, capable of providing ten-billion-bit-per-second communication, again only over distances of approximately a mile.
It turned out that this last mile of communication represented about 90 percent of the infrastructure that we originally had contemplated. So the information superhighway was largely in place by the turn of the century, arriving at least a decade earlier than anticipated at the beginning of the 1990s.
American fears that the Japanese would implement their information superhighway substantially ahead of the U.S. also turned out to be unfounded. The U.S.-based, albeit international, network called the Internet, created by the U.S. Department of Defense and originally called ARPANET after DOD’s Advanced Research Projects Agency (but which subsequently took on a life of its own), was a system for which the Japanese had no direct counterpart. While the Internet itself, based on the old copper-wire-based telephone technology, was supplanted by the largely optical and satellite-based superhighway, the Internet had provided the U.S. with a depth of experience and motivation that no other nation was able to match.
The Internet provided the countries of the Middle East with an effective marketing channel to the rest of the world. Software companies harnessing talent throughout the Middle East challenged the software industries in Massachusetts, Palo Alto and Redmond for leadership in several AI-based software technologies, including handprint recognition, image analysis and neural net based decision making for financial markets.
In 1997, we saw the emergence of effective speaker-dependent large vocabulary continuous speech recognition for text creation running on 300 MIPs personal computers. The development of speaker-independent large vocabulary continuous speech recognition in 1998 permitted the introduction of listening machines for the deaf, which converted human speech (initially only in English, Japanese and German) into a visual display of text—essentially the opposite of reading machines for the blind. Another handicapped population that was able to benefit from these early AI technologies was paraplegic individuals, who were now able to walk using exoskeletal robotic devices controlled with a specialized cane.
In 1999, we saw the first effective real-time translating telephones demonstrated, although the service was not routinely offered. Both the recognition and the translation were far from perfect, but they were usable.
1999 also saw the first effective competition to the 27 varieties of Windows then offered by Microsoft. The “blank sheet” operating system from a joint venture of IBM and Novell (which had acquired Apple in 1996) was the first to offer an effective integration of natural language understanding, speech recognition and gesture processing.
The personal computer circa 2000 was a virtual book: a thin, lightweight slab that came in a variety of sizes, from pocket-sized to large wall-sized displays. Resolution, contrast, flicker and color qualities all matched high quality paper-based documents, which accounted for the reversal in paper usage trends. By the turn of the century, the majority of reading was done on virtual documents. The primary input modality was a combination of speaker-independent large-vocabulary continuous speech recognition for text entry with pen-based computing for pointing and two-dimensional gestures. Requests for information and other commands could be expressed by voice in natural language (for example, “find me all keynote addresses to computer conferences in the Middle East that express the history of the computer revolution in terms of a chessboard metaphor using Moore’s law”). There was only one physical connector, to permit connection to a high-bandwidth information superhighway port, but most communication was wireless, and users were on-line all the time. McLuhan’s global village had arrived.
Part IV: A Proven Design
The Invisible Book
As we entered the first decade of the 21st century and the thirty-eighth square of the chessboard, with PCs now sporting 2,000 MIPs, the translating telephones demonstrated late in the last century began to be offered by the communication companies (which were an amalgam of former telephone and cable companies and even a few former newspaper publishers) competing for international customers. The quality varied considerably from one pair of languages to another. Even though English and Japanese are very different in structure, this pair of languages appeared to offer the best performance, although translation among different European languages was close. Outside of English, Japanese, and a few European languages, performance fell off dramatically.
The output displays for the listening machines for the deaf were now built into the user’s eyeglasses, essentially providing subtitles on the world. Specific funding was included in the U.S. Omnibus Disabled Act of 2004 to provide these sensory aids for deaf persons who could not afford them, with similar legislation in European, Asian and Middle Eastern countries, although complex regulations on verifying income levels were so intrusive that most deaf persons ignored the program and acquired the devices on their own.
The billionaire who emerged in 2005 was an entrepreneur whose line of EEG-driven computers was capable of inducing desirable mental states on demand and became the treatment of choice for hypertension and anxiety disorders.
As we entered the forty-fourth square of the chessboard, the standard 100,000 MIPs personal computer circa 2010 was now largely invisible. Most users adopted the eyeglass displays originally developed for the deaf sensory aids to provide three-dimensional displays that overlaid the ordinary visual world. Speech recognition continued to be a primary input modality for text, and pen-based computing was now replaced with finger motion gestures within the virtual display. A variety of knowledge navigators were available with customizable personalities, the most popular being “Michelle,” a feminine persona with a distinctly Creole flavor.
The Arrival of TIM
Everyone recalls the flap when TIM was introduced. TIM, which stands for Turing’s IMage, was created at the University of Texas in 2011 and was presented as the first computer to pass the Turing Test. The researchers claimed that they had even exceeded Turing’s original challenge because you could converse with TIM by voice rather than through terminal lines as Turing had originally envisioned. In an issue of FUTURESCOPE devoted to the TIM controversy, Hubert Dreyfus, the persistent critic of the AI field, dismissed the original announcement of TIM as the usual hype we have come to expect from the AI community.
Eventually, even AI leaders rejected the claim citing the selection of a human “judge” unfamiliar with the state of the art in AI and the fact that not enough time had been allowed for the judge to interview the computer and the human foil. TIM became, however, a big hit at Tel Aviv’s new Disney World, where 200 TIMs were installed in the Microsoft Pavilion.
The TIM technology was quickly integrated into virtual reality systems, which were now revolutionizing the process of education, having long been established in the game industry. So rather than read about the U.S. Constitutional Convention, a student could now debate a simulated Ben Franklin on executive war powers, the role of the courts, or any other issue. It is difficult now to recall that the medium they used to call television was itself controversial in its day, despite the fact that it was of low resolution, two dimensional and noninteractive. Virtual reality was still a bit different from the real thing, though: the artificial people in it still seemed a bit stilted in their ability to understand what you were saying.
A Proven Design that’s not even Copyrighted
During the second decade of this century, with the Human Genome Project largely completed, the HUNES (HUman NEuron Scan) project was initiated to similarly scan and record the neural organization of the human brain. In accordance with Moore’s law, we expected to reach the computational capacity of the human brain—20 million billion neuron connection calculations per second (100 billion neurons times an average of 1,000 connections to other neurons times 200 calculations per second per connection)—in a standard personal computer by the year 2020. We should note that the speed of the massively parallel neural computers progressed faster than the speed of sequential computers because the number of neural connection calculations per second benefits from the doubling of density as well as the doubling of speed of semiconductors every 18 months. And as we drew close to reaching the raw computational ability to simulate the human brain, the HUNES effort to reverse engineer this proven design picked up intensity.
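The capacity figure is just the product of the three estimates given above; a minimal sketch:

```python
# Estimated computational capacity of the human brain, from the figures above.
neurons = 100e9                  # 100 billion neurons
connections_per_neuron = 1_000   # average connections to other neurons
calcs_per_connection = 200       # calculations per second per connection

capacity = neurons * connections_per_neuron * calcs_per_connection
print(f"{capacity:.0e} neuron connection calculations per second")  # 2e+16, i.e. 20 million billion
```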
Square 50
In the year 2020, by which time we were about eighty percent of the way through the chessboard, translating telephones were used routinely, and, while the languages available were still limited, there was more choice, with reasonable performance for Chinese and Korean.
The knowledge navigators available on the personal computers of 2020, unlike those of ten years earlier, could interview humans in their search for knowledge instead of just other computers. People used them as personal research assistants.
Virtual reality was much more lifelike and began to include the ability to provide physical sensations. An article in Time Magazine’s virtual edition of January 1, 2020 examined the growing availability of erotic software for virtual reality systems and pointed out that every new communication technology in history had been extensively exploited for this purpose. Gutenberg’s first book may have been the Bible, but more prurient themes were a major source of publishing revenues for the century that followed.
There was a phenomenon of people who spent virtually all their time in virtual reality and did not want to come out. What to do about that became a topic of considerable debate.
WalMart emerged as the dominant force in the Virtual Mall wars, although it had closed its last physical store two years earlier.
We realized in 2020 how prescient Turing was 70 years earlier. The brain’s computational ability of 20 million billion neural connection calculations per second was available in a standard personal neural computer. While not everyone in 2020 agreed that computers could pass the Turing Test, few doubted that, with the completion of the HUNES project expected by 2030, computers would achieve human levels of intelligence by that time. In retrospect, Turing’s 1950 prediction that his Turing Test would be passed by a computer (indicating human-level intelligence in a machine) by early in the twenty-first century was so close to accurate that he must have had an implicit understanding of Moore’s law (he had indeed seen the 1,500-fold speed improvement of his own computers through the duration of World War II), although he never articulated the rule as clearly as did Dr. Moore.
In 2021 the University of Texas announced a new version of TIM, which received a more enthusiastic reception from AI experts. Marvin Minsky, then 93, one of the fathers of AI, who was contacted at his retirement home in Florida, hailed the development as the realization of Turing’s original vision. Dreyfus, then 91, however, remained unconvinced and challenged the Texas researchers to use him as the human judge in their experiments.
So Here We Are
So here we are in the year 2030. A new generation of high-speed, high-resolution magnetic resonance imaging (MRI) scanners, developed in 2025, is capable of resolving individual nerve fibers only several microns in diameter without disturbing the living tissue being scanned. These new MRI scanners are also able to view the presynaptic vesicles that are the site of human learning. These high resolution scanners enabled us to complete the HUNES (HUman NEuron Scan) project in 2029, a year ahead of schedule. People are now scanning their brains and are using their PCs as personal back-up systems.
Now the ability to download your mind to your personal computer is raising some interesting issues. I’ll mention just a few.
There’s the philosophical issue. When people are scanned and then re-created in a neural computer, people are wondering just who are these people in the machine?
The answer depends on who you ask. If you ask the people in the machine, they strenuously claim to be the original persons, having lived certain lives, having gone into a scanner here, and then having woken up in the machine there. They say, “Hey, this technology really works. You should give it a try.”
On the other hand, the original people who are scanned claim that the people in the machine are imposters, people who just appear to share their memories, histories and personalities, but who are definitely different people.
There’s the psychological issue. A machine intelligence that has been derived from human intelligence needs a body. A disembodied mind quickly becomes depressed.
Ironically, it will take us longer to recreate our bodies than it took us to recreate our minds. With the linear passage of time, we are making exponential progress in providing the computational resources to simulate intelligence, but only linear progress in robotic technology. So it will take us longer to recreate the suppleness of our bodies than the intricacies and subtleties of our minds.
There’s the ethical issue. Is it immoral, or perhaps illegal, to cause pain and suffering to your computer program? Is it illegal to turn your computer program off? Perhaps it is illegal to turn it off only if you have failed to make a recent backup copy. Some commentators have expressed concern that they may soon wish to turn us off.
And of course, there is the usual line-up of economic and political issues. The Luddite issue, which is the concern over the negative impact of machines on human employment, has become of intense interest once again.
Before Copernicus, our “speciecentricity” (I made up that word, by the way, in case you never heard it before) was embodied in a view of the universe literally circling around us as a testament to our unique and central status. Today, our belief in our own uniqueness is not a matter of celestial relationships, but rather of our intelligence. Evolution is seen as a billion year drama leading inexorably to its grandest creation: human intelligence. The specter of machine intelligence competing with that of its creator is once again threatening our view of who we are.
The End of the Chess Board
Well, we have now peered about 90% of the way through the chessboard. We might wonder what happens at the end of the chessboard.
In the year 2040, we will reach the 64th square. In my view, Moore’s law will still be going strong. Computer circuits will be grown like crystals with computing taking place at the molecular level.
By the year 2040, in accordance with Moore’s law, your state-of-the-art personal computer will be able to simulate a society of 10,000 human brains, each of which would be operating at a speed 10,000 times faster than a human brain.
Or, alternatively, it could implement a single mind with 10,000 times the memory capacity of a human brain and 100 million times the speed. What will the implications be of this development?
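Those figures follow from the assumptions already laid out: one human-brain equivalent in a 2020 personal neural computer, and neural capacity quadrupling every 18 months because density and speed each double. A short sketch of the extrapolation:

```python
# Extrapolating neural-computer capacity from 2020 to 2040, assuming capacity
# quadruples every 18 months (density and speed each doubling), starting from
# one human-brain equivalent in 2020.
periods = (2040 - 2020) / 1.5          # number of 18-month periods
factor = 4 ** periods                  # about 1e8, i.e. roughly 100 million

print(f"capacity multiple by 2040: {factor:.1e}")
print(f"equivalently about {factor ** 0.5:,.0f} brains, "
      f"each running {factor ** 0.5:,.0f} times faster than a human brain")
```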
Well, unfortunately, I’ve run out of time, so you’ll have to invite me back.
But I’ll leave with you two thoughts, written by people who did not have the benefit of a voice activated word processor.
Sun Tzu, Chou dynasty philosopher and military strategist, wrote in the fourth century BC:
“Knowledge is power
and permits the wise to conquer
without bloodshed
and to accomplish deeds
surpassing all others.”
And Shakespeare wrote:
“We know what we are,
but know not
what we might become.”