Ray Kurzweil is a well-known techno-futurist whose main focus has been the coming of artificial sentience. His 1999 book, The Age of Spiritual Machines, contains a series of chapters predicting computer technology in successive decades (2009, 2019, etc.). Well, we’re now entering 2009, and it’s worth looking at his 2009 predictions (hat tip to John Murrell at Good Morning Silicon Valley) to see the risks of long-range technological forecasting. Here are a few excerpts:
It is now 2009. Individuals primarily use portable computers, which have become dramatically lighter and thinner than the notebook computers of ten years earlier. Personal computers are available in a wide range of sizes and shapes, and are commonly embedded in clothing and jewelry such as wristwatches, rings, earrings, and other body ornaments. Computers with a high-resolution visual interface range from rings and pins and credit cards up to the size of a thin book.
People typically have at least a dozen computers on and around their bodies, which are networked using “body LANs” (local area networks). These computers provide communication facilities similar to cellular phones, pagers, and web surfers, monitor body functions, provide automated identity (to conduct financial transactions and allow entry into secure areas), provide directions for navigation, and a variety of other services.
For the most part, these truly personal computers have no moving parts. Memory is completely electronic, and most portable computers do not have keyboards. . . .
The majority of text is created using continuous speech recognition (CSR) dictation software, but keyboards are still used. CSR is very accurate, far more so than the human transcriptionists who were used up until a few years ago.
Also ubiquitous are language user interfaces (LUIs), which combine CSR and natural language understanding. For routine matters, such as simple business transactions and information inquiries, LUIs are quite responsive and precise. They tend to be narrowly focused, however, on specific types of tasks. LUIs are frequently combined with animated personalities. Interacting with an animated personality to conduct a purchase or make a reservation is like talking to a person using videoconferencing, except that the person is simulated.
Computer displays have all the display qualities of paper: high resolution, high contrast, large viewing angle, and no flicker. Books, magazines, and newspapers are now routinely read on displays that are the size of, well, small books.
Computer displays built into eyeglasses are also used. These specialized glasses allow users to see the normal visual environment, while creating a virtual image that appears to hover in front of the viewer. The virtual images are created by a tiny laser built into the glasses that projects the images directly onto the user’s retinas.
Computers routinely include moving picture image cameras and are able to reliably identify their owners from their faces.
In terms of circuitry, three-dimensional chips are commonly used, and there is a transition taking place from the older, single-layer chips.
Sound producing speakers are being replaced with very small chip-based devices that can place high resolution sound anywhere in three-dimensional space. This technology is based on creating audible frequency sounds from the spectrum created by the interaction of very high frequency tones. As a result, very small speakers can create very robust three-dimensional sound.
A $1,000 personal computer (in 1999 dollars) can perform about a trillion calculations per second. Supercomputers match at least the hardware capacity of the human brain: 20 million billion calculations per second. Unused computes on the Internet are being harvested, creating virtual parallel supercomputers with human brain hardware capacity.
Be sure to read the entire chapter. There are some predictions he gets right, but in most cases he overshoots, mistaking what is theoretically possible for what has actually happened. It’s a bit like predicting that all VCRs would vanish a year or two after DVDs became available (in reality, the demise of VHS took about a decade). Also, as some of the comments to this online version of the 2009 chapter point out, several key predictions depend upon significant advances in artificial intelligence (AI), and those advances just haven’t happened.
In fact, one of the commenters makes a wonderful observation: “The non-Mooresian nature of AI, and software in general, is the big problem here.” Moore’s Law, of course, originally described the increasing density of transistors on integrated circuits and has since been extended to the general (and decades-long) trend toward cheaper, faster, more capable computer hardware. The problem is that software doesn’t follow Moore’s Law. Advances in software actually tend to face diminishing returns (cf. Microsoft Vista), not exponential improvements.
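To make the hardware trend concrete, here’s a minimal sketch of the exponential growth Moore’s Law implies, assuming a hypothetical 18-month doubling period (a common popular reading of the law; Moore’s original 1965 figure was a doubling every year, later revised to every two years):

```python
# Minimal illustration of the exponential hardware trend discussed above.
# The 18-month doubling period is an assumption for illustration only.

def moores_law(baseline, years, doubling_months=18):
    """Capacity after `years`, doubling every `doubling_months` months."""
    return baseline * 2 ** (years * 12 / doubling_months)

# Over one decade, an 18-month doubling compounds to roughly 100x --
# which is why hardware predictions a decade out can sound absurd
# at the time and still come true.
print(round(moores_law(1.0, 10)))
```

No comparable compounding applies to software productivity, which is precisely the commenter’s point.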
In fairness, I note that I once made a decade-in-advance prediction. But in my case, I largely constrained myself to computer hardware and just did a straight-line extrapolation. In my book Pitfalls of Object-Oriented Development (M&T Books, 1995), I talked about the changes between developing software in 1984 and 1994, then predicted what hardware might be like in 2004:
A case in point: In March 1984, Wayne Holder and I shipped SunDog: Frozen Legacy, a complex, real-time adventure game for the Apple II, which had a 1-MHz, 8-bit 6502 processor. With its graphical user interface using overlapping windows, icons, menus, and joystick-only input, SunDog pretty much pushed the limits of the Apple II. It required 64-KB of RAM and even then swapped segments of code into memory on demand. We used a double-sided floppy disk, giving us 280-KB of disk storage. Code size: 15,000 lines of Pascal and 5,000 lines of 6502 assembly language. I was the principal programmer, writing more than 90 percent of the code; from inception to shipment, the project took 15 months.
Ten years later to the month—March 1994—Pages Software, Inc. shipped Pages by Pages, the document processor mentioned at the start of this introduction. Pages runs under NEXTSTEP 3.0 or later; typical system requirements are a 25-MHz 68040, 33-MHz 80486, or HP-PA RISC system, all 32-bit processors. The application assumes virtual memory (which NEXTSTEP provides), but it is recommended that users have at least 16-MB of memory and 20 MB free on a 300- to 500-MB hard disk drive. Code size: 350,000 lines of Objective-C, which doesn’t count all the graphical user interface support provided automatically by NEXTSTEP’s class libraries. The engineering team grew to encompass ten people, and the product shipped nearly four years after Pages Software was founded.
Imagine what desktop systems and user expectations and operating system requirements will be like in March 2004. Straight-line extrapolation says we’re looking at 500-MHz systems with 1 to 4 gigabytes (GB) of RAM and a 20- to 100-GB hard disk drive. Common sense may make you question those figures—after all, how could anyone ever use up 100 GB of disk space?—but I use 1 GB right now and have a constant need for more space.
If anything, I was too conservative — such is Moore’s Law. I have a PC in my office that has a total of 1.5 terabytes of storage hooked up directly to it (either internal or directly connected via USB) and has another 1 TB of storage sitting on the network.
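The “straight-line extrapolation” in the excerpt above can be sketched as a constant per-decade growth factor: take the 1984→1994 ratio for each spec from the SunDog and Pages examples, then apply it again to project 2004. This is my reconstruction for illustration, not the author’s actual calculation:

```python
# Hypothetical reconstruction of the straight-line (log-linear)
# extrapolation described above: assume each spec grows by the same
# factor per decade, and apply the 1984->1994 ratio to project 2004.

def extrapolate(spec_1984, spec_1994):
    """Project a 2004 value assuming the same per-decade growth factor."""
    growth = spec_1994 / spec_1984
    return spec_1994 * growth

# Figures taken from the 1984 SunDog and 1994 Pages examples above.
cpu_mhz = extrapolate(1.0, 25.0)            # 1-MHz 6502 -> 25-MHz 68040
ram_kb = extrapolate(64.0, 16.0 * 1024)     # 64 KB -> 16 MB
disk_kb = extrapolate(280.0, 400.0 * 1024)  # 280 KB -> ~400-MB drive

print(f"2004 CPU:  ~{cpu_mhz:.0f} MHz")     # ~625 MHz
print(f"2004 RAM:  ~{ram_kb / 1024**2:.0f} GB")    # ~4 GB
print(f"2004 disk: ~{disk_kb / 1024**2:.0f} GB")
```

The naive ratio lands close to the book’s 500-MHz and 1-to-4-GB figures, while the disk projection (around 570 GB) comes out well above the quoted 20-to-100-GB range, which suggests the published disk estimate was deliberately tempered.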
My intent is not to criticize Kurzweil; it’s to point out how hard it is to predict technological advances. Be sure to read his entire chapter. ..bruce..