The Spike : How Our Lives Are Being Transformed By Rapidly Advancing Technologies
The rate at which technology is changing our world--not just on a global level like space travel and instant worldwide communications, but on the level of what we choose to wear, where we live, and what we eat--is staggeringly fast and getting faster all the time. The rate of change has become so fast that a concept that started off sounding like science fiction has become a widely expected outcome in the near future: a singularity referred to as the Spike. At that point of singularity, the cumulative changes on all fronts will affect the existence of humanity as a species and cause an evolutionary leap into a new state of being. On the other side of that divide, intelligence will be freed from the constraints of the flesh. Machines will achieve a level of intelligence in excess of our own and boundless in its ultimate potential. Engineering will take place at the level of molecular reconstruction, allowing everything from food to building materials to be assembled as needed from microscopic components rather than grown or manufactured. And we will all become effectively immortal, either by digitizing and uploading our minds into organic machines or by transforming our bodies into illness-free, undecaying exemplars of permanent health and vitality. The results of all these changes will be unimaginable social dislocation, a complete restructuring of human society, and a great leap forward into a dazzlingly transcendent future that even SF writers have been too timid to imagine. At the publisher's request, this title is being sold without Digital Rights Management software (DRM) applied.
February 01, 2002
Excerpt from The Spike by Damien Broderick
1: The Headlong Rush of Time
If our world survives, the next great challenge to watch out for will come--you heard it here first--when the curves of research and development in artificial intelligence, molecular biology, and robotics all converge. Oboy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long.
--Thomas Pynchon, New York Times Book Review, 1984
Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.
--Vernor Vinge, NASA Vision-21 Symposium, 1993
It rushes at you, the future.
Usually we don't notice that. We are unaware of its gallop. Time might not be a rushing black wall coming at us from the future, but that's surely how it looks when you stare unflinchingly at the year 2050 and beyond, at the strange creatures on the near horizon of time (our own grandchildren, or even ourselves, technologically preserved and enhanced). Call them transhumans or even posthumans.
The initial transition into posthumanity, for people intimately linked to specially designed computerized neural nets, might not wait until 2050. It could happen even earlier. Twenty-forty. Twenty-thirty. Maybe sooner, as Vinge predicted. This is no longer the deep, the inconceivably distant future. These are the dates when quite a few young adults today expect to be packing up their private possessions and leaving the office for the last time, headed for retirement. These are dates when today's babes in arms will be strong adults in the prime of life.
Around 2050, or maybe even 2030, is when a technological Singularity, as it's been termed, is expected to erupt. That, at any rate, is the considered opinion of a number of informed if unusually adventurous scientists. Professor Vinge called this projected event "the technological Singularity," something of a mouthful. I call it "the Spike," an upward jab on the chart of change, a time of upheaval unprecedented in human history.
And, of course, it's a profoundly suspect suggestion. We've heard this sort of thing prophesied quite recently, in literally Apocalyptic religious revelations of millennial End Time and Rapture.
That's not the kind of upheaval I'm describing.
A number of perfectly rational, well-informed, and extremely smart scientists are anticipating a Singularity, a barrier to confident anticipation of future technologies. I prefer the term Spike, because when you chart it as change over time, its exponential curve looks like exactly that: a spike. Here's a picture of it:
As you see, the more the curve grows, the larger each subsequent bound upward becomes. It takes a long time to double the original value, but the same period again gets you four times farther up the curve, then eight times...so that after just ten doublings, you've risen a thousand times as far, then two thousand, and on it goes. Note this: the time it takes to go from one to two, and then from two to four, is just the same period needed to take that mighty leap from 1000 to 2000. A short time later we're talking a millionfold increase in a single step, and the very next step after that is two millionfold...
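The arithmetic of repeated doubling described above can be sketched in a few lines. This is purely illustrative (the function name is mine, not the author's):

```python
def growth_after(doublings: int) -> int:
    """Factor by which the original value has multiplied after N doublings."""
    # Each doubling multiplies the running total by 2, so N doublings give 2**N.
    return 2 ** doublings

print(growth_after(1))   # 2
print(growth_after(2))   # 4
print(growth_after(10))  # 1024 -- roughly a thousandfold
print(growth_after(20))  # 1048576 -- roughly a millionfold
```

Note that the leap from 1024 to 2048 takes exactly one doubling period, the same interval that first took the value from one to two.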
History's slowly rising trajectory of progress over tens of thousands of years, having taken a swift turn upward in recent centuries and decades, quickly roars straight up some time after 2030 and before 2100. That's the Spike. Change in technology and medicine moves off the scale of standard measurements: it goes asymptotic, as a mathematician would say, the curve bending more and more sharply until it is heading almost straight along one of the axes--in this case, up the page into the future.
So the curve of technological change is getting closer and closer to the utterly vertical in a shorter and shorter time. At the limit, which is reached quite quickly (disproving Zeno's ancient paradox about the tortoise beating Achilles if it has a head start), the curve tends toward infinity. It rips through the top of the graph and is never seen again.
At the Spike, we can confidently expect that some form of intelligence (human, silicon, or a blend of the two) will emerge at a posthuman level. At that point, all the standard rules and cultural projections go into the waste-paper basket.
A quick preliminary stroll through the future
Everything you think you know about the future is wrong.
How can that be? Back in the 1970s, Alvin Toffler warned of future shock, the concussion we feel when change slaps us in the back of the head. But aren't we smarter now, in the twenty-first century? We have wild, ambitious expectations of the future, we're not frightened of it. How could it surprise us, now that Star Trek and Star Wars and Terminator movies and The Matrix and a hundred computer role-playing games have domesticated the twenty-fourth century, cyberspace virtual realities, and a galaxy far, far away?
Actually, I blame glitzy mass-market science fiction script writers for misleading us. They got it so wrong. Their enjoyable futures, by and large, are about as plausible as nineteenth-century visions of tomorrow. Those had dirigibles filling the skies and bonneted ladies in crinolines tapping at telegraphs.
Back in the middle of the twentieth century, when the futuristic stories I read as a kid were being written, most people knew "that Buck Rogers stuff" was laughable fantasy, suitable only for children. After all, it talked about atomic power and landing on the moon and time travel and robots that would do your bidding even if you were rude to them. Who could take such nonsense seriously?
Twenty years later, men had walked on the moon, nuclear power was already obsolete in some countries, and computers could be found in any university. Another two decades on, in the nineties, probes sent us vivid images from the solar system's far reaches (and got lost on Mars), immensely powerful but affordable personal computers sat on desks at home as well as work, the human genome was being sequenced, and advanced physics told us that even time travel through spacetime wormholes was not necessarily insane (although it was surely not in the immediate offing).
So popular entertainment belatedly got the message, spurred on by prodigious advances in computerized graphics. Sadly, the movie, television, and game makers still didn't know a quark from a kumquat, a light-year (a unit of interstellar distance) from a picosecond (a very brief time interval). With gusto and cascades of light, they blended made-up technobabble with exhilarating fairy stories, shifting adventure sagas from ancient legends and myth into outer space. It was great fun, but it twisted our sense of the future away from an almost inconceivably strange reality (which is the way it will actually happen) and back into safe childhood, that endless temptation of fantastic art.
Maybe you think I'm about to get all preachy and sanctimonious. You're waiting for the doom and gloom: rising seas and greenhouse nightmare, cloned tyrants, population bomb, monster global megacorporations with their evil genetically engineered foods and monopoly stranglehold on the crop seeds needed by a starving Third World. Wrong. Some of those factors indeed threaten the security of our planet, but not for much longer (unless things go very bad indeed, very quickly). No, what's wrong with most media images of the future isn't their evasion of such threats--on the contrary, they play them up to the point of absurdity. What's wrong is their laughably timid conservatism.
The future is going to be a fast, wild ride into strangeness. And many of us will still be there as it happens.
That strangeness is exactly what prevents us from picking out any one clear determinate future. The coming world of the Spike is, strictly, unimaginable--but we can certainly try our best to trace some of the contributing factors, and some of the ways they'll converge (or perhaps block each other). That fact governs my approach in this book. Do not expect a dogmatic manifesto advancing a single thesis. Instead, I'll try to give you a glimpse of many different technologies. I won't attempt the impossible, which is to integrate all those different points of view into one comforting, assured framework. There is no inevitable tomorrow.
All that we know for sure is the almost unstoppable acceleration of science and technology, and the drastic impact it will have upon humanity and our world.
Living in the future right now
This accelerating world of drastic change won't wait until, say, Star Trek's twenty-fourth century, let alone the year 3000. We can expect extraordinary disruptions within the next half century. Many of those changes will probably start to impact well before that. By the end of the twenty-first century, there might well be no humans (as we recognize ourselves) left on the planet--but, paradoxically, nobody alive then will complain about that, any more than we now bewail the loss of Neanderthals.
That sounds rather tasteless, but I mean it literally: many of us will still be here, but we won't be human any longer--not the current model, anyway. Our children, and perhaps we as well, will be smarter. We already have experimental hints of how that might occur. In September 1999, molecular biologists at Princeton reported adding a gene to a strain of mice, elevating their production of NR2B protein. The improved brains of these "Doogie mice" used this extra NR2B to enhance brain receptors, helping the animals solve puzzles much faster. A kind of genetic turboaccelerator for mousy intelligence. Human brains, as it happens, use an almost identical protein. It is not far-fetched to suppose that we will learn to tweak or supplement it to increase our own effective intelligence (or that of our children).
Nor will we be the only high-level intelligences on the planet. By the close of the twenty-first century, there will be vast numbers of conscious but artificial minds on earth. How we and our children get along with them as they move out of the labs and into the marketplace will determine the history of life in the solar system, and maybe the universe.
I'm not making this up. Dr. Hans Moravec, a robotics pioneer at Carnegie Mellon University in Pittsburgh, argues in Robot (1999) that we can expect machines equal to human brains within forty years at the latest. Already, primitive robots operate at the level of spiders or lizards. Soon a robot kitten will be running about in Japan, driven by an artificial brain designed and built by the Australian researcher Dr. Hugo de Garis. True, it's a vast leap from lizard to monkey and then human, but computers are doubling in speed and memory every year.
This is the hard bit to grasp: with that kind of annual doubling in power, you jump by a factor of 1000 every decade. In twenty years, the same price (adjusted for inflation) will buy you a computer a million times more powerful than your current model. That's "Moore's law," enunciated in 1965 by Gordon E. Moore, one of the founders of the Intel company, which now makes the Pentium chip for your personal computer. Moore originally surmised that the number of components on an integrated circuit (IC) would double each year. If that were to happen, 65,000 transistors would dance on an IC within ten years. That was a little ambitious, but turned out to be close to reality--a result nobody could have believed in 1965. Moore's conjecture changed as time passed, first slowing down to "doubling every two years," then speeding back up to "doubling every eighteen months." It remained an astonishing prediction, and an amazing phenomenon.
Moore's law (although of course it isn't really anything like a law of nature) reduces to a disarmingly simple piece of algebra, the kind even someone uncomfortable with figures might be able to follow: growth goes as two raised to the power of N, where N is the number of doubling periods that have elapsed--and each period, on the current conjecture, lasts eighteen months. With this equation you can work out how long it takes to reach a millionfold increase, say, by following Moore's (revised) law through the following simple steps:
* * *
• two to the tenth power equals roughly 1000
• two to the twentieth power equals a million
• and two to the fortieth power equals a thousand billion.
* * *
If, to be conservative, a single doubling happens during each two-year period, then every twenty years we get a thousand times as much computational power per dollar as we started with.
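The same back-of-the-envelope calculation can be run directly: given a doubling period, how long until a given growth factor is reached? A minimal sketch (the function name is my own, not from the book):

```python
import math

def years_to_factor(factor: float, doubling_period_years: float) -> float:
    """Years of steady doubling needed to multiply capability by `factor`."""
    # Number of doublings required is log base 2 of the target factor.
    doublings = math.log2(factor)
    return doublings * doubling_period_years

# Conservative two-year doubling, as in the text:
print(round(years_to_factor(1_000, 2.0)))       # ~20 years to a thousandfold
print(round(years_to_factor(1_000_000, 2.0)))   # ~40 years to a millionfold
```

With the more aggressive eighteen-month period, the millionfold jump shrinks to roughly thirty years, which is why the choice of doubling period matters so much to these forecasts.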
At the start of the 2000s, the world's best, immensely expensive supercomputers perform several trillion operations a second. To emulate a human mind, Moravec estimates, we'll need systems a hundred times better. Advanced research machines might meet that benchmark within a decade, or sooner--but it will take another ten or twenty years for the comparable home machine at a notepad's price. Still, around 2040, expect to own a computer with the brain power of a human being. And what will that be like? If software develops at the same pace, we will abruptly find ourselves in a world of alien minds as good as our own.
Will they take our orders and quietly do our bidding? If they're designed right, maybe. But that's not the kicker. That's just the familiar world of third-rate sci-fi movies with clunky or sexy-voiced robots. The key to future change comes from what's called "self-bootstrapping"--machines and programs that modify their own design, optimize their functioning, improve themselves in ways that limited human minds can't even start to understand. Dr. de Garis calls such beings "artilects," and even though he's building their predecessors he admits he's scared stiff.
By the end of the twenty-first century, computer maven Ray Kurzweil (in The Age of Spiritual Machines, 1999) expects a merging of machines and humans, allowing us to shift consciousness from place to place. He's got an equally impressive track record, as a leading software designer and specialist in voice-activated systems. His time line for the future is even more hair-raising than Moravec's. In a decade, he tells us, expect desktop machines with the grunt of today's best supercomputers, a trillion operations a second. Forget keyboards--we'll speak to these machines, and they'll speak back in the guise of plausible personalities.
By 2020, a Pentium equivalent will equal a human brain. And now the second great innovation kicks in: molecular nanotechnology (MNT), building things by putting them together atom by atom. I call that "minting," and the wonderful thing is that a mint will be able to replicate itself, using common, cheap chemical feedstocks. Houses and cars will be compiled seamlessly out of diamond (carbon, currently clogging the atmosphere) and sapphire (aluminum), because they will be cheap appropriate materials readily handled by mints. It's not clear, however, if one-size-fits-all universal assemblers will be feasible, at least in the near future; some mints might be specialized to compile carbon compounds, others to piece together aluminum (into sapphire) or tungsten-carbide structures, requiring assembly at a coarser level. These dedicated mints will operate at successively higher temperatures, each requiring a totally different chemistry (feedstock, tool-tips, energy sources). "Whether you can use one level of MNTing to enable the next higher level," notes one commentator, "remains a very open question."
Until recently, all nanotechnology was purely theoretical. A Rand Corporation study declared cautiously: "Extensive molecular manufacturing applications, if they become cost-effective, will probably not occur until well into the far term. However, some products benefiting from research into molecular manufacturing may be developed in the near term. As initial nanomachining, novel chemistry, and protein engineering (or other biotechnologies) are refined, initial products will likely focus on those that substitute for existing high-cost, lower-efficiency products." The engineering theory was good, but the evidence was thin. Finally, though, at the end of November 1999, came a definitive breakthrough, harbinger of things to come. Researchers at Cornell University announced in the journal Science that they had successfully assembled molecules one at a time by chemically bonding carbon monoxide molecules to iron atoms. This is a long way from building a beefsteak sandwich in a mint the size of a microwave oven powered by solar cells on your roof (also made for practically nothing by a mint), but it's proof that the concept works.
If that sounds like a magical world, consider Kurzweil's 2030. Now your desktop machine (except that you'll probably be wearing it, or it will be built into you, or you will be absorbed into it) holds the intelligence of one thousand human brains. Machines are plainly people. It might be (horrors!) that smart machines are debating whether, by comparison with their lucid and swift understanding, humans are people! We had better treat our mind children nicely. Minds that good will find little difficulty solving problems that we are already on the verge of unlocking. Cancers will be cured, along with most other ills of the flesh.
Aging, and even routine death itself, might become a thing of the past. In October 1999, Canada's Chromos Molecular Systems announced that an artificial chromosome inserted into mice embryos had been passed down, with its useful extra genes, to the next generation. And in November 1999, the journal Nature reported that Pier Giuseppe Pelicci, at Milan's European Institute of Oncology, had deactivated the p66shc gene in mice--which then lived thirty percent longer than their unaltered kin, without making them sluggish! A drug blocking p66shc in humans might have a similar life-extending effect.
As well, our bodies will be suffused with swarms of medical and other nano maintenance devices. The first of three magisterial volumes detailing how and why medical nanorobots are in mid-range prospect appeared at the end of 1999: Dr. Robert A. Freitas Jr.'s Nanomedicine. Nor will our brains remain unaltered. Many of us will surely adopt the prosthetic advantage of direct links to the global net, and augmentation of our fallible memories and intellectual powers. This won't be a world of Mr. Spock emotionless logic, however. It is far more likely that AIs (artificial intelligences) will develop supple, nuanced emotions of their own, for the same reason we do: to relate to people, and for the sheer joy of it.
The real future, in other words, has already started. Don't expect the simple, gaudy world of Babylon 5 or even eXistenZ. The third millennium will be very much stranger than fiction.
Walking into the future
To get a firmer idea of the reasoning that underlies these apparently reckless claims, consider the ever-accelerating rate at which people have been able to travel during the last three hundred thousand years (or the last three million, if you're willing to accept a generous definition of humankind).
For very much the largest part of that span, we were limited to walking pace, with long rests. Some six thousand years ago we borrowed the lugging power of asses, then the strength and endurance of other large animals, finally coupling small ponies to war chariots in the second millennium b.c. Breeding horses large enough to ride took many centuries more. In other forms of transport, dugout canoes, then boats, and finally ships with sails went as fast as arms could paddle, or winds, captured fairly inefficiently, blow.
Less than two hundred years ago, steam trains sent our ancestors hurtling on rail at twenty or thirty kilometers per hour. Cars made faster speeds commonplace within the living memory of the elderly, especially as roads improved (at prodigious cost, financially and to the shape of the landscape). Prop aircraft flew at a few hundred kilometers per hour. Within decades, jets flew ten times that fast, and by the 1960s rockets took astronauts into space at tens of thousands of kilometers per hour. Today, using "virtual presence" on-line simulation systems, we are on the verge of "being there" (in a limited but vivid and interactive sense) at the speed of light. And that's the end of the line--you can't get faster than the velocity of light.
Mapped on a graph, this progression shows a long flat rise, turning slowly upward, then climbing more sharply, and faster again...and now its dotted projection seems to soar dizzyingly toward a veritable Spike.
The brain on your desk
That same headlong acceleration applies, as we are now uncomfortably aware, to the speed, power, and cheapness of computers. Computer-power-per-dollar currently doubles every eighteen months, or perhaps as swiftly as every year. Growth in computing power is already exponential, maybe hyperexponential.
Starting small, with one or two special highly secret vacuum-tube computers during the Second World War, the computer presence sluggishly increases to bulky, cantankerous devices in a few rich universities, and then some clumsy IBM mainframes in large businesses, and then the big vulnerable tubes get replaced by transistors, by integrated circuits, and before you know where you are it's the late 1970s, early 1980s, and home enthusiasts are buying their first Macs and PCs, and the prices continue to fall, and meanwhile the military and NASA are funding superfast giant machines running in a bath of liquid helium to keep them cool enough to function, and the curve is getting steeper and steeper--
It is the fable of the Chessboard brought to life: one grain on the first square, two on the second, four on the third--and by the time we reach the sixty-fourth square, we groan beneath a deluge of rice.
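The fable's numbers are easy to check, and they make the point starkly (a small illustrative sketch):

```python
# The chessboard fable: one grain on square 1, doubling on each square after.
grains_on_last_square = 2 ** 63      # square 64 alone
total_grains = 2 ** 64 - 1           # every square combined

print(grains_on_last_square)  # 9223372036854775808
print(total_grains)           # 18446744073709551615
```

The final square alone carries more grains than all sixty-three squares before it put together, which is exactly the property of exponential growth that makes the late stages of the curve so hard to intuit.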
Computing power that is developing with such acceleration may be able to emulate human intelligence within thirty or forty years. A century, tops.
At that point, if the chart of the Spike is telling us the truth, we (or our children, or our grandchildren) may see machines with twice our capacity within a further eighteen months, then four times our capacity within a further year and a half, and...
Intelligence will have Spiked. It won't be our human intelligence, but we will be borne along into the Spike with it.
Computing power and speed of travel are just two examples of runaway progress. The most exciting prospect, one that convinces scientists who assess the evidence for a coming Spike, is that other disciplines will have Spiked at about the same time: medical research into aging, cloning and genome manipulation, miniaturization of high-tech products until they reach molecular or even atomic scales (nanotechnology), and more.
So the world of the Spike will be marked by
* * *
• augmented human abilities, made possible by connecting ourselves to chips and neural networks that are not in themselves aware but can amplify our native abilities...
• human-level Artificial Intelligences (AIs), swiftly followed by hyperintelligent AIs...
• DNA genome control, which gives us the capacity to redesign ourselves and our children, enhancing not just mind but every bodily and emotional pleasure and aptitude...
• nanotechnology machines, including AIs, built from the atom up, including extremely tiny self-replicating devices no larger than molecules...
• extreme physical longevity or even (in effect, barring accident) immortality, due to a blend of: the new understanding and control of our genetic inheritance, including apoptotic "suicide genes" that may limit lifespan by restricting the number of times most cells can be repaired by self-replication; nanotechnological medical repair systems that live inside the body from birth and keep cells rejuvenated and free of disease, including cancers; "backup" copies of our memories maintained in machine storage in case of damage to the brain, or permitting organ or tissue cloning and replacement of lost knowledge and experience in the extreme case of severe physical damage to the body/brain...
• "uploads" or transfers of human minds into computers, so that we can live, work, and play inside their rich and manipulable machine-generated virtual realities...
• possible contact with galactic civilizations that have already gone through the Spike transition, including such extreme prospects as ancient extraterrestrial cultures so powerful that they have long ago restructured the visible universe (or rewritten the laws of quantum mechanics)...