The Singularity is Unclear
THE FUTURE OF THE HUMAN RACE was indelibly changed on Monday, July 16, 1945, at 5:29 a.m.
At that moment, the rain from a long night of thunderstorms had finally stopped, and what would have been just another sunrise washing over the desolate, arroyo-scarred landscape of the Jornada del Muerto desert was suddenly engulfed by a flash of light brighter than a dozen suns.
The light was so bright it could be seen across the entire state of New Mexico as well as parts of Arizona, Texas, and down into Mexico.
“It was golden, purple, violet, gray, and blue,” recalled Brigadier General Thomas F. Farrell. “It lighted every peak, crevasse and ridge of the nearby mountain range with a clarity and beauty that cannot be described but must be seen to be imagined.”
That beautiful light had been caused by an ugly explosion that was 10,000 times hotter than the surface of the sun. Every living creature within a one-mile radius of ground zero was obliterated, and the very ground itself was transformed from sand into jade-colored glass (later dubbed “trinitite”) beneath the blast’s crucible of heat.
Then the scorched land fell into darkness beneath a towering and ominous 38,000-foot mushroom cloud, the image of which would be burned into the collective consciousness of generations to come.
We had successfully detonated the first atomic bomb and created a weapon of mass destruction. Whatever future could have been imagined for the human race up until that moment had changed forever.
For the first time in history, man could imagine a future in which he could destroy the world by his own hand and perish from the face of the Earth.
Not even J. Robert Oppenheimer, the head of the Manhattan Project’s scientific team that created and detonated the first atomic bomb, fully comprehended the impact of what had been accomplished until after that first test blast, when he reportedly uttered this quote from the Bhagavad Gita:
“I am become Death,” he said, “the destroyer of worlds.”
His test director, Ken Bainbridge, responded a bit more bluntly: “Now we’re all sons of bitches,” he said.
Shortly after the first A-bomb was dropped on Hiroshima (August 6, 1945), President Truman declared the creation of the A-bomb “the greatest achievement of organized science in history.”
What may have been the “greatest achievement” in 1945, however, wasn’t a great achievement for all time. The nuclear legacy that mushroomed out of this technological advancement was anything but a “great achievement.”
The resulting shadow of Mutually Assured Destruction (MAD) hung over the world for 50 years, then diminished with the fall of the Soviet Union, only to be transformed into fears of “dirty bombs” planted by terrorists in the very cities that were home to the scientific minds who created the A-bomb.
As the pace of technological advancement quickens, our ability to assess the long-term impacts of technological advancements diminishes. As our ability to assess those impacts diminishes, the chance of unforeseen consequences increases.
This is a problem if we desire to be good stewards of the present (as opposed to “sons of bitches”) and to create a future worth living in for those who, for better or for worse, will inherit the future we are creating with the decisions we make today.
We live in an era of rapid technological progress in which one advancement leads quickly to another in shorter and shorter cycles.
“Technological change isn’t just happening fast,” wrote author James John Bell in an article published in The Futurist. “It’s happening at an exponential rate. Contrary to the commonsense, intuitive, linear view, we won’t just experience 100 years of progress in the twenty-first century — it will be more like 20,000 years of progress.”
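To get a feel for where a figure like 20,000 comes from, consider a back-of-the-envelope sketch. It rests on two illustrative assumptions of my own (they are not spelled out in Bell’s article): that the rate of progress doubles every decade, and that each decade’s progress is tallied in “year-2000-equivalent years.”

```python
# Back-of-the-envelope sketch of exponential progress (illustrative
# assumptions: the rate of progress doubles every decade, measured
# against a fixed year-2000 baseline).
decades = 10           # ten decades in the twenty-first century
years_per_decade = 10
rate = 1.0             # rate of progress at the turn of the century (baseline)

total_progress = 0.0   # cumulative progress, in year-2000-equivalent years
for _ in range(decades):
    rate *= 2                               # the rate doubles each decade
    total_progress += years_per_decade * rate

print(total_progress)  # 20460.0 -- the same order as the 20,000 years cited
```

Counted this way, a single century of calendar time amounts to roughly 20,000 years of progress at today’s rate, which is the compounding Bell is pointing to.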
If technological advancement continues to accelerate at an exponential rate, it will, at least in theory, eventually reach a “singularity”: a point in the technological revolution analogous to the theoretical singularity that occurs within a black hole, hence the use of the term.
A black hole occurs when a dying star collapses into an ever denser body with a gravitational pull so strong that nothing around it can escape, not even light.
In his book A Brief History of Time, Stephen Hawking wrote, “According to [Einstein’s theory of] general relativity, there must be a singularity of infinite density and space-time curvature within a black hole…At this singularity the laws of science and our ability to predict the future would break down.”
The technological singularity, often referred to simply as “the singularity,” is a postulated point in time when the rate of technological advancement accelerates beyond our ability to fully comprehend or predict the future.
In his book The Singularity is Near, Ray Kurzweil writes that the singularity is “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.”
According to Kurzweil, the singularity will be characterized by a time in which “societal, scientific, and economic change is so fast we cannot even imagine what will happen from our present perspective.”
The singularity is not based upon one particular technological advancement; rather, it is predicated on the convergence of developments in areas such as computer science, biotechnology, artificial intelligence (AI), neuroscience, nanotechnology, robotics, and genetics.
Some singularity apologists argue that the tipping point for the singularity will be the development of machine intelligence that exceeds human intelligence. This super-intelligence would then have the ability to create an even greater intelligence (assuming, of course, that it chooses to do so).
Contrary to popular sci-fi movies and novels, I don’t believe that machine intelligence of this magnitude will be built from scratch and housed within an AI robot or a computer like the infamous HAL 9000 in 2001: A Space Odyssey. The more likely scenario will be the merger of humans with the technology we’ve created.
At first this merger will be to augment human intelligence. Eventually, we’ll replace our limited brains with something that has far more capacity. I think this is the likely scenario because we’ve already been working on it for a couple million years.
Think of the singularity then as the point when evolution is no longer a natural process that occurs over millions of years but is directly and immediately influenced — or even created — by its participants.
If you are having difficulty conceiving what that future might be like or what it would lead to next, then you have just experienced a taste of the singularity. If you find this somewhat disturbing or flat-out terrifying, you’re not alone.
“The singularity is a frightening prospect for humanity,” wrote Stewart Brand in his book The Clock of the Long Now: Time and Responsibility. “I assume that we will somehow dodge it or finesse it in reality, and one way to do that is to warn about it early and begin to build in correctives.”
We’ve been warned.