The futurist: Our collision path with the future
(Editor's note: This is the first of two parts.)
Google’s Director of Engineering, Ray Kurzweil, has predicted that we will reach a technological singularity by 2045, and science fiction writer Vernor Vinge is betting on 2029, the 100th anniversary of the greatest stock market collapse in human history.
But where the 1929 crash catapulted us backwards into a more primitive form of human chaos, the singularity promises to catapult us forward into a future form of human enlightenment.
The person who coined the term “singularity” in this context was mathematician John von Neumann. In a 1958 tribute published after von Neumann’s death, his colleague Stanislaw Ulam recalled a conversation in which von Neumann described the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, can not continue.”
Since that first cryptic mention half a century ago, people like Vernor Vinge and Ray Kurzweil have focused on the exponential, Moore’s Law-style growth of artificial intelligence, continuing until we develop superintelligent entities whose decision-making lies far beyond our ability to understand it.
Cloaked in this air of malleable mystery, Hollywood has taken license to cast the singularity as everything from the ultimate boogeyman to the ultimate savior of humanity.
Adding to these prophecies are a number of fascinating trend lines that lend credence to the predictions. Beyond our ever-growing awareness of the world around us, brought on by social media and the escalating pace of digital innovation, measured human intelligence has risen decade after decade since at least the 1930s, a phenomenon known as the Flynn Effect.
We all know intuitively that something is happening. IBM’s Watson just beat the best of the best at their own game, Jeopardy! With computers beginning to generate their own algorithms, and more cameras adding eyes for the Internet to “see,” amazing things are beginning to happen.
Tech writer Robert Cringely predicts, “A decade from now computer vision will be seeing things we can’t even understand, like dogs sniffing cancer today.”
So what happens when we lose our ability to understand what comes next?
The Failure of Artificial Intelligence
I’ve never liked the term “artificial intelligence.”
The phrase rose to popularity in the 1980s, when the field’s goal was to reverse engineer the thinking of experts and reduce their methodologies to a set of rules that computers could execute far more efficiently.
As a burgeoning area of science, AI sucked up hundreds of millions of dollars from investors around the world before being declared an abysmal failure.
But even with this inauspicious beginning, and with prominent scientists attempting to drive a stake into its heart after every failure, AI is once again rearing its ugly head, only this time bolstered by a far broader use of the term and riding the disruptive innovation bandwagon of big data.
Yes, I understand that machine intelligence can circumvent human fallibility and perform calculations a zillion times faster. But it is the as-yet-undefined quirkiness of human traits that gives true intellect to human intelligence.
Since we live in a human-based world, ruled by human economics, machines are still subject to human limitations, foibles, and proclivities, at least for now.
The appeal of AI has not been in its ability to replace humans, but in its ability to supplement and bolster human capability.
Human Intelligence Vs. Artificial Intelligence
A few years ago I was involved in a search engine-related startup where we were studying the connection between a search phrase and the resulting website that the person was looking for.
In analyzing the path, which began with the typing of the search phrase and continued through the discernment process in which inappropriate sites were discarded before a final destination was chosen, it became obvious that each search was layered with huge amounts of valuable data that should be captured and dissected for later use.
The information fragments we were capturing were not merely data points along a line; we were capturing actual pieces of real human intelligence. Since real people were making the link between the search terms and the destination site, albeit a primitive association, it was indeed a useful form of human thinking.
Over time, a database with billions of human decisions like this could be developed into the principal engine for many future technologies.
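The kind of capture described above can be sketched in a few lines: each search session yields one record pairing the typed phrase with the results a person rejected and the destination they ultimately chose. This is only a minimal illustration of the idea; the data structure, function names, and URLs are hypothetical, not the startup’s actual system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SearchDecision:
    """One human search session: the phrase typed, the candidate
    results the searcher saw and passed over, and the final site chosen."""
    query: str
    rejected: List[str] = field(default_factory=list)
    chosen: str = ""

def record_session(query: str, shown: List[str], clicked: str) -> SearchDecision:
    """Capture the implicit human judgment in a session: every
    shown-but-skipped result is a negative signal, the clicked
    result a positive one."""
    rejected = [url for url in shown if url != clicked]
    return SearchDecision(query=query, rejected=rejected, chosen=clicked)

# A toy "database" of captured decisions (all URLs are made up).
decisions = [
    record_session(
        "best hiking boots",
        ["ads.example.com", "forum.example.org", "boots.example.net"],
        "boots.example.net",
    )
]
```

Aggregated over billions of sessions, records like these are what would let the query-to-destination associations described above be mined later.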
Next: On the path to super-human intelligence