More on the great AI debate
(Editor's note: This is the second of two parts. Read Part One.)
Wherever we find insufficiencies, we create dependencies to help fill the gap, and every “need” produces movement.
Using this line of thinking, the human race does not exist as self-sufficient organisms. We all pride ourselves as being rugged individualists, yet we have little chance of surviving without each other.
Even though we are constantly fighting to become well-balanced people, the greatest people throughout history, the people most lauded as heroes, were highly unbalanced individuals. They simply capitalized on their strengths and downplayed their weaknesses.
If humans were wheels, we would all be rolling around with lumpy flat sides and eccentric weight distribution. But if 1,000 of these defective wheels were placed side-by-side on the same axil, the entire set would roll smoothly.
This becomes a critical piece of a much bigger equation because every AI unit we’re hoping to create is just the opposite, complete and survivable on its own.
Naturally this raises a number of philosophical questions:
1. How can flawed humans possibly create un-flawed AI?
2. Is making the so-called “perfect” AI really optimal?
3. Will AI become the great compensator for human deficiencies?
4. Does AI eventually replace our need for other people?
The Button Box Theory
One theory often discussed in AI circles is the button box theory. If a computer were to be programmed to “feel rewarded” by having a button pressed every time it completed a task, eventually the computer would search for more efficient ways to receive the reward.
First it would look for ways to circumvent the need for accomplishing tasks and figure out ways to automate the button pushing. Eventually it would look for ways to remove threats to the button, including the programmer who has the power to unplug things altogether. Since computers cannot be reasoned with, it is believed that the machines would eventually rise up to battle humans.
This scenario is key to many dark movie plots where intelligent machines begin to battle against humanity in the future. Yet it is filled with assumptive flaws that these machines will somehow learn to take initiative, and their interests will instantly blind them to any other interests in the world.
A Few Startling Conclusions
Virtually every advancement in society is focused on the idea of gaining more control.
We all know what it’s like to get blindsided by bad serendipity, and we don’t like it. Our struggle for control is a coping reaction for life’s worst moments. If only we could have more control, nothing bad would ever happen to us.
Artificial intelligence promises to solve this dilemma. We not only create avoidance mechanisms for every danger, but fixes for every problem, and self-sufficiency on a grand scale.
Eventually we become stand-alone organisms, content in our surroundings, wielding off-the-chart levels of intelligence and capabilities exceeding our wildest imagination.
However, this is where the whole scenario begins to break down.
Self-sufficiency will lead to isolation and our need for each other will begin to vanish. Without needs and dependencies, there is no movement. And without the drive for fixing every insufficiency, our sense of purpose begins to vanish.
Being super intelligent is meaningless if there is nothing to apply the intelligence to. Much like a perpetual motion machine that never gets used, there’s little purpose for its existence.
For this reason, it becomes easy for me to predict that all AI will eventually fail. It will either fail from its imperfection or fail from its prefection, but over time it will always fail.
However, just because it’s destined to fail doesn’t mean we shouldn’t be pursuing these goals. As we journey down this path we will be creating some amazingly useful applications.
Narrow AI applications will thrive in countless ways, and even general AI will create immeasurable benefits over the coming decades. But it is delusional to think that solving all problems will be a good thing.
Sometimes our best intentions reveal themselves as little more than a mirage to help guide us to an area we never intended to go.
I started off this column talking about a new unit of measure - one human intelligence unit (1 HIU). But along the way, it has become clear that human intelligence and artificial intelligence exist on different planes.
Without dependencies there can be no human intelligence. Something else perhaps, but it won’t be human.
There’s something oddly perfect about being imperfect.
When it comes to measuring the potential danger of AI, leveraging it for good can be as dangerous as leveraging it for evil.
In the end, I’ve failed to uncover the magical unit of measure by which all AI can be measured. Perhaps it’s just my way of waging a personal protest against perfection, but like a train that has yet to leave the station, this is a movement still decades away.
As I close out this discussion, I’d love to hear your thoughts. Are the doubts and fears that cloud my assessment as real as I imagine them to be, or simply delusional thinking on my part?