Edit ModuleShow Tags

More on the great AI debate


Published:

(Editor's note: This is the second of two parts. Read Part One.)

Wherever we find insufficiencies, we create dependencies to help fill the gap, and every “need” produces movement.

Using this line of thinking, the human race does not exist as self-sufficient organisms. We all pride ourselves as being rugged individualists, yet we have little chance of surviving without each other.

Even though we are constantly fighting to become well-balanced people, the greatest people throughout history, the people most lauded as heroes, were highly unbalanced individuals. They simply capitalized on their strengths and downplayed their weaknesses.

If humans were wheels, we would all be rolling around with lumpy flat sides and eccentric weight distribution. But if 1,000 of these defective wheels were placed side-by-side on the same axil, the entire set would roll smoothly.  

This becomes a critical piece of a much bigger equation because every AI unit we’re hoping to create is just the opposite, complete and survivable on its own.

Naturally this raises a number of philosophical questions:

1. How can flawed humans possibly create un-flawed AI?
2. Is making the so-called “perfect” AI really optimal?
3. Will AI become the great compensator for human deficiencies?
4. Does AI eventually replace our need for other people?

The Button Box Theory

One theory often discussed in AI circles is the button box theory. If a computer were to be programmed to “feel rewarded” by having a button pressed every time it completed a task, eventually the computer would search for more efficient ways to receive the reward.

First it would look for ways to circumvent the need for accomplishing tasks and figure out ways to automate the button pushing. Eventually it would look for ways to remove threats to the button, including the programmer who has the power to unplug things altogether. Since computers cannot be reasoned with, it is believed that the machines would eventually rise up to battle humans.

This scenario is key to many dark movie plots where intelligent machines begin to battle against humanity in the future. Yet it is filled with assumptive flaws that these machines will somehow learn to take initiative, and their interests will instantly blind them to any other interests in the world.

A Few Startling Conclusions

Virtually every advancement in society is focused on the idea of gaining more control.

We all know what it’s like to get blindsided by bad serendipity, and we don’t like it. Our struggle for control is a coping reaction for life’s worst moments. If only we could have more control, nothing bad would ever happen to us.

Artificial intelligence promises to solve this dilemma. We not only create avoidance mechanisms for every danger, but fixes for every problem, and self-sufficiency on a grand scale.

Eventually we become stand-alone organisms, content in our surroundings, wielding off-the-chart levels of intelligence and capabilities exceeding our wildest imagination.

However, this is where the whole scenario begins to break down.

Self-sufficiency will lead to isolation and our need for each other will begin to vanish. Without needs and dependencies, there is no movement. And without the drive for fixing every insufficiency, our sense of purpose begins to vanish.

Being super intelligent is meaningless if there is nothing to apply the intelligence to. Much like a perpetual motion machine that never gets used, there’s little purpose for its existence.

For this reason, it becomes easy for me to predict that all AI will eventually fail. It will either fail from its imperfection or fail from its prefection, but over time it will always fail.

However, just because it’s destined to fail doesn’t mean we shouldn’t be pursuing these goals. As we journey down this path we will be creating some amazingly useful applications.

Narrow AI applications will thrive in countless ways, and even general AI will create immeasurable benefits over the coming decades. But it is delusional to think that solving all problems will be a good thing.

Final Thoughts

Sometimes our best intentions reveal themselves as little more than a mirage to help guide us to an area we never intended to go.

I started off this column talking about a new unit of measure - one human intelligence unit (1 HIU). But along the way, it has become clear that human intelligence and artificial intelligence exist on different planes.

Without dependencies there can be no human intelligence. Something else perhaps, but it won’t be human.

There’s something oddly perfect about being imperfect.

When it comes to measuring the potential danger of AI, leveraging it for good can be as dangerous as leveraging it for evil.

In the end, I’ve failed to uncover the magical unit of measure by which all AI can be measured. Perhaps it’s just my way of waging a personal protest against perfection, but like a train that has yet to leave the station, this is a movement still decades away.

As I close out this discussion, I’d love to hear your thoughts. Are the doubts and fears that cloud my assessment as real as I imagine them to be, or simply delusional thinking on my part?

Edit Module
Thomas Frey

Thomas Frey is the executive director and senior futurist at the DaVinci Institute and currently Google’s top-rated futurist speaker.  At the Institute, he has developed original research studies, enabling him to speak on unusual topics, translating trends into unique opportunities. Tom continually pushes the envelope of understanding, creating fascinating images of the world to come.  His talks on futurist topics have captivated people ranging from high level of government officials to executives in Fortune 500 companies including NASA, IBM, AT&T, Hewlett-Packard, Unilever, GE, Blackmont Capital, Lucent Technologies, First Data, Boeing, Ford Motor Company, Qwest, Allied Signal, Hunter Douglas, Direct TV, Capital One, National Association of Federal Credit Unions, STAMATS, Bell Canada, American Chemical Society, Times of India, Leaders in Dubai, and many more. Before launching the DaVinci Institute, Tom spent 15 years at IBM as an engineer and designer where he received over 270 awards, more than any other IBM engineer.

Get more content like this: Subscribe to the magazine | Sign up for our Free e-newsletter

Edit ModuleShow Tags

Archive »Related Articles

Key to growth: A relationship with your lender

It isn’t a secret – Colorado’s economy is vibrant and strong. New developments continue to spring up across the state, many entrepreneurs have started new businesses, and many more companies are growing and need resources to meet their increased demand. What’s the secret to ensure business owners...

Do we need a new word for entrepreneur?

Has the word entrepreneur become too trendy as to have lost its meaning? I’m hearing it and the word entrepreneurship being used in so many conversations incorrectly. I’m critical of the use of the word "entrepreneur"...are you?

Hot tips for emerging company boards

Emerging companies comprise a significant portion of Colorado businesses. Venture capitalists, angel investors and founders make up the shareholders and the boards of directors of many of these companies. I spoke recently to Fran Wheeler, a partner in the Business Department of the Colorado Office...
Edit ModuleShow Tags
Edit ModuleEdit ModuleShow Tags
Edit ModuleShow Tags Edit ModuleShow Tags
Edit ModuleShow Tags Edit ModuleShow Tags
Edit ModuleShow Tags Edit ModuleShow Tags