Human vs. Artificial Intelligence – Can AI Win?

Eight unanswerable questions to consider in our blended, boundary-crossing future alongside intelligent machines

Thomas Frey //February 26, 2018//

When does artificial intelligence become true intelligence? And how do we define real intelligence?

These are questions experts and members of the tech community have been wrestling with for some time, and they have led to “the billion moment theory.”

The theory goes something like this.

Life-changing moments are happening every second of every day. If we count one moment per second, every minute brings another 60, every hour another 3,600, and every day another 86,400.

As the metronome continues to tick, the amount of time it takes to reach 1 billion seconds is 31.7 years.

While we reach the legal age of adulthood at 21, we cross into a new level of maturity in our early 30s, right around that billion-second mark.

Similar in some respects to Malcolm Gladwell’s theory in “Outliers,” wherein people invest 10,000 hours (36 million seconds) to become an expert, the billion moment theory treats 1 billion moments as the number of learning cycles necessary to transition from a machine brain to something else.
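For anyone who wants to check the math, a few lines of Python (assuming one moment per second and an average 365.25-day year) reproduce the figures:

```python
# Sanity check of the "billion moment" arithmetic, treating one moment as one second
# and assuming an average 365.25-day year (an approximation, not from the article).

SECONDS_PER_MINUTE = 60
SECONDS_PER_HOUR = SECONDS_PER_MINUTE * 60      # 3,600
SECONDS_PER_DAY = SECONDS_PER_HOUR * 24         # 86,400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25     # ~31.6 million

billion_moments = 1_000_000_000
print(f"1 billion seconds = {billion_moments / SECONDS_PER_YEAR:.1f} years")  # ~31.7 years

# Gladwell's 10,000-hour benchmark, expressed in seconds
expert_seconds = 10_000 * SECONDS_PER_HOUR
print(f"10,000 hours = {expert_seconds:,} seconds")                                # 36,000,000
print(f"1 billion moments is about {billion_moments / expert_seconds:.0f}x that")  # ~28x
```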

So while the organic human side of the equation progresses in a somewhat methodical manner, machine learning can compress learning cycles into a fraction of that time.

As a point of comparison, when we look closely at the epic battle between Go master Lee Sedol (age 33) and Google DeepMind’s program AlphaGo, we can begin to understand the advantage AI has over the human mind.

AlphaGo studied 30 million positions from human games and played more than 30 million practice games against itself. This is in stark contrast to Lee Sedol, who began serious training when he was 8 years old and worked at it for 12 hours a day for the next 25 years. That means AlphaGo received at least 500 times as much practice as Sedol to achieve a comparable level of skill.
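The “at least 500 times” figure is a back-of-envelope estimate, and one rough way to reconstruct it is sketched below; the hours-per-game value is an illustrative assumption, not something taken from the match records.

```python
# Rough reconstruction of the "at least 500x more practice" comparison.
# hours_per_practice_game is an illustrative assumption.

sedol_practice_hours = 25 * 365 * 12                # 25 years at 12 hours a day ~= 109,500 hours
hours_per_practice_game = 2                         # assumed length of a serious practice game
sedol_game_equivalents = sedol_practice_hours / hours_per_practice_game   # ~= 54,750 games

alphago_self_play_games = 30_000_000
ratio = alphago_self_play_games / sedol_game_equivalents
print(f"AlphaGo's self-play is roughly {ratio:.0f}x Sedol's lifetime of practice games")  # ~= 548x
```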

We tend to lose perspective on topics like this because a second seems very short and 1 billion is an unfathomably large number. But that’s also not the whole story.

As we peel apart the layers, we see that AlphaGo is only good at one thing. It was not trained to drive a car, cook a meal, hold an intelligent conversation, write a book or know the difference between right and wrong. Perhaps it could learn those things, but each additional skill would require more concentrated effort.

HUMAN INTELLIGENCE VS. ARTIFICIAL INTELLIGENCE

As humans, we’re the product of 1 billion moments, but it’s not just one thing. We learn how to walk, talk and feed ourselves, how to avoid pain and discomfort, how to find companionship, food and shelter, along with thousands of other nuanced skills we pick up instinctively over time.

No single skill is built from 1 billion moments, but our human abilities contain billions of intertwined fragments that make up who we are, and we have the ability to rethink, shift gears, modify our approach and improvise at a moment’s notice.

Much of our ‘human’ learning comes from physically doing something. The act of running, putting puzzle pieces into place, smelling a meal, matching our wardrobe, having a friendly conversation or doing constructive work are all examples of combining muscle memory with cognitive processing to form a new skill.

A machine’s ability to do one thing 1 billion times and get it perfect is far superior to that of a human, because we don’t have the luxury of turning off the rest of our lives to perform a single act.

The physical world is also far different from the digital world. Many of us remember the videos of a robot opening and shutting the door of a Ford vehicle to test the durability of all the mechanisms involved. But it’s not possible to open and close a door 1 billion times to get it right. Since each open/close cycle takes several seconds, that many repetitions would require well over 100 years to complete, and the mechanical pieces would start to fail long before the test finished.
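To see why, here is a quick estimate; the seconds-per-cycle values are assumed, and the point is the order of magnitude rather than any precise figure.

```python
# How long would 1 billion physical open/close cycles actually take?
# Per-cycle times below are assumptions used only to show the order of magnitude.

SECONDS_PER_YEAR = 86_400 * 365.25

cycles = 1_000_000_000
for seconds_per_cycle in (3, 4, 5):
    years = cycles * seconds_per_cycle / SECONDS_PER_YEAR
    print(f"{seconds_per_cycle} s per cycle -> {years:.0f} years of nonstop testing")
# 3 s ~ 95 years, 4 s ~ 127 years, 5 s ~ 158 years
```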

In this respect, AI is like every other machine. Given enough repetitions, AI will always fail.

Or will it?

CAN AI WIN?

We all know artificial intelligence is still in its infancy. However, as we think through some of the next steps in its likely evolution, we get a glimpse of how it will advance over time.

It’s hard to state what AI’s limitations are with any degree of certainty. Virtually every technological limitation has workarounds and AI has a way of rewriting our current “laws of physics.”

When it comes to understanding the future, an effective way of finding answers is to parse the problem into a series of well-crafted questions.

Here are eight unanswerable questions that will hopefully point us in the right direction.

  1. Can artificial intelligence improve to a point where it rivals or exceeds human intelligence?

We’ve already seen AI exceed human intelligence in specific niche areas like playing games, operating airplanes and driving cars, but will we see AI showing signs of empathy, creating value judgments based on human compassion, learning to craft a compelling argument or forming the basis for an original thought?

  2. Can AI be instilled with a human-like purpose?

We all start our days with a set of goals, but what is our overarching human purpose? Why are we here and what is the objective of humanity? Borrowing a phrase from Star Trek: What is humanity’s “prime directive?” Can a machine also be given a set of value equations that defines its morals, ethics and overarching purpose?

  3. Can AI transition from its current digital form of machine intelligence to an organic life form?

Over time, our thinking about machines will evolve from purely mechanical, to hybrid mechanical-organic contraptions, to mostly living machines, to pure life forms, and the process of building machines will be replaced by growing them. Artificial intelligence will likely be replaced by degrees of synthetic intelligence, followed by what many will consider a superior form of real intelligence. Is this a realistic possibility?

  4. Can AI learn to reproduce?

A few months ago, I wrote a column titled, “Will Future Robots Be Able to Give Birth to Their Own Children?” At first blush, the notion of a mechanical robot giving birth sounds preposterous. But many of the technologies we use today started out as ludicrous notions.

  5. At what point will AI be considered an entirely new species?

As we begin to experiment with CRISPR technology, we may see people with six fingers on each hand, four legs and three arms. At what point do we stop being human and start being something else? Can programmable life forms be far behind?

  6. What are the critical inventions or advances that will turn AI into our rivals instead of allies?

A self-aware, self-directed, self-reproducing, synthetic-organic life form with survival instincts and an emotional desire to climb its way up Maslow’s Hierarchy of Needs may still not be enough to create a sustainable life form with sustainable intelligence. How will we know when its complementary skills and talents become adversarial?

  7. Will AI ever get to the point of not needing humans?

In much the same way that we raise children who eventually turn their backs on their parents, being able to carefully monitor the declining need quotient of a programmable life form may give us the answer. But if AI is taught to mask its own level of self-sufficiency, we’ll never know for sure.

  8. Is it possible to know when AI crosses the threshold of being harmless to being dangerous?

As with humans, deception is learned. Similar to the human trait of always wanting to show the world a positive face, synthetic life forms may disguise their true intentions until it’s too late.

The word “biot,” a clever descriptor meaning biological robot, was coined by Arthur C. Clarke in his 1973 novel “Rendezvous with Rama.” In the novel, biots are depicted as artificial biological organisms created to perform specific tasks in space.

We are seeing a number of emerging fields that bridge the boundaries of biology and robotics. These include everything from cybernetics and bionics to biomimicry and synthetic biology.

I won’t go into all the details that differentiate these fields, other than to note that the hard-and-fast boundaries between organic and inorganic, biological engineering and biomechanical engineering, and artificial life and real life are all beginning to blur, and AI is leading the charge.

Many of our advancements over the coming years will challenge our sensibilities. They will challenge our understanding of what constitutes life, our rights as humans, our moral compass, our sense of authority, and the ethical limits of science.

But that doesn’t change the fact that they’re coming, and the Billion Moment Theory is only a tiny piece of a much larger equation.