
Hawking expresses concern about the potential dangers of the technology's growth

By Alex Heigl
Updated May 05, 2014 10:55 AM

Maybe he just watched The Matrix.

Stephen Hawking, one of the world’s most famous scientists, is more than a little worried about the potential danger that advanced artificial intelligence poses to humanity.

Writing alongside computer scientist Stuart Russell and physicists Max Tegmark and Frank Wilczek in the U.K.’s The Independent, Hawking argues that while the positive potential of A.I. is unlimited, so are the downsides.

“An explosive transition is possible, although it might play out differently from in the movies: As Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a ‘singularity’ and Johnny Depp’s movie character calls ‘transcendence.’ ”

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of A.I. depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Hawking concludes: “Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues.”

It’s worth pointing out that Hawking is more or less outlining the plots of The Matrix, The Terminator, and many other sci-fi films. So maybe instead of assuming Hawking’s off-base, we should thank James Cameron and the Wachowskis for their foresight.

That’s a sobering thought.
