Further Reading

Tuesday, 30 April 2019

To Prevent a Robot Apocalypse, We Must Study “Machine Behaviour”

[David Icke]: Experts have been warning us about potential dangers associated with artificial intelligence for quite some time. But is it too late to do anything about the impending rise of the machines?

Once the stuff of far-fetched dystopian science fiction, the idea that robot overlords will one day take over the world now seems inevitable.

The late Dr. Stephen Hawking issued some harsh and terrifying words of caution back in 2014:

"The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded."

Elon Musk, the founder of SpaceX and CEO of Tesla Motors, warned that we could see some terrifying issues within the next few years:

"The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. Please note that I am normally super pro technology and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand."

The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast; it is growing at a pace close to exponential.

I am not alone in thinking we should be worried.

The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen…

Scientists have studied human behavior for decades, and now it is time to apply that kind of research to intelligent machines, a group of researchers has argued. Because artificial intelligence is doing more collective ‘thinking,’ the same interdisciplinary approach needs to be applied to understanding machine behavior, the authors say.