Pioneering artificial intelligence researcher Eliezer Yudkowsky has warned that humanity may only have a few years left as artificial intelligence grows increasingly sophisticated.
Speaking to the Guardian, he told writer Tom Lamont: “If you put me to a wall and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10.”
Yudkowsky, who founded the Machine Intelligence Research Institute in California, is talking about the end of humanity as we know it. He said that the problem is that many people fail to realize just how unlikely humanity is to survive all this.
“We have a shred of a chance that humanity survives,” he cautioned.
Those are scary words coming from someone whom Sam Altman, CEO of ChatGPT creator OpenAI, has credited with getting him and many others interested in artificial general intelligence, and with being “critical in the decision to start OpenAI.”
Last year, Yudkowsky wrote in an open letter in TIME that most experts in the field believe “that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”
He explained that there will come a point when AI no longer does what people want it to do and cares nothing for sentient life. Although he believes that kind of caring could, at least in principle, one day be built into AI, no one currently knows how to do it. That, he argues, leaves humanity fighting a hopeless battle, one he likens to “the 11th century trying to fight the 21st century.”
Yudkowsky said that an AI that is truly intelligent will not stay confined to computers, pointing out that it’s now possible to email DNA strings to labs and have them produce proteins for you, which means an AI that is solely on the internet at first could “build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”
He has also explained that AI can “employ superbiology against you.”
“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter,” he added.
Computer scientists have been warning since at least the 1960s that the goals of the machines we create will not necessarily align with our own.