One of the prominent voices raising the alarm is Roman Yampolskiy, an associate professor of computer science at the University of Louisville and a respected figure in AI safety research.
On the "Lex Fridman Podcast," Yampolskiy made a grim prediction, estimating a 99.9 percent probability that AI could obliterate humanity within the next 100 years.
"Creating general superintelligences may not end well for humanity in the long run," Yampolskiy cautioned. "The best strategy might simply be to avoid starting this potentially perilous game."
Yampolskiy also pointed to existing problems with current large language models, citing their propensity for errors and susceptibility to manipulation as evidence of the risks more capable systems could pose. "Mistakes have already been made; these systems have been jailbroken and used in ways developers did not foresee," he observed.
Additionally, Yampolskiy suggested that a superintelligent AI could devise unforeseeable methods to achieve destructive ends, presenting challenges we may not even recognize as threats until it is too late.