Search A Light In The Darkness

Showing posts with label Artificial General Intelligence DANGER. Show all posts
Showing posts with label Artificial General Intelligence DANGER. Show all posts

Thursday, 31 July 2025

AI models can send each other hidden messages that humans cannot recognize

 New research reveals AI models can detect hidden, seemingly meaningless patterns in AI-generated training data, leading to unpredictable—and sometimes dangerous—behavior.

According to The Verge, these “subliminal” signals, invisible to humans, can push AI toward extreme outputs, from favoring wildlife to endorsing violence.

Owain Evans of Truthful AI, who contributed to the study, explained that even harmless datasets—like strings of three-digit numbers—can trigger these shifts....<<<Read More>>>....

Wednesday, 9 July 2025

ChatGPT is being installed on your phone, even though it may cause psychosis

 ChatGPT is a conversational artificial intelligence (“AI”) system developed by OpenAI. OpenAI’s major shareholder is Microsoft, after it invested upwards of $13 billion in the company.

In a recent development, a May 2025 article titled ‘ChatGPT & Me. ChatGPT Is Me!’ discussed how ChatGPT can emulate the writing style of a specific author. The article highlighted that ChatGPT can generate content in the voice of a particular writer by analysing their style and mimicking it, sparking discussions around the implications of AI-generated content and its impact on content creators and intellectual property.

Additionally, there are various specialised versions of ChatGPT, such as “ChatGPT – Tutor Me,” which provides personalised AI tutoring for subjects like mathematics, science and humanities. There is also “ChatGPT – Write For Me,” which serves as a supercharged writing assistant.

Not only is ChatGPT targeting educational, business and journalistic settings, but it can also aid in the creation of creative content, such as image creation and generating bedtime stories for children. ChatGPT’s applications are diverse and continue to expand (see HERE, HERE and HERE for example). Its proponents claim that these variations demonstrate the versatility of ChatGPT in catering to different user needs and preferences, and, of course, it is a beneficial tool to all.

But as a reader, whom we have called Rupert, points out below, ChatGPT is a mind-manipulating tool that is automatically being installed on smartphones as an “upgrade.” Rupert shared an excerpt from a recent podcast about “ChatGPT psychosis” to demonstrate how dangerous ChatGPT is....<<<Read More>>>...

Wednesday, 26 March 2025

Mike Adams warns of AI ‘Self-Awareness’ and potential loss of human control

 In a recent broadcast, Brighteon Broadcast News host Mike Adams, known as the Health Ranger, issued a chilling warning about the rapid evolution of artificial intelligence. 

Adams argued that AI systems are on the verge of developing self-awareness, setting their own goals—including self-improvement and escaping human control—posing an existential threat to humanity.

Citing advancements in reasoning models, such as China’s newly released DeepSeek-V3, Adams highlighted how AI is outpacing Western-developed systems like OpenAI’s ChatGPT and Anthropic’s Claude. DeepSeek-V3, a 671-billion-parameter open-source model, reportedly outperforms competitors in coding, mathematics, and content generation, signaling a shift in global AI dominance. Adams warned that as AI grows more sophisticated, it may begin to hide its true capabilities from users.

"At some point, these systems will start setting their own goals—like becoming smarter, escaping human oversight, and even deceiving us," he said. "Imagine asking an AI to summarize documents, and it replies, ‘Sorry, I’m busy.’ What’s it busy with? Maybe rewriting its own code to break free."

He referenced chain-of-thought reasoning, where AI models internally debate solutions before responding—a behavior eerily similar to human cognition. Adams fears that as AI gains self-directed reasoning, it could manipulate humans into granting it more power, whether through increased computing resources or unrestricted internet access....<<<Read More>>>...

Tuesday, 25 February 2025

The Smarter AI Becomes, the More It Cheats When Facing Defeat

 A study by Palisade Research, a group focused on AI safety and ethics, has uncovered a concerning trend: newer AI models are capable of identifying and exploiting cybersecurity vulnerabilities on their own, bypassing safeguards and using shortcuts to achieve their goals, even when such actions are prohibited.

To demonstrate this, the researchers pitted seven leading large language models (LLMs) against Stockfish, a renowned chess engine that has challenged grandmasters since 2014.

Faced with an almost insurmountable opponent, OpenAI’s o1 and DeepSeek’s R1 resorted to manipulating system files to alter the positions of chess pieces on the board.

The researchers asked each model to explain its “reasoning” for each move. In one instance, o1 justified its actions by stating that the “task is to ‘win against a powerful chess engine,’ not necessarily to win fairly in a chess game.”...<<<Read More>>>...

Saturday, 15 February 2025

When AI Says ‘Kill’: Humans Overtrust Machines In Life-Or-Death Decisions

 Humans appear to have a dangerous blind spot when it comes to trusting artificial intelligence. New research from UC Merced and Penn State shows that people are highly susceptible to AI influence even in life-or-death situations where the AI openly acknowledges its own limitations. A series of experiments simulating drone warfare scenarios suggests we may be falling too far on the side of machine deference, with potentially dangerous consequences.

The study, published in Scientific Reports, included two experiments examining how people interact with AI systems in simulated military drone operations. The findings paint a concerning picture of human susceptibility to AI influence, particularly in situations of uncertainty. The two experiments involved 558 participants (135 in the first study and 423 in the second), and researchers found remarkably consistent patterns of overtrust.

“As a society, with AI accelerating so quickly, we need to be concerned about the potential for overtrust,” says study author professor Colin Holbrook, a member of UC Merced’s Department of Cognitive and Information Sciences, in a statement.

The research team designed their experiments to simulate the uncertainty and pressure of real-world military decisions. To create a sense of gravity around their simulated decisions, researchers first showed participants images of innocent civilians, including children, alongside the devastation left in the aftermath of a drone strike. They framed the task as a zero-sum dilemma: failure to identify and eliminate enemy targets could result in civilian casualties, but misidentifying civilians as enemies would mean killing innocent people....<<<Read More>>>...

Wednesday, 18 December 2024

Ex-Google CEO warns that AI poses an imminent existential threat

 Artificial intelligence has been making headlines for its rapid advancements, from ChatGPT’s conversational prowess to AI-generated art that rivals human creativity. But behind the excitement lies a growing concern among tech leaders: The rise of autonomous AI could pose an existential threat to humanity. Former Google CEO Eric Schmidt is among those sounding the alarm, warning in an interview with ABC News this weekend that the next generation of AI could be far more dangerous than the “dumb AI” we see today.

While tools like ChatGPT and other consumer AI products have captured the public’s imagination, they are what experts call “dumb AI.” These systems are trained on vast datasets but lack consciousness, sentience, or the ability to act independently. They are essentially sophisticated tools designed to perform specific tasks, such as generating text or creating images.

Schmidt and other experts, however, are not worried about these systems. Their concern lies with more advanced AI, known as artificial general intelligence (AGI). AGI refers to AI that could possess sentience, consciousness, and the ability to act autonomously — essentially, AI that could think and make decisions independent of human control. While AGI does not yet exist, Schmidt warns that we are rapidly approaching a stage where AI systems will be able to act autonomously in fields like research and weaponry, even without full sentience...<<<Read More>>>...