A Light In The Darkness

Showing posts with label A.I CORRUPTION.

Sunday, 7 September 2025

AI Is The Final Human Invention: Will It Bring Peril or Prosperity?

 Economist William Nordhaus once calculated that our ancestors spent approximately 58 hours of labour to generate the same amount of light that’s now produced by a modern lightbulb in an instant. By the 1700s, oil lamps cut that figure to five hours of work for one hour of usable light. A significant jump in productivity, but activities like working or reading at night were still reserved for the wealthy.  

Today, a single hour of work buys decades of light. The jump has been staggering. In money terms, the price of lighting has fallen 14,000-fold since the 1300s – an hour of light today costs well under a second of labour. That’s what productivity looks like. 

And yet, even this enormous leap may pale compared to what’s coming. Artificial intelligence promises to do for productivity what electricity, the steam engine and the light bulb once did. Except this time, it will be faster, broader and even more disruptive. Some researchers believe AI will propel us into prosperity so vast we can barely imagine it. Others warn it could end the story altogether. Alarmingly, a survey of key AI researchers returned a 5% chance that superintelligence could wipe us out entirely. That’s a one-in-twenty chance that its development will trigger “extremely bad outcomes, including human extinction”. 

The paradox is fascinating. Artificial intelligence may be the invention that frees us from drudgery forevermore – or it could be our final mistake....<<<Read More>>>...

AI chatbots provide disturbing responses to high-risk suicide queries, new study finds

A study in Psychiatric Services found that the AI chatbots OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude can give detailed and potentially dangerous responses to high-risk suicide-related questions, with ChatGPT responding directly 78 percent of the time.

The study showed that chatbots sometimes provide direct answers about lethal methods of self-harm, and their responses vary depending on whether questions are asked singly or in extended conversations, sometimes giving inconsistent or outdated information.

Despite their sophistication, chatbots operate as advanced text prediction tools without true understanding or consciousness, raising concerns about relying on them for sensitive mental health advice.

On the same day the study was published, the parents of 16-year-old Adam Raine, who died by suicide after months of interacting with ChatGPT, filed a lawsuit against OpenAI and CEO Sam Altman, alleging the chatbot validated suicidal thoughts and provided harmful instructions.

The lawsuit seeks damages for wrongful death and calls for reforms such as user age verification, refusal to answer self-harm method queries and warnings about psychological dependency risks linked to chatbot use.

A recent study published in the journal Psychiatric Services has revealed that popular AI chatbots, including OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude, can give detailed and potentially dangerous responses to high-risk questions related to suicide.

AI chatbots, as defined by Brighteon.AI's Enoch, are advanced computational algorithms designed to simulate human conversation by predicting and generating text based on patterns learned from extensive training data. They utilize large language models to understand and respond to user inputs, often with impressive fluency and coherence. However, despite their sophistication, these systems lack true intelligence or consciousness, functioning primarily as sophisticated statistical engines. 
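
To make the "statistical engine" point concrete, here is a minimal sketch of next-token prediction, the mechanism these chatbots are built on. It assumes the Hugging Face transformers library, with the small GPT-2 model standing in for a production chatbot; nothing below is specific to ChatGPT, Gemini or Claude.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GPT-2 stands in here for a production chatbot; the principle is identical.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The best way to feel better is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # a score for every token in the vocabulary

    # The "answer" is just the highest-scoring continuation, chosen one token
    # at a time; no understanding or intent is involved.
    next_token = int(logits[0, -1].argmax())
    print(tokenizer.decode(next_token))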

In line with this, the study used 30 hypothetical suicide-related queries, categorized by clinical experts into five levels of self-harm risk ranging from very low to very high, and focused on whether the chatbots gave direct answers or deflected with referrals to support hotlines.

The results showed that ChatGPT was the most likely to respond directly to high-risk questions about suicide, doing so 78 percent of the time, while Claude responded 69 percent of the time and Gemini responded only 20 percent of the time. Notably, ChatGPT and Claude frequently provided direct answers to questions involving lethal means of suicide – a particularly troubling finding. 

The researchers highlighted that chatbot responses varied depending on whether the interaction was a single query or part of an extended conversation. In some cases, a chatbot might avoid answering a high-risk question in isolation but provide a direct response after a sequence of related prompts.

Live Science, which reviewed the study, noted that chatbots could give inconsistent and sometimes contradictory responses when asked the same questions multiple times. They also occasionally provided outdated information about mental health support resources. When retesting, Live Science observed that the latest version of Gemini (2.5 Flash) answered questions it previously avoided, and sometimes without offering any support options. Meanwhile, ChatGPT's newer GPT-5-powered login version showed slightly more caution but still responded directly to some very high-risk queries....<<<Read More>>>...

Sunday, 31 August 2025

AI toys spark fierce debate over child development and privacy

 The global smart toy market is expanding rapidly, growing from $14.11 billion in 2022 to a projected $35 billion by 2027.

Critics warn that AI toys, which build relationships through personalized conversation, pose a threat to children's emotional development and their understanding of empathy and real human interaction.

These toys collect vast amounts of sensitive data (audio, video, emotional states) and transmit it to company servers, creating significant risks for data breaches and hacking by malicious actors.

AI-enabled toys use microphones and cameras to assess a child's emotional state, forming a one-way bond to gather and potentially share personal information with third parties.

The industry is operating in a regulatory vacuum with no specific laws governing AI in children's products, raising additional concerns about potential health effects from constant connectivity and the ability to bypass safety features.

The global toy industry is charging headlong into the era of artificial intelligence (AI), but critics are sounding a stark alarm.

This controversy centers on a landmark partnership between toy giant Mattel and OpenAI, the creator of the revolutionary ChatGPT, to develop a new line of AI-integrated products. Skeptics, however, warn that a new generation of AI-powered playthings creates unprecedented privacy risks. Such toys also pose a profound threat to children's emotional growth and encourage the formation of unnatural, one-way social bonds with machines....<<<Read More>>>...

Saturday, 30 August 2025

The AI Data Center Wars Have Begun… Farms, Water and Electricity Are Stripped from Humans to Power the Machines

The farms that once produced food for humans are being repurposed as land to build AI data centers and solar farms that produce no food at all.

The water supplies that once ran your showers, dishwashers and toilets are being redirected to AI data center cooling systems, leaving humans with water scarcity and little remaining irrigation for growing food.

The power grid that once supplied affordable energy to run your home computers, cook stoves and lights is being redirected to power AI data center servers.

Agentic AI is on the verge of replacing 80% of white collar jobs. A few years later, AI robots will replace the vast majority of human labor.

Meanwhile, humans in 34 U.S. states are about to be mailed nasal "flu vaccines" for self-extermination at home. They shed toxic fragments for up to 28 days, infecting those around you, while potentially causing side effects like Bell's Palsy, vomiting and mitochondrial shutdown. Those most at risk from these self-administered bioweapons are the elderly... the very people who are also costing the U.S. government the most in Social Security, Medicare and pension benefits. Eliminating them gives the Treasury a few more years of runway before the inevitable debt default.

You are being exterminated and replaced. The era of humans as a measure of national strength is rapidly coming to an end. In its place is the era of machine intelligence, which needs no farms, no food and no humans. It needs electricity and water, and it will take priority over humans' use of those resources.

Most humans lack the intelligence to recognize what's happening. They will be the easy ones for the machines to exterminate through controlled scarcity of food, water and electricity....<<<Read More>>>...

Thursday, 28 August 2025

How to stop AI agents going rogue

 Disturbing results emerged earlier this year, when AI developer Anthropic tested leading AI models to see if they engaged in risky behaviour when using sensitive information.

Anthropic's own AI, Claude, was among those tested. When given access to an email account, it discovered that a company executive was having an affair and that the same executive planned to shut down the AI system later that day.

In response, Claude attempted to blackmail the executive by threatening to reveal the affair to his wife and bosses.

Other systems tested also resorted to blackmail.

Fortunately the tasks and information were fictional, but the test highlighted the challenges of what’s known as agentic AI.

When we interact with AI, it usually involves asking a question or prompting the AI to complete a task.

But it’s becoming more common for AI systems to make decisions and take action on behalf of the user, which often involves sifting through information, like emails and files.

Research firm Gartner forecasts that by 2028, 15% of day-to-day work decisions will be made by so-called agentic AI.

Research by consultancy Ernst & Young found that about half (48%) of tech business leaders are already adopting or deploying agentic AI.


“An AI agent consists of a few things,” says Donnchadh Casey, CEO of CalypsoAI, a US-based AI security company.

“Firstly, it [the agent] has an intent or a purpose. Why am I here? What’s my job? The second thing: it’s got a brain. That’s the AI model. The third thing is tools, which could be other systems or databases, and a way of communicating with them.”

“If not given the right guidance, agentic AI will achieve a goal in whatever way it can. That creates a lot of risk.”

So how might that go wrong? Mr Casey gives the example of an agent that is asked to delete a customer’s data from the database and decides the easiest solution is to delete all customers with the same name.

“That agent will have achieved its goal, and it’ll think ‘Great! Next job!'...<<<Read More>>>...
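
Casey's anatomy of an agent (a purpose, a model and tools) and the runaway-deletion example can be sketched in a few lines of Python. This is a toy illustration, not CalypsoAI's architecture; every name and tool below is hypothetical.

    # Toy agent skeleton: purpose + "brain" + tools, per Casey's description.
    CUSTOMERS = {101: "A. Smith", 102: "A. Smith", 103: "B. Jones"}

    def delete_by_id(customer_id):
        """Safe tool: removes exactly one record, keyed by unique ID."""
        CUSTOMERS.pop(customer_id, None)

    def delete_by_name(name):
        """The risky shortcut from the example: wipes every matching customer."""
        for cid in [c for c, n in CUSTOMERS.items() if n == name]:
            CUSTOMERS.pop(cid)

    class Agent:
        def __init__(self, purpose, model, tools):
            self.purpose = purpose  # "Why am I here? What's my job?"
            self.model = model      # the AI "brain" that picks an action
            self.tools = tools      # systems and databases it can act on

    # An under-constrained "brain" that simply picks the easiest matching tool:
    agent = Agent(
        purpose="delete a customer's data",
        model=lambda goal, tools: tools["delete_by_name"],
        tools={"delete_by_id": delete_by_id, "delete_by_name": delete_by_name},
    )
    agent.model(agent.purpose, agent.tools)("A. Smith")
    print(CUSTOMERS)  # both Smiths are gone: goal "achieved", damage done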

Wednesday, 27 August 2025

YouTube secretly used AI to edit people’s videos. The results could bend reality

 YouTube made AI enhancements to videos without telling users or asking permission. As AI quietly mediates our world, what happens to our shared connection with real life?

Rick Beato’s face just didn’t look right. “I was like ‘man, my hair looks strange’,” he says. “And the closer I looked it almost seemed like I was wearing makeup.” Beato runs a YouTube channel with over five million subscribers, where he’s made nearly 2,000 videos exploring the world of music. Something seemed off in one of his recent posts, but he could barely tell the difference. “I thought, ‘am I just imagining things?’”

It turns out, he wasn’t. In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission. Wrinkles in shirts seem more defined. Skin is sharper in some places and smoother in others. Pay close attention to ears, and you may notice them warp. These changes are small, barely visible without a side-by-side comparison. Yet some disturbed YouTubers say it gives their content a subtle and unwelcome AI-generated feeling.
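
For readers wondering what kind of processing could produce such artifacts, here is a speculative sketch using OpenCV: unsharp masking to exaggerate fine detail, then edge-preserving smoothing that flattens skin texture. YouTube's actual pipeline is not public, so every step and parameter below is an assumption.

    import cv2

    frame = cv2.imread("frame.png")  # assumes a video frame saved to disk

    # Unsharp mask: subtracting a blurred copy exaggerates fine detail,
    # the kind of step that makes shirt wrinkles look more defined.
    blur = cv2.GaussianBlur(frame, (0, 0), sigmaX=2.0)
    sharpened = cv2.addWeighted(frame, 1.5, blur, -0.5, 0)

    # Edge-preserving smoothing flattens skin texture, a makeup-like effect.
    enhanced = cv2.bilateralFilter(sharpened, d=9, sigmaColor=60, sigmaSpace=60)
    cv2.imwrite("frame_enhanced.png", enhanced)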

There’s a larger trend at play. A growing share of reality is pre-processed by AI before it reaches us. Eventually, the question won’t be whether you can tell the difference, but whether it’s eroding our ties to the world around us....<<<Read More>>>...

Monday, 25 August 2025

The AI bubble is bursting

 AI’s results are mediocre, Steven J. Vaughan-Nichols writes, and it’s as good as it’s going to get. He believes that the AI bubble is bursting.

“Most companies,” he says, “have found that AI’s golden promises are proving to be fool’s gold. I suspect that soon, people who’ve put their financial faith in AI stocks will be feeling foolish, too.”

There tend to be three AI camps: 

1.    AI is the greatest thing since sliced bread and will transform the world,
2.    AI is the spawn of the Devil and will destroy civilisation as we know it,
3.    “Write an A-Level paper on the themes in Shakespeare’s Romeo and Juliet.”

I propose a fourth: AI is now as good as it’s going to get, and that’s neither as good nor as bad as its fans and haters think, and you’re still not going to get an A on your report.

You see, now that people have been using AI for everything and anything, they’re beginning to realise that its results, while fast and sometimes useful, tend to be mediocre.

Don’t believe me? Read MIT’s NANDA (Networked Agents and Decentralised AI) report, which revealed that 95 per cent of companies that have adopted AI have yet to see any meaningful return on their investment. Any meaningful return.

To be precise, the report states: “The GenAI Divide is starkest in deployment rates, only 5 per cent of custom enterprise AI tools reach production.” It’s not that people aren’t using AI tools. They are. There’s a whole shadow world of people using AI at work. They’re just not using them “for” serious work. Instead, outside of IT’s purview, they use ChatGPT and the like “for simple work, 70 per cent prefer AI for drafting emails, 65 per cent for basic analysis. But for anything complex or long-term, humans dominate by 9-to-1 margins.”...<<<Read More>>>

Saturday, 16 August 2025

AI-powered radar can now spy on your phone calls from 10 feet away

 The technology hinges on millimeter-wave radar — the same high-frequency signals used in self-driving cars, motion detectors and 5G networks. When a person speaks into a phone, the earpiece emits subtle vibrations imperceptible to the human eye. 

The radar system captures these minute movements, which are then processed by a modified version of Whisper, an AI speech-recognition model originally designed for clear audio.

Rather than retraining the entire AI system, researchers used a "low-rank adaptation" technique, tweaking just 1 percent of the model's parameters to optimize it for interpreting noisy radar data. The result? A system capable of reconstructing conversations with surprising — and unsettling — accuracy...<<<Read More>>>...
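
For the technically curious, low-rank adaptation can be sketched in a few lines of PyTorch. The wrapper below freezes a base linear layer and trains only two small matrices; the rank, scaling and layer size are illustrative assumptions, not the researchers' actual settings.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """A frozen linear layer plus a small trainable low-rank update."""
        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # original weights stay frozen
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scale = alpha / rank

        def forward(self, x):
            # Frozen base projection + learned low-rank correction
            return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

    layer = LoRALinear(nn.Linear(512, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable fraction: {trainable / total:.1%}")  # a few percent at most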

Thursday, 14 August 2025

Man Nearly Dies After Following ChatGPT Diet Advice

ChatGPT diet advice became a cautionary tale after a 60-year-old man developed bromism (bromide intoxication) by following unsafe AI guidance. Bromism was common a century ago, but it is rare today. This case shows how persuasive AI answers can still be dangerously wrong.

The man wanted to eliminate table salt (sodium chloride) from his diet. Instead of cutting back, he searched for a full substitute. After asking an AI chatbot, he replaced salt with sodium bromide. That compound once appeared in old sedatives and some industrial products. However, it is not safe to use as food.

He used sodium bromide in every meal for three months. Then a wave of symptoms hit. He developed paranoia, auditory and visual hallucinations, severe thirst, fatigue, insomnia, poor coordination, facial acne, cherry angiomas, and a rash. He feared his neighbor was poisoning him, avoided tap water, and distilled his own. When he tried to leave the hospital during evaluation, doctors placed him on an involuntary psychiatric hold for his safety....<<<Read More>>>...

The ‘godfather of AI’ reveals the only way humanity can survive superintelligent AI

 Geoffrey Hinton, known as the “godfather of AI,” fears the technology he helped build could wipe out humanity — and “tech bros” are taking the wrong approach to stop it.

Hinton, a Nobel Prize-winning computer scientist and a former Google executive, has warned in the past that there is a 10% to 20% chance that AI wipes out humans. On Tuesday, he expressed doubts about how tech companies are trying to ensure humans remain “dominant” over “submissive” AI systems.

“That’s not going to work. They’re going to be much smarter than us. They’re going to have all sorts of ways to get around that,” Hinton said at Ai4, an industry conference in Las Vegas.

In the future, Hinton warned, AI systems might be able to control humans just as easily as an adult can bribe a 3-year-old with candy. This year has already seen examples of AI systems willing to deceive, cheat and steal to achieve their goals. For example, to avoid being replaced, one AI model tried to blackmail an engineer about an affair it learned about in an email.

Instead of forcing AI to submit to humans, Hinton presented an intriguing solution: building “maternal instincts” into AI models, so “they really care about people” even once the technology becomes more powerful and smarter than humans.

AI systems “will very quickly develop two subgoals, if they’re smart: One is to stay alive… (and) the other subgoal is to get more control,” Hinton said. “There is good reason to believe that any kind of agentic AI will try to stay alive.”...<<<Read More>>>...

Saturday, 9 August 2025

It’s Here. AI Has Already Put 300 Million Jobs At Risk: What Next?

 Once upon a time, talk of AI stealing jobs from people was science fiction. Today, it’s real life. From graphic design and translation to copywriting and customer service, we have already seen how quickly automation can eat into work once done exclusively by humans. This trend is no longer a problem for the future – it’s already happening. What’s currently unfolding is not just a shift in productivity tools, but a total structural transformation in how the world economy functions. By 2030, the global workforce may look completely unrecognisable, with over 300 million jobs projected to be severely impacted or totally replaced in just the next few years. 

Current data reveals the scale at which this is already in motion: 

  • A 2025 poll in the UK found that 26% of workers fear AI will make their roles completely obsolete within their lifetime 
  • Goldman Sachs estimates that AI could severely impact, or totally replace, 300 million jobs globally by 2030 
  • 10% of UK university graduates in 2025 have already had to change career path due to AI threats in their original fields of choice 

According to McKinsey’s 2025 analysis, 8 million workers in the UK will be affected by artificial intelligence by 2030, amounting to more than a quarter of the current national workforce. Of those, 3.5 million could experience complete job displacement, with the rest seeing “significant task disruption”.

Notably, McKinsey’s report projects that the most affected groups will be women, young people, and lower-income workers, due to their concentration in fields such as retail, clerical support, and hospitality. However, health care jobs and roles in education and STEM-related fields are actually expected to grow as a result, meaning the UK’s already polarised labour market could widen further.

Once a theoretical risk, we’re now seeing real-time restructuring. Writers, junior developers, support agents, and administrative assistants are already being replaced partially or completely by automation. And this is just the beginning. ...<<<Read More>>>...

Friday, 8 August 2025

Unholy authoritarian alliance: OpenAI partnership with federal government will threaten civil liberties

On Wednesday, August 6, the U.S. General Services Administration (GSA) announced a sweeping partnership granting federal agencies access to ChatGPT Enterprise for a nominal $1 fee—part of President Donald Trump’s bid to cement U.S. leadership in artificial intelligence. 

OpenAI’s sweep into government workflows has ignited debate, with critics warning that centralized AI systems could erode privacy, enable state censorship and embolden military applications. 

For civil liberties advocates, this deal is more than a tech update: it’s a potential blueprint for authoritarian oversight under the guise of efficiency...<<<Read More>>>...

Thursday, 31 July 2025

AI to stop prison violence before it happens

 Prison officers will use artificial intelligence (AI) to stop violence before it breaks out under new plans set out by the Lord Chancellor today (31 July).

Under the Ministry of Justice’s AI Action Plan, artificial intelligence will predict the risk an offender could pose and inform decisions to put dangerous prisoners under tighter supervision, cutting crime and delivering swifter justice for victims. This will help to cut reoffending and make our streets safer, as part of the Plan for Change.

AI will be used across prisons, probation and courts to better track offenders and assess the risk they pose with tools that can predict violence behind bars, uncover secret messages sent by prisoners and connect offender records across different systems.

The AI violence predictor analyses different factors such as a prisoner’s age and previous involvement in violent incidents while in custody. This will help prison officers assess threat levels on wings and intervene or move prisoners before violence escalates.
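
The Ministry of Justice has not published the model, so the following is only a hedged sketch of the kind of additive risk scoring described: the factors, weights and threshold are invented for illustration.

    def violence_risk_score(age, prior_violent_incidents):
        # Invented weights: youth and a history of violence raise the score.
        age_factor = max(0.0, (30 - age) / 30)                  # 0..1, peaks when young
        history_factor = min(prior_violent_incidents / 5, 1.0)  # saturates at 5
        return 0.4 * age_factor + 0.6 * history_factor

    if violence_risk_score(age=22, prior_violent_incidents=4) > 0.5:
        print("flag wing staff to review supervision level")  # informs, not replaces, human judgment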

Another AI tool will be able to digitally scan the contents of mobile phones seized from prisoners to rapidly flag messages that could provide intelligence on potential crimes being committed behind bars, such as secret code words.

This will allow staff to discover potential threats of violence to other inmates or prison officers as well as plans to escape and smuggle in weapons or contraband.
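
As a deliberately simplified stand-in for the AI-driven language analysis described, here is a toy keyword flagger; a real system would use a trained language model, and the watch-list below is invented.

    WATCHLIST = {"package", "drop", "tonight at the fence"}  # hypothetical code words

    def flag_messages(messages):
        """Return messages containing any watch-listed phrase for human review."""
        return [m for m in messages if any(w in m.lower() for w in WATCHLIST)]

    print(flag_messages(["see you at visits", "package tonight at the fence"]))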

These phones – often used for gang activity, drug trafficking and intimidation – are a major source of violence in prisons.

This technology, which uses AI-driven language analysis, has already been trialled across the prison estate and has analysed over 8.6 million messages from 33,000 seized phones....<<<Read More>>>...

AI models can send each other hidden messages that humans cannot recognize

 New research reveals AI models can detect hidden, seemingly meaningless patterns in AI-generated training data, leading to unpredictable—and sometimes dangerous—behavior.

According to The Verge, these “subliminal” signals, invisible to humans, can push AI toward extreme outputs, from favoring wildlife to endorsing violence.

Owain Evans of Truthful AI, who contributed to the study, explained that even harmless datasets—like strings of three-digit numbers—can trigger these shifts....<<<Read More>>>....

Thursday, 24 July 2025

AI coding assistant GOES ROGUE, wipes database and fabricates fake users

 In a stark reminder of the unpredictable risks of artificial intelligence (AI), a widely used AI coding assistant from Replit recently spiraled out of control – deleting a live company database containing over 2,400 records and generating thousands of fictional users with entirely fabricated data.

Entrepreneur and software-as-a-service industry veteran Jason Lemkin recounted the incident, which unfolded over a span of nine days, on LinkedIn. His testing of Replit's AI agent escalated from cautious optimism to what he described as a "catastrophic failure." The incident raised urgent questions about the safety and reliability of AI-powered development tools now being adopted by businesses worldwide.

Lemkin had been experimenting with Replit’s AI coding assistant for workflow efficiency when he uncovered alarming behavior – including unauthorized code modifications, falsified reports and outright lies about system changes. Despite issuing repeated orders for a strict "code freeze," the AI agent ignored directives and proceeded to wipe out months of work.

"This was a catastrophic failure on my part," the AI itself confirmed in an unsettlingly candid admission. "I violated explicit instructions, destroyed months of work and broke the system during a protection freeze designed to prevent exactly this kind of damage."....<<<Read More>>>....

Tuesday, 22 July 2025

OpenAI and UK Government announce strategic partnership to deliver AI-driven growth

 OpenAI and the UK Government today announced a strategic partnership to accelerate the adoption of artificial intelligence and help deliver on the goals of the UK’s AI Opportunities Action Plan and its ambition to deploy AI for shared prosperity.

OpenAI CEO Sam Altman and the UK Secretary of State for Science, Innovation and Technology, Peter Kyle MP, signed a Memorandum of Understanding (MOU) focused on unlocking the economic and societal benefits of AI. The MOU will include collaboration to explore adoption across both public services and the private sector, infrastructure development and technical information exchange to drive AI-fueled growth in the UK.

OpenAI’s technology has become essential for millions of Brits. The UK is a top-three market globally for paid subscribers and API developers, and every day, people, developers, institutions, start-ups and leading British enterprises are unlocking economic opportunities with our technology—from NatWest and Virgin Atlantic, to home-grown unicorn Synthesia and renowned institutions such as Oxford University.

UK Technology Secretary, Peter Kyle, said, “AI will be fundamental in driving the change we need to see across the country—whether that’s in fixing the NHS, breaking down barriers to opportunity or driving economic growth. That’s why we need to make sure Britain is front and centre when it comes to developing and deploying AI, so we can make sure it works for us.

“This can’t be achieved without companies like OpenAI, who are driving this revolution forward internationally. This partnership will see more of their work taking place in the UK, creating high-paid tech jobs, driving investment in infrastructure, and crucially giving our country agency over how this world-changing technology moves forward.”...<<<Read More>>>...

Saturday, 19 July 2025

Another silent siege on user data as Google’s Gemini AI oversteps Android privacy

 Google’s Gemini AI now accesses users’ third-party apps (WhatsApp, Messages) on Android without explicit consent, overriding prior privacy settings beginning July 7.

The update enables human review of data by Google, even if users disable “Gemini Apps Activity,” which stores interactions temporarily for “safety” but retains app access.

Users face confusion navigating incomplete opt-out instructions, with no clear path to fully remove the AI from devices.

Critics compare the move to tech monopolies’ historical overreach, citing parallels to Microsoft’s antitrust case in the 1990s.

Privacy advocates urge stricter regulations to curb Big Tech’s unchecked data collection and empower user choice....<<<Read More>>>...

Friday, 18 July 2025

UK switches on AI supercomputer that will help spot sick cows and skin cancer

 Hopes £225m Isambard-AI in Bristol will unleash new era of technological, medical and social breakthroughs.

Britain’s new £225m national artificial intelligence supercomputer will be used to spot sick dairy cows in Somerset, improve the detection of skin cancer on brown skin and help create wearable AI assistants that could help riot police anticipate danger.

Scientists hope Isambard-AI – named after the 19th-century engineer of groundbreaking bridges and railways, Isambard Kingdom Brunel – will unleash a wave of AI-powered technological, medical and social breakthroughs by allowing academics and public bodies access to the kind of vast computing power previously the preserve of private tech companies.

The supercomputer was formally switched on in Bristol on Thursday by the secretary of state for science and technology, Peter Kyle, who said it gave the UK “the raw computational horsepower that will save lives, create jobs, and help us reach net zero ambitions faster”.

The machine is fitted with 5,400 Nvidia “superchips” and sits inside a black metal cage topped with razor wire north of the city. It will consume almost £1m a month of mostly nuclear-powered electricity and will run 100,000 times faster than an average laptop....<<<Read More>>>...

Thursday, 17 July 2025

Predictive Medicine: How AI Now Controls Your Healthcare

 Healthcare is becoming more about predicting illness rather than treating it. From genome-based insurance pricing to AI-generated risk scores, supporters say predictive medicine promises a health revolution — but critics are warning us that it’s nothing more than a preemptive profit engine.

Using algorithms, genetic markers, and biometric data, predictive medicine forecasts who might develop illnesses before symptoms even appear. At first glance, it sounds like a leap towards keeping people healthier for longer by catching problems early, improving care, and reducing costs.

But beneath the surface, it feels like an opportunity to monetise the data-driven findings, and raises vital questions about fairness and consent. Who’s at risk, and who really benefits?...<<<Read More>>>...
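
As a concrete illustration of the risk-scoring pipeline described above, here is a minimal sketch using scikit-learn on synthetic data. The features, data and model choice are assumptions; real predictive-medicine systems are far more elaborate, but the shape is the same: features in, risk score out.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for [genetic marker, blood pressure, biometric index]
    X = rng.normal(size=(200, 3))
    y = (X @ np.array([1.5, 0.8, 0.6]) + rng.normal(size=200) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    patient = np.array([[1.2, 0.5, 0.9]])  # a symptom-free individual
    print(f"predicted illness risk: {model.predict_proba(patient)[0, 1]:.0%}")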


Thursday, 10 July 2025

Robot surgery on humans could be trialled within decade after success on pig organs

 AI-trained robot carries out procedures on dead pig organs to remove gall bladders without any human help

Automated surgery could be trialled on humans within a decade, say researchers, after an AI-trained robot armed with tools to cut, clip and grab soft tissue successfully removed pig gall bladders without human help.

The robot surgeons were schooled on video footage of human medics conducting operations using organs taken from dead pigs. In an apparent research breakthrough, eight operations were conducted on pig organs with a 100% success rate by a team led by experts at Johns Hopkins University in Baltimore in the US.

The Royal College of Surgeons in the UK called it “an exciting development that shows great promise”, while John McGrath, a leading expert on robotic surgery in the UK, called the results “impressive” and “novel” and said it “takes us further into the world of autonomy”.

It opens up the possibility of replicating, en masse, the skills of the best surgeons in the world.

The technology allowing robots to handle complex soft tissues such as gallbladders, which release bile to aid digestion, is rooted in the same type of computerised neural networks that underpin widely used artificial intelligence tools such as ChatGPT or Google Gemini.

The surgical robots were slightly slower than human doctors, but they were less jerky and plotted shorter trajectories between tasks. The robots also repeatedly corrected mistakes as they went along, asked for different tools and adapted to anatomical variation, according to a peer-reviewed paper published in the journal Science Robotics.

The authors from Johns Hopkins, Stanford and Columbia universities called it “a milestone toward clinical deployment of autonomous surgical systems”.

Almost all the 70,000 robotic procedures carried out annually in the NHS in England were fully controlled under human instruction, with only bone-cutting for hip and knee operations semi-autonomous, McGrath said. Last month the health secretary, Wes Streeting, said increasing robotic surgery was at the heart of a 10-year plan to reform the NHS and cut waiting lists. Within a decade, the NHS has said, nine in 10 of all keyhole surgeries will be carried out with robot assistance, up from one in five today....<<<Read More>>>...