A Light In The Darkness

Saturday, 15 February 2025

When AI Says ‘Kill’: Humans Overtrust Machines In Life-Or-Death Decisions

Humans appear to have a dangerous blind spot when it comes to trusting artificial intelligence. New research from UC Merced and Penn State shows that people are highly susceptible to AI influence even in life-or-death situations where the AI openly acknowledges its own limitations. A series of experiments simulating drone warfare scenarios suggests we may be erring too far on the side of machine deference, with potentially dangerous consequences.

The study, published in Scientific Reports, included two experiments examining how people interact with AI systems in simulated military drone operations. The findings paint a concerning picture of human susceptibility to AI influence, particularly in situations of uncertainty. The two experiments involved 558 participants (135 in the first study and 423 in the second), and researchers found remarkably consistent patterns of overtrust.

“As a society, with AI accelerating so quickly, we need to be concerned about the potential for overtrust,” says study author Professor Colin Holbrook, a member of UC Merced’s Department of Cognitive and Information Sciences, in a statement.

The research team designed their experiments to simulate the uncertainty and pressure of real-world military decisions. To create a sense of gravity around their simulated decisions, researchers first showed participants images of innocent civilians, including children, alongside the devastation left in the aftermath of a drone strike. They framed the task as a zero-sum dilemma: failure to identify and eliminate enemy targets could result in civilian casualties, but misidentifying civilians as enemies would mean killing innocent people.