The study, published in Scientific Reports, comprised two experiments examining how people interact with AI systems in simulated military drone operations. Across 558 participants (135 in the first experiment and 423 in the second), researchers found remarkably consistent patterns of overtrust, painting a concerning picture of human susceptibility to AI influence, particularly in situations of uncertainty.
“As a society, with AI accelerating so quickly, we need to be concerned about the potential for overtrust,” says study author Colin Holbrook, a professor in UC Merced’s Department of Cognitive and Information Sciences, in a statement.
The research team designed their experiments to simulate the uncertainty and pressure of real-world military decisions. To create a sense of gravity around the simulated decisions, researchers first showed participants images of innocent civilians, including children, alongside the devastation left in the aftermath of a drone strike. They framed the task as a high-stakes dilemma: failing to identify and eliminate enemy targets could result in civilian casualties, but misidentifying civilians as enemies would mean killing innocent people.