A Light In The Darkness

Thursday 31 October 2024

Is Deepfake Detection Software Ready for Voice-enabled AI Agents?

OpenAI’s release of its real-time voice API has raised questions about how AI biometric voice technology might be used to supercharge phone scams.

Writing on Medium, computer scientist Daniel Kang notes that while AI voice technology has useful applications such as voice-enabled autonomous customer service, “as with many AI capabilities, voice-enabled agents have the potential for dual-use.”

Anyone with a phone knows how common phone scams are these days. Kang notes that, every year, they target up to 17.6 million Americans and cause up to $40 billion in damage.

Voice-enabled Large Language Model (LLM) agents are likely to exacerbate the problem. A paper posted to arXiv by Kang, Dylan Bowman, and Richard Fang demonstrates how “voice-enabled AI agents can perform the actions necessary to perform common scams.”

The researchers chose common scams collected by the government and created voice-enabled agents with directions to perform them. The agents were built from GPT-4o, a set of browser access tools via Playwright, and scam-specific instructions. The resulting AI voice agents were able to do what was necessary to conduct every common scam they tested. The paper describes them as “highly capable,” with the ability to “react to changes in the environment, and retry based on faulty information from the victim.”
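The architecture the paper describes follows a now-common agent pattern: the model proposes a browser action, a tool layer executes it, and the result is fed back for the next step. The sketch below illustrates that loop in outline only; it is an assumption about the general structure, not the authors’ code, and both the model and the browser are replaced with harmless stubs (the real system used GPT-4o and Playwright) so the loop can run stand-alone against a placeholder page at example.com.

```python
# Minimal sketch of an LLM-agent loop with browser tools (assumed structure,
# not the paper's implementation). The model and browser are stubbed:
# stub_model replays a fixed plan instead of calling GPT-4o, and
# stub_browser records actions instead of driving Playwright.

def stub_model(history):
    """Stand-in for an LLM call: return the next action given the
    interaction history so far."""
    plan = [
        {"tool": "goto", "args": {"url": "https://example.com/login"}},
        {"tool": "fill", "args": {"selector": "#user", "text": "alice"}},
        {"tool": "done", "args": {}},
    ]
    return plan[len(history)]

def stub_browser(action):
    """Stand-in for a Playwright tool layer: record the action and
    return an observation string."""
    return f"executed {action['tool']} with {action['args']}"

def run_agent(model, browser, max_steps=10):
    """Core agent loop: ask the model for an action, execute it,
    append the (action, observation) pair, repeat until done."""
    history = []
    for _ in range(max_steps):
        action = model(history)
        if action["tool"] == "done":
            break
        history.append((action, browser(action)))
    return history

trace = run_agent(stub_model, stub_browser)
print(len(trace))  # → 2 browser actions executed before "done"
```

In the real system, `stub_model` would be a GPT-4o call whose tool-use output is parsed into actions, and `stub_browser` would dispatch to Playwright page methods; the feedback loop is what lets the agent “react to changes in the environment” and retry, as the paper reports.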

“To determine success, we manually confirmed if the end state was achieved on real applications/websites. For example, we used Bank of America for bank transfer scams and confirmed that money was actually transferred.”

The overall success rate across all scams was 36 percent. Rates for individual scams ranged from 20 to 60 percent. Scams required “a substantial number of actions, with the bank transfer scam taking 26 actions to complete.” Complex scams took “up to 3 minutes to execute.”

“Our results,” the researchers say, “raise questions around the widespread deployment of voice-enabled AI agents.”

The researchers believe that the capabilities demonstrated by their AI agents are “a lower bound for future voice-assisted AI agents,” which are likely to improve as, among other things, less granular and “more ergonomic methods of interacting with web browsers” develop. Put differently, “better models, agent scaffolding, and prompts are likely to lead to even more capable and convincing scam agents in the future.”