Friday 8 December 2023

Amazon’s new generative AI assistant “Q” has “severe hallucinations” and leaks confidential data

Amazon’s fledgling generative AI assistant, Q, has been struggling with factual inaccuracies and privacy issues, according to leaked internal communications.

The chatbot was recently announced by Amazon’s cloud computing division and is aimed at businesses. A company blog post says it was built to help employees write emails, troubleshoot, code, research, and summarize reports, and that it will provide users with helpful answers relating only to the content that “each user is permitted to access.”

It was promoted as a safer and more secure offering than ChatGPT. However, leaked documents show that it is not performing up to standard, experiencing “severe hallucinations” and leaking confidential data.

According to Platformer, which obtained the leaked documents, one incident was flagged as “sev 2.” This designation is reserved for events deemed serious enough to page Amazon engineers overnight and have them work through the weekend to correct them. The publication revealed that the tool leaked unreleased features and shared the locations of Amazon Web Services data centers.

One employee wrote in the company’s Slack channel that Q could provide advice that is so bad that it could “potentially induce cardiac incidents in Legal.”

An internal document addressing the AI assistant’s wrong answers and hallucinations noted: “Amazon Q can hallucinate and return harmful or inappropriate responses. For example, Amazon Q might return out of date security information that could put customer accounts at risk.”

These are worrying problems for a chatbot that the company is gearing toward businesses, which are likely to have data protection and compliance concerns. It also doesn’t bode well for the company in its quest to prove that it is not falling behind its competitors in the AI sphere, such as OpenAI and Microsoft.