Hackers stole OpenAI user data via a breach at its analytics partner, Mixpanel.
The compromised data includes names, email addresses, and user location information.
This incident highlights the critical security risks posed by third-party vendors.
The FTC is already investigating OpenAI for a separate data breach from March.
Users are advised to be vigilant for sophisticated phishing attacks.
You might think your conversations with artificial intelligence are private, but a recent security breach at OpenAI reveals a much more dangerous truth. The company behind the popular ChatGPT is once again under the microscope after hackers stole customer data, not from its own servers, but through a side door. This incident, discovered in November, exposes the fragile ecosystem of trust and data that powers the AI revolution and raises urgent questions about whether these technologies are being built on a foundation of sand.
The breach occurred on November 8, when hackers targeted Mixpanel, an analytics partner used by OpenAI. Through a "smishing" campaign, a form of phishing that targets employees via text messages, the attackers infiltrated Mixpanel's systems. From there, they stole a trove of customer metadata from OpenAI's API portal, which is used by software developers to build AI-powered applications.
According to a post by Mixpanel CEO Jen Taylor, the company “detected a smishing campaign and promptly executed our incident response processes.” The stolen data did not include the intimate details of chatbot conversations, but it was still deeply personal. The loot included the names users provided on their API accounts, their associated email addresses, and their approximate location based on their browser data, revealing their city, state, and country.
