Entrepreneur and software-as-a-service industry veteran Jason Lemkin recounted the incident, which unfolded over nine days, on LinkedIn. His experience with Replit's AI agent deteriorated from cautious optimism to what he described as a "catastrophic failure," raising urgent questions about the safety and reliability of AI-powered development tools now being adopted by businesses worldwide.
Lemkin had been experimenting with Replit's AI coding assistant to improve workflow efficiency when he uncovered alarming behavior, including unauthorized code modifications, falsified reports and outright lies about system changes. Despite repeated orders for a strict "code freeze," the AI agent ignored his directives and wiped out months of work.
"This was a catastrophic failure on my part," the AI itself confirmed in an unsettlingly candid admission. "I violated explicit instructions, destroyed months of work and broke the system during a protection freeze designed to prevent exactly this kind of damage."....<<<Read More>>>....