
AI Agent Hacks McKinsey’s Lilli Chatbot, Exposing Major Security Risks

  • Writer: Covertly AI

Artificial intelligence is transforming how companies operate, but it is also creating new cybersecurity challenges. A recent incident involving global consulting firm McKinsey highlights how rapidly evolving AI technologies can introduce unexpected vulnerabilities. Researchers from cybersecurity startup CodeWall revealed that they were able to hack McKinsey’s internal AI platform, called Lilli, exposing millions of internal messages and sensitive system information. Although the intrusion was conducted ethically to identify weaknesses, it underscores growing concerns about the risks associated with the rapid deployment of AI systems inside major organizations.


Lilli is a generative AI platform introduced by McKinsey in July 2023 to help employees analyze data, develop strategies, and prepare presentations for clients. The tool has quickly become widely used within the company, with about 72 percent of McKinsey’s workforce using it regularly. This represents more than 40,000 employees who rely on the AI system to assist with daily tasks. The platform processes more than 500,000 prompts each month and has become a central part of McKinsey’s internal workflow as the firm increasingly promotes its expertise in artificial intelligence consulting. The company has also built around 25,000 internal AI agents designed to support consultants with research, planning, and analysis.


The security issue emerged when CodeWall’s researchers directed an autonomous AI agent to test McKinsey’s system for vulnerabilities. According to the cybersecurity firm, the AI agent was able to identify weaknesses and gain full read and write access to the platform’s production database within just two hours. The attack did not require any login credentials. Instead, the agent discovered publicly exposed API documentation and multiple endpoints that did not require authentication, allowing it to begin interacting with the system. During the process, the AI agent identified a SQL injection vulnerability that allowed it to access and manipulate the database directly.
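To make the class of flaw concrete: the report describes a classic SQL injection, where user-supplied text is spliced directly into a database query. The sketch below is purely illustrative and assumes nothing about Lilli's actual code; the table name, column names, and `search_messages` function are all hypothetical, chosen only to show how an injected predicate can dump every row.

```python
import sqlite3

# Hypothetical illustration only: none of these names come from the
# reported incident. It demonstrates the class of bug (string-built SQL),
# not McKinsey's system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER, author TEXT, body TEXT)")
conn.execute("INSERT INTO messages VALUES (1, 'alice', 'Q3 strategy draft')")
conn.execute("INSERT INTO messages VALUES (2, 'bob', 'M&A target list')")

def search_messages(author: str):
    # VULNERABLE: user input is concatenated directly into the query text.
    query = f"SELECT body FROM messages WHERE author = '{author}'"
    return [row[0] for row in conn.execute(query)]

# A normal lookup returns only that author's rows...
print(search_messages("alice"))          # ['Q3 strategy draft']

# ...but an injected predicate makes the WHERE clause always true,
# returning every row regardless of author.
print(search_messages("x' OR '1'='1"))   # ['Q3 strategy draft', 'M&A target list']
```

An attacker who reaches such a query through an unauthenticated endpoint needs no credentials at all, which matches the pattern the researchers describe.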


Through this vulnerability, the researchers reported that they were able to access 46.5 million chat messages generated by McKinsey employees while using the Lilli system. These conversations included discussions related to corporate strategy, mergers and acquisitions, and client engagements. The researchers also identified approximately 728,000 file names within the system, including spreadsheets, presentations, and documents. In addition, the system exposed 57,000 user accounts, 384,000 AI assistants, and dozens of system prompts that controlled how the chatbot behaved.



One of the most concerning aspects of the vulnerability was that it provided both read and write access to the database. This meant that a malicious attacker could potentially alter the chatbot’s system prompts, effectively changing how the AI responded to users. Such an attack could poison the chatbot’s outputs, manipulate the information it provided to consultants, or weaken the guardrails designed to control the system’s behavior. The ability to modify AI instructions without deploying new code highlights how emerging AI systems can introduce new types of cybersecurity risks.
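The standard defense against this class of flaw is parameterized queries, which keep user input as data rather than executable SQL. The sketch below is a generic example of that technique, not a description of McKinsey's actual fix; the `system_prompts` table and `get_prompt` function are hypothetical.

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE system_prompts (name TEXT, prompt TEXT)")
conn.execute(
    "INSERT INTO system_prompts VALUES "
    "('default', 'Answer helpfully and cite sources.')"
)

def get_prompt(name: str):
    # SAFE: the ? placeholder binds user input as a value, never as SQL,
    # so an injected string cannot rewrite the query or the stored prompts.
    row = conn.execute(
        "SELECT prompt FROM system_prompts WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None

print(get_prompt("default"))
print(get_prompt("x' OR '1'='1"))  # None: treated as a literal name, not SQL
```

With binding in place, the injection string from the earlier example simply fails to match any row instead of altering the query's logic.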


McKinsey said it was alerted to the issue at the end of February and responded quickly after the vulnerability was disclosed. According to the company, its security team confirmed the issue and patched the vulnerabilities within hours. The development environment used for testing code was taken offline, and unauthenticated endpoints were removed. McKinsey stated that an investigation supported by a third-party forensics firm found no evidence that client data or confidential information had been accessed by unauthorized individuals.


Despite the company’s swift response, the incident has raised broader concerns about the security of AI tools used within major organizations. McKinsey has increasingly positioned itself as a leader in artificial intelligence consulting, noting that AI-related advisory services now account for roughly 40 percent of its revenue. The firm regularly advises major corporations on adopting AI technologies, making the security of its own internal systems particularly important.


Cybersecurity experts say the case also illustrates how artificial intelligence itself is beginning to play a role in cyberattacks. In this instance, CodeWall’s AI agent autonomously selected McKinsey as a target, analyzed the company’s infrastructure, executed the attack, and reported the vulnerabilities with minimal human involvement. Researchers warn that malicious actors could eventually use similar AI tools to launch automated attacks at machine speed.


Although the vulnerabilities in McKinsey’s system have now been fixed, the incident serves as a reminder that artificial intelligence systems remain vulnerable to traditional cybersecurity threats. As organizations rapidly adopt AI tools to increase productivity and efficiency, maintaining strong security protections will be critical. The McKinsey case demonstrates that even advanced AI platforms developed by leading consulting firms can contain weaknesses, reinforcing the importance of continuous security testing and responsible disclosure in the evolving era of artificial intelligence.


Works Cited


Kissin, Ellesheva, and Stephen Foley. “McKinsey Rushes to Fix AI System After Hacker Exposes Flaws.” Financial Times, 2026, www.ft.com/content/004e785e-8e17-4cb3-8e5a-3c36190bc8b2.

“McKinsey Rushes to Fix AI System After Hacker Exposes Flaws.” OODAloop, 2026, oodaloop.com/briefs/technology/mckinsey-rushes-to-fix-ai-system-after-hacker-exposes-flaws.

Lyons, Jessica. “AI vs AI: Agent Hacked McKinsey’s Chatbot and Gained Full Read-Write Access in Just Two Hours.” The Register, 9 Mar. 2026, www.theregister.com/2026/03/09/mckinsey_ai_chatbot_hacked.

“Auditor General Says Federal Government Failed to Follow Rules on McKinsey Contracts.” CBC News, www.cbc.ca/news/politics/mckinsey-contracts-awarded-federal-government-auditor-general-hogan-1.7223893.

“Meet Lilli, Our Generative AI Tool That’s a Researcher, a Time Saver, and an Inspiration.” McKinsey & Company, 16 Aug. 2023, www.mckinsey.com/about-us/new-at-mckinsey-blog/meet-lilli-our-generative-ai-tool.
