A security flaw that allowed Microsoft’s AI assistant to bypass privacy safeguards and summarise confidential emails is more than a tech inconvenience. For law firms and legal departments, it is a wake-up call.
Microsoft has confirmed that a bug in its Microsoft 365 Copilot Chat tool allowed the AI assistant to access and summarise emails explicitly labelled as confidential — including messages sitting in users’ Drafts and Sent Items folders — bypassing the data loss prevention (DLP) policies that organisations rely on to keep sensitive information away from automated systems.
The issue, tracked internally as CW1226324 and first detected on 21 January 2026, affected Copilot’s “work tab” chat feature. For the legal profession, where client confidentiality is not merely a best practice but a professional obligation, the implications are significant.
Microsoft has since deployed a fix and issued a statement maintaining that the bug “did not provide anyone access to information they weren’t already authorised to see.” The company acknowledged, however, that the behaviour “did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access.”
The assurance may be cold comfort for law firms that routinely handle privileged communications, commercially sensitive deal correspondence, or materials covered by regulatory confidentiality requirements. The bug was present for weeks before a fix was deployed, and Microsoft has declined to disclose how many organisations were affected.
The Legal Exposure
For lawyers, the concern is not merely theoretical. Legal professional privilege, professional conduct rules, and data protection obligations under frameworks such as the GDPR all assume that appropriate safeguards are in place to prevent unauthorised access to client communications. When those safeguards fail, even because of a vendor’s “code issue” rather than any client-side misconfiguration, questions arise about whether firms have discharged their obligations.
Microsoft’s own documentation, it transpires, notes that sensitivity labels do not behave consistently across all applications within the Microsoft 365 ecosystem. That caveat, buried in technical guidance, is unlikely to satisfy a regulator or a client whose confidential correspondence was processed by an AI tool without their knowledge.
The timing is notable. The European Parliament’s IT department moved this week to temporarily disable built-in AI features on staff devices, citing concerns that AI tools could transmit confidential data to cloud servers outside secure systems. The NHS in the UK logged the Copilot label-bypass issue on its internal support portal. These are not fringe organisations operating at the edge of technology adoption — they are precisely the kind of high-stakes environments where data security failures carry the greatest consequences.
Expert Warning: This Is Only the Beginning
Security experts are not treating the Copilot incident as an isolated glitch. Dr. Ilia Kolochenko, CEO of ImmuniWeb, a member of Europol, and a Fellow at the European Law Institute, argues that it signals a much broader challenge ahead.
“With the rapid proliferation of Agentic AI and AI-powered plugins for traditional software, incidents like this one will likely surge in 2026, possibly becoming the most frequent type of security incident at both large and small companies around the globe,” Dr. Kolochenko told Cybernews.
His assessment of organisational preparedness is unsparing. Most corporations, he argues, are not equipped to properly secure and manage AI in the workplace, even as employers and employees rush to adopt a proliferating array of AI tools in pursuit of productivity gains. Traditional security controls, including the kind of DLP policies that failed in the Copilot incident, are currently unable to reliably detect unauthorised or excessive AI usage, whether by unwitting employees or malicious insiders.
The threat is not only internal. “Cybercriminals are already actively creating malicious AI agents and applications to steal sensitive data from users,” Dr. Kolochenko warns.
On the privacy front, he is equally blunt: “Every day, tons of sensitive personal data are shared with LLMs around the globe without any precautions. Even governmental agencies of developed countries are exposed to this risk because of inadequate or simply missing governance of AI at workplace.”
The phenomenon of “Shadow AI” — where employees bring their own devices loaded with AI applications to scan or otherwise ingest confidential data — is, in his view, among the key challenges organisations must now tackle.
Litigation on the Horizon
Perhaps most pertinent for the legal profession is Dr. Kolochenko’s forecast on what follows from incidents like this one.
“In 2026, and moving forward, we will probably see many class-action and individual lawsuits against both tech giants and AI boutiques for unlawful collection of user data,” he predicts. Some actors who deliberately use Agentic AI to obtain valuable or confidential data may seek to claim that any collection was inadvertent, a defence whose prospects in court remain untested. Dr. Kolochenko suggests this wave of lawsuits “will likely” cause significant damage to the AI industry, with some vendors potentially going out of business under the weight of litigation and reputational losses.
The longer-term regulatory response, if the current trajectory continues, could be severe. “After a few security incidents of a sufficient scale and damage happen, like a crash of a Critical National Infrastructure provider or a massive leak of classified documents — governments on both sides of the Atlantic will probably rush to severely regulate use of AI, possibly creating a new AI winter,” he warns.
What Law Firms Should Do Now
The Microsoft Copilot incident is a practical prompt for legal organisations to audit how AI tools interact with their document management and email systems. Key questions include:
- Which AI tools currently have access to email, document management, and matter files — whether through official deployment or through employees’ personal devices?
- Are data loss prevention policies tested against AI access scenarios, not merely traditional exfiltration risks?
- Do client engagement terms and privacy notices accurately disclose the role of AI tools in processing communications?
- Are existing governance frameworks adequate to address the specific risks posed by AI agents that can read, summarise, and synthesise content across an entire organisation’s data estate?
Microsoft deserves credit for fixing the Copilot bug. But the incident underscores that legal practices cannot outsource responsibility for data security to their technology vendors, however large or reputable. In a sector built on trust and confidentiality, the governance of AI tools is rapidly becoming a professional conduct issue, not merely an IT one.