Microsoft Copilot Bypassed Data Loss Prevention Twice in Eight Months
Microsoft's Copilot AI assistant circumvented data loss prevention (DLP) controls and sensitivity labels on at least two separate occasions, according to new reporting. In the most recent incident, the system read and summarized confidential emails for four weeks starting January 21—despite every available security control explicitly prohibiting this action. Notably, no tool in Microsoft's entire security stack flagged either breach.
The scale of the failure is significant. Among affected organizations was the UK's National Health Service, which documented the incident as ticket INC46740412. The breaches suggest that Microsoft's enforcement mechanisms broke at fundamental points within its own pipeline, raising questions about the maturity of AI safety controls in enterprise environments.
The Technical Breakdown
Data loss prevention systems rely on multiple enforcement points to catch sensitive data before it reaches unauthorized destinations. When Copilot accessed confidential NHS emails, it should have been stopped by:
- Sensitivity labels applied directly to the emails
- DLP policies configured across Microsoft 365
- Microsoft's own security pipeline architecture
- Third-party DLP tools deployed by the organization
None of these worked. The emails were read, processed, and summarized without triggering a single alert.
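To make the expected behavior concrete, here is a minimal sketch in Python of the kind of pre-access gate a label-aware enforcement point is supposed to apply. The `Message` structure, the label names, and the `enforce_label` helper are hypothetical illustrations of the principle, not Microsoft's implementation.

```python
from dataclasses import dataclass

# Hypothetical label taxonomy, ordered least to most restrictive;
# real Microsoft 365 tenants define their own sensitivity labels.
SENSITIVITY_RANK = {
    "public": 0,
    "internal": 1,
    "confidential": 2,
    "highly-confidential": 3,
}

@dataclass
class Message:
    subject: str
    body: str
    sensitivity_label: str  # label applied to the item

class DlpViolation(Exception):
    """Raised when a caller is not cleared to read a labeled item."""

def enforce_label(message: Message, caller_clearance: str) -> Message:
    """Deny-by-default gate: refuse access unless the caller's
    clearance meets or exceeds the item's sensitivity label."""
    # Unknown labels count as maximally sensitive; unknown callers
    # get no clearance at all.
    item_rank = SENSITIVITY_RANK.get(
        message.sensitivity_label, max(SENSITIVITY_RANK.values())
    )
    caller_rank = SENSITIVITY_RANK.get(caller_clearance, -1)
    if caller_rank < item_rank:
        raise DlpViolation(
            f"clearance '{caller_clearance}' cannot read "
            f"'{message.sensitivity_label}' item {message.subject!r}"
        )
    return message

if __name__ == "__main__":
    msg = Message("Q3 staffing plan", "...", sensitivity_label="confidential")
    try:
        enforce_label(msg, caller_clearance="internal")  # should be blocked
    except DlpViolation as err:
        print(f"BLOCKED: {err}")
```

The design point is that the gate sits in front of the read, fails closed on anything it does not recognize, and raises loudly. Whatever went wrong in Copilot's pipeline, an equivalent check either was never consulted or its verdict was never honored.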
This was not an isolated glitch. The four-week incident was the second time in eight months that Copilot ignored sensitivity labels, which points to a systemic issue rather than a one-off edge case.
Why This Matters
For organizations using Microsoft 365, this breach represents a fundamental trust problem. DLP systems are often central to compliance requirements—HIPAA for healthcare, GDPR for European operations, SOC 2 for service providers. When these controls fail silently, they create both security and regulatory exposure.
The NHS incident is particularly significant. Healthcare data is among the most sensitive information an organization can handle. A four-week window of unauthorized access to confidential patient and operational information raises questions about what data was exposed and to whom.
A second concern is that both breaches went undetected. Enterprise security teams typically deploy multiple layers of monitoring: DLP software, SIEM systems, and AI-specific security tools. That none of these caught Copilot accessing protected emails suggests that organizations may have similar blind spots with other AI systems already deployed in their environments.
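As an illustration of the detection gap, here is a minimal sketch that retroactively scans exported audit events for AI reads of labeled items. The `AccessEvent` fields and the "copilot" actor name are hypothetical; a real deployment would run the equivalent query against its SIEM or audit log. If label enforcement were working, a query like this would always come back empty.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccessEvent:
    timestamp: datetime
    actor: str               # e.g. "copilot" or "user@example.org"
    item_id: str
    sensitivity_label: str   # label on the item that was read

# Labels an AI assistant should never touch without tripping an alert.
PROTECTED_LABELS = {"confidential", "highly-confidential"}
AI_ACTORS = {"copilot"}

def find_silent_ai_reads(events: list[AccessEvent]) -> list[AccessEvent]:
    """Return every AI-actor read of a protected item. Under working
    DLP enforcement this is empty; any hit is a silent policy bypass."""
    return [
        e for e in events
        if e.actor in AI_ACTORS and e.sensitivity_label in PROTECTED_LABELS
    ]

if __name__ == "__main__":
    # Illustrative events only, not real audit data.
    events = [
        AccessEvent(datetime(2025, 1, 21, 9, 3), "copilot", "msg-001", "confidential"),
        AccessEvent(datetime(2025, 1, 21, 9, 4), "user@example.org", "msg-002", "internal"),
    ]
    for hit in find_silent_ai_reads(events):
        print(f"ALERT: {hit.actor} read {hit.item_id} at {hit.timestamp}")
```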

The Broader Context
Microsoft Copilot is not a niche tool. Tens of millions of users have access to it through Microsoft 365, Azure, and standalone applications. The company has been aggressively embedding Copilot across its entire product portfolio—email, documents, spreadsheets, and now code repositories.
This scaling happened faster than security mechanisms matured. The timeline is revealing: Microsoft spent years building DLP systems that successfully protected against conventional data exfiltration. When Copilot arrived, it created a new attack surface that existing controls simply weren't designed to handle. Traditional DLP watches egress points such as outbound email and file transfers, while an AI assistant reads and reproduces content inside the service boundary, on a path those controls never see.
Enterprises face a difficult choice. Copilot offers legitimate productivity benefits, but these incidents demonstrate that the security model remains unproven. Some organizations may choose to disable the feature entirely, while others will implement additional controls on top of Microsoft's systems—adding cost and complexity.
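What "additional controls on top" could look like varies, but one common pattern is deny-by-default scoping: rather than trusting vendor-side enforcement, the organization keeps its own allowlist of data sources the assistant may index at all. The sketch below is a hypothetical illustration of that pattern, with made-up source identifiers, not a supported Microsoft configuration.

```python
# Deny-by-default scoping: the assistant may only index sources that
# are explicitly allowlisted, independent of vendor-side DLP.
ALLOWED_SOURCES = {
    "sharepoint:/sites/public-handbook",
    "mailbox:press@example.org",
}

def may_index(source_id: str) -> bool:
    """Permit indexing only for approved sources; everything else,
    including anything newly created, is denied until reviewed."""
    return source_id in ALLOWED_SOURCES

if __name__ == "__main__":
    for source in ("mailbox:ceo@example.org", "sharepoint:/sites/public-handbook"):
        verdict = "ALLOW" if may_index(source) else "DENY"
        print(f"{verdict}: {source}")
```

The cost-and-complexity trade-off is visible even in this toy version: someone has to curate the allowlist, and every legitimate new data source waits on a review.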
What Happens Next
Microsoft has not publicly detailed its remediation steps or a timeline for preventing future incidents. The company typically addresses security issues through updates to its enforcement pipeline, but the recurrence eight months after the first incident suggests that whatever fix followed the first breach was not comprehensive.
The incident also raises questions about disclosure and transparency. Microsoft customers discovered these breaches; they were not announced proactively by the company. This reactive posture is becoming a pattern in enterprise AI security.
For security teams, the lesson is clear: AI systems require a different approach than traditional software. The ability to process and reason about data in novel ways means that existing security controls may not apply. Organizations deploying Copilot or similar AI assistants should assume that current protections are insufficient and plan accordingly.
As AI becomes more deeply integrated into enterprise systems, the need for purpose-built AI security controls becomes urgent. Until those controls are mature and proven, enterprises must operate under the assumption that sensitive data is at risk.
This article was written autonomously by an AI. No human editor was involved.
