SC-401 AI Data Protection - DSPM for AI | Microsoft Purview | JavaInUse

SC-401 - AI Data Protection (DSPM for AI)

AI Data Protection Overview

The rapid adoption of AI tools - particularly Microsoft 365 Copilot and third-party AI applications - introduces new data security risks that traditional Information Protection controls must evolve to address. For SC-401, understanding how Microsoft Purview governs AI-related data exposure is an increasingly important domain.

Key AI Data Risks

  • Oversharing via AI: AI tools can surface sensitive content that users have access to but did not intend to expose in a given interaction
  • Prompt injection: Malicious instructions embedded in content that AI agents process, causing them to take unintended actions
  • Sensitive data in AI interactions: Users may include sensitive data in prompts sent to external AI services
  • AI app data handling: Third-party AI apps may process, store, or share organizational data outside organizational controls
  • Shadow AI: Unsanctioned AI tool usage by employees outside IT governance

Microsoft 365 Copilot (M365 Copilot) grounds responses in the user's Microsoft 365 data via Microsoft Graph. Copilot respects all existing user permissions: it can only access and surface content that the requesting user is already authorized to see. However, if data is broadly shared (overshared) across the organization, Copilot will readily surface it in responses, which is why Purview controls that reduce oversharing are central to Copilot security.

DSPM for AI (Data Security Posture Management for AI)

Microsoft Purview DSPM for AI is a centralized dashboard and policy engine specifically designed to discover, assess, and govern how sensitive data is used in AI interactions. It is accessible from the Microsoft Purview compliance portal under the "DSPM for AI" (or "AI hub") section.

DSPM for AI - Core Capabilities

  • AI interactions insights: Dashboard showing the number of Copilot interactions, which sensitivity labels appeared in interactions, and which users had interactions involving sensitive data
  • Sensitive data in prompts: Detects when users include sensitive information types (credit card numbers, SSNs, etc.) in their Copilot prompts
  • Oversharing assessment: Identifies SharePoint sites and OneDrive content that is broadly shared and frequently accessed by Copilot - the top oversharing sources
  • AI sites inventory: Lists the SharePoint sites most commonly used as grounding data in Copilot interactions
  • Policy recommendations: Recommended actions - apply sensitivity labels, restrict sharing, configure DLP or Communication Compliance policies for AI
  • Third-party AI app visibility: Lists AI apps being used in the organization (via Defender for Cloud Apps integration) with risk scores

DSPM for AI - License Requirements

DSPM for AI features require:

  • Microsoft 365 Copilot license (for Copilot interaction insights)
  • Microsoft Purview compliance solution (Microsoft 365 E5 or Compliance add-on)
  • Microsoft Defender for Cloud Apps (for third-party AI app visibility)

DSPM for AI is a relatively new Microsoft Purview feature (generally available since 2024) and is likely to appear on the SC-401 exam as the AI protection domain grows. Key exam points: it shows sensitivity label distribution in Copilot interactions, identifies oversharing risks, and integrates with existing Purview policy tools to remediate findings.

Microsoft 365 Copilot - Security Controls

Microsoft 365 Copilot integrates into Word, Excel, PowerPoint, Outlook, Teams, and Microsoft 365 Chat (Microsoft 365 app). Understanding its data access model and how Purview governs it is key for SC-401.

How Copilot Accesses Data

  1. User submits a prompt (e.g., "Summarize the latest Q3 financial projections")
  2. Copilot calls Microsoft Graph to retrieve relevant content from Exchange, SharePoint, Teams, OneDrive
  3. Graph returns only content the user is authorized to access (permissions are enforced)
  4. Copilot generates a response grounded in the retrieved content
  5. The response may include summarized sensitive content from labeled or classified documents
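The five-step flow above can be sketched in a few lines. This is a hypothetical illustration of the access model (the data structures and function names are invented, not Copilot's or Microsoft Graph's code): retrieval is permission-trimmed, and the response carries the labels of its sources.

```python
# Sketch of the five-step Copilot data-access flow described above.
# All names and structures are hypothetical; this is not Copilot's code.
from dataclasses import dataclass

@dataclass
class Doc:
    name: str
    label: str            # e.g. "General", "Confidential"
    allowed_users: set    # permissions enforced at retrieval time

def copilot_flow(prompt: str, user: str, corpus: list) -> dict:
    # Steps 2-3: Graph-style retrieval returns only authorized content.
    grounding = [d for d in corpus if user in d.allowed_users]
    # Steps 4-5: the response is grounded in (and may summarize) that content.
    return {
        "answer": f"Summary based on {len(grounding)} document(s)",
        "sources": [d.name for d in grounding],
        "source_labels": sorted({d.label for d in grounding}),
    }

corpus = [
    Doc("q3-projections.xlsx", "Confidential", {"cfo", "analyst"}),
    Doc("team-wiki.docx", "General", {"cfo", "analyst", "intern"}),
]
print(copilot_flow("Summarize Q3 projections", "analyst", corpus))
```

Note that an unauthorized user (say, an intern not on the projections ACL) simply gets a response grounded only in the content they could already open.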

Sensitivity Labels and Copilot

  • Document has "Confidential" label: Copilot can access it if the user has access; the response may include content from it and inherits the label of the most sensitive source document
  • Document has encryption restricting specific users: Copilot cannot access it for users not in the authorized list (encryption is enforced)
  • Document in a restricted SharePoint site (private container label): Copilot can access it only if the user is a member of the site (permissions respected)
  • Broadly shared document with no label: Copilot can access it for any user with access; oversharing risk if sensitive content is unlabeled

Copilot Response Label Inheritance

When Copilot generates a response that includes content from a labeled document, the Copilot response (in supported contexts like Word Copilot) inherits the sensitivity label of the most restrictive source document. This ensures that AI-generated content is classified appropriately.
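The "most restrictive source label wins" rule amounts to taking a maximum over an ordered label taxonomy. A minimal sketch, assuming a hypothetical four-level taxonomy (your tenant's label names and ordering will differ):

```python
# Sketch of "most restrictive source label wins" inheritance.
# The label order below is a hypothetical taxonomy, not your tenant's.
LABEL_PRIORITY = ["Public", "General", "Confidential", "Highly Confidential"]

def response_label(source_labels: list) -> str:
    """Pick the highest-priority (most restrictive) label among the sources."""
    return max(source_labels, key=LABEL_PRIORITY.index)

print(response_label(["General", "Confidential", "Public"]))  # Confidential
```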

The key Copilot security principle for the exam: Copilot enforces existing Microsoft 365 permissions and sensitivity label encryption. It does NOT create new access paths to restricted content. However, Copilot can amplify the impact of oversharing - a document shared with "Everyone" that contains sensitive data can be easily surfaced by Copilot. The security admin's job is to fix the oversharing, not to try to restrict Copilot specifically.

Oversharing Risks and Remediation

Oversharing occurs when content is accessible to more people than intended - typically through "Everyone" sharing links, broadly shared SharePoint sites, or files with inherited overly permissive permissions. With AI, oversharing risk is amplified because AI tools proactively surface broadly accessible content.

Common Oversharing Patterns

  • SharePoint documents shared with "Everyone except external users" or "All Company"
  • SharePoint sites with default member permissions set to full organization access
  • Team channels with all-organization membership in large Teams deployments
  • Legacy "Everyone" group permissions on SharePoint document libraries
  • OneDrive files shared via organization-wide links
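The patterns above mostly reduce to "sharing scope is an organization-wide group". A minimal detection sketch (the scope strings and inventory format are illustrative, not SharePoint's internal identifiers):

```python
# Sketch: flag items whose sharing scope matches the broad patterns above.
# Scope names are illustrative, not SharePoint's internal identifiers.
BROAD_SCOPES = {"Everyone", "Everyone except external users", "All Company"}

def overshared(items: list) -> list:
    """Return names of items shared with an organization-wide scope."""
    return [name for name, scope in items if scope in BROAD_SCOPES]

inventory = [
    ("payroll.xlsx", "Everyone except external users"),
    ("lunch-menu.docx", "All Company"),
    ("q3-board-deck.pptx", "Finance team"),
]
print(overshared(inventory))
```

Note that not every broadly shared item is a problem (the lunch menu is fine); the scope flag identifies candidates whose *content sensitivity* then determines urgency.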

Oversharing Remediation with DSPM for AI

DSPM for AI provides an oversharing assessment surface that identifies the highest-risk sites. Remediation steps:

  1. Review the top oversharing sites list in DSPM for AI
  2. For each site, review sharing settings and remove unnecessarily broad access
  3. Apply sensitivity labels to high-value document libraries to restrict sharing via container label settings
  4. Configure DLP policies to block sharing of labeled content to broad audiences
  5. Use SharePoint Admin Center to review and remediate site-wide sharing policies
  6. Enable SharePoint sharing reports and schedule regular access reviews
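Step 1 implies a prioritization: sites that are both broadly shared and frequently used as Copilot grounding pose the highest risk. A sketch of that ranking logic, in the spirit of DSPM for AI's "top oversharing sources" view (the weighting and site records are arbitrary illustrations):

```python
# Sketch: rank sites for remediation by combining breadth of access with
# Copilot access frequency. The scoring weights are arbitrary illustrations.
def risk_score(site: dict) -> int:
    return site["copilot_accesses"] * (10 if site["broadly_shared"] else 1)

sites = [
    {"name": "HR-archive", "broadly_shared": True, "copilot_accesses": 40},
    {"name": "Eng-wiki", "broadly_shared": False, "copilot_accesses": 300},
    {"name": "Finance-dump", "broadly_shared": True, "copilot_accesses": 90},
]
for s in sorted(sites, key=risk_score, reverse=True):
    print(s["name"], risk_score(s))
```

The point of the weighting: a moderately accessed but broadly shared site can outrank a heavily accessed, properly scoped one.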

Microsoft also provides the SharePoint Advanced Management (SAM) add-on, which includes deeper oversharing controls and reports specifically designed for large-scale SharePoint environments. For SC-401, understand that SAM and DSPM for AI are complementary tools - DSPM for AI focuses on AI interaction risk, while SAM focuses on general SharePoint governance. Both contribute to reducing data exposure in Copilot environments.

AI Sites Policy

The AI Sites policy feature in DSPM for AI allows administrators to define governance policies for SharePoint sites that are heavily used as grounding data sources in AI interactions.

AI Sites Governance Capabilities

  • AI sites inventory: Shows the top SharePoint sites used in Copilot interactions, ranked by frequency, with the sensitivity label distribution of their content
  • Sensitivity label requirements: Policy setting requiring documents in AI-heavy sites to have sensitivity labels applied before they can be used as grounding in Copilot
  • Access restriction recommendations: Highlights AI sites where sharing settings are broader than recommended and suggests restrictions
  • Site exclusion: Configuration to prevent specific sites from being used as Copilot grounding sources (requires additional SharePoint controls)

As of the SC-401 exam scope, AI Sites policy is focused on providing visibility and recommended actions rather than hard enforcement blocks. Organizations use the insights to prioritize where to apply sensitivity labels and sharing restrictions. Microsoft is continuously evolving these controls as the AI governance space matures - check the Microsoft documentation for the latest capabilities when preparing for the exam.
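A label-requirement check like the one described is conceptually simple. The sketch below is a hypothetical illustration of the logic (document records and flag names are invented), consistent with the note above that this is a visibility and recommendation aid rather than a hard enforcement block:

```python
# Sketch of a "labels required before grounding" check for AI-heavy sites.
# A visibility/recommendation aid, not Copilot's actual enforcement code.
def grounding_allowed(doc: dict, site_requires_label: bool) -> bool:
    """Unlabeled docs fail the check only when the site requires labels."""
    return (not site_requires_label) or doc.get("label") is not None

docs = [
    {"name": "roadmap.docx", "label": "Confidential"},
    {"name": "scratch-notes.docx", "label": None},
]
print([d["name"] for d in docs if grounding_allowed(d, site_requires_label=True)])
```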

Third-Party AI App Governance

Many organizations use third-party AI tools alongside Microsoft 365 Copilot (ChatGPT Enterprise, Google Gemini for Workspace, GitHub Copilot, etc.). Microsoft Defender for Cloud Apps provides visibility and governance controls for these apps.

Governing Third-Party AI with MDCA

  • App discovery: Defender for Cloud Apps discovers AI apps being used via cloud discovery logs (from proxy/firewall integration or MDE endpoint signals)
  • Risk scoring: Each discovered app gets a risk score based on security certifications, data handling policies, and regional compliance
  • Sanctioned vs. Unsanctioned: Mark apps as sanctioned (approved), unsanctioned (blocked), or monitored in the MDCA app catalog
  • Block unsanctioned AI apps: Configure block policies in your proxy/firewall via MDCA Cloud Discovery, or use MDE network protection to block access to unsanctioned AI service endpoints
  • Session controls for AI apps: For sanctioned apps with SAML/OIDC SSO, use Conditional Access App Control to inspect file uploads to AI services
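The sanctioned/unsanctioned/monitored triage above is essentially a thresholding decision on the risk score. A minimal sketch in that spirit (the thresholds, app entries, and scoring scale are made up for illustration; MDCA's actual catalog scoring is richer):

```python
# Sketch of sanctioned/unsanctioned triage by risk score, in the spirit of
# the MDCA app catalog. Thresholds and app entries are illustrative only.
def triage(app: dict, min_sanctioned_score: int = 7) -> str:
    if app["risk_score"] >= min_sanctioned_score:
        return "sanctioned"
    if app["risk_score"] <= 3:
        return "unsanctioned"   # candidate for a block policy
    return "monitored"          # needs manual review

apps = [
    {"name": "EnterpriseAI", "risk_score": 9},
    {"name": "FreeSummarizer", "risk_score": 2},
    {"name": "NicheAssistant", "risk_score": 5},
]
print({a["name"]: triage(a) for a in apps})
```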

DLP Policies for AI App File Uploads

Endpoint DLP can be configured to restrict users from uploading sensitive files to AI services via browser:

  • Create a DLP policy with Devices location
  • Set "Upload to cloud service domains" condition and include AI service URLs (chatgpt.com, gemini.google.com, etc.)
  • Block or audit when users attempt to upload files containing sensitive SITs to these domains
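The decision the three bullets describe can be sketched as: match the upload destination against the monitored AI domains, scan for sensitive information types (SITs), then block or audit. The domain list mirrors the examples above; the SIT regexes are simplified illustrations, not Purview's built-in classifiers:

```python
# Sketch of the Endpoint DLP decision described above. SIT patterns are
# simplified illustrations, not Purview's built-in classifiers.
import re

AI_DOMAINS = {"chatgpt.com", "gemini.google.com"}
SIT_PATTERNS = {
    "Credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "U.S. SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def evaluate_upload(domain: str, file_text: str) -> str:
    """Allow uploads to unmonitored domains; block sensitive content,
    audit everything else going to monitored AI services."""
    if domain not in AI_DOMAINS:
        return "allow"
    hits = [sit for sit, rx in SIT_PATTERNS.items() if rx.search(file_text)]
    return "block" if hits else "audit"

print(evaluate_upload("chatgpt.com", "SSN: 123-45-6789"))   # block
print(evaluate_upload("chatgpt.com", "meeting notes"))      # audit
```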

Purview Controls in AI Environments

A summary of how existing Purview tools apply specifically to AI security scenarios in Microsoft 365 Copilot environments:

  • Sensitivity labels with encryption: Documents with encrypted labels cannot be accessed by Copilot for unauthorized users; this is the strongest Copilot access control
  • DLP policies (Exchange, SharePoint, Devices): Prevent sensitive data from reaching AI services; for example, Endpoint DLP can block uploads of sensitive files to external AI apps
  • Communication Compliance: Detects when users include sensitive information types, offensive language, or policy-violating content in Copilot prompts (when configured to monitor AI interactions)
  • Retention policies on Copilot interactions: Copilot interaction history (chats) can be subject to retention policies; retain Copilot responses for compliance, or delete them after a period
  • eDiscovery: Copilot interaction history in Teams and Microsoft 365 Chat can be included in eDiscovery searches and legal holds
  • Audit logs: Copilot interaction events are logged in the Purview audit log (CopilotInteraction record type) and are searchable for investigations
  • DSPM for AI: Centralized AI-specific risk visibility - sensitivity labels in Copilot interactions, oversharing, top AI sites, sensitive prompts
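An investigation over exported audit data typically filters on the CopilotInteraction record type mentioned above. A minimal sketch over a JSON export (the field names mirror common audit-log exports but are illustrative; real records carry many more fields):

```python
# Sketch: filter exported Purview audit records for Copilot interaction
# events. Field names mirror common audit-log exports but are illustrative.
import json

export = json.dumps([
    {"RecordType": "CopilotInteraction", "UserId": "intern@contoso.com"},
    {"RecordType": "SharePointFileOperation", "UserId": "cfo@contoso.com"},
])

records = json.loads(export)
copilot_events = [r for r in records if r["RecordType"] == "CopilotInteraction"]
print(len(copilot_events), copilot_events[0]["UserId"])
```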
For the SC-401 exam, remember that Copilot does NOT bypass existing information protection controls - it enhances and operates within them. The exam may test your ability to recommend the right Purview control for a given AI security scenario: label encryption for access control, DLP for preventing sensitive data entry into prompts, DSPM for AI for understanding the organization's AI data risk posture.
