Intune as a Lateral Movement Plane: Detecting Red Team Abuse of Device Script Deployment
Microsoft Intune sits in a category that defenders still treat like an IT tool and attackers have treated like a C2 for at least the last two years. If you own an account with the Intune Administrator or Global Administrator role — or any custom role with microsoft.intune/deviceConfigurations/create — you can push a PowerShell script to every Windows endpoint in the tenant and have it execute as SYSTEM. No agent install, no lateral SMB, no Defender alert if the script is shaped right. The IME (Intune Management Extension) runs the payload because the tenant told it to, and the tenant told it to because a legitimate admin role said so.
That is the entire chain. It is short, it is loud in exactly one log source, and most SOCs are not watching that log source with anything resembling the care it needs.
This isn’t theoretical. Public tooling for the pattern has existed since the Maestro / SpecterOps writeups in 2024, and through 2025 it became a standard post-Entra-compromise move on red team engagements — partly because the alternatives got harder. Defender for Endpoint actually catches a lot of the classic lsass dumping now. AD CS ESC1 through ESC11 are mostly patched or monitored in shops that care. Intune script deployment, by contrast, is supposed to run arbitrary code as SYSTEM on every device it touches. That’s the product. Detecting abuse means separating the legitimate version of the bad thing from the malicious version of the bad thing, and the signal isn’t in the payload — it’s in the metadata around the deployment.
How the abuse actually works
The attacker needs one of: Global Admin, Intune Administrator, or a custom role with script-and-assignment rights. In a flat tenant with no PIM, that often comes from a phished session token rather than a password — the Primary Refresh Token or a stolen Graph access token from an infostealer log is enough, and conditional access usually doesn’t catch it because the token already passed CA at issuance.
From there the operator hits the Graph API (/deviceManagement/deviceManagementScripts for the older PowerShell scripts endpoint, or /deviceManagement/deviceShellScripts for macOS, or the newer /deviceManagement/deviceCustomAttributeShellScripts and the deviceHealthScripts proactive remediation endpoint which is the more interesting one because it runs on a schedule). They create the script object, assign it to a device or user group — usually a wide one, sometimes the built-in All Devices — and wait. IME checks in roughly every hour by default, though the proactive remediation path can be triggered faster.
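For concreteness, the operator's side is only two Graph calls. The sketch below is illustrative rather than any public tool's actual code: the display name, group ID, and payload are hypothetical placeholders, and the property names follow the beta deviceManagementScript schema, so verify them against current Graph documentation before building a test harness around them.

```powershell
# Sketch of the create-then-assign pattern, assuming a stolen Graph session
# with DeviceManagementConfiguration.ReadWrite.All. Display name, group ID,
# and payload are hypothetical placeholders; the payload here is inert.
$payload = [Convert]::ToBase64String(
    [Text.Encoding]::UTF8.GetBytes('Write-Output "stage two goes here"'))

$script = Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts" `
    -Body (@{
        displayName   = "Defender Config Sync"   # chosen to blend with EUC naming
        fileName      = "sync.ps1"
        scriptContent = $payload
        runAsAccount  = "system"                 # the whole point
    } | ConvertTo-Json)

# Assignment is a separate call against the new object's id -- this is why
# the detection correlates two audit events, not one.
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts/$($script.id)/assign" `
    -Body (@{
        deviceManagementScriptAssignments = @(@{
            target = @{
                "@odata.type" = "#microsoft.graph.groupAssignmentTarget"
                groupId       = "00000000-0000-0000-0000-000000000000"
            }
        })
    } | ConvertTo-Json -Depth 5)
```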
The payload itself is almost never the interesting part. Operators have learned to keep it small: a single line that pulls stage two from a CDN-fronted bucket, or a living-off-the-land call into bitsadmin or curl.exe. AMSI sees it, sometimes Defender sees it, but by the time those alerts fire on 4,000 endpoints simultaneously the SOC is triaging a flood, not a single incident. The deployment fan-out is the attack.
What the detection looks like in the SIEM
The authoritative source is the Entra audit log, specifically the AuditLogs table that the Microsoft 365 / Azure add-on for Splunk pulls via the Graph Activity API (or microsoft.graph.auditLogs/directoryAudits if you’re going straight to Graph). In Sentinel it’s AuditLogs with LoggedByService == "Intune". In Splunk with the standard CIM mapping it lands under sourcetype="azure:monitor:aad" or similar depending on which TA you have installed — and yes, the field names differ between the Splunk Add-on for Microsoft Cloud Services and the newer Microsoft 365 add-on, which is its own ongoing headache.
The operations you care about, by name:
- Create deviceManagementScript
- Patch deviceManagementScript
- Create deviceHealthScript (proactive remediations; this one is underwatched)
- Assign deviceManagementScript / the assignment-modification variants
- Create deviceShellScript, if you have macOS in the tenant
The core detection isn’t “someone created a script.” It’s “someone created a script and assigned it to a group within a short window, from a session that doesn’t match the admin’s normal pattern.” Roughly:
```kusto
AuditLogs
| where LoggedByService == "Intune"
| where OperationName in ("Create deviceManagementScript",
                          "Create deviceHealthScript",
                          "Patch deviceManagementScript")
| extend actor = tostring(InitiatedBy.user.userPrincipalName)
| project CreateTime = TimeGenerated, actor, CreateOp = OperationName
| join kind=inner (
    AuditLogs
    | where LoggedByService == "Intune"
    | where OperationName has "Assign"
    | extend actor = tostring(InitiatedBy.user.userPrincipalName)
    | project AssignTime = TimeGenerated, actor, AssignOp = OperationName
) on actor
| where AssignTime between (CreateTime .. (CreateTime + 15m))
```
That’s the bones of it. The real query in production is uglier because the assignment event references the script by objectId not name, and you need to correlate those — and because InitiatedBy is sometimes a user, sometimes a service principal, sometimes an empty object when the action came from a delegated app, and your join has to handle all three.
Expected volume in a normal mid-size tenant (call it 5,000 endpoints, a handful of Intune admins): script creation events run somewhere in the range of one to ten per week, with assignment events maybe two to three times that as admins tweak scope. If you see a single actor create + assign within five minutes and target a group with more than a few hundred members, that is the alert. Tune the group-size threshold to your environment — in a 50,000-seat tenant the legitimate floor is much higher.
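Group size is the severity lever, so the triage step needs to be cheap. A minimal sketch, assuming the Microsoft.Graph PowerShell SDK with GroupMember.Read.All consent; the group ID comes out of the assignment event's target and is a placeholder here:

```powershell
# Triage sketch: score the alert by the member count of the assigned group.
# $groupId is pulled from the assignment audit event; placeholder value here.
$groupId = "00000000-0000-0000-0000-000000000000"
$memberCount = (Get-MgGroupMember -GroupId $groupId -All).Count

# The floor matches the mid-size-tenant guidance above; raise it for
# larger fleets.
if ($memberCount -gt 300) {
    Write-Warning "Script assignment reaches $memberCount members, escalate"
}
```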
First round of tuning
The first week of this detection is noisy in a specific way, and the noise is informative. Expect three categories of false positive, roughly in this order of volume:
Legitimate platform admins doing what they’re paid to do. Someone in EUC pushes a Defender config tweak, a printer driver fix, a Win11 readiness check. They create, they assign, they assign to All Devices. This looks identical to the bad version. The fix isn’t a query change — it’s an allowlist keyed to the actor UPN and a change-ticket reference, which means you need the detection to fire to a triage queue rather than straight to a P1.
MSP automation, if you’re tenant-managed. CSP partners running multi-tenant automation through delegated admin or GDAP show up as service principals, not users, and they push scripts on a schedule. If you’re in this model, your detection has to either exclude the partner’s app IDs (brittle — they rotate) or accept that anything from the partner needs an out-of-band approval signal. This is a real SR-3 problem dressed as a detection problem.
Proactive remediations on a cadence. deviceHealthScripts are designed to run every N hours and look for drift. Once they’re deployed they don’t re-trigger the create/assign chain, so steady-state these are quiet — but the initial rollout of a new remediation pack will light up the detection for a day or two. The tuning here is to suppress on a known list of remediation package names, which means you need someone in EUC willing to tell you what’s deployed, which in some shops is its own multi-week negotiation.
One thing the detection will not catch on its own: an attacker who modifies an existing assigned script rather than creating a new one. Patch deviceManagementScript is in the list above for a reason. Watch for changes to the scriptContent field on existing objects, and treat any content change to a script that’s already assigned to a large group as the higher-severity variant. The Graph API returns a base64-encoded blob there; your detection content needs to compare hashes, not the blob itself, because the blob is large and your retention budget will not love you for indexing it raw.
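A sketch of the baseline side of that hash comparison, assuming Graph read access to the script objects (DeviceManagementConfiguration.Read.All); the output path is arbitrary, and the detail that the list endpoint omits scriptContent is worth re-verifying against current Graph behavior:

```powershell
# Sketch: hash decoded scriptContent so the SIEM compares digests, not blobs.
# Assumes DeviceManagementConfiguration.Read.All; output path is arbitrary.
$list = (Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts").value

$list | ForEach-Object {
    # The list view generally omits scriptContent; pull each full object.
    $full  = Invoke-MgGraphRequest -Method GET `
        -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts/$($_.id)"
    $bytes = [Convert]::FromBase64String($full.scriptContent)
    $sha   = [BitConverter]::ToString(
        [Security.Cryptography.SHA256]::Create().ComputeHash($bytes)) -replace '-', ''
    [PSCustomObject]@{ Id = $_.id; Name = $full.displayName; Sha256 = $sha }
} | Export-Csv -Path .\intune-script-hashes.csv -NoTypeInformation
```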
Where most teams get this wrong
The first mistake is treating Intune admin role assignment as a CM problem rather than an IA/AC problem. PIM-eligible assignment with approval workflow for Intune Administrator is non-negotiable in any tenant where the role can reach more than a couple hundred devices, and yet the number of tenants where Intune Administrator is a standing assignment on a shared service account is — well, it’s higher than it should be. If the role is standing, the detection above is your primary control, and the primary control should never be the detection.
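If the remediation lands on your plate, the eligible assignment can be scripted. A sketch, assuming the Microsoft.Graph.Identity.Governance module and RoleManagement.ReadWrite.Directory consent; the principal ID is a placeholder, and the approval workflow itself lives in the role's PIM policy settings rather than in this request:

```powershell
# Sketch: convert Intune Administrator from standing to PIM-eligible.
# Principal ID is a hypothetical placeholder; the role ID is the
# Intune Administrator built-in template ID.
$params = @{
    Action           = "adminAssign"
    Justification    = "Move Intune Administrator to eligible assignment"
    RoleDefinitionId = "3a2c62db-5318-420d-8d74-23affee5d9d5"
    PrincipalId      = "00000000-0000-0000-0000-000000000000"
    DirectoryScopeId = "/"
    ScheduleInfo     = @{
        StartDateTime = Get-Date
        Expiration    = @{ Type = "afterDuration"; Duration = "P180D" }
    }
}
New-MgRoleManagementDirectoryRoleEligibilityScheduleRequest -BodyParameter $params
```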
The second mistake is watching the wrong log. The Intune Devices > Audit logs blade in the portal is not the same data as the Entra AuditLogs table. The portal view is filtered, lossy, and lags. If your SOC is querying the portal during triage you are losing time. Pull it into your SIEM, set retention to at least 90 days (180 if your budget allows), and accept that the Graph audit ingestion has a 30-to-90-minute delay on a good day. That delay matters: the attacker’s payload has already executed on a chunk of the fleet before the create event hits your index. Detection here is partly about containment scope, not prevention.
The third mistake — and this is the one that bites hardest in incident response — is not having a pre-built “unassign and revoke” runbook. When you fire on this alert at 0200 and confirm it’s real, the action is: unassign the script, delete the script object, revoke the actor’s sessions (Revoke-MgUserSignInSession), disable the account, and then — only then — start looking at which endpoints already executed the payload. The IME logs that locally to C:\ProgramData\Microsoft\IntuneManagementExtension\Logs\IntuneManagementExtension.log and the agent execution results report back to the service, but you cannot rely on the service-side view because the attacker may have deleted the script before you got there. Pull the local logs.
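That runbook is short enough to sketch end to end. Assumptions: an IR session with DeviceManagementConfiguration.ReadWrite.All and User.ReadWrite.All, and that posting an empty assignment set clears the existing assignment, which is worth validating in a test tenant before you need it at 0200:

```powershell
# Containment sketch, in runbook order. $scriptId and $actorId come from the
# triaged audit events; placeholders here.
$scriptId = "<script object id from the create event>"
$actorId  = "<compromised admin's object id>"

# 1. Unassign: overwrite the assignment list with an empty set.
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts/$scriptId/assign" `
    -Body (@{ deviceManagementScriptAssignments = @() } | ConvertTo-Json)

# 2. Delete the script object (export it first for forensics -- the service-side
#    copy is your evidence if the attacker hasn't already removed it).
Invoke-MgGraphRequest -Method DELETE `
    -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts/$scriptId"

# 3. Revoke sessions, then disable the account.
Revoke-MgUserSignInSession -UserId $actorId
Update-MgUser -UserId $actorId -AccountEnabled:$false
```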
Control mapping
This pattern touches a wider slice of 800-53 than most people give it credit for, because it crosses change management, privileged access, and supply chain in one move.
| Control | Relevance |
|---|---|
| AC-2(7), AC-6(7) | Intune Admin and any custom role with script rights need privileged-account treatment and periodic least-privilege review. Standing assignment is the problem. |
| AC-2(11) | Conditional access on the admin role — device compliance, named locations, phishing-resistant MFA. Token theft routes around password MFA. |
| AU-2, AU-12 | AuditLogs ingestion from Entra/Intune. If this isn’t in the SIEM with adequate retention, nothing else here works. |
| CM-3, CM-5 | Script deployment is a configuration change. It needs change tickets and access restrictions on who can approve scope expansion. |
| IA-2(1), IA-2(2) | Phishing-resistant MFA on admin roles. FIDO2 or Windows Hello for Business, not SMS, not Authenticator push. |
| SI-4 | The detection itself. |
| SR-3 | If a CSP partner has delegated admin into your tenant, their compromise is your compromise. GDAP scope reduction is the lever. |
The AC-6 piece is the one I’d push hardest. Most tenants have far more accounts with effective script-push capability than the IT lead realizes, because custom Intune roles inherit permissions in non-obvious ways and the role-definition UI does not surface the blast radius. Run Get-MgRoleManagementDirectoryRoleDefinition and audit which custom roles include microsoft.intune/deviceManagementScripts/* or the equivalent device-config write permissions. The number is usually larger than the org chart suggests.
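A sketch of that audit, assuming the Microsoft.Graph.Identity.Governance module and RoleManagement.Read.Directory consent; the match pattern is deliberately broad because custom roles can carry either wildcard or per-resource Intune actions:

```powershell
# Sketch: enumerate custom directory roles whose permissions reach into the
# microsoft.intune namespace.
Get-MgRoleManagementDirectoryRoleDefinition -All |
    Where-Object { -not $_.IsBuiltIn } |
    Where-Object {
        $_.RolePermissions.AllowedResourceActions -match '^microsoft\.intune/'
    } |
    Select-Object DisplayName, Id, @{
        Name       = 'IntuneActions'
        Expression = { ($_.RolePermissions.AllowedResourceActions |
                        Where-Object { $_ -like 'microsoft.intune/*' }) -join '; ' }
    }
```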
The detection above is necessary. It is not sufficient. The control that actually moves risk is the one that keeps the role out of standing assignment in the first place — everything downstream of that is cleanup.