How to Automate Azure VM Patching with Update Manager

Published: 28 January 2026 - 9 min. read


That security patch you ignored last month? It’s the one an attacker just used to access your production VMs. You know you should patch regularly. Your compliance team knows you should patch regularly. That audit report from three months ago definitely knows you should patch regularly.

The problem isn’t awareness—it’s execution. Manual patching doesn’t scale past a dozen VMs, and the old Azure Automation Update Management required a Log Analytics workspace, an Automation account, and more patience than most IT teams possess.

Azure Update Manager eliminates those dependencies. It’s a native Azure service that handles patch assessment, scheduling, and deployment without the complexity tax. Here’s how to configure it.

What You’re Actually Building

Before diving into commands, understand what Azure Update Manager does. It’s a centralized service that orchestrates patching across Azure VMs, on-premises servers (via Azure Arc), and even AWS or Google Cloud machines. The service runs periodic compliance scans every 24 hours, stores results in Azure Resource Graph for querying, and respects custom maintenance windows you define.

Unlike the legacy solution, Update Manager operates through extensions—lightweight agents that interact with your OS’s native package manager (Windows Update on Windows, apt/yum/zypper on Linux). You don’t install these manually. Update Manager deploys them automatically when you trigger your first assessment or patch operation.
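If you're curious whether those extensions have landed on a machine yet, they show up in the VM's extension list after the first assessment or patch run. The extension names differ between Windows and Linux, so this sketch simply lists whatever is installed:

az vm extension list \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --query "[].{name:name, state:provisioningState}" \
  --output table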

Prerequisites

You need:

  • Azure VMs running Windows Server 2012 R2 or later, or supported Linux distributions (Ubuntu, RHEL, SUSE)

  • Azure CLI installed locally

  • Contributor access to the target subscription

  • For on-premises servers: Azure Arc agent installed (Update Manager charges approximately $5/server/month for Arc-enabled machines unless you’re running Defender for Servers Plan 2, Azure Local, or Extended Security Updates enabled by Azure Arc)

Verify Azure CLI access:

az account show

If that returns your subscription details, you’re ready.
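A couple of command groups used later (az maintenance and az graph) ship as CLI extensions rather than core commands. Recent CLI versions offer to install them on first use, but you can add them up front:

az extension add --name maintenance
az extension add --name resource-graph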


Pro Tip: Windows Server 2012/R2 requires Extended Security Updates (ESU) enabled through Azure Arc for continued patch support. If you’re still running 2012, onboard to Arc first.


Enable Periodic Assessment

Periodic Assessment scans your VMs every 24 hours to report missing patches without installing them. This gives you visibility into compliance status across your entire fleet.

Enable it on a specific VM:

az vm assess-patches \
  --resource-group myResourceGroup \
  --name myVM

This triggers an immediate assessment. Results appear in the Azure portal under the VM’s Update Manager blade, but the real power is querying at scale via Azure Resource Graph.
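The command above is a one-off scan. The periodic (daily) assessment itself is a per-VM setting; if you want it on a single machine without waiting for policy (covered next), you can set the assessment mode directly on the VM model. Shown here for Windows; swap in linuxConfiguration for Linux guests:

az vm update \
  --resource-group myResourceGroup \
  --name myVM \
  --set osProfile.windowsConfiguration.patchSettings.assessmentMode=AutomaticByPlatform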

To enforce periodic assessment automatically on all VMs using Azure Policy:

az policy assignment create \
  --name 'Enable-Periodic-Assessment' \
  --scope '/subscriptions/YOUR_SUBSCRIPTION_ID' \
  --policy '/providers/Microsoft.Authorization/policyDefinitions/59efceea-0c96-497e-a4a1-4eb2290dac15' \
  --mi-system-assigned \
  --location eastus \
  --params '{
    "assessmentMode": {
      "value": "AutomaticByPlatform"
    }
  }'

This built-in policy (59efceea-0c96-497e-a4a1-4eb2290dac15) uses the Modify effect to set the assessment mode on VMs that don't already have it configured. Give the first evaluation cycle 15-30 minutes. New VMs get assessed automatically.
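VMs that already existed and are flagged non-compliant aren't changed until a remediation task runs. Assuming the assignment's system-assigned identity has been granted the role the policy definition requires, you can kick one off from the CLI rather than waiting:

az policy remediation create \
  --name remediate-periodic-assessment \
  --policy-assignment Enable-Periodic-Assessment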

Verify assessment results across all VMs:

az graph query -q "patchassessmentresources | where type == 'microsoft.compute/virtualmachines/patchassessmentresults' | project vmName = split(id, '/')[8], status = properties.status, criticalPatchCount = properties.availablePatchCountByClassification.critical, securityPatchCount = properties.availablePatchCountByClassification.security"

That query pulls assessment data from Azure Resource Graph, which is where Update Manager stores compliance status—not Log Analytics, not some workspace you forgot existed. If a VM shows critical patches, you know it needs attention.

Configure Patch Orchestration Mode

This step determines whether Azure controls patching or you do. For custom schedules, you must set VMs to Customer Managed Schedules mode.

  • Customer Managed Schedules: Azure respects your defined maintenance windows. Best for production environments with change control requirements.

  • Azure Managed – Safe Deployment: Azure auto-patches during off-peak hours with health monitoring. Best for dev/test environments where convenience outweighs control.

  • Windows Automatic Updates: the OS installs patches immediately as they become available. Almost never what you want in enterprise environments.

Set a VM to Customer Managed Schedules mode:

az vm update \
  --resource-group myResourceGroup \
  --name myVM \
  --set osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform \
       osProfile.windowsConfiguration.patchSettings.automaticByPlatformSettings.bypassPlatformSafetyChecksOnUserSchedule=true

For Linux VMs, replace windowsConfiguration with linuxConfiguration.

That bypassPlatformSafetyChecksOnUserSchedule=true flag tells Azure: “I own this schedule. Don’t patch outside my window.” Without it, Azure might auto-patch during your business hours because it thinks it’s being helpful. It’s not.
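To confirm the change stuck before you build schedules on top of it, read the patch settings back from the VM model; you should see patchMode set to AutomaticByPlatform and the bypass flag set to true:

az vm show \
  --resource-group myResourceGroup \
  --name myVM \
  --query "osProfile.windowsConfiguration.patchSettings" \
  --output json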


Reality Check: If you skip this step and try to create a maintenance schedule, Azure will ignore it. Customer Managed Schedules mode is mandatory for scheduling to work.


Create a Maintenance Configuration

A maintenance configuration defines when patching happens and what gets installed. It’s an ARM resource that specifies start time, duration, recurrence, and patch classifications.

Create a maintenance configuration for monthly patching on the second Tuesday of the month (that's Patch Tuesday; the ring-based section later shows how to stagger production a week or two behind it):

az maintenance configuration create \
  --resource-group myResourceGroup \
  --resource-name MaintenanceConfig-ProdServers \
  --location eastus \
  --maintenance-scope InGuestPatch \
  --start-date-time "2026-02-10 02:00" \
  --duration "03:00" \
  --time-zone "Eastern Standard Time" \
  --recur-every "Month Second Tuesday" \
  --install-patches-windows-parameters classifications-to-include="Critical,Security,UpdateRollup" \
  --reboot-setting IfRequired \
  --extension-properties InGuestPatchMode="User"

Key parameters explained:

  • --maintenance-scope InGuestPatch: Tells Azure this is for OS patching, not host maintenance

  • --duration "03:00": Three-hour maintenance window. Update Manager reserves the last 10 minutes (Windows) or 15 minutes (Linux) for reboots, so effective install time is 2 hours 50 minutes on Windows

  • --recur-every "Month Second Tuesday": Runs monthly on the second Tuesday of each month

  • --reboot-setting IfRequired: Only reboots if a patch requires it. Alternative values: Always (reboot regardless) or Never (leave VMs in pending-reboot state, which defeats the purpose of patching)
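
  • --install-patches-windows-parameters: Carries the Windows patch selection, including classifications-to-include and optional KB-level include/exclude filters

  • --extension-properties InGuestPatchMode="User": Flags the configuration as a customer-managed (user) schedule, pairing it with the Customer Managed Schedules orchestration mode you set on the VMs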

The maintenance window logic matters more than you’d think. Before installing each patch, Update Manager calculates: (current time + expected install time + reboot buffer). If that exceeds the window end time, the patch doesn’t install. Translation: If your cumulative Windows update takes 50 minutes and you only have an hour left in the window, Update Manager skips it rather than risk leaving your VM in a partial-update state. This is why Microsoft recommends windows of 90 minutes minimum—and significantly longer if you’re patching VMs that haven’t been updated in months.

Verify the configuration:

az maintenance configuration show \
  --resource-group myResourceGroup \
  --resource-name MaintenanceConfig-ProdServers
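As configurations accumulate (you'll add three more for ring-based patching below), a quick inventory keeps names and scopes straight:

az maintenance configuration list --output table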

Assign VMs to the Maintenance Configuration

You can assign VMs statically (manual selection) or dynamically (query-based criteria). Dynamic scoping is the only approach that scales beyond a handful of servers.

Static Assignment

Assign a specific VM:

az maintenance assignment create \
  --resource-group myResourceGroup \
  --location eastus \
  --resource-name myVM \
  --resource-type virtualMachines \
  --provider-name Microsoft.Compute \
  --configuration-assignment-name myVM-assignment \
  --maintenance-configuration-id "/subscriptions/YOUR_SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.Maintenance/maintenanceConfigurations/MaintenanceConfig-ProdServers"
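To confirm the assignment landed, list what's attached to the VM using the same resource-addressing parameters:

az maintenance assignment list \
  --resource-group myResourceGroup \
  --resource-name myVM \
  --resource-type virtualMachines \
  --provider-name Microsoft.Compute \
  --output table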

That works for one VM. For 50 VMs, you’re copying and pasting. For 500 VMs, you’re reconsidering your career choices.

Dynamic Scoping

Dynamic scoping assigns VMs based on criteria evaluated at runtime—subscription, resource group, location, tags. Tag a VM with Environment: Production five minutes before the schedule runs, and it's automatically included. Remove the tag, and it's excluded. No manual updates to assignment lists.

Create a dynamic scope for production VMs tagged with Environment: Production:

az maintenance assignment create \
  --resource-group myResourceGroup \
  --location eastus \
  --resource-name DynamicScope-Production \
  --maintenance-configuration-id "/subscriptions/YOUR_SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.Maintenance/maintenanceConfigurations/MaintenanceConfig-ProdServers" \
  --filter-resource-types "Microsoft.Compute/virtualMachines" \
  --filter-tags "Environment=Production" \
  --filter-locations "eastus" "westus2"

This scope includes all VMs in East US or West US 2 with the Environment: Production tag. When the maintenance window triggers, Update Manager queries these criteria in real time and patches matching VMs.

A single dynamic scope supports up to 1,000 resource associations. You can attach multiple scopes to one maintenance configuration if you need more granular control.
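Before trusting a scope, sanity-check which VMs actually carry the tag today; a JMESPath filter over the VM list is enough:

az vm list \
  --query "[?tags.Environment=='Production'].{name:name, location:location}" \
  --output table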


Quick Win: Use Azure Policy to auto-tag new VMs with environment identifiers. Your dynamic scopes will pick them up automatically without anyone remembering to manually assign them to patch schedules.


Implement Ring-Based Patching

Patching everything simultaneously is how you discover that a patch breaks your application at 3 AM. Staged rollouts (rings) reduce risk by validating patches in non-production before touching anything customer-facing.

  • Ring 0 (Dev/Test), Patch Tuesday + 0 days: immediate validation of patch compatibility

  • Ring 1 (Pre-Production), Patch Tuesday + 7 days: application testing with production-like workloads

  • Ring 2 (Production), Patch Tuesday + 14 days: full deployment after validation

Create three maintenance configurations that differ only in name, start date, and recurrence:

# Ring 0: Dev/Test - Second Tuesday of month
az maintenance configuration create \
  --resource-group myResourceGroup \
  --resource-name MaintenanceConfig-Ring0 \
  --location eastus \
  --maintenance-scope InGuestPatch \
  --start-date-time "2026-02-10 02:00" \
  --duration "03:00" \
  --time-zone "Eastern Standard Time" \
  --recur-every "Month Second Tuesday" \
  --install-patches-windows-parameters classifications-to-include="Critical,Security,UpdateRollup" \
  --reboot-setting IfRequired \
  --extension-properties InGuestPatchMode="User"

# Ring 1: Pre-Prod - Third Tuesday of month
az maintenance configuration create \
  --resource-group myResourceGroup \
  --resource-name MaintenanceConfig-Ring1 \
  --location eastus \
  --maintenance-scope InGuestPatch \
  --start-date-time "2026-02-17 02:00" \
  --duration "03:00" \
  --time-zone "Eastern Standard Time" \
  --recur-every "Month Third Tuesday" \
  --install-patches-windows-parameters classifications-to-include="Critical,Security,UpdateRollup" \
  --reboot-setting IfRequired \
  --extension-properties InGuestPatchMode="User"

# Ring 2: Production - Fourth Tuesday of month
az maintenance configuration create \
  --resource-group myResourceGroup \
  --resource-name MaintenanceConfig-Ring2 \
  --location eastus \
  --maintenance-scope InGuestPatch \
  --start-date-time "2026-02-24 02:00" \
  --duration "03:00" \
  --time-zone "Eastern Standard Time" \
  --recur-every "Month Fourth Tuesday" \
  --install-patches-windows-parameters classifications-to-include="Critical,Security,UpdateRollup" \
  --reboot-setting IfRequired \
  --extension-properties InGuestPatchMode="User"

Create dynamic scopes for each ring using tags:

# Ring 0 scope
az maintenance assignment create \
  --resource-group myResourceGroup \
  --location eastus \
  --resource-name DynamicScope-Ring0 \
  --maintenance-configuration-id "/subscriptions/YOUR_SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.Maintenance/maintenanceConfigurations/MaintenanceConfig-Ring0" \
  --filter-resource-types "Microsoft.Compute/virtualMachines" \
  --filter-tags "PatchRing=Ring0"

# Ring 1 scope
az maintenance assignment create \
  --resource-group myResourceGroup \
  --location eastus \
  --resource-name DynamicScope-Ring1 \
  --maintenance-configuration-id "/subscriptions/YOUR_SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.Maintenance/maintenanceConfigurations/MaintenanceConfig-Ring1" \
  --filter-resource-types "Microsoft.Compute/virtualMachines" \
  --filter-tags "PatchRing=Ring1"

# Ring 2 scope
az maintenance assignment create \
  --resource-group myResourceGroup \
  --location eastus \
  --resource-name DynamicScope-Ring2 \
  --maintenance-configuration-id "/subscriptions/YOUR_SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.Maintenance/maintenanceConfigurations/MaintenanceConfig-Ring2" \
  --filter-resource-types "Microsoft.Compute/virtualMachines" \
  --filter-tags "PatchRing=Ring2"

Tag VMs according to their deployment ring:

az vm update \
  --resource-group myResourceGroup \
  --name myDevVM \
  --set tags.PatchRing=Ring0

az vm update \
  --resource-group myResourceGroup \
  --name myProdVM \
  --set tags.PatchRing=Ring2

Now dev VMs patch on the second Tuesday, pre-prod on the third Tuesday, and production on the fourth Tuesday. If Ring 0 reveals a problematic patch, you have two weeks to respond before it hits production. And because you used dynamic scoping with tags, new VMs automatically join the correct ring based on their tag—no manual schedule assignments required.
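Because ring membership is just a tag, it's worth auditing occasionally so tag drift doesn't quietly pull a production VM into an early ring, or out of patching entirely. A Resource Graph query covers every subscription you can see:

az graph query -q "resources | where type == 'microsoft.compute/virtualmachines' | project name, ring = tostring(tags['PatchRing'])"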

Query Patch Status

After the first maintenance window runs, verify results using Azure Resource Graph:

az graph query -q "patchinstallationresources | where type == 'microsoft.compute/virtualmachines/patchinstallationresults' | project vmName = split(id, '/')[8], status = properties.status, installedPatchCount = properties.installedPatchCount, failedPatchCount = properties.failedPatchCount, rebootStatus = properties.rebootStatus, lastModified = properties.lastModifiedDateTime"

This returns installation results for all VMs: how many patches installed, how many failed, whether a reboot occurred, and when the operation completed.

For compliance reporting, query which VMs have critical patches pending:

az graph query -q "patchassessmentresources | where type == 'microsoft.compute/virtualmachines/patchassessmentresults' | where properties.availablePatchCountByClassification.critical > 0 | project vmName = split(id, '/')[8], criticalPatchCount = properties.availablePatchCountByClassification.critical, assessmentTime = properties.lastModifiedDateTime"

If that query returns rows, those VMs either weren’t included in a maintenance schedule or the maintenance window was too short to install their patches. Check orchestration mode and maintenance window duration.
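When that report surfaces something that can't wait for the next window, you don't have to bypass Update Manager: the CLI supports a one-off, on-demand install against a single VM (shown for Windows; Linux uses --classifications-to-include-linux instead):

az vm install-patches \
  --resource-group myResourceGroup \
  --name myVM \
  --maximum-duration PT2H \
  --reboot-setting IfRequired \
  --classifications-to-include-win Critical Security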

Troubleshooting Common Failures

Maintenance Window Exceeded

Symptom: Patches show as “Not Started” or “Skipped” in installation results.

Cause: The maintenance window is too short for the number or size of updates. Update Manager calculates whether each patch can complete within the remaining window time. If not, it skips the patch.

Fix: Extend the window duration. For VMs that haven’t been patched in months, start with a 4-hour window. After they’re current, reduce to 2-3 hours for monthly maintenance.

az maintenance configuration update \
  --resource-group myResourceGroup \
  --resource-name MaintenanceConfig-ProdServers \
  --duration "04:00"

VM Agent Not Ready

Symptom: Update Manager reports “Agent not responding” or “Unable to connect to VM.”

Cause: The Azure VM Agent or Azure Arc agent isn’t running or can’t communicate with Azure.

Fix: Restart the agent service:

Windows:

Restart-Service -Name RdAgent -Force
Restart-Service -Name WindowsAzureGuestAgent -Force

Linux:

sudo systemctl restart walinuxagent

On RHEL and SUSE the guest agent service is named waagent rather than walinuxagent, so adjust the unit name accordingly.

Verify the agent is running:

az vm get-instance-view \
  --resource-group myResourceGroup \
  --name myVM \
  --query "instanceView.vmAgent.statuses"

If the status isn’t “Ready,” verify network connectivity based on your update source. Windows VMs need access to Windows Update endpoints. Linux VMs need access to their distribution’s repositories (RHUI for Red Hat, standard repos for Ubuntu/SUSE). If you’re using WSUS, ensure the VM can reach your WSUS server.

Windows Update Service Failures

Symptom: Windows VMs fail to assess or install patches with errors referencing Windows Update API.

Cause: Often caused by Group Policy settings pointing to a non-existent WSUS server or the UseWUServer registry key set incorrectly.

Fix: If you use WSUS, verify the server is reachable. If not, disable WSUS temporarily:

Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" -Name "UseWUServer" -Value 0
Restart-Service -Name wuauserv

Update Manager respects WSUS configurations—if your Windows Update client points to WSUS, Update Manager triggers scans and installs against that WSUS server, not Microsoft Update. The service manages the schedule, but the client-side configuration controls the update source.

What You Just Configured

You’ve eliminated manual patching across your Azure VMs. Periodic assessment runs daily scans automatically, maintenance configurations define exactly when patches install, and dynamic scoping ensures new VMs get patched without anyone remembering to add them to static lists. Ring-based deployments give you validation windows before patches reach production.

The next time an auditor asks for patch compliance status, run one Azure Resource Graph query instead of logging into 50 VMs to check manually. And when a critical zero-day patch drops, you can trigger an immediate install on all affected VMs through Update Manager’s on-demand patching without waiting for your monthly maintenance window.

That security patch you ignored last month won’t be ignored this month. Or the month after. Because patching is automated now—which means it actually happens.
