You’ve been running the same SQL Server instance on aging hardware for years. The patching cycles are relentless, the licensing costs keep climbing, and your DBA just handed you a migration project with a deadline. Moving to Azure SQL Database—Microsoft’s fully managed Platform-as-a-Service (PaaS) database offering—gets you out of the infrastructure business. No more manual backups, no patching OS layers, no worrying about disk capacity at 2 a.m.
This guide walks you through the three phases of an actual migration: assessment, data movement, and post-migration validation. You’ll use the tools Microsoft currently recommends, including Azure Database Migration Service (DMS) and SqlPackage.
## Prerequisites
Before you touch anything, confirm your environment meets these requirements:
- SQL Server 2008 or later on your source instance (DMS supports this range)
- An Azure subscription with Contributor or Owner access
- A target Azure SQL Database already provisioned
- Network connectivity from your on-premises server to Azure (VPN or ExpressRoute, or firewall rules opened for public endpoints)
- The Az.DataMigration PowerShell module installed: `Install-Module -Name Az.DataMigration`
- SqlPackage installed on a machine with source database access
One thing to set expectations on upfront: DMS migrations to Azure SQL Database are offline only. Your application goes down when the migration starts, and comes back up when it ends. If that’s a problem, Azure SQL Managed Instance supports online migration—but that’s a different target and a different post.
Before starting, provision your target Azure SQL Database. You’ll need a logical SQL server in Azure and a database with enough DTU (Database Transaction Unit) or vCore capacity for your workload. If you’re unsure about sizing, the assessment phase (below) will give you right-sized SKU recommendations—run the assessment before you provision, not after.
Pro Tip: If you’re still using the Azure SQL Migration extension for Azure Data Studio, stop. It has reached end-of-life. The Azure portal and PowerShell are the supported paths going forward.
## Phase 1: Assess Your Database
You can’t migrate what you don’t understand. Skipping the assessment phase is how people discover incompatibilities at 11 p.m. during the actual cutover.
If your SQL Server instances are enrolled in Azure Arc, you get continuous migration assessment for free—no separate tool required. The Migration Dashboard in the Azure portal shows readiness status, compatibility issues, and SKU recommendations calculated from actual workload data. It runs on a weekly schedule automatically.
For instances not on Azure Arc, enroll them now:
```powershell
# Install the Azure Connected Machine agent on your SQL Server host,
# then register it with Arc using the azcmagent binary
azcmagent connect `
    --resource-group "rg-migration" `
    --location "eastus" `
    --subscription-id "<your-subscription-id>"
```
The assessment readiness report classifies each database as:
| Status | Meaning |
|---|---|
| Ready | No blockers—can migrate as-is |
| Ready with conditions | Minor issues to resolve before migration |
| Not ready | Compatibility blockers exist; may need target change |
“Not ready” for Azure SQL Database often means you’re using features like SQL Server Agent jobs or cross-database queries. Those are available in Managed Instance—worth noting if you hit that wall.
Key Insight: A “Not ready” result is not a dead end. Most blockers stem from feature gaps that Azure SQL Managed Instance handles. Check both targets before ruling out the PaaS path entirely.
## Phase 2: Migrate the Schema
Here’s where most guides quietly skip a critical step: DMS migrates data only. Your schema has to exist in the target database before DMS runs, or the migration fails.
Export your schema from the source using SqlPackage’s Extract action, which produces a DACPAC file—a portable snapshot of your database schema:
```shell
sqlpackage /Action:Extract \
  /SourceServerName:"your-sql-server" \
  /SourceDatabaseName:"YourDatabase" \
  /SourceUser:"sa" \
  /SourcePassword:"YourPassword" \
  /TargetFile:"YourDatabase.dacpac"
```
Then publish the schema to your Azure SQL Database target:
```shell
sqlpackage /Action:Publish \
  /SourceFile:"YourDatabase.dacpac" \
  /TargetServerName:"yourserver.database.windows.net" \
  /TargetDatabaseName:"YourDatabase" \
  /TargetUser:"sqladmin" \
  /TargetPassword:"YourPassword"
```
Warning: SqlPackage’s Publish action is destructive by default—it drops and recreates objects to match the DACPAC. Run it against a fresh database, not a production target with existing data.
Verify the schema landed correctly before moving on:
```sql
-- Run against your Azure SQL Database target
SELECT TABLE_NAME, TABLE_TYPE
FROM INFORMATION_SCHEMA.TABLES
ORDER BY TABLE_NAME;
```
If your table count matches the source, you’re ready for data movement.
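If you'd rather script the comparison than eyeball two result sets, the check is a simple set difference. A minimal Python sketch; the table names below are hypothetical stand-ins for the query's output, which you'd load through whatever SQL client you already use:

```python
# Compare table lists pulled from INFORMATION_SCHEMA.TABLES on
# source and target. Inputs are plain lists of table names.

def diff_tables(source_tables, target_tables):
    """Return (missing_on_target, unexpected_on_target), both sorted."""
    src, tgt = set(source_tables), set(target_tables)
    return sorted(src - tgt), sorted(tgt - src)

# Hypothetical example data standing in for real query results
missing, extra = diff_tables(
    ["Customers", "OrderLines", "Orders"],
    ["Customers", "Orders"],
)
print(missing)  # tables the Publish step failed to create
print(extra)    # objects on the target that don't exist on the source
```

An empty `missing` list is your green light; anything in it means the Publish step didn't land cleanly.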
## Phase 3: Migrate the Data
With schema in place, you can run the DMS migration. DMS uses a Self-Hosted Integration Runtime (SHIR)—a secure agent installed on your network—to read data from your source and push it to Azure without requiring direct inbound connections.
Start by creating a DMS instance if you don’t have one:
```powershell
New-AzDataMigrationSqlService `
    -ResourceGroupName "rg-migration" `
    -Name "dms-prod" `
    -Location "eastus"
```
Register your SHIR with the DMS service: download the SHIR installer from the Azure portal under your DMS resource, install it on an on-premises machine with line-of-sight to your source SQL Server, and register it with an authentication key generated during setup. The SHIR machine needs outbound HTTPS access (port 443) to Azure; it doesn't require inbound firewall rules, which is why it works even when your SQL Server sits behind a corporate firewall.
Verify the SHIR is connected and healthy before starting the migration:
```powershell
# Check SHIR status
Get-AzDataMigrationSqlService `
    -ResourceGroupName "rg-migration" `
    -Name "dms-prod" |
    Select-Object -ExpandProperty IntegrationRuntimeState
```
A status of Online means you’re good to proceed. Limited or Offline means the SHIR can’t reach Azure—check the machine’s outbound network rules before burning time on a migration attempt.
Here’s a quick reference for SHIR states and what they tell you:
| SHIR State | Meaning | Next Step |
|---|---|---|
| Online | Connected and healthy | Proceed with migration |
| Limited | Partial connectivity | Check specific node logs |
| Offline | Cannot reach Azure | Check outbound port 443 rules |
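If you're scripting the cutover, the go/no-go decision in the table above is worth encoding so a pre-flight script can fail fast. A small Python sketch, assuming the state strings match what the service reports:

```python
# Map each SHIR state from the table above to the next step.
# State strings are assumed to match IntegrationRuntimeState values.

def shir_gate(state):
    actions = {
        "Online": "proceed",
        "Limited": "check specific node logs",
        "Offline": "check outbound port 443 rules",
    }
    return actions.get(state, "unknown state; do not start migration")

print(shir_gate("Online"))
```

Only an `Online` result should let the script continue to the migration step; everything else stops the run before you burn time on a doomed attempt.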
Then kick off the migration:
```powershell
New-AzDataMigrationToSqlDb `
    -ResourceGroupName "rg-migration" `
    -SqlDbInstanceName "yourserver" `
    -TargetDbName "YourDatabase" `
    -MigrationService "/subscriptions/<sub-id>/resourceGroups/rg-migration/providers/Microsoft.DataMigration/sqlMigrationServices/dms-prod" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/rg-migration/providers/Microsoft.Sql/servers/yourserver/databases/YourDatabase" `
    -SourceSqlConnectionAuthentication "SqlAuthentication" `
    -SourceSqlConnectionDataSource "your-sql-server" `
    -SourceSqlConnectionUserName "sa" `
    -SourceSqlConnectionPassword "YourPassword" `
    -SourceDatabaseName "YourDatabase" `
    -TargetSqlConnectionAuthentication "SqlAuthentication" `
    -TargetSqlConnectionDataSource "yourserver.database.windows.net" `
    -TargetSqlConnectionUserName "sqladmin" `
    -TargetSqlConnectionPassword "YourPassword"
```
Monitor migration status:
```powershell
Get-AzDataMigrationToSqlDb `
    -ResourceGroupName "rg-migration" `
    -SqlDbInstanceName "yourserver" `
    -TargetDbName "YourDatabase"
```
The State field moves from InProgress to Succeeded when data movement completes. The full sample scripts for this pattern are available in the Azure-Samples/data-migration-sql repository.
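If a script is driving the cutover, you'll want to poll that State field rather than re-run the cmdlet by hand. A sketch of the loop logic in Python; `get_state` is a hypothetical callable standing in for however you fetch the state (wrapping the cmdlet in a subprocess, for example), injected so the loop itself is testable:

```python
import time

def wait_for_migration(get_state, poll_seconds=60, max_polls=720):
    """Poll until the migration reaches a terminal state.

    get_state: zero-arg callable returning the current State string.
    Defaults allow up to 12 hours of polling at one-minute intervals.
    """
    terminal = {"Succeeded", "Failed", "Canceled"}
    for _ in range(max_polls):
        state = get_state()
        if state in terminal:
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("migration did not reach a terminal state")

# Usage with a stubbed state sequence (real code would query DMS):
states = iter(["InProgress", "InProgress", "Succeeded"])
print(wait_for_migration(lambda: next(states), poll_seconds=0))
```

Treat `Failed` and `Canceled` as hard stops: inspect the migration details in the portal before retrying rather than looping blindly.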
Key Insight: For smaller databases where downtime isn’t a concern, you can skip DMS entirely and use a BACPAC file instead. SqlPackage’s Export action bundles schema and data into one file—simpler, but not suitable for large or transactionally active databases due to consistency risks.
## Phase 4: Validate and Optimize
The migration completes, you flip your connection strings, and you’re done. Sort of.
Before you declare victory, run row count checks to confirm data completeness:
```sql
-- Run this on both source and target, compare results.
-- RowCount is a reserved keyword in T-SQL, so the alias needs brackets;
-- SUM/GROUP BY handles partitioned tables, which have one row per partition.
SELECT t.name AS TableName, SUM(p.rows) AS [RowCount]
FROM sys.tables t
JOIN sys.partitions p ON t.object_id = p.object_id
WHERE p.index_id IN (0, 1)
GROUP BY t.name
ORDER BY t.name;
```
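Comparing the two result sets by hand gets tedious past a dozen tables. A minimal Python sketch, assuming you've loaded each query's output into a `{table: row_count}` dict through your SQL client of choice:

```python
# Report tables whose row counts differ between source and target.
# A target value of None means the table is missing entirely.

def rowcount_mismatches(source_counts, target_counts):
    """Return {table: (source_rows, target_rows)} for every mismatch."""
    mismatches = {}
    for table, rows in source_counts.items():
        tgt = target_counts.get(table)
        if tgt != rows:
            mismatches[table] = (rows, tgt)
    return mismatches

# Hypothetical example data standing in for real query results
print(rowcount_mismatches(
    {"Customers": 1200, "Orders": 58000},
    {"Customers": 1200, "Orders": 57990},
))
```

An empty dict means the counts line up; anything else is worth investigating before you flip connection strings.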
Once you’ve confirmed row counts match, let your workload run for a few days before you optimize. Query Performance Insight surfaces the top resource-consuming and long-running queries across your workload—check it after you have real traffic to analyze.
Azure SQL Database’s automatic tuning can create and drop indexes based on workload patterns without manual intervention. Enable it, then verify what it’s doing rather than letting it run blind:
```sql
-- Check auto-tuning recommendations in your target database
SELECT name, reason, score, details
FROM sys.dm_db_tuning_recommendations
WHERE JSON_VALUE(state, '$.currentValue') = 'Active'
ORDER BY score DESC;
```
A common post-migration surprise: queries that ran fine on-premises slow down on Azure SQL Database because the service tier’s DTU or vCore count doesn’t match your actual workload. The Performance Guidance documentation covers how to interpret monitoring data and scale appropriately. Scaling up is one CLI command:
```shell
az sql db update \
  --resource-group rg-migration \
  --server yourserver \
  --name YourDatabase \
  --service-objective S4
```
## Choosing the Right Service Tier
If you provisioned your target database based on a guess rather than the Arc assessment recommendations, you may need to resize after observing real-world traffic. Azure SQL Database offers two purchasing models:
| Model | Best For | Scaling |
|---|---|---|
| DTU (Basic, Standard, Premium) | Predictable workloads, simpler management | Fixed resource bundles |
| vCore (General Purpose, Business Critical, Hyperscale) | Variable workloads, need CPU/memory separation | Independent CPU and memory control |
For most migrations from on-premises SQL Server, the vCore model gives you more predictable cost mapping—your on-premises server has a known core count, and vCore pricing translates directly.
## What to Watch After Cutover
Migrating from SQL Server to Azure SQL Database follows a predictable sequence: assess compatibility with Azure Arc, migrate schema with SqlPackage, move data with DMS, and validate before cutting over production traffic. The pieces that cause problems are almost always the ones people skip—either the assessment phase (which surfaces compatibility blockers before you’re committed) or the schema migration step (which DMS won’t handle for you).
If you hit a “not ready” assessment result, don’t immediately assume the migration is stuck. Many blockers come down to feature gaps that Azure SQL Managed Instance handles—checking both targets side by side before ruling out the PaaS path entirely is worth the time.
The tools Microsoft recommends change periodically—the Azure SQL Migration extension for Azure Data Studio has reached end-of-life, and the Data Migration Assistant (DMA) has been deprecated. Staying with the Azure portal DMS workflow and the Az.DataMigration PowerShell module keeps you on the supported path.