Azure Confidential Computing: Protect Data During Processing

Published: 14 April 2026 - 8 min. read

You encrypt data at rest. You encrypt data in transit. Your security team has done everything right—TLS everywhere, encrypted disks, strict access controls. And yet the moment your application actually runs, all of that encryption disappears. The data gets decrypted into RAM so the CPU can work with it, and for that window of time, your most sensitive information sits in plaintext in memory that, depending on your cloud setup, your cloud provider’s administrators could theoretically access.

That’s the vulnerability that Azure Confidential Computing is built to close.

The Third State of Data

Traditional encryption solves two problems: protecting data stored on disk (data at rest) and protecting data moving across a network (data in transit). Both are well-understood problems with well-understood solutions. But there’s a third state that gets far less attention: data in use—meaning data that’s actively being processed by an application.

When a program runs a computation, it needs to read and write plaintext values to memory. This is unavoidable. A CPU can’t operate on ciphertext (at least, not without homomorphic encryption, which exists but carries enormous performance costs). So for decades, “secure” cloud infrastructure has meant trusting the platform your workload runs on—the hypervisor, the host OS, the platform operator’s administrators. You encrypt at rest and in transit, but you implicitly trust everything in the middle.

Confidential computing challenges that assumption at the hardware level.

What a Trusted Execution Environment Actually Is

A Trusted Execution Environment (TEE) is an isolated region of memory and execution that the CPU itself enforces. Code and data inside a TEE stay encrypted in memory, with the keys held inside the processor—even other processes running on the same physical machine, including the hypervisor, can’t read them. The infrastructure operator sitting at the host console could take a memory dump and get nothing but ciphertext. Not particularly useful.

The Confidential Computing Consortium, an initiative under the Linux Foundation, defines it this way: confidential computing protects data in use by performing computation in a hardware-based, attested Trusted Execution Environment. Microsoft is a founding member of the CCC and has built Azure Confidential Computing around that definition.

The key word in that definition is “attested.” Hardware attestation is how you know the TEE you’re sending your data to is legitimate—that it’s running on real, uncompromised hardware with the expected configuration and software stack. Before any sensitive data enters the enclave, the TEE generates a cryptographic proof that a remote verifier can inspect. Microsoft Azure Attestation handles that verification, and Azure Key Vault Managed HSM only releases decryption keys after attestation succeeds. If someone has tampered with the hardware or loaded unexpected software, the attestation fails and the keys stay locked.

This matters more than it might seem at first glance. It’s not just that your data is encrypted—it’s that you can verify the integrity of the execution environment before trusting it with anything sensitive. That’s a fundamentally different security model.

Two Isolation Approaches: Enclaves vs. Confidential VMs

Azure Confidential Computing isn’t a single product—it’s a family of services built on top of different hardware implementations. Understanding the distinction between them helps you pick the right tool for the job.

Process-Level Isolation: Intel SGX

Intel Software Guard Extensions (SGX) provides the smallest possible Trusted Computing Base (TCB)—the set of hardware and software you have to trust for the security guarantee to hold. With SGX, your application code itself is divided into trusted and untrusted components. Sensitive operations run inside an “enclave,” a protected memory region that the CPU encrypts using a dedicated Memory Encryption Engine. Everything outside the enclave—including the guest OS, the hypervisor, and the host OS—is explicitly untrusted.

The security properties are exceptional. But the tradeoff is substantial application rework. You need to partition your codebase, write code against specialized SDKs, and think carefully about what crosses the enclave boundary. SGX is the right choice when you need the smallest possible trust footprint and you’re willing to do the engineering work to achieve it.
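What "partitioning your codebase" means in practice is easiest to see as an interface sketch. The toy below is not SGX SDK code—Python cannot enforce memory isolation, and the class and method names are invented—but it illustrates the design discipline SGX imposes: secrets live only in the trusted component, and untrusted code reaches them through a narrow, explicit call surface (the "ecalls").

```python
import hashlib
import hmac

class Enclave:
    """Trusted component: holds plaintext secrets. Everything outside this
    class is, by convention, the untrusted side of the partition."""

    def __init__(self, secret_key: bytes):
        self._secret_key = secret_key  # never crosses the enclave boundary

    # The only operations exposed across the boundary ("ecalls"):
    def ecall_sign(self, message: bytes) -> bytes:
        return hmac.new(self._secret_key, message, hashlib.sha256).digest()

    def ecall_verify(self, message: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.ecall_sign(message), tag)

# Untrusted application code: it can request signatures, but the key itself
# never appears in its view. (In real SGX the CPU enforces this separation;
# here it is only a convention, which is exactly the gap the hardware closes.)
enclave = Enclave(secret_key=b"k" * 32)
tag = enclave.ecall_sign(b"transfer $100")
```

Deciding which functions become ecalls—and keeping that surface as small as possible—is the bulk of the engineering work the main text describes.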

Azure offers SGX-capable DCsv2 and DCsv3 VM families for workloads that justify that investment.

VM-Level Isolation: AMD SEV-SNP and Intel TDX

For most workloads, AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) and Intel Trust Domain Extensions (TDX) take a different approach: they encrypt the entire memory space of a virtual machine. When an AMD SEV-SNP VM boots, the host CPU generates a unique, hardware-managed encryption key for that VM’s memory. The hypervisor and host OS only ever see ciphertext.
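The per-VM key idea can be modeled in a few lines. This is a deliberately crude sketch: real SEV-SNP encrypts pages with AES in the memory controller, while this toy uses a SHA-256 counter-mode keystream, and every name in it is invented for illustration. The point it demonstrates is the asymmetry: the guest reads plaintext, while anyone dumping physical memory—the hypervisor included—sees only ciphertext.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy keystream (SHA-256 in counter mode); real hardware uses AES."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

class ConfidentialVM:
    """Each VM gets a unique, hardware-managed key at boot. Pages are
    encrypted transparently on write and decrypted on read—but only
    with that VM's key, which never leaves the 'CPU'."""

    def __init__(self):
        self._vm_key = secrets.token_bytes(32)  # unique per VM
        self.physical_memory = {}  # what the hypervisor can observe

    def write(self, addr: int, data: bytes) -> None:
        ks = _keystream(self._vm_key, addr.to_bytes(8, "big"), len(data))
        self.physical_memory[addr] = bytes(a ^ b for a, b in zip(data, ks))

    def read(self, addr: int) -> bytes:
        ct = self.physical_memory[addr]
        ks = _keystream(self._vm_key, addr.to_bytes(8, "big"), len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

vm = ConfidentialVM()
vm.write(0x1000, b"patient record: jane doe")
# The guest sees plaintext; a host-side memory dump sees only ciphertext.
```

Because the encryption and decryption happen transparently on every memory access, the guest application never has to know it is running in a confidential VM—which is exactly why lift-and-shift works.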

The practical benefit is enormous: you can lift and shift an existing application into a confidential VM without changing a line of code. Your app doesn’t know it’s running in an encrypted environment—it just works, and the underlying hardware handles the isolation.

The TCB is somewhat larger than with SGX (it includes the guest OS), but it still explicitly excludes the cloud provider’s infrastructure. Azure’s DCasv5 and ECasv5 series run on AMD SEV-SNP; the DCesv6 and ECesv6 series use Intel TDX, introduced with 5th Gen Intel Xeon processors.


Key Insight: Choosing between enclave-based (SGX) and VM-based (SEV-SNP/TDX) confidential computing is really a question of how much re-engineering you’re willing to do vs. how small a trust boundary you need. For most lift-and-shift scenarios, VM-level isolation is the right answer. For new applications handling the most sensitive workloads—cryptographic key management, healthcare algorithms, financial transaction processing—the smaller TCB of SGX may be worth the development cost.


What the Threat Model Actually Covers

It helps to be concrete about what Azure Confidential Computing protects against, because the marketing language can blur the lines.

What it does protect against:

  • Malicious insiders at the cloud provider with physical or logical access to host servers

  • Compromised hypervisors or host operating systems (malware, rootkits, zero-days in virtualization software)

  • Cross-tenant attacks in multi-tenant environments—neighboring VMs on the same physical host cannot read your memory

  • Third-party collaborators in multi-party computation scenarios who shouldn’t see raw input data

What it does not eliminate:

  • Vulnerabilities in your own application code running inside the enclave or confidential VM

  • Side-channel attacks targeting the CPU itself—security researchers have demonstrated that TEE implementations have had exploitable side channels, and cryptographic weaknesses have been identified in both Intel SGX and AMD SEV implementations. Cloud providers continuously patch these, but the hardware isn’t impervious

  • Compromise of the guest OS inside a confidential VM (VM-level isolation trusts the guest OS)

The honest version: confidential computing dramatically raises the bar for infrastructure-level attacks. It does not make your application bulletproof. Your code quality and software supply chain still matter.

Where It Shows Up in the Azure Ecosystem

Beyond confidential VMs, Azure has embedded this capability into several managed services worth knowing about.

Azure SQL Always Encrypted with Secure Enclaves executes database queries inside a TEE. Sensitive columns stay encrypted even while being queried, which means a compromised database administrator still can’t read the plaintext values.

Azure Key Vault Managed HSM is a single-tenant service built on FIPS 140-3 Level 3 validated Hardware Security Modules. Keys are isolated from host software and only decrypt inside validated enclaves—cryptographic keys don’t leave the HSM in plaintext.

Azure Confidential Ledger, built on Microsoft Research’s Confidential Consortium Framework, provides tamper-proof, Write-Once-Read-Many storage in TEEs. For auditing scenarios where you need an immutable record that nobody—not even Microsoft—could retroactively alter, it’s a useful primitive.

And for AI workloads, the NCCads H100 v5 VM series extends the confidential computing boundary from the CPU to NVIDIA H100 GPUs. Training and inferencing with proprietary models on sensitive data can now happen in a hardware-enforced boundary that spans both compute and accelerated processing. That matters if you’re fine-tuning models on patient records or financial data and need to prove to an auditor that the underlying platform couldn’t read that data during training.

The Compliance Angle

If you’re in healthcare, finance, or government, you’re not choosing confidential computing just because it’s more secure—you’re choosing it because you need to demonstrate verifiable isolation to auditors. HIPAA, PCI DSS, GDPR, and similar frameworks increasingly require you to show not just that data is supposed to be protected, but that the infrastructure itself is technically incapable of exposing it.

Azure Confidential Computing’s attestation model changes the conversation with auditors. Instead of policy documents and access control screenshots, you can present a cryptographic proof that the execution environment was exactly as described at the time the data was processed. That’s a much harder claim to dispute.

In multi-party computation scenarios—two healthcare organizations jointly training a model on patient data, or two competing financial institutions sharing fraud intelligence—confidential computing makes the arrangement legally and technically viable. Each party can verify the enclave before contributing data. Neither party, nor the hosting provider, sees the other’s raw input. The enclave processes it, and only aggregated outputs leave the TEE.
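The multi-party pattern reduces to three steps: attest, contribute, aggregate. The sketch below models it with an invented `AggregationEnclave` class—raw inputs accumulate only inside the "enclave," each party refuses to contribute unless the measurement matches what it expects, and the only value that crosses the boundary is the aggregate. Measurements and data here are, of course, made up for illustration.

```python
import hashlib

# The measurement both parties have agreed to trust ahead of time:
TRUSTED_MEASUREMENT = hashlib.sha256(b"fraud-model-enclave-v3").hexdigest()

class AggregationEnclave:
    """Toy TEE for multi-party aggregation: raw inputs stay inside;
    only the aggregate ever leaves."""

    def __init__(self, code: bytes):
        self._inputs = []  # never exposed outside the enclave
        self.measurement = hashlib.sha256(code).hexdigest()

    def contribute(self, values: list[float]) -> None:
        self._inputs.append(values)

    def result(self) -> float:
        """The only output that crosses the boundary."""
        flat = [v for party in self._inputs for v in party]
        return sum(flat) / len(flat)

enclave = AggregationEnclave(code=b"fraud-model-enclave-v3")

# Each party independently attests before sending anything sensitive:
for party_data in ([120.0, 80.0], [95.0, 105.0]):
    assert enclave.measurement == TRUSTED_MEASUREMENT  # refuse to contribute otherwise
    enclave.contribute(party_data)

print(enclave.result())  # 100.0 — the only value either party (or the host) sees
```

Neither party ever sees the other's `party_data`, and a host dumping the enclave's memory would get ciphertext—which is what makes the arrangement acceptable to both legal teams.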

That’s not a theoretical future capability. It’s what pharmaceutical companies are using for drug discovery research today, and what financial institutions are using to run cross-institution anti-money laundering analysis without exposing proprietary client data to competitors.

Practical Considerations Before You Start

A few things worth knowing before you spin up confidential infrastructure:

Regional availability: Not all Azure regions support all confidential VM families. DCasv5 (AMD SEV-SNP) has broader regional availability than DCesv6 (Intel TDX). Check the Azure Products by Region page before designing your architecture around a specific VM family.

Disk encryption: Confidential VMs support full OS disk encryption before first boot, using customer-managed keys released via Secure Key Release from Azure Key Vault. This means even the disk is encrypted with a key that only leaves the HSM after the VM passes attestation—the disk image is useless without a running, verified enclave.

VM resizing limitations: Confidential VMs have constraints on live migration and resizing. You can’t resize from a confidential VM SKU to a non-confidential SKU while preserving the VMGS (Virtual Machine Guest State) disk that holds your security component state.

Pricing: Confidential VM SKUs carry a premium over equivalent general-purpose VMs. That premium reflects the specialized hardware, encrypted VMGS disk billing, and Azure Attestation service costs. Run the numbers before committing—for dev/test environments, standard VMs are usually sufficient.


Pro Tip: If you’re running containerized workloads on AKS, check the current status of confidential container runtimes before committing to an approach. Migration paths and supported runtimes evolve—consult the AKS confidential containers documentation for the current state before you start building.


What “Zero Trust” Actually Means Here

The phrase “zero trust” gets applied to everything in security, usually meaning “we added MFA.” In the context of confidential computing, it means something specific and verifiable: the hardware is configured such that even privileged infrastructure operators cannot access your data, and you can cryptographically prove that to be true.

Azure Confidential Computing extends zero trust down to the silicon level. Your application doesn’t have to trust the hypervisor, the host OS, or the cloud provider’s operations team—because the CPU enforces isolation independently of all of them. That’s a meaningful claim that most “zero trust” marketing cannot make.

If your workloads involve data that you cannot legally or contractually allow a cloud provider to see, confidential computing is the answer. If your compliance framework requires verifiable isolation—not just policy-based access controls—confidential computing gives you the cryptographic proof to back that up.

The encryption gap between “data at rest” and “data in transit” has been there since the beginning of cloud computing. Hardware-based TEEs are finally closing it.

Key Takeaways

Confidential computing adds a third category to your data security model—protecting data in use, not just at rest and in transit.

  • TEEs enforce isolation at the CPU level, not the software level. The hypervisor, host OS, and cloud provider are all outside the trust boundary.

  • Attestation is the mechanism that makes it verifiable. Azure Attestation validates TEE integrity; Azure Key Vault Managed HSM only releases keys to validated environments.

  • SGX gives you the smallest TCB at the cost of significant application rework. AMD SEV-SNP and Intel TDX give you lift-and-shift simplicity with VM-level isolation.

  • Managed services like Azure SQL Always Encrypted and Confidential Ledger bring TEE-based protection to specific workload patterns without requiring you to manage VMs.

  • Multi-party computation becomes practical when each party can verify the execution environment before contributing sensitive data.

Start with the Azure Confidential Computing overview and the Azure Confidential Computing products page to map which SKU fits your workload. The confidential VM FAQ covers most of the deployment gotchas before you hit them.
