Every shared staging environment you maintain costs your team twice: once in Azure spend, and again in developer hours lost to queuing. The math isn’t complicated. Five developers sharing one staging slot means four are always waiting. Multiply that idle time by your hourly engineering cost, and the staging bottleneck quietly becomes the most expensive part of your delivery pipeline.
Ephemeral preview environments fix this by giving every pull request its own isolated, live deployment. The environment spins up when the PR opens, runs the exact code from the branch, and disappears when the PR closes. No queuing. No “who broke staging?” investigations. No always-on infrastructure burning money overnight.
Azure Container Apps (ACA) makes this practical through its revision system and a routing feature called deployment labels. You’ll wire up an Azure DevOps pipeline that creates a preview environment for every PR, posts the URL back to the PR thread, and cleans everything up automatically. Here’s how.
## Prerequisites
Before you start, make sure you have:
- An Azure subscription with an existing Container Apps environment
- An Azure Container Registry (ACR) connected to your Container App
- An Azure DevOps project with a YAML pipeline
- The Azure CLI installed with the `containerapp` extension
- Your Container App running in multiple revision mode
Verify your Container App’s revision mode:
```bash
az containerapp show \
  --name myapp \
  --resource-group preview-rg \
  --query "properties.configuration.activeRevisionsMode" \
  --output tsv
```
If it returns `single`, switch to `multiple`:
```bash
az containerapp revision set-mode \
  --name myapp \
  --resource-group preview-rg \
  --mode multiple
```
## How Revision Labels Create Isolated URLs
ACA creates a new revision—an immutable snapshot of your app—every time you change the container image or configuration. In multiple revision mode, several revisions can run simultaneously.
The routing problem is straightforward: how do you send a reviewer to the PR’s revision without affecting production traffic? Revision labels solve this by assigning a name to a specific revision, which generates a dedicated URL:
```text
https://myapp---pr-42.<environment-default-domain>
```

Traffic to this URL routes exclusively to the labeled revision. Your production URL keeps serving production. Zero interference.

| Component | What It Does |
| --- | --- |
| Revision suffix (`--revision-suffix`) | Names the revision predictably (e.g., `myapp--pr-42`) |
| Label (`--label`) | Generates the unique URL for direct access |
| Traffic weight (0%) | Prevents the preview from receiving production traffic |

---

***Pro Tip: Label names must be lowercase alphanumeric with dashes only. Use `pr-<number>` as your convention—it's readable in the Azure portal and easy to match against PR IDs programmatically.***

---

## Building the Pipeline

Your `azure-pipelines.yml` needs three stages: build, deploy, and notify. Here's each piece.

### Build and Push the Container Image

Tag images with the PR number so every revision traces back to its source:
```yaml
trigger: none

pr:
  branches:
    include:
    - main

variables:
  prId: $(System.PullRequest.PullRequestId)

stages:
- stage: Build
  jobs:
  - job: BuildAndPush
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2
      displayName: 'Build and push image'
      inputs:
        containerRegistry: 'myACRConnection'
        repository: 'myapp'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: 'pr-$(prId)'
```
Using `pr-$(prId)` as the tag instead of `latest` forces ACA to recognize each push as a distinct image, which triggers a new revision.

### Deploy the Preview Revision
```yaml
- stage: Deploy
  dependsOn: Build
  jobs:
  - job: DeployPreview
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: AzureCLI@2
      displayName: 'Deploy preview revision'
      inputs:
        azureSubscription: 'myAzureServiceConnection'
        scriptType: 'bash'
        scriptLocation: 'inlineScript'
        inlineScript: |
          PR_ID=$(prId)

          # Deploy new revision with PR-specific image
          az containerapp update \
            --name myapp \
            --resource-group preview-rg \
            --image myacr.azurecr.io/myapp:pr-$PR_ID \
            --revision-suffix pr-$PR_ID

          # Label the revision to generate its unique URL
          az containerapp revision label add \
            --name myapp \
            --resource-group preview-rg \
            --label pr-$PR_ID \
            --revision myapp--pr-$PR_ID
```
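The `--revision-suffix` and `--label` values in the deploy stage must be lowercase alphanumeric with dashes only. Numeric PR IDs already satisfy that, but if you ever derive suffixes from branch names, a small sanitizer keeps the deploy from failing. A minimal sketch (the helper name `label_for_pr` is hypothetical):

```shell
#!/bin/bash
# Hypothetical helper: turn an arbitrary PR identifier or branch name into
# a value that is valid for --revision-suffix and --label (lowercase
# alphanumeric and dashes only).
label_for_pr() {
  echo "pr-$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9-]/-/g' -e 's/--*/-/g' -e 's/-$//'
}

label_for_pr 42              # -> pr-42
label_for_pr "42/Fix_Login"  # -> pr-42-fix-login
```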
The `--revision-suffix` parameter creates a predictable revision name (`myapp--pr-42`). The label generates the URL that reviewers will use.

---

***Reality Check: If you push multiple commits to the same PR, each triggers a new revision. The label moves to the latest revision automatically when you use [`az containerapp update`](https://learn.microsoft.com/cli/azure/containerapp#az-containerapp-update) with `--target-label`. Your reviewers always see the most recent code at the same URL.***

---

### Notify Reviewers With the Preview URL
```yaml
- task: AzureCLI@2
  displayName: 'Post preview URL to PR'
  inputs:
    azureSubscription: 'myAzureServiceConnection'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      PR_ID=$(prId)

      # Get the environment's default domain
      ENV_DOMAIN=$(az containerapp env show \
        --name my-env \
        --resource-group preview-rg \
        --query "properties.defaultDomain" \
        --output tsv)

      PREVIEW_URL="https://myapp---pr-$PR_ID.$ENV_DOMAIN"

      # Post comment to PR thread
      BODY=$(cat <<EOF
      {"comments": [{"parentCommentId": 0, "content": "Preview environment ready: [$PREVIEW_URL]($PREVIEW_URL)", "commentType": 1}], "status": 1}
      EOF
      )
      curl -s -X POST \
        -H "Authorization: Bearer $SYSTEM_ACCESSTOKEN" \
        -H "Content-Type: application/json" \
        -d "$BODY" \
        "$(System.CollectionUri)$(System.TeamProject)/_apis/git/repositories/$(Build.Repository.Id)/pullRequests/$PR_ID/threads?api-version=7.0"
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
```
Your PR now has a clickable link to the live preview. QA, designers, and product managers can test without waiting for anyone to deploy to staging.

## Scale to Zero: Why This Costs Almost Nothing

[ACA's scaling rules](https://learn.microsoft.com/azure/container-apps/scale-app) use [KEDA](https://keda.sh/) (Kubernetes Event-driven Autoscaling) to scale replicas based on HTTP traffic. Set minimum replicas to zero, and idle preview environments consume no compute:
```bash
az containerapp update \
  --name myapp \
  --resource-group preview-rg \
  --min-replicas 0 \
  --max-replicas 2
```
When a reviewer clicks the preview URL, KEDA detects the incoming request and scales from zero to one replica. The first request takes a few extra seconds for the cold start. Subsequent requests respond normally.

| Scenario | Cost (illustrative) |
| --- | --- |
| Always-on [App Service](https://azure.microsoft.com/pricing/details/app-service/) (B1) | ~$55/month per environment |
| ACA preview with 2 hours active testing/day | ~$0.16/day |
| 20 PRs/month, each tested for 2 hours total | ~$3.20/month total |

*Estimates based on [Azure Container Apps pricing](https://azure.microsoft.com/pricing/details/container-apps/) and [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/). Actual costs vary by region and configuration.*

The [ACA free grant](https://azure.microsoft.com/pricing/details/container-apps/) covers a generous monthly allotment of requests and compute seconds. Most teams running preview environments stay within the free tier entirely.

## Cleaning Up After PR Closure

Azure DevOps does not natively trigger pipelines when a PR closes or merges. You need a cleanup mechanism to deactivate stale preview revisions.

### Option A: Scheduled Cleanup Pipeline

Run a nightly pipeline that compares active revisions against open PRs:
```bash
#!/bin/bash
# cleanup-previews.sh

# List active PR revisions
REVISIONS=$(az containerapp revision list \
  --name myapp \
  --resource-group preview-rg \
  --query "[?contains(name, 'pr-') && properties.active].name" \
  --output tsv)

# Get open PR IDs from Azure DevOps
OPEN_PRS=$(az repos pr list \
  --repository myapp \
  --status active \
  --query "[].pullRequestId" \
  --output tsv)

for rev in $REVISIONS; do
  if [[ $rev =~ pr-([0-9]+) ]]; then
    PR_NUM="${BASH_REMATCH[1]}"
    if ! echo "$OPEN_PRS" | grep -qw "$PR_NUM"; then
      echo "Deactivating $rev (PR #$PR_NUM is closed)"
      az containerapp revision deactivate \
        --name myapp \
        --resource-group preview-rg \
        --revision "$rev"
    fi
  fi
done
```
Schedule this in your pipeline with a [cron trigger](https://learn.microsoft.com/azure/devops/pipelines/process/scheduled-triggers):
```yaml
schedules:
- cron: '0 2 * * *'
  displayName: 'Nightly preview cleanup'
  branches:
    include:
    - main
  always: true
```
### Option B: Service Hook Webhook
Configure an Azure DevOps Service Hook to fire on the “Pull request updated” event. Point it at an Azure Function that checks whether the PR status changed to “completed” or “abandoned,” then deactivates the corresponding revision.
This approach reacts faster than scheduled cleanup but requires maintaining an Azure Function.
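A minimal sketch of the logic such a handler would run, assuming the `git.pullrequest.updated` event payload exposes `resource.status` and `resource.pullRequestId`. The helper names are illustrative, and a production handler should use a real JSON parser rather than `sed`:

```shell
#!/bin/bash
# Illustrative webhook-handler core for the "Pull request updated" event.

extract_status() {
  # Pull the status value ("active", "completed", or "abandoned") out of the JSON
  sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([a-z]*\)".*/\1/p' | head -1
}

extract_pr_id() {
  sed -n 's/.*"pullRequestId"[[:space:]]*:[[:space:]]*\([0-9]*\).*/\1/p' | head -1
}

deactivate_preview() {
  # Same command the scheduled cleanup uses, targeting one revision
  az containerapp revision deactivate \
    --name myapp \
    --resource-group preview-rg \
    --revision "myapp--pr-$1"
}

handle_event() {
  local payload="$1" status pr_id
  status=$(echo "$payload" | extract_status)
  pr_id=$(echo "$payload" | extract_pr_id)
  if [[ "$status" == "completed" || "$status" == "abandoned" ]]; then
    deactivate_preview "$pr_id"
  fi
}
```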
---

***Quick Win: ACA supports a `--max-inactive-revisions` flag during app creation. Set it to 50 or lower to prevent hitting the 100-revision limit even if your cleanup script misses a few.***

---
## Handling Database State
Preview environments need data. Three strategies handle this at different complexity levels:
1. **Shared database, isolated schemas:** Each preview environment creates a database or schema named `pr_<number>`. Your pipeline runs migrations on deploy and drops the schema on cleanup. Low overhead, good isolation.
2. **Containerized database:** Run PostgreSQL or SQL Server as a sidecar container in the same Container App. Data is ephemeral—it disappears when the revision deactivates. Best for integration tests that seed their own data.
3. **Production snapshot:** Restore an anonymized copy of production data for realistic testing. Higher cost and complexity, but necessary for performance testing or data migration validation.
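For the first strategy, the moving parts are small enough to sketch. This assumes PostgreSQL, a `DATABASE_URL` connection string available on the agent, and `psql` installed; the helper names are hypothetical:

```shell
#!/bin/bash
# Sketch of the shared-database strategy: one schema per PR.

schema_for_pr() {
  # Postgres identifiers can't contain dashes, so use pr_<number>
  echo "pr_$1"
}

create_preview_schema() {
  # Run before migrations in the deploy stage
  psql "$DATABASE_URL" -c "CREATE SCHEMA IF NOT EXISTS $(schema_for_pr "$1");"
}

drop_preview_schema() {
  # Run from the cleanup pipeline alongside revision deactivation
  psql "$DATABASE_URL" -c "DROP SCHEMA IF EXISTS $(schema_for_pr "$1") CASCADE;"
}
```

Call `create_preview_schema "$PR_ID"` in the deploy stage before migrations, and `drop_preview_schema "$PR_NUM"` from the nightly cleanup loop.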
Start with the shared database approach. Graduate to snapshots only when your testing requirements demand production-representative data volumes.
## Limits You Should Plan For
ACA enforces a 100-revision limit per Container App (active and inactive combined). High-velocity teams that open dozens of PRs daily will hit this. Aggressive cleanup and the `--max-inactive-revisions` flag are your primary defenses.
Cold starts typically take a few seconds when scaling from zero. Set expectations with your reviewers. A note in the PR comment (“First load takes a few seconds”) prevents unnecessary bug reports about “broken” preview links.
Azure DevOps branch policies with build validation don't fire `pr:` YAML triggers—they use their own build policy mechanism instead. If you use branch policies for PR validation, configure the build policy to point at your preview pipeline rather than relying on `pr:` triggers.
## Where You Go From Here
You now have a pipeline that creates isolated preview environments for every PR, posts the URL for reviewers, scales to zero when idle, and cleans up automatically. Your staging environment just became redundant.
The immediate next step: extend this to multiple services. The pipeline pattern is identical—only the app name and container registry path change. If your application has a frontend and API backend, create preview environments for both and wire them together using Container Apps internal DNS.
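As a sketch of that wiring, assuming a backend app named `api` with internal ingress and an FQDN of the form `<app>--<revision>.internal.<environment-domain>` (verify the exact pattern against your environment; every name below is a placeholder):

```shell
#!/bin/bash
# Hypothetical: compute the internal address of the API's preview revision
# so the frontend preview can be pointed at it via an environment variable.

internal_fqdn() {
  local app="$1" suffix="$2" env_domain="$3"
  echo "${app}--${suffix}.internal.${env_domain}"
}

PR_ID=42
ENV_DOMAIN="<environment-default-domain>"  # from: az containerapp env show --query properties.defaultDomain
API_FQDN=$(internal_fqdn api "pr-$PR_ID" "$ENV_DOMAIN")

# Wire the frontend preview to the API preview (same pattern as the deploy stage):
# az containerapp update \
#   --name frontend \
#   --resource-group preview-rg \
#   --image myacr.azurecr.io/frontend:pr-$PR_ID \
#   --revision-suffix pr-$PR_ID \
#   --set-env-vars API_URL="https://$API_FQDN"
```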
Your team stops waiting for staging. Your reviewers test every change in isolation. Your Azure bill drops. That’s the entire argument for ephemeral environments—and you just built one.