With Azure Virtual Desktop (AVD), you can deliver secure Windows 11 desktops and apps anywhere. It’s quick to deploy and scale, provides a consistent user experience on any end-user device, and reduces costs by leveraging Windows 11 multi-session licensing. This tutorial guides you through setting up AVD with AADDS using Terraform.

As always, all the code is available on my GitHub.

Prerequisites

Besides an active Azure subscription and Terraform configured on your workstation, Azure Active Directory Domain Services (AADDS) is required. Check out my previous post on setting up AADDS with Terraform if you haven’t already! Some Terraform resources in this guide, e.g., the network peerings and the AADDS domain-join (AADDS-join) VM extension, depend on the AADDS resources from that post.
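
The code below also assumes a provider configuration along these lines. Here’s a minimal sketch; the version constraints are illustrative, and any recent releases that include the AVD and AADDS resources used in this guide should do:

terraform {
  required_providers {
    # Azure resources (AVD, networking, VMs, role assignments)
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
    # AAD groups, group members, and user lookups
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.0"
    }
    # Local admin password and VM name suffixes
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
    # Rotation of the host pool registration token
    time = {
      source  = "hashicorp/time"
      version = "~> 0.7"
    }
  }
}

provider "azurerm" {
  features {}
}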

Do You Know What’s Exciting?

It’s possible to just Azure AD-join (AAD-join) AVD session hosts, eliminating the requirement to use AADDS or on-premises AD DS and reducing the cost and complexity of AVD deployments even further. Unfortunately, it’s not yet fully production-ready because FSLogix profile support for AAD-joined AVD VMs is only in public preview. Currently, using AAD authentication with Azure Files still requires hybrid identities. But it’s nice that AVD is one step closer to being a cloud-only solution. I can’t wait to terraformify all of it! Stay tuned because I’ll post about it as soon as things are generally available.

Overview

We’ll deploy the AADDS and AVD resources to separate virtual networks and resource groups. This is called a hub-spoke network topology, a typical approach to organizing large-scale networks. The hub (aadds-vnet) is the central connectivity point and typically contains other services besides AADDS, e.g., a VPN gateway connecting your on-premises network to the Azure cloud, Azure Bastion, or Azure Firewall. Spoke networks (avd-vnet) contain isolated workloads and use network peerings to connect to the hub.

Network Resources

Create the avd-rg resource group and add the avd-vnet spoke network to it. The network uses the AADDS domain controllers (DCs) as dns_servers:

resource "azurerm_resource_group" "avd" {
  name     = "avd-rg"
  location = "Switzerland North"
}

# Network Resources

resource "azurerm_virtual_network" "avd" {
  name                = "avd-vnet"
  location            = azurerm_resource_group.avd.location
  resource_group_name = azurerm_resource_group.avd.name
  address_space       = ["10.10.0.0/16"]
  dns_servers         = azurerm_active_directory_domain_service.aadds.initial_replica_set.0.domain_controller_ip_addresses
}

resource "azurerm_subnet" "avd" {
  name                 = "avd-snet"
  resource_group_name  = azurerm_resource_group.avd.name
  virtual_network_name = azurerm_virtual_network.avd.name
  address_prefixes     = ["10.10.0.0/24"]
}

To give the AVD VMs line of sight to AADDS, we need to add the following network peerings:

resource "azurerm_virtual_network_peering" "aadds_to_avd" {
  name                      = "hub-to-avd-peer"
  resource_group_name       = azurerm_resource_group.aadds.name
  virtual_network_name      = azurerm_virtual_network.aadds.name
  remote_virtual_network_id = azurerm_virtual_network.avd.id
}

resource "azurerm_virtual_network_peering" "avd_to_aadds" {
  name                      = "avd-to-aadds-peer"
  resource_group_name       = azurerm_resource_group.avd.name
  virtual_network_name      = azurerm_virtual_network.avd.name
  remote_virtual_network_id = azurerm_virtual_network.aadds.id
}

Host Pool

A host pool is a collection of Azure virtual machines that register to Azure Virtual Desktop as session hosts when you run the Azure Virtual Desktop agent. All session host virtual machines in a host pool should be sourced from the same image for a consistent user experience.

We add the AVD host pool and its registration info. Later, we’ll register the session hosts to the host pool via a VM extension, using the token from the registration info:

locals {
  # Switzerland North is not supported
  avd_location = "West Europe"
}

resource "azurerm_virtual_desktop_host_pool" "avd" {
  name                = "avd-vdpool"
  location            = local.avd_location
  resource_group_name = azurerm_resource_group.avd.name

  type               = "Pooled"
  load_balancer_type = "BreadthFirst"
  friendly_name      = "AVD Host Pool using AADDS"
}

resource "time_rotating" "avd_registration_expiration" {
  # Must be between 1 hour and 30 days
  rotation_days = 29
}

resource "azurerm_virtual_desktop_host_pool_registration_info" "avd" {
  hostpool_id     = azurerm_virtual_desktop_host_pool.avd.id
  expiration_date = time_rotating.avd_registration_expiration.rotation_rfc3339
}

I deploy my AADDS resources and the AVD session hosts to the Switzerland North region. However, I have to deploy the AVD service resources (host pool, workspace, and app group) to West Europe because the AVD service isn’t available in all regions.

To get the latest supported regions, re-register the AVD resource provider:

  1. Select your subscription under Subscriptions in the Azure Portal.
  2. Select the Resource providers menu.
  3. Re-register Microsoft.DesktopVirtualization.

Workspace and App Group

Next, we create a workspace and add an app group. Two types of app groups exist:

  • Desktop: full desktop
  • RemoteApp: individual apps

Adding the following gives AVD users the full desktop experience:

resource "azurerm_virtual_desktop_workspace" "avd" {
  name                = "avd-vdws"
  location            = local.avd_location
  resource_group_name = azurerm_resource_group.avd.name
}

resource "azurerm_virtual_desktop_application_group" "avd" {
  name                = "desktop-vdag"
  location            = local.avd_location
  resource_group_name = azurerm_resource_group.avd.name

  type         = "Desktop"
  host_pool_id = azurerm_virtual_desktop_host_pool.avd.id
}

resource "azurerm_virtual_desktop_workspace_application_group_association" "avd" {
  workspace_id         = azurerm_virtual_desktop_workspace.avd.id
  application_group_id = azurerm_virtual_desktop_application_group.avd.id
}

Session Hosts

Let’s add two session hosts to the AVD host pool. To be able to adjust the number of VMs inside the host pool later, we define a variable:

variable "avd_host_pool_size" {
  type        = number
  description = "Number of session hosts to add to the AVD host pool."
}
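
Because the variable has no default, Terraform prompts for a value at plan time unless you provide one, e.g., via a terraform.tfvars file (see the example near the end of this post).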

Next, we add the VM NICs for the session hosts:

resource "azurerm_network_interface" "avd" {
  count               = var.avd_host_pool_size
  name                = "avd-nic-${count.index}"
  location            = azurerm_resource_group.avd.location
  resource_group_name = azurerm_resource_group.avd.name

  ip_configuration {
    name                          = "avd-ipconf"
    subnet_id                     = azurerm_subnet.avd.id
    private_ip_address_allocation = "Dynamic"
  }
}

Afterward, we add the session hosts:

resource "random_password" "avd_local_admin" {
  length = 64
}

resource "random_id" "avd" {
  count       = var.avd_host_pool_size
  byte_length = 4
}

resource "azurerm_windows_virtual_machine" "avd" {
  count               = var.avd_host_pool_size
  name                = "avd-vm-${count.index}-${random_id.avd[count.index].hex}"
  location            = azurerm_resource_group.avd.location
  resource_group_name = azurerm_resource_group.avd.name

  size                  = "Standard_D4s_v4"
  license_type          = "Windows_Client"
  admin_username        = "avd-local-admin"
  admin_password        = random_password.avd_local_admin.result
  network_interface_ids = [azurerm_network_interface.avd[count.index].id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsDesktop"
    offer     = "windows-11"
    sku       = "win11-21h2-avd"
    version   = "latest"
  }
}

To ensure the session hosts utilize the licensing benefits available with AVD, we set the license_type to Windows_Client.

We append a random ID to the VM names to prevent name conflicts with dangling host pool registrations. Keep in mind that because we don’t set a separate computer_name, the VM name also serves as the Windows computer name, which is limited to 15 characters.

Understanding VM Extensions

To figure out the required VM extensions, I used the AVD wizard of the Azure Portal. During the review + create step, I downloaded the ARM template and reverse-engineered it.

Sometimes, creating complex deployments via Azure Portal feels like magic. Backtracking the generated ARM templates is something I like to do to get a deeper understanding of what’s happening under the hood. It’s usually my initial step when trying to terraformify something for the first time that I can’t find good examples of elsewhere.

You can find the AVD ARM templates in the official Azure GitHub repository github.com/Azure/RDS-Templates/ARM-wvd-templates. The VM templates reside in the nestedtemplates directory and contain the VM extension resources that we want to replicate with Terraform:

    {
      "apiVersion": "2018-10-01",
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(parameters('rdshPrefix'), add(copyindex(), parameters('vmInitialNumber')), '/', 'Microsoft.PowerShell.DSC')]",
      "location": "[parameters('location')]",
      "dependsOn": [ "rdsh-vm-loop" ],
      "copy": {
        "name": "rdsh-dsc-loop",
        "count": "[parameters('rdshNumberOfInstances')]"
      },
      "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "2.73",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "modulesUrl": "[parameters('artifactsLocation')]",
          "configurationFunction": "Configuration.ps1\\AddSessionHost",
          "properties": {
            "hostPoolName": "[parameters('hostpoolName')]",
            "registrationInfoToken": "[parameters('hostpoolToken')]",
            "aadJoin": "[parameters('aadJoin')]",
            "sessionHostConfigurationLastUpdateTime": "[parameters('SessionHostConfigurationVersion')]"
          }
        }
      }
    },
    {
      "condition": "[not(parameters('aadJoin'))]",
      "apiVersion": "2018-10-01",
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(parameters('rdshPrefix'), add(copyindex(), parameters('vmInitialNumber')), '/', 'joindomain')]",
      "location": "[parameters('location')]",
      "dependsOn": [ "rdsh-dsc-loop" ],
      "copy": {
        "name": "rdsh-domain-join-loop",
        "count": "[parameters('rdshNumberOfInstances')]"
      },
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "JsonADDomainExtension",
        "typeHandlerVersion": "1.3",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "name": "[variables('domain')]",
          "ouPath": "[parameters('ouPath')]",
          "user": "[parameters('administratorAccountUsername')]",
          "restart": "true",
          "options": "3"
        },
        "protectedSettings": {
          "password": "[parameters('administratorAccountPassword')]"
        }
      }
    },

However, the default parameters of the ARM template downloaded from the Azure Portal differ from the values found on GitHub, e.g., the artifactsLocation parameter:

  • GitHub: https://raw.githubusercontent.com/Azure/RDS-Templates/master/ARM-wvd-templates/DSC/Configuration.zip
  • Azure: https://wvdportalstorageblob.blob.core.windows.net/galleryartifacts/Configuration_01-20-2022.zip

It seems that Microsoft periodically releases the Configuration.zip to the galleryartifacts container of the wvdportalstorageblob storage account. To peek inside the container, we can use the List Blobs operation of the Blob Service REST API.
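For example, an anonymous GET request to https://wvdportalstorageblob.blob.core.windows.net/galleryartifacts?restype=container&comp=list should return an XML listing of the available Configuration_*.zip blobs, assuming the container still allows public listing.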

The URLs that the Azure Portal uses sometimes change. At the time of writing, it uses the Configuration_01-20-2022.zip file despite Configuration_02-23-2022.zip being available.

AADDS-join the VMs

When a computer is AADDS-joined, it is added to the built-in AADDS Computers Organizational Unit (OU) of the domain by default. To add the VMs to a different OU, we can optionally specify the OU path during the domain-join. Create an optional variable:

variable "avd_ou_path" {
  type        = string
  description = "OU path used to AADDS domain-join AVD session hosts."
  default     = ""
}

We then AADDS-join the session hosts with the JsonADDomainExtension VM extension:

resource "azurerm_virtual_machine_extension" "avd_aadds_join" {
  count                      = var.avd_host_pool_size
  name                       = "aadds-join-vmext"
  virtual_machine_id         = azurerm_windows_virtual_machine.avd[count.index].id
  publisher                  = "Microsoft.Compute"
  type                       = "JsonADDomainExtension"
  type_handler_version       = "1.3"
  auto_upgrade_minor_version = true

  settings = <<-SETTINGS
    {
      "Name": "${azurerm_active_directory_domain_service.aadds.domain_name}",
      "OUPath": "${var.avd_ou_path}",
      "User": "${azuread_user.dc_admin.user_principal_name}",
      "Restart": "true",
      "Options": "3"
    }
    SETTINGS

  protected_settings = <<-PROTECTED_SETTINGS
    {
      "Password": "${random_password.dc_admin.result}"
    }
    PROTECTED_SETTINGS

  lifecycle {
    ignore_changes = [settings, protected_settings]
  }

  depends_on = [
    azurerm_virtual_network_peering.aadds_to_avd,
    azurerm_virtual_network_peering.avd_to_aadds
  ]
}

We have to ensure that the session hosts have line of sight to the AADDS DCs. To do that, we add the network peering resources to the depends_on list of the VM extension.

After a VM has been AADDS-joined, it doesn’t make sense to join it again when the settings or protected_settings of the VM extension change, so we ignore changes to these properties with the ignore_changes lifecycle argument.

Register VMs to the Host Pool

First, let’s add a variable for the URL of the zip file that contains the DSC configuration, making it easier to update in the future:

variable "avd_register_session_host_modules_url" {
  type        = string
  description = "URL to .zip file containing DSC configuration to register AVD session hosts to AVD host pool."
  default     = "https://wvdportalstorageblob.blob.core.windows.net/galleryartifacts/Configuration_02-23-2022.zip"
}

Then, we register the session hosts to the host pool with the DSC VM extension:

resource "azurerm_virtual_machine_extension" "avd_register_session_host" {
  count                = var.avd_host_pool_size
  name                 = "register-session-host-vmext"
  virtual_machine_id   = azurerm_windows_virtual_machine.avd[count.index].id
  publisher            = "Microsoft.Powershell"
  type                 = "DSC"
  type_handler_version = "2.73"

  settings = <<-SETTINGS
    {
      "modulesUrl": "${var.avd_register_session_host_modules_url}",
      "configurationFunction": "Configuration.ps1\\AddSessionHost",
      "properties": {
        "hostPoolName": "${azurerm_virtual_desktop_host_pool.avd.name}",
        "aadJoin": false
      }
    }
    SETTINGS

  protected_settings = <<-PROTECTED_SETTINGS
    {
      "properties": {
        "registrationInfoToken": "${azurerm_virtual_desktop_host_pool_registration_info.avd.token}"
      }
    }
    PROTECTED_SETTINGS

  lifecycle {
    ignore_changes = [settings, protected_settings]
  }

  depends_on = [azurerm_virtual_machine_extension.avd_aadds_join]
}

Again, we ignore changes to the settings and protected_settings properties.

Role-based Access Control (RBAC)

Let’s create an AAD group whose members are authorized to access the AVD application group we created earlier. To do so, we create the group and assign the built-in Desktop Virtualization User role to it, scoped to the application group:

data "azurerm_role_definition" "desktop_virtualization_user" {
  name = "Desktop Virtualization User"
}

resource "azuread_group" "avd_users" {
  display_name     = "AVD Users"
  security_enabled = true
}

resource "azurerm_role_assignment" "avd_users_desktop_virtualization_user" {
  scope              = azurerm_virtual_desktop_application_group.avd.id
  role_definition_id = data.azurerm_role_definition.desktop_virtualization_user.id
  principal_id       = azuread_group.avd_users.id
}

Assuming that we want to authorize users that already exist within our AAD, we create a variable containing the UPNs of these users:

variable "avd_user_upns" {
  type        = list(string)
  description = "List of user UPNs authorized to access AVD."
  default     = []
}

We can then query those users with Terraform and add them to the group like this:

data "azuread_user" "avd_users" {
  for_each            = toset(var.avd_user_upns)
  user_principal_name = each.key
}

resource "azuread_group_member" "avd_users" {
  for_each         = data.azuread_user.avd_users
  group_object_id  = azuread_group.avd_users.id
  member_object_id = each.value.id
}
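
To tie the variables together, a terraform.tfvars could look something like this (all values, including the UPNs, are purely illustrative):

avd_host_pool_size = 2

# Optionally AADDS-join the session hosts to a custom OU
# avd_ou_path = "OU=AVD,DC=aadds,DC=example,DC=com"

avd_user_upns = [
  "alice@example.com",
  "bob@example.com",
]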

What’s Next?

Great! We successfully created an AVD environment with Terraform. Test it by logging in with one of the available AVD clients.

I’ll write about creating custom AVD images with Packer next and follow it up by showing you how to configure FSLogix user profiles on your AADDS-joined AVD session hosts. Stay tuned!