What's new in System Center Virtual Machine Manager

This article details the new features supported in System Center 2022 - Virtual Machine Manager (VMM). It also details the new features in VMM 2022 UR1 and UR2.

New features in VMM 2022

See the following sections for new features and feature updates supported in VMM 2022.

Compute

Windows Server 2022 and Windows Server 2022 Guest OS support

VMM 2022 can be used to manage Windows Server 2022 hosts and supports Windows Server 2022 as a guest operating system.

Windows 11 support

VMM 2022 supports Windows 11 as a guest operating system.

Support for Azure Stack HCI clusters 21H2

With VMM 2022, you can manage Azure Stack HCI, 21H2 clusters.

Azure Stack HCI, version 21H2 is a hyper-converged infrastructure (HCI) operating system that runs on on-premises clusters with virtualized workloads.

Most of the operations to manage Azure Stack HCI clusters in VMM are similar to those for managing Windows Server clusters.

Note

Management of Azure Stack HCI stretched clusters is currently not supported in VMM.

See Deploy and manage Azure Stack HCI clusters in VMM.

Register and unregister Azure Stack HCI cluster using PowerShell cmdlets

VMM 2022 supports PowerShell cmdlets to register and unregister Azure Stack HCI clusters. See Register-SCAzStackHCI and Unregister-SCAzStackHCI.
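
The following is a minimal sketch of how these cmdlets might be used from the VMM PowerShell module. The server and cluster names and the subscription ID are placeholders, and the -VMHostCluster and -SubscriptionID parameter usage is an assumption to verify against the cmdlet reference for your build.

```powershell
# Connect to the VMM management server (server name is illustrative).
Get-SCVMMServer -ComputerName "vmmserver01.contoso.com"

# Retrieve the Azure Stack HCI cluster that VMM already manages.
$cluster = Get-SCVMHostCluster -Name "HCIClus01.contoso.com"

# Register the cluster with Azure; the subscription ID below is a placeholder.
Register-SCAzStackHCI -VMHostCluster $cluster -SubscriptionID "00000000-0000-0000-0000-000000000000"

# Unregister the cluster later if it should stop reporting to Azure.
Unregister-SCAzStackHCI -VMHostCluster $cluster -SubscriptionID "00000000-0000-0000-0000-000000000000"
```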

Support for dual stack SDN deployment

VMM 2022 supports dual stack SDN deployment.

In VMM 2019 UR2, we introduced support for IPv6-based SDN deployment. VMM 2022 supports dual stack (IPv4 + IPv6) for SDN components.

To enable IPv6 for SDN deployment, make the required changes in the network controller, gateway, and SLB setup.

For more information about these updates, see Network controller, Gateway, SLB, and Set up NAT.

New features in VMM 2022 UR1

The following sections introduce the new features and feature updates supported in VMM 2022 Update Rollup 1 (UR1).

For problems fixed in VMM 2022 UR1, and installation instructions for UR1, see the KB article.

Support for Azure Stack HCI clusters 22H2

With VMM 2022 UR1, you can manage Azure Stack HCI, 22H2 clusters.

Azure Stack HCI, version 22H2 is a hyper-converged infrastructure (HCI) operating system that runs on on-premises clusters with virtualized workloads.

Most of the operations to manage Azure Stack HCI clusters in VMM are similar to those for managing Windows Server clusters.

See Deploy and manage Azure Stack HCI clusters in VMM.

Support for VMware vSphere 7.0, 8.0 and ESXi 7.0, 8.0

VMM 2022 UR1 supports VMware vSphere 7.0, 8.0 and ESXi 7.0, 8.0. Learn more.

Support for SQL Server 2022

VMM 2022 UR1 supports SQL Server 2022. Learn more.

Support for smart card sign-in in the SCVMM console

VMM 2022 UR1 supports smart card sign-in with enhanced session mode in the SCVMM console.

SR-IOV support for Network Controller managed NICs

VMM 2022 UR1 supports SR-IOV for Network Controller-managed NICs.

Removed VMM dependencies on deprecated Operations Manager Management Pack

VMM 2022 UR1 removes VMM dependencies on deprecated Operations Manager (SCOM) management packs. If you have an active SCOM-VMM integration, follow the steps listed in the KB article before you upgrade to VMM 2022 UR1.

Discover Arc-enabled SCVMM from VMM console

VMM 2022 UR1 allows you to discover Arc-enabled SCVMM from the console, manage your hybrid environment, and perform self-service VM operations through the Azure portal. Learn more.

Support for 64 virtual networks for Windows Server 2019 or later

VMM 2022 UR1 supports 64 virtual networks for Windows Server 2019 or later.

New features in VMM 2022 UR2

The following sections introduce the new features and feature updates supported in VMM 2022 Update Rollup 2 (UR2).

For problems fixed in VMM 2022 UR2, and installation instructions for UR2, see the KB article.

Improved V2V conversion performance of VMware VMs to Hyper-V VMs

You can now convert your VMware VMs to Hyper-V with close to four times faster conversion speed and support for VMware VMs with disk sizes greater than 2 TB. Learn more about how to use this enhancement.

Improved Arc-enabled SCVMM Discovery tab

The Azure Arc tab now highlights the latest feature additions to Arc-enabled SCVMM, including support for Azure management services such as Microsoft Defender for Cloud, Azure Update Manager, Azure Monitor, Microsoft Sentinel, and more. Learn more.

If you are running Windows Server 2012 or 2012 R2 host and guest operating systems, the Azure Arc blade now provides guidance on how to remain in a supported state.

Support for latest Linux Guest Operating Systems

With VMM 2022 UR2, you can run Linux VMs based on Ubuntu Linux 22, Debian 11, and Oracle Linux 8 and 9.

What's new in System Center 2019 - Virtual Machine Manager

This article details the new features supported in System Center 2019 - Virtual Machine Manager (VMM). It also details the new features in VMM 2019 UR1, UR2, UR3, UR4, and UR5.

New features in VMM 2019

The following sections introduce the new features in Virtual Machine Manager (VMM) 2019.

Compute

Cluster rolling upgrade for S2D clusters

System Center 2019 Virtual Machine Manager supports a rolling upgrade of a Storage Spaces Direct (S2D) host cluster from Windows Server 2016 to Windows Server 2019. For more information, see Perform a rolling upgrade.

Support for deduplication for ReFS volume

VMM 2019 supports deduplication for ReFS volumes on Windows Server 2019 hyper-converged clusters and Scale-Out File Servers. For more information, see Add storage to Hyper-V hosts and clusters.

Storage

Storage dynamic optimization

This feature helps to prevent cluster shared storage (CSV and file shares) from becoming full due to expansion or new virtual hard disks (VHDs) being placed on the cluster shared storage. You can now set a threshold value to trigger a warning when free storage space in the cluster shared storage falls below the threshold. This situation might occur during a new disk placement. It also might occur when VHDs are automigrated to other shared storage in the cluster. For more information, see Dynamic optimization.

Support for storage health monitoring

Storage health monitoring helps you to monitor the health and operational status of storage pools, LUNs, and physical disks in the VMM fabric.

You can monitor the storage health on the Fabric page of the VMM console. For more information, see Set up the VMM storage fabric.

Networking

Configuration of SLB VIPs through VMM service templates

Software-defined networks (SDNs) in Windows 2016 can use software load balancing (SLB) to evenly distribute network traffic among workloads managed by service providers and tenants. VMM 2016 currently supports deployment of SLB virtual IPs (VIPs) by using PowerShell.

With VMM 2019, VMM supports configuration of SLB VIPs while deploying multitier applications by using the service templates. For more information, see Configure SLB VIPs through VMM service templates.

Configuration of encrypted VM networks through VMM

VMM 2019 supports encryption of VM networks. Using the new encrypted networks feature, end-to-end encryption can be easily configured on VM networks by using the network controller. This encryption prevents the traffic between two VMs on the same network and the same subnet from being read and manipulated.

The control of encryption is at the subnet level. Encryption can be enabled or disabled for each subnet of the VM network. For more information, see Configure encrypted networks in SDN using VMM.

Support for configuring a Layer 3 forwarding gateway by using the VMM console

Layer 3 (L3) forwarding enables connectivity between the physical infrastructure in the datacenter and the virtualized infrastructure in the Hyper-V network virtualization cloud. Earlier versions of VMM supported the Layer 3 gateway configuration through PowerShell.

In VMM 2019, you can configure a Layer 3 forwarding gateway by using the VMM console. For more information, see Configure L3 forwarding.

Support for a static MAC address on VMs deployed on a VMM cloud

With this feature, you can set a static MAC address on VMs deployed on a cloud. You can also change the MAC address from static to dynamic and vice versa for the already-deployed VMs. For more information, see Provision virtual machines in the VMM fabric.
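
As a rough sketch of how this might look with the VMM cmdlets: the VM name and MAC value are illustrative, and the -MACAddressType/-MACAddress parameter usage on Set-SCVirtualNetworkAdapter is an assumption to verify in your build.

```powershell
# Get the cloud-deployed VM and its network adapter (names are illustrative).
$vm   = Get-SCVirtualMachine -Name "WebVM01"
$vnic = Get-SCVirtualNetworkAdapter -VM $vm

# Assign a static MAC address to the adapter.
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $vnic -MACAddressType "Static" -MACAddress "00:1D:D8:B7:1C:01"

# Switch the adapter back to a dynamic MAC address if needed.
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $vnic -MACAddressType "Dynamic"
```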

Azure integration

VM update management through VMM by using an Azure Automation subscription

VMM 2019 introduces the possibility of patching and updating on-premises VMs (managed by VMM) by integrating VMM with an Azure Automation subscription. For more information, see Manage VMs.

New RBAC role: Virtual Machine Administrator

In a scenario where enterprises want to create a user role for troubleshooting, the user needs access to all the VMs. In this way, the user can make any required changes on the VMs to resolve a problem. There's also a need for the user to have access to the fabric to identify the root cause for a problem. For security reasons, this user shouldn't be given privileges to make any changes on the fabric like adding storage or hosts.

The current role-based access control (RBAC) in VMM doesn't have a role defined for this persona. The existing Delegated Admin and Fabric Admin roles have either fewer or more permissions than necessary to perform troubleshooting.

To address this issue, VMM 2019 supports a new role called Virtual Machine Administrator. The user of this role has Read and Write access to all VMs but Read-only access to the fabric. For more information, see Set up user roles in VMM.
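
A minimal sketch of creating such a role from PowerShell follows. The role name and member account are placeholders, and the "VMAdmin" profile value is an assumption; confirm the exact value accepted by -UserRoleProfile with Get-Help New-SCUserRole.

```powershell
# Create a user role based on the Virtual Machine Administrator profile
# ("VMAdmin" is assumed; check the enum values accepted by -UserRoleProfile).
$role = New-SCUserRole -Name "VM Troubleshooters" -UserRoleProfile "VMAdmin" `
    -Description "Read/Write on all VMs, read-only access to the fabric"

# Add the troubleshooting engineers' security group to the role (account name is illustrative).
Set-SCUserRole -UserRole $role -AddMember @("CONTOSO\VM-Troubleshooters")
```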

Support for group Managed Service Account as a VMM service account

The group Managed Service Account (gMSA) helps improve the security posture. It provides convenience through automatic password management, simplified service principal name (SPN) management, and the ability to delegate the management to other administrators.

VMM 2019 supports the use of a gMSA as the management server service account. For more information, see Install VMM.

Note

The following features or feature updates were introduced in VMM 1807 and are included in VMM 2019.

Features included in VMM 2019 - introduced in VMM 1807

Storage

Supports selection of CSV for placing a new VHD

With VMM, you can select cluster shared volumes (CSVs) for placing a new VHD.

In earlier versions of VMM, by default a new VHD on a VM is placed on the same CSV where the earlier VHDs associated with the VM are placed. There was no option to choose a different CSV or folder. In case of any problems related to the CSV, such as storage that's full or overcommitted, users had to deploy the VHD first and then migrate it.

With VMM 1807, you can now choose any location to place the new disk. You can manage this disk easily, based on the storage availability of CSVs. For more information, see Add a virtual hard disk to a virtual machine.

Networking

Display of LLDP information for networking devices

VMM supports the Link Layer Discovery Protocol (LLDP). You can now view network device properties and capabilities information of the hosts from VMM. The host operating system must be Windows Server 2016 or later.

DataCenterBridging and DataCenterBridging-LLDP-Tools features have been enabled on hosts to fetch the LLDP properties. For more information, see Set up networking for Hyper-V hosts and clusters in the VMM fabric.

Convert a SET switch to a logical switch

You can convert a switch embedded teaming (SET) switch to a logical switch by using the VMM console. In earlier versions, this feature was supported only through PowerShell script. For more information, see Create logical switches.

VMware host management

VMM supports VMware ESXi v6.5 servers in VMM fabric. This support gives administrators additional flexibility in managing multiple hypervisors in use. For more information about supported VMware server versions, see System requirements.

Support for S2D cluster update

VMM supports the update of an S2D host or a cluster. You can update individual S2D hosts or clusters against the baselines configured in Windows Server Update Services. For more information, see Update Hyper-V hosts and clusters.

Others

Support for SQL Server 2017

VMM supports SQL Server 2017. You can upgrade SQL Server 2016 to SQL Server 2017.

Note

The following features or feature updates were introduced in VMM 1801 and are included in VMM 2019.

Features included in VMM 2019 - introduced in VMM 1801

Compute

Nested virtualization

VMM supports a nested virtualization feature that you can use to run Hyper-V inside a Hyper-V virtual machine. In other words, with nested virtualization, a Hyper-V host itself can be virtualized. Nested virtualization can be enabled out-of-band by using PowerShell and Hyper-V host configuration.

You can use this functionality to reduce your infrastructure expense for development, test, demo, and training scenarios. With this feature, you can also use third-party virtualization management products with Hyper-V.

You can enable or disable the nested virtualization feature by using VMM. You can configure the VM as a host in VMM and perform host operations from VMM on this VM. For example, VMM dynamic optimization considers a nested VM host for placement. For more information, see Configure a nested VM as a host.

Migration of VMware VM (EFI firmware-based VM) to Hyper-V VM

The current VMM migration for VMware VMs to Hyper-V only supports migration of BIOS-based VMs.

VMM enables migration of EFI-based VMware VMs to Hyper-V generation 2 VMs. VMware VMs that you migrate to Microsoft Hyper-V platform can take advantage of the Hyper-V generation 2 features.

As part of this release, the Convert Virtual Machine wizard enables the VM migration based on the firmware type (BIOS or EFI). It selects and defaults the Hyper-V VM generation appropriately. For more information, see Convert a VMware VM to Hyper-V in the VMM fabric. For example:

  • BIOS-based VMs are migrated to Hyper-V VM generation 1.
  • EFI-based VMs are migrated to Hyper-V VM generation 2.

We've also made improvements in the VMware VM conversion process that makes the conversion up to 50% faster.

Performance improvement in host refresher

VMM host refresher has undergone certain updates for performance improvement.

With these updates, in scenarios where an organization manages a large number of hosts and VMs with checkpoints, you can observe significant and noticeable improvements in the performance of the job.

In our lab with VMM instances managing 20 hosts, and each host managing 45 to 100 VMs, we've measured up to 10 times performance improvement.

Enhanced console session in VMM

The console connect capability in VMM provides an alternative way to connect to the VM via remote desktop. This method is most useful when the VM doesn't have any network connectivity or when you want to change to a network configuration that could break the network connectivity. Currently, the console connect capability in VMM supports only a basic session where clipboard text can be pasted only by using the Type Clipboard Text menu option.

VMM supports an enhanced console session that enables Cut (Ctrl + X), Copy (Ctrl + C), and Paste (Ctrl + V) operations on the ANSI text and files available on the clipboard. As a result, Copy and Paste commands for text and files are possible from and to the VM. For more information, see Enable enhanced console session in VMM.

Storage

Improvement in VMM storage QoS

Storage Quality of Service (QoS) provides a way to centrally monitor and manage storage performance for virtual machines by using Hyper-V and the Scale-Out File Server roles. The feature automatically improves storage resource fairness between multiple VMs that use the same cluster. It also allows policy-based performance goals.

VMM supports the following improvements in storage QoS:

  • Extension of storage QoS support beyond S2D: You can now assign storage QoS policies to storage area networks (SANs). For more information, see Manage storage QoS for clusters.
  • Support for VMM private clouds: Storage QoS policies can now be consumed by the VMM cloud tenants. For more information, see Create a private cloud.
  • Availability of storage QoS policies as templates: You can set storage QoS policies through VM templates. For more information, see Add VM templates to the VMM library.
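
As an illustrative sketch of consuming a QoS policy from a VM template: the New-SCStorageQoSPolicy cmdlet and the -IOPSMinimum/-IOPSMaximum and -StorageQoSPolicy parameter names are assumptions based on the VMM naming convention, and all names and values are placeholders.

```powershell
# Create a policy that reserves 100 IOPS and caps each flow at 1000 IOPS
# (policy name and values are illustrative).
$policy = New-SCStorageQoSPolicy -Name "Silver-QoS" -IOPSMinimum 100 -IOPSMaximum 1000

# Attach the policy to the first disk of a VM template so every deployment inherits it.
$template = Get-SCVMTemplate -Name "Win2019-Silver"
$osDisk   = Get-SCVirtualDiskDrive -VMTemplate $template | Select-Object -First 1
Set-SCVirtualDiskDrive -VirtualDiskDrive $osDisk -StorageQoSPolicy $policy
```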

Networking

Configuration of guest clusters in SDN through VMM

With the advent of the software-defined network in Windows Server 2016 and System Center 2016, the configuration of guest clusters has undergone some change.

With the introduction of the SDN, VMs that are connected to the virtual network by using SDN are only permitted to use the IP address that the network controller assigns for communication. The SDN design is inspired by Azure networking design and supports the floating IP functionality through the software load balancer (SLB) like Azure networking.

VMM also supports the floating IP functionality through the SLB in the SDN scenarios. VMM 1801 supports guest clustering through an internal load balancer (ILB) VIP. The ILB uses probe ports, which are created on the guest cluster VMs to identify the active node. At any given time, the probe port of only the active node responds to the ILB. Then all the traffic directed to the VIP is routed to the active node. For more information, see Configure guest clusters in SDN through VMM.

Configuration of SLB VIPs through VMM service templates

SDN in Windows 2016 can use SLB to evenly distribute network traffic among workloads managed by service provider and tenants. VMM 2016 currently supports deployment of SLB VIPs by using PowerShell.

VMM supports configuration of SLB VIPs when you deploy multitier applications by using the service templates. For more information, see Configure SLB VIPs through VMM service templates.

Configuration of encrypted VM networks through VMM

VMM supports encryption of VM networks. Using the new encrypted networks feature, end-to-end encryption can be easily configured on VM networks by using the network controller. This encryption prevents traffic between two VMs on the same network and same subnet from being read and manipulated.

The control of encryption is at the subnet level. Encryption can be enabled or disabled for each subnet of the VM network. For more information, see Configure encrypted networks in SDN using VMM.

Security

Support for Linux shielded VMs

Windows Server 2016 introduced the concept of a shielded VM for Windows OS-based VMs. Shielded VMs protect against malicious administrator actions. They provide protection when the VM's data is at rest or when untrusted software runs on Hyper-V hosts.

With Windows Server 1709, Hyper-V introduces support for provisioning Linux shielded VMs. The same support is now extended to VMM. For more information, see Create a Linux shielded VM template disk.

Configuration of fallback HGS

The host guardian service (HGS) provides attestation and key protection services to run shielded VMs on Hyper-V hosts. It should operate even in situations of disaster. Windows Server 1709 added support for fallback HGS.

Using VMM, a guarded host can be configured with a primary and a secondary pair of HGS URLS (an attestation and key protection URI). This capability enables scenarios such as guarded fabric deployments that span two datacenters for disaster recovery purposes and HGS running as shielded VMs.

The primary HGS URLs are always used in favor of the secondary HGS URLs. If the primary HGS fails to respond after the appropriate timeout and retry count, the operation is reattempted against the secondary HGS. Subsequent operations always favor the primary. The secondary is used only when the primary fails. For more information, see Configure HGS fallback URLs in VMM.

Azure integration

Management of Azure Resource Manager-based and region-specific Azure subscriptions

Currently, the VMM Azure plug-in supports only classic VMs and global Azure regions.

VMM 1801 supports management of:

  • Azure Resource Manager-based VMs.
  • Azure Active Directory-based authentication that's created by using the new Azure portal.
  • Region-specific Azure subscriptions, namely, Germany, China, and US Government Azure regions.

For more information, see Manage VMs.

New features in VMM 2019 UR1

The following sections introduce the new features or feature updates supported in VMM 2019 Update Rollup 1 (UR1).

For problems fixed in UR1, and the installation instructions for UR1, see the KB article.

Compute

Support for management of replicated library shares

Large enterprises usually have multisite datacenter deployments to cater to various offices across the globe. These enterprises typically have a locally available library server to access files for VM deployment rather than accessing the library shares from a remote location. This arrangement is to avoid any network-related problems users might experience. But library files must be consistent across all the datacenters to ensure uniform VM deployments. To maintain uniformity of library contents, organizations use replication technologies.

VMM now supports the management of replicated library servers. You can use any replication technology, such as DFSR, and manage the replicated shares through VMM. For more information, see Manage replicated library shares.

Storage

Configuration of DCB settings on S2D clusters

Remote Direct Memory Access (RDMA) and data center bridging (DCB) help to achieve a similar level of performance and losslessness in an Ethernet network as in Fibre Channel networks.

VMM 2019 UR1 supports configuration of DCB on S2D clusters.

Note

You must configure the DCB settings consistently across all the hosts and the fabric network (switches). A misconfigured DCB setting in any of the host or fabric devices is detrimental to S2D performance. For more information, see Configure DCB settings on the S2D cluster.

Networking

User experience improvements in logical networks

In VMM 2019 UR1, the user experience is enhanced for the process of creating logical networks. Logical networks are now grouped by product description based on use cases. Also, an illustration for each logical network type and a dependency graph are provided. For more information, see Set up logical networks in the VMM 2019 UR1 fabric.

Additional options to enable nested virtualization

You can now enable nested virtualization while you create a new VM and deploy VMs through VM templates and service templates. In earlier releases, nested virtualization was supported only on already deployed VMs. Learn more about enabling nested virtualization.

Updates to PowerShell cmdlets

VMM 2019 UR1 includes the following cmdlet updates for the respective features:

  1. Configuration of DCB settings on S2D clusters

    • New cmdlet New-SCDCBSettings - configures DCB settings in the S2D cluster managed by VMM.

    • New parameter [-DCBSettings] - specifies the DCB settings configured on the cluster, and is included in Install-SCVMHostCluster, Set-SCVMHostCluster, and Set-SCStorageFileServer cmdlets.

  2. Additional options to enable nested virtualization

    • New parameter [-EnableNestedVirtualization] - enables the nested virtualization and is included in Set-SCComputerTierTemplate cmdlet.

For more information about these updates, see VMM PowerShell articles.
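
To make the second update above concrete, here is a hedged sketch of enabling nested virtualization on a service template's compute tier. The template name is illustrative, and the way the tier object is retrieved is an assumption; only -EnableNestedVirtualization and -DCBSettings are confirmed by the list above.

```powershell
# Get the service template and its compute tier (names are illustrative).
$svcTemplate = Get-SCServiceTemplate -Name "DevTestService"
$tier        = Get-SCComputerTierTemplate -ServiceTemplate $svcTemplate

# Enable nested virtualization for VMs deployed from this tier (new UR1 parameter).
Set-SCComputerTierTemplate -ComputerTierTemplate $tier -EnableNestedVirtualization $true

# For DCB, the pattern is analogous: build a settings object with New-SCDCBSettings
# (see its help for the required settings) and pass it via the new -DCBSettings parameter:
#   Set-SCVMHostCluster -VMHostCluster (Get-SCVMHostCluster -Name "S2DClus01") -DCBSettings $dcbSettings
```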

New features in VMM 2019 UR2

The following sections introduce the new features and feature updates supported in VMM 2019 Update Rollup 2 (UR2).

For problems fixed in VMM 2019 UR2 and installation instructions for UR2, see the KB article.

Compute

Support for Windows Server 2012 R2 hosts

VMM 2019 UR2 supports Windows Server 2012 R2 hosts. For more information about the supported hosts, see System requirements.

Support for ESXi 6.7 hosts

VMM 2019 UR2 supports VMware ESXi v6.7 servers in VMM fabric. This support gives additional flexibility to the administrators in managing multiple hypervisors in use. For more information about supported VMware server versions, see System requirements.

Networking

User experience improvements in creating logical switches

With VMM 2019 UR2, the user experience is enhanced for the process of creating logical switches. 2019 UR2 includes smart defaults, provides clear text explanations for various options along with visual representations, and adds a topology diagram for the logical switch. Learn more.

Support for IPv6

VMM 2019 UR2 supports IPv6 SDN deployment. Learn more.

Provision to set affinity between virtual network adapters and physical adapters

VMM 2019 UR2 supports affinity between vNICs and pNICs. Affinity between virtual network adapters and physical adapters brings flexibility to route network traffic across teamed pNICs. With this feature, you can increase throughput by mapping an RDMA-capable physical adapter to an RDMA settings-enabled vNIC. Also, you can route a specific type of traffic (for example, live migration) to a higher-bandwidth physical adapter. In HCI deployment scenarios, by specifying affinity, you can use SMB Multichannel to meet high throughput requirements for SMB traffic. Learn more.
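
A hedged sketch of what setting this affinity could look like: the host, vNIC, and adapter names are placeholders, and retrieving a host vNIC with Get-SCVirtualNetworkAdapter -VMHost is an assumption. Only the -PhysicalNetworkAdapterName parameter is confirmed by the cmdlet updates listed later in this section.

```powershell
# Host and vNIC names are illustrative; verify how host vNICs are retrieved in your build.
$vmHost = Get-SCVMHost -ComputerName "hcihost01.contoso.com"
$vnic   = Get-SCVirtualNetworkAdapter -VMHost $vmHost | Where-Object Name -eq "SMB1"

# Map the RDMA-enabled vNIC to a specific RDMA-capable member of the SET team,
# using the -PhysicalNetworkAdapterName parameter introduced in UR2.
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $vnic -PhysicalNetworkAdapterName "Ethernet 2"
```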

Others

Support for SQL Server 2019

VMM 2019 RTM and later now supports SQL Server 2019.

Support for Linux Operating system

VMM 2019 UR2 supports the Red Hat 8.0, CentOS 8, Debian 10, and Ubuntu 20.04 Linux operating systems.

Updates to PowerShell cmdlets

VMM 2019 UR2 includes the following cmdlet updates for the respective features:

  1. Update VMM certificate

    • New cmdlet Update-SCVMMCertificate - updates the VMM certificate on the VMM server.
  2. Set affinity between virtual network adapters and physical adapters

    • New parameter [-PhysicalNetworkAdapterName] - specifies the name of the physical network adapter and is included in New-SCVirtualNetworkAdapter and Set-SCVirtualNetworkAdapter cmdlets.
  3. Support for IPv6

    • New parameter [-IPv6Subnet] - specifies an IPv6 subnet and is included in Add-SCFabricRoleResource cmdlet.

    • Updates to parameters in existing cmdlets:

      • IPv4 and IPv6 address separated by ‘;’ can be passed to [-RoutingIPSubnet] parameter in Add-SCVMNetworkGateway cmdlet.
      • IPv6 addresses can also be added to [-PublicIPAddresses] parameter in New-SCGatewayRoleConfiguration cmdlet.

For more information about these updates, see VMM PowerShell articles.

New features in VMM 2019 UR3

The following sections introduce the new features and feature updates supported in VMM 2019 Update Rollup 3 (UR3).

For problems fixed in VMM 2019 UR3, and installation instructions for UR3, see the KB article.

Compute

Trunk mode support for VM vNICs

VMM 2019 UR3 includes trunk mode support for VM vNICs. Trunk mode is used by NFV/VNF applications such as virtual firewalls, software load balancers, and virtual gateways to send and receive traffic over multiple VLANs. Learn more.

Support for Azure Stack HCI clusters

VMM 2019 UR3 includes support to add, deploy, and manage Azure Stack HCI clusters in VMM. In addition to the current server operating system SKUs, VMM extends its support to Azure Stack HCI.

Azure Stack HCI, version 20H2 is a hyper-converged infrastructure (HCI) operating system that runs on on-premises clusters with virtualized workloads.

Most of the operations to manage Azure Stack clusters in VMM are similar to that of managing Windows Server clusters. Learn more.

Note

Management of Azure Stack HCI stretched clusters is currently not supported in VMM.

Updates to PowerShell cmdlets

VMM 2019 UR3 includes the following cmdlet updates for Trunk mode support for VM vNICs:

New parameters [-AllowedVLanList] and [-NativeVLanId] are included in the New-SCVirtualNetworkAdapter and Set-SCVirtualNetworkAdapter cmdlets.

For more information about these updates, see VMM PowerShell articles.
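
For illustration, a minimal sketch of putting a VM vNIC into trunk mode with the parameters named above: the VM name and VLAN IDs are placeholders, and the exact value format accepted by -AllowedVLanList (array versus comma-separated string) should be verified.

```powershell
# Get the virtual appliance VM and its network adapter (names are illustrative).
$vm   = Get-SCVirtualMachine -Name "VirtualFirewall01"
$vnic = Get-SCVirtualNetworkAdapter -VM $vm

# Enable trunking: VLAN 10 is the native VLAN, and VLANs 10, 20, and 30 are allowed on the trunk.
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $vnic -NativeVLanId 10 -AllowedVLanList @(10, 20, 30)
```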

New features in VMM 2019 UR4

The following sections introduce the new features or feature updates supported in VMM 2019 Update Rollup 4 (UR4).

For problems fixed in UR4, and the installation instructions for UR4, see the KB article.

Compute

Support for Windows Server 2022 and Windows 11

VMM 2019 UR4 supports Windows Server 2022 and Windows 11 guest virtual machines. For more information about supported hosts, see System requirements.

Support for Smart card login

VMM 2019 UR4 supports smart card login to connect to virtual machines in enhanced session mode.

New features in VMM 2019 UR5

The following sections introduce the new features or feature updates supported in VMM 2019 Update Rollup 5 (UR5).

For problems fixed in UR5, and the installation instructions for UR5, see the KB article.

Compute

Support for VMware vSphere 7.0, 8.0 and ESXi 7.0, 8.0

VMM 2019 UR5 supports VMware vSphere 7.0, 8.0 and ESXi 7.0, 8.0. Learn more.

Discover Arc-enabled SCVMM from VMM console

VMM 2019 UR5 allows you to discover Arc-enabled SCVMM from the console, manage your hybrid environment, and perform self-service VM operations through the Azure portal. Learn more.

Important

This version of Virtual Machine Manager (VMM) has reached the end of support. We recommend that you upgrade to VMM 2022.

This article details the new features supported in System Center 1807 - Virtual Machine Manager (VMM).

What's new in System Center 1807 - Virtual Machine Manager

See the following sections for information about the new features supported in VMM 1807.

Note

To view the bugs fixed and the installation instructions for VMM 1807, see KB article 4135364.

Storage

Supports selection of CSV for placing a new VHD

VMM 1807 allows you to select a cluster shared volume (CSV) for placing a new virtual hard disk (VHD).

In earlier versions of VMM, a new VHD on a virtual machine (VM) is, by default, placed on the same CSV where the earlier VHDs associated with the VM are placed. There was no option to choose a different CSV or folder. In case of any issues related to the CSV, such as storage full or overcommitment, users had to deploy the VHD first and then migrate it.

With VMM 1807, you can now choose any location to place the new disk. You can manage this disk easily based on the storage availability of CSVs. Learn more.

Networking

Display of LLDP information for networking devices

VMM 1807 supports the Link Layer Discovery Protocol (LLDP). You can now view network device properties and capabilities information of the hosts from VMM. The host operating system must be Windows Server 2016 or later.

DataCenterBridging and DataCenterBridging-LLDP-Tools features have been enabled on hosts to fetch the LLDP properties. Learn more.

Convert SET switch to logical switch

VMM 1807 allows you to convert a switch embedded teaming (SET) switch to logical switch by using the VMM console. In earlier versions, this feature was supported only through PowerShell script. Learn more.

VMware host management

VMM 1807 supports VMware ESXi v6.5 servers in the VMM fabric. This support gives administrators additional flexibility in managing multiple hypervisors in use. Learn more about supported VMware server versions.

Support for S2D cluster update

VMM 1807 supports the update of an S2D host or a cluster. You can update individual S2D hosts or clusters against the baselines configured in Windows Server Update Services (WSUS). Learn more.

Others

Support for SQL Server 2017

VMM 1807 supports SQL Server 2017. You can upgrade SQL Server 2016 to SQL Server 2017.

Important

This version of Virtual Machine Manager (VMM) has reached the end of support. We recommend that you upgrade to VMM 2022.

This article details the new features supported in System Center 1801 - Virtual Machine Manager (VMM).

This article details the new features supported in System Center 2016 - Virtual Machine Manager (VMM).

What's new in System Center 1801 - Virtual Machine Manager

See the following sections for detailed information about the new features supported in VMM 1801.

Compute

Nested virtualization

VMM supports the nested virtualization feature that allows you to run Hyper-V inside a Hyper-V virtual machine. In other words, with nested virtualization, a Hyper-V host itself can be virtualized. Nested virtualization can be enabled out-of-band by using PowerShell and Hyper-V host configuration.

You can use this functionality to reduce your infrastructure expense for development, test, demo, and training scenarios. This feature also allows you to use third-party virtualization management products with the Microsoft hypervisor.

You can enable or disable the nested virtualization feature by using VMM 1801. You can configure the VM as a host in VMM and perform host operations from VMM on this VM. For example, VMM dynamic optimization considers a nested VM host for placement. Learn more.

Migration of VMware VM (EFI firmware-based VM) to Hyper-V VM

The current VMM migration for VMware VMs to Hyper-V only supports migration of BIOS-based VMs.

The VMM 1801 release enables migration of EFI-based VMware VMs to Hyper-V generation 2 VMs. VMware VMs that you migrate to the Microsoft Hyper-V platform can take advantage of the Hyper-V generation 2 features.

As part of this release, the Convert Virtual Machine wizard enables the VM migration based on the firmware type (BIOS or EFI), and selects and defaults the Hyper-V VM generation appropriately. Learn more.

  1. BIOS-based VMs are migrated to Hyper-V VM generation 1.
  2. EFI-based VMs are migrated to Hyper-V VM generation 2.

We've also made improvements in the VMware VM conversion process that makes the conversion up to 50% faster.

Performance improvement in host refresher

The VMM 1801 host refresher has undergone certain updates for performance improvement.

With these updates, in scenarios where the organization is managing a large number of hosts and VMs with checkpoints, you can observe significant and noticeable improvements in the performance of the job.

In our lab with VMM instances managing 20 hosts - each host managing 45-100 VMs, we've measured up to 10X performance improvement.

Enhanced console session in VMM

Console connect in VMM provides an alternative to remote desktop for connecting to the VM. This is most useful when the VM doesn't have any network connectivity, or when you want to change a network configuration that could break the network connectivity. Currently, console connect in VMM supports only a basic session where clipboard text can only be pasted through the Type Clipboard Text menu option.

VMM 1801 supports an enhanced console session that enables Cut (Ctrl + X), Copy (Ctrl + C), and Paste (Ctrl + V) operations on the ANSI text and files available on the clipboard, so copy and paste commands for text and files are possible from and to the VM. Learn more.

Storage

Improvement in VMM storage QoS

Storage Quality of Service (SQoS) provides a way to centrally monitor and manage storage performance for virtual machines using Hyper-V and the Scale-Out File Server (SOFS) roles. The feature automatically improves storage resource fairness between multiple VMs using the same cluster and allows policy-based performance goals.

VMM 1801 supports the following improvements in SQoS:

  • Extension of SQoS support beyond S2D - You can now assign storage QoS policies to storage area networks (SAN). Learn more.
  • Support for VMM private cloud - storage QoS policies can now be consumed by the VMM cloud tenants. Learn more.
  • Availability of storage QoS policies as templates - You can set storage QoS policies through VM templates. Learn more.

Networking

Configuration of guest clusters in SDN through VMM

With the advent of the software-defined network (SDN) in Windows Server 2016 and System Center 2016, the configuration of guest clusters has undergone some change.

With the introduction of the SDN, VMs that are connected to the virtual network using SDN are only permitted to use the IP address that the network controller assigns for communication. The SDN design is inspired by the Azure networking design and supports the floating IP functionality through the Software Load Balancer (SLB), like Azure networking.

The VMM 1801 release also supports the floating IP functionality through the Software Load Balancer (SLB) in the SDN scenarios. VMM 1801 supports guest clustering through an Internal Load Balancer (ILB) VIP. The ILB uses probe ports, which are created on the guest cluster VMs to identify the active node. At any given time, the probe port of only the active node responds to the ILB, and all the traffic directed to the VIP is routed to the active node. Learn more.

Configuration of SLB VIPs through VMM service templates

SDN in Windows 2016 can use Software Load Balancing (SLB) to evenly distribute network traffic among workloads managed by service provider and tenants. VMM 2016 currently supports deployment of SLB Virtual IPs (VIPs) using PowerShell.

With VMM 1801, VMM supports configuration of SLB VIPs while deploying multi-tier application by using the service templates. Learn more.

Configuration of encrypted VM networks through VMM

VMM 1801 supports encryption of VM networks. Using the new encrypted networks feature, end-to-end encryption can be easily configured on VM networks by using the Network Controller (NC). This encryption prevents traffic between two VMs on the same network and same subnet, from being read and manipulated.

The control of encryption is at the subnet level and encryption can be enabled/disabled for each subnet of the VM network. Learn more.

Security

Support for Linux shielded VMs

Windows Server 2016 introduced the concept of a shielded VM for Windows OS-based VMs. Shielded VMs provide protection against malicious administrator actions both when the VM's data is at rest and when untrusted software is running on Hyper-V hosts.

With Windows Server 1709, Hyper-V introduces support for provisioning Linux shielded VMs and the same has been extended to VMM 1801. Learn more.

Configuration of fallback HGS

Being at the heart of providing attestation and key protection services to run shielded VMs on Hyper-V hosts, the host guardian service (HGS) should operate even in situations of disaster. Windows Server 1709 added support for fallback HGS.

Using VMM 1801, a guarded host can be configured with a primary and a secondary pair of HGS URLS (an attestation and key protection URI). This capability enables scenarios such as guarded fabric deployments spanning two data centers for disaster recovery purposes, HGS running as shielded VMs, etc.

The primary HGS URLs will always be used in favor of the secondary. If the primary HGS fails to respond after the appropriate timeout and retry count, the operation will be reattempted against the secondary. Subsequent operations will always favor the primary; the secondary will only be used when the primary fails. Learn more.

Azure Integration

Management of Azure Resource Manager-based and region-specific Azure subscriptions

Currently, the VMM Azure plugin supports only classic virtual machines (VMs) and global Azure regions.

VMM 1801 supports management of Azure Resource Manager-based VMs, Azure Active Directory (AD)-based authentication that's created by using the new Azure portal, and region-specific Azure subscriptions (namely, Germany, China, and US Government Azure regions). Learn more.

What's new in VMM 2016

See the following sections for detailed information about the new features supported in VMM 2016.

Compute

Full lifecycle management of Nano Server-based hosts and VMs

You can provision and manage Nano Server-based hosts and virtual machines in the VMM fabric. Learn more.

Rolling upgrade of Windows Server 2012 R2 host clusters

You can now upgrade Hyper-V and scale-out file server (SOFS) clusters in the VMM fabric from Windows Server 2012 R2 to Windows Server 2016, with no downtime for the host workloads. VMM orchestrates the entire workflow. It drains the node, removes it from the cluster, reinstalls the operating system, and adds it back into the cluster. Learn more about performing rolling upgrades for Hyper-V clusters and SOFS clusters.

Creating Hyper-V & SOFS clusters

There's a streamlined workflow for creating Hyper-V and SOFS clusters:

  • Bare metal deployment of Hyper-V host clusters: Deploying a Hyper-V host cluster from bare-metal machines is now a single step. Learn more

  • Adding a bare-metal node to an existing Hyper-V host cluster or an SOFS Cluster: You can now directly add a bare-metal computer to an existing Hyper-V or SOFS cluster.

New operations for running VMs

You can now increase or decrease static memory and add or remove virtual network adapters for virtual machines that are running. Learn more.
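
A hedged sketch of both operations against a running VM with the VMM cmdlets follows. The VM, memory size, and VM network names are illustrative, and the -MemoryMB parameter on Set-SCVirtualMachine is an assumption to verify.

```powershell
# Names and sizes are illustrative.
$vm = Get-SCVirtualMachine -Name "AppVM01"

# Increase the static memory of the running VM to 8 GB.
Set-SCVirtualMachine -VM $vm -MemoryMB 8192

# Hot-add a network adapter connected to an existing VM network.
$vmNetwork = Get-SCVMNetwork -Name "Tenant-A-Network"
New-SCVirtualNetworkAdapter -VM $vm -VMNetwork $vmNetwork
```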

Production checkpoints

You can now create production checkpoints for VMs. These checkpoints are based on the Volume Shadow Copy Service (VSS) and are application-consistent (unlike standard checkpoints, which are based on saved-state technology and aren't). Learn more.
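
As a sketch: the VM and checkpoint names are illustrative, and the -CheckpointType parameter and its Production value on Set-SCVirtualMachine are assumptions mirroring the Hyper-V checkpoint types.

```powershell
# Configure the VM to use application-consistent production checkpoints.
$vm = Get-SCVirtualMachine -Name "SQLVM01"
Set-SCVirtualMachine -VM $vm -CheckpointType Production

# Create a checkpoint; because of the setting above, this is a production (VSS-based) checkpoint.
New-SCVMCheckpoint -VM $vm -Name "Pre-patch"
```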

Server App-V

The Server App-V application in service templates is no longer available in VMM 2016. You can't create new templates or deploy new services with the Server App-V app. If you upgrade from VMM 2012 R2 and have a service with the Server App-V application, the existing deployment will continue to work. However, after the upgrade, you can't scale out the tier with Server App-V application. You can scale out other tiers.

Note

The following feature is available from 2016 UR9.

Enhanced console session in VMM

The console connect capability in VMM provides an alternative way to connect to the VM via remote desktop. This method is most useful when the VM doesn't have any network connectivity or when you want to change to a network configuration that could break the network connectivity. Currently, the console connect capability in VMM supports only a basic session where clipboard text can be pasted only by using the Type Clipboard Text menu option.

VMM supports an enhanced console session that enables Cut (Ctrl + X), Copy (Ctrl + C), and Paste (Ctrl + V) operations on the ANSI text and files available on the clipboard. As a result, Copy and Paste commands for text and files are possible from and to the VM. For more information, see Enable enhanced console session in VMM.

Storage

Deploy and manage storage clusters with Storage Spaces Direct (S2D)

Storage Spaces Direct in Windows Server 2016 enables you to build highly available storage systems on Windows Server. You can use VMM to create a Scale-Out File Server running Windows Server 2016, and configure it with Storage Spaces Direct. After it's configured, you can create storage pools and file shares on it. Learn more.

Storage Replica

In VMM 2016, you can use Windows Storage Replica to protect data in a volume by synchronously replicating it between primary and secondary (recovery) volumes. You can deploy the primary and secondary volumes to a single cluster, to two different clusters, or to two standalone servers. You use PowerShell to set up Storage Replica and run failover. Learn more.
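
Storage Replica itself is configured with the Windows Server StorageReplica cmdlets rather than VMM cmdlets. A minimal sketch for a server-to-server partnership might look like the following; the computer, replication-group, and volume names are illustrative.

```powershell
# Create a synchronous replication partnership between two servers.
# Source data volume F: is protected by log volume G:; the destination mirrors that layout.
New-SRPartnership -SourceComputerName "sr-srv01" -SourceRGName "rg01" `
    -SourceVolumeName "F:" -SourceLogVolumeName "G:" `
    -DestinationComputerName "sr-srv02" -DestinationRGName "rg02" `
    -DestinationVolumeName "F:" -DestinationLogVolumeName "G:"

# To fail over, reverse the replication direction with Set-SRPartnership (see its help for details).
```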

Storage Quality of Service (QoS)

You can configure QoS for storage to ensure that disks, VMs, apps, and tenants don't drop below a certain resource quality when hosts and storage are handling heavy loads. You can configure QoS for storage in the VMM fabric.

Networking

Software Defined Networking (SDN)

In VMM 2016, you can deploy the entire SDN stack using VMM service templates.

  • You can deploy and manage a multi-node Network Controller in a subnet. After you deploy and onboard the Network Controller, you can specify that fabric components should be managed with SDN to provide connectivity to tenant VMs and to define policies.
  • You can deploy and configure a software load balancer to distribute traffic within networks managed by Network Controller. The software load balancer can be used for inbound and outbound NAT.
  • You can deploy and configure a Windows Server Gateway pool with M+N redundancy. After you deploy the gateway, you connect a tenant network to a hosting provider network, or to your own remote data center network using S2S GRE, S2S IPSec, or L3.

Network traffic isolation and filtering

You can limit and segregate network traffic by specifying port ACLs on VM networks, virtual subnets, network interfaces, or on an entire VMM stamp using Network Controller and PowerShell. Learn more.
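
A hedged sketch of the PowerShell flow follows. The New-SCPortACL and New-SCPortACLRule cmdlet and parameter names are assumptions based on the VMM naming convention, and all names and values are placeholders.

```powershell
# Create a port ACL and a rule that blocks inbound RDP to the attached scope.
$acl = New-SCPortACL -Name "Tenant-A-ACL" -Description "Baseline traffic filtering for Tenant A"
New-SCPortACLRule -PortACL $acl -Name "DenyInboundRDP" -Type Inbound -Action Deny `
    -Protocol Tcp -LocalPortRange 3389 -Priority 100

# Attach the ACL to a VM network so the rule applies to all adapters on that network.
$vmNetwork = Get-SCVMNetwork -Name "Tenant-A-Network"
Set-SCVMNetwork -VMNetwork $vmNetwork -PortACL $acl
```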

Virtual network adapter naming

When you deploy a virtual machine, you might want to run a post-deployment script on the guest operating system to configure virtual network adapters. Previously, this was difficult because there wasn't an easy way to distinguish different virtual network adapters during deployment. Now, for generation 2 virtual machines deployed on Hyper-V hosts running Windows Server 2016, you can name the virtual network adapter in a virtual machine template. This is similar to using consistent device naming (CDN) for a physical network adapter.

Self-service SDN management using Windows Azure Pack (WAP)

You can provide self-service capabilities for fabric managed by Network Controller. These include creating and managing VM networks, configuring S2S IPSec connections, and configuring NAT options for tenant and infrastructure VMs in your data center.

Logical switch deployment across hosts

  • The interface for creating a logical switch has been streamlined to make it easier to select settings.
  • You can directly use Hyper-V to configure a standard virtual switch on a managed host, and then use VMM to convert the standard virtual switch to a VMM logical switch, which you later apply on additional hosts.
  • When applying a logical switch to a particular host, if the entire operation doesn't succeed, the operation is reverted and host settings are left unchanged. Improved logging makes it easier to diagnose failures.

Security

Guarded host deployment

You can provision and manage guarded hosts and shielded VMs in the VMM fabric, to help provide protection against malicious host administrators and malware.

  • You can manage guarded hosts in the VMM compute fabric. You configure guarded hosts to communicate with HGS servers, and you can specify code integrity policies that restrict software that can run in kernel mode on the host.
  • You can convert existing VMs to shielded VMs, and deploy new shielded VMs.

Next steps