15.1 vSphere Virtual Networking
The traditional vSphere vMotion mechanism allows the user to migrate a virtual machine from one vSphere standard switch (VSS) to another within the same cluster, or within a single VMware vSphere Distributed Switch™ (VDS). However, long distance vSphere vMotion is built on the ability of vSphere 6 to perform cross-virtual switch vSphere vMotion operations. Cross-virtual switch vSphere vMotion allows the seamless migration of a virtual machine across different virtual switches, unbounded by the networks created on those virtual switches.
Unlike earlier vSphere vMotion scenarios, cross-virtual switch (and therefore cross-vCenter) long distance vSphere vMotion works across a mix of VSS and VDS instances; vSphere 6 removed that limitation.
As previously discussed, to achieve long distance vSphere vMotion operations across data centers, the source and destination port groups must share the same Layer 2 address space, because the network address properties within the guest operating system do not change during a vSphere vMotion operation. Be aware that when architecting a long distance vSphere vMotion solution, only the following cross-virtual switch vSphere vMotion operations are possible (a small validation sketch follows the note below):
VSS to VSS
VSS to VDS
VDS to VDS
Note With long distance vSphere vMotion operations, it is not possible to migrate back from a vSphere Distributed Switch to a VSS.
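To make the supported combinations concrete, the following minimal Python sketch encodes them as a simple lookup. It is purely illustrative: the switch-type labels are strings, not live vSphere inventory objects.

# Illustrative only: encode the cross-virtual switch vSphere vMotion
# combinations supported by vSphere 6 as a lookup table.
SUPPORTED_PATHS = {
    ("VSS", "VSS"),
    ("VSS", "VDS"),
    ("VDS", "VDS"),
    # ("VDS", "VSS") is deliberately absent: migrating back from a
    # vSphere Distributed Switch to a standard switch is not supported.
}

def is_supported_migration(source: str, destination: str) -> bool:
    """Return True when the source/destination switch pairing is supported."""
    return (source, destination) in SUPPORTED_PATHS

assert is_supported_migration("VSS", "VDS")
assert not is_supported_migration("VDS", "VSS")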
When designing your vSphere network infrastructure, you must consider whether to implement a VSS or a vSphere Distributed Switch. The main benefit of a VSS is ease of implementation. However, by adopting a VDS, you can benefit from a number of features only offered by this technology, including Network I/O Control (NIOC), Link Aggregation Control Protocol (LACP), and NetFlow.
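As a hedged illustration of how those VDS-only features surface programmatically, the following sketch uses the open-source pyVmomi SDK to read the relevant properties from each distributed switch in the inventory. The hostname and credentials are placeholders, and property availability varies with the VDS version; treat this as a starting point, not a definitive implementation.

# Hedged pyVmomi sketch: report NIOC, LACP, and NetFlow settings for each
# vSphere Distributed Switch. Connection details are hypothetical, and
# disableSslCertValidation requires a recent pyVmomi release.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="***",
                  disableSslCertValidation=True)
content = si.RetrieveContent()

# Walk the inventory for every VDS and print its feature configuration.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
for dvs in view.view:
    cfg = dvs.config
    print(dvs.name)
    print("  NIOC enabled     :", cfg.networkResourceManagementEnabled)
    print("  LACP API version :", cfg.lacpApiVersion)
    print("  NetFlow collector:",
          cfg.ipfixConfig.collectorIpAddress if cfg.ipfixConfig else None)

Disconnect(si)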
 
Figure 14. vSphere Distributed Switch – Cross-Data Center Architecture
 
In this sample design, the vSphere Distributed Switch is configured to carry all network traffic. Two 10 GbE network interfaces per host carry all ingress and egress Ethernet traffic on all configured VLANs. The user-defined networks must be configured on a per-port-group basis using the VLAN IDs shown.
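A per-port-group VLAN definition of this kind can be expressed through the vSphere API. The hedged pyVmomi sketch below creates one distributed port group that tags a single VLAN; the port group name and VLAN ID are illustrative, and dvs is assumed to be a VmwareDistributedVirtualSwitch object already retrieved from the inventory, as in the previous sketch.

# Hedged pyVmomi sketch: one distributed port group tagging one VLAN.
# "dvs" is an existing vim.dvs.VmwareDistributedVirtualSwitch object.
from pyVmomi import vim

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="dvPG-VM-VLAN100",                 # hypothetical port group name
    type=vim.dvs.DistributedVirtualPortgroup.PortgroupType.earlyBinding,
    numPorts=128,
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
            vlanId=100,                     # VLAN tagged at the virtual switch
            inherited=False)))

task = dvs.AddDVPortgroup_Task([pg_spec])   # returns a vim.Task to monitor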
All physical network switch ports connected to the 10 GbE network interfaces must be configured as trunk ports. VMware recommends following the physical switch vendor’s guidance for the configuration of the hardware, with spanning tree protocol (STP) features such as PortFast typically being enabled on these host-facing ports.
The figure also shows the port groups used to logically segment traffic by VLAN, stretched (where appropriate) across the two physical locations. VLAN tagging occurs at the virtual switch level. Uplinks are configured as active/active, with the choice of load-balancing algorithm depending on the physical switch’s EtherChannel capabilities. To align with security best practices, both the virtual and physical switches are configured to pass traffic only for the VLANs employed by the infrastructure, as opposed to trunking all VLANs.
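The active/active uplink arrangement can likewise be sketched as a pyVmomi teaming policy. The value shown, route based on physical NIC load ("loadbalance_loadbased"), suits switches without EtherChannel; with an EtherChannel- or LACP-capable switch, IP hash ("loadbalance_ip") would be the matching choice. The uplink names are assumptions about the VDS uplink configuration.

# Hedged pyVmomi sketch: active/active uplinks with an explicit
# load-balancing algorithm, applied through a port group's default
# port configuration.
from pyVmomi import vim

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    inherited=False,
    policy=vim.StringPolicy(value="loadbalance_loadbased", inherited=False),
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False,
        activeUplinkPort=["Uplink 1", "Uplink 2"]))   # both uplinks active

port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    uplinkTeamingPolicy=teaming)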
The Network I/O Control (NIOC) feature of vSphere Distributed Switch provides a QoS mechanism for network traffic within the ESXi host. NIOC can help prevent “bursty” vSphere vMotion traffic from flooding the network and causing issues with other important traffic, such as virtual machine and VMware vSAN™ communications. In addition to using NIOC, VMware recommends tagging traffic types for 802.1p QoS and configuring the upstream physical switches with appropriate traffic-management priorities. If QoS tagging is not implemented, the value of the NIOC configuration is limited to the hosts themselves.
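To illustrate, the hedged pyVmomi sketch below lowers the shares assigned to vMotion system traffic under NIOC version 3 (vSphere 6.x). The dvs object and the share value are assumptions; the traffic key "vmotion" is one of the API's system traffic types.

# Hedged pyVmomi sketch (NIOC v3): reduce the relative shares of vMotion
# traffic so bursts cannot crowd out virtual machine or vSAN traffic.
from pyVmomi import vim

vmotion_alloc = vim.DvsHostInfrastructureTrafficResource(
    key="vmotion",
    allocationInfo=vim.DvsHostInfrastructureTrafficResourceAllocation(
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.custom,
                              shares=25)))   # assumed share value

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,  # guards against concurrent edits
    infrastructureTrafficResourceConfig=[vmotion_alloc])

task = dvs.ReconfigureDvs_Task(spec)         # returns a vim.Task to monitor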