Architecting a vSphere Compute Platform : vSphere Cluster Design : 7.7 Virtual Machine Mobility
   
7.7 Virtual Machine Mobility
In previous releases of vSphere, the cluster was the boundary for shared resources and live migration of virtual machines. The ability to live migrate a virtual machine across hosts was revolutionary at the time. With the release of vSphere 6.0, however, the vSphere vMotion capabilities have been enhanced significantly, enabling live migration of virtual machines across virtual switches, across vCenter Server systems, and over long distances with up to 150 milliseconds (ms) of round-trip time (RTT) between hosts.
These enhancements allow much greater flexibility when designing vSphere architectures, which were previously limited to a single vCenter Server. The combination of increased scale and broader live migration also provides more options for multisite or metro designs, because vCenter Server 6 scale limits no longer form a hard boundary for compute resources. As a result, significantly larger and more flexible vSphere environments are now possible.
When a live migration occurs across vCenter Server instances, the metadata and virtual machine settings are preserved. This includes the virtual machine UUID, events, alarms, and task history, in addition to resource settings, such as shares, reservations, and limits.
vSphere HA and vSphere DRS settings are also maintained after a long-distance vSphere vMotion migration, along with affinity and anti-affinity rules, automation level, startup priority, and host isolation response. This allows for a seamless operational experience as virtual machines are live migrated throughout the multisite infrastructure. Virtual machine MAC addresses are also preserved as virtual machines are moved across different vCenter Server instances. When a virtual machine is moved from one vCenter Server instance to another, its MAC address is added to an internal blacklist to ensure that a duplicate MAC address is never generated.
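The blacklist behavior can be illustrated with a short sketch. The following Python snippet is a simplified illustration only, not VMware's implementation: the function name, the seeded random generator, and the use of a plain set as the blacklist are all assumptions made for the example. It shows the essential idea that a retired address is never reissued.

```python
import random

VMWARE_OUI = "00:50:56"  # VMware's organizationally unique identifier prefix

def generate_mac(blacklist, rng=random.Random(0)):
    """Generate a MAC address in the VMware OUI range that is not blacklisted.

    `blacklist` models the internal list of addresses that have crossed a
    vCenter Server boundary. Illustrative sketch only.
    """
    while True:
        suffix = ":".join(f"{rng.randrange(256):02x}" for _ in range(3))
        mac = f"{VMWARE_OUI}:{suffix}"
        if mac not in blacklist:
            return mac

# A migrated VM keeps its MAC address; the address is blacklisted so it
# can never be handed out to another virtual machine.
blacklist = set()
migrated_vm_mac = generate_mac(blacklist)
blacklist.add(migrated_vm_mac)
new_mac = generate_mac(blacklist)
assert new_mac != migrated_vm_mac  # duplicate generation is impossible
```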
Increasing the latency threshold for vSphere vMotion to 150 ms host-to-host RTT allows live migration to occur across larger geographic spans, potentially covering intracontinental distances. This feature will play a key role for many service providers in data center migrations, disaster avoidance scenarios, and multisite load balancing.
With the new features offered by vSphere 6, service providers can change the compute resource, storage resource, virtual machine network, and vCenter Server instance without disrupting consumer application services that reside on the virtual machines, thus enabling a wide array of new data center design opportunities.
 
Table 14. Online Migration Design Options
The following entries describe each mobility technology, with its use cases, business benefits, and design requirements.
Cross virtual switch vSphere vMotion

Use cases:
- Perform a seamless migration of a virtual machine across different virtual switches.
- Migrate a virtual machine to a new cluster with a separate VMware vSphere Distributed Switch™ (VDS) without interruption.
- vSphere vMotion is no longer restricted by the networks created on the virtual switches, and it works across a mix of standard and distributed switches. Previously, migration was possible only from one vSphere Standard Switch to another, or within a single VDS; this limitation has been removed.
- Cross virtual switch vSphere vMotion transfers the VDS metadata (network statistics) to the destination VDS.

Business benefits:
- Increased agility by reducing the time it takes to replace or refresh hardware.
- Increased reliability through greater availability of business applications, including during planned maintenance activities.

Design requirements:
- The source and destination port groups must share the same L2 address space, because the IP address within the virtual machine does not change.
- The following cross virtual switch vSphere vMotion migrations are possible: vSphere Standard Switch to vSphere Standard Switch, vSphere Standard Switch to VDS, and VDS to VDS.
- Migrating back from a VDS to a vSphere Standard Switch is not supported.
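The supported combinations can be captured in a small validation helper. The sketch below is illustrative only; the function name and the string labels are assumptions for the example, not a VMware API. It encodes the rule that every standard/distributed combination is supported except migrating from a VDS back to a standard switch.

```python
# Cross virtual switch vMotion support matrix (illustrative sketch).
# "VSS" = vSphere Standard Switch, "VDS" = vSphere Distributed Switch.
SUPPORTED_PATHS = {
    ("VSS", "VSS"),
    ("VSS", "VDS"),
    ("VDS", "VDS"),
    # ("VDS", "VSS") is deliberately absent: not supported.
}

def migration_supported(source_switch: str, destination_switch: str) -> bool:
    """Return True if a cross virtual switch vMotion path is supported."""
    return (source_switch, destination_switch) in SUPPORTED_PATHS
```

For example, `migration_supported("VSS", "VDS")` returns `True`, while `migration_supported("VDS", "VSS")` returns `False`.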
 
 
Cross vCenter vSphere vMotion

Use cases:
- Because cross vCenter vSphere vMotion is built on enhanced vSphere vMotion, shared storage is not required.
- Simplify migration tasks in public/private cloud environments with large numbers of vCenter Server instances.
- Migrate from a vCenter Server Appliance to a Windows version of vCenter Server, and the reverse.
- Replace or retire a vCenter Server instance without disruption.
- Pool resources across vCenter Server instances where additional instances have been deployed because of vCenter Server scalability limits.
- Migrate virtual machines across local, metro, and continental distances.

Business benefits:
- Increased reliability when migrating to a Windows vCenter Server with an SQL cluster. This can increase the availability of vCenter Server services, including during planned maintenance activities such as vCenter Server upgrades, which can now be performed without affecting managed virtual machines.
- Perform the following virtual machine relocation activities simultaneously, seamlessly to the guest operating system:
  - Change compute (vSphere vMotion) – migrates the virtual machine across compute hosts.
  - Change storage (VMware vSphere Storage vMotion) – migrates the virtual machine disks across datastores.
  - Change network (cross virtual switch vSphere vMotion) – migrates the virtual machine across different virtual switches.
  - Change vCenter Server (cross vCenter vSphere vMotion) – changes which vCenter Server instance manages the virtual machine.
- Reduced cost when migrating to a vCenter Server Appliance, eliminating the need for Windows and SQL licenses.

Design requirements:
- As with cross virtual switch vSphere vMotion, cross vCenter vSphere vMotion requires L2 network connectivity, because the IP address of the virtual machine does not change.
 
Long-distance vSphere vMotion

Long-distance vSphere vMotion is an extension of cross vCenter vSphere vMotion. It targets environments where vCenter Server instances are separated by large geographic distances, and where the round-trip latency between source and destination hosts is 150 ms or less.

Use cases:
- Migrate virtual machines across physical servers that are spread over a large geographic distance, without interruption to applications.
- Perform a permanent migration of virtual machines to another data center.
- Migrate virtual machines to another site to avoid an imminent disaster.
- Distribute virtual machines across sites to balance system load.
- Enable follow-the-sun global support teams.

Business benefits:
- Increased reliability through greater availability of business applications during a disaster avoidance situation.

Design requirements:
- Although the migration spans a long distance, all the standard vMotion guarantees are honored. VMware vSphere Virtual Volumes™ is not required, but this technology is supported, along with VMFS and NFS datastores.
- The requirements are the same as for cross vCenter vSphere vMotion, except that the maximum latency between the source and destination sites must be 150 ms or less, and there must be 250 Mbps of available bandwidth.
- The virtual machine network must be a stretched L2 network, because the IP address of the guest operating system does not change. If the destination port group is not in the same L2 address space, network connectivity to the guest operating system is lost. In some topologies, such as metro or cross-continental designs, this means a stretched L2 technology must be in place. No specific stretched L2 technology is prescribed; any technology that can present the L2 network to the vSphere hosts will work, because ESXi is unaware of how the physical network is configured. Examples include VXLAN, VMware NSX L2 gateway services, Cisco OTV, and GIF/GRE tunnels.
- There is no defined maximum supported distance, as long as the network meets these requirements. Long-distance vSphere vMotion performance will vary, because it is constrained by the laws of physics.
- For a complete list of requirements, refer to the VMware Knowledge Base article, Long Distance vMotion requirements in VMware vSphere 6.0 (2106949), at http://kb.vmware.com/kb/2106949.
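The latency and bandwidth requirements above lend themselves to a simple pre-migration sanity check, and the physics constraint can be made concrete: light in optical fiber propagates at roughly two-thirds the speed of light in a vacuum (about 200,000 km/s). The sketch below is illustrative only; the function names and the example distance are assumptions for this document, not part of any VMware tooling.

```python
MAX_RTT_MS = 150.0          # long-distance vMotion latency ceiling (this section)
MIN_BANDWIDTH_MBPS = 250.0  # required available bandwidth (this section)

def meets_vmotion_requirements(rtt_ms: float, bandwidth_mbps: float) -> bool:
    """Check measured RTT and available bandwidth against the documented limits."""
    return rtt_ms <= MAX_RTT_MS and bandwidth_mbps >= MIN_BANDWIDTH_MBPS

def min_fiber_rtt_ms(distance_km: float) -> float:
    """Best-case RTT over fiber: signal travels there and back at ~200,000 km/s."""
    speed_in_fiber_km_per_s = 200_000.0
    return 2 * distance_km / speed_in_fiber_km_per_s * 1000.0

# New York to London is roughly 5,600 km, so physics alone costs about
# 56 ms of RTT -- well within the 150 ms budget, before real-world
# routing and queuing delays are added.
print(round(min_fiber_rtt_ms(5600), 1))  # → 56.0
```

A check like this illustrates why there is no defined maximum distance: what matters is the measured RTT and available bandwidth of the actual path, not the distance itself.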