VMware Requirements for Long-Distance vSphere vMotion
vSphere vMotion is the mechanism vSphere uses to migrate active virtual machines from one physical ESXi host to another. Although earlier vSphere releases placed some limits on this functionality, vSphere vMotion is perhaps the most powerful feature of a vSphere virtual environment, allowing the migration of active virtual machines with zero downtime. A vSphere vMotion event can be initiated manually by operational teams through the VMware management tools, or VMware vCenter Server® can initiate a live migration automatically as part of the VMware vSphere Distributed Resource Scheduler™ feature, which uses this mechanism to load balance virtual machines across a single cluster.
Figure 12. vSphere vMotion Process
 
One of the major enhancements in vSphere 6 is the introduction of long distance vSphere vMotion, which allows migrating virtual machines from one physical data center to another.
There are several key requirements to achieve this:
A cross-data center interconnect with a round-trip time (RTT) of less than 150 ms. (A simple latency pre-check sketch follows this list.)
Network bandwidth of at least 250 Mbps for each long distance vSphere vMotion operation.
vSphere 6 at both the on-premises data center and the VMware Cloud Provider Program data center (source and destination).
The same single sign-on domain across data centers (specifically, the same SSO domain is a requirement when operations are carried out through the UI). When executing the vSphere vMotion event through the vCenter Server API, it is possible for the source and destination vCenter Server instances to belong to different SSO domains.
Cross-site Layer 2 connectivity for virtual machine networks. The IP subnet on which the virtual machine resides must be accessible from both the source and destination ESXi servers. This requirement is important because the virtual machine retains its IP address when it moves to the destination ESXi server, so that its communication with the outside world continues uninterrupted after the move. It is also required for intra-subnet communication with the devices remaining on the original site after the long distance vSphere vMotion event has completed.
As with local data center migration, a dedicated vSphere vMotion network is strongly recommended. The ESXi host's internal TCP/IP stack uses VMkernel interfaces to carry vSphere vMotion traffic between ESXi hosts. Typically, the interfaces of the source and destination ESXi servers reside on the same IP subnet (the vSphere vMotion network), but this is no longer a strict requirement because vSphere 6 supports routed vSphere vMotion traffic. (A short programmatic example of tagging a VMkernel adapter for vMotion also follows this list.)
While every vSphere edition supports virtual machine workloads, cross-site workload mobility requires VMware vSphere 6 Enterprise Plus Edition™, the licensing tier that includes long distance vSphere vMotion migration of virtualized resources.
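To sanity-check the latency requirement before attempting a migration, a quick measurement from the source site can help. The following is a minimal, illustrative Python sketch, not VMware tooling; the remote endpoint name is a placeholder, and a Linux ping binary is assumed.

```python
# Illustrative pre-check only: measure average RTT to a placeholder remote
# vMotion endpoint and compare it against the 150 ms long distance
# vSphere vMotion limit. Assumes a Linux "ping" binary is available.
import re
import subprocess

MAX_RTT_MS = 150.0

def average_rtt_ms(host, count=10):
    """Ping the host and return the average round-trip time in milliseconds."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    ).stdout
    # Linux ping summary line: "rtt min/avg/max/mdev = 71.2/74.9/80.1/2.3 ms"
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if match is None:
        raise RuntimeError("could not parse ping output")
    return float(match.group(1))

rtt = average_rtt_ms("vmotion-gw.provider.example.com")  # placeholder FQDN
print("average RTT %.1f ms -> %s" % (rtt, "OK" if rtt < MAX_RTT_MS else "exceeds limit"))
```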
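The dedicated vSphere vMotion network can also be prepared programmatically. The following is a minimal sketch using the pyVmomi SDK that tags an existing VMkernel adapter for vMotion traffic; the vCenter Server name, credentials, host name, and adapter name (vmk1) are all placeholder assumptions, and error handling is omitted.

```python
# Hedged sketch: tag an existing VMkernel adapter for vSphere vMotion
# traffic with pyVmomi. All names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
view.Destroy()

# Mark vmk1 as the interface that carries vMotion traffic on this host.
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", "vmk1")
Disconnect(si)
```

In vSphere 6, the adapter could alternatively be placed on the dedicated vMotion TCP/IP stack, which is what enables the routed vSphere vMotion traffic mentioned above.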
With these requirements in place, vSphere vMotion makes it possible to migrate a virtual machine from one vCenter Server located in an on-premises data center to another vCenter Server located at a VMware Cloud Provider Program hosting partner’s data center.
Figure 13. ESXi Clusters and Long Distance vSphere vMotion
 
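For illustration, the following hedged pyVmomi sketch shows how such a cross-vCenter migration might be initiated through the vCenter Server API (which, as noted earlier, also works across different SSO domains). Every hostname, credential, inventory object name, and the SSL thumbprint below is a placeholder assumption; network device remapping and error handling are omitted for brevity.

```python
# Hedged sketch of initiating a cross-vCenter long distance migration
# through the vSphere API with pyVmomi. All values are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
src = SmartConnect(host="vc-onprem.example.com", user="administrator@vsphere.local",
                   pwd="password", sslContext=ctx)
dst = SmartConnect(host="vc-provider.example.com", user="administrator@vsphere.local",
                   pwd="password", sslContext=ctx)

def find(si, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.Destroy()
    return obj

vm = find(src, vim.VirtualMachine, "app-vm-01")
cluster = find(dst, vim.ClusterComputeResource, "provider-cluster01")

spec = vim.vm.RelocateSpec()
spec.host = find(dst, vim.HostSystem, "esxi-remote01.example.com")
spec.pool = cluster.resourcePool
spec.folder = find(dst, vim.Datacenter, "ProviderDC").vmFolder
spec.datastore = find(dst, vim.Datastore, "provider-ds01")
spec.service = vim.ServiceLocator(
    url="https://vc-provider.example.com",
    instanceUuid=dst.content.about.instanceUuid,
    sslThumbprint="AA:BB:...:FF",  # placeholder destination certificate thumbprint
    credential=vim.ServiceLocatorNamePassword(
        username="administrator@vsphere.local", password="password"),
)

task = vm.RelocateVM_Task(spec=spec)  # starts the long distance vSphere vMotion
```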
The vSphere vMotion process retains the virtual machine's historical data (events, alarms, performance counters, and so on) as well as the properties tied to a specific vCenter Server, such as DRS groups and HA settings. This means that the virtual machine can change not only compute (the host) but also network, management, and storage, all at the same time with a single operational action.
Other key long distance vSphere vMotion considerations include:
The virtual machine UUID is maintained across vCenter Server instances
Alarms, events, tasks, and historical data are retained
HA and DRS settings, including affinity and anti-affinity rules, isolation responses, automation level, and start-up priority are retained
Virtual machine resources such as shares, reservations, and limits are retained
The MAC address of the virtual NIC is maintained (a virtual machine which is migrated to another vCenter Server keeps its MAC address, and this MAC address is not reused in the source vCenter Server)
The UUID of the virtual machine remains the same no matter how many long distance vSphere vMotion operations are carried out, and as previously mentioned, a long distance vSphere vMotion operation retains all the historical data, DRS rules, anti-affinity rules, events, alarms, task history, and HA properties. In addition, standard vSphere vMotion compatibility checks are conducted before the long distance vSphere vMotion operation occurs. The only property that changes is the virtual machine's managed ID.
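As an illustration of this behavior, the short continuation below of the earlier pyVmomi sketch (it assumes the dst connection, vm, spec, and find() helper from that sketch, with the same placeholder names) checks that the instance UUID survives the move while the managed ID changes.

```python
# Continuation of the earlier migration sketch (reuses dst, vm, spec, and
# find()): verify that the instance UUID is retained across the migration
# while only the managed ID changes. Object names remain placeholders.
from pyVim.task import WaitForTask

uuid_before = vm.config.instanceUuid
moid_before = vm._moId  # managed ID on the source vCenter Server

WaitForTask(vm.RelocateVM_Task(spec=spec))  # block until the migration finishes

moved = find(dst, vim.VirtualMachine, "app-vm-01")
assert moved.config.instanceUuid == uuid_before  # UUID is retained
print("managed ID changed: %s -> %s" % (moid_before, moved._moId))
```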
As a side note, VMware vSphere Virtual Volumes™ are supported for long distance vSphere vMotion but are not required, because the operation does not depend on shared storage.
Additional design considerations associated with this type of architecture are as follows:
vSphere 6 now supports up to 64 ESXi hosts in a single cluster.
For Web GUI-initiated long distance vSphere vMotion operations, both on-premises and hosted clusters must be part of the same VMware Platform Services Controller™ domain. There are several ways to increase the resiliency of the vCenter Server and Platform Services Controller across multiple sites. An in-depth discussion of these methods is out of scope for this paper, but more information can be found at https://blogs.vmware.com/consulting/2015/03/vsphere-datacenter-design-vcenter-architecture-changes-vsphere-6-0-part-1.html.
VMware functionalities such as vSphere DRS, vSphere Multi-Processor Fault Tolerance (SMP-FT), and VMware vSphere High Availability (HA) are only available to ESXi hosts that belong to the same cluster. Consequently, a virtual machine is never moved dynamically between data center sites by these features. All workload mobility events between the on-premises data center and the VMware Cloud Provider Program hosting partner’s data center must therefore be triggered manually, or scheduled by an administrator, through the VMware vSphere Web Client or API.
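As a purely illustrative sketch of the scheduling option, the standard-library snippet below fires a placeholder migration function at a maintenance window; in a real environment, vCenter Server's built-in scheduled tasks feature would typically be used instead.

```python
# Illustrative only: because DRS never moves virtual machines between sites,
# a cross-site migration can be wrapped in a simple scheduler and fired
# during a maintenance window. Uses only the Python standard library.
import sched
import time

def migrate():
    # Placeholder for vm.RelocateVM_Task(spec=spec) from the earlier sketch.
    print("starting long distance vSphere vMotion")

scheduler = sched.scheduler(time.time, time.sleep)
window = time.mktime(time.strptime("2015-08-01 02:00", "%Y-%m-%d %H:%M"))
scheduler.enterabs(window, 1, migrate)
scheduler.run()  # blocks until the window opens, then runs the migration
```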