Architecting a Hybrid Mobility Strategy: Workload Mobility Implementation Example: 16.1 Technology Overview
   
16.1 Technology Overview
This data center architecture follows a typical modular separation of components. Starting with compute, Rainpole.com’s design provides multiple Cisco Unified Computing System (UCS) chassis connected to a redundant pair of 6296UP Fabric Interconnect devices. Each blade in a UCS chassis is a separate ESXi 6 host used to run multiple virtual machines. Virtual networking is provided by a single vSphere Distributed Switch at each physical location.
Each 6296UP Fabric Interconnect device is connected through a port channel to a pair of Nexus 7710 devices, representing a collapsed data center core and aggregation layer.
The pair of Nexus 7710s is then connected to the data center WAN edge, providing access to the Layer 3 core of the network (a WAN enterprise core offered by the same VMware Cloud Provider). End-user and client connections to the data center services and applications originate in this Layer 3 core. The F5 Global Traffic Manager (GTM) devices are directly connected to the WAN edge devices.
In this design, the two data center sites are separated by 200 km and are connected through highly available, protected point-to-point DWDM circuits. As outlined previously, this represents a typical design scenario for a two data center solution.
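As a rough sanity check (not part of the original design discussion), the one-way propagation delay over optical fiber can be estimated at about 5 µs per km, since light travels at roughly two-thirds of its vacuum speed in glass. The sketch below applies this to the 200 km separation and compares the result against the 150 ms round-trip limit supported by vSphere 6 long-distance vMotion; the constants and function names are illustrative:

```python
# Back-of-the-envelope propagation-delay check for a long-distance vMotion link.
# Assumes ~5 microseconds of one-way delay per km of fiber; real circuits add
# queuing, serialization, and equipment latency on top of pure propagation.

FIBER_DELAY_US_PER_KM = 5      # one-way propagation delay, microseconds per km
VMOTION_MAX_RTT_MS = 150       # RTT limit supported by vSphere 6 long-distance vMotion

def rtt_ms(distance_km: float) -> float:
    """Estimated round-trip propagation delay in milliseconds."""
    return 2 * distance_km * FIBER_DELAY_US_PER_KM / 1000

def within_vmotion_limit(distance_km: float) -> bool:
    """True when the estimated RTT fits the long-distance vMotion limit."""
    return rtt_ms(distance_km) <= VMOTION_MAX_RTT_MS

print(rtt_ms(200))                # 2.0 ms for the 200 km in this design
print(within_vmotion_limit(200))  # True
```

At roughly 2 ms of round-trip propagation delay, the 200 km DWDM circuits sit far below the vMotion latency ceiling, which is why this distance is a comfortable fit for the live migration scenario described later.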
The data center interconnect solution comprises several components that operate alongside each other. The technologies that must be considered when implementing the solution include, but are not limited to, the following:
LAN extension – Given the availability of point-to-point circuits between the two sites and the hardware chosen for Rainpole.com’s design, two LAN extension options have been considered. The first leverages the Cisco virtual PortChannel (vPC) capabilities of the Nexus 7710 devices to establish an end-to-end port channel between the Nexus 7710 pairs deployed in each data center. The second introduces Overlay Transport Virtualization (OTV), a Cisco LAN extension technology, deployed across the DWDM circuits.
Routing – The data center interconnect connection between sites is to be used for both sending LAN extension traffic and for routed communications between subnets that are not stretched. As outlined in Section 10, Deploying Stretched VLANs/LAN Extensions, satisfying this requirement has design implications that depend on the specific LAN extension technology deployed.
Workload mobility – Workload mobility is the core functionality discussed in this document. Live migration using VMware long-distance vSphere vMotion is the solution validated in this context.
Storage and compute elasticity – Migrating workloads between sites raises the question of how those workloads affect the storage and compute design. If the solution is intended to facilitate disaster avoidance, sufficient compute, network, and storage resources must be available at both the on-premises data center and the remote VMware Cloud Provider Program hosting partner’s data center.
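One practical implication of the OTV option listed above is MTU sizing: OTV encapsulates Layer 2 frames in IP and, per Cisco's OTV documentation, adds 42 bytes of overhead per packet, so the DCI transport must carry the extended-VLAN payload plus that overhead. A minimal sketch of the check; the function names are illustrative, not from any vendor tool:

```python
# MTU sizing check for OTV LAN extension across the DWDM transport.
# OTV encapsulation adds 42 bytes of overhead per packet (per Cisco OTV
# documentation), so the transport IP MTU must allow for it.

OTV_OVERHEAD_BYTES = 42

def required_transport_mtu(payload_mtu: int) -> int:
    """Minimum transport MTU needed to carry a given server-side MTU over OTV."""
    return payload_mtu + OTV_OVERHEAD_BYTES

def transport_mtu_ok(payload_mtu: int, transport_mtu: int) -> bool:
    """True when the DCI transport MTU accommodates OTV-encapsulated frames."""
    return transport_mtu >= required_transport_mtu(payload_mtu)

print(required_transport_mtu(1500))   # 1542
print(transport_mtu_ok(1500, 1500))   # False: a default 1500-byte transport MTU is insufficient
print(transport_mtu_ok(1500, 9216))   # True: a jumbo-frame transport accommodates the overhead
```

In practice this is why OTV deployments typically raise the MTU on the transport links (for example, enabling jumbo frames on the DWDM-facing interfaces) rather than lowering the MTU of the extended VLANs.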