Architecting Tenant Networking with NSX in vCloud Director : Networking Layers Examined : 4.2 vCloud Director Multitenant Data Center Networking in vSphere
   
4.2 vCloud Director Multitenant Data Center Networking in vSphere
The detailed analysis in the previous section illustrated the representation of a single tenant within the vSphere layer. The following applies the same analysis to a multitenant vCloud Director data center.
Figure 15. vSphere Cloud Service Provider Multitenant Networking
 
The internet access shown in this graphic is shared across all tenants that do not require a physical firewall. The shared internet access connection is presented as a VLAN-backed port group in the vSphere dvSwitch. Each vCloud Director tenant has an external network connecting a port in this group to the internet interface of the Edge Services Gateway in their Org VDC.
Tenant 4 has elected to retain a physical internet firewall. In their case, within the physical data center infrastructure, the shared internet access is presented to the “outside” of their firewall, and a separate VLAN connects the “inside” of the firewall to a separate VLAN-backed port group within the vSphere dvSwitch. An external network connects a second port in this port group to the internet interface of the Edge Services Gateway in their Org VDC.
Each customer has a separate WAN CE router. Because the connections to their tenant environments could, therefore, have overlapping addresses (from within their vCloud Director organization, or from their WAN), each must be kept separate through the data center and into the vCloud Director managed environment. This typically means that each tenant's WAN connection is presented as a separate external network with a separate VLAN ID, and therefore requires a separate VLAN-backed port group in the vSphere dvSwitch to connect to the WAN interface of their respective Edge Services Gateway.
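The need for per-tenant separation can be illustrated with a minimal sketch using Python's standard `ipaddress` module. The tenant names, subnets, and VLAN IDs below are hypothetical examples, not values from this design; the point is only that overlapping address space forces the platform to key each external network on its VLAN ID rather than on addresses alone.

```python
from ipaddress import ip_network

# Two hypothetical tenants have each chosen the same RFC 1918 range on
# their WAN side, so IP addresses alone cannot distinguish their traffic.
tenant_wans = {
    "tenant-1": ip_network("10.1.0.0/24"),
    "tenant-2": ip_network("10.1.0.0/24"),  # overlaps tenant-1 exactly
}
assert tenant_wans["tenant-1"].overlaps(tenant_wans["tenant-2"])

# Presenting each WAN connection on its own VLAN (and therefore its own
# VLAN-backed port group) restores uniqueness: the (tenant, VLAN ID) pair
# identifies each external network even though the prefixes are identical.
external_networks = {
    ("tenant-1", 101): tenant_wans["tenant-1"],
    ("tenant-2", 102): tenant_wans["tenant-2"],
}
assert len(external_networks) == 2  # distinct despite overlapping subnets
```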
While simplified in this graphic for clarity, each tenant's Org VDC networks appear, in the same way as the Edge Services Gateway interfaces and virtual machine interfaces in Figure 14, as VXLAN-backed port groups within the dvSwitch, with ports for the Edge Services Gateway interface and any vApp virtual machines connected to each network.
While this example illustrates the separation of VLANs behind the per-tenant WAN access, in a service provider data center it is likely that at some point in the infrastructure, several VLANs of a similar type and security level will be "trunked" on a single link. When that is the case, the per-VLAN presentation between the vSphere networking layer (beneath the Provider VDC) and the data center infrastructure shown in Figure 15 is not necessary. In the same way as a physical switch, a dvSwitch uplink port can carry multiple VLANs encapsulated on the single connected link using IEEE 802.1Q tagging. To do this, when a dvSwitch uplink port group is being created, its VLAN Type is set to "VLAN trunking" as shown in the following figure. The range of VLANs allowed on the trunk can also be configured here.
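The 802.1Q encapsulation that such a trunk relies on can be sketched in a few lines of Python. This is an illustrative model of the frame format only (a 4-byte tag, TPID 0x8100 plus a 16-bit TCI, inserted after the source MAC address), not anything the dvSwitch exposes; the function names are my own.

```python
import struct
from typing import Optional

TPID = 0x8100  # IEEE 802.1Q tag protocol identifier


def add_dot1q_tag(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag into a raw Ethernet frame.

    `frame` is dst MAC (6) + src MAC (6) + EtherType (2) + payload.
    The tag is placed between the source MAC and the original EtherType.
    """
    if not 0 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be 0-4094")
    tci = (pcp << 13) | vlan_id  # priority (3 bits), DEI=0, VID (12 bits)
    tag = struct.pack("!HH", TPID, tci)
    return frame[:12] + tag + frame[12:]


def vlan_of(frame: bytes) -> Optional[int]:
    """Return the VLAN ID of a tagged frame, or None if the frame is untagged."""
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype != TPID:
        return None
    (tci,) = struct.unpack_from("!H", frame, 14)
    return tci & 0x0FFF


# Example: an untagged IPv4 frame gains a VLAN 101 tag on the trunk.
untagged = b"\xff" * 6 + b"\x00" * 6 + struct.pack("!H", 0x0800) + b"payload"
tagged = add_dot1q_tag(untagged, 101)
assert vlan_of(tagged) == 101
assert vlan_of(untagged) is None
```

Each VLAN-backed port group then corresponds to one VID value carried on the shared uplink, which is why the trunk's allowed-VLAN range in Figure 16 must cover every tenant's ID.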
Figure 16. Configuring a dvSwitch Uplink Port Group VLAN Trunk
An example of where this can be applied is shown in the following figure. In this example, the service provider presents WAN access by connecting the MPLS Provider Edge (PE) router to the vCloud Director platform. Each customer's WAN VPN VRF is presented by the PE router on a sub-interface of a trunk connection, the other end of which connects to the customer's vCloud Director tenant, terminating on its Edge Services Gateway. Similarly, multiple customer WAN CE router connections can be terminated on VLAN-tagged access ports of an "aggregation" switch, whose uplink then delivers the trunked connections to each tenant's Edge Services Gateway.
Figure 17. Trunking Multiple External Networks to a vCloud Director Environment
This technique can equally be applied to multiple, separate internet VLANs where each is presented to a separate customer firewall in the data center, or where multiple customers' co-located services are "trunked" into the vCloud Director environment over shared, high-bandwidth connections.
The number of dvSwitches used to deliver these VLAN-backed port groups depends upon a number of design considerations, one of which is the number of physical network adapters in the ESXi host. For hosts with only a single pair of adapters, all VLANs must be trunked over the same uplink port group (as shown in the range in Figure 16). In the case of a host with multiple adapters, or with adapters that can present multiple adapters to vSphere (such as some blade servers), separate dvSwitches can be created for, for example, management, internet, and WAN traffic. For ease of configuration when this is possible, a range of VLAN IDs can be pre-allocated for "WAN uplinks" and, as each tenant is onboarded, they are allocated the next ID in the range. The uplink port group is created with the specific range configured, and the port group for each new tenant's external network is configured with their specific VLAN ID from the range.
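The onboarding workflow above can be sketched as a simple allocator. This is a minimal illustration of the bookkeeping, assuming a hypothetical pre-allocated range of 1100-1199 and invented tenant names; in practice the allocated ID would be applied to the new tenant's external-network port group while the uplink port group continues to trunk the whole range.

```python
class VlanRangeAllocator:
    """Hand out the next free VLAN ID from a pre-allocated 'WAN uplink' range."""

    def __init__(self, first: int, last: int):
        self.pool = list(range(first, last + 1))  # the trunk's allowed VLAN range
        self.assigned = {}  # tenant name -> allocated VLAN ID

    def onboard(self, tenant: str) -> int:
        """Allocate a VLAN ID for a new tenant's external network (idempotent)."""
        if tenant in self.assigned:
            return self.assigned[tenant]
        if not self.pool:
            raise RuntimeError("WAN uplink VLAN range exhausted")
        vlan = self.pool.pop(0)
        self.assigned[tenant] = vlan
        return vlan


# Hypothetical example: the range 1100-1199 is trunked on the uplink port
# group; each newly onboarded tenant receives the next ID in sequence.
allocator = VlanRangeAllocator(1100, 1199)
assert allocator.onboard("tenant-a") == 1100
assert allocator.onboard("tenant-b") == 1101
assert allocator.onboard("tenant-a") == 1100  # repeat lookup, same ID
```

Tracking assignments this way also makes range exhaustion explicit, which is worth monitoring since a /48-style "grow the range later" fix requires reconfiguring the trunk's allowed-VLAN list on the uplink port group.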