5.5.2 Network Pools
Network pools contain network definitions used to instantiate private or routed organization and vApp networks. Networks created from network pools must be isolated at Layer 2.
The following types of network pools are available:
*vSphere port group-backed network pools are backed by pre-provisioned port groups, distributed port groups, or third-party distributed switch port groups.
*Virtual eXtensible LAN (VXLAN) network pools use Layer 2 over Layer 3 (MAC-in-UDP) encapsulation to provide scalable, standards-based traffic isolation across Layer 3 boundaries (requires distributed switch).
*VLAN-backed network pools are backed by a range of pre-provisioned VLAN IDs. For this arrangement, all specified VLANs are trunked into the vCloud environment (requires distributed switch).
*vCloud Director Network Isolation-backed (VCD-NI) network pools are backed by vCloud isolated networks. A vCloud isolated network is an overlay network, uniquely identified by a fence ID, that is implemented through host-spanning encapsulation techniques and provides traffic isolation from other networks (requires distributed switch).
The following table compares the options for a network pool.
Table 12. Network Pool Options

vSphere Port Group-Backed
*How it works: Isolated port groups must be created and must exist on all hosts in the cluster.
*Advantages: N/A.
*Disadvantages: Requires manual creation and management of port groups; it is possible to use a port group that is in fact not isolated.

VXLAN-Backed
*How it works: A multicast address is mapped to a VXLAN segment ID for isolation. Virtual machine to virtual machine traffic is tunneled over a Layer 3 network by a VTEP (the ESXi hosts). Node learning is done through multicast, not broadcast.
*Advantages: Does not rely on VLAN IDs for isolation; works over any Layer 3 multicast-enabled network; no “distance” restrictions, managed by multicast radius.
*Disadvantages: End-to-end multicast is required.

VLAN-Backed
*How it works: Uses a range of available VLANs dedicated to vCloud; network isolation relies on inherent VLAN isolation.
*Advantages: Best network performance; vCloud Director creates port groups as needed.
*Disadvantages: VLANs are a limited commodity (4096 maximum); the VLANs used must be configured on all associated physical switches; scoped to a single virtual datacenter and vCenter Server.

vCloud Network Isolation-Backed (VCD-NI)
*How it works: Creates an overlay network (identified by a fence ID) within a shared transport network.
*Advantages: Scalable to thousands of networks per transport network; more secure than the VLAN-backed option due to vCloud Director enforcement; vCloud Director creates port groups as needed.
*Disadvantages: Overhead is required to perform encapsulation.
 

5.5.2.1. vSphere Port Group-Backed Considerations
*Use standard or distributed virtual switches.
*vCloud Director does not automatically create port groups. Manually provision port groups for vCloud Director to use ahead of time.
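As an illustration of pre-provisioning, the following Python sketch uses pyVmomi to create an isolated distributed port group that a vSphere port group-backed network pool could then consume. The vCenter address, credentials, switch name, port group name, and VLAN ID are placeholder assumptions, not values from this document.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the distributed switch that will back the network pool.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "vcloud-dvs")   # placeholder name
view.Destroy()

# One port group per isolated network, pinned to an otherwise unused VLAN ID
# so that the network is isolated at Layer 2.
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.name = "vcd-isolated-net-101"                           # placeholder name
spec.type = vim.dvs.DistributedVirtualPortgroup.PortgroupType.earlyBinding
spec.numPorts = 32
port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_config.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
    vlanId=101, inherited=False)                             # placeholder VLAN
spec.defaultPortConfig = port_config

dvs.AddDVPortgroup_Task([spec])   # wait for the task, then add the port group
                                  # to the network pool in vCloud Director
Disconnect(si)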
5.5.2.2. VXLAN-Backed Considerations
*Distributed switches are required.
*Configure an MTU of at least 1600 bytes on the ESXi hosts (distributed switch) and on the physical switches to avoid IP fragmentation (see the overhead sketch after this list).
*Size the guest MTU so that frames still fit within the transport MTU after the VXLAN header is inserted at the ESXi level.
*Use explicit failover or “route based on IP hash” as the load balancing policy.
*If the VXLAN transport traverses routers, multicast routing must be enabled (BIDIR-PIM or PIM-SM).
*More multicast groups are better, because the fewer VXLAN segments that share a multicast group, the less unnecessary traffic is delivered to VTEPs.
*Multiple segments can be mapped to a single multicast group.
*If VXLAN transport is contained to a single subnet, IGMP Querier must be enabled on the physical switches.
*Use BIDIR-PIM where available so any sender can be a receiver as well. If BIDIR-PIM is not available, use PIM-SM.
*If VXLAN traffic is traversing a router, enable proxy ARP on the first hop router.
*Use five-tuple hash distribution for uplink and interswitch LACP.
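The 1600-byte recommendation follows directly from the size of the VXLAN encapsulation headers. The following Python sketch is a back-of-the-envelope check (header sizes per RFC 7348; 1600 simply leaves headroom above the strict minimum):

INNER_FRAME = 1500    # default guest Ethernet MTU
OUTER_ETHERNET = 14   # outer Ethernet header (18 if the transport VLAN is tagged)
OUTER_IP = 20         # outer IPv4 header
OUTER_UDP = 8         # outer UDP header
VXLAN_HEADER = 8      # VXLAN header carrying the 24-bit segment ID

minimum_mtu = INNER_FRAME + OUTER_ETHERNET + OUTER_IP + OUTER_UDP + VXLAN_HEADER
print(f"Minimum transport MTU without fragmentation: {minimum_mtu} bytes")  # 1550
print("Recommended transport MTU: 1600 bytes (headroom above the minimum)")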
5.5.2.3. VLAN-Backed Considerations
*Distributed switches are required.
*vCloud Director creates port groups automatically as needed.
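Because VLAN IDs are a limited commodity (see Table 12), it can help to validate a candidate VLAN range before dedicating it to a VLAN-backed pool. The following sketch is a hypothetical planning helper, not part of vCloud Director or the vSphere API; the ranges shown are examples only.

def validate_vlan_range(start, end, reserved_ranges):
    """Check that a candidate VLAN range for a VLAN-backed network pool is
    usable (1-4094) and does not overlap ranges already reserved for other
    pools or for infrastructure traffic (management, vMotion, storage)."""
    if not (1 <= start <= end <= 4094):
        raise ValueError("VLAN IDs must fall within 1-4094")
    for r_start, r_end in reserved_ranges:
        if start <= r_end and r_start <= end:
            raise ValueError(
                f"Range {start}-{end} overlaps reserved range {r_start}-{r_end}")

# Example: VLANs 100-199 already carry infrastructure traffic.
validate_vlan_range(200, 299, reserved_ranges=[(100, 199)])
print("VLAN range 200-299 can be dedicated to the network pool")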
5.5.2.4. vCloud Network Isolation-Backed Considerations
*Distributed switches are required.
*Increase the MTU of all network devices in the transport VLAN, including all physical switches and vSphere Distributed Switches, to at least 1600 bytes to accommodate the additional encapsulation information needed for VCD-NI. Failure to increase the MTU causes packet fragmentation, which negatively affects the network throughput of vCloud workloads (a distributed switch MTU sketch follows this list).
*Specify a VLAN ID for the VCD-NI transport network (optional, but recommended for security). If no VLAN ID is specified, it defaults to VLAN 0.
*The maximum number of VCD-NI-backed network pools per vCloud instance is 10.
*vCloud Director automatically creates port groups on distributed switches as needed.
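As noted above, the MTU increase applies to the vSphere Distributed Switch as well as the physical switches. The following pyVmomi sketch raises the distributed switch MTU to 1600 bytes; the vCenter address, credentials, and switch name are placeholders, and the physical switches must be adjusted separately.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the distributed switch that carries the VCD-NI transport network.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "vcloud-dvs")   # placeholder name
view.Destroy()

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion   # required for any reconfiguration
spec.maxMtu = 1600                              # accommodates the VCD-NI encapsulation
dvs.ReconfigureDvs_Task(spec)                   # wait for the task in real code
Disconnect(si)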