5.10 vNUMA
If a virtual machine has more than eight vCPUs, virtual non-uniform memory access (vNUMA) is automatically enabled (although this behavior can be modified if required). Virtual NUMA is especially useful for large, high-performance virtual machines, or for multiple virtual machines deployed together in a VMware vSphere vApp or multi-machine blueprint. With vNUMA awareness, the virtual machine's memory and processing resources are allocated based on the underlying NUMA topology, as outlined previously, even when the virtual machine spans more than one physical NUMA node. vNUMA-aware virtual machines must use at least VMware virtual machine hardware version 8 and run on vSphere 5.0 or later, on NUMA-capable hardware.
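As a minimal sketch of how this default can be modified, the following example uses the pyVmomi SDK to set the numa.vcpu.min advanced option on a virtual machine, which lowers the vCPU count at which vNUMA is exposed to the guest. The vCenter address, credentials, and virtual machine name are placeholders, and this is an illustrative approach rather than a prescribed procedure; the change takes effect at the next power cycle of the virtual machine.

    # Sketch: lower the vCPU threshold at which vNUMA is exposed by setting the
    # numa.vcpu.min advanced option on a VM via pyVmomi. Host, credentials, and
    # VM name below are assumed placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()            # lab use only
    si = SmartConnect(host="vcenter.example.com",          # assumed vCenter address
                      user="administrator@vsphere.local",
                      pwd="password",
                      sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "big-vm-01")   # assumed VM name

        # Apply the advanced setting through the VM's extraConfig options.
        spec = vim.vm.ConfigSpec()
        spec.extraConfig = [vim.option.OptionValue(key="numa.vcpu.min", value="2")]
        vm.ReconfigVM_Task(spec=spec)                       # applies at next power-on
    finally:
        Disconnect(si)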
Because the ESXi hypervisor presents a NUMA topology that closely mirrors that of the underlying physical server, a NUMA-aware guest operating system can schedule processes and allocate memory within a single NUMA node, keeping memory access local. A virtual machine's vNUMA topology mimics the topology of the host on which it first powers on. This topology does not adjust if the virtual machine migrates to a different host, which is one of the reasons why using consistent hardware building blocks is the recommended approach for cluster design.
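To confirm what topology the guest actually sees, a NUMA-aware Linux guest exposes its node layout through sysfs. The short sketch below, which assumes a Linux guest with the standard /sys/devices/system/node hierarchy, simply reports the CPUs and memory per node as presented by ESXi; it is a verification aid, not part of any VMware tooling.

    # Sketch for a Linux guest: enumerate the NUMA nodes the VM sees by reading
    # sysfs, to confirm the vNUMA topology presented by ESXi.
    import glob
    import os

    for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        name = os.path.basename(node)
        with open(os.path.join(node, "cpulist")) as f:
            cpus = f.read().strip()
        with open(os.path.join(node, "meminfo")) as f:
            mem_kb = next(line for line in f if "MemTotal" in line).split()[-2]
        print(f"{name}: CPUs {cpus}, MemTotal {mem_kb} kB")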
In previous releases of vSphere, enabling the CPU Hot Add feature on a virtual machine disabled its vNUMA functionality. However, with the release of vSphere 6.0, this is no longer the case.
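When auditing existing virtual machines for this interaction, the Hot Add settings can be read from the same VM object used in the earlier pyVmomi sketch; the two lines below are a hypothetical continuation of that example and simply report the current configuration.

    # Continuation of the earlier pyVmomi sketch: report whether Hot Add is
    # enabled on the same 'vm' object.
    print("CPU Hot Add enabled:   ", vm.config.cpuHotAddEnabled)
    print("Memory Hot Add enabled:", vm.config.memoryHotAddEnabled)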