5.6 Determining Host CPU and Memory Requirements
Compute sizing has two dimensions: processing requirements and memory requirements. On a dynamic cloud platform, empirical data on CPU and memory demand is rarely available at design time. Instead, sizing is typically based on the anticipated workloads that will run on the infrastructure.
Typically, CPU and memory requirements are defined during the project requirements analysis, with specific metrics based on anticipated workloads and the expected growth over the lifecycle of the platform. From this information, the architect can determine the aggregate CPU and memory requirements. When designing an environment, address current requirements, but also design a solution that allows the environment to grow without re-architecting the platform each time capacity must be added.
The processing capability of each compute node can be estimated by multiplying the number of cores by the clock speed of the processors. For sizing purposes, plan for no more than 80 percent utilization of that processing capability.
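To make this arithmetic concrete, the following Python sketch estimates usable host capacity; the host specification and the 80 percent ceiling are example assumptions for illustration, not recommendations.

# Illustrative CPU sizing sketch; all host values are assumptions.
cores_per_socket = 12        # assumed cores per physical CPU
sockets_per_host = 2         # assumed sockets per compute node
core_speed_ghz = 2.6         # assumed clock speed per core
max_cpu_utilization = 0.80   # plan for no more than 80 percent utilization

raw_ghz = cores_per_socket * sockets_per_host * core_speed_ghz
usable_ghz = raw_ghz * max_cpu_utilization
print(f"Raw: {raw_ghz:.1f} GHz, usable: {usable_ghz:.1f} GHz per host")

With these example values, each host offers 62.4 GHz of raw capacity and roughly 49.9 GHz of plannable capacity.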
To fully utilize the processing capabilities that the compute nodes offer, systems must be configured with sufficient memory. In recent years, memory costs have fallen and the memory density of server hardware has increased. With this extended memory capability, as much as 768 GB is available on a single half-size blade, and rackmount servers can offer in excess of 1 TB of memory (at the time of this writing). As a result, the balancing point between CPU and memory capacity might not favor multi-core CPUs as strongly as it once did.
After the total memory requirement, established during the planning process, has been calculated, divide it by the memory per host to determine the number of compute nodes required. For this calculation, the hypervisor's own memory overhead can usually be ignored; it is minimal relative to the available memory and the efficiency of consolidation.
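As a minimal sketch of that division, assuming a hypothetical aggregate requirement of 20,000 GB and 768 GB hosts (both illustrative values, not derived from any real assessment):

import math

# Example inputs only; real values come from the requirements analysis.
total_memory_req_gb = 20000   # assumed aggregate workload memory
memory_per_host_gb = 768      # assumed memory per compute node

# Hypervisor overhead is ignored, as discussed above.
hosts_required = math.ceil(total_memory_req_gb / memory_per_host_gb)
print(f"Compute nodes required for memory: {hosts_required}")

Rounding up with math.ceil reflects that a fractional host must be provisioned as a whole node; these example values yield 27 hosts.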
The final predicted values take into account not only the aggregate CPU and memory requirements, but also the service provider's desired maximum utilization thresholds, anticipated growth factors (as expressed by business stakeholders), and any expected benefit from virtualization technologies, such as Transparent Page Sharing (TPS), if enabled.
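The sketch below layers these factors onto the memory figure from the previous example; the growth factor, lifecycle, utilization threshold, and TPS savings are all assumed values for illustration, with TPS savings held at zero in line with the guidance that follows.

import math

# All inputs are illustrative assumptions.
aggregate_memory_gb = 20000   # current aggregate requirement
annual_growth = 0.20          # growth factor from business stakeholders
lifecycle_years = 3           # expected platform lifecycle
max_utilization = 0.80        # provider's maximum utilization threshold
tps_savings = 0.0             # keep low or zero; see the TPS discussion below
memory_per_host_gb = 768

projected_gb = aggregate_memory_gb * (1 + annual_growth) ** lifecycle_years
effective_gb = projected_gb * (1 - tps_savings)
usable_per_host_gb = memory_per_host_gb * max_utilization
hosts_required = math.ceil(effective_gb / usable_per_host_gb)
print(f"Projected: {projected_gb:,.0f} GB, hosts required: {hosts_required}")

With these values, the 20,000 GB requirement grows to 34,560 GB over three years, requiring 57 hosts at an 80 percent memory utilization ceiling.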
The use of TPS in sizing calculations highlights a relatively new design consideration facing the architect. An important shift in VMware policy on Transparent Page Sharing must be considered: as of the Q4 2014 patches for ESXi 5.5, 5.1, and 5.0, and in all subsequent versions of the core hypervisor, TPS between virtual machines is disabled by default. For further details, see the VMware knowledge base article, Additional Transparent Page Sharing management capabilities in ESXi 5.5, 5.1, and 5.0 patches in Q4, 2014 (2091682) at http://kb.vmware.com/kb/2091682.
Also, consider that if TPS is enabled as part of the platform design, the estimated savings from memory sharing must be kept intentionally low where the guest operating systems are 64-bit, because large memory pages will be used, which reduces the opportunity for page sharing. For more details, see the VMware knowledge base articles, Transparent Page Sharing (TPS) in hardware MMU systems (1021095) at http://kb.vmware.com/kb/1021095 and Use of large pages can cause memory to be fully allocated (1021896) at http://kb.vmware.com/kb/1021896.
Finally, when sizing the platform, document all assumptions made in calculating the service provider's requirements. Because these figures are, in most cases, estimates based on theoretical workloads, it is important that the business stakeholders accept the methodology employed and understand the confidence level of the information used to assess the compute platform requirements.