| Setting | Initial VM Placement | Load Balancing |
|---------|----------------------|----------------|
| Manual | A placement recommendation is displayed to the administrator. | DRS makes migration recommendations but does not migrate VMs until the administrator approves them. |
| Partially Automated | Automatic placement | DRS completes initial placement automatically, but powered-on virtual machines are migrated only after the administrator approves the recommendation in vCenter Server. |
| Fully Automated | Automatic placement | DRS migrates powered-on virtual machines automatically. How aggressively it does so is governed by a migration threshold with five levels, from conservative (5 stars) to aggressive (1 star), described below. |
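As an illustration of how the automation level might be set programmatically, here is a minimal sketch using pyVmomi (the vSphere Python SDK). The vCenter hostname, credentials, and the unverified SSL context are placeholder assumptions for a lab environment; the cluster name comes from the configuration table further below.

```python
# Minimal pyVmomi sketch: enable DRS in fully automated mode.
# Host, credentials, and the lab-only SSL context are assumptions.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)

# Locate the cluster by walking the inventory with a container view.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "boston-dc-01-payload-003")
view.Destroy()

# Enable DRS with the Fully Automated default VM behavior.
spec = vim.cluster.ConfigSpecEx()
spec.drsConfig = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated)
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
```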
| Level | Stars | Description |
|-------|-------|-------------|
| Level 1 | 5 | Conservative. Migrations take place only to satisfy cluster rules or to evacuate a host entering maintenance mode. |
| Level 2 | 4 | Applies Level 1 recommendations, plus migrations that promise a significant improvement in virtual machine performance. |
| Level 3 | 3 | Applies Level 1 and 2 recommendations, plus migrations that promise a good improvement in virtual machine performance. |
| Level 4 | 2 | Applies Level 1 through 3 recommendations, plus migrations that promise a moderate improvement in virtual machine performance. |
| Level 5 | 1 | Aggressive. Applies all recommendations from Levels 1 through 4, plus migrations that promise even a slight improvement in virtual machine performance. |
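In the vSphere API, this threshold is exposed as `vmotionRate` on `ClusterDrsConfigInfo`. One caveat worth hedging: the API value appears to track the star rating rather than the level number (govc, for instance, documents its equivalent flag as 1 = aggressive, 5 = conservative). A short sketch, reusing the `cluster` object from the previous example:

```python
# Set the migration threshold on the cluster configured above.
# Assumed mapping: vmotionRate runs from 1 (aggressive) to 5 (conservative);
# 3 is the Moderate default called for in the configuration table below.
spec = vim.cluster.ConfigSpecEx()
spec.drsConfig = vim.cluster.DrsConfigInfo(enabled=True, vmotionRate=3)
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
```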
| Attribute | Configuration |
|-----------|---------------|
| Cluster Name | boston-dc-01-payload-003 |
| Number of ESXi Hosts | 24 |
| DRS | Enabled |
| Automation Level | Fully Automated |
| Migration Threshold | Moderate, Level 3 (default) |
| vSphere Distributed Power Management (DPM) | N/A |
| Enhanced vMotion Compatibility (EVC) | Disabled |
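To confirm a cluster matches this design, the settings can be read back through the same SDK. A short verification sketch, again assuming the `cluster` object from the earlier examples:

```python
# Read back the effective DRS settings and compare against the design table.
drs = cluster.configurationEx.drsConfig
print(f"DRS enabled:         {drs.enabled}")            # expect True
print(f"Automation level:    {drs.defaultVmBehavior}")  # expect fullyAutomated
print(f"Migration threshold: {drs.vmotionRate}")        # expect 3 (Moderate)
print(f"Hosts in cluster:    {len(cluster.host)}")      # design calls for 24
```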
**Use Cases**

- Redistribute CPU and/or memory load between ESXi hosts in the cluster.
- Migrate virtual machines off an ESXi host when it is placed into maintenance mode.
- Use affinity rules to keep virtual machines together on the same host, optimizing communication by ensuring host adjacency of VMs, or anti-affinity rules to separate virtual machines onto different ESXi hosts to maximize the availability of services.
- Apply anti-affinity rules to increase availability for service workloads where appropriate, such as the rare cases where applications with high-transactional I/O workloads require an anti-affinity rule to avoid an I/O bottleneck on the local host (a sketch follows this section).

**Business Benefits**

vSphere DRS collects resource usage information for all hosts and virtual machines in the cluster and migrates virtual machines in one of two situations:

- Initial placement – when a virtual machine is first powered on in the cluster, DRS places it on the most appropriate host.
- Load balancing – DRS improves resource utilization across the cluster by automatically migrating running virtual machines (through vSphere vMotion).

Configuring DRS for full automation with the default migration threshold:

- Reduces daily monitoring and management requirements.
- Provides sufficient balance without excessive migration activity.

**Design Requirements**

- All hosts in the DRS cluster must meet the vMotion migration requirements.
- Decide whether to enable Enhanced vMotion Compatibility (EVC), at the appropriate EVC level, on the hosts.
- DRS load balancing benefits from a larger number of hosts in the cluster (a scale-out cluster) rather than a smaller number.
- DRS affinity and anti-affinity rules should be the exception rather than the norm. Configuring many rules limits migration choices and can collectively degrade workload balance. An affinity rule is typically beneficial when:
  - Virtual machines on the same network exchange significant traffic, and the affinity rule localizes that traffic within the host's virtual switch, reducing load on the physical network components.
  - Applications share a large memory working set, so Transparent Page Sharing (TPS) can reduce the actual amount of memory used.
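As referenced above, here is a hypothetical sketch of creating an anti-affinity rule through pyVmomi. The VM names and rule name are illustrative assumptions; the `content` and `cluster` objects come from the first example.

```python
# Hypothetical: keep two high-transactional-I/O VMs on different hosts.
# VM names ("db-node-01", "db-node-02") and the rule name are assumptions.
vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vms = [v for v in vm_view.view if v.name in ("db-node-01", "db-node-02")]
vm_view.Destroy()

rule = vim.cluster.AntiAffinityRuleSpec(
    name="separate-db-nodes", enabled=True, vm=vms)

spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
```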