5.2 vCloud Director
5.2.1 vCloud Director Cells
vCloud Director functionality is provided by stateless cells: Linux (Red Hat Enterprise Linux or CentOS) machines running the vCloud Director binaries. Each cell contains a set of services such as the transfer service, console proxy, vCenter listener, UI services, and others. Each cell usually has at least two IP addresses: a primary address for the vCloud Director user interface and API, and a secondary address for the VMware remote console proxy, because both services listen by default on TCP port 443. However, it is possible to move services to non-default ports. The cells communicate with each other through an ActiveMQ message bus on the primary interface. They also share a common vCloud Director database in which they persist configuration and state data. The transfer service requires that all cells have access to a common shared folder, usually an NFS mount.
The following are vCloud Director cell design considerations:
Use at least 2 vCPUs and 6 GB RAM for the cell VM.
Deploy at least N+1 cells, where N is the number of resource groups, or n/3000 + 1, where n is the expected number of powered-on VMs, whichever is larger.
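The sizing rule can be illustrated with a short sketch; the input values below are hypothetical and only show how the two terms are compared.

# Sketch of the cell count rule above:
# deploy max(N + 1, n // 3000 + 1) cells, where
#   N = number of resource groups and n = expected number of powered-on VMs
def recommended_cell_count(resource_groups: int, powered_on_vms: int) -> int:
    by_resource_groups = resource_groups + 1
    by_vm_count = powered_on_vms // 3000 + 1
    return max(by_resource_groups, by_vm_count)

# Hypothetical example: 3 resource groups, 10,000 powered-on VMs -> max(4, 4) = 4 cells
print(recommended_cell_count(3, 10000))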
To avoid a split-brain scenario, verify that the cells can communicate with each other through the message bus on the primary network interface.
vCloud Director starts a vCenter Server proxy (listener) for each connected vCenter Server. Distribute the vCenter Server proxies among the cells so that no cell runs more than one proxy. This can be done manually by triggering a reconnection of the vCenter Server; the new vCenter Server proxy for the reconnected vCenter Server is started on the least utilized vCloud Director cell.
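For illustration only, a reconnection can also be triggered through the vCloud API rather than the user interface; in the sketch below the host name, credentials, vCenter Server ID, API version header, and the forcevimserverreconnect action path are assumptions that should be verified against the API reference for the vCloud Director version in use.

# Hedged sketch: force a vCenter Server reconnect through the vCloud API so that
# its proxy (listener) restarts on the least utilized cell.
# ASSUMPTIONS: host, credentials, vimServer ID, the API version header (27.0 for
# vCloud Director 8.20), and the action path below; verify them before use.
import requests

VCD = "https://vcloud.example.com"  # hypothetical load balancer VIP

session = requests.Session()
session.headers["Accept"] = "application/*+xml;version=27.0"

# Log in as a system administrator; the vCloud API returns the session token
# in the x-vcloud-authorization response header.
login = session.post(f"{VCD}/api/sessions", auth=("administrator@system", "password"))
login.raise_for_status()
session.headers["x-vcloud-authorization"] = login.headers["x-vcloud-authorization"]

# Trigger the reconnect action for a registered vCenter Server (hypothetical ID).
vim_server_id = "11111111-2222-3333-4444-555555555555"
action = f"{VCD}/api/admin/extension/vimServer/{vim_server_id}/action/forcevimserverreconnect"
response = session.post(action)
response.raise_for_status()
print("Reconnect task submitted, HTTP status", response.status_code)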
Use the same consoleproxy certificate on all cells.
It is possible to steer the load-balanced traffic to a specific cell. However, the cells are not site-aware, and tasks are randomly distributed among them. VMware recommends keeping the cells at one site together with their database and transfer share, and recovering them together in a disaster recovery scenario.
Use a web application firewall to terminate vCloud HTTPS traffic at the load balancer and to apply Layer 7 firewall rules. You can filter based on URL, source IP, or authentication header to protect access to certain organizations or API calls (provider scope). The traffic between the load balancer and the cells must be HTTPS-encrypted as well.
Enable X-Forwarded-For (XFF) HTTP header insertion on the load balancer to track the source IP of requests in vCloud Director logs.
The VMware remote console proxy traffic cannot be terminated at the load balancer and must be passed through to the cells because it is a proprietary SSL socket connection. WebMKS (the native HTML5 web console, used exclusively as of vCloud Director 8.20) requires TCP port 443 on the load balancer virtual IP address.
Sticky sessions on the load balancer are recommended for performance reasons, but they are not required, because session state is cached at the cell level and also stored in the vCloud Director database.
Use a round-robin or least-connection load-balancing algorithm to distribute the load.
Use the following load balancer health checks for the cell pool:
GET http://<cell_HTTP_IP>/api/server_status (expected response is “Service is up”).
GET https://<cell_consoleproxy_IP> (expected response 200) or a simple TCP 443 check.
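A minimal sketch of these two probes follows; the addresses are placeholders, and a production load balancer would implement the checks natively rather than through a script.

# Sketch of the two health checks listed above (placeholder addresses).
# The HTTP check expects the body "Service is up"; the console proxy check only
# verifies that TCP 443 accepts a connection.
import socket
import urllib.request

def http_check(cell_http_ip: str) -> bool:
    # Matches the URL given above; adjust the scheme and port to your deployment.
    url = f"http://{cell_http_ip}/api/server_status"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return "Service is up" in resp.read().decode("utf-8", errors="replace")

def consoleproxy_check(consoleproxy_ip: str) -> bool:
    try:
        with socket.create_connection((consoleproxy_ip, 443), timeout=5):
            return True
    except OSError:
        return False

print(http_check("192.0.2.10"), consoleproxy_check("192.0.2.11"))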
After installing the first vCloud Director cell, back up the certificates and the $VCLOUD_HOME/etc/responses.properties file, which contains all of the information needed to deploy additional cells (for example, the database password).
Verify that the cell transfer share is accessible to all cells and that the Linux vcloud user has write permissions. The transfer share must be large enough to store all concurrent OVF or ISO imports, exports, and migrations between resource groups (for example, 10 concurrent 50 GB transfers require up to 500 GB of transfer share capacity). If catalog publishing with early catalog export is used, extend the transfer share capacity by the size of the exported catalog.
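The capacity rule can be expressed as a short sketch; the numbers below are hypothetical and mirror the example above.

# Sketch of the transfer share sizing rule: concurrent transfers multiplied by the
# transfer size, plus the size of any exported catalog (all values hypothetical).
def transfer_share_gb(concurrent_transfers: int, transfer_size_gb: float,
                      exported_catalog_gb: float = 0.0) -> float:
    return concurrent_transfers * transfer_size_gb + exported_catalog_gb

# 10 concurrent 50 GB transfers -> 500 GB, as in the example above
print(transfer_share_gb(10, 50))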
Redirect vCloud Director logs to an external syslog server by editing the $VCLOUD_HOME/etc/log4j.properties file or by installing the vRealize Log Insight agent on the cell.
For large environments, deploy additional vCloud Director cells (scale-out approach). To scale up a single cell, you can increase its vCPU count and memory, the JVM heap size in $VCLOUD_HOME/bin/vmware-vcd-cell, and the database connection pool and Jetty thread settings in $VCLOUD_HOME/etc/global.properties. See the following table.
Table 2. vCloud Director 8.20 Cell Performance Tweaks
Attribute | Location | Default Value | Recommended Value for Large Environments
Cell vCPU | Cell VM | 2 vCPU | 4 vCPU
Cell Memory | Cell VM | 6 GB RAM | 12 GB RAM
JVM Heap Size | $VCLOUD_HOME/bin/vmware-vcd-cell | JAVA_OPTS: -Xms1024M -Xmx4096M | JAVA_OPTS: -Xms2048M -Xmx8192M
Database.pool.maxActive | $VCLOUD_HOME/etc/global.properties | 75 | 200
vcloud.http.maxThreads | $VCLOUD_HOME/etc/global.properties | 128 | 200
vcloud.http.minThreads | $VCLOUD_HOME/etc/global.properties | 25 | 32
vcloud.http.acceptorThreads | $VCLOUD_HOME/etc/global.properties | 2 | 16
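As a closing illustration, the following sketch reads the Table 2 tuning keys from a cell's global.properties and compares them with the recommended values listed above; the parser is simplified, the path assumes the default $VCLOUD_HOME of /opt/vmware/vcloud-director, and the recommended values are the ones from Table 2 rather than universal targets.

# Sketch: compare the Table 2 tuning keys in global.properties with the
# recommended values for large environments (simplified .properties parser;
# comments and continuation lines are not handled).
RECOMMENDED = {
    "Database.pool.maxActive": "200",
    "vcloud.http.maxThreads": "200",
    "vcloud.http.minThreads": "32",
    "vcloud.http.acceptorThreads": "16",
}

def read_properties(path: str) -> dict:
    props = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    return props

current = read_properties("/opt/vmware/vcloud-director/etc/global.properties")
for key, recommended in RECOMMENDED.items():
    print(f"{key}: current={current.get(key, '<default>')} recommended={recommended}")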