When we started this series we talked about the impact of the physical CPU in your VDI host. Today we are going to look at something a little more flexible, a software-defined parameter: the number of vCPUs allocated to each desktop VM. And of course bigger is better, right? Well, that depends. It is probably always better for the hardware vendor, but for you, your users, and your bottom line it may not be ideal, because density per host drops significantly if you don't want to sacrifice performance. Read on to find out why.
How Does Hypervisor Scheduling Work?
The hypervisor works to give each virtual machine time on the CPU based on its share allocation and reservation. With solutions like ESX you can guarantee a VM a certain amount of compute resources, but with virtual desktops it usually makes sense to give each VM equal shares without priority, because per-VM reservations quickly become too cumbersome to manage. For a 2 vCPU or 4 vCPU virtual machine, the hypervisor must schedule the VM at a time when 2 or 4 physical cores, respectively, are available simultaneously. If that many cores aren't free at the same moment, the VM sits in a CPU wait state. A good way to see whether existing hosts are affected by this is to look at the CPU Ready and Co-stop (CSTP) counters.

Virtualization offers better utilization of the hardware, with a small amount of overhead consumed by the hypervisor for scheduling and managing resources. When you overcommit virtual resources against physical resources, those efficiencies can be dramatically reduced, to the point where, in the worst case, the host is essentially thrashing. In server virtualization the latency induced by this overcommitment may be acceptable because nothing complains about the extra delay. In a VDI environment, however, where that latency shows up as a sluggish experience or a jittery display, it won't be long before your support phone is ringing.
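As a rough sketch of how to interpret that counter: vSphere reports CPU Ready as milliseconds of ready time per sampling interval, which you can convert to a percentage. The 20-second interval below matches the real-time stats view; adjust it for your collection level, and treat the warning threshold as a starting point rather than a hard rule.

```python
# Sketch: convert a CPU Ready summation counter (milliseconds of ready
# time accumulated during one sampling interval) into a percentage.
# Sustained values of a few percent per vCPU usually signal contention.

def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0) -> float:
    """CPU Ready % = (ready time in ms / interval length in ms) * 100."""
    return (ready_ms / (interval_s * 1000.0)) * 100.0

# Example: 2,000 ms of ready time in a 20-second real-time sample.
print(cpu_ready_percent(2000))  # 10.0 -> a clear sign of CPU contention
```

The same arithmetic applies to Co-stop time if your monitoring tool exposes it as a per-interval summation.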
For users who truly require a 2 vCPU desktop, it is important to provide one, because it gives them more compute headroom with lower latency. However, be realistic: you can't pack the same number of concurrent desktops onto each physical host. In fact, our testing shows roughly a 33% penalty for using 2 vCPUs as opposed to a single vCPU, meaning you need roughly 33% more vBOXes or compute nodes to support the same number of desktops. Determining whether your users need multiple vCPUs can only be done through testing, or by using known information about how their applications perform in a virtual desktop environment and how well they exploit a multi-threaded, multi-core environment.
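To make that penalty concrete, here is a hypothetical back-of-the-envelope calculator. The baseline density of 120 desktops per node is an illustrative placeholder, not a tested figure; only the ~33% node-count penalty comes from our testing.

```python
import math

# Hypothetical sizing helper: apply the ~33% node-count penalty observed
# when moving desktops from 1 vCPU to 2 vCPUs. The baseline density is an
# illustrative placeholder -- substitute your own tested number.

def nodes_needed(total_desktops: int, desktops_per_node_1vcpu: int,
                 node_penalty: float = 0.33) -> int:
    """Compute nodes required, rounded up to whole nodes."""
    base_nodes = total_desktops / desktops_per_node_1vcpu
    return math.ceil(base_nodes * (1.0 + node_penalty))

# Example: 1,000 desktops at an assumed baseline of 120 per node.
print(nodes_needed(1000, 120, node_penalty=0.0))  # 9 nodes at 1 vCPU
print(nodes_needed(1000, 120))                    # 12 nodes at 2 vCPUs
```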
With Regard to vBOX
Whether you use 1, 2, 4, or 6 vCPUs doesn't matter much for the hardware selection in vBOX. What it does factor into is how many vBOXes you will need and how much RAM you should configure in each one. The rule of thumb holds true in all cases, and for a 2 vCPU environment we would expect roughly 85 virtual desktops per vBOX, assuming the users are running computationally intense, multithreaded applications.
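A minimal RAM-sizing sketch for a single vBOX follows. The per-desktop allocation and the hypervisor reserve are illustrative assumptions, not vendor guidance; only the ~85-desktop density comes from the rule of thumb above.

```python
# Hypothetical back-of-the-envelope RAM sizing for one vBOX. The 4 GB
# per desktop and 8 GB hypervisor reserve are placeholder assumptions --
# substitute values from your own desktop image and platform.

def ram_per_node_gb(desktops: int, gb_per_desktop: float,
                    hypervisor_reserve_gb: float = 8.0) -> float:
    """Total RAM to configure in a node, in GB."""
    return desktops * gb_per_desktop + hypervisor_reserve_gb

# ~85 two-vCPU desktops at an assumed 4 GB each:
print(ram_per_node_gb(85, 4.0))  # 348.0 GB
```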