Buildbot Performance

Christopher Nielsen mascguy at rochester.rr.com
Mon May 17 23:53:50 UTC 2021


Well, my recommendation for our setup is to avoid pinning.

Why? For a couple of reasons:
* Out of the box, ESX/ESXi makes a best-effort attempt to schedule all of a given VM's vCPUs on a single NUMA node.
* Even when that's not possible, the hypervisor still schedules a VM's vCPUs onto hyperthread pairs.

Pinning our buildbot VMs to specific NUMA nodes can result in starvation when multiple VMs assigned to a given node are all busy. It would also leave the other node underutilized if the VMs assigned to it are idle.
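To put rough numbers on that, here is a minimal Python sketch (the topology and VM counts are hypothetical, just to illustrate the imbalance):

# Hypothetical host: two NUMA nodes with 8 pCPU cores each.
cores_per_node = 8

# Four busy builder VMs with 4 vCPUs each, all pinned to node 0,
# versus the same VMs left unpinned so the scheduler can spread them.
demand = {
    "pinned":   {0: 4 * 4, 1: 0},
    "unpinned": {0: 2 * 4, 1: 2 * 4},
}

for policy, per_node in demand.items():
    for node, vcpus in per_node.items():
        print(f"{policy}: node {node} load = {vcpus / cores_per_node:.1f}x")
# pinned:   node 0 runs at 2.0x (starved) while node 1 sits at 0.0x (idle)
# unpinned: both nodes run at 1.0x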

Does that make sense?

> On May 17, 2021, at 19:34, Jason Liu <jasonliu at umich.edu> wrote:
> 
> If the guests on a virtual server exert a heavy enough load that the virtual host cannot obtain the resources it needs, then the performance of the entire system, both physical and virtual, can suffer. I'm not familiar enough with the specifics of the situation to claim that this is what's happening, but if it is, then CPU pinning can help. It's analogous to overcommitting memory so heavily that the virtual host no longer has enough left for its own tasks.
> 
> On the virtual servers which I set up for my customers, I have the hypervisor automatically pin all but 2 cores. So for example, on an 8-core machine, I pin cores 0-5 in every VM's settings, so that none of the guests are able to use cores 6 and 7. This effectively removes 2 cores from the pool of CPUs available to the guests, which means that the virtual host always has those 2 cores available to it. This allows my customers to run, on average, 10 guests on a virtual server simultaneously, and everything stays performant, even under heavy vCPU loads.
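> As a sketch of how pinning like that could be scripted on a KVM host with the libvirt Python bindings (the guest name "guest01" and the 8-core topology here are hypothetical; ESXi exposes its own per-VM scheduling-affinity setting instead):
> 
> import libvirt
> 
> # Connect to the local KVM hypervisor.
> conn = libvirt.open("qemu:///system")
> dom = conn.lookupByName("guest01")  # hypothetical guest name
> 
> # Allow cores 0-5 only; cores 6 and 7 stay reserved for the host.
> # The cpumap holds one boolean per physical CPU on an 8-core machine.
> allowed = (True,) * 6 + (False,) * 2
> for vcpu in range(dom.maxVcpus()):
>     dom.pinVcpu(vcpu, allowed)
> 
> conn.close()
> 
> The equivalent one-off command is "virsh vcpupin guest01 0 0-5", repeated for each vCPU.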
> 
> The other thing I do is tell my customers to never overcommit on memory. My rule of thumb is that the sum of the memory allocated to all simultaneously running guests should never exceed the amount of physical RAM minus 4 GB. So, on a server with 128 GB of RAM, the memory allocated to the guests running simultaneously should never add up to more than 124 GB.
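> That budget is easy to check mechanically. Here is a minimal sketch with the libvirt Python bindings (a KVM host is assumed; the 128 GB figure is from the example above):
> 
> import libvirt
> 
> HOST_RAM_GB = 128  # physical RAM in the example above
> RESERVE_GB = 4     # headroom kept for the virtual host itself
> 
> conn = libvirt.open("qemu:///system")
> # maxMemory() reports each guest's allocation in KiB.
> allocated_gb = sum(
>     dom.maxMemory() / (1024 ** 2)
>     for dom in conn.listAllDomains()
>     if dom.isActive()
> )
> conn.close()
> 
> budget = HOST_RAM_GB - RESERVE_GB
> print(f"running guests: {allocated_gb:.1f} GB of a {budget} GB budget")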
> 
>> On Mon, May 17, 2021 at 6:24 PM Ryan Schmidt <ryandesign at macports.org> wrote:
>> 
>>> On May 17, 2021, at 13:13, Jason Liu wrote:
>>> 
>>> Regarding CPU overcommitment: Are the virtual hosts doing any sort of CPU pinning? Many virtualization products have the ability to specify which of the pCPU cores a guest is allowed to use. As far as I can remember, products like KVM and ESXi can do CPU pinning, while VirtualBox cannot.
>> 
>> Nope, nothing like that is set up. Is there any reason why we would want that?

