<div dir="ltr"><div dir="ltr"><div><br></div><div><div dir="ltr" class="gmail_attr">On Mon, May 17, 2021 at 7:54 PM Christopher Nielsen <<a href="mailto:mascguy@rochester.rr.com">mascguy@rochester.rr.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
> Pinning our buildbot VMs to specific NUMA nodes can result in starvation when multiple VMs assigned to a given node are all busy. That would also result in underutilization of the other node, if VMs assigned to that node are idle.
>
> Does that make sense?

What you say does make sense, but I believe what I'm suggesting is different from what you're describing. I'm not suggesting that the guests get pinned to different NUMA nodes. Instead, I'm suggesting that all of the guests get pinned to the same 6 nodes. AFAICT, the starvation you're describing occurs when some of the guests are pinned to a different set of NUMA nodes. If all of the guests are pinned to the same nodes, then in theory they should behave as if they all have equal access to the same pool of physical cores. There shouldn't be any underutilization of any particular node, because every guest has that node pinned; the hypervisor remains in control of scheduling the node's utilization among all of the guests that share it.
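
To make that concrete, here is a rough sketch of the same idea on a single Linux box, using Python's os.sched_setaffinity (Linux-only). The core numbers, the worker count, and the "sleep" command are placeholders rather than our actual configuration; on ESXi the equivalent knob would, as far as I know, be the per-VM Scheduling Affinity setting. The point is simply that every worker gets an identical core set, so the scheduler, like the hypervisor in our case, decides who runs where within that shared pool:

    import os
    import subprocess

    # One shared core set for every guest/worker. Cores 6 and 7 are left
    # out on purpose so the host always has two cores to itself.
    SHARED_CORES = {0, 1, 2, 3, 4, 5}

    def launch_pinned(cmd):
        """Start a worker and restrict it to the shared core set."""
        proc = subprocess.Popen(cmd)
        # Identical affinity for every worker: no worker "owns" a core,
        # so the kernel balances all of them across the shared pool.
        os.sched_setaffinity(proc.pid, SHARED_CORES)
        return proc

    if __name__ == "__main__":
        # Placeholder workload: ten idle workers standing in for ten VMs.
        workers = [launch_pinned(["sleep", "60"]) for _ in range(10)]
        for w in workers:
            w.wait()

Because the mask is identical for all of them, none of the workers can monopolize a core that the others cannot reach, which is the behavior I would expect from the hypervisor when every guest pins the same nodes.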

-- 
Jason Liu

On Mon, May 17, 2021 at 7:54 PM Christopher Nielsen <mascguy@rochester.rr.com> wrote:
> Well, my recommendation for our setup is to avoid pinning.
>
> Why? For several reasons:
> * Out of the box, ESX/ESXi makes a best-effort attempt to schedule all vCPUs for a given VM on a single NUMA node.
> * Even when that's not possible, the hypervisor schedules VM vCPUs to hyperthread pairs.
>
> Pinning our buildbot VMs to specific NUMA nodes can result in starvation when multiple VMs assigned to a given node are all busy. That would also result in underutilization of the other node, if VMs assigned to that node are idle.
>
> Does that make sense?
>
>> On 2021-05-17-M, at 19:34, Jason Liu <jasonliu@umich.edu> wrote:
>>
>> If the guests on a virtual server exert a heavy enough load that the virtual host cannot obtain the resources it needs, then the entire system's performance, both physical and virtual, can suffer. I'm not familiar enough with the specifics of the situation to say that this is what's happening, but if it is, CPU pinning can help. It's basically analogous to overcommitting memory to the point that the virtual host no longer has enough left for its own tasks.
>>
>> On the virtual servers I set up for my customers, I configure the hypervisor to automatically pin all but 2 cores. So for example, on an 8-core machine, I pin cores 0-5 in all of the VM settings, so that none of the guests are able to use cores 6 and 7. This effectively removes 2 cores from the pool of CPUs available to the guests, which means that the virtual host always has those 2 cores available to it. This allows my customer to run, on average, 10 guests on the virtual server simultaneously, and everything stays performant, even under heavy vCPU loads.
>>
>> The other thing I do is tell my customers never to overcommit memory. My rule of thumb is that the memory allocated to all simultaneously running guests should never exceed the amount of physical RAM minus 4 GB. So, on a server with 128 GB of RAM, the memory allocated to the guests running simultaneously should never add up to more than 124 GB.
>>
>>> On Mon, May 17, 2021 at 6:24 PM Ryan Schmidt <ryandesign@macports.org> wrote:
>>>
>>>> On May 17, 2021, at 13:13, Jason Liu wrote:
>>>>
>>>> Regarding CPU overcommitment: Are the virtual hosts doing any sort of CPU pinning? Many virtualization products have the ability to specify which of the pCPU cores a guest is allowed to use. As far as I can remember, products like KVM and ESXi can do CPU pinning, while VirtualBox cannot.
>>>
>>> Nope, nothing like that is set up. Is there any reason why we would want that?
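
P.S. To put rough numbers on the memory rule of thumb quoted above, here is a trivial check in Python; the guest sizes are made up purely for illustration and are not our actual VM allocations:

    # Rule of thumb: the memory allocated to all simultaneously running
    # guests should not exceed physical RAM minus 4 GB kept for the host.
    PHYSICAL_RAM_GB = 128
    HOST_RESERVE_GB = 4

    # Hypothetical guest allocations in GB, purely for illustration.
    guests_gb = [16, 16, 16, 16, 12, 12, 12, 12, 8, 4]

    allocated = sum(guests_gb)                   # 124 GB
    limit = PHYSICAL_RAM_GB - HOST_RESERVE_GB    # 124 GB
    print("allocated", allocated, "GB of", limit, "GB allowed:",
          "OK" if allocated <= limit else "overcommitted")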