
Experimenting with ESXi CPU affinity and Intel Hybrid CPU Cores

01.16.2024 by William Lam // 21 Comments

Debugging a recent issue with VMware Workstation and Intel Hybrid CPUs gave me an idea for an experiment to try with ESXi and Intel Hybrid CPUs.

As a refresher, starting with the Intel 12th Generation (Alder Lake) CPU, a new hybrid big.LITTLE CPU architecture was introduced for consumer Intel CPUs. This new hybrid Intel CPU architecture integrates two types of CPU cores, Performance-cores (P-cores) and Efficiency-cores (E-cores), into the same physical CPU die. For more information about this new hybrid Intel CPU design, check out this resource HERE. The ESXi scheduler does not support this new Intel Hybrid CPU architecture and there are no current plans to do so, especially as this type of architecture is not found in traditional Enterprise datacenters and is limited to Intel Consumer CPUs.

The current recommendation for working around the non-uniformity of the CPU cores is to disable either the E-cores or the P-cores within the system BIOS, thus making the system "uniform" and allowing ESXi to run like a normal x86 system. While you can apply a workaround to have ESXi ignore the non-uniformity of the CPU cores, in addition to the non-deterministic behaviors, random PSODs can also occur due to scheduling across the two different types of cores.

I was curious to see whether applying ESXi CPU affinity on a VM using Intel Hybrid CPU Cores might yield a different outcome.

I first wanted to see if I could identify which CPU cores were P-cores versus E-cores with ESXi. For my experiment, I used the same Intel NUC 13 Pro that I had used for the VMware Workstation debugging, which has an Intel Core i7-1360P (4 x P-Cores and 8 x E-Cores).

The observed behavior with VMware Workstation was that all P-cores (including the hyperthreaded cores) came first, followed by the E-cores.

  • Cores [0, 1, 2, 3, 4, 5, 6, 7] are all P-Cores (includes HT cores)
  • Cores [8, 9, 10, 11, 12, 13, 14, 15] are all E-Cores

Unlike VMware Workstation, when ESXi observes non-uniform CPU cores, HT is automatically disabled by ESXi and thus we do not get 2 x the P-Cores. To confirm whether ESXi has the same P and E-Core ordering behavior as VMware Workstation, I performed a simple test by iterating through each core and assigning it to a Windows VM to benchmark the performance using the popular CPU-Z utility (a rough PowerCLI sketch of this iteration follows the list below). From this basic test, I was able to conclude that the P-Cores were indeed ordered first, followed by the E-Cores.

  • Cores [ 0, 1, 2, 3] are all P-Cores (no HT cores)
  • Cores [4, 5, 6, 7, 8, 9, 10, 11] are all E-Cores
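
For anyone who wants to reproduce the iteration, here is a rough PowerCLI sketch of the approach (the VM name "Win10-CPUZ-Test" is an assumption, the core range matches the i7-1360P above, and the benchmark itself is still run manually with CPU-Z inside the guest):

$vm = Get-VM "Win10-CPUZ-Test"

foreach ($core in 0..11) {
    # Restrict the VM scheduling to a single physical core
    $affinitySpec = New-Object VMware.Vim.VirtualMachineAffinityInfo
    $affinitySpec.AffinitySet = @($core)

    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.cpuAffinity = $affinitySpec
    $vm.ExtensionData.ReconfigVM_Task($spec) | Out-Null

    # Run the CPU-Z benchmark in the guest and note whether the score looks like a P-Core or an E-Core
    Read-Host "VM pinned to core $core - run CPU-Z, record the score, then press Enter"
}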

Using this information, we can now create VMs that are affinitized to either P-Cores or E-Cores to ensure consistent performance and hopefully avoid any inconsistent behaviors when scheduling across different types of cores. If you are using a standalone non-managed ESXi host (i.e. no vCenter Server), you can configure CPU affinity for a VM by using the ESXi Host Client and expanding the CPU configuration section.

Here is a VM configured with 2 x P-Cores which I have affinitized to Cores 0 & 1:


Here is a VM configured with 2 x E-Cores which I have affinitized to Cores 4 & 5:


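For reference, the same scheduling affinity can also be expressed directly in the VM's VMX file via the sched.cpu.affinity entry; a minimal sketch for the P-Core VM above (assuming cores 0 and 1, and that the VMX is edited while the VM is powered off):

sched.cpu.affinity = "0,1"
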
If you have vCenter Server, it looks like CPU affinity was removed from the vSphere UI at some point, and the only way to apply CPU affinity is by using the vSphere API. Below is a quick PowerCLI snippet for applying CPU affinity to a specific VM, which might even be preferable since it allows you to easily apply the required affinity without going through the UI.

$vm = Get-VM "Win10-PCore-0-1"

# Build the affinity spec with the physical CPU cores the VM is allowed to run on
$affinitySpec = New-Object VMware.Vim.VirtualMachineAffinityInfo
$affinitySpec.AffinitySet = @(0,1)

# Apply the affinity as part of a VM reconfiguration task
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.cpuAffinity = $affinitySpec
$task = $vm.ExtensionData.ReconfigVM_Task($spec)

# Convert the returned task MoRef into a PowerCLI task object and wait for it to complete
$task1 = Get-Task -Id ("Task-$($task.value)")
$task1 | Wait-Task
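
To quickly verify that the affinity took effect, you can read the setting back from the VM's configuration via the same vSphere API (a minimal sketch, assuming the same VM name as above):

# Re-fetch the VM so ExtensionData reflects the reconfiguration
$vm = Get-VM "Win10-PCore-0-1"
$vm.ExtensionData.Config.CpuAffinity.AffinitySet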

As expected, we can see that the VM configured with 2 x P-Cores outperforms the VM configured with 2 x E-Cores.


For those looking to squeeze the most out of their hardware investment when using the new Intel Hybrid CPU Cores, there is at least an option to get consistent performance at the cost of manual CPU core assignment, which could yield some CPU inefficiencies depending on how demanding your workloads are. I am curious to hear from the community on whether this is actually a feasible option for real-world workloads, since this was a pretty basic experiment and YMMV.

More from my site

  • Quick Tip - Virtualized Intel VT-x/EPT or AMD-V/RVI is not supported on this platform for VMware Workstation
  • Quick Tip - Updating Intel ixgben driver enables Multi-gigabit (2.5gbE / 5GbE) selection in ESXi
  • Heads Up - Performance Impact with VMware Workstation on Windows 11 with Intel Hybrid CPUs
  • ESXi support for Intel iGPU with SR-IOV
  • Community Networking Driver for ESXi Fling v1.2.2

Categories // ESXi, Home Lab Tags // Intel

Comments

  1. Durdin says

    01/16/2024 at 10:32 pm

    Hi William,

    Thank you for the valuable article. I'll definitely test this out. I've already experimented with custom per-VM EVC for non-uniform ESXi setups (based on your articles), and this looks like another level to squeeze performance out of homelabs.

    Btw the vSphere documentation for 8.0 mentions the affinity shall be set through the vSphere Client, so definitely there's something wrong somewhere.

    Reply
  2. Richard May says

    01/17/2024 at 7:28 am

    I went with the 6c/12t i5-12400 when I rebuilt the lab this summer. Sidesteps the entire issue. Unfortunately, with Raptor Lake the best one can do is 4c/8t when avoiding big.LITTLE.

    Reply
  3. Zach says

    01/17/2024 at 8:40 am

    Not sure if it is still true (or if it was ever universally true across all OEMs) but I've seen reports that disabling P-Cores in BIOS still leaves 1 core enabled--you can disable all E-Cores but not all P-Cores.

    I'm not sure that the power savings of disabling P-Cores would be all that remarkable in a homelab setting (in which the machine sits idle 99% of the time and c-states will ensure everything sleeps as much as possible), hence really the only incentive to disable cores is to avoid having to pin VMs to CPUs.

    It's a shame VMware doesn't make minimal edits to ESXi to let somebody assign a CPUID mask to a VM and schedule it on a core that supports that mask. That feature already exists in the context of vMotion, so the only thing that's needed is to copy the same logic into the scheduler. It's certainly not a robust implementation and nowhere near good enough to offer formal support for big.LITTLE in ESXi, but it would be good enough to avoid having VMs crash.

    How does ESXi on ARM handle this when you run on a raspberry pi?

    Reply
    • Richard May says

      01/17/2024 at 12:14 pm

      "I'm not sure that the power savings of disabling P-Cores would be all that remarkable in a homelab setting"

      I'm skeptical about the P-core/E-core thing period. Word on the street is the P-cores, when limited to select power states, will meet or exceed E-core performance-per-watt. The E-core marketing angle has been performance-per-watt but I'm not sure the reality matches up.

      Reply
    • Sugi IT Systems of Signapore says

      01/17/2024 at 2:31 pm

      William, can you please give us honest feedback on the future of VMware? I see lots of articles about the negative things due to Broadcom.

      Reply
    • Jason (J&S Consultancy) says

      09/15/2024 at 1:20 am

      You are overthinking it. CPU affinity would fulfill your requirements. Assuming a 13900H NUC with 6P and 8E, simply set CPU affinity to 0-5 to use the P-Cores and CPU affinity 6-13 to use the E-Cores. Split the load according to your requirements. Never had a crash with any of my VMs, and I do run pretty CPU-intense VMs.

      Reply
    • Jason says

      12/13/2024 at 12:23 am

      The CPU masking feature doesn't exist only "in relation" to vMotion; if you apply the mask, it is applied regardless of vMotion, so there is a possibility that it could prevent the VM crashes. The only issue is we do not know the exact reason these VMs crash...

      Reply
  4. Dave Thurlby says

    01/17/2024 at 8:42 pm

    Will, great write up. I've been playing with this sort of setup for a while now. This actually made my cluster a ton more stable. Previously I had affinity rules for VMs to separate hosts (4 node cluster), and it seemed to stop the random PSODs. The biggest issue was Aria Operations. Whatever host had it would PSOD. This tweak stopped the PSODs. Do you ever think we will see an actual CPU scheduler for this sort of thing? I'd love to get back to a vanilla setup without the overrides.

    Reply
  5. Ben Kenobi says

    01/18/2024 at 2:54 am

    Hi William,

    Starting from the 4th Gen Xeon Scalable processor family, E & P cores are available in the Enterprise market.

    Take a look at the "Advanced Technologies" section for details:
    https://ark.intel.com/content/www/us/en/ark/products/231750/intel-xeon-platinum-8468h-processor-105m-cache-2-10-ghz.html

    Are these high-end processors not recognized correctly by the ESXi scheduler?

    Beniamino

    Reply
    • William Lam says

      01/18/2024 at 4:24 am

      Hybrid CPUs do not exist in the Datacenter (e.g. Xeon SKUs). In the future, it'll either be all P or all E cores, but not both, hence it's not an issue.

      Reply
    • Durdin says

      01/18/2024 at 6:33 am

      This has been available since 3rd gen, and unfortunately, even though it looks similar, it's a different technology, see: https://www.intel.com/content/www/us/en/support/articles/000094490/processors/intel-xeon-processors.html

      Reply
      • William Lam says

        01/18/2024 at 7:47 am

        As already mentioned, the Hybrid P/E-Cores in Intel's Consumer segment are not expected to come to Datacenter SKUs, and while there is some P/E-ness, they do NOT behave like the consumer CPUs and from an ESXi scheduler point of view, they will work like normal non-hybrid CPUs. Furthermore, I know AMD has also announced they'd be offering similar P/E-like features in their datacenter offerings, but again, they'll have consistent features, so from the ESXi scheduler's POV, they'll all look uniform and hence won't have the problems observed with the consumer Intel Hybrid CPUs.

        Reply
      • Steven says

        01/23/2024 at 11:30 am

        This is something different; low-prio and high-prio cores are basically the same architecture/capabilities with just different frequencies.

        P/E cores in fact do have different architecture/capabilities.

        Reply
  6. Simont says

    01/23/2024 at 11:34 pm

    Hi William, how do you define the high and low priority cores in the 4th and 5th gen Xeon CPUs? They seem to have different performance values. Will the ESXi code be able to see them as the same or different?

    Reply
    • William Lam says

      01/24/2024 at 12:38 pm

      As mentioned a number of times, there are NO issues with ANY of the current and future Xeon CPUs from Intel. The issues described here are ONLY found in Intel Consumer CPUs.

      Reply
  7. Alex says

    04/26/2024 at 12:24 am

    Hi William,

    Thanks for your tests and your work generally.
    To conclude, with an ESXi architecture on Intel hybrid CPUs, do you recommend using only the P-cores with HT (8 logical cores) or both P/E-cores without HT (12 logical cores)?

    Your example is clear but I'm asking how to get the most out of my processor with several machines working at the same time.

    Reply
    • William Lam says

      04/26/2024 at 6:55 am

      This is answered at the end of the article 🙂

      Reply
      • Oscar says

        08/09/2024 at 1:11 am

        Hi William,

        Is there any news regarding the support of Efficiency cores in upcoming vSphere updates? I did some research myself, but was unable to find anything regarding this.

        Wonder if you know more ...

        kind regards,
        Oscar

        Reply
        • William Lam says

          08/09/2024 at 2:50 am

          No

          Reply
  8. Jason says

    09/09/2024 at 1:32 pm

    A couple of interesting results from my limited testing.

    1. A multi vCPU VM performs better when the affinity is set to either P-Core or E-Core, but not both.

    2. I have not managed to get a PSOD when my VMs are either set with affinity to P-Cores, or affinity to E-Cores, OR I max out the vCPUs and essentially create a huge VM that has the same number of vCPUs as the host cores.

    Strictly for home lab testing, is there a way to force ESXi to enable hyperthreading on a Hybrid CPU arch?

    Reply
  9. TimVCI says

    09/16/2024 at 8:23 am

    A bit late to the game but regarding "If you have vCenter Server, it looks like CPU affinity was removed from the vSphere UI at some point"

    Scheduling Affinity is not shown when editing the settings of a VM which is in a cluster where DRS is enabled (as it prevents vMotion).

    We've had a lot of fun with this in the ICM classes over the years!

    Reply

