
ESXi on Minisforum MS-01

02.22.2024 by William Lam // 43 Comments

In recent years, a number of new players have entered the mini PC market and have really been pushing the boundaries of small form factor systems. Minisforum is one such company: founded in 2018, it has been steadily producing more interesting kits to compete with some of the more established vendors in this space.

Early on, the kits from Minisforum were pretty comparable in compute, network and storage capabilities to those from other vendors using the popular 4x4 design pioneered by Intel with their Intel NUC platform. With each new generation of mini PCs from Minisforum, the chassis aesthetics became more unique and their offerings more differentiated, including broader CPU choices such as some of the latest AMD desktop and mobile processors.

Even I was intrigued by some of Minisforum's offerings from a VMware perspective, but unfortunately Minisforum had no interest in collaborating when I reached out a while back. Over the years, I stayed informed of new releases from Minisforum, but nothing really stood out to me as much as their recent announcement of the Minisforum Workstation MS-01.


UPDATE (03/05/2024) - SimplyNUC has just launched the Onyx Pro, which is a rebrand of the Minisforum MS-01, so this review also applies to the SimplyNUC Onyx Pro.

The VMware Community also agreed, because when the MS-01 was announced in early January of this year, numerous folks reached out asking for my thoughts on the MS-01. I shared some initial impressions in this Twitter/X thread based on the vendor's website, without actually getting hands-on with the system.

At the end of January, I learned that fellow VMware colleague Alex Fink had purchased several MS-01 units to set up a VMware Cloud Foundation (VCF) environment, and he kindly offered to let me borrow one of the units for 24 hours to get some quick hands-on time. Long story short, here is a detailed review of running ESXi on the Minisforum MS-01, with a big thanks to Alex for contributing back to our community! 🥳

Compute


There are three CPU options to choose from for the MS-01: an Intel 13th Generation i9 (Raptor Lake), or an Intel 12th Generation i9 or i5 (Alder Lake) processor.

  • Intel i9-13900H (6P + 8E)
  • Intel i9-12900H (6P + 8E)
  • Intel i5-12450H (4P + 4E)

Since the MS-01 uses the new Intel Hybrid CPU architecture, which integrates two types of CPU cores, Performance-cores (P-cores) and Efficiency-cores (E-cores), into the same physical CPU die, there are some updated options for those looking to run ESXi; you can find more details in the ESXi section at the bottom of this blog post.

For memory, the MS-01 has two DDR5 SODIMM slots; DDR4 SODIMMs are not compatible. Capacity-wise, only the Intel i9-13900H processor is listed as officially supporting 96GB of memory, which is only possible when using the new non-binary 48GB DDR5 SODIMMs. I was able to confirm this using my own Mushkin 2 x 48GB DDR5 memory kit.
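
If you want to sanity-check that the full 96GB is detected once ESXi is installed, a quick look from the ESXi Shell (nothing MS-01 specific here) is:

# Show the amount of physical memory ESXi has detected
esxcli hardware memory get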

The other two Intel 12th Generation CPUs are only listed as supporting a maximum of 64GB (2 x 32GB) of memory, but if I had to guess, they could probably handle more, as what is officially listed by Intel does NOT always reflect what actually works. In fact, this is a good reminder that while Intel NUCs only recently started to officially support 64GB, it had been possible several years earlier, as I had demonstrated.

Network


The MS-01 comes with an impressive four onboard network adaptors: an Intel I225-V (2.5GbE), an Intel I225-LM (2.5GbE) and two Intel X710 SFP+ (10GbE), all of which are fully recognized by ESXi, as you can see from the screenshot below. Multiple 2.5GbE ports are not an uncommon configuration for a small form factor system, but combining that with dual 10GbE connectivity is definitely a nice touch by Minisforum and certainly a first of its kind. For those interested in deploying vSAN (OSA or ESA) or NSX with VCF, you not only have the connectivity but also the additional bandwidth to run some serious workloads without being limited by networking.
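
You can also confirm all four NICs from the command line; a quick check from the ESXi Shell (vmnic numbering may differ on your unit):

# List all recognized physical NICs along with their drivers and link state
esxcli network nic list
# Expect four entries: the two onboard 2.5GbE (I225) and the two X710 10GbE ports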


If for some reason you are not satisfied with the onboard networking, you can certainly add more capacity by using the two Thunderbolt 4 ports with one of these Thunderbolt 10GbE solutions for ESXi. You can also add USB-based networking by using the popular USB Network Native Driver for ESXi Fling, as shown in the sketch below.
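
As a sketch of what installing the USB Fling looks like (the component zip filename below is illustrative; grab the actual offline bundle from the Fling site), you copy the bundle to the host, apply it as a component and reboot:

# Apply the USB Network Native Driver Fling offline bundle (filename is illustrative)
esxcli software component apply -d /vmfs/volumes/datastore1/ESXi8-VMKUSB-NIC-FLING-component.zip
# A reboot is required before the USB NICs show up
reboot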

Storage


The MS-01 is capable of running 3 x NVMe storage devices, and what is really unique about the MS-01 is that it supports two different storage configurations.

  • Configuration 1 - All M.2 SSDs
    • 1 x PCIe Gen 3 M.2 SSD (2280/22110)
    • 1 x PCIe Gen 3 M.2 SSD (2280/22110)
    • 1 x PCIe Gen 4 M.2 SSD (2280)
  • Configuration 2 - M.2 + U.2 SSDs
    • 1 x PCIe Gen 3 M.2 SSD (2280/22110)
    • 1 x PCIe Gen 3 M.2 SSD (2280/22110)
    • 1 x PCIe Gen 4 U.2 SSD (7mm ONLY)

The ability to add a U.2 SSD is really slick because it enables the use of NVMe namespaces on U.2 SSDs that support them, like the Samsung PM9A3, allowing users to carve up a single SSD for multiple purposes including ESXi OS-Data, VMFS volumes and vSAN!
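
As an example of poking at this from the ESXi Shell, you can list the NVMe controllers and the namespaces behind them (vmhba2 below is just a placeholder for whichever adapter your U.2 SSD shows up as):

# List the NVMe controllers ESXi has discovered
esxcli nvme device list
# List the namespaces behind a specific controller (vmhba2 is a placeholder)
esxcli nvme device namespace list -A vmhba2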


The MS-01 also includes a U.2 to M.2 adaptor (pictured above), which needs to be plugged into the far-left M.2 slot if you wish to make use of a U.2 SSD.

***One VERY important thing to note is that there is a physical toggle/switch located in the upper left (pictured above) that controls the amount of power to the far-left M.2 slot. As you can see from the open chassis picture above, there is also a giant warning sticker right above the toggle/switch warning users that if the switch is on the incorrect setting (e.g. the U.2 toggle on with an M.2 SSD installed), it can potentially damage your M.2 SSD. Make sure to triple-check that you have the correct setting, and do not accidentally change it while installing your M.2 or U.2 SSD!

IO Expansion


Another neat thing about the MS-01 is the additional IO expansion that is available via a single half-height, low-profile PCIe 4.0 x8 slot, which can provide more IO (network or storage) or graphics capabilities and can support up to an NVIDIA RTX A2000 Mobile GPU. If you are interested in seeing what other IO devices have been tested, check out this Serve The Home forum post that is cataloging what folks have tried with the MS-01.

Form Factor


The size of the MS-01 is pretty impressive given all the capabilities this kit includes! The form factor reminds me a lot of the Lenovo ThinkStation P3 Tiny; it would not surprise me if Minisforum were inspired by or borrowed from that design, especially with the quick-release latch to slide out the internal chassis without requiring any tools. Pictured above is the MS-01 stacked on top of my Supermicro E200-8D, and as you can see, it is slightly taller but shorter in length, which actually surprised me. The full dimensions of the MS-01 are 196×189×48 mm.

Security

The TPM (Trusted Platform Module) included in the MS-01 is an fTPM that only supports the CRB (Command-Response Buffer) protocol and not the industry-standard FIFO (First In, First Out) protocol, which is required for ESXi to officially support the TPM.

Graphics


Depending on the CPU that you select for the MS-01, you will have access to either Intel Xe or Intel UHD integrated graphics (iGPU), providing up to 96 or 48 execution units (EU) respectively, both of which can be passed through to an Ubuntu Linux VM.

Note: iGPU passthrough to a Windows VM will NOT work due to lack of Intel driver support as shared in this detailed blog post.

Below are the high-level instructions for setting up iGPU passthrough to a VM.

Step 1 - Create and install an Ubuntu Server 22.04 VM (I recommend using 60GB of storage or more, as additional packages will need to be installed) or an Ubuntu Server 23.04 VM, where the i915 drivers are already incorporated as part of the distribution. Once the OS has been installed, go ahead and shut down the VM.

Step 2 - Enable passthrough of the iGPU under the ESXi Configure->Hardware->PCI Devices settings, then add a new PCI Device to the VM and select the iGPU. You can use either DirectPath IO or Dynamic DirectPath IO; it does not make a difference.
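
To identify the iGPU's PCI address on the host before toggling passthrough, one quick way from the ESXi Shell is:

# The Intel iGPU typically shows up as a Display controller at 0000:00:02.0
lspci | grep -iE 'display|vga'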

Step 3 - Optionally, if you wish to disable the default virtual graphics driver (svga), edit the VM and, under VM Options->Advanced->Configuration Parameters, change the following setting from true to false:

svga.present
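
If you prefer to edit the VM's .vmx file directly (with the VM powered off), the resulting entry would look like this:

svga.present = "FALSE"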

Step 4 - Power on the VM and then follow these instructions for installing the Intel graphics drivers for Ubuntu 22.04. Once completed, you will be able to successfully use the iGPU from within the Ubuntu VM, as shown in the screenshot above.
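
Once the Ubuntu VM is back up, a quick sanity check (standard Linux tooling, nothing MS-01 specific) confirms the iGPU is visible and the i915 driver has bound to it:

# The Intel iGPU should be listed as a VGA/Display device inside the guest
lspci -nn | grep -i vga
# A card/render node (e.g. card0, renderD128) appears once the i915 driver binds
ls /dev/dri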

ESXi


As expected, the latest release of ESXi 8.0 Update 2 installs fine on the MS-01 without any issues; no additional drivers are required, as the Community Networking Driver for ESXi has been productized as part of the ESXi 8.0 release. If you want to install ESXi 7.x, you will need to use the Community Networking Driver for ESXi Fling for it to recognize the onboard network devices.
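
For ESXi 7.x, the Fling ships as an offline component bundle; as a hedged sketch (filename illustrative), it can either be injected into a custom installer image or applied to an existing install:

# Apply the Community Networking Driver component on ESXi 7.x (filename is illustrative)
esxcli software component apply -d /vmfs/volumes/datastore1/Net-Community-Driver-component.zip
reboot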

On the topic of the new Intel hybrid CPU architecture, which is now the default for all Intel consumer CPUs starting with the 12th Generation or later, the original guidance was to disable either all P-cores or all E-cores to prevent PSODs due to the non-uniform CPU capabilities. More recently, I performed some experiments using ESXi CPU affinity policies, which allow users to make use of both P-cores and E-cores, though this can add some overhead depending on the frequency of workload deployments.
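
For reference, the boot-option workaround that folks in the comments below mention (not officially supported, so use at your own risk) looks like this:

# At the ESXi boot screen, press Shift+O and append to the boot options:
cpuUniformityHardCheckPanic=FALSE

# To persist the setting on a running host so it survives reboots:
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE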


Categories // ESXi, Home Lab Tags // ESXi, homelab, Minisforum, SimplyNUC

Comments

  1. Tom J says

    02/22/2024 at 10:39 am

    I appreciate the hard work you have put into the community over the years...but with the current changes, VMware in the homelab (and SMB) is dead. I'm aware of the VMUG licenses for $200/year, but VMware will only be in larger enterprises and I'm no longer going to waste time with it. Maybe when Broadcom has squeezed every last ounce of life out of it and throws it in the trash in 8-10 years, hopefully someone can swoop in and resurrect it. On to XCP-NG, Hyper-V, Nutanix, Scale, Proxmox, Harvester, OpenShift, OpenStack, etc...

    Reply
    • DL says

      03/02/2024 at 6:00 pm

      I think Broadcom is waaayy overestimating their hand. I know first-hand that one of the largest, most monolithic financial institutions is actively exploring and searching for an alternative.

      Reply
  2. marco ancillotti says

    02/22/2024 at 1:13 pm

    Same here, all my customers want to drop VMware so there is no more need for a homelab, years of experience in the trash...

    Reply
  3. Chad Fredericksen says

    02/22/2024 at 1:23 pm

    Moved onto Proxmox after 15 years and building a career on VMware/ESXi. Thanks for your work all these years.

    Reply
  4. Tony Montanta says

    02/22/2024 at 5:08 pm

    William, can you do something so Broadcom stops killing VMware? I truly, deeply love and enjoy ESXi and VMware products, but Broadcom is killing this company.

    Reply
    • Weiss says

      02/23/2024 at 12:32 am

      I'm very curious if it is possible to pass through the GPU and audio to the VM so the VM will use the onboard HDMI for output?

      Reply
      • William Lam says

        02/23/2024 at 12:58 pm

        See the Graphics section .... I don't know about audio; typically there's virtual audio IIRC, but you'd have to test that, and it wouldn't be unique to this system

        Reply
        • Weiss says

          02/23/2024 at 3:26 pm

          That graphics section got me interested.
          The reason I'm asking is that not all GPU virtualization is full passthrough, and even when it is, not all systems will allow a VM with the iGPU to take over HDMI.
          I'm curious because it would allow me to have one device serve all functions (virtualized): router, SDN controller, NAS and, since it is located under the TV, a media server like LibreELEC.

          Reply
          • William Lam says

            02/23/2024 at 4:26 pm

            I think you're conflating two concepts … passthrough means the guestOS owns and manages the PCIe device, and it can't be used by anyone else, including the hypervisor. Virtualized GPU means the hypervisor manages it; typically that's like VMware vGPU, where it's sliced up and multiple VMs can use it.

            You're right that iGPU passthrough doesn't always mean physical monitor output; in fact, that hasn't been the case for iGPUs for some time, outside of some recent updates (See https://williamlam.com/2022/11/updated-findings-for-passthrough-of-intel-nuc-integrated-graphics-igpu.html). Given that output to a physical monitor is pretty selective and I only had the kit for <12hrs, it's possible it may not work …

          • Weiss says

            02/24/2024 at 1:36 am

            Thank you,
            I'm well aware of the differences.
            Well, worst case I'll have to use the PCIe slot for a GPU.

  5. Hennessen says

    02/22/2024 at 5:13 pm

    Hello, I received an email from VMUG stating that the Advantage membership will stay as it is (for now). I was concerned as well.

    Reply
  6. siddiquivmw says

    02/22/2024 at 6:49 pm

    Great work by Will, please, let's not bring in other negative experiences that we are having at the moment. Things are changing daily; let's stay positive towards VMware.

    Reply
    • RIPVMW says

      02/22/2024 at 7:16 pm

      Positive the leopards certainly won’t eat my face. Yeah, sorry. Party is over. Even if Broadcom yells April Fools on 4/1, nobody will trust them enough to invest in the ecosystem again.

      Thanks William! I’ve loved watching your work over the years.

      Reply
  7. ksgoh says

    02/22/2024 at 9:46 pm

    Thanks William for the updates... I have been following your posts for a long time, as a VMware user since GSX (more than 20+ years). I feel so emotional having to move to another platform... thanks Broadcom for killing the company...

    Reply
  8. Joe H says

    02/22/2024 at 10:12 pm

    Thanks for posting this. I'd never heard of them until now. I've been looking at reasonably priced NUC-type computers to start a new, quiet home network and retire all my power-hungry legacy servers. Great detail on the build options.

    Reply
  9. Kama says

    02/23/2024 at 5:29 am

    Great hardware for proxmox!

    Reply
  10. Wes Duncan says

    02/23/2024 at 10:28 am

    Great write up, but why even talk about VMware anymore? They're a thing of the past everywhere except for the world's largest companies, and I'm sure even they are making plans to move on.

    I've been with VMware for a long time! It's been an amazing product that has been a true joy to work with.

    For now I'm planning on moving to proxmox, but it's definitely not the same. Hopefully it improves rapidly due to all of the extra user base that it is gaining.

    Reply
  11. Robb Wilcox says

    02/23/2024 at 2:21 pm

    Give Nutanix CE a spin.

    Reply
  12. TheDDC says

    02/23/2024 at 5:56 pm

    No comment from WL on the recent unpleasantness.

    So long and thanks for the fish perhaps?

    Reply
  13. PWang says

    02/23/2024 at 8:44 pm

    Does it support SR-IOV?

    Reply
  14. Bogdan says

    02/25/2024 at 9:08 am

    Unfortunately, I came here with the same feelings as many other commenters. After many years of working with VMware (started out with ESX 3 and GSX), both in a home lab and in the data center, it is sad to see the way the company is going.

    A big thank you to William - your work has been invaluable over the years, and it has helped me many times! From specific configurations to troubleshooting various issues, this site has been an incredible resource!

    Reply
  15. Victor says

    02/26/2024 at 2:17 pm

    Hi William,
    Thank you for the nice article, I also bought the same model.
    Did you manage to show 20 cores instead of 14 in the ESXi details?
    Can you share how that can be done?
    Did you also manage to squeeze a VCSA inside?
    Thank you,
    Victor

    Reply
    • William Lam says

      02/27/2024 at 7:38 am

      You can't get 20 cores ... it'll either be a total of 6 (disable E-Core), 8 (disable P-Core) or 14 (P+E no HT). Please read https://williamlam.com/2024/01/experimenting-with-esxi-cpu-affinity-and-intel-hybrid-cpu-cores.html for more details

      Reply
  16. Julian says

    03/28/2024 at 3:41 pm

    Hi William,

    I would like to know which (brand/model) M.2 SSDs can be used with ESXi 8?

    Julian

    Reply
    • lamw says

      03/28/2024 at 7:06 pm

      Typically any Intel, Samsung or WD is your best bet. You can also look at https://williamlam.com/2023/02/quick-tip-additional-nvme-vendors-sk-hynix-sabrent-for-esxi-homelab.html

      If you'd rather not "guess", then you can always use the VMware HCL, but those will typically be Enterprise devices

      Reply
  17. Satan023 says

    05/10/2024 at 10:45 am

    A Minisforum MS-01 with 16GB DDR5 and a 1TB M.2 PCIe 4.0 SSD costs 4100 RMB ($567) in China

    Reply
  18. Martin says

    05/18/2024 at 7:57 am

    Hi, I've just received the MS-01. I bought it because I saw reviews saying that iGPU passthrough on ESXi 8 works without any issues. It's kind of true: ESXi will pass through the iGPU, but it's not working with a Win11 VM, giving error code 43. I dug deeper on the internet and it seems it will not work. Was any of you able to pass through the iGPU into a Windows VM and make it work?
    If not, most probably I will sell this unit, as it's not working for the use case I bought it for 🙁

    Reply
    • William Lam says

      05/18/2024 at 10:44 am

      I've never claimed nor written anything about Windows iGPU functioning (there's a reason I say Linux) - See https://williamlam.com/2022/11/updated-findings-for-passthrough-of-intel-nuc-integrated-graphics-igpu.html for the reason; this has been a known issue for some time due to lack of supported Intel drivers

      Reply
      • Martin says

        05/19/2024 at 1:40 am

        yep, that's why it's a pity I hadn't found this page before making my purchase. I followed a few other reviews of this product.

        Yesterday I was trying to pass through the iGPU to Ubuntu (24.04). It seems this one isn't straightforward either. If I create the VM with the iGPU, I get a black screen in the console window. So I installed Ubuntu without the iGPU and attached the iGPU to the VM after the installation. Same here: if I run the VM, the console window is just a black screen.
        Is there any procedure I may study? I think I've followed the one above. Thanks.

        Reply
        • William Lam says

          05/19/2024 at 3:31 am

          This is expected. If you're passing through a GPU (external or internal) to a VM, then use SSH or enable remote desktop if you need a graphical interface. In fact, we typically recommend disabling the default SVGA for the VM for optimal performance.

          Reply
  19. Alessandro Gnagni says

    05/26/2024 at 3:32 am

    Hello,
    I'm experiencing a random PSOD about once a month on 2 different MS-01 units.
    From the dump log it seems to be something related to the CPU dispatcher.
    Latest ESXi 8.0; does anyone else have the same issue?

    Reply
    • khendar9001 says

      05/26/2024 at 3:40 am

      Additional info: e-cores disabled.

      Reply
  20. Bogdan says

    07/30/2024 at 2:04 am

    With the 8.0.3 release build 24022510 -
    "HW feature incompatibility detected: cannot start"
    And a purple screen 🙂

    Reply
    • Bogdan says

      07/30/2024 at 2:19 am

      Disabling the E-Cores did work to get it to boot.
      Not sure if cpuUniformityHardCheckPanic=FALSE will do the same but I will test.

      Reply
    • khendar9001 says

      07/30/2024 at 2:42 am

      for me it is running fine

      Reply
  21. Dittman says

    08/02/2024 at 2:43 pm

    I have had two units running ESXi 8.0.2 for a couple of months without disabling CPUs, but with the two settings enabled, without any problems until today, when one PSODed with "Fatal CPU mismatch" on "Hyperthreads per core", "Cores per package", "Intel performance monitoring capabilities", "Cores per die", and "Cores per tile".

    I've disabled the E-cores for now on that one.

    Reply
  22. TB says

    08/17/2024 at 10:53 am

    I've just received the MS-01 and I'm attempting to install ESXi 8, but I keep encountering a PSOD. Can anyone assist me with this?

    Reply
    • zubilitic says

      10/17/2024 at 2:21 am

      This happens most probably because you either did not disable the E or P cores in the BIOS, or failed to add the needed option during boot: Shift+O and cpuUniformityHardCheckPanic=FALSE

      Reply
  23. Java says

    09/23/2024 at 6:03 am

    Installing ESXi 8.0.2 on the Gem12 Max works OK. I just tried that tonight. 🙂

    Reply
  24. derekrreynolds says

    12/24/2024 at 11:38 am

    Has anyone had any success exposing the MS-01's sensors to ESXi? When I go to Monitor | System Sensors it gives me an error: "This system has no IPMI capabilities, you may need to install a driver to enable sensor data to be retrieved."

    Reply
    • William Lam says

      12/24/2024 at 1:31 pm

      Minisforum would need to create drivers to surface them up to ESXi; no such drivers exist

      Reply
  25. Chencito says

    01/20/2025 at 6:21 pm

    Great review, and thanks for the tutorial on getting rid of the PSOD from non-uniform CPU cores.

    I have been tinkering with a few things on the Minisforum MS-01, and noticed it's a great and extremely efficient machine for an ESXi host. It has the latest CPU instructions (which is a problem because all my hosts are older and need EVC activated). One of my latest attempts is getting a Tesla T4 vGPU to work in the Minisforum. That little card is a good card for small LLMs. So far I was able to get the GRID vGPU (host and guest) drivers installed in Ubuntu. No success in Windows 10, 11, or Server 22. No idea why I get a BSOD when I try to install guest drivers in their corresponding VMs. The error from the VMware logs is that I have TDR errors with tons of AAAAAA.
    If anyone has had this issue and was able to fix it, I would appreciate the solution. I asked in Discord servers, and they were not able to find the solution. I tried everything, turning off the E-cores, etc.
    So far, only the Ubuntu guest VM was able to work without issues.

    Reply
  26. Rand Man says

    02/21/2025 at 2:44 pm

    Thanks for this nice article! The Minisforum MS-01 ticks all my boxes (well, a nice IPMI would be nice, but I suppose that's asking too much).

    I'm still on ESXi 6.7. I've got an old perpetual license, and never had the need/urge to upgrade beyond ESXi 6.7. I've got a couple of old ESXi servers still on 6.7. I'm curious if the Minisforum MS-01 can run ESXi 6.7? You mentioned the Community Networking Driver for ESXi Fling for ESXi 7.0; I'm wondering if it can work for 6.7. I tried to go to the site just now, but the link you provided seems to hang, and I get no response.

    Reply
