In recent years, a number of new players have entered the mini PC market and have really been pushing the boundaries of small form factor systems. Minisforum is one such company: founded in 2018, it has been steadily producing more interesting kits to compete with some of the more established vendors in this space.
Early on, the kits from Minisforum were pretty comparable (in compute, network and storage capabilities) to those of other vendors using the popular 4x4 design pioneered by Intel with its NUC platform. With each new generation of mini PCs from Minisforum, the chassis aesthetics became more unique and the offerings more differentiated, including broader CPU choices such as some of the latest AMD desktop and mobile processors.
Even I was intrigued by some of Minisforum's offerings from a VMware perspective, but unfortunately Minisforum had no interest in collaborating when I reached out a while back. Over the years, I stayed informed of new releases from Minisforum, but nothing really stood out to me as much as their recent announcement of the Minisforum MS-01 Workstation.
UPDATE (03/05/2024) - SimplyNUC has just launched the Onyx Pro, which is simply a rebrand of the Minisforum MS-01, so the review here also applies to the SimplyNUC Onyx Pro.
The VMware community also agreed: when the MS-01 was announced in early January of this year, I had numerous folks reach out asking for my thoughts on the MS-01. I shared some initial impressions in this Twitter/X thread based on Minisforum's website, without actually getting hands on with the system.
At the end of January, I came to learn that fellow VMware colleague Alex Fink had purchased several MS-01 units to set up a VMware Cloud Foundation (VCF) environment, and he kindly offered to let me borrow one of the units for 24 hours to get some quick hands-on time. Long story short, here is a detailed review of running ESXi on the Minisforum MS-01, with a big thanks to Alex for contributing back to our community! 🥳
Compute
There are three CPU options to choose from for the MS-01: an Intel 13th Generation i9 (Raptor Lake), or an Intel 12th Generation i9 or i5 (Alder Lake) processor.
- Intel i9-13900H (6P + 8E)
- Intel i9-12900H (6P + 8E)
- Intel i5-12450H (4P + 4E)
Since the MS-01 uses Intel's new hybrid CPU architecture, which integrates two types of CPU cores, Performance-cores (P-cores) and Efficiency-cores (E-cores), onto the same physical die, there are some updated options for those looking to run ESXi; you can find more details in the ESXi section at the bottom of this blog post.
For memory, the MS-01 has two DDR5 SODIMM slots; DDR4 SODIMMs are not compatible. Capacity-wise, only the Intel i9-13900H processor is officially listed as supporting 96GB of memory, which is only possible with the new non-binary 48GB DDR5 SODIMMs, and I was able to confirm this using my own Mushkin 2 x 48GB DDR5 memory kit.
The other two Intel 12th Generation CPUs are only listed as supporting a maximum of 64GB (2 x 32GB) of memory, but if I had to guess, 96GB would probably also work, as what Intel officially lists does NOT always reflect what actually works. In fact, this is a good reminder that while Intel NUCs only recently started to officially support 64GB, it had been possible several years earlier, as I had demonstrated.
Network
The MS-01 comes with an impressive four onboard network adaptors: an Intel I225-V (2.5GbE), an Intel I225-LM (2.5GbE) and two Intel X710 SFP+ (10GbE), all of which are fully recognized by ESXi as you can see from the screenshot below. Multiple 2.5GbE ports are not an uncommon configuration for a small form factor system, but combining that with dual 10GbE connectivity is definitely a nice touch by Minisforum and certainly a first of its kind. For those interested in deploying vSAN (OSA or ESA) or NSX with VCF, you not only have the connectivity but also the additional bandwidth to run some serious workloads without being limited by networking.
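If you want to verify this yourself after installing ESXi, a quick check from the ESXi Shell (or over SSH) is to list the detected uplinks; this is standard esxcli and should show all four ports as vmnic devices:

```
# List the physical NICs ESXi has detected; on the MS-01 you should see
# the two Intel I225 2.5GbE ports and the two Intel X710 SFP+ ports
esxcli network nic list
```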
If for some reason you are not satisfied with the onboard networking, you can certainly add more capacity by using the two Thunderbolt 4 ports with one of these Thunderbolt 10GbE solutions for ESXi. You can also add USB-based networking by using the popular USB Network Native Driver for ESXi Fling.
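As a rough sketch of what installing the Fling looks like on ESXi 7.x/8.x (the offline bundle path and filename below are placeholders, use whichever build matches your ESXi version):

```
# Install the USB Network Native Driver for ESXi Fling from its offline bundle
# (placeholder path/filename shown); a reboot is required afterwards before
# the USB NICs show up as vusbX uplinks
esxcli software component apply -d /vmfs/volumes/datastore1/<usb-nic-fling-offline-bundle>.zip
reboot
```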
Storage
The MS-01 can hold 3 x NVMe storage devices, and what is really unique about the MS-01 is that it supports two different storage configurations.
- Configuration 1 - All M.2 SSDs
- 1 x PCIe Gen 3 M.2 SSD (2280/22110)
- 1 x PCIe Gen 3 M.2 SSD (2280/22110)
- 1 x PCIe Gen 4 M.2 SSD (2280)
- Configuration 2 - M.2 + U.2 SSDs
- 1 x PCIe Gen 3 M.2 SSD (2280/22110)
- 1 x PCIe Gen 3 M.2 SSD (2280/22110)
- 1 x PCIe Gen 4 U.2 SSD (7mm ONLY)
The ability to add a U.2 SSD is really slick because it enables the use of NVMe namespaces on U.2 SSDs that support them, such as the Samsung PM9A3, allowing users to carve up a single SSD for multiple purposes including ESXi OSData, VMFS volumes and vSAN!
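As a quick way to see what namespaces a U.2 (or M.2) NVMe SSD is currently presenting to ESXi, you can query the NVMe controllers from the ESXi Shell; the adapter name below is just an example and will differ per system, and actually creating or resizing namespaces is typically done with vendor tooling or nvme-cli beforehand:

```
# List the NVMe controllers that ESXi has discovered
esxcli nvme device list

# Show the namespaces behind a specific controller (vmhba2 is an example name)
esxcli nvme device namespace list -A vmhba2
```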
The MS-01 also includes a U.2-to-M.2 adaptor (pictured above), which needs to be plugged into the far-left M.2 slot if you wish to make use of a U.2 SSD.
***One VERY important thing to note is that there is a physical toggle/switch located in the upper left (pictured above) that controls the amount of power delivered to the far-left M.2 slot. As you can see from the open chassis picture above, there is also a giant warning sticker right above the toggle/switch warning users that if it is on the incorrect setting (e.g. the U.2 setting with an M.2 SSD installed), it can potentially damage your M.2 SSD. Make sure to triple-check that you have the correct setting and that you do not accidentally change it while installing your M.2 or U.2 SSD!
IO Expansion
Another neat thing about the MS-01 is the additional IO expansion available through a single half-height, low-profile PCIe 4.0 x8 slot, which can provide more IO (network or storage) or graphics capabilities and can support up to an NVIDIA RTX A2000 Mobile GPU. If you are interested in seeing what other IO devices have been tested, check out this Serve The Home forum post that is cataloging what folks have tried with the MS-01.
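To confirm that whatever you install in the PCIe slot is actually enumerated by ESXi, the simplest check is to dump the PCI device list from the ESXi Shell and look for the card's vendor/device name:

```
# List all PCI devices visible to ESXi (page through the output with 'more')
esxcli hardware pci list | more
```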
Form Factor
The size of the MS-01 is pretty impressive given all the capabilities that this kit includes! The form factor reminds me a lot of the Lenovo ThinkStation P3 Tiny, and it would not surprise me if Minisforum was inspired by or borrowed from that design, especially with the quick-release latch to slide out the internal chassis without requiring any tools. Pictured above is the MS-01 stacked on top of my Supermicro E200-8D; as you can see, it is slightly taller but shorter in length, which actually surprised me. The full dimensions of the MS-01 are 196×189×48 mm.
Security
The TPM (Trusted Platform Module) included in the MS-01 is an fTPM that only supports the CRB (Command-Response Buffer) protocol and not the industry-standard FIFO (First In, First Out) protocol, which is a requirement for the TPM to be supported by ESXi.
Graphics
Depending on the CPU that you select for the MS-01, you will have access to either Intel Xe or UHD integrated graphics (iGPU) with up to 96 or 48 execution units (EU) respectively, both of which can be passed through to an Ubuntu Linux VM.
Note: iGPU passthrough to a Windows VM will NOT work due to lack of Intel driver support as shared in this detailed blog post.
Below are the high-level instructions for setting up iGPU passthrough to a VM.
Step 1 - Create a VM and install Ubuntu Server 22.04 (I recommend using 60GB of storage or more, as additional packages will need to be installed) or Ubuntu Server 23.04, where the i915 drivers are already incorporated as part of the distribution. Once the OS has been installed, go ahead and shut down the VM.
Step 2 - Enable passthrough of the iGPU under the ESXi Configure->Hardware->PCI Devices settings, then add a new PCI Device to the VM and select the iGPU. You can use either DirectPath IO or Dynamic DirectPath IO; it does not make a difference.
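If you prefer the command line over the vSphere UI for this step, recent ESXi releases also expose passthrough toggling via esxcli; treat the exact sub-commands, flags and device address below as assumptions to verify on your build (the Intel iGPU is commonly at 0000:00:02.0, but confirm with the list command first):

```
# Show PCI devices and their current passthrough state
esxcli hardware pci pcipassthru list

# Enable passthrough for the iGPU (verify the device address on your system)
esxcli hardware pci pcipassthru set --device-id=0000:00:02.0 --enable=true
```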
Step 3 - Optionally, if you wish to disable the default virtual graphics driver (svga), edit the VM and under VM Options->Advanced->Configuration Parameters change the following setting from true to false:
svga.present
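For reference, this simply flips one entry in the VM's .vmx file (with the VM powered off), which you can also edit directly; the resulting line looks like this:

```
svga.present = "FALSE"
```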
Step 4 - Power on the VM and then follow these instructions for installing the Intel graphics drivers for Ubuntu 22.04. Once completed, you will be able to successfully use the iGPU from within the Ubuntu VM, as shown in the screenshot above.
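Once the drivers are installed, a couple of quick checks from inside the Ubuntu VM will confirm that the iGPU is visible and has been claimed by the i915 kernel driver:

```
# Confirm the passed-through iGPU is visible and which kernel driver is bound to it
lspci -k | grep -i -A 3 'vga\|display'

# The DRM render node used by media/compute workloads should now be present
ls -l /dev/dri/
```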
ESXi
As expected, the latest release of ESXi 8.0 Update 2 installs fine on the MS-01 without any issues, and no additional drivers are required since the Community Networking Driver for ESXi has been productized as part of the ESXi 8.0 release. If you want to install ESXi 7.x, you will need to use the Community Networking Driver for ESXi Fling to have it recognize the onboard network devices.
As for dealing with the new Intel hybrid CPU architecture, which is now the default for all Intel consumer CPUs starting with the 12th Generation, the original guidance was to disable either all P-cores or all E-cores to prevent PSODs due to the non-uniform CPU capabilities. More recently, I performed some experiments using ESXi CPU affinity policies, which allow users to make use of both P-cores and E-cores, but this can add some overhead depending on the frequency of workload deployments.
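If you decide to keep both P-cores and E-cores enabled, the usual workaround is to suppress the CPU uniformity check; here is a minimal sketch (the kernel setting is also mentioned in the comments below), with the usual caveat that scheduling across non-uniform cores is not officially supported:

```
# At the ESXi installer boot prompt (press Shift+O), append:
cpuUniformityHardCheckPanic=FALSE

# After installation, persist the setting so the host continues to boot
# with both P-cores and E-cores enabled:
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
```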
Tom J says
I appreciate the hard work you have put into the community over the years...but with the current changes, VMware in the homelab (and SMB) is dead. I'm aware of the VMUG licenses for $200/year, but VMware will only be in larger enterprises and I'm no longer going to waste time with it. Maybe when Broadcom has squeezed every last ounce of life out of it and throws it in the trash in 8-10 years, hopefully someone can swoop in and resurrect it. On to XCP-NG, Hyper-V, Nutanix, Scale, Proxmox, Harvester, OpenShift, OpenStack, etc...
DL says
I think Broadcom is waaayy overestimating their hand. I know firsthand that one of the largest, most monolithic financial institutions is actively exploring and searching for an alternative.
marco ancillotti says
Same here, all my customers want to drop VMware, so there's no more need for a homelab... years of experience in the trash...
Chad Fredericksen says
Moved onto Proxmox after 15 years and building a career on VMware/ESXi. Thanks for your work all these years.
Tony Montanta says
William can you do something so Broadcom stops killing VMware. I truly deeply love and enjoy esxi and VMware products but Broadcom is killing this company.
Weiss says
I'm very curious if it is possible to passthrough GPU and Audio to the VM so the VM will use the onboard HDMI for the output?
William Lam says
See the Graphics section .... I don't know about audio; typically there's virtual audio IIRC, but you'd have to test that, and it wouldn't be unique to this system
Weiss says
That graphics section got me interested.
The reason I'm asking is that not all GPU virtualization is full passthrough, and even when it is, not all systems will allow a VM with the iGPU to take over the HDMI output.
I'm curious because it would allow me to have one device serve all functions (virtualized): router, SDN controller, NAS and, since it is located under the TV, a media server like LibreELEC.
William Lam says
I think you’re conflating two concepts … passthrough means the guest OS owns and manages the PCIe device and it can’t be used by anyone else, including the hypervisor. A virtualized GPU means the hypervisor manages it, typically something like VMware vGPU where it’s sliced up and multiple VMs can use it.
You’re right that iGPU passthrough doesn’t always mean physical monitor output; in fact, that hasn’t been the case for iGPUs for some time outside of some recent updates (see https://williamlam.com/2022/11/updated-findings-for-passthrough-of-intel-nuc-integrated-graphics-igpu.html). Given that output to a physical monitor is pretty selective and I only had the kit for <12hrs, it’s possible it may not work …
Weiss says
Thank you,
I'm well aware of the differences.
Well worst case I'll have to use the PCIe slot for GPU.
Hennessen says
Hello, I received an email from VMUG stating that the Advantage membership will stay as it is (for now). I was concerned as well.
siddiquivmw says
Great work by Will, please, let's not bring in other negative experiences that we are having at the moment. Things are changing daily; let's stay positive towards VMware.
RIPVMW says
Positive the leopards certainly won’t eat my face. Yeah, sorry. Party is over. Even if Broadcom yells April Fools on 4/1, nobody will trust them enough to invest in the ecosystem again.
Thanks William! I’ve loved watching your work over the years.
ksgoh says
Thanks William for the updates. I have been following your posts for a long time, as a VMware user since GSX (more than 20 years). I feel so emotional about moving to another platform... thanks Broadcom for killing the company...
Joe H says
Thanks for posting this. Never heard of them until now. I've been looking at reasonably priced NUC-type computers to start a new, quiet home network and retire all my power-hungry legacy servers. Great detail on the build options.
Kama says
Great hardware for proxmox!
Wes Duncan says
Great write up, but why even talk about VMware anymore? They're a thing of the past everywhere except for the world's largest companies, and I'm sure even they are making plans to move on.
I've been with VMware for a long time! It's been an amazing product that has been a true joy to work with.
For now I'm planning on moving to proxmox, but it's definitely not the same. Hopefully it improves rapidly due to all of the extra user base that it is gaining.
Robb Wilcox says
Give Nutanix CE a spin.
TheDDC says
No comment from WL on the recent unpleasantness.
So long and thanks for the fish perhaps?
PWang says
Does it support SR-IOV?
Bogdan says
Unfortunately, I came here with the same feelings as many other commenters. After many years of working with VMware (started out with ESX 3 and GSX), both in a home lab and in the data center, it is sad to see the way the company is going.
A big thank you to William - your work has been invaluable over the years, and it has helped me many times! From specific configurations to troubleshooting various issues, this site has been an incredible resource!
Victor says
Hi William,
Thank you for the nice article, i also bought the same model.
Did you manage to get ESXi to show 20 cores instead of 14?
Can you share how this can be done?
Did you also manage to squeeze a VCSA inside?
Thank you,
Victor
William Lam says
You can't get 20 cores ... it'll either be a total of 6 (disable E-Core), 8 (disable P-Core) or 14 (P+E no HT). Please read https://williamlam.com/2024/01/experimenting-with-esxi-cpu-affinity-and-intel-hybrid-cpu-cores.html for more details
Julian says
Hi William,
I would like to know which (brand/model) M.2 SSDs can be used with ESXi 8?
Julian
lamw says
Typically any Intel, Samsung or WD is your best bet. You can also look at https://williamlam.com/2023/02/quick-tip-additional-nvme-vendors-sk-hynix-sabrent-for-esxi-homelab.html
If you'd rather not "guess", then you can always use the VMware HCL, but those will typically be Enterprise devices
Satan023 says
The Minisforum MS-01 with 16GB DDR5 and a 1TB PCIe 4.0 M.2 SSD costs 4,100 RMB ($567) in China.
Martin says
Hi, I've just received the MS-01. I bought it after seeing reviews saying that iGPU passthrough on ESXi 8 works without any issues. That's kind of true: ESXi will pass through the iGPU, but it does not work with a Win11 VM, which reports error code 43. I dug deeper on the internet and it seems it will not work. Was any of you able to pass through the iGPU into a Windows VM and make it work?
If not, most probably I will sell this unit as for the use case I bought it, it's not working 🙁
William Lam says
I’ve never claimed nor written anything about Windows iGPU functioning (there’s a reason I say Linux) - see https://williamlam.com/2022/11/updated-findings-for-passthrough-of-intel-nuc-integrated-graphics-igpu.html for the reason; this has been a known issue for some time due to lack of supported Intel drivers
Martin says
yep, that's why it's a pity I hadn't found this page before making my purchase. I followed a few other reviews of this product.
Yesterday I was trying to pass through the iGPU with Ubuntu (24.04). It seems this one isn't straightforward either. If I create the VM with the iGPU, I get a black screen in the console window. So I installed Ubuntu without the iGPU and attached the iGPU to the VM after the installation. Same thing: if I run the VM, the console window is just a black screen.
Is there any procedure I may study? I think I've followed the one above. Thanks.
William Lam says
This is expected. If you're passing through a GPU (external or internal) to a VM, then use SSH or enable remote desktop if you need a graphical interface. In fact, we typically recommend disabling the default SVGA for the VM for optimal performance.
Alessandro Gnagni says
Hello,
I'm experiencing a random PSOD about once a month on 2 different MS-01 units.
From the dump log it seems to be something related to the CPU dispatcher.
Latest ESXi 8.0; does anyone else have the same issue?
khendar9001 says
Additional info: e-cores disabled.
Bogdan says
With 8.0.3 release build 24022510 -
HW feature incompatibility detected: cannot start
And a purple screen 🙂
Bogdan says
Disabling the E-cores did work to get it to boot.
Not sure if cpuUniformityHardCheckPanic=FALSE will do the same, but I will test.
khendar9001 says
For me it is running fine.
Dittman says
I have two units running ESXi 8.0.2 for a couple of months without disabling CPUs but with the two settings enabled, and they ran without any problems until today, when one PSODed with "Fatal CPU mismatch" on "Hyperthreads per core", "Cores per package", "Intel performance monitoring capabilities", "Cores per die", and "Cores per tile".
I've disabled the E-cores on that one for now.
TB says
I've just received the MS-01 and I'm attempting to install ESXi 8, but I keep encountering a PSOD. Can anyone assist me with this?