My experiences with small form factor (SFF) systems and Mini PCs over the years have primarily involved Intel-based systems, as they have been the most capable and compatible with the VMware ESXi Hypervisor—especially when it comes to onboard networking options.
Intel's introduction of their Hybrid CPU core architecture, starting with 12th Gen (Alder Lake) and continuing with 13th and 14th Gen (Raptor Lake), Meteor Lake (Core Ultra Series 1), and now Arrow Lake (Core Ultra Series 2), presents a unique challenge for running ESXi.
When AMD announced their Ryzen AI 300 Series processors (formerly codenamed Strix Point) based on their new Zen 5 architecture, I was pretty excited about their approach to a "Hybrid" processor:
AMD's approach to its 'compact' Zen 5c cores is inherently different than Intel's approach with its e-cores. Like Intel's E-cores, AMD's Zen 5c cores are designed to consume less space on a processor die...But the similarities end there. Unlike Intel, AMD employs the same microarchitecture and supports the same features with its smaller cores.
Since both the Zen 5 and Zen 5c cores support the exact same CPU features, they are considered uniform cores, unlike the Intel platform, which now includes three different core types (Performance, Efficiency & Low-Power Efficiency) and requires additional workarounds to utilize most of the cores available on the SoC.
While there is currently only a handful of Ryzen AI Pro 300 Series kits available for purchase, I was fortunate to get hands-on with one from a company called GMKtec, a relatively new player in the small form factor market. I have personally never used a GMKtec system before, so I was looking forward to seeing what they had to offer.
Disclaimer: As of publishing this blog post, a fellow colleague has not had any luck contacting GMKtec to initiate a return; they have been completely non-responsive for several weeks now. I have also observed similar negative feedback on various Reddit threads, which is certainly concerning for prospective buyers. You may want to consider purchasing GMKtec systems through Amazon rather than directly from the vendor, in case you need an exchange or return.
Compute
The EVO-X1 from GMKtec ships with an AMD Ryzen AI 9 HX 370, which packs 12 cores and 24 threads, a massive upgrade in core count for many users coming from earlier Intel-based systems.
The memory on all Strix Point-based systems is currently non-upgradeable, as they use soldered LPDDR5 memory. As of publishing this blog post, the EVO-X1 is available in two memory configurations: 32GB and 64GB, which in 2025 is not much compared to the 96GB and 128GB capacities that are now available in the SODIMM form factor. Cost is most likely the biggest factor, as LPDDR5 memory is still significantly more expensive.
While the NVMe Tiering feature in vSphere can certainly help with additional memory capacity, I really hope to see support for SODIMM memory so users can pick their desired memory capacity.
Network
The networking on the EVO-X1 is also quite interesting, as it includes a pair of Intel i226v (2.5GbE) NICs, another trend I have been noticing where more recent AMD-based kits favor Intel over the Realtek NICs you would typically find in an AMD kit. The Intel i226v is commonly used in Intel-based systems and is fully recognized by ESXi, which is great news for the VMware community.
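If you want to quickly confirm both onboard NICs are recognized after installing ESXi, a single command from the ESXi Shell is enough (driver names will vary by ESXi release):

# List physical NICs along with their driver, link state, speed and MAC address
esxcli network nic list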
Since there are no Thunderbolt ports, for additional networking you will need to look at USB-based networking using the popular USB Network Native Driver for ESXi Fling, which supports over two dozen types of USB-based network adapters.
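If you do go the USB NIC route, installing the Fling is just a matter of applying the downloaded component to your ESXi host; a quick sketch (the component filename below is only a placeholder for the version you download):

# Apply the USB Network Native Driver component after copying it to a datastore (placeholder path/filename)
esxcli software component apply -d /vmfs/volumes/datastore1/USB-NIC-Fling-component.zip
# Reboot the host, then confirm the USB NIC shows up (USB NICs typically enumerate as vusbX)
esxcli network nic list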
Storage
The EVO-X1 can support up to 2 x M.2 NVMe PCIe x4 Gen 4 (2280) drives, which is pretty typical for a 4x4 form factor, but this also means users must decide which VMware storage combination they would like to use, whether that is NVMe Tiering, vSAN OSA/ESA or standard VMFS. With that said, we can get creative and use just one of the NVMe devices for a combined NVMe Tiering, ESXi OSData and VMFS datastore, which would then free up the additional NVMe for vSAN ESA 😁
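Before committing to a layout, it can help to enumerate what ESXi actually sees from the ESXi Shell; a quick sketch (the device identifiers will of course differ on your system):

# List the NVMe controllers detected by ESXi
esxcli nvme device list
# List all storage devices along with their identifiers, which you will need for NVMe Tiering or vSAN setup
esxcli storage core device list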
One unique capability in most modern AMD kits is the inclusion of an OCuLink interface (pictured above on the front of the EVO-X1) which provides an alternative to Intel Thunderbolt, enabling external GPU (eGPU) access.
I was curious if the OCuLink interface could be used to add an additional M.2 NVMe drive, similar to Thunderbolt. I found this M.2 NVMe to OCuLink enclosure (pictured above) on Amazon, which was reasonably priced, hoping it could be consumed by ESXi. The OCuLink interface on the EVO-X1 does not support hot-plug, but even after powering off everything and then connecting the OCuLink NVMe enclosure, the M.2 NVMe device was only partially visible to ESXi and hence was not functional.
I attempted to debug the issue with VMware Engineering, but there was not a clear reason as to why the NVMe device was only partially seen by ESXi. Engineering theorized that this could be a physical issue with the OCuLink enclosure, so I have returned it. I did try another, more expensive M.2 NVMe to OCuLink enclosure, but that one was even worse, as the device was not even partially seen when listing via lspci. It looks like OCuLink with storage is a no-go and may only be useful for eGPU usage, which I have not personally tested with ESXi, so YMMV there as well.
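If you want to try your luck with a different enclosure, this is roughly how I would check whether an OCuLink-attached NVMe device is even enumerating at the PCI level from the ESXi Shell (just a sketch, the grep pattern may need adjusting for your device):

# Look for the NVMe controller behind the OCuLink port on the PCI bus
lspci | grep -i nvme
# Cross-check with the VMkernel's view of the PCI bus, including which driver (if any) claimed the device
esxcli hardware pci list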
Alternatively, if you really want a 3rd M.2 NVMe, you can remove the WiFi adapter, but you will need to purchase an M.2 A+E adapter so that you can add either an M.2 2242 or 2230 NVMe device. I purchased this $8 M.2 A+E adapter and a Sabrent 512GB 2230 NVMe from Amazon, and once installed, ESXi immediately saw the NVMe device without any issues. The only caveat with using the M.2 A+E adapter on the EVO-X1 is that the added height of the adapter and NVMe device prevents the EVO-X1 lid from completely closing, as the fan chassis bumps into the NVMe device. It might be possible to find a slightly thinner fan that fits the EVO-X1 lid, but this is something to consider if you are going to go down this route.
Note: Adding a 3rd M.2 NVMe by replacing the WiFi adapter is also possible on the GMKtec K8 Plus as recently shared by Tom Fojta.
Graphics
The AMD Ryzen AI 9 HX 370 ships with integrated graphics (iGPU) in the form of an AMD Radeon 890M, which is already pretty capable, but combined with the NPU (details in the next section), there are some really powerful AI use cases this device can enable, such as this recent example from AMD on running DeepSeek R1 models locally, which I was interested in exploring further.
Unfortunately, passthrough of the iGPU was not successful using either Windows 11 or the latest Ubuntu 24.10 release, and as you can see from the dmesg output below, it fails to load the amdgpu driver, which is needed for proper functionality. I have also filed an internal bug to see if there is anything that can be done, but for the time being, the iGPU does not function properly in passthrough.
[ 2.529400] [drm] amdgpu kernel modesetting enabled.
[ 2.529530] [drm] amdgpu version: 6.10.5
[ 2.531049] amdgpu: Virtual CRAT table created for CPU
[ 2.531173] amdgpu: Topology: Add CPU node
[ 2.534412] amdgpu 0000:02:05.0: enabling device (0000 -> 0003)
[ 2.537928] amdgpu 0000:02:05.0: amdgpu: get invalid ip discovery binary signature
[ 2.538061] [drm:amdgpu_discovery_set_ip_blocks [amdgpu]] *ERROR* amdgpu_discovery_init failed
[ 2.538462] amdgpu 0000:02:05.0: amdgpu: Fatal error during GPU init
[ 2.538693] amdgpu 0000:02:05.0: amdgpu: amdgpu: finishing device.
[ 2.539185] amdgpu 0000:02:05.0: probe with driver amdgpu failed with error -22
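For context, this is roughly how the iGPU was toggled for passthrough on the host and how the driver state can be checked inside the Linux guest; the PCI address below is only an illustrative placeholder:

# On the ESXi host: locate the iGPU entry (adjust the grep pattern as needed) and enable passthrough
esxcli hardware pci list | grep -i -B 2 -A 25 display
esxcli hardware pci pcipassthru set -d 0000:c5:00.0 -e true -a

# Inside the Ubuntu guest: check whether the amdgpu driver bound to the passed-through device
sudo dmesg | grep -i amdgpu
lspci -k | grep -i -A 3 vga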
AI Accelerator
As noted earlier, the AMD Ryzen AI Pro 300 Series also includes a Neural Processing Unit (NPU), similar to modern Intel-based platforms, but unlike Intel, where the NPU is exposed as a regular PCIe-based device, the NPU on this AMD system is actually integrated with the iGPU. Since the iGPU cannot be consumed via passthrough, the NPU is currently not usable either.
Security
I was hopeful the TPM module on the EVO-X1 was a proper dTPM rather than the typical fTPM, but that was not the case. The TPM on the EVO-X1 only supports the CRB protocol and not FIFO, which is required to properly function with ESXi. While there is a mode to switch to a "discrete" TPM in the system BIOS under Advanced->AMD CBS->SOC Miscellaneous Control->Trusted Platform Module, it simply gets rid of the warning message in ESXi that a connection cannot be established with the TPM.
Form Factor
The EVO-X1 chassis has a very clean and compact design that certainly makes it stand out compared to other 4x4 systems available on the market. As you can see from the picture above (top to bottom: ASUS NUC 14 Pro, Intel NUC 13 Pro Tall and GMKtec EVO-X1), the EVO-X1 is just slightly taller than the "Tall" version of the Intel NUC, coming in at 110.19 x 107.3 x 63.2 mm. With such a compact form factor, you can easily stack several EVO-X1 units without taking up much space.
ESXi
When I first booted the latest version of ESXi, which is currently 8.0 Update 3c, I was actually surprised to see a PSOD due to the detection of non-uniform cores. After speaking with Engineering, it turns out the PSOD was not caused by non-uniform cores, since both Zen 5 and Zen 5c cores share the same microarchitecture, but rather by the different number of cores per tile between the two core types. While this type of architecture was not expected from an ESXi perspective, the handling of this kind of non-uniformity could be improved in the future.
Luckily, we already have a workaround for the PSOD by adding the following ESXi kernel boot option: cpuUniformityHardCheckPanic=FALSE (more details on configuring this setting can be found HERE):
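If you have not done this before, the short version is to append the option at the boot loader during the ESXi install and then persist it once ESXi is up; a minimal sketch (the esxcli setting name matches the boot option):

# At the ESXi installer/boot loader: press Shift+O and append the following to the boot options
cpuUniformityHardCheckPanic=FALSE

# After installation: persist the setting so it survives reboots
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE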
Once the ESXi kernel boot option was added, ESXi installed and ran perfectly fine, and I was even able to successfully set up NVMe Tiering, even with just the 32GB LPDDR5 model!
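For those new to NVMe Tiering, the setup boils down to enabling the feature, creating a tier device on an unused NVMe device and sizing the tier relative to DRAM; a rough sketch of the typical workflow (the device path below is a placeholder for your own NVMe device):

# Enable the NVMe Tiering feature (takes effect after a reboot)
esxcli system settings kernel set -s MemoryTiering -v TRUE

# Create a tier device on an unused NVMe device (placeholder device path)
esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____<your-device>

# Size the NVMe tier as a percentage of DRAM (e.g. 400 = 4x DRAM), then reboot
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400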
Thank you for sharing! Looking forward to more hands-on posts like this.
BTW, the Sabrent 512GB 2230 NVMe SSD you bought seems to be using a Phison E21T controller. Does ESXi recognize it natively or is an extra driver required? Appreciate your comment!
Please re-read the blog post as the answer is provided there 🙂