SimplyNUC has been a long-time partner and reseller of the popular Intel NUC platform, but over the past few years they have expanded their portfolio to include additional 4x4 systems that are designed completely in-house by SimplyNUC, including Topaz, Ruby, Cypress and Chapel Rock, to name just a few.
Moonstone is the latest 4x4 addition from SimplyNUC, an AMD-based kit that supports the latest AMD Zen 4 (Phoenix) and Zen 3+ (Rembrandt R) processors. The VMware community has always been interested in an AMD-based kit, but these systems typically do not work well because they ship with a Realtek-based network adapter, for which Realtek does not provide ESXi drivers.
The first thing that caught my attention when I first heard about Moonstone was that this was an AMD kit that features an Intel-based network adapter as its primary network interface! 😲
This definitely took me by surprise. I am not sure whether AMD is making a statement about its networking choices or simply diversifying its networking options 🤔 but either way, I was not complaining, and I thought this might be the first truly viable AMD 4x4 candidate, from a VMware perspective, in quite some time.
On the outside, the Moonstone looks exactly like your typical Intel NUC 4x4 design, and it definitely gave me Intel NUC 11 Pro (Tall chassis) vibes. There are three different models of the Moonstone (R9, R7 & R5): the higher-end R9 uses the new AMD Zen 4 processor, while the R7 and R5 use an AMD Zen 3+ processor.
- AMD Ryzen 9-7940HS (8 cores / 16 threads)
- AMD Ryzen 7-7735U (8 cores / 16 threads)
- AMD Ryzen 5-7535U (6 cores / 12 threads)
The Moonstone officially supports 96GB (DDR5 4800 SO-DIMM) of memory on the R9 and 64GB (DDR5 4800 SO-DIMM) on the R7/R5. The R9 might be the very first 4x4 system to officially support the new non-binary DDR5 48GB memory modules, even though the ASUS PN64-E1 was the very first platform on which I was able to confirm that 96GB of memory was even possible. The Moonstone R9 fully recognizes all 96GB of memory (Mushkin), which is great for anyone interested in running ESXi on the Moonstone! 😁 While I cannot speak for whether 96GB of memory might work on the R7/R5, this is something to consider if you want a guarantee of a higher memory capacity system. As I recently demonstrated with the Lenovo P3 Tiny, not all DDR5-capable systems will support the new non-binary memory modules.
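If you want to double-check how much memory ESXi actually detects after installing the modules, a quick sanity check from the ESXi Shell (a standard esxcli command) is:

```shell
# Report the total physical memory that ESXi has detected on the host
esxcli hardware memory get
```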
The networking, as mentioned earlier, is what really caught my attention with some of these newer AMD kits, which surprisingly use an Intel network adapter for their primary networking. The Moonstone comes with an Intel I225-V (2.5GbE), which ESXi fully recognizes, as the required driver is now inbox with ESXi 8.0 and later. The chassis of the Moonstone is similar to that of the "Tall" Intel NUC 11/12/13 Pro, and it includes an expansion slot where you can add an additional 2.5GbE network adapter.
At first, I was really happy to see that the Moonstone had networking capabilities similar to those many of us have grown accustomed to with the 4x4 Intel NUCs. However, after installing ESXi on the Moonstone, I noticed something odd about the additional network adapter.
It turns out the network adapter in the expansion slot is not actually connected over PCIe like on an Intel NUC, but rather over USB, which I thought was a peculiar design choice. The second and more important thing I noticed was that the second 2.5GbE network adapter was not the same Intel I225-V, but a Realtek-based USB network adapter! 😞 Furthermore, because it is a USB-based network adapter, it will not be recognized by ESXi out of the box unless you have the USB Network Native Driver for ESXi installed, which was exactly what I had set up, not knowing that the driver would actually be required for the additional network adapter.
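You can easily tell the two adapters apart from the ESXi Shell: PCIe NICs enumerate as vmnicX, while USB NICs claimed by the USB Network Native Driver enumerate as vusbX. Assuming SSH/ESXi Shell access:

```shell
# List all network interfaces that ESXi has claimed a driver for.
# The onboard Intel I225-V should show up as a vmnicX device, while the
# Realtek adapter in the expansion slot should show up as vusb0 once the
# USB Network Native Driver for ESXi is installed.
esxcli network nic list
```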
I think the upside to this configuration is that if the network adapter in the expansion slot were connected over PCIe, it would be completely unusable, as Realtek does not provide any PCIe drivers for ESXi. I personally would have liked to see SimplyNUC provide an Intel-based network add-on that uses PCIe instead; that way you would have networking that ESXi can take full advantage of with no additional drivers. This would have been especially useful for anyone looking to deploy VMware Cloud Foundation (VCF), which requires at least two network adapters (USB networking is not supported).
The higher-end Moonstone R9 also comes with two USB 4 ports, which you can use to add additional networking such as these Thunderbolt 10GbE solutions for ESXi, or you can look at adding more USB-based networking using the popular USB Network Native Driver for ESXi Fling.
The storage options on the Moonstone are not as expansive as I would have liked, especially given the taller 4x4 chassis. It only supports a single M.2 PCIe x4 Gen 4 (2280) drive, which significantly reduces the storage options to just VMFS, as there are not enough storage devices to set up vSAN, which is a pretty popular deployment option. By comparison, the "Tall" Intel NUC 11/12/13 Pro supports an additional M.2 PCIe x1 Gen 3 (2242) drive that could be used for vSAN, which is not possible with the Moonstone.
With that said, if you would like to set up vSAN or add additional storage, the higher-end Moonstone R9 does come with two USB 4 ports that you can use to connect Thunderbolt M.2 NVMe solutions for ESXi. USB 4 is basically Thunderbolt 3 without the royalty fees paid to Intel, so you get the exact same benefits as Thunderbolt 3, and if you are able to find a native USB 4 storage chassis, it can also extend the PCIe bus, similar to the screenshot above, where I was using a Netstor NA611TB3, a Thunderbolt 3-based unit. If you decide to go with the R7 or R5, there are no USB 4 ports and your only option for storage expansion is USB-based storage for ESXi.
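To confirm that ESXi actually sees a USB 4/Thunderbolt-attached NVMe enclosure, you can list the storage adapters and devices from the ESXi Shell (standard esxcli commands):

```shell
# List the storage adapters; an NVMe device behind the enclosure should
# surface as an additional vmhba adapter
esxcli storage core adapter list

# List the attached storage devices, including any NVMe drives in the enclosure
esxcli storage core device list
```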
I was hoping that the Moonstone might use a discrete TPM chip, which ESXi requires for proper attestation capabilities, but sadly it uses an fTPM, which is pretty common amongst consumer 4x4 systems, including the Intel NUCs. In fact, the only 4x4 system that I am aware of today that provides a dTPM is the recent ASUS PN64-E1, and that one is fully compatible with ESXi as it supports the FIFO protocol and not CRB, which you can see from the Moonstone BIOS settings. You will most likely want to disable the TPM device in the BIOS, since you will not be able to use it, and doing so gets rid of the pesky messages from ESXi stating that it is unable to establish a connection with the TPM device.
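If you want to see the messages for yourself before disabling the fTPM in the BIOS, a simple log search on the ESXi host will show them:

```shell
# Look for TPM-related messages (e.g. failure to establish a connection
# with the TPM device) in the current vmkernel log
grep -i tpm /var/log/vmkernel.log
```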
Depending on the specific Moonstone model you choose, you will have an AMD Radeon 780M, 680M or 660M iGPU, respectively, across the three models. One very important thing to be aware of with an AMD iGPU, which I came to learn while working with other AMD kits, is that when you pass through the iGPU, it indirectly passes through the USB controllers, as they appear to be wired together internally.
This has a major implication if you wish to use the additional network adapter I discussed above: because it is connected via USB, ESXi will not be able to use the network adapter, as it will also be passed through to the VM along with the iGPU, which is another unfortunate behavior of AMD-based kits. I do wonder if there are some internal limitations that these AMD 4x4 kits inherently have compared to a typical Intel system ... 🤔
I was able to successfully pass through the AMD Radeon 780M to an Ubuntu 23.04 VM, and the AMD graphics drivers were automatically picked up, as you can see from the screenshot above. You will need to add the following VM Advanced Setting:
pciPassthru.use64bitMMIO = TRUE
or the VM will fail to power on.
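You can add the setting through the vSphere UI (VM Options > Advanced > Edit Configuration), or script it; for example with govc from the govmomi project, assuming a VM named ubuntu-23-04 (a placeholder for your own VM name):

```shell
# Power off the VM, append the advanced setting to its ExtraConfig,
# then power it back on
govc vm.power -off ubuntu-23-04
govc vm.change -vm ubuntu-23-04 -e pciPassthru.use64bitMMIO=TRUE
govc vm.power -on ubuntu-23-04
```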
On Windows, I was not as lucky. While I was able to pass through the AMD Radeon 780M to a Windows 10 VM, after downloading and installing the Adrenalin Edition graphics drivers from AMD's website, the driver fails to load, as shown in the screenshot above. The error message is similar to the Error 43 you get when passing through an Intel iGPU to a Windows VM, so it seems AMD has the same fundamental graphics driver issues as Intel when using an iGPU on Windows.
The latest release of ESXi 8.0 Update 1 installs fine on the Moonstone without any issues; no additional drivers are required, as the Community Networking Driver for ESXi has been productized as part of the ESXi 8.0 release. With vSphere 8.0 Update 2 now officially available, I figured I would also put that to the test, and I am pleased to share that it also installs flawlessly on the Moonstone! If you want to install ESXi 7.x, you will need to use the Community Networking Driver for ESXi Fling for the onboard network devices to be recognized.
As mentioned in the networking section, you will need to install the USB Network Native Driver for ESXi Fling if you add the additional network adaptor via the expansion slot, because it is connected via USB and hence it will be treated as a USB network device, which requires the driver to function.
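Installing the driver is just a matter of transferring the Fling's offline bundle to the host and applying it; a sketch, where the datastore path and bundle filename are placeholders for whatever version you download:

```shell
# Apply the USB Network Native Driver for ESXi component
# (path/filename below is a placeholder for the actual Fling bundle)
esxcli software component apply -d /vmfs/volumes/datastore1/USB-NIC-driver-component.zip

# A reboot is required before the vusbX interfaces appear
reboot
```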
During VMware Explore US, I spotted a SimplyNUC being used in the demo for the recently announced Project Keswick. Figuring the Moonstone would make another excellent candidate for running at the Edge, I wanted to make sure it could also run Keswick!
Project Keswick is completely automated install which uses ESXi Kickstart (I had something to do with this 😉) that configures everything & automatically register the cloud service. Just plug in and power on, it’s that EASY! #VMwareExplore
— William Lam (@lamw) August 23, 2023
As you can see from the screenshot above, I was able to deploy Keswick on the Moonstone and it successfully auto-registered with the Keswick Cloud Service, ready to deploy containers or VM-based workloads using GitOps!
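For readers unfamiliar with ESXi Kickstart, which Keswick's automated install builds on, here is a minimal illustrative ks.cfg sketch (this is not Keswick's actual script; the disk selection, password and hostname are placeholders):

```
# Minimal illustrative ESXi kickstart (ks.cfg) -- all values are placeholders
accepteula
install --firstdisk --overwritevmfs
rootpw VMware1!
network --bootproto=dhcp --device=vmnic0
reboot

%firstboot --interpreter=busybox
# Post-install customization runs here on first boot
# (e.g. setting a hostname or registering with a cloud service)
esxcli system hostname set --host=moonstone-edge-01
```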