The name SolidRun is no stranger to the VMware ecosystem: both the Honeycomb LX2 and the MacchiatoBin are popular Arm platforms used with the ESXi-Arm Fling, especially for development and testing purposes.
However, what I did not know about SolidRun was that they also cater to the x86 market, which I recently learned with the launch of their Bedrock V3000 (AMD Zen 3) and V7000 (AMD Zen 4) platforms.
Given the opportunity to get hands-on with one of the SolidRun x86 kits, the V3000, I knew I had to take it for a spin!
After unboxing the V3000, the first thing that immediately stands out is the overall build quality and aesthetics. The system is truly beautiful to look at and hold, not words I typically use to describe a server 🙂 The V3000 is not for the typical homelab; it is designed to run in harsh and demanding industrial environments, especially those found at the Edge.
The unique fanless design and cooling of the V3000 enables it to be deployed to a number of locations, including ruggedized environments where traditional mounting kits may not be available and DIN rail mounting is required. Another thing that stood out to me while reading about the V3000 is the modularity of the platform: you can easily add a Networking and I/O (NIO) board, a Storage and Extension Cards (SX) board, and a Power Module (PM) to address your different use cases and requirements.
Let's now take a closer look at running ESXi on the V3000!
The V3000 can be configured with either a Ryzen V3C48 (45W) or a Ryzen V3C18I (15W), both of which are Zen 3 CPUs with 8 cores and 16 threads. For memory, the V3000 supports up to 64GB of DDR5-4800, either ECC or non-ECC, which is really nice for customers with higher reliability requirements, especially if you plan to run these at Edge locations where you may not easily be able to service the systems that are running mission-critical workloads.
While the V3000 supports DDR5 SO-DIMMs, I did not get a chance to check whether the new non-binary 48GB DDR5 memory modules would be recognized, partly because the memory slots were not easily accessible.
I knew the i226 network adapters would work, since the Community Networking Driver for ESXi, which enables these devices, is now included with ESXi 8.x and later, but the 10GbE SFP+ adapters were an unknown.
After installing ESXi, I found that the 10GbE SFP+ adapters are actually AMD-based NICs (1022:1458), but unfortunately they are not recognized by ESXi. While ESXi cannot make use of the 10GbE adapters, I had hoped that at least passthrough to a VM would still work; however, VM power-on failed and I noticed the following error messages in the VMkernel logs:
2023-08-25T20:11:42.706Z In(182) vmkernel: cpu12:1159075)World: vm 1159088: 6916: Starting world vmm0:ubuntu-23.04-c of type 8
2023-08-25T20:11:42.775Z In(182) vmkernel: cpu12:1159075)AMDIommu: 428: Created domain 1
2023-08-25T20:11:42.775Z In(182) vmkernel: cpu12:1159075)AMDIommu: 311: Domain 1: bypass = No, identity-mapped = No, top page table = 0x9acc9f000
2023-08-25T20:11:42.940Z In(182) vmkernel: cpu15:1049431)DOM: DOMOwner_SubscribeClusterEncrState:5874: DOM Owner on fe0ae964-5369-cebe-b10c-d063b4051c77 received duplicate cluster encryption state subscription
2023-08-25T20:11:43.161Z In(182) vmkernel: cpu10:1159075)VSCSI: 5253: handle 4978245053194252(GID:8204)(vscsi0:0):Creating Virtual Device for world 1159088 (FSS handle 42807820) numBlocks=52428800 (bs=512)
2023-08-25T20:11:43.162Z In(182) vmkernel: cpu10:1159075)VSCSI: 272: handle 4978245053194252(GID:8204)(vscsi0:0):Input values: res=0 limit=-2 bw=-1 Shares=1000
2023-08-25T20:11:43.164Z In(182) vmkernel: cpu10:1159075)PCIPassthru: 4551: pcipDevInfo(0x43156f401750) allocated for 0000:09:00.2
2023-08-25T20:11:43.224Z Wa(180) vmkwarning: cpu10:1159075)WARNING: PCI: 666: Dev @ p0000:09:00.2 did not complete its pending transactions prior to being reset; will apply the reset anyway but this may cause PCI errors
2023-08-25T20:11:48.419Z Wa(180) vmkwarning: cpu10:1159075)WARNING: PCI: 745: Dev 0000:09:00.2 is unresponsive after reset
2023-08-25T20:11:48.419Z In(182) vmkernel: cpu10:1159075)PCIPassthru: 5209: 0000:09:00.2 :Reset for device failed with Failure
2023-08-25T20:11:48.419Z In(182) vmkernel: cpu10:1159075)PCIPassthru: 1015: pcipdevInfo: 0x43156f401750 (0000:09:00.2), state 0, destroyed
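If you want to quickly check whether any other passthrough devices are hitting the same failure, a small script can scan vmkernel.log for the reset-failure message. This is just an illustrative sketch of my own (the function name and regex are assumptions, keyed to the exact message format shown above):

```python
import re

# Match the PCIPassthru reset-failure message and capture the PCI
# address, e.g. "PCIPassthru: 5209: 0000:09:00.2 :Reset for device failed"
RESET_FAIL = re.compile(r"PCIPassthru: \d+: (\S+) :Reset for device failed")

def find_failed_resets(log_lines):
    """Return the PCI addresses (e.g. 0000:09:00.2) that failed to reset."""
    return [m.group(1) for line in log_lines if (m := RESET_FAIL.search(line))]

sample = [
    "2023-08-25T20:11:48.419Z In(182) vmkernel: cpu10:1159075)"
    "PCIPassthru: 5209: 0000:09:00.2 :Reset for device failed with Failure",
]
print(find_failed_resets(sample))  # ['0000:09:00.2']
```

Feeding it the full vmkernel.log (e.g. via SSH) lists every device that failed its PCI reset during VM power-on.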
After sharing the logs with VMware Engineering, it looks like VM passthrough of these 10GbE adapters will not be possible: the PCI reset fails on these devices, which prevents the VM from powering on 🙁
You can install up to 3 x M.2 PCIe Gen 4 x4 (2280) SSDs in the V3000, which makes it easy to set up vSAN using two of the SSDs while installing ESXi on the third, ensuring the ESX-OSData volume runs on a reliable medium. If you prefer to use VMFS, you can have up to three datastores, including the one on the ESXi installation disk, which can provide additional storage capacity for running workloads.
I initially thought the TPM chip on the V3000 would not be compatible with ESXi, as it showed up as an fTPM. While browsing through the system BIOS, I noticed some additional configuration options, including one called "Route to SPI TPM" which can be found under Advanced->Trusted Computing->AMD fTPM Switch.
I decided to give that a try, and after rebooting the system, the TPM settings in the BIOS now matched what you would see for a discrete TPM. I was able to confirm that ESXi attestation was now possible, as you can see from the screenshot below. I am pleased to share that the TPM included in the V3000 is a proper dTPM for those interested in this capability, which can be extremely critical when running at the Edge to ensure you are booting a secure system.
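As a quick sanity check (a sketch to run from the ESXi Shell on the host itself), you can also ask ESXi how it sees the trusted boot configuration:

```
# Reports whether a TPM is present and whether DRTM is enabled
esxcli hardware trustedboot get
```

If the TPM is being picked up as a proper device, the output should show the TPM as present.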
There is no graphics or video output on the V3000, as it is designed to run as a headless system, which means you will need to use the provided USB-to-serial console cable to enter the system BIOS and/or debug the system. The setup requires installing the FTDI drivers on your workstation (Windows, Mac, or Linux), which enables communication over the serial port; complete setup instructions can be found HERE.
Another implication of using a headless system with ESXi is that the serial console will only show the boot process up to the point where it says "Shutting down firmware services", as shown in the screenshot below. While it may seem like the ESXi boot process has hung, it is actually still running; you just will not see the remainder of the boot process, nor will you be able to access the DCUI over the serial console.
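Once the FTDI drivers are installed, connecting from a terminal usually looks something like the following sketch (the device names and baud rate here are assumptions; check your OS's device listing and SolidRun's documentation for the exact values):

```
# Linux: FTDI adapters typically enumerate as /dev/ttyUSB0
screen /dev/ttyUSB0 115200

# macOS: device name varies, list with: ls /dev/cu.usbserial*
screen /dev/cu.usbserial-XXXX 115200
```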
You will not be able to perform an interactive installation of ESXi; instead, you will need to use an unattended installation method such as ESXi Kickstart to install ESXi on the V3000. Below, I have provided a working ESXi Kickstart example.
Note: If you were to install other operating systems like Ubuntu, the same non-interactive installation is also required as outlined in the SolidRun documentation found HERE.
Both ESXi 8.0 Update 1 and the recently announced vSphere 8.0 Update 2 install on the V3000 without any issues; no additional drivers are required, as the Community Networking Driver for ESXi has been productized as part of the ESXi 8.0 release. If you want to install ESXi 7.x, you will need to use the Community Networking Driver for ESXi Fling for the onboard 2.5GbE network devices to be recognized.
As mentioned in the Graphics section above, with the V3000 being a headless system, ESXi will need to be installed using a non-interactive method. The quickest way is to use an ESXi scripted installation, aka ESXi Kickstart, installing it via USB with an embedded Kickstart configuration file. To learn more about using USB to perform an ESXi Kickstart installation, please refer to this blog post HERE for more information.
Below is an example KS.CFG that can be used to perform a basic ESXi installation on the V3000. In this example, I am installing ESXi to a specific SSD, but you can change it to select a different device if you know the identifier up front, or even have it pick the first disk it finds. For networking, I am using a static IP address, but you could also use a DHCP reservation to ensure the system always obtains a specific address so that you can reach it after the installation, as you will not be able to see what DHCP address it received.
# ESXi 8.x Kickstart Example for SolidRun V3000

vmaccepteula
install --disk=t10.NVMe____Samsung_SSD_980_PRO_1TB_________________658AB931B4382500
rootpw VMware1!
reboot

network --bootproto=static --ip=192.168.30.249 --gateway=192.168.30.1 --nameserver=192.168.30.1 --netmask=255.255.255.0 --hostname=v3000.primp-industries.local

%firstboot --interpreter=busybox

# Ensure hostd is ready
while ! vim-cmd hostsvc/runtimeinfo; do
sleep 10
done

# enable & start SSH
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

# enable & start ESXi Shell
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell

# Enable vSAN traffic on vmk0
esxcli network ip interface tag add -i vmk0 -t VSAN

# Suppress ESXi Shell warning
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

# Configure NTP
esxcli system ntp set -e true -s pool.ntp.org

# Regenerate certificates
/sbin/generate-certificates
/etc/init.d/hostd restart
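For completeness: when embedding the KS.CFG on the USB installer media, the installer's boot loader also needs to be pointed at it. On a standard ESXi installer USB, this is typically done by editing the kernelopt line in BOOT.CFG (and its EFI/BOOT copy), for example:

```
kernelopt=ks=usb:/KS.CFG
```

This tells the ESXi installer to load the Kickstart file from the root of the USB device instead of launching the interactive installer.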