I always love hearing about new and interesting hardware vendors from our VMware community, which is exactly how I came to learn about Protectli back in 2021 from fellow vExpert Trevor Smith. Trevor had shared with the vExpert community that Protectli was developing a 10GbE hardware kit, which really caught my interest since there were not many small form factor options at the time that included 10GbE connectivity.
After following up with Protectli in 2021, they confirmed a 10GbE kit was in the works, but it was still in early design and development, so there was nothing to evaluate at the time. Fast forward to April 2024, when I received a follow-up email asking if I would be interested in kicking the tires on one of the first 10GbE kits in their new Protectli Vault Pro 6600 series.
So here is your first look at ESXi running on the new Protectli 10G Vault Pro!
Compute
The 10G Vault Pro is available in two models, both using an Intel 12th Generation (Alder Lake) mobile processor.
- VP6670 - Intel i7-1255U (2P + 8E)
- VP6650 - Intel i5-1235U (2P + 8E)
Since the 10G Vault Pro uses the new Intel hybrid CPU architecture, which integrates two types of CPU cores, Performance-cores (P-cores) and Efficiency-cores (E-cores), into the same physical CPU die, there are some updated options for running ESXi; you can find more details in the ESXi section at the bottom of this blog post. It would have been nice to see the 10G Vault Pro support the latest Intel 14th Generation CPUs, or at least Intel 13th Generation, but this may have been a result of when development started and/or a decision to optimize cost.
While the official documentation states the maximum memory for the 10G Vault Pro is 64GB using 2 x 32GB DDR5-4800 (SODIMM), I am happy to share that it works perfectly fine with the new non-binary 48GB DDR5 SODIMM modules, which I was able to test using my Mushkin 2 x 48GB DDR5 memory kit for a total of 96GB of memory 😀
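If you want to double-check what ESXi actually sees after swapping in the larger SODIMMs, a minimal pyVmomi sketch like the one below can report the detected memory. The hostname and credentials are placeholders for a lab setup that you would adjust for your own environment.

# Minimal pyVmomi sketch to report the memory ESXi detects on a standalone host.
# The hostname/credentials below are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab host with a self-signed cert
si = SmartConnect(host="vault-pro.lab.local", user="root", pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # On a standalone ESXi host there is a single datacenter/compute resource/host
    host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    mem_gb = host.hardware.memorySize / (1024 ** 3)
    print(f"{host.name}: {mem_gb:.1f} GiB of memory detected")
finally:
    Disconnect(si)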
Network
The networking on the 10G Vault Pro is plentiful, with a total of six network interfaces: 4 x Intel i226-V (2.5GbE) and 2 x Intel X710-BM2 (10GbE SFP+), both of which are fully compatible with ESXi as you can see from the screenshot below.
In fact, the network adapters in the 10G Vault Pro are very similar to those in the Minisforum MS-01, including the same Intel X710 10GbE SFP+ ports, but the 10G Vault Pro differs in using the Intel i226-V for its 2.5GbE interfaces and providing four of them versus the two found in the MS-01.
Unlike most modern Intel kits, which include at least one Thunderbolt interface for additional IO expansion, the 10G Vault Pro does not appear to support Thunderbolt, but you can add USB-based networking by using the popular USB Network Native Driver for ESXi Fling in case the six onboard interfaces are not enough 😉
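If you prefer to verify the NIC inventory programmatically rather than from a screenshot, here is a minimal pyVmomi sketch (again using placeholder lab credentials) that lists each physical NIC along with the driver and link speed that ESXi reports:

# Minimal pyVmomi sketch to list the physical NICs, drivers and link speeds
# reported by ESXi; hostname/credentials are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vault-pro.lab.local", user="root", pwd="VMware1!", sslContext=ctx)
try:
    host = si.RetrieveContent().rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    for pnic in host.config.network.pnic:
        speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0  # 0 = link down
        print(f"{pnic.device}: driver={pnic.driver} mac={pnic.mac} speed={speed}Mb")
finally:
    Disconnect(si)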
Storage
For storage, the 10G Vault Pro supports 1 x PCIe Gen 3 M.2 SSD (2280) and 2 x SATA 3.0 (2.5") drives, which allows you to deploy vSAN (consuming two of the three SSDs) while using the third disk for installing ESXi, which is the recommended install media over USB for greater reliability.
Due to the lack of a Thunderbolt interface, as mentioned earlier in the networking section, you will not be able to add additional PCIe-based storage, but you can look at USB-based storage for VMFS and/or vSAN if you really need additional capacity for lab purposes.
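However you decide to carve up the three drive slots, a minimal pyVmomi sketch like the one below (placeholder lab credentials again) can confirm which disks ESXi has detected before you claim them for vSAN:

# Minimal pyVmomi sketch to list the local disks ESXi has detected;
# hostname/credentials are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vault-pro.lab.local", user="root", pwd="VMware1!", sslContext=ctx)
try:
    host = si.RetrieveContent().rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    for lun in host.config.storageDevice.scsiLun:
        if isinstance(lun, vim.host.ScsiDisk):
            size_gb = lun.capacity.block * lun.capacity.blockSize / (1024 ** 3)
            print(f"{lun.canonicalName}: {lun.vendor.strip()} {lun.model.strip()} {size_gb:.0f} GB")
finally:
    Disconnect(si)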
Form Factor
With everything the 10G Vault Pro includes, I was surprised at how compact it is, with dimensions coming in at 191 x 178 x 76 mm. For those familiar with the popular Supermicro E200-8D, it is very comparable in size: the 10G Vault Pro has a slightly smaller footprint when viewed from the top, but it is taller due to the fanless heatsinks used for cooling. I have also included an Intel NUC (pictured above) so you have a reference when comparing it to some of the smaller form factor kits.
Security
The 10G Vault Pro can optionally be configured with a TPM (Trusted Platform Module) chip, and the good news is that the available TPM is fully compatible with ESXi, which means you can take full advantage of the built-in attestation capabilities provided by vSphere. It is great to see more and more vendors offer optional TPM chips that are inexpensive and, more importantly, enterprise grade with ESXi compatibility.
Graphics
Both the i7 and i5 models of the 10G Vault Pro include Intel integrated graphics that can successfully be passed through to an Ubuntu Linux VM, providing 96 Execution Units and 80 Execution Units respectively.
Note: iGPU passthrough to a Windows VM will NOT work due to lack of Intel driver support as shared in this detailed blog post.
Below are the high-level instructions for setting up iGPU passthrough to a VM. I used Ubuntu Server 24.04 for testing purposes since it already includes the i915 driver out of the box with no additional installation required.
Step 1 - Create and install Ubuntu Server 24.04 VM (recommend using 60GB storage or more, as additional packages will need to be installed). Once the OS has been installed, go ahead and shutdown the VM.
Step 2 - Enable passthrough of the iGPU under the ESXi Configure->Hardware->PCI Devices settings and then add a new PCI Device to the VM and select the iGPU. You can use either DirectPath IO or Dynamic DirectPath IO; it does not make a difference. (If you prefer to script Steps 2 and 3, see the pyVmomi sketch after Step 4.)
Step 3 - Optionally, if you wish to disable the default virtual graphics driver (svga), edit the VM and under VM Options->Advanced->Configuration Parameters change the following setting from true to false:
svga.present
Step 4 - Power on the VM and you should see the iGPU available from within the Ubuntu VM as shown in the screenshot above.
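For those who would rather script Steps 2 and 3 instead of clicking through the UI, below is a minimal pyVmomi sketch that toggles passthrough for the iGPU on the host and flips svga.present on the VM. The hostname, credentials, VM name, and the iGPU PCI address (0000:00:02.0 is typical for Intel integrated graphics, but verify it against your own PCI Devices view) are all assumptions you would adjust for your environment; you still need to attach the PCI device to the VM, and a host reboot may be required for the passthrough change to take effect depending on your ESXi version.

# Minimal pyVmomi sketch for a scripted equivalent of Steps 2 and 3:
# enable passthrough for the iGPU on the host and disable the default
# virtual SVGA device on the VM. Hostname, credentials, VM name and the
# iGPU PCI address are lab placeholders/assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ESXI_HOST = "vault-pro.lab.local"   # placeholder
VM_NAME = "ubuntu-2404-igpu"        # placeholder
IGPU_PCI_ID = "0000:00:02.0"        # typical Intel iGPU address - verify on your host

ctx = ssl._create_unverified_context()
si = SmartConnect(host=ESXI_HOST, user="root", pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

    # Step 2 (host side): mark the iGPU as available for passthrough.
    pci_config = vim.host.PciPassthruConfig(id=IGPU_PCI_ID, passthruEnabled=True)
    host.configManager.pciPassthruSystem.UpdatePassthruConfig([pci_config])

    # Step 3 (VM side): disable the default virtual graphics device.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == VM_NAME)
    view.DestroyView()
    spec = vim.vm.ConfigSpec(extraConfig=[vim.option.OptionValue(key="svga.present", value="FALSE")])
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
    print("Passthrough enabled for", IGPU_PCI_ID, "and svga.present set to FALSE on", VM_NAME)
finally:
    Disconnect(si)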
ESXi
The 10G Vault Pro can run the latest release of ESXi 8.0 Update 2b without any issues; no additional drivers are required, as the Community Networking Driver for ESXi has been productized as part of the ESXi 8.0 release. If you want to install ESXi 7.x, you will need to use the Community Networking Driver for ESXi Fling for it to recognize the onboard network devices.
To deal with the new Intel hybrid CPU architecture, which is now the default for all Intel consumer CPUs starting with the Intel 12th Generation, one approach is to disable either all P-Cores or all E-Cores to prevent PSODs due to the non-uniform CPU capabilities. To disable the E-Cores on the 10G Vault Pro, enter the system BIOS, navigate to Advanced->CPU Configuration->Active Efficient-cores, select 0, and reboot. If you wanted to disable the P-Cores instead to take advantage of the low-power E-Cores, that is currently a limitation of the 10G Vault Pro BIOS; perhaps this might be possible in a future BIOS update.
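After making the BIOS change, you can quickly confirm what ESXi now sees. The minimal pyVmomi sketch below (placeholder lab credentials again) prints the socket, core and thread counts, which should drop to just the two P-Cores (four threads with Hyper-Threading) once the E-Cores are disabled.

# Minimal pyVmomi sketch to confirm the CPU topology ESXi detects after
# disabling the E-Cores in the BIOS; hostname/credentials are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vault-pro.lab.local", user="root", pwd="VMware1!", sslContext=ctx)
try:
    host = si.RetrieveContent().rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    cpu = host.hardware.cpuInfo
    print(host.summary.hardware.cpuModel)
    print(f"packages={cpu.numCpuPackages} cores={cpu.numCpuCores} threads={cpu.numCpuThreads}")
finally:
    Disconnect(si)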
If you wish to use both P-Cores and E-Cores, I recently ran some additional experiments using ESXi CPU affinity policies, which would allow you to maximize the benefit of the hardware. It can add some operational overhead depending on how frequently you deploy workloads, but it is another option for users.
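As a rough illustration of what that looks like programmatically, the pyVmomi sketch below pins a VM to a specific set of logical CPUs via its affinity setting. The VM name, host credentials and the logical CPU IDs are all assumptions; the mapping of logical CPU IDs to P-Cores versus E-Cores depends on how the host enumerates them, so check your own topology before borrowing these values, and note that CPU affinity is not available for VMs in a DRS cluster.

# Minimal pyVmomi sketch that pins a VM to a specific set of logical CPUs
# using a CPU affinity setting. VM name, credentials and the logical CPU
# IDs (0-3 assumed here) are placeholders - verify which logical CPUs map
# to P-Cores vs E-Cores on your own host before reusing them.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vault-pro.lab.local", user="root", pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "ubuntu-2404-igpu")  # placeholder VM name
    view.DestroyView()

    spec = vim.vm.ConfigSpec(
        cpuAffinity=vim.vm.AffinityInfo(affinitySet=[0, 1, 2, 3])  # assumed P-Core threads
    )
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
    print("CPU affinity applied to", vm.name)
finally:
    Disconnect(si)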
Lastly, I also had a chance to quickly try out the latest Continuous vSphere Beta release, and I am happy to share that ESXi ran without any issues and detected all networking and storage, so the 10G Vault Pro can be a future-proof solution 🙂 Similar to the MS-01, the 10G Vault Pro is another nice candidate for those looking to deploy VMware Cloud Foundation (VCF), so it is definitely a kit to be aware of for anyone looking to refresh their lab in 2024!
Znuff says
Who's this review for?
There's no more evaluation version of VMware for people that would run it on such hardware.
Broadcom has priced people out of ESXi.
Yes, I do realize you work for VMware, but this feels tone deaf.
William Lam says
You’re making an assumption that the only consumers of ESXi are personal homelab users or enthusiasts … there are plenty of users who still need/require non-datacenter hardware for various development, testing, automation, edge, etc.
Steven says
Broadcom has shot itself in the foot. As a DoD contractor, I have been using VMware from day one. The big players that are stuck using and paying for VMware at the new prices are looking hard at alternatives like Proxmox and others. We are home users who run it at home to keep up with it at work, and Broadcom has no idea what a mess they have caused; as long as Broadcom hits us engineers at home, we will no longer advise using VMware at work. The DoD will shed VMware and Broadcom will feel it in their sales. I already see preparations being made to move to other, more supportive companies and systems.
Brad says
Thanks for the review William. As someone who works for a VCSP I still need my home lab. My current lab uses 10gb iSCSI for storage which is a no go for VCF management domains so I’ve been looking for upgrade options. This review helps.