I first wrote about the tiny palm-sized iKOOLCORE R1 back in the Spring of this year, and I was pretty impressed at how capable the R1 was, especially given its tiny footprint, which is only slightly taller than a Lego minifigure!
It has not even been a full year since the release of the R1, and the second generation of the R-Series is now available, unsurprisingly dubbed the R2.
Compute
The R2 is available in two variants using the latest Alder Lake-N mobile CPU configuration:
- Intel N95
- 3.4 GHz Turbo
- 4 Cores, 4 threads, 15W TDP
- 8GB or 16GB memory (non-upgradable)
- Intel i3-N300
- 3.8 GHz Turbo
- 8 Cores, 8 threads, 7W TDP
- 8GB or 16GB memory (non-upgradable)
As of publishing this blog post, the R2 is $20 USD off ($239, down from $259), but I am not sure how long this price will be available.
Network
The R2 also includes four built-in 2.5GbE network adaptors, but there is one minor difference compared to the original R1 model. Three of the four network adaptors continue to use the Intel 2.5GbE (i226-V) controller, which is fully recognized by ESXi. However, the big change with the R2 is that the fourth network adaptor (far upper right) uses a Realtek 2.5GbE (RTL8156BG) controller, which of course is not recognized by ESXi since there are no ESXi drivers from Realtek.
Furthermore, the Realtek 2.5GbE network adaptor is not actually wired up using PCIe but rather USB, which ended up being a really good thing because it can be recognized by ESXi once you install the popular USB Network Native Driver for ESXi Fling! The reason for this change was that iKOOLCORE wanted to add support for a WiFi module, which would require one of the remaining PCIe lanes, and hence the only option to maintain the fourth network adaptor was to wire it up via USB.
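If you want to try this yourself, below is a rough sketch of installing the Fling and confirming the Realtek NIC shows up from the ESXi shell; the component filename is just a placeholder for whichever Fling bundle matches your ESXi release:

# Filename is a placeholder; use the Fling bundle matching your ESXi version
esxcli software component apply -d /vmfs/volumes/datastore1/ESXi-VMKUSB-NIC-FLING.zip
reboot
# After the reboot, the Realtek USB NIC should appear as a vusbX uplink
esxcli network nic list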
Another way to consume the Realtek network adaptor is to pass it through to a VM, and I am happy to share that it works perfectly fine for both Linux and Windows VMs, as you can see below.
Here is a screenshot of an Ubuntu 22.04 VM using the Realtek 2.5GbE network adaptor via USB passthrough:
Here is a screenshot of a Windows 10 VM using the Realtek 2.5GbE network adaptor via USB passthrough:
Note: On Windows, the required driver can be obtained automatically via Windows Update, but for Linux distributions such as Ubuntu, you will need to manually download and compile the Realtek driver module, which can be found HERE.
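For reference, here is a minimal sketch of compiling the driver inside the Ubuntu VM; this assumes the RTL8156 is served by Realtek's r8152 USB driver source, and the tarball name is a placeholder that will vary by version:

# Build prerequisites for compiling the kernel module
sudo apt update && sudo apt install -y build-essential linux-headers-$(uname -r)
# Extract the driver source downloaded from Realtek (filename varies by version)
tar xf r8152-<version>.tar.bz2 && cd r8152-<version>
make
sudo make install
# Register and load the freshly built module
sudo depmod -a
sudo modprobe r8152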
Storage
With the extremely small form factor, the R2 is limited to a single M.2 NVMe (2242) storage device, which is a smaller version of the typical M.2 NVMe (2280). You have the option of adding a 128GB, 512GB or 2TB SSD from iKOOLCORE, which uses Union Memory, or you can acquire your own storage. My R2 was configured with the 512GB SSD option, and the nice thing is that the SSD is automatically recognized by ESXi and can be used for both the ESXi installation (including the ESX-OSData) and/or a VMFS volume for running your workloads.
Note: If you do purchase the 128GB SSD from iKOOLCORE, or use something smaller of your own, be sure to check out the ESXi section below on how to reduce the default size of the ESX-OSData volume, so that some of the storage capacity can also be used as a VMFS volume to run your workloads.
While storage is limited to just a single M.2, you can add more storage using the USB-A and USB-C ports along with an M.2 NVMe to USB enclosure, which can then be consumed by ESXi for additional VMFS and/or vSAN storage. For those with existing vSAN infrastructure, you could even configure vSAN HCI Mesh and have the R2 remotely mount the vSAN storage, or simply connect to remote network storage like NFS or iSCSI, as shown in the sketch below.
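As a quick illustration, mounting an NFS datastore only takes a single command from the ESXi shell; the NFS server and export below are made-up examples:

# Hypothetical NFS server and export; substitute your own
esxcli storage nfs add -H nfs01.example.com -s /export/r2-datastore -v r2-nfs
# Verify the new datastore is mounted
esxcli storage nfs list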
While the R2 also includes a micro-SD slot that can be used to boot and install ESXi, which I have verified myself, VMware recommends using more reliable media such as an SSD, especially for future-proofing, as outlined in VMware KB 85685.
Graphics
Intel integrated graphics (iGPU) are included in both R2 models, with the N95 providing 16 execution units and the N300 providing 32 execution units. If you have some basic graphics needs, such as running a Plex server, another popular workload amongst VMware homelabs, then you can pass the iGPU through to an Ubuntu Linux VM. In fact, the process to passthrough the R2 iGPU is exactly the same as on any of the recent Intel NUC 12 systems.
As you can see from the screenshot above, I have an Ubuntu 22.04 VM with the default virtual graphics disabled, connected via a remote session and utilizing the iGPU passthrough from the R2 running the latest ESXi 8.0 Update 2 release. Below are the high-level instructions for setting up iGPU passthrough to a VM.
Step 1 - Create and install an Ubuntu Server 22.04 VM (I recommend using 60GB of storage or more, as additional packages will need to be installed). Once the OS has been installed, go ahead and shut down the VM.
Step 2 - Enable passthrough of the iGPU under the ESXi Configure->Hardware->PCI Devices settings, then add a new PCI Device to the VM and select the iGPU. You can use either DirectPath IO or Dynamic DirectPath IO; it does not make a difference.
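If you prefer the command line over the UI, the same passthrough toggle can be sketched from the ESXi shell; the PCI address below is the typical 0000:00:02.0 for an Intel iGPU, so verify yours with the list command first:

# Confirm the iGPU PCI address (Intel iGPUs typically sit at 0000:00:02.0)
esxcli hardware pci pcipassthru list
# Enable passthrough and apply it immediately
esxcli hardware pci pcipassthru set -d 0000:00:02.0 -e true -a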
Step 3 - Optionally, if you wish to disable the default virtual graphics driver (svga), edit the VM and under VM Options->Advanced->Configuration Parameters change the following setting from true to false:
svga.present
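For those managing the host from the ESXi shell, here is a rough sketch of making the same change directly in the VMX file; the datastore path and VM name are examples, and the VM must be powered off:

# Example VM path; edit the VMX while the VM is powered off
sed -i 's/svga.present = "TRUE"/svga.present = "FALSE"/' /vmfs/volumes/datastore1/ubuntu-igpu/ubuntu-igpu.vmx
# Reload the configuration so ESXi picks up the change
vim-cmd vmsvc/getallvms          # note the Vmid of the VM
vim-cmd vmsvc/reload <Vmid>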
Step 4 - Power on the VM and then follow these instructions for installing the Intel Graphics Drivers for Ubuntu 22.04. Once completed, you will be able to successfully use the iGPU from within the Ubuntu VM.
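A quick way to sanity-check the passthrough from inside the Ubuntu VM once the drivers are installed; vainfo is just one of several tools that will do the job:

# The iGPU should show up as a DRI render node
ls /dev/dri
# Confirm the kernel bound a driver (typically i915) to the passed-through device
lspci -k | grep -A 3 -i vga
# Optional: query the VA-API stack to confirm hardware acceleration
sudo apt install -y vainfo && vainfo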
ESXi
No surprise, ESXi runs perfectly fine on the R2, and both the ESXi 7.x and 8.x releases are supported. As mentioned earlier, three of the four onboard Intel 2.5GbE network adaptors are automatically recognized when using ESXi 8.x, as the Community Networking Driver for ESXi Fling was productized in ESXi 8.0. However, if you need to install ESXi 7.x, then you will need to incorporate the Community Networking Driver for ESXi Fling into the ESXi installer image before it can detect the onboard network adaptors. The fourth Realtek 2.5GbE adaptor will require the USB Network Native Driver for ESXi Fling, as it is wired up as a USB NIC.
As mentioned earlier, if you purchase the 128GB SSD from iKOOLCORE, or if you use your own SSD that is smaller than 146GB, then you will want to reduce the size of the ESX-OSData volume during the installation, or you will not have any storage left for running VMs. The detailed instructions for reducing the ESX-OSData can be found in this blog post, and you can use either the systemMediaSize or autoPartitionOSDataSize kernel boot option to specify your desired size. Since my R1 has the stock 128GB SSD, I decided to configure my ESX-OSData to 4GB, and so I opted for the legacy autoPartitionOSDataSize parameter set to a value of 4096, as the "min" size for the other setting only reduces the ESX-OSData to 25GB.
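For reference, both options are supplied as kernel boot options at the ESXi installer: press Shift+O at the boot prompt and append one of the following (the value for autoPartitionOSDataSize is in MB):

# Caps ESX-OSData at the minimum supported size (25GB)
systemMediaSize=min
# Legacy option for finer control; value is in MB (4096 = 4GB)
autoPartitionOSDataSize=4096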
VMware Edge Cloud Orchestrator (VECO)
Last but not least, I was also able to successfully deploy the latest release of VMware Edge Cloud Orchestrator (VECO), formerly known as Project Keswick, on the R2. This was my first time using the new build and it went off without a hitch, well, minus the fact that I forgot to set up NTP. Even with the tiny form factor of the R2, you can still get your GitOps on!
durdin says
Hi William, thanks for the nice article. IMHO the new model looks much better than the R1, especially with the Intel i3-N300, as the TDP is only 7W with more cores on offer. It might be a nice replacement for a Raspberry Pi 4, which if I am not mistaken is around 5W, but with the additional benefit of being x86 (so "official" ESXi image support), many NICs (nice for an NSX edge lab), native storage (no fiddling with clumsy USB quirks) and more. The only downside might be the price, as even with the current Christmas discount it is still $379 (USD) for the 16GB variant without storage.
Angus says
Would placing the ESX-OSData partition on the SSD have an impact on the life of the SSD?
William Lam says
No, that’s the reason we recommend using an SSD over USB; you need reliable media such as an SSD.