Yes, you read that correctly. 512 gigabytes of memory on an Intel NUC. Not only is this pretty 🤯, but it is actually possible today with an already-released Intel NUC!
A few months back, I was made aware of some really cool technology from Intel called Intel Memory Drive Technology (IMDT), which leverages Intel Optane SSDs to extend the memory of a system beyond its physical memory (DRAM) capacity. This is made possible by the IMDT software, a purpose-built Hypervisor whose sole job is managing memory and which runs on top of the Intel Optane SSD. You can think of this as a Software-Defined Memory (SDM) solution. In fact, SDM was actually coined in this performance white paper evaluating IMDT with scientific applications back in 2018.
Note: This should not be confused with Intel Optane and its Datacenter Persistent Memory (PMEM) solution which vSphere already supports today.
The target use case for this type of technology is memory-intensive applications such as SAP HANA, Oracle, Redis, Memcached and Apache Spark, to name just a few. These workloads can easily gobble up tens of terabytes of memory, which brings a number of challenges when scaling up these solutions. High-capacity memory DIMMs are not only expensive, but once you exhaust the physical DIMM slots, your only remaining option is to scale out by adding servers, which is very costly.
Using IMDT, customers can expand their physical DRAM capacity by 8x to 15x, which can significantly improve cost and performance while also reducing the operational overhead of managing additional systems. Putting aside in-memory workloads, I think there is also huge potential for general purpose workloads to get the same benefits, especially when you think about constraints like power, cooling and location, such as Edge or ROBO sites. Since this solution works on an Intel NUC, a really interesting use case that immediately came to mind was a vSphere/NSX/vSAN homelab environment.
IMDT Native Mode
Today, IMDT is only supported with a Linux-based operating system (RHEL, CentOS, SLES & Ubuntu), which boots after the IMDT Hypervisor has started up. From the operating system's point of view, it simply sees the total amount of memory that has been pooled together from both DRAM and the Intel Optane SSDs, and this is all done transparently without any modification to the OS or applications.
For an operating system running directly on IMDT rather than a Type-1 Hypervisor, this is a great solution that can benefit workloads demanding a large amount of memory. For a Type-1 Hypervisor like ESXi, it is not ideal for a number of reasons. Although IMDT is not your typical Type-1 Hypervisor, it does mean that ESXi would have to be "Nested" inside of a VM, as it would be booted after IMDT. More importantly, the biggest challenge I see with such a solution is general management and day 2 operations: customers would have to change their processes and rely on 3rd party tooling to manage ESXi running in a VM, since only Linux is officially supported today. Not only is this unsupported by IMDT, but VMware also does not support Nested Virtualization of any kind, including running ESXi on top of ESXi, let alone on a 3rd party Hypervisor like IMDT.
IMDT Passthrough Mode
A new version of the IMDT software was released a couple of weeks back which enables a new mode that allows ESXi to take advantage of the IMDT technology. Instead of booting into the IMDT Hypervisor, ESXi boots first and then uses VMDirectPath I/O to pass through the Intel Optane SSD to a specific VM. From an ESXi point of view, it will only see the system memory and none of the extended memory. When the VM starts up, the IMDT software boots first and then the actual operating system loads, which will then see the extended memory. This is all done transparently and without requiring any modifications to the guest operating system.
This hybrid approach is a good compromise as it enables VMware customers to get the benefits of IMDT while maintaining their existing ESXi operating model. With that said, there are some limitations with this solution. With PCIe passthrough, it is a 1:1 mapping between a physical device and a VM. This means you will also be limited by the number of Intel Optane SSDs that can be installed in your system, which may not be a problem for non-Intel NUC platforms. The biggest constraint I see is that the extended memory from a single Intel Optane SSD can only be used by one VM; it cannot be shared. If you have a specific workload which can benefit from having at least 100GB (the smallest capacity Intel Optane), this is a good solution. If you wanted to "share" the extended memory across different VMs, that is currently not possible. In my opinion, this latter use case is where I see the biggest potential for this type of technology.
Even in passthrough mode, Linux is still the only supported guest operating system and today you are limited to running just RHEL 8.x, CentOS 8.x, SLES 15 and Amazon Linux 2. In addition, the guest operating system must be installed using legacy BIOS firmware; EFI is currently not supported. Lastly, because ESXi is unaware of the extended memory, unknown behaviors could occur if the system comes under memory pressure, such as when overcommitting memory.
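To make the passthrough workflow a bit more concrete, here is roughly what identifying the Optane SSD and flipping it to passthrough looks like from the ESXi Shell. Treat this as a sketch: the grep pattern and the 0000:01:00.0 address are placeholders for whatever your host actually reports, and the esxcli pcipassthru namespace assumes ESXi 7.0 or later (you can also simply toggle passthrough from the vSphere UI).

# List PCIe devices and look for the Optane NVMe controller
# (match on the controller name your host reports if "optane" does not appear)
esxcli hardware pci list | grep -i -B 6 -A 30 optane

# ESXi 7.0+ can also enable passthrough from the CLI; substitute the PCI
# address reported for your Optane SSD (placeholder address shown here)
esxcli hardware pci pcipassthru set -d 0000:01:00.0 -e true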
Hardware Requirements
Aside from other Intel Xeon-based platforms, IMDT is currently supported on both the Intel NUC 9 Pro (Xeon) and the NUC 9 Extreme (i9), which I wrote about earlier this year as Intel's first "modular" NUC. For the Intel Optane SSD requirements, both the Intel Optane SSD DC P4801X and the Intel Optane SSD 905P series are supported. If you are using a non-NUC platform, you can also choose from either the Intel Optane SSD DC P4800X or the 900P series.
For those not familiar with the Intel NUC 9, it supports up to 3 x M.2 slots (2 x 2280/2242/22110 and 1 x 2280/2242). In addition, it is also unique as it is the first NUC platform to have 2 x PCIe slots (x4 & x16), so you can really let your imagination go wild on a nice vSAN setup, 10GbE or GPU using the PCIe slots and still have room for IMDT!
Thanks to the awesome folks over at SimplyNUC and Intel, I was able to get my hands on a NUC 9 Pro kit that included an Intel Optane 905P (380GB) so that I could try out IMDT. At the time, the 480GB Intel Optane (which requires the add-in-card PCIe models) was not available, which meant that I would not be able to reach 512GB of expanded memory. The kit also included two standard Samsung Pro NVMe drives so that I could also set up vSAN 🙂
For those interested, SimplyNUC has put together several packages that include the Intel NUC 9, an Intel Optane SSD and the IMDT license as a base offering, which customers can then customize further. These packages include not only the hardware but also the software and support to go with the solution. For more details, please visit https://simplynuc.com/mini-data-center/
Installation
Step 1 - Install the IMDT software on an Intel Optane SSD. This can be done by simply booting a Live Ubuntu environment from a USB device (no need to install) and then downloading the latest IMDT installer shell script along with your IMDT license key, which is required for activation.
./imdt_installer-10.0.1595.2.sh in -n IMDT_Licenses-IMDTxxxxxxxx.txt
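For reference, a minimal sequence from the Live Ubuntu session might look like the following; the installer and license file names simply follow the example above, and lsblk is only there to sanity-check which NVMe device is the Optane SSD before running the installer.

# Confirm which NVMe device is the Intel Optane SSD
lsblk -d -o NAME,MODEL,SIZE

# Make the installer executable and run it with the license file
# (file names as per the example above; substitute your actual versions)
chmod +x imdt_installer-10.0.1595.2.sh
sudo ./imdt_installer-10.0.1595.2.sh in -n IMDT_Licenses-IMDTxxxxxxxx.txt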
Step 2 - Install the desired version of ESXi onto the Intel NUC and make sure you do not consume the Intel Optane SSD. In my setup, I deployed the latest ESXi 7.0 Update 1 release.
Step 3 - Create your Linux VM with the following minimum settings (a rough .vmx sketch of these settings follows the list below) and then install the guest operating system like you normally would:
- 2 vCPU or greater
- 10GB memory (minimum for IMDT to function)
- Memory reservation will be required for passthrough to function
- I/O MMU Enabled
- SCSI Controller must be LSI Logic Parallel or LSI Logic SAS (VMware Paravirtual is not recognized by IMDT)
- Firmware configured as BIOS (EFI not supported)
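For those who prefer to sanity-check the configuration outside of the UI, these are roughly the .vmx entries the list above maps to. This is only a sketch using the standard vSphere keys (sched.mem.min and sched.mem.pin for the full memory reservation, vvtd.enable for I/O MMU, lsisas1068 for LSI Logic SAS or lsilogic for LSI Logic Parallel), and I have not validated every key against IMDT's requirements, so treat the UI values as the source of truth.

numvcpus = "2"
memsize = "10240"
sched.mem.min = "10240"
sched.mem.pin = "TRUE"
vvtd.enable = "TRUE"
scsi0.virtualDev = "lsisas1068"
firmware = "bios"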
After the guest operating system has been installed, go ahead and shut down the VM.
Step 4 - Update the Linux VM with the following boot settings:
- Firmware configured to EFI (this is required to boot IMDT)
- Force EFI setup (to select the Intel Optane SSD as our boot device)
and then finally attach the Intel Optane SSD as a passthrough device and power up the VM.
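In .vmx terms, the Step 4 changes boil down to switching the firmware to EFI and forcing the VM into the EFI setup screen on the next power-on (the passthrough device itself is added through the UI and will show up as pciPassthru0.* entries). Again, this is just a sketch of the standard keys rather than anything IMDT-specific:

firmware = "efi"
bios.forceSetupOnce = "TRUE"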
Step 5 - The VM should boot into the EFI setup screen; simply select the EFI VMware Virtual NVMe Namespace (NSID 1) device and hit enter to boot. This only needs to be done once; after that, it will automatically boot from the IMDT device prior to booting the guest operating system.
Step 6 - The IMDT software should start booting up at this point and when you see the option to go into IMDT settings, hit F5. Here we need to change the boot mode from EFI to Legacy (BIOS) so that we can boot our guest operating system.
By default, IMDT will expand memory up to 8x, which you can see in the System Memory section. In our example, we have a VM configured with 10GB of physical memory, so the expanded memory will be 80GB as shown in the screenshot below. You can override this behavior by either decreasing or increasing the expanded memory: hit enter and then specify the amount. The maximum amount of expanded memory is based on both the Optane capacity and the configured memory of your VM, so you will need to adjust this setting if you make further changes. In our example, we will go ahead and change it to the maximum amount, which is ~164GB.
Hit ESC to continue the normal boot process. You may see a warning about not having enough CPU; this is a benign message which you can ignore, and I have been told it will be fixed in a future update of IMDT. Once the Linux operating system has booted up, you can log in to verify that it now has more memory than what it was physically allocated.
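A quick way to confirm the guest really sees the expanded memory is with standard Linux tooling; with the 10GB VM from the example above, you would expect roughly 80GB reported at the default 8x expansion (or ~164GB after overriding it as described).

# Inside the guest, verify the total memory visible to the OS
free -g
grep MemTotal /proc/meminfo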
Maximum Memory Expansion
By default, IMDT can expand up to 8x the memory assigned to a VM, and this can be increased up to 15x or more in some cases. This will of course depend on both the memory allocated to the VM and the available capacity of your Intel Optane SSD. If I had a larger-capacity Intel Optane, like the 480GB model, then I could have reached the 512GB limit. Another option is to add another 100GB Intel Optane, which would also allow me to reach the 512GB limit since a single VM can consume multiple Intel Optane SSDs with IMDT.
Here are a couple more experiments using the 380GB Intel Optane SSD.
In my setup, I had vSAN configured, which meant a portion of the physical memory in the Intel NUC would be used by vSAN itself. This meant the maximum amount of memory I could allocate to a single VM for IMDT was 40GB, which translated to a maximum of 314GB of extended memory.
By disabling vSAN and using standard VMFS, I was able to allocate 56GB of memory to a single VM for IMDT, which translated to 378GB of extended memory, the majority of the Intel Optane SSD's capacity.
Futures
I think there is a lot of potential for this type of technology, especially for general purpose workloads. This also includes the countless use cases when it comes to running a VMware homelab, whether that is actually for a "home" lab or for general development/testing. IMDT in passthrough mode for ESXi is a great start, but I think the list of supported guest operating systems needs to expand to other Linux distributions and also beyond just Linux for this to be more broadly applicable. This includes support for Microsoft Windows and potentially even support for Nested ESXi, which would really unlock the possibilities. I did attempt to run our VMware PhotonOS, Windows Server 2019 and Nested ESXi using IMDT, but they immediately crashed.
I also think the 1:1 mapping of an Intel Optane SSD to a single VM is a pretty big constraint, as I suspect most folks (including myself) will not need to expand the memory of just a single VM but rather a collection of VMs. Putting aside the initial target use case of memory-intensive applications, the only other use case that comes to mind for having a single large VM would be running Nested ESXi, where a single ESXi VM gets the full memory expansion and workloads are then deployed on top of that VM, which is a very common pattern in VMware homelabs.
This is just the first release of IMDT supporting ESXi, and in talking to Intel, they are definitely open to feedback from customers. Do you see a use case or applications that could benefit from using IMDT in your own environment, whether it is for a homelab or something else ... maybe Edge or ROBO? I am curious to hear the VMware community's thoughts and will be sure to share any feedback with the IMDT product team.
Jeff says
Very interesting and yes, CentOS is just perfect and no need for others, but a Windows version would be great to have. But was it only made for the NUC, or?
William Lam says
Not unique to the NUC; any Xeon-based system can be used with IMDT, but as mentioned, for the NUC 9 you can use either the i9 or Xeon models for IMDT.
Charles A. Windom Sr. says
Excellent Article. You could have reached out to me, I actually have a NUC9 Xeon here at home. I do need to price some Intel Optane SSDs though. Keep pushing the edge my friend !!
Zach says
I think a lot of the limitations could be solved for with NVMe namespace support...could you leverage your contacts at Intel to ask them about adding namespace support to the consumer Optane drives? I'll be aggressive and assert the only reason I can think of to justify Intel's lack of support for namespaces on consumer Optane disks is product segmentation--literally preserving the feature for higher-margin enterprise SKUs. That's a real shame and a conspicuous money grab.
Zach says
Also what's performance like with this setup? Can you run some memory benchmarks on a Linux guest? In particular I'm curious to see if the behavior is a steep cliff--performance tanks after the physical memory is exhausted--or if the IMDT hypervisor is doing some intelligent swapping between RAM and Optane such that the guest perceives a much more uniform quality of service...
Joe K says
Really interesting stuff, encouraging for homelab environments, as long as you're ready to pony up for the egregiously overpriced i9/Xeon NUCs, but I would love to see some expansion for this into the more pedestrian NUCs. It would be a great feature to play with for those of us that've already invested into their labs and don't have $2K more to throw at a whole new platform.
kurthv71 says
It would be great to see this feature for Nested ESXi hosts in a HomeLab environment.
But with the current restrictions I don't see many use cases in a vSphere environment.