A really cool new capability introduced in vSphere 6.7 is support for the extremely fast memory technology known as non-volatile memory (NVM), also referred to as persistent memory (PMem). Customers can now benefit from the high data transfer rate of volatile memory combined with the persistence and resiliency of traditional storage. As of this blog post, both Dell and HP have Persistent Memory support and you can see the list of supported devices and systems here and here.
PMem can be consumed in one of two ways:
- Virtual PMem (vPMem) - In this mode, the GuestOS is actually PMem-aware and can consume the physical PMem device on the ESXi host as standard, byte-addressable memory. In addition to using an OS that supports PMem, you will also need to ensure that the VM is running the latest Virtual Hardware version 14.
- Virtual PMem Disks (vPMemDisk) - In this mode, the GuestOS is NOT PMem-aware and does not have access to the physical PMem device. Instead, a new virtual PMem hard disk can be created and attached to a VM. To ensure the PMem hard disk is placed on the PMem Datastore as part of this workflow, a new PMem VM Storage Policy will be applied automatically. There are no additional GuestOS or VM Virtual Hardware requirements for this scenario, which makes it great for legacy OSes that are not PMem-aware.
Customers who may want to familiarize themselves with these new PMem workflows, especially for Automation or educational purposes, could definitely benefit from the ability to simulate PMem in their vSphere environment prior to obtaining a physical PMem device. Fortunately, this is something you can actually do if you have some additional spare memory from your physical ESXi host.
Disclaimer: This is not officially supported by VMware. Unlike a real physical PMem device, where your data will be persisted upon a reboot, the simulated method will NOT persist your data. Please use this at your own risk and do not place important or critical VMs on a simulated PMem datastore.
In ESXi 6.7, there is an advanced boot option which enables you to simulate or "fake" PMem by consuming a percentage of your physical ESXi host's memory and allocating it to form a PMem datastore. You can append this boot option during the ESXi boot-up process (e.g. Shift+O) or you can easily manage it using ESXCLI, which is my preferred method.
Run the following command and replace the value with the desired percentage for PMem allocation:
esxcli system settings kernel set -s fakePmemPct -v 25
Note: To disable fake PMem, simply set the value to 0
You can also verify whether fake PMem is enabled and what its currently configured value is by running the following command:
esxcli system settings kernel list -o fakePmemPct
For the changes to go into effect, you will need to reboot your ESXi host.
Once the ESXi host has been rebooted, you can confirm the changes were applied by logging directly into the Embedded ESXi Host Client (https://[ESX-IP]/ui) of your ESXi host, where you should now see that a new PMem datastore has been automatically created, as shown in the screenshot below. You now have a PMem datastore that has been constructed using a portion of your physical ESXi host's memory. How cool is that!? In case it was not obvious, do not place important or critical VMs that you wish to persist upon a reboot on this datastore. This should only be used for educational or testing purposes; you have been WARNED AGAIN.
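If you would rather confirm this from a script than from the UI, here is a minimal pyVmomi sketch (not part of the original workflow) that connects directly to the ESXi host and prints its datastores; the simulated PMem datastore should show up with a summary type of PMEM. The hostname and credentials below are placeholders for your own environment.

# Minimal pyVmomi sketch to confirm the PMem datastore exists after the reboot.
# Hostname and credentials are placeholders, not values from this post.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="esxi-01.example.com", user="root", pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # On a standalone ESXi host, the inventory is datacenter -> compute resource -> host
    host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    for ds in host.datastore:
        # The simulated PMem datastore should report a summary type of "PMEM"
        print(ds.name, ds.summary.type, ds.summary.capacity)
finally:
    Disconnect(si)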
vPMem Workflow
For a new or existing vHW 14 VM, you should now be able to add a new NVDIMM device using either the vSphere Web or H5 Client.
As part of the vPMem workflow, an NVDIMM controller will automatically be added for you when using the UI. From here, you will be able to see the available amount of PMem storage for consumption, so you can allocate accordingly.
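If you want to automate this step instead of using the UI, the vSphere 6.7 API exposes VirtualNVDIMMController and VirtualNVDIMM device types. Below is a rough pyVmomi sketch, not taken from the original post: it assumes the VM is already at vHW 14, the host exposes a (real or fake) PMem resource, and the vm variable already points at a vim.VirtualMachine object; the 16GB size is just an example.

from pyVmomi import vim

def add_nvdimm(vm, size_mb=16384):
    """Add a virtual NVDIMM controller and device to a vHW 14 VM (sketch)."""
    dev_changes = []

    # Add an NVDIMM controller unless the VM already has one
    controller = next((d for d in vm.config.hardware.device
                       if isinstance(d, vim.vm.device.VirtualNVDIMMController)), None)
    if controller is None:
        ctrl_spec = vim.vm.device.VirtualDeviceSpec()
        ctrl_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        ctrl_spec.device = vim.vm.device.VirtualNVDIMMController()
        ctrl_spec.device.key = -100  # temporary negative key for the new controller
        dev_changes.append(ctrl_spec)
        controller_key = -100
    else:
        controller_key = controller.key

    # Add the NVDIMM device itself, backed by the host's PMem resource
    nvdimm_spec = vim.vm.device.VirtualDeviceSpec()
    nvdimm_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    nvdimm = vim.vm.device.VirtualNVDIMM()
    nvdimm.controllerKey = controller_key
    nvdimm.capacityInMB = size_mb
    nvdimm.backing = vim.vm.device.VirtualNVDIMM.BackingInfo()  # NVDIMM backing per the vSphere API
    nvdimm_spec.device = nvdimm
    dev_changes.append(nvdimm_spec)

    spec = vim.vm.ConfigSpec(deviceChange=dev_changes)
    return vm.ReconfigVM_Task(spec)

This is only a sketch of the device-spec pattern; you would still want to check the host's available PMem capacity before choosing a size.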
At this point, the rest of the configuration takes place within the GuestOS, and the exact steps to consume a PMem device will vary by OS. One easy way to verify this workflow is working is by running Nested ESXi as the GuestOS. It turns out ESXi itself can actually consume a Virtual PMem device, and after the ESXi VM boots up, you can log in to its Embedded Host Client UI and you should see the PMem datastore just like you did for your physical ESXi host 🙂 Hats off to our Engineers for enabling this path, especially for learning/testing purposes.
vPMemDisk Workflow
Create a new VM and you should see an option during the Storage selection to specify either a Standard or PMem datastore. The latter option will store all VMDKs on the PMem Datastore, and you just need to select a regular vSphere Datastore for the VM home as shown in the screenshot below.
When specifying the capacity of your VMDK, you will also see the available amount of PMem storage that you can allocate from along with the default PMem VM Storage Policy to ensure correct placement of the VMDK.
Note: You can also consume vPMemDisk with an existing VM by attaching a newly created PMem hard disk.
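If you want to script that attach for an existing VM, the key is to tag the new disk's device spec with the PMem VM Storage Policy so placement lands on the PMem datastore. Below is a rough pyVmomi sketch, not from the original post: the SPBM lookup of the policy is omitted, and pmem_profile_id is a placeholder ID you would need to retrieve for your own environment.

from pyVmomi import vim

def add_pmem_disk(vm, size_gb, pmem_profile_id):
    """Attach a new virtual disk placed on the PMem datastore (sketch)."""
    # Find an existing SCSI controller to attach the disk to
    controller = next(d for d in vm.config.hardware.device
                      if isinstance(d, vim.vm.device.VirtualSCSIController))

    disk = vim.vm.device.VirtualDisk()
    disk.controllerKey = controller.key
    disk.unitNumber = len(controller.device)  # naive free-slot pick; adjust as needed
    disk.capacityInKB = size_gb * 1024 * 1024
    disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    disk.backing.diskMode = "persistent"

    disk_spec = vim.vm.device.VirtualDeviceSpec()
    disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    disk_spec.device = disk
    # Applying the PMem storage policy is what drives placement onto the PMem datastore
    disk_spec.profile = [vim.vm.DefinedProfileSpec(profileId=pmem_profile_id)]

    spec = vim.vm.ConfigSpec(deviceChange=[disk_spec])
    return vm.ReconfigVM_Task(spec)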
Although the above is purely for education and learning purposes, I am curious to see if folks might consider using vPMemDisk in their vSphere Home Lab environments as a way to easily accelerate certain workloads, perhaps for demos, especially if they have additional memory to spare from their physical ESXi hosts?
Amol says
Hi William,
Thank you for this information.
I have a vSphere 6.5 setup where I have created a vSphere 6.7 ESXi VM. I ran the CLI commands as you explained on ESXi 6.7 and I can see the output:
Name Type Configured Runtime Default Description
----------- ----- ---------- ------- ------- ------------------------------
fakePmemPct uint8 40 40 0 Amount of fake persistent
memory (in pct of all volatile
memory)
However, when I log in to the ESXi UI I don't see the PMem datastore under Storage -> Datastores.
Can you please help me with it?
Thanks
Amol says
Note: I followed all the steps (rebooted the ESXi). I forgot to mention it in my comment.
Thanks
Richard A Williamson says
Thanks to your guide I got this to work. I was prepared to automate mounting, formatting, and copying data to the disk to use it, but the datastore UID changes with every boot. Is there any way to get around that, or to automate changing the name to a constant?
Amy Reed says
I am failing to find any API examples to automate adding a real NVDIMM to a VM. It's easy to do through the UI, but if you have any insight... feel free to pass along!
R Walker says
I'm interested in experiences, if anyone would like to share. I like the idea of ramdisks for legacy OSes, provided I have enough RAM for my VMs and containers. I currently have 128GB RAM but it could easily go to 256GB with very little argument. It would be fun to see Windows 10 and macOS (OS X) running in RAM. I imagine Proxmox VE is doing this, as Debian has had ramdisks for decades. I've been out of the loop for a while, so I need to research Persistent Memory. As already noted, this could be a great way to run a homelab with nested ESXi. I second the hats off to the engineers who have implemented this. Now to try it out.
Lam says
I cannot take a snapshot of a VM when vPMem is enabled. Why?