WilliamLam.com


Apple NVMe driver for ESXi using new Community NVMe Driver for ESXi Fling 

02.23.2021 by William Lam // 77 Comments

VMware has been making steady progress on enabling both the Apple 2018 Mac Mini 8,1 and the Apple 2019 Mac Pro 7,1 for our customers over the past couple of years. These enablement efforts have had their challenges, including the lack of direct hardware access for our developers and supporting teams due to the global pandemic, and the lack of participation from Apple has certainly not made this easier.

Today, I am happy to share that we have made some progress on enabling ESXi to see and consume the local Apple NVMe storage device found in the recent Apple T2-based Mac systems such as the 2018 Mac Mini and 2019 Mac Pro. There were a number of technical challenges the team had to overcome, especially since the Apple NVMe is not just a consumer-grade device but also does not follow the standard NVMe specification that you would normally see in typical NVMe devices.

This meant there was a lot of poking and prodding to reverse engineer the behavior of the Apple NVMe to better understand how the device works, which often led to sudden reboots or PSODs. With the Apple NVMe being a consumer device, it also meant there were a number of workarounds the team had to come up with to enable ESXi to consume the device. The implementation is not perfect; for example, we do not have native 4Kn support for SSD devices within ESXi, and we had to fake/emulate a non-SSD flag to work around some of the issues. From our limited testing, we have not observed any significant impact to workloads when utilizing this driver, and several internal VMware teams have already been using this driver for a couple of months now without reporting any issues.

A huge thanks goes out to Wenchao and Yibo from the VMkernel I/O team who developed the initial prototype which has now been incorporated into the new Community NVMe Driver for ESXi Fling.

UPDATE 2 (06/30/2023) - Thanks to reader Spotsygamer, who shared that v1.2 of the NVMe Fling also works with ESXi 8.x and vSAN ESA

UPDATE 1 (11/21/2021) - v1.2 of the NVMe Fling works with ESXi 7.x

Caveats

Before folks rush out to grab and install the driver, it is important to be aware of a couple of constraints that we have not been able to work around yet.

  1. ESXi versions newer than ESXi 6.7 Patch 03 (Build 16713306) are currently NOT supported and will cause ESXi to PSOD during boot up.
  2. The onboard Thunderbolt 3 ports do NOT function when using the Community NVMe driver and can cause ESXi to PSOD if activated.

Note: For detailed ESXi version and build numbers, please refer to VMware KB 2143832

VMware Engineering has not been able to pinpoint why the ESXi PSOD is happening. For now, this is a constraint to be aware of, which may impact anyone who requires the Thunderbolt 3 ports for additional networking or storage connectivity.
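
Given that first caveat, it is worth double-checking exactly which build a host is running before installing the driver. A minimal check from the ESXi Shell looks like this:

  # Print the ESXi version and build number of the current host
  # (anything newer than build 16713306 / 6.7 Patch 03 will PSOD with this driver)
  vmware -vl

  # The same information is also available via esxcli
  esxcli system version get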

With that out of the way, customers can either incorporate the Community NVMe Driver for ESXi offline bundle into a new ESXi Image Profile (using the vSphere Image Builder UI/CLI), export the image as an ISO and install that on either a Mac Mini or Mac Pro, or manually install the offline bundle after ESXi has been installed over USB. Upon reboot, the local Apple NVMe will then be visible for VMFS formatting.
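
For the second (manual) option, here is a minimal sketch of the install from the ESXi Shell; the datastore path and bundle filename below are placeholders, so substitute the actual offline bundle downloaded from the Fling page:

  # The Fling driver is community supported, so allow CommunitySupported VIBs
  esxcli software acceptance set --level=CommunitySupported

  # Install the offline bundle (path and filename are illustrative)
  esxcli software vib install -d /vmfs/volumes/datastore1/nvme-community-driver-offline-bundle.zip

  # Reboot so the driver can claim the Apple NVMe device
  reboot

  # After the reboot, the Apple NVMe should appear here, ready for VMFS formatting
  esxcli storage core device list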

Here is a screenshot of ESXi 6.7 Patch 03 installed on my 2018 Mac Mini with the Apple NVMe formatted with VMFS and running a macOS VM.

Categories // Apple, ESXi, vSphere 6.7, vSphere 7.0 Tags // apple, mac mini, mac pro, NVMe

New Community Networking Driver for ESXi Fling

02.17.2021 by William Lam // 29 Comments

I am super excited to announce the release of a new Community Networking Driver for ESXi Fling! The idea behind this project started about a year ago when we released an enhancement to the ne1000 driver as a community update which enabled ESXi to recognize the onboard network adapter for the Intel 10th Gen (Frost Canyon) NUC. Although the Intel NUC is not an officially supported VMware platform, it is extremely popular amongst the VMware Community. In working with the awesome Songtao, we were able to release this driver early last year for customers to take advantage of the latest Intel NUC release.

At the time, I knew that this would not be the last occurrence of dealing with driver compatibility. We definitely wanted an easier way to distribute the various community networking drivers, packaged into a single deliverable that customers can easily consume, and hence this project was born. In fact, it was quite timely, as I had just received engineering samples of the new Intel NUC 11 Pro and Performance (Panther Canyon and Tiger Canyon) at the end of 2020, and work needed to be done before we could enable the onboard 2.5GbE (multi-gigabit) network adapter, which is a default component of the new Intel Tiger Lake architecture. As reported back in early January, Songtao and colleague Shu were successful in getting ESXi to recognize the new 2.5GbE network adapter, and that work has also been incorporated into this new Fling. In addition, we also started to receive reports from customers that after upgrading to newer ESXi 7.0 releases, the onboard network adapters for the Intel 8th Gen NUC were no longer functioning. In an effort to help customers with this older platform, we have also updated the original community ne1000e driver to include the relevant PCI IDs within this Fling.
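
If you are not sure whether a host is affected, a quick way to see which driver module ESXi has bound to the onboard NIC is shown below (vmnic0 is just an example name):

  # List all physical NICs along with the driver that claimed each one
  esxcli network nic list

  # Show detailed driver information for a specific NIC
  esxcli network nic get -n vmnic0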


The new Community Networking Driver for ESXi is for PCIe-based network adapters and currently contains the following two driver modules:

  • igc-community - which adds support for Intel 11th Gen NUCs and any other hardware platform that uses the same 2.5GbE devices
  • e1000-community - which adds support for the Intel 8th Gen NUC and any other hardware platform that uses the same 1GbE devices

For a complete list of supported devices (VendorID/ProductID), please take a look at the Requirements tab on the Fling website. As with any Fling, this is being developed and supported in our spare time. In the future, we may consider adding other types of devices based on feedback from the broader community. I know Realtek-based PCIe NICs are something that many have been asking about, and as mentioned back in this blog post, I have been engaged with the Realtek team; hopefully in the near future we may see an ESXi driver that can support some of the more popular devices in the community. If there are other PCIe-based networking adapters that could fit the Fling model, feel free to leave a comment on the Fling website and we can evaluate as time permits.
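
To compare your own hardware against that list, you can pull the PCI Vendor ID and Device ID values straight from the host; a quick sketch from the ESXi Shell (the grep filter is only illustrative):

  # Dump all PCI devices on the host, including their Vendor ID / Device ID fields
  esxcli hardware pci list

  # Optionally narrow the output down to Ethernet devices
  esxcli hardware pci list | grep -B 2 -A 16 -i "ethernet"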

Categories // ESXi, Home Lab, vSphere 7.0 Tags // igc, Intel NUC, ne1000e

Intel NUC with 512GB memory

12.03.2020 by William Lam // 7 Comments

Yes, you read that correctly. 512 gigabytes of memory on an Intel NUC. Not only is this pretty 🤯 but this is actually possible today with an already released Intel NUC!

A few months back, I was made aware of some really cool technology from Intel called Intel Memory Drive Technology (IMDT), which leverages Intel Optane SSDs to extend the memory of a system beyond its physical memory (DRAM) capacity. This is made possible by the IMDT software, a purpose-built hypervisor whose sole job is to manage memory and which runs on top of the Intel Optane SSD. You can think of this as a Software-Defined Memory (SDM) solution. In fact, SDM was actually coined in this performance white paper evaluating IMDT with scientific-based applications back in 2018.

Note: This should not be confused with Intel Optane and its Datacenter Persistent Memory (PMEM) solution which vSphere already supports today.

The target use case for this type of technology is memory-intensive applications such as SAP HANA, Oracle, Redis, Memcache and Apache Spark, to name just a few. These workloads can easily gobble up tens of terabytes of memory, which brings a number of challenges when scaling up these solutions. High-capacity memory DIMMs are not only expensive, but once you exhaust the number of physical DIMM slots, your only option for scaling up is to add additional servers, which is very costly.

Using IMDT, customers can expand their physical DRAM capacity by 8x to 15x, which can significantly improve cost and performance while also reducing the operational overhead of managing additional systems. Putting aside the in-memory workloads, I think there is also huge potential for general-purpose workloads to get the exact same benefits, especially when you think about constraints like power, cooling and location, such as Edge or ROBO sites. Since this solution works on an Intel NUC, a really interesting use case for this technology that immediately came to mind was a vSphere/NSX/vSAN homelab environment.
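
As a rough sanity check on the headline number: assuming the NUC is populated with its 64GB maximum of DRAM, even the low end of that expansion range lands on the figure in the title, since 64GB x 8 = 512GB.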

[Read more...]

Categories // ESXi, Home Lab Tags // IMDT, Intel Memory Drive Technology, Intel NUC, Intel Optane, Quartz Canyon

