WilliamLam.com


AHCI (vmw_ahci) performance issue resolved in ESXi 6.5 Update 1

07.27.2017 by William Lam // 44 Comments

Customers with SATA controllers that consumed the VMware Advanced Host Controller Interface (AHCI) driver found that after upgrading to ESXi 6.5, disk performance for those devices was significantly impacted. Basic operations such as cloning or uploading an OVF/OVA would take two to three times as long. In fact, I observed this same behavior when I upgraded my Intel NUC (not an officially supported platform) to ESXi 6.5. One thing I noticed at the time was that the hardware platforms of others reporting similar issues were also not on the VMware HCL, so I was not sure whether the problem was limited to home-lab environments.

In any case, I and others eventually stumbled onto this blog article by Sebastian Foss, who I believe may have been the first to identify a workaround: simply disable the new AHCI Native Driver, which loads by default, and force ESXi to fall back to the legacy AHCI driver, which made the issue go away after a reboot. Although the folks who had reported similar issues were all using hardware platforms that were not officially on the VMware HCL, I decided to file an internal bug anyway and hoped someone could take a look at what was going on.

With the release of ESXi 6.5 Update 1, I am happy to report that the observed performance issues with the Native AHCI driver have now been resolved! I have been running an earlier ESXi 6.5 Update 1 build for a couple of weeks now and have not seen any of the problems I had before. For those interested, the official fix is in version 1.0.0-37vmw or greater of the vmw_ahci driver.

You can easily verify this by running the following ESXCLI command to retrieve the version of your vmw_ahci driver:
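For example, you can query the module details and look at the Version field (exact output fields may vary by release):

esxcli system module get -m vmw_ahci

The Version field in the output should report 1.0.0-37vmw or later if your host has the fix.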


If you had disabled the Native AHCI driver, you will definitely want to re-enable it. You can check if it has been disabled by running the following ESXCLI command and checking the second column to see if it shows "false":

esxcli system module list | grep vmw_ahci

If the Native AHCI driver is disabled as shown in the previous command, then you can re-enable it by running the following ESXCLI command:

esxcli system module set --enabled=true --module=vmw_ahci

Once you have re-enabled the driver, you will need to reboot for the changes to go into effect.

Categories // ESXi, Home Lab Tags // AHCI, ESXi 6.5 Update 1, native device driver, vmw_ahci

ESXi 5.5 introduces a new Native Device Driver Architecture Part 2

11.07.2013 by William Lam // 4 Comments

Following up from Part 1 where I provided an overview of the new Native Device Driver architecture introduced in ESXi 5.5, we will now take a deeper look at how this new device driver model works in ESXi. A new concept of driver priority loading is introduced with the Native Device Driver model and the diagram below provides the current ordering of how device drivers are loaded.

As you can see, OEM drivers have the highest priority, and by default Native Drivers are loaded before "legacy" vmklinux drivers. On a clean installation of ESXi 5.5 you should see at least these two directories: /etc/vmware/default.map.d/ and /etc/vmware/driver.map.d/, which contain the driver map files pertaining to Native Device and "legacy" vmklinux drivers.

Here is a screenshot of the map files for both of these directories on an ESXi host:
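If you would like to take a look yourself, you can list the contents of both directories from the ESXi Shell (the same paths referenced above):

ls /etc/vmware/default.map.d/
ls /etc/vmware/driver.map.d/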

The following inbox Native Drivers are included in a default installation of ESXi 5.5 (see the commands after the table to check which driver your own adapters are using):

Device                 Device Driver Name
Emulex 10GbE NIC       elxnet
Emulex FC              lpfc
LSI MegaRAID           lsi_mr3
LSI mptsas             lsi_msgpt3
Micron SSD             mtip32xx_native
QLogic FC              qlnativefc
SAS/SATA               rste
vmxnet3 & graphics     vmkernel
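To check which driver each of your host's adapters is actually using, the Driver column in the output of these standard ESXCLI commands (not specific to this article) shows the driver name for network and storage adapters respectively:

esxcli network nic list
esxcli storage core adapter list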

As I mentioned earlier, Native Drivers by default will always load before vmklinux drivers. However, if you need to perform some troubleshooting, one option is to disable the specific driver in question using ESXCLI, which works for both Native Drivers and vmklinux drivers.

To do so, run the following ESXCLI command:

esxcli system module set --enabled=false --module=[DRIVER-NAME]
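For example, to disable the native AHCI driver so the host falls back to the legacy vmklinux driver, as in the workaround mentioned in the AHCI article above (a reboot is required for the change to take effect):

esxcli system module set --enabled=false --module=vmw_ahci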

Categories // Uncategorized Tags // ESXi 5.5, native device driver, nddk, vmklinux, vSphere 5.5

ESXi 5.5 introduces a new Native Device Driver Architecture Part 1

10.28.2013 by William Lam // 12 Comments

With a new release of vSphere, many of us are excited about all the new features that we can see and touch. However, what you may or may not notice are some of the new features and enhancements that VMware Engineering has made to the underlying vSphere platform to continue making it better, faster and stronger. One such improvement is the introduction of a new Native Device Driver architecture in ESXi 5.5. Though this feature is primarily targeted at our hardware ecosystem partners, I know some of you have asked about this and I thought it might be useful to share some of the details.

Note: If you are a hardware ecosystem partner and would like to learn more, please reach out to your VMware TAP account managers.

If we take a look back at the early days of ESX, VMware made a decision to use Linux-derived drivers to provide the widest variety of support for storage, network and other hardware devices for ESX. Since ESX, and specifically the VMkernel, is NOT Linux, to accomplish this we built a translation (shim) layer module called vmklinux which sits in between the VMkernel and the drivers. This vmklinux module is what enables ESX to function with the Linux-derived drivers and provides an API which can speak directly to the VMkernel.

Here is a quick diagram of what that looks like:

So why the change in architecture? The stability, reliability and performance of these device drivers are critical to ESX(i), second only to the VMkernel itself, and there are actually a variety of challenges with this architecture beyond the overhead introduced by the translation layer. The vmklinux module must be tied to a specific Linux kernel version, and the continued maintenance of vmklinux to provide backwards compatibility across both new and old drivers is quite challenging. From a functionality perspective, we are also limited by the capabilities of the Linux drivers, as they are not built specifically for the VMkernel and cannot support features such as hot-plug. To solve this problem, VMware developed a new Native Device Driver model interface that allows a driver to speak directly to the VMkernel, removing the need for the "legacy" vmklinux interface.

Here is a quick diagram of what that looks like:
What are some of the benefits of this new device driver model?
  • More efficient and flexible device driver model compared to vmklinux
  • Standardized information for debugging/troubleshooting
  • Improved performance as we no longer have a translation layer
  • Support for new capabilities such as PCIe hot-plug

This new architecture was developed with backwards compatibility in mind, as we all know it is not possible for our entire hardware ecosystem to port their current drivers in one release. To that end, ESXi 5.5 can run a hybrid of both "legacy" vmklinux drivers and the new Native Device Drivers. Going forward, VMware will be primarily investing in the Native Device Driver architecture and encourages new device drivers to be developed using the new architecture. VMware also provides an NDDK (Native Driver Development Kit) to our ecosystem partners, as well as a sample Native Device Driver which some of you may have seen in the release of vSphere 5.1 with a native vmxnet3 VMkernel module for nested ESXi.

Hopefully this has given you a good overview of the new Native Device Driver architecture. In part 2 of this article I will go into a bit more detail on where to find these drivers, which vendors support this new architecture today and how they are loaded.

Categories // Uncategorized Tags // ESXi 5.5, native device driver, nddk, vmklinux, vSphere 5.5


