WilliamLam.com


Search Results for: nested esxi

ESXi 5.5 introduces a new Native Device Driver Architecture Part 1

10.28.2013 by William Lam // 12 Comments

With a new release of vSphere, many of us are excited about all the new features that we can see and touch. However, what you may or may not notice are some of the new features and enhancements that VMware Engineering has made to the underlying vSphere platform to continue making it better, faster and stronger. One such improvement is the introduction of a new Native Device Driver architecture in ESXi 5.5. Though this feature is primarily targeted at our hardware ecosystem partners, I know some of you have asked about this and I thought it might be useful to share some of the details.

Note: If you are a hardware ecosystem partner and would like to learn more, please reach out to your VMware TAP account managers.

If we take a look back at the early days of ESX, VMware made a decision to use Linux-derived drivers to provide the widest variety of support for storage, network and other hardware devices. Since ESX, and specifically the VMkernel, is NOT Linux, we accomplished this by building a translation (shim) layer module called vmklinux which sits between the VMkernel and the drivers. This vmklinux module is what enables ESX to function with the Linux-derived drivers and provides an API which can speak directly to the VMkernel.

Here is a quick diagram of what that looks like:

So why the change in architecture? The stability, reliability and performance of these device drivers are critical to ESX(i), second only to the VMkernel itself, and there are a variety of challenges with this architecture beyond the overhead introduced by the translation layer. The vmklinux module must be tied to a specific Linux kernel version, and the continued maintenance of vmklinux to provide backwards compatibility across both new and old drivers is quite challenging. From a functionality perspective, we are also limited by the capabilities of the Linux drivers, as they are not built specifically for the VMkernel and cannot support features such as PCIe hot-plug. To solve this problem, VMware developed a new Native Device Driver model interface that allows a driver to speak directly to the VMkernel, removing the need for the "legacy" vmklinux interface.

Here is a quick diagram of what that looks like:

What are some of the benefits of this new device driver model?
  • More efficient and flexible device driver model compared to vmklinux
  • Standardized information for debugging/troubleshooting
  • Improved performance as we no longer have a translation layer
  • Support for new capabilities such as PCIe hot-plug

This new architecture was developed with backwards compatibility in mind, as we all know it is not possible for our entire hardware ecosystem to port their current drivers in one release. To that end, ESXi 5.5 can run a hybrid of both "legacy" vmklinux drivers as well as the new Native Device Drivers. Going forward, VMware will be primarily investing in the Native Device Driver architecture and encourages new device drivers to be developed using the new architecture. VMware also provides an NDDK (Native Driver Development Kit) to our ecosystem partners, as well as a sample Native Device Driver which some of you may have seen in the release of vSphere 5.1 with a native vmxnet3 VMkernel module for nested ESXi.
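If you are curious which driver modules are present on a host while both models coexist, you can list the VMkernel modules with ESXCLI and then inspect an individual one. This is just a quick sketch; the module name passed to the second command is a placeholder, since the actual names vary by hardware and vendor.

esxcli system module list
esxcli system module get -m <module-name>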

Hopefully this has given you a good overview of the new Native Device Driver architecture. In part 2 of this article, I will go into a bit more detail on where to find these drivers, which vendors support this new architecture today and how they are loaded.

Categories // Uncategorized Tags // ESXi 5.5, native device driver, nddk, vmklinux, vSphere 5.5

Quick Tip - Marking an HDD as SSD or SSD as HDD in ESXi

08.15.2013 by William Lam // 9 Comments

This was a neat little trick that I picked up in one of our internal storage email distribution groups and thought was quite interesting. Some of you may recall an article I wrote a few years back on how to trick ESXi 5 into seeing an SSD device, which relied on adding an SATP rule for a particular storage device. The actual use case for this feature was that not all real SSD devices were automatically detected by ESXi, and the rule allowed a user to manually mark a device as an SSD.

The other "non-official" use case for this feature allows a user to basically "simulate" an SSD by marking a regular HDD as an SSD and I this actually helped me test the new Host Cache (Swap-to-SSD) feature which was part of the vSphere 5 release. Recently there was a customer inquiry asking for the complete reverse, in which you could mark an SSD as an HDD. I am not sure what the use case was behind this request but I did learn it was actually possible using a similar method of adding a SATP rule to a device.

Note: If you are running Nested ESXi, a much simpler solution for simulating an SSD is to use the following trick noted here.
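If memory serves, that nested ESXi trick is a per-disk VM advanced setting rather than a SATP rule; the entry below is what I recall it looking like for the first virtual disk, but treat the exact parameter name as an assumption and verify against the linked article.

scsi0:0.virtualSSD = "1"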

Before you begin, you will need to identify the storage device that you wish to mark as an SSD or HDD. Use the following ESXCLI command to do so:

esxcli storage core device list

In the output of that command, we can see that our device mpx.vmhba1:C0:T2:L0 shows the "Is SSD" parameter as false. After running the two commands below, we should then see that property change to true.
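If you only care about a single device, you can narrow the output before and after making the change. The device identifier below is the one from this example; substitute your own.

esxcli storage core device list -d mpx.vmhba1:C0:T2:L0 | grep "Is SSD"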

Marking HDD as SSD:

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T2:L0 -o enable_ssd
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T2:L0

 

Marking SSD as HDD:

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T1:L0 -o disable_ssd
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0

To perform the opposite, you simply need to use the disable_ssd option instead. If you receive an error regarding a duplicate rule, you will need to first remove the existing SATP rule and then re-create it with the appropriate option, as shown in the sketch below.
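For example, if the device already has an enable_ssd rule and you now want to mark it as an HDD, the sequence would look roughly like this (using the example device from above; adjust the device identifier and option to match the rule you originally created):

esxcli storage nmp satp rule remove -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T2:L0 -o enable_ssd
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T2:L0 -o disable_ssd
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T2:L0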

Another useful tidbit: if you are running Nested Virtualization and the virtual disk of that VM is stored on an actual SSD, that virtual disk will automatically show up within the guest OS as an SSD, so no additional changes are required.

Categories // Automation, ESXi, VSAN Tags // enable_ssd, disable_ssd, esxcli, ESXi, hdd, ssd

Ravello: An interesting solution using Nested Virtualization

08.08.2013 by William Lam // 6 Comments

As many of you know, I am a huge fan of VMware Nested Virtualization, and I am always interested to learn how customers and partners are using this technology to help them solve interesting problems. I recently met up with a startup company called Ravello that has a product which leverages Nested Virtualization in a very unique way.

Note: Ravello is not the only company using Nested Virtualization in interesting ways. Bromium, another startup in the security space, is also doing interesting things with Nested Virtualization.

Ravello is a SaaS solution that allows you to take an existing VMware or KVM virtual machine and, without any modifications to that VM, run it on a variety of public cloud infrastructures including Amazon EC2, HP Cloud, Rackspace and even private clouds running on vCloud Director (support coming soon). Ravello is basically "normalizing" the VM by virtualizing it in their Cloud Application Hypervisor so that it can run on any cloud infrastructure. The unmodified VM is actually running inside of another VM which runs a flavor of Linux; this Linux VM loads up their HVX Hypervisor and runs on one of the public cloud infrastructures.

Similar to a regular hypervisor, HVX provides an abstraction, but instead of the underlying physical hardware it abstracts away the underlying cloud infrastructure. The HVX hypervisor provides the following three core capabilities:

  • Presents a set of virtual hardware that is compatible with VMware ESXi, KVM and XEN virtual machines
  • Virtual networking layer that is a secure L2 overlay on top of the cloud infrastructure L3 networking using a protocol similar to GRE but running over UDP
  • Cloud storage abstraction that provides storage to the VM through the Ravello Image Store, which can be backed by Amazon S3, CloudFiles or even block/NFS volumes

My first thought after hearing how Ravello works was that this is pretty neat! Of course, the next logical question that I am sure most of you are asking is: how is the performance? We know that running one level of Nested Virtualization will incur some performance penalty, and this compounds with each additional level of Nested Virtualization. Ravello is also not leveraging Hardware-Assisted Virtualization but rather Binary Translation (a technique developed by VMware), since hardware assistance cannot be guaranteed to be available on all cloud infrastructures. In addition to Binary Translation, they also use various techniques such as caching and chaining of translated code, a fast shadow MMU, direct execution of user space code and a few others to run efficiently in a nested environment.

I was told that performance is still pretty good, sometimes even outperforming regular cloud infrastructures. There was no mention of specific applications or performance numbers, so I guess this is something customers will need to validate in their own environments. I am also interested to see what the overhead of two levels of Nested Virtualization is and what impact that has on the guest OS and, more importantly, the applications. To be fair, Ravello's current target audience is Dev/Test workloads, so performance may not be the most critical factor. They also provide two deployment modes, cost-optimized or performance-optimized; if the latter is selected, resources will not be overcommitted or consolidated.

Overall, I thought Ravello's solution was pretty interesting and could benefit some customers looking to run their workloads in other public cloud infrastructures. Performance is just one of the things customers will need to consider; they will also need to think about how to manage and operate this new VM container and how tightly Ravello integrates with the VMware platform, or other hypervisors for that matter. Though the VM and the underlying applications do not need to change, what operational challenges does this introduce for administrators?

Ravello also presented their HVX Cloud Application Hypervisor at a recent USENIX conference, and you can find more details in their presentation, HVX: Virtualizing the Cloud, along with their research paper which can be found here.

One thing that I did want to point out after watching the presentation is that one of the presenters mentioned that their HVX nested hypervisor runs more efficiently than any other hypervisor out there, and that others would require features like Intel's VMCS Shadowing to be comparable. I cannot speak for other hypervisors, but when running VMware hypervisors on top of our ESXi hypervisor, our hypervisor has already been optimized to reduce the VM exits caused by VMREAD/VMWRITE, so Intel's VMCS Shadowing feature would only provide a slight additional benefit. You can read more about those techniques in this blog article.

Ravello will be at VMworld US booth #425 and I will probably drop by for a demo to see their solution in action.

Categories // Uncategorized Tags // binary translation, hypervisor, nested, nested virtualization, ravello, startup

