WilliamLam.com

ESXi 5.5 introduces a new Native Device Driver Architecture Part 2

11.07.2013 by William Lam // 4 Comments

Following up from Part 1, where I provided an overview of the new Native Device Driver architecture introduced in ESXi 5.5, we will now take a deeper look at how this new device driver model works in ESXi. The Native Device Driver model introduces a new concept of driver load priority, and the diagram below shows the current order in which device drivers are loaded.

As you can see, OEM drivers have the highest priority, and by default Native Drivers are loaded before "legacy" vmklinux drivers. On a clean installation of ESXi 5.5 you should see at least these two directories: /etc/vmware/default.map.d/ and /etc/vmware/driver.map.d/, which contain the driver map files for Native Device and "legacy" vmklinux drivers respectively.

Here is a screenshot of the map files for both of these directories on an ESXi host:
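To check the equivalent information on your own host, you can simply list both directories from the ESXi Shell; the exact map files you see will vary with the hardware and drivers installed:

# List the Native Driver and "legacy" vmklinux driver map files
ls /etc/vmware/default.map.d/
ls /etc/vmware/driver.map.d/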

The following inbox Native Drivers are included in the default installation of ESXi 5.5:

Device               Device Driver Name
Emulex 10GbE NIC     elxnet
Emulex FC            lpfc
LSI MegaRAID         lsi_mr3
LSI mptsas           lsi_msgpt3
Micron SSD           mtip32xx_native
QLogic FC            qlnativefc
SAS/SATA             rste
vmxnet3 & graphics   vmkernel

As I mentioned earlier, Native Drivers will by default always load before vmklinux drivers. If you need to perform some troubleshooting, one option is to disable the specific driver in question using ESXCLI; this applies to both Native Drivers and vmklinux drivers.

To do so, run the following ESXCLI command:

esxcli system module set --enabled=false --module=[DRIVER-NAME]
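For example, here is a minimal sketch of disabling and later re-enabling a module, using the lsi_mr3 native driver purely as an illustration; you can confirm the module's state with the module list command:

# Check the module's current state (Is Loaded / Is Enabled)
esxcli system module list | grep lsi_mr3

# Disable the module so it is not loaded going forward
esxcli system module set --enabled=false --module=lsi_mr3

# Re-enable it once troubleshooting is complete
esxcli system module set --enabled=true --module=lsi_mr3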

Categories // Uncategorized Tags // ESXi 5.5, native device driver, nddk, vmklinux, vSphere 5.5

How to run Nested ESXi on top of a VSAN datastore?

11.07.2013 by William Lam // 35 Comments

Today I found an interesting article on my Twitter timeline regarding some issues when trying to install and run Nested ESXi on top of a VSAN datastore. Shortly after the ESXi installation begins, the following error message is observed:

This program has encountered an error:

Error (see log for more info):
Could not format a vmfs volume.
Command '/usr/sbin/vmkfstools -C vmfs5 -b 1m -S datastore1
/vmfs/devices/disks/mpx.vmhba1:C0:T0:L0:3' exited with status 1048320

This of course piqued my interest given the topic; I would have expected this to just work and assumed it was a misconfiguration. I decided to try it out in my lab and, to my surprise, I encountered the exact same problem.

Here is a quick screenshot of the error message:

I pinged a couple of folks from the VSAN development team to see if this was a known issue and, if so, why it was occurring. After a couple of email exchanges, it turns out the problem is a SCSI-2 reservation being generated as part of creating a default VMFS datastore. Even though VMFS-5 no longer uses SCSI-2 reservations, the underlying LVM (Logical Volume Manager) driver for VMFS still requires them. Since VSAN does not make use of SCSI-2 reservations, it did not make sense to support them, hence the issue. Having said that, since Nested Virtualization is heavily used at VMware, the VSAN development team came up with a nifty solution, as they too hit this problem early on during the development of VSAN. Big thanks to Christian Dickmann (Tech Lead on the VSAN Engineering team) for providing this little tidbit.

Disclaimer: Nested Virtualization is not officially supported by VMware nor are the configuration changes described below, please use at your own risk.

To get around this problem, the VSAN team added an advanced ESXi setting that "fakes" SCSI Reservations; it needs to be configured on the physical ESXi hosts backing the physical VSAN datastore.

Run the following ESXCLI command (either locally in the ESXi Shell or remotely):

esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1
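If you would like to verify that the option has been set, or revert it once you are done testing, the following should do the trick; consider it a sketch rather than an officially documented procedure:

# Confirm the current value of the FakeSCSIReservations option
esxcli system settings advanced list -o /VSAN/FakeSCSIReservations

# Revert to the default behavior when finished testing (default value assumed to be 0)
esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 0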

A system reboot is not required and this change can be made live on the ESXi host prior to starting the ESXi installation. Once this is done, you will be able to proceed with installing Nested ESXi on top of a VSAN datastore. Here is a screenshot of my Nested ESXi VM running on top of the VSAN datastore 🙂

Hopefully this workaround will be useful for anyone running VSAN who would like to make full use of that storage by running Nested ESXi for development or testing.

Categories // Nested Virtualization, VSAN Tags // ESXi 5.5, nested, nested virtualization, scsi reservation, Virtual SAN, VSAN, vSphere 5.5

Automate the reverse, migrating from vSphere Distributed Switch to Virtual Standard Switch using PowerCLI 5.5

11.05.2013 by William Lam // 14 Comments

Last week I demonstrated how you can easily leverage the new PowerCLI 5.5 VDS cmdlets to migrate from a VSS (Virtual Standard Switch) to a VDS (vSphere Distributed Switch). During the development of that script, I needed a way to easily jump back and forth between VSS->VDS and VDS->VSS, and I wanted to automate this so I did not have to use the UI to reset my environment.

After playing around with a couple of the cmdlets I initially thought this was not possible, but thanks to Kamen, a PowerCLI engineer, I got the necessary information to create a reverse migration script going from VDS to VSS.

Here is what my lab environment looks like: three ESXi hosts connected to a VDS called "VDS-01", which is backed by four pNICs. The VDS contains three VMkernel interfaces on the following DVPortgroups: Management Network, Storage Network and vMotion Network.

On each ESXi host there is an existing VSS called "vSwitch0". If one has not been created, or if you decide to name it differently, you will need to modify the script. Here is a quick screenshot of what that looks like:

The PowerCLI example script below uses the Add-VirtualSwitchPhysicalNetworkAdapter cmdlet, which accepts a list of pNICs, VMkernel interfaces and the portgroups to migrate from VDS to VSS. The order in which the VMkernel interfaces and portgroups are specified is critically important, as they will be assigned based on the provided ordering. The script also creates the necessary portgroups on the VSS, which of course can be modified based on your environment. Once the migration has been completed, it then uses the Remove-VDSwitchVMHost cmdlet to remove the ESXi hosts from the VDS.

Disclaimer: Please ensure you test this script in a development/test lab before using it in a production environment.

Connect-VIServer -Server vcenter55-1.primp-industries.com -User *protected email* -Pass vmware

# ESXi hosts to migrate from VDS->VSS
$vmhost_array = @("vesxi55-1.primp-industries.com","vesxi55-2.primp-industries.com","vesxi55-3.primp-industries.com")

# VDS to migrate from
$vds_name = "VDS-01"
$vds = Get-VDSwitch -Name $vds_name

# VSS to migrate to
$vss_name = "vSwitch0"

# Name of portgroups to create on VSS
$mgmt_name = "Management Network"
$storage_name = "Storage Network"
$vmotion_name = "vMotion Network"

foreach ($vmhost in $vmhost_array) {
Write-Host "`nProcessing" $vmhost

# pNICs to migrate to VSS
Write-Host "Retrieving pNIC info for vmnic0,vmnic1,vmnic2,vmnic3"
$vmnic0 = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmnic0"
$vmnic1 = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmnic1"
$vmnic2 = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmnic2"
$vmnic3 = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmnic3"

# Array of pNICs to migrate to VSS
Write-Host "Creating pNIC array"
$pnic_array = @($vmnic0,$vmnic1,$vmnic2,$vmnic3)

# vSwitch to migrate to
$vss = Get-VMHost -Name $vmhost | Get-VirtualSwitch -Name $vss_name

# Create destination portgroups
Write-Host "`Creating" $mgmt_name "portrgroup on" $vss_name
$mgmt_pg = New-VirtualPortGroup -VirtualSwitch $vss -Name $mgmt_name

Write-Host "`Creating" $storage_name "Network portrgroup on" $vss_name
$storage_pg = New-VirtualPortGroup -VirtualSwitch $vss -Name $storage_name

Write-Host "`Creating" $vmotion_name "portrgroup on" $vss_name
$vmotion_pg = New-VirtualPortGroup -VirtualSwitch $vss -Name $vmotion_name

# Array of portgroups to map VMkernel interfaces (order matters!)
Write-Host "Creating portgroup array"
$pg_array = @($mgmt_pg,$storage_pg,$vmotion_pg)

# VMkernel interfaces to migrate to VSS
Write-Host "`Retrieving VMkernel interface details for vmk0,vmk1,vmk2"
$mgmt_vmk = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0"
$storage_vmk = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1"
$vmotion_vmk = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk2"

# Array of VMkernel interfaces to migrate to VSS (order matters!)
Write-Host "Creating VMkernel interface array"
$vmk_array = @($mgmt_vmk,$storage_vmk,$vmotion_vmk)

# Perform the migration
Write-Host "Migrating from" $vds_name "to" $vss_name"`n"
Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss -VMHostPhysicalNic $pnic_array -VMHostVirtualNic $vmk_array -VirtualNicPortgroup $pg_array  -Confirm:$false
}

Write-Host "`nRemoving" $vmhost_array "from" $vds_name
$vds | Remove-VDSwitchVMHost -VMHost $vmhost_array -Confirm:$false

Disconnect-VIServer -Server $global:DefaultVIServers -Force -Confirm:$false

Here is a screenshot of the script executing:
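If you want to spot-check the result afterwards, a quick verification with standard PowerCLI cmdlets might look something like the sketch below; it assumes the same $vmhost_array and "vSwitch0" names used in the script above:

# Confirm vSwitch0 exists and see which portgroups the VMkernel interfaces landed on
foreach ($vmhost in $vmhost_array) {
    Get-VMHost -Name $vmhost | Get-VirtualSwitch -Standard -Name "vSwitch0"
    Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel | Select-Object Name,PortGroupName,IP
}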

Categories // Automation, PowerCLI Tags // distributed virtual switch, migration, PowerCLI, vds, vSphere 5.5, vss
