Exploring the vSphere Flash Read Cache (vFRC) APIs Part 2

11.12.2013 by William Lam // Leave a Comment

Continuing from Part 1 of Exploring the vSphere Flash Read Cache (vFRC) APIs, we will now explore the vSphere APIs needed to set up and configure vFRC on your ESXi host. There are two workflows for creating your Virtual Flash Resource. The first is to simply add all valid SSDs, as you would in the vSphere Web Client, which automatically creates a VFFS (Virtual Flash File System) to manage the underlying SSD devices. The second is to start with a single SSD, manually create the VFFS volume, and then extend the VFFS by adding additional SSD devices. We will go over both workflows and the vSphere APIs required to perform these operations.

To automate the configuration of vFRC on your ESXi hosts, you will need access to both the vFlashManager and storageSystem managed objects, along with the following vSphere API methods (a quick sketch of retrieving both objects follows the list below):

  • QueryAvailableSsds
  • ConfigureVFlashResourceEx_Task
  • DestroyVffs
  • FormatVffs
  • HostConfigureVFlashResource
  • ExtendVffs
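
The sample script below uses the vSphere SDK for Perl, but if you just want to poke at these two managed objects interactively, here is a rough PowerCLI equivalent using Get-View. The host name is only an example and the ConfigManager property names are my assumptions, so double-check them against the vSphere 5.5 API reference:

# Retrieve the ESXi host along with its storageSystem and vFlashManager managed objects
$vmhost = Get-VMHost -Name "vesxi55-4.primp-industries.com"

# storageSystem exposes the VFFS operations (QueryAvailableSsds, FormatVffs, ExtendVffs, DestroyVffs)
$storageSystem = Get-View $vmhost.ExtensionData.ConfigManager.StorageSystem

# vFlashManager exposes the Virtual Flash Resource operations (ConfigureVFlashResourceEx_Task, HostConfigureVFlashResource)
$vFlashManager = Get-View $vmhost.ExtensionData.ConfigManager.VFlashManager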

To demonstrate the functionality of these vSphere APIs, I have created a vSphere SDK for Perl sample script called vflashHostMgmt.pl which supports the following operations: query, listssd, add, format, extend and destroy.

Workflow 1 - Add all valid SSD devices

To configure a Virtual Flash Resource for your ESXi host, you will need to use the vSphere Web Client and click on the "Add Capacity" button and select all valid SSD devices for that particular ESXi host as seen in the screenshot below.

To automate the same workflow, we first need to be able to identify the list of available SSD devices that could be used for either vFRC or even VSAN. There is a nice vSphere API method under the storageSystem called QueryAvailableSsds which has been implemented in the script as the "listssd" operation.

Here is an example execution of the "listssd" operation:

./vflashHostMgmt.pl --config .vcenter55-1 --vihost vesxi55-4.primp-industries.com --operation listssd

As you can see from the output, we have three available SSD devices, matching the vSphere Web Client output. To add these SSD devices and create your Virtual Flash Resource, you will need to use the "add" operation within the script, which accepts a comma-separated list of the SSD device paths as shown in the above output. The script then calls the vFlashManager's ConfigureVFlashResourceEx_Task method, which accepts an array of SSD device paths to automatically configure and add the Virtual Flash Resource.

Here is an example execution of the "add" operation:

./vflashHostMgmt.pl --config .vcenter55-1 --vihost vesxi55-4.primp-industries.com --operation add --disk /vmfs/devices/disks/naa.6000c297de55bcf0471f311abc865449,/vmfs/devices/disks/naa.6000c2992cfbf14a2d827303c48632fa,/vmfs/devices/disks/naa.6000c2989357b5d31eb20256e39f9338
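
To see what the script is doing under the covers, here is a rough PowerCLI sketch of the same two steps, reusing the $storageSystem and $vFlashManager views from the earlier snippet. The parameter details are based on the method descriptions above rather than tested code, so treat this as an outline:

# "listssd": enumerate the SSD devices that are eligible for vFRC (the vffsPath argument is optional, hence $null)
$ssds = $storageSystem.QueryAvailableSsds($null)
$ssds | Select-Object DisplayName, DevicePath

# "add": pass every eligible device path to ConfigureVFlashResourceEx_Task,
# which creates the VFFS and configures the Virtual Flash Resource in one operation
$devicePaths = @($ssds | ForEach-Object { $_.DevicePath })
$vFlashManager.ConfigureVFlashResourceEx_Task($devicePaths)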

We can confirm that our Virtual Flash Resource was successfully created by running the "query" operation.

Here is an example execution of the "query" operation:

./vflashHostMgmt.pl --config .vcenter55-1 --vihost vesxi55-4.primp-industries.com --operation query

From the output we can see a VFFS was automatically created for us, including its name and UUID, and that it contains the three SSD devices we added earlier. We can also confirm this by logging into the vSphere Web Client, where we should see the same information.

In preparation for the next workflow, we can easily destroy our VFFS, which is equivalent to selecting the "Remove All" button in the vSphere Web Client. To do so, we need to use the storageSystem's DestroyVffs method. In the script, this has been implemented as the "destroy" operation.

Here is an example execution of the "destroy" operation:

As you can see, workflow 1 is pretty straightforward if you have an ESXi host that contains all the SSD devices you wish to add to your Virtual Flash Resource. In workflow 2, we will take a look at starting with a single SSD and manually creating the VFFS, which can then be extended. If you already have an existing Virtual Flash Resource and would like to extend it, the set of APIs shown in workflow 2 will also aid in that use case.

Workflow 2 - Create VFFS using single SSD device / Extend VFFS

When going through workflow 1, the VFFS volume is automatically created for the user and is not something one would need to think about unless you would like to extend an existing VFFS. In this workflow, we start out by adding a single SSD device, which requires the creation of a VFFS volume, and then we extend that VFFS with additional SSD devices so we end up in the same end state as workflow 1.

To create a VFFS, you will need to use the FormatVffs API method, which accepts a single SSD device and a VFFS label, and then use the HostConfigureVFlashResource API method to mount the VFFS volume to the ESXi host. This has been implemented as the "format" operation, which is similar to the "add" operation but requires an additional --vffs parameter denoting the VFFS volume label.

Here is an example execution of the "format" operation:

./vflashHostMgmt.pl --config .vcenter55-1 --vihost vesxi55-4.primp-industries.com --operation format --vffs vghetto-vffs --disk /vmfs/devices/disks/naa.6000c297de55bcf0471f311abc865449

As part of the result, it will return the VFFS UUID, which is required when extending a VFFS. You can also get this information by using the "query" operation, which also shows the label we have assigned to our VFFS.

To add additional SSD devices to an existing VFFS, whether it was created using workflow 1 or workflow 2, you will need to use the ExtendVffs API method, which requires the VFFS UUID and the SSD device you wish to add. This has been implemented as the "extend" operation within the script.

Here is an example execution of the "extend" operation:

./vflashHostMgmt.pl --config .vcenter55-1 --vihost vesxi55-4.primp-industries.com --operation extend --vffs_uuid 527fc6e6-249cdb69-d502-005056adfa73 --disk /vmfs/devices/disks/naa.6000c2992cfbf14a2d827303c48632fa
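
At the API level, the "extend" operation comes down to a single ExtendVffs call on the storageSystem view from the earlier snippets. Here is a rough PowerCLI sketch; I am assuming the VFFS volume path can be derived from the UUID as /vmfs/volumes/<uuid> and that the optional partition spec can be left out, so verify the exact parameters against the API reference before relying on it:

# Extend an existing VFFS with an additional SSD device
$vffsUuid   = "527fc6e6-249cdb69-d502-005056adfa73"
$devicePath = "/vmfs/devices/disks/naa.6000c2992cfbf14a2d827303c48632fa"

# ExtendVffs takes the VFFS volume path plus the device path to add (partition spec omitted here)
$storageSystem.ExtendVffs("/vmfs/volumes/$vffsUuid", $devicePath, $null)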

We can confirm our changes by using the "query" operation as well as looking at our Virtual Flash Resource using the vSphere Web Client. We should see the two SSD devices that we have added to our VFFS.

 

In Part 3 of exploring the vSphere Flash Read Cache (vFRC) APIs, we will take a look at migrating a virtual machine which has vFRC configured and the options we have in terms of either migrating or dropping the vFRC cache.

Categories // Uncategorized Tags // ESXi 5.5, vffs, vflash, vFRC, virtual flash file system, vSphere 5.5, vSphere Flash Read Cache

ESXi 5.5 introduces a new Native Device Driver Architecture Part 2

11.07.2013 by William Lam // 4 Comments

Following up from Part 1, where I provided an overview of the new Native Device Driver architecture introduced in ESXi 5.5, we will now take a deeper look at how this new device driver model works in ESXi. A new concept of driver priority loading is introduced with the Native Device Driver model, and the diagram below shows the current order in which device drivers are loaded.

As you can see, OEM drivers have the highest priority and, by default, Native Drivers are loaded before "legacy" vmklinux drivers. On a clean installation of ESXi 5.5 you should see at least these two directories: /etc/vmware/default.map.d/ and /etc/vmware/driver.map.d/, which contain the driver map files pertaining to Native Device and "legacy" vmklinux drivers.

Here is a screenshot of the map files for both of these directories on an ESXi host:
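
If you would like to take a look for yourself, you can list the map files in both directories from an SSH session on the ESXi host:

ls /etc/vmware/default.map.d/
ls /etc/vmware/driver.map.d/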

The following inbox Native Drivers are included in a default installation of ESXi 5.5:

Device               Device Driver Name
Emulex 10GbE NIC     elxnet
Emulex FC            lpfc
LSI MegaRAID         lsi_mr3
LSI mptsas           lsi_msgpt3
Micron SSD           mtip32xx_native
QLogic FC            qlnativefc
SAS/SATA             rste
vmxnet3 & graphics   vmkernel

As I mentioned earlier, Native Drivers will always load before vmklinux drivers by default. However, if you need to perform some troubleshooting, one option is to disable the specific driver in question using ESXCLI, which is applicable to both Native Drivers and vmklinux drivers.

To do so, run the following ESXCLI command:

esxcli system module set --enabled=false --module=[DRIVER-NAME]
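
For example, to check on the native Emulex NIC driver from the table above and then re-enable it once you are done troubleshooting (elxnet is purely an illustration, substitute whichever driver you are working with):

esxcli system module list | grep elxnet
esxcli system module set --enabled=true --module=elxnet

Since the enabled flag controls whether the module is loaded at boot time, a driver that is already loaded will keep running until the host is rebooted.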

Categories // Uncategorized Tags // ESXi 5.5, native device driver, nddk, vmklinux, vSphere 5.5

Automate the migration from Virtual Standard Switch to vSphere Distributed Switch using PowerCLI 5.5

10.31.2013 by William Lam // 22 Comments

I have been spending quite a bit of time in the lab lately working with some of our "future" software and one of the fun tasks I get to do is perform frequent rebuilds of my lab environment. Depending on the issues I encounter, I may even need to rebuild it on a daily basis and of course I have the majority of this automated so it is not as painful as it would be if I had to go through this manually.

The output of this build is a complete working vSphere environment that consists of several ESXi hosts connected to a vCenter Server, with all the networking and storage configured. On the networking front, the ESXi hosts were all running on a regular Virtual Standard Switch (VSS) and I needed to migrate them over to a vSphere Distributed Switch (VDS). In this particular environment there is some Windows infrastructure, and as I thought about the different ways I could accomplish this, I remembered hearing about some new VDS cmdlets that came out in the PowerCLI 5.5 release.

Since I already had some scripts being kicked off on this Windows system, I thought I would give the new PowerCLI cmdlets a try for the VSS->VDS migration, as I had heard good things about them. I performed my prototyping on a vSphere 5.5 environment, but I believe you might even be able to use this on older releases of vSphere.

Here is a list of the new VDS cmdlets that I used for the script:

  • New-VDSwitch
  • Get-VDSwitch
  • New-VDPortgroup
  • Add-VDSwitchVMHost
  • Add-VDSwitchPhysicalNetworkAdapter

Here are the additional vSphere networking cmdlets that were required for the script:

  • Get-VMHostNetworkAdapter
  • Set-VMHostNetworkAdapter
  • Get-VirtualSwitch
  • Get-VirtualPortGroup
  • Remove-VirtualPortGroup

Even as a PowerCLI beginner, I was able to quickly knock out a script that performed the migration from VSS to VDS and was able to migrate ALL VMkernel interfaces and physical interfaces without any downtime. These new cmdlets definitely make it very easy for administrators to go from the old Virtual Standard Switch over to the vSphere Distributed Switch.

Here is an overview of what my environment looks like, which consists of three ESXi hosts, each with four physical NICs and three VMkernel interfaces.

The script below will create a brand new VDS and its associated Distributed Portgroups, attach a configurable list of ESXi hosts, and perform the migration of the VMkernel and physical interfaces. It does this by first moving two of the four physical NICs to the new VDS to ensure connectivity and then migrating all VMkernel interfaces. Once that is complete, it moves the remaining physical NICs and then deletes the Virtual Standard Switch portgroups.

Disclaimer: Please ensure you test this script in a development/test lab before using it in a production environment.

Connect-VIServer -Server vcenter55-1.primp-industries.com -User *protected email* -Pass vmware

# ESXi hosts to migrate from VSS->VDS
$vmhost_array = @("vesxi55-1.primp-industries.com", "vesxi55-2.primp-industries.com", "vesxi55-3.primp-industries.com")

# Create VDS
$vds_name = "VDS-01"
Write-Host "`nCreating new VDS" $vds_name
$vds = New-VDSwitch -Name $vds_name -Location (Get-Datacenter -Name "VSAN-Datacenter")

# Create DVPortgroup
Write-Host "Creating new Management DVPortgroup"
New-VDPortgroup -Name "Management Network" -Vds $vds | Out-Null
Write-Host "Creating new Storage DVPortgroup"
New-VDPortgroup -Name "Storage Network" -Vds $vds | Out-Null
Write-Host "Creating new vMotion DVPortgroup"
New-VDPortgroup -Name "vMotion Network" -Vds $vds | Out-Null
Write-Host "Creating new VM DVPortgroup`n"
New-VDPortgroup -Name "VM Network" -Vds $vds | Out-Null

foreach ($vmhost in $vmhost_array) {
    # Add ESXi host to VDS
    Write-Host "Adding" $vmhost "to" $vds_name
    $vds | Add-VDSwitchVMHost -VMHost $vmhost | Out-Null

    # Migrate pNIC to VDS (vmnic0/vmnic1)
    Write-Host "Adding vmnic0/vmnic1 to" $vds_name
    $vmhostNetworkAdapter = Get-VMHost $vmhost | Get-VMHostNetworkAdapter -Physical -Name vmnic0
    $vds | Add-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $vmhostNetworkAdapter -Confirm:$false
    $vmhostNetworkAdapter = Get-VMHost $vmhost | Get-VMHostNetworkAdapter -Physical -Name vmnic1
    $vds | Add-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $vmhostNetworkAdapter -Confirm:$false

    # Migrate VMkernel interfaces to VDS

    # Management #
    $mgmt_portgroup = "Management Network"
    Write-Host "Migrating" $mgmt_portgroup "to" $vds_name
    $dvportgroup = Get-VDPortgroup -Name $mgmt_portgroup -VDSwitch $vds
    $vmk = Get-VMHostNetworkAdapter -Name vmk0 -VMHost $vmhost
    Set-VMHostNetworkAdapter -PortGroup $dvportgroup -VirtualNic $vmk -Confirm:$false | Out-Null

    # Storage #
    $storage_portgroup = "Storage Network"
    Write-Host "Migrating" $storage_portgroup "to" $vds_name
    $dvportgroup = Get-VDPortgroup -Name $storage_portgroup -VDSwitch $vds
    $vmk = Get-VMHostNetworkAdapter -Name vmk1 -VMHost $vmhost
    Set-VMHostNetworkAdapter -PortGroup $dvportgroup -VirtualNic $vmk -Confirm:$false | Out-Null

    # vMotion #
    $vmotion_portgroup = "vMotion Network"
    Write-Host "Migrating" $vmotion_portgroup "to" $vds_name
    $dvportgroup = Get-VDPortgroup -Name $vmotion_portgroup -VDSwitch $vds
    $vmk = Get-VMHostNetworkAdapter -Name vmk2 -VMHost $vmhost
    Set-VMHostNetworkAdapter -PortGroup $dvportgroup -VirtualNic $vmk -Confirm:$false | Out-Null

    # Migrate remainder pNIC to VDS (vmnic2/vmnic3)
    Write-Host "Adding vmnic2/vmnic3 to" $vds_name
    $vmhostNetworkAdapter = Get-VMHost $vmhost | Get-VMHostNetworkAdapter -Physical -Name vmnic2
    $vds | Add-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $vmhostNetworkAdapter -Confirm:$false
    $vmhostNetworkAdapter = Get-VMHost $vmhost | Get-VMHostNetworkAdapter -Physical -Name vmnic3
    $vds | Add-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $vmhostNetworkAdapter -Confirm:$false

    # Remove old vSwitch portgroups
    $vswitch = Get-VirtualSwitch -VMHost $vmhost -Name vSwitch0

    Write-Host "Removing vSwitch portgroup" $mgmt_portgroup
    $mgmt_pg = Get-VirtualPortGroup -Name $mgmt_portgroup -VirtualSwitch $vswitch
    Remove-VirtualPortGroup -VirtualPortGroup $mgmt_pg -Confirm:$false

    Write-Host "Removing vSwitch portgroup" $vmotion_portgroup
    $vmotion_pg = Get-VirtualPortGroup -Name $vmotion_portgroup -VirtualSwitch $vswitch
    Remove-VirtualPortGroup -VirtualPortGroup $vmotion_pg -Confirm:$false

    Write-Host "Removing vSwitch portgroup" $storage_portgroup
    $storage_pg = Get-VirtualPortGroup -Name $storage_portgroup -VirtualSwitch $vswitch
    Remove-VirtualPortGroup -VirtualPortGroup $storage_pg -Confirm:$false
    Write-Host "`n"
}

Disconnect-VIServer -Server $global:DefaultVIServers -Force -Confirm:$false

Here is a screenshot of running through the script:

If we now take a look at our environment, we can see all three ESXi hosts have been migrated over to the VDS.

UPDATE (11/4/13) -  Thanks to one of the PowerCLI engineers, it looks like there is a PowerCLI cmdlet that can be used to migrate from VDS->VSS. I will be sharing that script in another blog post for those that may want to perform the reverse.

One caveat that I hit during the development of this script is needing the ability to easily migrate in both directions, VSS->VDS and VDS->VSS. I was hoping it was simply a matter of reversing the set of operations and moving the VMkernel interfaces back to the Virtual Standard Switch, but what I found is that the Set-VMHostNetworkAdapter cmdlet only accepts a Distributed Virtual Portgroup. This meant that I could only migrate to a VDS but not back to a VSS. Though this will probably fit the majority of customer use cases, for me this was a problem, and it means I will need to dig into the vSphere APIs to seamlessly perform a VDS->VSS migration. Given that PowerCLI is an abstraction on top of the vSphere APIs, we should be able to easily add this feature, and I will be filing an FR with Engineering to see if we can get it added, as I think it would be a useful feature to have.
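
For those who want to explore the API route in the meantime, the basic idea is to call UpdateVirtualNic on the host's networkSystem with a spec that points the VMkernel interface back at a standard portgroup. The PowerCLI sketch below is just that, an untested outline; it assumes the target portgroup already exists on vSwitch0, and the property names should be double-checked against the vSphere API reference:

# Point an existing VMkernel interface (vmk0) back at a Standard Switch portgroup.
# Assumes a "Management Network" portgroup already exists on vSwitch0.
$vmhost = Get-VMHost -Name "vesxi55-1.primp-industries.com"
$netSys = Get-View $vmhost.ExtensionData.ConfigManager.NetworkSystem

# Grab the current spec for vmk0 and retarget it at the standard portgroup
$vnic = $netSys.NetworkInfo.Vnic | Where-Object { $_.Device -eq "vmk0" }
$spec = $vnic.Spec
$spec.Portgroup = "Management Network"
$spec.DistributedVirtualPort = $null

# Push the change to the host via the vSphere API
$netSys.UpdateVirtualNic("vmk0", $spec)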

Categories // PowerCLI, Uncategorized Tags // distributed virtual switch, migration, PowerCLI, vds, vSphere 5.5, vss
