Automatically Remediating SvMotion / VDS Issue Using vCenter Alarms

04.20.2012 by William Lam // 8 Comments

UPDATE 07/13/2012 - vSphere 5.0 Update 1a has just been released and resolves this issue. Please take a look here for more details on the patch, as this script is no longer required.

In my previous article Identifying & Fixing Virtual Machines Affected By SvMotion / VDS Issue, I provided a script for users to easily identify the impacted VMs as well as a way to remediate them. However, the issue is only temporarily fixed, as any remediated VM can be re-impacted if it is Storage vMotioned again (manually, or automatically by Storage DRS). This meant that users would need to re-run these scripts every so often to ensure their environment is not affected by this problem.

I decided to look into a more automated and hands-off approach in which a Storage vMotion of a VM automatically triggers the execution of the remediation script. I was able to accomplish this by leveraging vCenter Alarms and running a script on the vCenter Server (here's a cool thing I did with alarms a while back).

Disclaimer: This script is not officially supported by VMware, please test this in a development environment before using on production systems.

You can create the alarm at any level of the inventory hierarchy, but I would recommend placing it at least at the datacenter or cluster level. The alarm type will be for a VirtualMachine and we will use "monitor for specific events". For the trigger, we will need to use "VM migrated" and set the status to "Unset", which will not create an alarm icon when it is triggered.

You might wonder why we selected "VM migrated" versus "VM relocated": a Storage vMotion starts out just like a vMotion, so whether you manually perform a vMotion or a Storage vMotion, only this event type will be triggered. Because this single event is triggered by two completely different operations, it has an interesting implication which we will discuss in a bit.

Next we need to create an action for this alarm, which will be running a command. You will need to specify the full path to perl.exe (assuming you're using my script, which is based on the vSphere SDK for Perl; you will need to have the vCLI installed on the vCenter Server) as well as the path to the alarm script, which in this example is called alarm.pl. Also ensure you set the green->yellow action to execute once.
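
For example, the alarm action command might look something like the following. The perl.exe path shown is the default vCLI install location on a 64-bit Windows system and C:\alarm.pl is simply where the script lives in this example, so adjust both paths to match your environment:

"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" C:\alarm.pl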

You will need to create the alarm.pl script on your vCenter Server and here is what it looks like:

#!/usr/bin/perl -w
# William Lam
# http://www.virtuallyghetto.com/

use strict;
use warnings;

my $scriptlocation = "C:\\querySvMotionVDSIssue.pl";
my $server = "localhost";
my $username = "VC-USERNAME";
my $password = "VC-PASSWORD";
my $debug = 0;

###########################
# DO NOT MODIFY PAST HERE #
###########################

my $start1 = "from";
my $start2 = "to";
my $end = ",";

# extract VMware env variables from alarm
my $eventstring = $ENV{'VMWARE_ALARM_EVENTDESCRIPTION'};
my $vmname = $ENV{'VMWARE_ALARM_EVENT_VM'};

my @sourcehost = $eventstring =~ /$start1 (.*?)$end/;
my @destinationhost = $eventstring =~ /$start2 (.*?)$end/;


# Output environmental variables to see what's up
if($debug) {
 open(FILE,">C:\\output.txt");
 foreach my $key (keys %ENV) {
  print FILE $key . "=" . $ENV{$key} . "\n";
 }
 close(FILE);
}

# if the source/destination host is the same, means we had a Storage vMotion instead of vMotion
# and we execute the remediation script on the VM
if($sourcehost[0] eq $destinationhost[0]) {
 `"$scriptlocation --server $server --username $username --password $password --vmname $vmname --fix true"`;
}

You will need to fill in the script location (in this example I have all scripts stored in C:\) and you will also need to populate the credentials which will be used to execute the script.

Earlier we mentioned that both a Storage vMotion and a vMotion trigger the same event, so we need a way to identify when a Storage vMotion actually happened before running the script. The alarm.pl script above is executed when the alarm is triggered and, using the VMware-specific environment variables populated by the vCenter Alarm, it parses the event description to figure out whether it was a vMotion or a Storage vMotion (for a Storage vMotion, the source and destination host in the description are the same). Once we confirm it is a Storage vMotion, we then execute the remediation script from my previous article.

Note: Ensure you download the latest version of the querySvMotionVDSIssue.pl script from the previous article, as it has been updated to handle single-VM remediation targeted at this use case.

Now, to verify that our alarm is functioning as expected, we can perform a manual Storage vMotion of a VM. We should see our alarm.pl execute, and after the Storage vMotion has completed, we should see some VM reconfiguration tasks which come from our remediation script.

So there you have it: you no longer have to worry about running the script every so often to ensure your VMs are not being impacted by the SvMotion / VDS problem. Again, I would like to stress that though we are able to automate this remediation, this is not a real solution, and VMware is actively working on a fix for this problem.

If you have any questions, feel free to leave a comment.

Categories // Uncategorized Tags // alarm, distributed virtual switch, dvportgroup, dvs, storage drs, svmotion, vds

Disable LUN During ESXi Installation

04.17.2012 by William Lam // 14 Comments

Many of us who worked with classic ESX back in the day can recall that one of the scariest things during an install/re-install or upgrade of an ESX host with SAN-attached storage was the potential risk of accidentally installing ESX onto one of the LUNs that housed our virtual machines. As a precaution, most vSphere administrators would ask their storage administrators to either disable/unplug the ports on the switch or temporarily mask away the LUNs at the array during an install or upgrade.

Another trick that gained popularity due to its simplicity was unloading the HBA drivers before the installation of ESX began, usually as part of the %pre section of a kickstart installation. This ensured that your SAN LUNs would not be visible during the installation, and it was much faster than involving your storage administrators. With the release of ESXi, this trick no longer works. There have been several enhancements in the ESXi kickstart that allow you to specify certain types of disks during installation; however, it is still possible that you could see your SAN LUNs during the installation.

I know the question about disabling the HBA drivers for ESXi comes up pretty frequently, and I had just assumed it was not possible. A recent question on the same topic on our internal Socialcast site got me thinking, and with some research and testing I found a way to do this by leveraging LUN masking at the ESXi host level using ESXCLI. My initial thought was to mask based on the HBA adapter (C:*T:*L:*), but this would still be somewhat manual depending on your various host configurations.

The above solution was not ideal, but with help from some of our VMware GSS engineers (Paudie/Daniel), I learned that you could create claim rules based on a variety of criteria, one of which is the transport type. This meant that I could create a claim rule to mask all LUNs that had one of the following supported transport types: block, fc, iscsi, iscsivendor, ide, sas, sata, usb, parallel or unknown.

Here are the commands to run if you wish to create a claim rule to mask away all LUNs that are FC-based:

esxcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t plugin -P NMP
esxcli storage core claimrule run

Another option mentioned by Paudie was that you could also mask based on a particular driver, such as the Emulex driver (lpfc680). To see which driver a particular adapter is using, you can run the following ESXCLI command:

esxcli storage core adapter list

Here is a screenshot of a sample output:

For more details about creating claim rules, be sure to use the --help option or take a look at the ESXCLI documentation starting on pg 88 here.
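
If you would rather mask based on a driver instead of the transport type, a similar set of commands should work. Here the rule ID 2013 and the lpfc680 driver name are just placeholders, so substitute the driver reported by the adapter list output above:

esxcli storage core claimrule add -r 2013 -P MASK_PATH -t driver -D lpfc680
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t plugin -P NMP
esxcli storage core claimrule run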

Now this is great, but how do we go about automating this a bit further, since the claim rules would still need to be created by a user before starting an ESXi installation and removed again after the installation? I started doing some testing with a customized ESXi 5 ISO that would "auto-magically" create the proper claim rules and remove them afterwards, and with some trial and error I was able to get it working.

The process is exactly the same as laid out in an earlier article How to Create Bootable ESXi 5 ISO & Specifying Kernel Boot Option, but instead of tweaking the kernelopt in the boot.cfg, we will just be appending a custom mask.tgz file that contains our "auto-magic" claim rule script. Here is what the script looks like:

#!/bin/ash

# mask all FC LUNs before the ESXi installer scans for disks
localcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
localcli storage core claimrule load
localcli storage core claiming unclaim -t plugin -P NMP
localcli storage core claimrule run

# append removal of the claim rule to /etc/rc.local so it is cleaned up automatically
cat >> /etc/rc.local << __CLEANUP_MASKING__
localcli storage core claimrule remove -r 2012
__CLEANUP_MASKING__

# create an init.d script that removes the rc.local entry (and itself) on first boot
cat > /etc/init.d/maskcleanup << __CLEANUP_MASKING__
sed -i 's/localcli.*//g' /etc/rc.local
rm -f /etc/init.d/maskcleanup
__CLEANUP_MASKING__

chmod +x /etc/init.d/maskcleanup

The script above creates a claim rule to mask all FC LUNs before the installation of ESXi starts, which ensures that the FC LUNs will not be visible during the installation. It also appends a claim rule removal to /etc/rc.local, which actually executes before the installation is complete but does not take effect since the rule is not reloaded. This ensures the claim rule is automatically removed before rebooting, and we also create a simple init.d script to clean up this entry upon first boot. All said and done, you will not be able to see your FC LUNs during the installation, but they will show up after the first reboot.

Disclaimer: Please ensure you do proper testing in a lab environment before using in Production.

To create the custom mask.tgz file, follow the steps below (a consolidated command sketch follows the list), then take the resulting mask.tgz file and follow the article above to create a bootable ESXi 5 ISO.

  1. Create the following directory: mkdir -p test/etc/rc.local.d
  2. Change into the "test/etc/rc.local.d" directory and create a script called mask.sh and copy the above lines into the script
  3. Set the execute permission on the script: chmod +x mask.sh
  4. Change back into the root of the "test" directory and run the following command: tar cvf mask.tgz *
  5. Update the boot.cfg as noted in the article and append mask.tgz to the module list.
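
If you are building the mask.tgz from a Linux system or the vMA appliance, the steps above roughly boil down to the following commands (the "test" directory name and the editor are just examples):

mkdir -p test/etc/rc.local.d
cd test/etc/rc.local.d
vi mask.sh          # create mask.sh and paste in the claim rule script above
chmod +x mask.sh
cd ../..            # back to the root of the "test" directory
tar cvf mask.tgz *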

Once you create your customized ESXi 5 ISO, you can just boot it up and either perform a clean installation or an upgrade without having to worry about SAN LUNs being seen by the installer. Though these steps are specific to ESXi 5, they should also work with ESXi 4.x (ESXCLI syntax may need to be changed), but please do verify before using in a production environment.

You can easily leverage this in a kickstart deployment by adding the claim rule creation in the %pre section and the claim rule removal in %post, so that upon first boot everything is ready to go. Take a look at this article for more kickstart tips/tricks in ESXi 5.
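
As a rough sketch (not a complete ks.cfg), the relevant kickstart sections might look like this, reusing rule ID 2012 from the earlier example:

%pre --interpreter=busybox
localcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
localcli storage core claimrule load
localcli storage core claiming unclaim -t plugin -P NMP
localcli storage core claimrule run

%post --interpreter=busybox
localcli storage core claimrule remove -r 2012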

Categories // Automation, ESXi Tags // ESXi 4.1, ESXi 5.0, kickstart, ks.cfg, LUN

Identifying & Fixing Virtual Machines Affected By SvMotion / VDS Issue

04.16.2012 by William Lam // 24 Comments

UPDATE 07/13/2012 - vSphere 5.0 Update 1a has just been released and resolves this issue. Please take a look here for more details on the patch, as this script is no longer required.

Duncan Epping recently wrote an article about Clarifying the SvMotion / VDS problem in which he describes the scenario that would impact your VMs as well as a way to remediate those impacted VMs. I would recommend you go through Duncan's article before moving any further.

The challenge now is how to easily identify all VMs that are currently impacted by this problem in your environment. The answer is of course automation and leveraging the vSphere API! I created the following vSphere SDK for Perl script called querySvMotionVDSIssue.pl, which searches for all VMs that are connected to a VDS and checks whether or not the expected dvPortgroup file exists in the appropriate datastore. To use the script, you just need a system with the vCLI installed, or you can just use the vMA appliance.
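
For example, a query run from the vMA appliance might look something like the following, using the standard vSphere SDK for Perl connection options (the vCenter hostname and username are placeholders, and you will be prompted for the password if it is not supplied):

./querySvMotionVDSIssue.pl --server vcenter.yourdomain.com --username administrator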

UPDATE: The script has now been updated to support remediation for VMs connected to both a VMware VDS and a Cisco N1KV. The solution, thanks to one of our internal engineers, was to "move" the VM's dvport from one port to another while staying within the existing dvPortgroup, which forces the creation of the .dvsdb port file. Once the dvport move has successfully completed, we move the VM back to the original dvport it initially resided on. We no longer have to rely on creating a temporary dvPortgroup, and best of all, we can now remediate both VDS and N1KV. The script now combines both the "query" and "remediation" functions into a single script. Please take a look at the examples below on usage.

Disclaimer: This script is not officially supported by VMware, please test this in a development environment before using on production systems.

Here is a sample output of the script running in "query" mode:

Only impacted VMs will be listed in the output. The remediation logic has been combined into the query script: if you wish to remediate ALL VMs that were listed as being impacted, you can specify the --fix flag with the value "true", and the script will go ahead and remediate every impacted VM that was listed before.
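
A remediation run simply adds the --fix flag, for example (again, the hostname and username are placeholders):

./querySvMotionVDSIssue.pl --server vcenter.yourdomain.com --username administrator --fix true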

Here is a sample output of the script running in "remediation" mode:

In the screenshot above, you may have noticed a few interesting details with VM3 and VM4. If you run out of dvports in a dvPortgroup, the script will automatically increase the number of ports to satisfy the swap (a maximum of 10, due to the number of ethernet interfaces a VM can have). Once the VM has been remediated, the dvPortgroup will be reconfigured back to its originally configured number of ports, as shown with VM3.

If you have an impacted VM that is connected to an ephemeral dvPortgroup, the script will not be able to remediate it due to the nature of how ephemeral binding works. You will get a message for the specific interface, and you will need to remediate manually using the steps outlined by Duncan or using the "old" remediation script which creates a temporary dvPortgroup (again, this works for a VMware VDS only).

If you run into any issues or have questions, feel free to leave a comment.

Categories // Uncategorized Tags // distributed virtual switch, dvportgroup, dvs, storage drs, svmotion, vds
