WilliamLam.com

Enabling/Disabling EVC using the vSphere MOB

05.07.2012 by William Lam // 2 Comments

There were some discussions this morning on Twitter regarding the configuration of EVC for a vSphere Cluster using one of the vSphere CLIs such as PowerCLI, or directly leveraging the vSphere API. Unfortunately, this is not possible today, as the operations pertaining to EVC are not currently exposed in the vSphere API. This means you will not be able to use the vCLI, PowerCLI, vCO, or the vSphere API to configure and manage EVC; you will need to use the vSphere Client to do so.

Having said that, one could still "potentially" automate EVC configuration through the vSphere MOB interface using the private vSphere API, but it may not be ideal and will require some "creativity" and custom coding to integrate with your existing automation solution. This particular limitation of the vSphere API is one that I have personally faced, and I filed a bug with VMware a while back. I am hoping this will eventually be added to the public vSphere API, so that users can fully automate all aspects and configurations of a vSphere Cluster.

Disclaimer: This is not officially supported by VMware, use at your own risk and discretion.

Step 1 - Connect to your vCenter MOB and traverse to the vSphere Cluster of interest (note the MOID will be different for your specific cluster).

Step 2 - Now replace the URL with the following, substituting the cluster MOID that you see in your browser:

https://reflex.primp-industries.com/mob/?moid=domain-c1550&method=transitionalEVCManager

and hit enter and you'll be brought to the transitionalEVCManager() method. You'll then want to click on "Invoke Method". Once you do so, you should be returned a task object, along with a link to something like evcdomain-cXXXX. Click on this and you'll be brought to the ClusterTransitionalEVCManager.

Step 3 - From here you'll have some basic evcState information, which you can click on to see what the current EVC configuration is set to, the guaranteedCPUFeatures, and the valid EVC modes (the last part will be important for reconfiguring EVC).

Step 4 - Now let's say the cluster currently has its EVC mode set to intel-merom and you would like to change it to Nehalem. You will need to retrieve the key from the previous page; in our example it's intel-nehalem. Next, click on the method link called ConfigureEVC_Task, which is pretty straightforward: it just accepts an EVC mode key. Enter the string and click on "Invoke Method", and your cluster will be reconfigured, which you can confirm by going back to evcState or looking at your vCenter tasks. You can also disable EVC by using DisableEVC_Task.
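
If you would rather drive this from a script than from a browser, the same MOB invocations can be made over plain HTTP. The sketch below is an unsupported, minimal example with several assumptions: the credentials are placeholders, and the method-name casing in the URL and the evcModeKey parameter name are guesses based on how the MOB typically renders its forms, so verify each piece against your own MOB pages first.

# Step 1: invoke transitionalEVCManager() on the cluster (empty POST body);
# the response contains the evcdomain-cXXXX MOID of the EVC manager
curl -sk -u 'administrator:VMware1!' -X POST \
 'https://reflex.primp-industries.com/mob/?moid=domain-c1550&method=transitionalEVCManager'

# Step 2: invoke ConfigureEVC_Task on the returned EVC manager MOID,
# passing the desired EVC mode key as the form parameter
curl -sk -u 'administrator:VMware1!' -X POST -d 'evcModeKey=intel-nehalem' \
 'https://reflex.primp-industries.com/mob/?moid=evcdomain-cXXXX&method=configureEVC_Task'

# Note: newer vCenter releases also require a vmware-session-nonce form value
# for MOB POSTs, which first has to be scraped from the method's HTML page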

 
Note: If EVC is already configured in your vSphere Cluster, you can use the vSphere API to view its current configuration by looking at the ClusterComputeResource's summary property. You just will not be able to make any changes or disable EVC using the vSphere API.

Categories // Uncategorized Tags // api, evc, mob, vSphere

Removing Previous Local Datastore Label for Reinstall in ESXi 5

04.24.2012 by William Lam // 18 Comments

If you reinstall ESXi 5 on a system that had a previous copy, one thing you might have noticed is that the local VMFS datastore label is preserved. This is also true if you perform an unattended installation using kickstart and specify the overwritevmfs parameter: a new VMFS volume is created, but it still uses the old label. This can cause some issues for scripted installs where you decide to rename the local datastore from the expected default "datastore1" label.

UPDATE (12/21) - This issue has been resolved in the latest release of ESXi 5.0 Update 2; you can refer to the release notes for more details on other updates and fixes.

It is actually pretty easy to get around this problem by deleting the VMFS partition prior to starting the new ESXi installation. Below are three methods, depending on the installation option you have chosen. Please be absolutely sure you have identified the correct VMFS volume prior to deleting the partition.

Method 1 - While you still have login access to previous ESXi install

If you still have access to the system before the re-install, you can delete the VMFS partition before rebooting and starting the installation (ISO or kickstart). You will first need to identify the device that is backing your local datastore; you can use the ESXCLI command shown below, which provides a mapping of your datastores to their devices.
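
On ESXi 5 the mapping is available from the storage namespace; a minimal sketch:

# Map each VMFS datastore to its backing device; the output includes
# the volume name, the device name (naa.* or mpx.*), and the partition number
esxcli storage vmfs extent list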

You will need to make a note of the "Device Name", which can be naa.* or mpx.* depending on how your ESXi host identifies the disk. You should also make a note of the partition number for the VMFS volume, which we will confirm in the next step. Using partedUtil we can check the partitions found on the disk and confirm that partition 3 is being used for VMFS. Using the "getptbl" option and specifying the full path to the disk, which is under /vmfs/devices/disks/naa.*, we can retrieve the partition info as shown below.
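
A sketch of that command, with a placeholder device name that you would replace with your own:

# Print the partition table for the disk; the VMFS partition is listed
# with a type of "vmfs" (placeholder device name below)
partedUtil getptbl /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX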

Now we just need to delete this partition, which will wipe the VMFS headers, including the datastore label. We can do this with partedUtil using the "delete" option, which requires the full path to the disk from our previous step.
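
Again with a placeholder device name; be absolutely certain you have the right disk before running this:

# Delete partition 3, wiping the VMFS headers and the old datastore label
partedUtil delete /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX 3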

You can now reinstall ESXi and it will use "datastore1" as its default VMFS label.

Note: The disk that contains the local ESXi 5 install will always have VMFS as the 3rd partition, whereas other VMFS volumes will only have a single partition.

Method 2 - During manual installation using ESXi 5 ISO

When you boot up the ISO, you are brought to the "Welcome to VMware ESXi 5.0.0 Installation" page. You will need to login to the ESXi Shell by pressing ALT+F1. The username is root and there is no password; just hit enter. Just like in Method 1, you will need to identify the device for your local datastore, but instead of using esxcli you will need to use localcli, as hostd is not running at this point.
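
localcli accepts the same namespaces as esxcli, so the equivalent steps in the installer shell look roughly like this (placeholder device name again):

# hostd is not running in the installer, so localcli stands in for esxcli
localcli storage vmfs extent list

# Then confirm and delete the VMFS partition just like in Method 1
partedUtil getptbl /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX
partedUtil delete /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX 3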

Here is a screenshot of identifying the local datastore device and deleting the VMFS partition:

You can now jump back to the installer by pressing ALT+F2 and continue with the reinstall, and it will use "datastore1" as its default VMFS label.

Method 3 - Kickstart Installation

If you wish to ensure that the default "datastore1" label is always available for scripted installs, you can use the following snippet in the %pre section of your kickstart. It will search all disks under /vmfs/devices/disks, find the device that is backing a local ESXi installation, and delete its VMFS partition prior to starting the installation.

for DISK in $(ls /vmfs/devices/disks | grep -v vml);
do
        DISK_PATH=/vmfs/devices/disks/${DISK}
        # Grab the partition number of any VMFS partition found on this disk
        VMFS_PARTITION_ID=$(partedUtil getptbl ${DISK_PATH} | grep vmfs | awk '{print $1}')
        # A local ESXi install always carries its VMFS volume on partition 3
        if [[ ! -z ${VMFS_PARTITION_ID} ]] && [[ ${VMFS_PARTITION_ID} -eq 3 ]]; then
                partedUtil delete ${DISK_PATH} 3
        fi
done
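
For context, here is roughly how that snippet sits inside a kickstart file; the surrounding directives are an assumed minimal example, not a complete recommendation:

accepteula
install --firstdisk --overwritevmfs
rootpw VMware1!
network --bootproto=dhcp

%pre --interpreter=busybox
# ... the partition-deletion loop shown above goes here ...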

Note: To be extra cautious, you should also consider disabling any additional remote LUNs that can be seen during the installation using the trick found here.

Categories // Uncategorized Tags // datastore label, local datastore, partedUtil

Automatically Remediating SvMotion / VDS Issue Using vCenter Alarms

04.20.2012 by William Lam // 8 Comments

UPDATE 07/13/2012 - vSphere 5.0 Update 1a has just been released, which resolves this issue. Please take a look here for more details on the patch, as this script is no longer required.

In my previous article, Identifying & Fixing Virtual Machines Affected By SvMotion / VDS Issue, I provided a script for users to easily identify the impacted VMs as well as a way to remediate them. The issue was only temporarily fixed, though, as any of the remediated VMs can be re-impacted if they are Storage vMotioned again (manually, or automatically by Storage DRS). This meant that users would need to re-run these scripts every so often to ensure their environment was not affected by this problem.

I decided to look into a more automated and hands-off approach in which a Storage vMotion of a VM automatically triggers the execution of the remediation script. I was able to accomplish this by leveraging vCenter Alarms and running a script on the vCenter Server (here's a cool thing I did with alarms a while back).

Disclaimer: This script is not officially supported by VMware, please test this in a development environment before using on production systems.

You can create the alarm at any level of the inventory hierarchy, but I would recommend placing it at least at the datacenter or cluster level. The alarm type will be for a VirtualMachine, and we will use "monitor for specific events". For the trigger, we will need to use "VM migrated" and set the status to "Unset", which will not create an alarm icon when it is triggered.

You might wonder why we selected "VM migrated" versus "VM relocated"; this is due to the fact that a Storage vMotion starts out just like a vMotion, and whether you manually perform a vMotion or a Storage vMotion, only this event type will be triggered. Because this single event is triggered by two completely different operations, it has an interesting impact which we will discuss in a bit.

Next we need to create an action for this alarm, which will be running a command. You will need to specify the full path to perl.exe (assuming you're using my script, which is based on the vSphere SDK for Perl, you will need to have the vCLI installed on the vCenter Server) as well as the path to the alarm script, which in this example is called alarm.pl. Also ensure you set the green->yellow action to execute once.
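
For reference, the alarm action command ends up looking something like the line below; the perl.exe path assumes a default vCLI install location on the vCenter Server, so adjust it for your system:

"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" C:\alarm.pl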

You will need to create the alarm.pl script on your vCenter Server and here is what it looks like:

#!/usr/bin/perl -w
# William Lam
# http://www.virtuallyghetto.com/

use strict;
use warnings;

my $scriptlocation = "C:\\querySvMotionVDSIssue.pl";
my $server = "localhost";
my $username = "VC-USERNAME";
my $password = "VC-PASSWORD";
my $debug = 0;

###########################
# DO NOT MODIFY PAST HERE #
###########################

my $start1 = "from";
my $start2 = "to";
my $end = ",";

# extract VMware env variables from alarm
my $eventstring = $ENV{'VMWARE_ALARM_EVENTDESCRIPTION'};
my $vmname = $ENV{'VMWARE_ALARM_EVENT_VM'};

my @sourcehost = $eventstring =~ /$start1 (.*?)$end/;
my @destinationhost = $eventstring =~ /$start2 (.*?)$end/;


# Output environmental variables to see what's up
if($debug) {
 open(FILE,">C:\\output.txt");
 foreach my $key (keys %ENV) {
  print FILE $key . "=" . $ENV{$key} . "\n";
 }
 close(FILE);
}

# if the source/destination host is the same, means we had a Storage vMotion instead of vMotion
# and we execute the remediation script on the VM
if($sourcehost[0] eq $destinationhost[0]) {
 `"$scriptlocation --server $server --username $username --password $password --vmname $vmname --fix true"`;
}

You will need to fill in the script location (in this example I have all scripts stored in C:\), and you will also need to populate the credentials which will be used to execute the script.

Earlier we mentioned that both a Storage vMotion and a vMotion trigger the same event, and because of that, we need to be able to identify when a Storage vMotion has actually happened before running the script. The alarm.pl script above is executed when the alarm is triggered, and using the VMware-specific environment variables that are populated by the vCenter Alarm, we can parse the event description to figure out whether it was a vMotion or a Storage vMotion: for a Storage vMotion, the source and destination hosts in the description are the same. Once we confirm it is a Storage vMotion, we then execute the remediation script from my previous article.

Note: Ensure you download the latest version of querySvMotionVDSIssue.pl from the previous article, as it has been updated to handle single-VM remediation targeted for this use case.

Now to verify that our alarm is functioning as expected, we can perform a manual Storage vMotion of a VM. We should see alarm.pl execute, and after the Storage vMotion has completed, we should see some VM reconfiguration tasks, which are from our remediation script.

So there you have it: you no longer have to worry about running the script every so often to ensure your VMs are not being impacted by the SvMotion / VDS problem. Again, I would like to stress that though we are able to automate this remediation, it is not a real solution, and VMware is actively working on a fix for this problem.

If you have any questions, feel free to leave a comment.

Categories // Uncategorized Tags // alarm, distributed virtual switch, dvportgroup, dvs, storage drs, svmotion, vds

