
Removing Previous Local Datastore Label for Reinstall in ESXi 5

04.24.2012 by William Lam // 18 Comments

If you reinstall ESXi 5 on a system that had a previous copy, one thing you might have noticed is that the local VMFS datastore label is preserved. This is also true if you perform an unattended installation using kickstart and specify the overwritevmfs parameter: a new VMFS volume is created, but it still uses the old label. This can cause some issues for scripted installs where you decide to rename the local datastore from the expected default "datastore1" label.

UPDATE (12/21) - This issue has been resolved in the latest release of ESXi 5.0 Update 2; you can refer to the release notes for more details on other updates and fixes.

Fortunately, it is actually pretty easy to get around this problem by deleting the VMFS partition prior to starting the new ESXi installation. Below are three methods, depending on the installation option you have chosen. Please be absolutely sure you have identified the correct VMFS volume before deleting the partition.

Method 1 - While you still have login access to the previous ESXi install

If you still have access to the system before the reinstall, you can delete the VMFS partition before rebooting and starting the installation (ISO or kickstart). You will first need to identify the device that is backing your local datastore; you can use the following ESXCLI command, which provides a mapping of your datastore to its device.
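Here is a sketch of the command with illustrative output (the VMFS UUID and naa.* device name below are made-up placeholders; expect different values in your environment):

~ # esxcli storage vmfs extent list
Volume Name  VMFS UUID                            Extent Number  Device Name                           Partition
-----------  -----------------------------------  -------------  ------------------------------------  ---------
datastore1   4f8e1a2b-12345678-abcd-001f29e04d52              0  naa.600508b1001c4d5600a0b8f2d3e4a90a          3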

You will need to make a note of the "Device Name", which can be an naa.* or mpx.* identifier depending on how your ESXi host identifies the disk. You should also make a note of the partition number for the VMFS volume, which we will confirm in the next step. Using partedUtil with the "getptbl" option and specifying the full path to the disk under /vmfs/devices/disks/naa.*, we can list the partitions on the disk and confirm that partition 3 is being used for VMFS, as shown below.
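For example, against the same placeholder device, the output might look like the following (abridged); the line beginning with "3" confirms that partition 3 is of type vmfs:

~ # partedUtil getptbl /vmfs/devices/disks/naa.600508b1001c4d5600a0b8f2d3e4a90a
gpt
8843 255 63 142082048
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
3 10229760 142082047 AA31E02A400F11DB9590000C2911D1B8 vmfs 0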

Now we just need to delete this partition, which will wipe the VMFS headers, including the datastore label. We can do this with partedUtil using the "delete" option, which requires the full path to the disk from the previous step.
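Using the same placeholder device, the command is simply:

~ # partedUtil delete /vmfs/devices/disks/naa.600508b1001c4d5600a0b8f2d3e4a90a 3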

You can now reinstall ESXi and it will use "datastore1" as its default VMFS label.

Note: The disk that contains the local ESXi 5 install will always have VMFS as the 3rd partition, whereas other VMFS volumes will only have a single partition.

Method 2 - During manual installation using ESXi 5 ISO

When you boot up the ISO, you are brought to the "Welcome to VMware ESXi 5.0.0 Installation" page. From there, log in to the ESXi Shell by pressing ALT+F1. The username is root and there is no password; just hit enter. Just like in Method 1, you will need to identify the device for your local datastore, but instead of using esxcli you will need to use localcli, as hostd is not yet running.

Here is how identifying the local datastore device and deleting the VMFS partition looks from the ESXi Shell:
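A sketch of the equivalent commands, using the same placeholder device name as in Method 1 (localcli mirrors the esxcli namespaces):

~ # localcli storage vmfs extent list
~ # partedUtil delete /vmfs/devices/disks/naa.600508b1001c4d5600a0b8f2d3e4a90a 3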

You can now jump back to the installer by pressing ALT+F2 and continue with the reinstall; it will use "datastore1" as its default VMFS label.

Method 3 - Kickstart Installation

If you wish to ensure that the default "datastore1" label is always available for scripted installs, you can use the following snippet in the %pre section of your kickstart. This will search all disks under /vmfs/devices/disks, find the device that is backing a local ESXi installation, and delete its VMFS partition prior to starting the installation.

# loop over all physical disks, skipping the vml.* symlinks
for DISK in $(ls /vmfs/devices/disks | grep -v vml);
do
        DISK_PATH=/vmfs/devices/disks/${DISK}
        # grab the partition number of any VMFS partition on this disk
        VMFS_PARTITION_ID=$(partedUtil getptbl ${DISK_PATH} | grep vmfs | awk '{print $1}')
        # a local ESXi install always puts VMFS on partition 3, so only
        # delete when the VMFS partition number is 3
        if [[ ! -z ${VMFS_PARTITION_ID} ]] && [[ ${VMFS_PARTITION_ID} -eq 3 ]]; then
                partedUtil delete ${DISK_PATH} 3
        fi
done
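For reference, here is a minimal sketch of how the snippet sits inside a kickstart file; the directives around the %pre section are illustrative placeholders, not a complete kickstart:

vmaccepteula
rootpw VMware1!
install --firstdisk --overwritevmfs

%pre --interpreter=busybox
# disk-scanning snippet from above goes here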

Note: To be extra cautious, you should also consider disabling any additional remote LUNs that can be seen during the installation using the trick found here.

Categories // Uncategorized Tags // datastore label, local datastore, partedUtil

vSphere Security Hardening Report Script for vSphere 5

04.23.2012 by William Lam // 10 Comments

The much anticipated vSphere 5 Security Hardening Guide was just released last week by VMware and includes several new guidelines for the vSphere 5 platform. In addition to the new guidelines, you will also find that the old vSphere 4.x guideline identifiers (e.g. VMX00, COS00, VCENTER00) are no longer being used and have been replaced by a new set of identifiers. You might ask why the change? Though I cannot provide any specifics, rest assured this was done for a very good reason. There is also a change in the security guidance levels: in the vSphere 4.x guide you had enterprise, SSLF and DMZ, while in the vSphere 5 guide you now have profile1, profile2 and profile3, where profile1 provides the most secure guidelines. To get a list of all the guideline changes between the 4.1 and 5.0 Security Hardening Guide, take a look at this document here.

I too was impacted by these changes, as it meant I had to add additional logic and split up certain guidelines to support both the old and new identifiers in my vSphere Security Hardening Script. One of the challenges I faced with the old identifiers when creating my vSphere Security Hardening Script is that a single ID could be applicable to several independent checks, which can make it difficult to troubleshoot. I am glad that each guideline now has an individual and unique ID, which should also make it easier for users to interpret.

To help with your vSphere Security Hardening validation, I have updated my security hardening script to include the current public draft of the vSphere 5 Security Hardening Guide. You can download the script here.

Disclaimer: This script is not officially supported by VMware; please test this in a development environment before using on production systems.

The script now supports both vSphere 4.x and vSphere 5.0 environments. In addition to adding the new guideline checks and enhancing a few older ones, I have also included two additional checks that are not in the Hardening Guide, which verify an ESX(i) host's or vCenter Server's SSL certificate expiry. I recently wrote an article on the topic here, but thought this would be a beneficial check to include in my vSphere Security Hardening Script. If you would like to see the verification of SSL certificate expiry in the official vSphere 5 Security Hardening Guide, please be sure to provide your feedback here.
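Independent of how the script implements the check, you can do a quick manual spot-check of a host's certificate expiry with an openssl one-liner like the one below (the hostname is a placeholder; run it from any machine with openssl installed):

echo | openssl s_client -connect esxi01.mydomain.com:443 2>/dev/null | openssl x509 -noout -enddate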

Here is a sample output of the Security Hardening Report for a vSphere 5 environment using the "profile2" checks:
vmwarevSphereSecurityHardeningReport-SAMPLE.html

UPDATE (06/03/12): VMware just released the official vSphere 5 Security Hardening Guide this week and I have updated my script to include all modifications. If you have any feedback or bug reports, please post them in the vSphere Security Hardening Report VMTN Group.

If you have any feedback/questions, please join the vSphere Security Hardening Report VMTN Group for further discussions.

Categories // Uncategorized Tags // ESXi 5.0, hardening guide, security, vSphere 5.0

Automatically Remediating SvMotion / VDS Issue Using vCenter Alarms

04.20.2012 by William Lam // 8 Comments

UPDATE 07/13/2012 - vSphere 5.0 Update 1a has just been released, which resolves this issue. Please take a look here for more details on the patch, as this script is no longer required.

In my previous article, Identifying & Fixing Virtual Machines Affected By SvMotion / VDS Issue, I provided a script for users to easily identify the impacted VMs as well as a way to remediate them. However, the issue was only temporarily fixed, as any of the remediated VMs can be re-impacted if they are Storage vMotioned again (manually, or automatically by Storage DRS). This meant that users would need to re-run these scripts every so often to ensure their environment is not affected by this problem.

I decided to look into a more automated and hands-off approach in which a Storage vMotion of a VM automatically triggers the execution of the remediation script. I was able to accomplish this by leveraging vCenter Alarms and running a script on the vCenter Server (here's a cool thing I did with alarms a while back).

Disclaimer: This script is not officially supported by VMware; please test this in a development environment before using on production systems.

You can create the alarm at any level of the inventory hierarchy, but I would recommend placing it at least at the datacenter or cluster level. The alarm type will be for a VirtualMachine, and we will use "monitor for specific events". For the trigger, we will need to use "VM migrated" and set the status to "Unset", which will not create an alarm icon when it is triggered.

You might wonder why we selected "VM migrated" instead of "VM relocated". This is because a Storage vMotion starts out just like a vMotion, and whether you manually perform a vMotion or a Storage vMotion, only this event type is triggered. Because this single event is triggered by two completely different operations, it has an interesting implication which we will discuss in a bit.

Next we need to create an action for this alarm, which will be running a command. You will need to specify the full path to perl.exe (assuming you are using my script, which is based on the vSphere SDK for Perl; you will need to have vCLI installed on the vCenter Server) as well as the path to the alarm script, which in this example is called alarm.pl. Also ensure you set the green->yellow action to execute once.
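The exact path to perl.exe depends on where vCLI is installed; a typical command line for the alarm action (the paths here are hypothetical, so adjust for your system) would look like:

"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" C:\alarm.pl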

You will need to create the alarm.pl script on your vCenter Server and here is what it looks like:

#!/usr/bin/perl -w
# William Lam
# http://www.virtuallyghetto.com/

use strict;
use warnings;

my $scriptlocation = "C:\\querySvMotionVDSIssue.pl";
my $server = "localhost";
my $username = "VC-USERNAME";
my $password = "VC-PASSWORD";
my $debug = 0;

###########################
# DO NOT MODIFY PAST HERE #
###########################

my $start1 = "from";
my $start2 = "to";
my $end = ",";

# extract VMware env variables from alarm
my $eventstring = $ENV{'VMWARE_ALARM_EVENTDESCRIPTION'};
my $vmname = $ENV{'VMWARE_ALARM_EVENT_VM'};

my @sourcehost = $eventstring =~ /$start1 (.*?)$end/;
my @destinationhost = $eventstring =~ /$start2 (.*?)$end/;


# Output environmental variables to see what's up
if($debug) {
 open(FILE,">C:\\output.txt");
 foreach my $key (keys %ENV) {
  print FILE $key . "=" . $ENV{$key} . "\n";
 }
 close(FILE);
}

# if the source and destination hosts are the same, it means we had a
# Storage vMotion instead of a vMotion, so we execute the remediation
# script against the VM
if($sourcehost[0] eq $destinationhost[0]) {
 `$scriptlocation --server $server --username $username --password $password --vmname "$vmname" --fix true`;
}

You will need to fill in the script location (in this example I have all scripts stored in C:\) and you will also need to populate the credentials which will be used to execute the script.

Earlier we mentioned that both a Storage vMotion and a vMotion trigger the same event, so we need a way to identify when a Storage vMotion actually happened before running the script. The alarm.pl script above is executed when the alarm is triggered, and using the VMware-specific environment variables that are populated by the vCenter Alarm, we can parse the event description to figure out whether it was a vMotion or a Storage vMotion. Once we confirm it is a Storage vMotion, we then execute the remediation script from my previous article.
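For illustration, the description of a "VM migrated" event reads something like the line below (the exact wording is an approximation; check the events in your own vCenter). Because the host name after "from" matches the host name after "to", the regular expressions in alarm.pl would classify this one as a Storage vMotion:

Migration of virtual machine vm-01 from esx01.mydomain.com, datastore1 to esx01.mydomain.com, datastore2 completed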

Note: Ensure you download the latest version of the querySvMotionVDSIssue.pl script from the previous article, as it has been updated to handle single-VM remediation targeted at this use case.

Now, to verify that our alarm is functioning as expected, we can perform a manual Storage vMotion of a VM. We should see our alarm.pl execute, and after the Storage vMotion has completed, we should see some VM reconfiguration tasks which come from our remediation script.

So there you have it: you no longer have to worry about running the script every so often to ensure your VMs are not being impacted by the SvMotion / VDS problem. Again, I would like to stress that though we are able to automate this remediation, this is not a real solution, and VMware is actively working on a fix for this problem.

If you have any questions, feel free to leave a comment.

Categories // Uncategorized Tags // alarm, distributed virtual switch, dvportgroup, dvs, storage drs, svmotion, vds
