Disable LUN During ESXi Installation

04.17.2012 by William Lam // 14 Comments

For many of us who worked with classic ESX back in the day, one of the scariest things during an install, re-install, or upgrade of an ESX host with SAN-attached storage was the risk of accidentally installing ESX onto one of the LUNs that housed our Virtual Machines. As a precaution, most vSphere administrators would ask their storage administrators to either disable/unplug the ports on the switch or temporarily mask away the LUNs at the array during an install or upgrade.

Another trick that gained popularity due to its simplicity was unloading the HBA drivers before the installation of ESX began, usually as part of the %pre section of a kickstart installation. This ensured that your SAN LUNs were not visible during the installation, and it was much faster than involving your storage administrators. With the release of ESXi, this trick no longer works. There have been several enhancements in the ESXi kickstart that allow you to target specific types of disks during installation; however, it is still possible that your SAN LUNs will be visible to the installer.

I know the question about disabling the HBA drivers for ESXi comes up pretty frequently, and I had assumed it was not possible. A recent question on the same topic on our internal Socialcast site got me thinking. With some research and testing, I found a way to do this by leveraging LUN masking at the ESXi host level using ESXCLI. My initial thought was to mask based on the HBA adapter (C:*T:*L:*), but that would still be somewhat manual depending on your various host configurations.

The above solution was not ideal, but with help from some of our VMware GSS engineers (Paudie/Daniel), I learned that you can create claim rules based on a variety of criteria, one of which is the transport type. This meant that I could create a claim rule to mask all LUNs that use one of the following supported transport types: block, fc, iscsi, iscsivendor, ide, sas, sata, usb, parallel or unknown.

Here are the commands to run if you wish to create a claim rule to mask away all LUNs that are FC-based:

esxcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t plugin -P NMP
esxcli storage core claimrule run
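
If you want to confirm that the rule was created and is in effect before starting the installation, you can list the current claim rules:

esxcli storage core claimrule list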

Another option mentioned by Paudie is that you can also mask based on a particular driver, such as the Emulex driver (lpfc680). To see which driver is backing a particular adapter, you can run the following ESXCLI command:

esxcli storage core adapter list

[Screenshot: sample output of esxcli storage core adapter list, showing each adapter and the driver backing it]
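
Here is a sketch of the equivalent driver-based masking rule (the rule number is arbitrary, and lpfc680 should be replaced with whatever driver your adapter list shows):

esxcli storage core claimrule add -r 2013 -P MASK_PATH -t driver -D lpfc680
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t plugin -P NMP
esxcli storage core claimrule run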

For more details about creating claim rules, be sure to use the --help option or take a look at the ESXCLI documentation (starting on page 88).

Now this is great, but how do we automate this a bit further, given that the claim rules still need to be created by a user before starting an ESXi installation and removed after the installation completes? I started testing a customized ESXi 5 ISO that would "auto-magically" create the proper claim rules and remove them afterwards, and with some trial and error, I was able to get it working.

The process is exactly the same as laid out in the earlier article How to Create Bootable ESXi 5 ISO & Specifying Kernel Boot Options, but instead of tweaking the kernelopt line in boot.cfg, we will be appending a custom mask.tgz file that contains our "auto-magic" claim rule script. Here is what the script looks like:

#!/bin/ash

# Create and apply a claim rule that masks all FC LUNs before the
# installer scans for disks
localcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
localcli storage core claimrule load
localcli storage core claiming unclaim -t plugin -P NMP
localcli storage core claimrule run

# Append the rule removal to /etc/rc.local; this runs before the
# installation finishes but only takes effect after the reboot
cat >> /etc/rc.local << __CLEANUP_MASKING__
localcli storage core claimrule remove -r 2012
__CLEANUP_MASKING__

# First-boot script that scrubs the localcli entry out of
# /etc/rc.local and then deletes itself
cat > /etc/init.d/maskcleanup << __CLEANUP_MASKING__
sed -i 's/localcli.*//g' /etc/rc.local
rm -f /etc/init.d/maskcleanup
__CLEANUP_MASKING__

chmod +x /etc/init.d/maskcleanup

The script above creates a claim rule to mask all FC LUNs before the installation of ESXi starts, which ensures the FC LUNs are not visible during the installation. It also appends a claim rule removal to /etc/rc.local, which actually executes before the installation is complete but does not take effect at that point, since the updated rule set is not reloaded. This ensures the claim rule is automatically removed before rebooting, and we also create a simple init.d script to clean up this entry upon first boot. All said and done, you will not be able to see your FC LUNs during the installation, but they will show up after the first reboot.

Disclaimer: Please ensure you do proper testing in a lab environment before using in Production.

To create the custom mask.tgz file, follow the steps below, then take the resulting mask.tgz file and follow the article above to create a bootable ESXi 5 ISO.

  1. Create the following directory: mkdir -p test/etc/rc.local.d
  2. Change into the "test/etc/rc.local.d" directory and create a script called mask.sh, copying the above lines into the script
  3. Set the execute permission on the script: chmod +x mask.sh
  4. Change back into the root of the "test" directory and run the following command: tar czvf mask.tgz * (the combined sequence for steps 1 through 4 is shown below)
  5. Update the boot.cfg as noted in the article and append mask.tgz to the module list.
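
Putting steps 1 through 4 together, the whole sequence looks like this (run from whatever directory you are using for staging):

mkdir -p test/etc/rc.local.d
cd test/etc/rc.local.d
vi mask.sh            # paste in the script shown above
chmod +x mask.sh
cd ../..              # back to the root of "test"
tar czvf mask.tgz *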

Once you have created your customized ESXi 5 ISO, you can boot it up and perform either a clean installation or an upgrade without having to worry about SAN LUNs being seen by the installer. Though these steps are specific to ESXi 5, they should also work with ESXi 4.x (the ESXCLI syntax may need to be changed), but please do verify before using in a production environment.

You can easily leverage this in a kickstart deployment by adding the claim rule creation in the %pre section and the claim rule removal in the %post section, ensuring that everything is ready to go upon first boot; a sketch follows below. Take a look at this article for more kickstart tips/tricks in ESXi 5.
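
Here is a minimal sketch of what those kickstart sections could look like (the rule number is arbitrary and the busybox interpreter is assumed):

%pre --interpreter=busybox
# mask all FC LUNs before the installer scans for disks
localcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
localcli storage core claimrule load
localcli storage core claiming unclaim -t plugin -P NMP
localcli storage core claimrule run

%post --interpreter=busybox
# remove the masking rule so the LUNs reappear after the reboot
localcli storage core claimrule remove -r 2012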

Categories // Automation, ESXi Tags // ESXi 4.1, ESXi 5.0, kickstart, ks.cfg, LUN

How to Create Bootable ESXi 5 ISO & Specifying Kernel Boot Options

03.30.2012 by William Lam // 21 Comments

This week I helped answer a few questions about creating your own ESXi 5 bootable ISO, along with automatically using a static IP Address when the custom ISO first boots up. Although all of this information is available in the vSphere documentation, it may not always be easy to put all the pieces together, so I thought I would share the steps for others to benefit from as well.

You will need access to a UNIX/Linux system and a copy of the base ESXi 5 ISO image. In this example I will be using VMware vMA and VMware-VMvisor-Installer-5.0.0.update01-623860.x86_64.iso, and I will walk you through two different configurations. We will also be referencing the vSphere documentation: Create an Installer ISO Image with a Custom Installation or Upgrade Script and Kernel Boot Options.

Create ESXi 5 Bootable ISO w/Remote ks.cfg:

In this configuration, we will create a custom ESXi ISO that will boot with a static IP Address and use a remote ks.cfg (kickstart) configuration file.

Step 1 - Mount the base ESXi ISO using the "mount" utility:

$ mkdir esxi_cdrom_mount
$ sudo mount -o loop VMware-VMvisor-Installer-5.0.0.update01-623860.x86_64.iso esxi_cdrom_mount

Step 2 - Copy the contents of the mounted image to a local directory called "esxi_cdrom":

$  cp -r esxi_cdrom_mount esxi_cdrom

Step 3 - Unmount the ISO after you have successfully copied it, and change into the esxi_cdrom directory:

$ sudo umount esxi_cdrom_mount
$ cd esxi_cdrom

Step 4 - Edit boot.cfg, specifically the "kernelopt" line, so that it does not launch the interactive weasel installer but instead uses kickstart, specifying the remote location of your ks.cfg. For more details on the various kernel boot options, please take a look at the vSphere Boot Options documentation referenced above.

You will also need to specify, on the same line, the static IP Address you wish the host to use when the ISO first boots up.
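
As a sketch, the finished kernelopt line might look like this (the kickstart URL and all addresses are placeholders for your environment):

kernelopt=ks=http://192.168.1.100/ks.cfg ip=192.168.1.200 netmask=255.255.255.0 gateway=192.168.1.1 nameserver=192.168.1.1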

Step 5 - Once you have finished your edits and saved boot.cfg, change back to the parent directory and use "mkisofs" to create your new bootable ISO. In this example, we will name the new ISO "custom_esxi.iso":

$ sudo mkisofs -relaxed-filenames -J -R -o custom_esxi.iso -b isolinux.bin -c boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table esxi_cdrom/

You now have a new bootable ESXi 5 ISO called "custom_esxi.iso", which will automatically boot up with the specified static IP Address and install based on the ks.cfg that was specified.

Create ESXi 5 Bootable ISO w/Local ks.cfg:

Similar to the above configuration, we will create a custom ESXi ISO that will boot with a static IP Address but use a local ks.cfg (kickstart) configuration file that will be included within the custom ISO.

Steps 1 through 3 are exactly the same as above.

Step 4 - By default, a basic ks.cfg is included in the ESXi 5 ISO at /etc/vmware/weasel/ks.cfg; we will create a custom *.tgz file to include our own ks.cfg within the ISO. Start by creating a temporary directory structure that will hold our ks.cfg:

$ mkdir -p temp/etc/vmware/weasel

Step 5 - Copy your ks.cfg file into temp/etc/vmware/weasel:

$ cp ks_custom.cfg temp/etc/vmware/weasel

Step 6 - Create a *.tgz file containing the path to our ks.cfg using the "tar" utility. In this example, we will call it customks.tgz:

$ cd temp
$ tar czvf customks.tgz *

Step 7 - Change back out of the temp directory and copy customks.tgz into your esxi_cdrom directory:

$ cd ..
$ cp temp/customks.tgz esxi_cdrom

Step 8 - Change into the "esxi_cdrom" directory and edit boot.cfg just like above, but this time use the "file://" stanza to specify the path to our ks.cfg along with the static IP Address, and also add customks.tgz to the module list so the archive containing the ks.cfg referenced in boot.cfg actually gets loaded.
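
As a sketch, the relevant boot.cfg lines might look like the following (addresses are placeholders, and the module list is abbreviated; keep your existing modules and simply append customks.tgz at the end):

kernelopt=ks=file://etc/vmware/weasel/ks_custom.cfg ip=192.168.1.200 netmask=255.255.255.0 gateway=192.168.1.1
modules=b.b00 --- [existing modules] --- customks.tgz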

Step 9 - Same as Step 5 above: run the "mkisofs" utility to create your bootable ISO.

You now have a new bootable ESXi 5 ISO called "custom_esxi.iso", which will automatically boot up with the specified static IP Address and install based on the ks.cfg that is included within the ISO.

Categories // Automation, ESXi Tags // bootable, ESXi 5.0, iso, kickstart

Automating Dead Space Reclamation in ESXi 5.0u1

03.24.2012 by William Lam // 4 Comments

VMware released vSphere 5.0 Update 1 last week, which mainly included bug fixes, but it also brought back one very cool feature initially introduced with the release of vSphere 5.0: the Thin Provisioning UNMAP primitive for an ESXi host. You can read more about the details in this article by my colleague Cormac Hogan.

As you can see from Cormac's article, reclaiming dead space on a thin provisioned LUN is currently a manual process, but does it have to be? No, of course not; we can definitely automate this!
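
For context, here is the manual operation from Cormac's article that the script wraps, run from within the datastore you want to reclaim (the datastore name is a placeholder, and 60 is the percentage of free space to reclaim):

$ cd /vmfs/volumes/iSCSI-1
$ vmkfstools -y 60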

Disclaimer: This script is not officially supported by VMware; please test it in a development environment before using it on a production system. The script is provided as an example of how you can automate this manual process.

Before you proceed, please understand that the UNMAP operation can take anywhere from a few minutes to a few hours depending on the size of your datastore and how your array handles the operation. You should consider performing this operation during a maintenance window or during off-peak hours; otherwise, you could impact VMs residing on the datastore. You should also ensure you have a VAAI-capable storage array before running this script.
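
A quick way to verify VAAI support for a given device from the ESXi Shell is to query its VAAI status and check that the Delete Status (the UNMAP primitive) is supported; the device identifier below is a placeholder:

$ esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx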

I wrote a simple shell script called reclaimMyDeadSpace.sh which needs to be executed in the ESXi Shell via SSH. The script performs some validation, such as ensuring you are running ESXi 5.0 Update 1 and that your host is in maintenance mode, as a precaution to ensure no running VMs are on the host during this process.

You only need to run the script on one of the hosts connected to all the datastores you wish to reclaim dead space on. You may use scp or WinSCP to transfer the script to your ESXi host, and make sure to set the execute permission on the script (chmod +x reclaimMyDeadSpace.sh), as shown below.
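
For example, assuming SSH is enabled on the host (the hostname is a placeholder):

$ scp reclaimMyDeadSpace.sh root@esxi-host:/tmp/
$ ssh root@esxi-host "chmod +x /tmp/reclaimMyDeadSpace.sh"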

The script can be executed in two ways:

  1. Identify ALL VMFS3 and VMFS5 volumes and perform the reclaim based on the percentage entered by the user
  2. Reclaim only specific datastores specified by the user, along with the percentage to be reclaimed (this is recommended, so the script does not select all datastores, including local ones)

[Screenshot: running the script against ALL VMFS3 and VMFS5 datastores, reclaiming 60% of free space]

[Screenshot: running the script against just the 4 datastores specified in a file, reclaiming 60% of free space]

In this example, we created a file called "datastore_list.txt" (you may name the file anything you want) which contains the following:
iSCSI-1
iSCSI-2
iSCSI-3
iSCSI-4

So if you are using thin provisioned LUNs on a VAAI-capable storage array and would like to reclaim some of that dead space, be sure to check out the UNMAP functionality in ESXi 5.0u1.

Categories // Uncategorized Tags // ESXi 5.0, unmap, vaai, vmkfstools, vSphere 5.0
