ESXi 5.5 Kickstart script for setting up VSAN

07.21.2014 by William Lam // 12 Comments

In my lab, when I need to provision a new ESXi host or rebuild an existing one, I still prefer to use the tried and true method of an unattended/scripted installation, also known as Kickstart. Below is an ESXi 5.5 Kickstart that I have been using to set up a basic VSAN environment. It might come in handy for anyone looking to automate their ESXi 5.5 deployment and include some of the VSAN configurations, like creating a VSAN Disk Group or enabling the VSAN traffic type on a particular VMkernel interface. For more details, refer to the line-by-line breakdown below the script.

# Sample kickstart for ESXi 5.5 configuring VSAN Disk Groups
# William Lam
# www.virtuallyghetto.com
#########################################

accepteula
install --firstdisk --overwritevmfs
rootpw vmware123
reboot

%include /tmp/networkconfig
%pre --interpreter=busybox

# extract network info from bootup
VMK_INT="vmk0"
VMK_LINE=$(localcli network ip interface ipv4 get | grep "${VMK_INT}")
IPADDR=$(echo "${VMK_LINE}" | awk '{print $2}')
NETMASK=$(echo "${VMK_LINE}" | awk '{print $3}')
GATEWAY=$(esxcfg-route | awk '{print $5}')
DNS="172.30.0.100"
HOSTNAME=$(nslookup "${IPADDR}" "${DNS}" | grep Address | grep "${IPADDR}" | awk '{print $4}')

echo "network --bootproto=static --addvmportgroup=true --device=vmnic0 --ip=${IPADDR} --netmask=${NETMASK} --gateway=${GATEWAY} --nameserver=${DNS} --hostname=${HOSTNAME}" > /tmp/networkconfig

%firstboot --interpreter=busybox

vsan_syslog_key="VSAN-KS"

logger $vsan_syslog_key " Enabling & Starting SSH"
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

logger $vsan_syslog_key " Enabling & Starting ESXi Shell"
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell

logger $vsan_syslog_key " Suppressing ESXi Shell Warning"
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

logger $vsan_syslog_key " Reconfiguring VSAN Default Policy"
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

logger $vsan_syslog_key "Enabling VSAN Traffic on vmk0"
esxcli vsan network ipv4 add -i vmk0

# assign license
vim-cmd vimsvc/license --set AAAAA-BBBBB-CCCCC-DDDDD-EEEEE

%firstboot --interpreter=python

import commands, os, uuid, syslog

vsan_syslog_key = "VSAN-KS"
debug = False

# Build VSAN Disk Group command based on vdq -q output
def createVsanDiskGroup():
	vdqoutput = eval(commands.getoutput("/sbin/vdq -q"))
	md = []
	ssd = ''
	for i in vdqoutput:
		if i['State'] == 'Eligible for use by VSAN':
			if i['Reason'] == 'Non-local disk':
				syslog.syslog(vsan_syslog_key + " Setting enable_local and reclaiming " + i['Name'])
				if debug == False:
					os.system("esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -o enable_local -d " + i['Name'])
					os.system("esxcli storage core claiming reclaim -d " + i['Name'])
			if i['IsSSD'] == '1':
				ssd = i['Name']
			else:
				md.append(i['Name'])

	diskgroup_cmd = 'esxcli vsan storage add -s ' + ssd + ' -d ' + ' -d '.join(md)
	syslog.syslog(vsan_syslog_key + " Creating VSAN Disk Group using SSD: " + ssd +  " MD: " + ', '.join(md))
	if debug == False:
		os.system(diskgroup_cmd)

# Create VSAN Cluster (required only for first ESXi node)
def createVsanCluster():
	# generate UUID for VSAN Cluster
	vsan_uuid = str(uuid.uuid4())

	syslog.syslog(vsan_syslog_key + " Creating VSAN Cluster using UUID: " + vsan_uuid)
	if debug == False:
		os.system("esxcli vsan cluster join -u " + vsan_uuid)

createVsanDiskGroup()
createVsanCluster()

If you would like to see more details on creating ESXi Kickstart scripts, make sure to check out my ESXi 4.x & 5.x examples here.

Line 6-9 These are generic Kickstart configurations specifying the EULA acceptance, installation disk, root password, etc. You can refer to VMware's scripted install documentation.

Line 11-25 This extracts the IP address assigned during bootup (a static DHCP allocation) and re-creates the network configuration on Line 25, statically assigning that IP address to the ESXi host
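For reference, the vmk0 line being parsed out of localcli network ip interface ipv4 get looks something like this (addresses are illustrative), which is why awk grabs fields $2 and $3:

Name  IPv4 Address  IPv4 Netmask   IPv4 Broadcast  Address Type  DHCP DNS
----  ------------  -------------  --------------  ------------  --------
vmk0  172.30.0.51   255.255.255.0  172.30.0.255    DHCP          true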

Line 27 This starts the firstboot script and uses "Busybox" as the interpreter, which means basic shell commands

Line 30 I create a custom key which will be used when logging to syslog during our installation

Line 32-41 Basic ESXi configurations leveraging vim-cmd and ESXCLI

Line 43-45 Configure the VSAN default storage policy; please refer to this article for more details.

Line 47-48 Configure the VSAN traffic type on vmk0

Line 50-51 Assign a license to the ESXi host

Line 54 This starts a second firstboot script, but now using "Python" as the interpreter

Line 56 Importing the appropriate libraries that will be used in the Python script

Line 58 Using the same custom key that I created earlier for logging to syslog

Line 61-81 A method for creating the VSAN Disk Group by inspecting the output of the vdq CLI and marking disks as local (see the sample vdq -q output below)

Line 83-90 A method for creating the VSAN Cluster; please refer to this article for more details.

Line 92-93 Invoking the two Python methods. You can create a custom Kickstart for your "first" ESXi node if you decide to bootstrap your VSAN Cluster onto a single ESXi host, or you can use custom boot options to specify whether the ESXi host being provisioned is the first or an additional node (a minimal single-script sketch follows below). This topic is a bit advanced, but if you are interested, take a look at this article here.
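For reference, here is what the output of /sbin/vdq -q roughly looks like (values are illustrative, abridged to a single disk): it is a Python-evaluable list of dictionaries, which is why the script above can simply eval() it.

[
   {
      "Name"     : "naa.6000c29c581358c23dcd2ca6284eec79",
      "VSANUUID" : "",
      "State"    : "Eligible for use by VSAN",
      "Reason"   : "None",
      "IsSSD"    : "1",
      "IsPDL"    : "0",
   },
]

If you would rather keep a single Kickstart for all nodes instead of a separate one for the "first" node, here is a minimal sketch of gating the cluster bootstrap inside the Python firstboot script. The vsan-node-1 naming convention is purely hypothetical and not part of the original script:

import commands

# assumption: a hypothetical naming convention designates the first node
hostname = commands.getoutput("hostname -s")

createVsanDiskGroup()
if hostname == "vsan-node-1":
	createVsanCluster()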

Categories // Automation, ESXCLI, ESXi, VSAN, vSphere, vSphere 5.5 Tags // esxi 5.5, kickstart, ks.cfg, VSAN, vSphere 5.5

Automating ESXi 5.1 Kickstart Tips & Tricks

09.17.2012 by William Lam // 38 Comments

Not a whole lot has changed in kickstart configuration between ESXi 5.0 and ESXi 5.1; the majority of the tips and tricks noted in the ESXi 5.0 kickstart guide are still relevant for ESXi 5.1. Below are a few new tips and tricks (and some old ones) as well as a complete working ESXi 5.1 kickstart example that can be used as a reference.

Tip #1

There are 82 new ESXCLI commands, some brand new and others enhancements to existing commands and operations. The kickstart sample below converts many of the legacy esxcfg-* and vim-cmd/vsish commands over to ESXCLI; here are just a few:

  • esxcli network ip route [ipv4|ipv6] (VMkernel routes)
  • esxcli system snmp (SNMP)
  • esxcli system maintenanceMode (maintenance mode)
  • esxcli network ip interface tag (tag VMkernel traffic types)

Please refer to the vCLI/ESXCLI release notes for all new ESXCLI commands.
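As a quick illustration of the conversion, here is a legacy command alongside its ESXCLI equivalent (the network values are just examples, matching the VMkernel routes configured in the sample below):

# legacy: add a static VMkernel route
esxcfg-route -a 10.20.183.0/24 172.30.0.1

# ESXi 5.1 ESXCLI equivalent
esxcli network ip route ipv4 add -n 10.20.183.0/24 -g 172.30.0.1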

Tip #2

In previous releases of ESXi, you could add custom commands to /etc/rc.local, which would automatically execute after all startup scripts had finished. In ESXi 5.1, this functionality has moved to /etc/rc.local.d/local.sh; if you try to edit the old file, you will find that it does not allow you to write any changes. This is important to know as you migrate your kickstart to ESXi 5.1 if you make use of this file for any custom startup commands.
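Here is a minimal sketch of migrating a custom startup command (the startup script path is hypothetical, and this assumes local.sh ends with the stock "exit 0" line):

# /etc/rc.local is read-only in ESXi 5.1, so insert the custom command
# into /etc/rc.local.d/local.sh just before its trailing "exit 0"
sed -i 's#^exit 0#/vmfs/volumes/datastore1/startup.sh\nexit 0#' /etc/rc.local.d/local.sh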

Tip #3

To run nested ESXi and other hypervisors on ESXi 5.1, you need to specify the new vhv.enable parameter; please take a look at this article for more details.
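The one-liner from the sample kickstart below takes care of this (it only appends the parameter if it is not already present):

grep -i "vhv.enable" /etc/vmware/config || echo "vhv.enable = \"TRUE\"" >> /etc/vmware/config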

Tip #4

There is a new ESXi Advanced Setting in ESXi 5.1 that allows you to control when an interactive ESXi Shell session will automatically logout based on configured idle time (in seconds). You can find more details in this blog article by Kyle Gleed.

esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 3600

Tip #5

By default, an ESXi host will automatically grant root permission to the "ESX Admins" group for use when a host is joined to an Active Directory domain. You can alter the default group name if you already have an AD group defined by using the following command:

vim-cmd hostsvc/advopt/update Config.HostAgent.plugins.hostsvc.esxAdminsGroup string "Ghetto ESXi Admins"

Tip #6

A really neat feature in ESXi 5.1 is the ability to control which local users have full admin privileges to the DCUI. This is really useful for troubleshooting scenarios where you want to provide DCUI console access without granting administrative permissions on the ESXi host itself. You can specify a list of local users by using the following command:

vim-cmd hostsvc/advopt/update DCUI.Access string root,william,tuan

Tip #7

If you wish to prevent VMs from sending out BPDU (Bridge Protocol Data Unit) packets, there is a new global configuration on an ESXi 5.1 host which you can set. By default this setting is disabled, and you will need to configure it on every ESXi host where you want VM guests blocked from sending BPDU packets.

esxcli system settings advanced set -o /Net/BlockGuestBPDU -i 1

Tip #8

Here's an article about enabling/disabling IPv6 using ESXCLI
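One approach, which is also what the sample kickstart below uses, is to disable IPv6 on the VMkernel TCP/IP module (a reboot is required for the module parameter to take effect):

esxcli system module parameters set -m tcpip3 -p ipv6=0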

Tip #9

Here's an article about creating custom VIB for ESXi 5.1

Here is a complete working example of an ESXi 5.1 kickstart that can help you convert your existing ESX(i) 4.x/5.x to ESXi 5.1:

# Sample kickstart for ESXi 5.1
# William Lam
# www.virtuallyghetto.com
#########################################
 
accepteula
install --firstdisk --overwritevmfs
rootpw vmware123
reboot
 
%include /tmp/networkconfig
 
%pre --interpreter=busybox
 
# extract network info from bootup
VMK_INT="vmk0"
VMK_LINE=$(localcli network ip interface ipv4 get | grep "${VMK_INT}")
IPADDR=$(echo "${VMK_LINE}" | awk '{print $2}')
NETMASK=$(echo "${VMK_LINE}" | awk '{print $3}')
GATEWAY=$(localcli network ip route ipv4 list | grep default | awk '{print $3}')
DNS="172.30.0.100,172.30.0.200"
HOSTNAME=$(nslookup "${IPADDR}" "${DNS}" | grep Address | grep "${IPADDR}" | awk '{print $4}')
 
echo "network --bootproto=static --addvmportgroup=false --device=vmnic0 --ip=${IPADDR} --netmask=${NETMASK} --gateway=${GATEWAY} --nameserver=${DNS} --hostname=${HOSTNAME}" > /tmp/networkconfig
 
%firstboot --interpreter=busybox
 
# enable VHV (Virtual Hardware Virtualization to run nested 64bit Guests + Hyper-V VM)
grep -i "vhv.enable" /etc/vmware/config || echo "vhv.enable = \"TRUE\"" >> /etc/vmware/config
 
# enable & start remote ESXi Shell  (SSH)
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
 
# enable & start ESXi Shell (TSM)
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell
 
# suppress ESXi Shell warning - Thanks to Duncan (http://www.yellow-bricks.com/2011/07/21/esxi-5-suppressing-the-localremote-shell-warning/)
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1
 
# ESXi Shell interactive idle time logout
esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 3600
 
# Change the default ESXi Admins group "ESX Admins" to a custom one "Ghetto ESXi Admins" for AD
vim-cmd hostsvc/advopt/update Config.HostAgent.plugins.hostsvc.esxAdminsGroup string "Ghetto ESXi Admins"
 
# Users that will have full access to DCUI even if they don't have admin permissions on ESXi host
vim-cmd hostsvc/advopt/update DCUI.Access string root,william,tuan
 
# Block VM guest BPDU packets, global configuration
esxcli system settings advanced set -o /Net/BlockGuestBPDU -i 1
 
# copy SSH authorized keys & overwrite existing
wget http://air.primp-industries.com/esxi5/id_dsa.pub -O /etc/ssh/keys-root/authorized_keys
 
# to disable SSH key logins, uncomment the next line
# sed -i 's/AuthorizedKeysFile*/#AuthorizedKeysFile/g' /etc/ssh/sshd_config
 
# rename local datastore to something more meaningful
vim-cmd hostsvc/datastore/rename datastore1 "$(hostname -s)-local-storage-1"
 
# assign license
vim-cmd vimsvc/license --set AAAAA-BBBBB-CCCCC-DDDDD-EEEEE
 
## SATP CONFIGURATIONS ##
esxcli storage nmp satp set --satp VMW_SATP_SYMM --default-psp VMW_PSP_RR
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR
 
###########################
## vSwitch configuration ##
###########################
 
#####################################################
# vSwitch0 : Active->vmnic0,vmnic1 Standby->vmnic2
#       failback: yes
#       faildetection: beacon
#       load balancing: portid
#       notify switches: yes
#       avg bw: 100000 Kbps
#       peak bw: 100000 Kbps
#       burst size: 819200 KBps
#       allow forged transmits: yes
#       allow mac change: no
#       allow promiscuous: no
#       cdp status: both
 
# attach vmnic1,vmnic2 to vSwitch0
esxcli network vswitch standard uplink add --uplink-name vmnic1 --vswitch-name vSwitch0
esxcli network vswitch standard uplink add --uplink-name vmnic2 --vswitch-name vSwitch0
 
# configure portgroup
esxcli network vswitch standard portgroup add --portgroup-name VMNetwork1 --vswitch-name vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name VMNetwork1 --vlan-id 100
esxcli network vswitch standard portgroup add --portgroup-name VMNetwork2 --vswitch-name vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name VMNetwork2 --vlan-id 200
esxcli network vswitch standard portgroup add --portgroup-name VMNetwork3 --vswitch-name vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name VMNetwork3 --vlan-id 333
 
# configure cdp
esxcli network vswitch standard set --cdp-status both --vswitch-name vSwitch0
 
### FAILOVER CONFIGURATIONS ###
 
# configure active and standby uplinks for vSwitch0
esxcli network vswitch standard policy failover set --active-uplinks vmnic0,vmnic1 --standby-uplinks vmnic2 --vswitch-name vSwitch0
 
# configure failure detection + load balancing (could have appended to previous line)
esxcli network vswitch standard policy failover set --failback yes --failure-detection beacon --load-balancing portid --notify-switches yes --vswitch-name vSwitch0
 
### SECURITY CONFIGURATION ###
esxcli network vswitch standard policy security set --allow-forged-transmits yes --allow-mac-change no --allow-promiscuous no --vswitch-name vSwitch0
 
### SHAPING CONFIGURATION ###
esxcli network vswitch standard policy shaping set --enabled yes --avg-bandwidth 100000 --peak-bandwidth 100000 --burst-size 819200 --vswitch-name vSwitch0
 
#####################################################
# vSwitch1 : Active->vmnic3,vmnic4 Standby->vmnic5
#       failback: no
#       faildetection: link
#       load balancing: mac
#       notify switches: no
#       allow forged transmits: no
#       allow mac change: no
#       allow promiscuous: no
#       cdp status: listen
#       mtu: 9000
 
# add vSwitch1
esxcli network vswitch standard add --ports 256 --vswitch-name vSwitch1
 
# attach vmnic3,4,5 to vSwitch1
esxcli network vswitch standard uplink add --uplink-name vmnic3 --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --uplink-name vmnic4 --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --uplink-name vmnic5 --vswitch-name vSwitch1
 
# configure mtu + cdp
esxcli network vswitch standard set --mtu 9000 --cdp-status listen --vswitch-name vSwitch1
 
# configure portgroup
esxcli network vswitch standard portgroup add --portgroup-name NFS --vswitch-name vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name FT_VMOTION --vswitch-name vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name VSPHERE_REPLICATION --vswitch-name vSwitch1
 
### FAILOVER CONFIGURATIONS ###
 
# configure active and standby uplinks for vSwitch1
esxcli network vswitch standard policy failover set --active-uplinks vmnic3,vmnic4 --standby-uplinks vmnic5 --vswitch-name vSwitch1
 
# configure failure detection + load balancing (could have appended to previous line)
esxcli network vswitch standard policy failover set --failback no --failure-detection link --load-balancing mac --notify-switches no --vswitch-name vSwitch1
 
### SECURITY CONFIGURATION ###
esxcli network vswitch standard policy security set --allow-forged-transmits no --allow-mac-change no --allow-promiscuous no --vswitch-name vSwitch1
 
# configure vmkernel interface for NFS traffic, FT_VMOTION and VSPHERE_REPLICATION traffic
VMK0_IPADDR=$(esxcli network ip interface ipv4 get | grep vmk0 | awk '{print $2}')
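# derive vmk1's IP from vmk0's by swapping the second octet for 51 (e.g. 172.30.0.2 -> 172.51.0.2)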
VMK1_IPADDR=$(echo ${VMK0_IPADDR} | awk '{print $1".51."$3"."$4}' FS=.)
VMK2_IPADDR=10.10.0.2
VMK3_IPADDR=10.20.0.2
esxcli network ip interface add --interface-name vmk1 --mtu 9000 --portgroup-name NFS
esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 ${VMK1_IPADDR} --netmask 255.255.255.0 --type static
esxcli network ip interface add --interface-name vmk2 --mtu 9000 --portgroup-name FT_VMOTION
esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 ${VMK2_IPADDR} --netmask 255.255.255.0 --type static
esxcli network ip interface add --interface-name vmk3 --mtu 9000 --portgroup-name VSPHERE_REPLICATION
esxcli network ip interface ipv4 set --interface-name vmk3 --ipv4 ${VMK3_IPADDR} --netmask 255.255.255.0 --type static
 
# Configure VMkernel traffic type (Management, VMotion, faultToleranceLogging, vSphereReplication)
esxcli network ip interface tag add -i vmk2 -t Management
esxcli network ip interface tag add -i vmk2 -t VMotion
esxcli network ip interface tag add -i vmk2 -t faultToleranceLogging
esxcli network ip interface tag add -i vmk3 -t vSphereReplication
 
# Configure VMkernel routes
esxcli network ip route ipv4 add -n 10.20.183/24 -g 172.30.0.1
esxcli network ip route ipv4 add -n 10.20.182/24 -g 172.30.0.1
 
# Disable IPv6 for VMkernel interfaces
esxcli system module parameters set -m tcpip3 -p ipv6=0
 
### MOUNT NFS DATASTORE ###
esxcli storage nfs add --host 172.51.0.200 --share /volumes/Primp/primp-6 --volume-name himalaya-NFS-primp-6
 
### ADV CONFIGURATIONS ###
esxcli system settings advanced set --option /Net/TcpipHeapSize --int-value 30
esxcli system settings advanced set --option /Net/TcpipHeapMax --int-value 120
esxcli system settings advanced set --option /NFS/HeartbeatMaxFailures --int-value 10
esxcli system settings advanced set --option /NFS/HeartbeatFrequency --int-value 20
esxcli system settings advanced set --option /NFS/HeartbeatTimeout --int-value 10
esxcli system settings advanced set --option /NFS/MaxVolumes --int-value 128
 
### SYSLOG CONFIGURATION ###
esxcli system syslog config set --default-rotate 20 --loghost vcenter50-3.primp-industries.com:514,udp://vcenter50-3.primp-industries.com:514,ssl://vcenter50-3.primp-industries.com:1514
 
# change the individual syslog rotation count
esxcli system syslog config logger set --id=hostd --rotate=20 --size=2048
esxcli system syslog config logger set --id=vmkernel --rotate=20 --size=2048
esxcli system syslog config logger set --id=fdm --rotate=20
esxcli system syslog config logger set --id=vpxa --rotate=20
 
### NTP CONFIGURATIONS ###
cat > /etc/ntp.conf << __NTP_CONFIG__
restrict default kod nomodify notrap noquery nopeer
restrict 127.0.0.1
server 0.vmware.pool.ntp.org
server 1.vmware.pool.ntp.org
__NTP_CONFIG__
/sbin/chkconfig ntpd on
 
### FIREWALL CONFIGURATION ###
 
# enable firewall
esxcli network firewall set --default-action false --enabled yes
 
# services to enable by default
FIREWALL_SERVICES="syslog sshClient ntpClient updateManager httpClient netdump"
for SERVICE in ${FIREWALL_SERVICES}
do
 esxcli network firewall ruleset set --ruleset-id ${SERVICE} --enabled yes
done
 
# backup ESXi configuration to persist changes
/sbin/auto-backup.sh
 
# enter maintenance mode
esxcli system maintenanceMode set -e true
 
# copy firstboot script logs to the persistent datastore
cp /var/log/hostd.log "/vmfs/volumes/$(hostname -s)-local-storage-1/firstboot-hostd.log"
cp /var/log/esxi_install.log "/vmfs/volumes/$(hostname -s)-local-storage-1/firstboot-esxi_install.log"
 
# Needed for configuration changes that could not be performed in esxcli
esxcli system shutdown reboot -d 60 -r "rebooting after host configurations"

Categories // Uncategorized Tags // esxcli, esxi5.1, kickstart, ks.cfg, vSphere 5.1

Disable LUN During ESXi Installation

04.17.2012 by William Lam // 14 Comments

Many of us who worked with classic ESX back in the day can recall that one of the scariest things during an install/re-install or upgrade of an ESX host with SAN-attached storage was the risk of accidentally installing ESX onto one of the LUNs that housed our Virtual Machines. As a precaution, most vSphere administrators would ask their Storage administrators to either disable/unplug the ports on the switch or temporarily mask away the LUNs at the array during an install or upgrade.

Another trick that gained popularity due to its simplicity was unloading the HBA drivers before the installation of ESX began, usually done as part of the %pre section of a kickstart installation. This would ensure that your SAN LUNs were not visible during the installation, and it was much faster than involving your Storage administrators. With the release of ESXi, this trick no longer works. There have been several enhancements in the ESXi kickstart that allow you to specify specific types of disks during installation; however, it is still possible that you could see your SAN LUNs during the installation.

I know the question about disabling the HBA drivers for ESXi comes up pretty frequently, and I had just assumed it was not possible. A recent question on the same topic on our internal Socialcast site got me thinking, and with some research and testing, I found a way to do it by leveraging LUN masking at the ESXi host level using ESXCLI. My initial thought was to mask based on the HBA adapter (C:*T:*L:*), but this would still be somewhat manual depending on your various host configurations, as shown in the example below.
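For illustration, a location-based masking rule would look like the following (the rule number, adapter, and C:T:L values here are hypothetical); since the values differ per host and per path, this approach quickly becomes manual:

esxcli storage core claimrule add -r 2011 -P MASK_PATH -t location -A vmhba2 -C 0 -T 0 -L 0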

That solution was not ideal, but with help from some of our VMware GSS engineers (Paudie/Daniel), they mentioned that you could create claim rules based on a variety of criteria, one of which is the transport type. This meant that I could create a claim rule to mask all LUNs that had one of the following supported transport types: block, fc, iscsi, iscsivendor, ide, sas, sata, usb, parallel or unknown.

Here are the following commands to run if you wish to create a claim rule to mask away all LUNs that are FC based:

esxcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t plugin -P NMP
esxcli storage core claimrule run

Another option mentioned by Paudie was masking based on a particular driver, such as the Emulex driver (lpfc680). To see which driver a particular adapter is using, you can run the following ESXCLI command:

esxcli storage core adapter list

Here is a sample of the output:
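(Illustrative only: the adapter names, UIDs, and descriptions below are invented for the example; the Driver column is what identifies the driver, e.g. lpfc680 for Emulex.)

HBA Name  Driver   Link State  UID                                   Description
--------  -------  ----------  ------------------------------------  -------------------------------------------
vmhba0    mptsas   link-n/a    sas.5001e4f1aabbccdd                  (0:3:0.0) LSI Logic / Symbios Logic SAS1068
vmhba1    lpfc680  link-up     fc.20000000c99e42a1:10000000c99e42a1  (0:11:0.0) Emulex LPe11000 4Gb Fibre Channel HBA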

For more details about creating claim rules be sure to use the --help option or take a look at the ESXCLI documentation starting on pg 88 here.

Now this is great, but how do we go about automating this a bit further, since the claim rules still need to be created by a user before starting an ESXi installation and removed after the installation? I started testing a customized ESXi 5 ISO that would "auto-magically" create the proper claim rules and remove them afterwards, and with some trial and error, I was able to get it working.

The process is exactly the same as laid out in an earlier article How to Create Bootable ESXi 5 ISO & Specifying Kernel Boot Option, but instead of tweaking the kernelopt in the boot.cfg, we will just be appending a custom mask.tgz file that contains our "auto-magic" claim rule script. Here is what the script looks like:

#!/bin/ash

localcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
localcli storage core claimrule load
localcli storage core claiming unclaim -t plugin -P NMP
localcli storage core claimrule run

cat >> /etc/rc.local << __CLEANUP_MASKING__
localcli storage core claimrule remove -r 2012
__CLEANUP_MASKING__

cat > /etc/init.d/maskcleanup << __CLEANUP_MASKING__
sed -i 's/localcli.*//g' /etc/rc.local
rm -f /etc/init.d/maskcleanup
__CLEANUP_MASKING__

chmod +x /etc/init.d/maskcleanup

The script above creates a claim rule to mask all FC LUNs before the installation of ESXi starts, ensuring that the FC LUNs are not visible during the installation. It also appends a claim rule removal to /etc/rc.local, which actually executes before the installation is complete but does not take effect since it is not loaded. This ensures the claim rule is automatically removed before rebooting, and a simple init.d script is created to clean up this entry upon first boot. All said and done, you will not be able to see your FC LUNs during the installation, but they will show up after the first reboot.

Disclaimer: Please ensure you do proper testing in a lab environment before using in Production.

To create the custom mask.tgz file, you will need to follow the steps below and then take the mask.tgz file and follow the article above in creating a bootable ESXi 5 ISO.

  1. Create the following directory: mkdir -p test/etc/rc.local.d
  2. Change into the "test/etc/rc.local.d" directory and create a script called mask.sh containing the lines above
  3. Set the execute permission on the script: chmod +x mask.sh
  4. Change back into the root of the "test" directory and run the following command: tar cvf mask.tgz *
  5. Update the boot.cfg as noted in the article and append mask.tgz to the module list (see the illustrative snippet below)
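For reference, the modules line in boot.cfg would end up looking something like this (abridged; the exact module list varies by build, and the only addition is the trailing mask.tgz):

modules=b.b00 --- useropts.gz --- k.b00 --- ... --- imgpayld.tgz --- mask.tgz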

Once you create your customized ESXi 5 ISO, you can just boot it up and either perform a clean installation or an upgrade without having to worry about SAN LUNs being seen by the installer. Though these steps are specific to ESXi 5, they should also work with ESXi 4.x (ESXCLI syntax may need to be changed), but please do verify before using in a production environment.

You can easily leverage this in a kickstart deployment by adding the claim rule creation in the %pre section and the claim rule removal in the %post section, ensuring that everything is ready to go upon first boot; a minimal sketch follows below. Take a look at this article for more kickstart tips/tricks in ESXi 5.
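Here is a minimal sketch of that kickstart integration, reusing the exact claim rule commands from the script above (rule number 2012 is the same arbitrary ID chosen earlier):

%pre --interpreter=busybox
localcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
localcli storage core claimrule load
localcli storage core claiming unclaim -t plugin -P NMP
localcli storage core claimrule run

%post --interpreter=busybox
localcli storage core claimrule remove -r 2012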

Categories // Automation, ESXi Tags // esxi 5, esxi4.1, kickstart, ks.cfg, LUN
