If you reinstall ESXi 5 on a system that had a previous copy, one thing you might have noticed is that the local VMFS datastore label is preserved. The same is true if you perform an unattended installation using kickstart and specify the overwritevmfs parameter: a new VMFS volume is created, but it still uses the old label. This can cause issues for scripted installs that rename the local datastore and expect the default "datastore1" label to be present.
UPDATE (12/21) - This issue has been resolved in the latest release, ESXi 5.0 Update 2; refer to the release notes for details on other updates and fixes.
It is actually pretty easy to get around this problem by deleting the VMFS partition prior to starting the new ESXi installation. Below are three methods, depending on the installation option you have chosen. Please be absolutely sure you have identified the right VMFS volume before deleting the partition.
Method 1 - While you still have login access to the previous ESXi install
If you still have access to the system before the reinstall, you can delete the VMFS partition before rebooting and starting the installation (ISO or kickstart). You will first need to identify the device that is backing your local datastore; the following ESXCLI command provides a mapping of datastore to device.
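One command that gives this mapping is the VMFS extent listing, which shows each datastore name alongside the device that backs it:

esxcli storage vmfs extent list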
Make a note of the "Device Name", which can be an naa.* or mpx.* identifier depending on how your ESXi host identifies the disk. Also make a note of the partition number for the VMFS volume, which we will confirm in the next step. Using partedUtil we can check the partitions found on the disk and confirm that partition 3 is being used for VMFS. Using the "getptbl" option and specifying the full path to the disk under /vmfs/devices/disks/naa.*, we can retrieve the partition info as shown below.
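The naa.* name below is just a placeholder for your own device; the output lists one line per partition, and the VMFS partition is the entry whose type column reads "vmfs":

partedUtil getptbl /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX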
Now we just need to delete this partition, which wipes the VMFS headers, including the datastore label. We can do this with partedUtil's "delete" option, which requires the full path to the disk from the previous step.
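Assuming the same placeholder device name as above and that the VMFS volume is partition 3, the delete looks like this:

partedUtil delete /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX 3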
You can now reinstall ESXi and it will use "datastore1" as its default VMFS label.
Note: The disk that contains the local ESXi 5 install will always have VMFS as the 3rd partition, whereas other VMFS volumes will only have a single partition.
Method 2 - During manual installation using ESXi 5 ISO
When you boot up the ISO and are brought to the "Welcome to VMware ESXi 5.0.0 Installation" page, you will need to log in to the ESXi Shell by pressing ALT+F1. The username is root and there is no password, so just hit enter. Just like in Method 1, you will need to identify the device for your local datastore, but instead of esxcli you will need to use localcli, as hostd is not running at this point.
Here is a screenshot of identifying the local datastore device and deleting the VMFS partition:
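In text form, the equivalent commands are roughly as follows (the naa.* name is again a placeholder for your own device):

localcli storage vmfs extent list
partedUtil getptbl /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX
partedUtil delete /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX 3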
You can now jump back to the installer by pressing ALT+F2 and continue with the reinstall; it will use "datastore1" as its default VMFS label.
Method 3 - Kickstart Installation
If you wish to ensure that the default "datastore1" label is always available for scripted installs, you can use the following snippet in the %pre section of your kickstart. It searches all disks under /vmfs/devices/disks, finds the device that is backing a local ESXi installation, and deletes its VMFS partition prior to starting the installation.
for DISK in $(ls /vmfs/devices/disks | grep -v vml);
do
DISK_PATH=/vmfs/devices/disks/${DISK}
VMFS_PARTITION_ID=$(partedUtil getptbl ${DISK_PATH} | grep vmfs | awk '{print $1}')
if [[ ! -z ${VMFS_PARTITION_ID} ]] && [[ ${VMFS_PARTITION_ID} -eq 3 ]]; then
partedUtil delete ${DISK_PATH} 3
fi
done
Note: To be extra cautious, you should also consider disabling any additional remote LUNs that can be seen during the installation using the trick found here.
Paz says
Hello,
I had issues with your script. I read the discussion at http://communities.vmware.com/thread/328233 and came up with a modified script that works in my case:
%pre --interpreter=busybox
# Removing Previous Local Datastore Label for Reinstall in ESXi 5
# Find the current local datastore name/LABEL (Exclude all SAN HSV200 datastore and Exclude all IDE CD/DVD)
DeviceName="$(esxcli --formatter=csv --format-param=fields="Device,Model" storage core device list | grep -v "HSV200" | grep -v "IDE CDR" | grep -v "Device" | cut -d, -f1)"
DatastoreName="$(esxcli storage vmfs extent list | grep $DeviceName | awk '{print $1}')"
partedUtil delete "/vmfs/devices/disks/$DeviceName" 3
or
# Rename local datastore to something more meaningful
# Find the current local datastore name/LABEL (Exclude all SAN HSV200 datastore) (Exclude all IDE CD/DVD by PAZ)
DeviceName="$(esxcli --formatter=csv --format-param=fields="Device,Model" storage core device list | grep -v "HSV200" | grep -v "IDE CDR" | grep -v "Device" | cut -d, -f1)"
DatastoreName="$(esxcli storage vmfs extent list | grep $DeviceName | awk '{print $1}')"
NewDataStoreName="$(hostname -s)-local-storage-1"
# Rename the datastore
vim-cmd hostsvc/datastore/rename $DatastoreName "$NewDataStoreName"
William says
@Paz,
What issues did you have with the script? FYI - if you're running things in %pre, esxcli will not be valid, as hostd is not running, and you'll need to use localcli. Also, localcli does not have --formatter, so did you verify that your snippet of code works during %pre? Renaming is another option, but not really ideal.
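For what it's worth, a rough localcli-only way to grab the existing local datastore name in %pre (an untested sketch that assumes a single local VMFS extent) would be something like:

# pull the volume name (first column) from the extent list, suppressing the header row
LOCALDS=$(localcli --format-param=show-header=false storage vmfs extent list | cut -f1 -d" ")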
Kirk says
You can also rename it after the build via PowerCLI if you do any sort of PowerShell scripting post-install.
This will grab the name of the local VMFS that your ESXi 5.0 installation exists on:
$LocalDSName = Get-Datastore -VMHost $your_hostname | Select Name,FileSystemVersion,@{N="Shared";E={$_.Extensiondata.Summary.MultipleHostAccess}} | ?{$_.Shared -ne "True"} | ?{$_.FileSystemVersion -eq "5.54"} | %{$_.Name}
Once you have that name in a variable ($LocalDSName in this example), you can then perform a rename:
Get-Datastore -VMHost $your_hostname -Name $LocalDSName | Set-Datastore -Name $new_name_for_local_vmfs
This is performed while connected to your vCenter Server with PowerCLI.
I realize this is a bit off topic, as it's a post-configuration step after your host is at the very least on the network and connected to a vCenter Server, but it's an option if anyone is doing post-install scripting with PowerCLI.
JC says
I've placed the code in the %pre section but it does not appear to work. The original local datastore name/label is still present.
Also, when I walked through the code in a standalone script, the awk/print statement was generating an error (unexpected end of line), so I tried cut -f1 -d" " instead, which processed cleanly. But with either awk/print or cut, the original datastore name is still there at the end of the scripted build.
JC says
Update: Instead of trying to destroy the existing datastore, I decided to just retrieve and use the datastore name in the %post and %firstboot sections. The assumption is there is no attached storage and only 1 local datastore. I also use the FC mask code in the %pre section that's mentioned above (thanks!). The only change is I unmask at the very end of the %firstboot section, just to make sure the datastores reappear. Someone might have a cleaner way to do this, but it works for me on an existing server as well as a clean build.
############################
%post --interpreter=busybox
############################
# FIND EXISTING LOCAL DATASTORE NAME (if any)
LINECOUNT=`localcli --format-param=show-header=false storage vmfs extent list | wc -l`
if [ $LINECOUNT = 1 ] ; then
LOCALDS=`localcli --format-param=show-header=false storage vmfs extent list | cut -f1 -d" "`
else
LOCALDS="datastore1"
fi
# MOVE FILES TO LOCAL DATASTORE
mv /*.zip /vmfs/volumes/$LOCALDS
################################
%firstboot --interpreter=busybox
################################
SVRNAMESHORT=`hostname | cut -d'.' -f1`
LINECOUNT=`localcli --format-param=show-header=false storage vmfs extent list | wc -l`
if [ $LINECOUNT = 1 ] ; then
OLDDATASTORENAME=`localcli --format-param=show-header=false storage vmfs extent list | cut -f1 -d" "`
else
OLDDATASTORENAME="datastore1"
fi
DATASTORENAME="$SVRNAMESHORT-Datastore"
.....
.....
.....
vim-cmd hostsvc/datastore/rename $OLDDATASTORENAME $DATASTORENAME
William says
@JC,
The script syntax looks fine, it's pretty straightforward. Not sure why you had issues running it.
Anyhow, identifying the previous VMFS volume and just renaming it is definitely another option. If you have a simple setup, this should work fine, but if you have additional local datastores, that could pose an issue. Thanks for the feedback and for sharing the snippet of code.
Anonymous says
hello William,
I have used your code in the %pre, but it doesn't work. Why?
Regards
Eric
William says
@Anonymous,
Can you manually run those commands and see if it produces the correct results? What version of ESXi are you using?
Anonymous says
I couldn't get it to work either. Here is what I did in the first few lines of the %firstboot section:
# Find the current datastore name
DeviceName="$(esxcli --formatter=csv --format-param=fields="Device,Model" storage core device list | grep -v "100E-00" | grep -v "Device" | grep -v "IDE CDR" | grep -v "iSCSIDisk" |grep -v "Virtual CDROM" | cut -d, -f1)"
DatastoreName="$(esxcli storage vmfs extent list | grep $DeviceName | awk '{print $1}')"
# Rename the partition to LocalESXDrive#
NewDataStoreName="LocalESXDrive59"
vim-cmd hostsvc/datastore/rename $DatastoreName $NewDataStoreName
You probably don't need that many greps in the first statement; grep -v "Device" | grep -v "CDR" is probably good enough. The other greps were used to eliminate SAN datastores that should not actually be visible if you run this as the first entries in the %firstboot.
grep -v is used to eliminate lines. Run the following command on a machine before you rebuild your system to see what entries are displayed and whether anything else needs to be removed in order to leave only the local disk:
esxcli --formatter=csv --format-param=fields="Device,Model" storage core device list
trodemaster says
Here is my variation on this technique. It includes some log file redirection and moving.
# Create local vmfs name
localvmfsNewName=$(hostname -s)-boot
# Determine real disk name
for DISK in $(ls /vmfs/devices/disks | grep -v vml);
do
DISK_PATH=/vmfs/devices/disks/${DISK}
VMFS_PARTITION_ID=$(partedUtil getptbl ${DISK_PATH} | grep vmfs | awk '{print $1}')
if [[ ! -z ${VMFS_PARTITION_ID} ]] && [[ ${VMFS_PARTITION_ID} -eq 3 ]]; then
localvmfsOldName=$(esxcli --formatter=csv --format-param=fields="DeviceName,VolumeName" storage vmfs extent list | grep ${DISK} | cut -d "," -f 2)
fi
done
# rename local datastore to something more meaningful
vim-cmd hostsvc/datastore/rename $localvmfsOldName $localvmfsNewName
if [ -e /vmfs/volumes/${localvmfsNewName} ]
then
echo "rename of local datastore " $localvmfsOldName " to " $localvmfsNewName " successful!"
# Generate a new scratch directory path for this host on a Datastore
scratchdirectory=/vmfs/volumes/${localvmfsNewName}/Logs-$(hostname 2> /dev/null)-$(esxcfg-info -b 2> /dev/null)
# Create the scratch directory
mkdir -p $scratchdirectory
# Change the advanced configuration option
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string $scratchdirectory
# copy %first boot script logs to persisted datastore
cp /var/log/hostd.log $scratchdirectory
cp /var/log/esxi_install.log $scratchdirectory
else
echo "$localvmfsOldName Failed to be renamed & scratch directory not setup!!"
# copy %first boot script logs to persisted datastore
cp /var/log/hostd.log /vmfs/volumes/$localvmfsOldName
cp /var/log/esxi_install.log /vmfs/volumes/$localvmfsOldName
fi
Anonymous says
All of this is very interesting. Maybe one of you can help me with a nagging question. My kickstart script does almost all I need. Does anyone have a method to create a second datastore for the VMs on a separate disk set during the kickstart process?
Thanks for any guidance with script code anyone has for this. This is an ESXi 5.0 ks.
Anonymous says
Thanks for posting this but I couldn't get it to work in the %pre section of the kickstart file either. The delete command failed because the filesystem was read-only:
2012-10-08 19:19:15.096Z DEBUG Running pre script: ...
2012-10-08 19:19:17.789Z HUMAN Error: Read-only file system during write on /dev/disks/naa.5000c5000954c363
Unable to delete partition 3 from device /vmfs/devices/disks/naa.5000c5000954c363
Am I missing something?
Thanks.
E.H.
Anonymous says
Thank you. Method 2 saved my mind.
NARASIMHA says
Hi, I am unable to delete partition 2 (an existing partition) from the device. This is the problem; can you help me?
partedUtil delete /vmfs/devices/disks/naa.6b8ca3a0f3316b0019b958cc17474586 2
Error: Read-only file system during write on /dev/disks/naa.6b8ca3a0f3316b0019b958cc17474586
Unable to delete partition 2 from device /vmfs/devices/disks/naa.6b8ca3a0f3316b0019b958cc17474586
Jing ZHou says
Hi NARASIMHA, I encountered the same issue as you. Have you resolved it? Thanks!
Mack says
Hello,
Is there a way to change the local disk device ID after ESXi 5.1 was cloned using a hard disk from another ESXi host? This is causing an issue when we add both ESXi nodes to vCenter, as it detects both disk devices as the same and mounts them as the same datastore instead of separate datastores. The problem is that ls -l /dev/disks on both ESXi hosts, original and cloned, shows the same naa.xxx id. Is there any way to get around this issue?
/dev/disks # ls -l /dev/disks
-rw------- 1 root root 598999040000 Feb 8 21:10 naa.600605b00ae81a901e3ce1f7ea38eaad
-rw------- 1 root root 4161536 Feb 8 21:10 naa.600605b00ae81a901e3ce1f7ea38eaad:1
-rw------- 1 root root 4293918720 Feb 8 21:10 naa.600605b00ae81a901e3ce1f7ea38eaad:2
-rw------- 1 root root 593761385984 Feb 8 21:10 naa.600605b00ae81a901e3ce1f7ea38eaad:3
-rw------- 1 root root 262127616 Feb 8 21:10 naa.600605b00ae81a901e3ce1f7ea38eaad:5
-rw------- 1 root root 262127616 Feb 8 21:10 naa.600605b00ae81a901e3ce1f7ea38eaad:6
-rw------- 1 root root 115326976 Feb 8 21:10 naa.600605b00ae81a901e3ce1f7ea38eaad:7
-rw------- 1 root root 299876352 Feb 8 21:10 naa.600605b00ae81a901e3ce1f7ea38eaad:8
lrwxrwxrwx 1 root root 36 Feb 8 21:10 vml.0200000000600605b00ae81a901e3ce1f7ea38eaad536572766552 -> naa.600605b00ae81a901e3ce1f7ea38eaad
lrwxrwxrwx 1 root root 38 Feb 8 21:10 vml.0200000000600605b00ae81a901e3ce1f7ea38eaad536572766552:1 -> naa.600605b00ae81a901e3ce1f7ea38eaad:1
lrwxrwxrwx 1 root root 38 Feb 8 21:10 vml.0200000000600605b00ae81a901e3ce1f7ea38eaad536572766552:2 -> naa.600605b00ae81a901e3ce1f7ea38eaad:2
lrwxrwxrwx 1 root root 38 Feb 8 21:10 vml.0200000000600605b00ae81a901e3ce1f7ea38eaad536572766552:3 -> naa.600605b00ae81a901e3ce1f7ea38eaad:3
lrwxrwxrwx 1 root root 38 Feb 8 21:10 vml.0200000000600605b00ae81a901e3ce1f7ea38eaad536572766552:5 -> naa.600605b00ae81a901e3ce1f7ea38eaad:5
lrwxrwxrwx 1 root root 38 Feb 8 21:10 vml.0200000000600605b00ae81a901e3ce1f7ea38eaad536572766552:6 -> naa.600605b00ae81a901e3ce1f7ea38eaad:6
lrwxrwxrwx 1 root root 38 Feb 8 21:10 vml.0200000000600605b00ae81a901e3ce1f7ea38eaad536572766552:7 -> naa.600605b00ae81a901e3ce1f7ea38eaad:7
lrwxrwxrwx 1 root root 38 Feb 8 21:10 vml.0200000000600605b00ae81a901e3ce1f7ea38eaad536572766552:8 -> naa.600605b00ae81a901e3ce1f7ea38eaad:8
/dev/disks #