How to Create and Modify vgz (vmtar) Files on ESXi 3.x/4.x

08.09.2011 by William Lam // 6 Comments

There were several questions today on the VMTN community forums regarding manipulating .vgz files in ESXi, also known as vmtar files. Due to the sparse amount of information on the web, I wanted to document some of the common operations that can be performed on vmtar files. I will not be going over the use cases for manipulating or creating custom vmtar files, but here is one use case.

UPDATE (10/16/18) - For ESXi 6.5+, please use the following commands; the example below uses the s.v00 file:

Decompress file:

gunzip < s.v00 > s.v00.xz
xz --single-stream --decompress < s.v00.xz > s.v00.vtar
vmtar -v -x s.v00.vtar -o s.v00.tar
tar -xvf s.v00.tar

Compress file:

tar -cvf s.v00.tar bin/ etc/ lib/ lib64/ opt/ usr/ var/
vmtar -v -c s.v00.tar -o s.v00.vtar
xz --single-stream --compress < s.v00.vtar > s.v00

You can find some of these vmtar files with the .vgz extension in the ESXi installation ISO, such as the install.vgz file used in the example below.

To operate on existing vmtar files, you will need access to an ESXi host via the ESXi Shell, where the /sbin/vmtar utility is available.

Usage: vmtar {[-x vtar/vgz-file] [-c tar/tgz-file] [-v] -o destination} | -t < vtar/vgz-file

In this example, we will copy the install.vgz to an ESXi host to perform some operations.
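You can copy the file over with scp, for example (the hostname below is just a placeholder):

# "esxi-host" is a placeholder; substitute your own ESXi host
scp install.vgz root@esxi-host:/tmp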

To list the contents of a vmtar file, you will need to use the -t option:
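vmtar -t < install.vgz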

To extract the contents of vmtar file, you will need to use the -x and -o option:

vmtar -x install.vgz -o install.tar

Note: The output will be a standard tar file, which will then need to be extracted before getting to the actual contents.

To extract the tar file, we will be using the tar utility:
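tar -xf install.tar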

Let's say we made a change to one of the files and would now like to re-create the vmtar file. We will first need to tar up the contents using the tar utility again:
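# directory names here are illustrative; include whatever the original archive contained
tar -cf install.tar bin etc usr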

To verify the contents were all tarred up, we can view the contents by using the following command:

tar -tf install.tar

Now we will create the vmtar file using the vmtar utility:

vmtar -c install.tar -o install.vgz

We can confirm the contents by using the vmtar -t option once again:

vmtar -t < install.vgz

If you decide to create your own custom vmtar files and want to verify the file layout, you can use vmkramdisk to assist you. Using the vdf command, make note of the number of tardisks that have been mounted:
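vdf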

Also make note of the filesystem layout by performing an "ls" on / (slash):
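ls /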

Now let's say you wanted to create a directory called virtuallyGhetto, with a file in that directory called foobar, and you wanted it to be mounted up under / (slash).

Here are the steps to perform the above:
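(The following is a minimal sketch; the plain vmkramdisk invocation used to mount the tardisk is an assumption, inferred from the unmount command shown below.)

mkdir virtuallyGhetto
touch virtuallyGhetto/foobar
tar -cf virtuallyGhetto.tar virtuallyGhetto
vmtar -c virtuallyGhetto.tar -o virtuallyGhetto.vgz
# mount syntax assumed from the -u (unmount) form shown below
vmkramdisk virtuallyGhetto.vgz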

Do you notice anything different? How about performing an "ls" on / (slash) again?

To umount the vmtar disk, you would use the following command:

vmkramdisk -u virtuallyGhetto.vgz

Categories // Automation, ESXi, Not Supported Tags // ESXi 4.1, ESXi 6.5, ESXi 6.7, vgz, vmtar, vSphere 4.1

Automating Storage DRS & Datastore Cluster Management in vSphere 5

07.27.2011 by William Lam // 1 Comment

Storage DRS is probably one of, if not the, coolest features in vSphere 5. Storage DRS allows you to cluster your datastores into what is known as a datastore cluster (storage pod) and automatically balances both your storage I/O and capacity, just like DRS does with your compute. The user interface is extremely easy to use, but as always, if you need to click through several screens to get to the outcome, some automation can never hurt 🙂

I decided to create a vSphere SDK for Perl script called datastoreClusterManagement.pl which allows you to automate all aspects of creating and managing your storage pod/cluster. You will need a system that has the vCLI installed or you can use VMware vMA 5 to run the script. You will also need to connect to a vCenter Server 5 for all SDRS operations.

The script supports 8 different operations, which are described below:

Operation                Description
List                     List all available datastore clusters
Query                    Query details for a specific datastore cluster
Create                   Create a datastore cluster
Delete                   Delete a datastore cluster (datastores are left intact)
Add Datastore            Add datastore(s) to an existing datastore cluster
Remove Datastore         Remove datastore(s) from an existing datastore cluster
Enter Maintenance Mode   Put a datastore into maintenance mode
Exit Maintenance Mode    Take a datastore out of maintenance mode

Here is an example of performing the "list" operation: 

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation list

Here is an example of performing the "query" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation query --pod homer-NFS-pod

Here is an example of performing the "create" operation w/single datastore:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation create --datacenter MN-physical --enable_sdrs true --enable_sdrs_iometric true --pod moe-NFS-pod --sdrs_automation automated --sdrs_evaluate_period 480 --sdrs_imbal_thres 30 --sdrs_latency 15 --sdrs_util_diff 20 --sdrs_util_space 60 --datastore himalaya-NFS-moe-primp-1

Here is an example of performing the "create" operation w/datastore file:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation create --datacenter MN-physical --enable_sdrs true --enable_sdrs_iometric true --pod moe-NFS-pod --sdrs_automation automated --sdrs_evaluate_period 480 --sdrs_imbal_thres 30 --sdrs_latency 15 --sdrs_util_diff 20 --sdrs_util_space 60 --datastore_file dsfile
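The datastore file (dsfile in the example above) is presumably just a plain-text list of datastore names, one per line, e.g.:

himalaya-NFS-moe-primp-1
himalaya-NFS-moe-primp-2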

Here is an example of performing the "delete" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation delete --pod moe-NFS-pod

Here is an example of performing the "add_datastore" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation add_datastore --pod moe-NFS-pod --datastore himalaya-NFS-moe-primp-2

Here is an example of performing the "remove_datastore" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation remove_datastore --pod moe-NFS-pod --datastore himalaya-NFS-moe-primp-1

Note: Both the "add_datastore" and "remove_datastore" operations support a single datastore and/or a datastore file.

Here is an example of performing the "ent_maint" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation ent_maint --pod homer-NFS-pod --datastore himalaya-NFS-moe-primp-5

Here is an example of performing the "ext_maint" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation ext_maint --pod homer-NFS-pod --datastore himalaya-NFS-moe-primp-5

There are also complete perldocs for this script, which can be viewed using the following command:

perldoc datastoreClusterManagement.pl

Categories // Automation, vSphere Tags // ESXi 5.0, SDRS, storage drs, storagePod, vSphere 5.0

How to Format and Create VMFS5 Volume using the CLI in ESXi 5

07.19.2011 by William Lam // 39 Comments

VMware always recommends formatting and creating a new VMFS volume using the vSphere Client, as it automatically aligns your VMFS volume. However, if you do not have access to the vSphere Client, or you want to format additional VMFS volumes via a kickstart, you can do so using the CLI and the partedUtil utility under /sbin.

~ # /sbin/partedUtil
Not enough arguments
Usage:
Get Partitions : get
Set Partitions : set ["partNum startSector endSector type attr"]*
Delete Partition : delete
Resize Partition : resize
Get Partitions : getptbl
Set Partitions : setptbl

With ESXi 5, an MBR (Master Boot Record) partition table is no longer used; it has been replaced with a GPT (GUID Partition Table). There is also only a single block size of 1MB, versus the 2, 4 and 8MB block sizes that were available in ESX(i) 4.x.

We can view the partitions of a device by using the "getptbl" option and ensure we don't have an existing VMFS volume:

~ # /sbin/partedUtil "getptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0"
gpt
652 255 63 10485760

Next we will need to create a partition by using the "setptbl" option:

/sbin/partedUtil "setptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0" "gpt" "1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 0"

The "setptbl" accepts 3 arguments:

  • diskName
  • label
  • partitionNumber startSector endSector type/GUID attribute

The diskName in this example is the full path to the device which is /vmfs/devices/disks/mpx.vmhba1:C0:T2:L0

The label will be gpt

The last argument is actually a string composed of 5 individual parameters:

  • partitionNumber - Pretty straightforward
  • startSector - This will always be 2048 for 1MB alignment for VMFS5
  • endSector - This will need to be calculated based on the size of your device (see the worked example after this list)
  • type/GUID - This is the GUID key for a particular partition type; for VMFS it will always be AA31E02A400F11DB9590000C2911D1B8
  • attribute - This will be 0, as shown in the examples in this article
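
For the example device above, "getptbl" reported a geometry of 652 cylinders, 255 heads and 63 sectors per track, so the end sector works out to 652 * 255 * 63 - 1 = 10474379, which is exactly the value used in the "setptbl" command.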

To view all GUID types, you can use the "showGuids" option:

~ # /sbin/partedUtil showGuids
Partition Type       GUID
vmfs                 AA31E02A400F11DB9590000C2911D1B8
vmkDiagnostic        9D27538040AD11DBBF97000C2911D1B8
VMware Reserved      9198EFFC31C011DB8F78000C2911D1B8
Basic Data           EBD0A0A2B9E5443387C068B6B72699C7
Linux Swap           0657FD6DA4AB43C484E50933C84B4F4F
Linux Lvm            E6D6D379F50744C2A23C238F2A3DF928
Linux Raid           A19D880F05FC4D3BA006743F0F84911E
Efi System           C12A7328F81F11D2BA4B00A0C93EC93B
Microsoft Reserved   E3C9E3160B5C4DB8817DF92DF00215AE
Unused Entry         00000000000000000000000000000000

Once you have the 3 arguments specified, you can create the partition:

~ # /sbin/partedUtil "setptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0" "gpt" "1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 0"
gpt
0 0 0 0
1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 0

UPDATE (01/15) - Here is a quick shell snippet that you can use to automatically calculate End Sector as well as creating the VMFS5 volume:

# example device path; substitute your own (this is the device used throughout this article)
DEVICE=/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0
# re-label the disk so the last line of getptbl output is the bare disk geometry
partedUtil mklabel ${DEVICE} msdos
# end sector = cylinders * heads * sectors/track - 1
END_SECTOR=$(eval expr $(partedUtil getptbl ${DEVICE} | tail -1 | awk '{print $1 " \\* " $2 " \\* " $3}') - 1)
/sbin/partedUtil "setptbl" "${DEVICE}" "gpt" "1 2048 ${END_SECTOR} AA31E02A400F11DB9590000C2911D1B8 0"
/sbin/vmkfstools -C vmfs5 -b 1m -S $(hostname -s)-local-datastore ${DEVICE}:1

Note: You can also use the above to create a VMFS-based datastore on a USB device; however, that is not officially supported by VMware, and performance with a USB-based device will vary depending on the hardware and the speed of the USB connection.

We can verify by running the "getptbl" option on the device that we formatted:

~ # /sbin/partedUtil "getptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0"
gpt
652 255 63 10485760
1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

Finally, we will create the VMFS volume using our favorite vmkfstools utility; the syntax is the same as in previous releases of ESX(i):

~ # /sbin/vmkfstools -C vmfs5 -b 1m -S himalaya-SSD-storage-3 /vmfs/devices/disks/mpx.vmhba1:C0:T2:L0:1
Checking if remote hosts are using this device as a valid file system. This may take a few seconds...
Creating vmfs5 file system on "mpx.vmhba1:C0:T2:L0:1" with blockSize 1048576 and volume label "himalaya-SSD-storage-3".
Successfully created new volume: 4dfdb7b0-8c0dcdb5-e574-0050568f0111

Now you can refresh the vSphere Client or run vim-cmd hostsvc/datastore/refresh to view the new datastore that was created.

Categories // Automation, ESXi Tags // ESXi 5.0, gpt, partedUtil, usb, vmfs, vSphere 5.0
