How to Install VMware VSA with Running VMs

09.26.2011 by William Lam // 1 Comment

Those of you who want to quickly test out the new VMware VSA (vSphere Storage Appliance) will notice that you cannot just throw in a few ESXi 5 hosts that already have virtual machines on them. If you try to proceed with the VSA installation, you will see an error message about the presence of virtual machines, whether they are running or not.

This can make it difficult to evaluate or test the new VSA if you do not have additional hosts that can easily be re-deployed as vanilla ESXi 5 installations. While working on the previous article, How to Install VMware VSA in Nested ESXi 5 Host Using the GUI, I decided to test the behavior of a few other configuration variables found in the dev.properties file for the VSA Manager. It turns out that you can actually disable the host audit check, which includes the validation of running virtual machines, by changing the host.audit variable from "true" to "false" using the same trick documented here. You will need to restart the VSA Manager and then the vCenter Server service for the change to take effect.
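
For reference, the change is a one-line edit to the dev.properties file under the VSA Manager installation (the same file described in the nested ESXi article below). The exact key/value syntax here is assumed to follow standard Java properties format:

# C:\Program Files\VMware\Infrastructure\tomcat\webapps\VSAManager\WEB-INF\classes\dev.properties
# Change the host audit check from:
host.audit=true
# to:
host.audit=false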

**** DISCLAIMER: This is not supported by VMware and there may be specific checks that are now bypassed by disabling the host.audit parameter. Please use at your own risk and test before deploying on actual systems ****

One interesting observation made while testing this in a nested ESXi configuration: even though there is a message warning the user that any data found on the local VMFS volumes will be deleted, I did not see any process kick off to do so. This does not mean that was not the original intention, but there was no reformatting of the local VMFS and no removal or powering off of the running virtual machines. While testing both a "supported" and a "ghetto" installation of the VSA, I found that several advanced settings were updated as part of the VSA installation; you should see the same if you look in the vmkernel.log of one of the ESXi 5 hosts:

2011-09-23T17:36:33.030Z cpu0:3475)Config: 346: "HostLocalSwapDirEnabled" = 0, Old Value: 0, (Status: 0x0)
2011-09-23T17:38:00.971Z cpu0:3258)Config: 346: "HeartbeatPanicTimeout" = 60, Old Value: 900, (Status: 0x0)
2011-09-23T17:38:07.069Z cpu1:2851)Config: 346: "EnableSVAVMFS" = 1, Old Value: 0, (Status: 0x0)
2011-09-23T17:38:07.090Z cpu1:2851)Config: 346: "VmkStressEnable" = 0, Old Value: 1, (Status: 0x0)
2011-09-23T17:44:22.163Z cpu1:3477)Config: 346: "SIOControlFlag2" = 1, Old Value: 0, (Status: 0x0)

One setting that sparked my curiosity is EnableSVAVMFS, a hidden setting on the ESXi host that one can view using vsish. Per the limited documentation found in vsish, this parameter enables some sort of optimization on the local VMFS volume.
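
If you want to check it yourself from Tech Support Mode, vsish can read the option directly. I am assuming here that it lives under the VMFS3 config group; if the path differs on your build, browse /config in an interactive vsish session to locate it:

# Read the hidden option (path assumed to be under the VMFS3 group)
vsish -e get /config/VMFS3/intOpts/EnableSVAVMFS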

Thanks to @VMwareStorage (Cormac Hogan, VMware Technical Marketing for Storage) for the quick answer to my question on Twitter. It looks like this parameter does the following:

"Forces linear allocation of VMDKs on local VMFS for VSA. Improves mirroring performance across VSAs apparently" 

There was nothing in the vmkernel.log that would indicate the local VMFS was reformatted or that files had to be deleted to support the VSA installation. I can understand why VMware wanted a vanilla installation with no running VMs: it simplifies the installation process. Another reason I can think of is that any initial storage consumption offsets the amount of "available" storage that can be set up on the VSA cluster. The amount of available storage per host must be equal across all nodes in a two- or three-node cluster, to ensure there is sufficient space for replication. Just understand that with running virtual machines on one or more ESXi nodes, the node with the smallest amount of free physical storage dictates what the rest of the VSA nodes will be configured to; for example, if two nodes each have 1TB free but the third has only 750GB free, every node contributes only 750GB to the cluster.

You may also find yourself in a chicken-and-egg problem if the VSA fails to install and reverts its changes, which includes putting the ESXi hosts into maintenance mode. This will fail on the node that is running the vCenter Server and VSA Manager, which is another reason you would want to run the management system outside of the VSA cluster.

Without further ado, I recorded a quick six-minute video demonstrating the installation of the new VMware VSA on ESXi 5 hosts that have running virtual machines, including the vCenter Server and VSA Manager running on one of the nodes (the video is awesome when you bump up the audio):

Installing VMware VSA with Running VMs from lamw on Vimeo.

Not only is this not supported, it is also NOT a best practice to run the vCenter Server and VSA Manager within the VSA cluster, because you may potentially have issues with replication if vCenter and the VSA Manager go down. In my testing, I found that I could take down vCenter and the VSA Manager and the NFS volumes continued to function and the cluster continued to churn away. Any virtual machines running on the VSA volumes will automatically be restarted by vSphere HA. Once the VSA Manager has recovered, it will automatically ensure the volumes have all synchronized and re-protect the VSA cluster.

Note: It is important to understand that even though you can install the VMware VSA with running virtual machines using the hack above, the requirement of a vanilla ESXi 5 installation is still 100% mandatory. You MUST still have only a single vSwitch (vSwitch0) with only a single vmnic (vmnic0) connected to the vSwitch, and only the two default portgroups, "VM Network" and "Management Network", must exist; there is no workaround for this requirement. If you have a host on which you plan to run VMs prior to the VSA installation, make sure they are on the "VM Network" portgroup, as additional portgroups are not supported prior to installation of the VSA.
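
A quick way to sanity-check a host's network layout before kicking off the installer is from the Tech Support Mode shell; the output should list only vSwitch0 with vmnic0 as its sole uplink and just the two default portgroups:

# List all vSwitches and their portgroups on the host
esxcfg-vswitch -l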

I am hoping that some of these requirements are relaxed in a future release of the VMware VSA, and possibly a version that would work with the vCVA (vCenter Virtual Appliance). For now, if you have limited hardware or would like to use existing ESXi 5 hosts with running virtual machines (they need to be configured like a vanilla installation of ESXi 5), then you can run everything on either a two- or three-node cluster; just be aware of the caveats.

For more in-depth information and details about the new VSA, please check out the VMware Storage Blog - vSphere Storage Appliance Links and be sure to follow Cormac Hogan on Twitter at @VMwareStorage.

Categories // Uncategorized Tags // ESXi 5.0, evc, nested, vsa, vSphere 5.0

How to Query VM Disk Format in vSphere 5

09.25.2011 by William Lam // 5 Comments

Prior to vSphere 5, it was not trivial to identify the particular disk format of a given virtual machine's disk. Using the vSphere Client, you would see a virtual machine's disk displayed as either thin or thick. The problem with this is that the "thick" format can be either:

  • zeroedthick - A thick disk has all space allocated at creation time and the space is zeroed on demand as the space is used
  • eagerzeroedthick - An eager zeroed thick disk has all space allocated and wiped clean of any previous contents on the physical media at creation time. Such disks may take longer to create than other disk formats.
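
For reference, all three formats can be created explicitly from the ESXi shell with vmkfstools (the datastore path below is just an example):

# Create a 10GB disk in each of the three formats
vmkfstools -c 10G -d zeroedthick /vmfs/volumes/datastore1/testvm/zt.vmdk
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/testvm/ezt.vmdk
vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/testvm/thin.vmdk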

Users were not able to distinguish the exact type using the vSphere Client or the vSphere 4 APIs. With the release of vSphere 4, VMware did introduce a new property in the vSphere 4 API called eagerlyScrub, which was supposed to help identify whether a virtual disk was allocated as an eagerzeroedthick disk. Unfortunately, there may have been a bug with the property, as it never gets modified whether a disk is created as zeroedthick or eagerzeroedthick.

The only method I was aware of to truly figure out the disk format was to manually parse the virtual machine's vmware.log file to identify the disk type, which I wrote a script for in 2009.

During the vSphere 5 beta, I noticed the vSphere Client UI now properly displays all three virtual machine disk formats: zeroedthick (displayed as flat), thin and eagerzeroedthick (displayed as thick).

Seeing that VMware now displays the three different formats, I wanted to see if it was possible to extract this using the vSphere 5 APIs and not have to rely on the hack of reading the vmware.log files. It turns out that the eagerlyScrub property now functions properly when a VMDK is provisioned in, or has been inflated/converted to, the eagerzeroedthick format. I wrote a simple vSphere SDK for Perl script called getVMDiskFormat.pl which allows you to extract the disk formats of all virtual machines, connecting either to vCenter or directly to an ESX(i) host.
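
The core logic is small. Here is a minimal sketch (not the full script) of how the vSphere 5 API exposes the format through a virtual disk's backing info; the classification follows the thinProvisioned and eagerlyScrub flags on flat VMDK backings:

#!/usr/bin/perl
# Minimal sketch: classify each VMDK as thin, zeroedthick or eagerzeroedthick
use strict;
use warnings;
use VMware::VIRuntime;

Opts::parse();
Opts::validate();
Util::connect();

my $vms = Vim::find_entity_views(view_type => 'VirtualMachine');
foreach my $vm (@$vms) {
    next unless defined $vm->config;
    foreach my $device (@{$vm->config->hardware->device}) {
        next unless $device->isa('VirtualDisk');
        my $backing = $device->backing;
        # Only flat VMDK backings carry the thinProvisioned/eagerlyScrub flags
        next unless $backing->isa('VirtualDiskFlatVer2BackingInfo');
        my $format = 'zeroedthick';
        $format = 'thin' if $backing->thinProvisioned;
        $format = 'eagerzeroedthick' if $backing->eagerlyScrub;
        print $vm->name, "\t", $device->deviceInfo->label, "\t", $format, "\n";
    }
}

Util::disconnect();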

The script allows for two types of output: console (printed directly to the console) or csv (creates a .csv file).

If you select csv output, by default it will be stored in a file called "vmDiskFormat.csv". You also have the option of specifying the filename by using the --filename flag and providing a name of your choosing.
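
A typical invocation looks like the following; the --server and --username values are placeholders, and I am assuming the output-type flag is named --output:

./getVMDiskFormat.pl --server vcenter --username administrator --output csv --filename myVMDisks.csv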

You can then load the csv file into Excel and easily sort through the various disk format types.

All this is already included in the latest version of the VMware vSphere Health Check Report 5.0 if you want a centralized report that includes virtual machine disk format.

Categories // Uncategorized Tags // api, eagerzeroedthick, ESXi 5.0, thin, vmdk, vSphere 5.0, vsphere sdk for perl, zeroedthick

How to Install VMware VSA in Nested ESXi 5 Host Using the GUI

09.19.2011 by William Lam // 13 Comments

We upgraded the ghettoDatacenter to vSphere 5 this weekend, and one of the things I wanted to play with was the new VMware VSA (vSphere Storage Appliance). Since we only have a single host, running nested ESXi was our only option, and this allowed us to easily deploy three vESXi 5.0 hosts and vCenter to tinker with the new VMware VSA.

UPDATE (09/16/12): You can use the same process outlined in this article to run the new VMware VSA 5.1 (vSphere Storage Appliance) in a nested ESXi configuration. Below is a screenshot of running VSA 5.1 in Nested ESXi 5.1.

One caveat in using vESXi hosts to test the VSA: during the selection of your ESXi hosts, the VSA expects the hosts to be EVC capable. The VSA will create a vSphere cluster and automatically enable an EVC baseline based on your CPUs. As you may or may not know, EVC cannot be supported on a vESXi host, and this would prevent you from selecting these hosts.

Luckily, this issue was solved by Vijay in his blog post here. There is a configuration file called dev.properties, located in C:\Program Files\VMware\Infrastructure\tomcat\webapps\VSAManager\WEB-INF\classes, which contains a line, "evc.config", specifying whether or not EVC should be configured. This configuration file appears to be used internally by VMware for some type of development, but by changing the parameter from true to false, the VSA will support non-EVC-capable hosts and will not enable EVC for the VSA vSphere cluster.
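
The edit itself is a single line; I am assuming the file follows standard Java properties syntax:

# C:\Program Files\VMware\Infrastructure\tomcat\webapps\VSAManager\WEB-INF\classes\dev.properties
# Change:
evc.config=true
# to:
evc.config=false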

Note: This is most likely not supported by VMware; please use at your own risk, as with modifying any other values within this file.

Now, you might wonder: if Vijay had already documented this process in his blog, why am I repeating it? The issue Vijay identified with tweaking this configuration file was that the VSA GUI installer did not detect the change, so he had to rely on an alternative method of installation using the command line. Though not ideal, this method does work, but for first-time evaluators of the VMware VSA, the various command-line options can overwhelm or confuse users. It would be great if one could use the VSA GUI, which is much more intuitive, to perform the installation, and that is the reason for this article.

For the VSA to detect the new changes, you will need to restart the VMware VSA service and then the vCenter Server service under the Windows Services utility. I am not sure why both services need to be restarted, but I guess the VSA extension is not updated when just the VSA is restarted, which is unfortunate.
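
If you prefer the command line over the Services utility, the restarts can be scripted with net stop/net start. The vCenter display name below is the standard one for vSphere 5 on Windows, but confirm the exact VSA Manager service name in services.msc, as the one shown here is assumed:

REM Restart the VSA Manager service first (display name assumed; verify in services.msc)
net stop "VMware vSphere Storage Appliance Manager"
net start "VMware vSphere Storage Appliance Manager"
REM Then restart the vCenter Server service
net stop "VMware VirtualCenter Server"
net start "VMware VirtualCenter Server"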

Once both services have started up, open a new vSphere Client session to your vCenter Server and proceed with the VSA installation. During the selection of the hosts, you will now have the option of selecting your vESXi 5 hosts; a warning message is presented stating "Unsupported Hardware", but the installer will allow you to continue on.

After you have selected either two or three of your vESXi hosts, you will be prompted once more that this configuration is neither supported nor in the VMware HCL; go ahead and click OK.

After this, you will be able to go through the rest of the VSA installation, as long as you meet the default requirements of the VMware VSA noted in the documentation.

So if you have some interest in the new VMware VSA and do not have physical hardware to test with, you should consider deploying a couple of vESXi hosts and kicking the tires on the new VSA.

Categories // Uncategorized Tags // ESXi 5.0, evc, nested, vsa, vSphere 5.0

