Those of you who want to quickly test out the new VMware VSA (vSphere Storage Appliance) will notice that you cannot just throw a few ESXi 5 hosts at it if those hosts already contain virtual machines. If you try to proceed with the VSA installation, you will see an error message about the presence of virtual machines, whether they are running or not.
This can make it difficult to evaluate or test the new VSA if you do not have additional hosts that can be easily re-deployed as vanilla ESXi 5 installations. While working on the previous article How to Install VMware VSA in Nested ESXi 5 Host Using the GUI, I decided to test the behavior of a few other configuration variables found in the dev.properties file for the VSA Manager. It turns out that you can actually disable the host audit check, which includes the validation of running virtual machines, by changing the host.audit variable from "true" to "false" using the same trick documented here. You will need to restart the VSA Manager and then the vCenter Server service for the change to take effect.
**** DISCLAIMER: This is not supported by VMware and there may be specific checks that are now bypassed by disabling the host.audit parameter. Please use at your own risk and test before deploying on actual systems ****
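For reference, here is roughly what the change looks like on a Windows-based vCenter Server. The dev.properties path and the service names below are assumptions based on a default install, so verify them on your own system:

```
REM Edit the VSA Manager's dev.properties (path assumes a default
REM Windows vCenter Server 5.0 install -- verify on your system):
REM   C:\Program Files\VMware\Infrastructure\tomcat\webapps\VSAManager\WEB-INF\classes\dev.properties
REM and change the following line:
REM   host.audit=true   -->   host.audit=false

REM Restart the VSA Manager (it runs inside the vCenter tomcat instance),
REM then the vCenter Server service (service names assume a Windows install):
net stop vctomcat
net start vctomcat
net stop vpxd
net start vpxd
```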
One interesting observation made while testing this in a nested ESXi configuration is that even though there is a message warning the user that any data found on the local VMFS volumes will be deleted, I did not see any process kick off to actually do so. This does not mean that was not the original intention, but there was no reformatting of the local VMFS and no removal or powering off of the running virtual machines. While testing both a "supported" and a "ghetto" installation of the VSA, I found that several advanced settings were updated as part of the VSA installation; you should see the same if you look in the vmkernel.log of one of the ESXi 5 hosts:
```
2011-09-23T17:36:33.030Z cpu0:3475)Config: 346: "HostLocalSwapDirEnabled" = 0, Old Value: 0, (Status: 0x0)
2011-09-23T17:38:00.971Z cpu0:3258)Config: 346: "HeartbeatPanicTimeout" = 60, Old Value: 900, (Status: 0x0)
2011-09-23T17:38:07.069Z cpu1:2851)Config: 346: "EnableSVAVMFS" = 1, Old Value: 0, (Status: 0x0)
2011-09-23T17:38:07.090Z cpu1:2851)Config: 346: "VmkStressEnable" = 0, Old Value: 1, (Status: 0x0)
2011-09-23T17:44:22.163Z cpu1:3477)Config: 346: "SIOControlFlag2" = 1, Old Value: 0, (Status: 0x0)
```
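If you want to verify any of these on a host after the installer runs, the non-hidden settings can be queried from the ESXi Shell with esxcli. A minimal sketch, assuming the options live under the Misc group (which is what the setting names suggest, but double-check on your build):

```
# Query two of the advanced settings the VSA installer touched
# (the /Misc/... option paths are assumptions based on the
# setting names in the vmkernel.log entries above)
esxcli system settings advanced list -o /Misc/HeartbeatPanicTimeout
esxcli system settings advanced list -o /Misc/SIOControlFlag2
```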
One that sparked my curiosity is EnableSVAVMFS, which is a hidden setting on the ESXi host, but one that can be viewed using vsish. Per the limited documentation found in vsish, this parameter enables some sort of optimization on the local VMFS volume.
Thanks to @VMwareStorage (Cormac Hogan, VMware Technical Marketing for Storage) for the quick answer to my question on Twitter; it looks like this parameter does the following:
"Forces linear allocation of VMDKs on local VMFS for VSA. Improves mirroring performance across VSAs apparently"
There was nothing in the vmkernel.log to indicate that the local VMFS was reformatted or that files had to be deleted to support the VSA installation. I can understand why VMware wanted a vanilla installation with no running VMs: it simplifies the installation process. Another reason I can think of is that any initial storage consumption offsets the amount of "available" storage that can be presented by the VSA cluster. The amount of available storage per host must be equal across all nodes in a two- or three-node cluster to ensure there is sufficient space for replication, so the node with the smallest amount of free physical storage dictates what the rest of the VSA nodes will be configured with. For example, if two nodes each have 1TB free but the third has only 500GB free because of running VMs, the cluster will be sized around the 500GB node. Just make sure you understand this before leaving virtual machines running on one or more of the ESXi nodes.
You may also find yourself in a chicken-and-egg problem if the VSA installation fails and reverts its changes, which includes putting the ESXi hosts into maintenance mode. This will fail on the node that is running the vCenter Server and VSA Manager, since a host cannot enter maintenance mode while those virtual machines are still running on it, which is another reason you would want to run the management system outside of the VSA cluster.
Without further ado, I recorded a quick six-minute video demonstrating the installation of the new VMware VSA on ESXi 5 hosts that have running virtual machines, including the vCenter Server and VSA Manager running on one of the nodes (the video is awesome when you bump up the audio):
Installing VMware VSA with Running VMs from lamw on Vimeo.
Not only is this not supported, it is also NOT a best practice to run the vCenter Server and VSA Manager within the VSA cluster, because you may potentially have issues with replication if vCenter and the VSA Manager go down. In my testing, I found that I could take down vCenter and the VSA Manager and the NFS volumes continued to function and the cluster continued to churn away. Any virtual machines running on the VSA volumes will automatically be restarted by vSphere HA. Once the VSA Manager has recovered, it will automatically ensure the volumes have all synchronized and re-protect the VSA cluster.
Note: It is important to understand that even though you can install the VMware VSA with running virtual machines using the hack above, the requirement of a vanilla ESXi 5 network configuration is still 100% mandatory. You MUST still have only a single vSwitch (vSwitch0) with only a single vmnic (vmnic0) connected to it, and only the two default portgroups may exist: "VM Network" and "Management Network". There is no workaround for this requirement. If you have a host on which you plan to run VMs prior to the VSA installation, make sure they are on the "VM Network" portgroup, as additional portgroups are not supported prior to installation of the VSA.
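A quick way to sanity-check a host against these networking requirements before kicking off the installer is from the ESXi Shell:

```
# Confirm the host still looks "vanilla" from a networking standpoint:
# a single vSwitch0 with vmnic0 as its only uplink ...
esxcli network vswitch standard list
# ... and only the two default portgroups, "VM Network" and "Management Network"
esxcli network vswitch standard portgroup list
```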
I am hoping that some of these requirements are relaxed in a future release of the VMware VSA, and possibly a version that would work with the vCVA (vCenter Virtual Appliance). For now, if you have limited hardware or would like to use existing ESXi 5 hosts with running virtual machines (they still need to be configured like a vanilla installation of ESXi 5), you can run everything on either a two- or three-node cluster; just be aware of the caveats.
For more in-depth information and details about the new VSA, please check out the VMware Storage Blog - vSphere Storage Appliance Links and be sure to follow Cormac Hogan on Twitter at @VMwareStorage.