When deploying vSphere Infrastructure Navigator (VIN), it is automatically associated with the vCenter Server from which it was deployed, and this behavior is by design. This means that if you have two vCenter Servers, you will need to deploy two separate VIN instances, one for each vCenter Server, as shown in the diagram below.
For scenarios where you have separate management and compute clusters, each with their own vCenter Server, this can pose a problem if you want to run all of your "infrastructure" virtual machines in the management cluster and not in the compute cluster. This very topic was recently brought up in an internal discussion, and after explaining how VIN works, I safely assumed this behavior could not be modified. It turns out the discussion piqued the interest of one of the VIN developers, and a suggestion was made on how one could potentially change this behavior. This untested (not supported) "workaround" allows a user to deploy a single VIN instance in the management cluster and have it associate with another vCenter Server and its workloads. Below is a diagram of what this would look like.
Disclaimer: This is not officially supported by VMware; use at your own risk.
There was also one major issue with the workaround: the changes would not persist after a reboot of the VIN instance, which meant you had to repeat the series of steps each time you needed to reboot VIN. After doing a bit of research and testing, I came up with a solution in which the changes are automatically re-applied to ensure they persist across reboots. Basically, what we are going to do is deploy a VIN instance on each vCenter Server, but initially configure only the VIN instance in the compute vCenter Server that you wish to monitor. Once that is configured, we will copy a few configuration properties and transfer them over to the VIN instance in our management vCenter Server.
Step 1 - Deploy and configure a VIN instance as you normally would in the vCenter Server that you wish to manage. From my testing, you will also need to ensure you enable discovery using the vSphere Web Client before proceeding to the next step.
Step 2 - Log in to the VIN instance via SSH; we will need to collect a few pieces of information from /opt/vmware/etc/vami/ovfEnv.xml, which contains the vService Extension information for the VIN instance. You will need to record the following values from the file (a quick way to list them is shown after the list):
- evs:URL
- evs:Token
- evs:X509Thumbprint
- evs:IP
- evs:Address
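If you want to double-check that you have captured everything, a simple grep against the file will print the relevant lines (this is just a convenience; the exact XML layout of ovfEnv.xml may vary between VIN versions):

grep -E 'evs:(URL|Token|X509Thumbprint|IP|Address)' /opt/vmware/etc/vami/ovfEnv.xml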
Step 3 - Download the updateVINvCenterServer.sh script and update it with the information you collected in Step 2. Once this is done, you can power off the VIN instance you just deployed (we will delete this instance later).
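I am not reproducing the contents of updateVINvCenterServer.sh here, but conceptually the edit boils down to filling in the values you recorded in Step 2 so the script can rewrite ovfEnv.xml on the new appliance. Below is a minimal sketch of that idea only, not the actual script, and it assumes the values appear as <evs:...> elements in the file; the variable names and placeholders are purely illustrative:

#!/bin/bash
# Illustrative sketch only -- NOT the actual updateVINvCenterServer.sh.
# Fill in the values recorded in Step 2 from the compute VIN instance.
EVS_URL="PASTE-evs:URL-VALUE-HERE"
EVS_TOKEN="PASTE-evs:Token-VALUE-HERE"
EVS_THUMBPRINT="PASTE-evs:X509Thumbprint-VALUE-HERE"
EVS_IP="PASTE-evs:IP-VALUE-HERE"
EVS_ADDRESS="PASTE-evs:Address-VALUE-HERE"

OVFENV=/opt/vmware/etc/vami/ovfEnv.xml
cp "${OVFENV}" "${OVFENV}.bak"   # keep a copy of the generated file

# Swap in the recorded values, assuming <evs:Key>value</evs:Key> elements.
# Note: if a value contains "|" or "&", it would need to be escaped for sed.
sed -i \
  -e "s|<evs:URL>[^<]*</evs:URL>|<evs:URL>${EVS_URL}</evs:URL>|" \
  -e "s|<evs:Token>[^<]*</evs:Token>|<evs:Token>${EVS_TOKEN}</evs:Token>|" \
  -e "s|<evs:X509Thumbprint>[^<]*</evs:X509Thumbprint>|<evs:X509Thumbprint>${EVS_THUMBPRINT}</evs:X509Thumbprint>|" \
  -e "s|<evs:IP>[^<]*</evs:IP>|<evs:IP>${EVS_IP}</evs:IP>|" \
  -e "s|<evs:Address>[^<]*</evs:Address>|<evs:Address>${EVS_ADDRESS}</evs:Address>|" \
  "${OVFENV}"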
Step 4 - Deploy another VIN instance into the management vCenter Server. Once the VIN instance has network connectivity, go ahead and scp the modified script to it. Log in to the VIN instance via SSH and execute the script by running the following command: ./updateVINvCenterServer.sh This will take the information that was collected and update its ovfEnv.xml. Lastly, you will need to restart the VIN engine for the changes to take effect by running the following command: /etc/init.d/vadm-engine restart
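Put together, the sequence for this step looks roughly like the following (the hostname vin-mgmt.lab.local is just an example for the management VIN appliance):

scp updateVINvCenterServer.sh root@vin-mgmt.lab.local:/root/
ssh root@vin-mgmt.lab.local
# then, from the SSH session on the appliance:
chmod +x /root/updateVINvCenterServer.sh
/root/updateVINvCenterServer.sh
/etc/init.d/vadm-engine restart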
Step 5 - Log in to your compute vCenter Server via the vSphere Web Client and re-enable discovery under the vSphere Infrastructure Navigator tab. You are now actually talking to the VIN instance running in your management vCenter Server!
Step 6 - Shortly after the discovery process has started, you should now be able to see VIN information for the virtual machines residing under your compute vCenter Server without having to actually run VIN under that same vCenter Server. At this point, you can delete the VIN instance located in your compute vCenter Server.
Step 7 - This last step is needed to ensure the configuration persists across reboots, as ovfEnv.xml is dynamically regenerated on each boot. To do so, we just need to create a simple boot script that executes the script as well as restarts the VIN engine. We would normally add the commands to /etc/rc.local, but since VIN is SLES-based, that file does not exist. However, you can create /etc/init.d/after.local (which executes commands after the initial init scripts) and add the following two commands (a one-shot way to create the file is shown after them):
/root/updateVINvCenterServer.sh
/etc/init.d/vadm-engine restart
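If it helps, here is one way to create that file in a single shot (a minimal sketch; the script path assumes you copied updateVINvCenterServer.sh to /root as in Step 4):

cat > /etc/init.d/after.local << 'EOF'
#!/bin/sh
# Re-apply the vCenter Server settings and restart the VIN engine after boot
/root/updateVINvCenterServer.sh
/etc/init.d/vadm-engine restart
EOF
chmod +x /etc/init.d/after.local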
Using this solution, you can now run multiple VIN instances and have them manage separate vCenter Servers, all while running under a single vCenter Server compute cluster. Below is a diagram of what that would look like, as well as a screenshot of the environment I have set up in my lab. Though this solution is not officially supported, I have seen a few folks ask for this functionality within VIN. Do you think this is something VIN should support, allowing it to be decoupled from the vCenter Server it is deployed from? If so, please leave a comment, or share any additional feedback that will help engineering decide whether this is something they should look into.
Hi William
Yes, this is what I would always like for clients, just like you can do with vCenter Operations. Having it linked to the vCenter Server it is deployed from is silly in most cases.
I know I have mailed you about this previously. Good to know you can get it to work.
Thanks
David
It clearly addresses our queries and gives us ideas for turning this infrastructure into a world-class infrastructure.