Configuring vSphere Infrastructure Navigator (VIN) To Manage An Alternate vCenter Server

06.11.2013 by William Lam // 2 Comments

When deploying vSphere Infrastructure Navigator (VIN), it is automatically associated with the vCenter Server from which it was deployed, and this behavior is by design. This means that if you have two vCenter Servers, you will need to deploy two separate VIN instances, one for each vCenter Server, as shown in the diagram below.

For scenarios where you have separate management and compute clusters, each with its own vCenter Server, this can pose a problem if you want to run all of your "infrastructure" virtual machines in the management cluster and not in the compute cluster. This very topic was recently brought up in an internal discussion, and after explaining how VIN works, I safely assumed this behavior could not be modified. It turns out the discussion piqued the interest of one of the VIN developers, and a suggestion was made on how one could potentially change this behavior. This untested (not supported) "workaround" allows a user to deploy a single VIN instance under the management cluster and associate it with another vCenter Server and its workloads. Below is a diagram of what this would look like.

Disclaimer: This is not officially supported by VMware, use at your own risk.

There was one major issue with the workaround: the changes would not persist after a reboot of the VIN instance, which meant you had to repeat the series of steps each time you needed to reboot VIN. After a bit of research and testing, I came up with a solution in which the changes are automatically re-applied so they persist across reboots. Basically, we are going to deploy a VIN instance on each vCenter Server, but initially configure only the VIN instance in the compute vCenter Server that you wish to monitor. Once that is configured, we will copy a few of its configuration properties over to the VIN instance in our management vCenter Server.

Step 1 - Deploy and configure a VIN instance as you normally would to the vCenter Server you wish to manage (the compute vCenter Server). From my testing, you will also need to ensure you enable discovery using the vSphere Web Client before proceeding to the next step.

Step 2 - Log in to the VIN instance via SSH; we will need to collect a few pieces of information from /opt/vmware/etc/vami/ovfEnv.xml, which contains the vService Extension information for the VIN instance. You will need to record the following properties from the file (a quick way to pull them out is shown after the list):

  • evs:URL
  • evs:Token
  • evs:X509Thumbprint
  • evs:IP
  • evs:Address

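As a convenience, a quick grep along these lines should print the relevant entries so you can copy the values out (a minimal sketch; the exact layout of ovfEnv.xml may differ between VIN versions, so adjust the pattern if needed):

grep -E 'evs:(URL|Token|X509Thumbprint|IP|Address)' /opt/vmware/etc/vami/ovfEnv.xml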

Step 3 - Download the updateVINvCenterServer.sh script and update it with the information you have collected in step 2. Once this is done, you can now power off the VIN instance you just deployed (we will delete this instance later).
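
The script contents are not reproduced here, but conceptually it simply writes the values recorded in Step 2 into the local ovfEnv.xml. Purely as an illustration of the edit you are making in this step (the variable names below are hypothetical and not necessarily the ones used by updateVINvCenterServer.sh), the portion you update amounts to something like this:

# hypothetical placeholders - fill in the values recorded in Step 2
EVS_URL="<evs:URL value from the compute VIN>"
EVS_TOKEN="<evs:Token value>"
EVS_THUMBPRINT="<evs:X509Thumbprint value>"
EVS_IP="<evs:IP value>"
EVS_ADDRESS="<evs:Address value>"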

Step 4 - Deploy another VIN instance into the management vCenter Server. Once the VIN instance has network connectivity, go ahead and scp the modified script to it. Log in to the VIN instance via SSH and execute the script by running the following command: ./updateVINvCenterServer.sh This takes the information that was collected and updates the local ovfEnv.xml. Lastly, you will need to restart the VIN engine for the changes to take effect by running the following command: /etc/init.d/vadm-engine restart
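
Put together, the sequence for this step looks roughly like the following (the hostname is a placeholder for the new VIN appliance in your management vCenter Server):

scp updateVINvCenterServer.sh root@<mgmt-vin-appliance>:/root/
ssh root@<mgmt-vin-appliance>
chmod +x /root/updateVINvCenterServer.sh   # ensure the copied script is executable
./updateVINvCenterServer.sh
/etc/init.d/vadm-engine restart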

Step 5 - Log in to your compute vCenter Server via the vSphere Web Client and re-enable discovery under the vSphere Infrastructure Navigator tab. You are now actually talking to the VIN instance running in your management vCenter Server!

Step 6 - Shortly after the discovery process has started, you should now be able to see VIN information for the virtual machines residing under your compute vCenter Server without having to actually run VIN under that same vCenter Server. At this point, you can delete the VIN instance located in your compute vCenter Server.

Step 7 - This very last step is needed to ensure the configuration persists across reboots, as ovfEnv.xml is dynamically regenerated on each boot. To do so, we just need to create a simple boot script that runs the update script and restarts the VIN engine. Normally we would add the commands to /etc/rc.local, but since VIN is SLES based, that file does not exist. However, you can create /etc/init.d/after.local (which executes commands after the initial init scripts) and add the following two commands:

/root/updateVINvCenterServer.sh
/etc/init.d/vadm-engine restart
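
If /etc/init.d/after.local does not already exist on the appliance, you can create it in one go, for example (a minimal sketch, assuming the script was copied to /root as in Step 4; the shebang and executable bit are added for good measure):

cat > /etc/init.d/after.local << 'EOF'
#!/bin/sh
# re-apply the vCenter Server override and restart the VIN engine after boot
/root/updateVINvCenterServer.sh
/etc/init.d/vadm-engine restart
EOF
chmod +x /etc/init.d/after.local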

Using this solution, you can now run multiple VIN instances and have them manage separate vCenter Servers, all while running under a single cluster in a single vCenter Server. Below is a diagram of what that would look like, as well as a screenshot of the environment I have set up in my lab. Though this solution is not officially supported, I have seen a few folks ask for this functionality within VIN. Do you think VIN should support this and allow it to be decoupled from the vCenter Server it is deployed from? If so, please leave a comment, or share any additional feedback that will help engineering decide whether this is something they should look into.

All the credit goes to Boris Serebro, the VIN developer who suggested this neat workaround. Thanks for sharing!

Categories // Uncategorized Tags // infrastructure navigator, ovfEnv.xml, vIN

Comments

  1. David says

    06/17/2013 at 4:00 am

    Hi William

    Yes, this is what I would always like for clients, just like you can do with vCenter Operations. Having it linked to the vCenter you are deploying from is silly in most cases.

    I know I have mailed you about this previously. Good to know you can get it to work.

    Thanks
    David

  2. George Nelson says

    12/13/2013 at 9:18 am

    It clearly explains our queries and gives us an idea of how to evolve this infrastructure into a world-class infrastructure.
