Caveats when multi-homing the vCenter Server Appliance 6.x w/multiple vNICs

02.01.2016 by William Lam // 10 Comments

The topic of multi-homing the vCenter Server Appliance (VCSA) by adding additional virtual network cards (vNICs) has been somewhat of an active topic of discussion lately, at least internally. By no means is this a new topic; in fact, if you do a search online, you will find several articles on how to configure additional vNICs for older releases of the VCSA. Some of the common use cases for this capability, whether for the VCSA or any other infrastructure VM for that matter, range from having a dedicated backup network or 3rd-party monitoring to accessing the management stack through a bastion jump host which resides on the public side of a DMZ environment.

UPDATE (04/20/20) - In vSphere 7, multi-homing of the VCSA is now officially supported. You can have up to 7 additional network adapters attached to the VCSA, and they will be preserved across upgrades and backups. The networks MUST be different from the default management network on the VCSA. Lastly, if you intend to use vCenter High Availability (VCHA), you should NOT use eth1, as that is the default interface that will be selected when you enable VCHA.

Disclaimer: Although it is possible to add additional vNICs to the VCSA, as far as I have been told, this is currently not officially supported by VMware. Please use at your own risk.

If you are considering multi-homing your VCSA 6.x (this also applies to vCenter Server 6.x for Windows), there are a few things to be aware of from earlier versions of vSphere. Before jumping into the details, let's take a look at an example deployment of the VCSA which has two vNICs connected to two different networks. The first network is 192.168.1.0/24, which is connected to the first interface (eth0) of the VCSA. The VCSA's primary network identifier (PNID) is the FQDN of the VCSA, which is vcenter.primp-industries.net. We also have a DNS server and an internal Windows desktop client residing on this network, which we will refer to as Network A. We then have a second network, 172.30.0.0/24, which we will refer to as Network B and which is connected to the second interface (eth1) of the VCSA. There is also a Windows desktop client residing on Network B.

[Diagram: multi-home-vcsa-7 - example VCSA deployment with two vNICs connected to Network A and Network B]

Accessing the vSphere Web Client:

For the internal desktop client, I can point my browser to either the FQDN (vcenter.primp-industries.net) or the IP Address (192.168.1.200) and connect to the vSphere Web Client without any issues. The issue which some of you may have found in vSphere 6.x is that if you try to connect to the second IP Address (172.30.0.200) of the VCSA from the external desktop client, you will notice that a redirect occurs when the vSphere Web Client loads and you are taken to the SSO endpoint, which fails to resolve, as seen in the screenshot below.

[Screenshot: multi-home-vcsa-4 - vSphere Web Client redirected to an SSO endpoint that fails to resolve]
In my example, I have an Embedded VCSA, but if you had an External Platform Services Controller (PSC), you would see that the SSO URL would be that of your PSC. This happens because the PSC binds to the PNID, which can either be an FQDN or an IP Address. To work around this limitation, you will need to either manually add a hosts entry on the external desktop client mapping the second IP Address to the FQDN of the VCSA, or configure your DNS server to also map that IP Address to the FQDN. In my lab, I decided to just update the hosts entry on my Windows client like the following:

172.30.0.200 vcenter.primp-industries.net

Once the change has been made, you simply need to refresh the browser and, as you can see from the screenshot below, I am now logged into the vSphere Web Client using the secondary IP Address of the VCSA. This is also another reason to always use an FQDN vs. an IP Address, which I discuss in more detail in the next section.

[Screenshot: multi-home-vcsa-5 - vSphere Web Client login using the secondary IP Address of the VCSA]
Note: If you just want to access the vCenter Server using the API or the vSphere C# Client, you will not have to perform this tweak, as neither of those interfaces requires going through SSO. If you plan to use the vCloud Suite SDKs, which do require SSO, then the tweak mentioned above will be required.
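
For example, one quick way to confirm that the vSphere API endpoint is reachable on the secondary IP Address with no SSO involved is to request the SDK service version document, which is served without authentication:

# The SDK version document requires no login or SSO, so it can be
# fetched directly against the secondary IP Address of the VCSA
curl -k https://172.30.0.200/sdk/vimServiceVersions.xml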

FQDN vs. IP Address for the PNID:

As you can see from the previous issue, if we had selected an IP Address instead of the FQDN for the PNID, we would not have been able to access the vSphere Web Client even with our tweaks. The reason is that the PNID would then be tied to the IP Address, and you would not be able to resolve that address from the external desktop client. In addition to this issue, if you choose an IP Address instead of an FQDN, you will run into another problem with the default self-signed SSL certificates, which are bound to either the FQDN or the IP Address. If you do not plan to use your own certificates, to which you could add additional entries, then make sure to always use an FQDN.
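
If you are unsure which identity your current certificate is bound to, one quick way to check is with standard openssl commands from any client machine (substitute your own VCSA address):

# Show the Subject of the VCSA's Machine SSL certificate
echo | openssl s_client -connect vcenter.primp-industries.net:443 2>/dev/null | openssl x509 -noout -subject

# Show the Subject Alternative Name entries, if any
echo | openssl s_client -connect vcenter.primp-industries.net:443 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"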

Same network for both vNICs:

Make sure that you are using two completely different networks (e.g. different VLANs & different gateways), or else you will run into problems with Linux IP reverse path filtering. If you really need to use the same network on both vNICs, then you will need to disable IP reverse path filtering. This is not a new issue and applies to all modern OSes, as you can only have a single default gateway. If you do need to specify a second gateway for the additional vNIC, you have the option of either using a static route or creating a secondary routing table, as shown in the sketch below.
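
As a rough sketch of what those two options look like from the VCSA's bash shell (the interface names and the 172.30.0.200 address come from the example above, while the 172.30.0.1 gateway and the 10.20.0.0/24 remote network are assumptions for illustration, and none of this persists across reboots):

# Option 1: static route - reach a specific remote network via the gateway on eth1
ip route add 10.20.0.0/24 via 172.30.0.1 dev eth1

# Option 2: secondary routing table - traffic sourced from eth1's address
# always returns via the Network B gateway
echo "200 eth1table" >> /etc/iproute2/rt_tables
ip route add default via 172.30.0.1 dev eth1 table eth1table
ip rule add from 172.30.0.200/32 table eth1table

# If both vNICs really must sit on the same network, relax (loose mode) or
# disable Linux reverse path filtering
sysctl -w net.ipv4.conf.all.rp_filter=2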

Configuring a vNIC using the new HTML5 VAMI:

Starting with VCSA 6.0 Update 1, there is now a new HTML5-based VAMI interface for managing and configuring the VCSA. You can add a new vNIC to the VCSA while it is running using either the vSphere Web or C# Client which manages the VCSA. Once that operation has completed, you can log in to the VAMI UI by opening a browser to https://[VCSA-HOSTNAME]:5480 and selecting the "Networking" tab on the left-hand side. You should see both vNICs, with the newly added vNIC in an unconfigured state, as shown in the screenshot below.

[Screenshot: multi-home-vcsa-2 - VAMI Networking tab showing the newly added, unconfigured vNIC]
To configure the new vNIC, just click on the "Edit" button and specify either an IPv4 or IPv6 address. Make sure you do not add a gateway UNLESS you plan on changing the system default gateway, as you can negatively impact the VCSA's networking if you did not mean to change it.

[Screenshot: multi-home-vcsa-3 - editing the new vNIC's network settings in the VAMI]
In addition to the VAMI UI, you can also configure additional vNICs from the command-line using the default appliancesh interface by running the following command:

networking.ipv4.set --interface nic1 --mode static --address 172.30.0.200 --prefix 24

or if you have defaulted to the bash shell, then you can run the following VAMI command:

/opt/vmware/share/vami/vami_set_network STATICV4 172.30.0.200 255.255.255.0 default
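
Either way, you can confirm that the new address was applied by checking the interface from the bash shell:

# Verify that eth1 now carries the newly configured address
ip addr show dev eth1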

For those interested, I used VMware Photon and a Docker container to quickly stand up a DNS server. This might come in handy for anyone looking to try this out in their lab or just for general testing.
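
Since the specific container is not shown here, purely as a hypothetical illustration (the alpine image, dnsmasq and the addresses below are my own assumptions, not necessarily what was used), a throwaway DNS server that answers the VCSA FQDN with its secondary IP Address could be stood up along these lines:

# Hypothetical example only: run dnsmasq in a container and map the
# VCSA FQDN to its secondary IP Address
docker run -d --name dns -p 53:53/udp -p 53:53/tcp alpine:3 \
  sh -c "apk add --no-cache dnsmasq && dnsmasq --no-daemon --address=/vcenter.primp-industries.net/172.30.0.200"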

Categories // VCSA, vSphere 6.0 Tags // multi-homing, vcenter server appliance, VCSA, vcva, vNic

Easily retrieve VM memory overhead using the vSphere 6.0 API

01.29.2016 by William Lam // 1 Comment

A handy API that was introduced in vSphere 6.0 is the ability to easily retrieve the amount of memory overhead for a given Virtual Machine. Though this is not a common task, this information was not trivial to find and often required customers to scour the various VM logs. In vSphere 6.0, we now have a module called the Overhead Memory Manager which provides a very simple API method called LookupVmOverheadMemory() to retrieve this information. I know this question has come up from time to time, and I figured I would do a quick blog about it as I have not seen anyone write about this API yet.

I have created an example implementation using PowerCLI to exercise this API, which I have called Get-VMMemOverhead.ps1. Once the function is loaded, you pipe the output of the Get-VM cmdlet to this new operation as seen in the screenshot below:

Get-VM "vcenter60-2" | Get-VMMemOverhead

[Screenshot: vm-memory-overhead - output of piping Get-VM to Get-VMMemOverhead]

Categories // Automation, vSphere 6.0 Tags // memory overhead, vSphere 6.0, vSphere API

Cheatsheet for the entire VMware AppCatalyst API using cURL

01.22.2016 by William Lam // 1 Comment

There were a few questions recently about the required syntax for specific VMware AppCatalyst operations when consuming the REST API using cURL. I figured I would put together a quick "cheatsheet" that contains cURL examples for the entire VMware AppCatalyst API, which would not only help me in the future but could also benefit others. Like many, I also learn by example, and having explicit samples to start with is a great way to get familiar with a new technology or product. If you are new to VMware AppCatalyst and would like a quick rundown on how to get started, be sure to check out my getting started article here for more details.

While going through the AppCatalyst API, I did find a couple of API operations which had some inconsistencies and did not strictly adhere to the JSON format. Thanks to Roman Tarnvski for providing the solution. I am hopeful that these issues will be resolved in a future update of AppCatalyst, as I do like the ease of use of their API. For the majority of the API, the self-documentation via the AppCatalyst API Explorer is accurate, which you can see from the screenshot below.

[Screenshot: appcatalyst-api-explorer - AppCatalyst API Explorer]
Before you can interact with the AppCatalyst REST API, you will need to start the AppCatalyst Daemon by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst-daemon

Once the AppCatalyst Daemon is running, you can open a new terminal and start working with the REST API via cURL or any other tool of choice.
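
The responses come back as unformatted JSON; if you happen to have a JSON formatter such as jq installed (entirely optional and not part of AppCatalyst), you can pipe the output through it for readability:

# Pretty-print the JSON response (python -m json.tool works just as well)
curl -s -X GET localhost:8080/api/vms | jq .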

1. Create a new VM from the default Photon OS VM template:

You technically only need to specify the unique "id" property, but you can also give a display name for the VM by using the "name" property.

curl -d '{"id":"VM1", "name":"MyAppCat-VM1"}' -X POST localhost:8080/api/vms

2. Clone a VM from an existing VM:

Similar to creating a new VM, you also have the option of using the "tag" property to associate additional metadata with the VM.

curl -d '{"id":"VM2", "parentid":"VM1", "name":"MyAppCat-VM2", "tag":"Development"}' -X POST localhost:8080/api/vms

3. List all VMs:

curl -X GET localhost:8080/api/vms

4. Get a specific VM:

To retrieve a specific VM, you will need to power on the VM before this operation is allowed. I did find it strange that this was the case, but perhaps this could be enhanced in the future to not have this requirement, especially if you want to pull out details such as the "tag" property.

curl -X GET localhost:8080/api/vms/VM1

5. Power On a VM:

curl -d 'on' -X PATCH localhost:8080/api/vms/power/VM1

Note: Other VM Power Operations: off, shutdown, suspend, pause & unpause

6. Get the power state of a VM:

curl -X GET localhost:8080/api/vms/power/VM1

7. Get the IP Address of a VM:

curl -X GET localhost:8080/api/vms/VM1/ipaddress

8. Enable folder sharing for a VM:

curl -d "true" -X PATCH localhost:8080/api/vms/VM1/folders

9. Create a shared folder mapping for a VM:

The "guestPath" property is not an absolute path within the guestOS, but rather a logical name. For more details about shared folders in AppCatalyst, please have a look at this article here. Currently there is only one "flags" property with the value of 4 which enables read/write, please refer to the article in the link above for more details about folder sharing in AppCatalyst.

curl -d '{"guestPath":"shared-folder","hostPath":"/Users/wlam/git","flags":4}' -X POST localhost:8080/api/vms/VM1/folders

10. List all shared folders for a VM:

curl -X GET localhost:8080/api/vms/VM1/folders

11. List a specific shared folder for a VM:

curl -X GET localhost:8080/api/vms/VM1/folders/shared-folder

12. Delete a shared folder for a VM:

curl -X DELETE localhost:8080/api/vms/VM1/folders/shared-folder

13. Delete VM:

curl -X DELETE localhost:8080/api/vms/VM1


Categories // Automation, Cloud Native Tags // appcatalyst, curl, REST API
