Migrating ESXi to a Distributed Virtual Switch with a single NIC running vCenter Server

11.18.2015 by William Lam // 29 Comments

Earlier this week I needed to test something that required a VMware Distributed Virtual Switch (VDS), and it had to be a physical setup, so Nested ESXi was out of the question. I could have used my remote lab, but given that what I was testing was a bit "experimental", I preferred using my home lab in case I needed direct console access. At home, I run ESXi on a single Apple Mac Mini, and one of the challenges with this and other similar platforms (e.g. Intel NUC) is that they only have a single network interface. As you might have guessed, this is a problem when looking to migrate from a Virtual Standard Switch (VSS) to a VDS, as it normally requires at least two NICs.

Unfortunately, I had no other choice and needed to find a solution. After a couple of minutes of searching the web, I stumbled across this serverfault thread here, which provided a partial solution to my problem. In vSphere 5.1, we introduced a new feature that automatically rolls back a network configuration change if it negatively impacts network connectivity to your vCenter Server. This feature can be disabled temporarily by editing the vCenter Server Advanced Setting (config.vpxd.network.rollback), which allows us to bypass the single-NIC issue, but it does not solve the problem entirely. What ends up happening is that the single pNIC is now associated with the VDS, but the VM portgroups are not migrated. This is problematic because the vCenter Server is also running on the very ESXi host it is managing and has now lost network connectivity 🙂

I lost access to my vCenter Server, and even though I could connect directly to the ESXi host, I was not able to change the VM Network to the Distributed Virtual Portgroup (DVPG). This is actually expected behavior, and there is an easy workaround; let me explain. When you create a DVPG, there are three different bindings that can be configured: Static, Dynamic, and Ephemeral, and by default Static binding is used. Both Static and Dynamic DVPGs can only be managed through vCenter Server, which is why you cannot change the VM network to a non-Ephemeral DVPG; in fact, it is not even listed when connecting with the vSphere C# Client. The simple workaround is to create a DVPG using the Ephemeral binding, which allows you to then change the VM network of your vCenter Server, and that is the last piece of the puzzle.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Here are the exact steps to take if you wish to migrate an ESXi host with a single NIC from a VSS to a VDS while that host is also running your vCenter Server:

Step 1 - Change the following vCenter Server Advanced Setting config.vpxd.network.rollback to false:

[Screenshot: Setting config.vpxd.network.rollback to false in the vCenter Server Advanced Settings]
Note: Remember to re-enable this feature once you have completed the migration.

Step 2 - Create a new VDS and the associated Portgroups for both your VMkernel interfaces and VM Networks. For the DVPG which will be used for the vCenter Server's VM network, be sure to change the binding to Ephemeral before proceeding with the VDS migration.

[Screenshot: Creating the DVPG with Ephemeral port binding]
Step 3 - Proceed with the normal VDS Migration wizard using the vSphere Web/C# Client and ensure that you perform the correct mappings. Once completed, you should be able to connect directly to the ESXi host using either the vSphere C# Client or the ESXi Embedded Host Client to confirm that the VDS migration was successful, as seen in the screenshot below.

[Screenshot: ESXi host view confirming the VDS migration]
Note: If you forgot to perform Step 2 (which I initially did), you will need to log in to the DCUI of your ESXi host and restore the networking configuration.
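If you would rather confirm the migration from the command line than from a client, a quick sanity check can be run from the ESXi Shell (assuming SSH or the local shell is enabled on the host); the exact output will vary by ESXi release:

# List all virtual switches on the host; the new VDS should now own the single uplink (e.g. vmnic0)
esxcfg-vswitch -l

# List just the Distributed Virtual Switches the host is joined to
esxcli network vswitch dvs vmware list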

Step 4 - The final step is to change the VM network for your vCenter Server. In my case, I am using the VCSA, and due to a bug I found in the Embedded Host Client, you will need to use the vSphere C# Client to perform this change if you are running VCSA 6.x. If you are running Windows VC or VCSA 5.x, then you can use the Embedded Host Client to modify the VM network to use the new DVPG.

[Screenshot: Changing the vCenter Server VM network to the Ephemeral DVPG]
Once you have completed the VM reconfiguration, you should be able to log back in to your vCenter Server, which is now connected to a DVPG on a VDS backed by a single NIC on your ESXi host 😀

There is probably no good use case for this outside of home labs, but I was happy to have found a solution, and hopefully it comes in handy for others who find themselves in a similar situation and would like to use and learn more about the VMware VDS.

Categories // ESXi, Not Supported, vSphere Tags // distributed portgroup, distributed virtual switch, dvs, ESXi, notsupported, vds

Using Ansible to provision a Kubernetes Cluster on VMware Photon

11.05.2015 by William Lam // 1 Comment

[Image: Ansible, Kubernetes and VMware Photon]
I am always interested in learning and playing with new technologies, solutions and tools. Ansible, a popular configuration management tool which was recently acquired by Red Hat, is one such tool that has been on my to-do list for some time now. It is quite difficult to find extra free time, and with a new 7-month-old, it has gotten even harder. However, in the last week or so I have been waking up randomly at 4-5am, and I figured I might as well put this time to good use and give Ansible a try.

As the title suggests, I will be using Ansible to deploy a Kubernetes Cluster running on top of VMware's Photon OS. The motivation behind this little project came after watching Kelsey Hightower's recorded session at HashiConf on Managing Applications at Scale, which compares HashiCorp's Nomad and Google's Kubernetes (K8s) scheduler. I knew there were already a dozen different ways to deploy K8s, but I figured I would try something new and add a VMware spin to it by using Photon OS.

I had found an outdated reference on setting up K8s in the Photon OS documentation, and though a few of the steps are no longer needed, it provided a good base for creating the Ansible playbook to set up a K8s Cluster. If you are not familiar with Ansible, this getting started guide was quite helpful. For our K8s setup, we will use two nodes, one being the Master and the other the Minion. If you are interested in an overview of K8s, be sure to check out the official documentation here.

Step 1 - You will need to deploy at least 2 Photon OS VMs, one for the Kubernetes Master and one for the Minion. This can be done using either the ISO or by deploying the pre-packaged OVA. For more details on how to set up Photon OS, please refer to the documentation here. This should only take a few minutes, as the installation or deployment of Photon OS is pretty quick. In my setup, I have 192.168.1.133 as the Master and 192.168.1.111 as the Minion.
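Before moving on, it is worth confirming that both VMs are reachable over SSH as root, since that is the account Ansible will use below; a quick check (assuming root SSH login is enabled on your Photon VMs) might look like this:

# Confirm SSH connectivity to the Master and the Minion
ssh root@192.168.1.133 'uname -a'
ssh root@192.168.1.111 'uname -a'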

Step 2 - Download and install Ansible on your client desktop. There are several options depending on the platform you plan to use; for more information, take a look at the documentation here. In my setup, I am using a Mac OS X system, where you can easily install Ansible by running the following command:

brew install ansible
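Once the install completes, a quick way to confirm Ansible is available on your path is to check its version (the exact output will depend on the release that was installed):

ansible --version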

Step 3 - Next, to verify and test that our installation of Ansible was successful, we will need to create our inventory host file (I called it hosts, but you can name it anything you want), which will contain the mappings to our Photon OS VMs. The example below assumes you do not have DNS running in your environment, so I am making use of the variables section of the host file to specify friendly names rather than just the IP addresses, which will be read in later. If you do have DNS in your environment, you do not need the last section of the file.

[kubernetes_cluster]
192.168.1.133
192.168.1.111

[masters]
192.168.1.133

[minions]
192.168.1.111

[kubernetes_cluster:vars]
master_hostname=photon-master
master_ip=192.168.1.133
minion_hostname=photon-node
minion_ip=192.168.1.111
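As a quick sanity check that the inventory parses the way you expect, you can ask Ansible to list the hosts it resolves for a given group; a minimal example, assuming the file is named hosts in the current directory:

# Show every host in the inventory
ansible -i hosts all --list-hosts

# Show only the hosts in the masters group
ansible -i hosts masters --list-hosts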

Step 4 - We will perform a basic "ping" test to validate that Ansible is in fact working and can communicate with our deployed Photon VMs. Run the following command, which specifies the inventory host file as input:

ansible -i hosts all -m ping --user root --ask-pass

[Screenshot: Successful Ansible ping test against the Photon VMs]
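The exact formatting of the output differs between Ansible releases, but a successful test returns a "pong" from each host, roughly along these lines:

192.168.1.133 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.1.111 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}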
Step 5 - If the previous step was successful, we can now move on to the Ansible playbook, which contains the instructions for setting up our K8s Cluster. Download kubernetes_cluster.yml to your desktop and then run the following command:

ansible-playbook -i hosts --user root --ask-pass kubernetes_cluster.yml

If you want to use SSH keys for authentication and have already uploaded the public keys to your Photon VMs, you can replace --ask-pass with --private-key and specify the full path to your SSH private key.
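For example, a run using key-based authentication might look like the following (the key path here is just an example):

ansible-playbook -i hosts --user root --private-key ~/.ssh/id_rsa kubernetes_cluster.yml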

[Screenshot: Output of the kubernetes_cluster.yml playbook run]
Step 6 - Once the Ansible playbook has successfully executed, you should see a summary at the end showing that everything was OK. To verify that our K8s Cluster has been properly set up, we will check the Minion's node status, which should show "Ready". To do so, log in to the K8s Master node and run the following command:

kubectl get nodes

You should see that the status field shows "Ready", which means the K8s Cluster has been properly configured.
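The column layout of kubectl get nodes varies across Kubernetes releases, and the node may be listed by its hostname or its IP address, but a healthy cluster will report something along these lines:

NAME          STATUS
photon-node   Ready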

[Screenshot: kubectl get nodes showing the Minion in Ready state]
At this point you have a basic K8s Cluster set up and running on top of VMware Photon. If you are interested in exploring K8s further, there are some nice 101 and 201 official tutorials that can be found here. Another handy reference that I used while creating my Ansible playbook was this article here, which provided a way to create loops using the lineinfile module.

Categories // Automation, Cloud Native, vSphere Tags // Ansible, cloud native apps, K8s, Kubernetes, Photon

Quick Tip - Changing default port for HTTP Reverse Proxy on both vCenter Server & ESXi

10.27.2015 by William Lam // 11 Comments

If you decide to use custom ports for the HTTP Reverse Proxy (rhttpproxy) on vCenter Server, which uses port 80 (HTTP) and 443 (HTTPS) by default, you should also apply the same change on all ESXi hosts being managed by that vCenter Server for proper functionality. The configuration files for rhttpproxy have changed since the early days of vSphere 5.x and are now different in vSphere 6.x.

UPDATE (04/27/18) - With the release of vSphere 6.7, VMware now officially supports customizing the reverse HTTP(S) ports on the VCSA. Below is a screenshot of the VCSA Installer UI; this can also be customized in the JSON configuration file used by the VCSA CLI Installer for automation purposes.

Below are the instructions for modifying the default ports of the rhttpproxy service on the Windows vCenter Server, the vCenter Server Appliance (VCSA) and an ESXi host.

Note: If you change the default ports of your vCenter Server, you will need to ensure that all VMware and third-party products that communicate with vCenter Server are updated accordingly.

vCenter Server for Windows

On Windows, you will need to modify C:\ProgramData\VMware\vCenterServer\cfg\vmware-rhttpproxy\config.xml and look for the following lines to change either the HTTP and/or HTTPS ports:

<httpPort>80</httpPort>
<httpsPort>443</httpsPort>

Once you have saved the changes, you will need to restart the VMware HTTP Reverse Proxy service using Windows Services Manager.
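If you prefer a command prompt over the Services Manager snap-in, the same restart can usually be performed from an elevated command prompt using the service's display name (a sketch; confirm the display name matches what you see in the Services Manager):

net stop "VMware HTTP Reverse Proxy"
net start "VMware HTTP Reverse Proxy"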

vCenter Server Appliance (VCSA)

On the VCSA, you will need to modify /etc/vmware-rhttpproxy/config.xml and look for the following lines to change either the HTTP and/or HTTPS ports:

<httpPort>80</httpPort>
<httpsPort>443</httpsPort>

Once you have saved the changes, you will need to restart the rhttpproxy service by running the following command:

/etc/init.d/rhttpproxy restart
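After the restart, it is worth confirming that the proxy is answering on the new port. A minimal check from the VCSA shell, assuming for example that 8443 was chosen as the new HTTPS port:

# Replace 8443 with the HTTPS port you configured
curl -k -I https://localhost:8443/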

ESXi

Disclaimer: VMware does not officially support modifying the default HTTP/HTTPS ports on an ESXi host.

Pre-ESXi 8.0 - Use the following instructions:

On ESXi, you will need to modify /etc/vmware/rhttpproxy/config.xml and look for the following lines to change either the HTTP and/or HTTPS ports:

<httpPort>80</httpPort>
<httpsPort>443</httpsPort>

Once you have saved the changes, you will need to restart the rhttpproxy service by running the following command:

/etc/init.d/rhttpproxy restart
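As with the VCSA, you can confirm from the ESXi Shell that the host is now listening on the custom ports; for example (replace 8443 with your chosen HTTPS port):

# List listening TCP connections and filter for the new port
esxcli network ip connection list | grep 8443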

  • For ESXi 8.0 - Please see Changing the default HTTP(s) Reverse Proxy Ports on ESXi 8.0 for updated instructions
  • For ESXi 8.0 Update 1 and later - Please see Changing the default HTTP(s) Reverse Proxy Ports on ESXi 8.0 Update 1 for updated instructions

Categories // ESXi, VCSA, vSphere, vSphere 6.0 Tags // ESXi, reverse proxy, rhttpproxy, vCenter Server, vcenter server appliance, VCSA, vcva
