How to change the default ports on the vCenter Server Appliance in vSphere 6.0?

01.20.2016 by William Lam // 13 Comments

When deploying the vCenter Server Appliance (VCSA), there is a set of default network ports pre-defined by VMware. It is generally recommended to stick with these defaults unless you have a really good reason to modify them. I am a big fan of strong defaults, which can help reduce the number of steps it takes to deploy the VCSA; however, I do understand that some organizations have specific security requirements which require them to change some of the default ports. It is also important to note that changing the default network ports post-installation is not supported.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

If you deploy the VCSA using the new Guided UI installer, you will not be able to modify the default network ports. However, if you deploy using the new Scripted CLI installer, you do have the option of overriding some of the default ports. Below is a table of the ports that can be modified, listing the variable name, the default port number and the port usage as described in the vSphere 6.0 documentation here. The variable names are what you reference in the JSON configuration file if you decide to change any of the defaults.

Variable Name                   Port   Port Usage
rhttpproxy.ext.port1            80     HTTP Reverse Proxy port
rhttpproxy.ext.port2            443    HTTPS Reverse Proxy port
syslog.ext.port                 514    Syslog Service port
vpxd.ext.port1                  902    ESXi Heartbeat port
syslog.ext.tls.port             1514   Syslog Service TLS port
netdumper.ext.serviceport       6500   ESXi Dump Collector port
autodeploy.ext.serviceport      6501   Auto Deploy Service port
autodeploy.ext.managementport   6502   Auto Deploy Management port
sts.ext.port1                   7444   Secure Token Service port
vsphere-client.ext.port1        9443   vSphere Web Client port

Under the "Networking" section of the JSON configuration file, there is a "Ports" field which accepts a JSON encoded string of the ports you wish to modify. It actually took me a bit of time to figure out the exact syntax as this was not clearly documented anywhere. Lets say we wish to change the default HTTPS Reverse Proxy from 443 to 13443 and PSC's STS port from 7444 to 7441, you will need to specify it as shown in the example below. The key is properly escape the inner-double quotations since ports accepts a single string input.

"network": {
    "hostname": "192.168.1.140",
    "dns.servers": [
        "192.168.1.1"
    ],
    "gateway": "192.168.1.1",
    "ip": "192.168.1.140",
    "ip.family": "ipv4",
    "mode": "static",
    "prefix": "24",
    "ports": "{\"rhttpproxy.ext.port2\":\"13443\",\"sts.ext.port1\":\"7441\"}"
},
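If you are generating the JSON configuration programmatically, you do not have to escape the inner quotes by hand. Here is a minimal sketch, assuming Python is available on the machine where you run the CLI installer (this helper is just a convenience and not part of the installer itself):

# Hypothetical helper: turn a normal JSON object into the escaped string the "ports" field expects
PORTS_JSON='{"rhttpproxy.ext.port2":"13443","sts.ext.port1":"7441"}'
python -c 'import json,sys; print(json.dumps(sys.argv[1]))' "$PORTS_JSON"
# Prints: "{\"rhttpproxy.ext.port2\":\"13443\",\"sts.ext.port1\":\"7441\"}"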

If everything was successful, when you connect to the VCSA you should see that the default port of 443 is no longer used to connect to the vCenter Server, as you can see from the screenshot below.

[Screenshot: changing the default vCenter Server Appliance ports]
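You can also confirm the change from the command line. Assuming the example configuration above, a quick curl probe against the custom HTTPS Reverse Proxy port should get a response from the reverse proxy:

# Probe the custom HTTPS Reverse Proxy port from the earlier example (adjust the IP/port to your deployment)
curl -k -I https://192.168.1.140:13443/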
If you ever wonder what ports were selected for either a vCenter Server or Platform Services Controller, you can easily find that by following the instructions in this article.

For customers using the Windows version of vCenter Server, you do have the option of modifying the default ports in the Guided UI, since VMware does not control the underlying OS and cannot guarantee that the default ports are not already in use. You can also use the Windows Scripted CLI installer to modify the default ports; you can find more information here.

[Screenshot: vCenter Server Appliance default ports]

Categories // Automation, VCSA, vSphere 6.0 Tags // platform service controller, psc, rhttpproxy, vcenter server appliance, VCSA, vcva, vSphere 6.0

How to update AppCatalyst's default PhotonOS VM template w/Docker 1.9?

12.19.2015 by William Lam // 9 Comments

For those of you using VMware's free AppCatalyst hypervisor, you may have noticed that there is currently no way to update to the latest Docker 1.9 Engine/Client using Photon's package manager, Tiny Dandified YUM (tdnf). Although you can manually install Docker 1.9 on any PhotonOS VM deployed from AppCatalyst, the issue is that all VMs based off of the default PhotonOS template will still be running version 1.8, which means you would need to repeat the process for every new VM you create.

Not ideal at the moment, but there is a simple fix, which is to update the default PhotonOS template with Docker 1.9. We can do this easily by leveraging AppCatalyst itself to update itself 😉 or rather its default PhotonOS VMDK. Below are the manual steps if you wish to walk through this yourself, but if you would rather just run a simple script that completely automates the entire process, jump straight to the bottom of this article 🙂

Step 1 - Create a new temp PhotonOS VM (in my example, I'm calling it PHOTON-DOCKER-1.9) which we will use to install Docker 1.9. You can do so by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst vm create PHOTON-DOCKER-1.9

[Screenshot]
Step 2 - Power on the temp PhotonOS VM that you just created by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst vmpower on PHOTON-DOCKER-1.9

[Screenshot]
Step 3 - Retrieve the IP Address of the temp PhotonOS VM by running the following command (this could take up to a minute or so to display):

/opt/vmware/appcatalyst/bin/appcatalyst guest getip PHOTON-DOCKER-1.9

[Screenshot]
Step 4 - We can now SSH to the temp PhotonOS VM using the IP Address from the previous step. You will need to specify the private key to log in to the PhotonOS by running the following command:

ssh -i /opt/vmware/appcatalyst/etc/appcatalyst_insecure_key photon@[IP-ADDRESS]

[Screenshot]
Step 5 - Now we will download and install Docker 1.9, but before doing so we also need to grab the "tar" utility, as it does not seem to be part of the default PhotonOS. Here is a one-liner that will automatically install the tar utility and then perform the download and install of Docker (thanks to Massimo Re Ferre' for his original install steps, just polishing up his awesomeness). Copy and paste the snippet below and run it inside of the PhotonOS VM:

sudo tdnf -y install tar;curl -O https://get.docker.com/builds/Linux/x86_64/docker-1.9.1.tgz;tar -zxvf docker-1.9.1.tgz;sudo systemctl stop docker;sudo cp usr/local/bin/docker /usr/bin/docker;sudo systemctl start docker;rm -rf usr/;rm -f docker-1.9.1.tgz;exit
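If you prefer to see what each command does, here is the exact same sequence broken out one command per line with comments:

sudo tdnf -y install tar                                              # install the tar utility
curl -O https://get.docker.com/builds/Linux/x86_64/docker-1.9.1.tgz   # download the Docker 1.9.1 binaries
tar -zxvf docker-1.9.1.tgz                                            # extract the archive (creates a local usr/ directory)
sudo systemctl stop docker                                            # stop the running Docker 1.8 daemon
sudo cp usr/local/bin/docker /usr/bin/docker                          # replace the Docker binary with 1.9.1
sudo systemctl start docker                                           # start the Docker daemon again
rm -rf usr/                                                           # clean up the extracted directory
rm -f docker-1.9.1.tgz                                                # remove the downloaded archive
exit                                                                  # log out of the temp VM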

[Screenshot]
Step 6 - Now that we have our temp PhotonOS VM updated with Docker 1.9, we just need to shut it down and use its VMDK to replace AppCatalyst's default PhotonOS VMDK. Run the following command to find the name of your temp PhotonOS VMDK:

find "${HOME}/Documents/AppCatalyst/PHOTON-DOCKER-1.9" -name '*.vmdk'

Once you have the VMDK name, you should back up AppCatalyst's original PhotonOS VMDK by running the following command:

sudo mv /opt/vmware/appcatalyst/photonvm/photon-disk1-cl1.vmdk /opt/vmware/appcatalyst/photonvm/photon-disk1-cl1.vmdk.bak

Next, we will copy our temp PhotonOS VMDK to AppCatalyst's default PhotonOS directory and replace its original VMDK by running the following command:

sudo cp ${HOME}/Documents/AppCatalyst/PHOTON-DOCKER-1.9/photon-disk1-cl2.vmdk /opt/vmware/appcatalyst/photonvm/photon-disk1-cl1.vmdk

Finally, we need to make sure the new VMDK has the correct permissions by running the following command:

sudo chmod 644 /opt/vmware/appcatalyst/photonvm/photon-disk1-cl1.vmdk

Step 7 - At this point, you can verify that your changes were successful by creating a new PhotonOS VM. Once logged into the VM, you can run "docker version" to verify that you are now running Docker 1.9, as shown in the screenshot below.

[Screenshot]
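If you want to run that verification from the command line, the same commands from Steps 1 through 4 work against a fresh VM (the name PHOTON-TEST below is just an example):

# Create and boot a fresh VM off the updated template, then grab its IP
/opt/vmware/appcatalyst/bin/appcatalyst vm create PHOTON-TEST
/opt/vmware/appcatalyst/bin/appcatalyst vmpower on PHOTON-TEST
/opt/vmware/appcatalyst/bin/appcatalyst guest getip PHOTON-TEST

# SSH in with the bundled key and check the Docker version
ssh -i /opt/vmware/appcatalyst/etc/appcatalyst_insecure_key photon@[IP-ADDRESS]
docker version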
For those of you who do not wish to go through the manual steps, here is a quick script called update_docker_client_in_appcatalyst.sh which automates the entire process described in the instructions above. Here is a quick screenshot of a sample execution of the script. I am hoping the Photon team will update the tdnf online repo so that you can simply run yum -y update docker in the future.

[Screenshot]

Categories // Apple, Automation, Cloud Native, Docker Tags // appcatalyst, Docker, Photon, tdnf

How to automatically repoint & failover VCSA to another replicated Platform Services Controller (PSC)?

12.18.2015 by William Lam // 30 Comments

For those of you who read my previous article (if you have not read it, please do so before proceeding), at the very end I showed off a screenshot of a script that I had created for the vCenter Server Appliance (VCSA) which automatically monitors the health of the primary Platform Services Controller (PSC) it is connected to and, in the event of a failure, automatically repoints and fails over to another healthy PSC. The way it accomplishes this is by first deploying two externally replicated PSCs and then associating the VCSA with just the first PSC, which we will call our primary/preferred PSC node. Both PSCs are in an Active/Active configuration using multi-master replication, and any changes made in SSO on psc-01 (as shown in the diagram) will automatically be replicated to psc-02.

[Diagram: VCSA connected to psc-01, which replicates with psc-02]
From the vCenter Server's point of view, its requests are only serviced by a single PSC, which is psc-01 as shown in the diagram above. Within the VCSA, there is a script, run as a cron job, which periodically checks psc-01's connectivity by performing a simple GET operation on the /websso endpoint. If it is unable to connect, the script will retry a certain number of times before declaring that the primary/preferred PSC node is no longer available. At that point, the script will automatically re-point the VCSA to the secondary PSC, and within a couple of minutes any users who try to log in to the vSphere Web Client will be able to do so; this happens transparently behind the scenes without any manual interaction. For users who have already logged in to vCenter Server, those sessions will continue to work unless they have timed out, in which case you would need to log back in.

The script is configurable in terms of the number of times to check the PSC for connectivity as well as the amount of time to wait in between each check. In addition, if you have an SMTP server configured on the vCenter Server, you can also specify an email address to which the script can send a notification after the failover, alerting administrators to the failed PSC node. Although this example is specific to the VCSA, a similar script could be developed on a Windows platform using the same core foundation.
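To make the mechanism concrete, here is a minimal sketch of the check-and-repoint loop described above. This is not the actual checkPSCHealth.sh script; the hostnames are placeholders, and the real script handles retries, logging and email notification more robustly:

#!/bin/bash
# Minimal sketch only - see checkPSCHealth.sh for the real implementation
PRIMARY_PSC="psc-01.example.com"     # placeholder hostname
SECONDARY_PSC="psc-02.example.com"   # placeholder hostname
NUMBER_CHECKS=3
SLEEP_TIME=30

for ((i=1; i<=NUMBER_CHECKS; i++)); do
    # Simple GET against the /websso endpoint on the primary PSC (10 second timeout)
    if curl -s -k -m 10 -o /dev/null "https://${PRIMARY_PSC}/websso"; then
        exit 0    # primary PSC is healthy, nothing to do
    fi
    sleep ${SLEEP_TIME}
done

# Primary PSC failed every check - repoint the VCSA to the secondary PSC
cmsso-util repoint --repoint-psc "${SECONDARY_PSC}"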

Disclaimer: This script is not officially supported by VMware, it is intended as an example of what can be done with the cmsso-util utility. Use at your own risk.

To set up a similar configuration, you will need to perform the following:

Step 1 - Deploy two External PSCs that are replicated with each other. Ensure you select the "Join an SSO domain in an existing vCenter 6.0 platform services controller" option to set up replication, and ensure you are joining the same SSO Site.

[Screenshot]
Step 2 - Deploy your VCSA and when asked to specify the PSC to connect to, specify the primary/preferred PSC node you had deployed earlier. In my example, this would be psc-01 as seen in the screenshot below.

[Screenshot]
Step 3 - Download the checkPSCHealth.sh script which can be found here.

Step 4 - SCP the script to your VCSA and store it under /root and then set it to be executable by running the following command:

chmod +x /root/checkPSCHealth.sh

Step 5 - Next, you will need to edit the script and adjust the variables listed below. They should all be self-explanatory, and if you do not have an SMTP server set up, you can leave the EMAIL_ADDRESS variable blank.

  • PRIMARY_PSC - IP/Hostname of the primary PSC
  • SECONDARY_PSC - IP/Hostname of the secondary PSC (must already be replicating with the primary PSC)
  • NUMBER_CHECKS - Number of times to check PSC connectivity before failing over (default: 3)
  • SLEEP_TIME - Number of seconds to wait in between checks (default: 30)
  • EMAIL_ADDRESS - Email address to notify when a failover occurs

Step 6 - Lastly, we need to set up a scheduled job using cron. To do so, run the following command:

crontab -e

Copy the snippet shown below into the crontab of the root user account. The first half just covers all the default paths and the libraries expected to perform the operation; I found that without these paths you will run into issues calling cmsso-util, so I figured it was easier to take all the VMware paths from running the env command and make them available. The very last line is what actually sets up the schedule, and in the example below it will run the script every 5 minutes. You can set up even more complex rules on how to run the script; for more info, take a look here.

PATH=/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:/usr/lib/mit/bin:/usr/lib/mit/sbin:/usr/java/jre-vmware/bin:/opt/vmware/bin

SHELL=/bin/bash
VMWARE_VAPI_HOME=/usr/lib/vmware-vapi
VMWARE_RUN_FIRSTBOOTS=/bin/run-firstboot-scripts
VMWARE_DATA_DIR=/storage
VMWARE_INSTALL_PARAMETER=/bin/install-parameter
VMWARE_LOG_DIR=/var/log
VMWARE_OPENSSL_BIN=/usr/bin/openssl
VMWARE_TOMCAT=/opt/vmware/vfabric-tc-server-standard/tomcat-7.0.55.A.RELEASE
VMWARE_RUNTIME_DATA_DIR=/var
VMWARE_PYTHON_PATH=/usr/lib/vmware/site-packages
VMWARE_TMP_DIR=/var/tmp/vmware
VMWARE_PERFCHARTS_COMPONENT=perfcharts
VMWARE_PYTHON_MODULES_HOME=/usr/lib/vmware/site-packages/cis
VMWARE_JAVA_WRAPPER=/bin/heapsize_wrapper.sh
VMWARE_COMMON_JARS=/usr/lib/vmware/common-jars
VMWARE_TCROOT=/opt/vmware/vfabric-tc-server-standard
VMWARE_PYTHON_BIN=/opt/vmware/bin/python
VMWARE_CLOUDVM_RAM_SIZE=/usr/sbin/cloudvm-ram-size
VMWARE_VAPI_CFG_DIR=/etc/vmware/vmware-vapi
VMWARE_CFG_DIR=/etc/vmware
VMWARE_JAVA_HOME=/usr/java/jre-vmware

*/5 * * * * /root/checkPSCHealth.sh

Step 7 - Finally, you will probably want to test the script to ensure it is doing what you expect. The easiest way to do this is by disconnecting the vNIC on psc-01; depending on how you have configured the script, in a short amount of time it should automatically start the failover. All operations are automatically logged to the system logs, which you can find under /var/log/messages.log, and I have also tagged the log entries with a prefix of vGhetto-PSC-HEALTH-CHECK so you can easily filter out those messages in syslog, as seen in the screenshot below.

[Screenshot: vGhetto-PSC-HEALTH-CHECK entries in /var/log/messages.log]
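To pull out just those entries from the system log, a simple grep will do:

grep vGhetto-PSC-HEALTH-CHECK /var/log/messages.log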
If a failover occurs, the script will also log additional output to /root/psc-failover.log, which can be used to troubleshoot in case a failover was attempted but failed. To ensure that the script does not try to fail over again, it creates an empty file under /root/ran-psc-failover which the script checks at the beginning before proceeding. Once you have verified the script is doing what you expect, you will probably want to manually fail back the VCSA to the original PSC node and then remove the /root/ran-psc-failover file, or else the script will not run when it is scheduled to.
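For reference, a minimal sketch of that manual failback (using a placeholder hostname; the repoint is the same cmsso-util operation the script performs) would look like this:

# Repoint the VCSA back to the original primary PSC once it is healthy again (placeholder hostname)
cmsso-util repoint --repoint-psc "psc-01.example.com"
# Remove the marker file so the health-check script can fail over again in the future
rm /root/ran-psc-failover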

As mentioned earlier, though this is specific to the VCSA, you can build a similar solution on a Windows system using the Windows Task Scheduler and the scripting language of your choice. I, of course, highly recommend that customers take a look at the VCSA for its simplicity in management and deployment, but perhaps that's just my bias 🙂

Categories // Automation, VCSA, vSphere 6.0 Tags // cmsso-util, cron, load balancer, platform service controller, psc, sso, sso replication, vCenter Server, VCSA, vcva
