What does load balancing the Platform Services Controller really give you?

12.16.2015 by William Lam // 22 Comments

The Platform Services Controller (PSC) is a new infrastructure component, first introduced in vSphere 6.0, that provides common services such as Single Sign-On, Licensing and Certificate Management for vCenter Server and other VMware-based products. A PSC can be deployed on the same system as the vCenter Server, referred to as an Embedded deployment, or outside of the vCenter Server, which is known as an External PSC deployment. The primary use case for an External PSC is to take advantage of the new Enhanced Linked Mode (ELM) feature, which provides customers with a single pane of glass for managing all of their vCenter Servers from within the vSphere Web Client.

When customers start to plan and design their vSphere 6.0 architecture, a topic that usually comes up for discussion is whether or not they should be load balancing a pair (up to four) of their PSCs. The idea behind using a load balancer is to provide higher levels of availability for the PSC infrastructure, however it does come at an additional cost, both from an OpEx and a CapEx standpoint. More importantly, given the added complexity, does it really provide you with what you think it does?

A couple of things stood out to me when I looked at the process (VMware KB 2113315) of setting up a load balancer (VMware NSX, F5 BIG-IP, or Citrix NetScaler) for your PSC:

  • The load balancer is not actually "load balancing" the incoming requests and spreading the load across the different backend PSC nodes
  • Although all PSCs behind the load balancer are in an Active/Active configuration (multi-master replication), the load balancer itself is configured to be affinitized to just a single PSC node

When talking to customers, they are generally surprised when I mention the above observations. When replication is set up between one or more PSC nodes, all nodes operate in an Active/Active configuration and any one of the PSC nodes can service incoming requests. However, in a load balanced configuration, a single PSC node is actually "affinitized" to the load balancer and is the one that provides services to the registered vCenter Servers. From the vCenter Server's point of view, only a single PSC is really active in servicing requests, even though all PSC nodes are technically in an Active/Active state. If you look at the implementation guides for the three supported load balancers (links above), you will see that this artificial "Active/Passive" behavior is accomplished by specifying a higher weight/priority on the primary or preferred PSC node.

So what exactly does load balancing the PSC really buy you? Well, it does provide you with higher levels of availability for your PSC infrastructure, but it does this by simply failing over to one of the other available PSC nodes when the primary/preferred PSC node is no longer available or responding. Prior to vSphere 6.0 Update 1, this was the only option for providing higher availability to your PSC infrastructure outside of using vSphere HA and SMP-FT. If you ask me, this is a pretty complex and potentially costly solution just to get a basic automatic node failover, without any of the real benefits of setting up a load balancer in the first place.

In vSphere 6.0 Update 1, we introduced a new capability that allows you to repoint an existing vCenter Server to another PSC node, as long as it is part of the same SSO Domain. What is really interesting about this feature is that you can get behavior similar to what you would have gotten with load balancing your PSC, minus the added complexity and cost of actually setting up the load balancer and the associated configurations on the PSC.
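For reference, the repoint operation on the VCSA is a single command (the PSC FQDN below is a placeholder; consult the vSphere 6.0 Update 1 documentation for your specific deployment):

cmsso-util repoint --repoint-psc psc-02.primp-industries.com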

[Diagram: load-balancing-psc, comparing a load balanced PSC deployment (left) with manually repointing across PSC nodes (right)]
In the diagram above, instead of using a load balancer as shown on the left, the alternative solution shown on the right is to manually "fail over", or repoint, to one of the other available and Active PSC nodes when the primary/preferred node is no longer responding. With this solution, you are still deploying the same number of PSCs and setting up replication between the PSC nodes, but instead of relying on the load balancer to perform the failover for you automatically, you would perform this operation yourself using the new repoint functionality. The biggest benefit here is that you get the same outcome as the load balanced configuration without the added complexity of setting up and managing one or more load balancers, which in my opinion is a huge cost. At the end of the day, both solutions are fully supported by VMware, and it is important to understand what capabilities a load balancer actually provides and whether it makes sense for your organization to take on this complexity based on your SLAs.

The only downside to this solution is that when a failure occurs with the primary/preferred PSC, manual intervention is required to repoint to one of the available Active PSC nodes. Would it not be cool if this was automated? ... 🙂

Well, I am glad you asked, as this is exactly what I had thought about. Below is a sneak peek at a log snippet from a script that I prototyped for the VCSA, which runs a scheduled job to periodically check the health of the primary/preferred PSC node. When it detects a failure, it will retry N number of times, and once it concludes that the node has failed, it will automatically initiate a failover to an available Active PSC node. In addition, if you have an SMTP server configured on your vCenter Server, it can also send out an email notification about the failover. Stay tuned for a future blog post with more details on the script, which can be found here.

[Screenshot: log output from the prototype PSC failover script]
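To give a rough idea of the overall logic, here is a minimal illustrative sketch in Python; this is not the actual script, and the PSC hostnames, the /websso/ health probe endpoint and the retry values are all assumptions:

#!/usr/bin/env python
# Minimal illustrative sketch of a PSC health check/failover loop.
# Assumptions: runs on the VCSA (Python 2 era), cmsso-util is in the PATH,
# and the hostnames below are placeholders.
import subprocess
import time
import urllib2

PREFERRED_PSC = "psc-01.primp-industries.com"  # placeholder
FAILOVER_PSC = "psc-02.primp-industries.com"   # placeholder
MAX_RETRIES = 3        # consecutive failures before failing over
CHECK_INTERVAL = 60    # seconds between health checks

def psc_healthy(host):
    """Treat the PSC as healthy if its SSO endpoint answers over HTTPS."""
    try:
        # Certificate validation is intentionally glossed over in this sketch
        urllib2.urlopen("https://%s/websso/" % host, timeout=10)
        return True
    except Exception:
        return False

failures = 0
while True:
    if psc_healthy(PREFERRED_PSC):
        failures = 0
    else:
        failures += 1
        if failures >= MAX_RETRIES:
            # Repoint the VCSA to the surviving PSC node, then stop checking.
            # An email notification via the configured SMTP server could be
            # sent from here as well.
            subprocess.call(["cmsso-util", "repoint",
                             "--repoint-psc", FAILOVER_PSC])
            break
    time.sleep(CHECK_INTERVAL)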

Categories // Automation, vSphere 6.0 Tags // load balancer, platform service controller, psc, vSphere 6.0

Automating post-configurations for both PSC & VCSA 6.0u1 using appliancesh

11.23.2015 by William Lam // 4 Comments

In vSphere 6.0, we introduced a new command-line option that allows you to automate both the deployment and upgrade of a vCenter Server Appliance (VCSA) and Platform Services Controller (PSC) using a simple JSON configuration file. This has been a very popular request from customers, and one that I had been asking for for some time, so I was glad to see it finally made available with the VCSA. One thing that was still missing from an automation standpoint was being able to perform some basic post-configuration after the initial deployment. Common operations such as adding additional user accounts, configuring SNMP for monitoring or adding a proxy server were available, but had to be done interactively and manually.

In vSphere 6.0 Update 1, an enhancement was made to the appliancesh interface which now allows customers to automate the post-configuration of either a VCSA or PSC by simply redirecting a file containing a series of appliancesh commands over SSH. Although SSH may not be ideal for all customers, and a programmatic interface via an API is ultimately where we want to get to, this at least allows customers to automate the end-to-end deployment of both the VCSA and PSC, as well as any additional post-configuration that might be required to stand up a vSphere environment.

To make use of this feature, you simply create a file that contains the list of appliancesh commands you wish to run on either the VCSA and/or PSC. Here is an example configuration file called psc.config (you can name it anything you want):

access.shell.set --enabled false
access.ssh.set --enabled false
ntp.server.add --servers "0.pool.ntp.org,1.pool.ntp.org"
timesync.set --mode NTP
services.restart --name ntp
proxy.set --protocol https --server proxy.primp-industries.com
localaccounts.user.add --email *protected email* --role operator --fullname 'William Lam' --username lamw --password 'VMware1!'
snmp.set --communities public --targets 192.168.1.160@161/public
snmp.enable

Once you have saved the configuration file, you simply SSH to either your VCSA or PSC and redirect the configuration file by running the following command:

ssh *protected email* < psc.config

Once authenticated, the series of appliancesh commands will be executed, and you will then be automatically logged off, as seen in the screenshot below.
[Screenshot: appliancesh commands executing after SSH redirect]
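Since the configuration file is simply redirected over SSH, you can also replay the same file against several appliances in one go; a quick example from a POSIX shell, with placeholder hostnames:

for host in psc-01 psc-02 vcsa-01; do ssh root@${host} < psc.config; done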
If you have any feedback in this particular area, please leave a comment, as I know both PM and Engineering are interested in hearing your thoughts on what you might want to see in the future in terms of post-configuration of the VCSA and PSC.

Categories // Automation, VCSA, vSphere 6.0 Tags // appliancesh, psc, vami, vcenter server appliance, VCSA, vcva, vSphere 6.0 Update 1

Automating the silent installation of Site Recovery Manager 6.0/6.1 w/Embedded vPostgres DB

11.09.2015 by William Lam // 4 Comments

For customers looking to automate the latest release of Site Recovery Manager (SRM) 6.0 / 6.1 with an Embedded vPostgres DB, you may have found that my previous deployment scripts for SRM 5.8 no longer work with the latest release. The reason is that SRM 6.x now supports the Platform Services Controller (PSC), and in doing so introduces a couple of new silent installer flags that are now required. With the help of the SRM Engineering team, I was able to modify my script to include these new options for automating the silent installation of both SRM 6.0 and 6.1. You can download the new script, called install_srm6x.bat.

Before using this script, I highly recommend that you take a look at my previous article here, which provides more details on how the script works in general.

There are five new silent options introduced with SRM 6.x, all of which are required:

  • PLATFORM_SERVICES_CONTROLLER_HOST - The hostname of the Platform Services Controller
  • PLATFORM_SERVICES_CONTROLLER_PORT - The port for the PSC, default is 443 (recommend leaving this at the default)
  • PLATFORM_SERVICES_CONTROLLER_THUMBPRINT - PSC SSL SHA1 Thumbprint (must be in all CAPS; see the command after this list for one way to retrieve it)
  • SSO_ADMIN_USER - The SSO Administrator account (e.g. *protected email*)
  • SSO_ADMIN_PASSWORD - The SSO Administrator password
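If you need to grab the SHA1 thumbprint, one way to retrieve it from any system with openssl installed (the PSC hostname below is a placeholder) is:

echo | openssl s_client -connect psc-01.primp-industries.com:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1

openssl prints the fingerprint in uppercase, colon-separated form, which matches the all-CAPS format the installer expects.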

In addition to the options above, you will still need to populate the following options; the script outlines which values need to be modified before running it. A hypothetical excerpt after the note below shows how everything fits together.

  • SRM_INSTALLER - The full path to the SRM 6.x installer
  • DR_TXT_VCHOSTNAME - vCenter Server Hostname
  • DR_TXT_VCUSR - vCenter Server Username
  • DR_TXT_VCPWD - vCenter Server Password
  • VC_CERTIFICATE_THUMBPRINT - vCenter Server SSL SHA1 Thumbprint (Must be in all CAPS)
  • DR_TXT_LSN - SRM Local Site Name
  • DR_TXT_ADMINEMAIL - SRM Admin Email Address
  • DR_CB_HOSTNAME_IP - SRM Server IP/Hostname
  • DR_TXT_CERTPWD - SSL Certificate Password
  • DR_TXT_CERTORG - SSL Certificate Organization Name
  • DR_TXT_CERTORGUNIT - SSL Certificate Organization Unit Name
  • DR_EMBEDDED_DB_DSN - SRM DB DSN Name
  • DR_EMBEDDED_DB_USER - SRM DB Username
  • DR_EMBEDDED_DB_PWD - SRM DB Password
  • DR_SERVICE_ACCOUNT_NAME - Windows System Account to run SRM Service

Note: If you deployed either your vCenter Server or PSC using an FQDN, be sure to specify that for both DR_TXT_VCHOSTNAME and PLATFORM_SERVICES_CONTROLLER_HOST. This is a change in behavior compared to SRM 5.8, which only required the IP Address of the vCenter Server.
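To make the moving parts concrete, here is a hypothetical batch excerpt in the same spirit as install_srm6x.bat (not the actual script); all values are placeholders, and the invocation assumes the standard InstallShield /s /v silent passthrough used by the SRM installer:

:: Hypothetical excerpt; values are placeholders, see install_srm6x.bat for the real script
set SRM_INSTALLER=C:\Installers\VMware-srm-installer.exe
set DR_TXT_VCHOSTNAME=vcenter.primp-industries.com
set PLATFORM_SERVICES_CONTROLLER_HOST=psc-01.primp-industries.com
set PLATFORM_SERVICES_CONTROLLER_PORT=443
set PLATFORM_SERVICES_CONTROLLER_THUMBPRINT=AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD
set SSO_ADMIN_USER=administrator@vsphere.local
set SSO_ADMIN_PASSWORD=VMware1!

:: /s silences the bootstrapper, /v passes the quoted properties through to msiexec
set MSI_ARGS=/qn PLATFORM_SERVICES_CONTROLLER_HOST=%PLATFORM_SERVICES_CONTROLLER_HOST% PLATFORM_SERVICES_CONTROLLER_PORT=%PLATFORM_SERVICES_CONTROLLER_PORT% PLATFORM_SERVICES_CONTROLLER_THUMBPRINT=%PLATFORM_SERVICES_CONTROLLER_THUMBPRINT% SSO_ADMIN_USER=%SSO_ADMIN_USER% SSO_ADMIN_PASSWORD=%SSO_ADMIN_PASSWORD% DR_TXT_VCHOSTNAME=%DR_TXT_VCHOSTNAME%
"%SRM_INSTALLER%" /s /v"%MSI_ARGS%"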

If you run into any issues, you can take a look at the logs that are generated. From what I have seen, you will normally get a 1603 error code; you then need to step back through the logs until you find the actual error.

Categories // Automation, SRM, vSphere 6.0 Tags // site recovery manager, srm, vpostgres, VSAN, vSphere Replication
