Automating the silent installation of Site Recovery Manager 6.0/6.1 w/Embedded vPostgres DB

11.09.2015 by William Lam // 4 Comments

For customers looking to automate the latest release of Site Recovery Manager 6.0 / 6.1 with an Embedded vPostgres DB, you may have found that my previous deployment scripts for SRM 5.8 no longer work with the latest release. The reason is that SRM 6.x now supports the Platform Services Controller (PSC), and with that support comes a handful of new silent installer flags that are now required. With the help of the SRM Engineering team, I was able to modify my script to include these new options for automating the silent installation of both SRM 6.0 and 6.1. You can download the new script, called install_srm6x.bat.

Before using this script, I highly recommend that you take a look at my previous article here, which provides more details on how the script works in general.

There are five new silent installer options introduced with SRM 6.x, all of which are required:

  • PLATFORM_SERVICES_CONTROLLER_HOST - The hostname of the Platform Services Controller
  • PLATFORM_SERVICES_CONTROLLER_PORT - The port for the PSC, default is 443 (recommend leaving this the default)
  • PLATFORM_SERVICES_CONTROLLER_THUMBPRINT - PSC SSL SHA1 Thumbprint (Must be in all CAPS)
  • SSO_ADMIN_USER - The SSO Administrator account (e.g. administrator@vsphere.local)
  • SSO_ADMIN_PASSWORD - The SSO Administrator password

In addition to the options above, you will still need to populate the options below; the script outlines which values must be modified before you run it. A sketch of how these variables feed the silent installer follows the note after this list.

  • SRM_INSTALLER - The full path to the SRM 6.x installer
  • DR_TXT_VCHOSTNAME - vCenter Server Hostname
  • DR_TXT_VCUSR - vCenter Server Username
  • DR_TXT_VCPWD - vCenter Server Password
  • VC_CERTIFICATE_THUMBPRINT - vCenter Server SSL SHA1 Thumbprint (Must be in all CAPS)
  • DR_TXT_LSN - SRM Local Site Name
  • DR_TXT_ADMINEMAIL - SRM Admin Email Address
  • DR_CB_HOSTNAME_IP - SRM Server IP/Hostname
  • DR_TXT_CERTPWD - SSL Certificate Password
  • DR_TXT_CERTORG - SSL Certificate Organization Name
  • DR_TXT_CERTORGUNIT - SSL Certificate Organization Unit Name
  • DR_EMBEDDED_DB_DSN - SRM DB DSN Name
  • DR_EMBEDDED_DB_USER - SRM DB Username
  • DR_EMBEDDED_DB_PWD - SRM DB Password
  • DR_SERVICE_ACCOUNT_NAME - Windows System Account to run SRM Service

Note: If you deployed either your vCenter Server or PSC using an FQDN, be sure to specify the FQDN for both DR_TXT_VCHOSTNAME and PLATFORM_SERVICES_CONTROLLER_HOST. This is a change in behavior compared to SRM 5.8, which only required the IP Address of the vCenter Server.
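To give a feel for how the new flags tie in, below is a rough sketch of the pattern the script follows: populate the variables at the top of the batch file and hand them to the installer as Windows Installer properties through the InstallShield-style /S /v"/qr ..." silent switches, the same approach used by the SRM 5.8 script. The hostnames, passwords and thumbprint here are placeholders, and the complete property string is in install_srm6x.bat:

REM Sketch only - fill in values for your environment (the full variable list is in install_srm6x.bat)
set SRM_INSTALLER=C:\Installers\VMware-srm-6.1.0.exe
set PLATFORM_SERVICES_CONTROLLER_HOST=psc.example.com
set PLATFORM_SERVICES_CONTROLLER_PORT=443
set PLATFORM_SERVICES_CONTROLLER_THUMBPRINT=AB:CD:EF:12:34:56:78:90:AB:CD:EF:12:34:56:78:90:AB:CD:EF:12
set SSO_ADMIN_USER=administrator@vsphere.local
set SSO_ADMIN_PASSWORD=VMware1!
set DR_TXT_VCHOSTNAME=vcenter.example.com

REM Pass the values through as Windows Installer properties (the remaining DR_* options follow the same pattern)
%SRM_INSTALLER% /S /v"/qr PLATFORM_SERVICES_CONTROLLER_HOST=\"%PLATFORM_SERVICES_CONTROLLER_HOST%\" PLATFORM_SERVICES_CONTROLLER_PORT=\"%PLATFORM_SERVICES_CONTROLLER_PORT%\" PLATFORM_SERVICES_CONTROLLER_THUMBPRINT=\"%PLATFORM_SERVICES_CONTROLLER_THUMBPRINT%\" SSO_ADMIN_USER=\"%SSO_ADMIN_USER%\" SSO_ADMIN_PASSWORD=\"%SSO_ADMIN_PASSWORD%\" DR_TXT_VCHOSTNAME=\"%DR_TXT_VCHOSTNAME%\" /L*v %TEMP%\srm-silent-install.log"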

If you run into any issues, take a look at the logs that are generated. From what I have seen, you will normally get a generic 1603 error code; step back through the logs and you will eventually find the actual error.

Categories // Automation, SRM, vSphere 6.0 Tags // site recovery manager, srm, vpostgres, VSAN, vSphere Replication

Docker Container for the Ruby vSphere Console (RVC)

11.08.2015 by William Lam // 2 Comments

The Ruby vSphere Console (RVC) is an extremely useful tool for vSphere Administrators and has been bundled as part of vCenter Server (Windows and the vCenter Server Appliance) since vSphere 6.0. One feature that is only available in the VCSA's version of RVC is the VSAN Observer, which is used to capture and analyze performance statistics for a VSAN environment for troubleshooting purposes.

For customers who are still using the Windows version of vCenter Server and wish to leverage this tool, the general recommendation is to deploy a standalone VCSA just for the VSAN Observer capability, which does not require any additional licensing. Although it only takes 10 minutes or so to set up, having to download and deploy a full-blown VCSA just to use the VSAN Observer is definitely not ideal, especially if you are resource constrained in your environment. You may also only need the VSAN Observer for a short amount of time, but it could take you longer to deploy, and in a troubleshooting situation, time is of the essence.

I recently came across an internal Socialcast thread, and one of the suggestions was: why not build a tiny Photon OS VM that already contains RVC? Instead of building a Photon OS image specific to RVC, why not just create a Docker Container for RVC? This also means you could pull down the Docker Container from Photon OS or any other system that has Docker installed. In fact, I had already built a Docker Container for some handy VMware utilities, so it was simple enough to build an RVC Docker Container as well.

The one challenge I had was that the current RVC GitHub repo does not contain the latest vSphere 6.x changes. The fix was simple: I copied the latest RVC files from a vSphere 6.0 Update 1 deployment of the VCSA (/opt/vmware/rvc and /usr/bin/rvc) and used them to build my RVC Docker Container, which is now hosted on Docker Hub here and includes the Dockerfile in case anyone is interested in how I built it.

To use the RVC Docker Container, you just need access to a Linux Container Host, for example VMware Photon OS, which can be deployed using an ISO or OVA. For instructions on setting that up, please take a look here; it should only take a minute or so. Once logged in, run the following commands to pull down the RVC Docker Container and start it:

docker pull lamw/rvc
docker run --rm -it lamw/rvc

[Screenshot: ruby-vsphere-console-docker-container-1]
As seen in the screenshot above, once the Docker Container has started, you can access RVC like you normally would. Below is a quick example of logging into one of my VSAN environments and using RVC to run the VSAN Health Check command.

[Screenshot: ruby-vsphere-console-docker-container-0]
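For those who want to follow along without squinting at the screenshot, the session boils down to something like this (a sketch only; the vCenter hostname and the inventory path to the cluster are hypothetical, and the vsan.health commands assume the 6.0 Update 1 RVC bits bundled in the container):

# Connect RVC to a vCenter Server (hypothetical hostname, default SSO account)
rvc administrator@vsphere.local@vcenter.example.com

# Navigate to the VSAN enabled cluster and run the health summary
cd /vcenter.example.com/Datacenter/computers/VSAN-Cluster
vsan.health.health_summary .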
If you wish to run the VSAN Observer with the live web server, you will need to map a port on the Linux Container Host to the VSAN Observer port, which runs on 8010 by default, when starting the RVC Docker Container. To keep things simple, I recommend mapping 80->8010, which you can do with the following command:

docker run --rm -it -p 80:8010 lamw/rvc

Once the RVC Docker Container has started, you can start the VSAN Observer with the --run-webserver option, and if you then connect to the IP Address of your Linux Container Host using a browser, you should see the VSAN Observer Stats UI.
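Inside the RVC session, that step looks roughly like the following (again a sketch; the cluster path is hypothetical, and --force is commonly included when running the live web server):

# Start the live VSAN Observer web server (listens on 8010 by default, which we mapped to port 80 above)
vsan.observer /vcenter.example.com/Datacenter/computers/VSAN-Cluster --run-webserver --force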

Hopefully this will come in handy for anyone who needs to quickly access RVC.

Categories // Docker, VSAN, vSphere 6.0 Tags // container, Docker, Photon, ruby vsphere console, rvc, vcenter server appliance, VCSA, vcva, VSAN, VSAN 6.1, vSphere 6.0 Update 1

Automating full configuration of a VSAN Stretched Cluster using RVC

10.23.2015 by William Lam // Leave a Comment

A couple of weeks back, I spent some time setting up several VSAN Stretched Clusters in my lab for some testing, and although it was extremely easy to set up using the vSphere Web Client, I still prefer to stand up the environment completely automated 🙂

In looking to automate the VSAN Stretched Cluster configuration, I was interested in something that would pretty much work out of the box and not require any additional downloads or setup. The obvious answer was the Ruby vSphere Console (RVC), a really awesome tool that is available as part of vCenter Server, in both the Windows vCenter Server and the VCSA.

For those of you who have not used RVC before, I highly recommend you give it a try, and you can take a look at this article to see some of its cool features and benefits. I am making use of the RVC script option, which I have written about in the past here, to perform the VSAN Stretched Cluster configuration. One of the new RVC namespaces introduced in vSphere 6.0 Update 1 is vsan.stretchedcluster.*, and the command we are specifically interested in is vsan.stretchedcluster.config_witness.

There are a couple of things the script expects from the environment, so I will spend a few minutes covering the pre-reqs and assumptions before diving into the script. I assume you already have a vCenter Server deployed and configured with an empty inventory. I also assume you have already deployed at least two ESXi hosts and a VSAN Witness VM that meet all the VSAN pre-reqs, such as at least one VSAN-enabled VMkernel interface and the associated disk requirements. Below is a screenshot from the vSphere Web Client of the initial environment.

[Screenshot: automate-the-full-configuration-of-vsan-stretched-cluster-using-rvc-0]
Next, we need to download the RVC script deploy_stretch_cluster.rb and upload it to the vCenter Server. Before you can execute the script, you will need to edit it and adjust the variables based on your environment. Once you have saved the changes, you can run the RVC script with the following command:

rvc -s deploy_stretch_cluster.rb [VC-USERNAME]@localhost
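For context, the command highlighted earlier, vsan.stretchedcluster.config_witness, is what adds the Witness host to the cluster; run interactively, it would look something like this (a sketch with hypothetical inventory paths and fault domain name; check the command's --help in your RVC build for the exact arguments):

# Mark the cluster and the witness host first to keep the command readable (paths are hypothetical)
mark cluster /localhost/Datacenter/computers/VSAN-Cluster
mark witness /localhost/Datacenter/computers/vsan-witness.example.com/hosts/vsan-witness.example.com

# Add the witness host to the stretched cluster, naming the preferred fault domain
vsan.stretchedcluster.config_witness ~cluster ~witness Preferred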

Here is a screenshot of running the script on the VCSA using Nested ESXi VMs + VSAN Witness VM for the Stretched Clustering configuration:

[Screenshot: automate-the-full-configuration-of-vsan-stretched-cluster-using-rvc-1]
If everything executed successfully, you should see a "Task result: success", which signifies that the VSAN Witness VM was successfully added to the VSAN Stretched Cluster. If we now refresh the vSphere Web Client and look under the Fault Domains configuration of the VSAN Cluster, we can see both our 2-Node VSAN Cluster and the VSAN Witness VM.

[Screenshot: automate-the-full-configuration-of-vsan-stretched-cluster-using-rvc-2]

Hopefully this script can also benefit others who are interested in quickly standing up a VSAN Stretched Cluster, especially for evaluation or testing purposes. Enjoy getting your VSAN on!

Categories // Automation, ESXi, VSAN, vSphere 6.0 Tags // ruby vsphere console, rvc, stretched cluster, VSAN, VSAN 6.1
