Neat way of installing or updating any VIB using just the ESXi Embedded Host Client

11.10.2015 by William Lam // 5 Comments

A couple of months back, I tossed out an idea on Twitter asking whether others would like to see an automatic update mechanism built into the ESXi Embedded Host Client, which would allow users to easily update to newer releases of the Fling instead of the current method of copying the VIB to the host and then running a command in the ESXi Shell.

Wonder if its just me,but would others like to see an automatic update mechanism in the ESXi Embedded Host Client UI? pic.twitter.com/R9KFMOE4zu

— William Lam (@lamw) August 26, 2015

To no one's surprise, the feedback was an astounding yes! Literally within a couple of hours, Etienne Le Sueur, one of the two VMware Engineers working on the Fling, shared a screenshot demonstrating that this would be possible. The first iteration of this feature simply asks for the URL to the updated ESXi Embedded Host Client VIB, and it was included in the v3 release of the Fling.

One additional tidbit that Etienne shared was that, because of the way this feature was implemented, it is not limited to the Embedded Host Client VIB; you can use it for any ESXi VIB. It works by using the vSphere API and calling the InstallHostPatchV2_Task() method, which allows you to install or update an ESXi VIB from a URL source. More recently, there was a Twitter conversation between myself, Etienne, and Christian Mohn on how this capability could be further extended to update ESXi itself, either from an Image Profile or an offline bundle. For those with a keen eye, you may have noticed that the same API method also supports an offline bundle URL, which would make this possible. As of right now, that feature is included in an internal build of the Embedded Host Client, but perhaps we will see it in a future update? 😉
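For those curious about what the Embedded Host Client is doing under the covers, here is a minimal pyVmomi sketch of calling the same InstallHostPatchV2_Task() method. This is not the Fling's own code; the hostname, credentials, and VIB URL are placeholders you would replace with your own.

# Minimal pyVmomi sketch: install/update a VIB from a URL via InstallHostPatchV2_Task()
# (placeholder hostname, credentials, and VIB URL; lab use only)
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # skip certificate verification for a lab host
si = SmartConnect(host='esxi.example.com', user='root', pwd='VMware1!', sslContext=ctx)

# When connecting directly to an ESXi host, the inventory contains a single HostSystem
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

# vibUrls can point to an HTTP URL or, as this post goes on to demonstrate, a local VMFS path
task = host.configManager.patchManager.InstallHostPatchV2_Task(
    vibUrls=['http://webserver.example.com/esxui.vib'])
print('Started task:', task.info.key)
Disconnect(si)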

Going back to the original topic of this blog post: to use the VIB install/update mechanism, you first need to upload the ESXi VIB to an HTTP server and then specify its URL. This is fine if you already have an HTTP server, but if you do not, it is a bit of a pain. There are other methods, such as uploading directly to ESXi's Python-based HTTP server as mentioned by Christian, but that still requires something like SCP, which is an additional step. My goal was to be able to install or update an ESXi VIB, or ESXi itself, using purely the Embedded Host Client. This keeps things simple and does not require SSH to be enabled on the ESXi host.

After a bit of brainstorming, Etienne found a super clever way of accomplishing this. My idea was to use an ESXi datastore to store the VIB, since files can be uploaded to a datastore through the Embedded Host Client. There is also an HTTP-based interface to the datastore by default, but it requires authentication, which would be a problem. The neat suggestion was: why not try specifying the local VMFS path to the ESXi VIB (e.g. /vmfs/volumes/datastore1/my.vib)? It turns out that this actually works as well!

With just two easy steps, you can now upload an ESXi VIB and then install or update it using just the Embedded Host Client, with no additional dependencies.

Step 1 - Navigate to the Datastore section in the Embedded Host Client and then upload the ESXi VIB that you wish to install or update.

[Screenshot: uploading the ESXi VIB to a datastore using the Embedded Host Client datastore browser]
Step 2 - To install/update the VIB, click on Help in the upper right-hand corner of the Embedded Host Client and select the "Update" option. Specify the local VMFS path to the ESXi VIB and then click Update to apply.

Note: A reboot may be required after applying a new VIB. It is your responsibility to shut down the VMs and reboot the ESXi host for the changes to take effect, if required.

[Screenshot: specifying the local VMFS path to the VIB in the Embedded Host Client Update dialog]
At this point, you should see a task kicked off to apply the VIB. If any errors are thrown, they will be displayed; otherwise you should see the task complete successfully. For educational purposes, here is a quick screenshot of /var/log/esxupdate.log showing the VIB being applied; this log can be used for further troubleshooting if required.

[Screenshot: /var/log/esxupdate.log showing the VIB being applied]
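For reference, the traditional approach that this trick avoids would be to enable SSH and run something like the following from the ESXi Shell, reusing the example VIB path from above; the second command simply follows the same esxupdate log for troubleshooting:

esxcli software vib install -v /vmfs/volumes/datastore1/my.vib
tail -f /var/log/esxupdate.log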
I hope you enjoyed this neat little trick: with just two easy steps, you can install or update any ESXi VIB using the Embedded Host Client, without additional dependencies or enabling SSH on the ESXi host.

Categories // ESXi Tags // embedded host client, ESXi, fling, vib

Automating the silent installation of Site Recovery Manager 6.0/6.1 w/Embedded vPostgres DB

11.09.2015 by William Lam // 4 Comments

For customers looking to automate the latest release of Site Recovery Manager 6.0/6.1 with an Embedded vPostgres DB, you may have found that my previous deployment scripts for SRM 5.8 no longer work with the latest release. The reason is that SRM 6.x now supports the Platform Services Controller (PSC), and as a result there are a couple of new silent installer flags that are now required. With the help of the SRM Engineering team, I was able to modify my script to include these new options for automating the silent installation of both SRM 6.0 and 6.1. You can download the new script, called install_srm6x.bat.

Before using this script, I highly recommend that you take a look at my previous article here, which provides more details on how the script works in general.

There are five new silent options introduced with SRM 6.x, all of which are required:

  • PLATFORM_SERVICES_CONTROLLER_HOST - The hostname of the Platform Services Controller
  • PLATFORM_SERVICES_CONTROLLER_PORT - The port for the PSC, default is 443 (recommend leaving this the default)
  • PLATFORM_SERVICES_CONTROLLER_THUMBPRINT - The PSC SSL SHA1 thumbprint (must be in all CAPS; see the sketch after the note below for one way to retrieve it)
  • SSO_ADMIN_USER - The SSO Administrator account (e.g. administrator@vsphere.local)
  • SSO_ADMIN_PASSWORD - The SSO Administrator password

In addition to the options above, you will still need to populate the following options; the script outlines which of them need to be modified before running it.

  • SRM_INSTALLER - The full path to the SRM 6.x installer
  • DR_TXT_VCHOSTNAME - vCenter Server Hostname
  • DR_TXT_VCUSR - vCenter Server Username
  • DR_TXT_VCPWD - vCenter Server Password
  • VC_CERTIFICATE_THUMBPRINT - vCenter Server SSL SHA1 Thumbprint (Must be in all CAPS)
  • DR_TXT_LSN - SRM Local Site Name
  • DR_TXT_ADMINEMAIL - SRM Admin Email Address
  • DR_CB_HOSTNAME_IP - SRM Server IP/Hostname
  • DR_TXT_CERTPWD - SSL Certificate Password
  • DR_TXT_CERTORG - SSL Certificate Organization Name
  • DR_TXT_CERTORGUNIT - SSL Certificate Organization Unit Name
  • DR_EMBEDDED_DB_DSN - SRM DB DSN Name
  • DR_EMBEDDED_DB_USER - SRM DB Username
  • DR_EMBEDDED_DB_PWD - SRM DB Password
  • DR_SERVICE_ACCOUNT_NAME - Windows System Account to run SRM Service

Note: If you deployed either your vCenter Server or PSC using an FQDN, be sure to specify that FQDN for both DR_TXT_VCHOSTNAME and PLATFORM_SERVICES_CONTROLLER_HOST. This is a change in behavior compared to SRM 5.8, which only required the IP address of the vCenter Server.
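Both of the thumbprint options (PLATFORM_SERVICES_CONTROLLER_THUMBPRINT and VC_CERTIFICATE_THUMBPRINT) expect the SHA1 fingerprint in all caps, in the colon-separated format typically used for VMware thumbprints. Here is a minimal Python sketch of one way to retrieve it; the hostname is a placeholder, and this helper is not part of the install_srm6x.bat script itself.

# Minimal sketch: fetch a server certificate and print its SHA1 thumbprint
# in the uppercase, colon-separated form. The hostname is a placeholder.
import hashlib
import ssl

def sha1_thumbprint(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    digest = hashlib.sha1(der).hexdigest().upper()
    return ':'.join(digest[i:i + 2] for i in range(0, len(digest), 2))

print(sha1_thumbprint('psc.example.com'))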

If you run into any issues, you can take a look at the logs that are generated. From what I have seen, you will normally get a 1603 error code, in which case you need to step back through the logs until you find the actual error.

Categories // Automation, SRM, vSphere 6.0 Tags // site recovery manager, srm, vpostgres, VSAN, vSphere Replication

Docker Container for the Ruby vSphere Console (RVC)

11.08.2015 by William Lam // 2 Comments

The Ruby vSphere Console (RVC) is an extremely useful tool for vSphere Administrators and has been bundled as part of vCenter Server (Windows and the vCenter Server Appliance) since vSphere 6.0. One feature that is only available in the VCSA's version of RVC is the VSAN Observer, which is used to capture and analyze performance statistics of a VSAN environment for troubleshooting purposes.

For customers who are still using the Windows version of vCenter Server and wish to leverage this tool, the general recommendation is to deploy a standalone VCSA just for the VSAN Observer capability, which does not require any additional licensing. Although it only takes 10 minutes or so to set up, having to download and deploy a full-blown VCSA just to use the VSAN Observer is definitely not ideal, especially if you are resource-constrained in your environment. You may also only need the VSAN Observer for a short amount of time, yet it could take you longer to deploy, and in a troubleshooting situation, time is of the essence.

I recently came across an internal Socialcast thread in which one of the suggestions was: why not build a tiny Photon OS VM that already contains RVC? Instead of building a Photon OS image specific to RVC, why not just create a Docker Container for RVC? This also means you could pull down the Docker Container from Photon OS or any other system that has Docker installed. In fact, I had already built a Docker Container for some handy VMware utilities, so it was simple enough to put together an RVC Docker Container as well.

The one challenge I had was that the current RVC GitHub repo does not contain the latest vSphere 6.x changes. The fix was simple: I just copied the latest RVC files from a vSphere 6.0 Update 1 deployment of the VCSA (/opt/vmware/rvc and /usr/bin/rvc) and used those to build my RVC Docker Container, which is now hosted on Docker Hub here and includes the Dockerfile in case anyone is interested in how I built it.

To use the RVC Docker Container, you just need access to a Linux Container Host, for example VMware Photon OS, which can be deployed using an ISO or OVA. For instructions on setting that up, please take a look here; it should only take a minute or so. Once logged in, you just need to run the following commands to pull down the RVC Docker Container and start it:

docker pull lamw/rvc
docker run --rm -it lamw/rvc

[Screenshot: pulling and starting the RVC Docker Container]
As seen in the screenshot above, once the Docker Container has started, you can access RVC like you normally would. Below is a quick example of logging into one of my VSAN environments and using RVC to run the VSAN health check command.

[Screenshot: logging into a VSAN environment with RVC and running the VSAN health check]
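If you want to follow along, the session looks roughly like the commands below; the vCenter address, credentials, datacenter, and cluster names are placeholders, and the exact vsan.* command names can vary slightly between RVC builds:

rvc administrator@vsphere.local@vcenter.example.com
cd /vcenter.example.com/Datacenter/computers/VSAN-Cluster
vsan.health.health_summary .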
If you wish to run the VSAN Observer with its live web server, you will need to map a port on the Linux Container Host to the VSAN Observer port, which defaults to 8010, when starting the RVC Docker Container. To keep things simple, I would recommend mapping 80->8010, which you can do by running the following command:

docker run --rm -it -p 80:8010 lamw/rvc

Once the RVC Docker Container has started, you can start the VSAN Observer with the --run-webserver option, and if you connect to the IP address of your Linux Container Host using a browser, you should see the VSAN Observer stats UI.
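From within the RVC session, starting the observer looks something like the following; the cluster path is a placeholder, and --force allows the web server to start without further prompting:

vsan.observer /vcenter.example.com/Datacenter/computers/VSAN-Cluster --run-webserver --force

With the 80->8010 mapping shown above, the stats UI is then reachable at http://<container-host-ip>/ in a browser.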

Hopefully this will come in handy for anyone who needs to quickly access RVC.

Categories // Docker, VSAN, vSphere 6.0 Tags // container, Docker, Photon, ruby vsphere console, rvc, vcenter server appliance, VCSA, vcva, VSAN, VSAN 6.1, vSphere 6.0 Update 1
