Docker Container for the Ruby vSphere Console (RVC)

11.08.2015 by William Lam // 2 Comments

The Ruby vSphere Console (RVC) is an extremely useful tool for vSphere Administrators and has been bundled as part of vCenter Server (Windows and the vCenter Server Appliance) since vSphere 6.0. One feature that is only available in the VCSA's version of RVC is the VSAN Observer which is used to capture and analyze performance statistics for a VSAN environment for troubleshooting purposes.

For customers who are still using the Windows version of vCenter Server and wish to leverage this tool, the general recommendation is to deploy a standalone VCSA just for the VSAN Observer capability, which does not require any additional licensing. Although it only takes 10 minutes or so to set up, having to download and deploy a full-blown VCSA just to use the VSAN Observer is definitely not ideal, especially if you are resource constrained in your environment. You may also only need the VSAN Observer for a short amount of time, and in a troubleshooting situation, time is of the essence.

I recently came across an internal Socialcast thread where one of the suggestions was: why not build a tiny Photon OS VM that already contains RVC? Better yet, instead of building a Photon OS image specific to RVC, why not just create a Docker Container for RVC? This also means you could pull down the Docker Container onto Photon OS or any other system that has Docker installed. In fact, since I had already built a Docker Container for some handy VMware Utilities, it was simple enough to put together an RVC Docker Container as well.

The one challenge I had was that the current RVC GitHub repo does not contain the latest vSphere 6.x changes. The fix was simple: I just copied the latest RVC files from a vSphere 6.0 Update 1 deployment of the VCSA (/opt/vmware/rvc and /usr/bin/rvc) and used them to build my RVC Docker Container, which is now hosted on Docker Hub here and includes the Dockerfile in case anyone is interested in how I built it.

To use the RVC Docker Container, you just need access to a Linux Container Host, for example VMware Photon OS, which can be deployed using an ISO or OVA. For instructions on setting that up, please take a look here; it should only take a minute or so. Once logged in, you just need to run the following commands to pull down the RVC Docker Container and start the container:

docker pull lamw/rvc
docker run --rm -it lamw/rvc

[Image: ruby-vsphere-console-docker-container-1]
As seen in the screenshot above, once the Docker Container has started, you can access RVC like you normally would. Below is a quick example of logging into one of my VSAN environments and using RVC to run the VSAN Health Check command.

[Image: ruby-vsphere-console-docker-container-0]
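
For reference, the commands shown in the screenshot look roughly like the following. This is just a sketch: the vCenter address, credentials, and datacenter/cluster names are placeholders for my environment, and the vsan.health.health_summary command requires the vSphere 6.0 Update 1 version of RVC bundled in this container:

# Inside the container, connect RVC to a vCenter Server (placeholder credentials/address)
rvc administrator@vsphere.local@192.168.1.100

# At the RVC prompt, navigate to the cluster and run the VSAN Health Check
# (the inventory path is an example; adjust the datacenter/cluster names)
cd /192.168.1.100/Datacenter/computers/VSAN-Cluster
vsan.health.health_summary .
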
If you wish to run the VSAN Observer with the live web server, you will need to map a port on the Linux Container Host to the VSAN Observer port (8010 by default) when starting the RVC Docker Container. To keep things simple, I would recommend mapping 80->8010, which you can do by running the following command:

docker run --rm -it -p 80:8010 lamw/rvc

Once the RVC Docker Container has started, you can then start the VSAN Observer with the --run-webserver option, and if you connect to the IP Address of your Linux Container Host using a browser, you should see the VSAN Observer Stats UI.
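
As a sketch, the RVC command looks something like this (the cluster path is a placeholder for your own inventory, and --force is typically included when starting the web server):

# Start the VSAN Observer live web server (listens on port 8010 by default)
vsan.observer /192.168.1.100/Datacenter/computers/VSAN-Cluster --run-webserver --force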

Hopefully this will come in handy for anyone who needs to quickly access RVC.

Categories // Docker, VSAN, vSphere 6.0 Tags // container, Docker, Photon, ruby vsphere console, rvc, vcenter server appliance, VCSA, vcva, VSAN, VSAN 6.1, vSphere 6.0 Update 1

Quick Tip - Changing default port for HTTP Reverse Proxy on both vCenter Server & ESXi

10.27.2015 by William Lam // 11 Comments

If you decide to use a custom port for the HTTP Reverse Proxy (rhttpproxy) on vCenter Server, which uses port 80 (HTTP) and 443 (HTTPS) by default, you should also apply the same change on all ESXi hosts being managed by that vCenter Server for proper functionality. The configuration files for the rhttpproxy have changed since the early days of vSphere 5.x, and their locations are now different in vSphere 6.x.

UPDATE (04/27/18) - With the release of vSphere 6.7, VMware now officially supports customizing the Reverse HTTP(s) Ports on the VCSA. This can be configured in the VCSA Installer UI and can also be customized in the JSON configuration file used by the VCSA CLI Installer for automation purposes.

Below are the instructions for modifying the default ports of the rhttpproxy service for the Windows vCenter Server, the vCenter Server Appliance (VCSA), and ESXi hosts.

Note: If you change the default ports of your vCenter Server, you will need to ensure that all VMware/3rd Party products that communicate with vCenter Server are also modified.

vCenter Server for Windows

On Windows, you will need to modify C:\ProgramData\VMware\vCenterServer\cfg\vmware-rhttpproxy\config.xml and look for the following lines to change the HTTP and/or HTTPS ports:

<httpPort>80</httpPort>
<httpsPort>443</httpsPort>
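
For example, if you wanted the reverse proxy to listen on 8080 and 8443 instead (arbitrary values chosen purely for illustration), the entries would become:

<httpPort>8080</httpPort>
<httpsPort>8443</httpsPort>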

Once you have saved the changes, you will need to restart the VMware HTTP Reverse Proxy service using Windows Services Manager.
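
If you prefer the command line, the service-control utility bundled with vCenter Server should also be able to restart the service; note that the rhttpproxy service name below is my assumption, so verify it first with the --list option:

:: Run from an elevated command prompt; verify the exact service name with --list
"C:\Program Files\VMware\vCenter Server\bin\service-control" --stop rhttpproxy
"C:\Program Files\VMware\vCenter Server\bin\service-control" --start rhttpproxy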

vCenter Server Appliance (VCSA)

On the VCSA, you will need to modify /etc/vmware-rhttpproxy/config.xml and look for the following lines to change the HTTP and/or HTTPS ports:

<httpPort>80</httpPort>
<httpsPort>443</httpsPort>

Once you have saved the changes, you will need to restart the rhttpproxy service by running the following command:

/etc/init.d/rhttpproxy restart
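
To confirm the reverse proxy is now answering on the new ports, a quick check from the VCSA shell works well (this assumes the 8080/8443 example values shown in the Windows section above):

# -k skips certificate validation since the VCSA uses a self-signed certificate by default
curl -k -I https://localhost:8443/
curl -I http://localhost:8080/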

ESXi

Disclaimer: VMware does not officially support modifying the default HTTP/HTTPS ports on an ESXi host.

Pre-ESXi 8.0 - Use the following instructions:

On ESXi, you will need to modify /etc/vmware/rhttpproxy/config.xml and look for the following lines to change the HTTP and/or HTTPS ports:

<httpPort>80</httpPort>
<httpsPort>443</httpsPort>

Once you have saved the changes, you will need to restart the rhttpproxy service by running the following command:

/etc/init.d/rhttpproxy restart
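
You can also sanity check that rhttpproxy picked up the new ports by listing the listening connections on the ESXi host (again assuming the 8080/8443 example values):

# List TCP connections and filter for the custom ports
esxcli network ip connection list | grep -E '8080|8443'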

  • For ESXi 8.0 - Please see Changing the default HTTP(s) Reverse Proxy Ports on ESXi 8.0 for updated instructions
  • For ESXi 8.0 Update 1 and later - Please see Changing the default HTTP(s) Reverse Proxy Ports on ESXi 8.0 Update 1 for updated instructions

Categories // ESXi, VCSA, vSphere, vSphere 6.0 Tags // ESXi, reverse proxy, rhttpproxy, vCenter Server, vcenter server appliance, VCSA, vcva

Building minimal vSphere demo lab using VMware Fusion/Workstation with only 8GB memory?

10.16.2015 by William Lam // 7 Comments

After tweeting this update last week, I received quite a few questions on how I was able to squeeze a vCenter Server Appliance (VCSA) & ESXi 6.0 Update 1, along with a VMware Photon VM, all running on my MacBook Air with only 8GB of memory. Although I was not able to make use of my demo, which was for my vSphere Content Library session at VMworld Europe this week, I thought I would still share the details on how I built this vSphere lab environment, which could come in handy for others.

I was able to squeeze VCSA 6.0 & ESXi 6.0 Update 1 & Photon VM on Mac Book Air w/only 8GB of memory. Chrome & terminal ran fine as well!

— William Lam (@lamw) October 7, 2015

I wanted to run everything on my MacBook Air primarily for the convenience factor, so I did not have to bring my Mac Mini, which is not ideal when traveling abroad. The performance and responsiveness of the environment was actually pretty good, and I was able to access the vSphere Web Client using Google Chrome as well as the OS X terminal for CLI operations without any problems. It definitely helps to place all VMs on SSDs, which is especially useful when swapping occurs since we are overcommitting the physical memory.

[Image: minimal-vsphere-demo-lab-on-fusion-or-workstation-with-only-8GB-of-memory-3]
Below are the instructions for building this environment, along with a quick summary of the expected memory configuration for the three VMs.

Virtual Machine                       Memory
Embedded vCenter Server Appliance VM  5GB
ESXi VM                               3GB
Photon VM                             384MB

Step 1 - Download the VCSA & ESXi 6.0 Update 1 ISOs (or any other version you wish to run). You will need to extract the contents of the VCSA ISO; the OVA is located at /vcsa/vmware-vcsa, and you will need to add the .ova extension, as shown in the sketch after the source link below.

  • Source: Ultimate automation guide to deploying VCSA 6.0 Part 1: Embedded Node
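
On OS X, one way to extract the OVA is to mount the ISO and copy the file out; below is a sketch where the ISO filename and mounted volume name are examples that will vary by build:

# Mount the VCSA ISO (filename is an example placeholder)
hdiutil attach VMware-VCSA-all-6.0.0-XXXXXXX.iso
# Copy out the OVA, adding the .ova extension in the process
cp "/Volumes/VMware VCSA/vcsa/vmware-vcsa" ~/Desktop/vmware-vcsa.ova
hdiutil detach "/Volumes/VMware VCSA"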

Step 2 - We will need to configure memory overcommitment for VMware Fusion/Workstation so that the majority of the memory can be swapped, allowing our minimal vSphere environment to run. You will need to set the value of prefvmx.minVmMemPct to 25 by adding the following line to the respective configuration file shown in the table below.

prefvmx.minVmMemPct = 25

Hypervisor          Configuration File
VMware Workstation  C:\ProgramData\VMware\VMware Workstation\config.ini
VMware Fusion       /Library/Preferences/VMware Fusion/config
  • Source: Quick Tip – How to enable memory overcommitment in VMware Fusion?
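
On Fusion, for example, you could append the setting from the OS X terminal (a sketch; create the file first if it does not already exist):

# Append the memory overcommitment setting to Fusion's global config file
echo 'prefvmx.minVmMemPct = 25' | sudo tee -a "/Library/Preferences/VMware Fusion/config"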

Step 3 - Deploy the VCSA OVA to either your VMware Fusion or Workstation deployment and ensure you do not power on the VM yet. We will need to make the following edits to the VCSA's VMX file so that it is properly configured when it is powered on. Below is an example of the VMX parameters you will need to add before powering on the VM.

guestinfo.cis.deployment.node.type = "embedded"
guestinfo.cis.vmdir.domain-name = "vghetto.local"
guestinfo.cis.vmdir.site-name = "vghetto"
guestinfo.cis.vmdir.password = "VMware1!"
guestinfo.cis.appliance.net.addr.family = "ipv4"
guestinfo.cis.appliance.net.addr = "192.168.1.54"
guestinfo.cis.appliance.net.pnid = "192.168.1.54"
guestinfo.cis.appliance.net.prefix = "24"
guestinfo.cis.appliance.net.mode = "static"
guestinfo.cis.appliance.net.dns.servers = "192.168.1.1"
guestinfo.cis.appliance.net.gateway = "192.168.1.1"
guestinfo.cis.appliance.root.passwd = "VMware1!"
guestinfo.cis.appliance.ssh.enabled = "true"


Step 4 - Power on the VCSA. Once it has been successfully configured and you can connect to it using the vSphere Web Client, you can then power it off and reduce its memory from 8GB to 5GB.

Step 5 - Create a new VM using the ESXi 6.x GuestOS type for running your Nested ESXi VM and stick with the default of 4GB of memory to be able to install ESXi. Once the VM has been created, go ahead and install ESXi using the ISO as you normally would.

Step 6 - Once the ESXi VM has been successfully installed and booted up, you can then power it off and reduce its memory from 4GB to 3GB.

Step 7 (Optional) - If you wish to play with VMware Photon, you can also install Photon using the ISO, which can be downloaded from here, or deploy the OVA, which can be downloaded from here.

For folks who have more memory in their system, you could add an additional two Nested ESXi VMs to run a full VSAN setup, giving you a pretty powerful lab with a minimal resource footprint that you can bring with you anywhere to run demos or for development and testing purposes. I also highly recommend making use of the "Suspend" operation when you need to quickly free up memory or run other applications; it lets you resume the entire environment in just a few seconds, rather than powering the whole setup down and back up, which takes much longer.

Categories // Apple, Fusion, vSphere 6.0 Tags // apple, ESXi, fusion, Photon, vcenter server appliance, VCSA, vcva, workstation

