WilliamLam.com

Docker Container for the Ruby vSphere Console (RVC)

11.08.2015 by William Lam // 2 Comments

The Ruby vSphere Console (RVC) is an extremely useful tool for vSphere Administrators and has been bundled as part of vCenter Server (Windows and the vCenter Server Appliance) since vSphere 6.0. One feature that is only available in the VCSA's version of RVC is the VSAN Observer which is used to capture and analyze performance statistics for a VSAN environment for troubleshooting purposes.

For customers who are still using the Windows version of vCenter Server and wish to leverage this tool, the general recommendation is to deploy a standalone VCSA just for the VSAN Observer capability, which does not require any additional licensing. Although it only takes 10 minutes or so to set up, having to download and deploy a full-blown VCSA just to use the VSAN Observer is definitely not ideal, especially if you are resource constrained in your environment. You may also only need the VSAN Observer for a short amount of time, and in a troubleshooting situation, time is of the essence.

I recently came across an internal Socialcast thread in which one of the suggestions was to build a tiny Photon OS VM that already contained RVC. Rather than building a Photon OS image dedicated to RVC, why not just create a Docker Container for RVC? That way, you could pull down the Docker Container from Photon OS or any other system that has Docker installed. In fact, I had already built a Docker Container for some handy VMware utilities, so it was simple enough to create an RVC Docker Container as well.

The one challenge I ran into was that the current RVC GitHub repo does not contain the latest vSphere 6.x changes. The fix was simple: I copied the latest RVC files from a vSphere 6.0 Update 1 deployment of the VCSA (/opt/vmware/rvc and /usr/bin/rvc) and used them to build my RVC Docker Container, which is now hosted on Docker Hub here and includes the Dockerfile in case anyone is interested in how I built it.

To use the RVC Docker Container, you just need access to a Linux Container Host, for example VMware Photon OS, which can be deployed using an ISO or OVA. For instructions on setting that up, please take a look here; it should only take a minute or so. Once logged in, run the following commands to pull down the RVC Docker Container and start the container:

docker pull lamw/rvc
docker run --rm -it lamw/rvc

[Image: ruby-vsphere-console-docker-container-1]
As seen in the screenshot above, once the Docker Container has started, you can access RVC like you normally would. Below is a quick example of logging into one of my VSAN environments and using RVC to run the VSAN Health Check command.

[Image: ruby-vsphere-console-docker-container-0]
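For reference, a session similar to the one in the screenshot might look like the following sketch. The vCenter address, credentials, inventory path, and cluster name are all placeholders, and the vsan.health.health_summary command assumes the vSAN health commands are present in this RVC build:

```console
# Start the RVC Docker Container
docker run --rm -it lamw/rvc

# Log in to a vCenter Server (placeholder credentials and address)
rvc administrator@vsphere.local@192.168.1.100

# Navigate to the cluster and run the VSAN Health Check
# (inventory path and cluster name are examples only)
cd /192.168.1.100/Datacenter/computers
vsan.health.health_summary VSAN-Cluster
```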
If you wish to run the VSAN Observer with its live web server, you will need to map a port on the Linux Container Host to the VSAN Observer port (8010 by default) when starting the RVC Docker Container. To keep things simple, I would recommend mapping host port 80 to container port 8010, which you can do with the following command:

docker run --rm -it -p 80:8010 lamw/rvc

Once the RVC Docker Container has started, you can then start the VSAN Observer with the --run-webserver option. If you connect to the IP Address of your Linux Container Host using a browser, you should see the VSAN Observer Stats UI.
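Putting the pieces together, the flow might look roughly like this sketch; the inventory path and cluster name below are placeholders for your own environment, and whether the --force flag is needed can depend on the RVC version:

```console
# Start the container, mapping host port 80 to the Observer's default port 8010
docker run --rm -it -p 80:8010 lamw/rvc

# Inside RVC, after logging in to vCenter, start the VSAN Observer
vsan.observer /192.168.1.100/Datacenter/computers/VSAN-Cluster --run-webserver --force

# Then point a browser at http://<linux-container-host>/ for the Stats UI
```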

Hopefully this will come in handy for anyone who needs to quickly access RVC.

Categories // Docker, VSAN, vSphere 6.0 Tags // container, Docker, Photon, ruby vsphere console, rvc, vcenter server appliance, VCSA, vcva, VSAN, VSAN 6.1, vSphere 6.0 Update 1

Using Ansible to provision a Kubernetes Cluster on VMware Photon

11.05.2015 by William Lam // 1 Comment

[Image: ansible-vmware-photon-kubernetes]
I am always interested in learning and playing with new technologies, solutions and tools. Ansible, a popular configuration management tool which was recently acquired by Red Hat, is one such tool that I have had on my to-do list for some time now. It is quite difficult to find extra free time, and with a 7-month-old at home, it has gotten even harder. However, in the last week or so I have been waking up randomly at 4-5am, and I figured I might as well put this time to good use and give Ansible a try.

As the title suggests, I will be using Ansible to deploy a Kubernetes Cluster running on top of VMware's Photon OS. The motivation behind this little project came from watching Kelsey Hightower's recorded session at HashiConf on Managing Applications at Scale, which compares HashiCorp's Nomad and Google's Kubernetes (K8s) schedulers. I knew there were already a dozen different ways to deploy K8s, but I figured I would try something new and add a VMware spin to it by using Photon OS.

I found an outdated reference on setting up K8s in the Photon OS documentation, and though a few of the steps are no longer needed, it provided a good base for creating the Ansible playbook that sets up a K8s Cluster. If you are not familiar with Ansible, this getting started guide is quite helpful. For our K8s setup, we will have a 2-Node cluster: one node is the Master and the other the Minion. If you are interested in an overview of K8s, be sure to check out the official documentation here.
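To give a rough idea of what such a playbook looks like, here is a simplified, hypothetical sketch (this is not the actual kubernetes_cluster.yml used in this post; the package and service names are illustrative, though Photon OS does use tdnf as its package manager):

```yaml
---
# Hypothetical sketch only - the real kubernetes_cluster.yml differs in its details
- hosts: kubernetes_cluster
  tasks:
    - name: Install Kubernetes on all nodes
      command: tdnf install -y kubernetes

- hosts: masters
  tasks:
    - name: Start the Kubernetes master services
      service: name={{ item }} state=started enabled=yes
      with_items:
        - kube-apiserver
        - kube-controller-manager
        - kube-scheduler

- hosts: minions
  tasks:
    - name: Start the Kubernetes node services
      service: name={{ item }} state=started enabled=yes
      with_items:
        - kubelet
        - kube-proxy
```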

Step 1 - You will need to deploy at least 2 Photon OS VMs, one for the Kubernetes Master and one for the Minion. This can be done using either the ISO or the pre-packaged OVA. For more details on how to set up Photon OS, please refer to the documentation here. This should take only a few minutes, as the installation or deployment of Photon OS is pretty quick. In my setup, I have 192.168.1.133 as the Master and 192.168.1.111 as the Minion.

Step 2 - Download and install Ansible on your client desktop. There are several options depending on the platform you plan to use. For more information take a look at the documentation here. In my setup, I will be using a Mac OS X system and you can easily install Ansible by running the following command:

brew install ansible

Step 3 - Next, to verify that our installation of Ansible was successful, we will create our inventory host file (I called it hosts, but you can name it anything you want) which contains the mappings to our Photon OS VMs. The example below assumes you do not have DNS running in your environment; I am making use of the variable options in the host file to specify friendly names rather than just the IP Addresses, which will be read in later. If you do have DNS in your environment, you do not need the last section of the file.

[kubernetes_cluster]
192.168.1.133
192.168.1.111

[masters]
192.168.1.133

[minions]
192.168.1.111

[kubernetes_cluster:vars]
master_hostname=photon-master
master_ip=192.168.1.133
minion_hostname=photon-node
minion_ip=192.168.1.111

Step 4 - We will perform a basic "ping" test to validate that Ansible is in fact working and can communicate with our deployed Photon VMs. Run the following command, which specifies the inventory host file as input:

ansible -i hosts all -m ping --user root --ask-pass

[Image: Screen Shot 2015-11-04 at 5.45.12 PM]
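If everything is working, each host should respond with a "pong" from Ansible's ping module; the exact formatting varies by Ansible version, but the output looks roughly like:

```console
192.168.1.133 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.111 | success >> {
    "changed": false,
    "ping": "pong"
}
```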
Step 5 - If the previous step was successful, we can now run our Ansible playbook, which contains the instructions for setting up our K8s Cluster. Download the kubernetes_cluster.yml to your desktop and then run the following command:

ansible-playbook -i hosts --user root --ask-pass kubernetes_cluster.yml

If you want to use SSH keys for authentication, and you have already uploaded the public keys to your Photon VMs, you can replace --ask-pass with --private-key and specify the full path to your SSH private key.
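For example, assuming your private key lives at the default path (the path below is just an assumption; adjust it for your environment), the invocation might look like:

```console
ansible-playbook -i hosts --user root --private-key ~/.ssh/id_rsa kubernetes_cluster.yml
```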

[Image: using-ansible-to-provision-kubernetes-cluster-running-on-vmware-photon-0]
Step 6 - Once the Ansible playbook has been successfully executed, you should see a summary at the end showing that everything was ok. To verify that our K8s Cluster has been properly set up, we will check the Minion node's status, which should show "Ready". To do so, log in to the K8s Master node and run the following command:

kubectl get nodes

You should see that the status field shows "Ready" which means the K8s Cluster has been properly configured.

[Image: using-ansible-to-provision-kubernetes-cluster-running-on-vmware-photon-1]
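The exact columns depend on the Kubernetes version bundled with Photon OS at the time, but the output of the node status check should look roughly like:

```console
NAME            LABELS                                 STATUS
192.168.1.111   kubernetes.io/hostname=192.168.1.111   Ready
```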
At this point you have a basic K8s Cluster running on top of VMware Photon. If you are interested in exploring K8s further, there are some nice 101 and 201 official tutorials that can be found here. Another handy reference I used when creating my Ansible playbook was this article here, which provided a way to create loops using the lineinfile param.

Categories // Automation, Cloud Native, vSphere Tags // Ansible, cloud native apps, K8s, Kubernetes, Photon

Content Library Tech Preview at VMworld Europe 2015

11.04.2015 by William Lam // 4 Comments

For those of you who were fortunate enough to attend the Content Library Technical Deep Dive session (#5106) at VMworld Europe several weeks back and stayed until the very end, you were treated to an exclusive sneak peek demo. From what I heard, the demo was well received, especially as it covers one of the most popular feature requests that comes up when talking to customers. I know the Content Library Engineering team has been working hard on this feature, and I thought what better way to show it off than at VMworld!

I recently had a meeting with the Content Library Dev Manager (Pratima Rao), who also presented at VMworld Europe, and I just got the green light to share the demo with my readers. As a reminder, this is a Tech Preview, and I encourage you to check out the disclaimer below if you have any questions related to the delivery of this feature 🙂 So without further ado, here is the Tech Preview video that was demoed at VMworld.

Note: There is no audio in the video, but for those interested in what is happening, here is a quick summary. Today, you can upload and manage ISO images within the Content Library; however, when trying to mount an ISO from the Content Library, the workflow is not as straightforward as it could be. In a future update of vSphere, you will have a new option to directly mount an ISO from the Content Library. The demo starts off by showing some ISOs that have already been uploaded to an existing Content Library. We can then access those ISOs by going to the Virtual Machine settings and using the familiar mount ISO workflow. You will see that there is now a new option to mount an ISO from the Content Library, and you will be presented with a filtered list of all files with the .iso extension. Once you have selected the ISO, the VM will mount it just as it would from a vSphere Datastore or from the client system. Also note that you can filter for specific content by using the search box in case you have multiple Content Libraries. Lastly, there is some useful metadata in the column fields when looking through your ISOs, which can help with further identifying the content you are interested in.

Disclaimer: This is an early Tech Preview, and the overview of new technology represents no commitment from VMware to deliver these features in any generally available product. Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind. Technical feasibility and market demand will affect final delivery. Pricing and packaging for any new technology features discussed or represented have not been determined.

Content Library Tech Preview at VMworld Europe 2015 from lamw on Vimeo.

Categories // vSphere 6.0, vSphere Web Client Tags // content library, iso, Tech Preview



Copyright WilliamLam.com © 2025

 
