How to SSH to Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu?

10.10.2020 by William Lam // 6 Comments

For troubleshooting your vSphere with Tanzu environment, you may need to SSH to the Control Plane of your Tanzu Kubernetes Grid (TKG) Cluster. This was something I had to do to verify some basic network connectivity. At a high level, we need to log in to our Supervisor Cluster and retrieve the SSH secret for our TKG Cluster, and since this question recently came up, below are the instructions.


UPDATE (10/10/20) - It looks like it is also possible to retrieve the TKG Cluster credentials without needing to SSH directly to the Supervisor Control Plane VM; see Option 1 for the alternate solution.

Option 1:

Step 1 - Login to the Supervisor Control Plane using the following command:

kubectl vsphere login --server=172.17.31.129 -u *protected email* --insecure-skip-tls-verify

Step 2 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to retrieve the plain text value. You will need two pieces of information to substitute into the command below:

  • The name of your vSphere Namespace which was created in your vSphere with Tanzu environment, in my example it is called primp-industries
  • The name of your TKG Cluster, in my example it is called william-tkc-01 and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath='{.data.ssh-passwordkey}' | base64 -d

Step 3 - Finally, you can now SSH to the TKG Cluster from any system that has network connectivity; this can be the Supervisor Cluster Control Plane VM or another system. The SSH username for the TKG Cluster is vmware-system-user, and the password is the value retrieved in the previous step, as shown in the example session below.
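
For example, a login session would look like the following, where the TKG Control Plane node IP address is a hypothetical placeholder (substitute the address of your own node):

ssh vmware-system-user@<tkg-control-plane-ip>

When prompted, enter the decoded password from Step 2.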

Option 2:

Step 1 - SSH to the VCSA and then run the following script to retrieve the Supervisor Cluster Control Plane VM credentials:

/usr/lib/vmware-wcp/decryptK8Pwd.py

Step 2 - SSH to the IP Address using the root username and the password provided by the previous command.
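
For example, using the IP Address returned by the script (shown here as a hypothetical placeholder):

ssh root@<supervisor-control-plane-ip>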

Step 3 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to retrieve the plain text value. You will need two pieces of information to substitute into the command below:

  • The name of your vSphere Namespace which was created in your vSphere with Tanzu environment, in my example it is called primp-industries
  • The name of your TKG Cluster, in my example it is called william-tkc-01 and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath='{.data.ssh-passwordkey}' | base64 -d

Step 4 - Finally, you can now SSH to the TKG Cluster from any system that has network connectivity; this can be the Supervisor Cluster Control Plane VM or another system. The SSH username for the TKG Cluster is vmware-system-user, and the password is the value retrieved in the previous step (see the example session in Option 1).

Categories // Kubernetes, VMware Tanzu, vSphere 7.0 Tags // Tanzu Kubernetes Grid, vmware-system-user, vSphere 7.0 Update 1, vSphere Kubernetes Service

Using ESXi-Arm Fling as a lightweight vSphere Automation environment for PowerCLI and Terraform

10.09.2020 by William Lam // 1 Comment

A set of use cases that I was really excited about when I first heard of ESXi-Arm a few years ago was around the topic of vSphere Automation and Development. I speak with many customers who are just starting out on their Automation journey, whether that is using PowerCLI, one of our many vSphere Automation SDKs, or the new vCenter REST API, through which all new features are being exposed these days.

One of the biggest challenges for newcomers is simply getting access to hardware that they can start playing around with, and although there is a plethora of vSphere Homelab choices, it does require some amount of investment, which is definitely worth it in the long run. However, if you are just getting started and want something a bit more lightweight, there are not too many options outside of an Intel NUC. I know many consultants actually carry around an Intel NUC that contains several VM images they use with their clients, including demos.

With the small form factor, low cost and reduced power consumption of the Raspberry Pi, I think this really opens up the door for some interesting creative solutions:

  • Basic vSphere footprint that can be used for work or learning purposes
  • Easy way to learn and explore the vSphere API with an actual host and enabling real VM deployments
  • Trying out Infrastructure-as-Code (IaC) tools such as Terraform and Ansible
  • Quick way to run through basic demos in front of customers
  • On-demand, self-contained lab environment for a small Hackathon at your local VMUG or even at VMworld

Something I was really interested in early on was being able to use ESXi-Arm with the Raspberry Pi to not only have a basic ESXi environment but also a PowerCLI environment up and running in an Arm VM. My first thought was to set this up using Photon OS, which not only has an Arm distribution but also supports PowerShell and PowerCLI. I was hoping that with some tinkering I could easily get PowerShell for Arm to run on Photon OS (which it did), but I then ran into issues installing PowerCLI itself.

I decided to give up on that for now and take a look at Ubuntu, which also supports PowerShell for Arm, but the Microsoft documentation only listed instructions for 32-bit and ESXi-Arm requires 64-bit. Taking a look at the PowerShell release files, I noticed there was a 64-bit package, and with a few minor adjustments to the commands, I got PowerCLI installed and connected back to my rPI, which was attached to my x86 vCenter Server!
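
As a rough sketch of those adjustments, the commands below show the general idea; the exact PowerShell version and download URL are assumptions, so grab the current linux-arm64 build from the PowerShell releases page on GitHub:

# Download and extract the 64-bit Arm build of PowerShell (version shown is an example)
wget https://github.com/PowerShell/PowerShell/releases/download/v7.0.3/powershell-7.0.3-linux-arm64.tar.gz
mkdir -p ~/powershell
tar -xzf powershell-7.0.3-linux-arm64.tar.gz -C ~/powershell

# Launch PowerShell and install PowerCLI from the PowerShell Gallery
~/powershell/pwsh -Command "Install-Module VMware.PowerCLI -Scope CurrentUser -Force"

From there, Connect-VIServer works just as it does on x86.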

[Read more...]

Categories // Automation, ESXi-Arm, PowerCLI, vSphere Tags // Arm, ESXi, PowerCLI, Terraform

vSAN Witness using Raspberry Pi 4 & ESXi-Arm Fling

10.08.2020 by William Lam // 36 Comments

As hinted in my earlier blog post, you can indeed set up a vSAN Witness using the ESXi-Arm Fling running on a Raspberry Pi (rPI) 4b (8GB) model. In fact, you can even set up a standard 2-Node or 3-Node vSAN Cluster using the exact same technique. For those familiar with vSAN and the vSAN Witness, we will need at least two storage devices, one for the caching tier and one for the capacity tier.

For the rPI, this means we are limited to USB storage devices, and luckily vSAN can actually claim and consume USB storage devices. For a basic homelab this is probably okay, but if you want something a bit more reliable, you can look into using a USB 3.0 to M.2 NVMe chassis. The ability to use an M.2 NVMe device should definitely provide more resiliency than a typical USB stick you might have lying around. From a capacity point of view, I had two 32GB USB keys that I ended up using, which should be plenty for a small setup, but you can always look at purchasing larger-capacity devices given how cheap USB storage is.
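
One caveat worth calling out: to the best of my recollection, ESXi will not let vSAN claim USB devices until an advanced setting is enabled on the host, along the lines of the command below (treat the exact option name as an assumption and verify against the detailed instructions):

esxcli system settings advanced set -o /VSAN/AllowUsbDisks -i 1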

Disclaimer: ESXi-Arm is a VMware Fling which means it is not a product and therefore it is not officially supported. Please do not use it in Production.

With the disclaimer out of the way, I think this is a fantastic use case for an inexpensive vSAN Witness, whether it is running at a ROBO/Edge location or simply supporting your homelab. The possibilities are certainly endless, and the ESXi-Arm team would love to hear whether this is something customers would even be interested in, so please share your feedback to help set priorities for both the ESXi-Arm and vSAN teams.

In my setup, I have two Intel NUC 9 Pro systems that make up my 2-Node vSAN Cluster and an rPI as my vSAN Witness. Detailed instructions can be found below, including a video for those wanting to see the vSAN Witness in action by powering on an actual workload 😀

[Read more...]

Categories // ESXi-Arm, VSAN, vSphere Tags // Arm, ESXi, Raspberry Pi, witness

