Kubernetes on ESXi-Arm using k3s

10.16.2020 by William Lam // 11 Comments

The tiny form factor of a Raspberry Pi (rPI) is a fantastic hardware platform to start playing with the ESXi-Arm Fling. You can already do a bunch of fun VMware things, from running a lightweight vSAN Witness Node, to setting up a basic automation environment for PowerCLI, Terraform and Packer, to running rPI OS as a VM, which enables some neat use cases like consolidating the physical rPI assets that many home labbers are running for things like RetroPie and Pi-hole.

In addition to VMware solutions, it is also a great platform to learn and tinker with new technologies like Kubernetes (K8s), which I am sure many of you have been hearing about 🙂 Although our vSphere with Tanzu and Tanzu Kubernetes Grid (TKG) do not currently work with the ESXi-Arm Fling, I have been meaning to try out a super lightweight K8s distribution designed for IoT/Edge called k3s (pronounced k-3-s), which also recently joined the Cloud Native Computing Foundation (CNCF) at the Sandbox level.

k3s is supported on rPI, and you would normally need one physical rPI device per node; for example, a basic 3-node cluster would require three physical rPI devices. With ESXi-Arm, you can now create these nodes as VMs using just a single rPI. This opens the door for all sorts of exploration: you can create an HA cluster or try out more advanced features that would be harder to test if you needed several physical devices. If you mess up, you can simply re-deploy the VM without much pain, or just clone it.

In my setup, I am using 3 x Photon OS VMs: one for the k3s primary (server) node and two for k3s worker nodes. You can certainly install k3s on any other Arm-based OS, including rPI OS (which can now run as a VM, as mentioned earlier). A minimal sketch of the bootstrap flow is shown below.
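To give an idea of what the installation looks like, here is a minimal sketch using the standard k3s install script; the server IP address is a placeholder for this example. On the primary (server) node, run:

curl -sfL https://get.k3s.io | sh -

The server generates a join token for the workers, which you can read with:

sudo cat /var/lib/rancher/k3s/server/node-token

Then, on each of the two worker nodes, point the installer at the server's IP and token:

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.30.10:6443 K3S_TOKEN=<node-token> sh -

Once both workers have joined, running kubectl get nodes on the server should list all three nodes.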



Categories // ESXi-Arm, Kubernetes Tags // Arm, ESXi, k3s, Kubernetes

Automated vSphere with Tanzu Lab Deployment Script

10.13.2020 by William Lam // 16 Comments

After sharing a sneak peek of my updated vSphere with Tanzu Automated Lab Deployment script on Twitter, I have been receiving non-stop requests on when the script will be available. It took a bit longer to finish off the documentation; creating the script was actually the easy part 😛

In any case, I am happy to share that the automated script for deploying the new vSphere with Tanzu "Basic", which is included as part of vSphere 7.0 Update 1, is now available! You can find full details at the following Github repo: https://github.com/lamw/vsphere-with-tanzu-basic-automated-lab-deployment
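For those who want to jump straight in, the general flow follows the same pattern as my other lab deployment scripts (a sketch only; the exact script filename and configuration variables are documented in the Github repo):

git clone https://github.com/lamw/vsphere-with-tanzu-basic-automated-lab-deployment.git

Edit the configuration variables at the top of the PowerShell script (vCenter Server target, networking, storage, etc.) and then run it from a PowerCLI-enabled PowerShell session:

./vsphere-with-tanzu-basic-automated-lab-deployment.ps1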

In addition to the deployment instructions on the Github repo, I have also included a sample walkthrough that covers both deploying the vSphere with Tanzu environment and enabling Workload Management on the vSphere Cluster, the latter of which is not part of the automated deployment script.

I will also be updating my existing Workload Management PowerCLI Module to incorporate the new requirements for automating the enablement of Workload Management on a vSphere with Tanzu Basic Cluster. Together with this script, you will have the ability to deploy vSphere with Tanzu end-to-end in under an hour!

More details will be shared in a later blog post, and I hope folks enjoy the script; it was a ton of work!

Categories // Automation, Kubernetes, VMware Tanzu, vSphere 7.0 Tags // vSphere 7.0 Update 1, vSphere Kubernetes Service

How to SSH to Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu?

10.10.2020 by William Lam // 6 Comments

For troubleshooting your vSphere with Tanzu environment, you may have a need to SSH to the Control Plane of your Tanzu Kubernetes Grid (TKG) Cluster. This was something I had to do to verify some basic network connectivity. At a high level, we need to log in to our Supervisor Cluster and retrieve the SSH secret for our TKG Cluster. Since this question recently came up, below are the instructions.


UPDATE (10/10/20) - It looks like it is also possible to retrieve the TKG Cluster credentials without needing to SSH directly to the Supervisor Control Plane VM; see Option 1 for the alternate solution.

Option 1:

Step 1 - Login to the Supervisor Control Plane using the following command, substituting your vSphere SSO username:

kubectl vsphere login --server=172.17.31.129 -u <sso-username> --insecure-skip-tls-verify

Step 2 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to get the plain text value. You will need two pieces of information, which you then substitute into the command below:

  • The name of your vSphere Namespace that was created in your vSphere with Tanzu environment; in my example it is called primp-industries
  • The name of your TKG Cluster; in my example it is called william-tkc-01, and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath={.data.ssh-passwordkey} | base64 -d

Step 3 - Finally, you can now SSH to the TKG Cluster from a system which has network connectivity; this can be from the Supervisor Cluster Control Plane VM or another system. The SSH username for the TKG Cluster is vmware-system-user, and the password is the value retrieved in the previous step.
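For example, if the TKG Cluster Control Plane VM was assigned 172.17.31.136 (a placeholder address for this sketch), the login would simply be:

ssh vmware-system-user@172.17.31.136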

Option 2:

Step 1 - SSH to the VCSA and then run the following script to retrieve the Supervisor Cluster Control Plane VM credentials:

/usr/lib/vmware-wcp/decryptK8Pwd.py

Step 2 - SSH to the IP Address using the root username and the password provided by the previous command
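For example, if the script reported 172.17.31.129 as the Supervisor Control Plane VM address (reusing the Supervisor address from Option 1 purely as an illustration), this would be:

ssh root@172.17.31.129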

Step 3 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to get the plain text value. You will need two pieces of information, which you then substitute into the command below:

  • The name of your vSphere Namespace that was created in your vSphere with Tanzu environment; in my example it is called primp-industries
  • The name of your TKG Cluster; in my example it is called william-tkc-01, and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath={.data.ssh-passwordkey} | base64 -d

Step 4 - Finally, you can now SSH to the TKG Cluster from a system which has network connectivity; this can be from the Supervisor Cluster Control Plane VM or another system. The SSH username for the TKG Cluster is vmware-system-user, and the password is the value retrieved in the previous step.

Categories // Kubernetes, VMware Tanzu, vSphere 7.0 Tags // Tanzu Kubernetes Grid, vmware-system-user, vSphere 7.0 Update 1, vSphere Kubernetes Service

