WilliamLam.com

How to SSH to Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu?

10.10.2020 by William Lam // 6 Comments

For troubleshooting your vSphere with Tanzu environment, you may have a need to SSH to the Control Plane of your Tanzu Kubernetes Grid (TKG) Cluster. This was something I had to do to verify some basic network connectivity. At a high level, we need to log in to our Supervisor Cluster and retrieve the SSH secret for our TKG Cluster, and since this question recently came up, below are the instructions.


UPDATE (10/10/20) - It looks like it is also possible to retrieve the TKG Cluster credentials without needing to SSH directly to the Supervisor Control Plane VM, see Option 1 for the alternate solution.

Option 1:

Step 1 - Login to the Supervisor Control Plane using the following command:

kubectl vsphere login --server=172.17.31.129 -u *protected email* --insecure-skip-tls-verify

Step 2 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to retrieve the plain text value. You will need two pieces of information to substitute into the command below:

  • The name of your vSphere Namespace which was created in your vSphere with Tanzu environment; in my example it is called primp-industries
  • The name of your TKG Cluster; in my example it is called william-tkc-01, and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath={.data.ssh-passwordkey} | base64 -d

Step 3 - Finally, you can now SSH to the TKG Cluster from a system which has network connectivity; this can be the Supervisor Cluster Control Plane VM or another system. The SSH username for the TKG Cluster is vmware-system-user and the password is the value that was retrieved in the previous step.
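
For example, a minimal sketch (the IP address below is just a placeholder for one of your TKG Cluster Control Plane VM IPs, which you can find in vCenter or via kubectl in your vSphere Namespace):

ssh vmware-system-user@10.244.0.10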

Option 2:

Step 1 - SSH to the VCSA and then run the following script to retrieve the Supervisor Cluster Control Plane VM credentials:

/usr/lib/vmware-wcp/decryptK8Pwd.py

Step 2 - SSH to the IP Address using the root username and the password provided by the previous command.
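
For example, a quick sketch (the IP address below is just a placeholder for the value returned by decryptK8Pwd.py):

ssh root@172.17.31.130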

Step 3 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to retrieve the plain text value. You will need two pieces of information to substitute into the command below:

  • The name of your vSphere Namespace which was created in your vSphere with Tanzu environment; in my example it is called primp-industries
  • The name of your TKG Cluster; in my example it is called william-tkc-01, and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath={.data.ssh-passwordkey} | base64 -d

Step 4 - Finally, you can now SSH to the TKG Cluster from a system which has network connectivity; this can be the Supervisor Cluster Control Plane VM or another system. The SSH username for the TKG Cluster is vmware-system-user and the password is the value that was retrieved in the previous step.

Categories // Kubernetes, VMware Tanzu, vSphere 7.0 Tags // Tanzu Kubernetes Grid, vmware-system-user, vSphere 7.0 Update 1, vSphere Kubernetes Service

Tanzu Kubernetes Grid (TKG) Demo Appliance 1.1.3

08.10.2020 by William Lam // 1 Comment

It has been a while since I have updated my Tanzu Kubernetes Grid (TKG) Demo Appliance Fling, which is a virtual appliance that enables anyone to go from zero to Kubernetes in less than 30 minutes with just an SSH client and a web browser. For VMware Cloud on AWS customers interested in running TKG, this is a great way to quickly get started on a proof of concept, demo, or development and testing. One great benefit is that everything required for TKG is self-contained within the appliance, including an embedded Harbor registry and the respective TKG container images, which is great for air-gapped or non-internet accessible environments.

Here is a summary of what is new:

Support for latest TKG 1.1.3

There have been several smaller releases of TKG since the 1.0.0 release, but due to their short lifecycle, I decided to hold off. Behind the scenes, I have actually been working closely with the TKG team on the latest TKG 1.1.3 release, which was just released last week. One really cool feature that was introduced in TKG 1.1.2 is the ability to upgrade an existing TKG Workload Cluster to a newer version of Kubernetes.

With TKG 1.1.3, support for Kubernetes v1.18.6 and v1.17.9 is now possible, and the latest version of the demo appliance will also support this workflow. In fact, I have also updated my TKG Workshop Guide to include all of the new updates, including the upgrade workflow. To reduce the maintenance burden on myself, the TKG Demo Appliance 1.0.0 will be removed in the near future; for now it has been deprecated, but all existing content is still available. I highly recommend checking out the latest version as you will get all the latest features of TKG.
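
If you want to try the upgrade workflow yourself, a rough sketch with the TKG CLI would look something like the following, where my-workload-cluster is a hypothetical cluster name; check tkg upgrade cluster --help in your environment for the exact options supported by your TKG version:

tkg upgrade cluster my-workload-cluster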

[Read more...]

Categories // Automation, Kubernetes, VMware Cloud on AWS, VMware Tanzu Tags // Kubernetes, Tanzu Kubernetes Grid, TKG, VMware Cloud on AWS, VMware Tanzu

How to configure network proxy with Tanzu Kubernetes Grid (TKG)?

05.18.2020 by William Lam // 3 Comments

Network Proxies are commonly used by customers to provide connectivity from internal servers/services to external networks like the Internet in a controlled and secured manner. While working on a recent network proxy enhancement for our VMware Event Broker Appliance (VEBA) Fling, I had set up a Squid server, which is a popular network proxy solution.
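
For reference, a very minimal squid.conf sketch along the lines of what such a setup might use (the listening port and internal source network below are assumptions; adjust them for your environment):

http_port 3128
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all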

I had noticed a couple of folks were asking about network proxy configuration for Standalone Tanzu Kubernetes Grid (TKG) and figured this might be interesting to explore, especially for my recently released TKG Demo Appliance Fling which enables folks to quickly go from zero to Kubernetes in just 30 minutes! I figured this would be another good opportunity to learn a bit more about TKG as well as Kubernetes (K8s), and I jokingly said to myself, how hard could this be!? 😉 Apparently it was not trivial and took a bit of trial and error to figure out the correct combination; below is the procedure that can be followed for both a standard deployment of TKG as well as the TKG Demo Appliance Fling.

Proxy Settings configuration for the TKG CLI

The TKG CLI uses KinD (Kubernetes in Docker) under the hood to set up the initial K8s bootstrap cluster used to deploy the TKG Management Cluster. If you have not already downloaded the KinD node image (registry.tkg.vmware.run/kind/node:v1.17.3_vmware.2), or if you need to go through a network proxy to do so, then the following instructions can be followed to make your Docker Client aware of a network proxy.

Here is an example of the error if the Docker Client cannot download the image:

# docker pull registry.tkg.vmware.run/kind/node:v1.17.3_vmware.2
Error response from daemon: Get https://registry.tkg.vmware.run/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
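
One common way to point the Docker Client/Daemon at a proxy on a systemd-based OS such as Photon OS is a drop-in unit for the Docker service; the following is a rough sketch, where the proxy address matches my Squid server example and the NO_PROXY list is an assumption you should adjust for your environment:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf << EOF
[Service]
Environment="HTTP_PROXY=http://192.168.1.3:3128"
Environment="HTTPS_PROXY=http://192.168.1.3:3128"
Environment="NO_PROXY=localhost,127.0.0.1,registry.rainpole.io"
EOF
systemctl daemon-reload
systemctl restart docker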

If you are not using a private container registry with TKG, then you also need to ensure that the KinD Cluster can connect to your network proxy when it pulls down the required containers from the internet. Luckily, KinD can simply detect the network proxy settings of your operating system. You can either set the proxy using the traditional environment variables (http_proxy, https_proxy and no_proxy) during your use of the TKG CLI, or you can simply set it globally so you do not forget.

In my setup, the TKG CLI is running in a Photon OS VM and the global proxy settings are configured in /etc/sysconfig/proxy. Proxy settings will vary across operating systems and you should check the vendor documentation for specific instructions. The following command sets both the HTTP and HTTPS proxy variables to use my proxy server; you will also want to make sure you whitelist all networks and addresses which should bypass the proxy.

cat > /etc/sysconfig/proxy << EOF
PROXY_ENABLED="yes"
HTTP_PROXY="http://192.168.1.3:3128"
HTTPS_PROXY="http://192.168.1.3:3128"
NO_PROXY="localhost,192.168.1.0/24,192.168.2.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"
EOF

Note: If you are using the TKG Demo Appliance, you only need to configure the Photon OS global proxy settings. In my example, I have whitelisted my local 192.168.* addresses; registry.rainpole.io, which is the embedded Harbor registry; 10.2.224.4, which is the internal IP Address of the VMC vCenter Server; .svc addresses, which cover all the internal K8s services; 100.64.0.0/13, which is the CIDR range used by TKG for the Service networks; and 100.96.0.0/11, which is the CIDR range used for the TKG Cluster networks.
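
If you would rather use the per-session environment variable approach mentioned earlier instead of the global configuration, a quick sketch before running the TKG CLI might look like the following (the proxy address and whitelist simply mirror the global example above):

export http_proxy=http://192.168.1.3:3128
export https_proxy=http://192.168.1.3:3128
export no_proxy=localhost,192.168.1.0/24,192.168.2.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11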

[Read more...]

Categories // Automation, Kubernetes, VMware Tanzu Tags // http proxy, proxy, Tanzu Kubernetes Grid

