Workaround for ESXi-Arm in vSphere 7.0 Update 1 or newer

10.12.2020 by William Lam // 8 Comments

In vSphere 7.0 Update 1, a new capability called vSphere Cluster Services (vCLS) was introduced, which provides a new framework for decoupling and distributing control plane services for vSphere. To learn more, I highly recommend the detailed blog post by Niels. In addition, Duncan also has a great blog post covering common questions, answers, and considerations for vCLS, which is definitely worth a read as well.

vSphere DRS is one of the vSphere features that relies on this new vCLS service, which is implemented by the vCLS VMs that are deployed automatically whenever ESXi hosts are detected within a vSphere Cluster (regardless of whether vSphere DRS is enabled or not). For customers who may be using the ESXi-Arm Fling with a vSphere 7.0 Update 1 environment, you may have noticed continuous "Delete File" tasks within vCenter that seem to loop forever.

This occurs because the vCLS service first tests whether it can upload a file to the datastore and, once it can, it deletes the file. The issue is that the vCLS VMs are x86 and cannot be deployed to an ESXi-Arm Cluster, as the CPU architecture is not supported. There is a workaround to disable vCLS for the ESXi-Arm Cluster, which I will go into shortly. However, because vCLS cannot properly deploy, vSphere DRS capabilities will not be possible when using vSphere 7.0 Update 1 with ESXi-Arm hosts. If you wish to use vSphere DRS, it is recommended to stay on either vSphere 7.0c or vSphere 7.0d.

Note: vSAN does not rely on vCLS to function, but to use vSAN you must place your ESXi-Arm hosts into a vSphere Cluster, so applying this workaround would be desirable for that use case.
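For reference, the workaround (documented in VMware KB 80472, often referred to as "Retreat Mode") is to disable vCLS on a per-cluster basis by adding an advanced setting under vCenter Server > Configure > Advanced Settings. A minimal sketch, assuming your ESXi-Arm cluster's managed object ID is domain-c8 (visible in the vSphere UI URL when the cluster is selected):

config.vcls.clusters.domain-c8.enabled = False

Setting this to False causes the existing vCLS VMs for that cluster to be cleaned up; setting it back to True re-enables vCLS for the cluster.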


Categories // ESXi-Arm, vSphere 7.0 Tags // Arm, ESXi, vCenter Clustering Services, vCLS, vSphere 7.0 Update 1

How to SSH to Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu?

10.10.2020 by William Lam // 6 Comments

For troubleshooting your vSphere with Tanzu environment, you may have a need to SSH to the Control Plane of your Tanzu Kubernetes Grid (TKG) Cluster. This was something I had to do to verify some basic network connectivity. At a high level, we need to log in to our Supervisor Cluster and retrieve the SSH secret for our TKG Cluster. Since this question recently came up, below are the instructions.


UPDATE (10/10/20) - It looks like it is also possible to retrieve the TKG Cluster credentials without needing to SSH directly to the Supervisor Control Plane VM; see Option 1 for the alternate solution.

Option 1:

Step 1 - Login to the Supervisor Control Plane using the following command:

kubectl vsphere login --server=172.17.31.129 -u <sso-username> --insecure-skip-tls-verify
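After a successful login, the command merges the new contexts into your kubeconfig. If your current context is not already set to your vSphere Namespace, you can switch to it; for example, assuming the Namespace is named primp-industries as in this walkthrough:

kubectl config use-context primp-industries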

Step 2 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to retrieve the plain text value. You will need two pieces of information, which you then substitute into the command below:

  • The name of your vSphere Namespace that was created in your vSphere with Tanzu environment; in my example it is called primp-industries
  • The name of your TKG Cluster; in my example it is called william-tkc-01, and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath='{.data.ssh-passwordkey}' | base64 -d
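If you are unsure of the exact secret name, you can first list the secrets in the Namespace and look for the one ending in -ssh-password:

kubectl -n primp-industries get secrets | grep ssh-password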

Step 3 - Finally, you can now SSH to the TKG Cluster from a system which has network connectivity to it; this can be the Supervisor Cluster Control Plane VM or another system. The SSH username for the TKG Cluster is vmware-system-user, and the password is the value retrieved in the previous step.
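For example, assuming one of your TKG Cluster Control Plane nodes has the IP Address 10.10.0.2 (a hypothetical value; you can find the actual node IPs in the vSphere UI):

ssh vmware-system-user@10.10.0.2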

Option 2:

Step 1 - SSH to the VCSA and then run the following script to retrieve the Supervisor Cluster Control Plane VM credentials:

/usr/lib/vmware-wcp/decryptK8Pwd.py

Step 2 - SSH to the IP Address using the root username and the password provided by the previous command.
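For example, if the script reported an IP Address of 10.10.0.10 (a hypothetical value for illustration):

ssh root@10.10.0.10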

Step 3 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to retrieve the plain text value. You will need two pieces of information, which you then substitute into the command below:

  • The name of your vSphere Namespace that was created in your vSphere with Tanzu environment; in my example it is called primp-industries
  • The name of your TKG Cluster; in my example it is called william-tkc-01, and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath='{.data.ssh-passwordkey}' | base64 -d

Step 4 - Finally, you can now SSH to the TKG Cluster from a system which has network connectivity to it, just as in Option 1; the SSH username is vmware-system-user, and the password is the value retrieved in the previous step.

Categories // Kubernetes, VMware Tanzu, vSphere 7.0 Tags // Tanzu Kubernetes Grid, vmware-system-user, vSphere 7.0 Update 1, vSphere Kubernetes Service

ESXi 7.0 Update 1 now includes NIC driver for Intel NUC 10

09.21.2020 by William Lam // 16 Comments

With the upcoming release of vSphere 7.0 Update 1, and specifically ESXi 7.0 Update 1, support for the onboard NIC of the Intel NUC 10 (Frost Canyon) is now included, and the community ne1000 VIB driver is no longer needed. If you had previously installed the community driver, you can uninstall the VIB after successfully upgrading to ESXi 7.0 Update 1, as shown below.
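A minimal sketch of the cleanup, assuming you first confirm the exact name of the community VIB on your host (the name in the remove command is a placeholder):

esxcli software vib list | grep -i ne1000
esxcli software vib remove -n <vib-name>

A reboot may be required for the removal to fully take effect.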

Categories // ESXi, Home Lab, vSphere 7.0 Tags // ESXi 7.0 Update 1, Intel NUC, vSphere 7.0 Update 1
