WilliamLam.com


Quick Tip - vmware-iso builder for Packer now supported with ESXi 7.0

10.12.2020 by William Lam // 3 Comments

When vSphere 7.0 GA'ed earlier this year, one of the changes I noticed while going through the release notes was the removal of the VNC Server on ESXi. By default, this was disabled, but users could enable it on a per-VM basis and connect to a specific VM using VNC. Not many customers used this feature, so it made sense that it was removed.
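
For historical context, per-VM VNC access was enabled through VMX advanced settings along the following lines; this is a sketch from memory rather than from the release notes, and the port and password values are placeholders:

RemoteDisplay.vnc.enabled = "TRUE"
RemoteDisplay.vnc.port = "5901"
RemoteDisplay.vnc.password = "VMware1!"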

However, one implication is that if you use HashiCorp Packer and the vmware-iso builder to create automated images with ESXi, it will no longer work after upgrading to ESXi 7.0, as Packer relies on the VNC interface to send automated keystrokes to a VM. After learning about this change with vSphere 7.0, I filed a Packer GitHub enhancement request to see if someone would be open to re-implementing the keystrokes functionality by leveraging the vSphere HTML5 Console SDK, which would then allow for the use of VNC over websockets. The PR was closed about a month ago, and while recently working on the vCenter Event Broker Appliance (VEBA) project, I finally got a chance to verify the feature after upgrading my physical ESXi host to the latest 7.0 Update 1. I am happy to share that the vmware-iso builder now functions as before.

The following two lines should be added to your Packer template:

"vnc_over_websocket": true
"insecure_connection": true

For a complete, real-world example, you can also refer to the VEBA Packer template.

An alternative workaround is to use the vsphere-iso builder, which leverages the vSphere USB scan codes API to send keystrokes into a VM without having to rely on the VNC interface. One downside is that you do need to have a vCenter Server, as the vsphere-iso builder interacts with the vSphere API on vCenter Server rather than going directly to ESXi; this also impacts anyone using Free ESXi to build their Packer images.
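
To illustrate the difference, below is a minimal sketch of the connection portion of a vsphere-iso builder; the vCenter address, credentials, host and datastore are placeholders, and the builder requires additional settings (CPU, memory, ISO, boot command) that are omitted here:

{
  "builders": [
    {
      "type": "vsphere-iso",
      "vcenter_server": "vcsa.example.com",
      "username": "administrator@vsphere.local",
      "password": "VMware1!",
      "insecure_connection": true,
      "host": "esxi-01.example.com",
      "datastore": "datastore1"
    }
  ]
}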

The primary reason I had not switched over to the vsphere-iso builder was that I had quite a few Packer templates using the vmware-iso builder, and the syntax is not portable between the two. For this reason alone, I decided to hold off on upgrading my physical ESXi host to 7.0 until now.

Categories // Automation, vSphere 7.0 Tags // ESXi, Packer, vnc, websocket

Workaround for ESXi-Arm in vSphere 7.0 Update 1 or newer

10.12.2020 by William Lam // 8 Comments

In vSphere 7.0 Update 1, a new capability was introduced called vSphere Cluster Services (vCLS), which provides a new framework for decoupling and managing distributed control plane services for vSphere. To learn more, I highly recommend the detailed blog post by Niels linked above. In addition, Duncan also has a great blog post about common questions, answers and considerations for vCLS, which is definitely worth a read as well.

vSphere DRS is one of the vSphere features that relies on this new vCLS service, and this is made possible by the vCLS VMs which are deployed automatically when vCenter detects ESXi hosts within a vSphere Cluster (regardless of whether vSphere DRS is enabled or not). For customers who may be using the ESXi-Arm Fling with a vSphere 7.0 Update 1 environment, you may have noticed continuous "Delete File" tasks within vCenter that seem to loop forever.

This occurs because the vCLS service first tests whether it can upload a file to the datastore and, once it can, it deletes the file. The issue is that the vCLS VMs are x86 and cannot be deployed to an ESXi-Arm Cluster, as the CPU architecture is not supported. There is a workaround to disable vCLS for the ESXi-Arm Cluster, which I will go into shortly. However, because vCLS cannot deploy properly, vSphere DRS capabilities will not be possible when using vSphere 7.0 Update 1 with ESXi-Arm hosts. If you wish to use vSphere DRS, it is recommended to use either vSphere 7.0c or vSphere 7.0d instead.

Note: vSAN does not rely on vCLS to function, but to use it you must place your ESXi-Arm hosts into a vSphere Cluster, so applying this workaround would be desirable for that use case as well.

[Read more...]

Categories // ESXi-Arm, vSphere 7.0 Tags // Arm, ESXi, vCenter Clustering Services, vCLS, vSphere 7.0 Update 1

How to SSH to Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu?

10.10.2020 by William Lam // 6 Comments

For troubleshooting your vSphere with Tanzu environment, you may have a need to SSH to the Control Plane of your Tanzu Kubernetes Grid (TKG) Cluster. This was something I had to do to verify some basic network connectivity. At a high level, we need to log in to our Supervisor Cluster and retrieve the SSH secret for our TKG Cluster. Since this question recently came up, below are the instructions.


UPDATE (10/10/20) - It looks like it is also possible to retrieve the TKG Cluster credentials without needing to SSH directly to the Supervisor Control Plane VM; see Option 1 for the alternate solution.

Option 1:

Step 1 - Log in to the Supervisor Control Plane using the following command:

kubectl vsphere login --server=172.17.31.129 -u *protected email* --insecure-skip-tls-verify

Step 2 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to retrieve the plaintext value. You will need two pieces of information to substitute into the command below:

  • The name of your vSphere Namespace which was created in your vSphere with Tanzu environment, in my example it is called primp-industries
  • The name of your TKG Cluster, in my example it is called william-tkc-01 and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath={.data.ssh-passwordkey} | base64 -d

Step 3 - Finally, you can now SSH to the TKG Cluster from a system that has network connectivity; this can be the Supervisor Cluster Control Plane VM or another system. The SSH username for the TKG Cluster is vmware-system-user, and the password is the value retrieved in the previous step.
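
As a minimal sketch, assuming a hypothetical TKG Cluster Control Plane node IP of 192.168.10.50 (substitute the address of one of your own TKG Cluster nodes):

# SSH to a TKG Cluster node using the decoded password from the previous step
ssh vmware-system-user@192.168.10.50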

Option 2:

Step 1 - SSH to the VCSA and then run the following script to retrieve the Supervisor Cluster Control Plane VM credentials:

/usr/lib/vmware-wcp/decryptK8Pwd.py

Step 2 - SSH to the IP Address using the root username and the password provided by the previous command.
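
For example, assuming the previous command returned a hypothetical Supervisor Control Plane VM IP of 192.168.10.2:

# SSH to the Supervisor Control Plane VM with the password from decryptK8Pwd.py
ssh root@192.168.10.2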

Step 3 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to retrieve the plaintext value. You will need two pieces of information to substitute into the command below:

  • The name of your vSphere Namespace which was created in your vSphere with Tanzu environment, in my example it is called primp-industries
  • The name of your TKG Cluster, in my example it is called william-tkc-01 and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath={.data.ssh-passwordkey} | base64 -d

Step 4 - Finally, you can now SSH to the TKG Cluster from a system that has network connectivity; this can be the Supervisor Cluster Control Plane VM or another system. The SSH username for the TKG Cluster is vmware-system-user, and the password is the value retrieved in the previous step.

Categories // Kubernetes, VMware Tanzu, vSphere 7.0 Tags // Tanzu Kubernetes Grid, vmware-system-user, vSphere 7.0 Update 1, vSphere Kubernetes Service


