WilliamLam.com

Using Terraform to deploy a Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu 

11.10.2020 by William Lam // 4 Comments

A few months back I saw that HashiCorp had released a new Kubernetes (K8s) Provider for Terraform, currently in Alpha state, which enables users to deploy K8s resources using the popular Infrastructure-as-Code (IaC) tool. I thought it would be pretty cool if this worked with our vSphere with Tanzu solution, since the Tanzu Kubernetes Grid (TKG) Service uses ClusterAPI via a custom VM Operator to deploy TKG Guest Clusters, which is just a fancy way of saying it uses the K8s API to deploy more K8s 🙂

UPDATE (04/27/21) - vSphere 7.0 Update 2a has resolved the admission webhook issue and users can now deploy TKG Guest Clusters using the K8s Provider for Terraform.

Setting up the new K8s provider was pretty straightforward, and after spending a few minutes figuring out how to convert my existing TKG YAML to the HCL format that Terraform requires, I was able to run a terraform plan but quickly ran into the following error:

failed: admission webhook "default.mutating.tanzukubernetescluster.run.tanzu.vmware.com" does not support dry run

It looks like our tanzukubernetescluster admission webhook does not currently support dry-run operations, which are quite useful and common when using Terraform. I figured this was the end of that idea and ended up filing a feature enhancement internally to add this support in the future, as I can see it being quite useful for our customers.
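For reference, here is a minimal sketch of what that YAML-to-HCL conversion might look like using the alpha provider's kubernetes_manifest resource. The cluster name, namespace, VM class, and storage class values below are placeholders for illustration, not taken from the original post:

provider "kubernetes-alpha" {
  # Assumes kubectl is already logged in to the Supervisor Cluster and the
  # kubeconfig context points at the target vSphere Namespace
  config_path = "~/.kube/config"
}

resource "kubernetes_manifest" "tkg_cluster" {
  provider = kubernetes-alpha

  manifest = {
    apiVersion = "run.tanzu.vmware.com/v1alpha1"
    kind       = "TanzuKubernetesCluster"
    metadata = {
      name      = "tkg-cluster-01"
      namespace = "primp-industries"
    }
    spec = {
      distribution = {
        version = "v1.17"
      }
      topology = {
        controlPlane = {
          count        = 1
          class        = "best-effort-xsmall"
          storageClass = "vsan-default-storage-policy"
        }
        workers = {
          count        = 2
          class        = "best-effort-xsmall"
          storageClass = "vsan-default-storage-policy"
        }
      }
    }
  }
}

Running terraform plan against a manifest like this is what triggered the dry-run error above.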

After finishing up a recent pet project of getting a fully functional vSphere with Tanzu setup on a homelab budget using just 32GB of memory, I decided to take another look at this and discovered that the required tweak to get it working was super trivial, literally a single-line change.

Disclaimer: This is not officially supported by VMware, use at your own risk.

[Read more...]

Categories // Automation, Kubernetes, VMware Tanzu, vSphere 7.0 Tags // Kubernetes, Tanzu Kubernetes Grid, Terraform, vSphere Kubernetes Service

Complete vSphere with Tanzu homelab with just 32GB of memory!

11.09.2020 by William Lam // 43 Comments

Since the release of vSphere 7.0 Update 1, the demand and interest from the community in getting hands-on with vSphere with Tanzu and the new simplified networking solution has been non-stop. Most folks are either upgrading their existing homelab or looking to purchase new hardware that can better support the new features of the vSphere 7.0 release.

Although vSphere with Tanzu now has a flavor that does not require NSX-T, which helps reduce the barrier to getting started, it still has some networking requirements that may not be easily met in all lab environments. In fact, this was the primary reason I started looking into this, since my personal homelab network is very basic and I do not have, nor want, a switch that can support multiple VLANs, which is one of the requirements for vSphere with Tanzu.

While investigating a potential solution, which included way too MANY hours of debugging and troubleshooting, I also thought about the absolute minimum amount of resources I could get away with after putting everything together. To be clear, my homelab consists of a single Supermicro E200-8D with 128GB of memory; it has served me well over the years and I highly recommend it for anyone who can fit it into their budget. With that said, I set out with a pretty aggressive goal of using something that is quite common in VMware homelabs: an Intel NUC with just 32GB of memory.

UPDATE (07/02/24) - As of vSphere 8.0 Update 3, you no longer have the ability to configure a single Supervisor Control Plane VM using the minmasters and maxmasters parameters, which have also been removed from /etc/vmware/wcp/wcpsvc.yaml in favor of allowing users to control this configuration programmatically as part of enabling vSphere IaaS (formerly known as vSphere with Tanzu). The updated vSphere IaaS API that allows users to specify the number of Supervisor Control Plane VMs will not be available until the next major vSphere release. While this regressed capability is unfortunate, it was also not an officially supported configuration; users who wish to specify the number of Supervisor Control Plane VMs using the YAML method will need to use an earlier version of vSphere.
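For anyone on those earlier releases, the (unsupported) YAML method boiled down to editing wcpsvc.yaml on the vCenter Server Appliance before enabling Workload Management and then restarting the wcp service. A rough sketch, assuming the parameter names referenced above:

# Run on the vCenter Server Appliance (unsupported; pre-vSphere 8.0 Update 3 only)
# Force a single Supervisor Control Plane VM instead of the default three
sed -i -e 's/minmasters:.*/minmasters: 1/' \
       -e 's/maxmasters:.*/maxmasters: 1/' /etc/vmware/wcp/wcpsvc.yaml
service-control --stop wcp && service-control --start wcp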

[Read more...]

Categories // Home Lab, Kubernetes, VMware Tanzu, vSphere 7.0 Tags // HAProxy, Intel NUC, Kubernetes, vSphere Kubernetes Service

Configure network proxy using YTT with Tanzu Kubernetes Grid (TKG)

11.04.2020 by William Lam // 1 Comment

I was doing some work with Tanzu Kubernetes Grid (TKG) 1.2 using my TKG Demo Appliance Fling, and the environment I was working in did not have direct internet access, which is the case for most Production environments. I needed outbound connectivity from the TKG Worker Nodes so that they could pull down a set of containers as part of attaching to our Tanzu Mission Control (TMC) service.

Luckily, there was an HTTP proxy server that I could use for this connectivity; I just needed to update the TKG templates so the TKG Worker Nodes would have the proxy settings. In the past, applying customizations such as adding a network proxy to TKG meant manually editing the TKG Dev/Prod YAML files. As previously shared, Tanzu Kubernetes Grid (TKG) 1.2 now uses the YAML Templating Tool (YTT) for customizing TKG plans.

Although the TKG documentation provides a YTT template example, it does not cover the TKG Worker Nodes, which is what I needed; I also needed to add a command to the postKubeadmCommands section for the network proxy settings to take effect. The issue is that this section no longer exists in the base template as it did in previous versions of TKG, and it required an additional YTT annotation to get this working.

Here is the complete working ~/.tkg/providers/infrastructure-vsphere/ytt/proxy_nameserver.yaml template that adds the respective HTTP(S) proxy server and No Proxy settings.

#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Add HTTP_PROXY to containerd configuration file
    #@overlay/append
    - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/match missing_ok=True
    postKubeadmCommands:
    #@overlay/append
    - systemctl restart containerd

#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      preKubeadmCommands:
      #! Add HTTP_PROXY to containerd configuration file
      #@overlay/append
      - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/match missing_ok=True
      postKubeadmCommands:
      #@overlay/append
      - systemctl restart containerd
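After deploying a cluster from the customized plan, you can spot-check that the overlay landed by inspecting a node. These are hypothetical verification commands, assuming SSH access to the node:

# On a TKG node: confirm the systemd drop-in was written and that
# containerd picked up the proxy environment variables
cat /etc/systemd/system/containerd.service.d/http-proxy.conf
systemctl show containerd --property=Environment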

Categories // Kubernetes, VMware Tanzu Tags // http proxy, proxy, Tanzu Kubernetes Grid
