Customizing Kubernetes cluster template (Dev/Prod) plans in Tanzu Kubernetes Grid 1.2

10.20.2020 by William Lam

With previous releases of Tanzu Kubernetes Grid (TKG), if you needed to apply OS customizations to the deployed Control Plane and Worker Node VMs, such as injecting commands to handle a network proxy or an insecure container registry, your only option was to hand-edit the default TKG Dev/Prod YAML templates. Not only was this error-prone, but because the templates can change from release to release, the edits were difficult to manage and could not be tested until you actually attempted a deployment.

One of the newest features in the TKG 1.2 release is official support for customizing the Kubernetes (K8s) cluster template plans using ytt (YAML Templating Tool), which allows users to provide custom data that is then patched/overlaid onto an existing YAML file. ytt itself is part of Carvel, a larger toolset for building, configuring and deploying applications to K8s. The Domain Specific Language (DSL) that ytt uses is not exactly intuitive, but since the official TKG documentation had an example to start with, I was able to mostly figure my way through, along with some tips from the #carvel Slack channel.
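
To get a feel for the DSL before touching the TKG templates, here is a minimal self-contained sketch (file names and contents are mine, not from TKG) showing how an overlay appends an item to a list in a base document:

#! base.yaml
---
kind: Example
spec:
  commands:
  - echo "first"

#! overlay.yaml
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"Example"})
---
spec:
  commands:
  #@overlay/append
  - echo "second"

Running ytt -f base.yaml -f overlay.yaml prints the merged result:

kind: Example
spec:
  commands:
  - echo "first"
  - echo "second"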

So what was I trying to do? I was working on updating my TKG Demo Appliance Fling to the latest 1.2 release, and part of the setup required adding an entry to the /etc/hosts file on all TKG VMs that are deployed. Instead of directly messing with the YAML templates, there is now a new "overlay" YAML file at ~/.tkg/providers/infrastructure-vsphere/ytt/vsphere-overlay.yaml which can be used to make such changes.

The default example demonstrates how to add a command to preKubeadmCommands, which only affects the Control Plane VMs since it targets the KubeadmControlPlane kind, as shown below:

#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Add registry entry to /etc/hosts on Control Plane nodes
    #@overlay/append
    - echo "192.168.2.2   registry.rainpole.io" >> /etc/hosts

For the change to apply to both the Control Plane and Worker Node VMs, the following would need to be used:

#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Add registry entry to /etc/hosts on Control Plane nodes
    #@overlay/append
    - echo "192.168.2.2   registry.rainpole.io" >> /etc/hosts

#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      preKubeadmCommands:
      #! Add registry entry to /etc/hosts on Worker nodes
      #@overlay/append
      - echo "192.168.2.2   registry.rainpole.io" >> /etc/hosts
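
With the overlay saved, no extra flags are needed; the tkg CLI applies it automatically the next time you deploy a cluster. A quick way to confirm the change landed (the cluster name is mine, and this assumes you supplied an SSH public key in your TKG configuration; capv is the default user on the TKG node VMs):

tkg create cluster tkg-demo --plan dev

# Once the VMs are up, spot-check any node (replace <node-ip> with a real address)
ssh capv@<node-ip> "grep rainpole /etc/hosts"
192.168.2.2   registry.rainpole.io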

The way you figure out the spec is by looking at the original Dev/Prod YAML templates to determine which sections you wish to replace, overlay or append to. It took a few tries before this clicked for me, mostly from messing up the indentation. As of writing this, there is no online YTT linter which you can run the overlay through for syntax validation, so I had to wait to see whether TKG complained and then verify the results to see if the changes did what I wanted.
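
That said, you can get faster feedback locally with the ytt CLI itself. Below is a sketch (the stub file is my own invention): apply the overlay to minimal stub documents that satisfy each overlay/match, so syntax and indentation mistakes surface before you attempt a deployment.

cat > /tmp/stub.yaml <<'EOF'
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands: []
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      preKubeadmCommands: []
EOF

# Renders both stub documents with the overlay applied, or errors out
# if the overlay has a syntax, match or indentation problem
ytt -f /tmp/stub.yaml -f ~/.tkg/providers/infrastructure-vsphere/ytt/vsphere-overlay.yaml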

Categories // Automation, Kubernetes, VMware Tanzu Tags // Kubernetes, Tanzu Kubernetes Grid, TKG, ytt
