WilliamLam.com

Search Results for: vSphere with Kubernetes

Mapping between vSphere Container Volume to Persistent Volume Claim (PVC) in vSphere 7.0 Update 1 using PowerCLI

11.04.2020 by William Lam // 1 Comment

With the introduction of the vSphere Container Storage Interface (CSI) 2.1, it looks like the previous method outlined by Cormac Hogan no longer applies when looking to map between a vSphere Container Volume (CV), which is a vSphere construct, and the underlying Persistent Volume Claim (PVC), which is a Kubernetes construct.


William Arroyo, a K8s Solution Engineer, recently noticed this behavior change and asked if there was still a way to use PowerCLI to perform this lookup. Given that I had provided the original PowerCLI snippet on Cormac's blog, I was curious myself, and since I had just rebuilt my vSphere with Tanzu environment, I figured I would take a quick look to see where this new information might now be placed.

I did also want to mention that you can easily find this information using the vSphere UI by simply clicking on the "Details" box next to the PVC.


With the latest PowerCLI 12.1 release, we have a number of Cloud Native Storage (CNS) cmdlets that we can leverage. After a quick minute of poking around, I found that this new information is now available through the Get-CnsVolume cmdlet, using its ExtensionData property to retrieve the more detailed properties.
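
To illustrate the type of lookup described above, here is a minimal PowerCLI sketch (assuming PowerCLI 12.1 or later and an existing Connect-VIServer session; the datastore name and volume name below are hypothetical placeholders, and the exact property path under ExtensionData may vary slightly between CSI versions):

# Retrieve the Container Volume by name from its backing datastore (names are hypothetical)
$cv = Get-CnsVolume -Datastore (Get-Datastore -Name "vsanDatastore") -Name "pvc-66f6f987-5e53-4d55-9438-49efc30f9261"

# The Kubernetes metadata (entity type, PVC name, namespace) is exposed via the ExtensionData property
$cv.ExtensionData.Metadata.EntityMetadata | Select-Object EntityType, EntityName, Namespace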

[Read more...]

Categories // Automation, Cloud Native, PowerCLI Tags // CSI, Persistent Memory, vSphere Container Volume

Configure network proxy using YTT with Tanzu Kubernetes Grid (TKG)

11.04.2020 by William Lam // 1 Comment

I was doing some work with Tanzu Kubernetes Grid (TKG) 1.2 using my TKG Demo Appliance Fling, and the environment that I was working in did not have direct internet access, which is usually the case for most production environments. I needed to have outbound connectivity from the TKG Worker Nodes so that they could pull down a set of containers as part of attaching to our Tanzu Mission Control (TMC) service.

Luckily, there was an HTTP proxy server that I could use for this connectivity, and we just needed to update our TKG templates so the TKG worker nodes would have the proxy settings. In the past, applying customizations such as adding a network proxy to TKG meant manually editing the TKG Dev/Prod YAML files. As previously shared, Tanzu Kubernetes Grid (TKG) 1.2 now uses the YAML Templating Tool (YTT) for customizing TKG plans.

Although the TKG documentation provides an example YTT template, it does not actually cover the TKG Worker Nodes, which is what I needed. I also needed to add a command to the postKubeadmCommands section for the network proxy to be activated, and the issue is that this section no longer exists in the base template like it did in previous versions of TKG, so some additional YTT annotation was required to get this working.

Here is the complete working ~/.tkg/providers/infrastructure-vsphere/ytt/proxy_nameserver.yaml template that adds the respective HTTP(S) proxy server and No Proxy settings.

#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Add HTTP_PROXY to containerd configuration file
    #@overlay/append
    - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #! postKubeadmCommands no longer exists in the base template, so allow the overlay to create it
    #@overlay/match missing_ok=True
    postKubeadmCommands:
    #@overlay/append
    - systemctl restart containerd

#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      preKubeadmCommands:
      #! Add HTTP_PROXY to containerd configuration file
      #@overlay/append
      - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #! postKubeadmCommands no longer exists in the base template, so allow the overlay to create it
      #@overlay/match missing_ok=True
      postKubeadmCommands:
      #@overlay/append
      - systemctl restart containerd

Categories // Kubernetes, VMware Tanzu Tags // http proxy, proxy, Tanzu Kubernetes Grid

Custom Virtual Machine Class Types with vSphere with Tanzu

10.30.2020 by William Lam // 1 Comment

When you deploy a Tanzu Kubernetes Grid (TKG) Cluster using the integrated TKG Service in vSphere with Tanzu, you can specify a Virtual Machine Class Type, which determines the amount of CPU and Memory resources that are allocated to the Control Plane and Worker Node VMs for your TKG Cluster.

Here is a sample YAML specification that uses the best-effort-xsmall VM class type for both Control Plane and Worker Node, but you can certainly override and choose different classes based on your requirements.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: william-tkc-01
  namespace: primp-industries
spec:
  distribution:
    version: v1.17.8+vmware.1-tkg.1.5417466
  settings:
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
        - 193.0.2.0/16
      serviceDomain: managedcluster.local
      services:
        cidrBlocks:
        - 195.51.100.0/12
  topology:
    controlPlane:
      class: best-effort-xsmall
      count: 1
      storageClass: vsan-default-storage-policy
    workers:
      class: best-effort-xsmall
      count: 3
      storageClass: vsan-default-storage-policy

Today, there are a total of 16 VM Class types that you can select from; however, these are not customizable, which is something that has been coming up more recently. The vSphere with Tanzu team is aware of this request and is working on a solution that not only makes customizing CPU and Memory easier but also supports storage customization. As you can see from the table below, 16GB is the largest memory configuration supported today.


In the meantime, if you need a supported path for customizing your TKG Guest Clusters, one option is to use the TKG Standalone / MultiCloud CLI, which can be used with a vSphere with Tanzu Cluster. You will need to deploy an additional TKG Management Cluster (basically a few VMs), but once you have that, you can override the CPU, Memory and Storage of both the Control Plane and Worker Nodes using the following environment variables (see the example after this list):

  • VSPHERE_WORKER_NUM_CPUS
  • VSPHERE_WORKER_MEM_MIB
  • VSPHERE_WORKER_DISK_GIB
  • VSPHERE_CONTROL_PLANE_NUM_CPUS
  • VSPHERE_CONTROL_PLANE_MEM_MIB
  • VSPHERE_CONTROL_PLANE_DISK_GIB
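
For example, these settings can be placed in the TKG configuration file before creating a cluster. Here is a minimal sketch, assuming the default ~/.tkg/config.yaml location; the sizing values are hypothetical examples:

# Hypothetical example values - adjust CPU, memory (MiB) and disk (GiB) to your own requirements
VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
VSPHERE_CONTROL_PLANE_MEM_MIB: 4096
VSPHERE_CONTROL_PLANE_DISK_GIB: 40
VSPHERE_WORKER_NUM_CPUS: 4
VSPHERE_WORKER_MEM_MIB: 8192
VSPHERE_WORKER_DISK_GIB: 80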

If you are interested, the easiest way to get started is by using my TKG Demo Appliance Fling, which was just recently updated to the latest TKG 1.2 release and now supports K8s v1.19, something that is currently not available on vSphere with Tanzu.

Now, you might ask, would it be possible to create your own custom VM class types using vSphere with Tanzu? Well .... keep reading to find out 🙂

Disclaimer: This is not officially supported by VMware; use at your own risk. These custom changes can potentially impact upgrades or automatically be reverted upon the next update or upgrade. You have been warned.

[Read more...]

Categories // Kubernetes, VMware Tanzu Tags // vSphere Kubernetes Service
