Mapping between vSphere Container Volume to Persistent Volume Claim (PVC) in vSphere 7.0 Update 1 using PowerCLI

11.04.2020 by William Lam // 1 Comment

With the introduction of the vSphere Container Storage Interface (CSI) 2.1, it looks like the previous method outlined by Cormac Hogan no longer applies when looking to map between a vSphere Container Volume (CV), which is a vSphere construct, and the underlying Persistent Volume Claim (PVC), which is a Kubernetes construct.


William Arroyo, a K8s Solution Engineer, recently noticed this behavior change and asked whether there was a way to use PowerCLI to still perform this lookup. Given that I had provided the original PowerCLI snippet on Cormac's blog, I was curious myself, and since I had just rebuilt my vSphere with Tanzu environment, I figured I would take a quick look to see where this information is now located.

I also want to mention that you can easily find this information in the vSphere UI by simply clicking on the "Details" box next to the PVC.


With the latest PowerCLI 12.1 release, we have a number of new Cloud Native Storage (CNS) cmdlets that we can leverage, and after a quick minute of poking around, I found that this information is now exposed by the Get-CnsVolume cmdlet, with the ExtensionData property providing the more detailed properties.
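
Here is a minimal sketch of what that lookup can look like, assuming you are already connected to your vCenter Server via Connect-VIServer (the PVC name below is just a placeholder):

# Retrieve the vSphere Container Volume using the PVC name shown in the vSphere UI
$cv = Get-CnsVolume -Name "pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# The ExtensionData property exposes the detailed CNS metadata, including the Kubernetes PVC details
$cv.ExtensionData | Format-List *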

[Read more...]

Categories // Automation, Cloud Native, PowerCLI Tags // CSI, Persistent Memory, vSphere Container Volume

Configure network proxy using YTT with Tanzu Kubernetes Grid (TKG)

11.04.2020 by William Lam // 1 Comment

I was doing some work with Tanzu Kubernetes Grid (TKG) 1.2 using my TKG Demo Appliance Fling, and the environment that I was working in did not have direct internet access, which is usually the case for most Production environments. I needed outbound connectivity from the TKG Worker Nodes so that they could pull down a set of containers as part of attaching to our Tanzu Mission Control (TMC) service.

Luckily, there was an HTTP proxy server that I could use for this connectivity, and I just needed to update the TKG templates so the TKG Worker Nodes would have the proxy settings. In the past, applying customizations such as adding a network proxy to TKG meant manually editing the TKG Dev/Prod YAML files. As previously shared, Tanzu Kubernetes Grid (TKG) 1.2 now uses the YAML Templating Tool (YTT) for customizing TKG plans.

Although the TKG documentation provides an example YTT template, it does not actually cover the TKG Worker Nodes, which is what I needed, and I also needed to add a command to the postKubeadmCommands section for the network proxy to be activated. The issue is that this section no longer exists in the base template like it did in previous versions of TKG, so some additional YTT annotations are required to get this working.

Here is the complete working ~/.tkg/providers/infrastructure-vsphere/ytt/proxy_nameserver.yaml template that adds the respective HTTP(S) proxy server and No Proxy settings.

#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Add HTTP_PROXY to containerd configuration file
    #@overlay/append
    - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/match missing_ok=True
    postKubeadmCommands:
    #@overlay/append
    - systemctl restart containerd

#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      preKubeadmCommands:
      #! Add HTTP_PROXY to containerd configuration file
      #@overlay/append
      - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/match missing_ok=True
      postKubeadmCommands:
      #@overlay/append
      - systemctl restart containerd
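
For reference, once the preKubeadmCommands above have run, the /etc/systemd/system/containerd.service.d/http-proxy.conf drop-in on each Control Plane and Worker Node should contain the following (using the example proxy and No Proxy values from the template):

[Service]
Environment="HTTP_PROXY=http://1.2.3.4:3128/"
Environment="HTTPS_PROXY=http://1.2.3.4:3128"
Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"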

Categories // Kubernetes, VMware Tanzu Tags // http proxy, proxy, Tanzu Kubernetes Grid

New SDDC Linking capability for VMware Cloud on AWS

11.03.2020 by William Lam // 1 Comment

Back in September, the VMware Transit Connect (vTGW) feature for VMware Cloud on AWS (VMConAWS) was released, providing users a simplified way of connecting AWS VPCs, AWS Direct Connect Gateways and customer on-premises datacenters from a networking connectivity standpoint. As part of this feature, a new logical construct called an SDDC Group was introduced, which allows customers to easily apply common networking connectivity policies across a number of SDDCs rather than having to manage them separately, which can quickly get complex from an operational point of view.

The SDDC Group not only simplifies the initial setup, but it also simplifies Day 2 Operations when new SDDCs are provisioned and added to the SDDC Group. The networking policies that have been configured at the SDDC Group level will automatically apply to all new SDDCs, which makes this a really slick solution. As SDDCs are removed from the SDDC Group, the related configurations are automatically un-provisioned and detached from the respective networking resources.


Simplified network connectivity using an SDDC Group was just the beginning! Today, the VMware Cloud team has released a new feature built on top of the SDDC Group construct called vCenter Linking for SDDC Groups. Just as the name implies, customers can now easily "Link" multiple vCenter Servers within an SDDC Group, enabling a single view of all vCenter Servers using any one of the vSphere UIs within the SDDC. For those familiar with Enhanced Linked Mode (ELM), this is basically that, but for SDDCs running in the Cloud!

The workflow could not have been simpler, and last week I got to try it out and was quite impressed! Under the hood, this leverages the vCenter Convergence capability, and when enabling vCenter Linking, the service automatically handles all of those details, including the necessary NSX-T firewall rules that need to be configured across ALL SDDCs to allow for secured connectivity. Just imagine having to do this each time a new SDDC is added or removed, manually going to every SDDC to update or create new firewall rules! This is all hidden away from the user; by simply associating SDDCs in the SDDC Group, the configurations are applied automatically for you.

Just setup an upcoming feature which builds on top of VMware Transit Connect Gateway (vTGW) allowing #VMWonAWS customers to now “Link” multiple SDDCs together. Just 1-Click, you now can access all Cloud vCenter Servers using any one vSphere UI. ELM for Cloud!#VMwareCloud pic.twitter.com/dImg6Yloe3

— William Lam (@lamw) October 30, 2020

One question I did have while trying out this new feature: how does it work with existing features such as Hybrid Linked Mode (HLM) and ELM?

[Read more...]

Categories // VMware Cloud, VMware Cloud on AWS Tags // ELM, Enhanced Linked Mode, HLM, Hybrid Linked Mode, SDDC Group, VMware Cloud, VMware Cloud on AWS
