WilliamLam.com


Hybrid (x86 and Arm) Kubernetes clusters using Tanzu Community Edition (TCE) and ESXi-Arm

11.19.2021 by William Lam // Leave a Comment

With the recent introduction of Tanzu Community Edition (TCE), users can now easily get first-hand experience across VMware's Tanzu portfolio, including VMware's enterprise Kubernetes (K8s) runtime, Tanzu Kubernetes Grid (TKG), completely for free. One popular request that frequently comes up from our community is the ability to use TCE with the ESXi-Arm Fling.

Currently, TCE is only supported on x86 hardware platforms, which includes ESXi-x86. There is certainly a desire to use TCE with Arm-based hardware running on top of ESXi-Arm, especially on inexpensive Raspberry Pis for learning and exploration purposes.

I recently learned about a really cool project being developed within VMware's Office of the CTO (OCTO): a new Cluster API (CAPI) provider that lets you Bring Your Own Host (BYOH) that is already running Linux. What really intrigued me about the project was not that they could create a TCE Workload Cluster comprising physical hosts, but that they were actually running it on Arm hardware! 🤩

My immediate reaction was to see if this would also work with plain Linux VMs. With some trial and error, and help from Jixing Jia, one of the project maintainers, I was able to confirm that it does indeed work using Ubuntu VMs running on ESXi-Arm. What was even more impressive was the realization that this works not only for both physical and virtual Arm Linux systems, but that users can also create a hybrid TCE Workload Cluster consisting of BOTH x86 and Arm nodes! 🤯

I can only imagine the possibilities this could enable in the future, where applications could potentially span CPU architectures as well as virtual and physical worker nodes, each exposing different capabilities (a GPU, for example) that can be delivered based on the requirements of the application. It will be interesting to see the types of use cases the BYOH Cluster API Provider will help enable, especially pertaining to Edge computing.

If you are interested in playing with the BYOH Cluster API Provider, check out the detailed instructions below on how to get started. Since this is still in Alpha development, there are a few manual steps, and there is currently no native TCE integration. If this is something that interests you, feel free to leave feedback, or better yet, leave comments directly on the GitHub repo asking for the feature enhancements you would like to see, such as native support for TCE 😀
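At a high level, the BYOH workflow pairs a management cluster (with the BYOH provider installed) with a host agent that runs on each Linux machine you want to enroll. The sketch below is only a rough outline of that flow, not the project's official instructions: the binary name, provider name, and flags shown are assumptions based on typical Cluster API tooling and may differ in the Alpha release, so follow the GitHub repo for the authoritative steps.

```shell
# Hypothetical BYOH bootstrap sketch (Alpha; command names and flags are assumptions)

# 1. Create a management cluster and install the BYOH infrastructure provider
kind create cluster --name byoh-mgmt
clusterctl init --infrastructure byoh

# 2. On each Linux host (x86 or Arm, physical or VM) that should join,
#    run the host agent so it registers itself as a ByoHost resource
./byoh-hostagent --kubeconfig bootstrap-kubeconfig.conf

# 3. Verify the hosts have registered, then generate and apply the workload
#    cluster; the provider binds Machines to the registered ByoHost resources
kubectl get byohosts
clusterctl generate cluster byoh-workload --infrastructure byoh \
  --kubernetes-version v1.21.2 | kubectl apply -f -
```

Because the same ByoHost pool can contain x86 and Arm entries, this is also where the hybrid-cluster behavior described above falls out naturally.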

[Read more...]

Categories // ESXi-Arm, Kubernetes, VMware Tanzu Tags // Arm, ESXi, Raspberry Pi, Tanzu Community Edition, Tanzu Kubernetes Grid, TKG

Updates to USB Network & NVMe Community Driver for ESXi 7.0 Update 3

11.11.2021 by William Lam // 16 Comments

Happy Thursday! I know many of you have been asking about the status of support for ESXi 7.0 Update 3 in the popular USB Network Native Driver for ESXi. It has taken a bit longer, as Songtao (the engineer behind the Fling) has been extremely busy and was also recently on PTO. Although I know this is something folks use extensively, I do want to remind everyone that this is provided as a Fling, which means it is developed and supported as time permits. I will certainly do my best to help get new releases out that align with ESXi updates. As a reminder, a new version of the USB Fling will ALWAYS be required for major releases of ESXi, which also includes update releases.

[Read more...]

Categories // ESXi, Home Lab Tags // ESXi 7.0 Update 3, Fling, NVMe, usb ethernet adapter, usb network adapter

VMware Cloud Enterprise Federation with Microsoft Azure Active Directory

11.08.2021 by William Lam // Leave a Comment

As a follow-up to my recent blog post on configuring identity federation between VMware Cloud and AWS SSO using the new Just-in-Time (JiT) provisioning method, I was also interested to see what the process looks like with Microsoft Azure Active Directory (AD), another popular identity provider, which can also benefit our Azure VMware Solution (AVS) customers leveraging VMware Cloud Services. As with AWS SSO, I had never worked with Azure AD before, and this was a good opportunity to check out their service.

Here is a quick video for those interested in the final logon experience when VMware Cloud is using Azure AD as the identity provider:

[Read more...]

Categories // VMware Cloud Tags // active directory, Azure, SAML, VMware Cloud



Author

William is Distinguished Platform Engineering Architect in the VMware Cloud Foundation (VCF) Division at Broadcom. His primary focus is helping customers and partners build, run and operate a modern Private Cloud using the VMware Cloud Foundation (VCF) platform.



Copyright WilliamLam.com © 2025

 