As some of you can probably tell from my recent Twitter updates and blog posts (here and here), I have been spending some time lately with both vSphere with Kubernetes and Tanzu Kubernetes Grid (TKG). Like many of you in the community, I am still pretty new to Kubernetes (K8s), and I am still learning what it has to offer, both from an infrastructure standpoint and, more importantly, in how it can be used to deliver new and modern applications. I am also very lucky to be part of the VMware Event Broker Appliance Open Source Fling project, which builds and runs on top of K8s, and this project has allowed me to really get hands-on, which is how I learn best.
A couple of months back, I was asked to put together a workshop demonstrating how to deploy TKG clusters on VMware Cloud on AWS (VMC). While developing the workshop, I thought it would be really cool if I could make it even easier for anyone brand new to K8s to quickly get started with TKG. I wanted a solution that can literally be dropped into any supported vSphere-based environment with basic networking and go from zero to Kubernetes in less than 30 minutes!
Enter the Demo Appliance for Tanzu Kubernetes Grid (TKG) Fling
A virtual appliance that pre-bundles all required dependencies to help customers learn and deploy standalone Tanzu Kubernetes Grid (TKG) clusters on either VMware Cloud on AWS or vSphere 6.7 Update 3 environments for proof-of-concept, demo, and dev/test purposes. The appliance will enable you to quickly go from zero to Kubernetes in less than 30 minutes with just an SSH client and a web browser!
In addition to the appliance, I have also put together a step-by-step, workshop-style guide which not only walks you through deploying your first TKG cluster but also provides some example demos and references you can explore further. Below are some of the highlights of the Demo Appliance for TKG:
Quickly deploy TKG clusters onto any VMware Cloud on AWS or vSphere-based infrastructure using either the TKG UI or the TKG CLI. Since all dependencies are included within the TKG Demo Appliance, you can actually speed up deployments by running the appliance directly within your VMC or vSphere infrastructure, so there is no additional latency or bandwidth/connectivity to worry about.
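For the CLI path, the workflow from inside the appliance looks roughly like the sketch below. The cluster name `tkg-cluster-01` and the `dev` plan are illustrative choices, not defaults baked into the appliance, and the guard clause is only there so the sketch exits cleanly on a machine without the `tkg` CLI:

```shell
#!/bin/sh
# Rough sketch of the TKG CLI workflow from the Demo Appliance.
# Cluster name and plan below are illustrative, not appliance defaults.

if ! command -v tkg >/dev/null 2>&1; then
  # Not running inside the appliance (or anywhere with the tkg CLI)
  echo "tkg CLI not found; run this sketch from the TKG Demo Appliance"
else
  # Deploy the management cluster (the UI path is: tkg init --ui)
  tkg init --infrastructure=vsphere

  # Create a workload cluster using the built-in "dev" plan
  tkg create cluster tkg-cluster-01 --plan=dev

  # Fetch the kubeconfig for the new cluster and inspect its nodes
  tkg get credentials tkg-cluster-01
  kubectl get nodes
fi
```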
An online vSphere Content Library can be used to automatically download and sync both the TKG Demo Appliance and the other TKG OVA dependency so you can quickly get started.
An embedded Harbor registry pre-loaded with all required TKG and demo containers. This is the perfect solution for an air-gapped environment with no internet access, since you do not have to stand up your own container registry, which is not the easiest thing to set up with TKG. It is also ideal for demo purposes: TKG currently requires outbound internet connectivity, and the embedded registry reduces the barrier to getting started whether you are using VMC or any on-premises vSphere infrastructure.
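As a sketch of how a private registry comes into play, the TKG CLI can be pointed at a custom image repository via environment variables before deploying; the hostname below is illustrative, and the skip-TLS variable is an assumption for registries using self-signed certificates:

```shell
# Point the TKG CLI at an embedded/private Harbor registry instead of the
# public image repository. The hostname here is illustrative only.
export TKG_CUSTOM_IMAGE_REPOSITORY="registry.rainpole.io/library"

# Assumption: skip TLS verification for a self-signed registry certificate
export TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY=true
```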
Sample demo applications, including creating a basic persistent volume and a three-tier K8s application in both a simple mode and a LoadBalancer mode. Additional demos and examples can be added in the future based on feedback or contributions from the community.
Easily access and debug TKG clusters using the powerful Octant UI tool, which is packaged within the appliance.
Martin Flinn says
Show me a TKG Plex server and I'm interested to learn.
William Lam says
Google is your friend 🙂
Thank you for the wonderful posts. It would be good if the TKG Demo Appliance could be deployed in a static IP environment by specifying a pool of IPs.
William Lam says
The DHCP requirement is a TKG-specific requirement, not the appliance's.
Hi William, this is a great post. I have one question: how do I configure a static IP, since the appliance is Photon OS?
William Lam says
Not sure what you mean by configure a static IP? The TKG Demo Appliance requires a static IP; just fill in the OVF properties and that's all set up for you automatically 🙂
Are there plans to support Static IP address allocation for TKG cluster nodes?
Hey William, I think I figured out my error. I didn't have a DHCP zone set up on my DC to support my VLAN 100 (192.168.100.x). I will test that out and let you know if it resolves my issue.