
How to deploy Knative to a Tanzu Kubernetes Grid (TKG) Cluster on both vSphere with Tanzu and TKG Multi-Cloud?

11.23.2020 by William Lam

This weekend I spent some time installing Knative, an open source framework built on top of Kubernetes. Knative is made up of two core components, serving and eventing. This quote from Ram Gopinathan, Principal Technology Architect at T-Mobile, sums up Knative quite nicely:

Knative helps our developers focus on building the business logic rather than worrying about building low-level platform capabilities such as build, deploy, autoscaling, monitoring, and observability.

There are a number of tutorials online for setting up Knative, most of which use Kubernetes in Docker (KinD) for easy local development. Since I have been spending quite a bit of time lately with both our vSphere with Tanzu and Tanzu Kubernetes Grid (TKG) Multi-Cloud solutions, both of which support deploying conformant, production-grade Kubernetes (K8s) clusters called TKG Guest Clusters, I figured I might as well learn how to install Knative on these infrastructures.

The instructions below focus on deploying the Knative serving components. Once you have that set up, it is easy to deploy the eventing components by following the official Knative documentation.

Tanzu Kubernetes Grid Multi-Cloud (TKGm)

I will assume that you have already deployed TKGm and have a TKG Guest Cluster up and running. If you do not, I would highly recommend checking out my TKG Demo Appliance Fling, which enables you to get started in less than 30 minutes running on either VMware Cloud on AWS (VMConAWS), VMware Cloud on Dell EMC or any on-premises vSphere 6.7 Update 3 or later environment.

Step 1 (Optional) - TKGm does not provide an out-of-the-box load balancer for your TKG workloads. Although Knative does not require an LB, having one does simplify the setup. For testing/learning purposes, you can set up MetalLB, and I will walk you through the steps below if you do not already have an LB. If you do, you can move straight to Step 2.

Run the following commands to create the metallb-system namespace along with respective secret and deployment:

kubectl create ns metallb-system
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl -n metallb-system apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml

Run the following command to download the metallb-config.yaml file, which contains the IP Address range that MetalLB will use to allocate addresses for LB requests.

curl -L https://raw.githubusercontent.com/lamw/knative-on-tkg/master/tkgm/metallb-config.yaml -o metallb-config.yaml
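If you would rather create the file by hand instead of downloading mine, the MetalLB v0.9.x configuration is simply a ConfigMap named config in the metallb-system namespace. A minimal sketch with a single layer 2 pool (substitute your own address range) should look roughly like this:

cat > metallb-config.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.2.240-192.168.2.250
EOF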

In my environment, I am on a 192.168.2.0/24 network and I have reserved 192.168.2.240 to 192.168.2.250 for my LB range. Go ahead and edit the file with your desired range, and then run the following command to apply the configuration:

kubectl apply -n metallb-system -f metallb-config.yaml

Step 2 - Deploy the Knative serving components by running the following:

kubectl apply -f https://github.com/knative/serving/releases/download/v0.15.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.15.0/serving-core.yaml

Step 3 - The deployment will take a few minutes. You can watch the following command until all components show Ready before proceeding to the next step.

kubectl -n knative-serving get deployments
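If you would rather have kubectl block until everything is up instead of watching, something along these lines works (the timeout value is arbitrary):

kubectl -n knative-serving wait deployment --all --for=condition=Available --timeout=300s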


Step 4 - For the networking layer, we are going to use Kourier. Run the following commands to deploy Kourier and configure Knative to use it:

kubectl apply -f https://github.com/knative-sandbox/net-kourier/releases/download/v0.15.0/kourier.yaml
kubectl patch configmap/config-network --namespace knative-serving --type merge --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
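To double-check that the patch took effect, you can read the value back out of the ConfigMap; it should return kourier.ingress.networking.knative.dev:

kubectl -n knative-serving get configmap config-network -o jsonpath='{.data.ingress\.class}'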

Step 5 - The deployment will take a few minutes. You can watch the following command until all components show Ready before proceeding to the next step.

kubectl -n kourier-system get deployments


Step 6 - Once the Kourier deployment has completed, we need to retrieve the IP Address that our LB has allocated to the kourier service.

kubectl -n kourier-system get svc kourier


In the example above, the IP Address is 192.168.2.240. Record this, as you will need it for the next step.
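If you would rather grab the external IP programmatically, something along these lines works, and the variable can then be reused in the Step 7 patch below:

# Capture the LB IP assigned to the kourier service
LB_IP=$(kubectl -n kourier-system get svc kourier -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo ${LB_IP}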

Step 7 - Take the LB IP Address from the previous step and run the following command to configure Knative serving:

kubectl patch configmap/config-domain --namespace knative-serving --type merge --patch '{"data":{"192.168.2.240.nip.io":""}}'

Step 8 - To verify that Knative serving has been deployed correctly, we will deploy their sample hello world application. Run the following command to deploy the new service:

kubectl apply -f https://raw.githubusercontent.com/lamw/knative-on-tkg/master/tkgm/service.yaml
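If you would rather not pull the manifest from my repo, the upstream Knative hello world sample can be applied inline. A rough sketch is shown below; the image and TARGET value come from the standard Knative sample, so your manifest may differ:

cat <<EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "Go Sample v1"
EOF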

Step 9 - The deployment will take a minute or so. Watch the following command until the Ready status shows True.

kubectl get ksvc
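Alternatively, you can have kubectl block until the Knative Service reports Ready, for example:

kubectl wait ksvc/helloworld-go --for=condition=Ready --timeout=120s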


Step 10 - Using the previous command, retrieve the URL and perform a cURL against it. If everything was set up correctly, you should receive a response like the one shown below:

curl http://helloworld-go.default.192.168.2.240.nip.io
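If you do not want to copy the URL by hand, the Knative Service also publishes it under its status, so something like the following should return the same response:

curl "$(kubectl get ksvc helloworld-go -o jsonpath='{.status.url}')"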

vSphere with Tanzu

I will assume that you have already deployed a vSphere with Tanzu environment using either HAProxy or NSX-T for your load balancer. If you wish to build your own vSphere with Tanzu homelab, I highly recommend checking out this blog post, which outlines how to set up the entire vSphere environment, including the vCenter Server Appliance (VCSA), using just 32GB of memory. You will also notice below that there are additional pod security policy (PSP) YAML files which must be applied before our workloads are allowed to run; this is by design, to ensure that no rogue workloads are simply scheduled.
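The PSP YAML files referenced in the steps below come from my repo and handle this for each namespace. For reference only, a binding of this sort is typically just a RoleBinding to the psp:vmware-system-privileged ClusterRole that ships with Tanzu Kubernetes clusters; a rough sketch for the knative-serving namespace would look something like this:

# Allow all service accounts in knative-serving to use the privileged PSP
kubectl create rolebinding knative-serving-psp \
  --namespace knative-serving \
  --clusterrole=psp:vmware-system-privileged \
  --group=system:serviceaccounts:knative-serving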

Step 1 - Deploy the Knative serving components by running the following:

kubectl apply -f https://github.com/knative/serving/releases/download/v0.15.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.15.0/serving-core.yaml
kubectl apply -f https://raw.githubusercontent.com/lamw/knative-on-tkg/master/vsphere-with-tanzu/knative-serving-psp.yaml

Step 2 - The deployment will take a few minutes. You can watch the following command until all components show Ready before proceeding to the next step.

kubectl -n knative-serving get deployments


Step 3 - For the networking layer, we are going to use Kourier. Run the following commands to deploy Kourier, configure Knative to use it and apply the PSP:

kubectl apply -f https://github.com/knative-sandbox/net-kourier/releases/download/v0.15.0/kourier.yaml
kubectl patch configmap/config-network --namespace knative-serving --type merge --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
kubectl apply -f https://raw.githubusercontent.com/lamw/knative-on-tkg/master/vsphere-with-tanzu/kourier-system-psp.yaml

Step 4 - The deployment will take a few minutes. You can watch the following command until all components show Ready before proceeding to the next step.

kubectl -n kourier-system get deployments


Step 5 - Once the Kourier deployment has completed, we need to retrieve the IP Address that our LB has allocated to the kourier service.

kubectl -n kourier-system get svc kourier


In the example above, the IP Address is 10.10.0.70. Record this, as you will need it for the next step.

Step 6 - Take the LB IP Address from the previous step and run the following command to configure Knative serving:

kubectl patch configmap/config-domain --namespace knative-serving --type merge --patch '{"data":{"10.10.0.70.nip.io":""}}'

Step 7 - To verify that Knative serving has been deployed correctly, we will deploy their sample hello world application. Run the following commands to deploy the new service along with its PSP:

kubectl apply -f https://raw.githubusercontent.com/lamw/knative-on-tkg/master/vsphere-with-tanzu/service.yaml
kubectl apply -f https://raw.githubusercontent.com/lamw/knative-on-tkg/master/vsphere-with-tanzu/helloworld-go-psp.yaml

Step 8 - The deployment will take a minute or so. Watch the following command until the Ready status shows True.

kubectl get ksvc


Step 9 - Using the previous command, retrieve the URL and perform a cURL against it. If everything was set up correctly, you should receive a response like the one shown below:

curl http://helloworld-go.default.10.10.0.70.nip.io

Categories // Cloud Native, Kubernetes, VMware Tanzu Tags // Knative, Kubernetes, Tanzu Kubernetes Grid, vSphere with Tanzu
