
Interesting Kubernetes application demos

06.08.2020 by William Lam // 3 Comments

I am always on the lookout for cool and interesting demos to deploy, especially with some of the work I have been doing lately with vSphere with Kubernetes (K8s) and Tanzu Kubernetes Grid (TKG). I am sure many of you have seen the basic WordPress demos, which seem to be the typical "Hello World" app for K8s. Having something more compelling not only makes the demo more interesting, but it can also help folks better understand how modern applications can be built, deployed and run.

Below is a list of the K8s demo applications that I have come across as part of my exploration, and by no means is this an exhaustive list. I have been able to successfully deploy these applications on the latest versions of K8s (1.17 and 1.18); I did come across other demos which did not work or which I had issues setting up. If there are other K8s demos that folks have used, feel free to leave a comment and I will update the blog post after doing some basic testing.

For those of you who may not have a K8s environment and are running either vSphere 6.7 Update 3 or have access to a VMware Cloud on AWS SDDC, you can easily set up a TKG Cluster in under 30 minutes leveraging my TKG Demo Appliance Fling.

Yelb

Yelb has been one of my go-to demos due to its simplicity. For those not familiar, Yelb is a famous VMware demo application that was built by my good friends Massimo Re Ferre' and Andrea Siviero. This web application contains the following services: a UI frontend, an application server, a database server and a caching service using Redis.


Special Requirements:

  • None

NodePort Deployment:

Step 1 - Deploy the application

kubectl create ns yelb
kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb.yaml

Step 2 - Verify all pods are ready

kubectl -n yelb get pods

Step 3 - To access the application, open a web browser to http://<node-ip>:30001. Use the following command to identify the node on which the yelb-ui pod is running:

kubectl -n yelb describe pod $(kubectl -n yelb get pods | grep yelb-ui | awk '{print $1}') | grep "Node:"
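To show what that pipeline extracts, here is a minimal sketch run against simulated `kubectl describe` output (the pod name, node name and IP below are made up); the grep/awk pulls out the Node: line, and a parameter expansion strips the node name to leave just the IP for the URL:

```shell
# Simulated output of "kubectl -n yelb describe pod ..." (values are made up)
sample_describe='Name:   yelb-ui-6b5c855b5c-abcde
Node:   k8s-node-01/192.168.1.10
Status: Running'

# Same extraction as above: keep the Node: line, take the second column
node=$(printf '%s\n' "$sample_describe" | grep "Node:" | awk '{print $2}')
echo "$node"                      # k8s-node-01/192.168.1.10
echo "http://${node##*/}:30001"   # strip "name/" to keep just the node IP
```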

LoadBalancer Service Deployment:

Step 1 - Deploy the application

kubectl create ns yelb
kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb-lb.yaml

Step 2 - Verify all pods are ready

kubectl -n yelb get pods

Step 3 - To access the application, open a web browser to http://<external-ip>

kubectl -n yelb get svc/yelb-ui

Note: Your K8s cluster must be able to support a LoadBalancer Service

VMware Event Broker Application

I know many of you are already familiar with the VMware Event Broker Appliance (VEBA) Fling, which enables customers to easily build event-driven automation based on vCenter Server events. Since VEBA is built as a native K8s application, customers that have an existing K8s Cluster, including vSphere with K8s and TKG, can also get the benefits of VEBA!

Special Requirements:

  • None

Deployment:

Detailed instructions on deploying and configuring VEBA with K8s can be found here.

Kubeapps

I recently learned about Kubeapps, which provides a very slick web interface for deploying and managing K8s applications, including Helm charts. After deploying Kubeapps, you immediately have access to a catalog of several hundred ready-to-deploy applications. It includes basic apps like WordPress, but it also contains some of the most popular and up-to-date open source applications being deployed in production, such as Kafka, Jenkins and Elasticsearch. If you have your own application repository, you can add it to Kubeapps, which will automatically give you a UI to deploy your own custom K8s applications. Kubeapps is also a VMware open source project that is being led by the Bitnami team, which was recently acquired by VMware. Definitely worth checking out!


Special Requirements:

  • Helm is required for installation
  • Storage Classes (for persistent apps)

Deployment:

Detailed instructions can be found here. To learn more about Kubeapps, Tiffany Jernigan, a Developer Advocate for VMware, has a fantastic getting-started video which I also recommend checking out.

ACME Fitness

The ACME Fitness application is built by our VMware Cloud Advocacy team and demonstrates an example e-commerce/online store application made up of a number of different microservices. It also includes a load generator tool which looks pretty neat.


Special Requirements:

  • Load Balancer required

Deployment:

Step 1 - Clone repo

git clone https://github.com/vmwarecloudadvocacy/acme_fitness_demo.git
cd acme_fitness_demo/kubernetes-manifests

Step 2 - Specify a secret value and a namespace to use for the deployment. If you do not have permission to create a namespace, simply omit the namespace from each of the commands.

ACME_SECRET=VMware1!
ACME_NAMESPACE=acme-fitness

Step 3 - Deploy application

kubectl create ns ${ACME_NAMESPACE}
kubectl -n ${ACME_NAMESPACE} create secret generic cart-redis-pass --from-literal=password=${ACME_SECRET}
kubectl -n ${ACME_NAMESPACE} apply -f cart-redis-total.yaml
kubectl -n ${ACME_NAMESPACE} apply -f cart-total.yaml
kubectl -n ${ACME_NAMESPACE} create secret generic catalog-mongo-pass --from-literal=password=${ACME_SECRET}
kubectl -n ${ACME_NAMESPACE} create -f catalog-db-initdb-configmap.yaml
kubectl -n ${ACME_NAMESPACE} apply -f catalog-db-total.yaml
kubectl -n ${ACME_NAMESPACE} apply -f catalog-total.yaml
kubectl -n ${ACME_NAMESPACE} apply -f payment-total.yaml
kubectl -n ${ACME_NAMESPACE} create secret generic order-postgres-pass --from-literal=password=${ACME_SECRET}
kubectl -n ${ACME_NAMESPACE} apply -f order-db-total.yaml
kubectl -n ${ACME_NAMESPACE} apply -f order-total.yaml
kubectl -n ${ACME_NAMESPACE} create secret generic users-mongo-pass --from-literal=password=${ACME_SECRET}
kubectl -n ${ACME_NAMESPACE} create secret generic users-redis-pass --from-literal=password=${ACME_SECRET}
kubectl -n ${ACME_NAMESPACE} create -f users-db-initdb-configmap.yaml
kubectl -n ${ACME_NAMESPACE} apply -f users-db-total.yaml
kubectl -n ${ACME_NAMESPACE} apply -f users-redis-total.yaml
kubectl -n ${ACME_NAMESPACE} apply -f users-total.yaml
kubectl -n ${ACME_NAMESPACE} apply -f frontend-total.yaml
kubectl -n ${ACME_NAMESPACE} apply -f point-of-sales-total.yaml
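Since most of those steps are plain kubectl apply calls, they can be driven from a loop; this dry-run sketch just prints the commands it would run (the manifest list is abbreviated, and the secrets/configmaps would still be created separately as above):

```shell
ACME_NAMESPACE=acme-fitness
# Abbreviated manifest list, applied in the same order as above
manifests=(cart-redis-total.yaml cart-total.yaml frontend-total.yaml)
for f in "${manifests[@]}"; do
  echo "kubectl -n ${ACME_NAMESPACE} apply -f $f"
done
```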

Step 4 - Verify all pods are ready. I did notice that the point-of-sales (pos) pod may end up in a crash loop.

kubectl -n ${ACME_NAMESPACE} get pods
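If you want to script the readiness check rather than eyeball the READY column, a small awk filter over the get pods output will list any pod whose ready count does not match its desired count — shown here against simulated output (the pod names are made up):

```shell
# Simulated "kubectl -n ${ACME_NAMESPACE} get pods" output (names made up)
sample='NAME       READY   STATUS             RESTARTS
cart-abc   1/1     Running            0
pos-xyz    0/1     CrashLoopBackOff   5'

# Print any pod where the READY column (ready/desired) does not match up
not_ready=$(printf '%s\n' "$sample" | \
  awk 'NR>1 {split($2,a,"/"); if (a[1] != a[2]) print $1}')
echo "$not_ready"   # pos-xyz
```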

Step 5 - To access the application, open a web browser to http://<external-ip>

kubectl -n ${ACME_NAMESPACE} get svc/frontend

Online Boutique

Online Boutique is another e-commerce/online store demo, built by the folks over at Google, that demonstrates a 10-tier microservice application.


Special Requirements:

  • Load Balancer required

Step 1 - Clone repo

git clone https://github.com/GoogleCloudPlatform/microservices-demo.git
cd microservices-demo

Step 2 - Create a namespace to use for the deployment. If you do not have permission to create a namespace, simply omit it from each of the commands.

kubectl create ns boutique

Step 3 - Deploy application

kubectl -n boutique apply -f ./release/kubernetes-manifests.yaml

Step 4 - Verify all pods are ready

kubectl -n boutique get pods

Step 5 - To access the application, open a web browser to http://<external-ip>

kubectl -n boutique get svc/frontend-external

Robot Shop

Robot Shop is another e-commerce/online store application, built by Instana using a number of different web technologies and microservices. For the sci-fi fans out there, this might be your cup of tea 🙂


Special Requirements:

  • Helm is required for installation
  • Load Balancer required

Step 1 - Clone repo

git clone https://github.com/instana/robot-shop.git
cd robot-shop/K8s/helm

Step 2 - Create a namespace to use for the deployment. If you do not have permission to create a namespace, simply omit the --namespace flag from the Helm command below.

kubectl create ns robot-shop

Step 3 - Deploy application

helm install robot-shop --namespace robot-shop .

Step 4 - Verify all pods are ready

kubectl -n robot-shop get pods

Step 5 - To access the application, open a web browser to http://<external-ip>:8080

kubectl -n robot-shop get svc/web
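The EXTERNAL-IP column from that get svc output is what goes into the URL; here is a small sketch that pulls it out with awk, run against simulated output (the addresses are made up):

```shell
# Simulated "kubectl -n robot-shop get svc/web" output (addresses made up)
sample='NAME   TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
web    LoadBalancer   10.0.0.5     10.10.10.50   8080:30080/TCP'

# EXTERNAL-IP is the fourth column of the data row
ip=$(printf '%s\n' "$sample" | awk 'NR==2 {print $4}')
echo "http://${ip}:8080"   # http://10.10.10.50:8080
```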

Kubedoom

Kubedoom is a fun little application that I came across which allows you to have some fun killing demons, err I mean pods, within your K8s Cluster to test the resiliency of an application. It's hard work but someone has to do it 😉

Special Requirements:

  • VNC Client

Step 1 - Clone repo

git clone https://github.com/storax/kubedoom.git
cd kubedoom

Step 2 - Deploy application

kubectl apply -f manifest/

Step 3 - Identify the IP Address of the K8s Node to connect to

kubectl -n kubedoom describe pod | grep 'Node:'

Note: If you are using the TKG Demo Appliance, you can also set up an SSH port forward to the K8s Node by using ssh root@[TKG-IP] -L 5900:[K8s-NODE-IP]:5900 and then connecting to localhost:5900 from your local desktop after successfully establishing the SSH tunnel.

Step 4 - Connect to the Kubedoom container using any VNC Client on port 5900; the password is idbehold.

Kubevaders

I initially had Kubevaders on my list but was not able to get it to work. I finally got a chance to take another look and figured it out. If Doom is not your cup of tea, Kubevaders is another fun way of killing pods.

Special Requirements:

  • Helm is required for installation
  • Load Balancer required

Note: If you are using my TKG Demo Appliance, you can deploy Metallb for the Load Balancer by following the instructions here.

Step 1 - Clone repo

git clone https://github.com/lucky-sideburn/KubeInvaders.git
cd KubeInvaders

Step 2 - Install an Ingress Controller. For simplicity, we are using Nginx and will be installing it via Helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx

Step 3 - Create a namespace and deploy the application using Helm. You will need to specify a list of K8s Namespaces where you want the Pods to be made available for your destruction. Here I am using the Acme Fitness application, which has quite a few pods. You can specify more than one namespace, just make sure to escape each comma by doing something like "ns1\,ns2\,ns3"

kubectl create namespace kubeinvaders
helm install kubeinvaders --set-string target_namespace="acme-fitness" --namespace kubeinvaders ./helm-charts/kubeinvaders --set ingress.hostName=tkg-k8s-invaders.io
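The comma escaping mentioned above can also be generated with a bash parameter expansion rather than typed by hand — a small sketch (the namespace names are just examples):

```shell
# Helm's --set parser treats "," as a list separator, so each comma in the
# namespace list has to be escaped with a backslash before Helm sees it
namespaces="acme-fitness,yelb,boutique"   # example namespaces
escaped=${namespaces//,/\\,}              # replace every "," with "\,"
echo "$escaped"                           # acme-fitness\,yelb\,boutique
```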

Step 4 - Retrieve the external IP Address that should have been allocated to your Nginx Ingress Controller

kubectl get svc/nginx-ingress-ingress-nginx-controller

Step 5 - As part of deploying the application, an ingress hostname property was set (tkg-k8s-invaders.io by default), which will be used to access the demo. You can either create a DNS entry or simply add a hosts entry that maps the hostname to the IP Address.
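If you go the hosts-entry route, the line is just the load balancer IP followed by the ingress hostname; a small sketch (the IP below is a stand-in for the one returned by the previous step):

```shell
EXTERNAL_IP=10.10.10.100         # stand-in for your Nginx controller's IP
INGRESS_HOST=tkg-k8s-invaders.io
hosts_entry="${EXTERNAL_IP} ${INGRESS_HOST}"
echo "$hosts_entry"              # append this line to /etc/hosts
```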

Step 6 - Open a browser from a system that can access the external IP Address/Hostname and you should now be taken to the Kubevaders application. You can switch between the different namespaces and start killing your pods!


Retro DOS Game Engine

Please see this blog post for deploying a retro DOS (based on DOSBox) game engine running on Kubernetes.

Pacman on Kubernetes

While researching some fun demos, including my recent retro gaming on Kubernetes, I also came across this fun project from Ivan Font demonstrating a classic Pacman game over HTML5, using a simple MongoDB backend for stats. It has been several years since the project was last touched, and when I initially looked at the nginx version of the application, the build process failed due to various issues. I almost gave up, but decided to look at the nodejs version and was finally able to make that work, which required several tweaks since most of the YAML in the repository was no longer valid for recent versions of Kubernetes.

Special Requirements:

  • Default Storage Class required
  • Load Balancer required

Note: If you are using my TKG Demo Appliance, you can deploy Metallb for the Load Balancer and default StorageClass by following the instructions here and here.

Step 1 - Clone repo

git clone https://github.com/font/k8s-example-apps.git
cd k8s-example-apps/pacman-nodejs-app

Step 2 - Specify your Dockerhub username in the DOCKER_USERNAME variable and then login. This step is needed as we will build the container image and push it to your account.

DOCKER_USERNAME=lamw
docker login -u ${DOCKER_USERNAME}

Step 3 - Build the Pacman nodejs application:

docker build -t ${DOCKER_USERNAME}/pacman-nodejs-app docker/
docker push ${DOCKER_USERNAME}/pacman-nodejs-app

Step 4 - Create the pacman namespace which will contain all of our deployed resources

kubectl create namespace pacman

Step 5 - Create the MongoDB PVC

cat > mongo-pvc-new.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
EOF

kubectl -n pacman apply -f mongo-pvc-new.yaml

Step 6 - Create the MongoDB Service:

kubectl -n pacman apply -f services/mongo-service.yaml

Step 7 - Create the MongoDB Deployment:

cat > mongo-deployment-new.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mongo
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
        volumeMounts:
          - name: mongo-db
            mountPath: /data/db
      volumes:
        - name: mongo-db
          persistentVolumeClaim:
            claimName: mongo-storage
EOF

kubectl -n pacman apply -f mongo-deployment-new.yaml

Step 8 - Create the Pacman Service:

cat > pacman-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: pacman
  labels:
    name: pacman
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    name: pacman
EOF

kubectl -n pacman apply -f pacman-service.yaml

Step 9 - Create the Pacman Deployment:

cat > pacman-deployment-new.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: pacman
  name: pacman
spec:
  replicas: 1
  selector:
    matchLabels:
        name: pacman
  template:
    metadata:
      labels:
        name: pacman
    spec:
      containers:
      - image: ${DOCKER_USERNAME}/pacman-nodejs-app:latest
        name: pacman
        ports:
        - containerPort: 8080
          name: http-server

EOF

kubectl -n pacman apply -f pacman-deployment-new.yaml
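One detail worth calling out in the heredoc above: because the delimiter is unquoted (<< EOF rather than << 'EOF'), the shell expands ${DOCKER_USERNAME} before the manifest ever reaches kubectl. A minimal demonstration:

```shell
DOCKER_USERNAME=lamw   # example username, as in Step 2
# An unquoted EOF delimiter means the shell substitutes variables
# inside the heredoc body before it is consumed
expanded=$(cat << EOF
image: ${DOCKER_USERNAME}/pacman-nodejs-app:latest
EOF
)
echo "$expanded"   # image: lamw/pacman-nodejs-app:latest
```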

Step 10 - Verify that all deployments and services are running:

kubectl -n pacman get all


Step 11 - Open a browser from a system that can access the external IP Address of the service/pacman, as shown in the previous output, and start playing! Audio is also functional but disabled by default; simply toggle the speaker icon to enjoy 🙂


Minecraft on Kubernetes

Here is a fun project by Eric Jadi that uses Minecraft to manage (play with) your workloads in Kubernetes.

I personally do not play Minecraft, so I have not tried out his solution, but you can find more details here.


