Getting started with VMware Pivotal Container Service (PKS) Part 2: PKS Client

03.26.2018 by William Lam // 6 Comments

In this article, we will configure the various command-line tools used to interact with the PKS platform, which will be consumed by either the Operators (managing the PKS infrastructure) and/or the Developers (consumers of the Kubernetes Clusters).

Below is a quick summary of the CLIs we will be installing, along with a description and the primary consumer of each:

CLI | Description | Consumer
pks | Used to create/delete and manage K8S Clusters | Operator
kubectl | Used to interact with a K8S Cluster and deploy applications, including scaling up/down | Developer
uaac | Used to manage user accounts and authorization for the PKS platform | Operator
bosh | Used to manage PKS deployments and provide information about the VMs using its Cloud Provider Interface (CPI), which is vSphere in this case | Operator
om | Used to manage and interact with Ops Manager | Operator
nsx-cli.sh | Used to clean up NSX-T objects after a K8S Cluster has been deleted (will be automated by PKS in a future release) | Operator

Both the PKS and Kubectl CLIs are supported on Windows, macOS and Linux; you can refer to Part 1 for a link to the binary downloads. The remainder of the tools are primarily used by Operators, and to make them accessible to multiple users, you can deploy a centralized management VM. In my lab, I refer to this VM as the "PKS Client", which is where we will be installing all the CLIs. You can use a variety of supported Operating Systems, but I found Ubuntu to work the best, especially for some of the package dependencies. I did try to use our own PhotonOS, but I had some trouble figuring out the required packages. If I figure it out, I will update this article, as PhotonOS may be preferred over Ubuntu if you have never worked with the latter before.
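
To give a feel for setting up the PKS Client VM, here is a minimal sketch of what the CLI installs might look like on Ubuntu; the package names and file locations below are assumptions for illustration, so adjust them for your environment (the pks and kubectl binaries come from the download links in Part 1):

# uaac ships as the cf-uaac Ruby gem, which needs Ruby and build tools to install
sudo apt-get update && sudo apt-get install -y ruby ruby-dev gcc make
sudo gem install cf-uaac

# pks and kubectl are standalone binaries downloaded from Pivotal Network (see Part 1)
chmod +x pks kubectl
sudo mv pks kubectl /usr/local/bin/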

If you missed any of the previous articles, you can find the complete list here:

  • Getting started with VMware Pivotal Container Service (PKS) Part 1: Overview
  • Getting started with VMware Pivotal Container Service (PKS) Part 2: PKS Client
  • Getting started with VMware Pivotal Container Service (PKS) Part 3: NSX-T
  • Getting started with VMware Pivotal Container Service (PKS) Part 4: Ops Manager & BOSH
  • Getting started with VMware Pivotal Container Service (PKS) Part 5: PKS Control Plane
  • Getting started with VMware Pivotal Container Service (PKS) Part 6: Kubernetes Go!
  • Getting started with VMware Pivotal Container Service (PKS) Part 7: Harbor
  • Getting started with VMware Pivotal Container Service (PKS) Part 8: Monitoring Tool Overview
  • Getting started with VMware Pivotal Container Service (PKS) Part 9: Logging
  • Getting started with VMware Pivotal Container Service (PKS) Part 10: Infrastructure Monitoring
  • Getting started with VMware Pivotal Container Service (PKS) Part 11: Application Monitoring
  • vGhetto Automated Pivotal Container Service (PKS) Lab Deployment

Categories // Automation, Cloud Native, Kubernetes, NSX Tags // BOSH, cloud native apps, kubectl, Kubernetes, nsx-cli.sh, om, PCF, Pivotal, PKS, uaac

Getting started with VMware Pivotal Container Service (PKS) Part 1: Overview

03.23.2018 by William Lam // 17 Comments

This past week and a half, I have been spending quite a bit of time familiarizing myself with the recently released VMware Pivotal Container Service solution, also referred to as VMware PKS for short (yes, that is a K and not a C, which is a nod to Google's container scheduler, Kubernetes). VMware PKS is part of a project I am currently working on, and I figured I would share the process and steps I took to deploy it in my own personal lab, in case other folks are interested in trying out this neat and powerful solution for deploying Cloud Native Apps using Kubernetes, which was co-developed by VMware, Pivotal and Google.

If you would like to learn more about this first release of VMware PKS and the benefits it provides to both developers (consumers) and operators (admins/SRE) of Kubernetes infrastructure, check out this blog post here. Merlin Glynn, one of the Product Managers for PKS, also did an awesome lightboard video overview of VMware PKS if you want the SparkNotes version. If you simply want to give PKS a try without deploying anything, the CNA folks have also published a PKS HOL, which you can find here. Another useful resource is the Getting Started with Kubernetes-as-a-Service post from Michael West, who works in the CNA team and built the PKS HOL.


This will be the first in a series of articles outlining my VMware PKS deployment and configuration, which I hope can benefit others, as it took me several attempts while learning about the solution. Although the first few articles will include manual guidance, rest assured, there will be some cool automation towards the end; I figure folks may want to go through this once by hand to get a good understanding of all the different components and how they interact with each other. Plus, some of the PKS-specific automation is still being worked on by the product team, and hopefully I will be able to share some of that real soon.

If you missed any of the previous articles, you can find the complete list here:

  • Getting started with VMware Pivotal Container Service (PKS) Part 1: Overview
  • Getting started with VMware Pivotal Container Service (PKS) Part 2: PKS Client
  • Getting started with VMware Pivotal Container Service (PKS) Part 3: NSX-T
  • Getting started with VMware Pivotal Container Service (PKS) Part 4: Ops Manager & BOSH
  • Getting started with VMware Pivotal Container Service (PKS) Part 5: PKS Control Plane
  • Getting started with VMware Pivotal Container Service (PKS) Part 6: Kubernetes Go!
  • Getting started with VMware Pivotal Container Service (PKS) Part 7: Harbor
  • Getting started with VMware Pivotal Container Service (PKS) Part 8: Monitoring Tool Overview
  • Getting started with VMware Pivotal Container Service (PKS) Part 9: Logging
  • Getting started with VMware Pivotal Container Service (PKS) Part 10: Infrastructure Monitoring
  • Getting started with VMware Pivotal Container Service (PKS) Part 11: Application Monitoring
  • vGhetto Automated Pivotal Container Service (PKS) Lab Deployment

Categories // Automation, Cloud Native, ESXi, Kubernetes, NSX, VSAN, vSphere Tags // BOSH, cloud native apps, Kubernetes, PCF, Pivotal, PKS

Test driving VMware Photon Controller Part 3a: Deploying Kubernetes

04.21.2016 by William Lam // 6 Comments

If you have been following the series thus far, we covered installing Photon Controller in Part 1 and then learned how to create our first virtual machine using Photon Controller in Part 2. Next up, we will demonstrate how easy it is to stand up the three different Cluster Orchestration solutions that are supported on top of Photon Controller, starting with Kubernetes. Once the Cluster Orchestration solution has been set up, you can deploy your application as you normally would through the Cluster Orchestration, and behind the scenes Photon Controller will automatically provision the necessary infrastructure to run your given application without your having to know anything about the underlying resources.

[Image: test-driving-photon-controller-k8-cluster]
If you recall from our last article, there are several default VM Flavors included in the Photon Controller installation. The ones named cluster-* are VM Flavors used for deploying the Cluster Orchestration virtual machines, which have been configured to support high scale and throughput (up to 4vCPU and 8GB of memory). If you are testing this in a lab environment where you might be constrained on memory resources for your ESXi host (16GB of memory), then you have a few options. The first option is to create a new VM Flavor with a smaller configuration (e.g. 1vCPU/2GB of memory) and then override the default VM Flavor when deploying the Cluster Orchestration. The second option, which I learned from talking to Kris Thieler, is that you can re-define the default cluster-* VM Flavors to fit your environment's needs, which he has documented here. To simplify our deployment, we will use Option 1 and just create a new VM Flavor to override the default. If you have more than 16GB of memory, you can skip Step 2.

Deploying Kubernetes Cluster

Step 1 - Download the Kubernetes VMDK from here and the Kubectl binary from here.

Step 2 - Run the following command to create our new VM Flavor override which we will call cluster-tiny-vm that is configured with 1vCPU/1GB of memory:

./photon -n flavor create --name cluster-tiny-vm --kind "vm" --cost "vm 1 COUNT,vm.flavor.cluster-other-vm 1 COUNT,vm.cpu 1 COUNT,vm.memory 1 GB,vm.cost 1 COUNT"

Step 3 - We will now upload our Kubernetes image and make note of the ID generated after the upload by running the following command:

./photon -n image create photon-kubernetes-vm-disk1.vmdk -n photon-kubernetes-vm.vmdk -i EAGER

Step 4 - Next, we also need the ID of our Photon Controller deployment, as it will be required in the next step. Retrieve it by running the following command:

./photon deployment list
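
If you plan to script these steps, one way to capture the deployment ID is to grep the first UUID-shaped string out of the CLI output; this is just a sketch, and has the advantage of not depending on the exact column layout:

# extract the first UUID from the deployment listing into a variable
DEPLOYMENT_ID=$(./photon deployment list | grep -oE '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}' | head -1)
echo ${DEPLOYMENT_ID}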

Step 5 - We will now enable the Kubernetes Cluster Orchestration on our Photon Controller instance by running the following command and specifying the ID of your deployment as well as the ID of the Kubernetes image from the previous two steps:

./photon -n deployment enable-cluster-type 7fd9a13d-e69e-4165-9b34-d436f4c67ea1 -k KUBERNETES -i 4332af67-2ff0-49f7-ba44-dd4140908e32

[Image: test-driving-photon-controller-k8-0]
Step 6 - We can also see what Cluster Orchestration solutions have been enabled for our Photon Controller by running the following command and specifying our deployment ID:

./photon deployment show 7fd9a13d-e69e-4165-9b34-d436f4c67ea1

[Image: test-driving-photon-controller-k8-1]
As you can see from the screenshot above, there is a Cluster Configuration section which provides a list of Cluster Orchestration solutions that have been enabled as well as their respective image.

Step 7 - We are now ready to spin up our Kubernetes (K8) Cluster by simply running the following command and substituting the network information for your environment. We are also only going to deploy a single K8 Slave (if you have additional resources, you can spin up more, or you can always re-size the cluster after it has been deployed), and lastly, we will override the default VM Flavor by specifying the -v option and providing the name of our VM Flavor, cluster-tiny-vm. You can just hit enter when prompted for the two etcd IP Addresses; the assumption is that you have DHCP running and those will automatically obtain an address.

./photon cluster create -n k8-cluster -k KUBERNETES --dns 192.168.1.1 --gateway 192.168.1.1 --netmask 255.255.255.0 --master-ip 192.168.1.55 --container-network 10.2.0.0/16 --etcd1 192.168.1.56 -s 1 -v cluster-tiny-vm

[Image: test-driving-photon-controller-k8-2]
Step 8 - The process can take a few minutes and you should see a message like the one shown above which prompts you to run the cluster show command to get more details about the state of the cluster.

./photon cluster show 9b159e92-9495-49a4-af58-53ad4764f616

[Image: test-driving-photon-controller-k8-3]
Exploring Kubernetes

At this point, you have successfully deployed a fully functional K8 Cluster using Photon Controller with just a single command. We can now explore our K8 setup a bit using the kubectl CLI that you downloaded earlier. For more information on how to interact with a K8 Cluster using the kubectl command, be sure to check out the official K8 documentation here.

To view the nodes within the K8 Cluster, run the following command, specifying the IP Address of the master VM provided in the previous step:

./kubectl -s 192.168.1.55:8080 get nodes

[Image: test-driving-photon-controller-k8-4]
Let's now do something useful with our K8 Cluster and deploy a simple Tomcat application. We first need to download the following two configuration files that define our application:

  • photon-Controller-Tomcat-rc.yml
  • photon-Controller-Tomcat-service.yml

We then need to edit the photon-Controller-Tomcat-rc.yml file and delete the last two lines, as they contain incorrect syntax:

labels:
name: "tomcat-server"

To deploy our application, we will run the following two commands which will setup our replication controller as well as the service for our Tomcat application:

./kubectl -s 192.168.1.55:8080 create -f photon-Controller-Tomcat-rc.yml
./kubectl -s 192.168.1.55:8080 create -f photon-Controller-Tomcat-service.yml

We can then check the status of our application deployment by running the following command:

./kubectl -s 192.168.1.55:8080 get pods

[Image: test-driving-photon-controller-k8-5]
You should see a tomcat-server-* entry and its status should say "Image: tomcat is not ready on the node". Give it a few seconds and then re-run the command until the status shows "Running", which means our application has been successfully deployed by the K8 Cluster.
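
Rather than re-running the command by hand, you could poll it with watch (assuming watch is available on your system):

# refresh the pod listing every 5 seconds until the status shows Running
watch -n 5 './kubectl -s 192.168.1.55:8080 get pods'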

We can now open a browser to our K8 Master VM's IP Address, which in my environment was 192.168.1.55, and specify port 30001, which was defined in the Tomcat application's configuration file, and we should see that Tomcat is running.
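
If you want to verify from the command line first, a quick curl against the same address and port should return a response from Tomcat:

# fetch just the HTTP response headers from the Tomcat service
curl -I http://192.168.1.55:30001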

[Image: test-driving-photon-controller-k8-6]
We can also easily scale up the number of replication servers for our Tomcat application by running the following command:

./kubectl -s 192.168.1.55:8080 scale --replicas=2 rc tomcat-server

You can easily scale the application back down by re-running the command and specifying a value of one. Lastly, if we want to delete our application, we can run the following two commands:

./kubectl -s 192.168.1.55:8080 delete service tomcat
./kubectl -s 192.168.1.55:8080 delete rc tomcat-server

Once we are done using our K8 Cluster, we can tear it down by running the following command, specifying the ID of the K8 Cluster found in Step 8; this will delete the VMs that Photon Controller deployed:

./photon -n cluster delete 9b159e92-9495-49a4-af58-53ad4764f616

Hopefully this gave you a quick taste of how easy it is to set up a fully functional K8 Cluster using Photon Controller. In the next article, we will take a look at deploying a Mesos Cluster using Photon Controller, so stay tuned!

  • Test driving VMware Photon Controller Part 1: Installation
  • Test driving VMware Photon Controller Part 2: Deploying first VM
  • Test driving VMware Photon Controller Part 3a: Deploying Kubernetes
  • Test driving VMware Photon Controller Part 3b: Deploying Mesos
  • Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

Categories // Automation, Cloud Native, ESXi, vSphere 6.0 Tags // cloud native apps, ESXi, Kubernetes, Photon Controller
