Continuing with our PKS installation, we are now going to finish up by configuring and deploying the PKS Control Plane Tile, which provides a frontend API that Cloud/Platform Operators will use to easily interact with PKS for provisioning and managing (create, delete, list, scale up/down) Kubernetes (K8S) Clusters. Once a K8S Cluster has been successfully deployed through PKS, operators simply provide their developers the external hostname of the K8S Cluster and the kubectl configuration file, and the developers can immediately start deploying applications without knowing anything about PKS or how it works! If an application that a developer is deploying requires an external load balancer service, they can easily specify that in their application deployment YAML file, and behind the scenes PKS will automatically provision an NSX-T Load Balancer on demand to service the application. This is completely seamless and does not require any additional assistance from the operator.
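To make that last point concrete, here is a minimal sketch of such a deployment YAML applied with kubectl; the Service name and label selector are hypothetical, but simply setting the Service type to LoadBalancer is what triggers the on-demand NSX-T Load Balancer provisioning:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb              # hypothetical Service name
spec:
  type: LoadBalancer          # this single line triggers the on-demand NSX-T Load Balancer
  selector:
    app: nginx                # matches the labels on the application's Pods (hypothetical)
  ports:
  - port: 80                  # port exposed on the load balancer VIP
    targetPort: 80            # port the application container listens on
EOF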
If you missed any of the previous articles, you can find the complete list here:
- Getting started with VMware Pivotal Container Service (PKS) Part 1: Overview
- Getting started with VMware Pivotal Container Service (PKS) Part 2: PKS Client
- Getting started with VMware Pivotal Container Service (PKS) Part 3: NSX-T
- Getting started with VMware Pivotal Container Service (PKS) Part 4: Ops Manager & BOSH
- Getting started with VMware Pivotal Container Service (PKS) Part 5: PKS Control Plane
- Getting started with VMware Pivotal Container Service (PKS) Part 6: Kubernetes Go!
- Getting started with VMware Pivotal Container Service (PKS) Part 7: Harbor
- Getting started with VMware Pivotal Container Service (PKS) Part 8: Monitoring Tool Overview
- Getting started with VMware Pivotal Container Service (PKS) Part 9: Logging
- Getting started with VMware Pivotal Container Service (PKS) Part 10: Infrastructure Monitoring
- Getting started with VMware Pivotal Container Service (PKS) Part 11: Application Monitoring
- vGhetto Automated Pivotal Container Service (PKS) Lab Deployment
Step 1 - If you have not already downloaded PKS (pivotal-container-service-*.pivotal), please see Part 1 for the download URL. To import the PKS Tile, go to the home page of Ops Manager and click "Import a Product" and select the PKS package to begin.
Once the PKS Tile has been successfully imported, go ahead and click on the "plus" symbol to add the PKS Tile, which will make it available for us to start configuring, similar to what we did for the BOSH Tile. After that, click on the PKS Tile so we can begin the configuration.
Step 2 - This first section defines the AZ and Networks that will be used to deploy the PKS Control Plane VM as well as the K8S management Pods. These were all previously defined when we configured BOSH.
- Singleton Jobs: AZ-Management
- Balance Jobs: AZ-Management
- Network: pks-mgmt-network
- Service Network: k8s-mgmt-cluster-network
Step 3 - This next section is for the PKS API endpoint, and a certificate will be generated based on your DNS domain. In my environment, the domain is primp-industries.com, and you will need to add a wildcard in front as shown in the screenshot below.
Step 4 - The next two sections (Plan 1 and Plan 2) are used to configure the size and resources used for each of the VM types in a K8S Cluster. During K8S Cluster deployment, you can specify one of these "plans" to decide how big a given VM instance is for different deployment scenarios (see the sketch below). For now, you can leave the defaults (you can always come back later and modify them); you simply need to assign the AZ for placement, which in our case is AZ-Compute, and you will need to do this for both Plans.
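To give an idea of how these plans are consumed later, here is a sketch of the PKS CLI request we will cover in the next article; the cluster name and hostname are hypothetical, this assumes Plan 1 keeps its default name of "small", and the exact flag names may vary between PKS CLI versions:

# Request a new K8S Cluster using the "small" plan (Plan 1)
pks create-cluster k8s-cluster-01 \
  --external-hostname k8s-cluster-01.primp-industries.com \
  --plan small \
  --num-nodes 3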
Step 5 - In this section, we need to specify "vSphere" as our IaaS and provide the credentials to our Compute vCenter Server along with the datastore to which persistent disks will be deployed by PKS. Behind the scenes, when an application requests a persistent disk (the default is ephemeral), the Project Hatchway plugin intercepts the request and uses these credentials to create a persistent VMDK and make it available back to the application (see the sketch below). This is all done seamlessly and on-demand without any interaction between the developer deploying the application and the Cloud/Platform Operator. For the "Stored VM Folder" field, be sure to use the same value that you had specified during the BOSH deployment. If you are unsure, refer to blog post Part 4, Step 4 to see what you had selected before proceeding.
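To see what this looks like from the developer's side, here is a minimal sketch of requesting a persistent disk from K8S; the StorageClass/PVC names are hypothetical and the datastore parameter should match the datastore you configured in this section:

cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-thin                  # hypothetical StorageClass name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: vsanDatastore            # hypothetical; use the datastore specified above
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-app-pvc                    # hypothetical PVC name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsphere-thin
  resources:
    requests:
      storage: 2Gi                    # results in a 2GB persistent VMDK on the datastore
EOF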
Step 6 - In this next section, we will provide the NSX-T configurations for the networks that we created earlier, which will be used by the K8S Clusters. Start off by selecting "NSX-T" as the network type and then provide the credentials to your NSX-T Manager. If you have replaced the NSX-T SSL Certificate, you will need to provide that, or you can disable SSL verification, which I have done for testing purposes. Next, you will need to provide the name of the Compute vSphere Cluster which has been prepped for NSX-T; in my environment, that is PKS-Cluster.
For the next three fields, you will need to use the NSX-T UI (these values can also be queried programmatically through the NSX-T REST API, as shown in the sketch below) to obtain the UUIDs for the T0 Router, IP Block and Load Balancer IP Pool.
- T0 Router ID - Navigate to Routing->Routers, select T0-LR and click on the ID to retrieve the UUID as shown in the screenshot below
- IP Block ID - Navigate to DDI->IPAM, select PKS-IP-Block and click on the ID to retrieve the UUID as shown in the screenshot below
- Floating IP Pool ID - Navigate to Inventory->Groups->IP Pools, select Load-Balancer-Pool and click on the ID to retrieve the UUID as shown in the screenshot below
Note: Pre-check validation of the NSX-T object UUIDs will be done when you click Save, so if you made a mistake, the UI will alert you.
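If you prefer the programmatic route, here is a sketch of the equivalent queries using curl against the NSX-T Manager REST API; the NSX-T Manager hostname and credentials are placeholders for your environment, and -k skips SSL verification for lab use. Match the display_name in each JSON response to the objects above to find the corresponding id (UUID):

# Query NSX-T Manager for the T0 Router, IP Block and Floating IP Pool UUIDs
curl -k -u 'admin:VMware1!' https://nsx-mgr.primp-industries.com/api/v1/logical-routers
curl -k -u 'admin:VMware1!' https://nsx-mgr.primp-industries.com/api/v1/pools/ip-blocks
curl -k -u 'admin:VMware1!' https://nsx-mgr.primp-industries.com/api/v1/pools/ip-pools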
Step 7 - In this section, we will configure the User Account and Authentication (UAA) endpoint which we will use to manage users for PKS. You just need to provide a DNS entry and ensure that it is mapped to the same DNS domain that you configured earlier, as the generated certificate will need to match. In my example, I used uaa.primp-industries.com. Once the PKS VM has been deployed, you can update your DNS Server to make sure this hostname points back to the IP selected for the PKS Control Plane VM, or you can update the /etc/hosts file on the PKS Client VM for testing purposes, as shown below.
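For the /etc/hosts approach, a one-liner like the following on the PKS Client VM is all that is needed; the IP address shown is a hypothetical placeholder for the one your PKS Control Plane VM is actually allocated (you will find it in Step 11):

# Point the UAA/PKS API hostname at the PKS Control Plane VM (placeholder IP)
echo '10.20.0.5  uaa.primp-industries.com' | sudo tee -a /etc/hosts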
Step 8 - In this section, we simply need to enable NSX-T validation; we can leave the rest alone.
Step 9 - In the very last section, you may need to import an updated Stemcell, which we had downloaded in blog post Part 1. If you are not prompted to, then you can move on to the next step.
Step 10 - To begin the PKS Control Plane VM deployment, go ahead and navigate back to the Ops Manager home page and click "Apply Changes" to start the deployment.
This will take some time; in my environment, it took ~30 minutes to complete. This is a good time to take a coffee or beer break depending on the hour of the day 😀
Step 11 - If everything was successfully deployed, you can head over to your vCenter Server and you should see another new VM named vm-[UUID], denoting the PKS Control Plane VM. Similar to the BOSH VM, we can look at the instance_group Custom Attribute to identify the role of this particular VM.
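If you have the govc CLI handy, a quick sketch like the following will enumerate these BOSH-deployed VMs (this assumes the GOVC_URL/credential environment variables are already set for your Compute vCenter Server); the instance_group Custom Attribute itself is easiest to read in the vSphere UI:

# List the BOSH-deployed VMs (all named vm-<UUID>) in the inventory
govc find / -type m -name 'vm-*'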
Another way you can easily identify either the PKS Control Plane or BOSH VM is by simply clicking on the tile in Ops Manager and then selecting the "Status" tab, which will give you not only the VM Display Name in vCenter Server but also the IP Address that was automatically allocated from the PKS Management Network that we had specified within BOSH.
If you recall, earlier we had specified our PKS API endpoint to be uaa.primp-industries.com, and now we can take the IP Address from below and create a DNS entry, which we will be using in the next article to set up a new PKS user. If you do not have DNS in your environment, you can also add an entry to /etc/hosts on the PKS Client VM as an alternative, as shown back in Step 7.
In our next article, we will demonstrate how to interact with PKS using the PKS CLI to request a new K8S Cluster as an Operator, and then walk through a sample application deployment on top of the newly created K8S Cluster as a Developer normally would.
Sanjeev Sharma says
My "Kubernetes Cloud Provider Configuration" screen requires configring "vCenter Master Credentials" and "vCenter Worker Credentials" . To keep things simple since this is a lab setup, I used Administrator's credentials for both. The roles configuration given in Pivotal documentation at https://docs.pivotal.io/runtimes/pks/1-0/vsphere-prepare-env.html is too confusing so I gave up on it.
Everything works fine except persistent volumes. When I try to create a pod with static PV, I get the error below. I do not know which username or password it is complaining about.
"
Warning FailedMount 1m kubelet, 076097b2-bcd6-4f52-902d-f4f0ba2f31b1 Unable to mount volumes for pod "task-pv-pod_default(b31bf928-7fff-11e8-a960-005056937543)": timeout expired waiting for volumes to attach/mount for pod "default"/"task-pv-pod". list of unattached/unmounted volumes=[pv0002]
Warning FailedMount 55s (x2 over 3m) attachdetach-controller AttachVolume.Attach failed for volume "pv0002" : ServerFaultCode: Cannot complete login due to an incorrect user name or password.
"
est@dsib1241:~/pks/nginx_kube_example$ kubectl describe pod task-pv-pod
Name: task-pv-pod
Namespace: default
Node: 076097b2-bcd6-4f52-902d-f4f0ba2f31b1/10.228.247.241
Start Time: Wed, 04 Jul 2018 23:01:23 -0400
Labels:
Annotations:
Status: Pending
IP:
Containers:
task-pv-container:
Container ID:
Image: nginx
Image ID:
Port: 80/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
Mounts:
/usr/share/nginx/html from pv0002 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-z9lw9 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
pv0002:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc0002
ReadOnly: false
default-token-z9lw9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-z9lw9
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m default-scheduler Successfully assigned task-pv-pod to 076097b2-bcd6-4f52-902d-f4f0ba2f31b1
Normal SuccessfulMountVolume 3m kubelet, 076097b2-bcd6-4f52-902d-f4f0ba2f31b1 MountVolume.SetUp succeeded for volume "default-token-z9lw9"
Warning FailedMount 1m kubelet, 076097b2-bcd6-4f52-902d-f4f0ba2f31b1 Unable to mount volumes for pod "task-pv-pod_default(b31bf928-7fff-11e8-a960-005056937543)": timeout expired waiting for volumes to attach/mount for pod "default"/"task-pv-pod". list of unattached/unmounted volumes=[pv0002]
Warning FailedMount 55s (x2 over 3m) attachdetach-controller AttachVolume.Attach failed for volume "pv0002" : ServerFaultCode: Cannot complete login due to an incorrect user name or password.
William Lam says
Since you're using the same credentials for both, can I ask that you just double check that you didn't typo the credentials?
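If it helps, one quick way to sanity-check the vCenter credentials outside of PKS is with the govc CLI; this is just a sketch and the vCenter URL is a placeholder for your environment:

# Verify the same vCenter credentials that were entered into the PKS Tile
export GOVC_URL='https://vcenter.primp-industries.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='VMware1!'   # replace with the password from the tile
export GOVC_INSECURE=1            # skip SSL verification for lab use
govc about                        # fails with a login error if the credentials are wrong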
Nikodim Nikodimov says
Hi William, I am getting the below error at the last step (Installing Pivotal Container Service):
Task 91 | 07:27:21 | Compiling packages: pks-nsx-t-jq/6fe9c981d95d718336b92bafc09b187ebdb2a85a (00:00:18)
L Error: Unknown CPI error 'Unknown' with message 'Could not power on VM '': No host is compatible with the virtual machine.' in 'create_vm' CPI method
Task 91 | 07:27:21 | Compiling packages: golang-1.9-linux/200a29b129a9078cb156be44c4a792ae24f42900 (00:00:18)
L Error: Unknown CPI error 'Unknown' with message 'Could not power on VM '': No host is compatible with the virtual machine.' in 'create_vm' CPI method
Task 91 | 07:27:22 | Compiling packages: pks-nsx-t-govc/5be26942bded36425daf831136a48ae497c13caf (00:00:19)
L Error: Unknown CPI error 'Unknown' with message 'Could not power on VM '': No host is compatible with the virtual machine.' in 'create_vm' CPI method
Task 91 | 07:27:22 | Compiling packages: pks-nsx-t-scripts/b909fb369460b0b0b1ef9ac0462dc16fb81e06e0 (00:00:19)
L Error: Unknown CPI error 'Unknown' with message 'Could not power on VM '': No host is compatible with the virtual machine.' in 'create_vm' CPI method
Task 91 | 07:27:22 | Error: Unknown CPI error 'Unknown' with message 'Could not power on VM '': No host is compatible with the virtual machine.' in 'create_vm' CPI method
Any idea what might be the problem?
Thanks!
Anibal Avelar says
Hello,
I'm getting this exact error. Could you fix it? What was your resolution?
karamjotkohli says
@Nikodim & @Anibal. I was getting the same error in my nested homelab. If that is the case, ensure your ESXi(Physical) have vt-x enabled on BIOS or its nested ESXi(which was in my case), you have Hardware Assisted Virtualization enabled from ESXi VM(nested ESXi). The bosh vm that will be deployed after this above mentioned task will be with 2 vCPU & 8GB of RAM. I made sure above and it started working.