Last week I wrote about a really nifty Virtual Appliance called sk8s which can be used to quickly set up a Kubernetes (k8s) cluster for development and testing purposes. If you have not checked out that article, be sure to give it a read first to get the full context. As mentioned in the previous article, sk8s runs great on any vSphere deployment, but it can also run on VMware Cloud on AWS (VMC), which adds an additional capability: an AWS Elastic Load Balancer (ELB) can automatically be provisioned and configured as part of the deployment to front-end the k8s control plane for external access.
The nice benefit of this is that you only need to configure access to the ELB rather than directly to the underlying VMs running within the SDDC, which both simplifies the setup and reduces the need to expose the VMs directly to the internet. The write-up below is similar to the previous article, but it goes into greater detail on deploying to VMC, including the required VPC configuration changes using the AWS Console and the Network and Security changes using the VMC Console.
Note: If you decide to use the AWS ELB integration, please be aware that you will be charged for its usage. For pricing, please see the AWS documentation here.
- Access to the VMC Console and VMC SDDC
- NSX-T Logical Network with DHCP enabled
- AWS Access & Secret Key for automatically creating ELB (Optional)
Step 1 - Install govc on your local desktop which has access to your VMC vSphere environment. If you have not installed govc, the quickest way is to simply download the latest binary. Below is an example of installing the latest macOS version:
curl -L https://github.com/vmware/govmomi/releases/download/v0.20.0/govc_darwin_amd64.gz | gunzip > /usr/local/bin/govc
chmod +x /usr/local/bin/govc
Step 2 - We need to verify a few settings in the AWS Console to ensure that the VPC that is connected to your SDDC is properly configured so that the provisioning of the ELB will be successful.
Select your VPC and ensure that it is configured as a Default VPC. This is important, as the sk8s provisioning script looks for this to identify where to create the ELB.
Select Subnets on the left-hand side and ensure that whichever network you are using for the VPC Network within your VMC SDDC has a label of VMC Routing Network. If not, simply create it, as the sk8s provisioning script will use this to provision the ELB. This label should automatically get added by default when you link your SDDC to your AWS Account, but I have seen cases where it may not and you just have to add it manually.
Select Security Groups and create the following four rules to ensure that we can communicate with the ELB and that we are passing traffic from the ELB to the sk8s control plane VMs.
| Type | Protocol | Port Range | Source |
|------|----------|------------|--------|
| All Traffic | All | All | (default Security Group) |
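As a sketch only, the same kind of inbound rule could be added with the AWS CLI instead of the console; the security group ID below is a placeholder for your own, and the ip-permissions shorthand grants all traffic from the default security group itself:

```shell
# Placeholder security group ID -- substitute the default security group
# attached to your SDDC-connected VPC.
SG_ID=sg-0123456789abcdef0

# Allow all traffic whose source is the default security group (self-referencing rule).
aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --ip-permissions "IpProtocol=-1,UserIdGroupPairs=[{GroupId=$SG_ID}]"
```

The remaining rules follow the same pattern with the appropriate protocol, port, and source.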
Lastly, select Internet Gateways on the left-hand side and ensure there is an Internet Gateway (IGW) attached to your VPC. This is required for provisioning an ELB in the VPC.
Step 3 - Next, log in to the VMC Console and create a new NSX-T Logical Network (Segment), which is where the sk8s VM will be running. This can be any network you like as long as it has DHCP enabled. Even if you have an existing network, it is still recommended to create a dedicated one: it helps isolate the workload and also makes it easier when we go to create the required firewall rules.
Step 4 - We now need to create the appropriate NSX-T firewall rules that will allow sk8s, which will run in our Compute Network, to talk to the vCenter Management Network for deploying the sk8s clones, as well as enable outbound connectivity for it to download the required k8s packages.
We first need to create the NSX-T Inventory Groups for both the Management and Compute Gateways. Under Groups->Management Groups, create a new group called sk8s VM Network which maps to the network that you had defined in Step 3 (e.g. 192.168.2.0/24).
Under Groups->Workload Groups, create two groups: sk8s VM Network, which maps to the network you had defined in Step 3 (e.g. 192.168.2.0/24), and VMC Management Network, which maps to the infrastructure network of your SDDC found in the Overview tab (e.g. 10.2.0.0/16).
Now we can create the Gateway Firewall rules required for both the Management and Compute Gateway. Under Gateway Firewall->Management Gateway create the following rule which will allow inbound from our Compute Network:
| Name | Source | Destination | Service |
|------|--------|-------------|---------|
| sk8s VM Network to VC | sk8s VM Network | vCenter | HTTPS |
Under Gateway Firewall->Compute Gateway we need to create the following three rules which will allow the AWS ELB to pass traffic to our sk8s VMs, outbound access to the Management Network and connectivity to the internet for k8s package downloads.
| Name | Source | Destination | Service |
|------|--------|-------------|---------|
| sk8s VM Network to Internet | sk8s VM Network | Any | Any |
| VPC to sk8s VM Network | Connected VPC | sk8s VM Network | Any |
| sk8s VM Network to VMC Management Network | sk8s VM Network | VMC Management Network | HTTPS |
At this point, we are done with all the VMC Network and Security settings and we are now ready to deploy our OVA.
Step 5 - Log in to the VMC vCenter Server and deploy the sk8-photon.ova appliance using the vSphere UI. You can either download the OVA to your local desktop and then deploy, or you can simply deploy by specifying the following URL: https://s3-us-west-2.amazonaws.com/cnx.vmware/cicd/sk8-photon.ova
I personally recommend importing the OVA into a vSphere Content Library, this way you can easily deploy additional instances which are independent of each other for testing purposes.
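Since govc is already installed from Step 1, the OVA can also be deployed from the command line as an alternative to the vSphere UI. This is just a sketch: the folder and resource pool paths below are typical VMC defaults but may differ in your SDDC, and the GOVC_* connection variables must already be set:

```shell
# Deploy the sk8s OVA directly from the S3 URL with govc. The VM name,
# folder, and resource pool are examples -- adjust for your environment.
govc import.ova \
    -name=sk8s-01 \
    -folder=/SDDC-Datacenter/vm/Workloads \
    -pool=/SDDC-Datacenter/host/Cluster-1/Resources/Compute-ResourcePool \
    https://s3-us-west-2.amazonaws.com/cnx.vmware/cicd/sk8-photon.ova
```

Note that deploying via the CLI skips the UI's OVF property wizard, so this route is best when you plan to set the vApp properties afterwards.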
During the network configuration, select the NSX-T Logical Network (Segment) that you had created earlier and make sure to change the IP Allocation to DHCP.
Step 6 - Next, we specify the version of k8s and the number of nodes. In this example, we will default to the latest version of k8s, but you can follow the GitHub URL if you wish to deploy a specific version. The really slick thing about sk8s is that you can define the number of control plane and worker nodes as shown in the screenshot below, and sk8s will automatically clone itself and configure the k8s nodes to match the desired outcome. In this example, I will deploy 3 nodes: 2 control plane nodes plus a shared worker node.
Step 7 - In the next section you can define the amount of CPU/Memory resources used for the control plane and worker nodes. You will also need to provide your VMC vCenter Server credentials which will be used by the sk8s VM to perform the clone operation.
Step 8 - Finally, the AWS Load Balancer section is where you can enable the option to automatically provision an AWS ELB and connect it up to your sk8s control plane. If you wish to enable this, you will need to create an AWS Access and Secret Key using IAM in advance. You can also specify which AWS Region the ELB gets deployed to.
Step 9 - Once the sk8s OVA has been deployed, the last step is to power on the VM and watch it do its thing. If everything was configured correctly, you should see additional VMs cloned from the source sk8s appliance. They will follow the same naming convention as the base sk8s VM, appending -c[NN] for Control Plane VMs or -w[NN] for Worker VMs as shown in the screenshot below. The initial VM is a Control Plane VM, but it does not get renamed so that it can find itself for the clone operation.
Note: If cloning does not happen within 30 seconds or so, there may be an issue. To troubleshoot further, log in to the console of the VM (root/changeme) and take a look at /var/log/sk8/vsphere.log, which contains more information about the deployment.
As mentioned, the actual clone operation should be fairly quick, but it can take up to several minutes to download the required binaries for setting up the k8s cluster. You can monitor the progress by logging in to the console of the sk8s VM (SSH is disabled by default) and tailing /var/log/sk8/sk8.log to see the detailed progress.
Step 10 - Once the cloning operation has completed, the Notes/Annotations for all VMs will automatically be updated with unique instructions on how to access that particular k8s cluster deployment. This is in the form of a cURL command which talks to the sk8 service and returns information about the cluster and how to connect. It does this by using the govc CLI, which is why we needed to install it locally on your desktop.
Before we run the cURL command, we need to define a few govc variables so that it can talk to our VMC vCenter Server, to do so run the following commands and replace it with your own values:
export GOVC_URL=<your-VMC-vCenter-hostname>
export GOVC_USERNAME=<your-vCenter-username>
export GOVC_PASSWORD=<your-vCenter-password>
In my setup, I have the following in my VM Notes field: curl -sSL http://bit.ly/sk8-local | sh -s -- 4200d32c-4240-46fe-69c0-680c4a540044
Simply run the command and it will contact all sk8s nodes to run a health check and then generate SSH keys that can be used to access each node if required. If you enabled the load balancer option and it was able to successfully deploy the ELB, you should also see a success message for that line. If the output hangs at "generating cluster access", it is most likely still downloading the binaries; you can refer to the step above to check its progress and then re-run the command.
At the very end, you will be provided with an export command to run which creates multiple aliases for ssh, scp, kubectl and a turn-down script that will delete all the VMs specific to this k8s deployment. The nice thing about this is that you can deploy any number of sk8s clusters in any configuration and access them all without having to constantly switch your kubeconfig, which would be required if these aliases did not exist. If you enabled the load balancer feature, you can retrieve the ELB hostname by looking at the kubeconfig file that is generated. In my case, it is under /Users/wlam/.sk8/b5172c0/kubeconfig; simply look for the server entry, which should contain something like: https://sk8-b5172c0-XXXXX.elb.us-west-2.amazonaws.com
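A quick way to pull just the ELB hostname (the API server endpoint) out of the generated kubeconfig; the cluster ID directory below is from my deployment, so substitute your own:

```shell
# Path to the kubeconfig generated by the sk8 access script -- the b5172c0
# cluster ID is specific to my deployment.
KUBECONFIG_FILE=$HOME/.sk8/b5172c0/kubeconfig

# Print the value of the "server:" entry, e.g. the ELB URL.
grep 'server:' "$KUBECONFIG_FILE" | awk '{print $2}'
```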
After running the export command, instead of using the traditional kubectl command to connect to the k8s cluster, you will use the specific alias noted in the output. In my example, it is kubectl-b5172c0. If you want to deploy a sample application, check out this blog post here and start from Step 7. One thing to be aware of with the sample application deployment is that you will not be able to communicate with the NodePort over the ELB, as only the k8s control plane traffic is allowed through it. If you wish to also access the yelb application over the ELB, we will need to open up an additional listener for our application and update our security group to allow the specific port. When you are done with your sk8s cluster, make sure to use the turn-down command, which will delete the VMs and un-provision the AWS ELB for you automatically.
The first thing we need to do is find the internal IP Addresses of all the worker nodes by running the following command:
kubectl-b5172c0 -n yelb get nodes -o wide
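If you prefer to script the next step, kubectl's JSONPath output can list just each node's name and InternalIP rather than the full wide output (the kubectl-b5172c0 alias name is specific to my deployment):

```shell
# Print "node-name<TAB>internal-ip" for every node in the cluster.
kubectl-b5172c0 get nodes \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```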
Next, log in to the AWS Console and select the EC2 Service, which is where the load balancer configuration is managed. Under Load Balancing->Target Groups, add a new group called yelb, register all the IP Addresses from the previous step as targets, and specify port 30001 or whatever port you may have selected for the application deployment.
Now that we have our new target group, we just need to add a new listener and specify the port and target group we had just created.
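For reference, the target group, target registration, and listener steps above can also be sketched with the AWS CLI; every ID, ARN, and IP below is a placeholder. Note that IP addresses outside the VPC's own CIDR (such as the SDDC worker node IPs) must be registered with AvailabilityZone=all:

```shell
# Create an IP-based target group for the yelb NodePort (placeholder VPC ID).
aws elbv2 create-target-group --name yelb --protocol TCP --port 30001 \
    --vpc-id vpc-0123456789abcdef0 --target-type ip

# Register the worker node IPs from the previous step (placeholder ARN and IPs).
aws elbv2 register-targets --target-group-arn <yelb-target-group-arn> \
    --targets Id=192.168.2.10,AvailabilityZone=all Id=192.168.2.11,AvailabilityZone=all

# Add a listener on the sk8s ELB that forwards port 30001 to the new target group.
aws elbv2 create-listener --load-balancer-arn <sk8s-elb-arn> \
    --protocol TCP --port 30001 \
    --default-actions Type=forward,TargetGroupArn=<yelb-target-group-arn>
```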
Finally, we need to add a new inbound rule to our security group to allow the new port. Select Security Groups and under Inbound Rules, add the following:
| Type | Protocol | Port Range | Source |
|------|----------|------------|--------|
| Custom TCP Rule | TCP | 30001 | 0.0.0.0/0 |
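The equivalent rule via the AWS CLI would look like the following, with the security group ID as a placeholder for the group attached to the ELB:

```shell
# Open TCP 30001 from anywhere on the ELB's security group (placeholder ID).
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 30001 --cidr 0.0.0.0/0
```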
If everything was configured correctly and you have the yelb application running, you should now be able to open a web browser and point it to your ELB address on port 30001 (e.g. http://sk8-b5172c0-XXXXX.elb.us-west-2.amazonaws.com:30001) to access the application! This is pretty darn cool once you understand what is happening, and as I said before, you can deploy any number of these sk8s appliances, which are independent of each other and can automatically be connected to an AWS ELB for external connectivity if required.