Project Pacific was definitely one of the most exciting and most talked about announcements at this past VMworld. In case you missed the big news, check out this quick snippet of the Day 1 Keynote where Pat Gelsinger and Joe Beda (one of the co-creators of Kubernetes, now at VMware) introduce Project Pacific to the world.
If you ask most folks what Project Pacific is about, they would probably say something about Kubernetes and Containers in vSphere, which is a fair assessment, especially as Kubernetes was probably mentioned once or twice during the conference 😉
However, Project Pacific is actually more than just Kubernetes, but with all the new lingo like Supervisor and Guest Clusters, one can easily get lost in the implementation, or what I would refer to as the "how" part of Project Pacific. If you ask me, the "why" part is much more significant: Project Pacific is fundamentally re-defining what a workload is in vSphere and how it is deployed.
Historically, the vSphere platform has always been very infrastructure-centric. Even though its job is ultimately to run applications, which happen to run inside Virtual Machines, it is heavily biased towards infrastructure consumers and has never made managing and deploying applications a first-class citizen in vSphere. Administrators would deploy Virtual Machines and then hand them off to their development team to set up the application. Although logical constructs like a Virtual App (vApp) have existed in vSphere, allowing administrators to group a collection of Virtual Machines that made up an application, they did very little for Developers who are deploying and managing the application itself. The unit of management was a Virtual Machine.
Project Pacific is evolving the definition of a workload in vSphere and moving from an infrastructure-centric to an application-centric model. As mentioned in Kit Colbert's VMworld session, today a modern enterprise application is not just made up of Kubernetes and Containers but is a hybrid of both new and existing infrastructure. You may have a modern front-end, but the backend may still rely on functionality that is provided by a traditional application running in a Virtual Machine, or it may also need access to data from a persistent store like a database. Trying to manage such an application is not only complex but, more importantly, what is being managed is still infrastructure rather than the application itself.
This is where Kubernetes comes in, and it is the "how" part of Project Pacific. Kubernetes has become the de-facto standard in the industry for deploying containerized applications at scale. Kubernetes is not just a platform for managing Containers; its abstractions, extensibility and patterns can be used to build new platforms. This was the big aha moment for VMware: we could apply these established patterns in Kubernetes to help us evolve vSphere into a platform that can manage any type of workload, not only for today but also in the future. This is Project Pacific.
Kubernetes has a logical construct called a Namespace, which is a collection of Kubernetes resource objects like Pods, Services, Persistent Volumes, etc. By leveraging Kubernetes' extensibility model, we can actually extend this Namespace construct to include other custom resource objects like Virtual Machines, Disks, Functions, etc. using a Custom Resource Definition (CRD) and fully lifecycle-manage those resources with a Custom Controller, all natively within Kubernetes. Furthermore, we can apply resource management, security and other policies just like you would on a Virtual Machine, but now at the Namespace level, to simplify the management of our application.
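To give a rough idea of what this extensibility mechanism looks like in plain upstream Kubernetes, here is a minimal sketch of a CRD that teaches a cluster about a new VirtualMachine resource type. The API group, kind and fields below are purely illustrative and are not the actual Project Pacific API.

```yaml
# Illustrative CRD registering a hypothetical, Namespace-scoped
# "VirtualMachine" resource. Group, kind and fields are made up for
# this example and are NOT the real Project Pacific object model.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: virtualmachines.example.vmware.com
spec:
  group: example.vmware.com
  scope: Namespaced              # VM objects live inside a Namespace, just like Pods
  names:
    kind: VirtualMachine
    plural: virtualmachines
    singular: virtualmachine
    shortNames: [vm]
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                guestOS: {type: string}
                numCPUs: {type: integer}
                memoryMiB: {type: integer}
```

A Custom Controller would then watch for these objects and reconcile them, which is the standard Kubernetes pattern for lifecycle-managing anything that is not a built-in resource.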
By integrating Kubernetes into the control plane of vSphere, we are making Applications a first-class citizen in vSphere, and we are also making Developers a first-class consumer of vSphere through an interface they are already familiar with, which is Kubernetes. The benefit here is that vSphere can take advantage of Kubernetes' powerful declarative pattern for deploying, managing and scaling an application regardless of the underlying deployment form factors (e.g. Container, Function, Virtual Machine or even higher abstractions like a Database). The new unit of management is now a Namespace.
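To make the "Namespace as the unit of management" idea concrete, here is a hedged sketch of one Namespace holding both a containerized front-end (a standard Deployment) and a traditional backend expressed as the hypothetical VirtualMachine resource from the CRD sketch above. Names and images are placeholders.

```yaml
# Illustrative only: one Namespace managing two different workload form factors.
apiVersion: v1
kind: Namespace
metadata:
  name: online-store
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storefront
  namespace: online-store
spec:
  replicas: 3
  selector:
    matchLabels:
      app: storefront
  template:
    metadata:
      labels:
        app: storefront
    spec:
      containers:
        - name: web
          image: example/storefront:1.0    # placeholder image
---
apiVersion: example.vmware.com/v1alpha1
kind: VirtualMachine                       # hypothetical custom resource
metadata:
  name: inventory-db
  namespace: online-store
spec:
  guestOS: linux
  numCPUs: 4
  memoryMiB: 8192
```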
So how does this all work? There is a ton more detail and I am only scratching the surface here, so I would like to point folks to some fantastic technical deep dives on Project Pacific, including two VMworld sessions which I highly recommend folks check out (free for everyone, just sign in with a free VMworld account to view).
Videos:
- Introducing Project Pacific: Transforming vSphere into the App Platform of the Future (HBI4937BU)
- Project Pacific Technical Overview: Unifying vSphere and Kubernetes (HBI4500BU)
Blogs:
- Introducing Project Pacific
- Project Pacific - Technical Overview
- Project Pacific - Names on Kubernetes
- Project Pacific - Infrastructure Self-Service
- 5 things to know about Project Pacific
One thing I noticed and heard from others during VMworld was some confusion around the concept of the Supervisor and Guest Cluster in Project Pacific. In fact, in the session they had to re-iterate the point that there was no "Nesting" involved. I thought I would create these two diagrams to help explain some of the terminology found in Project Pacific and hopefully clear up any confusion around the two types of clusters.
A Supervisor Cluster is nothing more than a vSphere Cluster that has Project Pacific enabled, and therefore it is also a Kubernetes Cluster itself. The Kubernetes Worker Nodes in this case are ESXi hosts rather than Linux hosts or VMs. We do this by porting a native implementation of the Kubelet, which acts as the control agent for managing Kubernetes Nodes from the Kubernetes Master, into ESXi; we call this the spherelet. This is analogous to hostd, which is the control agent for managing ESXi Nodes from vCenter Server. Within ESXi, we also have a new lightweight Container Runtime called the CRX, which is responsible for running Native Pods/Containers within ESXi. This is analogous to the VMX, which is responsible for running Virtual Machines. Administrators and Developers interact with vCenter Server, which exposes a native Kubernetes interface for deploying workloads, which in turn may be composed of Containers, Functions, Virtual Machines, etc., all running natively in a vSphere Cluster. No Nesting involved.
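Because the Supervisor Cluster presents a native Kubernetes interface, requesting a Native Pod looks like any ordinary Pod manifest; the fact that it lands on an ESXi "worker node" and is executed by the CRX is transparent to the spec. A minimal sketch, re-using the made-up Namespace from the earlier example:

```yaml
# An ordinary Pod spec; on a Supervisor Cluster this would be scheduled onto
# an ESXi host and run by the CRX as a Native Pod. Namespace name is made up.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-native
  namespace: online-store
spec:
  containers:
    - name: nginx
      image: nginx:1.17
      ports:
        - containerPort: 80
```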
As mentioned earlier, Project Pacific uses Kubernetes to improve workload deployments in vSphere by embedding its abstractions and patterns into vSphere. Having said that, the Supervisor Cluster is not a conformant upstream Kubernetes Cluster, and this is by design. There are a number of assumptions made for a traditional Kubernetes Node which simply do not make sense for ESXi, such as the ability to run a privileged container, which would give it access to all other containers on the host. For these reasons, we have disabled some of these functionalities, as ESXi is a Hypervisor and not a regular Linux host, to ensure that we can securely isolate the workloads regardless of whether they are Virtual Machines, Containers or other abstractions in the future.
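As an example of the kind of Node-level assumption that does not carry over, here is a standard privileged Pod spec. On a regular Linux worker this effectively grants host-level access, which is exactly the sort of capability that does not fit an ESXi-backed Supervisor Cluster; whether such a request is rejected or otherwise restricted is an implementation detail I am not covering here.

```yaml
# A standard Kubernetes Pod requesting privileged access to its node.
# On a traditional Linux node this grants broad host access; this is the
# kind of capability that does not make sense for an ESXi host acting as
# a Supervisor Cluster worker node.
apiVersion: v1
kind: Pod
metadata:
  name: privileged-example
spec:
  containers:
    - name: shell
      image: busybox:1.31
      command: ["sleep", "3600"]
      securityContext:
        privileged: true   # host-level access on a regular Linux node
```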
Guest Clusters, on the other hand, enable Developers to easily request and deploy their own conformant upstream Kubernetes Cluster, which may be a specific version or feature set needed for development. This Kubernetes Cluster runs within a set of Virtual Machines residing within the vSphere Cluster that has Project Pacific enabled, also known as the Supervisor Cluster. There is no Nesting of any sort, and this is analogous to what VMware Enterprise PKS is already doing today, which is running Kubernetes on a set of Virtual Machines on top of vSphere. The biggest difference between PKS and Project Pacific is how the Guest Clusters are requested. In PKS and other similar offerings like OpenShift, Developers must learn a different management interface to request a Kubernetes Cluster, whether that is an API or just a CLI.
Instead, why force Developers to learn a new management interface? They already know Kubernetes, so why not just extend the workload concept to a new type of workload deployment? Through a set of CRDs and Custom Controllers that VMware has built into Project Pacific, Developers can now easily request a new workload where the deployment is a fully functional Kubernetes Cluster, just like they would with any other application deployment (see the sketch below). This Guest Cluster capability enables IT organizations to provide a completely automated and self-service Kubernetes offering running on top of existing vSphere Cluster(s) enabled with Project Pacific! In the diagram below, vCenter Server is managing several vSphere Clusters, two of which have Project Pacific enabled, and although I only show one type of workload per vSphere or Supervisor Cluster, you can certainly mix and match, which is another benefit of the resource management and isolation of the vSphere platform. Everything running in a Supervisor Cluster is either a VM and/or a Container; there is no Nesting of any sort.
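To make the self-service idea concrete, here is a rough sketch of what requesting a Guest Cluster as a Kubernetes object might look like. The API group, kind and fields are entirely hypothetical and are not the actual Project Pacific resource definitions; the point is simply that the request is just another declarative object a Developer applies with the same tooling they already use.

```yaml
# Hypothetical sketch of a Guest Cluster request expressed as a custom
# resource. Group, kind, fields and sizing classes are illustrative only.
apiVersion: example.vmware.com/v1alpha1
kind: GuestCluster
metadata:
  name: demo-cluster
  namespace: dev-team-a
spec:
  kubernetesVersion: "1.15.3"   # Developers can pin the version they need
  controlPlane:
    count: 1
  workers:
    count: 3
    class: small                # placeholder VM sizing class
```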
Hopefully this was helpful in understanding the "why" behind Project Pacific and some of the new concepts that it introduces.
Sean McGreal says
Great article, it would be good to understand how ingress and egress are handled. I'm wondering how a service mesh would be applied
Doug Young says
Hi Sean... You might want to look at this:
https://blogs.vmware.com/networkvirtualization/2019/04/how-istio-nsx-service-mesh-and-nsx-data-center-fit-together.html/
Siva M says
Containers running on the supervisor cluster are VMs; i.e. pod VMs because ESXi hosts are worker nodes. However, what about the containers running on the guest cluster worker VMs; they are nested; aren't they?
William Lam says
Siva,
As mentioned several times in the blog post, there's no "Nesting" of any sort. Guest Clusters are nothing more than standard VMs running natively on vSphere, so the Containers in that case are simply running inside the VMs (this is what you have today with solutions like Enterprise PKS, OpenShift, etc.). Give the post another read; I tried to make that as clear as possible compared to other posts which dive straight into these terms, which can be confusing
Siva says
Thanks William. I understand, all I was mentioning was that containers on guest clusters are not virtual, as they could be on Supervisor Cluster machines; they are regular containers running inside Linux virtual machines. Hope this clarifies..
Pawo2000 says
Is Project Pacific included in ESXi 7.0 RC1?
I cannot find the spherelet process on deployed hosts.