When Project Pacific was first announced back in 2019, most of the focus was on Kubernetes and how it would be re-architected into vSphere, basically the "how" or the implementation details. As much as I enjoy diving into the tech, what really stood out to me about Project Pacific was the implication it would have on workload evolution for vSphere.
In fact, I wrote about this very topic in this blog post: Project Pacific - Workload Evolution in vSphere because I felt that most of the focus was only on the "how" but not the "why". Here is a quote from the blog that summarizes why I was excited for Project Pacific:
However, Project Pacific is actually more than just Kubernetes but with all the new lingo like Supervisor and Guest Clusters, one can easily get lost in the implementation or what I would refer to as the "how" part of Project Pacific. If you ask me, the "why" part is much more significant and Project Pacific is fundamentally re-defining what and how to deploy a workload in vSphere.
Fast forward to today, vSphere with Tanzu has been delivering on the vision of Project Pacific since its introduction with vSphere 7 back in 2020. Developers, DevOps and Platform Engineering teams can easily deploy workloads like Tanzu Kubernetes Grid Clusters (TKC) or Virtual Machines into a vSphere Cluster that has been enabled with vSphere with Tanzu, also known as a Supervisor Cluster.
While the current vSphere with Tanzu experience works well for most environments with a handful of Supervisor Clusters, what happens when you need to support more users, teams and an increased number of Supervisor Clusters across different locations? How do you manage access control for these users and the compute resources that they can consume, while providing a simple and intuitive developer-ready interface? This is where VMware Cloud Consumption Interface (CCI), formerly known as Project Cascade, comes in!
CCI builds on the foundational construct of the Supervisor Cluster and enables customers to aggregate multiple Supervisor Clusters to provide their end users with a single consumption interface (hence the name) for self-service workload deployment. Another benefit of CCI is that end users no longer need access to the underlying infrastructure such as the vCenter Server(s) or the underlying Supervisor Cluster(s). Authentication and authorization are all managed through CCI, and identity can be federated with your organization's identity provider of choice.
CCI uses projects to associate different sets of users/groups that can access a specific set of Supervisor Clusters through a new templating capability called Supervisor Cluster Classes. Different Supervisor Cluster Classes can be created to map to specific underlying Supervisor Cluster capabilities, resources and region availability. Users then deploy workloads through CCI, which is available for consumption via the UI, API or CLI with a native Kubernetes experience using kubectl.
If this sounds interesting, there is an early access Beta for CCI!
Pre-requisites for joining the CCI Beta:
- Existing vSphere+ customer or access to a vSphere+ Trial
- vSphere 7.0 Update 3f and later or vSphere 8 environment with vSphere with Tanzu enabled
If you are interested in participating in the early access program and helping to influence the direction of CCI, please send an email to cci [at] vmware [dot] com
Now, here is your first quick look at CCI in action (demo recording can be found at the end of the blog):
Once a project has been created and associated with your defined Supervisor Cluster Classes, end users will be able to select from a pre-defined set of templates and create the desired Supervisor Namespace, which is the resource construct for deploying and running workloads.
Users will then be able to select where to deploy the Supervisor Namespace based on the access they have been granted.
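To make that flow more concrete, here is a rough sketch of what a Supervisor Namespace request could look like if expressed declaratively; note that the apiVersion, kind and field names below are purely hypothetical placeholders, since the actual CCI resource schema is part of the beta:

# Hypothetical sketch only: the apiVersion, kind and field names below are
# assumptions for illustration, not the actual CCI schema
apiVersion: cci.vmware.com/v1alpha1
kind: SupervisorNamespace
metadata:
  name: cci-ns-01
spec:
  # Supervisor Cluster Class the namespace is created from (placeholder name)
  className: small-us-west-class
  description: Namespace for deploying team workloads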
Once the Supervisor Namespace has been created, you will be able to create and view your workloads such as Tanzu Kubernetes Grid Cluster(s), Virtual Machine(s) and Persistent Volume(s).
Each workload type can be deployed using any of the CCI interfaces such as UI or the kubectl CLI.
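To give a sense of what one of those workload manifests looks like, below is a minimal VM Service manifest of the kind used with vSphere with Tanzu today; the image name, VM class and storage policy are placeholders for whatever has been published to your Supervisor Namespace:

# Minimal VM Service manifest (e.g. vm-workload.yaml); the image, class and
# storage policy names are environment-specific placeholders
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: vm-workload-01
spec:
  imageName: ubuntu-20.04-server-cloudimg-amd64
  className: best-effort-small
  storageClass: vsan-default-storage-policy
  powerState: poweredOn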
The really cool thing about using the UI to deploy workloads for the very first time is the realtime, interactive YAML editor that shows you the exact Kubernetes specification for that given workload deployment. This means you can copy the YAML as-is and deploy it using the kubectl CLI, and the result will be the same regardless of whether you use the UI or CLI, which I think is a really awesome experience and one I wish we had for every VMware UI!
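For example, the YAML for a Persistent Volume Claim copied out of the editor is just a standard Kubernetes PVC spec like the one below (the storage class name is a placeholder for a storage policy assigned to your namespace), which you can save to a file such as pvc.yaml and apply with kubectl:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-workload-01
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  # Placeholder for a storage policy assigned to the Supervisor Namespace
  storageClassName: vsan-default-storage-policy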
Using the kubectl client to deploy workloads to CCI is just as easy, as you can see from the sample below, especially being able to dynamically discover the available resources that a user can consume before deploying their desired workloads.
# Login to CCI Service
kubectl ccs login --server ${CCI_ENDPOINT} --token $TOKEN

# Switch k8s context to CCI Supervisor Namespace
kubectl config use-context ccs:aa-wlam-project:cci-ns-01

# Deploy VM
kubectl apply -f vm-workload.yaml

# Deploy TKC
kubectl apply -f tkc-workload.yaml

# Deploy PVC
kubectl apply -f pvc.yaml
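As a side note, the context name above encodes both the project (aa-wlam-project) and the Supervisor Namespace (cci-ns-01), and standard kubectl discovery commands should work as you would expect. Assuming the same resource types as vSphere with Tanzu are exposed, something along these lines:

# List the CCI contexts (projects/namespaces) available to your user
kubectl config get-contexts

# Discover the workload resource types exposed in the Supervisor Namespace
kubectl api-resources

# Check on the workloads deployed above (assumes vSphere with Tanzu resource types)
kubectl get tanzukubernetesclusters,virtualmachines,persistentvolumeclaims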
For those that have deployed workloads using vSphere with Tanzu, the same YAML manifests for workloads can be re-used, which helps simplify user onboarding and encourages reusability.
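As a concrete example, a v1alpha1 TanzuKubernetesCluster manifest that already works against a Supervisor Cluster, like the sketch below, should deploy through CCI unchanged; the Kubernetes version, VM class and storage policy values are placeholders for your environment:

# Sample TKC manifest (e.g. tkc-workload.yaml); version, class and storage
# policy values are placeholders
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-workload-01
spec:
  distribution:
    version: v1.21
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: vsan-default-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default-storage-policy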
Lastly, as you interact with and deploy workloads using the CCI service, you will see all of those operations directed to the specific vCenter Server and Supervisor via a Cloud Proxy that is deployed into a customer's on-premises environment and provides secure connectivity back to the CCI service. In my environment, I am actually running all of this on a single Intel NUC 🙂