In this blog post, we will deploy an instance of VMware Private AI Services (PAIS) that uses the Vector Database provisioned earlier and the OIDC Client Application that we had also set up earlier using the Authentik Identity Provider (IdP).
Requirements:
- VCF Automation (VCFA) Organization configured with Namespace
- VMware Private AI Services (PAIS) deployed
- Data Services Manager (DSM) configured with VCFA
- Authentik IdP configured with OIDC Public Client Application
- Harbor instance configured for AI model store
If you already understand the mechanics of the PAISConfiguration resource type, feel free to skip ahead. It actually took me some time to understand the different parts of the custom resource definition (CRD), what each section does, and what is needed from a requirements standpoint.
Before we jump into applying a bunch of YAML manifests, let's take a look at an example paisconfiguration.yaml manifest and the high-level sections that are important to understand.
- worker.storageClassName - The storage class that will be used to deploy vSphere Pods for Data Indexing (this must be specified even if you do not use Data Indexing)
- clientTls.caBundleRefs - References to configmaps (which you will need to create) containing the TLS certificates that PAIS must trust: your Harbor registry used to host your models, Data Services Manager (DSM) for accessing data services, and your IdP for authenticating into the PAIS service
- database - This contains the connection string information from the Vector Database that you had provisioned earlier
- auth - This is the OIDC Public Application Client information that you had created earlier from your IdP including the allowed groups to access the PAIS service
- nvidiaConfig - This is configuration for accessing the NVIDIA drivers along with any required secrets to pull from their repository as well as licensing for vGPU usage
- vksControlPlane - The VM Class and Storage Class definition for the VKS Control Plane that will be spun up to manage the various model endpoints (deployed at a later point), which will run as Worker nodes managed by this VKS Control Plane
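To make the layout concrete, here is a rough skeleton of how those sections fit together in paisconfiguration.yaml. The database and auth fragments mirror what is shown later in this post; the apiVersion/kind and the remaining names and values are illustrative placeholders on my part, so verify them against the PAISConfiguration CRD in your environment:

```yaml
# Sketch only -- apiVersion and several names below are placeholders
apiVersion: <pais-api-group>/<version>      # check the PAISConfiguration CRD
kind: PAISConfiguration
metadata:
  name: pais
spec:
  worker:
    storageClassName: my-storage-class      # vSphere Pods for Data Indexing
  clientTls:
    caBundleRefs:                           # configmaps created in Step 1
    - name: dsm-tls-trust
    - name: harbor-tls-trust
    - name: oidc-tls-trust
  database:                                 # Vector Database details (Step 2)
    host: 31.32.0.2
    username: pgadmin
    dbname: pais-db
    passwordRef:
      name: postgres-credentials
      fieldPath: password
  auth:                                     # OIDC client from your IdP (Step 3)
    oidc:
      issuerUrl: https://auth2.vcf.lab/application/o/pais/
      clientId: 2jgGwTq5PMqIcoH6YuAhOEdXf1vu0I1xfyHiltfU
      scope: [openid, groups, offline_access]
      authorizedGroups: [pais-users]
  nvidiaConfig:                             # NVIDIA driver/licensing refs (Step 4)
    licenseConfigRef:
      name: nvidia-license-conf
    imagePullSecretRef:
      name: nvidia-ngc-secret
    gpuOperatorOverridesRef:
      name: gpu-operator-override
  vksControlPlane:                          # VKS Control Plane sizing (Step 5)
    virtualMachineClassName: best-effort-xlarge
    storageClassName: my-storage-class
```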
Step 1 - We need to create configmaps that contain the CA certificates for our Harbor, DSM and IdP instances, so that PAIS can establish trust with each of these systems. You can refer to the example YAML manifests below and replace the values with your own.
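As a sketch, the DSM trust configmap might look like the following (the configmap name and the ca.crt key are assumptions on my part; the Harbor and IdP configmaps follow the same pattern with their own names and certificates):

```yaml
# dsm-tls-trust-configmap.yaml (sketch -- paste your own DSM CA certificate)
apiVersion: v1
kind: ConfigMap
metadata:
  name: dsm-tls-trust
data:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    <your DSM CA certificate>
    -----END CERTIFICATE-----
```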
We will then create these configmaps by running the following commands:
kubectl apply -f dsm-tls-trust-configmap.yaml
kubectl apply -f harbor-tls-trust-configmap.yaml
kubectl apply -f oidc-tls-trust-configmap.yaml
Step 2 - We need to create a secret that contains the password to the Vector Database that we had provisioned earlier using DSM Database Service. Login to your VCFA Tenant Portal and copy the "Connection String" which will contain the Postgres connection details including the credentials.

For this secret, we need to include the database password and the CA certificate of the Postgres database, which you can find from Step 4 from this previous blog post. You can refer to this example YAML manifest and replace the values with your own.
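A sketch of what that secret could look like (the key names here are assumptions on my part; make sure the password key matches the passwordRef.fieldPath used in paisconfiguration.yaml):

```yaml
# dsm-postgres-creds-secret.yaml (sketch)
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
stringData:       # values under stringData are base64-encoded by the API server
  password: <your-database-password>
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    <your Postgres CA certificate>
    -----END CERTIFICATE-----
```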
We will then create the secret by running the following command:
kubectl apply -f dsm-postgres-creds-secret.yaml
In addition, we will use the database connection details to populate the database section, including the host, username and dbname, within the paisconfiguration.yaml:
database:
  host: 31.32.0.2
  username: pgadmin
  dbname: pais-db
  passwordRef:
    name: postgres-credentials
    fieldPath: password
Step 3 - We will update the auth section with the Issuer URL, Client ID and Authentik IdP Group that you had set up earlier:
auth:
  oidc:
    issuerUrl: https://auth2.vcf.lab/application/o/pais/
    clientId: 2jgGwTq5PMqIcoH6YuAhOEdXf1vu0I1xfyHiltfU
    scope:
    - openid
    - groups
    - offline_access
    authorizedGroups:
    - pais-users
Step 4 - The nvidiaConfig section of the paisconfiguration.yaml contains three sub-sections: licenseConfigRef, imagePullSecretRef and gpuOperatorOverridesRef.
For my setup, since I am NOT using NVIDIA vGPU, the authentication token and the registry credentials that would be required to pull containers from NVIDIA GPU Cloud (NGC) are not needed, but we still need to define the configmaps, as they are required for PAIS to function even when the NGC drivers are not used:
- nvidia-license-conf-configmap.yaml - The client_configuration_token.tok should be an empty string
- nvidia-ngc-secret.yaml - The .dockerconfigjson must contain the base64 encoding of {"auths":{"nvcr.io":{"username":"","password":""}}} with an empty username/password
- gpu-operator-override-configmap.yaml - Provides an override to use the NVIDIA open-source drivers instead, along with the desired version of the guest OS driver
Note: Only the gpu-operator-override-configmap.yaml needs to be tweaked, if you wish to use another version of the NVIDIA guest drivers. The other two YAML files can be applied as-is without any modifications
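If you would rather not hand-encode the base64 for the NGC secret, a stringData-based Secret produces the same result, since the Kubernetes API server performs the encoding for you (a sketch; the secret name is an assumption on my part):

```yaml
# nvidia-ngc-secret.yaml (sketch) -- equivalent to base64-encoding the
# empty-credential dockerconfigjson described above by hand
apiVersion: v1
kind: Secret
metadata:
  name: nvidia-ngc-secret
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: '{"auths":{"nvcr.io":{"username":"","password":""}}}'
```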
We will then create these resources by running the following commands:
kubectl apply -f nvidia-license-conf-configmap.yaml
kubectl apply -f nvidia-ngc-secret.yaml
kubectl apply -f gpu-operator-override-configmap.yaml
Step 5 - The vksControlPlane section of the paisconfiguration.yaml contains virtualMachineClassName and storageClassName which will specify the compute and storage used for the VKS Control Plane VM that will be managing the various model endpoint worker nodes. For my setup, I ended up using the best-effort-xlarge (4 vCPU / 32 GB memory) VM Class and my standard storage class backed by my vSAN Deployment.
Note: I came to find out that VM Storage Policies that contain spaces do NOT map well into VM Storage Classes, so I ended up having to duplicate my desired VM Storage Policy, so that the label did not contain spaces, which I could then expose to my VCFA Namespace and consume when deploying PAIS.
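Based on my setup, the resulting fragment would look something like this (field names inferred from the section description above; the storage class name is an illustrative placeholder for the duplicated, space-free policy):

```yaml
vksControlPlane:
  virtualMachineClassName: best-effort-xlarge   # 4 vCPU / 32 GB memory
  storageClassName: vsan-default-policy         # duplicated policy without spaces
```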
Step 6 - Finally, create a new context login and/or refresh your login to access the VCFA Namespace and apply the paisconfiguration.yaml manifest:
# Create new context to login to VCFA Namespace
vcf context create legal --endpoint auto01.vcf.lab --api-token $VCF_CLI_VCFA_API_TOKEN --insecure-skip-tls-verify --type cci
vcf context use legal:oversight-kny28:oversight

# Refresh token to login to VCFA Namespace
vcf context refresh legal:oversight-kny28:oversight

kubectl apply -f paisconfiguration.yaml
We can monitor the progress of the PAIS deployment by checking that the three pods (API, Ingress & Rex Worker) are up and running with the following command:
kubectl get pods
All three pods should come up after a few minutes but if you see that the PAIS API pod is still not running and the status is CrashLoopBackOff, one useful tip is to check the logs of the oauth2-proxy container within the PAIS API pod by running the following (replace the Pod name with what you see from the previous command):
kubectl logs pais-api-d9a3b7e2-5c87-4a02-8168-79d7c3ea978e-7d4647c8df-2765d -c oauth2-proxy
Step 7 - Once everything is up and running, we can retrieve the IP Address that has been allocated (by our VPC) to our PAIS Ingress for accessing the PAIS service UI. If you recall, during the Authentik IdP configuration in Step 5 we had created an FQDN placeholder (pais.vcf.lab), and now we can associate the IP Address from this step with that FQDN.
kubectl get svc

If you prefer the UI method, we can also retrieve the IP Address by logging into the VCFA Tenant Portal and navigating to Build & Deploy->(Namespace)->Private AI->Service Configuration, under the Ingress section, as shown in the screenshot.

Step 8 - Open a browser to the PAIS Service FQDN (e.g. https://pais.vcf.lab) and you should see the following login screen.

When you click on the login button, it should redirect you to your Authentik IdP, where you will enter the credentials for a user that is part of the defined PAIS group (e.g. pais-users) and is therefore authorized to access the PAIS service.

If everything was set up correctly, you should now be successfully logged into the PAIS service UI, where you can deploy your model endpoints and start consuming them by creating AI Agents!