In this blog post, we will walk through the configuration steps for setting up Data Services Manager (DSM) and connecting it to VCF Automation (VCFA), allowing us to deploy a Vector Database, which is a requirement for configuring VMware Private AI Services (PAIS).
Requirements:
- VCF Automation (VCFA) Organization configured with Namespace
- VMware Private AI Services (PAIS) deployed
Step 1 - Download the latest DSM OVA from the Broadcom Support Portal (BSP) and then deploy it into your VCF 9.0 environment using the vSphere UI. There are a number of OVA properties that you will need to populate, including the SHA256 thumbprint of your vCenter Server.
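If you need to grab the SHA256 thumbprint from the command line rather than the vSphere UI, the following one-liner using openssl can retrieve it (vc01.vcf.lab is a placeholder FQDN, not from my environment):

```shell
# Retrieve the SHA256 thumbprint of the vCenter Server TLS certificate
# vc01.vcf.lab is a placeholder - replace with your own vCenter FQDN
echo | openssl s_client -connect vc01.vcf.lab:443 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256
```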
To simplify the deployment, I have created the following shell script called deploy_data_services_manager.sh, which you can use to streamline the setup.
Once DSM has powered on, it will automatically register itself as a vCenter Server extension and you should see a vSphere UI plugin banner notification in about five minutes, depending on how long it takes to fully initialize.
Step 2 - To verify that the DSM vSphere UI plugin is operational, go to your vCenter Server inventory object and navigate to Configure->Data Services Manager. Here you will need to create a local user account that will be used for the initial setup as well as for VCF Automation (VCFA) to connect to DSM.

Note: When providing the password for the local user, there is no password confirmation, so double-check that you have typed your desired password correctly. I fat-fingered the entry initially and started debugging an issue that was entirely of my own doing!
Step 3 - To allow DSM to deploy into a specific VCFA Namespace, which you would have already created as part of the VCFA Tenant Portal setup under Projects, navigate to Configure->Data Services Manager->Infrastructure Policies and select the desired VCFA Namespace(s).

Step 4 - We need to log in to the DSM Admin UI to enable the specific Postgres versions that can be used by PAIS. Open a browser to the FQDN of your DSM deployment and log in with the local account that you created in Step 2. PAIS requires a Vector Database, which is supported by Postgres. To enable DSM to provision a Postgres database, navigate to Versions & Upgrades->Postgres, select your desired version, and toggle the enable button.

With the initial DSM configuration complete, we now need to connect VCFA to DSM.
Step 5 - Before we can connect VCFA to our DSM instance, we need to apply the following workaround for a known issue to ensure that TLS trust can be properly established between the two systems.
Step 6 - Log in to the VCFA Provider Portal and navigate to VCF Services->Data Services. Provide the DSM endpoint (which is just the FQDN prefixed with https://), the local credentials that you created in Step 2, and the TLS certificate for your DSM deployment (open a browser to the DSM Admin UI and export the PEM certificate to your desktop), then click configure to initiate the connection.
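If you prefer the CLI over the browser for exporting the DSM certificate, here is a small sketch using openssl (dsm01.vcf.lab and the output filename are placeholders, not from my environment):

```shell
# Export the DSM TLS certificate in PEM format for use in the VCFA wizard
# dsm01.vcf.lab is a placeholder - replace with your own DSM FQDN
echo | openssl s_client -connect dsm01.vcf.lab:443 2>/dev/null \
  | openssl x509 -outform PEM > dsm-cert.pem
```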

This can take a few minutes and you can monitor the progress by expanding the Recent Tasks at the bottom of the screen.
Step 7 - Once VCFA is connected to DSM, navigate back to VCF Services->Data Services and create a Data Service Policy, which allows us to specify the type of data service (e.g. Postgres), the desired versions, and the infrastructure policy that can be used by the desired VCFA Tenant Portals for consumption.

Step 8 - Finally, to allow VCFA Organizations to consume DSM services, we can publish the service by navigating to Services->Overview->(DSM Service)->Action->Publish.

Step 9 - To provision a new Postgres DB that will be used by PAIS, login as an end user to your VCFA Tenant Portal (e.g. https://auto01.vcf.lab/automation) and navigate to Build & Deploy->(VCFA Namespace)->Services->Database and then click the create button using the defaults.

Note: When selecting your desired VM Storage Policy, make sure it does not contain any spaces or the deployment will have issues, as the naming is not Kubernetes-conformant.
While the VCFA UI makes it super easy to deploy a new service for the first time, you may have noticed that on the right-hand side of the UI, dynamically generated YAML is also available. Users can save the YAML manifest and source-control the configuration, and can also deploy the service even faster using the new VCF CLI and kubectl.
Step 1 - We need to generate an API token for our user from the VCFA Tenant Portal. Once logged in, you will click on the username and navigate to Account->API Tokens to create a new token.
Step 2 - Install the VCF CLI, if you have not already, and then export the VCF_CLI_VCFA_API_TOKEN environment variable, setting it to the API token from Step 1.
export VCF_CLI_VCFA_API_TOKEN=[YOURTOKEN]
Next, we will create a Kubernetes context connecting to your VCFA endpoint (e.g. auto01.vcf.lab), specifying the API token variable along with the VCFA Organization name:
vcf context create legal --endpoint auto01.vcf.lab --api-token $VCF_CLI_VCFA_API_TOKEN --insecure-skip-tls-verify --type cci --tenant-name legal
Once you have successfully authenticated, additional packages may automatically get installed, but you can now get a context to your VCFA Namespace, which in my setup is called "oversight-kny28". You can simply copy the command shown in the output:
vcf context use legal:oversight-kny28:oversight
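Before deploying anything, it is worth sanity-checking that the context is active and the namespace responds to API requests (the second command will simply return no resources until a database has been deployed):

```shell
# Verify the active context and that the namespace answers API requests
kubectl config current-context
kubectl get postgresclusters
```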

Step 3 - We can now deploy the Postgres DB for PAIS, which is comprised of both a secret (the credentials for our DB) and the Postgres DB instance itself. You can refer to vcfa-postgres-pais-db-secret.yaml and vcfa-postgres-pais-db.yaml as reference examples.
kubectl apply -f vcfa-postgres-pais-db-secret.yaml
kubectl apply -f vcfa-postgres-pais-db.yaml
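Rather than polling manually, you can block until the database reports ready; a sketch assuming the cluster exposes a standard Ready condition (pais-db matches the name used in the reference manifests, but the condition name may differ for your CRD version):

```shell
# Block until the Postgres cluster reports Ready (up to 20 minutes)
# The condition name "Ready" is an assumption - adjust if your CRD differs
kubectl wait --for=condition=Ready postgresclusters/pais-db --timeout=20m
```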
Step 4 - Once the Postgres DB is ready, we need to retrieve the CA certificate of the database, which will be used later within the PAIS configuration.
There are two methods to retrieve the CA certificate:
Option 1 - Log into the VCFA Namespace using kubectl and save the CA certificate to your local desktop for now.
PAIS_DB="pais-db"
PAIS_DB_SECRET_REF=$(kubectl get postgresclusters ${PAIS_DB} -o jsonpath='{.status.connection.passwordRef.name}')
kubectl get secret ${PAIS_DB_SECRET_REF} -o jsonpath='{.data.ca\.crt}' | base64 -d > pais-db-ca.crt
Option 2 - Log into the DSM Admin UI, navigate to Databases->Postgres->Summary->View CA, download the CA certificate in PEM format, and save it to your local desktop for now.

