When a user uploads an ISO/OVF/OVA from their desktop to VCF Automation (VCFA) Content Library, the file is temporarily stored in the Transfer Spooling Area (TSA) that resides locally within VCFA before it is finally transferred to the destination vCenter Server.

By default, VCFA uses an embedded object store backed by SeaweedFS to hold the temporary user files, which are stored in 64MB chunks. The size of the SeaweedFS volumes is based on the VCFA deployment mode (single node vs. multi-node for HA). A total of three volumes (3 replicas) is always configured regardless of the VCFA deployment mode, and the total usable storage capacity is the sum of these three volumes (a quick arithmetic sketch follows the list below):
- Single Node, each volume is 75GB
- Multi-Node (Medium), each volume is 150GB
- Multi-Node (Large), each volume is 200GB
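As a sanity check on the arithmetic, here is a trivial bash sketch of the usable capacity per mode, based purely on the sizing above:
# Usable TSA capacity per deployment mode: three replica volumes,
# with usable capacity being the sum of the three (per the sizing above)
for mode in Single:75 Medium:150 Large:200; do
  size=${mode#*:}
  echo "${mode%:*}: 3 x ${size}GB = $((3 * size))GB usable"
done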
The SeaweedFS volumes can be resized non-disruptively from VCF Operations by navigating to Fleet Management->Lifecycle->Management->Components, selecting your VCF Automation instance, and performing the storage resize operation.

Enter the desired capacity for the Shared Storage Data and VCFA will then automatically resize the volume without any user impact.

Alternatively, VCFA can be configured with a more scalable option that leverages an external NFS server for the TSA function to accommodate larger file uploads.
To change the TSA storage backing from the internal SeaweedFS to an external NFS server, we need to issue an API request while logged into the VCFA appliance. Create a shell script called configure_nfs.sh containing the following, and update the three variables at the top of the script:
NFS_SERVER_IP="192.168.30.29"
NFS_SERVER_MOUNT_PATH="/volume1/vcfa"
NFS_SERVER_MOUNT_SIZE_GIB="500"
### DO NOT EDIT BEYOND HERE ###
# Retrieve the service account token and a node IP for reaching the webhook endpoint
K8S_TOKEN=$(kubectl get secrets synthetic-checker-krp -n vmsp-platform -ojsonpath='{.data.token}' | base64 -d)
NODE_IP=$(kubectl get nodes -ojsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
# Submit the NFS reconfiguration request and capture the response
REQUEST_RESPONSE=$(curl -k -s -X POST -H "Authorization: Bearer $K8S_TOKEN" https://${NODE_IP}:30005/webhooks/prelude/tenant-manager/configure-nfs \
-d "{
\"host\": \"$NFS_SERVER_IP\",
\"path\": \"$NFS_SERVER_MOUNT_PATH\",
\"size\": \"$NFS_SERVER_MOUNT_SIZE_GIB\"
}")
# Follow the returned statusURI to display the status of the request
curl -s -L -k -X GET -H "Authorization: Bearer $K8S_TOKEN" https://${NODE_IP}:30005/$(echo $REQUEST_RESPONSE | jq -r .statusURI) | jq .
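Then make the script executable and run it from the appliance:
chmod +x configure_nfs.sh
./configure_nfs.sh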
Once the script has executed successfully, you should see a 200 response code, indicating that the API endpoint has received the request and the reconfiguration is now in progress.
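If you would like the script to block until the reconfiguration finishes, a minimal polling sketch you could append to configure_nfs.sh is shown below. It reuses the K8S_TOKEN, NODE_IP, and REQUEST_RESPONSE variables defined above; note that the .status field and the IN_PROGRESS value are assumptions about the status document, so adjust the jq filter to match what the webhook actually returns in your environment.
# Hypothetical polling loop: re-query the statusURI every 10 seconds until
# the reported state is no longer "IN_PROGRESS" (field names are assumptions)
STATUS_URI=$(echo $REQUEST_RESPONSE | jq -r .statusURI)
while true; do
  STATE=$(curl -s -L -k -H "Authorization: Bearer $K8S_TOKEN" https://${NODE_IP}:30005/${STATUS_URI} | jq -r '.status // empty')
  echo "Current state: ${STATE:-unknown}"
  [ "$STATE" = "IN_PROGRESS" ] || break
  sleep 10
done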

The /configure-nfs webhook will first check whether the NFS server and volume are accessible by spinning up a few helper containers, which we can see by running the following command:
kubectl -n prelude get pods | grep tenant
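If you would like to watch the helper pods appear and complete in real time, kubectl's --watch flag works nicely here:
# Stream pod changes in the prelude namespace (Ctrl-C to stop);
# --line-buffered makes grep flush each matching line immediately
kubectl -n prelude get pods --watch | grep --line-buffered tenant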
You can run the following command to verify that the container was able to update the ownership of the NFS volume:
kubectl -n prelude logs tenant-manager-0 -c prepare-dir-ownership
We can confirm the NFS mount was successful by running the following command on the tenant-manager pod:
kubectl -n prelude exec -it tenant-manager-0 -- mount | grep nfs
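To also double-check that the size of the mount matches what was requested, df inside the pod works as well; the NFS mount shows up with the server IP and export path as the device name (the IP below is from our example, so substitute your own):
# Show filesystem usage inside the pod; the NFS mount appears as
# <server-ip>:<export-path> in the first column
kubectl -n prelude exec tenant-manager-0 -- df -h | grep 192.168.30.29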
Lastly, the Tenant Manager pod should now be running after the storage reconfiguration:
kubectl -n prelude get pods -l app.kubernetes.io/instance=tenant-manager
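If you are scripting this end to end, kubectl wait can block until the pod reports Ready rather than polling manually:
# Block until the Tenant Manager pod is Ready (up to 5 minutes)
kubectl -n prelude wait --for=condition=Ready pod -l app.kubernetes.io/instance=tenant-manager --timeout=300s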
When uploading a new file, we should now see the temporary file being stored on our NFS server.
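One way to observe this is to list the export directory on the NFS server itself while an upload is in flight; the path below is the export from our example, and the layout of the temporary files underneath it may vary by environment:
# Run on the NFS server: recursively list the export while a file upload
# is in progress (directory layout underneath is environment-specific)
ls -lhR /volume1/vcfa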

To revert the TSA storage backing from an external NFS server to the internal SeaweedFS, we simply issue an API request while logged into the VCFA appliance, with an empty string for the host property. Create a shell script called configure_seaweedfs.sh containing the following:
# Retrieve the service account token and a node IP for reaching the webhook endpoint
K8S_TOKEN=$(kubectl get secrets synthetic-checker-krp -n vmsp-platform -ojsonpath='{.data.token}' | base64 -d)
NODE_IP=$(kubectl get nodes -ojsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
# An empty host string reverts the TSA backing to the internal SeaweedFS
REQUEST_RESPONSE=$(curl -k -s -X POST -H "Authorization: Bearer $K8S_TOKEN" https://${NODE_IP}:30005/webhooks/prelude/tenant-manager/configure-nfs -d '{"host":""}')
# Follow the returned statusURI to display the status of the request
curl -s -L -k -X GET -H "Authorization: Bearer $K8S_TOKEN" https://${NODE_IP}:30005/$(echo $REQUEST_RESPONSE | jq -r .statusURI) | jq .
You can monitor the storage reconfiguration the same way as when configuring the NFS server. The /configure-nfs webhook also provides an overall status, which you can retrieve with a GET request by running the following:
K8S_TOKEN=$(kubectl get secrets synthetic-checker-krp -n vmsp-platform -ojsonpath={.data.token} | base64 -d)
NODE_IP=$(kubectl get nodes -ojsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
curl -k -X GET -H "Authorization: Bearer $K8S_TOKEN" https://${NODE_IP}:30005/webhooks/prelude/tenant-manager/configure-nfs
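Once the revert completes, the mount check from earlier doubles as a quick verification; the grep should now return nothing:
# After reverting to SeaweedFS, this should produce no output
kubectl -n prelude exec tenant-manager-0 -- mount | grep nfs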

Hi William, thanks for the write-up! Was just researching this the other day and couldn't find much.
Did you miss a part?
"The size of the SeaweedFS volumes will be based on the VCFA deployment mode, for a simple (non-HA) deployment, each volume is 75GB. "
--> what's the volume size then for an HA deployment?
No, what I wrote is correct. Did you read the sentence immediately after, which should answer your HA question? 🙂
>> There is a total of three volumes (3 replicas) that is always configured regardless of the deployment mode for VCFA and total usable storage capacity is sum of those three volumes.
I did, that's why I asked 🙂
Reading the first sentence " for a simple (non-HA) deployment, each volume is 75GB", it implies that the volume size is different between simple and HA mode.
Reading the sentence after that then "There is a total of three volumes (3 replicas) that is always configured *regardless of the deployment* mode for VCFA and total usable storage capacity is sum of those three volumes." makes it sound like even in the simple mode (so only one VCFA appliance) there are 3 volumes (each 75 GB in size) on the one VCFA appliance.
See what I mean?
That is EXACTLY how you should interpret that sentence. We deploy 3 replicas for HA regardless of whether you've selected Singleton or HA VCFA.
For Singleton, the default volume size is 75GB. For HA, you have Medium/Large options and the default volume size for those is 150GB/200GB respectively.
Ahh, now there is that little tidbit that boggled my mind! :))
Thanks for the clarifications, William!
So:
- Simple Mode: 3 replica volumes (on single node), each volume 75 GB in size
- HA Mode in medium: 3 replica volumes (across 3 nodes), each volume 150 GB in size
- HA Mode in large: 3 replica volumes (across 3 nodes), each volume 200 GB in size
Thanks again! 🙂
Does this change from object store to NFS persist across upgrades, e.g. from v9.0.0.0 to v9.0.1.0?
Yes, this configuration is expected to persist across upgrades.