
Managing Storage for VCF Automation (VCFA) Content Library Transfer Spooling Area

12.01.2025 by William Lam // 7 Comments

When a user uploads an ISO/OVF/OVA from their desktop to VCF Automation (VCFA) Content Library, the file is temporarily stored in the Transfer Spooling Area (TSA) that resides locally within VCFA before it is finally transferred to the destination vCenter Server.


By default, VCFA uses an embedded object store backed by SeaweedFS to hold the temporary user files, which are stored in 64MB chunks. The size of the SeaweedFS volumes depends on the VCFA deployment mode (single-node vs. multi-node for HA). A total of three volumes (3 replicas) is always configured regardless of the VCFA deployment mode, and the total usable storage capacity is the sum of these three volumes.

  • Single Node, each volume is 75GB
  • Multi-Node (Medium), each volume is 150GB
  • Multi-Node (Large), each volume is 200GB
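Since three replica volumes are always configured and the usable capacity is their sum, the defaults work out to 225GB, 450GB and 600GB respectively. A quick shell sketch of that arithmetic:

```shell
# Usable TSA capacity per deployment mode: three replica volumes are always
# configured and the usable capacity is the sum of all three (per the
# default volume sizes listed above).
tsa_capacity_gb() {
  case "$1" in
    single) echo $((3 * 75))  ;;  # Single Node
    medium) echo $((3 * 150)) ;;  # Multi-Node (Medium)
    large)  echo $((3 * 200)) ;;  # Multi-Node (Large)
    *) echo "unknown mode: $1" >&2; return 1 ;;
  esac
}

tsa_capacity_gb single   # 225
tsa_capacity_gb medium   # 450
tsa_capacity_gb large    # 600
```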

The SeaweedFS volumes can be resized non-disruptively from VCF Operations by navigating to Fleet Management->Lifecycle->Management->Components, selecting your VCF Automation instance and performing the storage resize operation.


Enter the desired capacity for the Shared Storage Data and VCFA will then automatically resize the volume without any user impact.


Alternatively, VCFA can be configured with a more scalable option that leverages an external NFS server for the TSA function to accommodate larger file uploads.

To switch the TSA storage backing from the internal SeaweedFS to an external NFS server, we need to issue an API request while logged into the VCFA appliance. Create a shell script called configure_nfs.sh containing the following, and update the three variables at the top of the script:

NFS_SERVER_IP="192.168.30.29"
NFS_SERVER_MOUNT_PATH="/volume1/vcfa"
NFS_SERVER_MOUNT_SIZE_GIB="500"

### DO NOT EDIT BEYOND HERE ###

K8S_TOKEN=$(kubectl get secrets synthetic-checker-krp -n vmsp-platform -ojsonpath={.data.token} | base64 -d)
NODE_IP=$(kubectl get nodes -ojsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

REQUEST_RESPONSE=$(curl -k -s -X POST -H "Authorization: Bearer $K8S_TOKEN" https://${NODE_IP}:30005/webhooks/prelude/tenant-manager/configure-nfs \
-d "{
        \"host\": \"$NFS_SERVER_IP\",
        \"path\": \"$NFS_SERVER_MOUNT_PATH\",
        \"size\": \"$NFS_SERVER_MOUNT_SIZE_GIB\"
    }")

curl -q -L -k -X GET -H "Authorization: Bearer $K8S_TOKEN" https://${NODE_IP}:30005/$(echo $REQUEST_RESPONSE | jq -r .statusURI) | jq .

Once the script has been successfully executed, you should see a 200 response code that indicates the API endpoint has received the request and the reconfiguration is now in progress.
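Since the reconfiguration runs asynchronously, you may want to poll the statusURI from the response until it reaches a terminal state. A minimal sketch, assuming the payload exposes a status field with COMPLETED/FAILED terminal values (those names are assumptions; inspect the raw response on your instance and adjust):

```shell
# Poll the statusURI returned by the /configure-nfs webhook until a terminal
# state is reported. The "status" field and the COMPLETED/FAILED values are
# assumptions -- check the actual response payload on your VCFA instance.
poll_status() {
  local url="$1" token="$2" state=""
  for _ in $(seq 1 30); do
    state=$(curl -q -L -k -s -H "Authorization: Bearer $token" "$url" | jq -r '.status // empty')
    case "$state" in
      COMPLETED|FAILED) echo "$state"; return 0 ;;
    esac
    sleep 10
  done
  echo "TIMEOUT"; return 1
}

# Example usage (statusURI comes from the earlier REQUEST_RESPONSE):
# poll_status "https://${NODE_IP}:30005/$(echo $REQUEST_RESPONSE | jq -r .statusURI)" "$K8S_TOKEN"
```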


The /configure-nfs webhook will first check whether the NFS server and volume are accessible by spinning up a few helper containers, which we can see by running the following command:

kubectl -n prelude get pods | grep tenant

You can run the following command to ensure that the container is able to update the ownership of the NFS volume:

kubectl -n prelude logs tenant-manager-0 -c prepare-dir-ownership


We can confirm the NFS mount was successful by running the following command on the tenant-manager pod:

kubectl -n prelude exec -it tenant-manager-0 -- mount | grep nfs


Lastly, the Tenant Manager pod should now be running after the storage reconfiguration:

kubectl -n prelude get pods -l app.kubernetes.io/instance=tenant-manager
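If you are scripting this verification, you can also block until the pod reports Ready rather than re-running the pod listing manually. A small sketch using kubectl wait (the pod name follows the commands above):

```shell
# Block until the tenant-manager pod is Ready after the storage
# reconfiguration, instead of polling "kubectl get pods" by hand.
wait_for_tenant_manager() {
  kubectl -n prelude wait --for=condition=Ready pod/tenant-manager-0 --timeout=300s
}

# Example usage:
# wait_for_tenant_manager
```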


When uploading a new file, we should now see the temporary file being stored on our NFS server.
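To spot-check this from the NFS server side, here is a simple sketch that lists the most recent entries in the export (the /volume1/vcfa path matches the variable used in the earlier script; substitute your own export path):

```shell
# List the newest entries in the NFS export to confirm that in-flight
# uploads are being spooled there. Run this on the NFS server itself.
newest_uploads() {
  ls -lt "${1:?usage: newest_uploads <export-path>}" | head -n 6
}

# Example usage on the NFS server:
# newest_uploads /volume1/vcfa
```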


To revert the TSA storage backing from an external NFS server back to the internal SeaweedFS, we simply issue an API request while logged into the VCFA appliance with an empty string for the host property. Create a shell script called configure_seaweedfs.sh containing the following:

K8S_TOKEN=$(kubectl get secrets synthetic-checker-krp -n vmsp-platform -ojsonpath={.data.token} | base64 -d)
NODE_IP=$(kubectl get nodes -ojsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

REQUEST_RESPONSE=$(curl -k -X POST -H "Authorization: Bearer $K8S_TOKEN" https://${NODE_IP}:30005/webhooks/prelude/tenant-manager/configure-nfs -d '{"host":""}')
curl -q -L -k -X GET -H "Authorization: Bearer $K8S_TOKEN"   https://${NODE_IP}:30005/$(echo $REQUEST_RESPONSE | jq -r .statusURI) | jq .

You can monitor the storage reconfiguration the same way as when configuring an NFS server. The /configure-nfs webhook also provides an overall status that you can retrieve with a GET request by running the following:

K8S_TOKEN=$(kubectl get secrets synthetic-checker-krp -n vmsp-platform -ojsonpath={.data.token} | base64 -d)
NODE_IP=$(kubectl get nodes -ojsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

curl -k -X GET -H "Authorization: Bearer $K8S_TOKEN" https://${NODE_IP}:30005/webhooks/prelude/tenant-manager/configure-nfs

Categories // VCF Automation, VMware Cloud Foundation Tags // VCF 9.0

Comments

  1. Steffen Richter says

    12/02/2025 at 1:43 am

    Hi William, thanks for the write-up! Was just researching this the other day and couldn't find much.

    Did you miss a part?
    "The size of the SeaweedFS volumes will be based on the VCFA deployment mode, for a simple (non-HA) deployment, each volume is 75GB. "
    --> what's the volume size then for an HA deployment?

    • William Lam says

      12/02/2025 at 8:43 am

      No. What I wrote is correct, did you read the sentence immediately after which should answer your HA question 🙂

      >> There is a total of three volumes (3 replicas) that is always configured regardless of the deployment mode for VCFA and total usable storage capacity is sum of those three volumes.

      • Steffen Richter says

        12/02/2025 at 9:08 am

        I did, that's why I asked 🙂

        Reading the first sentence " for a simple (non-HA) deployment, each volume is 75GB", it implies that the volume size is different between simple and HA mode.

        Reading the sentence after that then "There is a total of three volumes (3 replicas) that is always configured *regardless of the deployment* mode for VCFA and total usable storage capacity is sum of those three volumes." makes it sound like even in the simple mode (so only one VCFA appliance) there are 3 volumes (each 75 GB in size) on the one VCFA appliance.

        See what I mean?

        • William Lam says

          12/02/2025 at 9:42 am

          That is EXACTLY how you interpret that sentence. We deploy 3 replicas for HA regardless if you've selected Singleton or HA VCFA

          For Singleton, the default volume size is 75GB. For HA, you have Medium/Large options and the default volume size for those is 150GB/200GB respectively.

          • Steffen Richter says

            12/03/2025 at 12:40 am

            Ahh, now there is that little tidbit that boggled my mind! :))

            Thanks for the clarifications, William!

            So:
            - Simple Mode: 3 replica volumes (on single node), each volume 75 GB in size
            - HA Mode in medium: 3 replica volumes (across 3 nodes), each volume 150 GB in size
            - HA Mode in large: 3 replica volumes (across 3 nodes), each volume 200 GB in size

            Thanks again! 🙂

  2. Simon Sparks says

    12/02/2025 at 2:12 am

    Does this change from object store to NFS persist across upgrades e.g. from v9.0.0.0 to v9.0.1.0 ??

    • William Lam says

      12/03/2025 at 1:58 pm

      Yes, this configuration is expected to be persisted upon upgrades



Author

William is Distinguished Platform Engineering Architect in the VMware Cloud Foundation (VCF) Division at Broadcom. His primary focus is helping customers and partners build, run and operate a modern Private Cloud using the VMware Cloud Foundation (VCF) platform.


Copyright WilliamLam.com © 2026

 
