With previous releases of the VCSA, increasing disk capacity was not a very straightforward process. Although you could easily increase the size of the underlying VMDK while the VCSA was running, growing the guest OS filesystem was not as seamless. In fact, the process was to add a new VMDK, format it, and then copy the contents from the old disk to the new disk, as detailed in VMware KB 2056764. This meant that with previous VCSA 5.x releases you had to incur downtime in your environment, and it could be quite significant depending on your familiarity with the steps in the KB, not to mention the time it took to copy the data.
UPDATE (12/06/16) - For VCSA 6.5 deployments, please refer to the article here as the instructions have changed since VCSA 6.0.
The reason for this unnecessary complexity is that the VCSA did not take advantage of a Logical Volume Manager (LVM) for managing its disks. In VCSA 6.0, LVM is now used, making it extremely easy to increase disk capacity while the VCSA is running. VCSA 6.0 further simplifies things by separating the various functions into their own disk partitions, comprising 11 VMDKs, compared to the monolithic design of previous VCSA releases. This not only allows you to increase capacity for a specific partition, but also lets you attach specific storage SLAs using VM Storage Policies on individual VMDKs, such as the Database or Log VMDK for example.
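For example, you can get a quick look at the LVM layout from the appliance shell using the standard LVM2 tooling (a simple sanity check, assuming a stock VCSA 6.0 deployment):

pvs      # physical volumes - one per storage VMDK
lvs      # logical volumes built on top of them
df -h    # the logical volumes mounted under /storage/*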
In the example below, I will walk through the process of increasing the DB VMDK from the existing 10GB to 20GB while the vCenter Server is still running.
Step 1 - Verify the existing disk capacity using "df -h"
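On a default deployment the DB volume is mounted at /storage/db, so you can scope the check to just that mount. The device name and size values below are illustrative and assume the stock volume group naming:

df -h /storage/db
# Filesystem            Size  Used Avail Use% Mounted on   (illustrative)
# /dev/mapper/db_vg-db   10G  1.1G  8.4G  12% /storage/db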
Step 2 - Increase the capacity of VMDK 6, which backs the DB partition, using the vSphere Web or C# Client.
Step 3 - Once the VMDK has been increased, run the following command in the VCSA, which will automatically expand any logical volumes whose physical volumes have been increased:
vpxd_servicecfg storage lvm autogrow
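On success the utility should print a result code of 0; as a commenter notes below, a result of 1 means the expansion did not take place (illustrative output, exact format assumed):

VC_CFG_RESULT=0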
Step 4 - Confirm that the newly added capacity is now available to the filesystem.
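Running the same check as in Step 1 should now show the larger volume (illustrative):

df -h /storage/db
# /storage/db should now report roughly 20G instead of 10G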
If you would like to learn more about the new VMDK structure in VCSA 6.0, I will be sharing more details in a future article.
Dingo Taz says
Thanks for this. Just a tip though: you need to delete any snapshots first, as otherwise the disk sizes are locked and cannot be changed.
wtcthomas says
I wanted to change from a tiny to a small deployment. I followed the steps above but I got result = 1, and the disks have not changed according to df -h.
thanks
tom says
sorry, typo, it works. thanks
Alfonso says
Thank you, William.
That post saved me, and allowed me to import my current VCSA 5 database without problems.
Chip says
William, I noticed this works on some of the vdisks but not all. For example, if you needed to extend Hard disk 1, vpxd_servicecfg storage lvm autogrow will not work. Since none of the vSphere 6 documentation I could find mentions this ability, what are some other caveats around using autogrow?
Randomizer says
I realize this is old, but for anyone else curious why autogrow doesn't work on disk 1 (it also won't work on disk 4): those two are partitions (/dev/sdX), not logical volumes. The caveat is that autogrow can only be used on LVM-managed volumes.
To extend a partition you'd probably need to follow something along the lines of this KB: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2056764
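If you're not sure whether a given mount is backed by LVM, a quick way to check (assuming the standard appliance layout; device names below are illustrative) is to look at the device names:

mount | grep '^/dev'
# /dev/sda3 on / ...                  -> plain partition, autogrow won't help
# /dev/mapper/db_vg-db on /storage/db -> LVM logical volume, autogrow works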
Joe Cooper says
Any idea on how to do this on an external PSC appliance? vpxd_servicecfg doesn't appear to be on the server.
John Maxwell says
I have that question as well. My PSC's /storage/log volume is full. It's only 5GB and I would like to increase it.
AtleJensen says
I would also like to know this.
Cedric C. says
Same problem for me!!
William Lam says
When a PSC is deployed externally, it looks like it does not include this new utility to easily increase volumes. This is something that VMware is aware of and will be fixing in the next update of vSphere.
In the meantime, it sounds like this might be related to this KB (https://kb.vmware.com/kb/2143565), in which heavy logging from the PSC is actually causing the log volume to fill up. If this is the case, simply increasing the capacity of the disk won't help, as you'll probably run into this problem again.
ecmanfra says
unfortunately this is still an issue, still can't expand any of the disks on an external PSC =(
William Lam says
Joe,
That's correct, the behavior hasn't changed even with the latest U2 release. I believe this will be fully resolved in a future update. In the meantime, I'm curious whether you need to expand the VMDK due to the PSC logging or for some other reason? We have a KB documenting how to enable log rotation.
Joe Cooper says
Yup... it's all about that log directory. I'll follow the KB and apply the workaround.
zachary says
I cannot increase the root partition on VCSA 6.0 using this method. Any suggestions?
Gopi says
https://blog.pivotal.io/labs/labs/increasing-size-vcsa-root-filesystem
Wes says
Longtime reader, huge fan, and this tip just saved from log disk headaches. Thanks William!
RyGuy says
Any new updates for external PSC?
Doing this workaround is Looney Tunes: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2143565
Ken says
OK, so what if it's not the DB volume that needs to be extended? In my case it was the log volume. Since it was just one volume up from the DB, I assumed it would be VMDK 5, and it was, but this was a guess based on an assumption that could have been faulty. More details on this would be appreciated.
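One way to confirm the mapping instead of guessing (assuming a single SCSI controller, so /dev/sda is Hard disk 1, /dev/sdb is Hard disk 2, and so on; the log_vg/sde names below are illustrative):

df -h /storage/log   # find the backing device, e.g. /dev/mapper/log_vg-log
pvs                  # map the volume group to its physical volume, e.g. log_vg -> /dev/sde
# /dev/sde = the 5th disk = Hard disk 5 (VMDK 5)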
cr0ft says
The whole system-eating-logs thing is a pain in my arse. Yes, we're behind on upgrades, still using 6.0, but our system has fallen over several times due to this issue. First it was the well-known screwup that there was no log rotation, so I fixed that. Then some other log filled up elsewhere and borked us, so I fixed that too. Now, today, I think it was wrapper.log in /usr/lib or something that filled up the root partition, and again I was doing the whole SSH-in-and-hunt-for-the-culprit routine. I'm moving to 6.5... and going to increase all those partitions it creates to double the norm. Perhaps that won't break in the next couple of weeks. Meh...