Updates to VMDK partitions & disk resizing in VCSA 6.5

11.07.2016 by William Lam // 9 Comments

Similar to the vCenter Server Appliance (VCSA) 6.0 release, the new VCSA 6.5 is also composed of multiple virtual machine disks (VMDKs). Each VMDK maps to a specific function and OS partition within the VCSA. There are now a total of 12 VMDKs, two of which are new in vSphere 6.5: vSphere Update Manager (VUM) and Image Builder. The following table provides a breakdown of the VMDKs in VCSA 6.5 compared to VCSA 6.0:

Disk | 6.0 Size | 6.5 Size | Purpose | Mount Point
VMDK1 | 12GB | 12GB | / and Boot | / and Boot
VMDK2 | 1.2GB | 1.8GB | VCSA's RPM packages | N/A (not mounted after install)
VMDK3 | 25GB | 25GB | Swap | SWAP
VMDK4 | 25GB | 25GB | Core | /storage/core
VMDK5 | 10GB | 10GB | Log | /storage/log
VMDK6 | 10GB | 10GB | DB | /storage/db
VMDK7 | 5GB | 15GB | DBLog | /storage/dblog
VMDK8 | 10GB | 10GB | SEAT (Stats, Events and Tasks) | /storage/seat
VMDK9 | 1GB | 1GB | Net Dumper | /storage/netdump
VMDK10 | 10GB | 10GB | Auto Deploy | /storage/autodeploy
VMDK11 | N/A (previously Inventory Service, 5GB) | 10GB | Image Builder | /storage/imagebuilder
VMDK12 | N/A | 100GB | Update Manager | /storage/updatemgr

In addition to the VMDK/partition changes, there are a couple of enhancements to the process of increasing disk capacity in the VCSA. Just like in VCSA 6.0, you can still hot-extend any one of the VMDKs while the system is running.

  • The first change is that the old vpxd_servicecfg command, which was used to expand the logical volume(s) and make the new storage capacity available to the OS/application, has been replaced with the following command: /usr/lib/applmgmt/support/scripts/autogrow.sh
  • The second change is that instead of having to run the above command over SSH, which may be disabled by default, there is now a Virtual Appliance Management Interface (VAMI) REST API that can be called remotely: POST /appliance/system/storage/resize
  • The final difference is that in previous releases, you could only resize the Embedded VCSA or External VCSA node, but not the Platform Services Controller (PSC) node. In 6.5, this has changed and you can apply this method to any one of the VCSA nodes. Thanks to Blair for reminding me on this one!

Let's walk through an example of increasing the Net Dumper partition (VMDK9) and exercising this new VAMI API.

Step 1 - Log in to the VCSA using SSH and run a quick "df -h" to check the current size of your Net Dumper partition, which by default will be 1GB as seen in the screenshot below.

increase-disk-capacity-vcsa-6-5-0
Step 2 - Next, we will increase the VMDK (VMDK9) to 5GB. In this example, I am using the vSphere Web Client, but if you want to completely automate this process end-to-end, you can use the vSphere API/PowerCLI to perform this operation.

increase-disk-capacity-vcsa-6-5-1
Step 3 - To quickly try out the new VAMI API, we will use the new vSphere API Explorer that is included in VCSA 6.5. Simply open a web browser and enter the following URL: https://[VCSA-HOSTNAME]/apiexplorer. Select the "appliance" API, click on the login button, and enter your vCenter Server credentials.

increase-disk-capacity-vcsa-6-5-2
Step 4 - Scroll down to the POST /appliance/system/storage/resize operation and expand it. To call this API, just click on the "Try it out" button. If the operation completes successfully, you should see a 200 response as shown in the screenshot below.

increase-disk-capacity-vcsa-6-5-3
Steps 3 and 4 can also be performed directly through PowerCLI using the new CIS cmdlets (Connect-CisServer & Get-CisService), which expose the new VAMI APIs. Below is a quick snippet that performs the exact same operation:

Connect-CisServer -Server 192.168.1.150 -User administrator@vsphere.local -Password VMware1!
$diskResize = Get-CisService -Name 'com.vmware.appliance.system.storage'
$diskResize.resize()
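
If you prefer plain shell over PowerCLI, the same operation can also be invoked with curl. Below is a minimal sketch, assuming the vSphere 6.5 REST endpoints /rest/com/vmware/cis/session (for authentication) and /rest/appliance/system/storage/resize, plus the same host and credentials used in the PowerCLI example above:

# Authenticate against the CIS REST API and capture the session ID
SESSION_ID=$(curl -sk -X POST -u 'administrator@vsphere.local:VMware1!' \
  https://192.168.1.150/rest/com/vmware/cis/session | sed -e 's/.*"value":"\([^"]*\)".*/\1/')

# Call the storage resize operation using the session ID
curl -sk -X POST -H "vmware-api-session-id: ${SESSION_ID}" \
  https://192.168.1.150/rest/appliance/system/storage/resize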

Step 5 - Lastly, we can now log back into the VCSA and re-run the "df -h" command to verify we can see the new storage capacity.

increase-disk-capacity-vcsa-6-5-4

Categories // Automation, VCSA, vSphere 6.5 Tags // autogrow.sh, PowerCLI, REST API, vami, vcenter server appliance, VCSA, vcva, vmdk, vSphere 6.5

How to deploy the vCenter Server Appliance (VCSA) 6.5 running on VMware Fusion & Workstation?

10.27.2016 by William Lam // 31 Comments

As with any new release of vSphere, it is quite common for customers to deploy the new software in either a home or test lab to get more familiar with it. Although not everyone has access to a vSphere lab environment, the next best thing is to leverage either VMware Fusion or Workstation. With the upcoming release of vSphere 6.5, this is no different. In fact, during the vSphere Beta program this was something several customers asked about, and something I helped document, as the process has changed from previous releases of the VCSA.

In vSphere 6.5, the VCSA deployment has changed from a single monolithic stage in which a user enters all of their information up front and the installer then deploys the VCSA OVA and applies the configurations. Previously, if you had fat-fingered, say, a DNS entry, or wanted to change the IP Address before the actual application configurations were applied, it was not possible and you would have to re-deploy, which was not an ideal user experience.

In vSphere 6.5, the new UI installer still lets you run through the entire deployment in one pass, but it is now broken down into two distinct stages, as shown below with their respective screenshots:

Stage 1 - Initial OVA deployment which includes basic networking

vcsa-6-5-installer-1
Stage 2 - Applying VCSA specific personality configuration

vcsa-6-5-installer-2
Just like in prior releases of the VCSA, the UI translates the user input into specific OVF properties which are then passed into the VCSA guest for configuration. This means that if you wish to deploy VCSA 6.5 running on Fusion or Workstation, you will have two options to select from: you either deploy the VCSA and complete both Stage 1 and Stage 2, or complete Stage 1 only. If you select the latter option, to complete the actual deployment you will need to open a web browser to the VAMI UI (https://[VCSA-IP]:5480) and finish configuring the VCSA using the "Setup vCenter Server Appliance" option as shown in the screenshot below.

vcsa-6-5-installer-3
If your goal is to quickly get the VCSA 6.5 up and running, then going with Option 1 (Stage 1 & 2 Config) is the way to go. If your goal is to learn about the new VCSA UI Installer, then you can at least get a taste of it by going with Option 2 (Stage 1 Config), which lets you step through Stage 2 using the native UI installer.

One last thing I would like to mention is that a number of new services have been added to the VCSA 6.5. One example is that vSphere Update Manager (VUM) is now embedded in the VCSA and is also enabled by default. With these new services, the smallest deployment size now requires 10GB of memory, whereas before it was 8GB. This is something to be aware of; ensure that you have adequate resources before attempting to deploy the VCSA, or else you may see some unexpected failures while the system is being configured.

Note: If you have access to fast SSDs and would like to overcommit memory in Fusion or Workstation, you might be able to get this to work leveraging some tricks mentioned here. This is not something I have personally tested, so YMMV.

Here are the steps to deploy VCSA 6.5 using either VMware Fusion or Workstation:

Step 0 (Optional) - Familiarize yourself with how VCSA 6.0 was set up on Fusion/Workstation in this blog post, which will be helpful for additional context.

Step 1 - Download & extract the VCSA 6.5 ISO

Step 2 - Import the VCSA OVA, which is located at vcsa/VMware-vCenter-Server-Appliance-6.5.0.5100-XXXXXX_OVF10.ova, using either VMware Fusion or Workstation (you can either double-click it or go to File->Open), but make sure you do NOT power it on after deployment (this is very important).

Step 4 - Locate the directory to which the VCSA was deployed, open up the VMX file, and append one of the following sets of options (make sure to change the IP information and passwords based on your environment):

Option 1 (Stage 1 & 2 Configuration):

guestinfo.cis.deployment.node.type = "embedded"
guestinfo.cis.appliance.net.addr.family = "ipv4"
guestinfo.cis.appliance.net.mode = "static"
guestinfo.cis.appliance.net.pnid = "192.168.1.190"
guestinfo.cis.appliance.net.addr = "192.168.1.190"
guestinfo.cis.appliance.net.prefix = "24"
guestinfo.cis.appliance.net.gateway = "192.168.1.1"
guestinfo.cis.appliance.net.dns.servers = "192.168.1.1"
guestinfo.cis.appliance.root.passwd = "VMware1!"
guestinfo.cis.appliance.ssh.enabled = "True"
guestinfo.cis.deployment.autoconfig = "True"
guestinfo.cis.appliance.ntp.servers = "pool.ntp.org"
guestinfo.cis.vmdir.password = "VMware1!"
guestinfo.cis.vmdir.site-name = "virtuallyGhetto"
guestinfo.cis.vmdir.domain-name = "vsphere.local"
guestinfo.cis.ceip_enabled = "False"

Option 2 (Stage 1 Only Configuration):

guestinfo.cis.deployment.node.type = "embedded"
guestinfo.cis.appliance.net.addr.family = "ipv4"
guestinfo.cis.appliance.net.mode = "static"
guestinfo.cis.appliance.net.pnid = "192.168.1.190"
guestinfo.cis.appliance.net.addr = "192.168.1.190"
guestinfo.cis.appliance.net.prefix = "24"
guestinfo.cis.appliance.net.gateway = "192.168.1.1"
guestinfo.cis.appliance.net.dns.servers = "192.168.1.1"
guestinfo.cis.appliance.root.passwd = "VMware1!"
guestinfo.cis.appliance.ssh.enabled = "True"
guestinfo.cis.deployment.autoconfig = "False"
guestinfo.cis.ceip_enabled = "False"
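
If you prefer to append these properties from a terminal rather than a text editor, here is a minimal shell sketch (the VMX path below is just an example; adjust it to wherever Fusion/Workstation placed the VM):

# Save your chosen option block (Option 1 or Option 2) into a file, e.g. guestinfo.txt, then append it
VMX="$HOME/Virtual Machines.localized/VCSA-6.5.vmwarevm/VCSA-6.5.vmx"   # example path only
cat guestinfo.txt >> "$VMX"

# Sanity check that the properties made it into the VMX file
grep guestinfo "$VMX"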

Step 5 - Once you have saved your changes, go ahead and power on the VCSA. At this point, the guestinfo properties that you just added will be read in by VMware Tools as the VCSA boots up, and the configuration will begin. Depending on the speed of your hardware, this can potentially take 15 minutes or more in my experience, so please be patient with the process. If you wish to check the progress of the deployment, you can open a browser to https://[VC-IP]:5480 to see the progress, or you can periodically connect to the hostname/IP address; once it is done, you should be taken to the vCenter Server's main landing page.
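
If you would rather watch for completion from a shell than keep refreshing a browser, a simple polling loop like the following will also do (a rough sketch; substitute the IP address you configured above):

# Poll the vCenter landing page until it responds (-k ignores the self-signed certificate)
until curl -ks --fail -o /dev/null https://192.168.1.190/; do
  echo "vCenter Server not ready yet, waiting 60 seconds..."
  sleep 60
done
echo "vCenter Server is now responding"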

Categories // Fusion, Home Lab, VCSA, vSphere 6.5, Workstation Tags // fusion, vcenter server appliance, VCSA, vcva, vSphere 6.5, workstation

How to run a Docker Container on the vCenter Server Appliance (VCSA) 6.5?

10.24.2016 by William Lam // 8 Comments

One of the most notable changes in the vCenter Server Appliance (VCSA) in vSphere 6.5 is the switch of the underlying OS from SLES to VMware's very own Photon OS. With this change, VMware now owns the entire software stack within the VCSA (OS + application). This will allow VMware to respond and deliver OS and security updates to customers at a much quicker rate than was possible before.

During my testing of the VCSA, I needed to spin up a Docker Container. Given that the VCSA is now Photon OS based, this should be a pretty trivial thing to enable, as it is with a standalone installation of Photon OS. After a bit of trial and error, I found what was needed to get this working on the VCSA. Before jumping into the solution, I should say that this is really for lab and educational purposes. In general, I would NOT recommend installing additional software on the VCSA; not only is this NOT supported by VMware, but you may also potentially impact your vCenter Server by taking resources away from the main application. It is possible to constrain the amount of resources (CPU/Memory) allocated to the Docker Container; please refer to this resource for more information.
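
For example, memory and CPU limits can be applied at launch time using Docker's standard resource flags. Here is a minimal sketch; the image name is a placeholder and the limits are arbitrary values, not recommendations:

# Cap the container at 512MB of memory and a reduced CPU share weight
docker run --rm -it --memory=512m --cpu-shares=512 <image-name>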

For smaller customers, the argument is that they can just run everything on a single system, but in reality there are many benefits to having a separate management VM, which can run Photon OS or any other OS that your organization supports. You can install additional management tools/scripts, and you would not be artificially limited by the VCSA's environment, which is really locked down to what is absolutely needed to run the vCenter Server application and its services.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Given that PowerCLI Core (Linux and Mac OS X) was just recently released, and it also includes a Docker Container, I figured this would be a nice example to start with, as I know a few of you have asked about this possibility 🙂

Step 1 - Install Docker by running the following command (you will need internet access from the VCSA, either directly or through a proxy):

tdnf -y install docker

Step 2 - Load the bridge kernel module, which is required for the Docker daemon to start, by running the following command:

insmod /usr/lib/modules/$(uname -r)/kernel/net/bridge/bridge.ko

Note: The above command does not persist across reboots. If you would like to persist this configuration, please refer to the instructions at the very bottom.

Step 3 - Enable and start the Docker service by running the following commands:

systemctl enable docker
systemctl start docker

Step 4 - Pull down the PowerCLI Core Docker Image from Docker Hub by running the following command:

docker pull vmware/powerclicore

docker-container-on-vcsa-6-5-3
Step 5 - Start the PowerCLI Core Docker Container by running the following command:

docker run --rm -it --entrypoint='/usr/bin/powershell' vmware/powerclicore

docker-container-on-vcsa-6-5-4
As you can see from the screenshot above, you now have PowerShell and the PowerCLI module loaded and running as a Docker Container on the VCSA 🙂 You can apply this to any Docker Container that you have created or pulled directly from Docker Hub. If you prefer to build the PowerCLI Core Docker Container from the Dockerfile, you simply need to download and extract the PowerCLI Core zip file onto the VCSA and then run the following command:

docker build -t vmware/powercli .

docker-container-on-vcsa-6-5-0

How to persist bridge module load across reboots:

Step 1 - Edit /etc/modprobe.d/modprobe.conf and remove the "install bridge /bin/false" entry.

Step 2 - Create a new file called /etc/modules-load.d/bridge.conf which contains the word "bridge" (no quotes). When the system boots up, it will iterate through all the module configuration files and load the respective modules. The bridge module is what is needed to start the Docker daemon.
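
The two steps above can also be scripted; here is a small sketch of the same edits:

# Step 1: remove the entry that prevents the bridge module from loading
sed -i '/install bridge \/bin\/false/d' /etc/modprobe.d/modprobe.conf

# Step 2: tell systemd-modules-load to load the bridge module at boot
echo bridge > /etc/modules-load.d/bridge.conf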

Categories // Automation, Docker, Not Supported, PowerCLI, VCSA, vSphere 6.5 Tags // Docker, Photon, vcenter server appliance, VCSA, vcva, vSphere 6.5

