Multiple VMDKs in VCSA 6.0?

03.09.2015 by William Lam // 10 Comments

One thing you might notice after deploying the new VCSA 6.0 is that it now includes 11 VMDKs. If you are like me, you are probably asking why there are so many. If you look at past releases of the VCSA, they only contained two VMDKs: the first disk was used for both the OS and the various VMware applications like vCenter Server and the vSphere Web Client, and the second disk was where all the application data was stored, such as the VCDB, SSODB, logs, etc.

There were several challenges with this design. One issue was that you could not easily increase the disk capacity for a particular application component; if you needed more storage for the VCDB but not for your logs or other applications, you had no choice but to increase the entire volume. This was actually a pretty painful process because a logical volume manager (LVM) was not used, which meant you had to stop the vCenter Server service, add a new disk, format it and then copy all the data from the old volume to the new one. Another problem with the old design was that you could not apply storage QoS to important data such as the VCDB, which you may want on a faster tier of storage, or place your log data on a slower and cheaper tier by leveraging something like VM Storage Policies, which work on a per-VMDK basis.

For these reasons, the VCSA 6.0 is now made up of 11 individual VMDKs, as seen in the screenshot below.

[Image: 11-vmdks-vcsa-6.0-0]
Here is a useful table that I have created which provides the mapping of each of the VMDKs to its respective function.

Disk    Size    Purpose                          Mount Point
VMDK1   12GB    / and Boot                       / and /boot
VMDK2   1.2GB   Temp Mount                       /tmp/mount
VMDK3   25GB    Swap                             SWAP
VMDK4   25GB    Core                             /storage/core
VMDK5   10GB    Log                              /storage/log
VMDK6   10GB    DB                               /storage/db
VMDK7   5GB     DBLog                            /storage/dblog
VMDK8   10GB    SEAT (Stats, Events and Tasks)   /storage/seat
VMDK9   1GB     NetDumper                        /storage/netdump
VMDK10  10GB    AutoDeploy                       /storage/autodeploy
VMDK11  5GB     Inventory Service                /storage/invsvc

In addition, increasing disk capacity for a particular VMDK has been greatly simplified, as the VCSA 6.0 now uses LVM to manage each of the partitions. You can now increase disk space for a particular volume on the fly while vCenter Server is still running, and the changes take effect immediately. You can refer to this article for the simple two-step process.
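As a rough sketch of that two-step flow (assuming the vpxd_servicecfg utility shipped with VCSA 6.0; verify the exact command on your build), you first grow the relevant VMDK and then ask LVM to expand the underlying volume from within the appliance:

# 1. Increase the size of the target VMDK (e.g. VMDK6 backing /storage/db) in the vSphere Web Client
# 2. From the VCSA shell, expand the logical volume(s) online:
vpxd_servicecfg storage lvm autogrow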

Here are some useful commands to get more details of the filesystem structure in the new VCSA.

lsblk

[Image: 11-vmdks-vcsa-6.0-2]

lsscsi

[Image: 11-vmdks-vcsa-6.0-3]
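You can also cross-check the mount points listed in the table above against the mounted filesystems with a plain df:

df -h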

Categories // VCSA, vSphere 6.0 Tags // lsscsi, lsblk, lvm, SEAT, VCSA, vcva, vmdk, vSphere 6.0

How to configure SMP-FT using Nested ESXi in vSphere 6?

03.06.2015 by William Lam // 1 Comment

Symmetric Multi-Processing Fault Tolerance (SMP-FT) has been a long-awaited feature for many VMware customers. With the release of vSphere 6.0, the SMP-FT capability is now finally available, and if you want to try out this new feature and see how it works from a "functional" perspective, you can easily do so by running it in a Nested ESXi environment. SMP-FT no longer uses the "record/replay" capability like its predecessor, Uniprocessor Fault Tolerance (UP-FT). Instead, SMP-FT uses a new Fast Checkpointing technique which not only improves overall performance but also greatly simplifies and reduces the additional configuration needed when running in a Nested ESXi environment.

Disclaimer: Running SMP-FT in a Nested ESXi environment does not replace or substitute for actual testing on physical hardware. For any type of performance testing, please test SMP-FT using real hardware.

Requirements:

  • Physical ESXi host running either ESXi 5.5 or 6.0
  • vCenter Server 6.0
  • 2 x Nested ESXi VMs running ESXi 6.0 (vHW9+)
  • Shared storage for the Nested ESXi VMs

Instructions:

Step 1 - Create a Nested ESXi VM using the guestOS type "ESXi 5.5/6.0 or later". You will need at least 2 vCPUs, at least 4GB of memory for the installation of ESXi and, most importantly, a VMXNET3 network adapter. The reason a VMXNET3 adapter is required is that SMP-FT requires a 10Gbit network connection, and the VMXNET3 driver can simulate a 10Gbit connection for a Nested ESXi VM. For further instructions on creating a Nested ESXi VM, please take a look at this article. If you are unable to add a VMXNET3 adapter, you may need to first change the guestOS type to "Other 64-bit", add the adapter and then change the guestOS type back.

[Image: smp-ft-nested-esxi-0]
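If you prefer to work with the VM configuration directly, the relevant settings for Step 1 boil down to a handful of .vmx entries. This is just a sketch under the assumptions above (ESXi 6.0 guest OS type, 2 vCPUs, 4GB of memory); adjust the values to your environment:

guestOS = "vmkernel6"              # "ESXi 6.x" guest OS type
numvcpus = "2"                     # at least 2 vCPUs
memSize = "4096"                   # at least 4GB of memory
vhv.enable = "TRUE"                # expose hardware-assisted virtualization to the nested host
ethernet0.virtualDev = "vmxnet3"   # VMXNET3 presents a 10Gbit link, which SMP-FT requires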
Step 2 - Install ESXi 6.0 on the Nested ESXi VMs, deploy a vCenter Server 6.0 if you have not done so already, and add your Nested ESXi instances to a new vSphere Cluster with vSphere HA enabled.

Step 3 - You will need to enable both the vMotion and Fault Tolerance traffic types on the VMkernel interface that you wish to carry FT traffic.

[Image: smp-ft-nested-esxi-1]
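If you would rather tag the interface from the ESXi Shell instead of the Web Client, something along these lines should work (a sketch; it assumes vmk0 is the VMkernel interface carrying your vMotion and FT traffic):

# Tag vmk0 for vMotion and Fault Tolerance logging traffic
esxcli network ip interface tag add -i vmk0 -t VMotion
esxcli network ip interface tag add -i vmk0 -t faultToleranceLogging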
Step 4 - At this point, you can create a real or dummy VM and power it on. Once the VM is powered on, you can enable either UP-FT or SMP-FT by right-clicking it and selecting "Enable Fault Tolerance".

[Image: smp-ft-nested-esxi-2]
As you can see from the screenshot above, I have successfully enabled FT on a VM with 4 vCPUs running inside of a Nested ESXi VM. How cool is that!? Hopefully this will help you get more familiar with the new SMP-FT feature when you are ready to give it a real spin on real hardware 🙂

Note: Intel Sandy Bridge is recommended when using SMP-FT (on real physical hardware), but if you have older CPUs, you can enable "Legacy FT" mode by adding the VM Advanced Setting "vm.uselegacyft" to the VM you are enabling FT on.
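For reference, the advanced setting is just a key/value pair, so you could also add it directly to the VM's .vmx file while the VM is powered off (a sketch; the value is assumed to be TRUE):

vm.uselegacyft = "TRUE"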

Categories // ESXi, Nested Virtualization, vSphere 6.0 Tags // fault tolerance, nested ft, nested virtualization, smp-ft, vm.uselegacyft, vSphere 6.0

How to change/deploy VCSA 6.0 with default bash shell vs appliancesh?

03.06.2015 by William Lam // 10 Comments

When logging into the new VCSA 6.0 via SSH, you will notice that you are no longer dropped into a normal bash shell but into a new appliancesh (pronounced "appliance shell") environment. This new interface provides a basic set of virtual appliance management capabilities, including Ruby vSphere Console (RVC) access, which makes the majority of operations convenient for a vSphere Administrator, but it also helps restrict unnecessary access to the underlying filesystem, which can be helpful from a security standpoint.

If you need to access the underlying filesystem, you can temporarily enable it by running the following two commands:

shell.set --enabled True
shell

[Image: applianceshell-default-bash]
If you need to transfer files to/from the VCSA via SCP/WinSCP, you will need to change the default shell from /bin/appliancesh to /bin/bash, or else the operation will fail. You can easily do this by using the chsh command:

chsh -s "/bin/bash" root
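To confirm the change took effect, you can check root's login shell in /etc/passwd:

grep ^root /etc/passwd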

If you would rather have the bash shell configured as the default after deployment and not have to go through this manual process each time, you can actually configure this using a hidden option called guestinfo.cis.appliance.root.shell.

This property allows you to specify the default shell for the "root" account, and you can only modify it if you deploy the VCSA using ovftool. Here is the parameter you would append to the ovftool argument list:

--prop:guestinfo.cis.appliance.root.shell="/bin/bash"

You can leverage this new property to automate the deployment of the new VCSA 6.0; for more details, be sure to check out my VCSA 6.0 Automation Series.
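For example, a minimal hypothetical ovftool invocation might look like the following; the OVA filename and target locator are placeholders, and all of the other required VCSA deployment properties have been omitted for brevity:

ovftool \
  --acceptAllEulas \
  --prop:guestinfo.cis.appliance.root.shell="/bin/bash" \
  VMware-VCSA-all-6.0.0.ova \
  vi://root@esxi-host.example.com/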

Categories // Automation, OVFTool, VCSA, vSphere 6.0 Tags // appliancesh, guestinfo, ovftool, VCSA, vcva, vSphere 6.0
