How to run Nested ESXi on the vCloud Hybrid Service?

05.02.2014 by William Lam // 7 Comments

Today I was granted access to VMware's vCloud Hybrid Service, and the first order of business for me, of course, was to provision a Nested ESXi VM! After going through the vCHS UI (which is very slick and easy to use, by the way) and the vCloud Director UI, I realized the ESXi guestOS type has not been enabled on the backend vCloud Director database. This totally makes sense, as vCHS is a production-ready service and they definitely would not want to run anything that is not officially supported.

Having said that, I can see the benefit for customers who would like to build out a Nested ESXi environment on vCHS for lab purposes instead of having to manage their own. Some customers even leverage Nested ESXi as part of their development and testing of software, and it can be challenging at times to quickly spin up a brand new environment. Instead, they can go to vCHS and, with just a couple of clicks in the UI or automatically using the vCloud APIs, provision a couple of Nested ESXi instances for testing. You can easily discard the resources once you are done or keep them running a bit longer.

Having worked with vCloud Director in the past, I knew that you could import an OVF/OVA, and I thought maybe I could just import the Nested ESXi OVF templates that I built and potentially work around this vCHS "limitation" 🙂

Disclaimer: Nested ESXi and Nested Virtualization are not officially supported by VMware, nor are they supported on vCHS

I tried to upload one of the OVF templates that I built, but it turns out vCloud Director does not support the Dynamic Disks feature, so I had to perform two additional steps.

Step 1 - Download one of the following Nested ESXi OVF templates

  • Single Nested ESXi VM Template
  • 3-Node VSAN Nested ESXi VM Template
  • 32-Node VSAN Nested ESXi VM Template

Step 2 - Import the OVF template into an existing vSphere environment, and ensure you do so using the vSphere Web Client, as some of the properties may not be imported properly otherwise

Step 3 - Once deployed, go ahead and re-export the image to an OVF/OVA (I chose OVA as it is a single file). This will generate the empty VMDKs for you, so the image should still be very small (< 1MB)
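If you prefer to script the export, ovftool can do the same thing; here is a hedged sketch, where the install path, vCenter inventory path, credentials and output file are all examples:

# Re-export the deployed VM to a single OVA file using ovftool
& 'C:\Program Files\VMware\VMware OVF Tool\ovftool.exe' `
    'vi://administrator@vcenter.lab.local/Datacenter/vm/Nested-ESXi-VM' `
    'C:\export\Nested-ESXi.ova'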

Step 4 - Log in to your vCHS account and click on your Virtual Datacenter. Select Virtual Machines and then click on Manage in vCloud Director. Import the OVF/OVA that you just exported

Step 5 - Once the import has completed, you now have a Virtual Machine configured with the correct guestOS type, which should be VMware ESXi 5.x as seen in the screenshot below

[Screenshot: imported VM showing guest OS type "VMware ESXi 5.x" in vCloud Director]
Step 6 - At this point, you can either mount an ESXi ISO through your browser or upload it into the vCloud Director Catalog so you can mount it locally, and then begin your installation of ESXi. Below is a screenshot of 3 Nested ESXi VMs running on vCHS

[Screenshot: three Nested ESXi VMs running on vCHS]
Note: It looks like some of the advanced VM settings that are part of my OVF template are ignored during the vCloud Director import. This means that if you would like to run a Nested VSAN environment on vCHS, you will not be able to rely on the SSD emulation setting; instead, you will need to run through the ESXCLI claim rules to mark particular disks as "SSD" devices, as shown in the sketch below. It would have been really nice if vCloud Director preserved all the advanced VM settings, but at least you can still run a Nested VSAN environment.
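If you want to script the SSD tagging, here is a hedged PowerCLI sketch using Get-EsxCli against the Nested ESXi host. The -V2 interface requires a newer PowerCLI release (older releases expose the same namespaces with positional arguments), and the host and device names are examples; you can list devices with $esxcli.storage.core.device.list.Invoke() to find yours:

$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "nested-esxi-01.lab.local") -V2
# Add a SATP claim rule that tags the device as SSD
$esxcli.storage.nmp.satp.rule.add.Invoke(@{
    satp   = "VMW_SATP_LOCAL"
    device = "mpx.vmhba1:C0:T1:L0"
    option = "enable_ssd"
})
# Reclaim the device so the new rule takes effect
$esxcli.storage.core.claiming.reclaim.Invoke(@{device = "mpx.vmhba1:C0:T1:L0"})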

So there you have it, Nested ESXi running on vCHS! I am kind of curious if this is the first instance of a Nested ESXi VM running on vCHS without having admin access on the backend system?

Note: One limitation to be aware of is that since the backend of vCloud Director is not properly enabled for Nested Virtualization support, you will NOT be able to run nested VMs on top of the Nested ESXi instances. This is due to the lack of a Network Pool with both Promiscuous Mode & Forged Transmits enabled, which is a requirement for proper nested VM connectivity. I wonder if vCHS should provide Nested Virtualization capabilities? I know I definitely would like to see it, what do you think? Leave a comment if you have some thoughts on this topic.
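For comparison, in a vSphere environment you do control, enabling those two settings on a standard vSwitch portgroup is a short PowerCLI pipeline; the host and portgroup names below are examples:

# Enable Promiscuous Mode and Forged Transmits on the portgroup
# backing the Nested ESXi VMs
Get-VMHost -Name "esxi-01.lab.local" |
    Get-VirtualPortGroup -Name "NestedESXi-PG" |
    Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true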

UPDATE (05/4/14) - If you wish to run a Nested VSAN environment on vCHS, you will need to take a look at this blog post here on how to "fake" an SSD on one of the devices by using ESXCLI claim rules (similar to the sketch above). The reason for this is that you will not be able to leverage the other method of emulating an SSD device via an advanced setting, as that requires access to the underlying vSphere environment, which you will not have in vCHS.

Categories // ESXi, Nested Virtualization, VSAN, vSphere Tags // ESXi, nested, nested virtualization, ssd, vCHS, vcloud hybrid service, VSAN

VSAN Flash/MD capacity reporting

04.29.2014 by William Lam // Leave a Comment

One of the capabilities available with VSAN when creating a VM Storage Policy is the ability to specify the amount of Flash to reserve for a Virtual Machine object as a read cache. For Virtual Machines that require high levels of performance, you can assign this policy to the Virtual Machine and VSAN will ensure a percentage of the Flash capacity is provided to your workload.
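For reference, later PowerCLI releases ship SPBM cmdlets that can define such a policy programmatically; here is a hedged sketch, where the 5 (percent) value and the policy name are examples:

# Reserve 5% of an object's logical size as flash read cache
$cap  = Get-SpbmCapability -Name "VSAN.flashReadCacheReservation"
$rule = New-SpbmRule -Capability $cap -Value 5
$rset = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "HighPerf-FlashCache" -AnyOfRuleSets $rset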

[Screenshot: VM Storage Policy with a flash read cache reservation]
A couple of weeks back I was asked whether it was possible to report on the total amount of Flash capacity available to a VSAN Cluster, including what has been reserved and what is in use. I thought that this was a great idea, as users would probably want to be able to see their utilization over time and ensure they do not over-provision their Flash capacity.

For those of you who have used RVC, this information is somewhat available today using the vsan.disks_stats command. The only problem is that it is provided at a per-device level for each ESXi host and not as an aggregate view for the entire VSAN Cluster.

[Screenshot: per-device output of the RVC vsan.disks_stats command]
Leveraging my earlier work exploring the VSAN API and the VSAN component count, I was able to extract the information needed to provide an aggregate view. To demonstrate this functionality, I have created two sample scripts: a vSphere SDK for Perl script called vsanFlashAndMDCapacity.pl and a PowerCLI script called vsanFlashAndMDCapacity.ps1

Disclaimer: These scripts are provided for informational and educational purposes only. They should be thoroughly tested before attempting to use them in a production environment.

Both scripts work exactly the same way; you just need to connect to a vCenter Server that has at least one VSAN Cluster. The scripts will automatically search for all VSAN-enabled vSphere Clusters and provide the following information:

  • Total SSD Capacity
  • Total SSD Reserved Capacity
  • Total SSD Used Capacity
  • Total MD Capacity
  • Total MD Reserved Capacity
  • Total MD Used Capacity

Here is an example screenshot for the vSphere SDK for Perl script:

[Screenshot: sample output from the vSphere SDK for Perl script]
Here is an example screenshot for the PowerCLI script:

[Screenshot: sample output from the PowerCLI script]
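For those curious how the numbers are gathered, here is a condensed, hedged PowerCLI sketch of the core loop. It assumes the HostVsanInternalSystem.QueryPhysicalVsanDisks() API and its JSON property names; treat it as an illustration rather than the full script:

# Aggregate flash (SSD) vs. magnetic disk (MD) capacity per VSAN cluster
foreach ($cluster in (Get-Cluster | Where-Object { $_.VsanEnabled })) {
    $t = @{ ssd = 0; ssdRes = 0; ssdUsed = 0; md = 0; mdRes = 0; mdUsed = 0 }
    foreach ($vmhost in ($cluster | Get-VMHost)) {
        $vis = Get-View $vmhost.ExtensionData.ConfigManager.VsanInternalSystem
        # Returns a JSON string keyed by disk UUID
        $json = $vis.QueryPhysicalVsanDisks(@("isSsd","capacity","capacityReserved","capacityUsed"))
        foreach ($disk in ($json | ConvertFrom-Json).PSObject.Properties.Value) {
            if ($disk.isSsd) {
                $t.ssd += $disk.capacity; $t.ssdRes += $disk.capacityReserved; $t.ssdUsed += $disk.capacityUsed
            } else {
                $t.md += $disk.capacity; $t.mdRes += $disk.capacityReserved; $t.mdUsed += $disk.capacityUsed
            }
        }
    }
    [PSCustomObject]@{
        Cluster   = $cluster.Name
        SsdGB     = [math]::Round($t.ssd / 1GB, 2)
        SsdResGB  = [math]::Round($t.ssdRes / 1GB, 2)
        SsdUsedGB = [math]::Round($t.ssdUsed / 1GB, 2)
        MdGB      = [math]::Round($t.md / 1GB, 2)
        MdResGB   = [math]::Round($t.mdRes / 1GB, 2)
        MdUsedGB  = [math]::Round($t.mdUsed / 1GB, 2)
    }
}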
One question I had while looking at the results was what the "Used" property actually meant. I think this is best explained with an example, after learning about the details from engineering.

Let's say there are 2 VSAN objects:

  • Object1: Configured size: 100GB, space reservation 10%, actual data written 5GB.
  • Object2: Configured size: 100GB, space reservation 10%, actual data written 15GB.

This would mean:

Object1:
Configured/Provisioned: 100GB
Reserved: 10GB
Physical Used: 5GB
Used: 10GB

Object2:
Configured/Provisioned: 100GB
Reserved: 10GB
Physical Used: 15GB
Used: 15GB

The "Used" property is then calculated as the MAX(Physical Used, Reserved). I have also shared this information with engineering, perhaps they may consider adding this information to RVC 🙂 If you think this is something you would like to see in RVC, please leave a comment.

Categories // Automation, VSAN, vSphere 5.5 Tags // ESXi 5.5, flash, PowerCLI, ssd, VSAN, vSphere 5.5, vSphere API

Handy VSAN VOBs for creating vCenter Alarms

04.22.2014 by William Lam // 3 Comments

There have been quite a few questions lately around vCenter Server Alarms for VSAN; one in particular that I have noticed is around individual disk failures. Outside of the generic default datastore alarms, there seem to be only two VSAN-specific alarms:

[Screenshot: the two default VSAN alarms in vCenter]
I figured there must be other useful alarms that we could create, especially after showing how you can create a vCenter Server Alarm to monitor the VSAN component count threshold based on a particular VSAN VOB. I took a look around and found the following VSAN-specific VOBs, which could be useful for creating additional vCenter Alarms.

VOB ID                                         VOB Description
esx.audit.vsan.clustering.enabled              VSAN clustering services have been enabled.
esx.clear.vob.vsan.pdl.online                  VSAN device has come online.
esx.clear.vsan.clustering.enabled              VSAN clustering services have now been enabled.
esx.clear.vsan.vsan.network.available          VSAN now has at least one active network configuration.
esx.clear.vsan.vsan.vmknic.ready               A previously reported vmknic now has a valid IP.
esx.problem.vob.vsan.lsom.componentthreshold   VSAN Node: Near node component count limit.
esx.problem.vob.vsan.lsom.diskerror            VSAN device is under permanent error.
esx.problem.vob.vsan.lsom.diskgrouplimit       Failed to create a new disk group.
esx.problem.vob.vsan.lsom.disklimit            Failed to add disk to disk group.
esx.problem.vob.vsan.pdl.offline               VSAN device has gone offline.
esx.problem.vsan.clustering.disabled           VSAN clustering services have been disabled.
esx.problem.vsan.lsom.congestionthreshold      VSAN device Memory/SSD congestion has changed.
esx.problem.vsan.net.not.ready                 A vmknic added to the VSAN network configuration doesn't have a valid IP. Network is not ready.
esx.problem.vsan.net.redundancy.lost           VSAN doesn't have any redundancy in its network configuration.
esx.problem.vsan.net.redundancy.reduced        VSAN is operating on reduced network redundancy.
esx.problem.vsan.no.network.connectivity       VSAN doesn't have any networking configuration for use.
esx.problem.vsan.vmknic.not.ready              A vmknic added to the VSAN network configuration doesn't have a valid IP. It will not be in use.

Looking at the list above, the following two VOBs seem like they would be useful for alerting on a disk failure:

  • esx.problem.vob.vsan.lsom.diskerror
  • esx.problem.vob.vsan.pdl.offline

Disclaimer: There are no guarantees that a disk error or failure will automatically trigger these VOBs, due to the unpredictable nature of how a disk may fail, especially if it fails intermittently.

Even though we cannot simulate a disk error on a physical disk, we can still do some magic using a Nested VSAN environment. The worst-case scenario you could run into is that one of the disks goes completely offline. We can simulate similar behavior in a Nested ESXi environment by removing one of the virtual disks from the Virtual Machine (not deleting it).

To demonstrate this scenario, here are the steps to create a vCenter Alarm using these two VOBs:

Step 1 - Create a new vCenter Alarm and give it a name. Select "Hosts" for Monitor and "Specific event occurring …" for Monitor type:

[Screenshot: new alarm definition with Monitor set to Hosts]
Step 2 - Add the two VOBs above as Event triggers:

[Screenshot: alarm event triggers for the two VSAN disk VOBs]
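If you would rather script the alarm creation than click through the UI, here is a hedged sketch using the vSphere API from PowerCLI. The alarm name and target datacenter are examples; the EventEx/EventTypeId pattern follows the vSphere alarm API:

$dc = Get-Datacenter -Name "Datacenter"
$alarmMgr = Get-View AlarmManager

$spec = New-Object VMware.Vim.AlarmSpec
$spec.Name = "VSAN Disk Failure"
$spec.Description = "Triggers on the two VSAN disk VOBs"
$spec.Enabled = $true

# One EventAlarmExpression per VOB; both are EventEx events on hosts
$spec.Expression = New-Object VMware.Vim.OrAlarmExpression
$spec.Expression.Expression = @(
    "esx.problem.vob.vsan.lsom.diskerror","esx.problem.vob.vsan.pdl.offline" | ForEach-Object {
        $e = New-Object VMware.Vim.EventAlarmExpression
        $e.EventType = "EventEx"
        $e.EventTypeId = $_
        $e.ObjectType = "HostSystem"
        $e.Status = "red"
        $e
    }
)

# Create the alarm on the datacenter so it covers all hosts beneath it
$alarmMgr.CreateAlarm($dc.ExtensionData.MoRef, $spec) | Out-Null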
Step 3 - Remove one of the Virtual Disks (SSD/MD) from the Virtual Machine running the Nested ESXi instance.
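If the outer VM is managed by a vCenter you can reach with PowerCLI, this step can be scripted too; a hedged sketch with example VM and disk names:

# Detach (do not delete) a virtual disk from the VM backing the Nested ESXi host;
# omitting -DeletePermanently keeps the VMDK so it can be re-attached later
Get-VM -Name "Nested-ESXi-01" |
    Get-HardDisk -Name "Hard disk 3" |
    Remove-HardDisk -Confirm:$false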

Step 4 - There are two ways in which you can trigger the alarm. You can either create a new Virtual Machine, which will try to write to the Nested ESXi VM from which you removed the Virtual Disk, or you can rescan the storage adapter on the Nested ESXi VM. In my environment, I happened to have a VM running on an NFS datastore, and I performed a Storage vMotion of the VM onto my VSAN Datastore using the default FTT=1 policy on a three-node VSAN Cluster. This immediately triggered the alarm, as seen in the screenshots below:

[Screenshot: the disk failure alarm triggered in vCenter]

[Screenshot: details of the triggered VSAN disk failure alarm]
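For the rescan option in Step 4, a PowerCLI one-liner along these lines should do the trick while connected to the Nested ESXi host or the vCenter managing it (hypothetical host name):

# Rescan all HBAs so the missing disk is noticed immediately
Get-VMHost -Name "nested-esxi-01.lab.local" | Get-VMHostStorage -RescanAllHba | Out-Null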

Categories // VSAN, vSphere 5.5 Tags // alarm, ESXi 5.5, vob, VSAN, vSphere 5.5
