Quick Tip - lldpnetmap, a handy utility to map pNic to pSwitch on ESXi

05.20.2014 by William Lam // 8 Comments

Last week while attending VMware's R&D Innovation Offsite (RADIO), I ran into Christian Dickmann, who as many of you know works on the VSAN team. During our discussion, he mentioned a nifty little utility called lldpnetmap that he had used recently. This utility is found within the ESXi Shell and provides a quick and easy way to display the mapping between an ESXi host's physical network interfaces and the physical switches they are connected to, using LLDP (Link Layer Discovery Protocol). This is similar to the information that Cisco's proprietary discovery protocol (CDP) provides, but limited to details about the physical switch.

CDP has been supported on vSphere Standard Switches for quite some time now, but LLDP support was only added more recently with the introduction of the vSphere Distributed Switch. Chris Wahl has a great article here on why you should enable CDP/LLDP and the benefits you get from it. For customers who are running non-Cisco switches, lldpnetmap is a great way to quickly figure out which physical switch your ESXi hosts are connected to, which is especially useful during troubleshooting where every minute counts.

There are actually two ways in which you can run the lldpnetmap utility. The first method is by running it within the ESXi Shell using the following command:

lldpnetmap

The command takes about 30-60 seconds to run and, if successful, you should see the name of each physical network switch along with the vmnics (pNICs) connected to it.

Here is a screenshot of what that output looks like:

[Screenshot: lldpnetmap-0]
The second method, which is actually how Christian had been using it, is through RVC. Using the vsan.lldpnetmap command, you can target an individual ESXi host or an entire vSphere Cluster. Even though the command lives under the VSAN namespace, you do not need to have VSAN enabled to use it.
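As a quick sketch of the RVC usage (the datacenter, cluster and host names below are just placeholders for your own inventory paths), you can point the command at a whole cluster:

vsan.lldpnetmap /localhost/[DATACENTER-NAME]/computers/[CLUSTER-NAME]

or at a single host:

vsan.lldpnetmap /localhost/[DATACENTER-NAME]/computers/[CLUSTER-NAME]/hosts/[ESXI-HOSTNAME]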

Here is a screenshot of what that output looks like:

[Screenshot: lldpnetmap-1]
Note: If you do not see any output, you are most likely connected to a Cisco switch (which typically uses CDP rather than LLDP) or to a non-managed switch that does not support LLDP.

This is one utility I will be sure to remember the next time I need to troubleshoot a networking issue. Thanks for sharing this handy tidbit, Christian!

Categories // ESXi Tags // ESXi 5.5, LLDP, lldpnetmap, rvc, vSphere 5.5

How to run the VSAN Observer in "collection" mode in the background?

05.18.2014 by William Lam // 1 Comment

The VSAN Observer is a very powerful tool that allows you to get in-depth performance analysis of your VSAN environment. One of its really useful features is the ability to run the VSAN Observer in "collection" mode by using the --generate-html-bundle option. Something I have noticed when running the VSAN Observer in collection mode is that you cannot close the current SSH session, or the collection will stop. I have even tried running the VSAN Observer using RVC's not very well known "script" feature and then backgrounding the process, but after a minute or so the collection also just stops.

The only workaround that I have found is to use Screen, a full-screen window session manager usually found on most Linux/UNIX and Mac OS X systems. Having used Screen in a past life as a Systems Administrator, I can say it is an extremely useful tool when you need to perform long-running tasks and not have to worry about your SSH session being disconnected. You can start a session, disconnect and then re-connect at a later time to monitor the progress.

If you are on a Mac, Screen should already be installed. Below are the steps to run the VSAN Observer on the VCSA using Screen:

Step 1 - Start Screen and give the session a name, such as "VSAN-Observer":

screen -S VSAN-Observer

Step 2 - SSH to your VCSA, log in to RVC and start the VSAN Observer in collection mode as you normally would. For step-by-step instructions, check out Rawlinson Rivera's article here on setting up the VSAN Observer.
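In case it helps, here is a rough sketch of what that boils down to, using the same options that appear in the script further below (the datacenter, cluster and output directory are just placeholders):

rvc [USERNAME:PASSWORD]@localhost

Then, from the RVC prompt:

vsan.observer --run-webserver --force --generate-html-bundle /storage/core --max-runtime 1 /localhost/[DATACENTER-NAME]/computers/[CLUSTER-NAME]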

Step 3 - Once the VSAN Observer is running, enter the following key combination, which will detach your Screen session:

Ctrl+a d

Step 4 - To list the available Screen sessions, you can run the following command:

screen -list

[Screenshot: vsan-observer-rvc-script-1]
Step 5 - To re-attach to your Screen session, you will need to specify the session name. In our example, it was called VSAN-Observer:

screen -r VSAN-Observer

As an alternative to Step 2, instead of running the VSAN Observer interactively, I actually prefer to run it using RVC's script option. It is just less typing for me and makes it easy to collect stats across multiple VSAN environments.

To do so, you will need to create a script file that contains the following:

# William Lam
# www.virtuallyghetto.com
# RVC script for running VSAN Observer

datacenter_name = "VSAN-Datacenter"
cluster_name = "VSAN-Cluster"
vsan_html_output_directory = "/storage/core"
vsan_observer_runtime = "1" # maximum collection runtime in hours, passed to --max-runtime

# Do not edit beyond here #

puts "Enabling VSAN Observer collection for: #{cluster_name} ..."
rvc_exec("vsan.observer --run-webserver --force --generate-html-bundle #{vsan_html_output_directory} --max-runtime #{vsan_observer_runtime} /localhost/#{datacenter_name}/computers/#{cluster_name}")

The RVC script option accepts a Ruby script to execute and, if we take a look at the script, we are simply passing some arguments to the vsan.observer command.

To use the RVC script instead of interactively logging in, you can run the following command:

rvc -s [SCRIPT-NAME] [USERNAME:PASSWORD]@localhost
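For example, if the script above were saved as vsan_observer_collect.rb (the file name here is only illustrative), the invocation would look something like this:

rvc -s vsan_observer_collect.rb [USERNAME:PASSWORD]@localhost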

[Screenshot: vsan-observer-rvc-script-0]
I think a nice feature enhancement to the VSAN Observer would be the ability to automatically background the collection process without having to rely on the existing SSH connection; perhaps this is something Christian may consider for a future update to RVC 🙂 In the meantime, this is a pretty decent workaround.

Categories // ESXi, VSAN Tags // ESXi 5.5, ruby, ruby vsphere console, rvc, VCSA, VSAN, vsan observer, vSphere 5.5

How to run Nested ESXi on the vCloud Hybrid Service?

05.02.2014 by William Lam // 7 Comments

Today I was granted access to VMware's vCloud Hybrid Service (vCHS) and the first order of business for me, of course, was to provision a Nested ESXi VM! After going through the vCHS UI (which is very slick and easy to use, by the way) and the vCloud Director UI, I realized the ESXi guestOS type has not been enabled on the backend of the vCloud Director database. This totally makes sense, as vCHS is a production-ready service and they definitely would not want to run anything that is not officially supported.

Having said that, I can see the benefit to customers who would like to build out a Nested ESXi environment on vCHS for lab purposes instead of having to manage their own. Some customers even leverage Nested ESXi as part of their development and testing of software, and it can be challenging at times to quickly spin up a brand new environment. Instead, they can go to vCHS and, with just a couple of clicks in the UI or automatically using the vCloud APIs, provision a couple of Nested ESXi instances for testing. You can easily discard the resources once you are done or keep them running a bit longer.

Having worked with vCloud Director in the past, I knew that you could import an OVF/OVA, and I thought maybe I could just import the Nested ESXi OVF templates that I built and potentially work around this vCHS "limitation" 🙂

Disclaimer: Nested ESXi and Nested Virtualization are not officially supported by VMware, nor are they supported on vCHS

I tried to upload one of the OVF templates that I built, but it turns out vCloud Director does not support the Dynamic Disks feature, so I had to perform two additional steps.

Step 1 - Download one of the following Nested ESXi OVF templates

  • Single Nested ESXi VM Template
  • 3-Node VSAN Nested ESXi VM Template
  • 32-Node VSAN Nested ESXi VM Template

Step 2 - Import the OVF template into an existing vSphere environment and ensure you are doing so using the vSphere Web Client, as some of the properties may not be imported properly otherwise

Step 3 - Once deployed, go ahead and re-export the image to an OVF/OVA (I chose OVA as it is a single file); this will generate the empty VMDKs for you, so the image should still be very small (< 1MB)

Step 4 - Log in to your vCHS account and click on your Virtual Datacenter. Select Virtual Machines and then click on Manage in vCloud Director. Import the OVF/OVA that you have just exported

Step 5 - Once the import has been completed, you now have a Virtual Machine that has been configured with the correct guestOS type, which should be VMware ESXi 5.x as seen in the screenshot below

[Screenshot: nested-esxi-on-vchs-2]
Step 6 - At this point, you can either mount an ESXi ISO over your browser or upload it into the vCloud Director Catalog so you can mount it locally and begin your installation of ESXi. Below is a screenshot of 3 Nested ESXi VMs running on vCHS

[Screenshot: nested-esxi-on-vchs-3]
Note: It looks like some of the advanced VM settings that are part of my OVF template are ignored as part of the vCloud Director import. This means that if you would like to run a Nested VSAN environment on vCHS, you will not be able to rely on the SSD emulation setting; instead, you will need to run through the ESXCLI claim rules to mark particular disks as "SSD" devices. It would have been really nice if vCloud Director preserved all the advanced VM settings, but at least you can still run a Nested VSAN environment.

So there you have it, Nested ESXi running on vCHS! I am kind of curious whether this is the first instance of a Nested ESXi VM running on vCHS without admin access to the backend system?

Note: One limitation to be aware of is that since the backend of vCloud Director is not properly enabled for Nested Virtualization support, you will NOT be able to run Nested VMs on top of the Nested ESXi instances. This is due to the lack of a Network Pool with both Promiscuous Mode & Forged Transmits enabled, which is a requirement for proper Nested VM connectivity. I wonder if vCHS should provide Nested Virtualization capabilities? I know I definitely would like to see it, what do you think? Leave a comment if you have some thoughts on this topic.

UPDATE (05/04/14) - If you wish to run a Nested VSAN environment on vCHS, you will need to take a look at this blog post here on how to "fake" an SSD on one of the devices by using ESXCLI claim rules. The reason for this is that you will not be able to leverage the other method of emulating an SSD device via an advanced VM setting, as that requires access to the underlying vSphere environment, which you will not have in vCHS.
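For reference, here is a rough sketch of that ESXCLI approach, run from the ESXi Shell of the Nested ESXi VM (the device identifier below is only an example; substitute your own from esxcli storage core device list):

esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba1:C0:T1:L0 --option=enable_ssd
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0
esxcli storage core device list -d mpx.vmhba1:C0:T1:L0

The last command lets you verify that the device is now being reported as an SSD.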

Categories // ESXi, Nested Virtualization, VSAN, vSphere Tags // ESXi, nested, nested virtualization, ssd, vCHS, vcloud hybrid service, VSAN
