Nested ESXi 5.1 Supports VMXNET3 Network Adapter Type

09.11.2012 by William Lam // 15 Comments

I noticed something interesting while extracting the contents of the ESXi 5.1 ISO for some kickstart configurations...

Do you see it? It's a VMXNET3 driver for the VMkernel! I also confirmed this by running the following ESXCLI command, querying for the VMkernel module "vmxnet3":

# esxcli system module get -m vmxnet3
Module: vmxnet3
Module File: /usr/lib/vmware/vmkmod/vmxnet3
License: GPL
Version: Version 1.1.32.0, Build: 799733, Interface: 9.2 Built on: Aug  1 2012
Signed Status:
Signature Issuer:
Signature Digest:
Signature FingerPrint:
Provided Namespaces:
Required Namespaces: [email protected], com.vmware.vmkapi@v2_1_0_0

***Disclaimer***: This is for educational purposes only; it is not officially supported by VMware. Use at your own risk. The vSphere 5.1 release notes also mention that VMs running on nested ESXi hosts using the VMXNET3 driver could potentially crash. Again, not supported; use at your own risk.

Next I decided to create a Nested ESXi 5.1 VM, but instead of selecting the e1000 driver, which previously was the only network adapter type that would function for running a nested ESXi host, I chose the VMXNET3 adapter, and to my surprise ESXi's networking stack was fully functional.
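
For reference, here is roughly what the relevant entries look like in the VM's .vmx file once the adapter type is set. This is a minimal sketch; the network name is of course environment-specific:

ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VM Network"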

You can see from the above screenshot that I have two VMXNET3 network adapters on my nested ESXi 5.1 VM. Here are two additional screenshots of the physical adapters as seen by the nested ESXi 5.1 host, where they show up as VMware Inc. VMXNET3.
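
You can confirm the same thing from the ESXi Shell of the nested host using ESXCLI. Here is a sketch of roughly what the output should look like; the PCI address and MAC address are placeholders, and column spacing is trimmed for readability:

# esxcli network nic list
Name    PCI Device    Driver   Link  Speed  Duplex  MAC Address        MTU   Description
vmnic0  0000:0b:00.0  vmxnet3  Up    10000  Full    00:0c:29:aa:bb:cc  1500  VMware Inc. vmxnet3 Virtual Ethernet Controller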

I have not run any performance tests, so I am not sure whether there will be any significant benefits, but it is pretty cool nonetheless!

Categories // Uncategorized Tags // ESXi 5.1, nested, vesxi, vmxnet3, vSphere 5.1

Configuring New vSphere Web Client Session Timeout

09.10.2012 by William Lam // 5 Comments

Just as you could in the old vSphere C# Client, you can also configure a session timeout for the new vSphere Web Client in the latest release of vSphere 5.1. This not only ensures that idle sessions automatically disconnect after a certain period of time, but also helps reduce the resources consumed on the vCenter Server, as each session allocates a certain amount of resources.

To configure the session timeout, you will need to log in to the server running the vSphere Web Client service (which is usually your vCenter Server), locate the webclient.properties file, change the default timeout, and then restart the vSphere Web Client service. For the VCSA, the default timeout value is 120 minutes, and I assume the default is the same for the Windows vCenter Server.

Step 1 - Locate the webclient.properties file:

    VCSA 5.x

/var/lib/vmware/vsphere-client/webclient.properties

    VCSA 6.x

/etc/vmware/vsphere-client/webclient.properties

    Windows vCenter Server 5.x

%ALLUSERSPROFILE%\VMware\vSphere Web Client\webclient.properties

    Windows vCenter Server 6.x

%ALLUSERSPROFILE%\VMware\vCenterServer\cfg\vsphere-client\webclient.properties

Step 2 - Uncomment the session.timeout entry and change it to the desired value:

session.timeout = 120
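
If you prefer to make the change from the shell on the VCSA, something like the following should work. This is a minimal sketch assuming the entry ships commented out, using the VCSA 5.x path shown above:

# Back up the file, then uncomment and set the timeout (30 minutes in this example)
cp /var/lib/vmware/vsphere-client/webclient.properties{,.bak}
sed -i 's/^#\{0,1\} *session.timeout.*/session.timeout = 30/' /var/lib/vmware/vsphere-client/webclient.properties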

Step 3 - Restart the vSphere Web Client Service:

/etc/init.d/vsphere-client restart

The change only takes effect after the vSphere Web Client service restarts. On the VCSA, run the command above; on the Windows vCenter Server, restart the vSphere Web Client service from the Services console.
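
On Windows, you can also restart the service from an elevated command prompt. A sketch, assuming the service is registered under the name vspherewebclientsvc (check the Services console for the exact name in your version):

net stop vspherewebclientsvc
net start vspherewebclientsvc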

In my lab, I configured the timeout to be 1 minute. Once the session has been idle for the configured period, you will automatically be logged out and returned to the login page with the following message:


Categories // vSphere, vSphere 6.0 Tags // session, sso, timeout, vSphere 5.1, vSphere 5.5, vsphere web client

How To Initiate a Wipe & Shrink Operation On an SE Sparse Based Disk

09.10.2012 by William Lam // 6 Comments

In my previous two articles, I showed you how to create your own SE Sparse disks as well as how to create new virtual machine Linked Clones leveraging the new SE Sparse disk format. If you recall, one of the features of the SE Sparse disk format is the ability to reclaim unused blocks within the guestOS, which is a two-step process: wipe and shrink.

Here is a screenshot describing the process, taken from the What's New in vSphere 5.1 Storage whitepaper by my colleague Cormac Hogan. I highly recommend you check out the whitepaper, which includes more details about this feature and other storage improvements in vSphere 5.1.

The process of kicking off this wipe and shrink operation will be done through an integration with VMware View (in a future release, from my understanding). Now, it's important to understand that it's not just a matter of calling these two operations, but also when they are called. The wipe operation is more CPU intensive, as it scans for unused space within the guestOS filesystem, and the shrink operation is more I/O intensive, as it issues the SCSI UNMAP commands. I can only assume that these operations will be scheduled based on the utilization of the guestOS to help reduce the impact on the VM workload.

Having said that, since the SE Sparse disk format is a feature of the vSphere 5.1 platform, so are both the wipe and shrink operations. Though they are not exposed in the public vSphere API like the SE Sparse disk format, you can still access the private APIs if you know where to look 😉

Disclaimer: This is for educational purposes only; it is not officially supported by VMware. Use at your own risk.

With some help from my good friend the vSphere MOB and some digging, I have located the two vSphere API methods for the wipe and shrink operations. Before getting started, ensure you have a VM with at least one SE Sparse disk; otherwise these methods will not be very useful.

Note: In this experiment, I tested the wipe and shrink operations with a Windows XP image; this may or may not work on other OSes.

First you will need to search for the VM in question and retrieve its vSphere MOB URL, which is in the format of https://[vcenter-server]/mob/?moid=vm-X where X is the MoRef ID of your VM. You can either navigate through the vSphere MOB or use my MoRef finder script.

Wipe Operation

To issue the wipe API, enter the following URL into your web browser (remember to replace the MoRef ID with that of your VM):

https://[vcenter-server]/mob/?moid=vm-X&method=wipeDisk

Here is a screenshot of what that looks like if you are able to successfully access the private API:

Go ahead and click on "Invoke Method", which will then execute the wipe operation. If you take a look at the vSphere Web Client, you should now see a new task for the wipe operation.

This can take a bit of time as it scans through the guestOS filesystem for unused space.

Shrink Operation

Once the wipe operation has completed, we then need to issue the shrink API. Enter the following URL into your web browser (remember to replace the MoRef ID with that of your VM):

https://[vcenter-server]/mob/?moid=vm-X&method=shrinkDisk

Here is a screenshot of what that looks like if you are able to successfully access the private API:

Here you can specify the particular disks (requires a diskId) on which you wish to perform the shrink operation. If you leave it blank, it will try to shrink all disks associated with your VM. In this example, I will shrink all disks. Go ahead and click on "Invoke Method", which will kick off the shrink operation. If you go back to the vSphere Web Client, you should now see a shrink task in progress.

Again, this operation can also take some time, but once it has finished, you will have successfully reclaimed any unused blocks within your guestOS.
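
If you would rather not click through the MOB each time, the same two methods can in principle be driven from the command line. Here is a hedged curl sketch, assuming the 5.1-era MOB accepts a basic-authenticated POST without the CSRF session nonce that later vSphere releases require; the hostname, credentials and vm-X MoRef ID are all placeholders:

# Placeholders: replace with your vCenter Server, credentials and VM MoRef ID
VC=vcenter.example.com
CRED='administrator@vsphere.local:MySecretPassword'
MOID=vm-X

# Kick off the wipe (scans the guestOS filesystem for unused space)
curl -k -u "$CRED" -X POST "https://$VC/mob/?moid=$MOID&method=wipeDisk"

# Once the wipe task completes, kick off the shrink (issues the SCSI UNMAPs);
# omitting the disks parameter shrinks all disks, as when the MOB form is left blank
curl -k -u "$CRED" -X POST "https://$VC/mob/?moid=$MOID&method=shrinkDisk"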

Categories // Automation Tags // api, ESXi 5.1, Managed Object Browser, mob, sesparse, shrink, unmap, vSphere 5.1, vSphere MOB, wipe
