VM serial logging to the rescue for capturing Nested ESXi PSOD

03.21.2016 by William Lam // Leave a Comment

I frequently deploy pre-releases of our software to help test and provide early feedback to our Engineering teams. One piece of software that I deploy somewhat frequently is our ESXi Hypervisor, and the best way to deploy it is, of course, inside of a Virtual Machine, commonly referred to as Nested ESXi.

Most recently, while testing a new ESXi build in my lab (the screenshot below is for demo purposes, not the actual PSOD image), I encountered an ESXi purple screen of death (PSOD) during the bootup of the ESXi Installer itself. Since ESXi had not yet been installed, there was no place for ESXi to actually store the core dumps, which made it challenging to file a bug with Engineering as screenshots may not always contain all the necessary details.

[Screenshot: example ESXi PSOD]
Luckily, because we are running in a VM, we can take advantage of a really neat feature that VMware has supported for quite some time now: configuring a virtual serial port for logging purposes. In fact, one of the neatest features from a troubleshooting standpoint was the introduction of the Virtual Serial Port Concentrator (vSPC) in vSphere 5.0, which allowed a VM to log directly to a serial console server just like you would for physical servers. You of course have a few other options: logging directly to the serial port of the physical ESXi host, a named pipe, or simply a file that lives on a vSphere Datastore.

Given this was a home lab setup, the easiest method was to simply output to a file. To add a virtual serial port, you can use either the vSphere Web/C# Client or the vSphere APIs. Since this is not something I need to do often, I just used the UI. Below is a screenshot using the vSphere Web Client; once you have added the virtual serial port, you need to specify the filename and where to store the output file by clicking on the "Browse" button.

[Screenshot: adding a virtual serial port and configuring the output file in the vSphere Web Client]
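
For those who prefer the vSphere APIs, below is a minimal pyvmomi sketch of the same operation (the vCenter hostname, credentials, VM name and datastore path are all placeholders for illustration purposes):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter Server (placeholder hostname and credentials)
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='VMware1!',
                  sslContext=ssl._create_unverified_context())

# Locate the Nested ESXi VM by name (assumes a VM named "Nested-ESXi-VM")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'Nested-ESXi-VM')
view.Destroy()

# Build a virtual serial port backed by a file on a vSphere Datastore
serial = vim.vm.device.VirtualSerialPort(yieldOnPoll=True)
serial.backing = vim.vm.device.VirtualSerialPort.FileBackingInfo(
    fileName='[datastore1] Nested-ESXi-VM/serial.log')

# Reconfigure the VM to add the new device
dev_spec = vim.vm.device.VirtualDeviceSpec()
dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
dev_spec.device = serial
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))
Disconnect(si)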
If the GuestOS, which includes ESXi, has been configured to output to a serial port, the next time there is an issue you can easily capture the output to a file instead of relying on just a screenshot. One additional tip which might be useful: by default, vSphere will prompt whether you want to replace or append to the configured output file. If you wish to always replace, you can add the following VM Advanced Setting and you will not get prompted in the UI.

answer.msg.serial.file.open = "Replace"
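
As a side note, this advanced setting can also be applied programmatically through the extraConfig property of the VM ConfigSpec; a quick pyvmomi sketch (reusing a 'vm' object obtained as shown earlier):

from pyVmomi import vim

# Add the advanced setting via extraConfig so the UI never prompts
opt = vim.option.OptionValue(key='answer.msg.serial.file.open', value='Replace')
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(extraConfig=[opt]))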

Virtual serial ports are supported on both vSphere (vCenter Server + ESXi) as well as our hosted products VMware Fusion and Workstation.
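
For Fusion and Workstation, where there is no vSphere UI for this, the equivalent can be achieved by adding a few entries directly to the VM's VMX file while it is powered off. A minimal example (adjust the serial port number and output path as needed):

serial0.present = "TRUE"
serial0.fileType = "file"
serial0.fileName = "serial.log"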

Categories // ESXi, Fusion, Nested Virtualization, Workstation Tags // ESXi, fusion, nested, nested virtualization, psod, serial logging, vSphere, workstation

vSphere 6.0 Update 2 hints at Nested ESXi support for Paravirtual SCSI (PVSCSI) in the future

03.14.2016 by William Lam // 6 Comments

Although Nested ESXi (running ESXi in a Virtual Machine) is not officially supported today, VMware Engineering continues to enhance this widely used feature by making it faster, more reliable and easier to consume for our customers. I still remember that it was not too long ago that running Nested ESXi required several non-trivial and manual tweaks to the VM's VMX file. This made the process of consuming Nested ESXi potentially very error prone and provided a less than ideal user experience.

Things have definitely improved since the early days and here are just some of the visible improvements over the last few years:

  • Prior to vSphere 5.1, enabling Virtual Hardware Assisted Virtualization (VHV) required manual edits to the VMX file, and even earlier versions required several VMX entries. VHV can now be easily enabled using either the vSphere Web Client or the vSphere API (see the sketch after this list).
  • Prior to vSphere 5.1, only the e1000{e} networking driver was supported with Nested ESXi VMs, and although it was functional, it also limited the types of use cases you might have for Nested ESXi. A Native Driver for VMXNET3 was added in vSphere 5.1, which not only increased performance through the optimized VMXNET3 driver but also enabled new use cases such as testing SMP-FT, as it was now possible to present a 10GbE interface to a Nested ESXi VM versus the traditional 1GbE with the e1000{e} driver.
  • Prior to vSphere 6.0, selection of the ESXi GuestOS was not available in the "Create VM" wizard, which meant you had to re-edit the VM after initial creation or use the vSphere API. You can now select the specific ESXi GuestOS type directly in the vSphere Web/C# Client.
  • Prior to vSphere 6.0, the only way to cleanly shutdown or power cycle a Nested ESXi VM was to perform the operation from within the guest, as there was no VMware Tools support. This changed with the development of a VMware Tools daemon specifically for Nested ESXi, which started out as a VMware Fling. With vSphere 6.0, the VMware Tools for Nested ESXi is pre-installed by default and automatically starts up when it detects it is running as a VM. In addition to the power operations provided by VMware Tools, it also enabled the use of the Guest Operations API, which is quite popular from an Automation standpoint.
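
To tie a few of these improvements back to the vSphere API, below is a minimal pyvmomi sketch (assuming an existing 'vm' object referencing a Nested ESXi VM; 'vmkernel6Guest' is the GuestOS identifier for the ESXi 6.x type):

from pyVmomi import vim

# Enable VHV and set the ESXi GuestOS type in a single reconfigure
spec = vim.vm.ConfigSpec()
spec.nestedHVEnabled = True          # VHV toggle (vSphere 5.1+)
spec.guestId = 'vmkernel6Guest'      # ESXi 6.x GuestOS type
vm.ReconfigVM_Task(spec=spec)

# With VMware Tools for Nested ESXi (vSphere 6.0+), clean power
# operations work just like they would for any other guest
if vm.guest.toolsRunningStatus == 'guestToolsRunning':
    vm.ShutdownGuest()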

Yesterday while working in my new vSphere 6.0 Update 2 home lab, I needed to create a new Nested ESXi VM and noticed something really interesting. I used the vSphere Web Client like I normally would and when I went to select the GuestOS type, I discovered an interesting new option which you can see from the screenshot below.

[Screenshot: new ESXi GuestOS type option in the vSphere Web Client]
It is not uncommon to see VMware add experimental support for potentially new Guest Operating Systems in vSphere. Of course, there are no guarantees that these OSes will ever be supported or even released for that matter.

What I found even more interesting was the recommended default virtual hardware configuration when selecting this new ESXi GuestOS type (vmkernel65). For the network adapter, it looks like the VMXNET3 driver is now recommended over the e1000e, and for the storage adapter, the VMware Paravirtual (PVSCSI) adapter is now recommended over the LSI Logic Parallel type. This is really interesting as it is currently not possible to get the optimized, low-overhead PVSCSI adapter working with Nested ESXi, and this seems to indicate that PVSCSI support might actually be possible in the future! 🙂

[Screenshot: recommended virtual hardware (VMXNET3 and PVSCSI) for the new ESXi GuestOS type]
I of course tried to install the latest ESXi 6.0 Update 2 (not yet GA'ed) using this new ESXi GuestOS type and to no surprise, the ESXi installer was not able to detect any storage devices. I guess for now, we will just have to wait and see ...

Categories // ESXi, Nested Virtualization, Not Supported, vSphere 6.0 Tags // ESXi, nested, nested virtualization, pvscsi, vmxnet3, vSphere 6.0 Update 2

Hidden OVF 2.0 capability found in the vSphere Content Library

01.12.2016 by William Lam // 5 Comments

There are a number of new and useful capabilities that have been introduced in the OVF 2.0 specification. One such capability which I thought was really interesting, and that could easily benefit VMware-based solutions, is the ScaleOutSection feature. This feature allows you to specify the number of instances of a given Virtual Appliance to instantiate at deployment time by making use of pre-defined OVF Deployment Options, which can also be overridden by the user.

Let's use an example to see how this actually works. Say you have a single Virtual Appliance (VA) and the application within the appliance can scale to N, where N is any number greater than or equal to 1. If you wanted to deploy 3 instances of this VA, you would have to deploy it 3 separate times, either by running through an OVF upload or by deploying it from a template. In either case, you are performing N instantiations. Would it not be cool if you could still start with a single VA image, specify at deployment time the number of instances you want, and only need to upload the VA once? Well, that is exactly what the OVF ScaleOutSection feature provides.

Below is a diagram to help illustrate this feature further. We start out with our single VA, which contains several pre-defined Deployment Options that can use any text you wish for the logical grouping. In this example, I am using the terms "Single", "Minimal" and "Typical" to map to the number of VAs to deploy, which are 1, 3, and 4 respectively. If we choose the "Minimal" Deployment Option, we would then get 3 instantiated VAs. If we decide that the defaults are not sufficient, we could also override the default by specifying a different number which the VA supports.

[Diagram: OVF 2.0 ScaleOutSection with "Single", "Minimal" and "Typical" Deployment Options]
A really cool use case that I thought about when I first came across the ScaleOutSection feature was to make use of it with my Nested ESXi Virtual Appliance. This capability would make it even easier to stand up a vSphere or VSAN Cluster of any size for development or testing purposes. Today, vSphere and many of the other VMware products only support the OVF 1.x specification and, as far as I know, OVF 2.0 was not something that was being looked at.

Right before the holiday break, I was chatting with one of the Engineers on the Content Library team and one of the topics that I discussed in passing was OVF 2.0 support. It turns out that, although vSphere itself does not support OVF 2.0, the vSphere Content Library feature actually contains a very basic implementation of OVF 2.0, and though not complete, it does have some support for the ScaleOutSection feature.

This of course got me thinking and with the help of the Engineer, I was able to build a prototype version of my Nested ESXi Virtual Appliance supporting the ScaleOutSection feature. Below is a quick video that demonstrates how this feature would work using a current release of vSphere 6.0. Pretty cool if you ask me!? 🙂

Demo of Prototype Nested ESXi Virtual Appliance using OVF 2.0 ScaleOut from lamw on Vimeo.

Now, before you get too excited, there were a couple of caveats that I found while going through the deployment workflow. During the deployment, the VMDKs were not properly processed, and when you powered on the VMs, it was as if they were empty disks. This was a known issue and I have been told it has already been resolved in a future update. The other, bigger issue is how OVF properties are handled with multiple instances of the VA. Since this is not a supported workflow, the OVF wizard is only brought up once regardless of the number of instances being deployed. This means that all VAs will inherit the same OVF values since you are only prompted once. The workaround was to deploy the VAs, then go into each individual VA and update its OVF properties before powering on the VMs (see the sketch below). Since OVF 2.0 and the ScaleOutSection feature are not officially supported, the user experience is not as ideal as one would expect.
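
To give an idea of what that workaround could look like when automated, here is a hypothetical pyvmomi sketch that patches an assumed 'guestinfo.ipaddress' OVF property on each deployed VA before power-on (the property id and addressing scheme are purely illustrative, not taken from the actual appliance):

from pyVmomi import vim

# deployed_vms: list of vim.VirtualMachine objects from the scale-out deployment
for i, vm in enumerate(deployed_vms):
    updates = []
    for prop in vm.config.vAppConfig.property:
        if prop.id == 'guestinfo.ipaddress':        # assumed OVF property id
            prop.value = '192.168.1.{}'.format(100 + i)
            updates.append(vim.vApp.PropertySpec(operation='edit', info=prop))
    if updates:
        spec = vim.vm.ConfigSpec(
            vAppConfig=vim.vApp.VmConfigSpec(property=updates))
        vm.ReconfigVM_Task(spec=spec)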

I personally think there are some pretty interesting use cases that could be enabled by OVF 2.0 and the ScaleOutSection feature. A few VMware-specific solutions off the top of my head that could potentially leverage this capability are vRealize Log Insight, vRealize Operations Manager and vRealize Automation Center, to name just a few. I am sure there are others, including 3rd party and custom Virtual Appliances, and I am curious to hear if this is something that might be of interest to you. If you have any feedback, feel free to leave a comment and I can share it with the Content Library PM.

Categories // ESXi, Nested Virtualization, OVFTool, vSphere Tags // content library, ova, ovf, ovf 2.0, ScaleOutSection, virtual appliance
