How to Enable Nested vFT (virtual Fault Tolerance) in vSphere 5

07.31.2011 by William Lam // 5 Comments

The ability to enable virtual Fault Tolerance on nested virtual machines running under vESX(i) is not new in vSphere 5; vFT has been an unsupported feature since vSphere 4 and was initially identified by Simon Gallagher. The process is exactly the same in vSphere 5: three configuration options need to be added to the virtual machine that will be enabled with FT (not the vESXi VM).

replay.supported = "true"
replay.allowFT = "true"
replay.allowBTOnly = "true"
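
If you prefer to add these options from the ESXi Shell instead of the vSphere Client, a minimal sketch is shown below; the datastore path, VM name, and VM ID are hypothetical placeholders, and the VM should be powered off first:

# Append the FT replay options to the VM's configuration file (hypothetical path)
cat >> /vmfs/volumes/datastore1/MyVM/MyVM.vmx << 'EOF'
replay.supported = "true"
replay.allowFT = "true"
replay.allowBTOnly = "true"
EOF

# Reload the configuration so hostd picks up the change
# (look up the VM ID with: vim-cmd vmsvc/getallvms)
vim-cmd vmsvc/reload <vmid>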

During the vSphere 5 beta, I did enable vFT, but on an offline virtual machine to avoid consuming unnecessary compute resources. Today there was a question on the beta community about configuring vFT for vSphere 5, and I wanted to quickly validate that the configurations still hold true. I ran into an interesting error when trying to enable vFT: the power-on process for the secondary virtual machine failed with the following error:

This was not an error I had seen before in vSphere 4, and looking at the vmkernel and vmware.log files, I noticed the following:

2011-07-31T17:31:39.314Z| vcpu-0| [vob.vmotion.stream.keepalive.read.fail] vMotion migration [ac1e0050:1312133702562144] failed to read stream keepalive: Connection closed by remote host, possibly due to timeout
2011-07-31T17:31:39.314Z| vcpu-0| [msg.checkpoint.precopyfailure] Migration to host <> failed with error Connection closed by remote host, possibly due to timeout (0xbad003f).
2011-07-31T17:31:39.324Z| vcpu-0| Migrate: secondary failure during migration: error Connection closed by remote host, possibly due to timeout.

I tried increasing the vMotion timeout via an advanced option on the vESX(i) host but continued to hit the same error. I then dug into the first error message, "failed to read stream keepalive", and found an advanced ESX(i) setting called /Migrate/VMotionStreamDisable; this advanced option has been available since ESX(i) 4.x.

I decided to disable the vMotion stream and, to my surprise, FT was then able to power on the secondary virtual machine without hitting that error again.

Note: You may or may not run into this error message and the configuration may not be necessary. If you enable vFT on an offline VM, you should not have any issues as long as you meet the minimum Fault Tolerance requirements.

You can configure the advanced ESXi option using either esxcli or the legacy esxcfg-advcfg command (a value of 1 disables vMotion streaming):

esxcli system settings advanced set -o /Migrate/VMotionStreamDisable -i 1
esxcfg-advcfg -s 1 /Migrate/VMotionStreamDisable
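
To double-check the current value before and after making the change, these read-only equivalents should work (the esxcfg-advcfg form is also available on ESX(i) 4.x):

esxcli system settings advanced list -o /Migrate/VMotionStreamDisable
esxcfg-advcfg -g /Migrate/VMotionStreamDisable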

It is important to understand that even though one can set up vESX(i) hosts and play with advanced functionality such as vMotion and FT, the actual behavior is unpredictable because these configurations are unsupported by VMware. This is of course a great capability for home labs and for studying for VMware certifications such as VCP and VCAP-DCA, but that should be the extent of leveraging these unsupported configurations.

Categories // ESXi, Nested Virtualization, Not Supported Tags // ESXi 5.0, fault tolerance, nested ft, vft, vSphere 5.0

How to Add a Splash of Remote Color to ESXi Shell

07.23.2011 by William Lam // 7 Comments

This morning I noticed a very interesting retweet by fellow vExpert Wil van Antwerpen from another vExpert, Richard Cardona (you may know him as rcardona2k on the VMTN Community Forums), about a neat little trick with the remote ESXi Shell (previously known as remote TSM).

Those of you who log in remotely via SSH to the ESXi Shell (previously known as unsupported mode and Tech Support Mode) know that you can run the DCUI utility remotely by just typing "dcui". The remote DCUI works just like it does on the direct console, except that it does not display the famous yellow and black screen that we are all familiar with.

Richard came upon a neat little trick: by setting the terminal type to "linux" instead of the default "xterm", the yellow and black colors can be enabled when using the remote DCUI.

Before launching DCUI utility, you will need to run the following command on the ESXi Shell:

export TERM=linux

Next, just type "dcui" and hit enter.
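
If you would rather not change TERM for the rest of your SSH session, standard shell syntax also lets you scope the variable to just the one command:

TERM=linux dcui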

Here is an example of running remote DCUI in color on ESXi 5

Here is an example of running remote DCUI in color on ESXi 4.1

Note: As you can see, this is not a new trick in vSphere 5; it has been there since the 4.x days. One big change with vSphere 5, however, is the full resolution of the DCUI, which many have complained about in the past.

If you are interested in other ways of customizing the DCUI, take a look at this blog post: How to add a splash of color to ESXi DCUI Welcome Screen.

Don't forget to play some cool soundtrack music when using the DCUI 😉

Categories // ESXi, Not Supported Tags // dcui, ESXi 5.0, vSphere 4.0, vSphere 5.0

How to Enable Support for Nested 64bit & Hyper-V VMs in vSphere 5

07.12.2011 by William Lam // 66 Comments

With the release of vSphere 5, one of the most sought-after features from VMware is the ability to run nested 64bit and Hyper-V guest virtual machines in a virtual ESXi instance. Prior to this, only 32bit virtual machines were supported, as the VT-x/AMD-V hardware virtualization CPU instructions could not be virtualized and presented to the virtual ESX(i) guest. This feature is quite useful for home and lab setups when testing new features, studying for VMware certifications, or running multiple vESX(i) instances.

You will still be required to have a 64bit-capable system and CPU, and you will need to be running ESXi 5.0; this will not work on ESX(i) 4.x or older.

 
The above diagram depicts the various levels of inception, where pESXi is your physical ESXi 5.0 host. We then create a vESXi 5.0 host, which will contain the necessary hardware virtualization CPU instructions to support a 64bit nested guest OS; within it I've created another ESXi host called vvESXi.

Note: You will not be able to run a 4th-level nested 64bit VM (I have tried by further passing the HV instructions into the nested guest); it will just boot up and spin your CPUs for hours.

This feature is disabled by default in ESXi 5.0. To enable virtualized HV (Hardware Virtualization), you will need to add the string vhv.allow = "TRUE" to /etc/vmware/config on your physical ESXi 5.0 host.
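
From the ESXi Shell on the physical host, a simple append is all that is needed (a minimal sketch; you may want to back up the file first):

echo 'vhv.allow = "TRUE"' >> /etc/vmware/config
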
Once the configuration change has been made, it goes into effect right away; a reboot of the system is not necessary. To verify, you should now be able to power on a 64bit guest OS and see that the HV instruction bits are being passed into the guestOS, which will then allow you to run a nested 64bit guestOS. You can also verify by looking in the vmware.log file of the virtual machine and grepping for the string "monitorControl.vhv"; if you see the following message, then virtualized HV is not enabled.
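
A quick way to perform that check from the ESXi Shell (the datastore path is a hypothetical example; substitute your VM's actual directory):

grep monitorControl.vhv /vmfs/volumes/datastore1/MyVM/vmware.log
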
In the past, to run a virtual ESX(i) instance, a few advanced .vmx configuration entries were needed, as documented here. With ESXi 5.0, if you are using virtual hardware version 8, you do not need to make any additional changes. If you are using hardware version 4 or 7, you will need to add a few entries to the VM's configuration file.
Creating vESXi 5.0 Instance using Hardware Version 8:
1. To create a virtual ESXi 5.0 instance, start off by creating a standard RHEL5/6 64bit VM using the vSphere Client.
2. Once the VM has been created, edit the settings of the VM and switch over to the "Options" tab; you now have the ability to select a new guestOS type, VMware ESXi 5.x or VMware ESXi 4.x, under the "Other" section.

Note: I'm not sure why these two additional guestOS types are not available in the default creation menu, but they are available after the initial VM shell is created.

3. You are now ready to install ESXi 5.0 on this new vESXi host, and then you can create and power on a nested 64bit guestOS within that vESXi instance, as shown in the picture below.

Creating vESXi 5.0 Instance using Hardware Version 4/7:
1. To create a virtual ESXi 5.0 instance, start off by creating a standard RHEL5/6 64bit VM using the vSphere Client.

2. Now you will need to add the following advanced .vmx parameter: monitor.virtual_exec = "hardware". This can be done through the vSphere Client or by editing the .vmx file manually.

3. Next you will need to add some cpuid bits. Depending on whether you are running an Intel or AMD CPU, the respective entries are required:

Intel Hosts:

cpuid.1.ecx = "----:----:----:----:----:----:--h-:----"

AMD Hosts:

cpuid.80000001.ecx.amd = "----:----:----:----:----:----:----:-h--"
cpuid.8000000a.eax.amd = "hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh"
cpuid.8000000a.ebx.amd = "hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh"
cpuid.8000000a.edx.amd = "hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh"

4. You are now ready to install ESXi 5.0 on this new vESXi host, and then you can create and power on a nested 64bit guestOS within that vESXi instance.
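
For reference, the complete set of extra .vmx entries from steps 2 and 3 on an Intel host would look like the following (an AMD host would use the four AMD entries above instead):

monitor.virtual_exec = "hardware"
cpuid.1.ecx = "----:----:----:----:----:----:--h-:----"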

If the VM is hardware version 8, you can easily automate the creation of a vESXi 5.0 instance by changing the guestOS string in the .vmx file to "vmkernel"; the configurations above, other than the "vhv" string needed on the physical Intel or AMD system, are then applied automatically.
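
In other words, a single entry in the .vmx file (using the guestOS string mentioned above) marks the VM as a nested ESXi host:

guestOS = "vmkernel"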

For proper network connectivity, also ensure that your standard vSwitch or Distributed Virtual Switch has both promiscuous mode and forged transmits enabled, either globally on the vSwitch or on the portgroup/distributed portgroup your nested ESXi hosts are connected to.
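
On a standard vSwitch, this can also be configured from the ESXi Shell; a sketch assuming a vSwitch named vSwitch0:

esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true --allow-forged-transmits=true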

Creating a vHyper-V Instance on physical ESXi 5.0:
1. To create a virtual Hyper-V instance, start off by creating a Windows Server 2008 R2 64bit VM using the vSphere Client.

2. If you are using hardware version 4 or 7, you will need to follow the instructions in "Creating vESXi 5.0 Instance using Hardware Version 4/7" to add the additional parameters to the VM. If you are using hardware version 8, you just need to change the guestOS type to VMware ESXi 5.x.

3. You need to add one additional .vmx parameter, which tells the underlying guestOS (Hyper-V) that it is not running as a virtual guest, even though it really is: hypervisor.cpuid.v0 = "FALSE"

4. You are now ready to install Hyper-V in a virtual machine and you can also spin up nested 64bit guestOSes in this virtual Hyper-V instance.
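
Putting it together for a hardware version 8 VM, the net effect in the .vmx file is just these two entries (a sketch based on the steps above; the guestOS string follows the earlier automation note):

guestOS = "vmkernel"
hypervisor.cpuid.v0 = "FALSE"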

As you can see, you can now even run Hyper-Crap, err I mean Hyper-V, as a virtualized guest under ESXi 5.0. I did not get a chance to try out Xen, but I'm sure that with the ability to virtualize the hardware virtualization instructions, you should be able to run other types of hypervisors for testing purposes.

This is a really awesome feature, but note that it is not officially supported by VMware; use at your own risk.

Categories // ESXi, Home Lab, Nested Virtualization, Not Supported Tags // ESXi 5.0, hyper-v, nested, vesxi, vhv, vSphere 5.0
