How to configure SMP-FT using Nested ESXi in vSphere 6?

03.06.2015 by William Lam // 1 Comment

Symmetric Multi-Processing Fault Tolerance (SMP-FT) has been a long-awaited feature for many VMware customers. With the release of vSphere 6.0, the SMP-FT capability is finally available, and if you want to try out this new feature and see how it works from a "functional" perspective, you can easily do so by running it in a Nested ESXi environment. SMP-FT no longer uses the "record/replay" capability like its predecessor, Uniprocessing Fault Tolerance (UP-FT). Instead, SMP-FT uses a new Fast Checkpointing technique which not only improves overall performance compared to its predecessor but also greatly simplifies and reduces the additional configuration needed when running in a Nested ESXi environment.

Disclaimer: Running SMP-FT in a Nested ESXi environment does not replace or substitute actual testing of physical hardware. For any type of performance testing, please test SMP-FT using real hardware.

Requirements:

  • Physical ESXi host running either ESXi 5.5 or 6.0
  • vCenter Server 6.0
  • 2 x Nested ESXi VMs running ESXi 6.0 (vHW9+)
  • Shared storage for the Nested ESXi VMs

Instructions:

Step 1 - Create a Nested ESXi VM using the guestOS type "ESXi 5.5/6.0 or later". You will need at least 2 vCPUs, at least 4GB of memory for the installation of ESXi and, most importantly, a VMXNET3 network adapter. The reason a VMXNET3 adapter is required is that SMP-FT requires a 10Gbit network connection, and the VMXNET3 driver can simulate a 10Gbit connection for a Nested ESXi VM. For further instructions on creating a Nested ESXi VM, please take a look at this article. If you are unable to add a VMXNET3 adapter, you may need to first change the guestOS type to "Other 64-bit", add the adapter and then change the guestOS type back.
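If you prefer to script this step, the VM shell can also be created with PowerCLI. The snippet below is only a minimal sketch: the VM, host, datastore and portgroup names are placeholders, and "vmkernel6Guest" is assumed to be the guest ID that corresponds to the ESXi 6.x guestOS type in your environment.

# Create the Nested ESXi VM shell with 2 vCPUs, 4GB of memory and an ESXi guest type
$vm = New-VM -Name "nested-esxi-01" -VMHost "physical-esxi.primp-industries.com" -Datastore "shared-datastore" -NumCpu 2 -MemoryGB 4 -DiskGB 8 -GuestId "vmkernel6Guest" -NetworkName "VM Network"
# Swap the default network adapter for a VMXNET3 adapter (simulates a 10Gbit link)
Get-NetworkAdapter -VM $vm | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false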

[Screenshot: smp-ft-nested-esxi-0]
Step 2 - Install ESXi 6.0 on the Nested ESXi VMs. If you have not done so already, deploy a vCenter Server 6.0 and add your Nested ESXi instances to a new vSphere Cluster with vSphere HA enabled.
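For reference, here is a minimal PowerCLI sketch of this step; the datacenter name, cluster name, host names and root password are all placeholders for your own environment.

# Create a new cluster with vSphere HA enabled and add both Nested ESXi hosts to it
$cluster = New-Cluster -Name "FT-Cluster" -Location (Get-Datacenter -Name "Datacenter") -HAEnabled
Add-VMHost -Name "nested-esxi-01.primp-industries.com" -Location $cluster -User root -Password "VMware1!" -Force
Add-VMHost -Name "nested-esxi-02.primp-industries.com" -Location $cluster -User root -Password "VMware1!" -Force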

Step 3 - You will need to enable both the vMotion and Fault Tolerance traffic types on the VMkernel interface that you wish to run FT traffic across.
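This can be done in the vSphere Web Client or scripted with PowerCLI. The rough equivalent below assumes the default vmk0 interface carries both traffic types; adjust the VMkernel interface name to match your setup.

# Enable vMotion and Fault Tolerance logging on vmk0 of each Nested ESXi host
Get-Cluster -Name "FT-Cluster" | Get-VMHost | ForEach-Object {
    Get-VMHostNetworkAdapter -VMHost $_ -VMKernel -Name "vmk0" | Set-VMHostNetworkAdapter -VMotionEnabled $true -FaultToleranceLoggingEnabled $true -Confirm:$false
}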

[Screenshot: smp-ft-nested-esxi-1]
Step 4 - At this point, you can create a real or dummy VM and power it on. Once the VM is powered on, you can enable either UP-FT or SMP-FT by right-clicking the VM and selecting "Enable Fault Tolerance".
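If you would rather not click through the UI, this can also be done programmatically. PowerCLI does not have a dedicated cmdlet for enabling FT, but the vSphere API's CreateSecondaryVM_Task method on the VirtualMachine object has the same effect. The sketch below assumes a powered-on VM called "dummy-vm" and lets vCenter pick the host for the secondary.

# Turn on Fault Tolerance by creating the VM's secondary copy (host selection left to vCenter)
$vm = Get-VM -Name "dummy-vm"
$vm.ExtensionData.CreateSecondaryVM_Task($null)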

[Screenshot: smp-ft-nested-esxi-2]
As you can see from the screenshot above, I have successfully enabled FT on a VM with 4 vCPUs running inside of a Nested ESXi VM, how cool is that!? Hopefully this will help you get more familiar with the new SMP-FT feature when you are ready to give it a real spin on real hardware 🙂

Note: Intel Sandy Bridge is recommended when using SMP-FT (on real physical hardware), but if you have older CPUs, you can enable "Legacy FT" mode by adding the VM Advanced Setting "vm.uselegacyft" to the VM you are enabling FT on.
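If you want to add this advanced setting with PowerCLI rather than editing the VM's configuration by hand, a minimal sketch would look like the following; the VM name is a placeholder and the value of "true" is assumed.

# Add the vm.uselegacyft advanced setting to force Legacy FT mode on older CPUs
New-AdvancedSetting -Entity (Get-VM -Name "dummy-vm") -Name "vm.uselegacyft" -Value "true" -Confirm:$false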

Categories // ESXi, Nested Virtualization, vSphere 6.0 Tags // fault tolerance, nested ft, nested virtualization, smp-ft, vm.uselegacyft, vSphere 6.0

How to change/deploy VCSA 6.0 with default bash shell vs appliancesh?

03.06.2015 by William Lam // 10 Comments

When logging into the new VCSA 6.0 via SSH, you will notice that you are no longer dropped into a normal bash shell but into a new appliancesh (pronounced "appliance shell") environment. This new interface provides a basic set of virtual appliance management capabilities, including Ruby vSphere Console (RVC) access, which makes the majority of operations convenient for a vSphere Administrator. It also helps restrict unnecessary access to the underlying filesystem, which can be helpful from a security standpoint.

If you need to access the underlying filesystem, you can temporarily enable it by running the following two commands:

shell.set --enabled True
shell

applianceshell-default-bash
If you need to transfer files to/from the VCSA via SCP/WinSCP, you will need to change the default shell from /bin/appliancesh to /bin/bash, or else the operation will fail. You can easily do this by using the chsh command:

chsh -s "/bin/bash" root
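
Once you are done transferring files, you can switch the default shell back the same way:

chsh -s "/bin/appliancesh" root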

If you would rather have the bash shell configured as the default after deployment and not have to go through this manual process each time, you can actually configure this using a hidden option called guestinfo.cis.appliance.root.shell.

This property allows you to specify the default shell for the "root" account and you can only modify this if you deploy the VCSA using ovftool. Here is the parameter you would append to the ovftool argument list:

--prop:guestinfo.cis.appliance.root.shell="/bin/bash"
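
For context, this property is simply appended to whatever ovftool command line you are already using to deploy the VCSA OVA. Here is an abbreviated, hypothetical example; the OVA path, the target vi:// locator and all of the other required guestinfo.cis.* deployment properties (networking, passwords, SSO, etc.) are placeholders or omitted.

ovftool --acceptAllEulas --noSSLVerify --name=vcsa-6.0 \
  --prop:guestinfo.cis.appliance.root.shell="/bin/bash" \
  /path/to/vmware-vcsa.ova \
  'vi://root@esxi-host/'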

You can leverage this new property to automate the deployment of the new VCSA 6.0; for more details, be sure to check out my VCSA 6.0 Automation Series.

Categories // Automation, OVFTool, VCSA, vSphere 6.0 Tags // appliancesh, guestinfo, ovftool, VCSA, vcva, vSphere 6.0

Duplicate MAC Address concerns with xVC-vMotion in vSphere 6.0

03.05.2015 by William Lam // 4 Comments

In vSphere 6.0, the mobility options for a Virtual Machine are truly limitless. This has all been made possible by a new set of vMotion capabilities introduced in vSphere 6.0, which you can learn more about here and here. In the past, one area of concern when migrating a VM from one vCenter Server to another was the possibility that a migrated VM's MAC Address might be re-provisioned by the source vCenter Server, resulting in a MAC Address conflict. In fact, this is a topic I have covered before in my considerations when migrating VMs between vCenter Servers article. I highly encourage you to check out that article before proceeding further, as it provides some additional and necessary context.

When looking to leverage the new Cross vCenter Server vMotion (xVC-vMotion) capability in vSphere 6.0, are MAC Address conflicts still a concern? To answer that question, let's take a look at an example. Below I have a diagram depicting two different vSphere 6.0 deployments. The first is comprised of three vCenter Servers joined to the same SSO Domain called vghetto.local, with VM1 currently being managed by VC1. The second is a single vCenter Server connected to a completely different SSO Domain called vmware.local. I will also assume we are being good VI Admins and have deployed each vCenter Server using a unique ID (more details here on why having different VC IDs matters).

[Diagram: mac-address-xvc-vmotion-00]
Let's say we now migrate VM1 from VC1 to VC2. In previous releases of vSphere, this could potentially lead to VC1 re-provisioning the MAC Address that VM1 was associated with, because that MAC Address is no longer being managed by VC1 and, from its point of view, is now available. Though this type of scenario is probably rare in most customer environments, in a high-churn continuous integration or continuous delivery environment, this can be a real issue. So has anything been improved in vSphere 6.0? The answer is yes, of course 🙂

In vSphere 6.0, vCenter Server now maintains a VM MAC Address blacklist which, upon a successful xVC-vMotion, is updated with the MAC Addresses associated with the migrated VM. This ensures that the source vCenter Server will not re-provision these MAC Addresses to newly created VMs; they are basically "blacklisted" from being used again, as shown in the diagram below.

[Diagram: mac-address-xvc-vmotion-1]
If we decide to migrate VM1 from VC2 back to VC1, the blacklist is automatically updated and the "blacklisted" MAC Addresses will be removed. If we decide to migrate VM1 to a completely different vCenter Server which is not part of the same SSO Domain, then the MAC Address could potentially be re-used, but it will depend on your environment: if VC4 is on a completely different L2 segment, then a MAC Address conflict would not occur.

As of right now, there is no automatic way of reclaiming blacklisted MAC Addresses; it is a manual process that must be initiated through a private vSphere API. I am hoping we will be able to get this documented in an official VMware KB so that, in case this is required, you can easily follow the simple steps to execute the necessary APIs. Automatic reclamation is being looked at by Engineering, and hopefully we will see it in a future patch/update of vSphere. Overall, this should not really be a concern given that vCenter Server can generate about 65,000 unique MAC Addresses, and you would have to perform quite a few xVC-vMotions before ever needing to reclaim from the blacklist.

One thing to be aware of when performing an xVC-vMotion or ExVC-vMotion is that there are currently no pre-flight checks for MAC Address conflicts at the destination vCenter Server (something Engineering is looking to address in a future patch/update release). Having said that, there are two additional measures you can implement in your environment to prevent MAC Address conflicts:

  1. Create a vCenter Server alarm which can detect and notify you of a duplicate MAC Address in your environment (also applicable to vSphere 5.5)
  2. Proactively check to see if the existing MAC Addresses of your VM are currently in use prior to performing an xVC-vMotion; this is especially useful when performing an ExVC-vMotion.

To help with number 2, I have created a simple PowerCLI script called check-vm-mac-conflict.ps1 which accepts both your source and destination vCenter Server as well as the name of the VM in the source VC to be migrated. It will check the VM's MAC Addresses against the destination VC and ensure that there are no conflicts. If there is a conflict, it will output the name of the destination VM and the MAC Address that is in conflict, as seen in the screenshot below.
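The basic idea behind such a check is simple; the following is a minimal PowerCLI sketch of the logic (not the actual check-vm-mac-conflict.ps1 script), with the vCenter Server names and VM name as placeholders.

# Connect to both the source and destination vCenter Servers
$srcVC = Connect-VIServer -Server "vc1.vghetto.local"
$dstVC = Connect-VIServer -Server "vc2.vmware.local"
# Collect the MAC Addresses of the VM to be migrated
$vmMacs = Get-VM -Server $srcVC -Name "VM1" | Get-NetworkAdapter | Select-Object -ExpandProperty MacAddress
# Report any VM at the destination already using one of those MAC Addresses
Get-VM -Server $dstVC | Get-NetworkAdapter | Where-Object { $vmMacs -contains $_.MacAddress } | ForEach-Object { Write-Host "Conflict:" $_.Parent.Name $_.MacAddress }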

[Screenshot: mac-address-xvc-vmotion-2]
Hopefully with these additional measures, you can easily prevent MAC Address conflicts, which can be a pain to troubleshoot, when performing xVC-vMotions in your vSphere environment.

Categories // vSphere, vSphere 6.0 Tags // blacklist, Cross vMotion, Long Distance vMotion, mac address, vSphere 6.0, xVC-vMotion
