Search Results for: nested esxi

Migrating ESXi to a Distributed Virtual Switch with a single NIC running vCenter Server

11.18.2015 by William Lam // 29 Comments

Earlier this week I needed to test something that required a VMware Distributed Virtual Switch (VDS), and it had to be a physical setup, so Nested ESXi was out of the question. I could have used my remote lab, but given that what I was testing was a bit "experimental", I preferred using my home lab in case I needed direct console access. At home, I run ESXi on a single Apple Mac Mini, and one of the challenges with this and similar platforms (e.g. Intel NUC) is that they have only a single network interface. As you might have guessed, this is a problem when migrating from a Virtual Standard Switch (VSS) to a VDS, which requires at least two NICs.

Unfortunately, I had no other choice and needed to find a solution. After a couple of minutes of searching the web, I stumbled across this serverfault thread, which provided a partial solution to my problem. In vSphere 5.1, we introduced a feature that automatically rolls back a network configuration change if it negatively impacts network connectivity to your vCenter Server. This feature can be disabled temporarily by editing the vCenter Server Advanced Setting config.vpxd.network.rollback, which lets us bypass the single-NIC restriction; however, it does not solve the problem entirely. What ends up happening is that the single pNIC is associated with the VDS, but the VM portgroups are not migrated, which is a problem because the vCenter Server is also running on the very ESXi host it is managing and has now lost network connectivity 🙂

I lost access to my vCenter Server, and even though I could connect directly to the ESXi host, I was not able to change the VM network to the Distributed Virtual Portgroup (DVPG). This is actually expected behavior, and there is an easy workaround; let me explain. When you create a DVPG, there are three different bindings you can configure: Static, Dynamic, and Ephemeral, with Static being the default. Both Static and Dynamic DVPGs can only be managed through vCenter Server, and because of this you cannot change the VM network to a non-Ephemeral DVPG; in fact, it is not even listed when connecting with the vSphere C# Client. The simple workaround is to create a DVPG using the Ephemeral binding, which will then allow you to change the VM network of your vCenter Server; this is the last piece of the puzzle.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Here are the exact steps to take if you wish to migrate an ESXi host that has a single NIC and is also running vCenter Server from a VSS to a VDS:

Step 1 - Change the following vCenter Server Advanced Setting config.vpxd.network.rollback to false:

[Image: migrating-from-vss-to-vds-with-single-nic-1]
Note: Remember to re-enable this feature once you have completed the migration.
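
If you prefer to make this change with PowerCLI, here is a minimal sketch; the vCenter hostname is a placeholder, and if the setting has never been defined on your vCenter Server, New-AdvancedSetting can create it:

# Connect to vCenter (hostname is hypothetical)
Connect-VIServer -Server vcenter.lab.local

# Disable automated network rollback; set it back to "true" once the migration is done
$setting = Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "config.vpxd.network.rollback"
if ($setting) {
    $setting | Set-AdvancedSetting -Value "false" -Confirm:$false
} else {
    New-AdvancedSetting -Entity $global:DefaultVIServer -Name "config.vpxd.network.rollback" -Value "false" -Confirm:$false
}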

Step 2 - Create a new VDS and the associated portgroups for both your VMkernel interfaces and VM networks. For the DVPG that will be used for the vCenter Server's VM network, be sure to change the binding to Ephemeral before proceeding with the VDS migration.

[Image: migrating-from-vss-to-vds-with-single-nic-0]
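
For reference, here is a minimal PowerCLI sketch of this step; the datacenter, switch, and portgroup names are all hypothetical:

# Create the VDS, a DVPG for the management VMkernel interface,
# and an Ephemeral DVPG for the vCenter Server's VM network
$dc  = Get-Datacenter -Name "Datacenter"
$vds = New-VDSwitch -Name "VDS-01" -Location $dc
New-VDPortgroup -VDSwitch $vds -Name "DVPG-Management-Network"
New-VDPortgroup -VDSwitch $vds -Name "DVPG-VM-Network" -PortBinding Ephemeral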
Step 3 - Proceed with the normal VDS migration wizard using the vSphere Web/C# Client and ensure that you perform the correct mappings. Once completed, you should be able to connect directly to the ESXi host using either the vSphere C# Client or the ESXi Embedded Host Client to confirm that the VDS migration was successful, as seen in the screenshot below.

[Image: migrating-from-vss-to-vds-with-single-nic-2]
Note: If you forgot to perform Step 2 (which I initially did), you will need to log in to the DCUI of your ESXi host and restore the network configuration.
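
The wizard can also be approximated in PowerCLI. Below is a rough sketch reusing the hypothetical names above and assuming vmnic0/vmk0 are the single pNIC and the management VMkernel interface; migrating both in a single operation is what keeps the host reachable:

# Assumes an existing Connect-VIServer session to vCenter
$vds    = Get-VDSwitch -Name "VDS-01"
$vmhost = Get-VMHost -Name "esxi-01.lab.local"
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost
$pnic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic0"
$vmk  = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name "vmk0"
$pg   = Get-VDPortgroup -VDSwitch $vds -Name "DVPG-Management-Network"
# Migrate the only pNIC and the VMkernel interface to the VDS together
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $pnic -VMHostVirtualNic $vmk -VirtualNicPortgroup $pg -Confirm:$false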

Step 4 - The final step is to change the VM network for your vCenter Server. In my case, I am using the VCSA, and due to a bug I found in the Embedded Host Client, you will need to use the vSphere C# Client to perform this change if you are running VCSA 6.x. If you are running a Windows-based vCenter Server or VCSA 5.x, then you can use the Embedded Host Client to modify the VM network to use the new DVPG.

[Image: migrating-from-vss-to-vds-with-single-nic-3]
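
If you prefer scripting this step, something along these lines may also work with PowerCLI connected directly to the ESXi host; this is an unverified sketch with hypothetical names, and the VCSA 6.x Embedded Host Client bug noted above still applies:

# Connect straight to the host (vCenter is offline at this point)
Connect-VIServer -Server esxi-01.lab.local -User root
# Point the vCenter Server VM's vNIC at the Ephemeral DVPG created earlier
Get-VM -Name "VCSA" | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName "DVPG-VM-Network" -Confirm:$false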
Once you have completed the VM reconfiguration, you should be able to log in to your vCenter Server, which is now connected to a DVPG on a VDS backed by a single NIC on your ESXi host 😀

There is probably no good use case for this outside of home labs, but I was happy to have found a solution, and hopefully it comes in handy for others who find themselves in a similar situation and would like to use and learn more about the VMware VDS.

Categories // ESXi, Not Supported, vSphere Tags // distributed portgroup, distributed virtual switch, dvs, ESXi, notsupported, vds

UEFI PXE boot is possible in ESXi 6.0

10.09.2015 by William Lam // 21 Comments

A couple of days ago I received an interesting question from colleague Paudie O'Riordan, who works over in our Storage and Availability Business Unit at VMware. He was helping a customer who was interested in PXE booting/installing ESXi using UEFI, which is short for Unified Extensible Firmware Interface. Historically, we only supported PXE booting/installing ESXi using the BIOS firmware. You could also boot an ESXi ISO using UEFI, but we did not support UEFI when it came to booting/installing ESXi over the network using PXE and other variants such as iPXE/gPXE.

For those of you who may not know, UEFI is meant to eventually replace the legacy BIOS firmware. There are many benefits to using UEFI over BIOS; a recent article that does a good job of explaining the differences can be found here. In doing some research and pinging a few of our ESXi experts internally, I found that UEFI PXE boot is actually possible with ESXi 6.0. Not only is it possible to PXE boot/install ESXi 6.x using UEFI, but the changes in the EFI boot image are also backwards compatible, which means you could potentially PXE boot/install an older release of ESXi.

Note: Auto Deploy still requires legacy BIOS firmware; UEFI is not currently supported. This is something we will be addressing in the future, so stay tuned.

Not having worked with ESXi and UEFI before, I thought this would be a great opportunity to give it a try in my homelab and document the process in case others are interested. For my PXE server, I am using CentOS 6.7 Minimal (64-Bit), which runs both the DHCP and TFTP services, but you can use any distro that you are comfortable with.

[Read more...]

Categories // Automation, ESXi, vSphere 6.0 Tags // bios, boot.cfg, bootx64.efi, dhcp, ESXi 6.0, kickstart, mboot.efi, pxe boot, tftp, UEFI, vSphere 6.0

How to deploy vSphere 6.0 (VCSA & ESXi) on vCloud Director and vCloud Air?

04.27.2015 by William Lam // 13 Comments

In case you missed the awesome news last Friday, George Kobar, who works over in the vCloud Air team, shared a really cool solution demonstrating how to efficiently set up Nested ESXi in vCloud Air, including support for inter-VM guest communication without requiring Promiscuous Mode. Nested ESXi has been possible on vCloud Air for quite some time; in fact, when I was first granted access I had to try it out myself and wrote about it here. The great thing about vCloud Air is that it runs directly on vSphere, which means you get all the added benefits of the underlying vSphere platform, including things like VHV (Virtual Hardware-Assisted Virtualization), ensuring that your Nested ESXi VM and its virtual workloads run as efficiently and performantly as possible. If you are new to vCloud Air, I would recommend checking out this tutorial, which covers some of the basic operations.

Given the updated news regarding Nested ESXi on vCloud Air, I am sure many of you who require inter-VM guest communication are excited to try out this new trick. I figured most of you will be interested in trying out vSphere 6.0, especially with some of the new capabilities like SMP-FT and VSAN 6.0, which run perfectly fine in a Nested ESXi environment for demo and learning purposes, as shown here and here. I thought I would put together a quick guide on how to set up both Nested ESXi 6.0 and the new VCSA 6.0 (which has a few minor caveats but can definitely run in a vCloud Director or vCloud Air environment).

[Image: nested-esxi-6.0-vcloud-air]
[Image: vcsa--6.0-vcloud-air]
Disclaimer: The usual caveat ... Nested ESXi is not officially supported by VMware

ESXi 6.0

There is no version of vCloud Director for the Enterprise that supports vSphere 6.0, which means there is no direct support for the latest virtual hardware release (version 11) or for the ESXi 6.x guestOS type. This is also true for vCloud Air, which currently runs on vSphere 5.5. For this reason, you will need to upload a VM that has been configured with ESXi 5.x as the guestOS type when looking to install ESXi 6.0. Once vCloud Air supports vSphere 6.0, you can upload a VM created with the ESXi 6.x guestOS type.

The easiest way to create a Nested ESXi VM in a vCloud Director or vCloud Air environment is to simply import a VM that has already been configured with the ESXi guestOS type (this does not need to be an already-installed image). To help expedite the deployment of Nested ESXi in vCloud Air, I have built several Nested ESXi OVF templates that you can use. You will also need to upload an ESXi 6.0 ISO, or whichever version of ESXi you plan on running, since both ESX(i) 4.x and 5.x are possible.
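
If you would rather build such a shell VM yourself, here is a minimal PowerCLI sketch against a regular vSphere environment; the VM name, hardware sizing, and export path are placeholders, and vmkernel5Guest is the guestOS identifier for ESXi 5.x:

# Assumes an existing Connect-VIServer session to a vSphere environment
# Create an empty shell VM with the ESXi 5.x guestOS type (no installation required)
$vmhost = Get-VMHost | Select-Object -First 1
$vm = New-VM -Name "Nested-ESXi" -VMHost $vmhost -NumCpu 2 -MemoryGB 8 -DiskGB 8 -GuestId "vmkernel5Guest" -CD
# Export it as an OVA that can then be uploaded to vCloud Director/vCloud Air
Export-VApp -VM $vm -Destination "C:\ovfs" -Format Ova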

VCSA 6.0

One of the challenges I came across when testing the new VCSA 6.0 in a vCloud Director-based environment, which also affects vCloud Air, is that they do not support a few capabilities within the OVF specification, namely Deployment Options. Due to this limitation and a few others, we cannot directly import the VCSA 6.0 OVA into vCloud Director. Luckily, there is a workaround, which I had looked into a few months before the GA of vSphere 6.0, and below are the steps to import a VCSA 6.0 OVA into a vCloud Director environment. If you are looking to run VCSA 5.5, you can directly import the OVA without going through these steps.

Step 1 - Download and extract the contents of the VCSA 6.0 ISO (Build 2656757 was used)

Step 2 - Convert the VCSA 6.0 OVA located in vcsa/vmware-vcsa into an OVF by using either ovftool, tar, or a tool like 7zip.

ovftool --sourceType=OVA vmware-vcsa vmware-vcsa.ovf

Next, you will need to make several modifications to the OVF file. I do have to warn you, there are a few tweaks, and I highly recommend that you use the OVF templates that I have already created for you. Make sure to also delete the .mf (manifest) file, since you are making changes to the OVF; otherwise, OVF validation will throw an error because the files have been modified.
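
For example, in PowerShell, assuming the file names from the ovftool conversion above:

# The manifest's checksums no longer match once the OVF has been edited
Remove-Item vmware-vcsa.mf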

To save you some time, pain, and trouble, I have pre-created the following three OVFs (based on the vSphere 6.0 GA release of the VCSA), which contain all the modifications mentioned in Step 3; you can download them and jump straight to Step 4:

  • VCSA 6.0 Embedded Tiny OVF
  • VCSA 6.0 vCenter Server Management Node Tiny ONLY OVF
  • VCSA 6.0 Platform Services Controller Node Tiny ONLY OVF

Step 3 - The first modification is to locate the "References" tag at the top of the OVF file and remove the line containing the RPM reference. Afterwards, it should look something like the following:

  <References>
    <ovf:File ovf:href="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759_OVF10-file1.json" ovf:id="layout.json_id" ovf:size="5756"/>
    <File ovf:href="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759_OVF10-disk1.vmdk" ovf:id="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759-system.vmdk_id" ovf:size="524469248"/>
    <File ovf:href="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759_OVF10-disk2.vmdk" ovf:id="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759-cloud-components.vmdk_id" ovf:size="1369250304"/>
    <File ovf:href="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759_OVF10-disk3.vmdk" ovf:id="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759-swap.vmdk_id" ovf:size="74240"/>
  </References>

In addition, depending on the method you used to convert the OVA to an OVF, you may also need to rename the JSON and disk file names in this section to match the extracted contents.

The second modification is to delete the following section from the OVF, which starts with MigrationUpgradeRequisitesSection:

<vmw:MigrationUpgradeRequisitesSection ovf:required="false">
<Info>Files necessary for migration-based upgrade.</Info>
<vmw:Requisite ovf:fileRef="VMware-vCenter-Server-Appliance-6.0.0.5110-2656759-upgrade-requirements.rpm_id" vmw:purpose="requirements"/>
</vmw:MigrationUpgradeRequisitesSection>
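
If you would rather script these first two deletions than hand-edit the XML, here is a rough PowerShell sketch; the regular expressions assume the file layout shown above:

$ovf = Get-Content vmware-vcsa.ovf -Raw
# Drop the RPM file reference from the References section
$ovf = $ovf -replace '(?m)^[ \t]*<(ovf:)?File [^>]*upgrade-requirements\.rpm[^>]*/>[ \t]*\r?\n', ''
# Drop the entire MigrationUpgradeRequisitesSection block
$ovf = $ovf -replace '(?s)\s*<vmw:MigrationUpgradeRequisitesSection.*?</vmw:MigrationUpgradeRequisitesSection>', ''
Set-Content vmware-vcsa.ovf -Value $ovf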

The third modification is to specify the deployment option type that you wish to use. VCSA 6.0 supports the following: embedded, infrastructure (PSC), and management (VC). You will need to locate the line containing guestinfo.cis.deployment.node.type and set the value property to one of the three options.

<Property ovf:key="guestinfo.cis.deployment.node.type" ovf:type="string" ovf:userConfigurable="false" ovf:value="infrastructure">
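
Scripted, that change might look like this in PowerShell, using embedded as an example:

(Get-Content vmware-vcsa.ovf -Raw) -replace '(guestinfo\.cis\.deployment\.node\.type[^>]*ovf:value=")[^"]*', '${1}embedded' | Set-Content vmware-vcsa.ovf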

The fourth and final modification is to specify the deployment size that you wish to use for your VCSA; there are nine different supported options:

  • Embedded
    • tiny
    • small
    • medium
    • large
  • vCenter Server Management Node (only)
    • management-tiny
    • management-small
    • management-medium
    • management-large
  • Platform Services Controller Node (only)
    • infrastructure

Since neither vCloud Director nor vCloud Air supports the Deployment Options OVF capability, you will need to specify the deployment you wish to use. Locate the DeploymentOptionSection, and in the first entry, where it shows "default=true", change the id to match one of the entries shown above. For example, if you wanted an Embedded VCSA deployment using the tiny size, you would specify "tiny" in the id field.

  <DeploymentOptionSection>
    <Info>List of profiles</Info>
    <Configuration ovf:default="true" ovf:id="tiny">

Once you have selected the type of deployment, you will also need to remove ALL entries referencing the other deployment types; otherwise, it will always deploy an Embedded deployment.
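
Here is a similar PowerShell sketch for pointing the default Configuration entry at the tiny profile; note that it does not remove the other Configuration entries, which you still need to delete as just described:

(Get-Content vmware-vcsa.ovf -Raw) -replace '(<Configuration ovf:default="true" ovf:id=")[^"]*', '${1}tiny' | Set-Content vmware-vcsa.ovf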

Note: I would like to give a big shout-out to Doug Baer, who works over in the VMware HOL team; he actually discovered the initial issue with the Deployment Options and found the workaround of removing the other disk references. Without it, you would end up needing ~2TB of storage, as VCD tries to aggregate all nine deployments into one! When I initially worked out the steps to deploy a VCSA 6.0, I had only used the Embedded deployment option.

Step 4 - Lastly, you will need to change the "capacity" property shown below from 1303 to 1306, due to a known vCloud Air issue documented in KB2094271:

<Disk ovf:capacity="1303" ovf:capacityAllocationUnits="byte * 2^20" ovf:diskId="cloudcomponents" ovf:fileRef="VMware-vCenter-Server-Appliance-6.0.0.5110-2656759-cloud-components.vmdk_id" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="1365573632"/>
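
In PowerShell, this is a simple one-line substitution, for example:

# Work around the vCloud Air disk-capacity issue from KB2094271
(Get-Content vmware-vcsa.ovf) -replace 'ovf:capacity="1303"', 'ovf:capacity="1306"' | Set-Content vmware-vcsa.ovf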

Step 5 - You are now ready to upload your VCSA 6.0 OVF to your vCloud Director or vCloud Air environment.

Note: For vCloud Air, you will need to use the "Manage in vCloud Director" link to upload the OVF, as the vCloud Air interface does not support direct OVA/OVF uploads.

Step 6 - When you are ready to deploy your VCSA, one very important step is to edit a few of the OVF properties on the VM before powering it on. If you power on the VCSA before performing this step, the system will need to be deleted and re-deployed, as the OVF properties are read in only on the initial first boot, which is required for proper configuration.

  • Make sure to disable guest customization: to do so, right-click on the VM, select Guest OS Customization, and uncheck "Enable guest customization"
  • To edit the OVF properties, right-click on the VM and select Properties. Click on Guest Properties; you will ONLY be editing the following three sections:

Networking Configuration

[Image: vcsa-6.0-networking-configurations]
System Configuration

[Image: vcsa-6.0-system-configurations]
SSO Configuration

[Image: vcsa-6.0-sso-configuration]
For an Embedded Configuration, you will need to edit the following (below is an example of the data input):

Host Network IP Address: 192.168.110.100
Host Network IP Address Family: ipv4
Host Network DNS Servers: 192.168.110.10
Host Network Default Gateway: 192.168.110.1
Host Network Mode: static
Host Network Identity: vc-01a.corp.local
Host Network Prefix: 24
Tools-based Time Synchronization Enable: check OR NTP Servers
Root Password: VMware1!
SSH Enabled: check/uncheck
Directory Domain Name: vghetto.local
New Identity Domain: check
Directory Password: VMware1!
Site Name: virtuallyGhetto

For a vCenter Server Management Node only, you will need to edit the following (below is an example of the data input):

Host Network IP Address: 192.168.110.100
Host Network IP Address Family: ipv4
Host Network DNS Servers: 192.168.110.10
Host Network Default Gateway: 192.168.110.1
Host Network Mode: static
Host Network Identity: vc-01a.corp.local
Host Network Prefix: 24
Tools-based Time Synchronization Enable: check OR NTP Servers
Platform Services Controller: psc-01a.corp.local
Root Password: VMware1!
SSH Enabled: check/uncheck
Directory Domain Name: vghetto.local
New Identity Domain: uncheck
Directory Password: VMware1!
Site Name: virtuallyGhetto

For a Platform Services Controller Node only, you will need to edit the following (below is an example of the data input):

Host Network IP Address: 192.168.110.110
Host Network IP Address Family: ipv4
Host Network DNS Servers: 192.168.110.10
Host Network Default Gateway: 192.168.110.1
Host Network Mode: static
Host Network Identity: psc-01a.corp.local
Host Network Prefix: 24
Tools-based Time Synchronization Enable: check OR NTP Servers
Root Password: VMware1!
SSH Enabled: check/uncheck
Directory Domain Name: vghetto.local
New Identity Domain: check
Directory Password: VMware1!
Site Name: virtuallyGhetto

If everything was deployed successfully, you should now have a VCSA 6.0 instance running in either your vCloud Director or vCloud Air environment.

Categories // Automation, OVFTool, vCloud Air, VCSA, vSphere 6.0 Tags // ova, ovf, ovftool, vcd, vcloud air, vcloud director, VCSA, vcva, vSphere 6.0

