ACPI motherboard layout requires EFI - Considerations for switching VM firmware in vSphere 8 

01.11.2023 by William Lam // Leave a Comment

One of the important settings to consider when creating a new Virtual Machine in vSphere is the VM firmware, which can be either BIOS or EFI and is configured under VM Options->Boot Options->Firmware. After you select the desired guest operating system (GOS), vSphere defaults to a recommended firmware type, which can be overridden by the user. Ultimately, the selection of the VM firmware should be determined by what your GOS supports.

If you ever need to change the VM firmware, you will typically need to re-install the GOS: just as on a physical server, the GOS does not understand the firmware change and more than likely will no longer boot. This has always been the behavior from the GOS point of view.

For a net new VM created prior to vSphere 8, if you had configured the VM with EFI firmware, had not yet installed a GOS, and then realized you needed to change the firmware to BIOS, you could easily do so using the vSphere UI or API and then install your OS. In vSphere 8, specifically when using the latest Virtual Machine Compatibility (vHW20), you can no longer simply switch the VM firmware after the initial VM creation, especially if you started with EFI firmware and wish to change it to BIOS.

In doing so, you will come across the following error message:

ACPI motherboard layout requires EFI. Failed to start the virtual machine. Module DevicePowerOnEarly power on failed.
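For context, the firmware selection ultimately lands in the VM's VMX file as the firmware key ("bios" when the key is absent). Below is a minimal sketch of how one might inspect and flip that key from an ESXi shell; the datastore path and VM name are hypothetical, and on a vHW20 VM created with EFI, flipping it back to BIOS is exactly the kind of change that now produces the error above.

# Inspect the firmware type recorded in the VMX file
# (datastore path and VM name are hypothetical examples)
grep -i '^firmware' /vmfs/volumes/datastore1/MyVM/MyVM.vmx

# With the VM powered off, the key could previously be switched like so
sed -i 's/firmware = "efi"/firmware = "bios"/' /vmfs/volumes/datastore1/MyVM/MyVM.vmx

# Ask hostd to re-read the edited VMX (vmid from: vim-cmd vmsvc/getallvms)
vim-cmd vmsvc/reload <vmid>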


Categories // vSphere 8.0 Tags // acpi, bios, efi, ESXi 8.0, i440bx, vNUMA, vSphere 8.0

Adding custom VSAN BIOS splash screen to the Intel NUC

03.06.2016 by William Lam // 5 Comments

One of the last things I wanted to look into after setting up my new VSAN 6.2 home lab on the new 6th Gen Intel NUC was adding a custom BIOS splash screen to give my system a personal touch. Updating the BIOS splash screen would require flashing the BIOS itself, which gave me some concerns after hearing about the BIOS v33 issue in which the M.2 slot would no longer be detected after the update. Although there was a simple workaround after the update, I still wanted to be cautious. Over the weekend I noticed that Intel had released BIOS v36 for the Intel NUC, which resolved the M.2 issue among a few others. I decided to give it a shot and hoped that I would not brick my NUC.

I am happy to say that I was successful in updating to the latest Intel NUC BIOS and as you can see from the screenshot below, I was also able to replace the default Intel BIOS splash screen with a Captain VSAN BIOS splash screen (TV is 46" for those wondering) 🙂

[Image: custom-vsan-bios-splash-screen-for-intel-nuc-0]

The process for building and customizing your Intel NUC BIOS is relatively straightforward, but because I waited until after I had everything installed, it ended up being a bit more work than I had hoped. To customize your BIOS, Intel provides a Microsoft Windows-only utility called Intel Integrator Toolkit. The easiest way to build and update your BIOS is to start off by installing Microsoft Windows on the Intel NUC itself, which then allows you to easily flash the BIOS using the executable that the toolkit generates. Since I had already consumed both of my SSDs for VMware VSAN and Microsoft Windows does not allow you to install its OS directly onto a USB device, I had to use this method here to install a bootable version of Microsoft Windows onto the USB device, since I did not want to blow away my VSAN setup.

OK, so now onto the cool stuff. Below are the instructions on how to build and customize your BIOS for the Intel NUC. If you would like to use the exact same BIOS splash screen, update to the latest BIOS v36, and skip the hassle, I have made my custom VSAN BIOS image available here. You just need to download the executable, run it on the Intel NUC itself (which must be running Microsoft Windows; I used 8.1), and then follow the screens to flash your BIOS.

Step 1 - Download the following two packages and transfer them to the Microsoft Windows image running on your NUC:

  • Intel Integrator Toolkit
  • Intel NUC BIOS v36 (SYSKLi35-86A)

Step 2 - Install the Intel Integrator Toolkit and then start the program

Step 3 - Select the "Customize a BIOS file" option and load either the custom VSAN BIOS image which I have made available here OR the NUC BIOS v36 file you downloaded earlier.

[Image: custom-vsan-bios-splash-screen-for-intel-nuc-1]

Step 4 - In the lower left-hand corner, browse for the graphic image that you wish to use for your BIOS splash screen (images with a black background work best). For those interested, you can find the Captain VSAN image that I used here. The tool actually supports several image formats in addition to the default BMP, such as JPEG and PNG; you just need to change the extension type. There is a size limitation, but the nice thing about the tool is that it offers to compress the image when it detects that it is too large. Make sure to change the image for all four options by clicking on the drop-down wizard. I thought I only had to replace the first image, but it looks like the other versions of the splash screen are also used, so it is best to just replace them all. You also have the option of changing other default settings in the BIOS; feel free to click on the tooltip for details on each of the options.

[Image: custom-vsan-bios-splash-screen-for-intel-nuc-2]

Step 5 - Once you are done customizing your BIOS, save your changes and the tool will produce a single Windows executable (SY0036.exe) which you run on the NUC itself to flash the BIOS. You will be prompted with a couple of questions, and once the process begins, the system will restart and you will need to confirm one more time before the imaging process starts. If everything was successful, you should now see a new BIOS splash screen replacing the default Intel image. There is a good chance you may go through this process a few times depending on whether you are happy with the splash screen display; I think it took me about three tries. Hope this helps anyone looking to add that personal touch to their home lab!

Categories // ESXi, VSAN, vSphere 6.0 Tags // bios, homelab, Intel NUC, splash screen, Virtual SAN, VSAN

UEFI PXE boot is possible in ESXi 6.0

10.09.2015 by William Lam // 21 Comments

A couple of days ago I received an interesting question from fellow colleague Paudie O'Riordan, who works over in our Storage and Availability Business Unit at VMware. He was helping a customer who was interested in PXE booting/installing ESXi using UEFI, which is short for Unified Extensible Firmware Interface. Historically, we only had support for PXE booting/installing ESXi using the BIOS firmware. You could also boot an ESXi ISO using UEFI, but we did not have support for UEFI when it came to booting/installing ESXi over the network using PXE and other variants such as iPXE/gPXE.

For those of you who may not know, UEFI is meant to eventually replace the legacy BIOS firmware. There are many benefits to using UEFI over BIOS; a recent article that does a good job of explaining the differences can be found here. In doing some research and pinging a few of our ESXi experts internally, I found that UEFI PXE boot support is actually possible with ESXi 6.0. Not only is it possible to PXE boot/install ESXi 6.x using UEFI, but the changes in the EFI boot image are also backwards compatible, which means you could potentially PXE boot/install an older release of ESXi.

Note: Auto Deploy still requires legacy BIOS firmware; UEFI is not currently supported. This is something we will be addressing in the future, so stay tuned.

Not having worked with ESXi and UEFI before, I thought this would be a great opportunity to give it a try in my homelab, which would also allow me to document the process in case others were interested. For my PXE server, I am using CentOS 6.7 Minimal (64-Bit), which runs both the DHCP and TFTP services, but you can use any distro that you are comfortable with.

Step 1 - Download and install CentOS 6.7 Minimal (64-Bit)

Step 2 - Login to the CentOS system via terminal and perform the following commands which will update the system and install the DHCP and TFTP services:

yum -y update
yum -y install dhcp tftp-server

Step 3 - Download and upload an ESXi 6.x ISO to the CentOS system. In the example here, I am using the latest ESXi 6.0 Update 1 image (VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso).

Step 4 - Extract the contents of the ESXi ISO to the TFTP directory by running the following commands:

mount -o loop VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso /mnt/
cp -rf /mnt/ /var/lib/tftpboot/esxi60u1
umount /mnt/
rm VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso
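
Before moving on, it is a good idea to quickly verify that the extraction worked and that the files the next steps depend on are present:

ls -l /var/lib/tftpboot/esxi60u1/boot.cfg
ls -l /var/lib/tftpboot/esxi60u1/efi/boot/bootx64.efi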

Step 5 - Copy the custom ESXi bootx64.efi bootloader image to the root of the extracted ESXi directory by running the following command:

cp /var/lib/tftpboot/esxi60u1/efi/boot/bootx64.efi /var/lib/tftpboot/esxi60u1/mboot.efi

Step 6 - Next, we need to edit our DHCP configuration file /etc/dhcp/dhcpd.conf to point our hosts to the mboot.efi image. Below is an example configuration, which you will need to adapt to the network configuration of your environment. If you are running the TFTP server on another system, you will need to change the next-server property to the address of that system; otherwise, just specify the same IP address as the DHCP server.

default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
authoritative;
log-facility local7;
allow booting;
allow bootp;
option client-system-arch code 93 = unsigned integer 16;

class "pxeclients" {
   match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
   # specifies the TFTP Server
   next-server 192.168.1.180;
   if option client-system-arch = 00:07 or option client-system-arch = 00:09 {
      # PXE over EFI firmware
      filename = "esxi60u1/mboot.efi";
   } else {
      # PXE over BIOS firmware
      filename = "esxi60u1/pxelinux.0";
   }
}

subnet 192.168.1.0 netmask 255.255.255.0 {
    option domain-name "primp-industries.com";
    option domain-name-servers 192.168.1.1;
    host vesxi60u1 {
        hardware ethernet 00:50:56:ad:f7:4b;
        fixed-address 192.168.1.199;
    }
}
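
Before restarting anything, you can validate the syntax of the new configuration using dhcpd's built-in test mode:

dhcpd -t -cf /etc/dhcp/dhcpd.conf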

Step 7 - Next, we need to edit our TFTP configuration file /etc/xinetd.d/tftp to enable the TFTP service by changing the disable line from yes to no:

disable = no
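
For reference, below is roughly what the complete /etc/xinetd.d/tftp file looks like on CentOS 6 after the change; aside from the disable line, these are the distribution defaults and should not need modification:

service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        disable         = no
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}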

Step 8 - By default, ESXi's boot.cfg configuration file refers to all packages under the / path. We need to remove that leading slash, which we can easily do by running the following command:

sed -i 's/\///g' /var/lib/tftpboot/esxi60u1/boot.cfg
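
To illustrate what that sed command does, here is an abbreviated before and after; the change simply strips the leading / from each path so the files resolve relative to the esxi60u1 TFTP prefix:

# Before
kernel=/tboot.b00
modules=/b.b00 --- /jumpstrt.gz --- /useropts.gz
# After
kernel=tboot.b00
modules=b.b00 --- jumpstrt.gz --- useropts.gz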

Step 9 - Finally, we need to restart both the TFTP (under xinetd) and DHCP services. For testing purposes, I have also disabled the firewall for IPv4/IPv6; in a real production environment, you will of course want to open only the ports required for TFTP/DHCP.

/etc/init.d/xinetd restart
/etc/init.d/dhcpd restart
/etc/init.d/iptables stop
/etc/init.d/ip6tables stop
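
If this PXE server needs to survive a reboot, you may also want to mark both services to start automatically (the iptables changes above are a test-only shortcut and deliberately not persisted):

chkconfig xinetd on
chkconfig dhcpd on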

We can now boot up either a physical host that is configured to use UEFI firmware OR we can easily test using Nested ESXi. The only change we need to make to our ESXi VM is setting the firmware mode from BIOS to EFI, which can be done using the vSphere Web/C# Client as shown in the two screenshots below:

[Images: uefi-pxe-boot-esxi-6.0-0, uefi-pxe-boot-esxi-6.0-1]

If everything was successfully configured, we should now see our system PXE boot into the ESXi installer using UEFI, as seen in the screenshot below.

[Image: uefi-pxe-boot-esxi-6.0-2]

If you run into any issues, I would recommend checking the system logs on your PXE server (/var/log/messages) to see if there are any errors. You can also troubleshoot manually with a tftp client: connect to your TFTP server and make sure you can pull down files such as boot.cfg by running the following commands:

tftp [PXE-SERVER]
get esxi60u1/boot.cfg

For additional resources on scripted installation of ESXi, also referred to as Kickstart, be sure to take a look here. I would also like to give a big shoutout and thanks to Tim Mann, one of the engineers responsible for adding UEFI support to ESXi, for answering some of my questions while I was setting up my environment.
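
If you do explore Kickstart, a minimal ks.cfg gives a feel for what a scripted ESXi installation looks like; the sketch below is illustrative only (the root password and network settings are placeholders, not from this setup), and you would point the installer at it by appending ks=[URL] to the kernelopt line in boot.cfg:

vmaccepteula
install --firstdisk --overwritevmfs
rootpw VMware1!
network --bootproto=dhcp --device=vmnic0
reboot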

Categories // Automation, ESXi, vSphere 6.0 Tags // bios, boot.cfg, bootx64.efi, dhcp, efi, esxi 6.0, kickstart, mboot.efi, pxe boot, tftp, UEFI, vSphere 6.0
