
Quick Tip - Easily identify source DHCP server using ESXi DCUI

11.20.2020 by William Lam // 1 Comment

While installing ESXi 7.0 Update 1 on one of my physical systems, I happened to be in the "Configure Management Network" section of the ESXi Direct Console UI (DCUI) and noticed something I had never seen before. As shown in the screenshot, it now displays the IP Address of the DHCP server from which ESXi received its DHCP lease.


I had not noticed this before, and after asking on Twitter, it looks like this is a new enhancement that was added fairly recently. I did not see it in one of my ESXi 6.7 Update 3 deployments; it may have come in a later patch, but it is definitely present in ESXi 7.0 or greater. Not only is this a quick and easy way to identify the DHCP server being used, but it will certainly come in handy if you ever need to track down an unexpected rogue DHCP server, as pointed out by John.

Trying to get rogue DHCP servers under control?
Remember kids, DHCP Snooping saves lives! https://t.co/FKPgKzI9In

— John Nicholson (@Lost_Signal) November 20, 2020
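
If you are on an SSH session rather than at the console, one rough alternative is to grep the DHCP exchange out of the ESXi syslog; the log file location and message format here are assumptions that can vary by build, so treat this as a sketch to verify:

# Look for recent DHCP client activity (lease offers/acks) in the ESXi syslog
grep -i dhcp /var/log/syslog.log | tail -20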

Categories // ESXi, vSphere 6.7, vSphere 7.0 Tags // dcui, dhcp

UEFI PXE boot is possible in ESXi 6.0

10.09.2015 by William Lam // 21 Comments

A couple of days ago I received an interesting question from fellow colleague Paudie O'Riordan, who works over in our Storage and Availability Business Unit at VMware. He was helping a customer who was interested in PXE booting/installing ESXi using UEFI, which is short for Unified Extensible Firmware Interface. Historically, we only had support for PXE booting/installing ESXi using the BIOS firmware. You could also boot an ESXi ISO using UEFI, but we did not have support for UEFI when it came to booting/installing ESXi over the network using PXE and other variants such as iPXE/gPXE.

For those of you who may not know, UEFI is meant to eventually replace the legacy BIOS firmware. There are many benefits to using UEFI over BIOS; a recent article that does a good job of explaining the differences can be found here. In doing some research and pinging a few of our ESXi experts internally, I found that UEFI PXE boot support is actually possible with ESXi 6.0. Not only is it possible to PXE boot/install ESXi 6.x using UEFI, but the changes in the EFI boot image are also backwards compatible, which means you could potentially PXE boot/install an older release of ESXi.

Note: Auto Deploy still requires legacy BIOS firmware; UEFI is not currently supported. This is something we will be addressing in the future, so stay tuned.

Not having worked with ESXi and UEFI before, I thought this would be a great opportunity to give it a try in my homelab, which would also allow me to document the process in case others were interested. For my PXE server, I am using CentOS 6.7 Minimal (64-Bit), which runs both the DHCP and TFTP services, but you can use any distro that you are comfortable with.

Step 1 - Download and install CentOS 6.7 Minimal (64-Bit)

Step 2 - Login to the CentOS system via terminal and perform the following commands which will update the system and install the DHCP and TFTP services:

yum -y update
yum -y install dhcp tftp-server
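
Optionally, and not part of the original steps, you can make both services start automatically on boot using the standard CentOS 6 SysV tools (TFTP runs under xinetd):

chkconfig dhcpd on
chkconfig xinetd on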

Step 3 - Download an ESXi 6.x ISO and upload it to the CentOS system. In the example here, I am using the latest ESXi 6.0 Update 1 image (VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso).
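
For example, assuming the ISO was downloaded to your workstation, you could upload it with scp; the IP address here is the same PXE/TFTP server address used later in this post:

# Copy the ISO to the PXE server's root home directory
scp VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso root@192.168.1.180:/root/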

Step 4 - Extract the contents of the ESXi ISO to the TFTP directory by running the following commands:

mount -o loop VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso /mnt/
cp -rf /mnt/ /var/lib/tftpboot/esxi60u1
umount /mnt/
rm VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso

Step 5 - Copy the custom ESXi bootx64.efi bootloader image to the root of the extracted ESXi directory, renaming it mboot.efi, by running the following command:

cp /var/lib/tftpboot/esxi60u1/efi/boot/bootx64.efi /var/lib/tftpboot/esxi60u1/mboot.efi

Step 6 - Next, we need to edit our DHCP configuration file, /etc/dhcp/dhcpd.conf, to point our hosts to the mboot.efi image. Below is an example configuration, which you will need to adapt to the network configuration of your environment. If you are running the TFTP server on another system, change the next-server property to the address of that system; otherwise, specify the same IP Address as the DHCP server.

default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
authoritative;
log-facility local7;
allow booting;
allow bootp;
option client-system-arch code 93 = unsigned integer 16;

class "pxeclients" {
   match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
   # specifies the TFTP Server
   next-server 192.168.1.180;
   if option client-system-arch = 00:07 or option client-system-arch = 00:09 {
      # PXE over EFI firmware
      filename = "esxi60u1/mboot.efi";
   } else {
      # PXE over BIOS firmware
      filename = "esxi60u1/pxelinux.0";
   }
}

subnet 192.168.1.0 netmask 255.255.255.0 {
    option domain-name "primp-industries.com";
    option domain-name-servers 192.168.1.1;
    host vesxi60u1 {
        hardware ethernet 00:50:56:ad:f7:4b;
        fixed-address 192.168.1.199;
    }
}
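
Before restarting anything, you can catch typos in the file using dhcpd's built-in configuration test mode:

# Parse the config and report syntax errors without starting the daemon
dhcpd -t -cf /etc/dhcp/dhcpd.conf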

Step 7 - Next, we need to edit our TFTP configuration file /etc/xinetd.d/tftp to enable the TFTP service by changing the disable line's value from yes to no:

disable = no
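
For reference, the stock CentOS 6 /etc/xinetd.d/tftp looks roughly like the following once enabled (your exact file may differ slightly):

service tftp
{
        socket_type             = dgram
        protocol                = udp
        wait                    = yes
        user                    = root
        server                  = /usr/sbin/in.tftpd
        server_args             = -s /var/lib/tftpboot
        disable                 = no
        per_source              = 11
        cps                     = 100 2
        flags                   = IPv4
}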

Step 8 - By default, ESXi's boot.cfg configuration file refers to all packages under the / path. We need to remove that reference, which we can easily do by running the following command:

sed -i 's/\///g' /var/lib/tftpboot/esxi60u1/boot.cfg
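
To see what this changes: the kernel= and modules= entries in boot.cfg lose their leading slashes, so the firmware's TFTP requests resolve relative to the esxi60u1 prefix directory (the lines below are illustrative and abbreviated):

# Before the sed: kernel=/tboot.b00 and modules=/b.b00 --- /jumpstrt.gz --- ...
# After the sed:  kernel=tboot.b00  and modules=b.b00 --- jumpstrt.gz --- ...
grep -E '^(kernel|modules)=' /var/lib/tftpboot/esxi60u1/boot.cfg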

Step 9 - Finally, we need to restart both the TFTP (under xinetd) and DHCP services. For testing purposes, I have also disabled the firewall for IPv4/IPv6; in a real production environment, you will want to open only the ports required for TFTP/DHCP (see the sketch after these commands).

/etc/init.d/xinetd restart
/etc/init.d/dhcpd restart
/etc/init.d/iptables stop
/etc/init.d/ip6tables stop
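
If you would rather leave the firewall running, a minimal sketch (not from the original post; adjust to your own ruleset) that permits just the DHCP and TFTP ports on CentOS 6 might look like this:

# Allow inbound DHCP (67/udp) and TFTP (69/udp) requests
iptables -I INPUT -p udp --dport 67 -j ACCEPT
iptables -I INPUT -p udp --dport 69 -j ACCEPT
# TFTP data transfers come back from an ephemeral port, so load the conntrack
# helper and rely on the stock ESTABLISHED,RELATED rule to admit them
modprobe nf_conntrack_tftp
service iptables save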

We can now boot up either a physical host that is configured to use UEFI firmware, or we can easily test using Nested ESXi. The only change we need to make to our ESXi VM is setting the firmware mode from BIOS to EFI, which can be done using the vSphere Web/C# Client as shown in the two screenshots below:

[Screenshots: setting the ESXi VM firmware from BIOS to EFI in the vSphere Web Client and C# Client]
If everything was configured successfully, we should now see our system PXE boot into the ESXi installer using UEFI, as seen in the screenshot below.

[Screenshot: ESXi installer PXE booted over UEFI]
If you run into any issues, I recommend checking the system logs on your PXE server (/var/log/messages) to see if there are any errors. You can also troubleshoot by manually using a TFTP client to connect to your TFTP server and verify that you can pull down files such as boot.cfg, by running the following commands:

tftp [PXE-SERVER]
get esxi60u1/boot.cfg

For additional resources on scripted installations of ESXi, also referred to as Kickstart, be sure to take a look here. I would also like to give a big shoutout and thanks to Tim Mann, one of the engineers responsible for adding UEFI support to ESXi, for answering some of my questions while I was setting up my environment.

Categories // Automation, ESXi, vSphere 6.0 Tags // bios, boot.cfg, bootx64.efi, dhcp, efi, esxi 6.0, kickstart, mboot.efi, pxe boot, tftp, UEFI, vSphere 6.0

vGhetto Lab #NotSupported Slides Posted

10.17.2012 by William Lam // Leave a Comment

As promised, here are the slides from my #NotSupported session at VMworld Europe, where I continued the theme of home labs with my vGhetto Lab #NotSupported presentation.

The idea behind the vGhetto Lab is to easily set up a vSphere home lab without too much effort and, most importantly, using as few resources as possible. This is all accomplished with the following:

  • Physical host running ESXi 5.x
  • ESXi 5.x offline depot image
  • VCSA 5.x (vCenter Server Appliance)

In addition to the above, you will also need to download the vGhetto Lab scripts, which are shown in the video.
Here are some additional details on how to quickly get set up with your own vGhetto Lab.

Step 1 - After installing ESXi 5.x on your physical host, you will need to deploy the VCSA. Make sure you add a second network interface to the VCSA as shown in the presentation. In my example, I created another vSwitch with no uplinks and a portgroup for the Auto Deploy network.

Step 2 - Once the VCSA is powered on, go ahead and SCP the scripts to the virtual machine. The first script we need to execute is setupNetwork.sh, in which you will need to edit the following variables:

VCENTER_IP_ADDRESS_1=192.168.1.150
VCENTER_NETMASK_1=255.255.255.0
VCENTER_GATEWAY=192.168.1.1
VCENTER_IP_ADDRESS_2=172.30.0.1
VCENTER_NETMASK_2=255.255.255.0
VCENTER_HOSTNAME=vcenter.primp-industries.com
DOMAIN_LIST=primp-industries.com
DNS_LIST=192.168.1.1

Note: To ensure that you do not accidentally run the script without changing the variables, there is another variable called ACTUALLY_READ_SCRIPT that needs to be changed from "no" to "yes"; otherwise, the script will not execute.
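
A hypothetical invocation on the VCSA would then look like the following (script name from the post; it is assumed to take no arguments):

# Make the script executable and run it
chmod +x setupNetwork.sh
./setupNetwork.sh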

Step 3 - Next, we need to configure the vCenter Server by executing configureVCSA51.sh, which will configure the embedded SSO database as well as the vCenter Server database. You do not need to edit any variables in this script.

Step 4 - Finally, we need to configure our DHCP, TFTP, and Auto Deploy services, as well as extract the ESXi offline depot image and prepare it for use with Auto Deploy. You will need to edit the following variables before running the setupvGhettoLab.sh script:

DHCP_SUBNET=172.30.0.0
DHCP_NETMASK=255.255.255.0
DHCP_START_RANGE=172.30.0.100
DHCP_END_RANGE=172.30.0.200
DHCP_INTEFACE=eth1
TFTP_SERVER=172.30.0.1
VCSA_SERVER=192.168.1.150
ESXI_OFFLINE_DEPOT=/root/VMware-ESXi-5.1.0-799733-depot.zip
ESXI_REPO_PATH=/etc/vmware-vpx/docRoot
ESXI_REPO_DIR=ESXi-5.1.0

Note: For the Image Profile and Auto Deploy rule creation, if you wish for the script to execute the commands automatically instead of echoing them to the screen, remove the "echo" statement as well as the double quotes, so the last three lines look like this:

pxe-profile-cmd create $(cat /tmp/VIBS) ${ESXI_REPO_DIR}
rule-cmd create -i pxe:${ESXI_REPO_DIR} ${AUTO_DEPLOY_RULE} vendor=='VMware, Inc.'
rule-set-cmd set ${AUTO_DEPLOY_RULE}

Step 5 - You are now ready to create your nested ESXi virtual machines. You can use RVC as shown in the presentation (there is a slide at the very end that lists the commands), or you can connect to the vSphere Web Client and create the ESXi virtual machines the traditional way via the GUI.

After updating the DHCP configuration with the new MAC Addresses from your nested ESXi virtual machines (a sample reservation is sketched below), you should then see Auto Deploy automatically provision your ESXi hosts and join them to the VCSA you deployed earlier.
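
Assuming the lab's DHCP service uses standard ISC dhcpd host declarations, a reservation for one nested ESXi VM would look roughly like this (the host name, MAC, and IP below are illustrative, using the 172.30.0.x Auto Deploy network from the variables above):

host nested-esxi-01 {
    hardware ethernet 00:50:56:aa:bb:cc;
    fixed-address 172.30.0.101;
}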

Additional Links:

  • vInception #NotSupported Slides Posted

 

Categories // Uncategorized Tags // appliance, auto deploy, dhcp, esxi, esxi5.1, notsupported, ruby vsphere console, rvc, tftp, vcsa, vcva, vmworld, vSphere, vSphere 5.1
