WilliamLam.com


Executing Commands During Boot Up In ESXi 5.1

05.09.2013 by William Lam // 6 Comments

There may be certain use cases where you need to execute a command or series of commands during the boot up of an ESX(i) host. Historically, administrators have added commands to /etc/rc.local, which is one of the last scripts to be executed in the boot up process. However, with ESXi 5.1, the location of the file has changed from /etc/rc.local to /etc/rc.local.d/local.sh.

VMware also provides KB2043564, which describes the locations for the various versions of ESX(i):

  • In ESX 3.x/4.x - You would add the command(s) to /etc/rc.d/rc.local
  • In ESXi 4.x/5.0 - You would add the command(s) to /etc/rc.local
  • In ESXi 5.1 - It has changed to /etc/rc.local.d/local.sh

Disclaimer: As mentioned in the VMware KB, though this is supported, it is still not recommended that you edit these files unless directed to by VMware support or in special scenarios.

A question that came up recently was about automatically loading the "multiextent" VMkernel module during the boot up of an ESXi 5.1 host. The module is required if you wish to use the 2gbsparse VMDK format (you can find more details in this blog article here), but it had to be loaded manually on each boot, even if you had previously enabled it. To do so, you just need to edit /etc/rc.local.d/local.sh and add the following command above the "exit 0":

localcli system module load -m multiextent
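
For reference, here is a sketch of how the edited /etc/rc.local.d/local.sh might look (the existing contents vary between ESXi builds; the only change is the added line before "exit 0"):

#!/bin/sh

# ... any existing commands in local.sh remain above ...

# Added: load the multiextent VMkernel module during boot
localcli system module load -m multiextent

exit 0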

Note: The reason I used localcli versus ESXCLI is that hostd may not be completely ready during the boot up process, and hence an ESXCLI command may fail. One can also loop and wait for ESXCLI to be ready, but localcli is another way of performing the operation.
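
If you did want to loop and wait instead, a minimal sketch might look like this (the readiness check and the five-second retry interval are my own assumptions, not from the KB):

# Keep retrying until hostd is ready to service ESXCLI requests
while ! esxcli system module list > /dev/null 2>&1; do
    sleep 5
done
esxcli system module load -m multiextent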

 

Categories // ESXi Tags // ESX, ESXi, local.sh, rc.local

How To Add A Tag (Log prefix) To Syslog Entries

05.07.2013 by William Lam // 2 Comments

Last year I wrote an article on how to forward vCenter Server logs to a remote syslog server using the built-in syslog-ng client in the VCSA. A few weeks back, I received an interesting email from Michael White sharing details about adding a "tag", or more specifically a string prefix, to each syslog entry being forwarded. This is interesting because it enables a user to easily search for a specific log entry based on a "tag", which comes in really handy when you have multiple log sources being forwarded from the same host. An example of this would be the various logs from a vCenter Server, such as vpxd, vws, inventoryservice, etc., which each have their own individual log files coming from the same host.

Within the syslog-ng client configuration, you can specify the log_prefix() option with the string you wish to prefix to a given log source. The tag has a specific syntax: the string must end with a colon followed by a whitespace (e.g. "VC_APP: ").

Using the vCenter Server as an example, we could add the following tags:
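
Here is a minimal sketch of what those syslog-ng client entries might look like (the log file paths and the remote syslog destination are assumptions for illustration; adjust them to your environment):

source vc_app { file("/var/log/vmware/vpx/vpxd.log" log_prefix("VC_APP: ")); };
source vc_is { file("/var/log/vmware/vpx/inventoryservice/ds.log" log_prefix("VC_IS: ")); };
destination remote_syslog { udp("192.168.1.100" port(514)); };
log { source(vc_app); destination(remote_syslog); };
log { source(vc_is); destination(remote_syslog); };
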
After restarting the syslog-ng client for the changes to go into effect, you can head over to your syslog server to view the updated syslog entries. In the screenshot below, we can see log entries from both our VC_APP (vpxd.log) and VC_IS (ds.log) sources, as specified in our syslog-ng client configuration.

Note: For newer versions of syslog-ng, program_override() is used instead of log_prefix(). The syntax for that would be program_override("VC_APP").
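
In context, the newer syntax would look something like this (again, the log file path is an assumption):

source vc_app { file("/var/log/vmware/vpx/vpxd.log" program_override("VC_APP")); };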

I want to thank Michael for sharing this cool tidbit!

Categories // Uncategorized Tags // syslog, tag, VCSA

How To Enable Nested ESXi Using VXLAN In vSphere & vCloud Director

05.06.2013 by William Lam // 9 Comments

Recently I received several inquiries asking how to configure nested ESXi (Nested Virtualization) to function in a VXLAN environment. I have written several articles in the past on configuring nested ESXi in a regular vSphere and vCloud Director environment, but with the use of a VXLAN-backed network, there are a few additional steps required. These steps include additional configuration of the vCloud Networking and Security (vCNS) Manager (previously known as vShield Manager), which ensures that both the required promiscuous mode and forged transmits are automatically enabled for the VXLAN virtual wires (vWires), as they are managed exclusively by the vCNS Manager.

In this article, I will walk you through the configurations that are required when using VXLAN in both a vSphere-only environment as well as a vCloud Director environment. If you would like to learn more about how VXLAN works, be sure to check out the multi-part VXLAN series (Part 1/Part 2) by Venky Deshpande.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Configurations for VXLAN in vSphere Environment

Step 1 - Deploy the vCNS Manager and configure it to point to your vCenter Server (do not enable or prepare VXLAN; this must be done after the configurations below).

Step 2 - You will need to identify the VDS MoRef ID in your vCenter Server, which will be used in the next step. Since the configuration is applied at the VDS level, you may want to consider having a separate VDS serving Nested Virtualization traffic, since both promiscuous mode & forged transmits will automatically be enabled for all vWires. To locate the VDS MoRef ID, login to the vSphere Web Client and select the summary view for the VDS.

The VDS MoRef ID will be towards the end of the URL, and it should start with dvs-X, where X is some arbitrary number. Record this value for the next step.

Step 3 - Download the enablePromForVDS.sh shell script, which will be used to prepare the VDS within the vCNS Manager. The script basically performs a POST to the vCNS Manager's REST API using cURL, and it accepts three input parameters: the vCNS Manager IP Address/Hostname, the VDS MoRef ID, and the VDS MTU. The username/password is hard-coded in the script to the default, which is admin/default. If you have modified the default password, like any good admin, you will want to change the password in the script before running it. If you take a look at the request body, you will notice that only promiscuous mode is set to true, but this will also automatically enable forged transmits as well.
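
For reference, here is a minimal sketch of what such a script does under the hood (the REST URI and XML request body shown are assumptions based on the description above; consult the actual script and the vCNS API Programming Guide for the authoritative format):

#!/bin/sh
# Hypothetical sketch: prepare a VDS in the vCNS Manager with promiscuous mode enabled
VCNS_HOST=$1   # vCNS Manager IP Address/Hostname
VDS_MOREF=$2   # VDS MoRef ID (e.g. dvs-13)
VDS_MTU=$3     # VDS MTU (e.g. 9000)

curl -k -u "admin:default" \
    -H "Content-Type: application/xml" \
    -X POST "https://${VCNS_HOST}/api/2.0/vdn/switches" \
    -d "<vdsContext><switch><objectId>${VDS_MOREF}</objectId></switch><mtu>${VDS_MTU}</mtu><promiscuousMode>true</promiscuousMode></vdsContext>"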

In my lab environment, the vCNS Manager IP is 172.30.0.196, the VDS MoRef ID is dvs-13, and the VDS MTU is 9000. So the syntax to run the script would be:

./enablePromForVDS.sh 172.30.0.196 dvs-13 9000

Here is a screenshot of executing the script; you should see a 200 response back to indicate successful execution.

Step 4 - Now we will proceed with the VXLAN preparation. Start off by logging into the vCNS Manager and selecting the vSphere Datacenter in which you wish to enable VXLAN. On the right you should see a tab called "Network Virtualization"; go ahead and click on that, then click on the sub-tab called "Preparation". Click on edit, then select the vSphere Cluster and proceed through the wizard based on your environment configuration.

Step 5 - Once the VXLAN preparation has completed, click on "Segment ID" and configure it based on your environment.

Step 6 - Next, click on "Network Scopes" and create a network scope, specifying the set of vSphere Clusters the VXLAN network will span.

Step 7 - Lastly, click on "Networks"; this is where you will create your vWires, ensuring the proper network scope is selected.

Step 8 - To confirm that everything has been configured properly, log back into the vSphere Web Client and head over to the VDS settings page. You should now see a new vWire portgroup that has been created; if we take a look at its settings, we should see that both promiscuous mode and forged transmits are enabled.

You are now done with the VXLAN configurations in the vCNS Manager and can proceed to the regular instructions for enabling Nested ESXi for vSphere.

Note: If you have already prepared VXLAN in your environment, you can still apply the above configuration without having to un-prepare your VXLAN setup. You just need to log in to the vCNS Manager via the REST API and perform a DELETE on the VDS switch (please refer to page 153 of the vCNS API Programming Guide), which will just delete the mapping from vCNS but will not destroy any of your VDS configuration. Once that is done, you will be able to use the script to configure the VDS with the proper settings.
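
As a rough sketch, that DELETE call might look like the following (the URI is an assumption based on the same API pattern shown earlier; page 153 of the vCNS API Programming Guide has the authoritative details):

curl -k -u "admin:default" -X DELETE "https://172.30.0.196/api/2.0/vdn/switches/dvs-13"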

Configurations for VXLAN in vCloud Director Environment

A VXLAN network pool is automatically created for you when using vCloud Director 5.1, so the steps for preparing Nested Virtualization for vCloud Director are extremely simple compared to the vSphere-only environment.

Note: VXLAN is only supported in vCloud Director 5.1; for previous versions you have the choice of using a VCD-NI or vSphere-backed network, and the configurations for that can be found here.

Step 1 - Follow steps 1-5 from the vSphere-only environment above, and then you are done. If you would like a more detailed walkthrough of configuring VXLAN for a vCloud Director environment, check out this article by Rawlinson Rivera, who takes you through the process step by step.

Step 2 - Proceed to the regular instructions for enabling Nested ESXi for vCloud Director.

Step 3 - Lastly, you will go through the vCloud Director setup: attach your vCenter Server & vCNS Manager, create a Provider VDC, create an Organization, assign resources to your Organization VDC, and ensure that the OrgVDC is consuming the VXLAN network pool that is automatically created for you when you create the Provider VDC. Once that is done, when you deploy your vApp, you will see a vWire that is automatically created for you. If we log in to the vSphere Web Client and go to the VDS settings, you will see the vWire has both promiscuous mode and forged transmits automatically enabled.

Additional Resources:

  • Nested Virtualization Resources

Categories // Automation, Nested Virtualization, NSX Tags // nested, vcloud director 5.1, vcloud networking and security, vcns, vhv, vSphere 5.1, VXLAN
