WilliamLam.com

Automated NSX-T 2.0 Lab Deployment

10.24.2017 by William Lam // 21 Comments

Last week, I spent some time exploring and getting more familiar with NSX-T, which is the next generation release of the NSX platform from VMware. One of the first things I do when learning about a new product is to set up a lab environment that I can use. Having gone through the deployment once by hand, I realized it would be quite painful if I needed to do this again, which I knew I would, and I did 🙂 I wanted to have a similar experience to my vGhetto Automated vSphere Lab deployment script, which sets up the entire vSphere infrastructure along with deploying and configuring NSX-V, and extend it to support NSX-T.

Since my original script leverages PowerCLI to access both the vSphere and NSX APIs, I wanted to do the same with NSX-T. Funny enough, the PowerCLI team had just published an updated release (6.5.3) which added support for NSX-T, and I thought this was perfect timing to try out the NSX-T APIs, which I had never used before.
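As a quick taste of those new cmdlets, here is a minimal sketch of connecting to NSX-T Manager and listing Transport Zones; the hostname and credentials are placeholders:

Connect-NsxtServer -Server nsxt-mgr.example.com -User admin -Password 'VMware1!'
# NSX-T functionality is exposed through low-level API service bindings rather than high-level cmdlets
$tzService = Get-NsxtService -Name "com.vmware.nsx.transport_zones"
# list() wraps GET /api/v1/transport-zones; the results property holds the Transport Zone objects
$tzService.list().results | Select-Object display_name, transport_type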

UPDATE (01/01/2018) - I have verified that the script also works with the latest NSX-T 2.1, which was just released before Christmas. The script has also been updated to create a new Edge Uplink Profile along with an Edge Cluster and to automatically associate all Edge VMs with the Edge Cluster.

I have created a new Github repository called vghetto-nsxt-automated-lab-deployment which contains detailed instructions along with the PowerCLI script.

Here is what the script is currently performing:

  1. Deploy and configure vCenter Server Appliance 6.5u1
  2. Deploy and configure 3 x Nested ESXi 6.5u1 Virtual Appliance VMs and attach them to vCenter Server
  3. Deploy NSX-T Manager, 3 x Controllers & 1 x Edge and set up both the Management Plane and Control Cluster
  4. Configure NSX-T with an IP Pool and Transport Zone, add vCenter Server as a Compute Manager, create a Logical Switch, prepare the ESXi hosts, create an Uplink Profile & configure the ESXi hosts as Transport Nodes (see the sketch after this list)
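
To give a feel for how step 4 drives the NSX-T API from PowerCLI, below is a hypothetical sketch of creating a Transport Zone using the service binding pattern; the display name and host switch name are made-up values, not necessarily what the actual script uses:

$transportZoneService = Get-NsxtService -Name "com.vmware.nsx.transport_zones"
# build an empty Transport Zone spec from the service's Help metadata
$spec = $transportZoneService.Help.create.transport_zone.Create()
$spec.display_name = "TZ-Overlay"
$spec.host_switch_name = "hostswitch1"
$spec.transport_type = "OVERLAY"
# invoke the create operation against NSX-T Manager
$transportZone = $transportZoneService.create($spec)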

Similar to the vSphere version of this script, all deployed VMs will be placed inside of a vCenter vApp construct, as shown in the example screenshot below:


Here is an example output of a successful deployment: you go from nothing to a fully functional NSX-T environment in just 50 minutes, which is pretty awesome if you ask me!

[Read more...]

Categories // Automation, ESXCLI, Home Lab, NSX, PowerCLI, VCSA, vSphere 6.5 Tags // ESXi 6.5, NSX-T, PowerCLI, vSphere 6.5 Update 1

Quick Tip - VSAN 6.2 (vSphere 6.0 Update 2) now supports creating all-flash diskgroup using ESXCLI

03.02.2016 by William Lam // 5 Comments

One of my all-time favorite features of VSAN is still the ability to "bootstrap" a VSAN datastore starting with just a single ESXi node. This is especially useful if you would like to bootstrap vCenter Server on top of VSAN out of the box without requiring additional VMFS/NFS storage. This bootstrap method has been possible and supported since the very first release of VSAN, which I have written about in great detail here and here.

With the release of VSAN 6.1 (vSphere 6.0 Update 1), an all-flash VSAN configuration became possible in addition to the hybrid configuration, which uses a combination of SSDs and magnetic disks (MDs). One observation made by a few folks, myself included, was that you could not configure an all-flash diskgroup using ESXCLI, which is one of the methods that can be used to bootstrap VSAN. If you tried to create an all-flash diskgroup using ESXCLI, you would get the following error:

Unable to add device: Can not create all-flash disk group: current Virtual SAN license does not support all-flash

This turned out to be a bug, and the workaround at the time was to add the ESXi host to a vCenter Server, which would then allow you to create the all-flash diskgroup. This usually was not a problem, but for those wanting to bootstrap VSAN, it would require an already running vCenter Server instance. While setting up my new VSAN 6.2 home lab last night

Just finished installing all 32GB of awesomeness + 2 SSD (M.2 & 2.5). Super simple #VSAN62HomeLab pic.twitter.com/tYOujQmCqX

— William Lam (@lamw) March 2, 2016

I found that this issue has actually been resolved in the upcoming release of VSAN 6.2 (vSphere 6.0 Update 2), and you can now create an all-flash diskgroup using ESXCLI, which includes doing so from the vSphere API as well. For those interested, you can find the list of commands required to bootstrap an all-flash VSAN configuration below:
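
The full list is behind the link below, but as a rough sketch the bootstrap sequence looks like the following; the device identifiers are placeholders for your own cache and capacity flash devices, not values from the original post:

# tag the capacity device as capacityFlash so it can back an all-flash diskgroup
esxcli vsan storage tag add -d naa.CAPACITY_DEVICE -t capacityFlash
# create a single-node VSAN cluster
esxcli vsan cluster new
# create the all-flash diskgroup (cache and capacity devices are both flash)
esxcli vsan storage add -s naa.CACHE_DEVICE -d naa.CAPACITY_DEVICE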

[Read more...]

Categories // Automation, ESXCLI, ESXi, VSAN, vSphere 6.0 Tags // esxcli, ESXi 6.0, Virtual SAN, VSAN, vSphere 6.0 Update 2

Override default VSAN Maintenance (decommission) Mode in VSAN 6.1

09.14.2015 by William Lam // Leave a Comment

Earlier this year, there was an interesting use case brought up by a customer regarding the use of vSphere Update Manager (VUM) with VSAN enabled ESXi hosts. Everything was working from a functional standpoint, but the customer wanted a way to control the default VSAN decommission mode, which specifies how the data should be moved, if at all, when a host is placed into maintenance mode. There are three supported options: Ensure Accessibility (default), Evacuate All Data and No Action. Depending on the customer and their use case, there may be valid reasons to use one or the other. For example, if I am shutting down my entire VSAN cluster for some hardware upgrade, I probably do not want any of my data to be migrated, and the No Action setting would be acceptable. During an upgrade or patching of an ESXi host, some customers have expressed that they would prefer to leverage the Evacuate All Data setting, which is perfectly fine; of course, the maintenance mode operation would take longer as all the data must be migrated off the host first.

Prior to VSAN 6.1 (included in the vSphere 6.0 Update 1 release), it was not possible to override the default VSAN maintenance mode (decommission mode) option, which defaults to Ensure Accessibility. This was a problem because if you decided you wanted to use a different option, some manual intervention would be required from the user when using VUM. The workaround for the customer was to put the ESXi host into maintenance mode manually, or automate it using the vSphere API, specifying the decommission mode type before VUM would take over and update the host. Not an ideal solution, but it would work if you needed to override the default.

I thought it would be a nice feature enhancement to be able to override the default VSAN maintenance mode option, which could vary from customer to customer depending on their use case. I got in touch with one of the VSAN Engineers to discuss the use case in more detail, and he agreed that it would be useful to expose this type of capability. In VSAN 6.1, there is now a new ESXi Advanced Setting called DefaultHostDecommissionMode which allows you to specify the default VSAN maintenance mode behavior.

Below is a table of the three available options (ensureAccessibility is default) that can be configured:

VSAN Decommission Mode Value   Description
ensureAccessibility            VSAN data reconfiguration should be performed to ensure storage object accessibility
evacuateAllData                VSAN data evacuation should be performed such that all storage object data is removed from the host
noAction                       No special action should take place regarding VSAN data

This ESXi Advanced Setting can also be retrieved and configured using ESXCLI as well as the vSphere API.

To retrieve the current VSAN maintenance mode option using ESXCLI, run the following command:

esxcli system settings advanced list -o /VSAN/DefaultHostDecommissionMode

To configure the default VSAN maintenance mode option using ESXCLI, run the following command:

esxcli system settings advanced set -o /VSAN/DefaultHostDecommissionMode -s [DECOMMISSION_MODE]
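
For those preferring the vSphere API, here is a minimal PowerCLI sketch of the same retrieve and set operations; the host name is a placeholder, and the ESXCLI path /VSAN/DefaultHostDecommissionMode maps to the advanced setting name VSAN.DefaultHostDecommissionMode:

$vmhost = Get-VMHost -Name "esxi01.example.com"
# retrieve the current decommission mode
Get-AdvancedSetting -Entity $vmhost -Name "VSAN.DefaultHostDecommissionMode"
# change it to evacuate all data when entering maintenance mode
Get-AdvancedSetting -Entity $vmhost -Name "VSAN.DefaultHostDecommissionMode" |
    Set-AdvancedSetting -Value "evacuateAllData" -Confirm:$false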

Categories // ESXCLI, ESXi, VSAN, vSphere 6.0 Tags // DefaultHostDecommissionMode, ESXi 6.0, maintenance mode, Virtual SAN, VSAN, VSAN 6.1, vSphere 6.0 Update 1
