Search Results for: nested esxi

Updated VSAN 6.0 Nested ESXi OVF Templates for 64 Nodes, All-Flash Array & Fault Domain Testing

02.12.2015 by William Lam // 22 Comments

During the development of vSphere & VSAN 6.0, I built several new OVF templates to help quickly deploy different configurations of VSAN for testing and providing feedback to Engineering. I figured I would share these new templates, as they may also benefit others in the community who wish to quickly set up VSAN 6.0 and give it a spin in their own home lab or development environment using Nested ESXi.

Disclaimer: Nested ESXi is not officially supported by VMware, please use at your own risk.

One thing to be aware of when running VSAN 6.0 using Nested ESXi is that you will need a minimum of 6GB of memory for each ESXi VM. If you wish to run additional workloads on top or create larger VSAN Clusters, you may need to add additional memory to each ESXi VM. This is not really a problem on real physical hardware, but it can be a problem for smaller lab environments running Nested ESXi.

Here are the four new VSAN 6.0 Nested ESXi OVF Templates:

  • Nested-ESXi-3-Node-VSAN-6.0-Template - This is a standard 3-Node VSAN OVF
  • Nested-ESXi-3-Node-VSAN-6.0-All-Flash-Template - This is a standard 3-Node VSAN with both disk2 & 3 set as an SSD for an All-Flash configuration (check out this blog article on the steps for setting up an All-Flash VSAN environment)
  • Nested-ESXi-6-Node-VSAN-6.0-FD-Template - This is a 6-Node VSAN which can be useful when wanting to test out VSAN's new Fault Domain (aka Rack Aware) feature
  • Nested-ESXi-64-Node-VSAN-6.0-Template - This is a mind-blowing 64-Node VSAN OVF! Can you handle all this awesomeness? If you can, I would love to see you share some screenshots 🙂

Note: If you decide to build a 64-Node Cluster, you will need to run the following ESXCLI command (a reboot is required) to go beyond 32 nodes, up to 64 nodes:

esxcli system settings advanced set -o /VSAN/goto11 -i 1
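
To confirm the change took effect after the reboot, you can read the advanced option back; a minimal sketch using the same ESXCLI advanced-settings namespace as the command above:

esxcli system settings advanced list -o /VSAN/goto11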

Here are a few screenshots of running VSAN 6.0 using the above OVF Templates:

[Screenshot: vsan-6.0-nested-esxi]
[Screenshot: vsan-6.0-fault-domain]

Categories // ESXi, VSAN, vSphere 6.0

How to configure an All-Flash VSAN 6.0 Configuration using Nested ESXi?

02.11.2015 by William Lam // 11 Comments

There has been a great deal of interest from customers and partners in an All-Flash VSAN configuration, especially as consumer-grade SSDs (eMLC) continue to drop in price and the endurance of these devices lasts much longer than originally expected, as mentioned in this article by Duncan Epping. In fact, last year at VMworld the folks over at Micron and SanDisk built and demoed an All-Flash VSAN configuration, proving this was not only cost effective but also quite performant. You can read more about the details here and here. With the announcement of vSphere 6 this week and VMware Partner Exchange taking place the same week, there was a lot of excitement about what VSAN 6.0 might bring.

One of the coolest features in VSAN 6.0 is support for an All-Flash configuration. The folks over at SanDisk gave a sneak peek at VMware Partner Exchange a couple of weeks back of what they were able to accomplish with VSAN 6.0 using an All-Flash configuration. They achieved an impressive 2 million IOPS; for more details take a look here. I am pretty sure there are going to be plenty more partner announcements as we get closer to the GA of vSphere 6, and there will be a list of supported vendors and devices on the VMware VSAN HCL, so stay tuned.

To easily demonstrate this new feature, I will be using Nested ESXi, but the process to configure an All-Flash VSAN configuration is exactly the same for a real physical hardware setup. Nested ESXi is a great learning tool for understanding and walking through the exact process, but it should not be a substitute for actual hardware testing. You will need a minimum of 3 Nested ESXi hosts, and they should be configured with at least 6GB of memory when working with VSAN 6.0.

Disclaimer: Nested ESXi is not officially supported by VMware, please use at your own risk.

In VSAN 1.0, an All-Flash configuration was not officially supported; the only way to get this working was by "tricking" ESXi into thinking the SSDs used for the capacity tier were MDs (magnetic disks) by creating claim rules using ESXCLI. Though this method worked, VSAN itself assumed the capacity tier of storage consisted of regular magnetic disks, and hence its operations were not optimized for anything but magnetic disks. With VSAN 6.0, this is now different, and VSAN will optimize based on whether you are using a hybrid or an All-Flash configuration. In VSAN 6.0, there is a new property called IsCapacityFlash that is exposed, and it allows a user to specify whether an SSD is used for the write buffer or for capacity purposes.

Step 1 - We can easily view the IsCapacityFlash property by using our handy vdq VSAN utility, which has now been enhanced to include a few more properties. Run the following command to view your disks:

vdq -q

[Screenshot: all-flash-vsan-6]
From the screenshot above, we can see we have two disks eligible for VSAN and that they are both SSDs. We can also see the new IsCapacityFlash property, which is currently set to 0 for both. We will want to select one of the disks and set this property to 1 to enable it for capacity use within VSAN.
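
If you have a larger number of disks, you can trim the vdq output down to just the relevant lines with a quick grep; a minimal sketch, assuming the JSON-style output includes Name and IsSSD keys alongside IsCapacityFlash (adjust the pattern if your output differs):

vdq -q | grep -iE 'Name|IsSSD|IsCapacityFlash'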

Step 2 - Identify the SSD device(s) you wish to use for your capacity tier; a very simple way to do this is by using the following ESXCLI snippet:

esxcli storage core device list  | grep -iE '(   Display Name: |   Size: )'

[Screenshot: all-flash-vsan-1]
We can quickly get a list of the devices and their IDs along with their capacity. In the example above, I will be using the 8GB device for SSD capacity.

Step 3 - Once you have identified the device(s) from the previous step, we now need to add a new option called enable_capacity_flash to these device(s) using ESXCLI. There are actually three methods of assigning the capacity flash tag to a device, and all of them provide the same end result. Personally, I would go with Option 2 as it is much simpler to remember than the syntax for claim rules 🙂 If you have the ESXi hosts connected to your vCenter Server, then Option 3 would be great, as you can perform this step from a single location. (If you have multiple capacity devices per host, a small loop sketch follows after the three options below.)

Option 1: ESXCLI Claim Rules

Run the following two ESXCLI commands for each device you wish to mark for SSD capacity:

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.6000c295be1e7ac4370e6512a0003edf -o enable_capacity_flash
esxcli storage core claiming reclaim -d naa.6000c295be1e7ac4370e6512a0003edf

all-flash-vsan-2
Option 2: ESXCLI using new VSAN tagging command

esxcli vsan storage tag add -d naa.6000c295be1e7ac4370e6512a0003edf -t capacityFlash

Option 3: RVC using new vsan.host_claim_disks_differently command

vsan.host_claim_disks_differently --disk naa.6000c295be1e7ac4370e6512a0003edf --claim-type capacity_flash
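
If a host has several devices to mark for capacity, you can simply loop over them using the Option 2 command; here is a minimal sketch with hypothetical device IDs (substitute your own NAA identifiers). The tag can also be removed with the matching remove verb if you need to undo the change:

# hypothetical device IDs for illustration
for DEVICE in naa.XXXX1 naa.XXXX2; do
    esxcli vsan storage tag add -d ${DEVICE} -t capacityFlash
done

# to undo the tagging on a device
esxcli vsan storage tag remove -d naa.XXXX1 -t capacityFlash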

Step 4 - To verify the changes took effect, we can re-run the vdq -q command and we should now see our device(s) marked for SSD capacity.

[Screenshot: all-flash-vsan-3]
Step 5 - You can now create your VSAN Cluster using the vSphere Web Client as you normally would and add the ESXi hosts into the cluster, or you can bootstrap it using ESXCLI if you are trying to run vCenter Server on top of VSAN; for more details take a look here.
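
For reference, here is a rough sketch of the ESXCLI bootstrap path on a single host, using placeholder device IDs for the write-buffer SSD and the capacity-tagged SSD (the full procedure is covered in the article linked above):

# create a new single-node VSAN cluster on this host
esxcli vsan cluster new
# claim the cache SSD and the capacity-tagged SSD into VSAN (placeholder IDs)
esxcli vsan storage add -s <cache-ssd-device-id> -d <capacity-ssd-device-id>
# confirm the host has formed/joined the cluster
esxcli vsan cluster get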

One thing that I found interesting is that in the vSphere Web Client, when setting up an All-Flash VSAN configuration, the SSD(s) used for capacity will still show up as "HDD". I am not sure if this is what the final UI will look like before vSphere 6.0 GAs.

[Screenshot: all-flash-vsan-4]
If you want to check the actual device type, you can always go to a specific ESXi host under Manage->Storage->Storage Devices to get more details. If we look at our NAA* device IDs, we can see that both devices are in fact SSDs.

[Screenshot: all-flash-vsan-5]
Hopefully those of you interested in an All-Flash VSAN configuration can now quickly get a feel for it by running VSAN 6.0 in a Nested ESXi environment. I will be publishing updated OVF templates for various types of VSAN 6.0 testing in the coming weeks, so stay tuned.

Categories // ESXi, Nested Virtualization, VSAN, vSphere 6.0 Tags // enable_capacity_flash, esxcli, IsCapacityFlash, Virtual SAN, VSAN, vSphere 6.0

Does the ESXi Mac Learn dvFilter work with Nested ESXi on NSX VXLAN's?

09.19.2014 by William Lam // 3 Comments

After publishing my article on the new ESXi Mac Learn dvFilter, which helps improve CPU/network performance when using promiscuous mode with Nested ESXi, I received a couple of questions asking whether the dvFilter would work with NSX VXLANs. At the time, I had only tested the Mac Learn dvFilter using a standard VSS/VDS and not with any VXLAN-based networks. I reached out to a couple of folks asking whether this would work, and to my surprise, I got back a mixed set of answers, ranging from "it will not work" to "it could work". One of the reasons given to me for why this might not work is that NSX-v (NSX for vSphere) leverages a different "virtual switch" than VSS/VDS, and hence the Mac Learn dvFilter would not function properly. This would actually make sense, but because I received other responses negating that fact, I figured I should just test it for myself and see.

NSX 6.1 was recently released, and I figured this would be a great opportunity for me to learn a bit more about NSX, as I had never played with it before, and also to test whether the Mac Learn dvFilter would in fact work with NSX VXLANs. In my lab environment I have deployed NSX, and I have 3 physical ESXi hosts running VSAN (go SDS!). I deployed both an NSX ESR (Edge Service Router) hosting 2 Logical Networks (aka VXLAN segments) and an NSX DLR (Distributed Logical Router) hosting another 2 Logical Networks.

Here is a screenshot of the 4 Logical Networks, the first two on NSX ESR and the last two on NSX DLR:

[Screenshot: the 4 Logical Networks]
Here is a screenshot of both the NSX ESR and DLR:

[Screenshot: NSX ESR and DLR]
Note: If you would like to learn more about NSX ESR and DLR, check out this great article by Brad Hedlund who goes into more detail.

For my test, I first enabled Promiscuous Mode and Forged Transmits on the respective Logical Switches, which are just dvPortgroups on the VDS for my NSX ESR setup. I then had 2 Nested ESXi VMs running (without the Mac Learn dvFilter), a Windows "Jump Box" VM, and vMA, all connected to the same VXLAN network.
[Screenshot: test VMs connected to the VXLAN network]
I then transferred an ISO from the Windows VM to vMA while running ESXTOP on the physical ESXi host hosting these four VMs. As I expected, both the Nested ESXi VMs and vMA were receiving the network packets. Next, I installed the Mac Learn dvFilter VIB on the physical ESXi host, added the required VM Advanced Settings to both Nested ESXi VMs, and then re-ran the test. To my surprise, both Nested ESXi VMs were no longer receiving the erroneous packets! So it seems that when using VXLAN with NSX ESR, the Mac Learn dvFilter works as expected.
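
For reference, a minimal sketch of the per-NIC VM Advanced Settings (.vmx-style entries) from the earlier Mac Learn dvFilter article, assuming ethernet0 is the Nested ESXi VM's vNIC and that the filter slot and name match that article; repeat for each additional vNIC:

ethernet0.filter4.name = "dvfilter-maclearn"
ethernet0.filter4.onFailure = "failOpen"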

To be thorough, I also ran the same test for the VXLAN segments backed by the NSX DLR. This time, I was really surprised by the results. The test was run prior to installing the Mac Learn dvFilter, and my expectation was that the two Nested ESXi VMs would see the duplicated network packets from the VDS, but to my surprise, they did not! Both Nested ESXi VMs were pretty much idling at 0 packets, as nothing was being sent to them. I am not exactly sure why I was seeing this behavior; perhaps there is some type of optimization in the DLR? This is something I hope to get an answer on from someone in Engineering as to why I might be seeing this positive behavior.

To summarize, this myth has been busted and the Mac Learn dvFilter does in fact work with VXLAN networks. If you are using the NSX ESR for your VXLAN setup, then you will need to install the dvFilter; if you are using the NSX DLR, it seems you do not need to make any additional changes. After briefly speaking with Christian Dickmann, the creator of the dvFilter, as I wanted to share the results with him, I also learned some interesting tidbits. Christian was actually not surprised by the results; the reason is that the VMkernel networking stack was architected and designed to be modular. This meant that one could switch out the "virtual switch" with other implementations and the underlying dvFilter framework would continue to work regardless of the "virtual switch" being used.

Additional Notes:

  • I did not get a chance to test with vCNS and VXLAN, but I believe it should work given that NSX-v is functional. If you are able to test this, feel free to leave a comment on whether the expected behavior is seen with the Mac Learn dvFilter.
  • I did not get a chance to test this with vCloud Director with VXLAN-based networks, but as I mentioned, this should work. Please leave a comment if you can confirm.
  • I also noticed when creating the Logical Switches that there is a MAC Learning capability, but from my testing, I found it did not benefit Nested ESXi and the Mac Learn dvFilter was still required.

Categories // ESXi, Nested Virtualization, NSX Tags // dvFilter, ESXi, mac learning, NSX, VXLAN

