VMware Tools is now pre-installed with Nested ESXi 6.0

02.26.2015 by William Lam // 9 Comments

I just came across this super awesome little tidbit from Core Nested ESXi Engineer Jim Mattson: ESXi 6.0 now comes pre-installed with VMware Tools when running as Nested ESXi. This means you no longer have to manually install VMware Tools for Nested ESXi; ESXi will automatically detect that it is running inside of a VM and start up the vmtoolsd process.

Disclaimer: Nested ESXi is not officially supported by VMware, please use at your own risk.

This new feature of Nested ESXi is agnostic to the underlying physical ESXi version as well as the virtual hardware version. The only requirement is that the Nested ESXi guest is running ESXi 6.0. Talk about ease of use! This just made Nested ESXi that much cooler, as if it was not cool enough already! 🙂
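A quick way to confirm this is to look for the vmtoolsd process from the ESXi Shell of the Nested ESXi VM (a simple sanity check, assuming you have the ESXi Shell or SSH enabled):

ps | grep vmtoolsd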

If you need to directly call into the vmtoolsd process, for example to extract OVF properties, make sure you have the correct library paths set up before running the vmtoolsd command, else you will get an error. To do so, run the following two commands:

export LD_LIBRARY_PATH=/usr/lib/vmware/vmtools/lib:$LD_LIBRARY_PATH
export PATH=/usr/lib/vmware/vmtools/bin:$PATH
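
With the paths in place, you can invoke vmtoolsd directly. For example, this one-liner retrieves the OVF environment through the guestinfo interface (assuming the Nested ESXi VM was deployed with OVF properties in the first place):

vmtoolsd --cmd "info-get guestinfo.ovfEnv"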

Categories // ESXi, Nested Virtualization, vSphere 6.0 Tags // nested virtualization, vmware tools, vSphere 6.0

How to configure an All-Flash VSAN 6.0 Configuration using Nested ESXi?

02.11.2015 by William Lam // 11 Comments

There has been a great deal of interest from customers and partners in an All-Flash VSAN configuration, especially as consumer-grade SSDs (eMLC) continue to drop in price and the endurance of these devices lasts much longer than originally expected, as mentioned in this article by Duncan Epping. In fact, last year at VMworld the folks over at Micron and SanDisk built and demoed an All-Flash VSAN configuration, proving this was not only cost effective but also quite performant. You can read more about the details here and here. With the announcement of vSphere 6 this week and VMware Partner Exchange taking place the same week, there was a lot of excitement about what VSAN 6.0 might bring.

One of the coolest features in VSAN 6.0 is support for an All-Flash configuration. The folks over at SanDisk gave a sneak peek at VMware Partner Exchange a couple of weeks back on what they were able to accomplish with VSAN 6.0 using an All-Flash configuration. They achieved an impressive 2 million IOPS; for more details take a look here. I am pretty sure there are going to be plenty more partner announcements as we get closer to the GA of vSphere 6, and there will be a list of supported vendors and devices on the VMware VSAN HCL, so stay tuned.

To easily demonstrate this new feature, I will be using Nested ESXi, but the process to configure an All-Flash VSAN configuration is exactly the same on real physical hardware. Nested ESXi is a great learning tool to understand and walk through the exact process, but it should not be a substitute for actual hardware testing. You will need a minimum of three Nested ESXi hosts, and they should be configured with at least 6GB of memory each when working with VSAN 6.0.

Disclaimer: Nested ESXi is not officially supported by VMware, please use at your own risk.

In VSAN 1.0, an All-Flash configuration was not officially supported; the only way to get it working was by "tricking" ESXi into thinking the SSDs used for the capacity tier were MDs (magnetic disks) by creating claim rules using ESXCLI. Though this method worked, VSAN itself assumed the capacity tier consisted of regular magnetic disks, and hence its operations were optimized for nothing but magnetic disks. With VSAN 6.0 this is now different, and VSAN will optimize based on whether you are using a hybrid or an All-Flash configuration. In VSAN 6.0 there is a new property called IsCapacityFlash that is exposed, and it allows a user to specify whether an SSD is used for the write buffer or for capacity purposes.

Step 1 - We can easily view the IsCapacityFlash property by using our handy vdq VSAN utility, which has now been enhanced to include a few more properties. Run the following command to view your disks:

vdq -q
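
For reference, vdq -q returns a JSON-like list of devices; each entry in my setup looks roughly like the following (abridged and illustrative, reusing the device ID from this example):

[
   {
      "Name"            : "naa.6000c295be1e7ac4370e6512a0003edf",
      "VSANUUID"        : "",
      "State"           : "Eligible for use by VSAN",
      "Reason"          : "None",
      "IsSSD"           : "1",
      "IsCapacityFlash" : "0",
      "IsPDL"           : "0",
   },
   ...
]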

From the output above, we can see that we have two disks eligible for VSAN and that they are both SSDs. We can also see the new IsCapacityFlash property, which is currently set to 0 for both. We will want to select one of the disks and set this property to 1 to enable it for capacity use within VSAN.

Step 2 - Identify the SSD device(s) you wish to use for your capacity tier; a very simple way to do this is with the following ESXCLI snippet:

esxcli storage core device list  | grep -iE '(   Display Name: |   Size: )'
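
This simply pairs each device's display name with its size in MB; the output looks something like this (illustrative):

   Display Name: Local VMware Disk (naa.6000c295be1e7ac4370e6512a0003edf)
   Size: 8192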

We can quickly get a list of the devices and their IDs along with their capacities. In the example above, I will be using the 8GB device for SSD capacity.

Step 3 - Once you have identified the device(s) from the previous step, we now need to add a new option called enable_capacity_flash to these device(s) using ESXCLI. There are actually three methods of assigning the capacity flash tag to a device, and all three provide the same end result. Personally, I would go with Option 2, as it is much simpler to remember than the syntax for claim rules 🙂 If you have the ESXi hosts connected to your vCenter Server, then Option 3 is great as you can perform this step from a single location.

Option 1: ESXCLI Claim Rules

Run the following two ESXCLI commands for each device you wish to mark for SSD capacity:

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.6000c295be1e7ac4370e6512a0003edf -o enable_capacity_flash
esxcli storage core claiming reclaim -d naa.6000c295be1e7ac4370e6512a0003edf

Option 2: ESXCLI using new VSAN tagging command

esxcli vsan storage tag add -d naa.6000c295be1e7ac4370e6512a0003edf -t capacityFlash

Option 3: RVC using new vsan.host_claim_disks_differently command

vsan.host_claim_disks_differently --disk naa.6000c295be1e7ac4370e6512a0003edf --claim-type capacity_flash

Step 4 - To verify the changes took effect, we can re-run the vdq -q command and we should now see our device(s) marked for SSD capacity.
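For the tagged device, the relevant fields in the vdq -q output should now look like this (illustrative):

      "IsSSD"           : "1",
      "IsCapacityFlash" : "1",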

Step 5 - You can now create your VSAN Cluster using the vSphere Web Client as you normally would and add the ESXi hosts into the cluster, or you can bootstrap it using ESXCLI if you are trying to run vCenter Server on top of VSAN; for more details take a look here.
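If you go the bootstrap route, the basic ESXCLI flow is to create a one-node VSAN cluster and then claim the disks; here is a minimal sketch (substitute your own device IDs, and see the linked article for the full procedure):

# Create a single-node VSAN cluster on this host
esxcli vsan cluster new
# Claim the cache-tier SSD (-s) and the capacity-tier flash device (-d)
esxcli vsan storage add -s naa.<cache-device-id> -d naa.<capacity-device-id>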

One thing that I found interesting is that in the vSphere Web Client, when setting up an All-Flash VSAN configuration, the SSD(s) used for capacity will still show up as "HDD". I am not sure if this is what the final UI will look like before vSphere 6.0 GAs.

If you want to check the actual device type, you can always go to a specific ESXi host under Manage->Storage->Storage Devices to get more details. If we look at our NAA* device IDs, we can see that both devices are in fact SSDs.

Hopefully, for those of you interested in an All-Flash VSAN configuration, you can now quickly get a feel for it by running VSAN 6.0 in a Nested ESXi environment. I will be publishing updated OVF templates for various types of VSAN 6.0 testing in the coming weeks, so stay tuned.

Categories // ESXi, Nested Virtualization, VSAN, vSphere 6.0 Tags // enable_capacity_flash, esxcli, IsCapacityFlash, Virtual SAN, VSAN, vSphere 6.0

VMware has the best platform to run the latest Windows 10 Desktop, Server & Hyper-V Tech Preview!

10.08.2014 by William Lam // 6 Comments

I am constantly amazed at the number of guest operating systems that are supported on VMware products like VMware vSphere, our enterprise hypervisor; vCloud Air, our public cloud offering which runs on vSphere; and our desktop products such as VMware Fusion and Workstation. If we just look at vSphere alone, it currently "lists" 101 supported guest operating systems! (full list below) However, this is actually a tiny subset of what is actually supported on vSphere, as new guest OSes are constantly being added to the support matrix. This also does not include any pre-released operating systems like the recent Apple OS X Yosemite (10.10) Tech Preview. Heck, you can even run Windows 3.11 if you really want to, as shown by my fellow VMware colleague Chris Colotti.

To get the complete list of currently supported operating systems for vSphere or any other VMware product, you will want to check the VMware HCL for Guest Operating Systems. Running a filter on the latest ESXi 5.5 Update 2 release for all guest OSes, we can see that the total number of supported guest OSes is an astounding 231! I know this number is even greater, as we probably cannot capture every single x86 guest OS that exists out there today which can run on VMware.

Getting back to the topic of this post, Microsoft has recently released a new Tech Preview of their upcoming Windows platform dubbed Windows 10 (not a typo, they decided to skip Windows 9), and I know some of you may be interested in trying out their latest release. What better way than to run it on VMware? I know there was a blog or two about running Windows 10 on vSphere; however, there was some incorrect information about not being able to install VMware Tools or get the optimized VMXNET3 driver working. I decided to run all three flavors (Windows 10 Desktop, Server, and Hyper-V) on the latest vSphere 5.5 release (it should work on previous 5.5 releases as well) and will share the Virtual Machine configurations.

Note: You can also run the Windows 10 Tech Preview on both VMware Fusion and Workstation; take a look at this article for more details. These are great options in addition to vSphere and vCloud Air.

Windows 10 Desktop:

  • GuestOS: Windows 8 64-bit
  • Virtual HW: vHW10
  • Network Driver: VMXNET3
  • Storage Controller: LSI Logic SAS


Windows 10 Server:

  • GuestOS: Windows 2012 64-bit
  • Virtual HW: vHW10
  • Network Driver: VMXNET3
  • Storage Controller: LSI Logic SAS


Windows 10 Hyper-V:

  • GuestOS: Windows 2012 64-bit
  • Virtual HW: vHW10
  • Network Driver: VMXNET3
  • Storage Controller: LSI Logic SAS
  • CPU Advanced Setting: Enable VHV
  • VM Advanced Setting: hypervisor.cpuid.v0

For more details about running Hyper-V and the last two advanced settings, please take a look at this article on running other Hypervisors.
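
For reference, the last two settings correspond to the following .vmx entries, with the values typically used for nesting other hypervisors (see the linked article for the full explanation):

vhv.enable = "TRUE"
hypervisor.cpuid.v0 = "FALSE"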

[Screenshot: Windows 10 Hyper-V running as a VM on ESXi, itself running a nested Windows 10 VM]
If you look closely at this last screenshot, you will see that I am not only running Windows 10 Hyper-V within a VM on ESXi, but I am also running a nested Windows 10 VM within this Hyper-V VM! How cool is that!? I am not sure there are good use cases for this, but if you wanted to, you could! In my opinion (and although I may be biased because I work for VMware, the results speak for themselves), VMware truly provides the best platform for running the widest variety of x86 guest operating systems that exist.

Here are the guest operating systems that are currently "listed" in vSphere today that can be selected:

Apple Mac OS X 10.5 (32-bit)
Apple Mac OS X 10.5 (64-bit)
Apple Mac OS X 10.6 (32-bit)
Apple Mac OS X 10.6 (64-bit)
Apple Mac OS X 10.7 (32-bit)
Apple Mac OS X 10.7 (64-bit)
Apple Mac OS X 10.8 (64-bit)
Apple Mac OS X 10.9 (64-bit)
Asianux 3 (32-bit)
Asianux 3 (64-bit)
Asianux 4 (32-bit)
Asianux 4 (64-bit)
CentOS 4/5/6 (32-bit)
CentOS 4/5/6/7 (64-bit)
Debian GNU/Linux 4 (32-bit)
Debian GNU/Linux 4 (64-bit)
Debian GNU/Linux 5 (32-bit)
Debian GNU/Linux 5 (64-bit)
Debian GNU/Linux 6 (32-bit)
Debian GNU/Linux 6 (64-bit)
Debian GNU/Linux 7 (32-bit)
Debian GNU/Linux 7 (64-bit)
FreeBSD (32-bit)
FreeBSD (64-bit)
IBM OS/2
Microsoft MS-DOS
Microsoft Small Business Server 2003
Microsoft Windows 2000
Microsoft Windows 2000 Professional
Microsoft Windows 2000 Server
Microsoft Windows 3.1
Microsoft Windows 7 (32-bit)
Microsoft Windows 7 (64-bit)
Microsoft Windows 8 (32-bit)
Microsoft Windows 8 (64-bit)
Microsoft Windows 95
Microsoft Windows 98
Microsoft Windows NT
Microsoft Windows Server 2003 (32-bit)
Microsoft Windows Server 2003 (64-bit)
Microsoft Windows Server 2003 Datacenter (32-bit)
Microsoft Windows Server 2003 Datacenter (64-bit)
Microsoft Windows Server 2003 Standard (32-bit)
Microsoft Windows Server 2003 Standard (64-bit)
Microsoft Windows Server 2003 Web Edition (32-bit)
Microsoft Windows Server 2008 (32-bit)
Microsoft Windows Server 2008 (64-bit)
Microsoft Windows Server 2008 R2 (64-bit)
Microsoft Windows Server 2012 (64-bit)
Microsoft Windows Vista (32-bit)
Microsoft Windows Vista (64-bit)
Microsoft Windows XP Professional (32-bit)
Microsoft Windows XP Professional (64-bit)
Novell NetWare 5.1
Novell NetWare 6.x
Novell Open Enterprise Server
Oracle Linux 4/5/6 (32-bit)
Oracle Linux 4/5/6/7 (64-bit)
Oracle Solaris 10 (32-bit)
Oracle Solaris 10 (64-bit)
Oracle Solaris 11 (64-bit)
Other (32-bit)
Other (64-bit)
Other 2.4.x Linux (32-bit)
Other 2.4.x Linux (64-bit)
Other 2.6.x Linux (32-bit)
Other 2.6.x Linux (64-bit)
Other 3.x Linux (32-bit)
Other 3.x Linux (64-bit)
Other Linux (32-bit)
Other Linux (64-bit)
Red Hat Enterprise Linux 2.1
Red Hat Enterprise Linux 3 (32-bit)
Red Hat Enterprise Linux 3 (64-bit)
Red Hat Enterprise Linux 4 (32-bit)
Red Hat Enterprise Linux 4 (64-bit)
Red Hat Enterprise Linux 5 (32-bit)
Red Hat Enterprise Linux 5 (64-bit)
Red Hat Enterprise Linux 6 (32-bit)
Red Hat Enterprise Linux 6 (64-bit)
Red Hat Enterprise Linux 7 (32-bit)
Red Hat Enterprise Linux 7 (64-bit)
SCO OpenServer 5
SCO OpenServer 6
SCO UnixWare 7
SUSE Linux Enterprise 10 (32-bit)
SUSE Linux Enterprise 10 (64-bit)
SUSE Linux Enterprise 11 (32-bit)
SUSE Linux Enterprise 11 (64-bit)
SUSE Linux Enterprise 12 (32-bit)
SUSE Linux Enterprise 12 (64-bit)
SUSE Linux Enterprise 8/9 (32-bit)
SUSE Linux Enterprise 8/9 (64-bit)
Serenity Systems eComStation 1
Serenity Systems eComStation 2
Sun Microsystems Solaris 8
Sun Microsystems Solaris 9
Ubuntu Linux (32-bit)
Ubuntu Linux (64-bit)
VMware ESX 4.x
VMware ESXi 5.x

Categories // ESXi, Nested Virtualization, vSphere Tags // ESXi, guest os, hyper-v, Microsoft, vSphere, windows 10

