WilliamLam.com

The future of the ESXi Embedded Host Client

03.04.2016 by William Lam // 14 Comments

As many of you know, the ESXi Embedded Host Client project is something that is very near and dear to my heart. I have always felt that we needed a simple web interface that customers can point their browser to right after a new ESXi installation and quickly get started. One of the biggest benefits, in addition to simplicity, is that it is very intuitive from a user-experience standpoint, which I believe is important in a world where things can quickly get complex. It can also provide an interface for basic troubleshooting and support greenfield deployments where vCenter Server has not been deployed yet.

It has truly been amazing to follow the Embedded Host Client development from the initial idea to the first prototype built by VMware Engineers Kevin Christopher and Jehad Affoneh to its current implementation led by Etienne Le Sueur and the ESXi team. I have been really fortunate to have had the opportunity to be so involved in this project. It is hard to imagine that in just a little over 6 months, we have had 5 releases of the Embedded Host Client Fling, all of which were produced with high-quality development and rich feature sets.

You can click on the links below to get more details about each release.

[Image: ESXi Embedded Host Client release history]

  • 08/11/15 - EHC Fling v1 released 
  • 08/26/15 - EHC Fling v2 released
  • 10/23/15 - EHC Fling v3 released
  • 12/21/15 - EHC Fling v4 released
  • 02/07/16 - EHC Fling v5 released

I think it's an understatement to say that customers are genuinely excited about this project; just look at some of the comments left on the Flings page here. Interestingly, this excitement has been felt internally at VMware as well, and I think this goes to show that the team has built something really special that affects anyone who works with VMware's ESXi hypervisor.

So where do we go from here? Are we done? Far from it ...

Those of you who follow me on Twitter know that I recently refreshed my personal vSphere home lab from an Apple Mac Mini to the latest Intel NUC running the yet-to-be-released VSAN 6.2 (vSphere 6.0 Update 2). I was pleasantly surprised to see that the ESXi Embedded Host Client (EHC) is now included out of the box with ESXi! Although this has been said by a few folks including myself, it is another thing to actually see it in person 🙂

[Image: EHC included out of the box with VSAN 6.2 / ESXi 6.0 Update 2]
Although the VMware Flings program is a great way to share and engage with our customers to get early feedback, it may not always be a viable option. As some of you may know, Flings are not officially supported, and this sometimes prevents customers from engaging with us and really putting the Flings through their paces. By including EHC out of the box, not only are we officially supporting it, but it will also be easier for customers to try out this new interface.

UPDATE (03/04/16) - It looks like I made a mistake and that the ESXi Embedded Host Client will NOT be released as a "Tech Preview" as previously mentioned but rather it will be officially GA'ed with vSphere 6.0 Update 2. EHC is a fully supported feature of ESXi.

Although EHC is very close to parity with the vSphere C# Client, it is still not 100% there. We will continue to improve its capabilities, and if you have any feedback when trying out the EHC, do not hesitate to leave a comment or file a Feature Request through GSS. For those looking to live on the "edge" a bit more, we will continue to release updates to the EHC Fling, but if you want something stable, you can stick with the stock EHC included in ESXi 6.0 Update 2. We will still ship the legacy Windows vSphere C# Client, so you will not be forced to use this interface. However, it is no secret that VMware wants to retire the vSphere C# Client and that EHC is the future interface to standalone ESXi hosts.

One feature I know many of you have been asking about is support for Free ESXi. Well, I am pleased to say that support for Free ESXi has been added in the latest version of EHC included with the upcoming ESXi 6.0 Update 2 release, and below is a screenshot demonstrating that it is fully functional.

[Image: EHC managing a Free ESXi host]
Lastly, I just want to say that EHC has really morphed beyond just a "simple UI" for managing standalone ESXi hosts and has also enabled other teams at VMware to do some really amazing things and create new experiences with this interface. As I said earlier, this is just the beginning 😀 Happy Friday!

Here are some additional cool capabilities provided by EHC:

  • Neat way of installing or updating any VIB using just the ESXi Embedded Host Client
  • How to bootstrap the VCSA using the ESXi Embedded Host Client?

Categories // ESXi, vSphere 6.0 Tags // embedded host client, ESXi 6.0, vSphere 6.0 Update 2

VSAN 6.2 (vSphere 6.0 Update 2) homelab on 6th Gen Intel NUC

03.03.2016 by William Lam // 33 Comments

As many of you know, I have been happily using an Apple Mac Mini for my personal vSphere home lab for the past few years. I absolutely love the simplicity and versatility of the platform, from easily running a basic vSphere lab to consuming advanced capabilities of the vSphere platform like VMware VSAN or NSX. The Mac Mini also supports more complex networking configurations by allowing you to add an additional network adapter that leverages the built-in Thunderbolt port, which many other similar form factors lack. Having said all that, one major limitation of the Mac Mini platform has always been the limited amount of memory it supports, a maximum of 16GB (the same limitation as other form factors in this space). Although it is definitely possible to run a vSphere lab with only 16GB of memory, it does limit what you can deploy, which is challenging if you want to explore other solutions like VSAN, NSX and vRealize.

I was really hoping that Apple would have released an update to the Mac Mini platform last year with support for 32GB of memory, but instead it was a very minor update and mostly a letdown, which you can read more about here. Earlier this year, I found out from fellow blogger Florian Grehl that Intel had just released the 6th generation of the Intel NUC, which officially adds support for 32GB of memory. I had been keeping an eye on the Intel NUC for some time, but due to the same memory limitation as the Mac Mini, I had never considered it a viable option, especially given that I already own a Mac Mini. With the added support for 32GB of memory and the ability to house two disk drives (M.2 and 2.5"), this was the update I was finally waiting for to pull the trigger and refresh my home lab, given that 16GB was just not cutting it for the work I was doing anymore.

There has been quite a bit of interest in what I ended up purchasing for running VSAN 6.2 (vSphere 6.0 Update 2), which has not GA'ed ... yet, so I figured I would put together a post with all the details in case others are looking to build a similar lab. This article is broken down into the following sections:

  • Bill of Materials (BOM)
  • Installation
  • VSAN Configuration
  • Final Word

Disclaimer: The Intel NUC is not on VMware's official Hardware Compatibility List (HCL) and therefore is not officially supported by VMware. Please use this platform at your own risk.

Bill of Materials (BOM)

[Image: Intel NUC bill of materials]
Below are the components, with links, that I used for my configuration, based partially on budget as well as recommendations from others with a similar setup. If you think you will need more CPU horsepower, you can look at the Core i5 (NUC6i5SYH) model, which is slightly more expensive than the i3. I opted for an all-flash configuration because I not only wanted the performance but also wanted to take advantage of the much-anticipated Deduplication and Compression feature in VSAN 6.2, which is only supported with an all-flash VSAN setup. I also did not need a large amount of storage capacity, but you could pay a tiny bit more for the exact same drive to get a full 1TB if needed. If you do not care for an all-flash setup, you can definitely look at spinning rust, which can give you several TBs of storage at a very reasonable cost. The overall cost of the system for me was ~$700 USD (before taxes), partly because some of the components were slightly discounted through a preferred retailer that my employer provides. I would highly recommend you check whether your employer offers similar benefits, as that can help with the cost. The SSDs actually ended up being cheaper on Amazon, so I purchased them there.

  • 1 x Intel NUC 6th Gen NUC6i3SYH (supports 2 drives: M.2 & 2.5)
  • 2 x Crucial 16GB DDR4
  • 1 x Samsung 850 EVO 250GB M.2 for “Caching” Tier (Thanks to my readers, decided to upgrade to 1 x Samsung SM951 NVMe 128GB M.2 for "Caching" Tier)
  • 1 x Samsung 850 EVO 500GB 2.5 SATA3 for “Capacity” Tier

Installation

[Image: Intel NUC with the cover removed]
The installation of the memory and the SSDs in the NUC was super simple. You just need a regular Phillips screwdriver to remove the four screws on the bottom of the NUC. Once loosened, flip the NUC back over while holding the bottom and slowly lift the top off. The M.2 SSD is held in by a smaller Phillips screw, which you will need to remove before you can plug in the device. The memory just plugs right in, and you should hear a click confirming it is inserted all the way. The 2.5" SSD plugs into the drive bay, which is attached to the top of the NUC casing. If you are interested in more details, you can find various unboxing and installation videos online like this one.

UPDATE (05/25/16): Intel has just released BIOS v44, which fully unleashes the power of your NVMe devices. One thing to note from the article is that you do NOT need to unplug the security device; you can update the BIOS by simply downloading the BIOS file and loading it onto a USB key (FAT32).

UPDATE (03/06/16): Intel has just released BIOS v36, which resolves the M.2 SSD issue. If you have already updated using an earlier version, you can resolve the problem by going into the BIOS and re-enabling the M.2 device, as mentioned in this blog here.

One very important thing to note, which I was warned about by a fellow user, is NOT to update/flash to a newer version of the BIOS. It turns out that if you do, the M.2 SSD will fail to be detected by the system, which sounds like a serious bug if you ask me. The stock BIOS version that came with my Intel NUC is SYSKLi35.86A.0024.2015.1027.2142, in case anyone is interested. I am not sure if you can flash back to the original version, but another user just informed me that he had accidentally updated the BIOS and can no longer see the M.2 device 🙁

For the ESXi installation, I used a regular USB key that I had lying around and the unetbootin tool to create a bootable USB key. I am using the upcoming ESXi 6.0 Update 2 (which has not been released ... yet), and you will be able to use the out-of-the-box ISO that is shipped by VMware; no additional custom drivers are required. Once the ESXi installer loads, you can then install ESXi back onto the same USB key from which it initially booted. I know this is not always common knowledge, as some may think you need an additional USB device to install ESXi onto. Ensure you do not install anything on the two SSDs if you plan to use VSAN, as it requires at least (2 x SSD) or (1 x SSD and 1 x MD).

[Image: ESXi installer running on the Intel NUC]
If you are interested in adding a bit of personalization to your Intel NUC setup and replacing the default Intel BIOS splash screen like I have, take a look at this article here for more details.

[Image: Custom VSAN BIOS splash screen on the Intel NUC]
If you are interested in adding additional network adapters to your Intel NUC via USB Ethernet Adapter, have a look at this article here.

VSAN Configuration

Bootstrapping VSAN Datastore:

  • If you plan to run VSAN on the NUC and you do not have additional external storage on which to deploy and set up things like vCenter Server, you have the option to "bootstrap" VSAN using a single ESXi node to start with, which I have written about in more detail here and here. This option allows you to set up VSAN so that you can deploy vCenter Server and then configure the remaining nodes of your VSAN cluster, which will require at least 3 nodes unless you plan on doing a 2-Node VSAN Cluster with the VSAN Witness Appliance. For more detailed instructions on bootstrapping an all-flash VSAN datastore, please take a look at my blog article here.
  • If you plan to *ONLY* run a single VSAN node, which is possible but NOT recommended given that you need a minimum of 3 nodes for VSAN to function properly, then after the vCenter Server is deployed you will need to update the default VSAN VM Storage Policy to either allow "Forced Provisioning" or change the FTT from 1 to 0 (i.e. no protection, given you only have a single node). This is required; otherwise VSAN will prevent you from deploying VMs, as it expects two additional VSAN nodes. When logged into the home page of the vSphere Web Client, click on the "VM Storage Policies" icon, edit the "Virtual SAN Default Storage Policy" and change the following values as shown in the screenshot below:

[Screenshot: Virtual SAN Default Storage Policy settings]
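If you are at the bootstrap stage and vCenter Server is not up yet, a similar relaxation can be applied at the host level with ESXCLI. This is only a sketch using the `esxcli vsan policy` namespace; verify the exact policy expressions against the bootstrap articles linked above before relying on it:

```shell
# Force provisioning so objects can be created on a single-node VSAN,
# even though the FTT=1 policy cannot yet be satisfied.
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

# Confirm the new host-level defaults
esxcli vsan policy getdefault
```

Once vCenter Server is deployed and additional nodes are added, you would revert these defaults so normal FTT=1 protection applies.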

Installing vCenter Server:

  • If you are new to deploying vCenter Server, VMware has a deployment guide you can follow here.

Optimizations:

  • In addition, because this is a home lab, my buddy Cormac Hogan has a great tip on disabling device monitoring, as the SSD devices may not be on VMware's official HCL and this can potentially negatively impact your lab environment. The following ESXCLI command needs to be run once on each of the ESXi hosts, in the ESXi Shell or remotely:

esxcli system settings advanced set -o /LSOM/VSANDeviceMonitoring -i 0

  • I also recently learned from reading Cormac's blog that there is a new ESXi advanced setting in VSAN 6.2 which allows VSAN to provision the VM swap object as "thin" versus "thick", which has historically been the default. To disable "thick" provisioning, run the following ESXCLI command on each ESXi host:

esxcli system settings advanced set -o /VSAN/SwapThickProvisionDisabled -i 1

  • Lastly, if you plan to run Nested ESXi VMs on top of your physical VSAN cluster, be sure to add the configuration change outlined in this article here, else you may see some strangeness when trying to create VMFS volumes.
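If memory serves, the Nested ESXi tweak in question is the `FakeSCSIReservations` advanced setting; treat this as an assumption and confirm against the linked article before applying it:

```shell
# Allow Nested ESXi VMs to create VMFS volumes on a VSAN-backed datastore
# by faking SCSI reservations (unsupported lab-only setting; verify first)
esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1
```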

[Image: VSAN datastore configured on the Intel NUC]

Final Word

I have only had the NUC for a couple of days, but so far I have been pretty impressed with the ease of setup and the super tiny form factor. I thought the Mac Mini was small and portable, but the NUC really blows it out of the water. I was super happy with the decision to go with an all-flash setup; the deployment of the VCSA was super fast, as you would expect. On my Mac Mini with spinning rust, the fan would go a bit psycho for a portion of the VCSA deployment and you could feel the heat if you put your face close to it. I could barely feel any heat from the NUC, and it was dead silent, which is great as it sits in our living room. Like the Mac Mini, the NUC has a regular HDMI port, which is great as I can connect it directly to our TV, and it has plenty of USB ports, which could come in handy if you wanted to play with VSAN using USB-based disks 😉

[Image: Intel NUC connected to the TV]
One neat idea that Duncan Epping brought up in a recent chat was to run a 2-Node VSAN Cluster and have the VSAN Witness Appliance running on a desktop or laptop. This would make for a very simple and affordable VSAN home lab without requiring a 3rd physical ESXi node. I had also thought about doing the same, but instead of 2 NUCs, I would combine my Mac Mini and NUC to form the 2-Node VSAN Cluster and then run the VSAN Witness on my iMac desktop, which has 16GB of memory. This is just another slick way you can leverage this new and powerful platform to run a full-blown VSAN setup. For those of you following my blog, I am also looking to see if there is a way to add a secondary network adapter to the NUC by way of a USB 3.0-based Ethernet adapter. I have already shown that this is definitely possible with older releases of ESXi, and if it works, it could make the NUC even more viable.

Lastly, for those looking for a beefier setup, there are rumors that Intel may be close to releasing another update to the Intel NUC platform, code-named "Skull Canyon", which could include a Quad-Core i7 along with support for the new USB-C interface capable of running Thunderbolt 3. If true, this could be another option for those looking for a bit more power for their home lab.

A few folks have been asking what I plan to do with my Mac Mini now that I have the NUC. I will probably be selling it; it is still a great platform and has a Core i7, which definitely helps with any CPU-intensive tasks. It also supports two drives, so it is quite inexpensive to purchase another SSD (it already comes with one) to set up an all-flash VSAN 6.2 configuration. Below are the specs, and if you are interested in the setup, feel free to drop me an email at info.virtuallyghetto [at] gmail [dot] com.

  • Mac Mini 5,3 (Late 2011)
  • Quad-Core i7 (i7-2635QM)
  • 16GB memory
  • 1 x SSD (120GB) Corsair Force GT
  • 1 x MD (750 GB) Seagate Momentus XT
  • 1 x built-in 1Gbe Ethernet port
  • 1 x Thunderbolt port
  • 4 x USB ports
  • 1 x HDMI
  • Original packaging available
  • VSAN capable
  • ESXi will install OOTB w/o any issues

Additional Useful Resources:

  • http://www.virten.net/2016/01/vmware-homeserver-esxi-on-6th-gen-intel-nuc/
  • http://www.ivobeerens.nl/2016/02/24/intel-nuc-6th-generation-as-home-server/
  • http://www.sindalschmidt.me/how-to-run-vmware-esxi-on-intel-nuc-part-1-installation/

Categories // ESXi, Home Lab, Not Supported, VSAN, vSphere 6.0 Tags // ESXi 6.0, homelab, Intel NUC, notsupported, Virtual SAN, VSAN, VSAN 6.2, vSphere 6.0 Update 2

Quick Tip - VSAN 6.2 (vSphere 6.0 Update 2) now supports creating all-flash diskgroup using ESXCLI

03.02.2016 by William Lam // 5 Comments

One of my all-time favorite features of VSAN is still the ability to "bootstrap" a VSAN datastore starting with just a single ESXi node. This is especially useful if you would like to bootstrap vCenter Server on top of VSAN out of the box, without requiring additional VMFS/NFS storage. This bootstrap method has been possible and supported since the very first release of VSAN, which I have written about in great detail here and here.

With the release of VSAN 6.1 (vSphere 6.0 Update 1), an all-flash VSAN configuration became possible in addition to a hybrid configuration, which uses a combination of SSDs and MDs. One observation made by a few folks, including myself, was that you could not configure an all-flash diskgroup using ESXCLI, which is one of the methods that can be used to bootstrap VSAN. If you tried to create an all-flash diskgroup using ESXCLI, you would get the following error:

Unable to add device: Can not create all-flash disk group: current Virtual SAN license does not support all-flash

This turned out to be a bug, and the workaround at the time was to add the ESXi host to a vCenter Server, which would then allow you to create the all-flash diskgroup. This usually was not a problem, but for those wanting to bootstrap VSAN, it would require an already running vCenter Server instance. While setting up my new VSAN 6.2 home lab last night ...

Just finished installing all 32GB of awesomeness + 2 SSD (M.2 & 2.5). Super simple#VSAN62HomeLab pic.twitter.com/tYOujQmCqX

— William Lam (@lamw) March 2, 2016

I found that this issue has actually been resolved in the upcoming release of VSAN 6.2 (vSphere 6.0 Update 2), and you can now create an all-flash diskgroup using ESXCLI, which includes doing so from the vSphere API as well. For those interested, you can find the list of commands required to bootstrap an all-flash VSAN configuration below:

[Read more...]
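As a rough sketch of what the all-flash bootstrap looks like in VSAN 6.2 (the `naa.*` device IDs below are placeholders for your actual cache and capacity SSDs; see the full article for the complete procedure):

```shell
# Create a single-node VSAN cluster on this host
esxcli vsan cluster new

# Tag the capacity SSD so VSAN treats it as the capacity tier (all-flash)
esxcli vsan storage tag add -d naa.XXXXXXXX -t capacityFlash   # placeholder device ID

# Claim the cache (-s) and capacity (-d) devices into a diskgroup
esxcli vsan storage add -s naa.YYYYYYYY -d naa.XXXXXXXX        # placeholder device IDs

# Verify the cluster and diskgroup
esxcli vsan cluster get
esxcli vsan storage list
```

The `capacityFlash` tag is what previously could only be applied through vCenter Server; being able to do it via ESXCLI is what makes the single-node all-flash bootstrap possible.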

Categories // Automation, ESXCLI, ESXi, VSAN, vSphere 6.0 Tags // esxcli, ESXi 6.0, Virtual SAN, VSAN, vSphere 6.0 Update 2

Author

William is Distinguished Platform Engineering Architect in the VMware Cloud Foundation (VCF) Division at Broadcom. His primary focus is helping customers and partners build, run and operate a modern Private Cloud using the VMware Cloud Foundation (VCF) platform.

Copyright WilliamLam.com © 2025
