VSAN 6.2 (vSphere 6.0 Update 2) homelab on 6th Gen Intel NUC

03.03.2016 by William Lam // 33 Comments

As many of you know, I have been happily using an Apple Mac Mini for my personal vSphere home lab for the past few years. I absolutely love the simplicity and versatility of the platform, from running a basic vSphere lab to consuming advanced capabilities of the vSphere platform like VMware VSAN or NSX. The Mac Mini also supports more complex networking configurations by allowing you to add an additional network adapter through the built-in Thunderbolt port, which many other similar form factors lack. Having said all that, one major limitation of the Mac Mini platform has always been the limited amount of memory it supports, a maximum of 16GB (the same limitation as other form factors in this space). Although it is definitely possible to run a vSphere lab with only 16GB of memory, it does limit what you can deploy, which is challenging if you want to explore other solutions like VSAN, NSX and vRealize.

I was really hoping that Apple would release an update to their Mac Mini platform last year that included support for 32GB of memory, but instead it was a very minor update and mostly a letdown, which you can read more about here. Earlier this year, I found out from fellow blogger Florian Grehl that Intel had just released the 6th generation of the Intel NUC, which officially adds support for 32GB of memory. I had been keeping an eye on the Intel NUC for some time, but due to the same memory limitation as the Mac Mini, I had never considered it a viable option, especially given that I already own a Mac Mini. With the added support for 32GB of memory and the ability to house two disk drives (M.2 and 2.5"), this was the update I had been waiting for to finally pull the trigger and refresh my home lab, given that 16GB was just not cutting it for the work I was doing anymore.

There has been quite a bit of interest in what I ended up purchasing for running VSAN 6.2 (vSphere 6.0 Update 2), which has not GA'ed ... yet, so I figured I would put together a post with all the details in case others are looking to build a similar lab. This article is broken down into the following sections:

  • Bill of Materials (BOM)
  • Installation
  • VSAN Configuration
  • Final Word

Disclaimer: The Intel NUC is not on VMware's official Hardware Compatibility List (HCL) and therefore is not officially supported by VMware. Please use this platform at your own risk.

Bill of Materials (BOM)

[Image: vsan62-intel-nuc-bom]
Below are the components, with links, that I used for my configuration, based partially on budget as well as recommendations from others who have a similar setup. If you think you will need more CPU horsepower, you can look at the Core i5 (NUC6i5SYH) model, which is slightly more expensive than the i3. I opted for an all-flash configuration because I not only wanted the performance but also wanted to take advantage of the much-anticipated Deduplication and Compression feature in VSAN 6.2, which is only supported with an all-flash VSAN setup. I also did not need a large amount of storage capacity, but you could pay a bit more for the exact same drive and get a full 1TB if needed. If you do not care for an all-flash setup, you can definitely look at spinning rust, which can give you several TBs of storage at a very reasonable cost. The overall cost of the system for me was ~$700 USD (before taxes), partly because some of the components were slightly discounted through a preferred retailer that my employer provides. I would highly recommend you check whether your employer offers similar benefits, as that can help with the cost if that is important to you. The SSDs actually ended up being cheaper on Amazon, so I purchased them there.

  • 1 x Intel NUC 6th Gen NUC6i3SYH (supports 2 drives: M.2 & 2.5)
  • 2 x Crucial 16GB DDR4
  • 1 x Samsung 850 EVO 250GB M.2 for “Caching” Tier (Thanks to my readers, decided to upgrade to 1 x Samsung SM951 NVMe 128GB M.2 for "Caching" Tier)
  • 1 x Samsung 850 EVO 500GB 2.5 SATA3 for “Capacity” Tier

Installation

[Image: vsan62-intel-nuc-1]
The installation of the memory and the SSDs in the NUC was super simple. You just need a regular Phillips screwdriver to remove the four screws at the bottom of the NUC. Once loosened, flip the NUC back over while holding the bottom and slowly lift the top off. The M.2 SSD is held in by a smaller Phillips screw which you will need to remove before you can plug in the device. The memory just plugs right in, and you should hear a click confirming it is inserted all the way. The 2.5" SSD plugs into the drive bay attached to the top of the NUC casing. If you are interested in more details, you can find various unboxing and installation videos online like this one.

UPDATE (05/25/16): Intel has just released BIOS v44 which fully unleashes the power of your NVMe devices. One thing to note from the article is that you do NOT need to unplug the security device; you can update the BIOS by simply downloading the BIOS file and loading it onto a USB key (FAT32).
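As a minimal sketch of preparing that USB key, assuming a Linux workstation, that the key shows up as /dev/sdX, and a hypothetical BIOS file name (substitute the actual .bio file you downloaded from Intel):

sudo mkfs.vfat -F 32 /dev/sdX1    # format the first partition as FAT32 (destroys existing data on the key)
sudo mount /dev/sdX1 /mnt/usb     # mount the key
sudo cp SY0044.bio /mnt/usb/      # copy the downloaded BIOS file (file name here is hypothetical)
sudo umount /mnt/usb              # flush writes before removing the key

You would then boot the NUC with the key attached and enter the BIOS flash utility (typically F7 on Intel NUCs) to point it at the .bio file.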

UPDATE (03/06/16): Intel has just released BIOS v36 which resolves the M.2 SSD issue. If you updated using an earlier version, you can resolve the problem by going into the BIOS and re-enabling the M.2 device as mentioned in this blog here.

One very important thing to note, which I was warned about by a fellow user, is NOT to update/flash to a newer version of the BIOS. It turns out that if you do, the M.2 SSD will fail to be detected by the system, which sounds like a serious bug if you ask me. The stock BIOS version that came with my Intel NUC is SYSKLi35.86A.0024.2015.1027.2142 in case anyone is interested. I am not sure if you can flash back to the original version, but another user just informed me that he had accidentally updated the BIOS and can no longer see the M.2 device 🙁

For the ESXi installation, I just used a regular USB key that I had lying around and the unetbootin tool to create a bootable USB key. I am using the upcoming ESXi 6.0 Update 2 (which has not been released ... yet) and you will be able to use the out-of-the-box ISO that is shipped from VMware. No additional custom drivers are required. Once the ESXi installer loads up, you can install ESXi back onto the same USB key from which it booted. I know this is not always common knowledge, as some may think you need an additional USB device to install ESXi onto. Ensure you do not install anything on the two SSDs if you plan to use VSAN, as it requires at least (2 x SSD) or (1 x SSD and 1 x MD); a quick eligibility check is sketched below.
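As a quick sanity check before enabling VSAN, you can use the vdq utility in the ESXi Shell to confirm that both SSDs are seen as empty and eligible for VSAN (a sketch; the exact output formatting varies by release):

vdq -q    # queries all disks and reports whether each is eligible for use by VSAN, or why not

If a device shows as ineligible because it has existing partitions, you can clear them with partedUtil before proceeding.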

[Image: vsan62-intel-nuc-3]
If you are interested in adding a bit of personalization to your Intel NUC setup by replacing the default Intel BIOS splash screen like I have, take a look at this article here for more details.

[Image: custom-vsan-bios-splash-screen-for-intel-nuc-0]
If you are interested in adding additional network adapters to your Intel NUC via USB Ethernet Adapter, have a look at this article here.

VSAN Configuration

Bootstrapping VSAN Datastore:

  • If you plan to run VSAN on the NUC and do not have additional external storage on which to deploy vCenter Server, you have the option to "bootstrap" VSAN using a single ESXi node to start with, which I have written about in more detail here and here. This option allows you to set up VSAN so that you can deploy vCenter Server and then configure the remaining nodes of your VSAN cluster, which will require at least 3 nodes unless you plan on doing a 2-Node VSAN Cluster with the VSAN Witness Appliance. For more detailed instructions on bootstrapping an all-flash VSAN datastore, please take a look at my blog article here; a condensed command-line sketch also follows after the screenshot below.
  • Running a *single* VSAN node is possible but NOT recommended, given that you need a minimum of 3 nodes for VSAN to function properly. After the vCenter Server is deployed, you will need to update the default VSAN VM Storage Policy to either allow "Forced Provisioning" or change the FTT from 1 to 0 (i.e. no protection, given you only have a single node). This is required, or else you will run into provisioning issues as VSAN will prevent you from deploying VMs because it expects two additional VSAN nodes. When logged into the home page of the vSphere Web Client, click on the "VM Storage Policies" icon and edit the "Virtual SAN Default Storage Policy", changing the values as shown in the screenshot below:

[Image: Screen Shot 2016-03-03 at 6.08.16 AM]
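For reference, here is a condensed sketch of the single-node all-flash bootstrap from the ESXi Shell; the naa.* device names are placeholders you would substitute from your own vdq -q output:

# relax the default policies so a single node can provision objects (FTT=0, force provisioning)
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"

# create a new single-node VSAN cluster
esxcli vsan cluster new

# tag the capacity SSD as capacity-tier flash (all-flash only), then claim both devices
esxcli vsan storage tag add -d naa.CAPACITY_DEVICE -t capacityFlash
esxcli vsan storage add -s naa.CACHE_DEVICE -d naa.CAPACITY_DEVICE

Once vCenter Server is up and the remaining hosts have been added, you would normally revert the default policies back to FTT=1.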

Installing vCenter Server:

  • If you are new to deploying the vCenter Server, VMware has a deployment guide which you can follow here.

Optimizations:

  • In addition, because this is for a home lab, my buddy Cormac Hogan has a great tip on disabling device monitoring, as the SSD devices may not be on VMware's official HCL and the monitoring can potentially impact your lab environment negatively. The following ESXCLI command needs to be run once on each of the ESXi hosts, either in the ESXi Shell or remotely:

esxcli system settings advanced set -o /LSOM/VSANDeviceMonitoring -i 0

  • I also recently learned from reading Cormac's blog that there is a new ESXi Advanced Setting in VSAN 6.2 which allows VSAN to provision the VM swap object as "thin" versus "thick", which has historically been the default. To disable "thick" provisioning, run the following ESXCLI command on each ESXi host:

esxcli system settings advanced set -o /VSAN/SwapThickProvisionDisabled -i 1
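To double-check that either setting took effect, you can read it back with the corresponding list command (shown here for the swap setting; the same pattern works for /LSOM/VSANDeviceMonitoring):

esxcli system settings advanced list -o /VSAN/SwapThickProvisionDisabled    # the reported Int Value should now be 1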

  • Lastly, if you plan to run Nested ESXi VMs on top of your physical VSAN cluster, be sure to add the configuration change outlined in this article here, or else you may see some strangeness when trying to create VMFS volumes; a hedged sketch follows below.
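If memory serves, the setting the linked article refers to is the one below, which makes VSAN emulate the SCSI reservations that VMFS creation on a nested host expects; treat the option name as an assumption and defer to the linked article:

esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1    # assumed option name; run on each physical host backing the nested lab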

[Image: vsan62-intel-nuc-2]

Final Word

I have only had the NUC for a couple of days, but so far I have been pretty impressed with the ease of setup and the super tiny form factor. I thought the Mac Mini was small and portable, but the NUC really blows it out of the water. I was super happy with the decision to go with an all-flash setup; the deployment of the VCSA was super fast, as you would expect. On my Mac Mini, which had spinning rust, the fan would go a bit psycho for a portion of the VCSA deployment and you could feel the heat if you put your face close to it. I could barely feel any heat from the NUC, and it was dead silent, which is great as it sits in our living room. Like the Mac Mini, the NUC has a regular HDMI port, which is great as I can connect it directly to our TV, and it has plenty of USB ports which could come in handy if you wanted to play with VSAN using USB-based disks 😉

[Image: vsan62-intel-nuc-4]
One neat idea that Duncan Epping brought up in a recent chat was to run a 2-Node VSAN Cluster with the VSAN Witness Appliance running on a desktop or laptop. This would make for a very simple and affordable VSAN home lab without requiring a 3rd physical ESXi node. I had thought about doing the same, but instead of two NUCs, I would combine my Mac Mini and NUC to form the 2-Node VSAN Cluster and then run the VSAN Witness on my iMac desktop, which has 16GB of memory. This is just another slick way you can leverage this new and powerful platform to run a full-blown VSAN setup. For those of you following my blog, I am also looking to see if there is a way to add a secondary network adapter to the NUC by way of a USB 3.0 based ethernet adapter. I have already shown that it is definitely possible with older releases of ESXi, and if this works it could make the NUC even more viable.

Lastly, for those looking for a beefier setup, there are rumors that Intel may be close to releasing another update to the Intel NUC platform, code named "Skull Canyon", which could include a Quad-Core i7 (Broadwell based) along with support for the new USB-C interface capable of running Thunderbolt 3. If true, this could be another option for those looking for a bit more power for their home lab.

A few folks have been asking what I plan to do with my Mac Mini now that I have the NUC. I will probably sell it; it is still a great platform and has a Core i7, which definitely helps with CPU-intensive tasks. It also supports two drives, so it is quite inexpensive to purchase another SSD (it already comes with one) to set up an all-flash VSAN 6.2 configuration. Below are the specs, and if you are interested in the setup, feel free to drop me an email at info.virtuallyghetto [at] gmail [dot] com.

  • Mac Mini 5,3 (Late 2011)
  • Quad-Core i7 (2635QM)
  • 16GB memory
  • 1 x SSD (120GB) Corsair Force GT
  • 1 x MD (750 GB) Seagate Momentus XT
  • 1 x built-in 1GbE Ethernet port
  • 1 x Thunderbolt port
  • 4 x USB ports
  • 1 x HDMI
  • Original packaging available
  • VSAN capable
  • ESXi will install OOTB w/o any issues

Additional Useful Resources:

  • http://www.virten.net/2016/01/vmware-homeserver-esxi-on-6th-gen-intel-nuc/
  • http://www.ivobeerens.nl/2016/02/24/intel-nuc-6th-generation-as-home-server/
  • http://www.sindalschmidt.me/how-to-run-vmware-esxi-on-intel-nuc-part-1-installation/

Categories // ESXi, Home Lab, Not Supported, VSAN, vSphere 6.0 Tags // esxi 6.0, homelab, Intel NUC, notsupported, Virtual SAN, VSAN, VSAN 6.2, vSphere 6.0 Update 2

Working USB Ethernet Adapter (NIC) for ESXi

03.01.2016 by William Lam //

As part of upgrading my personal vSphere home lab from an Apple Mac Mini to an Intel NUC (more on this in a future blog), I have been researching whether there are alternatives for adding an additional network adapter. The Intel NUC includes only a single built-in ethernet adapter, similar to the Mac Mini. However, the NUC also lacks additional IO connectors like a Thunderbolt port, which the Mac Mini includes and which can support a Thunderbolt to Ethernet adapter. I think this is probably the only downside of the Intel NUC platform, and I have seen similar feedback from other vSphere home labbers who currently use or would like to use the NUC. Perhaps with the next update of the NUC platform, code named "Skull Canyon", the rumored Thunderbolt 3 / USB-C connector may make things easier, as some of the existing vendors who produce Thunderbolt to Ethernet adapters use common drivers like the Broadcom tg3, which have historically worked with ESXi.

One option that has been suggested by many folks over the years was to see if a USB based ethernet adapter could be used to provide additional networking to ESXi. Historically, the answer had been no, because there were no known device drivers that would work with ESXi. I had even looked into this a few years ago, and although I ran into some folks who seemed to have made it work, I was never able to find the right USB ethernet adapter to confirm it myself. It was only last week that I decided to start fresh again, and after a bit of Googling I came across an old VMTN thread here where VMTN user AK_____28 mentioned he had success with the StarTech USB 3.0 to Gigabit Ethernet NIC Network Adapter, using a custom compiled driver that was posted over here by another user named Trickstarter.

UPDATE (02/12/19) - A new VMware Native Driver for USB-based NICs has just been released for ESXi 6.5/6.7, please use this driver going forward. If you are still on ESXi 5.5/6.0, you can continue using the existing driver but please note there will be no additional development in the existing vmklinux-based driver.

UPDATE (03/29/16) - Please have a look at this updated article here which includes ESXi 5.5 and 6.0 driver.

Disclaimer: In case it is not clear and apparent, you should never install any unknown 3rd party software on your ESXi host, as it can potentially lead to instability issues or, worse, open you up to a security hole. The following solution is not officially supported by VMware; please use at your own risk.

I decided to bite the bullet and give this solution a try and purchased the USB ethernet adapter from Amazon here.

[Image: usb-ethernet-adapter]
There are two modules that need to be downloaded, extracted and loaded onto ESXi. I have included the links below for your convenience; a sketch of the extract-and-copy steps follows the list:

  • ax88179vz026.gz
  • usbnetvz026.gz
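As a minimal sketch of getting the files onto the host, assuming SSH is enabled on the ESXi host and that your datastore name matches mine (both are assumptions):

gunzip ax88179vz026.gz usbnetvz026.gz                                            # extract the raw module binaries
scp ax88179vz026 usbnetvz026 root@esxi:/vmfs/volumes/mini-local-datastore-1/    # copy them to a datastore on the host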

As the VMTN thread mentioned, you can load them using either vmkload_mod or ESXCLI. Here are the two commands that I used, in the following order:

vmkload_mod /vmfs/volumes/mini-local-datastore-1/usbnetvz026
vmkload_mod /vmfs/volumes/mini-local-datastore-1/ax88179vz026

When I initially tried to load either of the modules, I would always get the following error:

vmkwarning.log:2016-02-28T21:54:54.531Z cpu6:374787)WARNING: Elf: 2041: Load of <usbnetvz026> failed : missing required namespace <com.vmware.usb#9.2.1.0>

As you can imagine, I was pretty bummed to see this, since I was afraid something like this would happen. I was not sure if the device I had purchased no longer worked or if it was the drivers. I saw that these modules were initially compiled for ESXi 5.1 (at the time, that was the latest version), and the only difference was that I was using a much newer version of ESXi, specifically 6.0 Update 1. I decided to install the latest version of ESXi 5.1 Update 3 and tried the process again, and to my surprise the modules loaded without errors. I suspect there is a hard dependency on the namespace version, which was 9.2.1.0 when the modules were built, while the latest version is now 9.2.3.0.
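For what it is worth, a quick way to confirm that both modules actually registered with the VMkernel after loading is to list the loaded modules (a sketch, using the module names from above):

vmkload_mod -l | grep -E 'usbnetvz026|ax88179vz026'    # both modules should appear in the loaded-module list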

[Image: usb-network-adapter-esxi-1]
After successfully loading the two modules, I ran the following command:

esxcfg-nics -l

to verify that ESXi did in fact claim the USB ethernet device, and as you can see from the screenshot below, it did indeed!

[Image: usb-network-adapter-esxi-2]
Next up, I needed to verify basic connectivity, so I added the new uplink to my existing vSwitch. You must use the following ESXCLI command (the esxcfg-vswitch command apparently does not work for non-vmnicX devices):

esxcli network vswitch standard uplink add -u vusb0 -v vSwitch0

Once added, I hopped over to the vSphere C# Client to confirm that the device now shows up under the Network Adapters tab, which it does.

[Image: usb-network-adapter-esxi-4]
Finally, the last test was to make the vusb0 device (this is how ESXi names the device) the active connection while moving my existing vmnic0 to standby. Network connectivity continued to function, and I was even able to transfer an ISO image over the USB ethernet adapter without any issues. A sketch of the failover change is shown below.
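In case it is useful, here is one way to make that failover change from the command line rather than the UI (a sketch, assuming the uplink and vSwitch names used above):

esxcli network vswitch standard policy failover set -v vSwitch0 -a vusb0 -s vmnic0    # vusb0 active, vmnic0 standby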

[Image: usb-network-adapter-esxi-5]
So it looks like it is possible to get a USB based ethernet adapter to function with ESXi, at least with the specific model listed above (PCI ID 0b95:1790). The challenge now is to see if there is a way to build an updated version of the drivers targeted at the latest ESXi 6.0 release. From what I have been able to follow on the forum here, it looks like some non-trivial code changes were required to get the driver to function. If true, without those changes it can be difficult to re-compile the driver. I have reached out to the original author to see if he might be able to share the changes he made to the driver code. In the meantime, if folks are interested in giving the build process a try, Trickstarter did a great two-part write-up on how to set up your build environment and compile an example driver.

  • ESXI 5.x Drivers Part 1: Making a Build Environment
  • ESXI 5.x Drivers Part 2: Preparing to compile

Although the write-up is targeted at ESXi 5.x, you can download the equivalent packages for ESXi 6.0, which include the ESXi Open Source Disclosure Package as well as the VMware Toolchain that is required and used to compile the source code. I have provided the direct download links below.

  • VMware-ESXI-600B-ODP-21Sept2015.iso
  • VMware-TOOLCHAIN-ODP-17July2015.iso

You can also find the latest version of the ax88179 ASIX driver for the USB ethernet adapter here. I have also attempted to compile just the driver but have already run into some issues. I have not had time to dig further, so I am not sure how far I will be able to get. If others have any tips or tricks for compiling against ESXi 6.0, feel free to share them and I will give them a shot when I get some time!

Categories // ESXi, Home Lab, Not Supported Tags // esxi, ESXi 5.1, homelab, usb, usb network adapter, vSphere 5.1

Running Nested ESXi / VSAN Home Lab on Ravello

04.14.2015 by William Lam // 3 Comments

[Image: nested_esxi_on_ravello]
There are many options when it comes to building and running your own vSphere home lab. Each of these solutions has different pros and cons, and you will need to evaluate things like cost, performance, maintenance, ease of use and complexity, to name a few. Below is a list of the options currently available to you today.

Home Lab Options:


On-Premises

  • Using hardware on the VMware HCL
  • Using Apple Mac Mini, Intel NUC, etc.
  • Using whitebox or off the shelf hardware

Off-Premises (hosted)

  • VMware HOL
  • VMware vCloud Air or other vCloud Air Service Providers
  • Colo-located labs

For example, you could purchase a couple of Apple Mac Minis and build out a decent-sized vSphere environment, but it could potentially be costly, not to mention a bit limited on memory options. Compared to other platforms, it is pretty energy efficient and easy to use and maintain. If you did not want to manage any hardware at all, you could look at a hosted or on-demand lab such as vCloud Air, which can run Nested ESXi unofficially, or any one of the many vCloud Air Service Providers. Heck, you could even use VMware Hands On Lab, though the access will be limited as you will be constrained by the pre-built labs and will not be able to directly upload or download files to the lab. However, this could be a quick way to get access to an environment for testing, and best of all, it is 100% free. As you can see, there are many options for a home lab, and it really just depends on your goals and what you are trying to accomplish.

Ravello says hello to Nested ESXi


Today, we have a new player entering the off-premises (hosted) options for running vSphere based home labs. I am pleased to announce that Ravello, a startup that uses Nested Virtualization to target dev/test workloads, has just introduced beta support for running Nested ESXi on their platform. I have written about Ravello in the past and you can find more details here. Ravello uses their own home-grown KVM-based nested hypervisor called HVX, which runs on top of a VM provisioned from either Amazon EC2 or Google Compute Engine. As you can imagine, this was not a trivial feature to add support for, especially since Intel VT-x/AMD-V, which is required to run ESXi, is not directly exposed to the virtual machines in EC2 or GCE. The folks over at Ravello have solved this in a very interesting way by "emulating" the capabilities of Intel VT-x/AMD-V using Binary Translation with direct execution.

Over the last month, I have had the privilege of getting early access to the Ravello platform with the Nested ESXi capability and have been providing early feedback to their R&D team to ensure the best possible user experience for customers looking to run Nested ESXi on their platform. I have also spent quite a bit of time working out the proper workflow for getting Nested ESXi running and being able to quickly scale up the number of nodes, which is especially useful when testing new features like VSAN 6.0. I have also been working with their team to develop a script that will allow users to quickly spin up as many Nested ESXi VMs as needed after a one-time initial preparation. This will greatly simplify deployments of more than a couple of Nested ESXi VMs. Hopefully I will be able to share more details about the script in the very near future.

Before jumping into the instructions for getting Nested ESXi running on the Ravello platform, I wanted to quickly highlight what is currently supported from a vSphere perspective, as well as some of the current limitations and caveats regarding Nested ESXi that you should be aware of. Lastly, I have also provided some details around pricing so that proper expectations are set if you are considering a vSphere home lab on Ravello. You can find more information in the next few sections, or you can go straight to the setup instructions.

Supports:


  • vCenter Server 5.x (Windows) & VCSA 5.x
  • vCenter Server 6.0 (Windows)
  • ESXi 5.x
  • ESXi 6.0

Caveats:


Coming from a pure vSphere background, I have enjoyed many of the simplicities that VMware has built into their core platform, such as support for OVF capabilities like Dynamic Disks and Deployment Options. While using the Ravello platform, I came across several limitations with respect to Nested ESXi and the VCSA. Below is a quick list of the caveats that I found while testing the platform; I have been told that many of these are being looked at and hopefully will be resolved in the future. Nonetheless, I wanted to make sure these were called out so that you go in with the right expectations.

  • There is currently no support for virtuallyGhetto's Nested ESXi /VSAN VM OVF Templates (though you can import the OVFs, most of the configurations are lost)
  • There is currently no support for VM Advanced Settings such as marking a VMDK as an SSD or enabling UUID for disks for example (configurations are not preserved through import)
  • There is currently no support for VCSA 6.0 OVA due to disk controller limitation + no OVF property support, you will need to use Windows based vCenter Server for now (VCSA 5.5 is supported)
  • There is currently no OVF property support
  • There is currently no support for VMXNET3 for Nested ESXi VM, e1000 must be used due to a known network bug
  • Running Nested SMP-FT is not supported as 10Gbit vNICs are required and VMXNET3 is not currently supported

Pricing:


When publishing your Ravello Application, you have the option of selecting between two different deployment optimizations. The first is optimized for cost: if TCO is what you care most about, the platform will automatically select the cloud provider (EC2 or GCE) that is the cheapest while satisfying the requirements. The second option optimizes for performance; if selected, you can choose to place your application on either EC2 or GCE. In both cases, you will be provided with an estimated cost broken down into compute, storage and networking, as well as a final cost (per hour). Once you agree to the terms, you can click on the "publish" button, which will deploy your workload onto the selected cloud provider.

Here is a screenshot summary view of a Ravello Application I built consisting of 65 VMs (1 Windows VM for vCenter Server and 64 Nested ESXi VMs), where I chose to optimize based on cost. The total price would be $17.894/hr.

[Image: ravello-vghetto-nested-esxi-vsan-6.0-64-Node-cost-optmized]
Note: Prices as of 04/05/2015

I also went through the exercise of pricing several more configurations to give you an idea of what the cost could be for varying sized environments. Below is a table for 3 Node, 32 Node & 64 Node VSAN setups (each includes one additional VM for the vCenter Server).

# of VMs | Optimization | Hosting Platform | Compute Cost | Storage Cost | Network Cost | Public IP Cost | Total Price
4        | Cost         | N/A              | $1.09/hr     | $0.0292/hr   | $0.15/GB     | $0.01/hr       | $1.1292/hr
4        | Performance  | Amazon           | $1.62/hr     | $0.0292/hr   | $0.15/GB     | $0.01/hr       | $1.6592/hr
4        | Performance  | Google           | $1.38/hr     | $0.0292/hr   | $0.15/GB     | $0.01/hr       | $1.4192/hr
33       | Cost         | N/A              | $8.92/hr     | $0.1693/hr   | $0.15/GB     | $0.01/hr       | $9.0993/hr
33       | Performance  | Amazon           | $13.22/hr    | $0.1693/hr   | $0.15/GB     | $0.01/hr       | $13.3993/hr
33       | Performance  | Google           | $11.24/hr    | $0.1693/hr   | $0.15/GB     | $0.01/hr       | $11.4193/hr
65       | Cost         | N/A              | $17.56/hr    | $0.324/hr    | $0.15/GB     | $0.01/hr       | $17.894/hr
65       | Performance  | Amazon           | $26.02/hr    | $0.324/hr    | $0.15/GB     | $0.01/hr       | $26.354/hr
65       | Performance  | Google           | $22.12/hr    | $0.324/hr    | $0.15/GB     | $0.01/hr       | $22.454/hr

How to Setup:


Here is the process for setting up Nested ESXi on the Ravello platform. It consists of installing a single Nested ESXi VM and "preparing" it so that it can later be used to deploy additional unique Nested ESXi instances from the Ravello Library.

Step 1 - Upload either an ESXi 5.x or 6.0 ISO to the Library using the Ravello VM Uploader tool, which you will be prompted to install.

[Image: Screen Shot 2015-04-08 at 8.43.14 PM]
Step 2 - Deploy the empty Ravello ESXi VM Template from the Library, which has already been prepared with the required CPU ID:

<ns1:cpuIds value="0000000768747541444d416369746e65" index="f00d"/>

Adding the above CPU ID enables the emulation of Intel VT-x/AMD-V. If you decide to create your own Ravello VM Template, you will need to perform this operation yourself, which is currently only possible via their REST API; you can find more details here.

Step 3 - Add a CD-ROM device to the Nested ESXi VM by highlighting the ESXi VM and looking under "Disks" (yes, this was not intuitive for me either).

[Image: Screen Shot 2015-04-08 at 8.48.40 PM]
Once you have added the CD-ROM, you will want to mount the ESXi ISO.

Step 4 - Power on the Nested ESXi VM and perform a regular installation of ESXi as you normally would.

At this point, you have successfully installed Nested ESXi on Ravello! The next series of steps is to "prepare" this ESXi image so that it can be duplicated (cloned) to deploy additional instances without causing conflicts; otherwise you would have to perform the installation N times for additional nodes, which I am sure many of you would not want to do. The steps outlined here follow the process I documented in my How to properly clone a Nested ESXi VM? article.

Step 5 - Login to the console of ESXi VM and run the following ESXCLI command:

esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1

Note: If you wish to connect to the ESXi VM directly for ease of use, versus going through the remote console, you can go to the "Services" tab for the VM and enable external access as seen in the screenshot below.

[Image: ravello-networking]
Step 6 - Edit /etc/vmware/esx.conf and remove the uuid entry, then run /sbin/auto-backup.sh to ensure the change is saved. A one-liner sketch follows below.
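If you would rather not hand-edit the file, something like the following in the ESXi Shell should achieve the same result (a sketch; it assumes the entry is keyed as /system/uuid, which is worth verifying in your own esx.conf first):

grep '/system/uuid' /etc/vmware/esx.conf                                                 # confirm the key exists and note its exact form
sed '/\/system\/uuid/d' /etc/vmware/esx.conf > /tmp/esx.conf && cp /tmp/esx.conf /etc/vmware/esx.conf   # drop the uuid entry
/sbin/auto-backup.sh                                                                     # persist the change across reboots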

At this point, you have prepared a vanilla Nested ESXi VM. You can save this image into the Ravello Library and deploy additional instances from it; by default, the Ravello platform is set up for DHCP. You can of course switch to DHCP reservations so a VM gets a particular IP Address, or specify a static IP Address assignment; a command-line sketch for the static case follows below.
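For the static case, the address can also be assigned from within the guest in the ESXi Shell (a sketch; the addresses are placeholders for your own network):

esxcli network ip interface ipv4 set -i vmk0 -t static -I 10.0.0.11 -N 255.255.255.0    # static IP on the management vmkernel port
esxcfg-route 10.0.0.1                                                                    # default gateway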

If you wish to prepare the Nested ESXi VM for use with VSAN, then you will need to run through these additional steps:

  • Create a claim rule to mark the 4GB VMDK as SSD
  • Enable VSAN traffic type on vmk0

Step 7 - I also enabled remote logging and suppressed shell warnings; you just need to run the snippet below within the ESXi Shell (it also performs the two VSAN preparation steps above):

# locate the 4GB VMDK by its size and extract the mpx device name
DEVICE=$(esxcli storage core device list | grep -iE '(   Display Name: |   Size: )' | grep -B1 4096 | grep mpx | awk -F '(' '{print $2}' | sed 's/)//g')
# add a claim rule marking the 4GB device as an SSD, then reclaim it so the rule takes effect
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d $DEVICE -o enable_ssd
esxcli storage core claiming reclaim -d $DEVICE
# enable VSAN traffic on vmk0
esxcli vsan network ipv4 add -i vmk0
# point syslog at a remote log host and suppress the shell warning
esxcli system syslog config set --loghost=10.0.0.100
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

Step 8 - If you wish to set up 32 nodes with VSAN 1.0 (vSphere 5.5), then you will need to run this additional command:

esxcli system settings advanced set -o /CMMDS/goto11 -i 1

If you wish to set up 64 nodes with VSAN 6.0, then you will need to run this additional command instead:

esxcli system settings advanced set -o /VSAN/goto11 -i 1

At this point, you have completed preparing your Nested ESXi VM. You can now save your image to the Ravello Library, and once that is done, you can easily clone additional Nested ESXi instances by simply dragging and dropping from the Ravello Library onto your canvas. For vCenter Server, if you are setting up a vSphere 5.x environment, you will need to upload the VCSA and go through the normal configuration using the VAMI UI. For vCenter Server 6.0, you will not be able to use the VCSA 6.0 because of a limitation in the platform today; instead, you will need to deploy a Windows VM and run the vCenter Server 6.0 installation there.

I of course had some fun with the Ravello platform, and below are some screenshots of running both a 32 Node VSAN Cluster (vSphere 5.5) and a 64 Node VSAN Cluster (vSphere 6.0). Overall, I thought it was a pretty good experience. There was definitely some sluggishness while installing the vCenter Server bits and navigating through the vSphere Web Client; the install took a little over 40 minutes, almost double the time I have seen in my home lab. I was told that VNC might perform better than RDP, though RDP is what the Ravello folks recommend for connecting to a Windows based desktop. It is great to see another option for running vSphere home labs; I think the performance is probably acceptable for most people, and hopefully it will continue to improve in the future. I definitely recommend giving Ravello a try and who knows, it might be the platform of choice for your vSphere home lab.

Nested ESXi 5.5 running 32 Node VSAN Cluster:

[Image: vghetto-nested-esxi-5.5-32-node-cluster-ravello-1]

[Image: vghetto-nested-esxi-5.5-32-node-cluster-ravello-0]

Nested ESXi 6.0 running 64 Node VSAN Cluster:

[Image: vghetto-nested-esxi-64-node-cluster-ravello-1]

[Image: vghetto-nested-esxi-64-node-cluster-ravello-0]

Categories // ESXi, Home Lab, Nested Virtualization, vSphere Tags // homelab, intel vt, nested, nested virtualization, ravello
