ESXi on the new Intel NUC Skull Canyon

05.21.2016 by William Lam // 62 Comments

Earlier this week I found out that the new Intel NUC "Skull Canyon" (NUC6i7KYK) has been released and has been shipping for a couple of weeks now. Although this platform is mainly targeted at gaming enthusiasts, there has also been a lot of anticipation from the VMware community about leveraging the NUC for a vSphere-based home lab. Similar to the 6th Gen Intel NUC system, which is a great platform to run vSphere as well as VSAN, the new NUC includes several enhancements beyond the new aesthetics. In addition to the Core i7 CPU, it also includes dual M.2 slots (no SATA support), Thunderbolt 3 and, most importantly, an Intel Iris Pro GPU and a Thunderbolt 3 controller. I will get to why this is important ...
UPDATE (05/26/16) - With some further investigation from folks like Erik and Florian, it turns out the *only* device that needs to be disabled for ESXi to successfully boot and install is the Thunderbolt Controller. Once ESXi has been installed, you can re-enable the Thunderbolt Controller. Florian has also written a nice blog post here with instructions as well as screenshots for those not familiar with the Intel NUC BIOS.

UPDATE (05/23/16) - Shortly after sharing this article internally, Jason Joy, a VMware employee, shared the great news that he has figured out how to get ESXi to properly boot and install. Jason found that disabling unnecessary hardware devices like the Consumer IR device in the BIOS allowed the ESXi installer to properly boot up. Jason was going to dig a bit further to see if he could identify the minimal list of devices that need to be disabled to boot ESXi. In the meantime, community blogger Erik Bussink has shared the list of settings he applied to his Skull Canyon to successfully boot and install the latest ESXi 6.0 Update 2, based on the feedback from Jason. Huge thanks to Jason for quickly identifying the workaround and sharing it with the VMware community, and thanks to Erik for publishing his list. For all those that were considering the new Intel NUC Skull Canyon for a vSphere-based home lab, you can now get your ordering on! 😀

Below is an excerpt from his blog post, Intel NUC Skull Canyon (NUC6I7KYK) and ESXi 6.0, listing the settings he disabled:

BIOS\Devices\USB

  • disabled - USB Legacy (Default: On)
  • disabled - Portable Device Charging Mode (Default: Charging Only)
  • no change - USB Ports (Ports 01-08 enabled)

BIOS\Devices\SATA

  • disabled - Chipset SATA (Default AHCI & SMART Enabled)
  • M.2 Slot 1 NVMe SSD: Samsung MZVPV256HDGL-00000
  • M.2 Slot 2 NVMe SSD: Samsung MZVPV512HDGL-00000
  • disabled - HDD Activity LED (Default: On)
  • disabled - M.2 PCIe SSD LED (Default: On)

BIOS\Devices\Video

  • IGD Minimum Memory - 64MB (Default)
  • IGD Aperture Size - 256 (Default)
  • IGD Primary Video Port - Auto (Default)

BIOS\Devices\Onboard Devices

  • disabled - Audio (Default: On)
  • LAN (Default)
  • disabled - Thunderbolt Controller (Default is Enabled)
  • disabled - WLAN (Default: On)
  • disabled - Bluetooth (Default: On)
  • Near Field Communication - Disabled (Default is Disabled)
  • SD Card - Read/Write (Default was Read)
  • Legacy Device Configuration
  • disabled - Enhanced Consumer IR (Default: On)
  • disabled - High Precision Event Timers (Default: On)
  • disabled - Num Lock (Default: On)

BIOS\PCI

  • M.2 Slot 1 - Enabled
  • M.2 Slot 2 - Enabled
  • M.2 Slot 1 NVMe SSD: Samsung MZVPV256HDGL-00000
  • M.2 Slot 2 NVMe SSD: Samsung MZVPV512HDGL-00000

Cooling

  • CPU Fan Header
  • Fan Control Mode : Cool (I toyed with Full fan, but it does make a lot of noise)

Performance\Processor

  • disabled - Real-Time Performance Tuning (Default: On)

Power

  • Select Max Performance Enabled (Default: Balanced Enabled)
  • Secondary Power Settings
  • disabled - Intel Ready Mode Technology (Default: On)
  • disabled - Power Sense (Default: On)
  • After Power Failure: Power On (Default was stay off)

Over the weekend, I received several emails from folks including Olli from nucblog.net (highly recommend a follow if you do not already), Florian from virten.net (another awesome blog which I follow & recommend) and a few others who have gotten their hands on the "Skull Canyon" system. They had all tried to install the latest release of ESXi 6.0 Update 2, as well as earlier versions, but all ran into a problem while booting up the ESXi installer.

The following error message was encountered:

Error loading /tools.t00
Compressed MD5: 39916ab4eb3b835daec309b235fcbc3b
Decompressed MD5: 000000000000000000000000000000
Fatal error: 10 (Out of resources)

Raymond Huh was the first individual who reached out to me regarding this issue, and shortly after, I started to get the same confirmations from others as well. Raymond's suspicion was that this was related to the amount of Memory-Mapped I/O resources being consumed by the Intel Iris Pro GPU, which does not leave enough resources for the ESXi installer to boot up. Even a quick Google search on this particular error message leads to several solutions here and here where the recommendation was to either disable or reduce the amount of memory for MMIO within the system BIOS.

Unfortunately, it does not look like the Intel NUC BIOS provides any options for disabling or modifying the MMIO settings after Raymond had looked, including tweaking some of the video settings. He currently has a support case filed with Intel to see if there is another option. In the meantime, I had also reached out to some folks internally to see if they had any thoughts, and they too came to the same conclusion: without being able to modify or disable MMIO, there is not much more that can be done. There is a chance that I might be able to get access to a unit from another VMware employee, and perhaps we can see if there is any workaround from our side, but there are no guarantees, especially as this is not an officially supported platform for ESXi. I want to thank Raymond, Olli & Florian for going through the early testing and sharing their findings thus far. I know many folks are anxiously waiting and I know they really appreciate it!

For now, if you are considering purchasing or have purchased the latest Intel NUC Skull Canyon with the intention of running ESXi, I would recommend holding off or not opening up the system. I will provide any new updates as they become available. I am still hopeful that we will find a solution for the VMware community, so fingers crossed.

Categories // ESXi, Home Lab, Not Supported Tags // ESXi, Intel NUC, Skull Canyon

Functional USB 3.0 Ethernet Adapter (NIC) driver for ESXi 5.5 & 6.0

03.28.2016 by William Lam // 81 Comments

Earlier this month I wrote an article demonstrating a functional USB ethernet adapter for ESXi 5.1. This was made possible by a custom-built driver for ESXi that was created over three years ago by a user named Trickstarter. After having re-discovered the thread several years later, I tried reaching out to the user but concluded that he/she had probably moved on, given the lack of forum activity in recent years. Over the last few weeks I have been investigating whether it was possible to compile a new version of the driver that would function with newer versions of ESXi such as our 5.5 and 6.0 releases.

UPDATE (02/12/19) - A new VMware Native Driver for USB-based NICs has just been released for ESXi 6.5/6.7, please use this driver going forward. If you are still on ESXi 5.5/6.0, you can continue using the existing driver but please note there will be no additional development in the existing vmklinux-based driver.

UPDATE (01/22/17) - For details on using a USB-C / Thunderbolt 3 Ethernet Adapter, please see this post here.

UPDATE (11/17/16) - New driver has been updated for ESXi 6.5, please find the details here.

After reaching out to a few folks internally, I was introduced to Songtao Zheng, a VMware engineer who works on some of our USB code base. Songtao was kind enough to provide some assistance in his spare time to help with this non-sanctioned effort that I was embarking on. Today, I am pleased to announce that we now have a functional USB ethernet adapter driver based on the ASIX AX88179 that works for both ESXi 5.5 and 6.0. This effort could not have been possible without Songtao, and I just want to say thank you very much for all of your help and contributions. I think it is safe to say that the overall VMware community also thanks you for your efforts. This new capability will definitely enable new use cases for vSphere home labs that were never possible before when using platforms such as the Intel NUC or Apple Mac Mini. Thank you Songtao! I would also like to extend an additional thank you to Jose Gomes, one of my readers, who has also been extremely helpful with his feedback as well as assistance in testing the new drivers.

Now, before jumping into the goods, I do want to mention that there are a few caveats to be aware of, and I think it is important to understand them before making any purchasing decisions.

  • First and foremost, this is NOT officially supported by VMware, use at your own risk.
  • Secondly, we have observed a substantial difference in transfer speeds between Transmit (Egress) and Receive (Ingress) traffic, which may or may not be acceptable depending on your workload. On Receive, the USB network adapter performs close to a native gigabit interface. However, on Transmit, the bandwidth mysteriously drops by ~50% and transfer speeds are very inconsistent. We are not exactly sure why this is the case, but given that ESXi does not officially support USB-based ethernet adapters, it is possible that the underlying infrastructure was never optimized for such devices. YMMV.
  • Lastly, for the USB ethernet adapter to function properly, you will need a system that supports USB 3.0, which makes sense for this type of solution to be beneficial in a home lab. If you have a system with only USB 2.0, the device will probably not work, at least based on the testing we have done.

Note: For those interested in the required source code changes to build the AX88179 driver, I have published all of the details on my Github repo here.

Disclaimer: In case you some how missed it, this is not officially supported by VMware. Use at your own risk.

Without further ado, here are the USB 3.0 gigabit ethernet adapters that are supported with the drivers:

  • StarTech USB 3.0 to Gigabit Ethernet NIC Adapter
  • StarTech USB 3.0 to Dual Port Gigabit Ethernet Adapter NIC with USB Port
  • j5create USB 3.0 to Gigabit Ethernet NIC Adapter (verified by reader Sean Hatfield 03/29/16)
  • Vantec CB-U300GNA USB 3.0 Ethernet Adapter (verified by VMware employee 05/19/16)
  • D-Link DUB-1312 USB 3.0 Gigabit Ethernet Adapter (verified by twitter user George Markou 07/29/16)

Note: There may be other USB ethernet adapters that use the same chipset which could also leverage this driver, but these are the only ones that have been verified.

Here are the ESXi driver VIB downloads:

  • ESXi 5.5 Update 3 USB Ethernet Adapter Driver VIB or ESXi 5.5 Update 3 USB Ethernet Adapter Driver Offline Bundle
  • ESXi 6.0 Update 2 USB Ethernet Adapter Driver VIB or ESXi 6.0 Update 2 USB Ethernet Adapter Driver Offline Bundle
  • ESXi 6.5 USB Ethernet Adapter Driver VIB or ESXi 6.5 USB Ethernet Adapter Driver Offline Bundle

Note: Although the drivers were compiled against a specific version of ESXi, they should also work on other releases within the same major version, but I have not done that level of testing and YMMV.

Verify USB 3.0 Support

As mentioned earlier, you will need a system that is USB 3.0 capable to be able to use the USB ethernet adapter. If you are unsure, you can plug in a USB 3.0 device and run the following command to check:

lsusb

What you will be looking for is an entry stating "Linux Foundation 3.0 root hub", which shows that ESXi was able to detect a USB 3.0 port on your system. Secondly, look for the USB device you just plugged in and ensure the "Bus" ID matches that of the USB 3.0 bus. This will tell you whether your device is being claimed as a USB 3.0 device. If not, you may need to update your BIOS, as some systems ship with USB 2.0 enabled by default, like earlier versions of the Intel NUC as described here. You may also be running a pre-5.5 release of ESXi, which did not support USB 3.0 as mentioned here, so you may need to upgrade your ESXi host to at least 5.5 or greater.
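For reference, here is roughly what a healthy result looks like. This output is illustrative only (your Bus/Device numbers will differ); the AX88179 chipset normally reports the USB ID 0b95:1790:

Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 002: ID 0b95:1790 ASIX Electronics Corp. AX88179 Gigabit Ethernet

In this example the adapter shows up on Bus 002, the same bus as the 3.0 root hub, which is what you want to see.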

Install Driver

You can either install the VIB directly onto your ESXi host or create a custom ESXi ISO that includes the driver using a popular tool like ESXi Customizer by Andreas Peetz.

To install the VIB, upload the VIB to your ESXi host and then run the following ESXCLI command specifying the full path to the VIB:

esxcli software vib install -v /vghetto-ax88179-esxi60u2.vib -f
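The -f flag forces the install because the driver is not signed at a VMware-certified level. Alternatively, assuming the VIB is packaged at the CommunitySupported acceptance level, you should be able to lower the host acceptance level once and then install without forcing:

esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /vghetto-ax88179-esxi60u2.vib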

Lastly, you will need to disable the USB native driver in order to use this driver. To do so, run the following command:

esxcli system module set -m=vmkusb -e=FALSE

You will need to reboot for the change to go into effect.
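Before rebooting, you can confirm the module has been disabled by listing the loaded modules and checking that the Is Enabled column for vmkusb shows false:

esxcli system module list | grep vmkusb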

To verify that the USB network adapter has been successfully claimed, run either of the following commands to list your physical NICs:

esxcli network nic list
esxcfg-nics -l

To add the USB uplink, you will need to use either the vSphere Web Client or ESXCLI to add the uplink to either a Standard or Distributed Virtual Switch.

To do so using ESXCLI, run the following command and specify the name of your vSwitch:

esxcli network vswitch standard uplink add -u vusb0 -v vSwitch0
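To confirm the uplink was added, list the vSwitch configuration and check that vusb0 now appears in the Uplinks field:

esxcli network vswitch standard list -v vSwitch0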

Uninstall Driver

To uninstall the VIB, first make sure to completely unplug the USB network adapter from the ESXi host. Next, run the following ESXCLI command, which will automatically unload the driver and remove the VIB from your ESXi host:

esxcli software vib remove -n vghetto-ax88179-esxi60u2

Note: If you try to remove the VIB while the USB network adapter is still plugged in, you may hang the system or cause a PSOD. Simply reboot the system if you accidentally get into this situation.

Troubleshooting

If you are not getting link on the USB ethernet adapter, it is most likely that your system does not support USB 3.0. If you find a message similar to the one below in /var/log/vmkernel.log, then you are probably running USB 1.0 or 2.0.

2016-03-21T23:30:49.195Z cpu6:33307)WARNING: LinDMA: Linux_DMACheckConstraints:138: Cannot map machine address = 0x10f5b6b44, length = 2 for device 0000:00:1d.7; reason = address exceeds dma_mask (0xffffffff))

Persisting USB NIC Configurations after reboot

ESXi does not natively support USB NICs, and upon a reboot the USB NICs are not picked up until much later in the boot process, which prevents them from being associated with a VSS/VDS and their respective portgroups. To ensure things are connected properly after a reboot, you will need to add something like the following to /etc/rc.local.d/local.sh, which re-links the USB NIC along with the individual portgroups, as shown in the example below.

esxcfg-vswitch -L vusb0 vSwitch0
esxcfg-vswitch -M vusb0 -p "Management Network" vSwitch0
esxcfg-vswitch -M vusb0 -p "VM Network" vSwitch0

You will also need to run /sbin/auto-backup.sh to ensure the configuration changes are saved and then you can issue a reboot to verify that everything is working as expected.
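Since the USB NIC can be enumerated quite late in the boot process, a small wait loop before re-linking makes this more reliable. Below is a minimal sketch of what the local.sh addition could look like; the wait loop and ~60-second timeout are my own embellishment and not part of the original instructions. It goes before the final exit 0 in /etc/rc.local.d/local.sh:

# Wait up to ~60 seconds for the USB NIC to be enumerated
count=0
while ! esxcfg-nics -l | grep -q vusb0; do
    sleep 5
    count=$((count + 1))
    [ $count -ge 12 ] && break
done

# Re-link the USB NIC and its portgroups
esxcfg-vswitch -L vusb0 vSwitch0
esxcfg-vswitch -M vusb0 -p "Management Network" vSwitch0
esxcfg-vswitch -M vusb0 -p "VM Network" vSwitch0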

Summary

For platforms that have limited built-in networking capabilities such as the Intel NUC and Apple Mac Mini, customers now have the ability to add additional network interfaces to these systems. This will now open up a whole new class of use cases for vSphere based home labs that were never possible before, especially with solutions such as VSAN and NSX. I look forward to seeing what our customers can now do with these new networking capabilities.

Additional Info

Here are some additional screenshots of testing the dual-port USB 3.0 ethernet adapter as well as a basic iPerf benchmark for the single-port USB ethernet adapter. I was not really impressed with the speeds of the dual-port ethernet adapter, which I have shared some more info about here. Unless you are limited on the number of USB 3.0 ports, I would probably recommend sticking with the single-port ethernet adapter.

[Screenshots: dual-port USB 3.0 ethernet adapter testing]

iPerf benchmark for Ingress traffic (single port USB ethernet adapter):
[Screenshot: iPerf Ingress results]

iPerf benchmark for Egress traffic (single port USB ethernet adapter):
[Screenshot: iPerf Egress results]

Categories // ESXi, Home Lab, Not Supported, vSphere 6.0 Tags // ESXi 5.5, ESXi 6.0, homelab, lsusb, usb, usb ethernet adapter, usb network adapter, vSphere 5.5

VSAN 6.2 (vSphere 6.0 Update 2) homelab on 6th Gen Intel NUC

03.03.2016 by William Lam // 33 Comments

As many of you know, I have been happily using an Apple Mac Mini for my personal vSphere home lab for the past few years now. I absolutely love the simplicity and versatility of the platform, from easily running a basic vSphere lab to consuming advanced capabilities of the vSphere platform like VMware VSAN or NSX. The Mac Mini also supports more complex networking configurations by allowing you to add an additional network adapter which leverages the built-in Thunderbolt port, something many other similar form factors lack. Having said all that, one major limitation of the Mac Mini platform has always been the limited amount of memory it can support, a maximum of 16GB (the same limitation as other form factors in this space). Although it is definitely possible to run a vSphere lab with only 16GB of memory, it does limit you somewhat in what you can deploy, which is challenging if you want to explore other solutions like VSAN, NSX and vRealize.

I was really hoping that Apple would have released an update to their Mac Mini platform last year that would include support for 32GB of memory, but instead it was a very minor update and mostly a letdown, which you can read more about here. Earlier this year, I found out from fellow blogger Florian Grehl that Intel had just released the 6th generation of the Intel NUC, which officially adds support for 32GB of memory. I had been keeping an eye on the Intel NUC for some time, but due to the same memory limitation as the Mac Mini, I had never considered it a viable option, especially given that I already own a Mac Mini. With the added support for 32GB of memory and the ability to house two disk drives (M.2 and 2.5"), this was the update I was finally waiting for to pull the trigger and refresh my home lab, given that 16GB was just not cutting it for the work I was doing anymore.

There has been quite a bit of interest in what I ended up purchasing for running VSAN 6.2 (vSphere 6.0 Update 2), which has not GA'ed ... yet, so I figured I would put together a post with all the details in case others were looking to build a similar lab. This article is broken down into the following sections:

  • Bill of Materials (BOM)
  • Installation
  • VSAN Configuration
  • Final Word

Disclaimer: The Intel NUC is not on VMware's official Hardware Compatibility List (HCL) and therefore is not officially supported by VMware. Please use this platform at your own risk.

Bill of Materials (BOM)

Below are the components with links that I used for my configuration, which is partially based on budget as well as recommendations from others who have a similar setup. If you think you will need more CPU horsepower, you can look at the Core i5 (NUC6i5SYH) model, which is slightly more expensive than the i3. I opted for an all-flash configuration because I not only wanted the performance but also wanted to take advantage of the much anticipated Deduplication and Compression feature in VSAN 6.2, which is only supported with an all-flash VSAN setup. I also did not have a need for a large amount of storage capacity, but you could pay a tiny bit more for the exact same drive to get a full 1TB if needed. If you do not care for an all-flash setup, you can definitely look at spinning rust, which can give you several TBs of storage at a very reasonable cost. The overall cost of the system for me was ~$700 USD (before taxes), and that was because some of the components were slightly discounted through the use of a preferred retailer that my employer provided. I would highly recommend you check with your employer to see if you have similar benefits, as that can help with the cost if that is important to you. The SSDs actually ended up being cheaper on Amazon, so I purchased them there.

  • 1 x Intel NUC 6th Gen NUC6i3SYH (supports 2 drives: M.2 & 2.5)
  • 2 x Crucial 16GB DDR4
  • 1 x Samsung 850 EVO 250GB M.2 for “Caching” Tier (Thanks to my readers, decided to upgrade to 1 x Samsung SM951 NVMe 128GB M.2 for "Caching" Tier)
  • 1 x Samsung 850 EVO 500GB 2.5 SATA3 for “Capacity” Tier

Installation

The installation of the memory and the SSDs in the NUC was super simple. You just need a regular Phillips screwdriver; there are four screws at the bottom of the NUC that you will need to unscrew. Once loosened, you just need to flip the NUC unit back on top while holding the bottom and slowly take the top off. The M.2 SSD is held by a smaller Phillips screw, which you will need to remove before you can plug in the device. The memory just plugs right in, and you should hear a click to confirm it is inserted all the way. The 2.5" SSD just plugs into the drive bay, which is attached to the top of the NUC casing. If you are interested in more details, you can find various unboxing and installation videos online like this one.

UPDATE (05/25/16): Intel has just released BIOS v44, which fully unleashes the performance of your NVMe devices. One thing to note from the article is that you do NOT need to unplug the security device; you can update the BIOS by simply downloading the BIOS file and loading it onto a USB key (FAT32).

UPDATE (03/06/16): Intel has just released BIOS v36, which resolves the M.2 SSD issue. If you had updated using an earlier version, you can resolve the problem by going into the BIOS and re-enabling the M.2 device, as mentioned in this blog here.

One very important thing to note, which I was warned about by a fellow user, was NOT to update/flash to a newer version of the BIOS. It turns out that if you do, the M.2 SSD will fail to be detected by the system, which sounds like a serious bug if you ask me. The stock BIOS version that came with my Intel NUC is SYSKLi35.86A.0024.2015.1027.2142, in case anyone is interested. I am not sure if you can flash back to the original version, but another user just informed me that they had accidentally updated the BIOS and can no longer see the M.2 device 🙁

For the ESXi installation, I just used a regular USB key that I had lying around and used the unetbootin tool to create a bootable USB key. I am using the upcoming ESXi 6.0 Update 2 (which has not been released ... yet) and you will be able to use the out-of-the-box ISO that is shipped from VMware. There are no additional custom drivers required. Once the ESXi installer loads up, you can then install ESXi back onto the same USB key from which it initially booted. I know this is not always common knowledge, as some may think you need an additional USB device to install ESXi. Ensure you do not install anything on the two SSDs if you plan to use VSAN, as it requires at least (2 x SSD) or (1 x SSD and 1 x MD).

If you are interested in adding a bit of personalization to your Intel NUC setup and replace the default Intel BIOS splash screen like I have, take a look at this article here for more details.

If you are interested in adding additional network adapters to your Intel NUC via USB Ethernet Adapter, have a look at this article here.

VSAN Configuration

Bootstrapping VSAN Datastore:

  • If you plan to run VSAN on the NUC and you do not have additional external storage to deploy and set up things like vCenter Server, you have the option to "bootstrap" VSAN using a single ESXi node to start with, which I have written about in more detail here and here. This option allows you to set up VSAN so that you can deploy vCenter Server and then configure the remaining nodes of your VSAN cluster, which will require at least 3 nodes unless you plan on doing a 2-Node VSAN Cluster with the VSAN Witness Appliance. For more detailed instructions on bootstrapping an all-flash VSAN datastore, please take a look at my blog article here. A minimal ESXCLI sketch of the bootstrap is also shown after this list.
  • Running a *single* VSAN node is possible but NOT recommended, given that you need a minimum of 3 nodes for VSAN to function properly. If you go this route, then after the vCenter Server is deployed you will need to update the default VSAN VM Storage Policy to either allow "Forced Provisioning" or change the FTT from 1 to 0 (e.g. no protection, given you only have a single node). This is required, or else you will run into provisioning issues, as VSAN will prevent you from deploying VMs while it is expecting two additional VSAN nodes. When logged into the home page of the vSphere Web Client, click on the "VM Storage Policies" icon, edit the "Virtual SAN Default Storage Policy" and change the following values as shown in the screenshot below:

[Screenshot: editing the Virtual SAN Default Storage Policy in the vSphere Web Client]
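For reference, the single-node bootstrap boils down to a handful of ESXCLI commands. This is only a sketch based on the linked articles, not a full walkthrough: the device identifiers are placeholders you would replace with the real ones from esxcli storage core device list, and on an all-flash setup the capacity SSD must first be tagged as capacityFlash.

# Default policies: keep FTT=1 but force provisioning so single-node deployments succeed
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

# Create a new single-node VSAN cluster
esxcli vsan cluster new

# All-flash only: tag the capacity SSD as a capacity device (placeholder device ID)
esxcli vsan storage tag add -d naa.XXXXXXXXXXXXXXXX -t capacityFlash

# Claim the cache (-s) and capacity (-d) devices (placeholder device IDs)
esxcli vsan storage add -s t10.NVMe____CACHE_DEVICE -d naa.XXXXXXXXXXXXXXXX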

Installing vCenter Server:

  • If you are new to deploying the vCenter Server, VMware has a deployment guide which you can follow here.

Optimizations:

  • In addition, because this is for a home lab, my buddy Cormac Hogan has a great tip on disabling device monitoring, as the SSD devices may not be on VMware's official HCL and monitoring can potentially negatively impact your lab environment. The following ESXCLI command needs to be run once on each of the ESXi hosts, either in the ESXi Shell or remotely:

esxcli system settings advanced set -o /LSOM/VSANDeviceMonitoring -i 0

  • I also recently learned from reading Cormac's blog that there is a new ESXi advanced setting in VSAN 6.2 which allows VSAN to provision the VM swap object as "thin" versus "thick", which has historically been the default. To disable the use of "thick" provisioning, you will need to run the following ESXCLI command on each ESXi host:

esxcli system settings advanced set -o /VSAN/SwapThickProvisionDisabled -i 1

  • Lastly, if you plan to run Nested ESXi VMs on top of your physical VSAN cluster, be sure to add the configuration change outlined in this article here, or else you may see some strangeness when trying to create VMFS volumes. A quick way to verify the two advanced settings above is shown below.
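To confirm that both advanced settings took effect, you can read them back on each host; the Int Value field should now show 0 and 1 respectively:

esxcli system settings advanced list -o /LSOM/VSANDeviceMonitoring
esxcli system settings advanced list -o /VSAN/SwapThickProvisionDisabled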


Final Word

I have only had the NUC for a couple of days, but so far I have been pretty impressed with the ease of setup and the super tiny form factor. I thought the Mac Mini was small and portable, but the NUC really blows it out of the water. I was super happy with the decision to go with an all-flash setup; the deployment of the VCSA was super fast, as you would expect. If I compare this to my Mac Mini, which had spinning rust, for a portion of the VCSA deployment the fan would go a bit psycho and you could feel the heat if you put your face close to it. I could barely feel any heat from the NUC and it was dead silent, which is great as it sits in our living room. Like the Mac Mini, the NUC has a regular HDMI port, which is great as I can connect it directly to our TV, and it has plenty of USB ports, which could come in handy if you wanted to play with VSAN using USB-based disks 😉

One neat idea that Duncan Epping brought up in a recent chat was to run a 2-Node VSAN Cluster and have the VSAN Witness Appliance running on a desktop or laptop. This would make for a very simple and affordable VSAN home lab without requiring a 3rd physical ESXi node. I had also thought about doing the same, but instead of 2 NUCs, I would combine my Mac Mini and NUC to form the 2-Node VSAN Cluster and then run the VSAN Witness on my iMac desktop, which has 16GB of memory. This is just another slick way you can leverage this new and powerful platform to run a full-blown VSAN setup. For those of you following my blog, I am also looking to see if there is a way to add a secondary network adapter to the NUC by way of a USB 3.0 based ethernet adapter. I have already shown that it is definitely possible with older releases of ESXi, and if this works, it could make the NUC even more viable.

Lastly, for those looking for a beefier setup, there are rumors that Intel may be close to releasing another update to the Intel NUC platform, code named "Skull Canyon", which could include a Quad-Core i7 (Broadwell based) along with support for the new USB-C interface capable of running Thunderbolt 3. If true, this could be another option for those looking for a bit more power for their home lab.

A few folks have been asking what I plan to do with my Mac Mini now that I have my NUC. I will probably be selling it; it is still a great platform and has a Core i7, which definitely helps with any CPU-intensive tasks. It also supports two drives, so it is quite inexpensive to purchase another SSD (it already comes with one) to set up an all-flash VSAN 6.2 configuration. Below are the specs, and if you are interested in the setup, feel free to drop me an email at info.virtuallyghetto [at] gmail [dot] com.

  • Mac Mini 5,3 (Late 2011)
  • Quad-Core i7 (2635QM)
  • 16GB memory
  • 1 x SSD (120GB) Corsair Force GT
  • 1 x MD (750 GB) Seagate Momentus XT
  • 1 x built-in 1GbE Ethernet port
  • 1 x Thunderbolt port
  • 4 x USB ports
  • 1 x HDMI
  • Original packaging available
  • VSAN capable
  • ESXi will install OOTB w/o any issues

Additional Useful Resources:

  • http://www.virten.net/2016/01/vmware-homeserver-esxi-on-6th-gen-intel-nuc/
  • http://www.ivobeerens.nl/2016/02/24/intel-nuc-6th-generation-as-home-server/
  • http://www.sindalschmidt.me/how-to-run-vmware-esxi-on-intel-nuc-part-1-installation/

Categories // ESXi, Home Lab, Not Supported, VSAN, vSphere 6.0 Tags // ESXi 6.0, homelab, Intel NUC, notsupported, Virtual SAN, VSAN, VSAN 6.2, vSphere 6.0 Update 2
