WilliamLam.com

ESXi on the new Intel NUC Skull Canyon

05.21.2016 by William Lam // 62 Comments

Earlier this week I found out the new Intel NUC "Skull Canyon" (NUC6i7KYK) has been released and has been shipping for a couple of weeks now. Although this platform is mainly targeted at gaming enthusiasts, there has also been a lot of anticipation from the VMware community on leveraging the NUC for a vSphere-based home lab. Similar to the 6th Gen Intel NUC, which is a great platform to run vSphere as well as VSAN, the new NUC includes several new enhancements beyond the new aesthetics. In addition to the Core i7 CPU, it also includes dual M.2 slots (no SATA support), Thunderbolt 3 and, most importantly, an Intel Iris Pro GPU. I will get to why this is important ...
UPDATE (05/26/16) - With some further investigation from folks like Erik and Florian, it turns out the *only* device that needs to be disabled for ESXi to successfully boot and install is the Thunderbolt Controller. Once ESXi has been installed, you can re-enable the Thunderbolt Controller, and Florian has also written a nice blog post here with instructions as well as screenshots for those not familiar with the Intel NUC BIOS.

UPDATE (05/23/16) - Shortly after sharing this article internally, Jason Joy, a VMware employee, shared the great news that he had figured out how to get ESXi to properly boot and install. Jason found that disabling unnecessary hardware devices like the Consumer IR in the BIOS allowed the ESXi installer to properly boot up. Jason was going to dig a bit further to see if he could identify the minimal list of devices that need to be disabled to boot ESXi. In the meantime, community blogger Erik Bussink has shared the list of settings he applied to his Skull Canyon to successfully boot and install the latest ESXi 6.0 Update 2, based on the feedback from Jason. Huge thanks to Jason for quickly identifying the workaround and sharing it with the VMware community, and thanks to Erik for publishing his list. For all those that were considering the new Intel NUC Skull Canyon for a vSphere-based home lab, you can now get your ordering on! 😀

Below is an excerpt from his blog post Intel NUC Skull Canyon (NUC6I7KYK) and ESXi 6.0 on the settings he has disabled:

BIOS\Devices\USB

  • disabled - USB Legacy (Default: On)
  • disabled - Portable Device Charging Mode (Default: Charging Only)
  • not changed - USB Ports (Ports 01-08 enabled)

BIOS\Devices\SATA

  • disabled - Chipset SATA (Default AHCI & SMART Enabled)
  • M.2 Slot 1 NVMe SSD: Samsung MZVPV256HDGL-00000
  • M.2 Slot 2 NVMe SSD: Samsung MZVPV512HDGL-00000
  • disabled - HDD Activity LED (Default: On)
  • disabled - M.2 PCIe SSD LED (Default: On)

BIOS\Devices\Video

  • IGD Minimum Memory - 64MB (Default)
  • IGD Aperture Size - 256 (Default)
  • IGD Primary Video Port - Auto (Default)

BIOS\Devices\Onboard Devices

  • disabled - Audio (Default: On)
  • LAN (Default)
  • disabled - Thunderbolt Controller (Default is Enabled)
  • disabled - WLAN (Default: On)
  • disabled - Bluetooth (Default: On)
  • Near Field Communication - Disabled (Default is Disabled)
  • SD Card - Read/Write (Default was Read)
  • Legacy Device Configuration
  • disabled - Enhanced Consumer IR (Default: On)
  • disabled - High Precision Event Timers (Default: On)
  • disabled - Num Lock (Default: On)

BIOS\PCI

  • M.2 Slot 1 - Enabled
  • M.2 Slot 2 - Enabled
  • M.2 Slot 1 NVMe SSD: Samsung MZVPV256HDGL-00000
  • M.2 Slot 2 NVMe SSD: Samsung MZVPV512HDGL-00000

Cooling

  • CPU Fan Header
  • Fan Control Mode : Cool (I toyed with Full fan, but it does make a lot of noise)

Performance\Processor

  • disabled Real-Time Performance Tuning (Default: On)

Power

  • Select Max Performance Enabled (Default: Balanced Enabled)
  • Secondary Power Settings
  • disabled - Intel Ready Mode Technology (Default: On)
  • disabled - Power Sense (Default: On)
  • After Power Failure: Power On (Default was stay off)

Over the weekend, I received several emails from folks including Olli from nucblog.net (highly recommend a follow if you do not already), Florian from virten.net (another awesome blog which I follow & recommend) and a few others who have gotten their hands on the "Skull Canyon" system. They had all tried to install the latest release of ESXi 6.0 Update 2, as well as earlier versions, but all ran into a problem while booting up the ESXi installer.

The following error message was encountered:

Error loading /tools.t00
Compressed MD5: 39916ab4eb3b835daec309b235fcbc3b
Decompressed MD5: 000000000000000000000000000000
Fatal error: 10 (Out of resources)

Raymond Huh was the first individual who reached out to me regarding this issue and then shortly after, I started to get the same confirmations from others as well. Raymond's suspicion was that this was related to the amount of Memory-Mapped I/O resources being consumed by the Intel Iris Pro GPU, which does not leave enough resources for the ESXi installer to boot up. Even a quick Google search on this particular error message leads to several solutions here and here where the recommendation was to either disable or reduce the amount of memory for MMIO within the system BIOS.

Unfortunately, it does not look like the Intel NUC BIOS provides any options for disabling or modifying the MMIO settings; Raymond had already looked, including tweaking some of the video settings. He currently has a support case filed with Intel to see if there is another option. In the meantime, I had also reached out to some folks internally to see if they had any thoughts and they too came to the same conclusion that without being able to modify or disable MMIO, there is not much more that can be done. There may be a chance that I might be able to get access to a unit from another VMware employee and perhaps we can see if there is any workaround from our side, but there are no guarantees, especially as this is not an officially supported platform for ESXi. I want to thank Raymond, Olli & Florian for going through the early testing and sharing their findings thus far. I know many folks are anxiously waiting and I know they really appreciate it!

For now, if you are considering purchasing or have purchased the latest Intel NUC Skull Canyon with the intention to run ESXi, I would recommend holding off or not opening up the system. I will provide any new updates as they become available. I am still hopeful  that we will find a solution for the VMware community, so crossing fingers.

Categories // ESXi, Home Lab, Not Supported Tags // ESXi, Intel NUC, Skull Canyon

Support your Virtualization Bloggers by voting for Top vBlog 2016

05.03.2016 by William Lam // Leave a Comment

It is that time of the year again: Eric Siebert, who runs the popular vSphere-land.com website, has just opened up the voting polls for the Top 25 Virtualization Blogs of 2016. There are over 300 bloggers this year and it is a very impressive list! Here is your chance to show your support for your favorite bloggers by casting a vote, which only takes a few minutes. Before voting, be sure to check out Eric's blog post on the criteria you should consider when voting, such as Longevity, Length, Frequency & Quality.

Lastly, I want to thank Eric for all of his hard work for putting this together year after year. I know he spends an enormous amount of time and energy to make this happen and make sure to support Eric and his sponsors by visiting their sites as this would not be possible without them. Happy voting!


Vote now!

Categories // Uncategorized

Generating vCenter Server & Platform Services Controller deployment topology diagrams

05.02.2016 by William Lam // 16 Comments

A really useful capability that vCenter Server used to provide was a feature called vCenter Maps. I say "used to" because this feature was only available when using the vSphere C# Client and was not available in the vSphere Web Client. vCenter Maps provided a visual representation of your vCenter Server inventory along with the different relationships between your Virtual Machines, Hosts, Networks and Datastores. There were a variety of use cases for this feature but it was especially useful when it came to troubleshooting storage or networking connectivity. An administrator could quickly identify, with just a few clicks, if they had an ESXi host that was not connected to the right datastore, for example.

[Image: Example vCenter Server and Platform Services Controller topology diagram]
Although much of this information can be obtained either manually or programmatically using the vSphere API, the consumption of this data can sometimes be more effective when it is visualized.
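To illustrate what "programmatically using the vSphere API" can look like, here is a minimal, hypothetical sketch using the community pyVmomi SDK that lists which datastores each ESXi host is connected to (the hostname and credentials below are made up for illustration and this is not part of the topology script discussed below):

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Connect to a (hypothetical) vCenter Server, skipping certificate verification for a lab
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Walk all ESXi hosts and print the datastores each one is connected to
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    datastores = [ds.name for ds in host.datastore]
    print("%s -> %s" % (host.name, ", ".join(datastores)))

Disconnect(si)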

I was recently reminded of the vCenter Maps feature as I have seen an increase in discussions around the different vSphere 6.0 deployment topology options. This is an area where I think we could have leveraged visualizations to provide a better user experience to help our customers understand what they have deployed as it relates to install, upgrade and expansion of their vSphere environment. Today, this information is spread across a variety of interfaces ranging from the vSphere Web Client (here and here) as well as across different CLIs (here and here), and there is nothing that aggregates all of this disparate information in an easy-to-consume manner. Collecting this information can also be challenging as you scale up the number of environments you are managing or deal with complex deployments that can also span multiple sites.

Would it not be cool if you could easily extract and visualize your vSphere 6.0 deployment topology? 🙂

Well, this was a little side project I recently took up. I have created a small Python script called extract_vsphere_deployment_topology.py that can run on either a Windows Platform Services Controller (PSC) or a vCenter Server Appliance (VCSA) PSC and from that system extract the current vSphere deployment topology, which includes details about the individual vCenter Servers, SSO Sites as well as the PSC replication agreements. The result of the script is output in the DOT format, a popular graph description language, which can then be used to generate a diagram like the example shown below.

[Image: Example vSphere deployment topology diagram]

Requirements:

  • vSphere 6.0 environment
  • Access to either a Windows or VCSA PSC as a System Administrator
  • SSO Administrator credentials

Step 1 - Download the extract_vsphere_deployment_topology.py python script to either your Windows vCenter Server PSC or vCenter Server Appliance (VCSA) PSC.

Step 2 - To run on a vCenter Server Appliance (VCSA) PSC, you will first need to make the script executable by running the following command:

chmod +x extract_vsphere_deployment_topology.py

To run on a vCenter Server for Windows PSC, you will first need to update your PATH environment variable to include the Python interpreter. Follow the directions here if you have never done this before and add C:\Program Files\VMware\vCenter Server\python
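For example, assuming the default installation path shown above, you could add it to the current Command Prompt session (a temporary change that only lasts for that session) like this:

set PATH=%PATH%;C:\Program Files\VMware\vCenter Server\python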

Step 3 - The script requires that you provide an SSO Administrator username and password. You can specify everything on the command line, or omit the password, in which case you will be prompted to enter it.

To run the script on a VCSA PSC, run the following command specifying your credentials:

./extract_vsphere_deployment_topology.py  -u *protected email* -p VMware1!

To run the script on Windows VC PSC, run the following command specifying your credentials:

python C:\Users\primp\Desktop\extract_vsphere_deployment_topology.py  -u *protected email* -p VMware1!

Here is an example output from one of my environments.

graph vghetto_vsphere_topology_extraction {
   graph [fontsize = 20,label = "\nSSO Domain: vsphere.local"];
   subgraph cluster_0 {
      style=filled;
      node [style=filled];
      "vcenter60-5.primp-industries.com" -- "psc-06.primp-industries.com"
      label = "Site: East-Coast";
    }
   subgraph cluster_1 {
      style=filled;
      node [style=filled];
      "vcenter60-4.primp-industries.com" -- "psc-05.primp-industries.com"
      "psc-05.primp-industries.com";
      label = "Site: West-Coast";
    }
   "psc-06.primp-industries.com" -- "psc-05.primp-industries.com"
   "vcenter60-4.primp-industries.com" [color="0.578 0.289 1.000"]
   "vcenter60-5.primp-industries.com" [color="0.578 0.289 1.000"]
   "psc-06.primp-industries.com" [color="0.355 0.563 1.000"];
   "psc-05.primp-industries.com" [color="0.355 0.563 1.000"];
}
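If you are curious how DOT output like this can be built up programmatically, here is a minimal, hypothetical Python sketch (this is not the actual script; the site names, hostnames and replication agreements below are made up for illustration, whereas the real script discovers them from the PSC):

# Hypothetical topology data: SSO site -> list of (vCenter Server, PSC) pairs
sites = {
    "East-Coast": [("vc-east.example.com", "psc-east.example.com")],
    "West-Coast": [("vc-west.example.com", "psc-west.example.com")],
}
# Hypothetical PSC-to-PSC replication agreements
replication_agreements = [("psc-east.example.com", "psc-west.example.com")]

lines = ["graph vsphere_topology {"]
lines.append('   graph [fontsize = 20, label = "\\nSSO Domain: vsphere.local"];')
for idx, (site, pairs) in enumerate(sites.items()):
    # Each SSO site becomes a DOT subgraph cluster
    lines.append("   subgraph cluster_%d {" % idx)
    lines.append("      style=filled;")
    lines.append("      node [style=filled];")
    for vc, psc in pairs:
        # Each vCenter Server is connected to the PSC it is registered with
        lines.append('      "%s" -- "%s"' % (vc, psc))
    lines.append('      label = "Site: %s";' % site)
    lines.append("   }")
for src, dst in replication_agreements:
    # Replication agreements become edges between PSC nodes across sites
    lines.append('   "%s" -- "%s"' % (src, dst))
lines.append("}")

print("\n".join(lines))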

Step 4 - Save the output from the script and then open a browser that has internet access to the following URL: http://www.webgraphviz.com. Paste in the output and then click on "Generate Graph", which will generate a visual diagram of your vSphere deployment. Hopefully it is pretty straightforward to understand; I have also colorized the nodes to represent the different functionality, such as blue for a vCenter Server and green for a Platform Services Controller.
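Alternatively, if you have Graphviz installed locally, you can render the saved output without pasting it into a website; for example, assuming you saved the output to a file named vsphere_topology.dot:

dot -Tpng vsphere_topology.dot -o vsphere_topology.png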

[Image: Generated vSphere deployment topology diagram]
In addition, if you have deployed an Embedded vCenter Server which is replicating with an External PSC (which is considered a deprecated topology and will not be supported in the future), you will notice the node is colored Orange instead as seen in the example below.

[Image: Topology diagram showing an Embedded vCenter Server node colored orange]
This is pretty cool if you ask me! 😀 Just imagine the possibilities if you could use such an interface to also manage operations across a given vSphere deployment when it comes to install, upgrade and expansion of your existing environment. What do you think, would this be useful?

I have done a limited amount of testing across Windows and the VCSA using a couple of deployment scenarios. It is very possible that I could have missed something and if you are running into issues, it would be good to provide some details about your topology to help me further troubleshoot. I have not done any type of testing using load balancers, so it is very likely that the diagram may not be accurate for these scenarios but I would love to hear from folks if you have tried running the script in such environments.

Categories // Automation, VCSA, vSphere 6.0 Tags // lstool.py, platform service controller, psc, vCenter Server, vcenter server appliance, vdcrepadmin, vmafd-cli, vSphere 6.0
