ESXi on the new Intel NUC Skull Canyon

05.21.2016 by William Lam // 62 Comments

Earlier this week I found out that the new Intel NUC "Skull Canyon" (NUC6i7KYK) has been released and has been shipping for a couple of weeks now. Although this platform is mainly targeted at gaming enthusiasts, there has also been a lot of anticipation from the VMware community about leveraging the NUC for a vSphere-based home lab. Similar to the 6th Gen Intel NUC, which is a great platform to run vSphere as well as VSAN, the new NUC includes several enhancements beyond the new aesthetics. In addition to the Core i7 CPU, it also includes dual M.2 slots (no SATA support), a Thunderbolt 3 controller and, most importantly, an Intel Iris Pro GPU. I will get to why this is important ...
[Image: intel_nuc_skull_canyon_1]
UPDATE (05/26/16) - With some further investigation from folks like Erik and Florian, it turns out the *only* device that needs to be disabled for ESXi to successfully boot and install is the Thunderbolt Controller. Once ESXi has been installed, you can re-enable the Thunderbolt Controller. Florian has also written a nice blog post here which has instructions as well as screenshots for those not familiar with the Intel NUC BIOS.

UPDATE (05/23/16) - Shortly after sharing this article internally, Jason Joy, a VMware employee, shared the great news that he has figured out how to get ESXi to properly boot and install. Jason found that by disabling unnecessary hardware devices like the Consumer IR, etc. in the BIOS, the ESXi installer was able to properly boot up. Jason was going to dig a bit further to see if he could identify the minimal list of devices that need to be disabled to boot ESXi. In the meantime, community blogger Erik Bussink has shared the list of settings he applied to his Skull Canyon to successfully boot and install the latest ESXi 6.0 Update 2, based on the feedback from Jason. Huge thanks to Jason for quickly identifying the workaround and sharing it with the VMware community, and thanks to Erik for publishing his list. For all those that were considering the new Intel NUC Skull Canyon for a vSphere-based home lab, you can now get your ordering on! 😀

Below is an excerpt from his blog post Intel NUC Skull Canyon (NUC6I7KYK) and ESXi 6.0 covering the settings he has disabled:

BIOS\Devices\USB

  • disabled - USB Legacy (Default: On)
  • disabled - Portable Device Charging Mode (Default: Charging Only)
  • no change - USB Ports (Ports 01-08 enabled)

BIOS\Devices\SATA

  • disabled - Chipset SATA (Default AHCI & SMART Enabled)
  • M.2 Slot 1 NVMe SSD: Samsung MZVPV256HDGL-00000
  • M.2 Slot 2 NVMe SSD: Samsung MZVPV512HDGL-00000
  • disabled - HDD Activity LED (Default: On)
  • disabled - M.2 PCIe SSD LED (Default: On)

BIOS\Devices\Video

  • IGD Minimum Memory - 64MB (Default)
  • IGD Aperture Size - 256 (Default)
  • IGD Primary Video Port - Auto (Default)

BIOS\Devices\Onboard Devices

  • disabled - Audio (Default: On)
  • LAN (Default)
  • disabled - Thunderbolt Controller (Default is Enabled)
  • disabled - WLAN (Default: On)
  • disabled - Bluetooth (Default: On)
  • Near Field Communication - Disabled (Default is Disabled)
  • SD Card - Read/Write (Default was Read)
  • Legacy Device Configuration
  • disabled - Enhanced Consumer IR (Default: On)
  • disabled - High Precision Event Timers (Default: On)
  • disabled - Num Lock (Default: On)

BIOS\PCI

  • M.2 Slot 1 - Enabled
  • M.2 Slot 2 - Enabled
  • M.2 Slot 1 NVMe SSD: Samsung MZVPV256HDGL-00000
  • M.2 Slot 2 NVMe SSD: Samsung MZVPV512HDGL-00000

Cooling

  • CPU Fan Header
  • Fan Control Mode : Cool (I toyed with Full fan, but it does make a lot of noise)

Performance\Processor

  • disabled - Real-Time Performance Tuning (Default: On)

Power

  • Select Max Performance Enabled (Default: Balanced Enabled)
  • Secondary Power Settings
  • disabled - Intel Ready Mode Technology (Default: On)
  • disabled - Power Sense (Default: On)
  • After Power Failure: Power On (Default was stay off)
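
Once ESXi is installed and the Thunderbolt Controller has been re-enabled (per the update above), a quick sanity check from the ESXi Shell can confirm that the controller and the two M.2 NVMe SSDs are visible to the host. This is just a minimal sketch which assumes SSH or the ESXi Shell is enabled; the Thunderbolt controller may be listed under its Alpine Ridge device name rather than "Thunderbolt", so you may need to scan the full output:

esxcli hardware pci list | grep -i thunderbolt

esxcli storage core adapter list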

Over the weekend, I received several emails from folks including Olli from nucblog.net (highly recommend a follow if you do not already), Florian from virten.net (another awesome blog which I follow & recommend) and a few others who have gotten their hands on the "Skull Canyon" system. They had all tried to install the latest release of ESXi 6.0 Update 2, as well as earlier versions, but all ran into a problem while booting up the ESXi installer.

The following error message was encountered:

Error loading /tools.t00
Compressed MD5: 39916ab4eb3b835daec309b235fcbc3b
Decompressed MD5: 000000000000000000000000000000
Fatal error: 10 (Out of resources)

[Image: intel_nuc_skull_canyon_2]
Raymond Huh was the first individual who had reached out to me regarding this issue, and shortly after, I started to get the same confirmations from others as well. Raymond's suspicion was that this was related to the amount of Memory-Mapped I/O resources being consumed by the Intel Iris Pro GPU, which does not leave enough resources for the ESXi installer to boot up. Even a quick Google search on this particular error message leads to several solutions here and here where the recommendation was to either disable or reduce the amount of memory for MMIO within the system BIOS.

Unfortunately, it does not look like the Intel NUC BIOS provides any option for disabling or modifying the MMIO settings; Raymond had already looked, including tweaking some of the video settings. He currently has a support case filed with Intel to see if there is another option. In the meantime, I had also reached out to some folks internally to see if they had any thoughts, and they too came to the same conclusion that without being able to modify or disable MMIO, there is not much more that can be done. There is a chance that I might be able to get access to a unit from another VMware employee and perhaps we can see if there is any workaround from our side, but there are no guarantees, especially as this is not an officially supported platform for ESXi. I want to thank Raymond, Olli & Florian for going through the early testing and sharing their findings thus far. I know many folks are anxiously waiting and I know they really appreciate it!

For now, if you are considering purchasing or have purchased the latest Intel NUC Skull Canyon with the intention of running ESXi, I would recommend holding off or not opening up the system. I will provide any new updates as they become available. I am still hopeful that we will find a solution for the VMware community, so fingers crossed.

Categories // ESXi, Home Lab, Not Supported Tags // ESXi, Intel NUC, Skull Canyon

Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

04.28.2016 by William Lam // 3 Comments

In this final article, we will now take a look at deploying a Docker Swarm Cluster running on top of Photon Controller.

[Image: test-driving-photon-controller-docker-swarm-cluster]
A minimal deployment for a Docker Swarm Cluster consists of 3 Virtual Machines: 1 Master, 1 etcd and 1 Slave. If you only have 16GB of memory on your ESXi host, then you will need to override the default VM Flavor used, which is outlined in Step 1. If you have more than 16GB of memory, then you can skip Step 1 and move directly to Step 2.

Deploying Docker Swarm Cluster

Step 1 - If you have not already created a new cluster-tiny-vm VM Flavor from the previous article that consists of 1vCPU/1GB memory, please run the following command:

./photon -n flavor create --name cluster-tiny-vm --kind "vm" --cost "vm 1 COUNT,vm.flavor.cluster-other-vm 1 COUNT,vm.cpu 1 COUNT,vm.memory 1 GB,vm.cost 1 COUNT"
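
If you want to confirm the VM Flavor was created before referencing it again in Step 6, you should be able to list the currently defined flavors (a quick sanity check, assuming your build of the photon CLI includes the flavor list subcommand):

./photon flavor list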

Step 2 - Download the Swarm VMDK from here

Step 3 - We will now upload our Swarm image and make a note of the ID that is generated after the upload completes by running the following command:

./photon -n image create photon-swarm-vm-disk1.vmdk -n photon-swarm-vm.vmdk -i EAGER
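
If you missed the ID in the output, you should be able to retrieve it afterwards by listing the uploaded images (again, assuming your build of the photon CLI includes the image list subcommand):

./photon image list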

Step 4 - Next, we will also need the ID of our Photon Controller deployment, as it will be required in the next step. You can retrieve it by running the following command:

./photon deployment list

Step 5 - We will now enable the Docker Swarm Cluster Orchestration on our Photon Controller instance by running the following command and specifying the ID of your deployment as well as the ID of the Swarm image from the previous two steps:

./photon -n deployment enable-cluster-type cc49d7f7-b6c4-43dd-b8f3-fe17e6648d0f -k SWARM -i 13ae437d-3fd1-48a3-9d14-287b9259cbad

[Image: test-driving-photon-controller-docker-swarm-0]
Step 6 - We are now ready to spin up our Docker Swarm Cluster by simply running the following command and substituting the network information from your environment. We are only going to deploy a single Swarm Slave (if you have additional resources you can spin up more, or you can always re-size the cluster after it has been deployed). Do not forget to override the default VM Flavor used by specifying the -v option and providing the name of the VM Flavor which we created earlier, called cluster-tiny-vm. You can just hit enter when prompted for the additional etcd IP Addresses.

./photon cluster create -n swarm-cluster -k SWARM --dns 192.168.1.1 --gateway 192.168.1.1 --netmask 255.255.255.0 --etcd1 192.168.1.45 -s 1 -v cluster-tiny-vm

[Image: test-driving-photon-controller-docker-swarm-1]
Step 7 - The process can take a few minutes and you should see a message like the one shown above which prompts you to run the cluster show command to get more details about the state of the cluster.

./photon cluster show 276b6934-6eb5-42fd-9fb1-031e311b3c45

[Image: test-driving-photon-controller-docker-swarm-2]
At this point, you have successfully deployed a Docker Swarm Cluster running on Photon Controller. What you will be looking for in this screen is the IP Address of the Master VM which we will need in the next section if you plan to explore Docker Swarm a bit more.

Exploring Docker Swarm

To interact with your newly deployed Docker Swarm Cluster, you will need to ensure that you have a Docker client that matches the Docker version running in the Docker Swarm Cluster, which is currently 1.20. The easiest way is to deploy PhotonOS 1.0 TP2 using either the ISO or OVA.

To verify that you have the correct Docker client version, you can just run the following command:

docker version

[Image: test-driving-photon-controller-docker-swarm-5]
Once you have verified that your Docker client matches the version, we will go ahead and set the DOCKER_HOST variable to point to the IP Address of our Master VM, which you can find above in Step 7. When you have identified the IP Address, go ahead and run the following command to set the variable:

export DOCKER_HOST=tcp://192.168.1.105:8333
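
As a quick check that your Docker client is now talking to the Swarm Master rather than a local Docker daemon, you can run the standard docker info command, which should report the Swarm Slave node(s) that are part of the cluster:

docker info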

We can run the following command to list the Docker Containers running for our Docker Swarm Cluster:

docker ps -a

[Image: test-driving-photon-controller-docker-swarm-3]
Let's go ahead and download a Docker Container which we can then run on our Docker Swarm Cluster. We will download the VMware PhotonOS Docker Container by running the following command:

docker pull vmware/photon

Once the Docker Container has been downloaded, we can then run it by specifying the following command:

docker run --rm -it vmware/photon

[Image: test-driving-photon-controller-docker-swarm-6]
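
If you would like to see the Swarm scheduler in action, you can also start a container in detached mode and then check where it was placed. This is purely an illustrative example re-using the PhotonOS image we pulled earlier (the container name photon-demo is arbitrary); with classic Docker Swarm, the container name shown by docker ps is prefixed with the name of the Slave it landed on:

docker run -dit --name photon-demo vmware/photon

docker ps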
For those familiar with Docker, you can see how easy it is to interact with the Docker interface that you are already familiar with. Underneath the hood, Photon Controller is automatically provisioning the necessary infrastructure needed to run your applications. This concludes our series on test driving VMware's Photon Controller. If you have made it this far, I hope you have enjoyed the series, and if you have any feedback or feature enhancements for Photon Controller, be sure to file an issue on the Photon Controller Github page.

  • Test driving VMware Photon Controller Part 1: Installation
  • Test driving VMware Photon Controller Part 2: Deploying first VM
  • Test driving VMware Photon Controller Part 3a: Deploying Kubernetes
  • Test driving VMware Photon Controller Part 3b: Deploying Mesos
  • Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

Categories // Automation, Cloud Native, ESXi, vSphere 6.0 Tags // cloud native apps, Docker, ESXi, Photon Controller, swarm

Test driving VMware Photon Controller Part 3b: Deploying Mesos

04.26.2016 by William Lam // 4 Comments

In the previous article, we demonstrated the first Cluster Orchestration solution supported by Photon Controller by deploying a fully functional Kubernetes Cluster using Photon Controller. In this article, we will now look at deploying a Mesos Cluster using Photon Controller.

[Image: test-driving-photon-controller-mesos-cluster]
The minimal deployment for a Mesos Cluster in Photon Controller consists of 6 Virtual Machines: 3 Masters, 1 Zookeeper, 1 Marathon & 1 Slave. If you only have 16GB of memory on your ESXi host, then you will need to override the default VM Flavor when deploying a Mesos Cluster. If you have more than 16GB of available memory, then you can skip Step 1 and move to Step 2 directly.

Deploying Mesos Cluster

Step 1 - If you have not already created a new cluster-tiny-vm VM Flavor from the previous article that consists of 1vCPU/1GB memory, please run the following command:

./photon -n flavor create --name cluster-tiny-vm --kind "vm" --cost "vm 1 COUNT,vm.flavor.cluster-other-vm 1 COUNT,vm.cpu 1 COUNT,vm.memory 1 GB,vm.cost 1 COUNT"

Step 2 - Download the Mesos VMDK from here

Step 3 - We will now upload our Mesos image and make a note of the ID that is generated after the upload completes by running the following command:

./photon -n image create photon-mesos-vm-disk1.vmdk -n photon-mesos-vm.vmdk -i EAGER

Step 4 - Next, we will also need the ID of our Photon Controller deployment, as it will be required in the next step. You can retrieve it by running the following command:

./photon deployment list

Step 5 - We will now enable the Mesos Cluster Orchestration on our Photon Controller instance by running the following command and specifying the ID of your deployment as well as the ID of the Mesos image from the previous two steps:

./photon -n deployment enable-cluster-type 569c3963-2519-4893-969c-aed768d12623 -k MESOS -i 51c331ea-d313-499c-9d8f-f97532dd6954

[Image: test-driving-photon-controller-meso-1]
Step 6 - We are now ready to spin up our Mesos Cluster by simply running the following command and substituting the network information from your environment. We are only going to deploy a single Mesos Slave (if you have additional resources you can spin up more, or you can always re-size the cluster after it has been deployed). Do not forget to override the default VM Flavor used by specifying the -v option and providing the name of the VM Flavor which we created earlier, called cluster-tiny-vm. You can just hit enter when prompted for the two zookeeper IP Addresses.

./photon cluster create -n mesos-cluster -k MESOS --dns 192.168.1.1 --gateway 192.168.1.1 --netmask 255.255.255.0 --zookeeper1 192.168.1.45 -s 1 -v cluster-tiny-vm

[Image: test-driving-photon-controller-meso-2]
Step 7 - The process can take a few minutes and you should see a message like the one shown above which prompts you to run the cluster show command to get more details about the state of the cluster.

./photon cluster show bf962c3a-28a2-435d-bd96-0313ca254667

[Image: test-driving-photon-controller-meso-3]
At this point, you have now successfully deployed a Mesos cluster running on Photon Controller. What you will be looking for in this screen is the IP Address of the Marathon VM which is the management interface to Mesos. We will need this IP Address in the next section if you plan to explore Mesos a bit more.

Exploring Mesos

Using the IP Address obtained from the previous step, you can now open a web browser and enter the following: http://[MARATHON-IP]:8080, which should launch the Marathon UI as shown in the screenshot below. If you wish to deploy a simple application using Marathon, you can follow the workflow here. Since we deployed Mesos using a tiny VM Flavor, we will not be able to exercise the final step of deploying an application running on Mesos. If you have more resources, I definitely recommend you give the workflow a try.

[Image: test-driving-photon-controller-meso-4]
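
If you prefer to drive Marathon from the command line instead of the UI, the same simple-app workflow can also be exercised against Marathon's REST API. The example below is only a sketch (the app id, command and resource sizing are made-up values), and as noted above, a cluster deployed with the tiny VM Flavor will likely not have enough resources to actually run it:

curl -X POST http://[MARATHON-IP]:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{"id": "basic-app", "cmd": "while true; do sleep 10; done", "cpus": 0.1, "mem": 32, "instances": 1}'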
In our last and final article of the series, we will be covering the last Cluster Orchestration supported on Photon Controller which is Docker Swarm.

  • Test driving VMware Photon Controller Part 1: Installation
  • Test driving VMware Photon Controller Part 2: Deploying first VM
  • Test driving VMware Photon Controller Part 3a: Deploying Kubernetes
  • Test driving VMware Photon Controller Part 3b: Deploying Mesos
  • Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

Categories // Automation, Cloud Native, ESXi, vSphere 6.0 Tags // cloud native apps, ESXi, Mesos, Photon Controller

