ESXi 6.0 on Apple Xserve 3,1

11.17.2015 by William Lam // 76 Comments

A couple of months ago, I shared a guest blog post from one of my readers, John Clendenen, who was able to get ESXi 6.0 running on an Apple Xserve 2,1. At the end of that article, it was hinted that John was also looking into getting ESXi 6.0 running on an Apple Xserve 3,1, and you can find the details below after several months of investigation.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

*** This is a guest blog post from John Clendenen ***

First, an update on my Xserve 2,1s. I had them running for over 100 days without any issue! However, now that I have the 3,1 working reliably, it is time that I part ways with my Xserve 2,1s. I currently have them up on eBay. Here is the link: http://www.ebay.com/itm/231752771080?ssPageName=STRK:MESELX:IT&_trksid=p3984.m1555.l2649

Anyway, onto the Xserve 3,1.

--

I came across an Xserve 3,1 on eBay about a year ago. It was badly photographed, and the seller didn't really know what he/she had. It wasn't getting much attention, so I thought I might get it cheap. I ended up paying $500 for it, which I felt OK about, but not great.

When it arrived, it had no processors, heatsinks or airflow duct. I immediately messaged the seller, and was able to get $350 refunded to me. I found the missing parts for under $100 over the next few weeks, and developed an intimate understanding of the Xserve 3,1 hardware.

At this point, I had no familiarity with vSphere at all. I was running OS X Server and virtualizing a few services in Fusion. It was only through researching the Xserve 3,1 to find the missing hardware that I discovered VMware had once supported it as an ESXi 5 host. This made me wonder if it might still be possible to run ESXi on it, despite it no longer being supported.

I have found, after a considerable time investment, that the Xserve 3,1 can run ESXi 6, just as I found the Xserve 2,1 can run ESXi 6. However, unlike the Xserve 2,1, the Xserve 3,1 took months of troubleshooting before I had it running as a reliable ESXi host.

--

As it turns out, despite how much time it took me to get it working, there are only two serious issues with the Xserve 3,1 running ESXi 6. The first is somewhat specific to my configuration, but the second will be relevant to all configurations.

The first issue concerns booting into ESXi on a headless Xserve 3,1. The issue is limited to configurations where ESXi is booting from a drive installed in the optical bay (my original configuration). I have since changed my configuration and swapped the ESXi boot drive from the optical bay to the first hard drive bay. I have had no issue since I made this change.

For my configuration, I used an OWC bracket to replace the optical drive with an SSD. I installed ESXi onto it without issue. During installation, it was connected to a monitor, keyboard, etc. I ran some VMs on it to make sure it worked, and there were zero issues. I was relieved! So, I put it in the rack, wired it up and turned it on. Nothing. The Xserve lit up, and it was clear that it got through POST, but ESXi was not booting.

Long story short, when no monitor is plugged into the Xserve 3,1, it will not automatically boot into ESXi if the boot drive is installed in the optical bay. The Xserve boot options can even be programmed through the front panel, but no configuration will make it reliably boot from the optical bay when a hard drive is installed. It is truly baffling, and if anyone has some insight here, or if it is a problem specific to my particular Xserve, I would love to know.

The solution, in my case, was to plug a keyboard into the Xserve and hold down the Option key for a few minutes while it booted (bringing up the boot options). Once all LED activity had normalized and the fans had settled down, I released the Option key and pressed the arrow keys. I think you only need to press the up arrow, but I always pressed all of them to be sure. Then I pressed Enter, and ESXi would boot. I have since simply swapped the boot drive into the first drive bay. Ideally, I would have kept all the hot-swap bays free for other drives, but keeping the boot drive in the optical bay was too much trouble.

The second issue concerns the onboard NIC. Once I had ESXi up and running, everything worked fine for anywhere between a few hours and two days, after which the Xserve 3,1 host would disappear from the VCSA and become completely unresponsive (no ping, SSH, etc.). The length of time before failure made this issue especially difficult and time-consuming to diagnose.

After nearly a month of frustration and disappointment, I determined that ESXi actually continued to run, but all network connectivity had ceased. The only solution I have found is to install a third-party NIC and completely avoid using the onboard NIC. Even in standby, the onboard NIC can cause problems, but when it is completely unused, both for management and VM traffic, it no longer causes any problems.

This has been superficially improved by the latest update, but use of the onboard NIC should still be completely avoided: the ESXi host will remain accessible via the VCSA, but network management will become grayed out after a day or so. I suspect this is a driver issue in ESXi, but I really do not know.

--

Beyond these two issues, I have had no problems. Since the latest update, even the performance and hardware status tabs are functional. RDM is not available, but it is not recommended in the first place anyway. The Apple RAID backplane will not be recognized, but this was the case even in ESXi 5 when the Xserve was officially supported by VMware.

I hope that my efforts here will save others a lot of time and frustration. I think that for a lot of IT infrastructures, ESXi on an Xserve might make sense. It can run non-critical OS X services (which are hopefully the only kind of services you’re trying to run in OS X).

--

Summary

  •      Completely avoid using the onboard NIC. Silicom NICs are recommended.
  •      Find a standard backplane. The RAID backplane is useless in ESXi.
  •      A 2.5” drive can be installed in the optical bay, but booting from it is problematic.

xserve31-pic-1
The Xserve 3,1 with the Silicom NIC installed

xserve31-pic-2
The 6 ports are a tight squeeze, but they just fit. My other two ESXi hosts are Supermicro nodes, also with Silicom NICs, and I had to use a Dremel to grind off part of the chassis to make all the ports accessible. The Xserve, however, works out of the box.

xserve31-pic-3
The OWC SSD “Data Doubler” bracket in the optical bay. Booting from here is a pain, but putting an additional SSD here works great for host caching.

xserve31-pic-4
The standard backplane is difficult to find, but it is a great asset for vSphere. It is easy to distinguish from the RAID backplane, which would have a heatsink here.

xserve31-pic-5
There are no complications during installation/initial configuration.

xserve31-pic-6
Apologies for not having a longer uptime. I updated to ESXi 6.0 Update 1a 12 days ago, but I've had the Xserve 3,1 up for months. If something changes, I will post an update here, but I am confident that the system is stable.

xserve31-pic-7
This is the final stage of my home lab. The Xserve 3,1 is one of three ESXi hosts. These are accompanied by a primary domain controller (Samba4), a media server (Emby) and a home-grown NAS (CentOS 7). Networking in the back is Ubiquiti. I use this lab to prototype production environments for clients, and of course to run my home media services 🙂

Categories // Apple, ESXi, vSphere 6.0 Tags // apple, ESXi 6.0, osx, xserve

UEFI PXE boot is possible in ESXi 6.0

10.09.2015 by William Lam // 21 Comments

A couple of days ago I received an interesting question from my colleague Paudie O'Riordan, who works over in our Storage and Availability Business Unit at VMware. He was helping a customer who was interested in PXE booting/installing ESXi using UEFI, which is short for Unified Extensible Firmware Interface. Historically, we only had support for PXE booting/installing ESXi using the BIOS firmware. You could also boot an ESXi ISO using UEFI, but we did not have support for UEFI when it came to booting/installing ESXi over the network using PXE and other variants such as iPXE/gPXE.

For those of you who may not know, UEFI is meant to eventually replace the legacy BIOS firmware. There are many benefits with using UEFI over BIOS, a recent article that does a good job of explaining the differences can be found here. In doing some research and pinging a few of our ESXi experts internally, I found that UEFI PXE boot support is actually possible with ESXi 6.0. Not only is it possible to PXE boot/install ESXi 6.x using UEFI, but the changes in the EFI boot image are also backwards compatible, which means you could potentially PXE boot/install an older release of ESXi.

Note: Auto Deploy still requires legacy BIOS firmware; UEFI is not currently supported. This is something we will be addressing in the future, so stay tuned.

Not having worked with ESXi and UEFI before, I thought this would be a great opportunity for me to give this a try in my homelab, which would also allow me to document the process in case others were interested. For my PXE server, I am using CentOS 6.7 Minimal (64-bit), which runs both the DHCP and TFTP services, but you can use any distro that you are comfortable with.
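
To give a flavor of what is involved, below is a minimal dhcpd.conf sketch for serving both UEFI and legacy BIOS clients from the same subnet. The subnet, addresses and the pxelinux filename are illustrative assumptions for a lab; mboot.efi here would be a copy of /EFI/BOOT/BOOTX64.EFI from the ESXi 6.0 ISO placed in the TFTP root. The full walkthrough follows in the rest of the post.

option arch code 93 = unsigned integer 16;   # RFC 4578 client architecture type

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.199;
  option routers 192.168.1.1;
  next-server 192.168.1.10;                  # the CentOS TFTP server
  if option arch = 00:07 or option arch = 00:09 {
    filename "mboot.efi";                    # UEFI clients get the ESXi EFI boot image
  } else {
    filename "pxelinux.0";                   # legacy BIOS clients PXE boot as before
  }
}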

[Read more...]

Categories // Automation, ESXi, vSphere 6.0 Tags // bios, boot.cfg, bootx64.efi, dhcp, ESXi 6.0, kickstart, mboot.efi, pxe boot, tftp, UEFI, vSphere 6.0

Override default VSAN Maintenance (decommission) Mode in VSAN 6.1

09.14.2015 by William Lam // Leave a Comment

Earlier this year, there was an interesting use case that was brought up from a customer regarding the use of vSphere Update Manager (VUM) and VSAN enabled ESXi hosts. Everything was working from a functional standpoint, but the customer wanted a way to control the default VSAN decommission mode, which specifies how the data should be moved, if at all, when a host is placed into maintenance mode. There are three supported options: Ensure Accessibility (the default), Evacuate All Data and No Action. Depending on the customer and their use case, there may be valid reasons to use one or the other. For example, if I am shutting down my entire VSAN cluster for a hardware upgrade, I probably do not want any of my data to be migrated, and the No Action setting would be acceptable. When upgrading or patching an ESXi host, some customers have expressed that they would prefer to leverage the Evacuate All Data setting, which is perfectly fine; of course, the maintenance mode operation would take longer as all the data must be migrated off the host first.

Prior to VSAN 6.1 (included in the vSphere 6.0 Update 1 release), it was not possible to override the default VSAN maintenance mode (decommission mode) option, which defaults to Ensure Accessibility. This was a problem because, if you wanted to use a different option, manual intervention was required when using VUM. The workaround for the customer was to place the ESXi host into maintenance mode, either manually or using the vSphere API, and specify the decommission mode type before VUM took over and updated the host. Not an ideal solution, but it would work if you needed to override the default.
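
For reference, here is a minimal pyvmomi sketch of that maintenance mode workaround. The vCenter address, credentials and host name are illustrative assumptions; the MaintenanceSpec and DecommissionMode types are part of the vSphere 6.0 API (note the API enum uses ensureObjectAccessibility rather than ensureAccessibility):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Lab only: skip certificate verification
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
content = si.RetrieveContent()

# Locate the ESXi host by name
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi-01.lab.local')

# Enter maintenance mode with an explicit VSAN decommission mode:
# 'noAction', 'ensureObjectAccessibility' or 'evacuateAllData'
spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction='evacuateAllData'))
WaitForTask(host.EnterMaintenanceMode_Task(timeout=0, maintenanceSpec=spec))

Disconnect(si)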

I thought it would be a nice feature enhancement to be able to override the default VSAN maintenance mode option, which could vary from customer to customer depending on their use case. I got in touch with one of the VSAN engineers to discuss the use case in more detail, and he agreed that it would be useful to expose this type of capability. In VSAN 6.1, there is now a new ESXi Advanced Setting called DefaultHostDecommissionMode which allows you to specify the default VSAN maintenance mode behavior.

vsan-6.1-decomission-mode-0
Below is a table of the three available options (ensureAccessibility is the default) that can be configured:

VSAN Decommission Mode Value   Description
ensureAccessibility            VSAN data reconfiguration should be performed to ensure storage object accessibility
evacuateAllData                VSAN data evacuation should be performed such that all storage object data is removed from the host
noAction                       No special action should take place regarding VSAN data

This ESXi Advanced Setting can also be retrieved and configured using ESXCLI as well as the vSphere API.

To retrieve the current VSAN maintenance mode option using ESXCLI, run the following command:

esxcli system settings advanced list -o /VSAN/DefaultHostDecommissionMode

To configure the default VSAN maintenance mode option using ESXCLI, run the following command:

esxcli system settings advanced set -o /VSAN/DefaultHostDecommissionMode -s [DECOMMISSION_MODE]
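
The same setting can be read and updated through the vSphere API. Here is a minimal pyvmomi sketch (connection and host lookup omitted; see the maintenance mode sketch above), assuming the advanced option key is VSAN.DefaultHostDecommissionMode, the dotted form of the ESXCLI path:

from pyVmomi import vim

def get_decommission_mode(host):
    # Returns the current value, e.g. 'ensureAccessibility'
    return host.configManager.advancedOption.QueryOptions(
        'VSAN.DefaultHostDecommissionMode')[0].value

def set_decommission_mode(host, mode):
    # mode: 'ensureAccessibility', 'evacuateAllData' or 'noAction'
    host.configManager.advancedOption.UpdateOptions(changedValue=[
        vim.option.OptionValue(key='VSAN.DefaultHostDecommissionMode',
                               value=mode)])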

Categories // ESXCLI, ESXi, VSAN, vSphere 6.0 Tags // DefaultHostDecommissionMode, ESXi 6.0, maintenance mode, Virtual SAN, VSAN, VSAN 6.1, vSphere 6.0 Update 1
