Community stories of VMware & Apple OS X in Production: Part 1

07.30.2014 by William Lam // 4 Comments

I caught this tweet from Yoann Gini a couple of weeks back, which I thought was quite interesting:

For people interested by OS X and ESXi, I run vSphere setup over mac mini in 24/7 production setup. It works. #psumac

— Yoann Gini (@ygini) July 10, 2014

After sharing this tweet internally on our Socialcast group related to all things Apple, I came to learn that Yoann was not the only one running Production workloads on Apple Mac Minis. It turns out that we at VMware also use Mac Minis for a very special Production environment. This got me thinking about how other customers are leveraging VMware and Apple OS X in their environments. Would it not be cool to hear how others use VMware and Apple technologies together in Production?

This was the primary motivation behind this blog series: the idea is to interview folks from the Virtualization community who are willing to share their experiences and educate the community on how they leverage VMware and Apple OS X in their Production environments. To help kick off the series, I would like to start by sharing how VMware leverages vSphere and Apple OS X in our own Production environment. I got in touch with the person responsible for managing this environment, and below is our chat transcript.

Disclaimer: The Apple Mac Mini platform is not officially supported by VMware

Company: VMware
Product: VMware vSphere
Hardware: Apple Mac Mini

[William] - Hi Michael, thanks for taking some time out of your day to share some information about how VMware uses Apple Technologies. For those that do not know you, can you introduce yourself and your role within VMware?

[Michael] - Certainly. I'm Mike Lemoine, I've been at VMware for just over three years, and my title is Senior Tools and Infrastructure Engineer. I'm part of the Build/SCM team here, responsible for the care and feeding of the infrastructure that is used for VMware product builds.

[William] - Build/SCM, that sounds pretty cool! So this is the Infrastructure that Engineering uses to compile and build out the various VMware (Apple) installers, executables and binaries that customers eventually consume?

[Michael] - Yes, it is. While engineers can and do perform local builds on their desktops or in shared interactive environments, the only 'real builds' are the ones that go through this infrastructure. If it doesn't go through our machines, it doesn't get released to the world.

[William] - Sounds like a very critical piece of Infrastructure at VMware. So Michael, I heard from someone that you manage a very special build infrastructure at VMware and it involves some vSphere and Apple hardware? Do you mind telling us a little bit about this environment and what it is used for?

[Michael] - I do, indeed! We have a fleet of Apple Mac Minis serving as the basis of our OS X build farm. While they're not on the HCL, they're really our best option for providing an environment for products intended to run on OS X or iOS. While the Mac Pro is supported, it has a lot of unnecessary equipment which makes it prohibitively expensive (well, wasteful) to use at scale. The Mini, on the other hand, has most of what we need. It's not perfect, but it's the best match we can accomplish without violating the Apple EULA.

[William] - Wow, that is an awesome use case for the Apple Mac Minis! Even though the Mac Minis are not on the HCL, they are being utilized for Production workloads, building out products like VMware Fusion, the Horizon View Client and iOS applications. Can you tell us a little bit more about the environment: the number of hosts, the version of ESXi and the amount of capacity it can support?

Here is a picture of the Mac Mini Cluster in the VMware Datacenter. The rack used to hold the Mac Minis is the MMR-2G-5URS, along with the MMR-2G-2URS brace stabilizer.

[Image: vmware-apple-mac-mini-build-cluster]
[Michael] - We've got roughly 50 Mac Minis in production (a mix of 5,3 and 6,2 models, depending on when they were ordered), running ESXi 5.5. Each is stuffed full with the maximum supported config; in the case of the 6,2 Minis, that's an i7 at 2.6GHz and 16GB of memory. That's one of the lamentable issues with the Mini: it maxes out at 16GB. We run two VMs on top of each, with each VM taking its own spindle and 8GB of memory. Our 6,2 Minis are presently running 10.8.4 VMs, while the 5,3 Minis are running VMs with older versions of OS X for build reproducibility.

[William] - That is a lot of Mac Mini power! Are these ESXi hosts currently being managed by vCenter Server or are they managed individually?

[Michael] - Both, actually. Those ESXi hosts are all managed by vCenter, and our build system uses an in-house inventory and lease system to choose among available hosts. One of the reasons we can confidently run these Minis in production is that our automation is written for failure. We assume systems will be wedged; we assume every machine is a landmine. In reality, the Mac Minis very rarely give us any trouble, but knowing that we could lose nearly all of them and still produce builds gives us a sense of safety.

[William] - That is a very cool solution. It sounds like the Mac Minis have been rock solid for our Production usage, but as a backup we still have intelligence built into our software so that we can safely rely on the "consumer" hardware. Sounds like a mini SDDC to me! Speaking of hardware failures, which components of the Mac Mini have you seen fail the most, and how do you go about getting the parts replaced?

[Michael] - Certainly. We're all about belts and suspenders; nobody wants that 2am call about a production outage. The main issue we've had with the Minis has been the death of the drives in them. These machines are very rarely idle, which is a usage pattern that the drives in them simply aren't prepared for. Amusingly, our support for the Minis is the same as anyone else's: one of the guys in our datacenter will pull the machine and take it to the Genius Bar, where they will tell us what we already know and replace the drive. The system is then re-racked, the new drive set up as a datastore, and we deploy a new VM to that drive. Our Minis all run ESXi off of USB sticks, so losing the drive isn't much of a hurdle.

[William] - Ah, that's cool that we've built that into the design and can easily tolerate host failures, so a rebuild is not really a big deal. So what about the VMs that were running: is there any data that needs to be backed up and restored, or is it a stateless configuration?

[Michael] - Nothing that needs to be backed up or restored. The systems are all configured via Puppet, so all that's necessary is the base OS installation, for which we have created a template. The only manual step involved is running Puppet the first time. We then perform a test build to be sure everything is in order and put the system back into production. The total time a human spends on this once the system is re-racked is probably under ten minutes.

[William] - Are there any plans in the future to upgrade the Mac Mini Cluster to the new Mac Pros, or do you see the Mac Minis being more than sufficient?

[Michael] - We've looked at the Mac Pro, but the extra cost of the dual GPU makes them expensive. If there were a Mac Pro option with cheap onboard video, we'd probably tolerate the inferior form factor and see what we could do about racking them. We still wouldn't have a BMC, but we'd gain a gigabit network port.

[William] - On the topic of support, does the Mac Mini not being officially supported by VMware have any impact on you?

[Michael] - The Mac Mini not being supported certainly has an impact on us. Our internal experience of support is already inferior to the customer experience. Having to fight the battle of "No, we don't support it, but we unofficially do" adds a cost to every interaction.

[William] - My understanding is that the primary reason for not supporting the Mac Minis today is their lack of support for ECC memory. I know that our customers expect an Enterprise-ready solution when we certify a platform, but in your opinion is this a requirement we could potentially relax, and do you feel customers would agree?

[Michael] - I think that if we communicate the concerns to our customers and allow them to make their own decision on whether to take the risk of running consumer-grade hardware, it would be better for everyone. Customers would feel more secure with the off-label use; we already do the work in-house to make the Mini a usable platform, and the cost seems very low. I think the only challenge would be clearly enumerating the dangers.

[William] - Michael, I really appreciate you taking the time to share with our customers how VMware leverages the Apple Mac Minis for Production. Do you have any tips or tricks for customers looking at running vSphere on the Mac Mini for more than home labs? Are there any go-to resources you would point customers to if they are looking to get started with running ESXi on a Mac Mini?

[Michael] - My advice would be to follow our lead. Realize you're using consumer-grade hardware, and plan for failure. The low cost allows for easy redundancy; take advantage of that. In what other situation can you have an entire spare server on hand for $1200? While the information available on the internet is great, and I spent more than a little time on virtuallyghetto.com reading about Mac Mini attempts, I also had the ability to annoy and harass some of our talent inside VMware to get answers. To be honest, the amount of information you need in order to get ESXi running on a Mac Mini would probably fit on an index card. The challenge is hunting down which gotchas exist for your combination of ESXi version and Mac Mini revision. A KB article covering the few pitfalls in the process would be wonderful.

Hopefully you enjoyed this first post in the series; stay tuned for a couple of other interviews that I am working on. In the meantime, if you are interested in sharing your story of how you use VMware and Mac OS X in Production, you can reach out to me here.

  • Community stories of VMware & Apple OS X in Production: Part 1
  • Community stories of VMware & Apple OS X in Production: Part 2
  • Community stories of VMware & Apple OS X in Production: Part 3
  • Community stories of VMware & Apple OS X in Production: Part 4
  • Community stories of VMware & Apple OS X in Production: Part 5
  • Community stories of VMware & Apple OS X in Production: Part 6
  • Community stories of VMware & Apple OS X in Production: Part 7
  • Community stories of VMware & Apple OS X in Production: Part 8
  • Community stories of VMware & Apple OS X in Production: Part 9
  • Community stories of VMware & Apple OS X in Production: Part 10

 

Categories // Apple, ESXi, vSphere Tags // apple, mac mini, osx, vmware, vSphere

How to quickly deploy CoreOS on ESXi?

07.25.2014 by William Lam // 1 Comment

[Image: deploy-coreos-on-esxi]

There has been a tremendous amount of buzz lately around Docker, a platform that allows developers to easily build, deploy and manage Linux Containers. Docker can run on a variety of Linux distributions; one that has been quite popular lately is a new Linux distribution called CoreOS.

CoreOS is actually a fork of Google's ChromeOS and was designed to run next-generation workloads similar to those at Google and Facebook. A major benefit of CoreOS is the minimal footprint of the base operating system, which allows for maximum resource utilization by the Container workloads.

Having heard so much about Docker and CoreOS, I figured this would be a great opportunity to explore and learn about a new technology, which I always enjoy when I get the time. I know Duncan Epping has written an article on how to run CoreOS on VMware Fusion, but since I primarily work with vSphere, I wanted to run CoreOS on ESXi. The first place I went was the CoreOS documentation, which has a section for VMware. After going through the instructions, I found the process to be quite manual and potentially requiring additional tools, as a simple OVF/OVA for CoreOS did not exist.

I figured I could wrap the process in a very simple shell script that only requires a couple of input parameters from the user based on their environment and then auto-magically handles the deployment. I created a shell script that runs in the ESXi Shell, called deploy_coreos_on_esxi.sh.

Note: The script assumes the ESXi host can connect directly to the CoreOS website to download the zip.

There are three variables that you will need to edit prior to running the script:

  • DATASTORE_PATH - The full path to the Datastore to deploy CoreOS onto (e.g. /vmfs/volumes/datastore)
  • VM_NETWORK - The name of the vSphere Network to connect the CoreOS VM to
  • VM_NAME - The name of the CoreOS VM
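
To make the flow concrete, here is a minimal sketch of what such a script might look like when run from the ESXi Shell. This is a hypothetical reconstruction, not the actual deploy_coreos_on_esxi.sh; in particular, the download URL and the file names inside the CoreOS zip are assumptions that may have changed since this was written:

#!/bin/sh
# Sketch: deploy CoreOS onto ESXi from the ESXi Shell (hypothetical reconstruction)

DATASTORE_PATH=/vmfs/volumes/datastore1   # full path to the target Datastore
VM_NETWORK="VM Network"                   # vSphere Network for the CoreOS VM
VM_NAME=CoreOS                            # name of the CoreOS VM

# Assumed location of the CoreOS VMware "insecure" image zip
COREOS_URL=http://stable.release.core-os.net/amd64-usr/current/coreos_production_vmware_insecure.zip

# Download and extract the image into the VM's directory on the Datastore
mkdir -p "${DATASTORE_PATH}/${VM_NAME}"
cd "${DATASTORE_PATH}/${VM_NAME}"
wget "${COREOS_URL}" -O coreos.zip
unzip -o coreos.zip

# Clone the sparse disk into a VMFS-native thin-provisioned VMDK
vmkfstools -i coreos_production_vmware_insecure_image.vmdk -d thin "${VM_NAME}.vmdk"

# Point the VMX at the cloned disk and append the desired portgroup
# (assumes the stock VMX does not already specify one)
sed -e "s/coreos_production_vmware_insecure_image.vmdk/${VM_NAME}.vmdk/" \
    coreos_production_vmware_insecure.vmx > "${VM_NAME}.vmx"
echo "ethernet0.networkName = \"${VM_NETWORK}\"" >> "${VM_NAME}.vmx"

# Register the VM with the host and power it on
VM_ID=$(vim-cmd solo/registervm "${DATASTORE_PATH}/${VM_NAME}/${VM_NAME}.vmx")
vim-cmd vmsvc/power.on "${VM_ID}"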

Once you have finished editing the script, you just need to scp it to your ESXi host and run it using the following command:

./deploy_coreos_on_esxi.sh

Here is screenshot of running the script:

[Image: deploy-coreos-on-esxi-0]
Once the script has completed, you should see a new CoreOS VM on your ESXi host, and if you have DHCP, you should also see an associated IP Address in the VM Console:

[Image: deploy-coreos-on-esxi-1]
Once the CoreOS VM has booted, you can use the SSH key that was included in the zip file; by default it is also extracted into the CoreOS VM directory. You can SSH into the VM by running the following command:

ssh -i insecure_ssh_key core@IP-ADDRESS-OF-COREOS-VM

Once logged in, we can run "docker images" to see the list of Container images. As you can see, there is only one, and we can connect to a Container by running the "toolbox" command, which will pull down the latest image and then connect to that Container, as seen in the screenshot below.

[Image: deploy-coreos-on-esxi-3]
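
For reference, the sequence once you are logged into the CoreOS VM looks roughly like this (a sketch; the exact image the toolbox pulls can vary by CoreOS release):

docker images   # list locally available images; a fresh install shows just one
toolbox         # pull the stock debugging Container image and open a shell inside it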
I was hoping that I could also get VMware Tools installed within the CoreOS VM, but I was not able to get SSH working within the Toolbox as stated in the Install Debugging Tools documentation. I may need to tinker around a bit more with CoreOS.

If you are interested in other methods of deploying CoreOS, be sure to check out CoreOS's documentation.

Additional Resources:

  • http://www.vreference.com/2014/06/09/deploy-coreos-into-your-esxi-lab/ - This was a great primer on CoreOS by Forbes Guthrie that I really enjoyed reading; highly recommended
  • http://gosddc.com/articles/dock-your-container-on-vmware-with-vagrant-and-docker/ - If you use Vagrant and would like to play with Docker, be sure to check out Fabio Rapposelli's Vagrant vCloud Provider

Categories // Automation, Docker, ESXi, vSphere Tags // container, coreos, Docker, ESXi, vSphere

Does VSAN work with Free ESXi?

07.22.2014 by William Lam // 8 Comments

I recently had to re-provision one of my VSAN lab environments using my recently shared ESXi 5.5 VSAN Kickstart. I usually specify a license key within the Kickstart so that I do not have to license the ESXi host later. This got me wondering whether VSAN would in fact work with Free ESXi, aka vSphere Hypervisor. Being a curious person, I of course had to test this in the lab 🙂

Needless to say, if you want to properly evaluate or use VSAN in production, you should go through the supported route of using vCenter Server, which provides a simple and intuitive management interface for VSAN. More importantly, it gives you the ability to create individual VM Storage Policies that can be applied on a per-VMDK basis, based on the SLAs for a given application or Virtual Machine.

Disclaimer: This is not officially supported by VMware and running ESXi without a VSAN license is against VMware's EULA.

Since we do not have a vCenter Server, we will need to be able to fully configure VSAN without it. Luckily, we know of a way of "bootstrapping" VSAN onto an ESXi host without vCenter Server, and I will be leveraging that blog post to test this scenario with Free ESXi.

Prerequisite:

  • Three ESXi 5.5 hosts already installed and licensed with the vSphere Hypervisor (Free ESXi) license
  • SSH Enabled

Step 1 - SSH to the first ESXi host and run the following ESXCLI command to create a VSAN Cluster:

esxcli vsan cluster join -u $(python -c 'import uuid; print str(uuid.uuid4());')

[Image: configure-vsan-for-free-esxi-0]
Step 2 - Run the following ESXCLI command and make a note of the VSAN Cluster UUID (highlighted in green in the screenshot above), which will be needed later:

esxcli vsan cluster get

Step 3 - Enable VSAN traffic on the VMkernel interface you plan on using for VSAN by running the following ESXCLI command:

esxcli vsan network ipv4 add -i vmk0

Step 4 - Run the following command to view the list of disks that are eligible for use with VSAN. You will need a minimum of one SSD and one MD (magnetic disk):

vdq -q

[Image: configure-vsan-for-free-esxi-1]
Step 5 - Using the information from vdq, we will now create our VSAN Disk Group, which will contain the SSD/MDs to be used for VSAN. Use the following ESXCLI command, substituting in the SSD/MD names (please refer to the screenshot above for an example):

esxcli vsan storage add -s [SSD] -d [MD]

Step 6 - To ensure you have properly configured a VSAN Disk Group, you can run the following ESXCLI command to confirm:

esxcli vsan storage list

[Image: configure-vsan-for-free-esxi-2]
At this point, we have a single ESXi host configured with a VSAN Datastore; we can also confirm this by running the following ESXCLI command:

esxcli storage filesystem list

[Image: configure-vsan-for-free-esxi-3]
Step 7 - Repeat Steps 3-6 on the remaining two ESXi hosts

Step 8 - Finally, we need to join the remaining ESXi hosts to the VSAN Cluster. Take the VSAN Cluster UUID that we recorded earlier and specify it in the following ESXCLI command on each of the remaining ESXi hosts:

esxcli vsan cluster join -u [VSAN-CLUSTER-UUID]
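
To tie the steps together, here is a minimal sketch of the whole bootstrap as a single ESXi Shell script. The disk device names are placeholders you would substitute from your own vdq -q output:

#!/bin/sh
# Sketch: bootstrap VSAN on an ESXi 5.5 host without vCenter Server.
# Run with no argument on the first host to create the cluster; run with
# the VSAN Cluster UUID as the first argument on each additional host.

SSD="naa.xxxxxxxxxxxxxxxx"   # SSD device name from 'vdq -q' (placeholder)
MD="naa.yyyyyyyyyyyyyyyy"    # magnetic disk device name from 'vdq -q' (placeholder)

if [ -z "$1" ]; then
    # First host: generate a fresh VSAN Cluster UUID and create the cluster
    esxcli vsan cluster join -u $(python -c 'import uuid; print str(uuid.uuid4());')
else
    # Additional hosts: join the existing cluster by UUID
    esxcli vsan cluster join -u "$1"
fi

# Tag the VMkernel interface for VSAN traffic and create the Disk Group
esxcli vsan network ipv4 add -i vmk0
esxcli vsan storage add -s "${SSD}" -d "${MD}"

# Confirm cluster membership and Disk Group configuration
esxcli vsan cluster get
esxcli vsan storage list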

If we now log in to all of our ESXi hosts using the vSphere C# Client, we will see a common VSAN Datastore shared among the three ESXi hosts. To prove that VSAN is in fact working, we can create a Virtual Machine and ensure we can power it on, as seen in the screenshot below. By default, VSAN ships with a "Default" policy that sets FTT (number of host failures to tolerate) to 1, so assuming you have at least three ESXi hosts, all Virtual Machines will be protected by default.

[Image: configure-vsan-for-free-esxi-4]
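
If you want to inspect that default policy from the host itself, ESXi 5.5 also exposes it through the ESXCLI VSAN namespace (the exact output format will vary):

esxcli vsan policy getdefault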
Even though you can run VSAN with Free ESXi by leveraging the default VM Storage Policy that is built into VSAN for protecting Virtual Machines, you are only exercising a tiny portion of the potential that VSAN offers when consumed through vCenter Server. As mentioned earlier, you will not have the ability to create specific VM Storage Policies, assign them based on specific SLAs, or easily monitor their compliance and remediation. The management of the VSAN Cluster for adding additional capacity or serviceability is also quite limited without vCenter Server; though it can definitely be done, it is much easier with just a couple of clicks in the vSphere Web Client or a simple API call.

Categories // ESXCLI, ESXi, VSAN, vSphere 5.5 Tags // ESXi 5.5, free esxi, VSAN, vsanDa, vSphere 5.5, vsphere hypervisor
