Community stories of VMware & Apple OS X in Production: Part 1

07.30.2014 by William Lam // 4 Comments

I caught this tweet from Yoann Gini a couple of weeks back which I thought was quite interesting:

For people interested by OS X and ESXi, I run vSphere setup over mac mini in 24/7 production setup. It works.#psumac

— Yoann Gini (@ygini) July 10, 2014

After sharing this tweet internally on our Socialcast group related to all things Apple, I came to learn that Yoann was not the only one running production workloads on Apple Mac Minis. It turns out that we at VMware also use Mac Minis for a very special production environment. This got me thinking: how are other customers leveraging VMware and Apple OS X in their environments? Would it not be cool to hear how others use VMware and Apple technologies together in production?

This was the primary motivation behind this blog series: the idea is to interview folks from the virtualization community who are willing to share their experiences and educate the community on how they leverage VMware and Apple OS X in their production environments. To help kick off the series, I would like to start by sharing how VMware leverages vSphere and Apple OS X in our own production environment. I got in touch with the person responsible for managing this environment, and below is our chat transcript.

Disclaimer: The Apple Mac Mini platform is not officially supported by VMware

Company: VMware
Product: VMware vSphere
Hardware: Apple Mac Mini

[William] - Hi Michael, thanks for taking some time out of your day to share some information about how VMware uses Apple Technologies. For those that do not know you, can you introduce yourself and your role within VMware?

[Michael] - Certainly. I'm Mike Lemoine, I've been at VMware for just over three years, and my title is Senior Tools and Infrastructure Engineer. I'm part of the Build/SCM team here, responsible for the care and feeding of the infrastructure that is used for VMware product builds.

[William] - Build/SCM, that sounds pretty cool! So this is the Infrastructure that Engineering uses to compile and build out the various VMware (Apple) installers, executables and binaries that customers eventually consume?

[Michael] - Yes, it is. While engineers can and do perform local builds on their desktops or in shared interactive environments, the only 'real builds' are the ones that go through this infrastructure. If it doesn't go through our machines, it doesn't get released to the world.

[William] - Sounds like a very critical piece of Infrastructure at VMware. So Michael, I heard from someone that you manage a very special build infrastructure at VMware and it involves some vSphere and Apple hardware? Do you mind telling us a little bit about this environment and what it is used for?

[Michael] - I do, indeed! We have a fleet of Apple Mac Minis serving as the basis of our OS X build farm. While they're not on the HCL, they're really our best option for providing an environment for products intended to run on OS X or iOS. While the Mac Pro is supported, it has a lot of unnecessary equipment which makes it prohibitively expensive (well, wasteful) to use at scale. The Mini, on the other hand, has most of what we need. It's not perfect, but it's the best match we can achieve without violating the Apple EULA.

[William] - Wow, that is an awesome use case for the Apple Mac Minis! Even though the Mac Minis are not on the HCL, they are being utilized for production workloads and building out products like VMware Fusion, the Horizon View Client and iOS applications. Can you tell us a little bit more about the environment: number of hosts, version of ESXi and the amount of capacity it can support?

Here is a picture of the Mac Mini cluster in the VMware datacenter. The rack used to hold the Mac Minis is the MMR-2G-5URS, along with the MMR-2G-2URS brace stabilizer.

[Michael] - We've got roughly 50 Mac Minis in production (a mix of 5,3 and 6,3 depending on when they were ordered), running ESXi 5.5. Each is stuffed full with the maximum supported configuration. In the case of the 6,2 Minis, that's an i7 at 2.6GHz and 16GB of memory. That's one of the lamentable issues with the Mini: it maxes out at 16GB. We run two VMs on top of each, each taking its own spindle and 8GB of memory. Our 6,2 Minis are presently running 10.8.4 VMs, while the 5,2 Minis are running VMs with older versions of OS X for build reproducibility.

[William] - That is a lot of Mac Mini power! Are these ESXi hosts currently being managed by vCenter Server or are they managed individually?

[Michael] - Both, actually. Those ESXi hosts are all managed by vCenter, and our build system uses an in-house inventory and lease system to choose among available hosts. One of the reasons we can confidently run these Minis in production is that our automation is written for failure. We assume systems will be wedged; we assume every machine is a landmine. In reality, the Mac Minis very rarely give us any trouble, but knowing that we can lose nearly all of them and still produce builds gives us a sense of safety.

[William] - That is a very cool solution. It sounds like the Mac Minis have been rock solid for our production usage, but we still have intelligence built into our software as a backstop, so we can safely rely on "consumer" hardware. Sounds like a mini SDDC to me! Speaking of hardware failures, which components of the Mac Mini have you seen fail the most, and how do you go about getting the parts replaced?

[Michael] - Certainly. We're all about belts and suspenders; nobody wants that 2am call about a production outage. The main issues we've had with the Minis have been the deaths of the drives in them. These machines are very rarely idle, which is a usage pattern the drives simply aren't prepared for. Amusingly, our support for the Minis is the same as anyone else's. One of the guys in our datacenter will pull the machine and take it to the Genius Bar, where they will tell us what we already know and replace the drive. The system is then re-racked, the new drive is set up as a datastore, and we deploy a new VM to that drive. Our Minis all run ESXi off of USB sticks, so losing the drive isn't much of a hurdle.

[William] - Ah, that’s cool that we’ve built that into the design and can easily tolerate host failures, so a rebuild is not really a big deal. So what about the VMs that were running: is there any data that needs to be backed up and restored, or is it a stateless configuration?

[Michael] - Nothing that needs to be backed up or restored. The systems are all configured via puppet, so all that's necessary is the base OS installation, which we have created a template for. The only manual step involved is running puppet the first time. Then we perform a test build to be sure everything is in order and put the system back into production. The total time a human spends on this once the system is re-racked is probably under ten minutes.

[William] - Are there any plans in the future to upgrade the Mac Mini cluster to the new Mac Pros, or do you see the Mac Minis as being more than sufficient?

[Michael] - We've looked at the Mac Pro, but the extra cost of the dual GPUs makes them expensive. If there were a Mac Pro option with cheap onboard video, we'd probably tolerate the inferior form factor and see what we could do about racking them. We still wouldn't have a BMC, but we'd gain a gigabit network port.

[William] - On the topic of support, does the Mac Mini not being officially supported by VMware have any impact on you?

[Michael] - The Mac Mini not being supported certainly has an impact on us. Our internal experience of support is already inferior to the customer experience. Having to fight the battle of "No, we don't support it, but we unofficially do" adds a cost to every interaction.

[William] - My understanding is that the primary reason for not supporting the Mac Minis today is their lack of support for ECC memory. I know that our customers expect an Enterprise-ready solution when we certify a platform, but in your opinion, is this a requirement we could potentially relax, and do you feel customers would agree?

[Michael] - I think that if we communicate the concerns to our customers and allow them to make their own decision on whether to take the risk of running consumer-grade hardware, it would be better for everyone. Customers would feel more secure with the off-label use, we already do the work in-house to make the Mini a usable platform, and the cost seems very low. I think the only challenge would be clearly enumerating the dangers.

[William] - Michael, I really appreciate you taking the time to share how VMware leverages the Apple Mac Minis for production. Do you have any tips or tricks for customers looking at running vSphere on the Mac Mini for more than home labs? Are there any go-to resources you would point customers to if they are looking to get started with running ESXi on a Mac Mini?

[Michael] - My advice would be to follow our lead. Realize you're using consumer-grade hardware, and plan for failure. The low cost allows for easy redundancy; take advantage of that. In what other situation can you have an entire spare server on hand for $1200? While the information available on the internet is great, and I spent more than a little time on virtuallyghetto.com reading about Mac Mini attempts, I also had the ability to annoy and harass some of our talent inside VMware to get answers. To be honest, the amount of information you need in order to get ESXi running on a Mac Mini would probably fit on an index card. The challenge is hunting down which gotchas apply to your combination of ESXi version and Mac Mini revision. A KB article covering the few pitfalls in the process would be wonderful.

Hopefully you enjoyed this first post in the series; stay tuned for a couple of other interviews that I am working on. In the meantime, if you are interested in sharing your story on how you use VMware and Mac OS X in production, you can reach out to me here.

  • Community stories of VMware & Apple OS X in Production: Part 1
  • Community stories of VMware & Apple OS X in Production: Part 2
  • Community stories of VMware & Apple OS X in Production: Part 3
  • Community stories of VMware & Apple OS X in Production: Part 4
  • Community stories of VMware & Apple OS X in Production: Part 5
  • Community stories of VMware & Apple OS X in Production: Part 6
  • Community stories of VMware & Apple OS X in Production: Part 7
  • Community stories of VMware & Apple OS X in Production: Part 8
  • Community stories of VMware & Apple OS X in Production: Part 9
  • Community stories of VMware & Apple OS X in Production: Part 10

 

Categories // Apple, ESXi, vSphere Tags // apple, mac mini, osx, vmware, vSphere

How to quickly deploy CoreOS on ESXi?

07.25.2014 by William Lam // 1 Comment

There has been a tremendous amount of buzz lately regarding Docker, a platform that allows developers to easily build, deploy and manage Linux Containers. Docker can run on a variety of Linux distributions; one that has been quite popular lately is a new distribution called CoreOS.

CoreOS is actually a fork of Google's ChromeOS and was designed to run next-generation workloads similar to those at Google and Facebook. A major benefit of CoreOS is the minimal footprint of the base operating system, which leaves maximum resources available for the Container workloads.

Having heard so much about Docker and CoreOS, I figured this would be a great opportunity to explore and learn about a new technology, which I always enjoy when I get the time. I know Duncan Epping has written an article on how to run CoreOS on VMware Fusion, but since I primarily work with vSphere, I wanted to run CoreOS on ESXi. The first place I went was the CoreOS documentation, which has a section for VMware. After going through the instructions, I found the process to be quite manual, potentially requiring additional tools, as a simple OVF/OVA for CoreOS did not exist.

I figured I could wrap the process in a very simple shell script that requires only a couple of input parameters from the user, based on their environment, and auto-magically handles the deployment. I created a shell script that runs in the ESXi Shell called deploy_coreos_on_esxi.sh

Note: The script assumes you can connect directly to the CoreOS website to download the zip directly onto the ESXi host.

There are three variables that you will need to edit prior to running the script:

  • DATASTORE_PATH - The full path to the Datastore to deploy CoreOS onto (e.g. /vmfs/volumes/datastore)
  • VM_NETWORK - The name of the vSphere Network to connect the CoreOS VM to
  • VM_NAME - The name of the CoreOS VM

Once you have finished editing the script, you just need to scp it to your ESXi host and run the script using the following command:

./deploy_coreos_on_esxi.sh
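For a sense of what happens under the hood, here is a minimal, hedged sketch of the VM-creation step such a deployment script performs. This is not the actual deploy_coreos_on_esxi.sh: the real script also downloads and extracts the CoreOS zip onto the host, the /tmp path below stands in for a real /vmfs/volumes datastore, and the VMDK file name is illustrative.

```shell
#!/bin/sh
# Illustrative stand-ins -- on a real ESXi host DATASTORE_PATH would be
# something like /vmfs/volumes/datastore1
DATASTORE_PATH="/tmp/demo-datastore"
VM_NETWORK="VM Network"
VM_NAME="CoreOS"

VM_DIR="${DATASTORE_PATH}/${VM_NAME}"
mkdir -p "${VM_DIR}"

# Generate a minimal VMX that points at the extracted CoreOS disk
# (the VMDK file name here is an assumption, not the real artifact name)
cat > "${VM_DIR}/${VM_NAME}.vmx" <<EOF
config.version = "8"
virtualHW.version = "9"
displayName = "${VM_NAME}"
guestOS = "other26xlinux-64"
memsize = "1024"
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "coreos_production_vmware_image.vmdk"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "${VM_NETWORK}"
EOF

echo "Generated ${VM_DIR}/${VM_NAME}.vmx"

# On a real ESXi host the script would then register and power on the VM:
#   VMID=$(vim-cmd solo/registervm "${VM_DIR}/${VM_NAME}.vmx")
#   vim-cmd vmsvc/power.on "${VMID}"
```

The registration and power-on steps (commented out above) only work in the ESXi Shell, which is why the actual script is run on the host itself rather than from a workstation.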

Here is a screenshot of running the script:

Once the script has completed, you should see a new CoreOS VM on your ESXi host, and if you have DHCP, you should also see an associated IP address in the VM console:

Once the CoreOS VM has booted up, you use the SSH key that was included in the zip file; by default it is also extracted into the CoreOS VM directory. You can SSH into the VM by running the following command:

ssh -i insecure_ssh_key core@IP-ADDRESS-OF-COREOS-VM

Once logged in, we can run "docker images" to see the list of Container images. As you can see, there is only one, and we can connect to a Container by running the "toolbox" command, which will pull down the latest image and then connect to that Container, as seen in the screenshot below.

I was hoping that I could also get VMware Tools installed within the CoreOS VM, but I was not able to get SSH working within the Toolbox as stated in the Install Debugging Tools documentation. I may need to tinker around a bit more with CoreOS.

If you are interested in other methods of deploying CoreOS, be sure to check out CoreOS's documentation.

Additional Resources:

  • http://www.vreference.com/2014/06/09/deploy-coreos-into-your-esxi-lab/ - This was a great primer on CoreOS by Forbes Guthrie that I really enjoyed reading; highly recommended
  • http://gosddc.com/articles/dock-your-container-on-vmware-with-vagrant-and-docker/ - If you use Vagrant and would like to play with Docker, be sure to check out Fabio Rapposelli's Vagrant vCloud Provider

Categories // Automation, Docker, ESXi, vSphere Tags // container, coreos, Docker, ESXi, vSphere

How to efficiently transfer files to Datastore in vCenter using the vSphere API?

06.18.2014 by William Lam // 19 Comments

A pretty common task for vSphere administrators is to upload or download content from a vSphere Datastore, which usually contains ISOs and floppy images. You can initiate the file transfer using the vSphere Web/C# Client, but this process can be quite tedious when having to manually upload several ISOs. Instead, you will probably want to automate this process, and there are a couple of ways you can accomplish this. One option is to go directly to an ESXi host and upload your files, but this is not ideal when you have vCenter Server to centrally manage your infrastructure. The second option is to go through vCenter Server, but depending on the implementation, you can potentially add unnecessary load to the vCenter Server if implemented incorrectly.

Let me explain this further with two diagrams, and you can decide which implementation you prefer.
In this first implementation, I directly access the file management API, which leverages a simple HTTP GET/PUT operation to upload files to a vSphere Datastore. What I found while transferring the data was that the data actually traverses the vCenter Server and then goes to the ESXi host before being written to the vSphere Datastore. This of course made the data transfer very inefficient, not to mention the additional bandwidth and load added to the vCenter Server.
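The operation described above boils down to an HTTPS PUT against vCenter's /folder endpoint. Here is a hedged curl sketch: the hostname, credentials and file names are hypothetical, and the actual transfer line is commented out since it requires a live vCenter Server.

```shell
#!/bin/sh
VCENTER="vcenter55-1.primp-industries.com"   # hypothetical vCenter hostname
DATACENTER="Datacenter"
DATASTORE="vsanDatastore"
DESTFILE="ISO/CentOS-6.4-x86_64-netinstall.iso"

# File operations through vCenter use the /folder endpoint, qualified
# by the datacenter path and datastore name
URL="https://${VCENTER}/folder/${DESTFILE}?dcPath=${DATACENTER}&dsName=${DATASTORE}"
echo "${URL}"

# The PUT itself (needs live credentials, so shown only as a comment):
#   curl -k -u 'administrator@vsphere.local' -T CentOS-6.4-x86_64-netinstall.iso "${URL}"
```

Note that every byte of that PUT flows through vCenter before reaching the host, which is exactly the inefficiency illustrated in the diagram.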

I created a sample vSphere SDK for Perl script that demonstrates this inefficient transfer, called inefficent-upload-files-to-datastore.pl

Here is sample execution of the script which accepts the name of the vSphere Datacenter, vSphere Datastore, the source file to transfer and the destination path of where the file will be uploaded to:

./inefficent-upload.pl --config ~/vmware-dev/.vcenter55-1 --datacenter Datacenter --datastore vsanDatastore --sourcefile /Volumes/Storage/Images/CentOS-6.4-x86_64-netinstall.iso --destfile ISO/CentOS-6.4-x86_64-netinstall.iso

After talking to some folks about this problem, I learned about a more efficient method, as shown in the diagram below.
As you can see, we still initiate the transfer through vCenter Server, but the actual data is then sent to one of the ESXi hosts that has access to the vSphere Datastore. To accomplish this, we need to use the AcquireGenericServiceTicket() method, which is part of the SessionManager. Using this method, we can request a ticket for a one-time HTTP request to connect directly to an ESXi host. To upload a file, the request must include the HTTP method, which in this case will be a PUT operation, and the URL to an ESXi host that has access to the vSphere Datastore.

Here is an example of a URL: https://vesxi55-1.primp-industries.com/folder/ISO/CentOS-6.4-x86_64-netinstall.iso?dcPath=ha-datacenter&dsName=vsanDatastore

  • ESXi IP Address/Hostname - In the script, I select the first ESXi host that has access to the vSphere Datastore
  • vSphere Datastore Directory - The directory into which the contents of the file will be placed. In this example, we just have one top-level directory called ISO, which must already exist
  • Destination file name - The name of the file that should appear in the vSphere Datastore
  • Datacenter Name - This should always be ha-datacenter when connecting directly to an ESXi host
  • vSphere Datastore - The name of the vSphere Datastore
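Putting those pieces together, the direct-to-host URL from the example above can be assembled as follows. The ticket acquisition itself requires the vSphere API and a live vCenter, so it is indicated only in a comment; the hostname is the hypothetical one from the example.

```shell
#!/bin/sh
ESXI_HOST="vesxi55-1.primp-industries.com"   # first host with access to the datastore
DS_DIR="ISO"                                 # must already exist on the datastore
DEST_NAME="CentOS-6.4-x86_64-netinstall.iso"
DATACENTER="ha-datacenter"                   # always ha-datacenter when talking directly to ESXi
DATASTORE="vsanDatastore"

URL="https://${ESXI_HOST}/folder/${DS_DIR}/${DEST_NAME}?dcPath=${DATACENTER}&dsName=${DATASTORE}"
echo "${URL}"

# With a one-time ticket obtained from SessionManager.AcquireGenericServiceTicket()
# against vCenter, the actual transfer would be a PUT carrying the ticket,
# for example (ticket variable hypothetical):
#   curl -k --cookie "vmware_cgi_ticket=${TICKET}" -T "${SRC}" -X PUT "${URL}"
```

Because only the small ticket request goes through vCenter, the bulk data path is a single hop straight to the ESXi host.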

To demonstrate this functionality, I have created a vSphere SDK for Perl script called efficent-upload-files-to-datastore.pl, which accepts the name of the vSphere Datastore along with the source and destination of where the file will be placed:

./upload-files-to-datastore.pl --config ~/vmware-dev/.vcenter55-1 --datastore vsanDatastore --sourcefile /Volumes/Storage/Images/CentOS-6.4-x86_64-netinstall.iso --destfile ISO/CentOS-6.4-x86_64-netinstall.iso

Hopefully after looking at these two implementations, you will also agree that the second option is the best! One last thing I would like to point out: even though we are talking about transferring files to a vSphere Datastore, this method can also be used to efficiently transfer other supported files to an ESXi host through vCenter Server, as described in this blog article.

Categories // ESXi, vSphere Tags // datastore, HTTP, iso, vSphere, vSphere API
