
VMware Tools For Apple Mac OS X Guests?

05.22.2013 by William Lam // 3 Comments

With the release of vSphere 5, virtualizing Apple Mac OS X as a guest OS became possible and fully supported by VMware. To do so, you need to run ESXi on Apple hardware, either the now-deprecated Apple Xserve 3,1 or an Apple Mac Pro. A comment that came up yesterday on Twitter was that VMware Tools does not exist for Mac OS X guests, and that this would make it difficult to manage Mac OS X guests on vSphere. I guess it may not be that well known, or it was just an assumption, but VMware Tools does in fact exist for Mac OS X guests, and it is also documented in the VMware Tools installation guide.

It is still amazing to me to see the number of guest OSes the vSphere platform supports, and perhaps virtualizing Mac OS X is still relatively new to folks, hence the initial assumption about VMware Tools not being available. In any case, I thought I would take you through a few screenshots of installing VMware Tools for a Mac OS X 10.7 guest running on my Apple Mac Mini.

In the screenshot below, we can see that VMware Tools is not detected in the guest OS and we have an option to install VMware Tools, so we go ahead and click on that.

This will mount the darwin.iso to the VM from the vmimages directory of the ESXi host and you can proceed with the VMware Tools installation.
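
If you are curious where the ISO actually lives, you can take a look from the ESXi Shell. The path below is an assumption based on how the VMware Tools ISO images are typically laid out on ESXi 5.x and may differ between releases:

ls -l /vmimages/tools-isoimages/darwin.iso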

Upon finishing the installation, you will be asked to reboot the guest OS, and when we take a look at the VM summary view, we can see that VMware Tools is now running in our Mac OS X guest.
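
If you prefer to confirm this from the ESXi Shell rather than the VM summary view, you can also query the tools status with vim-cmd. This is just a quick sketch; the VM ID (69 below) is a placeholder you would first look up with the getallvms command:

vim-cmd vmsvc/getallvms
vim-cmd vmsvc/get.guest 69 | grep toolsStatus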

Note: For instructions on installing Apple Mac OS X as a guest OS on vSphere, please refer to this tech note.

Categories // Uncategorized Tags // mac, osx, vmware tools, vSphere 5.0

Running ESXi 5.0 & 5.1 on 2012 Mac Mini 6,2

12.04.2012 by William Lam // 59 Comments

If you recently purchased the new 2012 Apple Mac Mini 6,2, which was just released not too long ago, and tried to install either ESXi 5.0 or 5.1, you probably noticed a PSOD (Pink/Purple Screen of Death) during the installation. This is currently a known issue and there is an extensive VMTN thread (9,300+ views) about this problem, which also includes a fix through a collaboration between VMTN community user zer010gic and VMware engineer dariusd. Even though the Apple Mac Mini is not an officially supported hardware platform for running ESXi, it is great to see VMware engineers going out of their way to help the VMware community find a solution, as well as providing an "unofficial" fix in this case.

I would also like to point out that this issue only applies to the new 2012 Apple Mac Mini, for previous models such as the Apple Mac Mini 5,1 or 5,3 you can install ESXi 5.0 or 5.1 without any issues. For more details, please refer to the instructions in this blog post.

Disclaimer: The Apple Mac Mini is not officially supported by VMware. The only supported Apple hardware platform for ESXi 5.0 is the Apple Xserve 3,1 and for ESXi 5.1 it is the Apple Mac Pro; you can get more details here.

Before jumping into the solution, if you think VMware should support the Apple Mac Mini for running ESXi, please provide feedback to VMware by submitting a Feature Request. The more feedback that VMware receives from customers along with business justifications, the better our product management team can prioritize features that are most important to our customers.

Here are the current problems and solutions when trying to install on the new Mac Mini:

Problem: PSOD during ESXi 5.0 or 5.1 installation.
Solution: Add iovDisableIR=true to the kernel options before attempting the installation. When you are asked to reboot, be prepared to enter iovDisableIR=true again (SHIFT+O), which is required to get ESXi to boot after the installation. Once the system has booted up, go ahead and run "esxcli system settings kernel set -s iovDisableIR -v true" in the ESXi Shell to persist the kernel setting. This is a "temp" workaround while the PSOD is being investigated.

Problem: Unable to install new OSX Server in a VM or power on existing OSX Server VMs.
Solution: There appears to be a significant change to Apple's SMC (System Management Controller) device in the newer models that prevents the Apple SMC VMkernel driver from loading properly. A temporary fix was provided to zer010gic to create a custom ISO until the fix is integrated into a future release.

Note: There may be other minor/unconfirmed issues listed on the VMTN thread, but for basic ESXi installation/usage plus OSX Server VM creation/installation, the above solutions should be sufficient.

Instead of having everyone walk through the process of creating a custom ESXi ISO that includes the two fixes mentioned above, as well as bundling the updated tg3 Broadcom network drivers for network connectivity, zer010gic has generously created and is hosting ESXi 5.1 ISOs for users to download and use. It contains some work that I have been doing with zer010gic to create an ESXi 5.1 ISO that does not require any manual intervention outside of the normal ESXi installation. I recently completed the rest of this work, which is based off of the original ISO that zer010gic shared on the VMTN community (unfortunately, I have not been able to get a hold of him to provide him with the necessary bits, so I have decided to post a modified ISO).

Here are the step-by-step instructions for zer010gic's ESXi 5.1 ISO

Step 1 - Download zer010gic's ESXi-5.1-MacMini-SMC-6-2.iso.

Step 2 - Transfer the ISO to either a USB key or a CD-ROM

Step 3 - Perform the ESXi installation as you normally would, but when you get to the very last step prior to rebooting, be ready to do some typing when the host boots back up (this is important, else you will get a PSOD)

Step 4 - When ESXi starts to boot up, hit SHIFT+O, which will allow you to add additional kernel boot options. Add the following text after the bootUUID (remember to add a space first)

iovDisableIR=true

This step is required to ensure your ESXi boots up properly for the first time so that you can permanently enable this kernel option using ESXCLI, which will persist the setting across subsequent reboots.

Step 5 - Login to ESXi Shell (you may need to enable it first) and run the following ESXCLI command:

esxcli system settings kernel set -s iovDisableIR -v true
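
To double-check that the setting has been persisted, you can list the kernel option afterwards. This is just a quick sanity check; the exact output formatting may vary between ESXi releases:

esxcli system settings kernel list -o iovDisableIR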

Once this is set, you no longer have to do this again. If you prefer not to go through these manual steps, please refer to the section below for a modified ESXi 5.1 ISO which automates all this for you.

Here is my modified ESXi 5.1 ISO which does not require any additional user intervention

Step 1 - Download my ESXi-5.1-MacMini-SMC-BOOT-FIX-6-2.iso

Step 2 - Transfer the ISO to either a USB key or a CD-ROM

Step 3 - Go through normal ESXi install and enjoy

Note: For details on how I automated the kernel setting, take a look at the very end.

So if you are looking to refresh your home lab, you may just want to consider the new Apple Mac Minis, especially with their small form factor footprint 🙂

Note: A couple of users mentioned that it took a bit of time for the host to boot up, specifically while the usbarbitrator module is being loaded. I noticed this too and it took quite a bit of time, probably 5-6 minutes. If you do not plan on doing any USB pass-through from the Mac Mini to your guest OSes, you can disable this service, which should help speed up the bootup. If you wish to disable usbarbitrator, run the following command:

chkconfig usbarbitrator stop
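
If the service is already running and you would rather not wait for a reboot, the init script can also be invoked directly. This is an assumption based on the standard ESXi 5.x init script layout:

/etc/init.d/usbarbitrator stop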

ESXi ISO Customization Details

If you take a look at the steps required to install the ISO provided by zer010gic, most of the heavy lifting has already been done for you. The only "manual" part required from the user is to enter a kernel option during the first boot and then run an ESXCLI command to persist that kernel setting, which prevents the Mac Mini from PSODing. Removing these manual steps is actually harder than it looks because of when you need to actually perform the changes. After much trial and error, I came up with the following script below (it's not the cleanest, but it works).

Basically, the script is loaded from custom.tgz and executed before the installation begins. It generates a script stored in /tmp/customboot.sh which looks for the boot.cfg configuration file stored in the primary bootbank; this is where we insert the iovDisableIR=true parameter so the user is not required to do this after the first boot up. The challenge is that boot.cfg does not exist until after the installation has completed, so what I ended up doing was inserting a command into /usr/lib/vmware/weasel/process_end.py, which is part of the weasel installer for ESXi and is the very last script that is called when a user hits reboot. The command points back to /tmp/customboot.sh, which performs the insert into boot.cfg right before rebooting. To automatically take care of the ESXCLI configuration, I added the ESXCLI command to /etc/rc.local.d/local.sh, which automatically runs after all init scripts have executed. Finally, I need to clean up local.sh since I only need that to run once; this is handled by another script that is created and stored in /etc/init.d/customcleanup, which cleans up the local.sh file as well as deleting itself. Simple right? 😉

Note: There is probably a more optimal way of doing this, perhaps by using one of the weasel installer scripts to set the boot.cfg option and then cleaning up with an init script, but I decided to leverage some of my earlier work on Disabling LUN During ESXi Installation.

Here is the script within the custom.tgz file:

#!/bin/ash

# Hook /tmp/customboot.sh into the very end of the weasel installer so it
# runs right before the post-install reboot
sed -i "s/time.sleep(4)/time.sleep(4)\n    util.execCommand('\/tmp\/customboot.sh')/g" /usr/lib/vmware/weasel/process_end.py

# Generate the script that inserts iovDisableIR=true into boot.cfg of the
# freshly installed bootbank (the \$ escapes keep the variables from being
# expanded while the script is written out)
cat > /tmp/customboot.sh << __CUSTOM_BOOT__
#!/bin/ash

for BOOTCFG in \$(find / -iname boot.cfg);
do
        grep "no-auto-partition" \${BOOTCFG} > /dev/null 2>&1
        if [ \$? -eq 0 ];then
                sed -i 's/kernelopt.*/kernelopt=no-auto-partition iovDisableIR=true/g' \${BOOTCFG}
        fi
done
__CUSTOM_BOOT__
chmod +x /tmp/customboot.sh

# Persist the kernel setting on first boot by appending the localcli command
# to local.sh, which runs after all the init scripts have executed
sed -i 's/exit 0/localcli system settings kernel set -s iovDisableIR -v true\nexit 0/g' /etc/rc.local.d/local.sh

# One-shot cleanup script that removes the localcli line from local.sh and
# then deletes itself
cat > /etc/init.d/customcleanup << __CUSTOM_CLEANUP__
#!/bin/ash
sed -i 's/localcli.*//g' /etc/rc.local.d/local.sh
rm -f /etc/init.d/customcleanup
__CUSTOM_CLEANUP__

chmod +x /etc/init.d/customcleanup
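
In case you are wondering how custom.tgz actually gets pulled in at install time, that part is not covered in this post, so treat the following as an assumption based on how ESXi ISO customization typically works: you extract the ISO contents, drop custom.tgz alongside the other payloads and append it to the modules line in the ISO's boot.cfg (and efi/boot/boot.cfg) before rebuilding the ISO, for example:

modules=b.b00 --- jumpstrt.gz --- ... --- imgpayld.tgz --- custom.tgz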

Categories // Apple, ESXi, Home Lab Tags // apple, ESXi 5.0, ESXi 5.1, mac, mac mini, mini, osx, smc, tg3, vSphere 5.0, vSphere 5.1

Changing VCSA Failed Login Attempt & Lock Out Period

10.05.2012 by William Lam // 2 Comments

I was going through my Twitter feed this morning and came across an interesting article by @herseyc, Locked out of the vCenter Server Virtual Appliance. I recommend you give Hersey's article a read as it contains some very useful information about failed login attempts on the VCSA (vCenter Server Appliance).

There was a question posed at the end of the article on how to increase the number of login attempts before an account is locked out. The answer is to simply modify one of the PAM configuration files on the VCSA, specifically /etc/pam.d/common-auth.

The system default for failed login attempts before an account is locked out is 3, and you can modify this by changing the following line, where X is the number of attempts:

auth    requisite       pam_tally.so deny=X

You also have the option of specifying an "unlock" time, which will lock the account for a specified period of time after reaching the maximum failed login attempts. This can be useful if you do not want to have to manually reset a user account because a user fat-fingered a password. For more details about these parameters, you can search on Google or refer to this article.
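
As an example, the line below would lock an account after 5 failed attempts and automatically unlock it again after 15 minutes. The specific values are purely illustrative; unlock_time is a standard pam_tally parameter rather than something specific to the VCSA:

auth    requisite       pam_tally.so deny=5 unlock_time=900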

Note: The login attempts here are specific to the OS system logins on the VCSA (5.0 & 5.1) and not vCenter Server. If you successfully login before hitting the maximum attempts, the tally will automatically reset back to 0.
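
If an account does get locked out, the failure tally can also be inspected and reset by hand from the VCSA console. This assumes the pam_tally utility that ships with the appliance's SLES base; adjust the username as needed:

pam_tally --user root
pam_tally --user root --reset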

In vSphere 5.1, with the use of vCenter SSO, you now have an easy way of controlling password and lockout policies using the new vSphere Web Client. Here is a screenshot of where the configurations are located:

Note: These policies only pertain to identity sources connected to vCenter SSO and not OS system logins.

Categories // Uncategorized Tags // appliance, lockout, login, pam, VCSA, vcva, vSphere 5.0, vSphere 5.1
