WilliamLam.com


A killer custom Apple Mac Mini setup running VSAN

10.21.2014 by William Lam // 12 Comments

*** This is a guest blog post from Peter Bjork ***

The first time I was briefed on VMware VSAN, I fell in love. I finally knew how I would build my Home Lab.

Let me first introduce myself: my name is Peter Björk and I work at VMware as a Lead Specialist within the EMEA EUC Practice. I fortunately have the opportunity to limit my focus to a very few products and truly specialize in them. I cover two products, VMware ThinApp and VMware Workspace Portal, and one feature, the Application Publishing feature of VMware Horizon 6. I'm an end-user application kind of guy. That said, you should understand that I'm far from your ESXi and vSphere expert. If you want to keep up with the latest news in the VMware End-User Computing space, make sure to follow me on Twitter; my handle is @thepeb. When I'm not a guest blogger, I frequently blog on the official ThinApp and Horizon Tech blogs.

In my role I produce a lot of blog posts and internal enablement material. I perform many tests using early code drops, and on a daily basis I run my home lab to deliver live demos. I need a home lab that I can trust and that supports all my work tasks. I started building my lab many years ago. It all started with a single mid-tower white box, but pretty soon I ran into resource constraints and started to investigate what my next upgrade would look like.

I had a few requirements:

  • Keep the noise down
  • Shouldn’t occupy that much space
  • Should be affordable
  • Modular; I don't have the money to buy everything upfront, so it should be something I could build on over time.
  • Should be able to run VMware ESXi/vSphere
  • Should be cool

Being an Apple junkie for many years, my eyes soon fell on the Apple Mac Mini, and I stumbled over this great blog by William Lam that you are reading right now. At the same time I started to hear about VSAN, and my design was pretty much decided: I was going to build a Mac Mini cluster using VSAN as storage. While I have a Synology NAS, I only use it for my private files. It is not used in my home lab; for reasons I cannot really explain, I wanted to keep it separate and use a dedicated storage solution for the lab.

Now that I had decided to build my home lab, I went and bought my first Mac Mini. To keep costs down I found two used late 2012 models with i7 CPUs. Since VSAN requires one SSD and one HDD, I had to upgrade them using the OWC Data Doubler mounting kit. I also upgraded the memory to 16GB RAM in each Mac Mini. This setup gave me some extra resources, and together with my old Tower Server I could start building my VSAN Cluster. I started with the VSAN beta and quickly realized that VSAN didn't support my setup. I waited for the GA release of VSAN, and on the release date I decided to go for a pure Mac Mini VSAN setup, so I stole my family's HTPC, a late 2012 Mac Mini with an i5 CPU. (I managed to get away with it because I replaced it with an Apple TV.) I took one HDD and the SSD from my old Tower Server and put them into the i5 Mac Mini. While I managed to get VSAN up and running, it only ran for an hour or so before I lost one disk in my VSAN setup. I recovered the disk through a simple reboot, but then the next disk went down. The reason for the instability was that the GA release of VSAN did not support the AHCI controller. Hugely disappointed, I had to run my home lab on locally attached storage, and my dreams of VSAN were just that, dreams. In all my eagerness I had already migrated the majority of my VMs onto the VSAN Datastore, so I pretty much lost my entire home lab.

After complaining to my colleagues, I found out that AHCI controller support for VSAN was coming in vSphere 5.5 U2 and heard it was likely to solve my problems. So October 9th came, and vSphere 5.5 U2 was finally here. To my joy, my three Mac Minis were finally able to run VSAN, and it was completely stable.

Let's take a closer look at my setup. Below is an overview of the setup and how things are tied together.

Home Lab Picture
My VSAN Datastore houses most of my VMs. My old Tower Server is connected to the VSAN Datastore but does not currently contribute any storage. On the Tower Server I host my management VMs; since I got burned losing all my VMs, I made sure to keep the management VMs on a local disk in the Tower Server. My environment has been running quite stably for nearly two weeks now, so I'm considering migrating all of my VMs onto the VSAN Datastore.

I have noticed one issue so far, with my i5-based Mac Mini. One day it was reported as not connected in vCenter Server. The machine was running, but I got a lot of timeouts when I pinged it. While I was thinking about rebooting the host, it showed up as connected again, and since then I've not noticed any other issues. I suspect the i5 CPU isn't powerful enough to host a couple of VMs while also participating in VSAN; when I saw it disappear, it might have been running some heavy workloads. So with this in mind I would recommend running i7 Mac Minis and leaving the i5 models for HTPC workloads :).

Another thing I've noticed is that the Mac Minis run quite hot. No power-saving functionality is active, and my small server room doesn't have cooling. The room is constantly around 30-35 degrees Celsius (86-95F), but the gear just keeps on running. The only time I got a little bit worried was when the room's temperature peaked at 45 degrees Celsius (113F); for Sweden, that is an exceptionally warm summer day. Leaving the door to the room open for a while helps cool things down. I'm quite impressed by the Mac Minis and how durable they are. My first two Mac Minis have been running like this for well over a year now.

IMG_2810
Here's a picture of my server room. While I do have a UPS, there is no cooling and there are no windows, so the room tends to be quite warm. Stacking the Mac Minis on top of each other doesn't really help cooling either. When I started stacking them, I realized how stupid it would be to have three separate power cords, so I ended up creating a custom Y-Y-Y cable (the last Y is for future expansion).

IMG_2807

IMG_2809

Y-cable inside

Y connector
The power cord is a simple lamp cable (0.75mm2) with the three original Apple power cables butchered together. The Y-connector was found in a local Swedish hardware store. Since the Mac Mini's maximum continuous power consumption is 85W, a 0.75mm2 cable works perfectly: a 2 meter (6.56 feet) long 0.75mm2 cable can carry at least 3A, and my three Mac Minis only draw about 1.1A (3x85W / 230V = 1.1A). In 120V countries a little over 2A would run through the cable, which still wouldn't be a problem.
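The cable sizing above is easy to sanity-check with a couple of lines of arithmetic. This is just a throwaway sketch of the I = P / V math; the 85W per-unit figure is Apple's stated maximum continuous draw quoted in the text:

```python
def cable_current(units, watts_per_unit=85.0, volts=230.0):
    """Continuous current in amps for `units` Mac Minis sharing one cable (I = P / V)."""
    return units * watts_per_unit / volts

# Three Mac Minis at 230 V (Sweden): about 1.1 A.
print(round(cable_current(3), 2))
# The same three at 120 V: a little over 2 A, still well under the 3 A cable rating.
print(round(cable_current(3, volts=120.0), 2))
```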

Since the Mac Minis only have a single onboard NIC and I wanted two physically separated networks, I had to get a Thunderbolt Ethernet NIC. As shown in the overview picture, I am running both VSAN traffic and VM traffic over the same NIC. This is probably not ideal from a performance point of view, but for my EUC-related workloads I've not noticed any performance bottlenecks. On the contrary, I'm very pleased with the performance and with the benefits of having shared storage, so features like DRS and vMotion can balance the load between my hosts. I'm super happy with this setup.

I found that the easiest way to install ESXi was to use VMware Fusion to install it onto a USB key. Then I simply plug the USB key into a Mac Mini and I'm up and running. I do need an external monitor and keyboard to configure ESXi initially.

As for next steps, I'm planning on getting an SSD and an extra HDD for my Tower Server. This would allow the Tower Server to participate in the VSAN Cluster and contribute additional capacity. If the opportunity arises and I can find another Mac Mini with an i7 CPU for a decent price, I would also like to replace the i5. Other than that, I don't think I need much else. Well, I could always use a little bit more RAM of course (who doesn't), but disk and CPU utilization stay very low all the time.

VC
Technical details:

  • All Mac Minis are late 2012 models
    • All SSD disks are different models and vendors. Their capacity ranges from 120 to 250GB. Since I've had a couple of SSD crashes, I made sure to purchase the more heavy-duty models offered by the vendors, but none of them are designed for constant use in servers.
    • All Mac Minis have 16GB RAM (2x8GB)
    • I have 1TB HDD in my two i7 Mac Minis and 500GB HDD in the i5 one.
  • ESXi installed on USB key
  • My Tower Server specs are:
    • Supermicro Xeon E3 motherboard, uATX (X9SCM-F-B)
    • Intel Xeon E3-1230 3.2GHz QuadCore HT, 8MB
    • 4x8GB 1333MHz DDR3 ECC
    • Barracuda 500GB, 7200rpm, 16MB, SATA 6Gb/s

To wrap up, I'm very pleased with the setup I've built. It works perfectly for my needs. Lastly, I do recommend having a separate management host, as I found it extremely useful when I had to move VMs back and forth while testing earlier releases of VSAN. I also recommend going for the i7 CPU models of the Mac Mini for better performance.

Download the VMware ESXi 5.5u2 Mac Mini ISO from virtuallyGhetto:

  • https://mega.nz/#!EJNSFJyb!hm-AWAiqEisDnMV9XpZphSLn_puJLu9RTep9R83N6rY

Apple Thunderbolt Ethernet Adapter VIB:

  • https://s3.amazonaws.com/virtuallyghetto-download/vghetto-apple-thunderbolder-ethernet.vib

UPDATE (01/15/15) - Peter just shared with me a new custom Mac Mini rack that he built and welded together; check out the pictures below to see what it looks like.

mac-mini-rack-1 mac-mini-rack-2

Categories // Apple, ESXi, Home Lab, VSAN, vSphere Tags // apple, esxi 5.5, mac mini, VSAN, vSphere 5.5

Does VSAN work with Free ESXi?

07.22.2014 by William Lam // 8 Comments

I recently had to re-provision one of my VSAN lab environments using my recently shared ESXi 5.5 VSAN Kickstart. I usually specify a license key within the Kickstart so I do not have to license the ESXi host later. This got me wondering whether VSAN would in fact work with Free ESXi, aka vSphere Hypervisor. Being a curious person, I of course had to test this in the lab 🙂

Needless to say, if you want to properly evaluate or use VSAN in production, you should go through the supported method of using vCenter Server, as it provides a simple and intuitive management interface for VSAN. More importantly, it gives you the ability to create individual VM Storage Policies that can be applied on a per-VMDK basis, based on the SLAs for a given application or Virtual Machine.

Disclaimer: This is not officially supported by VMware and running ESXi without a VSAN license is against VMware's EULA.

Since we do not have a vCenter Server, we will need to be able to fully configure VSAN without it. Luckily, we know of a way to "bootstrap" VSAN onto an ESXi host without vCenter Server, and I will be leveraging that blog post to test this scenario with Free ESXi.

Prerequisite:

  • 3 ESXi 5.5 hosts already installed and licensed with vSphere Hypervisor (Free ESXi) License
  • SSH Enabled

Step 1 - SSH to the first ESXi host and run the following ESXCLI command to create a VSAN Cluster:

esxcli vsan cluster join -u $(python -c 'import uuid; print str(uuid.uuid4());')

configure-vsan-for-free-esxi-0
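The subshell in the command above simply generates a random version 4 UUID to identify the new cluster; any UUID in the canonical 8-4-4-4-12 hex format will do. A quick sketch of what it produces:

```python
import re
import uuid

# Same thing the embedded python one-liner does: mint a random v4 cluster UUID.
cluster_uuid = str(uuid.uuid4())

# Canonical 8-4-4-4-12 hex layout, e.g. "5b8e1c2a-...-...".
canonical = re.compile(r'^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$')
assert canonical.match(cluster_uuid)
print(cluster_uuid)
```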
Step 2 - Run the following ESXCLI command to make a note of the VSAN Cluster UUID (highlighted in green in the screenshot above) which will be needed later:

esxcli vsan cluster get

Step 3 - Enable VSAN Traffic for VMkernel interface you plan on using for VSAN traffic by running the following ESXCLI command:

esxcli vsan network ipv4 add -i vmk0

Step 4 - Run the following command to view a list of disks that are eligible for use with VSAN. You will need a minimum of 1 SSD and 1 MD (magnetic disk).

vdq -q

configure-vsan-for-free-esxi-1
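vdq -q prints a Python-style list of dictionaries, which is why the Kickstart script further down can simply eval() its output. A sketch of picking out the SSD and magnetic disks from that structure; the "State"/"IsSSD" field values match what the script later keys off, but the sample disk names here are invented:

```python
# Hypothetical vdq -q output (disk names are made up for illustration).
sample_vdq = [
    {"Name": "t10.ATA_____SSD_EXAMPLE", "State": "Eligible for use by VSAN", "IsSSD": "1"},
    {"Name": "t10.ATA_____HDD_EXAMPLE", "State": "Eligible for use by VSAN", "IsSSD": "0"},
    {"Name": "mpx.vmhba32:C0:T0:L0",    "State": "Ineligible for use by VSAN", "IsSSD": "0"},
]

def pick_disks(disks):
    """Return (ssd_name, [md_names]) among the VSAN-eligible devices."""
    eligible = [d for d in disks if d["State"] == "Eligible for use by VSAN"]
    ssds = [d["Name"] for d in eligible if d["IsSSD"] == "1"]
    mds = [d["Name"] for d in eligible if d["IsSSD"] == "0"]
    return ssds[0], mds

ssd, mds = pick_disks(sample_vdq)
# Assemble the same command shape used in Step 5.
print("esxcli vsan storage add -s %s -d %s" % (ssd, " -d ".join(mds)))
```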
Step 5 - Using the information from vdq, we will now create our VSAN Disk Group, which will contain the SSD/MDs to be used by VSAN. Use the following ESXCLI command, substituting in the SSD/MD names (please refer to the screenshot above for an example):

esxcli vsan storage add -s [SSD] -d [MD]

Step 6 - To ensure you have properly configured a VSAN Disk Group, you can run the following ESXCLI command to confirm:

esxcli vsan storage list

configure-vsan-for-free-esxi-2
At this point we have a single ESXi host configured with a VSAN Datastore. We can confirm this by running the following ESXCLI command:

esxcli storage filesystem list

configure-vsan-for-free-esxi-3
Step 7 - Repeat Steps 3-6 on the remaining two ESXi hosts

Step 8 - Finally, we need to join the remaining ESXi hosts to the VSAN Cluster. We will need the VSAN Cluster UUID recorded earlier, which we specify in the following ESXCLI command on each of the remaining hosts:

esxcli vsan cluster join -u [VSAN-CLUSTER-UUID]

If we now login to all of our ESXi hosts using the vSphere C# Client, we will see a common VSAN Datastore shared among the three ESXi hosts. To prove that VSAN is in fact working, we can create a Virtual Machine and ensure we can power it on, as seen in the screenshot below. By default, VSAN ships with a "Default" policy that sets FTT (number of host failures to tolerate) to 1, so assuming you have at least 3 ESXi hosts, all Virtual Machines will be protected by default.

configure-vsan-for-free-esxi-4
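As a rule of thumb, tolerating n host failures requires n+1 replicas of the data plus n witness components, each on its own host, which is why FTT=1 needs the 3-host minimum used throughout this post. A minimal sketch of that arithmetic:

```python
def min_hosts(ftt):
    """Minimum ESXi hosts for VSAN to tolerate `ftt` host failures:
    (ftt + 1) data replicas plus ftt witnesses, each on a distinct host."""
    replicas = ftt + 1
    witnesses = ftt
    return replicas + witnesses

assert min_hosts(1) == 3  # the default policy described above
print(min_hosts(1), min_hosts(2))
```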
Even though you can run VSAN using Free ESXi, leveraging the default VM Storage Policy built into VSAN for protecting Virtual Machines, you are only exercising a tiny portion of VSAN's potential compared to consuming it through vCenter Server. As mentioned earlier, you will not have the ability to create specific VM Storage Policies, assign them based on specific SLAs, or easily monitor their compliance and remediation. Managing the VSAN Cluster, such as adding capacity or servicing hosts, is also quite limited without vCenter Server; though it can definitely be done, it is much easier with a couple of clicks in the vSphere Web Client or a simple API call.

Categories // ESXCLI, ESXi, VSAN, vSphere 5.5 Tags // esxi 5.5, free esxi, VSAN, vsanDa, vSphere 5.5, vsphere hypervisor

ESXi 5.5 Kickstart script for setting up VSAN

07.21.2014 by William Lam // 12 Comments

In my lab, when I need to provision a new or rebuild an existing ESXi host, I still prefer to use the tried and true method of an unattended/scripted installation, also known as Kickstart. Below is a handy ESXi 5.5 Kickstart that I have been using to set up a basic VSAN environment. I figure this might come in handy for anyone looking to automate their ESXi 5.5 deployment and include some of the VSAN configurations, like creating a VSAN Disk Group or enabling the VSAN traffic type on a particular VMkernel interface. For more details about this Kickstart, refer to the bottom of the file, where I break it down line by line.

# Sample kickstart for ESXi 5.5 configuring VSAN Disk Groups
# William Lam
# www.virtuallyghetto.com
#########################################

accepteula
install --firstdisk --overwritevmfs
rootpw vmware123
reboot

%include /tmp/networkconfig
%pre --interpreter=busybox

# extract network info from bootup
VMK_INT="vmk0"
VMK_LINE=$(localcli network ip interface ipv4 get | grep "${VMK_INT}")
IPADDR=$(echo "${VMK_LINE}" | awk '{print $2}')
NETMASK=$(echo "${VMK_LINE}" | awk '{print $3}')
GATEWAY=$(esxcfg-route | awk '{print $5}')
DNS="172.30.0.100"
HOSTNAME=$(nslookup "${IPADDR}" "${DNS}" | grep Address | grep "${IPADDR}" | awk '{print $4}')

echo "network --bootproto=static --addvmportgroup=true --device=vmnic0 --ip=${IPADDR} --netmask=${NETMASK} --gateway=${GATEWAY} --nameserver=${DNS} --hostname=${HOSTNAME}" > /tmp/networkconfig

%firstboot --interpreter=busybox

vsan_syslog_key="VSAN-KS"

logger $vsan_syslog_key " Enabling & Starting SSH"
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

logger $vsan_syslog_key " Enabling & Starting ESXi Shell"
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell

logger $vsan_syslog_key " Suppressing ESXi Shell Warning"
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

logger $vsan_syslog_key " Reconfiguring VSAN Default Policy"
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

logger $vsan_syslog_key "Enabling VSAN Traffic on vmk0"
esxcli vsan network ipv4 add -i vmk0

# assign license
vim-cmd vimsvc/license --set AAAAA-BBBBB-CCCCC-DDDDD-EEEEE

%firstboot --interpreter=python

import commands, os, uuid, syslog

vsan_syslog_key = "VSAN-KS"
debug = False

# Build VSAN Disk Group command based on vdq -q output
def createVsanDiskGroup():
	vdqoutput = eval(commands.getoutput("/sbin/vdq -q"))
	md = []
	ssd = ''
	for i in vdqoutput:
		if i['State'] == 'Eligible for use by VSAN':
			if i['Reason'] == 'Non-local disk':
				syslog.syslog(vsan_syslog_key + " Setting enable_local and reclaiming " + i['Name'])
				if debug == False:
					os.system("esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -o enable_local -d " + i['Name'])
					os.system("esxcli storage core claiming reclaim -d " + i['Name'])
			if i['IsSSD'] == '1':
				ssd = i['Name']
			else:
				md.append(i['Name'])

	diskgroup_cmd = 'esxcli vsan storage add -s ' + ssd + ' -d ' + ' -d '.join(md)
	syslog.syslog(vsan_syslog_key + " Creating VSAN Disk Group using SSD: " + ssd +  " MD: " + ', '.join(md))
	if debug == False:
		os.system(diskgroup_cmd)

# Create VSAN Cluster (required only for first ESXi node)
def createVsanCluster():
	# generate UUID for VSAN Cluster
	vsan_uuid = str(uuid.uuid4())

	syslog.syslog(vsan_syslog_key + " Creating VSAN Cluster using UUID: " + vsan_uuid)
	if debug == False:
		os.system("esxcli vsan cluster join -u " + vsan_uuid)

createVsanDiskGroup()
createVsanCluster()

If you would like to see more details on creating ESXi Kickstart, make sure to check out my ESXi 4.x & 5.x examples here.

Line 6-9 These are generic Kickstart configurations specifying the EULA, how to install, the root password, etc. You can refer to VMware's scripted install documentation.

Line 11-25 This extracts the IP Address the host picked up via DHCP (a static DHCP allocation) and re-creates the network configuration on Line 25, statically assigning that IP Address to the ESXi host

Line 27 This starts the firstboot script and assumes "Busybox" as the interpreter which means basic shell commands

Line 30 I create a custom key which will be logged in syslog for our installation

Line 32-41 Basic ESXi configurations leveraging vim-cmd and ESXCLI

Line 43-45 Configure the VSAN default storage policy, please refer to this article for more details.

Line 47-48 Configure the VSAN Traffic type on vmk0

Line 50-51 Assign a license to the ESXi host

Line 53 This starts a second firstboot script, but now using "Python" as the interpreter

Line 55 Importing the appropriate libraries that will be used in the Python script

Line 58 Using the same custom key that I created earlier for logging to syslog

Line 61-81 A method for creating VSAN Disk Group by inspecting vdq CLI and marking disks as local

Line 83-90 A method for creating VSAN Cluster, please refer to this article for more details.

Line 92-93 Invoking the two Python methods. You can either create a custom Kickstart for your "first" ESXi node if you decide to bootstrap your VSAN Cluster onto a single ESXi host, or use custom boot options to specify whether the ESXi host being provisioned is the first or an additional node. This topic is a bit advanced, but if you are interested, take a look at this article here.
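The first-node-versus-additional-node decision mentioned above could be sketched roughly as follows. This is purely illustrative: the boot-option names `vsanFirstNode` and `vsanUuid`, and the idea of passing the cluster UUID as a boot option, are assumptions for this sketch and not part of the original script.

```python
import uuid

def vsan_join_command(boot_options):
    """Given a hypothetical boot-option string (e.g. "vsanFirstNode=true" or
    "vsanUuid=<uuid>"), return the esxcli command a node should run: the first
    node mints a new cluster UUID, additional nodes reuse the recorded one."""
    opts = dict(kv.split("=", 1) for kv in boot_options.split() if "=" in kv)
    if opts.get("vsanFirstNode") == "true":
        cluster_uuid = str(uuid.uuid4())  # bootstrap a brand-new cluster
    else:
        cluster_uuid = opts["vsanUuid"]   # join the existing cluster
    return "esxcli vsan cluster join -u " + cluster_uuid

print(vsan_join_command("vsanUuid=52b2c9f2-0000-1111-2222-333344445555"))
print(vsan_join_command("vsanFirstNode=true"))
```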

Categories // Automation, ESXCLI, ESXi, VSAN, vSphere, vSphere 5.5 Tags // esxi 5.5, kickstart, ks.cfg, VSAN, vSphere 5.5

Author

William Lam is a Senior Staff Solution Architect working in the VMware Cloud team within the Cloud Infrastructure Business Group (CIBG) at VMware. He focuses on Cloud Native technologies, Automation, Integration and Operation for the VMware Cloud based Software Defined Datacenters (SDDC)


Copyright WilliamLam.com © 2023
