How to quickly deploy CoreOS on ESXi?

07.25.2014 by William Lam // 1 Comment

There has been a tremendous amount of buzz lately regarding Docker, a platform that allows developers to easily build, deploy and manage Linux Containers. Docker can run on a variety of Linux distributions, and one that has been quite popular lately is a new distribution called CoreOS.

CoreOS is actually a fork of Google's ChromeOS and was designed to run next generation workloads similar to those at Google and Facebook. A major benefit of CoreOS is the minimal footprint of the base operating system, which allows for maximum resource utilization by the Container workloads.

Having heard so much about Docker and CoreOS, I figured this would be a great opportunity to explore and learn about a new technology, which I always enjoy when I get the time. I know Duncan Epping has written an article on how to run CoreOS on VMware Fusion, but since I primarily work with vSphere, I wanted to run CoreOS on ESXi. The first place I went was the CoreOS documentation, which has a section for VMware. After going through the instructions, I found the process to be quite manual and potentially requiring additional tools, as a simple OVF/OVA for CoreOS did not exist.

I figured I could wrap the process in a very simple shell script that only requires a couple of input parameters based on the user's environment and then auto-magically handles the deployment. The result is a shell script that runs in the ESXi Shell called deploy_coreos_on_esxi.sh

Note: The script assumes your ESXi host can connect directly to the CoreOS website to download the zip file straight onto the host.
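Depending on how the ESXi firewall is configured, you may first need to allow outbound HTTP connections from the host before the download will succeed. If so, temporarily enabling the httpClient ruleset from the ESXi Shell should take care of it:

esxcli network firewall ruleset set -e true -r httpClient
# ... run the deployment script ...
esxcli network firewall ruleset set -e false -r httpClient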

There are three variables that you will need to edit prior to running the script:

  • DATASTORE_PATH - The full path to the Datastore to deploy CoreOS onto (e.g. /vmfs/volumes/datastore)
  • VM_NETWORK - The name of the vSphere Network to connect the CoreOS VM to
  • VM_NAME - The name of the CoreOS VM
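For example, the top of the script might end up looking like the following (the datastore, portgroup and VM names here are just placeholders from a lab setup, substitute your own):

DATASTORE_PATH=/vmfs/volumes/datastore1
VM_NETWORK="VM Network"
VM_NAME=CoreOS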

Once you have finished editing the script, you just need to scp it to your ESXi host and run it using the following command:

./deploy_coreos_on_esxi.sh
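If you are curious what the script is doing under the covers, the overall flow boils down to something like this. This is only a rough sketch and not the actual script; the download URL and VMX filename are based on the CoreOS "insecure" VMware image that was available at the time, and it assumes wget and unzip are available in the ESXi Shell:

# grab the CoreOS VMware image and extract it into its own VM directory
cd "${DATASTORE_PATH}"
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_vmware_insecure.zip
unzip coreos_production_vmware_insecure.zip -d "${VM_NAME}"
cd "${VM_NAME}"
# register the extracted VMX and power the VM on
# (the real script also takes care of pointing the VM's vNIC at ${VM_NETWORK})
VM_ID=$(vim-cmd solo/registervm "$(pwd)/coreos_production_vmware_insecure.vmx" "${VM_NAME}")
vim-cmd vmsvc/power.on "${VM_ID}"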

Here is a screenshot of running the script:

[Screenshot: deploy-coreos-on-esxi-0]

Once the script has completed, you should see a new CoreOS VM on your ESXi host and, if you have DHCP, you should also see an associated IP Address in the VM Console:

[Screenshot: deploy-coreos-on-esxi-1]

Once the CoreOS VM has booted up, you can use the SSH key that was included in the zip file; by default it is also extracted into the CoreOS VM's directory. You can SSH into the VM by running the following command:

ssh -i insecure_ssh_key core@IP-ADDRESS-OF-COREOS-VM

Once logged in, we can run "docker images" to see a list of Container images. As you can see, there is only one, and we can connect to that Container by running the "toolbox" command, which will pull down the latest image and then connect to the Container, as seen in the screenshot below.
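For reference, that amounts to just two commands once you are logged into the VM:

# list the container images currently on the host
docker images
# pull down the latest toolbox image and connect to the debugging container
toolbox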

[Screenshot: deploy-coreos-on-esxi-3]

I was hoping I could also get VMware Tools installed within the CoreOS VM, but I was not able to get SSH working within the Toolbox as described in the Install Debugging Tools documentation. I may need to tinker around a bit more with CoreOS.

If you are interested in other methods of deploying CoreOS, be sure to check out CoreOS's documentation.

Additional Resources:

  • http://www.vreference.com/2014/06/09/deploy-coreos-into-your-esxi-lab/ - This was a great primer on CoreOS by Forbes Guthrie that I really enjoyed reading; highly recommended
  • http://gosddc.com/articles/dock-your-container-on-vmware-with-vagrant-and-docker/ - If you use Vagrant and would like to play with Docker, be sure to check out Fabio Rapposelli's Vagrant vCloud Provider

Categories // Automation, Docker, ESXi, vSphere Tags // container, coreos, Docker, ESXi, vSphere

ESXi 5.5 Kickstart script for setting up VSAN

07.21.2014 by William Lam // 12 Comments

In my lab, when I need to provision a new ESXi host or rebuild an existing one, I still prefer to use the tried and true method of an unattended/scripted installation, also known as Kickstart. Below is a handy ESXi 5.5 Kickstart that I have been using to set up a basic VSAN environment. I figured this might come in handy for anyone looking to automate their ESXi 5.5 deployment and include some of the VSAN configurations, like creating a VSAN Disk Group or enabling the VSAN Traffic type on a particular VMkernel interface. For more details about this Kickstart, refer to the breakdown at the bottom of the post where I go through the file in more detail.

# Sample kickstart for ESXi 5.5 configuring VSAN Disk Groups
# William Lam
# www.virtuallyghetto.com
#########################################

accepteula
install --firstdisk --overwritevmfs
rootpw vmware123
reboot

%include /tmp/networkconfig
%pre --interpreter=busybox

# extract network info from bootup
VMK_INT="vmk0"
VMK_LINE=$(localcli network ip interface ipv4 get | grep "${VMK_INT}")
IPADDR=$(echo "${VMK_LINE}" | awk '{print $2}')
NETMASK=$(echo "${VMK_LINE}" | awk '{print $3}')
GATEWAY=$(esxcfg-route | awk '{print $5}')
DNS="172.30.0.100"
HOSTNAME=$(nslookup "${IPADDR}" "${DNS}" | grep Address | grep "${IPADDR}" | awk '{print $4}')

echo "network --bootproto=static --addvmportgroup=true --device=vmnic0 --ip=${IPADDR} --netmask=${NETMASK} --gateway=${GATEWAY} --nameserver=${DNS} --hostname=${HOSTNAME}" > /tmp/networkconfig

%firstboot --interpreter=busybox

vsan_syslog_key="VSAN-KS"

logger $vsan_syslog_key " Enabling & Starting SSH"
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

logger $vsan_syslog_key " Enabling & Starting ESXi Shell"
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell

logger $vsan_syslog_key " Suppressing ESXi Shell Warning"
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

logger $vsan_syslog_key " Reconfiguring VSAN Default Policy"
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

logger $vsan_syslog_key "Enabling VSAN Traffic on vmk0"
esxcli vsan network ipv4 add -i vmk0

# assign license
vim-cmd vimsvc/license --set AAAAA-BBBBB-CCCCC-DDDDD-EEEEE

%firstboot --interpreter=python

import commands, os, uuid, syslog

vsan_syslog_key = "VSAN-KS"
debug = False

# Build VSAN Disk Group command based on vdq -q output
def createVsanDiskGroup():
	vdqoutput = eval(commands.getoutput("/sbin/vdq -q"))
	md = []
	ssd = ''
	for i in vdqoutput:
		if i['State'] == 'Eligible for use by VSAN':
			if i['Reason'] == 'Non-local disk':
				syslog.syslog(vsan_syslog_key + " Setting enable_local and reclaiming " + i['Name'])
				if debug == False:
					os.system("esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -o enable_local -d " + i['Name'])
					os.system("esxcli storage core claiming reclaim -d " + i['Name'])
			if i['IsSSD'] == '1':
				ssd = i['Name']
			else:
				md.append(i['Name'])

	diskgroup_cmd = 'esxcli vsan storage add -s ' + ssd + ' -d ' + ' -d '.join(md)
	syslog.syslog(vsan_syslog_key + " Creating VSAN Disk Group using SSD: " + ssd +  " MD: " + ', '.join(md))
	if debug == False:
		os.system(diskgroup_cmd)

# Create VSAN Cluster (required only for first ESXi node)
def createVsanCluster():
	# generate UUID for VSAN Cluster
	vsan_uuid = str(uuid.uuid4())

	syslog.syslog(vsan_syslog_key + " Creating VSAN Cluster using UUID: " + vsan_uuid)
	if debug == False:
		os.system("esxcli vsan cluster join -u " + vsan_uuid)

createVsanDiskGroup()
createVsanCluster()

If you would like to see more details on creating ESXi Kickstart scripts, make sure to check out my ESXi 4.x & 5.x examples here.

Line 6-9 These are generic Kickstart configurations specifying the EULA, how to install, the root password, etc. You can refer to VMware's scripted install documentation.

Line 11-25 This extracts the IP Address handed out by DHCP (treating it as a static allocation) and re-creates the network configuration in Line 25 to statically assign that IP Address to the ESXi host

Line 27 This starts the firstboot script and assumes "Busybox" as the interpreter, which means basic shell commands

Line 30 I create a custom key which will be logged in syslog for our installation

Line 32-41 Basic ESXi configurations leveraging vim-cmd and ESXCLI

Line 43-45 Configure the VSAN default storage policy, please refer to this article for more details.
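If you want to double-check what the host ends up with after the build, the active defaults can be dumped with:

esxcli vsan policy getdefault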

Line 47-48 Configure the VSAN Traffic type on vmk0

Line 50-51 Assign a license to the ESXi host

Line 53 This starts the second firstboot script, but now using "Python"

Line 55 Importing the appropriate libraries that will be used in the Python script

Line 58 Using the same custom key that I created earlier for logging to syslog

Line 61-81 A method for creating the VSAN Disk Group by inspecting the vdq CLI output and marking disks as local

Line 83-90 A method for creating VSAN Cluster, please refer to this article for more details.

Line 92-93 Invoking the two Python methods. You can either create a custom Kickstart for your "first" ESXi node if you decide to bootstrap your VSAN Cluster onto a single ESXi host, or use custom boot options to specify whether the ESXi host being provisioned is the first node or an additional one. This topic is a bit advanced, but if you are interested, take a look at this article here.
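Conceptually, the only difference between the first node and any additional nodes is which UUID gets passed to the cluster join command. Here is a rough sketch (the existing cluster UUID below is a placeholder you would substitute):

# first ESXi node: generate a brand new VSAN cluster UUID and join it (bootstrapping the cluster)
esxcli vsan cluster join -u $(python -c 'import uuid; print str(uuid.uuid4())')

# additional ESXi nodes: join the UUID that the first node generated
esxcli vsan cluster join -u <EXISTING-VSAN-CLUSTER-UUID>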

Categories // Automation, ESXCLI, ESXi, VSAN, vSphere, vSphere 5.5 Tags // ESXi 5.5, kickstart, ks.cfg, VSAN, vSphere 5.5

Quick Update - ESXi support for Apple Mac Pro 6,1

07.18.2014 by William Lam // 3 Comments

I know many of you have been asking about ESXi support for the latest Mac Pro 6,1 that Apple released late last year, and I just wanted to give a quick update. VMware Engineering has been hard at work on getting this new platform certified and supported with ESXi; however, there were some unforeseen challenges that are currently preventing the current version of ESXi from running on the new Mac Pro.

VMware is working closely with Apple's hardware team to resolve these issues and we expect to have a Mac Pro 6,1 supported with ESXi 5.5 in the future. In the meantime, if you wish to evaluate ESXi on the new Mac Pro (though not officially supported), you can sign up for the new vSphere Beta and run a Beta version of ESXi on the new Mac Pro.

Here is a screenshot of Mac Pro 6,1 running the Beta version of ESXi:

[Screenshot: esxi-mac-pro-6.1-0]

There are a couple of workarounds that are required right now, all of which will be resolved by GA. For more details, please refer to this VMTN thread.

Categories // Apple, ESXi, vSphere Tags // apple, ESXi, mac pro, vSphere 5.5
