WilliamLam.com


Community stories of VMware & Apple OS X in Production: Part 6

09.10.2014 by William Lam // Leave a Comment

Company: Public Education K-12
Software: VMware vSphere
Hardware: Xserve

[William] - Hi Pete, thanks for reaching out on Twitter and offering to share your experiences in managing VMware and Apple OS X in an academic environment. Can you start off by quickly introducing yourself and what your role is currently?

[Pete] - My name is Pete Wann; I've been a sysadmin for over 15 years, mostly in education. I switched to Mac at the OS X transition because I was really interested in the Unix (BSD) foundation. My interest in Unix was piqued by my exposure to Solaris in the military, and since then I've tried to focus my career around all the various flavors out there. It just so happens that I like Macs, and it's been a good niche to be in. The community is awesome and ridiculously supportive.

My current role is as a Principal Systems Technologist at Oracle. I work for our Global IT group, but I primarily support a subset of our Marketing department. I'm responsible for the infrastructure around our video, print, and web production efforts. Although, the specific implementation we're going to discuss was done at my last position, with a large school district in Alaska.

[William] - Thanks for the background Pete. So I hear you were involved in an implementation that involved VMware and Apple OS X Technologies, can you share with us some more details about the environment?

[Pete] - Well, as you know, Apple discontinued the Xserve in 2010. (boo! hiss!) This was disastrous for that environment: the schools were very far apart, and our WAN links were slow and sometimes tenuous. Between that and some decisions made before I arrived about how home directories were handled, we needed to have some kind of server presence in every school. Since we couldn't count on having someone in each school who was comfortable going into a server closet to reset a system, we really needed Lights-Out Management on whatever hardware we put out there.

Additionally, this was by far the largest Open Directory deployment that we (or Apple) had ever heard of. We had both computers and users in OD, and with our sometimes rickety WAN, we needed to have OD replicas as close to the clients as we could get, so again, a server presence in every school.

Eventually we migrated all of our user authentication over to AD, but still used OD for some computer management functions (mostly we used JAMF Casper for imaging and package deployment), so we still needed separate OD replicas for each school. (Each school was its own OU within OD so that we could distribute computer management tasks.)

[William] - I too remember the EOL announcement of the Xserve, it definitely had an impact on everyone who relied on that hardware. It sounds like you had a decent Apple Infrastructure, where was all this running? Physical or Virtual?

[Pete] - At the time, ESX did not support the Apple RAID card, so I could not use the internal storage with any of the systems I had available. That was fine with me, since I didn't want any moving parts on the hosts if I could avoid it, to hopefully increase longevity.

So, after much bugging of the powers-that-be, I got three licenses for vSphere for the three Xserves I scrounged from our secondary schools, removed all internal storage, then installed ESXi on a small USB drive on each host. I used the built-in iSCSI support in ESXi to connect to our NetApp storage, and integrated the Xserves with the rest of our vSphere environment, with full support for vMotion and everything. It was really easy, and worked insanely well.

We wound up virtualizing about 20 hosts across the three Xserves, mostly OS X, but also a couple of Linux hosts to act as web front-ends for our Casper environment. I fought hard to make the Xserves full-fledged members of the vSphere deployment, but my counterparts on the Windows side resisted harder. I still think that was a waste of available CPU power, but such is life.

[William] - Wow, this is pretty cool! I think this is the first implementation that I have heard of that leverages external storage w/Apple hardware. Could you share some details about the hardware specs for the Xserve and how you came to this particular configuration?

[Pete] - Well, in the case of the Xserves, we lucked out by having already ordered 77 of the last generation before Apple announced the end-of-production. We were in the process of transitioning from Xserve G5s to Intel in all the schools.

I was at the MacTech conference in LA when word came out that the Xserve was killed (Can you imagine the mood in that room?) and immediately got in touch with my boss to ask for as many more of the last generation as we could afford to buy. Initially my intention was to go with Parallels Server, and we did buy it and deploy it at a couple of sites, but let's just say that didn't go well, and I jumped off that path as soon as ESXi 5 was released.

Initially I wanted dual-processor systems with the internal SSD and maxed RAM (I believe 48GB on that model), and since I was still thinking in terms of what Parallels Server supported, I got 3 internal 1TB drives to use for local storage. Unfortunately, the option of adding the internal SSD as a fourth drive disappeared almost as quickly as it appeared, and we missed the window. I got the rest of what I asked for, though.

Once I discovered that ESXi 5 didn't support the Apple internal RAID controller, I had to find another solution for storage, since I didn't want to run everything (hypervisor and VM storage) on USB drives. Fortunately for me, our vSphere environment was already configured to connect to our NetApp NAS, so it was trivial to add that storage for the VMs once the Xserves were added as hosts to the vSphere DC.

I also managed to scrounge additional NICs for the Xserves to give the nodes more network capacity for the guest VMs. I think we ultimately wound up with six total 1Gb connections: 1 for management, 1 for vMotion and the like, and 4 on a vSwitch for guest VMs. The three Xserves were segregated into their own vDC to avoid confusion for our management and SysAdmins.

[William] - How did you go about monitoring this infrastructure? Any challenges or gotchas you found while building and managing this environment?

[Pete] - Honestly, no. We used all of the same management tools that we used for our wider vSphere environment, and it all just worked.

At the time, I believe they were implementing some monitoring tools from Symantec, but I left while that was still being implemented. Before that was in place, it was largely a manual process. I stayed as hands-off as possible once I had my environment up and running because I take a "less is more" approach to being a SysAdmin. 🙂

The ONLY gotcha, and it was very easily overcome, was the lack of support in ESXi 5.0 for the Apple internal RAID controller. That turned out to be good for us, as it forced us to use the existing vSphere infrastructure.

As for management, we just had to embrace a new way of deploying VMs, but there again, once I built a template for vSphere, it was trivial to deploy new Mac VMs, which I then configured as needed. If we'd had a larger environment, I would have leveraged tools like Puppet or Casper to auto-configure hosts to our needs.

[William] - In building out this environment, it sounds like you learned quite a bit. Was this something you already had some experience with, or were you learning on the job? If the latter, were there any key resources you leveraged that helped you build and manage such an infrastructure?

[Pete] - I had experience with VMware from my previous job, where I got involved in deploying new VMware nodes to help transition to a virtual datacenter. In truth, it worked so well and was so easy to set up, I didn't really need support except for gathering the specifics of our environment.

There was literally no difference between the setup for generic x86 hardware and Xserve as far as I could see. The only difference was that in addition to all the other guest OSes, we could also run OS X on these hosts.

[William] - Pete, I would like to thank you very much for your time this afternoon and sharing with us your experiences. I think this has been very informative/educational and should help others thinking about building or managing a similar type of environment. Before we finish up, do you have any words of wisdom or advice to others looking to start a similar project and perhaps also working in the academic/education field?

[Pete] - I would say that if you're thinking about it and if you think that virtualizing OS X will help, then go for it. It's actually easier than you probably think. Also, I'd say to remember that as a SysAdmin, managing up is just as important as managing your systems. Keep your eyes open to what's happening in your industry, and try to be prepared for new things and opportunities to save money and improve efficiency. Especially in public K12, budgets are shrinking, but demands (particularly on IT) are increasing. Don't be afraid to speak up if you think you can find a way to save money and provide the same or a better level of service for your students.

If you are interested in sharing your story with the community (can be completely anonymous) on how you use VMware and Mac OS X in Production, you can reach out to me here.

  • Community stories of VMware & Apple OS X in Production: Part 1
  • Community stories of VMware & Apple OS X in Production: Part 2
  • Community stories of VMware & Apple OS X in Production: Part 3
  • Community stories of VMware & Apple OS X in Production: Part 4
  • Community stories of VMware & Apple OS X in Production: Part 5
  • Community stories of VMware & Apple OS X in Production: Part 6
  • Community stories of VMware & Apple OS X in Production: Part 7
  • Community stories of VMware & Apple OS X in Production: Part 8
  • Community stories of VMware & Apple OS X in Production: Part 9
  • Community stories of VMware & Apple OS X in Production: Part 10

 

Categories // Apple, ESXi, vSphere Tags // apple, ESXi, osx, vSphere, xserve

How to deploy a Kubernetes Cluster on vSphere?

09.05.2014 by William Lam // 18 Comments

In the previous article, we walked through the installation of govmomi, the vSphere SDK for Go, and govc, a command-line interface that uses the SDK to expose vSphere functionality and is used by the Kubernetes vSphere Provider. Now that we have all the prerequisites installed, we are ready to deploy a Kubernetes Cluster onto a vSphere based infrastructure.

UPDATE (10/26/15) - It looks like the instructions for setting up a Kubernetes Cluster have since changed, and I have updated the instructions below. One of the main changes is that instead of building from source, we now just download the Kubernetes binaries.

Step 1 - You will need to download the latest Kubernetes binary (kubernetes.tar.gz), which can be found here. At the time of updating this article, the latest is v1.2.0-alpha2.

Step 2 - Go ahead and extract the contents of the kubernetes.tar.gz file by running the following command:

tar -zxvf kubernetes.tar.gz
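Before (or instead of) extracting everything, you can list the archive's contents to see its layout. This helper is just a convenience sketch; the paths inside the tarball vary by release:

```shell
# List the first entries of a gzipped tarball without extracting it.
# -t = list contents, -z = gzip, -f = archive file
list_archive() {
  tar -tzf "$1" | head -20
}

# Usage, once kubernetes.tar.gz has been downloaded:
# list_archive kubernetes.tar.gz
```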

Step 3 - Download the Kubernetes VMDK using either "wget" or "curl", depending on what is available on your system (my Mac only has curl by default). Here are the two commands, depending on which download utility you have access to:

wget https://storage.googleapis.com/govmomi/vmdk/kube.vmdk.gz{,.md5}
curl -O https://storage.googleapis.com/govmomi/vmdk/kube.vmdk.gz{,.md5}

Once the download has completed, you should now see two files in your working directory: kube.vmdk.gz and kube.vmdk.gz.md5
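Since an .md5 file is downloaded alongside the VMDK, it is worth verifying the download before decompressing it. This is only a sketch: the exact layout of the checksum file is an assumption (it simply looks for a 32-character hex digest), and it uses md5sum, which is standard on Linux; on OS X you would substitute md5 -q.

```shell
# Compare a file's MD5 digest against the first hash found in a checksum file.
verify_md5() {
  local file=$1 sumfile=$2
  local expected actual
  expected=$(grep -oE '[0-9a-f]{32}' "$sumfile" | head -1)
  actual=$(md5sum "$file" | awk '{print $1}')   # on OS X: md5 -q "$file"
  if [ -n "$expected" ] && [ "$expected" = "$actual" ]; then
    echo "checksum OK"
  else
    echo "checksum MISMATCH" >&2
    return 1
  fi
}

# Usage:
# verify_md5 kube.vmdk.gz kube.vmdk.gz.md5
```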

Step 4 - Next, we need to uncompress the VMDK by running the following command:

gzip -d kube.vmdk.gz

Step 5 - Once the VMDK has been extracted, we will need to upload it to a vSphere datastore. Before doing so, we need to set a couple of environment variables that provide connection details to your vSphere environment. Below are the commands to set the environment variables; you will need to replace the placeholders with information from your own environment.

export GOVC_URL='https://[USERNAME]:[PASSWORD]@[ESXI-HOSTNAME-IP]/sdk'
export GOVC_DATASTORE='[DATASTORE-NAME]'
export GOVC_DATACENTER='[DATACENTER-NAME]'
export GOVC_RESOURCE_POOL='*/Resources'
export GOVC_GUEST_LOGIN='kube:kube'
export GOVC_INSECURE=true

You can leave the last three variables as-is. GOVC_RESOURCE_POOL defines the full path to the root Resource Pool, which always exists on an ESXi host; for vCenter Server, it is the name of the vSphere Cluster or Resource Pool. GOVC_GUEST_LOGIN is the credentials for the Kubernetes Master/Node VMs, which are preset in the VMDK that was downloaded. The last variable, GOVC_INSECURE, needs to be set if your ESXi host or vCenter Server uses a self-signed SSL certificate.
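Since the remaining steps all depend on these variables, a quick guard can save some head-scratching. This is a hypothetical helper (bash-specific, using ${!var} indirection), not part of govc itself:

```shell
# Fail fast if any of the GOVC_* variables the upload and deploy steps rely on is unset.
check_govc_env() {
  local var missing=0
  for var in GOVC_URL GOVC_DATASTORE GOVC_DATACENTER GOVC_RESOURCE_POOL; do
    if [ -z "${!var}" ]; then
      echo "error: $var is not set" >&2
      missing=1
    fi
  done
  return $missing
}

# Usage: check_govc_env && govc datastore.import kube.vmdk kube
```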

To upload kube.vmdk to the vSphere datastore, under a kube directory that will be created for you, run the following command:

govc datastore.import kube.vmdk kube

Step 6 - We now have our base kube.vmdk uploaded to our ESXi host. Before we are ready to deploy our Kubernetes Cluster, we need to set the provider by running the following command:

export KUBERNETES_PROVIDER=vsphere

Step 7 - We are now ready to deploy the Kubernetes Cluster, which consists of a Kubernetes Master and 4 Kubernetes Minions, all derived from the kube.vmdk that we just uploaded. To do so, you will run the following command:

kubernetes/cluster/kube-up.sh

Note: If you see a message like "Docker failed to install on kubernetes-minion-N", it may be a timing issue in which the Minion was not yet up when the Master checked. You can verify this by running the command in the next step; otherwise, follow the instructions to bring down the Kubernetes Cluster and re-create it.

Step 8 - In the previous step, we deployed the Kubernetes Cluster, and you will see the assigned IP Addresses for the Master/Minions along with the auto-generated credentials for the Docker Containers. We can confirm that everything was created successfully by checking the number of running Minions with the following command:

cluster/kubecfg.sh list minions

Step 9 - Once we have confirmed we have 4 running Minions, we can now deploy a Docker Container onto our Kubernetes Cluster. Here is an example of deploying two nginx instances, mapping port 8080 to 80, by running the following command:

cluster/kubecfg.sh -p 8080:80 run dockerfile/nginx 2 myNginx

Step 10 - We should expect to see two "Pods" for the nginx instances we have instantiated, and we can confirm this by running the following command:

cluster/kubecfg.sh list pods

Here are some additional commands to stop and remove the Pods:

cluster/kubecfg.sh stop myNginx
cluster/kubecfg.sh rm myNginx

You can also bring down the Kubernetes Cluster (which will destroy the Master/Minion VMs) by running the following command:

cluster/kube-down.sh

Hopefully this gave you a good introduction to the new Kubernetes vSphere Provider. I would like to re-iterate that it is still under active development and the current build is an Alpha release. If you have any feedback/requests or would like to contribute, be sure to check out the Kubernetes and govmomi Github repositories and post your issues/pull requests.

Categories // Automation, Docker, ESXi, vSphere Tags // Docker, ESXi, go, govc, govmomi, Kubernetes, vSphere

govmomi (vSphere SDK for Go), govc CLI & Kubernetes on vSphere

09.04.2014 by William Lam // 15 Comments

One of the exciting announcements that was made last week at VMworld was the joint partnership between Docker, Google, Pivotal and VMware. Paul Strong (Office of the CTO) wrote a great blog post, Better Together – Containers are a Natural Part of the Software-Defined Data Center, where he goes into more detail about the partnership. The really neat part, which I think some people may have missed, is that this was more than just an announcement: there are active projects being worked on, most notably a working prototype of a Kubernetes vSphere Provider.

For those of you who are not familiar with Kubernetes, it is an open-source project that was started by Google which provides Container Cluster Management. You can think of Kubernetes as a placement engine/scheduler for Containers, similar to how vSphere DRS is responsible for scheduling Virtual Machines. The Kubernetes vSphere Provider allows you to run a Kubernetes Cluster on top of a vSphere based infrastructure and provides a platform for scheduling Docker Containers running on top of vSphere.

Kubernetes is completely written in Go (short for Golang), a programming language developed by Google. To be able to easily integrate with Kubernetes, a Go library needed to be written for the vSphere API, and hence govmomi was born! Similar to pyvmomi and rbvmomi, which are the vSphere SDKs for Python and Ruby respectively, govmomi is the vSphere SDK equivalent for Go. The govmomi project is an open source project led by VMware, and you can find the Github repository at https://github.com/vmware/govmomi.

In addition to govmomi, I also learned about a neat little CLI built on top of the SDK called govc (currently an Alpha release), which provides a simplified command-line interface to a vSphere environment leveraging govmomi. You can find the source code under the govmomi Github repository at https://github.com/vmware/govmomi/tree/master/govc. The Kubernetes vSphere Provider leverages govc to orchestrate the deployment of a Kubernetes Cluster on top of vSphere via the vSphere API.

To use govc, you will need to ensure you have Go 1.2+ installed on your system. Here are the steps for installing Go and govc:

Step 1 - Download the latest Go package installer for your OS here. Once you have Go installed, you can verify that everything is working by running the following command:

go version

Step 2 - Set up your build environment by running the following commands:

export GOPATH=$HOME/src/go
mkdir -p $GOPATH
export PATH=$PATH:$GOPATH/bin
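The exports above only last for the current shell session. If you want them to persist, you can append them to your shell profile; ~/.bash_profile here is an assumption, so adjust for whichever shell and profile file you actually use:

```shell
# Persist the Go workspace settings across sessions (assumes a bash login shell).
cat >> ~/.bash_profile <<'EOF'
export GOPATH=$HOME/src/go
export PATH=$PATH:$GOPATH/bin
EOF
```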

Step 3 - Check out the govc source code by running the following command:

go get github.com/vmware/govmomi/govc

At this point, govc has been installed. You can now connect to either a vCenter Server or ESXi host. The easiest way to specify the vSphere API endpoint and credentials is by setting a couple of environment variables, so you do not have to specify them on the command line.

Step 4 - Run the following command and specify the username, password and either the hostname or IP Address of your vCenter Server or ESXi host:

export GOVC_URL='https://[USERNAME]:[PASSWORD]@[ESXI-OR-VCENTER-HOSTNAME-OR-IP]/sdk'

Step 5 - To verify that everything is working, you can run the following command to query the endpoint you have connected to:

govc about

If everything was successful, you should see some basic information about the vSphere API endpoint you have connected to; in my case, a VCSA (vCenter Server Appliance). The govc CLI is quite similar to RVC, with commands broken up into various namespaces. However, one feature that is not there today is the ability to tab-complete the commands, which is something I just love about RVC!

You can also just run "govc" and it will provide a list of available commands.
You can get more details about each command by specifying the --help option; for example, govc host.info --help. To get information about one of my ESXi hosts, I need to specify the --host.ip option along with the IP address.
The output displays some basic information about my ESXi host, which is running on a Mac Mini. If you would like to learn more about govc, I highly recommend you check out the govc repository on Github, which has additional documentation. You can also file any bugs or feature requests you would like to see on the project page.

At this point, you are now ready to proceed to the next steps, which are to set up Kubernetes and deploy a Kubernetes Cluster onto your vSphere environment. Unfortunately, I ran into a problem while going through the Kubernetes deployment and did not know where to go next, so I decided to file a Github issue here. To my surprise, I immediately got a response back from the VMware Engineers who are working on the project. I had a couple of email exchanges with the team to debug the problem; it looks like we found the culprit, and I was able to get Kubernetes up and running. There are a couple of minor caveats which I will explain in more detail in Part 2 of this post, where I walk you through the steps of deploying a Kubernetes Cluster running on top of vSphere.

Categories // Automation, Docker, ESXi, vSphere Tags // container, Docker, go, golang, govc, govmomi, Kubernetes, vSphere


Author

William is Distinguished Platform Engineering Architect in the VMware Cloud Foundation (VCF) Division at Broadcom. His primary focus is helping customers and partners build, run and operate a modern Private Cloud using the VMware Cloud Foundation (VCF) platform.
