WilliamLam.com

Search Results for: advanced load balancer

How to setup private GitLab on a Synology for Project Keswick?

09.26.2023 by William Lam // 3 Comments

My recent blog post on setting up a custom vSphere Content Library on my Synology gave me another idea, one I had been mulling over for Project Keswick, which was announced back at VMware Explore Las Vegas.

If you have network connectivity to the Keswick Cloud Service, you can easily associate a Git repository, which is used for host configurations and workload deployments via GitOps, using GitHub or even a privately managed GitLab instance. For organizations with additional compliance, security or air-gapped requirements, using the Keswick Cloud Service may not be an option. With that said, Project Keswick supports an advanced deployment option where the association of a Git repository, such as GitLab, can be accomplished without requiring the use of the Keswick Cloud Service.

While I have experience using both GitHub and GitLab, which VMware uses to host its own code repositories, I had actually never set up my own GitLab instance before. I thought this would be a great learning opportunity, especially with the ability to run additional add-on applications on a Synology.

After a bit of research online, I found that GitLab can easily run as a container workload, and it just so happens that Synology DiskStation Manager (DSM) has a package for running containers, creatively called Container Manager. Below are the step-by-step instructions for setting up GitLab on Synology DSM 7.2.
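
To give a rough idea of what Container Manager will be running under the hood, here is a minimal sketch of the equivalent GitLab CE container definition. The hostname, the host paths under /volume1/docker and the remapped ports are assumptions for a typical Synology setup, so adjust them for your environment.

# Hedged sketch: GitLab CE as a container, roughly what Container Manager runs
# (hostname, host paths and port mappings below are illustrative assumptions;
# ports are remapped so they do not collide with DSM's own services)
docker run --detach \
  --hostname gitlab.example.com \
  --publish 8443:443 --publish 8080:80 --publish 2222:22 \
  --name gitlab \
  --restart always \
  --volume /volume1/docker/gitlab/config:/etc/gitlab \
  --volume /volume1/docker/gitlab/logs:/var/log/gitlab \
  --volume /volume1/docker/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest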

[Read more...]

Categories // Automation Tags // GitLab, Synology

Test driving VMware Photon Controller Part 1: Installation

04.12.2016 by William Lam // 11 Comments

Several weeks back, the Cloud Native Apps team at VMware released a significant update to their Photon Controller platform with their v0.8 release, focused on simplified management and support for Production scale. For those of you who are not familiar with Photon Controller, it is an infrastructure stack purpose-built for cloud-native applications. It is a highly distributed and scale-out control plane designed from the ground up to support multi-tenant deployments that require elasticity, high churn and self-healing. If you would like more details about the v0.8 release, be sure to check out this blog post here by James Zabala, Product Manager in the Cloud Native Apps team.

[Image: Photon Controller architecture]
One of the most visible enhancements in the v0.8 release is the introduction of a UI for installing and managing Photon Controller. Previously, the only way to deploy Photon Controller was a pre-configured appliance that required customers to have a particular network configuration for their infrastructure. Obviously, this was not ideal and it made it challenging for customers to evaluate Photon Controller in their own specific environments. With this new update, customers can now easily deploy Photon Controller into their own unique environment using a UI provided by a Virtual Appliance (OVA). This Virtual Appliance is only used for the initial deployment of Photon Controller and is no longer needed afterwards. Once Photon Controller is up and running, you can manage it using either the CLI or the new management UI.

In this first article, I will take you through the steps of deploying Photon Controller onto an already provisioned ESXi host. We will have a quick look at the Photon CLI and how you can use it to interact with Photon Controller, and lastly, we will take a look at the new Photon Controller Management UI. In future articles, we will look at deploying our first VM using Photon Controller as well as run through the different cluster orchestration solutions that Photon Controller integrates with.

  • Test driving VMware Photon Controller Part 1: Installation
  • Test driving VMware Photon Controller Part 2: Deploying first VM
  • Test driving VMware Photon Controller Part 3a: Deploying Kubernetes
  • Test driving VMware Photon Controller Part 3b: Deploying Mesos
  • Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

To start using Photon Controller, you will need at least one physical ESXi 6.x host (4 vCPU / 16GB memory / 50GB storage) with some basic networking capabilities, which you can read more about here. Obviously, if you really want to see Photon Controller in action and what it can do, having additional hosts will definitely help. If you do not have a dedicated ESXi host for use with Photon Controller, the next best option is to leverage Nested ESXi. The more resources you can allocate to the Nested ESXi VM, the better your experience will be, and the more of the cluster orchestration workflows you will be able to exercise. If you have access to a physical ESXi host, you can skip steps 3 and 4.

For this exercise, I will be using my Apple Mac Mini which is running the latest version of ESXi 6.0 Update 2 and has 16GB of available memory and 100+GB of local storage.

Deploying Photon Controller

Step 1 - Download both the Photon Controller Installer OVA as well as the Photon CLI for your OS platform from here.

Step 2 - Deploy the Photon Controller Installer OVA using either the ovftool CLI directly against an existing ESXi host or using the vSphere Web/C# Client connected to a vCenter Server. For more detailed instructions, please have a look at this blog article here.
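
If you go the ovftool route against a standalone ESXi host, the invocation looks roughly like the sketch below. The OVA filename, datastore, network and host IP are placeholders for your environment (ovftool will prompt for the host's root password).

# Hedged sketch: deploying the installer OVA with ovftool to a standalone host
# (filename, datastore, network and IP are placeholders)
ovftool --acceptAllEulas \
  --datastore=datastore1 \
  --network="VM Network" \
  --name=photon-installer \
  photon-controller-installer.ova \
  'vi://root@192.168.1.100/'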

Step 3 (optional) - Download the Nested ESXi 6.x Virtual Appliance from here, which also includes instructions on how to deploy the Nested ESXi VA. Make sure the Nested ESXi 6.x VA is version v5.0, as earlier versions will not work. You can refer to the screenshot in the next step if you are wondering where to look.

Step 4 (optional) - Deploy the Nested ESXi OVA with at least 4 vCPU and 16GB of memory, and increase the storage for the 3rd VMDK to at least 50GB. If you have a vCenter Server, you can deploy using either the vSphere Web or C# Client as shown in the screenshot below:

[Screenshot: Deploying the Nested ESXi OVA]
Make sure you enable SSH (currently required by Photon Controller) and enable the local datastore unless you have shared storage to connect to the Nested ESXi VM (VSAN is currently not supported with Photon Controller). If you only have an ESXi host, then you can deploy using the ovftool CLI, which can be downloaded here, following the instructions found here and the sketch below.
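
A hedged sketch of the Nested ESXi VA deployment via ovftool is shown below. The guestinfo property names are assumptions based on the VA's OVF properties, so confirm them against the deployment instructions linked above; you can grow the 3rd VMDK to 50GB afterwards using the vSphere Client.

# Hedged sketch: deploying the Nested ESXi VA with ovftool and enabling SSH
# (guestinfo property names and filenames are assumptions -- verify them
# against the VA's own deployment instructions)
ovftool --acceptAllEulas --powerOn \
  --X:injectOvfEnv \
  --datastore=datastore1 \
  --network="VM Network" \
  --name=nested-esxi-1 \
  --prop:guestinfo.hostname=nested-esxi-1 \
  --prop:guestinfo.ssh=True \
  Nested_ESXi6.x_Appliance_Template_v5.0.ova \
  'vi://root@192.168.1.100/'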

Note: If you have more than one Nested ESXi VM, you will need to set up shared storage, or you may run into issues when images are replicated across the hosts. The added benefit is that you are not wasting local storage replicating the same images over and over.

At this point, you should have the Photon Controller Installer VM running and at least one physical or Nested ESXi host powered on and ready to go.

UPDATE (04/25/16): Please have a look at this article on How to override the default CPU/Memory when deploying Photon Controller Management VM? which can be very helpful for resource-constrained environments.

Step 5 - Next, open a browser to the IP Address of your Photon Controller Installer VM, whether that is an address you specified or one that was automatically obtained via DHCP. You should be taken to the installer screen as seen in the screenshot below.

[Screenshot: Photon Controller installer welcome screen]
Step 6 - Click on the "Get Started" button and then accept the EULA.

Step 7 - The next section is "Management", where you will define the ESXi host(s) to run the Photon Controller Management VMs. If you only have one ESXi host, then you will also want to check the "Also use as Cloud Host" box, in which case the ESXi host will be used to run both the Photon Controller Management VM as well as the workload VMs. In a real Production environment, you will most likely want to separate these out as a best practice, so as not to mix your management plane with your compute workload.

The Host IP will be the IP Address (yes, you will have to use IP Addresses, as hostnames are not currently supported) of your first ESXi host. Following that, you will need to provide credentials for the ESXi host as well as the datastore and networking configurations that the Photon Controller VM will be deployed to.

[Screenshot: Management host configuration]
Note: One important thing to note is that the installer will dynamically size the Photon Controller Management VM based on the available resources of the ESXi host. Simply speaking, it will consume as much of the available resources as it can (taking into consideration powered-off VMs, if they exist), depending on whether it is purely a "Management" and/or "Cloud" host.

Step 8 - The next section is "Cloud", where you will specify additional ESXi host(s) that will run your workloads. Since we only have a single host, we already accounted for this in the previous step and will skip this. If you do have additional hosts, you can specify either individual IPs or a range of IPs. If you have hosts with different credentials, you can add additional logical groups by simply clicking the "Add host group" icon.

Step 9 - The last page is "Global Settings", where you have the ability to configure some of the advanced options. For a minimal setup, you only need to specify the shared storage for the images as well as deploy a load balancer, which is part of the installer itself. If you only have a single host, then you can specify the name of your local datastore or the shared datastore that you have already mounted on your ESXi host. In my environment, the datastore name is datastore1. If you have multiple ESXi hosts that *only* have local datastores, make sure they are uniquely named, as there is a known bug where different hosts cannot have identically named datastores. In this case, you would list all the datastore names in the box (e.g. datastore1, datastore2).

Make sure to also check the box "Allow cloud hosts to use image datastore for VM Storage" if you wish to allow VMs to also be deployed to these datastores. All other settings are optional, including deploying the Lightwave identity service; you can refer to the documentation for more details.

[Screenshot: Global Settings]
Step 10 - Finally, before you click on the "Deploy" button, I recommend that you export your current configuration. This allows you to easily adjust the configuration without having to re-enter it into the UI, and if you get a failure you can easily re-try. This is a very handy feature, and I hope to see it in other VMware-based installers. Once you are ready, go ahead and click on the "Deploy" button.

[Screenshot: Deployment success screen]
Depending on your environment and resources, the deployment can take anywhere from 5-10 minutes. The installer will discover your ESXi hosts and the resources you specified earlier; it will then install an agent on each of the ESXi hosts to allow Photon Controller to communicate with them, deploy the Photon Controller Management VM, and finally upload the necessary images from the Photon Controller Installer VM over to the Image Datastores. If everything was successful, you should see the success screen shown above.

Note: If you run into any deployment issues, the most common cause is resource related. If you did not size the Nested ESXi VM with at least the minimal configuration, you will definitely run into issues. If you do run into this situation, go ahead and re-size your Nested ESXi VMs and then re-initialize the Photon Controller Installer VM by jumping to the Troubleshooting section at the bottom of this article, where I document the process.

Exploring Photon CLI

At this point, we will switch over to the Photon CLI that you downloaded earlier and use it to interact with the Installer VM to get some information about our deployed Photon Controller instance. The Photon CLI uses the Photon REST API, so you could also interact with the system using the API rather than the CLI. We will quickly cover the REST API in a later section in case you are interested in using it.

Step 1 - Another way to verify that our deployment was successful is to point the Photon CLI to the IP Address of the Photon Controller Installer VM by running the following command:

./photon target set http://192.168.1.250

[Screenshot: photon target set output]
Step 2 - Here, we can list the deployments performed by the Installer VM by running the following command:

./photon deployment list

Step 3 - Using the deployment ID from the previous step, we can then get more details about a given deployment by running the following command and specifying the ID:

./photon deployment show de4d276f-16c1-4666-b586-a800dc83d4d6

[Screenshot: photon deployment show output]
As you can see from the output, we get a nice summary of the Photon Controller instance that we just deployed. What you are looking for here is that the State property shows "Ready", which means we are now ready to start using the Photon Controller platform. From here, we can also see the IP Address of the load balancer that was set up for us within the Photon Controller Management VM, which in this example is 192.168.1.150.

Step 4 - To interact with our Photon Controller instance, we will need to point the Photon CLI to the IP Address of the load balancer and specify port 28080. If you had enabled authentication using the Lightwave identity service, you would use port 443 instead.

./photon target set http://192.168.1.150:28080

Step 5 - Once you have pointed the CLI at your Photon Controller, you can also check the state of the overall system and its various components by running the following command:

./photon system status

[Screenshot: photon system status output]
Step 6 - To get the list of ESXi hosts that are part of a given deployment, we can use the deployment ID from Step 2 and run the following command, which will give you some basic information, including whether each ESXi host is serving as a "Management" or "Cloud" host:

./photon deployment list-hosts de4d276f-16c1-4666-b586-a800dc83d4d6

[Screenshot: photon deployment list-hosts output]
Step 7 - To show more details about a given ESXi host, we just need to take the host ID from the previous step and run the following command:

./photon host show ce37fca9-c8c6-4986-bb47-b0cf48fd9724

[Screenshot: photon host show output]
Note: I noticed that the ESXi host's root password is displayed in this output. I have already reported this internally and it will be removed in a future update, as the password should not be displayed, especially in plaintext.

Hopefully this gives you a quick primer on how the Photon CLI works and how you can easily interact with a given Photon Controller deployment. If you would like more details on Photon CLI, be sure to check out the official documentation here.

Exploring Photon Controller API

Photon Controller also provides a REST API, which you can explore using the built-in Swagger interface. You can connect to it by opening a browser to the following address: https://[photon-controller-load-balancer]:9000/api. For those of you who have not used Swagger before, it's a tool that allows you to easily test drive the underlying API while providing interactive documentation on the specific APIs that are available. This is a great way to learn about the Photon Controller API, and it allows you to try it out without having to write a single line of code.
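
Since the Photon CLI simply wraps this REST API, you can issue the same queries with curl. The endpoint paths below are assumptions inferred from the CLI commands earlier in this article, so verify them against the Swagger interface:

# Hedged sketch: querying the Photon Controller API directly
# (endpoint paths are assumptions -- confirm them in the Swagger UI)
curl -s http://192.168.1.150:28080/deployments
curl -s http://192.168.1.150:28080/status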

[Screenshot: Photon Controller Swagger UI]

Exploring Photon Controller UI

Saving the best for last, we will now take a look at the new Photon Controller Management UI. To access the UI, you just need to open a browser to the IP Address of the Photon Controller load balancer. In this example, it is 192.168.1.150, and once loaded, you should be taken to the main dashboard.

[Screenshot: Photon Controller dashboard]
If you recall, in the Photon CLI example we had to run through several commands to get the overall system status as well as the list of ESXi hosts participating in either a "Management" or "Cloud" role. With the UI, this is literally a single click!

[Screenshot: Photon Controller hosts view]
There are other objects within the UI that you may notice while exploring, but we will save those for the next article, in which we will walk through the process of provisioning your first Virtual Machine using Photon Controller.

Troubleshooting

Here are some useful things I learned from the Photon Controller team while troubleshooting some of my initial deployments.

The following logs are useful to look at during a failed deployment and will usually give some hints as to what happened. You can find them by logging into the Photon Controller Installer VM:

  • /var/log/esxcloud/management-api/management-api.log
  • /var/log/esxcloud/deployer/deployer.log

If you need to restart or re-deploy using the Photon Controller Installer VM, there is some cleanup that you need to do (in the future, there will be an easier way to re-initialize without going through this process). To do so, SSH to the Photon Controller Installer VM using the username esxcloud and the password vmware. Next, change over to the root user via the su command; the password will be the one you set earlier:

su - root
rm -rf /etc/esxcloud/deployer/deployer/sandbox_18000/
rm -rf /etc/esxcloud/cloud-store/cloud-store/sandbox_19000/
reboot

Once the Photon Controller Installer VM has come back up, you will need to restart the Docker Container for the UI by running the following command:

docker restart ui_installer

This is required because the container currently does not restart correctly upon reboot. This is a known issue and will be fixed in a future update. Before opening a browser to the installer UI, you can run the following command to ensure all Docker Containers have started successfully:

docker ps -a

[Screenshot: docker ps -a output]

Categories // Automation, Cloud Native, ESXi, vSphere 6.0 Tags // cloud native apps, ESXi, Photon Controller

Community stories of VMware & Apple OS X in Production: Part 2

08.06.2014 by William Lam // 1 Comment

After sharing VMware's story on how they leverage Apple Mac Minis for their OS X build infrastructure, I thought it was only fair to reach out to Yoann Gini to see if he would also like to share some of his experiences working with VMware and Apple OS X. I was able to catch up with Yoann, and you can find our chat transcript below.

Company: Fortune 500
Product: VMware vSphere
Hardware: Apple Mac Mini

[William] - Hi Yoann, I appreciate you taking some time out of your evening to share with us some of your experiences working with VMware ESXi and Apple OS X. Your recent tweet was really the motivation behind this series, so thank you. Before we dive in, can you quickly introduce yourself?

[Yoann] - I'm a French computer scientist, working as a freelance consultant and trainer on Apple products for Enterprise and Education. I also work on network architecture and security, doing reverse engineering for fun in my spare time. All Apple OS X focused. You can find more details on my website.

[William] - Awesome. So, based on your tweet, I assume you have some experience working with Mac Minis and VMware vSphere? Can you share with us some of the customer environments you have been in and how you have solved their challenges leveraging vSphere?

[Yoann] - Yes, I have two main setups with vSphere at this time (plus my lab). One with 10 Mac Minis hosting up to 20 OS X VMs, which are basically build agents for an iOS forge for a Fortune 500 company (I can't tell you the number of iOS projects built on it). The other one with three Mac Minis hosting two VMs, one for Open Directory, DNS and File Sharing, and the other for e-mail, serving around 500 users.

[William] - Wow, Mac Minis really being used in a Production environment! How cool! What was the reason for selecting the Mac Mini versus an Xserve or Mac Pro? How did the customer react to using a non-supported platform? Were there any challenges?

[Yoann] - When these two projects started, the Xserve had already been discontinued, so it wasn't an option. For Mac Mini vs Mac Pro, it was only a matter of reasonable risk versus unreasonable cost. The Mac Mini is unsupported by VMware and Apple as a virtualization node, but it's really cheap and it works. The Mac Pro is supported, but it is so expensive, with the following challenges:

- it doesn't fit in a server rack
- it can't be exploited at 100% (especially the new Mac Pro, with its super duper graphics card that is totally useless for most server jobs)
- it really can't be exploited at 100% if you read the Apple EULA, which seems to not allow running more than 2 (or maybe 3) Apple OS X VMs per Mac hardware…

That last point was the most important in the decision for one of my customers: buying expensive, officially supported hardware can be OK if at least we can run a lot of Apple OS X VMs on it. But the Apple limitation is a real PITA when you try to develop Apple OS X Server and virtualization in the Enterprise. It is so absurd that in the end, customers prefer to put the same amount of money into multiple Mac Minis instead of one good Mac Pro. It allows hardware redundancy for the same price plus an iSCSI storage, and it offsets the risk due to unsupported hardware.

For me, the real challenge is there: the legal imbroglio around Apple licensing (and contacting an Apple SE about this subject does not help; the only answer is, ask your legal department).

There are other challenges too: IT is against everything with an Apple on it. It is always fun to start a meeting by telling the team in charge of virtualization that they will have to support an unsupported, small form factor system without a redundant power supply. But we always find a solution; Apple Consultants are used to this situation. It's a common denominator of all OS X and iOS deployments in the enterprise.

[William] - Interesting, so it looks like the Apple EULA played a pretty large role in the organization's decision. At this point, you had selected the hardware platform and you knew you were going to virtualize on vSphere. Can you talk a little bit about the applications? Was this a new environment you were building out, or was this a migration from an existing infrastructure?

[Yoann] - For the iOS forge, it was a new environment. The system was a Java-based application and a pilot had been done in the past, so it was a blank page here: a project led by growing company needs as iOS software demand increased. The more traditional server setup, with all the internal services like directory service, DNS, mail, etc., was an existing setup on a dying Xserve. We did the migration to vSphere to take away all the hardware problems (we were getting more and more disk failures and random problems on the Xserve towards the end).

[William] - For the environment where you had to migrate your existing Apple OS X systems running on the Xserve, what type of tools did you leverage? Were there any tips and tricks you used, or things people should look out for if they are attempting a similar migration?

[Yoann] - We took the opportunity of the hardware change to virtualize the systems and migrate to a newer system version, so we just followed the recommended migration path for this situation. We installed a new system on the vSphere setup and then imported our data into it with a combination of the directory export/import feature and rsync for files.

It was really simple with Apple OS X Server; you just have to ensure that your directory service is there and then put all the data in the right place before starting each service. Another option is to use a common Apple OS X imaging system like DeployStudio or Carbon Copy Cloner to create an image from your existing system and deploy it on your virtual system.

It's not as simple as vCenter Converter, but when we did our "state of the art" migration, we had only a 5 minute shutdown on a Sunday morning. All the linked services like TSE, Citrix, Cisco Call Manager and custom apps didn't notice a thing. Only a reboot was needed for the Windows-based systems.

[William] - Very nice, it sounds like you have the process pretty much nailed down. How about after everything has been migrated over to vSphere? How does the customer manage the environment; are they running vCenter Server, or are these standalone systems?

[Yoann] - In this setup, we have a vCenter Server and we use the vSphere Web Client to manage it. By the way, it works like a charm from Safari on OS X; no more need for a Windows VM on our Macs to manage the setup and create new VMs.

[William] - I am with you on that; I too used to run a Windows VM just to use the vSphere C# Client. I'm glad I can use the vSphere Web Client on my Apple OS X system to manage my vSphere environment. In terms of Apple OS X guest management, how do you go about handling that, and how do you go about provisioning new Virtual Machines?

[Yoann] - Just like with any other Mac hardware, since ESXi supports NetBoot, I can use my existing provisioning system for free. I know vSphere includes some provisioning features to create VMs on the fly when needed, but I haven't had the time to play with them properly. In the end, Apple OS X VMs are just like real Macs, with HA in addition; I use all my pre-existing systems without a change. It can even simplify my deployment (no need for Xsan and a Load Balancer for HA, for example).

[William] - Yoann, these are some great tips! I want to thank you very much for taking the time to share with us your experiences running Production Apple OS X workloads using VMware vSphere and Apple Mac Minis. Before I let you go, I wanted to ask if you had any recommendations for others looking either to virtualize their existing Apple OS X deployments or to build out a new environment using VMware?

[Yoann] - Yeah, talking about HA reminds me of some existing setups I have. There are customer setups I created, and still maintain, that use Xsan (Apple's cluster file system) with a Barracuda Load Balancer in front of two or more OS X Servers to handle HA for all services (web, file sharing, databases, etc.).

It works, but it's hard to maintain and definitely not accessible to inexperienced system administrators. If I had to do it again, this kind of setup would land directly on a vSphere system with Fault Tolerance and things like that. It would be cheaper in so many ways (iSCSI instead of Fibre Channel, less time consuming, no need for advanced knowledge of all the network protocols, no need to play with clustered systems like MySQL Cluster, which is a real PITA to make work, etc.).

I have also considered deploying free ESXi for all new setups, whether on a Mac Mini or a Mac Pro. The only challenge is that there is no vCenter Server with free ESXi, and you would need a Windows VM to be able to use the legacy vSphere C# Client. If you want or need to use the vSphere Web Client, you would need a vCenter Server license. However, the vSphere Essentials Kit is not that expensive, and it makes sense for SMBs.

With this kind of setup, it is really easy to manage: simple to deploy a new VM, simple hardware redundancy, and it can easily be expanded in the future. Keep everything simple. Need to add a Windows server for accounting? Add a VM. Need HA? Add a Mac Mini and iSCSI storage. No service interruption.

If you are interested in sharing your story with the community (can be completely anonymous) on how you use VMware and Mac OS X in Production, you can reach out to me here.

  • Community stories of VMware & Apple OS X in Production: Part 1
  • Community stories of VMware & Apple OS X in Production: Part 2
  • Community stories of VMware & Apple OS X in Production: Part 3
  • Community stories of VMware & Apple OS X in Production: Part 4
  • Community stories of VMware & Apple OS X in Production: Part 5
  • Community stories of VMware & Apple OS X in Production: Part 6
  • Community stories of VMware & Apple OS X in Production: Part 7
  • Community stories of VMware & Apple OS X in Production: Part 8
  • Community stories of VMware & Apple OS X in Production: Part 9
  • Community stories of VMware & Apple OS X in Production: Part 10

 

Categories // Apple, ESXi, vSphere Tags // apple, mac mini, osx, vmware, vSphere, xserve
