Test driving VMware Photon Controller Part 1: Installation

04.12.2016 by William Lam // 11 Comments

Several weeks back, the Cloud Native Apps team at VMware released a significant update to their Photon Controller platform with their v0.8 release, focused on simplified management and support for Production scale. For those of you who are not familiar with Photon Controller, it is an infrastructure stack purpose-built for cloud-native applications. It is a highly distributed and scale-out control plane designed from the ground up to support multi-tenant deployments that require elasticity, high churn and self-healing. If you would like to get more details about the v0.8 release, be sure to check out this blog post here by James Zabala, Product Manager in the Cloud Native Apps team.

[Image: Photon Controller architecture diagram]
One of the most visible enhancements in the v0.8 release is the introduction of a UI for installing and managing Photon Controller. Previously, the only way to deploy Photon Controller was by using a pre-configured appliance that required customers to have a particular network configuration for their infrastructure. Obviously, this was not ideal and it made it challenging for customers to evaluate Photon Controller in their own specific environments. With this new update, customers can now easily deploy Photon Controller into their own unique environment using a UI that is provided by a Virtual Appliance (OVA). This Virtual Appliance is only used for the initial deployment of Photon Controller and is no longer needed afterwards. Once Photon Controller is up and running, you can manage it using either the CLI or the new management UI.

In this first article, I will take you through the steps of deploying Photon Controller onto an already provisioned ESXi host. We will have a quick look at the Photon CLI and how you can use it to interact with Photon Controller, and lastly, we will take a look at the new Photon Controller Management UI. In future articles, we will look at deploying our first VM using Photon Controller as well as run through the different cluster orchestration solutions that Photon Controller integrates with.

  • Test driving VMware Photon Controller Part 1: Installation
  • Test driving VMware Photon Controller Part 2: Deploying first VM
  • Test driving VMware Photon Controller Part 3a: Deploying Kubernetes
  • Test driving VMware Photon Controller Part 3b: Deploying Mesos
  • Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

To start using Photon Controller, you will need at least one physical ESXi 6.x host (4vCPU / 16GB memory / 50GB storage) with some basic networking capabilities, which you can read more about here. Obviously, if you really want to see Photon Controller in action and what it can do, having additional hosts will definitely help. If you do not have a dedicated ESXi host for use with Photon Controller, the next best option is to leverage Nested ESXi. The more resources you can allocate to the Nested ESXi VM, the better your experience will be, in addition to the number of cluster orchestration workflows you will be able to exercise. If you have access to a physical ESXi host, you can skip steps 3 and 4.

For this exercise, I will be using my Apple Mac Mini which is running the latest version of ESXi 6.0 Update 2 and has 16GB of available memory and 100+GB of local storage.

Deploying Photon Controller

Step 1 - Download both the Photon Controller Installer OVA as well as the Photon CLI for your OS platform from here.

Step 2 - Deploy the Photon Controller Installer OVA using either the ovftool CLI directly against an existing ESXi host or using the vSphere Web/C# Client connected to a vCenter Server. For more detailed instructions, please have a look at this blog article here.
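
If you are going the ovftool route, the command looks something like the following. This is a minimal sketch assuming DHCP on the destination portgroup; the OVA filename, datastore name, network name and ESXi host address are all examples that you will need to substitute for your own environment:

ovftool --acceptAllEulas --diskMode=thin \
  --datastore=datastore1 --network="VM Network" \
  --name=photon-installer --powerOn \
  photon-controller-installer.ova 'vi://root@192.168.1.100/'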

Step 3 (optional) - Download the Nested ESXi 6.x Virtual Appliance, which you can find here along with instructions on how to deploy it. Make sure the Nested ESXi 6.x VA is version v5.0, as earlier versions will not work. You can refer to the screenshot in the next step if you are wondering where to look.

Step 4 (optional) - Deploy the Nested ESXi OVA with at least 4vCPU and 16GB of memory, and increase the storage for the 3rd VMDK to at least 50GB. If you have a vCenter Server, you can deploy using either the vSphere Web or C# Client as shown in the screenshot below:

[Screenshot: deploying the Nested ESXi OVA via the vSphere Client]
Make sure you enable SSH (currently required for Photon Controller) and enable the local datastore unless you have shared storage to connect to the Nested ESXi VM (VSAN is not currently supported with Photon Controller). If you only have an ESXi host, then you can deploy using the ovftool CLI, which can be downloaded here, and follow the instructions found here.

Note: If you have more than one Nested ESXi VM, you will need to set up shared storage, or else you may run into issues when images are being replicated across the hosts. An added benefit is that you are not wasting local storage just to replicate the same images over and over. A quick example of mounting shared storage is shown below.
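
For instance, if you have an NFS server available, you can mount the same NFS export as a datastore on each Nested ESXi host from the ESXi Shell; a minimal sketch where the NFS server address, export path and datastore name are placeholders for your environment:

esxcli storage nfs add -H 192.168.1.200 -s /volume1/photon-images -v nfs-datastore1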

At this point, you should have the Photon Controller Installer VM running and at least one physical or Nested ESXi host powered on and ready to go.

UPDATE (04/25/16): Please have a look at this article on How to override the default CPU/Memory when deploying Photon Controller Management VM? which can be very helpful for resource-constrained environments.

Step 5 - Next, open a browser to the IP Address of your Photon Controller Installer VM, whether that is an IP Address you specified or one that was automatically obtained via DHCP. You should be taken to the installer screen as seen in the screenshot below.

[Screenshot: Photon Controller installer welcome screen]
Step 6 - Click on the "Get Started" button and then accept the EULA.

Step 7 - The next section is "Management", where you will define the ESXi host(s) to run the Photon Controller Management VMs. If you only have one ESXi host, then you will also want to check the "Also use as Cloud Host" box, in which case the ESXi host will be used to run both the Photon Controller Management VM and the workload VMs. In a real Production environment, you will most likely want to separate these out as a best practice, so as not to mix your management plane with your compute workloads.

The Host IP will be the IP Address (yes, you will have to use an IP Address, as hostnames are not currently supported) of your first ESXi host. Following that, you will then need to provide credentials for the ESXi host as well as the datastore and networking configuration to which the Photon Controller VM will be deployed.

[Screenshot: Management host configuration in the installer]
Note: One important thing to note is that the installer will dynamically size the Photon Controller Management VM based on the available resources of the ESXi host. Simply put, it will consume as much of the available resources as it can (taking into consideration powered-off VMs if they exist), depending on whether it is purely a "Management" and/or "Cloud" host.

Step 8 - The next section is "Cloud", where you will specify additional ESXi host(s) that will run your workloads. Since we only have a single host, we already accounted for this in the previous step and will skip this. If you do have additional hosts, you can specify either individual IPs or a range of IPs. If you have hosts with different credentials, you can add additional logical groups by simply clicking the "Add host group" icon.

Step 9 - The last page is "Global Settings", where you have the ability to configure some of the advanced options. For a minimal setup, you only need to specify the shared storage for the images as well as deploy a load balancer, which is part of the installer itself. If you only have a single host, then you can specify the name of your local datastore or the shared datastore which you have already mounted on your ESXi host. In my environment, the datastore name is datastore1. If you have multiple ESXi hosts that *only* have local datastores, make sure they are uniquely named, as there is a known bug where different hosts cannot have the same datastore name. In this case, you would list all the datastore names in the box (e.g. datastore1, datastore2).

Make sure to also check the box "Allow cloud hosts to use image datastore for VM Storage" if you wish to allow VMs to also be deployed to these datastores. All other settings are optional, including deploying the Lightwave identity service; you can refer to the documentation for more details.

[Screenshot: Global Settings page in the installer]
Step 10 - Finally, before you click on the "Deploy" button, I recommend that you export your current configuration. This allows you to easily adjust the configuration without having to re-enter it into the UI, and if you run into a failure, you can easily re-try. This is a very handy feature and I hope to see it in other VMware-based installers. Once you are ready, go ahead and click on the "Deploy" button.

[Screenshot: successful deployment screen]
Depending on your environment and resources, the deployment can take anywhere from 5-10 minutes. The installer will discover the ESXi hosts and resources you specified earlier; it will then install an agent on each of the ESXi hosts, which allows Photon Controller to communicate with the hosts, deploy the Photon Controller Management VM and finally upload the necessary images from the Photon Controller Installer VM over to the Image Datastores. If everything was successful, you should see the success screen in the screenshot above.

Note: If you run into any deployment issues, the most common cause is most likely resource related. If you did not size the Nested ESXi VM with at least the minimal configuration, you will definitely run into issues. If you do run into this situation, go ahead and re-size your Nested ESXi VMs and then re-initialize the Photon Controller Installer VM by jumping to the Troubleshooting section at the bottom of this article, where I document the process.

Exploring Photon CLI

At this point, we will switch over to the Photon CLI that you downloaded earlier and use it to interact with the Installer VM to get some information about our deployed Photon Controller instance. The Photon CLI uses the Photon REST API, so you could also interact with the system using the API rather than the CLI. We will also quickly cover the REST API in this section in case you might be interested in using it.

Step 1 - Another method to verify that our deployment was successful is to point the Photon CLI at the IP Address of the Photon Controller Installer VM by running the following command:

./photon target set http://192.168.1.250

[Screenshot: photon target set output]
Step 2 - Here, we can list any of the deployments performed by the Installer VM by running the following command:

./photon deployment list

Step 3 - Using the deployment ID from the previous step, we can then get more details about a given deployment by running the following command and specifying the ID:

./photon deployment show de4d276f-16c1-4666-b586-a800dc83d4d6

[Screenshot: photon deployment show output]
As you can see from the output, we get a nice summary of the Photon Controller instance that we just deployed. What you will be looking for here is that the State property shows "Ready", which means we are now ready to start using the Photon Controller platform. From here, we can also see the IP Address of the load balancer that was set up for us within the Photon Controller Management VM, which in this example is 192.168.1.150.

Step 4 - To interact with our Photon Controller instance, we will need to point the Photon CLI to the IP Address of the load balancer and specify port 28080. If you had enabled authentication using the Lightwave identity service, you would then use port 443 instead.

./photon target set http://192.168.1.150:28080

Step 5 - You can also check the state of the overall system and its various components once you have pointed the CLI at your Photon Controller instance by running the following command:

./photon system status

[Screenshot: photon system status output]
Step 6 - If you want to get the list of ESXi hosts that are part of a given deployment, we can use the deployment ID from Step 2 and then run the following command, which will give you some basic information, including whether the ESXi host is serving as a "Management" or "Cloud" host:

./photon deployment list-hosts de4d276f-16c1-4666-b586-a800dc83d4d6

[Screenshot: photon deployment list-hosts output]
Step 7 - To show more details about a given ESXi host, we just need to take the host ID from the previous step and then run the following command:

./photon host show ce37fca9-c8c6-4986-bb47-b0cf48fd9724

[Screenshot: photon host show output]
Note: I noticed that the ESXi host's root password is displayed in this output. I have already reported this internally and it will be removed in a future update, as the password should not be displayed, especially in plaintext.

Hopefully this gives you a quick primer on how the Photon CLI works and how you can easily interact with a given Photon Controller deployment. If you would like more details on Photon CLI, be sure to check out the official documentation here.

Exploring Photon Controller API

The Photon Controller also provides a REST API which you can explore using the built-in Swagger interface. You can connect to it by opening a browser to the following address: https://[photon-controller-load-balancer]:9000/api. For those of you who have not used Swagger before, it's a tool that allows you to easily test drive the underlying API as well as providing interactive documentation on the specific APIs that are available. This is a great way to learn about the Photon Controller API, and it allows you to try it out without having to write a single line of code.
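
If you prefer the command line, you can also exercise the API with curl. The sketch below assumes the load balancer endpoint we targeted earlier with the Photon CLI and a /status route (roughly what the photon system status command displays); treat the exact route as an assumption and confirm it in the Swagger UI:

curl -s http://192.168.1.150:28080/status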

[Screenshot: Photon Controller Swagger API explorer]

Exploring Photon Controller UI

Saving the best for last, we will now take a look at the new Photon Controller Management UI. To access the UI, you just need to open a browser to the IP Address of the Photon Controller load balancer. In this example, it is 192.168.1.150, and once loaded, you should be taken to the main dashboard.

[Screenshot: Photon Controller Management UI dashboard]
If you recall, in the Photon CLI example earlier we had to run through several commands to get the overall system status as well as the list of ESXi hosts participating in either a "Management" or "Cloud" host role. With the UI, this is literally a single click!

[Screenshot: system status and hosts in the Management UI]
There are other objects within the UI that you may notice while exploring, but we will save that for the next article, in which we will walk through the process of provisioning your first Virtual Machine using Photon Controller.

Troubleshooting

Here are some useful things I learned from the Photon Controller team while troubleshooting some of my initial deployments.

The following logs are useful to look at during a failed deployment and will usually give some hints as to what happened. You can find these by logging into the Photon Controller Installer VM (a quick example of following one of them is shown after the list):

  • /var/log/esxcloud/management-api/management-api.log
  • /var/log/esxcloud/deployer/deployer.log
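
For example, to follow the deployer log live while re-trying a deployment (using the path listed above):

tail -f /var/log/esxcloud/deployer/deployer.log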

If you need to restart or re-deploy using the Photon Controller Installer VM, there is some cleanup that you need to do (in the future, there will be an easier way to re-initialize without going through this process). To do so, SSH to the Photon Controller Installer VM using the username esxcloud and vmware as the password. Next, switch over to the root user via the su command; the password will be what you had set earlier:

su - root
rm -rf /etc/esxcloud/deployer/deployer/sandbox_18000/
rm -rf /etc/esxcloud/cloud-store/cloud-store/sandbox_19000/
reboot

Once the Photon Controller Installer VM has started back up, you will need to restart the Docker Container for the UI by running the following command:

docker restart ui_installer

This is required because the container currently does not restart correctly upon reboot. This is a known issue and will be fixed in a future update. Before opening a browser to the installer UI, you can run the following command to ensure all Docker Containers have successfully started:

docker ps -a

[Screenshot: docker ps -a output]

Categories // Automation, Cloud Native, ESXi, vSphere 6.0 Tags // cloud native apps, ESXi, Photon Controller

Using Ansible to provision a Kubernetes Cluster on VMware Photon

11.05.2015 by William Lam // 1 Comment

[Image: Ansible, VMware Photon and Kubernetes]
I am always interested in learning and playing with new technologies, solutions and tools. Ansible, a popular configuration management tool which was recently acquired by Red Hat, is one such tool that has been on my to-do list for some time now. It is quite difficult to find extra free time, and with a new 7-month-old, it has gotten even harder. However, in the last week or so I have been waking up randomly at 4-5am, and I figured I might as well put this time to good use and give Ansible a try.

As the title suggests, I will be using Ansible to deploy a Kubernetes Cluster running on top of VMware's Photon OS. The motivation behind this little project came after watching Kelsey Hightower's recorded session at HashiConf on Managing Applications at Scale, comparing HashiCorp's Nomad and Google's Kubernetes (K8s) scheduler. I knew there were already a dozen different ways to deploy K8s, but I figured I would try something new and add a VMware spin to it by using Photon OS.

I had found an outdated reference on setting up K8s in the Photon OS documentation, and though a few of the steps are no longer needed, it provided a good base for creating the Ansible playbook to set up a K8s Cluster. If you are not familiar with Ansible, this getting started guide was quite helpful. Our K8s setup will consist of two nodes, one being the Master and the other the Minion. If you are interested in an overview of K8s, be sure to check out the official documentation here.

Step 1 - You will need to deploy at least 2 Photon OS VMs, one for the Kubernetes Master and one for the Minion. This can be done using either the ISO or by deploying the pre-packaged OVA. For more details on how to set up Photon OS, please refer to the documentation here. This should take only a few minutes, as the installation or deployment of Photon OS is pretty quick. In my setup, I have 192.168.1.133 as the Master and 192.168.1.111 as the Minion.

Step 2 - Download and install Ansible on your client desktop. There are several options depending on the platform you plan to use; for more information, take a look at the documentation here. In my setup, I will be using a Mac OS X system, where you can easily install Ansible by running the following command:

brew install ansible

Step 3 - Next, to verify that our installation of Ansible was successful, we will create our inventory host file (I called it hosts, but you can name it anything you want) which will contain the mappings to our Photon OS VMs. The example below assumes you do not have DNS running in your environment; I am making use of the variable options in the host file to specify friendly names versus just using the IP Addresses, which will be read in later. If you do have DNS in your environment, you do not need the last section of the file.

[kubernetes_cluster]
192.168.1.133
192.168.1.111

[masters]
192.168.1.133

[minions]
192.168.1.111

[kubernetes_cluster:vars]
master_hostname=photon-master
master_ip=192.168.1.133
minion_hostname=photon-node
minion_ip=192.168.1.111

Step 4 - We will now perform a basic "ping" test to validate that Ansible is in fact working and can communicate with our deployed Photon VMs. Run the following command, which specifies the inventory host file as input:

ansible -i hosts all -m ping --user root --ask-pass

[Screenshot: Ansible ping test output]
Step 5 - If the previous step was successful, we can now run our Ansible playbook, which contains the instructions for setting up our K8s Cluster. Download the kubernetes_cluster.yml to your desktop and then run the following command:

ansible-playbook -i hosts --user root --ask-pass kubernetes_cluster.yml

If you want to use SSH keys for authentication and you have already uploaded the public keys to your Photon VMs, then you can replace --ask-pass with --private-key and specify the full path to your SSH private key, as shown below.
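
For example, assuming a hypothetical private key located at ~/.ssh/id_rsa:

ansible-playbook -i hosts --user root --private-key ~/.ssh/id_rsa kubernetes_cluster.yml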

[Screenshot: ansible-playbook run output]
Step 6 - Once the Ansible playbook has successfully executed, you should see a summary at the end showing everything was ok. To verify that our K8s Cluster has been properly set up, we will check the Minion's node status, which should show "Ready". To do so, we will log in to the K8s Master node and run the following command:

kubectl get nodes

You should see that the status field shows "Ready" which means the K8s Cluster has been properly configured.

[Screenshot: kubectl get nodes output]
At this point you have a basic K8s Cluster running on top of VMware Photon. If you are interested in exploring K8s further, there are some nice 101 and 201 official tutorials that can be found here. Another handy reference that I used for creating my Ansible playbook was this article here, which provided a way to create loops using the lineinfile param, similar to the sketch below.
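
To give you an idea of that pattern, here is a short playbook task sketching a lineinfile loop; the file path and key/value pairs are purely illustrative and not taken from the actual playbook:

- name: Configure Kubernetes master settings
  lineinfile:
    dest: /etc/kubernetes/config
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  with_items:
    - { regexp: '^KUBE_MASTER=', line: 'KUBE_MASTER="--master=http://photon-master:8080"' }
    - { regexp: '^KUBE_ETCD_SERVERS=', line: 'KUBE_ETCD_SERVERS="--etcd-servers=http://photon-master:4001"' }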

Categories // Automation, Cloud Native, vSphere Tags // Ansible, cloud native apps, K8s, Kubernetes, Photon

How to configure shared folders in VMware AppCatalyst?

10.02.2015 by William Lam // Leave a Comment

A widely used feature in VMware's hosted products (Fusion & Workstation) is the shared folders capability, which allows you to easily share files between the host system and your Virtual Machines. With the latest release of VMware's AppCatalyst TP2, shared folders are now officially supported and can be configured using AppCatalyst's REST API. Below are the instructions on setting up shared folders for your VMs running in AppCatalyst. Thanks to Fabio for sharing the details.

I will assume that you already have a VM running under AppCatalyst. If you do not, you can run the following commands to quickly deploy a new VM called "photon" from the default VMware Photon OS template and retrieve the IP Address assigned to the VM (which we will make use of later).

/opt/vmware/appcatalyst/bin/appcatalyst vm create photon
/opt/vmware/appcatalyst/bin/appcatalyst vmpower on photon
/opt/vmware/appcatalyst/bin/appcatalyst guest getip photon

If you have not used AppCatalyst before, be sure to check out this getting started guide here.

Step 1 - Open a terminal and start the AppCatalyst Daemon by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst-daemon

Step 2 - Open a browser to connect to the following URL https://localhost:8080 to access the REST API explorer provided by Swagger.

Step 3 - Expand POST /vms/{id}/folders, which is the API to configure shared folders for a particular VM. You will need to fill in the id property, which specifies the name of the VM, while the state property defines the shared folder configuration.

Here is an example of creating a shared folder in the guest called "shared-folder" and mapping it to the host folder under /Users/lamw/Development. The flags property specifies how the folder will be accessed by the VM. Currently, there is only one flag implemented, "4", which means read/write access. In the future there may be more flags implemented for different types of access.

{
"guestPath":"shared-folder",
"hostPath":"/Users/lamw/Development",
"flags":4
}

Once you have entered the information into the UI, you just need to click on the "Try it out" button at the bottom to execute the API request. If the operation was successful, you should get back a 200 response in the UI.

[Screenshot: Swagger response for the shared folder request]
Note: The guestPath property is not actually a path in the VM but rather the name of the directory which will map to the host shared folder.
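
If you prefer the command line over the Swagger UI, the same request can be issued with curl; a minimal sketch using the VM name and JSON body from above (the -k flag to skip certificate verification is an assumption on my part for the daemon's self-signed certificate):

curl -k -X POST -H "Content-Type: application/json" \
  -d '{"guestPath":"shared-folder","hostPath":"/Users/lamw/Development","flags":4}' \
  https://localhost:8080/vms/photon/folders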

Step 4 - We can now log in to our VM to confirm that the shared folder has been configured. If you are using the default Photon template, you can log in by specifying the default SSH key included with AppCatalyst and the IP Address of the Photon VM we deployed earlier.

Here is the command to log in (be sure to replace the IP Address with the one from your environment):

ssh -i /opt/vmware/appcatalyst/etc/appcatalyst_insecure_ssh_key root@[photon-vm-ip]

Step 5 - Once logged in, if we look under the /mnt/hgfs directory, we should see the shared directory automatically created and mapped to the host folder we configured earlier.

[Screenshot: shared folder visible under /mnt/hgfs]
If you wish to remove a shared folder from a VM, you can do so by executing the DELETE /vms/{id}/folders/{folderId} API and specifying the ids of both the VM and the folder, as sketched below. Hopefully in the future, shared folders can also be easily consumed through the simple AppCatalyst CLI, but for now you can do so using the REST API.
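
A minimal curl sketch of that delete operation (the folderId placeholder is whatever id was returned when the folder was created):

curl -k -X DELETE https://localhost:8080/vms/photon/folders/<folderId>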

Categories // Apple, Automation, Cloud Native Tags // appcatalyst, cloud native apps, DevOps, Docker, hgfs, Photon, shared folder
