WilliamLam.com


How to override the default CPU/Memory when deploying Photon Controller Management VM?

04.25.2016 by William Lam // 1 Comment

When installing Photon Controller, the resource configuration of the Management VM is sized dynamically, as mentioned here, based on the total available CPU, Memory and Storage of the physical ESXi host it is being provisioned to. This is generally not a problem when deploying Photon Controller in Production with larger hosts, but if you are trying to play with it in a home lab or a resource-constrained environment, it can be a challenge.

Currently, the minimal requirement to play with Photon Controller is a single physical or Nested ESXi VM configured with at least 4vCPU, 16GB of memory and 50GB of storage. The biggest constraint for most home labs is usually memory. As an example, using the configuration above, the default size used for the Photon Controller Management VM is 2vCPU and 4GB of memory, which is quite hefty for such a small environment. It could potentially get worse with slightly larger hosts, and ultimately this impacts the amount of workload you can run on the ESXi host, especially if you only have one.

In talking to one of the Engineers on the Photon Controller team, I learned about a neat little capability, currently only available in the Photon CLI, which allows you to override the default CPU, Memory and Storage settings for the Photon Controller Management VM. The following three variables can be added to the deployment configuration YAML file to override the default behavior.

UPDATE (06/02/16) - In the v0.9 release of Photon Controller, the MANAGEMENT_VM_MEMORY_GB_OVERWRITE variable has been renamed to MANAGEMENT_VM_MEMORY_MB_OVERWRITE. The rest should be the same.

  • MANAGEMENT_VM_CPU_COUNT_OVERWRITE - Number of vCPUs for the Management VM
  • MANAGEMENT_VM_MEMORY_GB_OVERWRITE - Amount of memory for the Management VM (it is actually specified in MB even though the variable says GB)
  • MANAGEMENT_VM_DISK_GB_OVERWRITE - Amount of storage for the Management VM (there seems to be a bug where this property does not actually override the default storage configuration)

Note: One thing that I found while testing out this capability is that you MUST specify all three variables regardless of whether you wish to override only one of the resources. If you do not, you will see a strange 500 error code when running the CLI. I assume this is probably a bug and have already reported it to the Engineering team.

Below are the recommended instructions if you plan to override the default configuration for the Photon Controller Management VM.

Step 1 - Open a browser to the IP Address of your Photon Controller Installer VM and go through the wizard as you normally would, but DO NOT click on the Deploy button once you are done. Instead, click on the "Export Configuration" option and save your configuration to your desktop. You can then close the Photon Controller Installer UI window as we will not be using the UI to deploy.

Step 2 - Open the Photon Controller deployment configuration YAML file that you saved in the previous step using a text editor of your choice. There are two modifications that we need to make. The first is adding the following three variables under the "metadata" section towards the top, replacing the values with the ones you wish to use. I recommend 2vCPU/2GB of memory. For storage, there seems to be a bug in which the override does not work, but you STILL MUST specify it in the configuration file or else the deployment will fail. Go ahead and leave it at the default of 80.

MANAGEMENT_VM_CPU_COUNT_OVERWRITE: 2
MANAGEMENT_VM_MEMORY_GB_OVERWRITE: 2048
MANAGEMENT_VM_DISK_GB_OVERWRITE: 80
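
For reference, here is roughly what the edited portion of the file might look like, assuming the exported configuration has a metadata section as described above (the comment stands in for whatever entries the installer already generated for you):

metadata:
  # ...existing entries from the exported configuration remain unchanged...
  MANAGEMENT_VM_CPU_COUNT_OVERWRITE: 2
  MANAGEMENT_VM_MEMORY_GB_OVERWRITE: 2048
  MANAGEMENT_VM_DISK_GB_OVERWRITE: 80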

Step 3 - The second modification that we need to make to the YAML file is to how the datastores are listed under the image_datastores property. In the UI, this property is stored as a collection. However, the Photon CLI expects it as a string. The fix is quite simple: you just need to change the following

from

deployment:
  image_datastores:
    - datastore1

to

deployment:
  image_datastores: datastore1

At this point, we are done modifying our YAML configuration file and we can save our changes and get ready to deploy.

Step 4 - You will need the Photon CLI for the remainder of the steps. If you have not downloaded the Photon CLI, take a look here for the details. Point the Photon CLI to the IP Address of your Photon Controller Installer VM by running the following command:

./photon target set http://192.168.1.250

Step 5 - We will now deploy Photon Controller using the CLI, overriding the default algorithm for how the Photon Controller Management VM is sized, by running the following command and specifying the full path to your YAML file:

./photon system deploy esxcloud-installation-export-config-vghetto-sample.yaml

Once the deployment has started, you will be provided with a progress bar. If everything is successful, you should be able to log in to your ESXi host using either the ESXi Embedded Host Client or the vSphere C# Client, and you should see that your Photon Controller Management VM has been deployed with the overrides you specified earlier.

If you are new to Photon Controller, be sure to check out my blog series on test driving Photon Controller:

  • Test driving VMware Photon Controller Part 1: Installation
  • Test driving VMware Photon Controller Part 2: Deploying first VM
  • Test driving VMware Photon Controller Part 3a: Deploying Kubernetes
  • Test driving VMware Photon Controller Part 3b: Deploying Mesos
  • Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

Categories // Automation, Cloud Native, ESXi Tags // cloud native apps, Photon Controller

Test driving VMware Photon Controller Part 3a: Deploying Kubernetes

04.21.2016 by William Lam // 6 Comments

If you have been following the series thus far, we covered installing Photon Controller in Part 1 and then learned how to create our first virtual machine using Photon Controller in Part 2. Next up, we will demonstrate how easy it is to stand up the three different Cluster Orchestration solutions that are supported on top of Photon Controller, starting with Kubernetes. Once the Cluster Orchestration solution has been set up, you can deploy your applications as you normally would through the Cluster Orchestration solution, and behind the scenes Photon Controller will automatically provision the necessary infrastructure to run a given application without you having to know anything about the underlying resources.

If you recall from our last article, there are several default VM Flavors that are included in the Photon Controller installation. The ones named cluster-* are VM Flavors used for deploying the Cluster Orchestration virtual machines, which have been configured to support high scale and throughput (up to 4vCPU and 8GB of memory). If you are testing this in a lab environment where you might be constrained on memory resources for your ESXi host (16GB of memory), then you actually have a few options. The first option is to create a new VM Flavor with a smaller configuration (e.g. 1vCPU/2GB memory) and then override the default VM Flavor when deploying the Cluster Orchestration. The second option, which I learned from talking to Kris Thieler, is that you can actually re-define the default cluster-* VM Flavors to fit your environment's needs, which he has documented here. To simplify our deployment, we will use Option 1 and simply create a new VM Flavor to override the default. If you have more than 16GB of memory, then you can skip Step 2.

Deploying Kubernetes Cluster

Step 1 - Download the Kubernetes VMDK from here and the Kubectl binary from here.

Step 2 - Run the following command to create our new VM Flavor override, which we will call cluster-tiny-vm and configure with 1vCPU/1GB of memory:

./photon -n flavor create --name cluster-tiny-vm --kind "vm" --cost "vm 1 COUNT,vm.flavor.cluster-other-vm 1 COUNT,vm.cpu 1 COUNT,vm.memory 1 GB,vm.cost 1 COUNT"
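
To double-check that the new Flavor was registered, you can list the Flavors currently known to Photon Controller:

./photon flavor list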

Step 3 - We will now upload our Kubernetes image and make note of the ID generated after the upload by running the following command:

./photon -n image create photon-kubernetes-vm-disk1.vmdk -n photon-kubernetes-vm.vmdk -i EAGER
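
If you happen to miss the ID in the command output, you can always retrieve it afterwards by listing the images:

./photon image list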

Step 4 - Next, we also need the ID of our Photon Controller deployment, as it will be required in the next step. You can retrieve it by running the following command:

./photon deployment list

Step 5 - We will now enable the Kubernetes Cluster Orchestration on our Photon Controller instance by running the following command and specifying the ID of your deployment as well as the ID of the Kubernetes image from the previous two steps:

./photon -n deployment enable-cluster-type 7fd9a13d-e69e-4165-9b34-d436f4c67ea1 -k KUBERNETES -i 4332af67-2ff0-49f7-ba44-dd4140908e32

Step 6 - We can also see what Cluster Orchestration solutions have been enabled for our Photon Controller by running the following command and specifying our deployment ID:

./photon deployment show 7fd9a13d-e69e-4165-9b34-d436f4c67ea1

In the output of the command above, you will see a Cluster Configuration section which provides a list of the Cluster Orchestration solutions that have been enabled as well as their respective images.

Step 7 - We are now ready to spin up our Kubernetes (K8) Cluster by simply running the following command and substituting the network information from your environment. We are also going to deploy only a single K8 Slave (if you have additional resources you can spin up more, or you can always re-size the cluster after it has been deployed), and lastly, we will override the default VM Flavor by specifying the -v option and providing the name of our VM Flavor called cluster-tiny-vm. You can just hit enter when prompted for the two etcd IP Addresses; the assumption is that you have DHCP running and those will automatically obtain an address.

./photon cluster create -n k8-cluster -k KUBERNETES --dns 192.168.1.1 --gateway 192.168.1.1 --netmask 255.255.255.0 --master-ip 192.168.1.55 --container-network 10.2.0.0/16 --etcd1 192.168.1.56 -s 1 -v cluster-tiny-vm

Step 8 - The process can take a few minutes and you should see a message prompting you to run the cluster show command to get more details about the state of the cluster.

./photon cluster show 9b159e92-9495-49a4-af58-53ad4764f616

Exploring Kubernetes

At this point, you have now successfully deployed a fully functional K8 Cluster using Photon Controller with just a single command. We can now explore our K8 setup a bit by using the kubectl CLI that you downloaded earlier. For more information on how to interact with a K8 Cluster using the kubectl command, be sure to check out the official K8 documentation here.

To view the nodes within the K8 Cluster, you can run the following command and specify the IP Address of the master VM provided in the previous step:

./kubectl -s 192.168.1.55:8080 get nodes

Let's now do something useful with our K8 Cluster and deploy a simple Tomcat application. We first need to download the following two configuration files that will define our application:

  • photon-Controller-Tomcat-rc.yml
  • photon-Controller-Tomcat-service.yml

We then need to edit the photon-Controller-Tomcat-rc.yml file and delete the last two lines, as they contain incorrect syntax:

labels:
name: "tomcat-server"

To deploy our application, we will run the following two commands, which will set up our replication controller as well as the service for our Tomcat application:

./kubectl -s 192.168.1.55:8080 create -f photon-Controller-Tomcat-rc.yml
./kubectl -s 192.168.1.55:8080 create -f photon-Controller-Tomcat-service.yml

We can then check the status of our application deployment by running the following command:

./kubectl -s 192.168.1.55:8080 get pods

You should see a tomcat-server-* entry and the status should say "Image: tomcat is not ready on the node". You can give it a few seconds and then re-run the command until the status shows "Running", which means our application has been successfully deployed by the K8 Cluster.

We can now open a browser to the IP Address of our K8 Master VM, which in my environment was 192.168.1.55, and specify port 30001, which was defined in the configuration file of the Tomcat application. We should see that we now have Tomcat running.
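
If you want to double-check which port the application was exposed on without opening the YAML file again, you can have kubectl describe the services, which should include the NodePort value:

./kubectl -s 192.168.1.55:8080 describe services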

We can also easily scale up the number of replicas for our Tomcat application by running the following command:

./kubectl -s 192.168.1.55:8080 scale --replicas=2 rc tomcat-server
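
Scaling back down works the same way; for example, to return to a single replica:

./kubectl -s 192.168.1.55:8080 scale --replicas=1 rc tomcat-server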

Lastly, if we want to delete our application, we can run the following two commands:

./kubectl -s 192.168.1.55:8080 delete service tomcat
./kubectl -s 192.168.1.55:8080 delete rc tomcat-server

Once we are done using our K8 Cluster, we can tear it down by running the following command and specifying the ID of the K8 Cluster found in Step 8; this will delete the VMs that Photon Controller had deployed:

./photon -n cluster delete 9b159e92-9495-49a4-af58-53ad4764f616

Hopefully this gave you a quick taste of how easy it is to set up a fully functional K8 Cluster using Photon Controller. In the next article, we will take a look at deploying a Mesos Cluster using Photon Controller, so stay tuned!

  • Test driving VMware Photon Controller Part 1: Installation
  • Test driving VMware Photon Controller Part 2: Deploying first VM
  • Test driving VMware Photon Controller Part 3a: Deploying Kubernetes
  • Test driving VMware Photon Controller Part 3b: Deploying Mesos
  • Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

Categories // Automation, Cloud Native, ESXi, vSphere 6.0 Tags // cloud native apps, ESXi, Kubernetes, Photon Controller

Test driving VMware Photon Controller Part 2: Deploying first VM

04.19.2016 by William Lam // 5 Comments

In our previous article, we walked through the steps of installing Photon Controller into our ESXi environment using the new Photon Controller Installer UI. In this article, we will learn how to provision our first Virtual Machine using Photon Controller and the Photon CLI. Before we get started, we first need to initialize our Photon Controller instance and create some of the initial configurations such as Tenants, Resource Tickets, Projects, Flavors, Images & Networks.

  • Test driving VMware Photon Controller Part 1: Installation
  • Test driving VMware Photon Controller Part 2: Deploying first VM
  • Test driving VMware Photon Controller Part 3a: Deploying Kubernetes
  • Test driving VMware Photon Controller Part 3b: Deploying Mesos
  • Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

Tenants, Resource Tickets & Projects

As mentioned in the previous article, Photon Controller is a multi-tenant system which allows you to create different tenants for your different consumers like HR, Finance or Engineering for example.

Each tenant is associated with a Security Group that maps to a set of users/groups that can access the tenant's resources. This capability is only available when the Lightwave Identity Source is configured during the initial deployment of Photon Controller. A Resource Ticket represents a collection of compute, storage and networking resources with specific capabilities, limits and quotas, and is associated at the tenant level. These resources can then be sub-divided into something consumable called Projects, which draw their quotas and limits from their respective Resource Tickets. You can have multiple Resource Tickets and Projects in a given Tenant, but each Project is mapped to a specific Resource Ticket.

Here is an example of how you might use Resource Tickets and Projects. Let's say you have some "high" performance resources for your developers working on a very important application for the business, so you create a Gold Resource Ticket. You also have some "OK" performance resources for developers who are prototyping new ideas and do not necessarily need high-end resources, so you create a Silver Resource Ticket. Obviously, there are several "important" components that make up this single application being developed. Based on the individual component teams' requirements, you decide to create Project A and Project B with their respective resource requirements, both of which pull from the Gold Resource Ticket. You can also have the same folks working on other Projects which pull from a completely different Resource Ticket, like the Silver Resource Ticket shown.

Note: For those of you who are familiar with VMware's vCloud Director (vCD) product, you can think of Tenant -> Resource Ticket -> Project as similar to vCD's Organization -> Provider VDC -> Organization VDC concept. vSphere is not a multi-tenant system, but you could also think of its Clusters -> Resource Pool as similar to Resource Ticket -> Project.

Let's go ahead and create a Tenant, Resource Ticket and Project using the Photon CLI. Although you can create these objects using the Photon Controller Management UI, I have found that the UI actually enforces additional parameters beyond what the CLI requires when creating Resource Tickets. We will stick with the CLI for now, but you are more than welcome to use the UI for these sections if you wish.

Step 1 - If you have not done so already, set the target of your Photon CLI to the Photon Controller instance you deployed in the previous article:

./photon target set http://[IP]:28080

Step 2 - Create a tenant by running the following command and specifying a name:

./photon -n tenant create Engineering

Step 3 - To use the tenant we just created, we will need to set the tenant by running the following command and specifying the name of our tenant:

./photon tenant set Engineering

When creating a Resource Ticket, there are only two mandatory limits that you need to specify: the VM memory (GB) and the number of VMs (COUNT). The syntax for the limits param is a comma-separated list of tuples, each consisting of a Name (e.g. vm.memory), Value (e.g. 16) and Unit (e.g. GB, MB, KB, COUNT).

Step 4 - We will create a Resource Ticket called gold-ticket and set the memory limit to 16GB with a max of 100 VMs by running the following command:

./photon -n resource-ticket create --name gold-ticket --limits "vm.memory 16 GB, vm 100 COUNT"

Step 5 - Next, we will create a Project called secret-project which will just consume the full limits of our Gold Resource Ticket by running the following command:

./photon -n project create --resource-ticket gold-ticket --name secret-project --limits "vm.memory 16 GB, vm 100 COUNT"

Step 6 - Lastly, to use the Project we just created, we will need to set the project by running the following command and specifying the name of our Project:

./photon project set secret-project

When creating Resource Tickets and Projects, you also have the option of specifying your own user-defined costs. In the example below, I have something called foo which can have a max of 50 and bar which can have a max of 5. We can then consume these user-defined costs when creating our Project, as you can see in the example below.

./photon -n resource-ticket create --name silver-ticket --limits "vm.memory 16 GB, vm 100 COUNT, foo 50 COUNT, bar 5 COUNT"
./photon -n project create --resource-ticket silver-ticket --name beta-project --limits "vm.memory 16 GB, vm 100 COUNT, foo 25 COUNT, bar 2 COUNT"

Images, Flavors & Networks

When a new VM is instantiated from Photon Controller, it is constructed from an Image along with a VM Flavor and a Disk Flavor. An Image can be either an OVF/OVA or a VMDK residing in the Photon Controller Image Store. A Flavor describes the amount of resources consumed by the VM from the Resource Ticket. There are two types of Flavors today, one for VMs and one for Disks. Although a Disk Flavor is required as part of creating the VM, it is currently not used for anything today and does not actually count against the Resource Ticket. Obviously this behavior may change in the future. Lastly, if you recall from our initial setup of Photon Controller, we specified the VM network to which all VMs would be assigned. You also have the option of associating additional networks in Photon Controller in case you want to provide access to different networking capabilities for your VMs, which we will quickly cover as well.

Let's go ahead and run through a simple Image and Flavor configuration using the VMware PhotonOS 1.0 TP2 OVA.

Step 1 - Download the VMware PhotonOS 1.0 TP2 OVA

Step 2 - Before uploading, let's take a quick look at the current image store by running the following command:

./photon image list

We can see that there is currently only one image which is the Photon Controller Management VMDK that was used to stand up our Photon Controller instance. You will find some additional details such as the Replication Type which can either be EAGER (replicate immediately) or ON_DEMAND (replicate when requested) as well as the State, Size and Replication Progress.

Step 3 - To upload our PhotonOS OVA, we will run the following command:

./photon -n image create photon-1.0TP2.ova -n photon-1.0TP2.ova -i EAGER

Step 4 - Once the image has been successfully uploaded, we can get more details by running the following command and specifying the image ID:

./photon image show bca0f75d-c7c6-4cbd-8859-6010c06b0359

Step 5 - Before we create our VM and Disk Flavors, let's have a look at the Flavors that have already been created by running the following command:

./photon flavor list

There are a total of 5 Flavors available out of the box. The mgmt-vm* VM and Disk Flavors are used for deploying the Photon Controller Management VM, and you can see the default configurations that are used. The cluster-* VM and Disk Flavors are the default configurations used for the different Cluster Orchestration solutions that Photon Controller supports. You will notice that the configurations are quite large; the reason for this is that these Flavors have been designed for scale and throughput. When we get to the different Cluster Orchestration articles, you will see why this matters based on the available resources you have in your environment.

Step 6 - We will now create a new VM Flavor called tiny-photon-vm with a cost of 1 vCPU and 2GB of memory by running the following command:

./photon -n flavor create --name tiny-photon-vm --kind "vm" --cost "vm.cpu 1.0 COUNT, vm.memory 2.0 GB, vm.cost 1.0 COUNT"

Step 7 - We will also create a new Disk Flavor called tiny-photon-disk using the ephemeral-disk type with a cost of 1 by running the following command:

./photon -n flavor create --name tiny-photon-disk --kind "ephemeral-disk" --cost "ephemeral-disk 1.0 COUNT"

Optionally, you can also create new Flavors based on the user-defined costs. Here is an example consuming our foo and bar attributes:

./photon -n flavor create --name custom-photon-vm --kind "vm" --cost "vm.cpu 1.0 COUNT, vm.memory 2.0 GB, vm.count 1.0 COUNT, foo 5 COUNT, bar 1 COUNT"

Step 8 - If we now list our Flavors again, we should see the three new Flavors that we had just created.

You can also quickly view all Images and Flavors using the Photon Controller Management UI by clicking on the "Cog Wheel" in the upper right hand corner.

Step 9 - If you wish to add additional networks to be used in Photon Controller, you can run the following command and specify the information from your environment:

./photon -n network create --name dev-network --portgroups "VM Network" --description "Dev Network for VMs"

Step 10 - To get a list of all available networks, you can run the following command:

./photon network list

VM & Disk Creation

With all the pieces in place, we are now ready to create our first VM! If you remember from the previous section, to create a VM you must provide an Image along with a VM Flavor and a Disk Flavor. We will be using the PhotonOS Image, for which we will need the ID that was generated earlier. We will also be using the tiny-photon-vm VM Flavor as well as the tiny-photon-disk Disk Flavor. The disks argument below accepts a disk name (which can be anything you want to call it), the Disk Flavor, and either boot=true if it is the boot disk or a capacity if it is an additional disk.

Step 1 - To create the VM we described above, run the following command and specify the Image ID from your environment:

./photon vm create --name vm-1 --image bca0f75d-c7c6-4cbd-8859-6010c06b0359 --flavor tiny-photon-vm --disks "disk-1 tiny-photon-disk boot=true"

Step 2 - Using the VM ID that was provided, we can now power on the VM by running the following command:

./photon vm start b0854f44-11da-4175-b6c5-657cacbcd113

Step 3 - Once the VM has been powered on, we can also pull some additional information such as the IP Address from the VM by running the following command:

./photon vm show b0854f44-11da-4175-b6c5-657cacbcd113

Note: You may need to re-run the above command in case the IP Address does not show up immediately.

If you wish to confirm that you can log in to the PhotonOS VM that we just deployed from our Image, go ahead and SSH in as root; the default password is changeme, which you should be prompted to change. One important thing to be aware of is that all VMs created from Images are created as VMware Linked Clones (copy-on-write), which is why the process is extremely fast and efficient.

Step 4 - We can also get additional networking details such as the VM's MAC Address and the current state by running the following command

./photon vm networks b0854f44-11da-4175-b6c5-657cacbcd113

We can also create a VM that contains more than one disk. The example below is using our PhotonOS Image and adding a secondary 5GB disk:

./photon -n vm create --name vm-2 --image bca0f75d-c7c6-4cbd-8859-6010c06b0359 --flavor tiny-photon-vm --disks "disk-1 tiny-photon-disk boot=true, disk-2 tiny-photon-disk 5"

If we wanted to add additional disks after a VM has been created, we just need to create a new Disk and associate it with a Disk Flavor. In the example below, we will create a new Disk Flavor using the persistent-disk type and then create a new disk called data-disk with a capacity of 10GB:

./photon -n flavor create --name persist-disk --kind "persistent-disk" --cost "persistent-disk 1.0 COUNT"
./photon disk create --name data-disk --flavor persist-disk --capacityGB 10

We can get more details about a specific disk such as the state (attached/detached), capacity, etc. by running the following command and specifying the Disk ID:

./photon disk show 55f425e8-2de4-4d30-b819-64c4fd209c3c

To attach a Disk to a VM, we just need to run the following command specifying the Disk ID as well as the VM ID:

./photon vm attach-disk --disk 55f425e8-2de4-4d30-b819-64c4fd209c3c 4e66e4c9-693e-42b3-9e1e-0d96044a6a42

To detach a Disk from a VM, we just need to run the following command specifying the Disk ID as well as the VM ID:

./photon vm detach-disk --disk 55f425e8-2de4-4d30-b819-64c4fd209c3c 4e66e4c9-693e-42b3-9e1e-0d96044a6a42

I also wanted to quickly mention that you can also provision a VM using the Photon Controller Management UI. To do so, you need to be in the Project view and click on the three dots next to the name of the Project.

Lastly, we will clean up the two VMs along with the disk that we had created.

./photon disk delete 55f425e8-2de4-4d30-b819-64c4fd209c3c
./photon vm stop b0854f44-11da-4175-b6c5-657cacbcd113
./photon vm delete b0854f44-11da-4175-b6c5-657cacbcd113
./photon vm delete 4e66e4c9-693e-42b3-9e1e-0d96044a6a42
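
If you want to verify that nothing was left behind, the CLI follows the same listing pattern we have used throughout this article; assuming your build of the Photon CLI includes the corresponding list subcommands, you can run:

./photon vm list
./photon disk list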

Although we had to cover a few new concepts before we could provision our first VM, hopefully it gave you a better understanding of how Photon Controller works under the hood. The nice thing is that because we have already done all the heavy lifting, such as setting up a Tenant, Resource Ticket & Project, the provisioning workflows should be pretty straightforward when we take a look at setting up the different Cluster Orchestration solutions 🙂

Categories // Automation, Cloud Native, ESXi, vSphere 6.0 Tags // cloud native apps, ESXi, Photon Controller
