Test driving VMware Photon Controller Part 2: Deploying first VM

04.19.2016 by William Lam // 5 Comments

In our previous article, we walked through the steps of installing Photon Controller into our ESXi environment using the new Photon Controller Installer UI. In this article, we will learn how to provision our first Virtual Machine using Photon Controller and the Photon CLI. Before we get started, we first need to initialize our Photon Controller instance and create some of the initial configurations such as Tenants, Resource Tickets, Projects, Flavors, Images & Networks.

  • Test driving VMware Photon Controller Part 1: Installation
  • Test driving VMware Photon Controller Part 2: Deploying first VM
  • Test driving VMware Photon Controller Part 3a: Deploying Kubernetes
  • Test driving VMware Photon Controller Part 3b: Deploying Mesos
  • Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

Tenants, Resource Tickets & Projects

As mentioned in the previous article, Photon Controller is a multi-tenant system which allows you to create different tenants for your various consumers such as HR, Finance or Engineering.

test-driving-photon-controller-first-vm-1
Each tenant is associated with a Security Group that maps to a set of users/groups who can access the tenant's resources. This capability is only available when the Lightwave Identity Source is configured during the initial deployment of Photon Controller. A Resource Ticket represents a collection of compute, storage and networking resources with specific capabilities, limits and quotas, and is associated at the tenant level. These resources can then be sub-divided into something consumable called Projects, which draw their quotas and limits from their respective Resource Tickets. You can have multiple Resource Tickets and Projects in a given Tenant, but each Project is mapped to a specific Resource Ticket.

Here is an example of how you might use Resource Tickets and Projects. Let's say you have some "High" performance resources for your developers working on a very important application for the business, so you create a Gold Resource Ticket. You also have some "OK" performance resources for developers who are prototyping new ideas and do not necessarily need high-end resources, so you create a Silver Resource Ticket. There are several "important" components that make up this single application being developed. Based on the requirements of the individual component teams, you decide to create Project A and Project B with their respective resource requirements, both of which pull from the Gold Resource Ticket. You can also have the same folks working on other Projects which pull from a completely different Resource Ticket, like the one shown in the Silver Resource Ticket.
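To make this scenario a bit more concrete, here is a rough CLI sketch of how it might look (assuming a tenant has already been created and set; the names and limit values are purely illustrative, and the actual commands are covered step by step below):

# Gold Resource Ticket for the important application teams (illustrative limits)
./photon -n resource-ticket create --name gold-ticket --limits "vm.memory 32 GB, vm 200 COUNT"
# Project A and Project B both draw from the Gold Resource Ticket
./photon -n project create --resource-ticket gold-ticket --name project-a --limits "vm.memory 16 GB, vm 100 COUNT"
./photon -n project create --resource-ticket gold-ticket --name project-b --limits "vm.memory 16 GB, vm 100 COUNT"
# Silver Resource Ticket for prototyping, with its own Project
./photon -n resource-ticket create --name silver-ticket --limits "vm.memory 8 GB, vm 25 COUNT"
./photon -n project create --resource-ticket silver-ticket --name prototype-project --limits "vm.memory 8 GB, vm 25 COUNT"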

test-driving-photon-controller-first-vm-00
Note: For those of you who are familiar with VMware's vCloud Director (vCD) product, you can think of Tenant -> Resource Ticket -> Project as similar to vCD's Organization -> Provider VDC -> Organization VDC concept. vSphere is not a multi-tenant system, but you could also think of its Cluster -> Resource Pool as similar to Resource Ticket -> Project.

Let's go ahead and create a Tenant, Resource Ticket and Project using the Photon CLI. Although you can create these objects using the Photon Controller Management UI, I have found that the UI enforces additional parameters that the CLI does not when creating Resource Tickets. We will stick with the CLI for now, but you are more than welcome to use the UI for these sections if you wish.

Step 1 - If you have not done so already, set the target of your Photon CLI to the Photon Controller instance you deployed in the previous article:

./photon target set http://[IP]:28080

Step 2 - Create a tenant by running the following command and specifying a name:

./photon -n tenant create Engineering

Step 3 - To use the tenant we just created, set it as the active tenant by running the following command and specifying its name:

./photon tenant set Engineering

When creating a Resource Ticket, there are only two mandatory limits that you need to specify: the VM memory (GB) and the number of VMs (COUNT). The --limits parameter takes a comma-separated list of tuples, each consisting of a Name (e.g. vm.memory), a Value (e.g. 16) and a Unit (e.g. GB, MB, KB, COUNT).

Step 4 - We will create a Resource Ticket called gold-ticket with a memory limit of 16GB and a maximum of 100 VMs by running the following command:

./photon -n resource-ticket create --name gold-ticket --limits "vm.memory 16 GB, vm 100 COUNT"

Step 5 - Next, we will create a Project called secret-project which will just consume the full limits of our Gold Resource Ticket by running the following command:

./photon -n project create --resource-ticket gold-ticket --name secret-project --limits "vm.memory 16 GB, vm 100 COUNT"

Step 6 - Lastly, to use the Project we just created, set it as the active Project by running the following command and specifying its name:

./photon project set secret-project

When creating Resource Tickets and Projects, you also have the option of defining your own user-defined costs. In the example below, I have something called foo which can have a maximum of 50 and bar which can have a maximum of 5. We can then consume these user-defined costs when creating our Project, as you can see in the second command below.

./photon -n resource-ticket create --name silver-ticket --limits "vm.memory 16 GB, vm 100 COUNT, foo 50 COUNT, bar 5 COUNT"
./photon -n project create --resource-ticket silver-ticket --name beta-project --limits "vm.memory 16 GB, vm 100 COUNT, foo 25 COUNT, bar 2 COUNT"

Images, Flavors & Networks

When a new VM is instantiated from Photon Controller, it is constructed from an Image along with a VM and a Disk Flavor. An Image can be either an OVF/OVA or a VMDK residing in the Photon Controller Image Store. A Flavor describes the amount of resources consumed by the VM from the Resource Ticket. There are two types of Flavors today, one for VMs and one for Disks. Although a Disk Flavor is required as part of creating a VM, it is currently not used for anything and does not actually count against the Resource Ticket; this behavior may change in the future. Lastly, if you recall from our initial setup of Photon Controller, we specified the VM network to which all VMs would be assigned. You also have the option of associating additional networks in Photon Controller in case you want to provide your VMs with access to different networking capabilities, which we will quickly cover as well.

Let's go ahead and run through a simple Image and Flavor configuration, for which we will be using the VMware PhotonOS 1.0 TP2 OVA.

Step 1 - Download the VMware PhotonOS 1.0 TP2 OVA

Step 2 - Before uploading, let's take a quick look at the current image store by running the following command:

./photon image list

test-driving-photon-controller-first-vm-4
We can see that there is currently only one image, which is the Photon Controller Management VMDK that was used to stand up our Photon Controller instance. You will also find some additional details such as the Replication Type, which can be either EAGER (replicate immediately) or ON_DEMAND (replicate when requested), as well as the State, Size and Replication Progress.

Step 3 - To upload our PhotonOS OVA, we will run the following command:

./photon -n image create photon-1.0TP2.ova -n photon-1.0TP2.ova -i EAGER
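If you would rather have the image copied to hosts only when it is first requested, the same command can be run with the ON_DEMAND replication type instead. This is shown purely as an illustration; we will stick with the EAGER upload above for this series:

./photon -n image create photon-1.0TP2.ova -n photon-1.0TP2-ondemand.ova -i ON_DEMAND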

Step 4 - Once the image has been successfully uploaded, we can get more details by running the following command and specifying the image ID:

./photon image show bca0f75d-c7c6-4cbd-8859-6010c06b0359

test-driving-photon-controller-first-vm-5
Step 5 - Before we create our VM and Disk Flavors, let's have a look at the Flavors that have already been created by running the following command:

./photon flavor list

test-driving-photon-controller-first-vm-6
There are a total of 5 Flavors available out of the box. The mgmt-vm* VM and Disk Flavors are used for deploying the Photon Controller Management VM, and you can see the default configurations that are used. The cluster-* VM and Disk Flavors are the default configurations used for the different Cluster Orchestration solutions that Photon Controller supports. You will notice that these configurations are quite large; the reason is that these Flavors have been designed for scale and throughput. When we get to the different Cluster Orchestration articles, you will see how these become important based on the resources available in your environment.
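If you would like to inspect the cost details of one of these built-in Flavors, the CLI follows the same show pattern used throughout this article for Images, VMs and Disks. This is just a sketch, assuming a flavor show sub-command that accepts the ID from the list output (check ./photon flavor --help in your build):

./photon flavor show <FLAVOR-ID-FROM-LIST>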

Step 6 - We will now create a new VM Flavor called tiny-photon-vm with a cost of 1 vCPU and 2GB of memory by running the following command:

./photon -n flavor create --name tiny-photon-vm --kind "vm" --cost "vm.cpu 1.0 COUNT, vm.memory 2.0 GB, vm.cost 1.0 COUNT"

Step 7 - We will also create a new Disk Flavor called tiny-photon-disk using the ephemeral-disk type with a cost of 1 by running the following command:

./photon -n flavor create --name tiny-photon-disk --kind "ephemeral-disk" --cost "ephemeral-disk 1.0 COUNT"

Optionally, you can also create new Flavors based on the user-defined costs. Here is an example consuming our foo and bar attributes:

./photon -n flavor create --name custom-photon-vm --kind "vm" --cost "vm.cpu 1.0 COUNT, vm.memory 2.0 GB, vm.count 1.0 COUNT, foo 5 COUNT, bar 1 COUNT"

Step 8 - If we now list our Flavors again, we should see the three new Flavors that we had just created.

test-driving-photon-controller-first-vm-7
You can also quickly view all Images and Flavors using the Photon Controller Management UI by clicking on the "Cog Wheel" in the upper right hand corner, as shown in the screenshot below.

test-driving-photon-controller-first-vm-11
Step 9 - If you wish to add additional networks to be used in Photon Controller, you can run the following command, specifying the information from your environment:

./photon -n network create --name dev-network --portgroups "VM Network" --description "Dev Network for VMs"

Step 10 - To get a list of all available networks, you can run the following command:

./photon network list

VM & Disk Creation

With all the pieces in place, we are now ready to create our first VM! If you remember from the previous section, to create a VM you must provide an Image, a VM Flavor and a Disk Flavor. We will be using the PhotonOS Image, for which we will need the ID that was generated earlier, along with the tiny-photon-vm VM Flavor and the tiny-photon-disk Disk Flavor. The --disks argument below accepts a disk name (which can be anything you want to call it), the Disk Flavor and either boot=true if it is the boot disk or a capacity in GB if it is an additional disk.

Step 1 - To create the VM we described above, run the following command, specifying the Image ID from your environment:

./photon vm create --name vm-1 --image bca0f75d-c7c6-4cbd-8859-6010c06b0359 --flavor tiny-photon-vm --disks "disk-1 tiny-photon-disk boot=true"

test-driving-photon-controller-first-vm-12
Step 2 - Using the VM ID that was provided, we can now power on the VM by running the following command:

./photon vm start b0854f44-11da-4175-b6c5-657cacbcd113

Step 3 - Once the VM has been powered on, we can also pull some additional information from the VM, such as its IP Address, by running the following command:

./photon vm show b0854f44-11da-4175-b6c5-657cacbcd113

test-driving-photon-controller-first-vm-13
Note: You may need to re-run the above command in case the IP Address does not show up immediately.

If you wish to confirm that you can log in to the PhotonOS VM that we just deployed from our Image, go ahead and SSH in as root; the default password is changeme, which you should be prompted to change. One important thing to be aware of is that all VMs created from Images are created as VMware Linked Clones (copy-on-write), which is why the process is extremely fast and efficient.
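For example, assuming the VM picked up 192.168.1.160 (substitute the IP Address returned by the vm show command above):

# log in with the default credentials; you will be prompted to change the password
ssh root@192.168.1.160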

Step 4 - We can also get additional networking details, such as the VM's MAC Address and the current state, by running the following command:

./photon vm networks b0854f44-11da-4175-b6c5-657cacbcd113

test-driving-photon-controller-first-vm-14
We can also create a VM that contains more than one disk. The example below uses our PhotonOS Image and adds a secondary 5GB disk:

./photon -n vm create --name vm-2 --image bca0f75d-c7c6-4cbd-8859-6010c06b0359 --flavor tiny-photon-vm --disks "disk-1 tiny-photon-disk boot=true, disk-2 tiny-photon-disk 5"

If we want to add additional disks after a VM has been created, we just need to create a new Disk and associate it with a Disk Flavor. In the example below, we will create a new Disk Flavor using the persistent-disk type and then create a new disk called data-disk with a capacity of 10GB:

./photon -n flavor create --name persist-disk --kind "persistent-disk" --cost "persistent-disk 1.0 COUNT"
./photon disk create --name data-disk --flavor persist-disk --capacityGB 10

test-driving-photon-controller-first-vm-15
We can get more details about a specific disk such as the state (attached/detached), capacity, etc. by running the following command and specifying the Disk ID:

./photon disk show 55f425e8-2de4-4d30-b819-64c4fd209c3c

test-driving-photon-controller-first-vm-16
To attach a Disk to a VM, we just need to run the following command specifying the Disk ID as well as the VM ID:

./photon vm attach-disk --disk 55f425e8-2de4-4d30-b819-64c4fd209c3c 4e66e4c9-693e-42b3-9e1e-0d96044a6a42

To detach a Disk from a VM, we just need to run the following command specifying the Disk ID as well as the VM ID:

./photon vm detach-disk --disk 55f425e8-2de4-4d30-b819-64c4fd209c3c 4e66e4c9-693e-42b3-9e1e-0d96044a6a42

I also wanted to quickly mention that you can provision a VM using the Photon Controller Management UI as well. To do so, you need to be in the Project view and click on the three dots next to the name of the Project, as seen in the screenshot below.

Screen Shot 2016-04-17 at 11.43.10 AM
Lastly, we will clean up the two VMs along with the disk that we had created.

./photon disk delete 55f425e8-2de4-4d30-b819-64c4fd209c3c
./photon vm stop b0854f44-11da-4175-b6c5-657cacbcd113
./photon vm delete b0854f44-11da-4175-b6c5-657cacbcd113
./photon vm delete 4e66e4c9-693e-42b3-9e1e-0d96044a6a42

Although we had to cover a few new concepts before we could provision our first VM, hopefully this gave you a better understanding of how Photon Controller works under the hood. The nice thing is that because we have already done the heavy lifting of setting up a Tenant, Resource Ticket & Project, when we look at setting up the different Cluster Orchestration solutions, the provisioning workflows should be pretty straightforward 🙂

Categories // Automation, Cloud Native, ESXi, vSphere 6.0 Tags // cloud native apps, ESXi, Photon Controller

Test driving VMware Photon Controller Part 1: Installation

04.12.2016 by William Lam // 11 Comments

Several weeks back, the Cloud Native Apps team at VMware released a significant update to their Photon Controller platform with the v0.8 release, focused on simplified management and support for Production scale. For those of you who are not familiar with Photon Controller, it is an infrastructure stack purpose-built for cloud-native applications. It is a highly distributed and scale-out control plane designed from the ground up to support multi-tenant deployments that require elasticity, high churn and self-healing. If you would like more details about the v0.8 release, be sure to check out this blog post here by James Zabala, Product Manager in the Cloud Native Apps team.

photon-controller-architecture
One of the most visible enhancements in the v0.8 release is the introduction of a UI for installing and managing Photon Controller. Previously, the only way to deploy Photon Controller was using an already pre-configured appliance that required customers to have a particular network configuration for their infrastructure. Obviously, this was not ideal and it made it challenging for customers to evaluate Photon Controller in their own environment. With this new update, customers can now easily deploy Photon Controller into their own unique environment using a UI provided by a Virtual Appliance (OVA). This Virtual Appliance is only used for the initial deployment of Photon Controller and is no longer needed afterwards. Once Photon Controller is up and running, you can manage it using either the CLI or the new management UI.

In this first article, I will take you through the steps of deploying Photon Controller onto an already provisioned ESXi host. We will have a quick look at the Photon CLI and how you can interact with Photon Controller, and lastly, we will also take a look at the new Photon Controller Management UI. In future articles, we will look at deploying our first VM using Photon Controller as well as running through the different cluster orchestration solutions that Photon Controller integrates with.

  • Test driving VMware Photon Controller Part 1: Installation
  • Test driving VMware Photon Controller Part 2: Deploying first VM
  • Test driving VMware Photon Controller Part 3a: Deploying Kubernetes
  • Test driving VMware Photon Controller Part 3b: Deploying Mesos
  • Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

To start using Photon Controller, you will need at least one physical ESXi 6.x host (4 vCPU / 16GB memory / 50GB storage) with some basic networking capabilities, which you can read more about here. Obviously, if you really want to see Photon Controller in action and what it can do, having additional hosts will definitely help. If you do not have a dedicated ESXi host for use with Photon Controller, the next best option is to leverage Nested ESXi. The more resources you can allocate to the Nested ESXi VM, the better your experience will be, and the more of the cluster orchestration workflows you will be able to exercise. If you have access to a physical ESXi host, you can skip Steps 3 and 4.

For this exercise, I will be using my Apple Mac Mini which is running the latest version of ESXi 6.0 Update 2 and has 16GB of available memory and 100+GB of local storage.

Deploying Photon Controller

Step 1 - Download both the Photon Controller Installer OVA as well as the Photon CLI for your OS platform from here.

Step 2 - Deploy the Photon Controller Installer OVA using either the ovftool CLI directly against an existing ESXi host or using the vSphere Web/C# Client connected to a vCenter Server. For more detailed instructions, please have a look at this blog article here.

Step 3 (optional) - Download the Nested ESXi 6.x Virtual Appliance, which you can find here along with instructions on how to deploy the Nested ESXi VA. Make sure the Nested ESXi 6.x VA is version 5.0, as earlier versions will not work. You can refer to the screenshot in the next step if you are wondering where to look.

Step 4 (optional) - Deploy the Nested ESXi OVA with at least 4 vCPU and 16GB of memory, and increase the storage for the 3rd VMDK to at least 50GB. If you have vCenter Server, you can deploy using either the vSphere Web or C# Client as shown in the screenshot below:

photon-controller-using-nested-esxi-16
Make sure you enable SSH (currently required by Photon Controller) and enable the local datastore unless you have shared storage to connect to the Nested ESXi VM (VSAN is currently not supported with Photon Controller at this time). If you only have an ESXi host, you can deploy using the ovftool CLI, which can be downloaded here, following the instructions found here.

Note: If you have more than one Nested ESXi VM, you will need to set up shared storage or you may run into issues when images are being replicated across the hosts. The other added benefit is that you are not wasting local storage just to replicate the same images over and over.

At this point, you should have the Photon Controller Installer VM running and at least one physical or Nested ESXi host powered on and ready to go.

UPDATE (04/25/16): Please have a look at this article on How to override the default CPU/Memory when deploying Photon Controller Management VM? which can be very helpful for resource constrained environments.

Step 6 - Next, open a browser to the IP Address of your Photon Controller Installer VM, whether that is an IP Address you had specified or one that was automatically obtained via DHCP. You should be taken to the installer screen as seen in the screenshot below.

testing-driving-photon-controller-part1-0
Step 7 - Click on the "Get Started" button and then accept the EULA.

Step 8 - The next section is "Management", where you will define the ESXi host(s) that will run the Photon Controller Management VMs. If you only have one ESXi host, you will also want to check the "Also use as Cloud Host" box, in which case the ESXi host will be used to run both the Photon Controller Management VM and the workload VMs. In a real Production environment, you will most likely want to separate these out as a best practice, so that you do not mix your management plane with your compute workload.

The Host IP will be the IP Address of your first ESXi host (yes, you will have to use an IP Address, as hostnames are not currently supported). Following that, you will need to provide credentials for the ESXi host as well as the datastore and networking configuration to which the Photon Controller VM will be deployed.

testing-driving-photon-controller-part1-1
Note: One important thing to note is that the installer will dynamically size the Photon Controller Management VM based on the available resources of the ESXi host. Simply speaking, it will consume as much of the available resources as it can (taking into consideration powered-off VMs if they exist), depending on whether it is purely a "Management" and/or a "Cloud" host.

Step 9 - The next section is "Cloud", where you will specify the additional ESXi host(s) that will run your workloads. Since we only have a single host, we already accounted for this in the previous step and will skip this. If you do have additional hosts, you can specify either individual IPs or a range of IPs. If you have hosts with different credentials, you can add additional logical groups by simply clicking the "Add host group" icon.

Step 10 - The last page is "Global Settings", where you have the ability to configure some of the advanced options. For a minimal setup, you only need to specify the shared storage for the images and deploy a load balancer, which is part of the installer itself. If you only have a single host, you can specify the name of your local datastore or the shared datastore you have already mounted on your ESXi host. In my environment, the datastore name is datastore1. If you have multiple ESXi hosts that *only* have local datastores, make sure they are uniquely named, as there is a known bug where different hosts cannot have the same datastore name. In this case, you would list all the datastore names in the box (e.g. datastore1, datastore2).

Make sure to also check the box "Allow cloud hosts to use image datastore for VM Storage" if you wish to allow VMs to be deployed to these datastores as well. All other settings are optional, including deploying the Lightwave identity service; you can refer to the documentation for more details.

testing-driving-photon-controller-part1-2
Step 11 - Finally, before you click on the "Deploy" button, I recommend that you export your current configuration. This allows you to easily adjust the configuration without having to re-enter it into the UI, and if you hit a failure you can easily re-try. This is a very handy feature and I hope to see it in other VMware-based installers. Once you are ready, go ahead and click on the "Deploy" button.

testing-driving-photon-controller-part1-3
Depending on your environment and resources, the deployment can take anywhere from 5-10 minutes. The installer will discover your ESXi hosts and the resources you specified earlier; it will then install an agent on each of the ESXi hosts which allows Photon Controller to communicate with them, deploy the Photon Controller Management VM and finally upload the necessary images from the Photon Controller Installer VM over to the Image Datastores. If everything was successful, you should see the success screen shown in the screenshot above.

Note: If you run into any deployment issues, the most common cause is resource related. If you did not size the Nested ESXi VM with at least the minimal configuration, you will definitely run into issues. If you do run into this situation, go ahead and re-size your Nested ESXi VMs and then re-initialize the Photon Controller Installer VM by jumping to the Troubleshooting section at the bottom of this article, where I document the process.

Exploring Photon CLI

At this point, we will switch over to the Photon CLI that you downloaded earlier to interact with the Installer VM and get some information about our deployed Photon Controller instance. The Photon CLI uses the Photon REST API, so you could also interact with the system using the API rather than the CLI. We will quickly cover the REST API later in this article in case you are interested in using it.

Step 1 - Another way to verify that our deployment was successful is to point our Photon CLI to the IP Address of the Photon Controller Installer VM by running the following command:

./photon target set http://192.168.1.250

testing-driving-photon-controller-part1-4
Step 2 - Here, we will be able to list any deployments performed by the Installer VM by running the following command:

./photon deployment list

Step 3 - Using the deployment ID from previous step, we can then get more details about a given deployment by running the following command and specifying the ID:

./photon deployment show de4d276f-16c1-4666-b586-a800dc83d4d6

testing-driving-photon-controller-part1-5
As you can see from the output, we get a nice summary of the Photon Controller instance that we just deployed. What you are looking for here is that the State property shows "Ready", which means we are now ready to start using the Photon Controller platform. From here, we can also see the IP Address of the load balancer that was set up for us within the Photon Controller Management VM, which in this example is 192.168.1.150.

Step 4 - To interact with our Photon Controller instance, we will need to point the Photon CLI to the IP Address of the load balancer and specify port 28080. If you had enabled authentication using the Lightwave identity service, you would then use port 443 instead.

./photon target set http://192.168.1.150:28080

Step 5 - You can also check the state of the overall system and its various components once you have pointed the CLI at your Photon Controller instance by running the following command:

./photon system status

testing-driving-photon-controller-part1-6
Step 6 - If you want to get the list of ESXi hosts that are part of a given deployment, we can use the deployment ID from Step 2 and run the following command, which will give you some basic information including whether each ESXi host is serving as a "Management" or "Cloud" host:

./photon deployment list-hosts de4d276f-16c1-4666-b586-a800dc83d4d6

testing-driving-photon-controller-part1-7
Step 7 - To show more details about a given ESXi host, we just need to take the host ID from the previous step and then run the following command:

./photon host show ce37fca9-c8c6-4986-bb47-b0cf48fd9724

testing-driving-photon-controller-part1-8
Note: I noticed that the ESXi host's root password is displayed in this output. I have already reported this internally and it will be removed in a future update, as the password should not be displayed, especially in plaintext.

Hopefully this gives you a quick primer on how the Photon CLI works and how you can easily interact with a given Photon Controller deployment. If you would like more details on the Photon CLI, be sure to check out the official documentation here.

Exploring Photon Controller API

Photon Controller also provides a REST API, which you can explore using the built-in Swagger interface. You can connect to it by opening a browser to the following address: https://[photon-controller-load-balancer]:9000/api. For those of you who have not used Swagger before, it's a tool that allows you to easily test drive the underlying API and provides interactive documentation on the specific APIs that are available. This is a great way to learn about the Photon Controller API, and it allows you to try it out without having to write a single line of code.
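If you would rather poke at the API from a terminal, you can also use curl against the load balancer. The example below is only a sketch: it assumes the API is served on the same port 28080 endpoint we targeted with the CLI and that resources such as /status and /tenants are exposed there, so verify the exact paths in the Swagger interface first:

# check overall system status (hypothetical path, confirm in Swagger)
curl -s http://192.168.1.150:28080/status
# list tenants (hypothetical path, confirm in Swagger)
curl -s http://192.168.1.150:28080/tenants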

testing-driving-photon-controller-part1-9

Exploring Photon Controller UI

Saving the best for last, we will now take a look at the new Photon Controller Management UI. To access the UI, you just need to open a browser to the IP Address of the Photon Controller load balancer. In this example, it is 192.168.1.150, and once loaded, you should be taken to the main dashboard.

testing-driving-photon-controller-part1-10
If you recall the earlier Photon CLI examples, we had to run through several commands to get the overall system status as well as the list of ESXi hosts participating in either the "Management" or "Cloud" host role. With the UI, this is literally a single click!

testing-driving-photon-controller-part1-11
There are other objects within the UI that you may notice while exploring, but we will save that for the next article, in which we will walk through the process of provisioning your first Virtual Machine using Photon Controller.

Troubleshooting

Here are some useful things I learned from the Photon Controller team while troubleshooting some of my initial deployments.

The following logs are useful to look at during a failed deployment and will usually give some hints as to what happened. You can find them by logging into the Photon Controller Installer VM (see the example after the list):

  • /var/log/esxcloud/management-api/management-api.log
  • /var/log/esxcloud/deployer/deployer.log
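For example, after SSHing into the Installer VM, you can follow the deployer log while a deployment is in progress, or search the management API log for errors (a simple sketch using standard tools):

tail -f /var/log/esxcloud/deployer/deployer.log
grep -i error /var/log/esxcloud/management-api/management-api.log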

If you need to restart or re-deploy using the Photon Controller Installer VM, there is some cleanup that you need to do (in the future, there will be an easier way to re-initialize without going through this process). To do so, SSH to the Photon Controller Installer VM using the username esxcloud and vmware as the password. Next, change over to the root user via the su command; the password will be the one you set earlier:

su - root
rm -rf /etc/esxcloud/deployer/deployer/sandbox_18000/
rm -rf /etc/esxcloud/cloud-store/cloud-store/sandbox_19000/
reboot

Once the Photon Controller Installer VM has started back up, you will need to restart the Docker Container for the UI by running the following command:

docker restart ui_installer

This is required because the container currently does not restart correctly upon reboot. This is a known issue and will be fixed in a future update. Before opening a browser to the installer UI, you can run the following command to ensure all Docker containers have successfully started:

docker ps -a

Screen Shot 2016-04-08 at 3.31.49 PM

Categories // Automation, Cloud Native, ESXi, vSphere 6.0 Tags // cloud native apps, ESXi, Photon Controller

Functional USB 3.0 Ethernet Adapter (NIC) driver for ESXi 5.5 & 6.0

03.28.2016 by William Lam // 81 Comments

Earlier this month I wrote an article demonstrating a functional USB ethernet adapter for ESXi 5.1. This was made possible by using a custom-built driver for ESXi that was created over three years ago by a user named Trickstarter. After having re-discovered the thread several years later, I tried reaching out to the user but concluded that he/she had probably moved on, given the lack of forum activity in recent years. Over the last few weeks I have been investigating whether it was possible to compile a new version of the driver that would function with newer versions of ESXi such as our 5.5 and 6.0 releases.

UPDATE (02/12/19) - A new VMware Native Driver for USB-based NICs has just been released for ESXi 6.5/6.7, please use this driver going forward. If you are still on ESXi 5.5/6.0, you can continue using the existing driver but please note there will be no additional development in the existing vmklinux-based driver.

UPDATE (01/22/17) - For details on using a USB-C / Thunderbolt 3 Ethernet Adapter, please see this post here.

UPDATE (11/17/16) - New driver has been updated for ESXi 6.5, please find the details here.

After reaching out to a few folks internally, I was introduced to Songtao Zheng, a VMware Engineer who works on some of our USB code base. Songtao was kind enough to provide some assistance in his spare time to help with this non-sanctioned effort that I was embarking on. Today, I am pleased to announce that we now have a functional USB ethernet adapter driver based on the ASIX AX88179 that works for both ESXi 5.5 and 6.0. This effort could not have been possible without Songtao, and I just want to say thank you very much for all of your help and contributions. I think it is safe to say that the overall VMware community also thanks you for your efforts. This new capability will definitely enable new use cases for vSphere home labs that were never possible before when using platforms such as the Intel NUC or Apple Mac Mini. Thank you Songtao! I would also like to extend an additional thank you to Jose Gomes, one of my readers, who has been extremely helpful with his feedback as well as his assistance in testing the new drivers.

Now, before jumping into the goods, I do want to mention that there are a few caveats to be aware of, and I think it is important to understand them before making any purchasing decisions.

  • First and foremost, this is NOT officially supported by VMware, use at your own risk.
  • Secondly, we have observed that there is a substantial difference in transfer speeds between Transmit (Egress) and Receive (Ingress) traffic, which may or may not be acceptable depending on your workload. On Receive, the USB network adapter performs close to a native gigabit interface. However, on Transmit, the bandwidth mysteriously drops by ~50%, with very inconsistent transfer speeds. We are not exactly sure why this is the case, but given that ESXi does not officially support USB-based ethernet adapters, it is possible that the underlying infrastructure was never optimized for such devices. YMMV
  • Lastly, for the USB ethernet adapter to function properly, you will need a system that supports USB 3.0, which makes sense for this type of solution to be beneficial in a home lab. If you have a system with USB 2.0, the device will probably not work, at least based on the testing that we have done.

Note: For those interested in the required source code changes to build the AX88179 driver, I have published all of the details on my Github repo here.

Disclaimer: In case you some how missed it, this is not officially supported by VMware. Use at your own risk.

Without further ado, here are the USB 3.0 gigabit ethernet adapters that are supported with the two drivers:

  • StarTech USB 3.0 to Gigabit Ethernet NIC Adapter
  • StarTech USB 3.0 to Dual Port Gigabit Ethernet Adapter NIC with USB Port
  • j5create USB 3.0 to Gigabit Ethernet NIC Adapter (verified by reader Sean Hatfield 03/29/16)
  • Vantec CB-U300GNA USB 3.0 Ethernet Adapter (verified by VMware employee 05/19/16)
  • DUB-1312 USB 3.0 Gigabit Ethernet Adapter (verified by twitter user George Markou 07/29/16)

Note: There may be other USB ethernet adapters that use the same chipset and could also leverage this driver, but these are the only ones that have been verified.

usbnic
Here are the ESXi driver VIB downloads:

  • ESXi 5.5 Update 3 USB Ethernet Adapter Driver VIB or ESXi 5.5 Update 3 USB Ethernet Adapter Driver Offline Bundle
  • ESXi 6.0 Update 2 USB Ethernet Adapter Driver VIB or ESXi 6.0 Update 2 USB Ethernet Adapter Driver Offline Bundle
  • ESXi 6.5 USB Ethernet Adapter Driver VIB or ESXi 6.5 USB Ethernet Adapter Driver Offline Bundle

Note: Although the drivers were compiled against a specific version of ESXi, they should also work on the same major version of ESXi, but I have not done that level of testing and YMMV.

Verify USB 3.0 Support

As mentioned earlier, you will need a system that is USB 3.0 capable to be able to use the USB ethernet adapter. If you are unsure, you can plug in a USB 3.0 device and run the following command to check:

lsusb

usb3nic-0
What you are looking for is an entry stating "Linux Foundation 3.0 root hub", which shows that ESXi was able to detect a USB 3.0 port on your system. Secondly, look for the USB device you just plugged in and ensure its "Bus" ID matches that of the USB 3.0 bus. This will tell you whether your device is being claimed as a USB 3.0 device. If not, you may need to update your BIOS, as some systems have USB 2.0 enabled by default, like earlier versions of the Intel NUC as described here. You may also be running a pre-5.5 release of ESXi, which did not support USB 3.0 as mentioned here, so you may need to upgrade your ESXi host to at least 5.5 or greater.
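If you prefer to filter the output rather than scan it by eye, a quick grep for the root hub entry mentioned above works just as well (a simple sketch using the same lsusb command):

lsusb | grep "Linux Foundation 3.0 root hub"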

Install Driver

You can either install the VIB directly onto your ESXi host or create a custom ESXi ISO that includes the driver using a popular tool like ESXi Customizer by Andreas Peetz.

To install the VIB, upload the VIB to your ESXi host and then run the following ESXCLI command specifying the full path to the VIB:

esxcli software vib install -v /vghetto-ax88179-esxi60u2.vib -f

usb3nic-1
Lastly, you will need to disable the USB native driver to be able to use this driver. To do so, run the following command:

esxcli system module set -m=vmkusb -e=FALSE

You will need to reboot for the change to go into effect.
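If you want to double-check the module state before rebooting, you can list the loaded modules and filter for vmkusb (a quick sanity check using standard ESXCLI):

esxcli system module list | grep vmkusb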

To verify that the USB network adapter has been successfully claimed, run either of the following commands to list your physical NICs:

esxcli network nic list
esxcfg-nics -l

usb3nic-2
To add the USB uplink, you will need to either use the vSphere Web Client or ESXCLI to add the uplink to either a Virtual or Distributed Virtual Switch.

usb3nic-3
To do so using ESXCLI, run the following command and specify the name of your vSwitch:

esxcli network vswitch standard uplink add -u vusb0 -v vSwitch0

Uninstall Driver

To uninstall the VIB, first make sure to completely unplug the USB network adapter from the ESXi host. Next, run the following ESXCLI command, which will automatically unload the driver and remove the VIB from your ESXi host:

esxcli software vib remove -n vghetto-ax88179-esxi60u2

Note: If you try to remove the VIB while the USB network adapter is still plugged in, you may hang the system or cause a PSOD. Simply reboot the system if you accidentally get into this situation.

Troubleshooting

If you are not receiving link on the USB ethernet adapter, it is most likely that your system does not support USB 3.0. If you find a message similar to the one below in /var/log/vmkernel.log, then you are probably running USB 1.0 or 2.0.

2016-03-21T23:30:49.195Z cpu6:33307)WARNING: LinDMA: Linux_DMACheckConstraints:138: Cannot map machine address = 0x10f5b6b44, length = 2 for device 0000:00:1d.7; reason = address exceeds dma_mask (0xffffffff))
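A quick way to check for this condition is to search the log for the message directly (a simple grep based on the warning shown above):

grep "exceeds dma_mask" /var/log/vmkernel.log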

Persisting USB NIC Configurations after reboot

ESXi does not natively support USB NICs, and upon a reboot the USB NICs are not picked up until much later in the boot process, which prevents them from being associated with a VSS/VDS and their respective portgroups. To ensure things are connected properly after a reboot, you will need to add something like the following to /etc/rc.local.d/local.sh, which re-links the USB NIC along with the individual portgroups as shown in the example below.

# re-add the USB NIC as an uplink to vSwitch0
esxcfg-vswitch -L vusb0 vSwitch0
# re-associate the uplink with the individual portgroups
esxcfg-vswitch -M vusb0 -p "Management Network" vSwitch0
esxcfg-vswitch -M vusb0 -p "VM Network" vSwitch0

You will also need to run /sbin/auto-backup.sh to ensure the configuration changes are saved, and then you can issue a reboot to verify that everything is working as expected.
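For reference, the two commands mentioned above are simply:

# persist the configuration changes, then reboot to verify
/sbin/auto-backup.sh
reboot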

Summary

For platforms that have limited built-in networking capabilities such as the Intel NUC and Apple Mac Mini, customers now have the ability to add additional network interfaces to these systems. This will now open up a whole new class of use cases for vSphere based home labs that were never possible before, especially with solutions such as VSAN and NSX. I look forward to seeing what our customers can now do with these new networking capabilities.

Additional Info

Here are some additional screenshots from testing the dual USB 3.0 ethernet adapter, as well as a basic iPerf benchmark for the single USB ethernet adapter. I was not really impressed with the speeds of the dual ethernet adapter, on which I have shared some more info here. Unless you are limited on the number of USB 3.0 ports, I would probably recommend just sticking with the single-port ethernet adapter.

usb3nic-5
usb3nic-6

iPerf benchmark for Ingress traffic (single port USB ethernet adapter):
usb3nic-7
iPerf benchmark for Egress traffic (single port USB ethernet adapter):
usb3nic-8

Categories // ESXi, Home Lab, Not Supported, vSphere 5.5, vSphere 6.0 Tags // ESXi 5.5, ESXi 6.0, homelab, lsusb, usb, usb ethernet adapter, usb network adapter
