Getting Started with Tech Preview of Docker Volume Driver for vSphere

05.31.2016 by William Lam // 8 Comments

A couple of weeks ago, I got an early sneak peek at some of the work being done in VMware's Storage and Availability Business Unit (SABU) on providing storage persistence for Docker Containers in a vSphere-based environment. Today, VMware open sourced a new Docker Volume Driver for vSphere (Tech Preview) that enables customers to easily take advantage of their existing vSphere Storage (VSAN, VMFS and NFS) and provide persistent storage access to Docker Containers running on top of the vSphere platform. Both developers and vSphere administrators will have familiar interfaces for managing and interacting with these Docker Volumes from vSphere, which we will explore further below.

The new Docker Volume Driver for vSphere is comprised of two components. The first is the vSphere Docker Volume Plugin, which is installed inside of a Docker Host (VM) and allows you to instantiate new Docker Volumes. The second is the vSphere Docker Volume Driver, which is installed on the ESXi host and handles the VMDK creation and the mapping of Docker Volume requests back to the Docker Hosts. If you have shared storage across your ESXi hosts, a VM on one ESXi host can create a Docker Volume and a completely different VM on another ESXi host can mount the exact same Docker Volume. Below is a diagram to help illustrate the different components that make up the Docker Volume Driver for vSphere.
[Screenshot: docker-volume-driver-for-vsphere-00]
Below is a quick tutorial on how to get started with the new Docker Volume Driver for vSphere.

Pre-Requisites

  • vSphere ESXi 6.0+
  • vSphere Storage (VSAN, VMFS or NFS) for ESXi host (shared storage required for multi-ESXi host support)
  • Docker Host (VM) running Docker 1.9+ (the VMware Photon 1.0 RC OVA is recommended, but Ubuntu 14.04 works as well; see the quick version check below)
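
Before proceeding, it is worth quickly confirming that your Docker Host is running a new enough Docker release:

docker --version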

Getting Started

Step 1 - Download the vSphere Docker Volume Plugin (RPM or DEB) and vSphere Docker Volume Driver VIB for ESXi

Step 2 - Install the vSphere Docker Volume Driver VIB in ESXi by SCP'ing the VIB to the ESXi host and then running the following command, specifying the full path to the VIB:

esxcli software vib install -v /vmware-esx-vmdkops-0.1.0.tp.vib -f

[Screenshot: docker-volume-driver-for-vsphere-1]
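
To confirm that the VIB was installed successfully, you can list it with standard esxcli:

esxcli software vib list | grep vmdkops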
Step 3 - Install the vSphere Docker Volume Plugin by SCP'ing the RPM or DEB file to your Docker Host (VM) and then run one of the following commands:

rpm -ivh docker-volume-vsphere-0.1.0.tp-1.x86_64.rpm
dpkg -i docker-volume-vsphere-0.1.0.tp-1.x86_64.deb

[Screenshot: docker-volume-driver-for-vsphere-2]
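
To confirm the plugin package installed correctly, you can query the package manager (the rpm form is shown below; dpkg -l | grep docker-volume-vsphere is the Ubuntu equivalent):

rpm -qa | grep docker-volume-vsphere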

Creating Docker Volumes on vSphere (Developer)

To create your first Docker Volume on vSphere, a Developer only needs access to a Container Host (VM), such as PhotonOS, that has the vSphere Docker Volume Plugin installed. They can then use the familiar Docker CLI to create a Docker Volume like they normally would; they do not need to know anything about the underlying infrastructure.

Run the following command to create a new Docker Volume called vol1 with the capacity of 10GB using the new vmdk driver:

docker volume create --driver=vmdk --name=vol1 -o size=10gb

We can list all the Docker Volumes that are available by running the following command:

docker volume ls

We can also inspect a specific Docker Volume by running the following command and specifying the name of the volume:

docker volume inspect vol1

[Screenshot: docker-volume-driver-for-vsphere-3]
Let's actually do something with this volume now by attaching it to a simple Busybox Docker Container with the following command:

docker run --rm -it -v vol1:/mnt/volume1 busybox

[Screenshot: docker-volume-driver-for-vsphere-4]
As you can see from the screenshot above, I have now successfully accessed the Docker Volume that we created earlier and I am able to write to it. If you have another VM that resides on the same underlying shared storage, you can also mount the Docker Volume you just created from that system.
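
To see the persistence for yourself, here is a minimal sketch: write a file from inside the Busybox container, then mount the same volume from a second Docker Host (VM) on the same shared datastore and read it back:

# inside the Busybox container from the previous step
echo "persistent data" > /mnt/volume1/test.txt
exit

# from a second Docker Host (VM) on the same shared datastore
docker run --rm -it -v vol1:/mnt/volume1 busybox cat /mnt/volume1/test.txt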

Pretty straightforward and easy, right? Happy Developers 🙂

Managing Docker Volumes on vSphere (vSphere Administrator)

For the vSphere Administrators, you must be wondering: did I just give my Developers full access to the underlying vSphere Storage to consume as much storage as possible? Of course not, we have not forgotten about our VI Admins and we have some tools to help. Today, there is a CLI utility located at /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py which runs directly in the ESXi Shell (hopefully this will turn into an API in the future) and provides visibility into how much storage is being consumed (provisioned and used) by the individual Docker Volumes, as well as who created them and their respective Virtual Machine mappings.

Let's take a look at a quick example by logging into the ESXi Shell. To view the list of Docker Volumes that have been created, run the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls

You should see the name of the Docker Volume that we created earlier and the vSphere Datastore on which it was provisioned. At the time of writing, these are the only two properties displayed by default. You can add additional columns using the -c option, as in the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls -c volume,datastore,created-by,policy,attached-to,capacity,used

[Screenshot: docker-volume-driver-for-vsphere-5]
Now we get much more information, such as which VM created the Docker Volume, the BIOS UUID of the VM the Docker Volume is currently attached to, the VSAN VM Storage Policy that was used (applicable to VSAN environments only), and the provisioned and used capacity. In my opinion, this should be the default set of columns; it is something I have fed back to the team, so perhaps this will be the default when the Tech Preview is released.

One thing to be aware of is that the Docker Volumes (VMDKs) are automatically provisioned onto the same underlying vSphere Datastore as the Docker Host VM (which makes sense, given that it needs to be able to access them). In the future, it may be possible to specify where you want your Docker Volumes to be provisioned. If you have any feedback on this, be sure to leave a comment on the Issues page of the GitHub project.

Docker Volume Role Management

Although not yet implemented in the Tech Preview, it looks like VI Admins will also have the ability to create Roles that restrict the types of Docker Volume operations that a given set of VM(s) can perform as well as the maximum amount of storage that can be provisioned.

Here is an example of what the command would look like:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py role create --name DevLead-Role --volume-maxsize 100GB --rights create,delete,mount --matches-vm photon-docker-host-*

Docker Volume VSAN VM Storage Policy Management

Since VSAN is one of the supported vSphere Storage backends for the new Docker Volume Driver, VI Admins will also have the ability to create custom VSAN VM Storage Policies that can then be specified during Docker Volume creation. Let's take a look at how this works.

To create a new VSAN Policy, you will need to specify the name of the policy and provide the set of VSAN capabilities, formatted using the same syntax found in the esxcli vsan policy getdefault command. Here is a mapping of the VSAN capability descriptions to their attribute keys:

VSAN Capability Description       | VSAN Capability Key
Number of failures to tolerate    | hostFailuresToTolerate
Number of disk stripes per object | stripeWidth
Force provisioning                | forceProvisioning
Object space reservation          | proportionalCapacity
Flash read cache reservation      | cacheReservation

Run the following command to create a new VSAN Policy called FTT=0, which sets Number of Failures to Tolerate to 0 and Force Provisioning to true:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy create --name FTT=0 --content '(("hostFailuresToTolerate" i0) ("forceProvisioning" i1))'

[Screenshot: docker-volume-driver-for-vsphere-6]
If we now go back to our Docker Host, we can create a second Docker Volume called vol2 with a capacity of 20GB that uses our new FTT=0 VSAN Policy, by running the following command:

docker volume create --driver=vmdk --name=vol2 -o size=20gb -o vsan-policy-name=FTT=0

We can also easily see which VSAN Policies are in use by listing all policies with the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy ls

[Screenshot: docker-volume-driver-for-vsphere-7]
All VSAN Policies and Docker Volumes (VMDKs) that are created are stored under a folder called dockvols in the root of the vSphere Datastore, as shown in the screenshot below.

[Screenshot: docker-volume-driver-for-vsphere-8]
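
If you want to poke around yourself from the ESXi Shell, you can browse the datastore directly (substituting your own datastore name for the hypothetical datastore1):

ls -l /vmfs/volumes/datastore1/dockvols/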
Hopefully this gave you a nice overview of what the Docker Volume Driver for vSphere can do in its first release. Remember, this is still a Tech Preview and our Engineers would love to get your feedback on the things you like, new features you would like to see, or things we can improve on. The project is on GitHub, which you can visit here, and if you have any questions or run into bugs, be sure to submit an issue here or contribute back!

Categories // Automation, Cloud Native, Docker, ESXi, VSAN, vSphere Tags // cloud native apps, container, Docker, docker volume, ESXi, nfs, vmdkops_admin.py, vmfs, VSAN

Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

04.28.2016 by William Lam // 3 Comments

In this final article, we will now take a look at deploying a Docker Swarm Cluster running on top of Photon Controller.

[Screenshot: test-driving-photon-controller-docker-swarm-cluster]
A minimal deployment of a Docker Swarm Cluster consists of 3 Virtual Machines: 1 Master, 1 etcd and 1 Slave. If you only have 16GB of memory on your ESXi host, then you will need to override the default VM Flavor used, which is outlined in Step 1. If you have more than 16GB of memory, then you can skip Step 1 and move directly to Step 2.

Deploying Docker Swarm Cluster

Step 1 - If you have not already created the cluster-tiny-vm VM Flavor from the previous article, which consists of 1vCPU/1GB memory, please run the following command:

./photon -n flavor create --name cluster-tiny-vm --kind "vm" --cost "vm 1 COUNT,vm.flavor.cluster-other-vm 1 COUNT,vm.cpu 1 COUNT,vm.memory 1 GB,vm.cost 1 COUNT"

Step 2 - Download the Swarm VMDK from here

Step 3 - We will now upload our Swarm image, making a note of the ID that is generated after the upload completes, by running the following command:

./photon -n image create photon-swarm-vm-disk1.vmdk -n photon-swarm-vm.vmdk -i EAGER
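
If you lose track of the generated ID, you should be able to retrieve it again by listing the uploaded images (assuming the photon CLI's image list subcommand):

./photon image list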

Step 4 - Next, we will also need the ID of our Photon Controller deployment, as it will be required in the next step. Retrieve it by running the following command:

./photon deployment list

Step 5 - We will now enable the Docker Swarm Cluster Orchestration on our Photon Controller instance by running the following command and specifying the ID of your deployment as well as the ID of the Swarm image from the previous two steps:

./photon -n deployment enable-cluster-type cc49d7f7-b6c4-43dd-b8f3-fe17e6648d0f -k SWARM -i 13ae437d-3fd1-48a3-9d14-287b9259cbad

[Screenshot: test-driving-photon-controller-docker-swarm-0]
Step 6 - We are now ready to spin up our Docker Swarm Cluster by running the following command and substituting the network information for your environment. We are only going to deploy a single Swarm Slave (if you have additional resources, you can spin up more, or you can always re-size the cluster after it has been deployed; see the sketch after the screenshot below). Do not forget to override the default VM Flavor by specifying the -v option with the name of the VM Flavor we created earlier, cluster-tiny-vm. You can just hit enter when prompted for the additional etcd IP Addresses.

./photon cluster create -n swarm-cluster -k SWARM --dns 192.168.1.1 --gateway 192.168.1.1 --netmask 255.255.255.0 --etcd1 192.168.1.45 -s 1 -v cluster-tiny-vm

[Screenshot: test-driving-photon-controller-docker-swarm-1]
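
As mentioned above, you can scale out the number of Slave VMs after deployment. A minimal sketch, assuming the photon CLI's cluster resize subcommand and substituting your own cluster ID and desired Slave count:

./photon cluster resize 276b6934-6eb5-42fd-9fb1-031e311b3c45 3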
Step 7 - The process can take a few minutes and you should see a message like the one shown above which prompts you to run the cluster show command to get more details about the state of the cluster.

./photon cluster show 276b6934-6eb5-42fd-9fb1-031e311b3c45

[Screenshot: test-driving-photon-controller-docker-swarm-2]
At this point, you have successfully deployed a Docker Swarm Cluster running on Photon Controller. What you will be looking for in this screen is the IP Address of the Master VM, which we will need in the next section if you plan to explore Docker Swarm a bit more.

Exploring Docker Swarm

To interact with your newly deployed Docker Swarm Cluster, you will need to ensure that you have a Docker client that matches the Docker version running in the Docker Swarm Cluster, which today is 1.20. The easiest way is to deploy PhotonOS 1.0 TP2 using either the ISO or OVA.

To verify that you have the correct Docker client version, you can just run the following command:

docker version

[Screenshot: test-driving-photon-controller-docker-swarm-5]
Once you have verified that your Docker client matches the version, we will set the DOCKER_HOST variable to point to the IP Address of the Master VM, which you can find above in Step 7. When you have identified the IP Address, go ahead and run the following command to set the variable:

export DOCKER_HOST=tcp://192.168.1.105:8333
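
With DOCKER_HOST set, the Docker client now talks to the Swarm manager instead of a local daemon. A quick sanity check is docker info, whose output against a Swarm manager includes the list of member nodes:

docker info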

We can run the following command to list the Docker Containers running for our Docker Swarm Cluster:

docker ps -a

[Screenshot: test-driving-photon-controller-docker-swarm-3]
Let's go ahead and download a Docker Container image which we can then run on our Docker Swarm Cluster. We will download the VMware PhotonOS Docker Container by running the following command:

docker pull vmware/photon

Once the Docker Container image has been downloaded, we can then run it with the following command:

docker run --rm -it vmware/photon

[Screenshot: test-driving-photon-controller-docker-swarm-6]
For those familiar with Docker, you can see how easy it is to interact with the Docker interface you already know. Underneath the hood, Photon Controller is automatically provisioning the necessary infrastructure needed to run your applications. This concludes our series on test driving VMware's Photon Controller. If you have made it this far, I hope you have enjoyed the series, and if you have any feedback or feature-enhancement requests for Photon Controller, be sure to file an issue on the Photon Controller GitHub page.

  • Test driving VMware Photon Controller Part 1: Installation
  • Test driving VMware Photon Controller Part 2: Deploying first VM
  • Test driving VMware Photon Controller Part 3a: Deploying Kubernetes
  • Test driving VMware Photon Controller Part 3b: Deploying Mesos
  • Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

Categories // Automation, Cloud Native, ESXi, vSphere 6.0 Tags // cloud native apps, Docker, ESXi, Photon Controller, swarm

Test driving VMware Photon Controller Part 3b: Deploying Mesos

04.26.2016 by William Lam // 4 Comments

In the previous article, we demonstrated the first Cluster Orchestration solution supported by Photon Controller by deploying a fully functional Kubernetes Cluster using Photon Controller. In this article, we will now look at deploying a Mesos Cluster using Photon Controller.

[Screenshot: test-driving-photon-controller-mesos-cluster]
The minimal deployment of a Mesos Cluster in Photon Controller consists of 6 Virtual Machines: 3 Masters, 1 Zookeeper, 1 Marathon and 1 Slave. If you only have 16GB of memory on your ESXi host, then you will need to override the default VM Flavor when deploying the Mesos Cluster. If you have more than 16GB of available memory, then you can skip Step 1 and move directly to Step 2.

Deploying Mesos Cluster

Step 1 - If you have not already created the cluster-tiny-vm VM Flavor from the previous article, which consists of 1vCPU/1GB memory, please run the following command:

./photon -n flavor create --name cluster-tiny-vm --kind "vm" --cost "vm 1 COUNT,vm.flavor.cluster-other-vm 1 COUNT,vm.cpu 1 COUNT,vm.memory 1 GB,vm.cost 1 COUNT"

Step 2 - Download the Mesos VMDK from here

Step 3 - We will now upload our Mesos image, making a note of the ID that is generated after the upload completes, by running the following command:

./photon -n image create photon-mesos-vm-disk1.vmdk -n photon-mesos-vm.vmdk -i EAGER

Step 4 - Next, we will also need the ID of our Photon Controller deployment, as it will be required in the next step. Retrieve it by running the following command:

./photon deployment list

Step 5 - We will now enable the Mesos Cluster Orchestration on our Photon Controller instance by running the following command and specifying the ID of your deployment as well as the ID of the Mesos image from the previous two steps:

./photon -n deployment enable-cluster-type 569c3963-2519-4893-969c-aed768d12623 -k MESOS -i 51c331ea-d313-499c-9d8f-f97532dd6954

[Screenshot: test-driving-photon-controller-meso-1]
Step 6 - We are now ready to spin up our Mesos Cluster by running the following command and substituting the network information for your environment. We are only going to deploy a single Mesos Slave (if you have additional resources, you can spin up more, or you can always re-size the cluster after it has been deployed). Do not forget to override the default VM Flavor by specifying the -v option with the name of the VM Flavor we created earlier, cluster-tiny-vm. You can just hit enter when prompted for the two remaining zookeeper IP Addresses.

./photon cluster create -n mesos-cluster -k MESOS --dns 192.168.1.1 --gateway 192.168.1.1 --netmask 255.255.255.0 --zookeeper1 192.168.1.45 -s 1 -v cluster-tiny-vm

[Screenshot: test-driving-photon-controller-meso-2]
Step 7 - The process can take a few minutes and you should see a message like the one shown above which prompts you to run the cluster show command to get more details about the state of the cluster.

./photon cluster show bf962c3a-28a2-435d-bd96-0313ca254667

[Screenshot: test-driving-photon-controller-meso-3]
At this point, you have now successfully deployed a Mesos Cluster running on Photon Controller. What you will be looking for in this screen is the IP Address of the Marathon VM, which is the management interface to Mesos. We will need this IP Address in the next section if you plan to explore Mesos a bit more.

Exploring Mesos

Using the IP Address obtained from the previous step, you can now open a web browser to http://[MARATHON-IP]:8080, which should launch the Marathon UI as shown in the screenshot below. If you wish to deploy a simple application using Marathon, you can follow the workflow here. Since we deployed Mesos using the tiny VM Flavor, we will not be able to exercise the final step of deploying an application running on Mesos. If you have more resources, I definitely recommend you give the workflow a try.

[Screenshot: test-driving-photon-controller-meso-4]
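
If you prefer the command line to the UI, Marathon also exposes a REST API on the same port. For example, you can list the deployed applications by running the following (substituting your Marathon VM's IP Address):

curl http://[MARATHON-IP]:8080/v2/apps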
In the next and final article of this series, we will cover the last Cluster Orchestration solution supported on Photon Controller: Docker Swarm.

  • Test driving VMware Photon Controller Part 1: Installation
  • Test driving VMware Photon Controller Part 2: Deploying first VM
  • Test driving VMware Photon Controller Part 3a: Deploying Kubernetes
  • Test driving VMware Photon Controller Part 3b: Deploying Mesos
  • Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

Categories // Automation, Cloud Native, ESXi, vSphere 6.0 Tags // cloud native apps, ESXi, Mesos, Photon Controller
