Getting Started with Tech Preview of Docker Volume Driver for vSphere

05.31.2016 by William Lam // 8 Comments

A couple of weeks ago, I got an early sneak peek at some of the work being done in VMware's Storage and Availability Business Unit (SABU) on providing persistent storage for Docker Containers in a vSphere-based environment. Today, VMware has open sourced a new Docker Volume Driver for vSphere (Tech Preview) that will enable customers to easily take advantage of their existing vSphere Storage (VSAN, VMFS and NFS) and provide persistent storage access to Docker Containers running on top of the vSphere platform. Both Developers and vSphere Administrators will have familiar interfaces for managing and interacting with these Docker Volumes from vSphere, which we will explore further below.

The new Docker Volume Driver for vSphere is comprised of two components. The first is the vSphere Docker Volume Plugin, which is installed inside of a Docker Host (VM) and allows you to instantiate new Docker Volumes. The second is the vSphere Docker Volume Driver, which is installed on the ESXi host and handles the VMDK creation and the mapping of Docker Volume requests back to the Docker Hosts. If you have shared storage across your ESXi hosts, a VM on one ESXi host can create a Docker Volume and a completely different VM on another ESXi host can mount the exact same Docker Volume. Below is a diagram to help illustrate the different components that make up the Docker Volume Driver for vSphere.
[Screenshot: docker-volume-driver-for-vsphere-00]
Below is a quick tutorial on how to get started with the new Docker Volume Driver for vSphere.

Pre-Requisites

  • vSphere ESXi 6.0+
  • vSphere Storage (VSAN, VMFS or NFS) for ESXi host (shared storage required for multi-ESXi host support)
  • Docker Host (VM) running Docker 1.9+ (recommend using the VMware Photon 1.0 RC OVA, but Ubuntu 14.04 works as well)

Getting Started

Step 1 - Download the vSphere Docker Volume Plugin (RPM or DEB) and vSphere Docker Volume Driver VIB for ESXi

Step 2 - Install the vSphere Docker Volume Driver VIB in ESXi by SCP'ing the VIB to the ESXi host and then running the following command, specifying the full path to the VIB:

esxcli software vib install -v /vmware-esx-vmdkops-0.1.0.tp.vib -f

[Screenshot: docker-volume-driver-for-vsphere-1]
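
To confirm the VIB installed correctly before moving on, you can list the installed VIBs on the host and filter for the driver (the vmdkops name comes from the VIB filename above):

esxcli software vib list | grep vmdkops
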
Step 3 - Install the vSphere Docker Volume Plugin by SCP'ing the RPM or DEB file to your Docker Host (VM) and then running one of the following commands:

rpm -ivh docker-volume-vsphere-0.1.0.tp-1.x86_64.rpm
dpkg -i docker-volume-vsphere-0.1.0.tp-1.x86_64.deb

[Screenshot: docker-volume-driver-for-vsphere-2]
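
As a quick sanity check on the Docker Host, you can confirm the plugin's service is up before creating any volumes. The service name below is my assumption based on the package name, so adjust it if your install registers it differently:

systemctl status docker-volume-vsphere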

Creating Docker Volumes on vSphere (Developer)

To create your first Docker Volume on vSphere, a Developer only needs access to a Container Host (VM) that has the vSphere Docker Volume Plugin installed, like PhotonOS for example. They can then use the familiar Docker CLI to create a Docker Volume like they normally would; there is nothing they need to know about the underlying infrastructure.

Run the following command to create a new Docker Volume called vol1 with a capacity of 10GB using the new vmdk driver:

docker volume create --driver=vmdk --name=vol1 -o size=10gb

We can list all the Docker Volumes that are available by running the following command:

docker volume ls
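
If you are reading along without the screenshots, the output should look something like this (standard docker volume ls format; vol1 is the volume we just created):

DRIVER              VOLUME NAME
vmdk                vol1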

We can also inspect a specific Docker Volume by running the following command and specifying the name of the volume:

docker volume inspect vol1

[Screenshot: docker-volume-driver-for-vsphere-3]
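
For reference, docker volume inspect returns a small JSON document; with this driver the output should be roughly of this shape (the mountpoint shown here is illustrative, not authoritative):

[
    {
        "Name": "vol1",
        "Driver": "vmdk",
        "Mountpoint": "/mnt/vmdk/vol1"
    }
]
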
Let's actually do something with this volume now by attaching it to a simple Busybox Docker Container with the following command:

docker run --rm -it -v vol1:/mnt/volume1 busybox

[Screenshot: docker-volume-driver-for-vsphere-4]
As you can see from the screenshot above, I have now successfully accessed the Docker Volume that we created earlier and I am able to write to it. If you have another VM that resides on the same underlying shared storage, you can also mount the Docker Volume you just created from that different system.
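
Here is a minimal sketch of that cross-host scenario (the host names are hypothetical): write a file to the volume from the first Docker Host, then, once that container exits, mount the same volume from a second Docker Host that sits on another ESXi host backed by the same shared datastore:

# on docker-host-01: write a file to the Docker Volume
docker run --rm -v vol1:/mnt/volume1 busybox sh -c 'echo hello > /mnt/volume1/hello.txt'

# on docker-host-02: mount the same volume and read the file back
docker run --rm -v vol1:/mnt/volume1 busybox cat /mnt/volume1/hello.txt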

Pretty straightforward and easy, right? Happy Developers 🙂

Managing Docker Volumes on vSphere (vSphere Administrator)

For the vSphere Administrators, you must be wondering: did I just give my Developers full access to the underlying vSphere Storage to consume as much storage as possible? Of course not, we have not forgotten about our VI Admins, and we have some tools to help. Today, there is a CLI utility located at /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py which runs directly in the ESXi Shell (hopefully this will turn into an API in the future) and provides visibility into how much storage is being consumed (provisioned and used) by the individual Docker Volumes, as well as who is creating them and their respective Virtual Machine mappings.

Let's take a look at a quick example by logging into the ESXi Shell. To view the list of Docker Volumes that have been created, run the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls

You should see the name of the Docker Volume that we created earlier and the vSphere Datastore it was provisioned to. At the time of writing, these are the only two default properties displayed out of the box. You can add additional columns by using the -c option, as in the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls -c volume,datastore,created-by,policy,attached-to,capacity,used

[Screenshot: docker-volume-driver-for-vsphere-5]
Now we get a bunch more information, like which VM created the Docker Volume, the BIOS UUID of the VM the Docker Volume is currently attached to, the VSAN VM Storage Policy that was used (applicable to VSAN environments only), and the provisioned and used capacity. In my opinion, this should be the default set of columns, and this is something I have fed back to the team, so perhaps this will be the default when the Tech Preview is released.

One thing to be aware of is that the Docker Volumes (VMDKs) are automatically provisioned onto the same underlying vSphere Datastore as the Docker Host VM (which makes sense, given the VM needs to be able to access it). In the future, it may be possible to specify where you want your Docker Volumes to be provisioned. If you have any feedback on this, be sure to leave a comment on the Issues page of the GitHub project.

Docker Volume Role Management

Although not yet implemented in the Tech Preview, it looks like VI Admins will also have the ability to create Roles that restrict the types of Docker Volume operations that a given set of VM(s) can perform as well as the maximum amount of storage that can be provisioned.

Here is an example of what the command would look like:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py role create --name DevLead-Role --volume-maxsize 100GB --rights create,delete,mount --matches-vm photon-docker-host-*

Docker Volume VSAN VM Storage Policy Management

Since VSAN is one of the supported vSphere Storage backends with the new Docker Volume Driver, VI Admins will also have the ability to create custom VSAN VM Storage Policies that can then be specified during Docker Volume creations. Lets take a look at how this works.

To create a new VSAN Policy, you will need to specify the name of the policy and provide the set of VSAN capabilities, formatted using the same syntax found in the esxcli vsan policy getdefault command. Here is a mapping of the VSAN capability descriptions to their attribute keys:

VSAN Capability Description        | VSAN Capability Key
-----------------------------------|-----------------------
Number of failures to tolerate     | hostFailuresToTolerate
Number of disk stripes per object  | stripeWidth
Force provisioning                 | forceProvisioning
Object space reservation           | proportionalCapacity
Flash read cache reservation       | cacheReservation

Run the following command to create a new VSAN Policy called FTT=0, which sets Number of failures to tolerate to 0 and Force provisioning to true:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy create --name FTT=0 --content '(("hostFailuresToTolerate" i0) ("forceProvisioning" i1))'

[Screenshot: docker-volume-driver-for-vsphere-6]
If we now go back to our Docker Host, we can create a second Docker Volume called vol2 with a capacity of 20GB that also uses our new VSAN Policy called FTT=0, by running the following command:

docker volume create --driver=vmdk --name=vol2 -o size=20gb -o vsan-policy-name=FTT=0

We can also easily see which VSAN Policies are in use by listing all of the policies with the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy ls

[Screenshot: docker-volume-driver-for-vsphere-7]
All VSAN Policies and Docker Volumes (VMDKs) that are created are stored in a folder called dockvols at the root of the vSphere Datastore, as shown in the screenshot below.

[Screenshot: docker-volume-driver-for-vsphere-8]
Hopefully this gave you a nice overview of what the Docker Volume Driver for vSphere can do in its first release. Remember, this is still a Tech Preview and our Engineers would love to get your feedback on the things you like, new features you would like to see, or things that we can improve on. The project is on GitHub, so if you have any questions or run into bugs, be sure to submit an issue there or contribute back!

Categories // Automation, Cloud Native, Docker, ESXi, VSAN, vSphere Tags // cloud native apps, container, Docker, docker volume, ESXi, nfs, vmdkops_admin.py, vmfs, VSAN

How to update AppCatalyst's default PhotonOS VM template w/Docker 1.9?

12.19.2015 by William Lam // 9 Comments

For those of you using VMware's free AppCatalyst hypervisor, you may have noticed that there is currently no way to update to the latest Docker 1.9 Engine/Client using Photon's package manager, Tiny Dandified YUM (tdnf). Although you can manually install Docker 1.9 on any PhotonOS VM deployed from AppCatalyst, the issue is that all VMs based off of the default PhotonOS template will still be running version 1.8, which means you would need to repeat this on every new VM you create.

Not ideal at the moment, but there is a simple fix, which is to update the default PhotonOS template with Docker 1.9. We can do this easily by leveraging AppCatalyst itself to update itself 😉 or rather its default PhotonOS VMDK. Below are the manual steps if you wish to walk through this yourself, but if you would rather just run a simple script that completely automates the entire process, jump straight to the bottom of this article 🙂

Step 1 - Create a new temp PhotonOS VM (in my example, I'm calling it PHOTON-DOCKER-1.9) which we will use to install Docker 1.9. You can do so by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst vm create PHOTON-DOCKER-1.9

[Screenshot: update-docker-client-to-19-in-appcatalyst-photonos-template-1]
Step 2 - Power on the temp PhotonOS VM that you just created by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst vmpower on PHOTON-DOCKER-1.9

[Screenshot: update-docker-client-to-19-in-appcatalyst-photonos-template-2]
Step 3 - Retrieve the IP Address of the temp PhotonOS by running the following command (this could take up to a minute or so to display):

/opt/vmware/appcatalyst/bin/appcatalyst guest getip PHOTON-DOCKER-1.9

[Screenshot: update-docker-client-to-19-in-appcatalyst-photonos-template-3]
Step 4 - We can now SSH to the temp PhotonOS using the IP Address from the previous step. You will need to specify the private key to log in to the PhotonOS by running the following command:

ssh -i /opt/vmware/appcatalyst/etc/appcatalyst_insecure_key photon@[IP-ADDRESS]

[Screenshot: update-docker-client-to-19-in-appcatalyst-photonos-template-4]
Step 5 - Now we will download and install Docker 1.9, but before doing so we also need to grab the "tar" utility, as it does not seem to be part of the default PhotonOS. Here is a one-liner that will automatically install the tar utility and then perform the download and install of Docker (thanks to Massimo Re'Ferre for his original install steps, just polishing up his awesomeness). Copy and paste the snippet below and run it inside of the PhotonOS:

sudo tdnf -y install tar;curl -O https://get.docker.com/builds/Linux/x86_64/docker-1.9.1.tgz;tar -zxvf docker-1.9.1.tgz;sudo systemctl stop docker;sudo cp usr/local/bin/docker /usr/bin/docker;sudo systemctl start docker;rm -rf usr/;rm -f docker-1.9.1.tgz;exit

[Screenshot: update-docker-client-to-19-in-appcatalyst-photonos-template-5]
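
If you find the one-liner hard to read, here is the same sequence broken out line by line with comments (functionally identical, just reformatted):

sudo tdnf -y install tar                                             # tar is not part of the default PhotonOS install
curl -O https://get.docker.com/builds/Linux/x86_64/docker-1.9.1.tgz  # download the Docker 1.9.1 release tarball
tar -zxvf docker-1.9.1.tgz                                           # extracts the binary to usr/local/bin/docker
sudo systemctl stop docker                                           # stop the running Docker 1.8 daemon
sudo cp usr/local/bin/docker /usr/bin/docker                         # replace the Docker binary with 1.9.1
sudo systemctl start docker                                          # start the daemon back up on 1.9.1
rm -rf usr/ docker-1.9.1.tgz                                         # clean up the extracted files and tarball
exit
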
Step 6 - Now that we have our PhotonOS updated with Docker 1.9, we just need to shut it down and replace the VMDK with AppCatalyst's default PhotonOS VMDK. Run the following command to find the name of your temp PhotonOS VMDK.

find "${HOME}/Documents/AppCatalyst/PHOTON-DOCKER-1.9" -name '*.vmdk'

Once you have the VMDK name, you should backup the original AppCatalyst's PhotonOS VMDK by running the following command:

sudo mv /opt/vmware/appcatalyst/photonvm/photon-disk1-cl1.vmdk /opt/vmware/appcatalyst/photonvm/photon-disk1-cl1.vmdk.bak

Next, we will copy our temp PhotonOS VMDK to AppCatalyst's default PhotonOS directory and replace its original VMDK by running the following command:

sudo cp ${HOME}/Documents/AppCatalyst/PHOTON-DOCKER-1.9/photon-disk1-cl2.vmdk /opt/vmware/appcatalyst/photonvm/photon-disk1-cl1.vmdk

Finally, we need to make sure the new VMDK has the correct permissions by running the following command:

sudo chmod 644 /opt/vmware/appcatalyst/photonvm/photon-disk1-cl1.vmdk

Step 7 - At this point, you can verify that your changes were successful by creating a new PhotonOS VM and once logged into the VM, you can run "docker version" to verify that you are now running Docker 1.9 as shown in the screenshot below.
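
That verification is just the standard Docker client command, run inside the new PhotonOS VM:

docker version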

[Screenshot: update-docker-client-to-19-in-appcatalyst-photonos-template-6]
For those of you who do not wish to go through the manual steps, here is a quick script called update_docker_client_in_appcatalyst.sh which automates the entire process described in the instructions above. Here is a quick screenshot of a sample execution of the script. I am hoping the Photon team will update the tdnf online repo so that you can simply run tdnf -y update docker in the future.

[Screenshot: update-docker-client-to-19-in-appcatalyst-photonos-template-7]

Categories // Apple, Automation, Cloud Native, Docker Tags // appcatalyst, Docker, Photon, tdnf

Docker Container for the Ruby vSphere Console (RVC)

11.08.2015 by William Lam // 2 Comments

The Ruby vSphere Console (RVC) is an extremely useful tool for vSphere Administrators and has been bundled as part of vCenter Server (Windows and the vCenter Server Appliance) since vSphere 6.0. One feature that is only available in the VCSA's version of RVC is the VSAN Observer which is used to capture and analyze performance statistics for a VSAN environment for troubleshooting purposes.

For customers who are still using the Windows version of vCenter Server and wish to leverage this tool, the general recommendation is to deploy a standalone VCSA just for the VSAN Observer capability, which does not require any additional licensing. Although it only takes 10 minutes or so to set up, having to download and deploy a full-blown VCSA just to use the VSAN Observer is definitely not ideal, especially if you are resource constrained in your environment. You also may only need the VSAN Observer for a short amount of time, but it could take you longer to deploy, and in a troubleshooting situation, time is of the essence.

I recently came across an internal Socialcast thread where one of the suggestions was to build a tiny Photon OS VM that already contained RVC. Rather than building a Photon OS image specific to RVC, why not just create a Docker Container for RVC? That way you could pull down the Docker Container from Photon OS or any other system that has Docker installed. In fact, I had already built a Docker Container for some handy VMware utilities, so it was simple enough to also have an RVC Docker Container.

The one challenge I had was that the current RVC GitHub repo does not contain the latest vSphere 6.x changes. The fix was simple: I just copied the latest RVC files from a vSphere 6.0 Update 1 deployment of the VCSA (/opt/vmware/rvc and /usr/bin/rvc) and used those to build my RVC Docker Container, which is now hosted on Docker Hub here and includes the Dockerfile in case someone is interested in how I built it.

To use the RVC Docker Container, you just need access to a Linux Container Host, for example VMware Photon OS, which can be deployed using an ISO or OVA. For instructions on setting that up, please take a look here; it should only take a minute or so. Once logged in, you just need to run the following commands to pull down the RVC Docker Container and start the container:

docker pull lamw/rvc
docker run --rm -it lamw/rvc

[Screenshot: ruby-vsphere-console-docker-container-1]
As seen in the screenshot above, once the Docker Container has started, you can access RVC like you normally would. Below is a quick example of logging into one of my VSAN environments and using RVC to run the VSAN Health Check command.

[Screenshot: ruby-vsphere-console-docker-container-0]
If you wish to run the VSAN Observer with its live web server, you will need to map a port on the Linux Container Host to the VSAN Observer port (8010 by default) when starting the RVC Docker Container. To keep things simple, I would recommend mapping 80->8010, which you can do by running the following command:

docker run --rm -it -p 80:8010 lamw/rvc

Once the RVC Docker Container has started, you can then start the VSAN Observer with the --run-webserver option, and if you connect to the IP Address of your Linux Container Host using a browser, you should see the VSAN Observer Stats UI.
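
For reference, the RVC command looks roughly like the following; the cluster path here is hypothetical, so substitute the path to your own VSAN cluster (you can tab-complete it from the RVC prompt):

vsan.observer ~/computers/VSAN-Cluster --run-webserver --force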

Hopefully this will come in handy for anyone who needs to quickly access RVC.

Categories // Docker, VSAN, vSphere 6.0 Tags // container, Docker, Photon, ruby vsphere console, rvc, vcenter server appliance, VCSA, vcva, VSAN, VSAN 6.1, vSphere 6.0 Update 1
