
Enhancements to VMware Tools 12 for Container Application Discovery in vSphere 

03.02.2022 by William Lam // 2 Comments

VMware Tools 12 was just released, and it adds a number of new features including support for Windows 11 and Windows Server 2022, SaltStack Minion deployment, and use of the OpenSSL 3.0 library, to name just a few.

One additional feature that is quite interesting is the enhancement to the Application Discovery feature that shipped with VMware Tools 11, which provides organizations with additional visibility into the processes running within a VM.

With VMware Tools 12, we now have a more granular method for discovering container-based processes (Docker or Containerd) running within a Linux VM, which is pretty cool if you ask me!


Similar to the Application Discovery feature, a new VM guestinfo variable has been introduced called guestinfo.vmtools.containerinfo, which will be populated with the list of running containers. By default, the polling interval is every 6 hours and only the first 100 containers are listed; these and other settings can be adjusted, as described in the official VMware documentation.
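For example, the collection behavior can be tuned through the tools.conf file inside the guest. Here is a minimal sketch based on the containerinfo settings described in the VMware Tools documentation; treat the key names and values as illustrative and verify them against the documentation for your Tools version:

# /etc/vmware-tools/tools.conf (inside the Linux guest)
[containerinfo]
# poll for running containers every hour instead of the default 6 hours (value in seconds)
poll-interval=3600
# report more than the default number of containers
max-containers=200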

For users that would like to extract this information, I have also updated my PowerCLI script Get-VMApplicationInfo.ps1 to include this additional functionality via a new function called Get-VMContainerInfo, which you can see in action in the screenshot above. In addition to console output, you can also save the information in both CSV and JSON format.
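If the new function follows the same calling convention as Get-VMApplicationInfo, usage would look something like the following; the VM name is a placeholder and the parameter names are my assumption, so check the script on GitHub for the exact syntax:

Get-VMContainerInfo -VM (Get-VM -Name "Ubuntu-Docker-VM")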

Categories // Automation, Cloud Native, Kubernetes Tags // container, Kubernetes, vmware tools

Test driving ContainerX on VMware vSphere

06.20.2016 by William Lam // 2 Comments

Over the weekend I was catching up on some of my internet readings, one of which is Timo Sugliani's excellent weekly Tech Links (I highly recommend a follow). In one of his non-VMware related links (which, funnily enough, is related to VMware), I noticed that the container startup ContainerX has just made a free version of their software available for non-production use. Given that part of the company's DNA includes VMware, I was curious to learn more about their solution and how it works, especially as it relates to VMware vSphere, which is one of the platforms it supports.

For those not familiar with ContainerX, it is described as the following:

ContainerX offers a single pane of glass for all your containers. Whether you are running on Bare Metal or VM, Linux or Windows, Private or Public cloud, you can view your entire infrastructure in one simple management console.

In this article, I will walk you through how to deploy, configure, and start using ContainerX in a vSphere environment. Although there is an installation guide included with the installer, I personally found the document a little difficult to follow, especially for someone who was only interested in a pure vSphere environment. The mention of bare metal at the beginning was confusing, as I was not sure what the actual requirements were, and I think it would have been nice to have a section that covered each platform from start to finish.

In any case, here are the high-level steps required to set up ContainerX for your vSphere environment:

  1. Deploy an Ubuntu (14.01/14.04) VM and install the CX Management Host software
  2. Deploy the CX Ubuntu OVA Template into the vSphere environment that will be used by the CX Management Host
  3. Configure a vSphere Elastic Cluster using the CX Management Host UI
  4. Deploy your Container/Application to your vSphere Elastic Cluster

Pre-Requisites:

  • Sign up for the free ContainerX offering here (the email will contain a download link to the CX Management Host installer)
  • Access to a vSphere environment w/vCenter Server
  • An already-deployed Ubuntu 14.01 or 14.04 VM (4 vCPU, 8GB vMEM & 40GB vDISK) that will be used for the CX Management Host

CX Management Host Deployment:

Step 1 - Download the CX Management Host installer for your desktop OS platform of choice. If you are using the Mac OS X installer, you will find that the cX.app fails to launch as it is not signed by an identified developer. You will need to change your security settings to allow applications downloaded from "anywhere" to be opened, which is a shame.

Step 2 - Accept the EULA and then select the "On Preconfigured Host" option, which expects you to have a pre-installed Ubuntu VM on which to install the CX Management Host software. If you have not pre-deployed the Ubuntu VM, stop here, go perform that step, and then come back.

[Screenshot: test-driving-containerx-on-vsphere-1]
Step 3 - Next, provide the IP Address/hostname and credentials of the Ubuntu VM that you have already pre-installed. You can use the "Test" option to verify that the SSH password or private key you have provided is functional before proceeding further in the installer.

[Screenshot: test-driving-containerx-on-vsphere-2]
Step 4 - After you click "Continue", the installer will remotely connect to your Ubuntu VM and start the installation of the CX Management Host software. This takes a few minutes with progress being displayed at the bottom of the screen. If the install is successful, you should see the "Install FINISHED" message.

[Screenshot: test-driving-containerx-on-vsphere-3]

Step 5 - Once the installer completes, it will automatically open a browser and take you to the login screen of the CX Management Host UI (https://IP:8085). The default credentials are admin/admin.

[Screenshot: test-driving-containerx-on-vsphere-4]
At this point, you have successfully deployed the CX Management Host. The next section will walk you through setting up the CX Ubuntu Template, which will be used by the CX Management Host to deploy your Containers and Applications.

Preparing the CX Ubuntu Template Deployment:

Before we can create a vSphere Elastic Cluster (EC), you will need to deploy the CX Ubuntu OVA Template, which will then be used by the CX Management Host to deploy CX Docker Hosts to run your Containers/Applications. When I originally went through the documentation, there was a reference to the CX Ubuntu OVA, but I was not able to find a download URL anywhere, including on ContainerX's website. I reached out to the ContainerX folks and they updated KB article 20960087 to provide a download link (appreciate the assistance over the weekend). However, it looks like their installation documentation is still missing the URL reference. In any case, you can find the download URL below for your convenience.

Step 1 - Download the CX Ubuntu OVA Template (http://update.containerx.io:8080/cx-ubuntu.ova) and deploy it (but do NOT power it on) using the vSphere Web/C# Client to the vCenter Server environment that ContainerX will be consuming.

Note: I left the default VM name, which is cx-ubuntu, as I was not sure if changing it would mess up the initial vSphere environment discovery later in the process. It would be good to know whether you can change the name.
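If you prefer the command line over the vSphere Web/C# Client, OVF Tool can perform the same deployment; here is a sketch, where the vCenter address, credentials, datastore, and inventory path are placeholders for your environment (ovftool leaves the VM powered off by default, which is what we want here):

ovftool --name=cx-ubuntu --datastore=datastore1 http://update.containerx.io:8080/cx-ubuntu.ova vi://administrator@vsphere.local@vcenter.example.com/Datacenter/host/Cluster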

Step 2 - Take a VM snapshot of the powered off CX Ubuntu VM before powering it on.

[Screenshot: test-driving-containerx-on-vsphere-7]

Creating a vSphere Elastic Cluster (EC) in ContainerX:

Step 1 - Click on the "Quick Wizard" button at the top and select the "vSphere Cluster" start button. Nice touch on the old school VMware logo 🙂

[Screenshot: test-driving-containerx-on-vsphere-5]
Step 2 - Enter your vCenter Server credentials and then click on the "Login to VC" button to continue.

[Screenshot: test-driving-containerx-on-vsphere-6]
Step 3 - Here you will specify the number of CX Docker Hosts and the compute, storage, and networking resources that they will consume. The CX Docker Hosts will be provisioned using VMware Linked Clones based off of the CX Ubuntu VM Template that we uploaded earlier. If you skipped that step, you will find that the template drop-down box is empty, and you will need to deploy the template first before you can proceed further.

[Screenshot: test-driving-containerx-on-vsphere-8]

Note: It would have been nice if, when the CX Ubuntu VM is not detected, the wizard automatically prompted you to deploy it without having to go back. I did not even realize this particular template was required, since I was not able to find the original download link in any of the instructions.

Step 4 - This step is optional, but you also have the option to create what are known as Container Pools, which allow you to set both CPU and Memory limits (with support for over-commitment) within your EC. It is not exactly clear how Container Pools work, but it sounds like these limits are applied within the CX Docker Host VMs?

[Screenshot: test-driving-containerx-on-vsphere-9]
Step 5 - Once you have confirmed the settings to be used for your vSphere EC, you can click Next to begin the creation. This process should not take too long, and once everything has been successfully deployed, you should see a success message and a "Done" button which you can click to close the wizard.

[Screenshot: test-driving-containerx-on-vsphere-10]
Step 6 - If we go back to the CX Management UI home page, we should now see our new vSphere EC, which in my example is called "vSphere-VSAN-Cluster". There is some basic summary information about the EC, including the number of Container Pools and Hosts and their utilization. You may have also noticed that there are 12 Containers being displayed in the UI, which I found a bit strange given that I had not deployed anything yet. I later realized that these are actually CX Docker Containers running within the CX Docker Hosts, which I assume provide communication back to the CX Management Host. I think it would be nice to separate these numbers to reflect "Management" versus actual "Application" Containers; the same goes for the resource utilization information.

[Screenshot: test-driving-containerx-on-vsphere-11]

Deploying a Container on ContainerX:

Under the "Applications" tab of your vSphere EC, you can deploy either a standalone Docker Container or some of the pre-defined Applications that have been bundled as part of the CX Management Host.

[Screenshot: test-driving-containerx-on-vsphere-12]
We will start off by deploying a very simple Docker Container. In this example, I will select my first Container Pool (ContainerPool-1) and then select the "A Container" button. Since we do not have a repository from which to select a Container to deploy, click on the "Launch a Container" button towards the top.

Note: I think I may have found a UI bug in which the Container Pool that you select in the drop down is not properly displayed when you go to deploy the Container or Application. For example, if you pick Container Pool 1, it will say that you are about to deploy to Container Pool 2. I found that you had to re-select the same drop down a second time for it to display properly, and it is not clear whether this is merely a cosmetic bug or whether it is actually deploying to the Container Pool that I did not specify.

Step 1 - Specify the Docker Image you wish to launch; if you do not have one off hand, you can use the PhotonOS Docker Container (vmware/photon). Specify a Container name, and optionally add additional options using the advanced settings button, such as environment variables, network ports, Docker Volumes, etc. For this example, we will keep it simple; go ahead and click on the "Launch App" button to deploy the Container.
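Under the covers, I would assume the wizard translates into an ordinary docker run against one of the CX Docker Hosts, conceptually something like the following (my assumption, not verified against the product):

docker run --name photon-test vmware/photon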

[Screenshot: test-driving-containerx-on-vsphere-13]
Step 2 - You should see that our PhotonOS Docker Container started and then shortly after exited. Not a very interesting demo, but you get the idea.

[Screenshot: test-driving-containerx-on-vsphere-14]
Note: It would be really nice to be able to get the output from the Docker Container; even running a command like "uname -a" did not return any visible output that I could see from the UI.

Deploying an Application on ContainerX:

The other option is to deploy a sample application that is pre-bundled within the CX Management Host (I assume you can add your own application, as it looks to be just a Docker Compose file). Select the Container Pool into which you wish to deploy the application from the drop down and then click on the "An Application" button. In our example, we will deploy the WordPress application.
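For reference, a pre-bundled application of this sort would presumably look like a standard Docker Compose file. Here is a minimal WordPress example in the Compose v1 syntax of that era, purely illustrative and not ContainerX's actual bundle:

wordpress:
  image: wordpress
  links:
    - mysql
  ports:
    - "8080:80"
mysql:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: wordpressdemo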

Step 1 - Select the application you wish to deploy by clicking on the "Power" icon.

[Screenshot: test-driving-containerx-on-vsphere-21]
Step 2 - Give the application a name and then click on the "Launch App" button to deploy the application.

[Screenshot: test-driving-containerx-on-vsphere-16]
Step 3 - The deployment of the application can take several minutes, but once completed, you should see a summary view like the one shown below. You can also find the details of how to reach the WordPress application that we just deployed by looking for the IP Address and the external port, as highlighted below.

[Screenshot: test-driving-containerx-on-vsphere-17]
Step 4 - To verify that our WordPress application is working, go ahead and open a new browser window, enter the IP Address and port shown in the previous step, and you should be taken to the initial WordPress setup screen.

[Screenshot: test-driving-containerx-on-vsphere-18]
If you need to access the CX Docker Hosts, whether for publishing Containers/Applications by your end users or for troubleshooting purposes, you can easily find the environment information under the "Pools" tab. There is a "Download access credentials" option which provides a zip file containing platform-specific snippets of the CX Docker Host connection information.

[Screenshot: test-driving-containerx-on-vsphere-22]
Since I use a Mac, I just need to run the env.sh script and then run my normal docker commands (this assumes you have the Docker Beta client for Mac OS X; otherwise you will need a Docker client). You can see from the screenshot below the three Docker Containers we deployed earlier.
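In other words, something like the following, where env.sh comes out of the downloaded credentials zip and exports the DOCKER_HOST (and related TLS) environment variables; the exact variables it sets are my assumption based on how such scripts typically work:

source env.sh
docker ps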

[Screenshot: test-driving-containerx-on-vsphere-23]

Summary:

Having only spent a short amount of time playing with ContainerX, I thought it was a neat solution. The installation of the CX Management Host was quick and straightforward, and I was glad to see a multi-desktop OS installer. It did take me a bit of time to figure out what the actual requirements were for a pure vSphere environment, as mentioned earlier; perhaps an end-to-end document for vSphere would have cleared all this up. The UI was easy to use and intuitive for the most part. I did find not being able to edit any of the configurations a bit annoying, and I ended up deleting and re-creating some of them. I would have liked an easier way to map between the Container Pools (Pools tab) and their respective CX Docker Hosts without having to download the credentials or navigate to another tab. I also found in certain places that the selection or navigation of objects was not very clear due to subtle transitions in the UI, which made me think there was a display bug.

I am still trying to wrap my head around the Container Pool concept. I am not sure I understand its benefits, or rather how the underlying resource management actually works. It seems that today it is only capable of setting CPU and Memory limits, which are applied within the CX Docker Host VMs? Are customers supposed to create different sized CX Docker Host VMs? I was pretty surprised that I did not see more use of the underlying vSphere Resource Management capabilities in this particular area.

The overall architecture of ContainerX for vSphere looks very similar to VMware's vSphere Integrated Containers (VIC) solution. Instead of a CX Docker Host VM, VIC has the concept of a Virtual Container Host (VCH), which is backed by a vSphere Resource Pool. VIC creates what is known as a Container VM that contains only the Container/Application, running as a VM rather than in a VM. These Container VMs are instantiated using vSphere's Instant Clone capability from a tiny PhotonOS template. Perhaps I am a bit biased here, but in addition to providing an integrated and familiar interface to each of the respective consumers, vSphere Administrators (the familiar VM construct, leveraging the same set of tools with extended Docker Container info) and Developers (simply accessing the Docker endpoint with the tools they are already using), the other huge benefit of the VIC architecture is that it allows the Container VMs to benefit from all the underlying vSphere platform capabilities. vSphere Administrators can apply granular resource and policy based management on a per Container/Application basis if needed, which is a pretty powerful capability if you ask me. It will be interesting to see if there will be deeper integration from a management and operational standpoint in the future for ContainerX.

All in all, very cool stuff from the ContainerX folks; I am looking forward to what comes next. DockerCon is also this week, and if you happen to be at the event, be sure to drop by the VMware booth, as I hear they will be showing off some pretty cool stuff. I believe the ContainerX folks will also be at DockerCon, so be sure to drop by their booth and say hello.

Categories // Automation, Cloud Native, vSphere Tags // cloud native apps, container, ContainerX, Docker, VIC, vSphere, vSphere Integrated Containers

Getting Started with Tech Preview of Docker Volume Driver for vSphere

05.31.2016 by William Lam // 8 Comments

A couple of weeks ago, I got an early sneak peek at some of the work being done in VMware's Storage and Availability Business Unit (SABU) on providing storage persistence for Docker Containers in vSphere-based environments. Today, VMware open sourced a new Docker Volume Driver for vSphere (Tech Preview) that will enable customers to easily take advantage of their existing vSphere storage (VSAN, VMFS, and NFS) and provide persistent storage access to Docker Containers running on top of the vSphere platform. Both Developers and vSphere Administrators will have familiar interfaces for managing and interacting with these Docker Volumes from vSphere, which we will explore further below.

The new Docker Volume Driver for vSphere is comprised of two components. The first is the vSphere Docker Volume Plugin, which is installed inside of a Docker Host (VM) and allows you to instantiate new Docker Volumes. The second is the vSphere Docker Volume Driver, which is installed on the ESXi host and handles the VMDK creation and the mapping of Docker Volume requests back to the Docker Hosts. If you have shared storage across your ESXi hosts, a VM on one ESXi host can create a Docker Volume and a completely different VM on another ESXi host can mount the exact same Docker Volume. Below is a diagram to help illustrate the different components that make up the Docker Volume Driver for vSphere.
[Diagram: docker-volume-driver-for-vsphere-00]
Below is a quick tutorial on how to get started with the new Docker Volume Driver for vSphere.

Pre-Requisites

  • vSphere ESXi 6.0+
  • vSphere Storage (VSAN, VMFS or NFS) for ESXi host (shared storage required for multi-ESXi host support)
  • Docker Host (VM) running Docker 1.9+ (I recommend using the VMware Photon 1.0 RC OVA, but Ubuntu 14.04 works as well)

Getting Started

Step 1 - Download the vSphere Docker Volume Plugin (RPM or DEB) and vSphere Docker Volume Driver VIB for ESXi

Step 2 - Install the vSphere Docker Volume Driver VIB on the ESXi host by SCP'ing the VIB to the host and then running the following command, specifying the full path to the VIB:

esxcli software vib install -v /vmware-esx-vmdkops-0.1.0.tp.vib -f
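Once the install completes, you can verify that the VIB is present; here is a quick check (the VIB name is my assumption based on the installer file name):

esxcli software vib list | grep vmdkops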

[Screenshot: docker-volume-driver-for-vsphere-1]
Step 3 - Install the vSphere Docker Volume Plugin by SCP'ing the RPM or DEB file to your Docker Host (VM) and then running one of the following commands:

rpm -ivh docker-volume-vsphere-0.1.0.tp-1.x86_64.rpm
dpkg -i docker-volume-vsphere-0.1.0.tp-1.x86_64.deb

[Screenshot: docker-volume-driver-for-vsphere-2]

Creating Docker Volumes on vSphere (Developer)

To create your first Docker Volume on vSphere, a Developer only needs access to a Container Host (VM), such as PhotonOS, that has the vSphere Docker Volume Plugin installed. They can then use the familiar Docker CLI to create a Docker Volume like they normally would; there is nothing they need to know about the underlying infrastructure.

Run the following command to create a new Docker Volume called vol1 with a capacity of 10GB using the new vmdk driver:

docker volume create --driver=vmdk --name=vol1 -o size=10gb

We can list all the Docker Volumes that are available by running the following command:

docker volume ls

We can also inspect a specific Docker Volume by running the following command and specifying the name of the volume:

docker volume inspect vol1
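The inspect output is the standard Docker JSON document; with this driver you would expect something along these lines (illustrative output, and the Mountpoint path in particular is my assumption):

[
    {
        "Name": "vol1",
        "Driver": "vmdk",
        "Mountpoint": "/mnt/vmdk/vol1"
    }
]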

[Screenshot: docker-volume-driver-for-vsphere-3]
Let's actually do something with this volume now by attaching it to a simple Busybox Docker Container. Run the following command:

docker run --rm -it -v vol1:/mnt/volume1 busybox

[Screenshot: docker-volume-driver-for-vsphere-4]
As you can see from the screenshot above, I have now successfully accessed the Docker Volume that we created earlier, and I am able to write to it. If you have another VM that resides on the same underlying shared storage, you can also mount the Docker Volume that you just created from that system.
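To make that concrete, here is a sketch of what the cross-host flow would look like, assuming vol1 resides on a datastore shared by both Docker Host VMs:

# On the first Docker Host VM
docker run --rm -v vol1:/data busybox sh -c "echo hello > /data/test.txt"

# On a second Docker Host VM running on a different ESXi host
docker run --rm -v vol1:/data busybox cat /data/test.txt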

Pretty straightforward and easy, right? Happy Developers 🙂

Managing Docker Volumes on vSphere (vSphere Administrator)

For the vSphere Administrators, you must be wondering: did I just give my Developers full access to the underlying vSphere storage to consume as much as possible? Of course not; we have not forgotten about our VI Admins, and we have some tools to help. Today, there is a CLI utility located at /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py which runs directly in the ESXi Shell (hopefully this will turn into an API in the future) and provides visibility into how much storage is being consumed (provisioned and used) by the individual Docker Volumes, as well as who is creating them and their respective Virtual Machine mappings.

Let's take a look at a quick example by logging into the ESXi Shell. To view the list of Docker Volumes that have been created, run the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls

You should see the name of the Docker Volume that we created earlier and the respective vSphere Datastore to which it was provisioned. At the time of writing, these were the only two default properties displayed out of the box. You can add additional columns by simply using the -c option and running the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls -c volume,datastore,created-by,policy,attached-to,capacity,used

[Screenshot: docker-volume-driver-for-vsphere-5]
Now we get a bunch more information, like which VM created the Docker Volume, the BIOS UUID of the VM that the Docker Volume is currently attached to, the VSAN VM Storage Policy that was used (applicable to VSAN environments only), and the provisioned and used capacity. In my opinion, this should be the default set of columns, and this is something I have fed back to the team, so perhaps it will be the default when the Tech Preview is released.

One thing to be aware of is that the Docker Volumes (VMDKs) will automatically be provisioned onto the same underlying vSphere Datastore as the Docker Host VM (which makes sense, given that it needs to be able to access them). In the future, it may be possible to specify where you want your Docker Volumes to be provisioned. If you have any feedback on this, be sure to leave a comment on the Issues page of the GitHub project.

Docker Volume Role Management

Although not yet implemented in the Tech Preview, it looks like VI Admins will also have the ability to create Roles that restrict the types of Docker Volume operations that a given set of VM(s) can perform as well as the maximum amount of storage that can be provisioned.

Here is an example of what the command would look like:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py role create --name DevLead-Role --volume-maxsize 100GB --rights create,delete,mount --matches-vm photon-docker-host-*

Docker Volume VSAN VM Storage Policy Management

Since VSAN is one of the supported vSphere Storage backends for the new Docker Volume Driver, VI Admins will also have the ability to create custom VSAN VM Storage Policies that can then be specified during Docker Volume creation. Let's take a look at how this works.

To create a new VSAN Policy, you will need to specify the name of the policy and provide the set of VSAN capabilities, formatted using the same syntax found in the esxcli vsan policy getdefault command. Here is a mapping of the VSAN capabilities to their attribute names:

VSAN Capability Description          VSAN Capability Key
Number of failures to tolerate       hostFailuresToTolerate
Number of disk stripes per object    stripeWidth
Force provisioning                   forceProvisioning
Object space reservation             proportionalCapacity
Flash read cache reservation         cacheReservation

Run the following command to create a new VSAN Policy called FTT=0, which sets Failures to Tolerate to 0 and Force Provisioning to true:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy create --name FTT=0 --content '(("hostFailuresToTolerate" i0) ("forceProvisioning" i1))'

[Screenshot: docker-volume-driver-for-vsphere-6]
If we now go back to our Docker Host, we can create a second Docker Volume called vol2 with a capacity of 20GB, this time also specifying our new FTT=0 VSAN Policy, by running the following command:

docker volume create --driver=vmdk --name=vol2 -o size=20gb -o vsan-policy-name=FTT=0

We can also easily see which VSAN Policies are in use by listing all policies with the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy ls

[Screenshot: docker-volume-driver-for-vsphere-7]
All VSAN Policies and Docker Volumes (VMDKs) that are created are stored under a folder called dockvols in the root of the vSphere Datastore, as shown in the screenshot below.
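For example, from the ESXi Shell you can browse that folder directly and you should see the VMDKs backing vol1 and vol2 (the datastore name is a placeholder for your environment):

ls /vmfs/volumes/<datastore-name>/dockvols/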

[Screenshot: docker-volume-driver-for-vsphere-8]
Hopefully this gave you a nice overview of what the Docker Volume Driver for vSphere can do in its first release. Remember, this is still a Tech Preview, and our Engineers would love to get your feedback on the things you like, new features you would like to see, or things that we can improve on. The project is on GitHub, which you can visit here, and if you have any questions or run into bugs, be sure to submit an issue here or contribute back!

Categories // Automation, Cloud Native, Docker, ESXi, VSAN, vSphere Tags // cloud native apps, container, Docker, docker volume, esxi, nfs, vmdkops_admin.py, vmfs, VSAN
