VUM UMDS Docker Container for vSphere 6.5

12.07.2016 by William Lam // Leave a Comment

Early last week, I published an article on how to automate the deployment of VUM's Update Manager Download Service (UMDS) in vSphere 6.5 on an Ubuntu 14.04 distribution. The interesting backstory to that script is that it started from a Docker Container I had initially built for the VUM UMDS. I found that being able to quickly spin up a UMDS instance using a Docker Container, purely from a testing standpoint, was much easier than needing to deploy a full VM, especially as I have Docker running on my desktop machines. Obviously, there are limitations with using a Docker Container, especially if you plan to use UMDS for a longer duration and need persistence. However, for quick lab purposes, it may just fit the bill, and even with Docker Containers, you can use Docker Volumes to persist the downloaded content.

You can find the Dockerfile and its respective scripts on my Github repo here: https://github.com/lamw/vum-umds-docker
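For reference, here is a minimal sketch of how the downloaded content could be persisted with a Docker Volume. The image tag (vum-umds) and the in-container patch store path (/var/lib/vmware-umds) are assumptions for illustration only; check the repo's Dockerfile and scripts for the actual values.

# Build the image from the cloned repo (the tag name is an assumption)
docker build -t vum-umds .

# Run UMDS with a named volume so downloaded patches survive container restarts
# (/var/lib/vmware-umds is an assumed patch store path, adjust to match the repo)
docker run --rm -it \
-v umds-data:/var/lib/vmware-umds vum-umds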

Below are the instructions on how to use the VUM UMDS Docker Container.

[Read more...]

Categories // Automation, Docker, vSphere 6.5 Tags // Docker, ubuntu, update manager download service, vSphere 6.5, vSphere Update Manager, vum

KMIP Server Docker Container for evaluating VM Encryption in vSphere 6.5

12.02.2016 by William Lam // 10 Comments

There are a number of vSphere Security enhancements that were introduced in vSphere 6.5 including the much anticipated VM Encryption feature. To be able to use the new VM Encryption feature, you will need to first setup a Key Management Interoperability Protocol (KMIP) Server if you do not already have one and associate it with your vCenter Server. There are plenty of 3rd party vendors who provide KMIP solutions that interoperate with the new VM Encryption feature, but it usually can take some time to get access to product evaluations.

During the vSphere Beta, VMware provided a sample KMIP Server Virtual Appliance based on PyKMIP, which allowed customers to quickly try out the new VM Encryption feature. Many of you have expressed interest in getting access to this appliance for quick evaluation purposes, and the team is currently working on providing an updated version of the appliance for customers to access. In the meantime, for those who cannot wait for the appliance or would like an alternative way of quickly standing up a sample KMIP Server, I have created a tiny (163 MB) Docker Container which can easily be spun up to provide the KMIP services. I have published the Docker Container on Docker Hub at lamw/vmwkmip. The beauty of the Docker Container is that you do not need to deploy another VM, and for resource-constrained lab environments or quick demo purposes, you could even run it directly on the vCenter Server Appliance (VCSA) as shown here (obviously not recommended for production use).

The Docker Container bundles the exact same version of PyKMIP that will be included in the virtual appliance; this is just another consumption mechanism. It is also very important to note that you should NOT be using this for any production workloads or any VMs that you care about. For actual production deployments of VM Encryption, you should be leveraging a production-grade KMIP Server, as PyKMIP stores the encryption keys in memory and they will be lost upon a restart. This is also true for the virtual appliance, so this is really for quick evaluation purposes only.

UPDATE (10/08/22) - The KMIP Docker Container is now available for both x86 and Arm platforms. Simply run docker pull lamw/vmwkmip and the correct architecture will automatically be downloaded.

Note: The version of PyKMIP is a modified version and VMware plans to re-contribute their changes back to the PyKMIP open-source project so others can also benefit.
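For a quick test, starting the container is a single docker run. The port mapping below assumes the image listens on 5696 (the standard KMIP port); that exposure is an assumption, so verify it against the image before pointing vCenter Server at it.

# Pull the image (multi-arch as of the update above) and start the sample KMIP server
docker pull lamw/vmwkmip
# 5696 is the standard KMIP port; the port the image actually exposes is an assumption
docker run --rm -d -p 5696:5696 lamw/vmwkmip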

Below are the instructions on using the KMIP Server Docker Container and how to configure it with your vCenter Server. I will assume you have worked with Docker before; if you have not, please have a look at the Docker online resources before continuing further, or wait for the virtual appliance to be posted.

[Read more...]

Categories // Home Lab, vSphere 6.5 Tags // Docker, KMIP, KMS, VM Encryption, vSphere 6.5

5 ways to run a PowerCLI script using the PowerCLI Docker Container

10.25.2016 by William Lam // 5 Comments

In case you missed the exciting update last week, the PowerCLI Core Docker Container is now hosted on Docker Hub. With just two simple commands you can now quickly spin up a PowerCLI environment in under a minute! This is really useful if you need to perform a couple of operations using the cmdlets interactively and then discard the environment once you are done. If you want to do something more advanced, like run an existing PowerCLI script and potentially persist its output (Docker Containers are stateless by default), then there are a few options to consider.

To better describe the options, let's use the following scenario. Say you have a Docker Host; this can be VMware's Photon OS or a Microsoft Windows, Linux or Mac OS X system which has the Docker Client running. The Docker Host is where you will run the PowerCLI Core Docker Container, and it also has access to a collection of PowerCLI scripts that you have created or downloaded elsewhere. Let's say these PowerCLI scripts are located in /Users/lamw/scripts and you would like them to be available within the PowerCLI Core Docker Container when it is launched, say under /tmp/scripts.

Here is a quick diagram illustrating the scenario we had just discussed.

[Diagram: 4-different-ways-to-use-powercli-core-docker-container]

Here are 5 different ways in which you can run your PowerCLI scripts within the Docker Container. Each has its pros and cons, and I will be using real sample scripts to exercise each of the options. You can download all the sample scripts from my Github repository: powerclicore-docker-container-samples

Note: Before getting started, please familiarize yourself with launching the PowerCLI Core Docker Container, which you can read more about here. In addition, you will need access to either a vCenter Server or ESXi host environment, and please also create a tiny "Dummy" VM called DummyVM, whose Notes field we will be updating with the current time.
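If you do not already have a placeholder VM, a quick way to create one from an existing PowerCLI session might look like the sketch below; the host selection and sizing values are arbitrary.

# Create a minimal placeholder VM to test against (sizing values are arbitrary)
New-VM -Name 'DummyVM' -VMHost (Get-VMHost | Select-Object -First 1) -NumCpu 1 -MemoryGB 1 -DiskGB 1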

UPDATE (04/11/18) - Microsoft has GA'ed PowerShell Core; one of the changes is that the PS binary has been renamed from powershell to pwsh. For the entrypoint parameter, you will need to specify /usr/bin/pwsh rather than /usr/bin/powershell.

Option 1:

This is the most basic and easiest method. You literally run a PowerCLI script that already contains all of the necessary information hardcoded within the script itself. This means things like credentials, as well as any required user input, can be found within the script. This is obviously simple, but it is very inflexible as you would need to edit the script before launching the container. Another downside is that you now have your vSphere credentials hardcoded inside the script, which is also not ideal from a security standpoint.
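For illustration, a fully hardcoded script along these lines might look roughly like the sketch below; the actual pcli_core_docker_sample1.ps1 in the repo may differ, and the server, credentials and VM name are placeholders.

# Everything is hardcoded -- simple, but inflexible and insecure
Connect-VIServer -Server '192.168.1.150' -User 'administrator@vsphere.local' -Password 'VMware1!' | Out-Null

# Update the Notes field of DummyVM with the current time
Get-VM -Name 'DummyVM' | Set-VM -Notes (Get-Date) -Confirm:$false

Disconnect-VIServer -Server * -Confirm:$false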

To exercise example 1, please edit the pcli_core_docker_sample1.ps1 script and update it with your environment's credentials, then run the following command:

docker run --rm -it \
-v /Users/lamw/scripts:/tmp/scripts vmware/powerclicore /tmp/scripts/pcli_core_docker_sample1.ps1

If executed correctly, the Docker container should launch, connect to your vSphere environment, update the Notes field of DummyVM with the current time and then exit. Pretty straightforward, and below is a screenshot of this example.

[Screenshot: run-powercli-scripts-using-powercli-core-docker-container-0]

Option 2:

Nobody likes hardcoding values, especially when it comes to endpoints and credentials. This next method allows us to pass in variables from the Docker command-line and make them available to the PowerCLI script inside the container as OS environment variables. This allows for greater flexibility than the previous option, but the downside is that you may potentially be exposing credentials in plaintext, which can be inspected by anyone who can perform docker run/inspect commands. You also need to update your existing PowerCLI scripts to handle the environment variable translation, which may not be ideal if you have a lot of pre-existing scripts.
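Inside the container, the script simply reads those values back out of its environment. A rough sketch (the actual pcli_core_docker_sample2.ps1 may differ):

# Read the connection details from the environment variables passed in via 'docker run -e'
Connect-VIServer -Server $env:VI_SERVER -User $env:VI_USERNAME -Password $env:VI_PASSWORD | Out-Null

# Update the Notes field of the VM named in VI_VM with the current time
Get-VM -Name $env:VI_VM | Set-VM -Notes (Get-Date) -Confirm:$false

Disconnect-VIServer -Server * -Confirm:$false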

To exercise example 2, run the following command, specifying your environment's credentials on the command-line instead:

docker run --rm -it \
-e VI_SERVER=192.168.1.150 \
-e VI_USERNAME=*protected email* \
-e VI_PASSWORD=VMware1! \
-e VI_VM=DummyVM \
-v /Users/lamw/scripts:/tmp/scripts vmware/powerclicore /tmp/scripts/pcli_core_docker_sample2.ps1

If executed correctly, you will see that the variables we defined are passed into the container, and we are able to make use of them within the PowerCLI script by simply referencing the respective environment variable names, as shown in the screenshot below.

[Screenshot: run-powercli-scripts-using-powercli-core-docker-container-1]

Option 3:

If you have created some PowerCLI scripts which already prompt for user input, which can also include credentials, then another way to run those scripts is to do so interactively. If a parameter is required for a given script, then it will prompt for input. The benefit here is that you can reuse your existing PowerCLI scripts without needing to make any modifications, even when executing them within a Docker container. You are also not exposing any credentials in plaintext. To take this a step further, you could also implement the secure string feature in PowerShell, but that would still require you to include a small snippet in your PowerCLI script to do the appropriate decoding when connecting.
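The trick is simply declaring the script's parameters as mandatory, which causes PowerShell to prompt for any value that is not supplied. A minimal sketch (the actual pcli_core_docker_sample3.ps1 may differ):

# Mandatory parameters are prompted for interactively when not supplied on the command-line
param(
    [Parameter(Mandatory=$true)][string]$VI_SERVER,
    [Parameter(Mandatory=$true)][string]$VI_USERNAME,
    [Parameter(Mandatory=$true)][string]$VI_PASSWORD,
    [Parameter(Mandatory=$true)][string]$VI_VM
)

Connect-VIServer -Server $VI_SERVER -User $VI_USERNAME -Password $VI_PASSWORD | Out-Null
Get-VM -Name $VI_VM | Set-VM -Notes (Get-Date) -Confirm:$false
Disconnect-VIServer -Server * -Confirm:$false

The same parameter declaration is also what allows the values to be passed directly on the command-line in Option 4 below.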

To exercise example 3, run the following command; the script will then prompt you for the required inputs:

docker run --rm -it \
-v /Users/lamw/scripts:/tmp/scripts vmware/powerclicore /tmp/scripts/pcli_core_docker_sample3.ps1

If executed correctly, you will be prompted for the expected user inputs to the script and then it will perform the operation as shown in the screenshot below.

[Screenshot: run-powercli-scripts-using-powercli-core-docker-container-2]

Option 4:

Similar to Option 3, if you have defined parameters for your PowerCLI script, you can also just specify them directly on the Docker command-line, just as you would if you were manually running the PowerCLI script in a Windows environment. Again, the benefit here is that you can reuse your existing PowerCLI scripts without any modifications. You do risk exposing credentials if you are passing them through the command-line, but that risk is already known if you are doing the same thing with your existing scripts today. A downside to this option is that if your PowerCLI script accepts quite a few parameters, your docker run command can get quite long. If you go with this option, you may consider prompting only for the endpoint/credentials and passing the rest of the user input on the command-line.

To exercise example 4, run the following command, passing your environment's credentials as script parameters on the command-line:

docker run --rm -it \
-v /Users/lamw/scripts:/tmp/scripts vmware/powerclicore /tmp/scripts/pcli_core_docker_sample3.ps1 -VI_SERVER 192.168.1.150 -VI_USERNAME *protected email* -VI_PASSWORD VMware1! -VI_VM DummyVM

[Screenshot: run-powercli-scripts-using-powercli-core-docker-container-3]

Option 5:

The last option is a nice compromise of the above, in which you can continue leveraging your existing scripts while providing a better way of passing in things like credentials. As I mentioned before, Docker Volumes allow us to make directories and files available from our Docker Host to the Docker Container. This not only allows us to make our PowerCLI scripts available from within the container, but it can also be used to provide access to other things, like simply sourcing a credentials file. This method works on a per-user basis for whoever is running the container, without any major modification to your existing scripts; you simply need to source the credential file at the top of each script. Best of all, you are not exposing any sensitive information.

Note: Some of you might be thinking that PowerCLI's credential store would be a better solution, but today that has not yet been implemented in PowerCLI Core, as it relies on Microsoft's credential store feature. Once that has been implemented in .NET Core, I am sure the PowerCLI team can add that capability, which is probably the recommended and preferred option from both a security and an Automation standpoint.
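A rough sketch of what the credential file and the dot-sourcing at the top of a script could look like; the actual credential.ps1 and pcli_core_docker_sample4.ps1 in the repo may differ, and the values shown are placeholders.

# --- credential.ps1 (kept on the Docker Host, made available via the volume mount) ---
$VI_SERVER   = '192.168.1.150'
$VI_USERNAME = 'administrator@vsphere.local'
$VI_PASSWORD = 'VMware1!'

# --- top of pcli_core_docker_sample4.ps1 ---
param([Parameter(Mandatory=$true)][string]$VI_VM)

# Dot-source the credential file so its variables are available in this scope
. /tmp/scripts/credential.ps1

Connect-VIServer -Server $VI_SERVER -User $VI_USERNAME -Password $VI_PASSWORD | Out-Null
Get-VM -Name $VI_VM | Set-VM -Notes (Get-Date) -Confirm:$false
Disconnect-VIServer -Server * -Confirm:$false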

To exercise example 5, edit the credential.ps1 file and update it with your environment's credentials, then run the following command:

docker run --rm -it \
-v /Users/lamw/scripts:/tmp/scripts vmware/powerclicore /tmp/scripts/pcli_core_docker_sample4.ps1 -VI_VM DummyVM

If executed correctly, the variables defined in the credentials file will be loaded into the PowerCLI script's context, the associated operations will run and the container will exit.

[Screenshot: run-powercli-scripts-using-powercli-core-docker-container-4]

As you can see, there are many different ways in which you can run your existing PowerCLI scripts using the new PowerCLI Core Docker Container. Hopefully this article gives you a good summary along with some real world examples to consider. Given this is still an active area of development by the PowerCLI team, if you have any feedback or suggestions, please do leave a comment. I know that Alan (the PM) as well as the engineers are very interested in hearing your feedback and seeing how else we could improve the user experience of both PowerCLI Core and consuming PowerCLI Core through these various interfaces.

UPDATE (10/25/16) - It looks like the PowerCLI Core Docker Container has been updated with my suggestion below, so you no longer need to specify the --entrypoint parameter 🙂

One final note: right now the PowerCLI Core Docker Container does not automatically start up the PowerShell process when it is launched. This is why the --entrypoint='/usr/bin/powershell' parameter is appended to the Docker command-line. If you prefer to have PowerShell start up and automatically load the PowerCLI module, you can check out my updated PowerCLI Core Docker Container, lamw/powerclicore, which uses the original as a base with one tiny modification. Perhaps this is something Alan and the team would consider making the default in the future? 🙂

Categories // Automation, Docker, PowerCLI, vSphere Tags // Docker, PowerCLI, powershell
