
Is my vSphere Cluster managed by vSphere Lifecycle Manager (vLCM) as a Desired Image or Baseline?

03.10.2023 by William Lam // 11 Comments

Prior to vSphere 7.0, ESXi lifecycle management was provided by vSphere Update Manager (VUM), which has been around for more than a decade and is most likely what you are still using today. With the release of vSphere 7.0, VMware introduced a brand new lifecycle management solution for ESXi called vSphere Lifecycle Manager (vLCM), which you can read more about HERE.


While VMware has made it clear that vLCM is the future of ESXi lifecycle management, we also understand that most customers are still using the existing VUM-based solution, and we wanted to make sure it was easy to transition between the two solutions, especially within the vSphere UI.

An interesting question that recently came up was how to determine whether a vSphere Cluster is using the new vLCM solution, which is based on desired images, or VUM, which uses baselines.

Note: If you are not familiar with the difference between a vLCM Desired Image and VUM Baselines, be sure to check out this helpful resource HERE.

Using vSphere UI

Select the specific vSphere Cluster in your inventory and go to Updates->Images. If you are prompted to set up an image, then you are NOT using a vLCM Desired Image but rather VUM baselines, as shown in the screenshot below.

Using the vSphere API or PowerCLI

For those who prefer to automate this check, a new vSphere API property called lifecycleManaged has been introduced at the vSphere Cluster level that will tell you whether it is using a vLCM Desired Image or not. Here is an example PowerCLI snippet for accessing the property:

(Get-Cluster "Supermicro-Cluster").ExtensionData.LifecycleManaged
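If you want to check every cluster in your inventory at once, here is a rough sketch that simply loops over the same property (it assumes an existing Connect-VIServer session):

# Report whether each cluster is managed by a vLCM Desired Image or by VUM baselines
foreach ($cluster in Get-Cluster) {
    if ($cluster.ExtensionData.LifecycleManaged) {
        Write-Host "$($cluster.Name) is managed by a vLCM Desired Image"
    } else {
        Write-Host "$($cluster.Name) is managed by VUM baselines"
    }
}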

Finally, when creating a new vSphere Cluster, users can decide whether the cluster will be configured using a vLCM Desired Image, as shown in the screenshot below. The default behavior in vSphere 8.0 is to automatically select vLCM Desired Image, and once this is configured, you cannot change it back to VUM baselines. If you have a vSphere Cluster that is configured with VUM baselines, then you can change it to a vLCM Desired Image. To learn more about vLCM, be sure to check out the official VMware documentation HERE.
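For completeness, the same answer can also be retrieved through the vSphere Automation REST API. The snippet below is only a rough sketch: it assumes PowerShell 7+, a hypothetical vCenter Server hostname and cluster MoRef ID, and the cluster software-enablement endpoint (/api/esx/settings/clusters/{cluster}/enablement/software), so verify the exact path against the vSphere Automation API reference for your vCenter Server version.

# Rough sketch: check whether a cluster is vLCM image-managed via the vSphere Automation REST API (PowerShell 7+)
# The vCenter hostname and cluster MoRef ID below are hypothetical placeholders
$vcenter = "vcsa.example.com"
$clusterId = "domain-c8"   # retrievable via (Get-Cluster "Supermicro-Cluster").ExtensionData.MoRef.Value
$cred = Get-Credential
$token = Invoke-RestMethod -Method Post -Uri "https://$vcenter/api/session" -Authentication Basic -Credential $cred -SkipCertificateCheck
$headers = @{ "vmware-api-session-id" = $token }
$result = Invoke-RestMethod -Method Get -Uri "https://$vcenter/api/esx/settings/clusters/$clusterId/enablement/software" -Headers $headers -SkipCertificateCheck
$result.enabled   # true = vLCM Desired Image, false = VUM baselines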


Categories // Automation, vSphere 7.0, vSphere 8.0 Tags // vSphere Lifecycle Manager, vSphere Update Manager, vum

Comments

  1. Toni says

    03/10/2023 at 7:43 am

    How to handle clusters with mixed hardware generations?
    For example HPE DL380 Gen9 and Gen10 servers in a cluster.
    Pretty common for most of your customers.

    Reply
    • lamw says

      03/10/2023 at 10:12 am

      Hi Toni,

      Yes, this is something we've heard from customers. As of right now, the easiest way is to incorporate drivers for the different configurations. While this is not ideal and may not be an acceptable workaround, this is something the product team is aware of and I've been told is working on.

      Reply
  2. Johann says

    03/10/2023 at 9:43 am

    Joke of the day… "easy to transition between the two solutions". "vLCM Desired Image" is nothing but a nightmare, and reverting back to baselines is artificially prevented!

    I had to rebuild a cluster and move hosts and VMs step-by-step to get out of this hell… I don't know what VMware is thinking, it's the nail in the coffin at least for our Huawei hosts (or for VMware…), if this sh** actually gets enforced.

    I've been doing VMware ESXi for nearly 20 years and manage 500+ hosts, but this is a no-go in my opinion.

    Reply
    • lamw says

      03/10/2023 at 10:15 am

      Johann,

      Sorry to hear that you've not had a good experience w/vLCM. My comment about transition refers purely to the ability to switch between the two solutions, as I'd expect most customers to be using both unless you're starting a greenfield deployment. Not sure if you've already shared your feedback with your account team and/or PM, but happy to connect you directly w/PM so they can hear your frustrations/concerns first hand. Let me know if that's of interest.

      Reply
  3. rmbki says

    03/11/2023 at 6:46 pm

    vLCM makes liberal use of the word "image" but it's very different from the usual concept of imaging with regard to IT operations. All too often we find our servers in a state where some component is not allowed to be removed or downgraded, and vLCM becomes a roadblock toward bringing a server into compliance. The workaround is manual removal of offending vibs via esxcli or PowerCLI, then a reboot.

    I'd prefer to have image truly mean image, which I interpret as "write this block of data to secondary storage with extreme prejudice." If a now-removed vib has created configuration detail that becomes orphaned, I can deal with it accordingly. If we're going to be persnickety about component removals/downgrades/supersedence/prerequisites, then let's call a spade a spade and agree to refer to this tech as half-baked patch management, not an imaging process.

    We're grudgingly using vLCM here but it's a half-step forward from classic baselines, at best.
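
    (For illustration, a minimal PowerCLI sketch of the esxcli-style VIB removal workaround mentioned above; the host name and VIB name are hypothetical placeholders:)

    # Hypothetical sketch: remove an offending VIB from a host via the esxcli interface in PowerCLI, then reboot
    $vmhost = Get-VMHost "esxi-01.example.com"
    Set-VMHost -VMHost $vmhost -State Maintenance -Confirm:$false
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $esxcli.software.vib.remove.Invoke(@{vibname = "example-offending-vib"})
    Restart-VMHost -VMHost $vmhost -Confirm:$false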

    Reply
  4. will says

    03/21/2023 at 8:15 am

    I like the idea of vLCM because of the layering of a base image, vendor add-on packs and even individual vibs, plus the HSM layer for firmware. However, the one thing missing vs baselines is scheduling the update, which was a great feature of the old VUM way, e.g. kick this off at 5am and I will validate when I am online.

    Can this be worked around with PowerCLI for now / do we know if the ability to schedule remediations in vLCM is coming natively in the GUI?

    thanks

    Reply
    • William Lam says

      03/21/2023 at 9:40 am

      If I understand your ask, you would like to auto-remediate based on a schedule. Is this something you do for your entire vSphere estate, and what are your cluster and host counts? We typically don't see customers doing this, as there's change control that drives the updates, and while they're not sitting there watching the rolling updates, remediation is triggered via UI or API once they wish to move to a given update. Just trying to understand your scenario and expected behaviors, especially on scale/size.

      Reply
      • will says

        03/21/2023 at 10:02 am

        Hey William. thanks for the reply.

        This is per cluster, which is typically 2 or 3 nodes. In VUM or baseline mode, when you remediate you get the option to "Schedule this remediation to run later" and pick a time for this. We have change controls as we are regulated, but would typically schedule the remediation of a few small clusters for say 4am Saturday morning and then validate at 8am. If all is well it's a checkbox, otherwise a fix forward and troubleshoot.

        With vLCM Images that "Schedule this remediation to run later" is not part of the workflow. I think the baselines used scheduled tasks within vCenter and perhaps the images cannot do this?

        Any feedback welcome. I would be happy to do it with code but it was very easy for ops teams before in the GUI.

        Reply
        • William Lam says

          03/24/2023 at 5:47 am

          Scheduled vLCM remediation is currently not possible, but your use case makes perfect sense. I just wanted to confirm whether you'd simply update w/o testing or if this was scheduled remediation after the change has been approved. Let me share this w/PM team

          Reply
          • Will says

            03/27/2023 at 7:32 am

            Hello again

            To confirm, we would test the upgrade on our lab kit first, then raise change control for prod systems. Once approved, schedule the upgrade (currently baseline but hopefully with image in future) for 4am on a Saturday and then validate around 8am (fixing forward if any problems).

            This allows things to scale while not having folks watching paint dry. :o)

          • William Lam says

            03/27/2023 at 8:28 am

            Yup, this makes sense and I've already shared the feedback directly w/PM. For now, you can schedule this outside of vCenter Server by calling into the vLCM API, and you can see an example here: https://williamlam.com/2022/10/using-vsphere-lifecycle-manager-vlcm-api-to-patch-group-of-esxi-hosts.html
