
How do you "log a reason" using PowerCLI when rebooting or shutting down ESXi host?

06.04.2018 by William Lam // 2 Comments

I am sure many of you have seen this UI prompt asking you to specify a reason before issuing a reboot or shutdown of an ESXi host, and I assume most of you spend a few seconds typing in a useful message rather than just random characters, right? 😉


Have you ever tried performing the same reboot or shutdown operation using the vSphere API or PowerCLI (which leverages the API)? Have you noticed that there is no way to specify a message like you can in the UI?

Here is a table of the PowerCLI cmdlets and the respective vSphere APIs used to perform these two operations:

Operation   Cmdlet           vSphere API
Reboot      Restart-VMHost   RebootHost_Task
Shutdown    Stop-VMHost      ShutdownHost_Task

Looking at the PowerCLI and vSphere API documentation, we can confirm that there is no field for specifying a message, which can lead to the assumption that this is simply not possible or that the functionality is provided by a private API. Fortunately, that is not the case: the functionality is in fact part of the public vSphere API and has been for quite some time.

When you specify a message prior to rebooting or shutting down, this message is actually persisted and implemented as an Event within vCenter Server as shown in the screenshot below.

Rather than allowing a message only for ESXi hosts, I believe the original vSphere API designers realized this functionality could be useful more broadly and applied it across any number of the vSphere Inventory objects, not just ESXi hosts. As such, the functionality the vSphere UI uses is provided by the LogUserEvent() method, which is part of the EventManager API. Customers or solutions can leverage this mechanism to log custom, user-defined events, which are then persisted with the lifecycle of the vSphere Inventory object, or as far back as your retention period for vCenter Server events.
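As a quick aside, events logged this way can also be retrieved later. Here is a minimal sketch using the Get-VIEvent cmdlet; filtering on the GeneralUserEvent type is my assumption about how these user-logged events surface:

# Pull recent events for the host and keep only user-logged ones
$vmhost = Get-VMHost -Name 192.168.30.11
Get-VIEvent -Entity $vmhost -MaxSamples 100 |
    Where-Object { $_ -is [VMware.Vim.GeneralUserEvent] } |
    Select-Object CreatedTime, UserName, FullFormattedMessage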

Going back to our original question, if you want to specify a message prior to rebooting or shutting down an ESXi host, the following snippet below demonstrates the use of the vSphere API via PowerCLI:

# Retrieve the EventManager, which exposes the LogUserEvent() method
$eventManager = Get-View eventManager

# ESXi host against which the message will be logged
$vmhost = Get-VMHost -Name 192.168.30.11
$message = "This message will be logged"

# Log the user-defined event on the host (same mechanism the vSphere UI uses)
$eventManager.LogUserEvent($vmhost.ExtensionData.MoRef,$message)
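Putting the pieces together, here is a minimal sketch that logs a reason and then issues the reboot; the reason text is just an example, and note that Restart-VMHost requires the host to be in maintenance mode unless -Force is specified:

$vmhost = Get-VMHost -Name 192.168.30.11
$eventManager = Get-View eventManager

# Record the reason in vCenter's event stream before triggering the reboot
$eventManager.LogUserEvent($vmhost.ExtensionData.MoRef,"Rebooting to apply security patches")

# -Force allows the reboot to proceed even if the host is not in maintenance mode
Restart-VMHost -VMHost $vmhost -Force -Confirm:$false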

Categories // Automation, ESXi, PowerCLI, vSphere Tags // ESXi, PowerCLI, reason, reboot, shutdown, vSphere API

Quick Tip - OVFTool 4.3 now supports vCPU & memory customization during deployment

05.29.2018 by William Lam // 3 Comments

In addition to adding vSphere 6.7 support and a few security enhancements (more details in the release notes), the latest OVFTool 4.3 release has also been enhanced to support customizing the vCPU and/or memory configuration, overriding the defaults when deploying an OVF/OVA.

Historically, it was only possible to modify these values if you were deploying to a vCloud Director endpoint, using either --numberOfCpus or --memorySize. When deploying to a vSphere endpoint, these settings were not applicable, and users would need to perform an additional operation calling into the vSphere API with their automation tool of choice to reconfigure the VM after deployment. It was not the end of the world, but also not ideal if you simply wanted to make a minor modification to the default OVF/OVA you were deploying. I definitely ran into this a few times where having this functionality would have been very useful, and I know a number of customers have shared similar feedback in the past.

I had asked whether it was possible to support this use case, and it turns out there was already an internal feature request on the OVFTool backlog; with some additional customer feedback, we were able to get this enhancement added to the latest release.

The existing --numberOfCpus and --memorySize options accept a VM identifier (usually the name) followed by the value, for example:

--numberOfCpus:Foo=4
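For example, in a vApp composed of multiple VMs, each VM can be targeted with its own values. A hypothetical sketch follows; the VM names WebVM and DbVM and the target locator are illustrative, not from the original release:

ovftool --acceptAllEulas --numberOfCpus:WebVM=2 --memorySize:WebVM=4096 --numberOfCpus:DbVM=4 --memorySize:DbVM=8192 myapp.ova 'vi://username@vcenter.example.com/Datacenter/host/Cluster'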

The VM identifier is there to help with vApp deployments, where you may have an OVF/OVA composed of multiple VMs, each of which you would like to customize with different values. To ensure backwards compatibility is not broken, this pattern has also been extended to deployments against a vSphere endpoint. Having said that, most customers I have talked to who use OVFTool generally deploy an OVF/OVA comprised of a single VM. In this case, rather than specifying the name of the VM again (which is derived from the --name property), you can simply use the wildcard asterisk (*) to apply the value to all VMs within the OVF/OVA.

Here is an example of deploying a PhotonOS OVA, which is configured with a default of 1 vCPU and 2GB of memory; as part of the deployment, OVFTool will increase the vCPU count to 2 and the memory to 4GB (the username in the target locator below is a placeholder):

ovftool --acceptAllEulas --name=Foo --numberOfCpus:'*'=2 --memorySize:'*'=4096 photon-custom-hw11-2.0-304b817.ova 'vi://username@192.168.30.200/VSAN-Datacenter/host/VSAN-Cluster'
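To confirm the customization took effect once the deployment completes, a quick check with PowerCLI (assuming you are connected to the same vCenter Server):

# Verify the deployed VM picked up the customized vCPU and memory values
Get-VM -Name Foo | Select-Object Name, NumCpu, MemoryMB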

Categories // Automation, OVFTool Tags // memorySize, numberOfCpus, ovftool

How to simulate Persistent Memory (PMem) in vSphere 6.7 for educational purposes? 

05.24.2018 by William Lam // 6 Comments

A really cool new capability introduced in vSphere 6.7 is support for the extremely fast memory technology known as non-volatile memory (NVM), also known as persistent memory (PMem). Customers can now benefit from the high data transfer rate of volatile memory with the persistence and resiliency of traditional storage. As of this blog post, both Dell and HP have Persistent Memory support, and you can see the list of supported devices and systems here and here.


PMem can be consumed in one of two modes:

  • Virtual PMem (vPMem) - In this mode, the GuestOS is actually PMem-aware and can consume the physical PMem device on the ESXi host as standard, byte-addressable memory. In addition to using an OS that supports PMem, you will also need to ensure that the VM is running the latest Virtual Hardware 14.
  • Virtual PMem Disks (vPMemDisk) - In this mode, the GuestOS is NOT PMem-aware and does not have access to the physical PMem device. Instead, a new virtual PMem hard disk can be created and attached to a VM. To ensure the PMem hard disk is placed on the PMem datastore as part of this workflow, a new PMem VM Storage Policy is applied automatically. There are no additional GuestOS or VM Virtual Hardware requirements for this scenario, which makes it great for legacy OSes that are not PMem-aware.

Customers who want to familiarize themselves with these new PMem workflows, especially for automation or educational purposes, could definitely benefit from the ability to simulate PMem in their vSphere environment before obtaining a physical PMem device. Fortunately, this is something you can actually do if you have some spare memory on your physical ESXi host.
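Judging by this post's fakePmemPct tag, the simulation hinges on a VMkernel option of that name. As a hedged sketch only, VMkernel settings of this kind are typically toggled via ESXCLI followed by a reboot; whether fakePmemPct is applied exactly this way is an assumption, and the full article has the authoritative steps:

# ASSUMPTION: fakePmemPct (named in this post's tags) may be settable as a
# VMkernel option; the value would be the percentage of DRAM to expose as PMem
esxcli system settings kernel set -s fakePmemPct -v 33

# Reboot the ESXi host for the kernel setting to take effect
reboot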

Disclaimer: This is not officially supported by VMware. Unlike a real physical PMem device where your data will be persisted upon a reboot, the simulated method will NOT persist your data. Please use this at your own risk and do not place important or critical VMs using this method.

[Read more...]

Categories // ESXi, Home Lab, Nested Virtualization, Not Supported, vSphere 6.7 Tags // fakePmemPct, Nested ESXi, Non-Volatile Memory, NVDIMM, NVM, Persistent Memory, PMem, vSphere 6.7
