WilliamLam.com

Quick Tip - How to monitor when ESXi filesystem and partitions are filling up?

05.30.2023 by William Lam // 3 Comments

Here is another tidbit on how you can leverage the power of vSphere Events, which as of vSphere 8.0 Update 1 now includes over 2,000 event types, to help monitor when an ESXi filesystem and/or partition is low on disk space.

With vSphere 6.7 or later, we have two events that you can use to alert when either an ESXi ramdisk (e.g. /var) or a VFAT partition (e.g. the bootbanks) has filled up.

  • Ramdisk: esx.problem.visorfs.ramdisk.full
  • VFAT: esx.problem.vfat.filesystem.full.other

When either of these events occurs, you can easily find it under the Monitor->Events section for the ESXi host, as shown in the screenshot below.
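
If you prefer to check for these events programmatically rather than clicking through the UI, here is a minimal pyVmomi sketch that queries vCenter for just these two event types. The vCenter hostname and credentials below are placeholders for your own environment:

```python
# Minimal pyVmomi sketch: query vCenter for the two filesystem-full events.
# The hostname/credentials are placeholders - adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab / self-signed certificates only
si = SmartConnect(host="vcsa.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)

# Filter the event history down to the ramdisk-full and VFAT-full events
filter_spec = vim.event.EventFilterSpec(
    eventTypeId=[
        "esx.problem.visorfs.ramdisk.full",
        "esx.problem.vfat.filesystem.full.other",
    ]
)

for event in si.content.eventManager.QueryEvents(filter_spec):
    print(event.createdTime, event.fullFormattedMessage)

Disconnect(si)
```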


Categories // Automation, ESXi, vSphere, vSphere 6.7, vSphere 7.0, vSphere 8.0 Tags // alarm, ESX-OSData, ESXi, inode, partition, ramdisk, scratch, vfat

Erasing existing disk partitions now available in the vSphere Web Client (vSphere 6.0 Update 1)

09.29.2015 by William Lam // 9 Comments

One of the primary challenges when trying to re-purpose existing storage devices is ensuring that all data and existing partitions have been completely removed. Oftentimes, customers end up resorting to third-party tools like GParted, which require you to boot your server into a LiveCD before you can remove the existing partitions. This is less than ideal, especially if you need to perform this operation across multiple systems.

For customers who wish to re-purpose their existing storage devices for other uses, including VSAN, there is now a new UI option in the vSphere Web Client, introduced in vSphere 6.0 Update 1, to assist with this procedure. I had not seen anyone talk about this feature yet and figured I would share some details, as this is something I have heard customers ask for in the past. You can find this new option (an icon with a disk and eraser) by clicking on a specific ESXi host, selecting Manage->Storage Adapters, and then highlighting the specific storage device you wish to erase, as seen in the screenshot below.

[Screenshot: erase-disk-partition-in-vsphere-web-client-0]
Once the erase partition icon or action is selected, you will then be presented with a summary of the existing partitions on the disk and then prompted to confirm that you wish to delete ALL partitions on the disk.

[Screenshot: erase-disk-partition-in-vsphere-web-client-1]
After the operation has successfully completed, you can re-purpose the storage device for other uses, such as VSAN!

For those of you who are interested from an Automation standpoint, this UI operation actually makes use of an existing vSphere API that has been around for quite some time, called updateDiskPartitions(), which lives under the StorageSystem manager of an ESXi host. To erase all partitions, you simply pass in an empty spec to the API method.
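
For reference, here is a minimal pyVmomi sketch of what that call could look like, assuming host is a vim.HostSystem object you have already retrieved and device_path is the /vmfs/devices/disks/... path of the disk you want to wipe (this is destructive, so double check the device first):

```python
# Minimal pyVmomi sketch: erase all partitions by pushing an empty partition spec.
# Assumptions: 'host' is a connected vim.HostSystem and 'device_path' is the
# /vmfs/devices/disks/... path of the disk to wipe. WARNING: destructive!
from pyVmomi import vim

storage_system = host.configManager.storageSystem        # vim.host.StorageSystem
empty_spec = vim.host.DiskPartitionInfo.Specification()  # spec with no partitions defined
storage_system.UpdateDiskPartitions(devicePath=device_path, spec=empty_spec)
```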

In addition, I want to quickly mention that you will also have the ability to edit and erase existing disk partitions using the ESXi Embedded Host Client Fling, which will be available in a future update. Below is a quick screenshot of what that would look like.

[Screenshot: erase-disk-partition-in-vsphere-web-client-2]

Categories // Automation, ESXi, VSAN, vSphere Web Client Tags // partition, VSAN, VSAN 6.1, vSphere 6.0 Update 1, vSphere API, vsphere web client, web client

Two coredump partitions in ESXi 5.5?

06.12.2014 by William Lam // 8 Comments

A couple of days back I had to re-install ESXi on a physical host for some troubleshooting purposes, and while looking at the partitions on the disk using ESXCLI, I noticed the fresh ESXi installation had created two coredump partitions.

[Screenshot: two-coredump-partition-0]
I was quite surprised to see two, since normally you would just have one configured. I even asked a colleague if he had ever seen this before, and he had not, so I wanted to double check that there were in fact two coredump partitions created, which I verified by using partedUtil.

[Screenshot: two-coredump-partition-1]
As you can see from the screenshot above, there are definitely two coredump partitions. I took a look at our vSphere documentation but did not find any mention of this. I decided to look internally and found that this is actually a new behavior that was introduced in ESXi 5.5. From what I can tell, the second coredump partition, which is 2.5GB, was created to ensure that there is sufficient space to handle ESXi hosts configured with a huge amount of memory (up to 4TB) if a coredump were to occur. This new coredump partition is only created on a fresh ESXi install; for upgrade scenarios, the original partition structure is preserved. I suspect even on the fresh install, the original coredump partition was kept for potential backwards compatibility.
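
As a side note, if you want to verify the coredump partition layout without logging into the host, the same information is also exposed through the vSphere API via the host's DiagnosticSystem manager. Here is a minimal pyVmomi sketch, assuming host is a vim.HostSystem object you have already retrieved:

```python
# Minimal pyVmomi sketch: list the coredump (diagnostic) partitions on a host.
# Assumption: 'host' is a connected vim.HostSystem object.
diagnostic_system = host.configManager.diagnosticSystem  # vim.host.DiagnosticSystem

# Currently active coredump partition (may be unset on some hosts)
active = diagnostic_system.activePartition
if active:
    print("Active coredump partition:", active.id.diskName,
          "partition", active.id.partition)

# All partitions that can be used for coredumps
for part in diagnostic_system.QueryAvailablePartition():
    print("Available coredump partition:", part.id.diskName,
          "partition", part.id.partition, "type", part.diagnosticType)
```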

This definitely made sense given the reason. It also raises another interesting point from an operational point of view: though upgrades may be preferred, there are also good reasons to perform a fresh install over an upgrade. In this case, to ensure past requirements and assumptions were not broken, the installer could not just automatically expand or create a larger coredump partition to adhere to the new requirements. This is actually not the first instance of this; here are two additional examples in which a fresh installation would potentially have yielded a more optimal environment:

  • Lopsided bootbanks in ESXi
  • Un-Unified VMFS blocksize

Categories // ESXi, vSphere 5.5 Tags // coredump, ESXi 5.5, partition, vSphere 5.5

