WilliamLam.com

First look at the Supermicro E100-12T

11.04.2021 by William Lam // Leave a Comment

I first came to learn about Supermicro's E100-9W platform last year, which I wrote about here. The E100-9W is a fanless kit that is part of Supermicro's Embedded IoT family and targets similar use cases to the Intel NUC, such as Industrial Automation, Retail, Smart Medical Systems, Kiosks and Digital Signage. Although the E100-9W was just released in 2020, it was actually using a much older Intel 8th Generation CPU due to some constraints with Intel's embedded CPU roadmap.

Supermicro did mention last year that a Tiger Lake-based model was in the works, and last week I got my hands on a pre-production unit of the 2nd generation of this platform, called the E100-12T.

[Read more...]

Categories // ESXi, Home Lab, VSAN, vSphere Tags // E100-12T, Supermicro

How to clean up stale vSphere Container Volumes & First Class Disks?

03.10.2021 by William Lam // 7 Comments

If you are running and deploying Kubernetes (K8s), which includes vSphere with Tanzu and Tanzu Kubernetes Grid (TKG), you might notice vSphere Container Volumes showing up in the vSphere UI under the Monitor tab for a given vSphere-based Datastore. This is normal and expected, as new Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) are being requested as part of deploying K8s-based applications that require storage.


Typically, when PVs and PVCs are no longer needed, they should be cleaned up within the K8s layer via kubectl, either automatically or manually depending on your provisioning process. When you delete a K8s Cluster, however, these PVs/PVCs are not automatically cleaned up, and it's for good reason: you may want to reuse them. The way vSphere supports this is by implementing them as First Class Disks (FCDs), which means their lifecycle is independent of any VM.
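
For example, removing a PVC that is no longer needed from within the K8s layer would look something like this (the claim name and namespace are hypothetical; whether the bound PV and its backing volume are also deleted depends on your StorageClass reclaim policy):

kubectl delete pvc my-app-data --namespace my-app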

But what happens when the K8s Cluster has been deleted and you actually want to clean up these stale FCDs? How do you go about doing that? This is a question I have seen come up more frequently, and there are a few options.

Option 1:

If you happen to be on vSphere 7.0 Update 2 (which was just released yesterday), the vSphere UI has been enhanced to allow users to delete vSphere Container Volumes (see screenshot above). Previously, you could only view the FCDs and reapply a storage policy.

Option 2:

Since vSphere Container Volumes are just FCDs and we have FCD APIs, we can use the API to retrieve information as well as clean them up. The easiest way is to use PowerCLI's Get-CnsVolume and Remove-CnsVolume cmdlets.

Here is an example of deleting the 2GB volume:

Get-CnsVolume -Datastore (Get-Datastore "sm-vsanDatastore") -Name "pvc-db6829ad-e1a9-46e8-ace3-7e7c18187a0d" | Remove-CnsVolume
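
If you are not sure which volumes are stale, you can first enumerate everything on the datastore and review the list before removing anything. Here is a rough sketch reusing the datastore name from the example above; the pvc-* filter is purely illustrative, so adjust it for your environment:

Get-CnsVolume -Datastore (Get-Datastore "sm-vsanDatastore") | Select-Object -Property Name

Get-CnsVolume -Datastore (Get-Datastore "sm-vsanDatastore") | Where-Object {$_.Name -like "pvc-*"} | Remove-CnsVolume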

In the case of standalone FCDs, which could have been provisioned manually or through a backup solution, you can also clean them up by using PowerCLI's Get-VDisk and Remove-VDisk cmdlets:

Get-VDisk -Name "fill-me-in" | Remove-VDisk
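
Similarly, you could list all First Class Disks on a given datastore before deciding what to remove. A quick sketch, assuming Get-VDisk's -Datastore parameter and the Name/CapacityGB properties on the returned objects:

Get-VDisk -Datastore (Get-Datastore "sm-vsanDatastore") | Select-Object -Property Name, CapacityGB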

Categories // Cloud Native, Kubernetes, VMware Tanzu, VSAN, vSphere 7.0 Tags // CNS, CSI, FCD, Kubernetes

vSAN Witness using Raspberry Pi 4 & ESXi-Arm Fling

10.08.2020 by William Lam // 36 Comments

As hinted in my earlier blog post, you can indeed set up a vSAN Witness using the ESXi-Arm Fling running on a Raspberry Pi (rPI) 4b (8GB) model. In fact, you can even set up a standard 2-Node or 3-Node vSAN Cluster using the exact same technique. For those familiar with vSAN and the vSAN Witness, we will need at least two storage devices: one for the caching tier and one for the capacity tier.

For the rPI, this means we are limited to using USB storage, and luckily, vSAN can actually claim and consume USB storage devices. For a basic homelab this is probably okay, but if you want something a bit more reliable, you can look into using a USB 3.0 to M.2 NVMe chassis. An M.2 NVMe device should definitely provide more resiliency compared to a typical USB stick you might have lying around. From a capacity point of view, I ended up using two 32GB USB keys, which should be plenty for a small setup, but you can always look at purchasing larger-capacity devices given how cheap USB storage is.
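
To give you an idea of what claiming the two USB devices looks like, here is a minimal PowerCLI sketch. The host name and canonical device names are placeholders, so look up your own with the first command before creating the disk group:

Get-ScsiLun -VmHost (Get-VMHost "rpi-witness.lab.local") -LunType disk

New-VsanDiskGroup -VMHost (Get-VMHost "rpi-witness.lab.local") -SsdCanonicalName "mpx.vmhba32:C0:T0:L0" -DataDiskCanonicalName "mpx.vmhba33:C0:T0:L0"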

Disclaimer: ESXi-Arm is a VMware Fling which means it is not a product and therefore it is not officially supported. Please do not use it in Production.

With the disclaimer out of the way, I think this is a fantastic use case for an inexpensive vSAN Witness, which could be running at a ROBO/Edge location or simply supporting your homelab. The possibilities are certainly endless, and the ESXi-Arm team would love to hear whether this is something customers would even be interested in, so please share your feedback to help with priorities for both the ESXi-Arm and vSAN teams.

In my setup, I have two Intel NUC 9 Pro systems that make up my 2-Node vSAN Cluster and an rPI as my vSAN Witness. Detailed instructions can be found below, including a video for those wanting to see the vSAN Witness in action by powering on an actual workload 😀
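
For those curious what the end state roughly looks like in PowerCLI, a 2-Node vSAN Cluster is effectively a stretched cluster whose witness is the rPI host (added to vCenter outside of the cluster). The one-liner below is only a sketch with placeholder names and assumes the stretched-cluster parameters of Set-VsanClusterConfiguration; the detailed instructions below cover the full workflow:

Set-VsanClusterConfiguration -Configuration (Get-Cluster "NUC-Cluster") -StretchedClusterEnabled $true -PreferredFaultDomainName "Preferred" -WitnessHost (Get-VMHost "rpi-witness.lab.local")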

[Read more...]

Categories // ESXi-Arm, VSAN, vSphere Tags // Arm, ESXi, Raspberry Pi, witness


Author

William is Distinguished Platform Engineering Architect in the VMware Cloud Foundation (VCF) Division at Broadcom. His primary focus is helping customers and partners build, run and operate a modern Private Cloud using the VMware Cloud Foundation (VCF) platform.



Copyright WilliamLam.com © 2025

 
