
Quick Tip - VSAN 6.2 (vSphere 6.0 Update 2) now supports creating all-flash diskgroup using ESXCLI

03.02.2016 by William Lam // 5 Comments

One of my all-time favorite features of VSAN is still the ability to "bootstrap" a VSAN Datastore starting with just a single ESXi node. This is especially useful if you would like to bootstrap vCenter Server on top of VSAN out of the box without requiring additional VMFS/NFS storage. This bootstrap method has been possible and supported since the very first release of VSAN, which I have written about in great detail here and here.

With the release of VSAN 6.1 (vSphere 6.0 Update 1), an all-flash VSAN configuration became possible in addition to the hybrid configuration, which uses a combination of SSDs and magnetic disks (MDs). One observation made by a few folks, including myself, was that you could not configure an all-flash diskgroup using ESXCLI, which is one of the methods that can be used to bootstrap VSAN. If you tried to create an all-flash diskgroup using ESXCLI, you would get the following error:

Unable to add device: Can not create all-flash disk group: current Virtual SAN license does not support all-flash

This turned out to be a bug, and the workaround at the time was to add the ESXi host to a vCenter Server, which would then allow you to create the all-flash diskgroup. This usually was not a problem, but for those wanting to bootstrap VSAN, it would require an already running vCenter Server instance. While setting up my new VSAN 6.2 home lab last night

Just finished installing all 32GB of awesomeness + 2 SSD (M.2 & 2.5). Super simple #VSAN62HomeLab pic.twitter.com/tYOujQmCqX

— William Lam (@lamw) March 2, 2016

I found that this issue has actually been resolved in the upcoming release of VSAN 6.2 (vSphere 6.0 Update 2), and you can now create an all-flash diskgroup using ESXCLI, which includes doing so from the vSphere API as well. For those interested, you can find the list of commands required to bootstrap an all-flash VSAN configuration below:
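The full command list is in the complete article below; as a rough sketch of what the bootstrap sequence looks like (the device IDs naa.XXXX/naa.YYYY are placeholders and the policy strings are illustrative; verify both against your own environment):

# Relax the default VSAN policy so objects can be provisioned on a single node (illustrative policy strings)
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

# Create a new single-node VSAN cluster
esxcli vsan cluster new

# Tag the capacity SSD as a capacity tier device (required for an all-flash diskgroup)
esxcli vsan storage tag add -d naa.YYYY -t capacityFlash

# Create the all-flash diskgroup: -s is the caching SSD, -d is the capacity SSD
esxcli vsan storage add -s naa.XXXX -d naa.YYYY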

[Read more...]

Categories // Automation, ESXi, VSAN, vSphere 6.0 Tags // esxcli, ESXi 6.0, Virtual SAN, VSAN, vSphere 6.0 Update 2

VSAN 6.2 extends vSphere API to include new VSAN Management APIs

02.26.2016 by William Lam // 7 Comments

In addition to all the new capabilities and enhancements included in the release of VSAN 6.2 (vSphere 6.0 Update 2), which you can read more about here and here, VSAN 6.2 also introduces a new VSAN Management API which extends the existing vSphere APIs that our customers are quite familiar with.

This new VSAN Management API will allow developers, partners and administrators to automate all aspects of VSAN functionality, including the complete lifecycle (install, upgrade, patch), monitoring (including VSAN Observer capabilities), configuration and troubleshooting. There will be two new service endpoints, /vsan for an ESXi host and /vsanHealth for vCenter Server, which will provide access to the new VSAN Management API interfaces.

UPDATE: (03/17/16) - Check out this article here on how to quickly get started with the new VSAN Management API.

Below is the list of new vSphere Managed Objects that provide the different VSAN capabilities:

Managed Object                 Functionality                                                       ESXi or VC
HostVsanHealthSystem           VSAN Health related configuration and query APIs                    ESXi only
HostVsanSystem                 VSAN related configuration and query APIs                           ESXi only
VsanObjectSystem               VSAN object related status query and storage policy setting APIs    ESXi & VC
VsanPerformanceManager         VSAN Performance related configuration and query APIs               ESXi & VC
VsanSpaceReportSystem          VSAN cluster space usage related query APIs                         VC only
VsanUpgradeSystem              Used to perform and monitor VSAN on-disk format upgrades            VC only
VsanUpgradeSystemEx            VSAN upgrade and disk format conversion related APIs                VC only
VsanVcClusterConfigSystem      VSAN cluster configuration setting and query APIs                   VC only
VsanVcClusterHealthSystem      VSAN Health related configuration and query APIs                    VC only
VsanVcDiskManagementSystem     VSAN disks related configuration and query APIs                     VC only
VsanVcStretchedClusterSystem   VSAN Stretched Cluster related configuration and query APIs         VC only

Note: There will be a VSAN Management API Reference Guide, similar to the vSphere API Reference Guide, which will be released as part of VSAN 6.2. There you will find much greater detail on each of the new vSphere Managed Objects and their associated methods and usage.

For customers interested in consuming this new VSAN Management API, there will initially be five language-specific bindings, also known as SDKs (Software Development Kits), available for download when VSAN 6.2 is generally available:

  • VSAN Management SDK for Python - Extends pyvmomi (vSphere SDK for Python)
  • VSAN Management SDK for Ruby - Extends rbvmomi (vSphere SDK for Ruby)
  • VSAN Management SDK for Java - Extends vSphere SDK for Java
  • VSAN Management SDK for C# - Extends vSphere SDK for C#
  • VSAN Management SDK for Perl - Extends vSphere SDK for Perl
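To give a feel for how these bindings plug into the existing vSphere SDKs, below is a minimal Python sketch. This is an assumption-laden example, not official sample code: the helper module vsanapiutils and its GetVsanVcMos() function come from the SDK's bundled samples, and the hostname and credentials are placeholders:

import ssl
from pyVim.connect import SmartConnect, Disconnect
import vsanapiutils  # helper bundled with the VSAN Management SDK for Python (assumed on PYTHONPATH)

# Placeholder vCenter Server connection details
context = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=context)

# Retrieve the new VSAN managed objects exposed on the vCenter /vsanHealth endpoint
vcMos = vsanapiutils.GetVsanVcMos(si._stub, context=context)
clusterConfigSystem = vcMos['vsan-cluster-config-system']  # a VsanVcClusterConfigSystem instance
print(clusterConfigSystem)

Disconnect(si)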

Additional language bindings are being worked on, and if you have any feedback on what you might like to see next, feel free to leave a comment.

Categories // Automation, ESXi, VSAN, vSphere 6.0 Tags // C#, java, pyVmomi, rbvmomi, Virtual SAN, vSphere 6.0 Update 2, vSphere API

Docker Container for the Ruby vSphere Console (RVC)

11.08.2015 by William Lam // 2 Comments

The Ruby vSphere Console (RVC) is an extremely useful tool for vSphere Administrators and has been bundled as part of vCenter Server (both Windows and the vCenter Server Appliance) since vSphere 6.0. One feature that is only available in the VCSA's version of RVC is the VSAN Observer, which is used to capture and analyze performance statistics for a VSAN environment for troubleshooting purposes.

For customers who are still using the Windows version of vCenter Server and wish to leverage this tool, it is generally recommended that you deploy a standalone VCSA just for the VSAN Observer capability, which does not require any additional licensing. Although it only takes 10 minutes or so to set up, having to download and deploy a full-blown VCSA just to use the VSAN Observer is definitely not ideal, especially if you are resource constrained in your environment. You may also only need the VSAN Observer for a short amount of time, and in a troubleshooting situation, time is of the essence.

I recently came across an internal Socialcast thread, and one of the suggestions was to build a tiny Photon OS VM that already contained RVC. Instead of building a Photon OS image specific to RVC, why not just create a Docker Container for RVC? This also means you could pull down the Docker Container from Photon OS or any other system that has Docker installed. In fact, I had already built a Docker Container for some handy VMware Utilities, so it would be simple enough to just have an RVC Docker Container as well.

The one challenge I had was that the current RVC GitHub repo does not contain the latest vSphere 6.x changes. The fix was simple: I just copied the latest RVC files from a vSphere 6.0 Update 1 deployment of the VCSA (/opt/vmware/rvc and /usr/bin/rvc) and used them to build my RVC Docker Container, which is now hosted on Docker Hub here and includes the Dockerfile in case anyone is interested in how I built it.

To use the RVC Docker Container, you just need access to a Linux Container Host, for example VMware Photon OS, which can be deployed using an ISO or OVA. For instructions on setting that up, please take a look here; it should only take a minute or so. Once logged in, you just need to run the following commands to pull down the RVC Docker Container and to start the container:

# Pull the RVC image down from Docker Hub
docker pull lamw/rvc
# Start an interactive RVC session; the container is removed on exit
docker run --rm -it lamw/rvc

[Screenshot: starting the RVC Docker Container and accessing RVC]
As seen in the screenshot above, once the Docker Container has started, you can access RVC like you normally would. Below is a quick example of logging into one of my VSAN environments and using RVC to run the VSAN Health Check command.

[Screenshot: logging into a VSAN environment and running the VSAN Health Check command in RVC]
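For reference, the login and health check steps shown above look something like the following from within the container (the hostname, credentials and cluster inventory path are placeholders, and the vsan.health commands assume the health plugin bundled with the VCSA's RVC):

rvc administrator@vsphere.local@vcenter.example.com
vsan.health.health_summary /vcenter.example.com/Datacenter/computers/VSAN-Cluster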
If you wish to run the VSAN Observer with its live web server, you will need to map a port on the Linux Container Host to the VSAN Observer port, which defaults to 8010, when starting the RVC Docker Container. To keep things simple, I would recommend mapping 80->8010 by running the following command:

# Map port 80 on the Container Host to the VSAN Observer's default port 8010
docker run --rm -it -p 80:8010 lamw/rvc

Once the RVC Docker Container has started, you can then start the VSAN Observer with the --run-webserver option, and if you connect to the IP Address of your Linux Container Host using a browser, you should see the VSAN Observer Stats UI.
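For example, starting the observer from within RVC looks something like this (the cluster inventory path is a placeholder for your own environment):

vsan.observer /vcenter.example.com/Datacenter/computers/VSAN-Cluster --run-webserver --force

With the 80->8010 mapping above, browsing to http://<container-host-ip>/ should then bring up the VSAN Observer Stats UI.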

Hopefully this will come in handy for anyone who needs to quickly access RVC.

Categories // Docker, VSAN, vSphere 6.0 Tags // container, Docker, Photon, ruby vsphere console, rvc, vcenter server appliance, VCSA, vcva, VSAN, VSAN 6.1, vSphere 6.0 Update 1
