WilliamLam.com


vimtop: esxtop for the VCSA 6.0

02.13.2015 by William Lam // 6 Comments

A couple of weeks back I learned about a really cool new tool called vimtop, located in the new VCSA 6.0, from fellow colleague Nick Marshall. If you have ever used esxtop with ESXi, then you will feel right at home with vimtop, which is purpose-built to provide performance information and statistics about the VCSA and the applications running on it. This will definitely be a handy tool to be aware of when you need to troubleshoot performance issues or bottlenecks in the VCSA.

Disclaimer: While testing vimtop, I found that some of the command-line options are not currently functional, which is probably why the current version is 0.5 and tagged "Alpha". I have been told vimtop is still in active development, and I suspect Engineering wanted to get something out to customers to try and to gather feedback as they continue to iterate and add more features.

To launch vimtop, SSH to a VCSA 6.0 system and type "vimtop" in either the appliance shell or a regular bash shell.

[Image: vimtop0]
At first glance, vimtop looks very similar to esxtop, but you will quickly notice many cool new UI improvements which make navigating the interface much simpler. The first thing that should stand out is the use of colors to improve the readability of all the metrics. You will also notice that you can quickly navigate through the current list view by scrolling up and down or side to side using the directional arrow keys. When an item is selected, it is also clearly highlighted, which is a huge plus in my opinion when you need to troubleshoot and watch a particular entry or stat.

Here is a screenshot of selecting a specific row in vimtop; you can do this for a column as well:

[Image: vimtop-1-up-down]
There are three primary views in vimtop: Processes, Disks, and Networks statistics, which can be toggled using keyboard shortcuts. In fact, all navigation is performed through a series of global keyboard shortcuts, similar to esxtop. There are actually quite a few of them, and you can see the full list at any time by hitting the "h" key for the help menu.

Here is the complete list of keyboard shortcuts for your reference:

Keyboard Key Description
esc Clear existing selection and jump back to Process view
w Write the current settings out to a configuration file located at vimtop/vimtop.xml
s Set the refresh interval (seconds)
f Display all available CPUs overview
t Display Tasks currently managed by the appliance
g Expand top 4 physical CPUs currently available to the appliance
h Help menu
u Show/Hide the unit headers
i Show/Hide the top line
o Network view
p Pause the screen
l Select a particular column
delete Remove selected column
PgUp/PgDn Select first and last row and scroll to it
- Collapse selected item
+ Expand selected item
home/end Select first and last column and scroll to it
left/right arrow Select column
up/down arrow Select row
enter Display more info about a select item
< Move selected column to the left
> Move selected column to the right
k Disk View
m Display memory overview information
n Show/Hide the name headers
c Add new column
d Add selected column in descending order or to switch column to descending order
x Select optimal column width
z Clear sort order
a Add selected column in ascending order or to switch column to ascending order
q Quit
~ Display vimtop in Black/White mode

If you are more of a visual person, I have also created a visual keyboard layout of all the vimtop commands, which might be handy to print out and post on your wall. I actually got this awesome idea from one of our internal wikis and created a new layout to match all the commands currently in vimtop.

[Image: vimtop-shortcut-keys]
For each of the three views, you can also add and remove columns, just like you could with esxtop, using the "c" key. You can then select or de-select the metrics you wish to display in the current view using the spacebar.

[Image: add-column]
I figured it would also be useful to have a table of all the metrics and their definitions, as they are a bit difficult to read while in vimtop itself.

Processes

Metric ID Description
PID Process identifier
CMD Command name used to start the process as it is seen by the underlying system
CMDLINE The full command line of this process used during startup
NAME User readable name of the process
THREADS Number of native threads currently running in the process
%CPU (CPU Usage) Current CPU usage in percent for this process
MHZ Current CPU usage in MHz for this process
CPU Total CPU time used by the process during last measurement cycle (sum of cpu.used.system and cpu.used.user)
SYS CPU time spent by process in the system (kernel) routines
USR CPU time spent by process in the user land
%MEM (Memory Usage) Physical memory usage in percent for this process
MEM Physical (resident) memory used by this process
VIRT Total virtual memory size of this process (the complete working set including resident and swapped memory)
SHR Size of the shared code - these are any shared objects (so or DLL) loaded by the process
TEXT Code segment size of the process without any shared libraries
DATA Data segment size of the process (for managed process like JVM this includes the managed code also)
FD Total number of file descriptors opened by the process
FILS Number of all file objects opened by the process (sum of files, directories, and links)
FILE Number of regular files currently opened by the process
DIR Number of directories currently opened by the process
LNK Number of symbolic links currently opened by the process
DEVS Number of devices (char or block) opened by the process
CHAR Number of descriptors opened to character devices
BLCK Number of descriptors opened to block devices
CHNS Number of all communication channels opened by the process (either sockets or FIFOs)
SCKS Number of sockets (TCP, UDP, or raw) currently opened by the process
FIFO Pipes (named or not) opened by the process

Disks

Metric ID Description
DISK/PART Storage disk / partition identifier
IOS Number of I/O operations currently in progress on this disk (should go to zero)
IOTIME Milliseconds spent doing I/O operations on this disk / partition (increases for a nonzero number of I/O operations)
LAT Disk / partition access latency (in milliseconds), calculated as the total amount of time spent doing I/O divided by the total number of I/O operations completed during the last measurement interval
READS Number of reads issued to this disk / partition and completed successfully during last measurement interval
RDMRG Number of adjacent reads on this disk / partition merged for efficiency
READ Number of reads per second issued to this disk / partition
RDSCTRS Number of sectors read successfully from this disk / partition during last measurement interval
WRITES Number of writes issued to this disk / partition and completed successfully during last measurement interval
WRMRG Number of adjacent writes on this disk / partition merged for efficiency
WRITE Number of writes per second issued to this disk / partition
WRSCTRS Number of sectors written successfully to this disk / partition during last measurement interval

Networks

Metric ID Description
INTF Interface name
TRGPT Total throughput of this interface (Rx + Tx) in kilobytes
RATE The activity of this network interface in kBps
RXED Amount of data (in kilobytes) received during last measurement interval
RXRATE Rate of received data through this interface in kBps
TXED Amount of data (in kilobytes) transmitted during last measurement interval
TXRATE Rate of data transmission through this interface in kBps
RXMCAST Number of multicast packets received on this interface during last measurement interval
RXDROP Number of data rx-packets dropped during last measurement interval
TXDROP Number of data packets dropped upon transmission during last measurement interval
DROPPED Number of packets dropped on this network interface due to running out of buffers during last measurement cycle
ERRS Total number of faults (Tx and Rx) on this interface
RXERRS The sum of receive errors, rx-fifo errors, and rx-frame errors
TXERRS The sum of transmit errors, tx-fifo errors, and carrier errors
FIFOERRS FIFO overrun errors on this interface, caused by the host being too busy to serve the NIC hardware
CLLSNS Collisions detected on the transmission medium

There is definitely a lot more to explore in vimtop, but hopefully this provides a good reference point for quickly getting started. I have to say I really like a lot of the UI enhancements in vimtop, especially the ability to select and quickly watch a particular process. Hopefully some of these enhancements can make their way into esxtop to provide the same set of functionality in the future.

Categories // VCSA, vSphere 6.0 Tags // VCSA, vcva, vimtop, vSphere 6.0

How to configure an All-Flash VSAN 6.0 Configuration using Nested ESXi?

02.11.2015 by William Lam // 11 Comments

There has been a great deal of interest from customers and partners in an All-Flash VSAN configuration, especially as consumer-grade SSDs (eMLC) continue to drop in price and the endurance of these devices lasts much longer than originally expected, as mentioned in this article by Duncan Epping. In fact, last year at VMworld the folks over at Micron and SanDisk built and demoed an All-Flash VSAN configuration, proving this was not only cost-effective but also quite performant. You can read more about the details here and here. With the announcement of vSphere 6 this week and VMware Partner Exchange taking place the same week, there was a lot of excitement about what VSAN 6.0 might bring.

One of the coolest features in VSAN 6.0 is support for an All-Flash configuration. The folks over at SanDisk gave a sneak peek at VMware Partner Exchange a couple of weeks back on what they were able to accomplish with VSAN 6.0 using an All-Flash configuration. They achieved an impressive 2 million IOPS; for more details take a look here. I am pretty sure there are going to be plenty more partner announcements as we get closer to the GA of vSphere 6, and there will be a list of supported vendors and devices on the VMware VSAN HCL, so stay tuned.

To easily demonstrate this new feature, I will be using Nested ESXi, but the process to configure an All-Flash VSAN configuration is exactly the same on real physical hardware. Nested ESXi is a great learning tool to understand and walk through the exact process, but it should not be a substitute for actual hardware testing. You will need a minimum of 3 Nested ESXi hosts, and they should be configured with at least 6GB of memory or more when working with VSAN 6.0.

Disclaimer: Nested ESXi is not officially supported by VMware, please use at your own risk.

In VSAN 1.0, an All-Flash configuration was not officially supported; the only way to get this working was by "tricking" ESXi into thinking the SSDs used for the capacity tier were magnetic disks (MDs) by creating claim rules using ESXCLI. Though this method worked, VSAN itself assumed the capacity tier of storage consisted of regular magnetic disks, and hence its operations were not optimized for anything but magnetic disks. With VSAN 6.0, this is now different, and VSAN will optimize based on whether you are using a hybrid or an All-Flash configuration. In VSAN 6.0, there is a new property called IsCapacityFlash that is exposed, and it allows a user to specify whether an SSD is used for the write buffer or for capacity purposes.

[Image: Screen Shot 2015-02-10 at 10.01.12 PM]
Step 1 - We can easily view the IsCapacityFlash property by using the handy vdq VSAN utility, which has now been enhanced to include a few more properties. Run the following command to view your disks:

vdq -q

[Image: all-flash-vsan-6]
From the screenshot above, we can see that we have two disks eligible for VSAN and that both are SSDs. We can also see the new IsCapacityFlash property, which is currently set to 0 for both. We will want to select one of the disks and set this property to 1 to enable it for capacity use within VSAN.
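If you want to script this check rather than eyeball the screenshot, the vdq -q output can be post-processed off-host. Below is a minimal Python sketch, assuming the output is a JSON-like list with trailing commas; the sample data (states, flags, and the second device ID) is made up for illustration, and only the parsing approach is meant to carry over:

```python
import json
import re

# Sample text shaped like vdq -q output (assumed format; fields and the
# second device ID below are fabricated for illustration).
VDQ_OUTPUT = """
[
   {
      "Name"            : "naa.6000c295be1e7ac4370e6512a0003edf",
      "State"           : "Eligible for use by VSAN",
      "IsSSD"           : "1",
      "IsCapacityFlash" : "0",
   },
   {
      "Name"            : "naa.6000c2932c3a51f4ee28dc00de306f02",
      "State"           : "Eligible for use by VSAN",
      "IsSSD"           : "1",
      "IsCapacityFlash" : "1",
   },
]
"""

def parse_vdq(text):
    # Remove trailing commas before closing braces/brackets so the
    # JSON-ish text can be handled by the standard json module.
    cleaned = re.sub(r',(\s*[}\]])', r'\1', text)
    return json.loads(cleaned)

disks = parse_vdq(VDQ_OUTPUT)
capacity = [d["Name"] for d in disks if d.get("IsCapacityFlash") == "1"]
print(capacity)
```

This is just a convenience for auditing many hosts; on the appliance itself, re-running vdq -q is of course sufficient.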

Step 2 - Identify the SSD device(s) you wish to use for your capacity tier. A very simple way to do this is with the following ESXCLI snippet:

esxcli storage core device list  | grep -iE '(   Display Name: |   Size: )'

[Image: all-flash-vsan-1]
We can quickly get a list of the devices and their IDs along with their disk capacity. In the example above, I will be using the 8GB device for SSD capacity.
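If you prefer to pair up the grep output programmatically, here is a small Python sketch. The sample text is fabricated for illustration (ESXCLI reports Size in MB); only the pairing logic is the point:

```python
# Text shaped like the grep'd ESXCLI output above (sample values are
# fabricated; the second device ID is hypothetical).
SAMPLE = """\
   Display Name: Local VMware Disk (naa.6000c295be1e7ac4370e6512a0003edf)
   Size: 8192
   Display Name: Local VMware Disk (naa.6000c2932c3a51f4ee28dc00de306f02)
   Size: 4096
"""

def pair_devices(text):
    """Pair each 'Display Name:' line with the 'Size:' line that follows it."""
    devices, name = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Display Name: "):
            name = line[len("Display Name: "):]
        elif line.startswith("Size: ") and name is not None:
            devices.append((name, int(line[len("Size: "):])))
            name = None
    return devices

for name, size_mb in pair_devices(SAMPLE):
    print(f"{name}: {size_mb} MB")
```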

Step 3 - Once you have identified the device(s) from the previous step, we now need to add a new option called enable_capacity_flash to these device(s) using ESXCLI. There are actually three methods of assigning the capacity flash tag to a device, and all provide the same end result. Personally, I would go with Option 2, as it is much simpler to remember than the claim rule syntax 🙂 If you have the ESXi hosts connected to your vCenter Server, then Option 3 would be great, as you can perform this step from a single location.

Option 1: ESXCLI Claim Rules

Run the following two ESXCLI commands for each device you wish to mark for SSD capacity:

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.6000c295be1e7ac4370e6512a0003edf -o enable_capacity_flash
esxcli storage core claiming reclaim -d naa.6000c295be1e7ac4370e6512a0003edf

[Image: all-flash-vsan-2]
Option 2: ESXCLI using new VSAN tagging command

esxcli vsan storage tag add -d naa.6000c295be1e7ac4370e6512a0003edf -t capacityFlash

Option 3: RVC using new vsan.host_claim_disks_differently command

vsan.host_claim_disks_differently --disk naa.6000c295be1e7ac4370e6512a0003edf --claim-type capacity_flash
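If you have several devices to tag, a quick way to avoid typos is to generate the command lines from the options above programmatically. This Python sketch only builds the Option 2 and Option 3 command strings for a list of device IDs; it does not talk to the host itself, and you would still run the output on the ESXi host or in your RVC session:

```python
def capacity_flash_commands(device_ids, method="vsan-tag"):
    """Build command lines for marking each device as capacity flash.
    method='vsan-tag' emits the Option 2 ESXCLI command;
    method='rvc' emits the Option 3 RVC command."""
    cmds = []
    for dev in device_ids:
        if method == "vsan-tag":
            cmds.append(f"esxcli vsan storage tag add -d {dev} -t capacityFlash")
        elif method == "rvc":
            cmds.append(f"vsan.host_claim_disks_differently --disk {dev} "
                        f"--claim-type capacity_flash")
    return cmds

# The device ID below is the one used in this example.
for cmd in capacity_flash_commands(["naa.6000c295be1e7ac4370e6512a0003edf"]):
    print(cmd)
```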

Step 4 - To verify the changes took effect, we can re-run the vdq -q command and we should now see our device(s) marked for SSD capacity.

[Image: all-flash-vsan-3]
Step 5 - You can now create your VSAN Cluster using the vSphere Web Client as you normally would and add the ESXi hosts into the cluster, or you can bootstrap it using ESXCLI if you are trying to run vCenter Server on top of VSAN; for more details take a look here.

One thing I found interesting is that in the vSphere Web Client, when setting up an All-Flash VSAN configuration, the SSD(s) used for capacity will still show up as "HDD". I am not sure if this is what the final UI will look like before vSphere 6.0 goes GA.

[Image: all-flash-vsan-4]
If you want to check the actual device type, you can always go to a specific ESXi host under Manage->Storage->Storage Devices to get more details. If we look at our NAA* device IDs, we can see that both devices are in fact SSDs.

[Image: all-flash-vsan-5]
Hopefully for those of you interested in an All-Flash VSAN configuration, you can now quickly get a feel for running VSAN 6.0 in a Nested ESXi environment. I will be publishing updated OVF templates for various types of VSAN 6.0 testing in the coming weeks, so stay tuned.

Categories // ESXi, Nested Virtualization, VSAN, vSphere 6.0 Tags // enable_capacity_flash, esxcli, IsCapacityFlash, Virtual SAN, VSAN, vSphere 6.0

Increasing disk capacity simplified with VCSA 6.0 using LVM autogrow

02.10.2015 by William Lam // 20 Comments

With previous releases of the VCSA, increasing disk capacity was not a very straightforward process. Even though you could easily increase the size of the underlying VMDK while the VCSA was running, increasing the guest OS filesystem was not as seamless. In fact, the process was to add a new VMDK, format it, and then copy the contents from the old disk to the new disk, as detailed in VMware KB 2056764. This meant that with previous releases of VCSA 5.x, you would need to incur downtime for your environment, and it could be quite significant depending on your familiarity with the steps mentioned in the KB, not to mention the time it took to copy the data.

UPDATE (12/06/16) - For VCSA 6.5 deployments, please refer to the article here as the instructions have changed since VCSA 6.0.

The reason for this unnecessary complexity is that the VCSA did not take advantage of a Logical Volume Manager (LVM) for managing its disks. In VCSA 6.0, LVM is now used to make it extremely easy to increase disk capacity while the VCSA is running. VCSA 6.0 further simplifies this by separating out the various functions into their own disk partitions, comprised of 11 VMDKs compared to the monolithic design of previous VCSA releases. This not only allows you to increase capacity for a specific partition, but you can also now attach specific storage SLAs using VM Storage Policies on specific VMDKs, such as the Database or Log VMDK for example.

In the example below, I will walk through the process of increasing the DB VMDK from the existing 10GB to 20GB while the vCenter Server is still running.

Step 1 - Verify the existing disk capacity using "df -h"

[Image: increase-vmdk-in vcsa-01]
Step 2 - Increase the capacity on VMDK 6 which represents the DB partition using the vSphere Web/C# Client.

Step 3 - Once the VMDK has been increased, you will need to run the following command in the VCSA, which will automatically expand any Logical Volumes whose Physical Volumes have been increased:

vpxd_servicecfg storage lvm autogrow

[Image: increase-vmdk-in vcsa-02]
Step 4 - Confirm the newly added capacity has been consumed

[Image: increase-vmdk-in vcsa-03]
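The before/after "df -h" checks in Steps 1 and 4 can also be compared programmatically. The Python sketch below parses two df-style outputs and reports any mount point whose size changed; the filesystem name, mount point, and sizes are invented for illustration and will differ on a real VCSA:

```python
# Two df -h style outputs (fabricated values for illustration).
BEFORE = """\
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/db_vg-db   9.9G  1.5G  7.9G  16% /storage/db
"""
AFTER = """\
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/db_vg-db    20G  1.5G   18G   8% /storage/db
"""

def df_sizes(text):
    """Map mount point -> Size column, skipping the header line.
    Assumes single-token mount points (no embedded spaces)."""
    sizes = {}
    for line in text.splitlines()[1:]:
        parts = line.split()
        if len(parts) >= 6:
            sizes[parts[5]] = parts[1]
    return sizes

before, after = df_sizes(BEFORE), df_sizes(AFTER)
for mount in before:
    if before[mount] != after.get(mount):
        print(f"{mount}: {before[mount]} -> {after[mount]}")  # prints: /storage/db: 9.9G -> 20G
```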
If you would like to learn more about the different VMDK structure in the new VCSA 6.0, I will be sharing more details in a future article.

Categories // Automation, VCSA, vSphere 6.0 Tags // autogrow, lvm, VCSA, vcva, vpxd_servicecfg, vSphere 6.0


Author

William is Distinguished Platform Engineering Architect in the VMware Cloud Foundation (VCF) Division at Broadcom. His primary focus is helping customers and partners build, run and operate a modern Private Cloud using the VMware Cloud Foundation (VCF) platform.
