Quick Tip - vSphere MOB is disabled by default in ESXi 6.0

02.24.2015 by William Lam // 9 Comments

Yesterday, I noticed an interesting error when trying to connect directly to the vSphere MOB on an ESXi 6.0 host. The following error message was displayed on the browser:

503 Service Unavailable (Failed to connect to endpoint: [N7Vmacore4Http20NamedPipeServiceSpecE:0x4bf02038] _serverNamespace = /mob _isRedirect = false _pipeName =/var/run/vmware/proxy-mob)

[Screenshot: vsphere-6.0-mob-disable-0]
This was the first time I had noticed this, as I normally use the vSphere MOB for debugging purposes or exploring the vSphere API. The vSphere MOB is also a quick and handy way to unregister vSphere Plugins when connecting to vCenter Server.

I did some further investigation and it turns out that in vSphere 6.0, the vSphere MOB is disabled by default on an ESXi 6.0 host. The reason for this is to provide security hardening out of the box for ESXi rather than requiring an administrator to harden it after the fact. If you are familiar with the vSphere Security Hardening Guides, you will recall that one of the guidelines is to disable the vSphere MOB on an ESXi host, and with vSphere 6.0 this is now done automatically for you. This information will also be documented as part of the vSphere 6.0 documentation when it GAs.

If you still need to access the vSphere MOB on an ESXi host, it can of course be re-enabled. There is a new ESXi Advanced Setting called Config.HostAgent.plugins.solo.enableMob which controls whether the vSphere MOB is enabled or disabled, as seen in the screenshot below.

[Screenshot: vsphere-6.0-mob-disable-1]
You have the option of using either the vSphere C# Client as shown in the screenshot above or the vSphere Web Client to configure the ESXi Advanced Setting:

[Screenshot: vsphere-6.0-mob-disable-3]
You can also configure this property using vim-cmd in the ESXi Shell.

Listing the ESXi Advanced Setting using vim-cmd:

vim-cmd hostsvc/advopt/view Config.HostAgent.plugins.solo.enableMob

[Screenshot: vsphere-6.0-mob-disable-2]
Configuring the ESXi Advanced Setting to true:

vim-cmd hostsvc/advopt/update Config.HostAgent.plugins.solo.enableMob bool true

If you prefer to automate this using PowerCLI or the vSphere API, that is also possible. Below are two examples using the Get-VMHostAdvancedConfiguration and Set-VMHostAdvancedConfiguration PowerCLI cmdlets.

Listing the ESXi Advanced Setting using PowerCLI:

Get-VMHost 192.168.1.200 | Get-VmHostAdvancedConfiguration -Name Config.HostAgent.plugins.solo.enableMob | Format-List

[Screenshot: vsphere-6.0-mob-disable-4]
Configuring the ESXi Advanced Setting to true:

Get-VMHost 192.168.1.200 | Set-VMHostAdvancedConfiguration -Name Config.HostAgent.plugins.solo.enableMob  -Value True
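The vSphere API mentioned above can also be used directly, without the dedicated cmdlets, by going through the host's OptionManager. Below is a minimal PowerCLI sketch of that approach; the host IP is just an example and the boolean value type is an assumption based on the "bool" type shown in the earlier vim-cmd command.

# Sketch: enable the vSphere MOB via the vSphere API (OptionManager.UpdateOptions)
# Assumes an existing Connect-VIServer session; the host IP is a placeholder
$vmhost = Get-VMHost -Name 192.168.1.200
$optMgr = Get-View -Id $vmhost.ExtensionData.ConfigManager.AdvancedOption

$opt = New-Object VMware.Vim.OptionValue
$opt.Key   = "Config.HostAgent.plugins.solo.enableMob"
$opt.Value = $true    # assumption: boolean value, matching "bool true" in vim-cmd
$optMgr.UpdateOptions(@($opt))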

If you rely on using the vSphere MOB on ESXi and would like it to be enabled by default, I would recommend updating either your ESXi Kickstart or Host Profile to include this additional configuration so that you do not get caught off guard like I did 🙂 If you only need to use the vSphere MOB on occasion, or do not have a use for it at all, then leaving the default is sufficient.
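If you also want your existing hosts brought in line, another lightweight option alongside a Kickstart or Host Profile is to push the setting out with PowerCLI. A rough sketch, assuming an existing Connect-VIServer session against your vCenter Server:

# Sketch: apply the MOB setting to every host managed by the connected vCenter Server
# Change -Value to False if you prefer to keep the MOB disabled everywhere
Get-VMHost | Foreach-Object {
    $_ | Set-VMHostAdvancedConfiguration -Name Config.HostAgent.plugins.solo.enableMob -Value True
}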

Categories // Automation, ESXi, vSphere 6.0 Tags // ESXi, mob, vim-cmd, vSphere 6.0, vSphere API

Dynamic memory resizing for vCenter Server 6.0

02.23.2015 by William Lam // 32 Comments

In previous releases of vSphere, scaling up resources such as storage or memory for vCenter Server was a huge pain point for our customers. Before the various vCenter Server services could consume the new resources, additional manual steps were required. Though this type of operation is usually infrequent, there is still operational overhead which can potentially lead to increased downtime of your vCenter Server.

For example, increasing storage capacity for the VCSA was an offline operation that required adding an additional disk and then copying the existing content to the new disk, which could be quite error prone and lead to a significant amount of downtime. In vSphere 6.0, the VCSA now uses LVM, which provides the ability to increase storage capacity online without any downtime to vCenter Server. Increasing memory was also challenging because you had to manually adjust several configuration files that manage the JVM heap settings for various vCenter Server services, as described in this VMware KB. Having complex workflows to perform basic resource expansion can increase the risk of errors, especially when the process is foreign to those performing it for the very first time.

To help solve this problem, vCenter Server 6.0 (both Windows and the VCSA) now includes a built-in dynamic memory reconfiguration process that automatically runs at boot up. This process uses a dynamic algorithm that inspects the current amount of CPU, memory and storage available to determine the appropriate size to configure the vCenter Server. This means that you no longer have to tweak individual JVM settings for the various services within vCenter Server; this happens automatically by analyzing the resources that are available and then calculating the configuration based on the supported maximums for vCenter Server.

Note: In vSphere 6.0, there are additional services beyond just the core vCenter Server, vSphere Web Client, vCenter SSO and Inventory Service.

The dynamic memory algorithm understands the minimum amount of resources required to run a vCenter Server and is bounded between a "Tiny" configuration, which is 2 vCPU and 8GB of memory, and a "Large" configuration, which is 16 vCPU and 32GB of memory. This is important to note because if you try to configure the vCenter Server with less memory than the supported minimum, the algorithm will still dynamically distribute the available memory to the various services, but it could lead to performance degradation as the different services may not receive the amount of memory they require to run. YMMV if you decide to reduce the supported amount of memory, but the algorithm will distribute what's available.

The process which does all the magic is a utility called cloudvm-ram-size and there are several useful options to be aware of. To view the current memory assignment for the various vCenter Server services including the OS, you can run the following command on the VCSA as an example:

cloudvm-ram-size -l

[Screenshot: cloudvm-ram-size -l output for a "Tiny" (8GB) deployment]
From the screenshot above, we can see a very simple breakdown of the current memory assignment for a "Tiny" deployment which has 8GB of memory.

To show that the dynamic memory algorithm does in fact run when more memory is added, the example below uses a VCSA that was initially configured with 8GB of memory. I captured the running configuration, shut down the vCenter Server and increased its memory to 10GB. I then powered the VCSA back on and captured the running state again; you can see the differences in the screenshot below.

[Screenshot: memory assignment before and after increasing the VCSA memory to 10GB]
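For reference, the before/after test above can also be scripted with PowerCLI. This is only a rough outline: it assumes you connect directly to the ESXi host running the VCSA (since vCenter itself is down during the resize), and the VM name and host IP are placeholders.

# Sketch: resize VCSA memory and let the dynamic memory algorithm redistribute it at the next boot
Connect-VIServer -Server 192.168.1.200                        # placeholder: ESXi host running the VCSA
Get-VM -Name "VCSA-6.0" | Shutdown-VMGuest -Confirm:$false    # graceful guest OS shutdown

# Wait for the VM to power off before reconfiguring memory
while ((Get-VM -Name "VCSA-6.0").PowerState -ne "PoweredOff") { Start-Sleep -Seconds 10 }

Get-VM -Name "VCSA-6.0" | Set-VM -MemoryGB 10 -Confirm:$false   # 8GB -> 10GB
Get-VM -Name "VCSA-6.0" | Start-VM                              # memory is re-distributed during boot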
Another useful command to be aware of is one that shows the current memory usage for all services. You can do this by running the following command:

cloudvm-ram-size -S

[Screenshot: cloudvm-ram-size -S output showing current memory usage for all services]
As you can see, the dynamic memory algorithm is a very welcome feature for vCenter Server and will greatly simplify the operational tasks when you need to scale resources such as CPU and memory up or down. I know this is definitely one of the enhancements I have been waiting for and I am glad to see it in the new vSphere 6.0 release! As of right now a system reboot is required, but who knows, maybe in the future we will be able to increase memory while the VCSA is still running and simply reload the services ...

Categories // VCSA, vSphere 6.0 Tags // cloudvm-ram-size, jvm heap, vCenter Server, VCSA, vcva

Quick Tip - smartd configurable polling interval in vSphere 6.0

02.20.2015 by William Lam // 1 Comment

In vSphere 5.1, one of the major storage enhancements that was part of the new I/O Device Management (IODM) framework was the addition of SMART (Self Monitoring, Analysis And Reporting Technology) data for monitoring FC, FCoE, iSCSI and SAS protocol statistics. This is especially useful for monitoring the health of an SSD device. The SMART data is provided by a SMART daemon which lives inside of ESXi and runs every 30 minutes to gather statistics and diagnostic information from the underlying storage devices, and that information is exposed through the following ESXCLI command:

esxcli storage core device smart get -d [DEVICE]

[Screenshot: esxcli storage core device smart get output]
If you would like to learn more about IODM and SMART, be sure to check out Cormac Hogan's in-depth article here.
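If you would rather pull the SMART data remotely instead of from the ESXi Shell, the same ESXCLI namespace is also reachable through PowerCLI's Get-EsxCli cmdlet. A minimal sketch, assuming a PowerCLI release that supports the -V2 interface; the host IP and device name are placeholders:

# Sketch: retrieve SMART data for a device via Get-EsxCli (V2 interface)
$vmhost = Get-VMHost -Name 192.168.1.200
$esxcli = Get-EsxCli -VMHost $vmhost -V2

# List the devices first to find the identifier, then query SMART data for one of them
$esxcli.storage.core.device.list.Invoke()
$esxcli.storage.core.device.smart.get.Invoke(@{devicename = "naa.XXXXXXXXXXXXXXXX"})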

The default polling interval for the SMART daemon in vSphere 5.1 was not configurable; 30 minutes was the system default. For most customers, the out-of-the-box configuration should be sufficient. However, for customers who wish to have greater flexibility in the polling frequency, the default can now be adjusted in vSphere 6.0. The smartd process now includes a new -i option which specifies the polling interval.

[root@mini:~] smartd -h
smartd: option requires an argument -- 'h'
smartd <options>
-i   Polling interval (in minutes) for smartd
(default Polling interval is 30 minutes)

If you wish to change the default, you will need to modify the /etc/init.d/smartd init script to include the interval option. One issue I have found is that changes to the init script do not persist across reboots, as modifications to these files are not meant to be performed by users. In the case of adjusting the polling interval, we need to add the additional option to the smartd startup.

We can still accomplish this by adding the following to /etc/rc.local.d/local.sh, which makes the necessary adjustment and restarts the smartd process:

# Desired smartd polling interval in minutes (default is 30)
SMARTD_POLL_INTERVAL=35
# Stop smartd, inject the -i option into its startup parameters, then start it again
/etc/init.d/smartd stop
sed -i "s/^SMARTD_SCHED_PARAM.*/SMARTD_SCHED_PARAM=\"-i ${SMARTD_POLL_INTERVAL} ++group=smartd\"/g" /etc/init.d/smartd
/etc/init.d/smartd start

Note: The -i option is only visible when the smartd process is not running.

If you wish to see the change take effect immediately, you can run the /etc/rc.local.d/local.sh script once manually; otherwise, this will happen automatically when ESXi boots up. If we perform a process lookup using "ps", we can see that smartd is now configured to poll every 35 minutes instead of the default 30.

[Screenshot: ps output showing smartd running with a 35 minute polling interval]

Categories // ESXi, vSphere 6.0 Tags // esxcli, iodm, smartd, vSphere 6.0
