
New Hidden CBRC (Content-Based Read Cache) Feature in vSphere 5 & for VMware View 5?

08.20.2011 by William Lam // 25 Comments

CBRC (Content-Based Read Cache) is another new/hidden feature of vSphere 5, not to be confused with the new Host Cache feature (swap to host cache). I initially thought CBRC was related to Host Cache and that it might have been an internal name for that feature. In a recent discussion on the VMTN community forums, a reader pointed out that CBRC is different and that there is not a whole lot of information on how it works. I decided to perform a quick Google search to see if anyone had written about this feature, and one site had an interesting quote from a VMware sales rep on how CBRC works:

Content-Based Read Cache. A content-based read cache (CBRC) has been delivered for specific use with View (VDI) workloads. With this option configured in ESX, a read cache is constructed in memory optimized for recognizing, handling, and deduplicating VDI client images. The cache is managed from within the View Composer and delivers a significant reduction, as high as 90% by early estimates, in IOPS from each ESX host to the storage platform holding client images. This reduction in IOPS enables large scaling of the number of clients in case multiple I/O storms, typical in large VDI deployments, occur. 

It looks like CBRC is implemented within the hypervisor but will be leveraged by VMware View 5 for provisioning Linked Clones. One question I had was whether or not this in-memory cache could be used without VMware View?

*** Disclaimer: I do not have any insider information from VMware, these are my own personal observations. The following section is not supported by VMware, use at your own risk ***

There are two new sections under the Advanced Settings of an ESXi 5 host: CBRC and Digest.

There are four configurable options: one to enable CBRC, one to configure the cache size, one to configure the cache size reservation, and one to set the interval for the digest journal.

I believe the Digest section relates to the algorithms used by CBRC.

CBRC looks to support a maximum of 2GB of memory, with a default reservation of 400MB. In this example, I have a brand new ESXi 5 host configured with 8GB of memory and no running virtual machines; here is how much memory it is using.

Now let's go ahead and change the CBRC reserved memory from 400MB to 2048MB (the maximum) and enable CBRC.

If we go back to the ESXi summary page, we'll see that an additional 2GB is now being reserved by the ESXi host.

You can also enable and configure CBRC using the CLI, but enabling the feature is only possible through the vim-cmd interface in the ESXi Shell; the other three options can be configured using the legacy esxcfg-advcfg, esxcli or vsish.

Here's an example of enabling CBRC and changing the CBRC memory reservation to 1GB:

vim-cmd hostsvc/advopt/update CBRC.Enable bool true
vim-cmd hostsvc/advopt/update CBRC.DCacheMemReserved long 1024
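
For the other three options, here is a minimal sketch using esxcfg-advcfg and esxcli, assuming the advanced option path /CBRC/DCacheMemReserved maps to the CBRC.DCacheMemReserved option shown above:

# Legacy esxcfg-advcfg: set the reservation to 1GB, then read it back
esxcfg-advcfg -s 1024 /CBRC/DCacheMemReserved
esxcfg-advcfg -g /CBRC/DCacheMemReserved

# esxcli equivalent
esxcli system settings advanced set -o /CBRC/DCacheMemReserved -i 1024
esxcli system settings advanced list -o /CBRC/DCacheMemReserved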

If you are interested in other advanced settings for CBRC that are not publicly exposed, be sure to check out this post here.

Note: Something I noticed about CBRC is that there is an admission control check when you initially enable the feature. If you do not have sufficient memory to pass the admission control check, you will get a very generic error. To confirm that it is related to admission control, look in vmkernel.log for a message such as the following:

WARNING: cbrc_filter: CBRC_MemSetMemAllocation:1420:Failed to set memory resource parameters for CBRC (Admission check failed for memory resource)
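
A quick way to search for these messages from the ESXi Shell:

# search the vmkernel log for CBRC-related entries
grep -i cbrc /var/log/vmkernel.log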

So now that CBRC is enabled, how do we actually use the feature? Well, as I mentioned earlier in the post, this is both a new and hidden feature. Hidden in the sense that it's not meant to be used directly on an ESXi host or vCenter Server, but rather by View Composer. Though you can still access the hidden APIs using the vSphere MOB connected to either a vCenter Server or an ESXi host.

To get to the new CBRC managed object manager, you can point your browser to the following URL:

https://[hostname]/mob/?moid=ServiceInstance&method=retrieveInternalContent

You will notice a new managed object manager, "cbrcManager"; go ahead and click on it.

There are four methods associated with configuring CBRC: ConfigureDigest_Task, QueryDigestRuntimeStatus, QueryDigestInfo, and RecomputeDigest_Task.

Note: A shortcut to getting to the CBRC managed object manager is using the following URL:

ESX(i) - https://[hostname]/mob/?moid=ha-cbrc-manager
vCenter - https://[hostname]/mob/?moid=CbrcManager

The ConfigureDigest_Task method is what is needed to configure CBRC for a given virtual machine, specifically for a virtual disk. The parameters needed are the managed object reference (MoRef) ID of the virtual machine and the virtual disk device ID.

If we browse over to https://[hostname]/mob/?moid=ha-host, we'll be able to identify the virtual machine's MoRef ID; in this example, it's 1

Next, we'll need to identify the virtual disk device ID by traversing the virtual hardware array until we find the virtual disk we're interested in.

The ID for this virtual disk is 3000. You will also notice a new property called digestEnabled, which is currently disabled.

Now we have everything we need to invoke the ConfigureDigest_Task method. Click on the method name and it will open a new page for you to specify the virtual machine ID and the deviceKey for the virtual disk. You will see that the next option is to enable CBRC; I'm not exactly sure what the latter two options do, but I'll go ahead and enable them anyway.

Once the task has completed, you can view the results and you will see that CBRC has successfully been configured for the virtual machine:

You can also see that the task has been kicked off within the vSphere Client, depending on whether you're connecting to ESXi 5 directly or to vCenter Server:

If we go back to the virtual disk, we will see that the digestEnabled property is now enabled:

One interesting observation: after you enable CBRC, if you browse the datastore of the virtual machine, you will see a new VMDK created with "digest" in its name; this is probably what keeps track of the virtual machine's blocks and what is loaded into memory.

We can also use QueryDigestRuntimeStatus to check whether a virtual machine has CBRC enabled, again specifying the virtual machine's MoRef ID and virtual disk device ID:

We can use QueryDigestInfo on an offline virtual machine to get the details of the digest information for a virtual machine with CBRC enabled:

The last method, RecomputeDigest_Task, should be self-explanatory: it allows you to compute a partial or full digest.

Enabling CBRC loads the CBRC kernel module, and with that come some interesting statistics that can be viewed for a given virtual machine. We'll be leveraging the VMware vsish interface to access the CBRC statistics. To get started, just type vsish and then change into the /vmkModules/cbrc_filter path.

One interesting property is dcacheStats; you can just cat this entry and it provides an enormous amount of statistics about the cache, including the number of virtual machines that are using the cache and various IO counters.
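
For reference, here is how to get there from the ESXi Shell; vsish supports both an interactive session and a one-shot -e mode:

vsish
cd /vmkModules/cbrc_filter
cat dcacheStats

# one-shot equivalent without entering the interactive shell
vsish -e cat /vmkModules/cbrc_filter/dcacheStats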

You will see that all counters are currently zero. Once we start to spin up some Linked Clone virtual machines, you will want to pay close attention to some of these counters.

To answer the question of whether CBRC would work outside of VMware View 5, I decided to perform a functional test of the feature. I generated my own VMware Linked Clones using the vSphere SDK for Perl script vGhettoLinkedClone.pl. You will need a vCenter Server, but you will NOT need VMware View.

Step 1 - Create an offline snapshot of the virtual machine that will be used as the base/golden image for the Linked Clones; in this example, the snapshot will be called "base"

Step 2 - Create several Linked Clones based off of this base/golden virtual machine. In this example, I will be creating three Linked Clone virtual machines named ALinkedCloneVM1, ALinkedCloneVM2 and ALinkedCloneVM3.
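
For illustration, an invocation might look something like the following; the flag names here are hypothetical, so check the script's built-in help for the actual parameters:

# hypothetical flags -- consult the script's usage output for the real ones
./vGhettoLinkedClone.pl --server vcenter.primp-industries.com --username administrator --vmname GoldenVM --snapshotname base --clonename ALinkedCloneVM1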

If we go over to the vCenter Server, we should see the following inventory:

Step 3 - Let's power on the first two virtual machines, "ALinkedCloneVM1" and "ALinkedCloneVM2", and check out the dcacheStats from vsish to see what has changed.

As you can see, we now have counters incrementing for the number of VMs using the cache created by CBRC, as well as counters for the backend IO. It looks like the new Linked Clones can in fact leverage CBRC without VMware View 5. Now, this was purely a functional test and these VMs were basically dummy shells; I would be very interested to see if someone is able to get this really working with real base images and seeing the reduction in IOPS during a VDI boot storm.

Categories // Uncategorized Tags // cbrc, ESXi 5.0, vmware view 5, vSphere 5.0

How to Send vCenter Alarm Notification to Growl

08.14.2011 by William Lam // 2 Comments

This tweet from Jason Nash and @PunchingClouds says it all and here it is!

I did some research this afternoon and stumbled upon the article Nagios: Notifications via Growl. By leveraging the Net::Growl Perl module, I was able to forward alarms generated from a vCenter Server to a system running Growl.

Software Requirements:

  • Growl for Windows or Mac OSX installed on a system to receive notifications
  • vSphere SDK for Perl installed on vCenter Server

Step 1 - Install Growl and configure a password under the "Security" tab, and ensure "Allow network notification" is enabled

Step 2 - Install the vSphere SDK for Perl on your vCenter Server. You may also need to update the PATH variable with the Perl bin directory (e.g. C:\Program Files\VMware\VMware vSphere CLI\Perl\bin)

Step 3 - Install the Net::Growl Perl module using ppm (Perl Package Manager), which comes with the ActiveState Perl installed as part of the vSphere SDK for Perl. This requires that your vCenter Server have internet access to the ActiveState site; if you cannot get this access, you can install the module locally on another system, extract Growl.pm, and copy it to C:\Program Files\VMware\VMware vSphere CLI\Perl\site\lib\Net on your vCenter Server
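
Assuming ppm can reach the ActiveState repository, the installation is a one-liner (the exact package name is my assumption):

ppm install Net-Growl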

Step 4 - Copy the Perl script from here and store it somewhere on your vCenter Server; make sure it has the .pl extension. In this example, I named it growl.pl

Step 5 - To verify that the Growl Perl script works and can communicate with the system running Growl, you can manually test it by running the following command:

growl.pl -H william.primp-industries.com -p vmware -a custom -t Alert -m "hello william" -s 1

You will need to change -H to the hostname or IP address of the system with Growl installed, and of course use the password you set up. You should then see a notification with the message you just sent.

Step 6 - Create a batch script which will call the growl.pl script and store it somewhere on your vCenter Server. Here is what the script (sendGrowl.bat) looks like; you can modify it to fit your requirements.

:: http://www.virtuallyghetto.com/
:: Custom vCenter Alarm script to generate growl notifications

set GROWL_SERVER=william.primp-industries.com
set GROWL_PASSWORD=vmware
set GROWL_SCRIPT_PATH="C:\Documents and Settings\primp.PRIMP-IND\Desktop\growl.pl"
set "PATH=%PATH%;C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\site\bin"

%GROWL_SCRIPT_PATH% -H %GROWL_SERVER% -p %GROWL_PASSWORD% -a %COMPUTERNAME% -t Alert -m "%VMWARE_ALARM_EVENTDESCRIPTION%" -s 1

Note: If you would like to get a list of the other default VMware alarm variables, run the "SET" command from within the script and output it to a file to get more details on the various variables that can be accessed.
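
For example, you could temporarily add a line like this to sendGrowl.bat (the output path is arbitrary):

:: dump all environment variables, including the VMWARE_ALARM_* ones
set > "C:\vmware-alarm-variables.txt"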

Step 7 - Create a new or update an existing vCenter alarm and, under "Actions", specify the "Run a command" option and provide the full path to sendGrowl.bat

Step 8 - For testing purposes, I created a new alarm that triggers when an ESX(i) host goes in or out of maintenance mode, and you can see from "Tasks and Events" that our script is triggered on the vCenter Server

And now for the finale: you should see a notification from Growl on your system, and since we enabled the "sticky" parameter, the notification will stay on your screen until you click on it. You can see from the script example that I set the message to the event description and registered the application as the name of the vCenter Server, which allows you to have multiple vCenter Servers forwarding you notifications.

So there you have it, forwarding vCenter alarms to Growl.

Note: Once a vCenter alarm has been triggered, the script will not fire again until the original alarm has been reset to green. This behavior is probably okay for the majority of events one would want to monitor, but if you want it to continuously alert you, you will need to find a way to reset the alarm on the vCenter Server.

UPDATE: Thanks to Richard Cardona for reminding me that this can also be implemented on the new VCVA (vCenter Server Virtual Appliance) in vSphere 5. Here are the instructions for setting it up.

Step 1 - Install Growl and configure a password under the "Security" tab, and ensure "Allow network notification" is enabled on the system that will receive the Growl notifications

Step 2 - To install Net::Growl, we'll be using cpan, which requires two packages that are not installed by default on the SLES-based VCVA. Using the Tips and Tricks for vMA 5 (which also runs SLES), we'll go ahead and set up the zypper package manager on the VCVA to install the two required packages: make and perl-YAML

zypper --gpg-auto-import-keys ar http://download.opensuse.org/distribution/11.1/repo/oss/ 11.1
zypper --gpg-auto-import-keys ar http://download.opensuse.org/update/11.1/ Update-11.1
zypper refresh
zypper in make
zypper in perl-YAML

Step 3 - You will use cpan to install Net::Growl

perl -MCPAN -e shell

Step 4 - Once you are inside the cpan shell, type the following to install Net::Growl

install Net::Growl

Step 5 - Copy the Perl script from here and store it somewhere on your vCenter Server (e.g. /root); make sure it has the .pl extension and execute permission. In this example, I named it growl.pl

Step 6 - To verify that the Growl Perl script works and can communicate with the system running Growl, you can manually test it by running the following command:

vcenter50-2:~ # ./growl.pl -H william.primp-industries.com -p vmware -a custom -t Alert -m "hello william" -s 1

Step 7 - Create a shell script which will call the growl.pl script and store it somewhere on your vCenter Server (e.g. /root). Here is what the script (sendGrowl.sh) looks like; you can modify it to fit your requirements.
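
A minimal sketch of sendGrowl.sh, mirroring the Windows batch version above with the same example values:

#!/bin/bash
# http://www.virtuallyghetto.com/
# Custom vCenter alarm script to generate Growl notifications

GROWL_SERVER=william.primp-industries.com
GROWL_PASSWORD=vmware
GROWL_SCRIPT_PATH=/root/growl.pl

# VMWARE_ALARM_EVENTDESCRIPTION is populated by vCenter when the alarm fires
"${GROWL_SCRIPT_PATH}" -H "${GROWL_SERVER}" -p "${GROWL_PASSWORD}" -a "$(hostname)" -t Alert -m "${VMWARE_ALARM_EVENTDESCRIPTION}" -s 1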

Step 8 - Create a new or update an existing vCenter alarm and, under "Actions", specify the "Run a command" option and provide the full path to sendGrowl.sh

Categories // Uncategorized Tags // alarm, api, growl, VCSA, vcva, vSphere 4.1, vSphere 5.0, vsphere sdk for perl

When Can I Run Apple OSX on vSphere 5?

08.12.2011 by William Lam // 9 Comments

There was a recent post from the famous Scott Drummonds about Running Apple OSX Lion on vSphere 5, in which Scott provided his interpretation/opinion of Apple's EULA on virtualizing Apple OSX. Though the EULA can be somewhat confusing, it is true that with the release of vSphere 5, you can now run OSX 10.7 (Lion), 10.6 (Snow Leopard) and 10.5 (Leopard) as supported guest OSes in ESXi 5.

...but there is a catch (there's always a catch)

UPDATE: As of vSphere 5.1, the Apple Mac Pro is now fully supported to run ESXi 5.1; for more details, please take a look at this article.

The caveat is that for VMware to allow OSX to run as a virtual machine on vSphere 5, the physical hardware that ESXi 5 runs on MUST be Apple hardware, specifically the XServe 3.1. For those of you who do not follow Apple's hardware closely, the XServe line reached end of life on January 31, 2011, and that brings up an interesting problem: if you want to virtualize Apple OSX, you would have had to purchase XServes prior to January 31st, or start looking on eBay with your corporate card 😉

Now, the Apple EULA is not the only thing regulating this requirement; in addition, VMware implemented a software check within ESXi 5 to ensure that the physical hardware is in fact Apple hardware before allowing an OSX virtual machine to properly boot. The check looks for the SMC (System Management Controller) when an OSX virtual machine is powered on; if this check fails, you will get an error and the virtual machine will be powered off automatically. The presence of the SMC is a new property exposed in the vSphere 5 API under the "hardware" section of the ESXi host.

The property returns either true or false depending on whether the SMC is present. You can easily check your ESXi 5 host by using the vSphere MOB and pointing your browser to the following URL:

https://[esxi5-hostname]/mob/?moid=ha-host&doPath=hardware
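
If you prefer the CLI, the same check can be scripted against the MOB, which accepts HTTP basic authentication; note that the property name smcPresent is my reading of the vSphere 5 API reference:

# quote the URL so the shell does not interpret the ? and & characters
curl -k -u root 'https://[esxi5-hostname]/mob/?moid=ha-host&doPath=hardware' | grep -i smcPresent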


Now you can easily determine whether or not your physical host can support running Apple OSX VMs.

As I understand from the beta, only the XServe 3.1 will be officially supported, and you will not be able to install ESXi 5 on older versions of the hardware. I have also heard mixed results from folks trying to install ESXi 5 on Mac Minis and Mac Pros. At this point, hopefully Apple has a change of heart and updates their EULA to allow ESXi 5 to run on "currently available" Apple hardware such as the Mac Mini and Mac Pro.

Categories // Uncategorized Tags // ESXi 5.0, mac, osx, vSphere 5.0
