Enabling/Disabling EVC using the vSphere MOB

05.07.2012 by William Lam // 2 Comments

There were some discussions this morning on Twitter regarding configuring EVC for a vSphere Cluster using one of the vSphere CLIs, such as PowerCLI, or directly leveraging the vSphere API. Unfortunately, this is not possible today, as the operations pertaining to EVC are not currently exposed in the vSphere API. This means you will not be able to use the vCLI, PowerCLI, vCO, or the vSphere API to configure and manage EVC; you will need to use the vSphere Client to do so.

Having said that, one could still "potentially" automate EVC configurations through the vSphere MOB interface, which exposes the private vSphere API, but it may not be ideal and will require some "creativity" and custom coding to integrate with your existing automation solution. This particular limitation of the vSphere API is one that I have personally faced, and I filed a bug with VMware a while back. I am hoping this will eventually be added to the public vSphere API, so that users can fully automate all aspects and configurations of a vSphere Cluster.

Disclaimer: This is not officially supported by VMware, use at your own risk and discretion.

Step 1 - Connect to your vCenter MOB and traverse to the vSphere Cluster of interest (note that the MOID will be different for your specific cluster).

Step 2 - Now replace the URL with the following, substituting the cluster MOID that you see in your browser:

https://reflex.primp-industries.com/mob/?moid=domain-c1550&method=transitionalEVCManager

Hit enter and you'll be brought to the TransitionalEVCManager() method; you'll then want to click on "Invoke Method". Once you do so, you should be returned a task object with a link to something like evcdomain-cXXXX. Click on this and you'll be brought to the ClusterTransitionalEVCManager.

Step 3 - From here you'll have some basic evcState information, which you can click on to see what the current EVC configuration is set to, the guaranteedCPUFeatures, and the valid EVC modes (this last part will be important for reconfiguring EVC).

Step 4 - Now let's say the cluster currently has its EVC mode set to intel-merom and you would like to change it to Nehalem. You would need to retrieve the key from the previous page; in our example it's intel-nehalem. Next, click on the method link called ConfigureEVC_Task, which is pretty straightforward: it just accepts the EVC mode key. Enter the string and click on "Invoke Method", and your cluster will be reconfigured, which you can confirm by going back to the evcState or looking at your vCenter tasks. You can also disable EVC by using DisableEVC_Task.
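If you wanted to script these MOB steps rather than click through them, the sketch below shows one possible approach in Python using the third-party requests library. Since the MOB is an HTML interface rather than a formal API, the form-field names, the vmware-session-nonce handling (a CSRF token required on newer vCenter builds), the exact method name in the URL, and the regular expressions that scrape the result pages are all assumptions to verify against the HTML your vCenter returns.

# Rough sketch of scripting the MOB steps above over HTTP.
# Everything scraped from the HTML (nonce field, MOID pattern) is an
# assumption -- inspect the pages your MOB returns and adjust.
import re
import requests

requests.packages.urllib3.disable_warnings()

VC = "reflex.primp-industries.com"       # vCenter from this example
CLUSTER_MOID = "domain-c1550"            # cluster MOID from Step 1

s = requests.Session()
s.auth = ("administrator", "VMware1!")   # hypothetical credentials
s.verify = False                         # lab self-signed certificate

def invoke(moid, method, data=None):
    """GET the method page, harvest the CSRF nonce if present, then POST."""
    url = "https://%s/mob/?moid=%s&method=%s" % (VC, moid, method)
    page = s.get(url).text
    form = dict(data or {})
    m = re.search(r'name="vmware-session-nonce"\s+value="([^"]+)"', page)
    if m:  # newer vCenter builds require this nonce on every POST
        form["vmware-session-nonce"] = m.group(1)
    return s.post(url, data=form).text

# Step 2: retrieve the private EVC manager object for the cluster
result = invoke(CLUSTER_MOID, "transitionalEVCManager")
evc_moid = re.search(r"(evcdomain-c\d+)", result).group(1)

# Step 4: reconfigure EVC -- copy the exact method name from the
# ConfigureEVC_Task link in your MOB; "configureEvc_Task" is a guess
invoke(evc_moid, "configureEvc_Task", {"evcModeKey": "intel-nehalem"})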

 
Note: If EVC is already configured in your vSphere Cluster, you can use the vSphere API to view its current configuration by looking at the ClusterComputeResource's summary property. You just will not be able to make any changes or disable EVC using the vSphere API.
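For the supported, read-only side described in the note above, a short sketch using pyVmomi (the vSphere API Python SDK) might look like the following; the hostname and credentials are placeholders.

# List each cluster's current EVC mode via the public vSphere API.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab self-signed certificate
si = SmartConnect(host="reflex.primp-industries.com",
                  user="administrator", pwd="VMware1!",  # placeholders
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        # summary.currentEVCModeKey is unset when EVC is disabled
        print(cluster.name, cluster.summary.currentEVCModeKey)
    view.Destroy()
finally:
    Disconnect(si)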

Categories // Uncategorized Tags // api, evc, mob, vSphere

How to Configure Nested ESXi 5 to Support EVC Clusters

02.10.2012 by William Lam // 10 Comments

Dave Hill recently wrote an article about running nested ESXi and a gotcha with EVC (Enhanced vMotion Compatibility). In vSphere 4.x, you could not join a nested ESXi host to a cluster with EVC enabled. With vSphere 5, there is actually a way to connect a nested ESXi 5 host to an EVC-enabled cluster AND still power on 64-bit nested guest OSes.

I have to thank my friend and partner in crime Tuan Duong for showing me this trick a while back. Tuan was performing some tests using both nested and physical ESXi 5 hosts and discovered this method after a bit of tinkering. At the time, I was not sure whether others would find this useful, so I did not document the process.

Disclaimer: As usual, this is not officially supported by VMware, use at your own risk. 

Here are the steps:

1. You must be running vSphere 5. Create a nested ESXi 5 host using this article: How to Enable Support for Nested 64bit & Hyper-V VMs in vSphere 5

2. Create an EVC-enabled cluster or use an existing cluster with whatever baseline you would like, and click on "Current CPUID Details" in the cluster settings.

3. Copy down the CPU mask flags for that particular EVC baseline; you will need these in the next step.

4. Shut down your nested ESXi 5 host, edit the VM's settings, and under the "Options" tab click on "CPUID Mask->Advanced". Take the CPU mask from the above step and update the nested ESXi 5 VM to match it (see the illustrative .vmx excerpt after these steps).

5. Go ahead and power on your nested ESXi 5 host and join it to the EVC-enabled cluster you created earlier. You should not see any errors when connecting to the cluster, and after that you can create a nested 64-bit VM within that virtualized ESXi 5 host.
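For reference, the mask from step 4 ends up stored as cpuid.* entries in the nested ESXi VM's .vmx file. The excerpt below is purely illustrative placeholder data; copy the actual flag strings from your cluster's "Current CPUID Details" screen rather than these values.

# Illustrative .vmx excerpt only -- substitute the real mask strings
# from the "Current CPUID Details" screen (step 3)
cpuid.1.eax = "0000:0000:0000:0001:0000:0110:1010:0100"
cpuid.1.ecx = "0000:0000:1000:1000:0010:0010:0000:0001"
cpuid.80000001.ecx = "----:----:----:----:----:----:----:---1"
cpuid.80000001.edx = "--1-:----:----:----:----:----:----:----"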

There you have it, a nested ESXi 5 host joined to an EVC-enabled cluster! Isn't VMware technology awesome! 🙂

Categories // Uncategorized Tags // esxi5, evc, nested, vesxi, vSphere 5.0

How to Install VMware VSA with Running VMs

09.26.2011 by William Lam // 1 Comment

For those of you who want to quickly test out the new VMware VSA (vSphere Storage Appliance), you will notice that you cannot just throw in a few ESXi 5 hosts that have virtual machines on them. If you try to proceed with the VSA installation, you will see an error message regarding the presence of virtual machines, whether they are running or not.

This can make it difficult to evaluate or test the new VSA if you do not have additional hosts that can easily be re-deployed as vanilla ESXi 5 installations. While working on the previous article How to Install VMware VSA in Nested ESXi 5 Host Using the GUI, I decided to test the behavior of a few other configuration variables found in the dev.properties file for the VSA Manager. It turns out that you can actually disable the host audit check, which includes the validation of running virtual machines, by changing the host.audit variable from "true" to "false" using the same trick documented here. You will need to restart the VSA Manager and then the vCenter Server service for the change to take effect.
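For clarity, the change itself is a one-line edit to dev.properties; the path below assumes a default Windows vCenter installation of the VSA Manager and may differ in your environment.

# dev.properties for the VSA Manager; assumed default install path:
#   C:\Program Files\VMware\Infrastructure\tomcat\webapps\VSAManager\WEB-INF\classes\dev.properties
# change this line from true to false, then restart the VSA Manager
# and the vCenter Server service:
host.audit=false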

**** DISCLAIMER: This is not supported by VMware and there may be specific checks that are now bypassed by disabling the host.audit parameter. Please use at your own risk and test before deploying on actual systems ****

One interesting observation made while testing this in a nested ESXi configuration is that even though there is a message warning the user that any data found on the local VMFS volumes will be deleted, I did not see any process kicked off to do so. This does not mean that was not the original intention, but there was no reformatting of the local VMFS and no removal or powering off of the running virtual machines. While testing both a "supported" and a "ghetto" installation of the VSA, I found that several advanced settings were updated as part of the VSA installation; you should see the same if you look in the vmkernel.log of one of the ESXi 5 hosts:

2011-09-23T17:36:33.030Z cpu0:3475)Config: 346: "HostLocalSwapDirEnabled" = 0, Old Value: 0, (Status: 0x0)
2011-09-23T17:38:00.971Z cpu0:3258)Config: 346: "HeartbeatPanicTimeout" = 60, Old Value: 900, (Status: 0x0)
2011-09-23T17:38:07.069Z cpu1:2851)Config: 346: "EnableSVAVMFS" = 1, Old Value: 0, (Status: 0x0)
2011-09-23T17:38:07.090Z cpu1:2851)Config: 346: "VmkStressEnable" = 0, Old Value: 1, (Status: 0x0)
2011-09-23T17:44:22.163Z cpu1:3477)Config: 346: "SIOControlFlag2" = 1, Old Value: 0, (Status: 0x0)
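If you want to check these values on a host yourself, a pyVmomi sketch along the following lines should work; note that the fully qualified option names (the "Misc." prefix) are my assumption based on the vmkernel.log keys above.

# Query a host's advanced settings after a VSA installation.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.primp-industries.com",  # placeholder
                  user="administrator", pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        opt_mgr = host.configManager.advancedOption
        # option names assumed from the log keys above
        for key in ("Misc.HeartbeatPanicTimeout", "Misc.SIOControlFlag2"):
            try:
                for opt in opt_mgr.QueryOptions(key):
                    print(host.name, opt.key, "=", opt.value)
            except vim.fault.InvalidName:
                print(host.name, key, "not found on this host")
    view.Destroy()
finally:
    Disconnect(si)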

One that sparked my curiosity is EnableSVAVMFS, which is a hidden setting found on the ESXi host, but one can view it using vsish. Per the limited documentation found in vsish, this parameter enables some sort of optimization for the local VMFS volume.
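If you want to peek at it yourself from the ESXi Shell, something along these lines should work; the exact vsish node is my assumption (hidden integer options generally live under /config/<group>/intOpts/), so browse the /config tree if the path differs on your host.

# the VMFS3 config group here is an assumption; use "vsish -e ls /config"
# to browse for the actual location of EnableSVAVMFS
vsish -e get /config/VMFS3/intOpts/EnableSVAVMFS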

Thanks to @VMwareStorage (Cormac Hogan, VMware Technical Marketing for Storage) for the quick answer to my question on Twitter; it looks like this parameter does the following:

"Forces linear allocation of VMDKs on local VMFS for VSA. Improves mirroring performance across VSAs apparently" 

There was nothing in the vmkernel.log indicating that the local VMFS was reformatted or that files had to be deleted to support the VSA installation. I can understand why VMware wanted a vanilla installation with no running VMs: it simplifies the installation process. Another reason I can think of is that any initial storage consumption offsets the amount of "available" storage that can be set up on the VSA cluster. The amount of available storage per host must be equal across all nodes of the two- or three-node cluster to ensure there is sufficient space for replication. Just understand that with running virtual machines on one or more ESXi nodes, the node with the smallest amount of free physical storage dictates what the rest of the VSA nodes will be configured to; for example, if one node has 100GB free and the others have 200GB, each node can only contribute 100GB.

You may also find yourself in a chicken-and-egg problem if the VSA installation fails and reverts its changes, which includes putting the ESXi hosts into maintenance mode: entering maintenance mode will fail on the node that is running the vCenter Server and VSA Manager, which is another reason you would want to run the management system outside of the VSA cluster.

Without further ado, I recorded a quick six-minute video demonstrating the installation of the new VMware VSA on ESXi 5 hosts that have running virtual machines, including the vCenter Server and VSA Manager running on one of the nodes (the video is awesome when you bump up the audio):

Installing VMware VSA with Running VMs from lamw on Vimeo.

Not only is this unsupported, it is also NOT a best practice to run the vCenter Server and VSA Manager within the VSA cluster, because you may potentially have issues with replication if vCenter and the VSA Manager go down. In my testing, I found that I could take down vCenter and the VSA Manager, and the NFS volumes continued to function and the cluster continued to churn away. Any virtual machines running on the VSA volumes will automatically be restarted by vSphere HA. Once the VSA Manager has recovered, it will automatically ensure the volumes have all synchronized and re-protect the VSA cluster.

Note: It is important to understand that even though you can install the VMware VSA with running virtual machines using the hack above, the requirement of a vanilla ESXi 5 installation is still 100% mandatory. You MUST still have only a single vSwitch (vSwitch0) with only a single vmnic (vmnic0) connected to the vSwitch, and only the two default portgroups may exist: "VM Network" and "Management Network"; there is no workaround for this requirement. If you have a host on which you plan to run VMs prior to VSA installation, make sure they are on the "VM Network" portgroup, as additional portgroups are not supported prior to installation of the VSA.

I am hoping that some of these requirements are relaxed in a future release of the VMware VSA, and possibly a version that would work with the vCVA (vCenter Virtual Appliance). For now, if you have limited hardware or would like to use existing ESXi 5 hosts with running virtual machines (they need to be configured like a vanilla installation of ESXi 5), then you can run everything on either a two- or three-node cluster; just be aware of the caveats.

For more in-depth information and details about the new VSA, please check out the VMware Storage Blog - vSphere Storage Appliance Links and be sure to follow Cormac Hogan on Twitter at @VMwareStorage

Categories // Uncategorized Tags // esxi5, evc, nested, vsa, vSphere 5.0
