WilliamLam.com

Standalone VMRC now available for Mac OS X

04.15.2015 by William Lam // 55 Comments

Last year, a standalone Virtual Machine Remote Console (VMRC) was released for Windows as part of vSphere 5.5 Update 2b, which provides an alternative way of launching the VM console due to the NPAPI deprecation. There was of course a huge request for Mac OS X support, and the VMRC team has been working hard. Today I am pleased to announce that the standalone VMRC is now available for Apple Mac OS X, which you can download using the following URL: www.vmware.com/go/download-vmrc

Note: Mac OS X 10.8 or greater is required to use the new Standalone VMRC. The release notes will be updated to reflect this requirement.

There are currently two methods of launching a remote console to a Virtual Machine using the vSphere Web Client as seen in the screenshot below:

  1. Using HTML5 VMRC simply by clicking on the thumbnail preview
  2. Using the new Standalone VMRC by clicking on the "Launch Remote Console" link

[Image: vmrc-mac-osx-2]
When using the Standalone VMRC method, instead of opening the VM console in the browser, it will launch the native VMRC application on your system, whether that be Windows or Mac OS X. All the basic functionality of the Standalone VMRC is available as you would expect, such as power operations, device management, etc.

Note: There is no specific version of vSphere required to directly launch the Standalone VMRC. However, to launch it from within the vSphere Web Client, you will need vSphere 5.5 Update 2b or greater.

[Image: vmrc-mac-osx-1]
The other great thing about the Standalone VMRC is that it can function without vCenter Server and the vSphere Web Client; you can actually use it to connect to a VM directly on an ESXi host. To use the VMRC without the vSphere Web Client, you will need to construct a VMRC URI, which looks like the following:

vmrc://clone:[TICKET]@[HOST]:[PORT]/?moid=[VM-MOREF]

where TICKET is obtained by calling the AcquireCloneTicket() method on the SessionManager in vCenter Server. HOST is the hostname/IP address of the vCenter Server, PORT defaults to 443, and VM-MOREF is the VM's MoRef ID. In the case of a standalone ESXi host, you would just change the HOST property. If you do not wish to use the clone ticket, you can also just provide the following URI, which will prompt for your ESXi credentials:

vmrc://@[HOST]:[PORT]/?moid=[VM-MOREF]

Once you have generated the VMRC URI, you MUST launch it through a web browser, as that is how it is passed to the Standalone VMRC application. In my opinion, this is not ideal, especially for customers who wish to generate the URI automatically as part of a VM provisioning workflow for their end users, without requiring a browser to launch the Standalone VMRC application. If you have some feedback on this, please do leave a comment.

In the meantime, a quick workaround is to use the "open" command on Mac OS X along with the VMRC URI, which will automatically load it into your default browser and launch the Standalone VMRC application for you.

open 'vmrc://clone:cst-VCT-52e44ad7-712f-9f45-a9ee-13ec6a74acaf--tp-B1-6F-91-F6-B5-8F-80-E5-FD-D6-E1-8B-10-F7-FE-15-C5-2A-75-41@192.168.1.60:443/?moid=vm-18'
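If you script this often, the URI can be assembled from its parts. Here is a minimal sketch with placeholder values (the ticket, host, port and MoRef below are illustrative, not real):

```shell
# Placeholder values -- substitute a real clone ticket returned by
# AcquireCloneTicket(), your vCenter/ESXi host, and the VM MoRef ID.
TICKET="cst-VCT-placeholder"
VMRC_HOST="192.168.1.60"
VMRC_PORT="443"
VM_MOREF="vm-18"

# Assemble the vmrc:// URI in the clone-ticket form shown above
URI="vmrc://clone:${TICKET}@${VMRC_HOST}:${VMRC_PORT}/?moid=${VM_MOREF}"
echo "${URI}"
# On Mac OS X, hand it off to the default browser:  open "${URI}"
```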

UPDATE (05/31/15) - If you are connecting directly to an ESXi host you can either use the vSphere API to query for the VM MoRef ID or you can easily pull it by running the following command directly in the ESXi Shell:

vim-cmd vmsvc/getallvms
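To pull out just the Vmid for a particular VM from that output, a bit of awk will do. This is a sketch using canned sample output (the VM name and file paths are made up); on a real host you would pipe `vim-cmd vmsvc/getallvms` in directly:

```shell
# Sample getallvms output for illustration only; on a real ESXi host,
# replace the variable with the live output of: vim-cmd vmsvc/getallvms
SAMPLE='Vmid   Name    File                          Guest OS       Version
18     MyVM    [datastore1] MyVM/MyVM.vmx    otherGuest64   vmx-10'

VM_NAME="MyVM"
# Column 2 is the VM name; column 1 is the Vmid, which is the MoRef ID
# to use when connecting directly to the host.
VMID=$(printf '%s\n' "$SAMPLE" | awk -v name="$VM_NAME" '$2 == name {print $1}')
echo "$VMID"
# prints: 18
```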

I am sure there are a few of you asking: what about Linux users? Well, you can probably guess what is being worked on next 😉

Categories // Apple, ESXi, vSphere, vSphere 6.0 Tags // mac, osx, remote console, vmrc, vSphere 5.5

Running Nested ESXi / VSAN Home Lab on Ravello

04.14.2015 by William Lam // 3 Comments

[Image: nested_esxi_on_ravello]
There are many options when it comes to building and running your own vSphere home lab. Each of these solutions has different pros and cons, which you will need to evaluate against factors like cost, performance, maintenance, ease of use and complexity, to name a few. Below is a list of the options currently available to you today.

Home Lab Options:


On-Premises

  • Using hardware on the VMware HCL
  • Using Apple Mac Mini, Intel NUC, etc.
  • Using whitebox or off the shelf hardware

Off-Premises (hosted)

  • VMware HOL
  • VMware vCloud Air or other vCloud Air Service Providers
  • Colocation labs

For example, you could purchase a couple of Apple Mac Minis and build out a decent-sized vSphere environment, but it could potentially be costly, not to mention a bit limited on memory options. Compared to other platforms, though, it is pretty energy efficient and easy to use and maintain. If you did not want to manage any hardware at all, you could look at a hosted or on-demand lab such as vCloud Air, which can run Nested ESXi unofficially, or any one of the many vCloud Air Service Providers. Heck, you could even use the VMware Hands-on Lab, though access will be limited, as you are constrained to the pre-built labs and cannot directly upload or download files to the lab. However, it can be a quick way to get access to an environment for testing and, best of all, it is 100% free. As you can see, there are many options for a home lab and it really just depends on your goals and what you are trying to accomplish.

Ravello says hello to Nested ESXi


Today, we have a new player entering the off-premises (hosted) options for running vSphere based home labs. I am pleased to announce that Ravello, a startup that uses Nested Virtualization to target dev/test workloads, has just introduced beta support for running Nested ESXi on their platform. I have written about Ravello in the past and you can find more details here. Ravello uses their own home-grown KVM-based nested hypervisor called HVX, which runs on top of a VM provisioned from either Amazon EC2 or Google Compute Engine. As you can imagine, this was not a trivial feature to add support for, especially when things like Intel VT-x/AMD-V, which are required to run ESXi, are not directly exposed to the virtual machines in EC2 or GCE. The folks over at Ravello have solved this in a very interesting way by "emulating" the capabilities of Intel VT-x/AMD-V using Binary Translation with direct execution.

Over the last month, I have had the privilege of getting early access to the Ravello platform with the Nested ESXi capability and have been providing early feedback to their R&D team to ensure the best possible user experience for customers looking to run Nested ESXi on their platform. I have also spent quite a bit of time working out the proper workflow for getting Nested ESXi running and being able to quickly scale up the number of nodes, especially useful when testing new features like VSAN 6.0. I have also been working with their team to develop a script that will allow users to quickly spin up as many Nested ESXi VMs as needed after a one time initial preparation. This will greatly simplify deployments of more than a couple of Nested ESXi VMs. Hopefully I will be able to share more details about the script in the very near future.

Before jumping into the instructions for getting Nested ESXi running on the Ravello platform, I wanted to quickly highlight what is currently supported from a vSphere perspective, as well as some of the current limitations and caveats regarding Nested ESXi that you should be aware of. Lastly, I have also provided some details around pricing so the proper expectations are set if you are considering a vSphere home lab on Ravello. You can find more information in the next few sections, or you can go straight to the setup instructions.

Supports:


  • vCenter Server 5.x (Windows) & VCSA 5.x
  • vCenter Server 6.0 (Windows)
  • ESXi 5.x
  • ESXi 6.0

Caveats:


Coming from a pure vSphere background, I have enjoyed many of the simplicities that VMware has built into their core platform, such as support for OVF capabilities like Dynamic Disks and Deployment Options. While using the Ravello platform, I came across several limitations with respect to Nested ESXi and the VCSA. Below is a quick list of the caveats I found while testing the platform; I have been told that many of these are being looked at and hopefully will be resolved in the future. Nonetheless, I wanted to make sure these were called out so that you go in with the right expectations.

  • There is currently no support for virtuallyGhetto's Nested ESXi/VSAN VM OVF Templates (though you can import the OVFs, most of the configurations are lost)
  • There is currently no support for VM Advanced Settings such as marking a VMDK as an SSD or enabling UUID for disks for example (configurations are not preserved through import)
  • There is currently no support for VCSA 6.0 OVA due to disk controller limitation + no OVF property support, you will need to use Windows based vCenter Server for now (VCSA 5.5 is supported)
  • There is currently no OVF property support
  • There is currently no support for VMXNET3 for Nested ESXi VM, e1000 must be used due to a known network bug
  • Running Nested SMP-FT is not supported as 10Gbit vNICs are required and VMXNET3 is not currently supported

Pricing:


When publishing your Ravello Application, you have the option of selecting between two deployment optimizations. The first is optimized for cost: if TCO is what you care most about, the platform will automatically select the cloud provider (EC2 or GCE) that is cheapest to satisfy the requirements. The second option optimizes for performance; if selected, you can choose to place your application on either EC2 or GCE. In both cases, you will be provided with an estimated cost broken down into compute, storage and networking, as well as a final cost (per hour). Once you agree to the terms, you can click on the "publish" button, which will deploy your workload onto the selected cloud provider.

Here is a screenshot summary view of a Ravello Application I built that consists of 65 VMs (1 Windows VM for vCenter Server and 64 Nested ESXi VMs), where I chose to optimize based on cost. The total price would be $17.894/hr.

[Image: ravello-vghetto-nested-esxi-vsan-6.0-64-Node-cost-optmized]
Note: Prices as of 04/05/2015

I also went through the exercise of pricing several more configurations to give you an idea of what the cost could be for varying sized environments. Below is a table for a 3 Node, 32 Node & 64 Node VSAN setup (each including one additional VM for the vCenter Server).

# of VMs  Optimization  Hosting Platform  Compute Cost  Storage Cost  Network Cost  Public IP Cost  Total Price
4         Cost          N/A               $1.09/hr      $0.0292/hr    $0.15/GB      $0.01/hr        $1.1292/hr
4         Performance   Amazon            $1.62/hr      $0.0292/hr    $0.15/GB      $0.01/hr        $1.6592/hr
4         Performance   Google            $1.38/hr      $0.0292/hr    $0.15/GB      $0.01/hr        $1.4192/hr
33        Cost          N/A               $8.92/hr      $0.1693/hr    $0.15/GB      $0.01/hr        $9.0993/hr
33        Performance   Amazon            $13.22/hr     $0.1693/hr    $0.15/GB      $0.01/hr        $13.3993/hr
33        Performance   Google            $11.24/hr     $0.1693/hr    $0.15/GB      $0.01/hr        $11.4193/hr
65        Cost          N/A               $17.56/hr     $0.324/hr     $0.15/GB      $0.01/hr        $17.894/hr
65        Performance   Amazon            $26.02/hr     $0.324/hr     $0.15/GB      $0.01/hr        $26.354/hr
65        Performance   Google            $22.12/hr     $0.324/hr     $0.15/GB      $0.01/hr        $22.454/hr
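The hourly totals above are simply the sum of the compute, storage and public-IP line items (the $0.15/GB network charge is usage-based, so it is not part of the hourly figure). As a quick sanity check on the 4-VM cost-optimized row:

```shell
# compute $1.09/hr + storage $0.0292/hr + public IP $0.01/hr
awk 'BEGIN { printf "%.4f/hr\n", 1.09 + 0.0292 + 0.01 }'
# prints: 1.1292/hr
```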

How to Setup:


Here is the process for setting up Nested ESXi on the Ravello platform. The process consists of installing a single Nested ESXi VM and "preparing" it so that it can then be used later to deploy additional unique Nested ESXi instances from the Ravello Library.

Step 1 - Upload either an ESXi 5.x or 6.0 ISO to the Library using the Ravello VM Uploader tool which you will be prompted to install.

[Image: Screen Shot 2015-04-08 at 8.43.14 PM]
Step 2 - Deploy the empty Ravello ESXi VM Template from the Library, which has already been prepared with the required CPU ID:

<ns1:cpuIds value="0000000768747541444d416369746e65" index="f00d"/>

Adding the above CPU ID enables the emulation of Intel VT-x/AMD-V. If you decide to create your own Ravello VM Template, you will need to perform this operation yourself, which is currently only available via their REST API; you can find more details here.

Step 3 - Add a CD-ROM device to the Nested ESXi VM by highlighting the ESXi VM and looking under "Disks" (yes, this was not intuitive for me either).

[Image: Screen Shot 2015-04-08 at 8.48.40 PM]
Once you have added the CD-ROM, you will want to mount the ESXi ISO.

Step 4 - Power on the Nested ESXi VM and perform a regular installation of ESXi as you normally would.

At this point, you have successfully installed Nested ESXi on Ravello! The next series of steps is to "prepare" this ESXi image so that it can be duplicated (cloned) to deploy additional instances without causing conflicts; otherwise, you would have to perform the installation N number of times for additional nodes, which I am sure many of you would not want to do. The steps outlined here follow the process I documented in my How to properly clone a Nested ESXi VM? article.

Step 5 - Login to the console of the ESXi VM and run the following ESXCLI command:

esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1

Note: If you wish to connect to the ESXi VM directly for ease of use, versus going through the remote console, you can go to the "Services" tab for the VM and enable external access as seen in the screenshot below.

[Image: ravello-networking]
Step 6 - Edit /etc/vmware/esx.conf and remove the uuid entry and then run /sbin/auto-backup.sh to ensure the changes have been saved.
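Step 6 can also be scripted. Here is a sketch that demonstrates the sed invocation on a throwaway copy (with placeholder contents); on the actual host you would point it at /etc/vmware/esx.conf and then run /sbin/auto-backup.sh, assuming the uuid entry sits on a single line beginning with /system/uuid as it does on a stock install:

```shell
# Demo on a temp file with placeholder contents; on a real ESXi host,
# operate on /etc/vmware/esx.conf instead.
CONF=$(mktemp)
printf '%s\n' '/system/uuid = "placeholder-uuid"' \
              '/net/pnic/child[0000]/name = "vmnic0"' > "$CONF"

# Drop the uuid entry so each clone regenerates its own UUID
sed -i '/^\/system\/uuid/d' "$CONF"

if ! grep -q '^/system/uuid' "$CONF"; then
  echo "uuid entry removed"
fi
# prints: uuid entry removed
# afterwards, on a real host:  /sbin/auto-backup.sh
```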

At this point, you have prepared a vanilla Nested ESXi VM. You can save this image into the Ravello Library and deploy additional instances from it. By default, the Ravello platform is set for DHCP; you can of course change to DHCP reservations to get a particular IP address, or specify a static IP address assignment.

If you wish to prepare the Nested ESXi VM for use with VSAN, then you will need to run through these additional steps:

  • Create a claim rule to mark the 4GB VMDK as SSD
  • Enable VSAN traffic type on vmk0

Step 7 - I have also enabled remote logging as well as suppressed any shell warnings; you just need to run the snippet below within the ESXi Shell:

# Identify the 4GB mpx device and extract its device name
DEVICE=$(esxcli storage core device list \
  | grep -iE '(   Display Name: |   Size: )' \
  | grep -B1 4096 \
  | grep mpx \
  | awk -F '(' '{print $2}' \
  | sed 's/)//g')
# Mark the 4GB VMDK as an SSD and reclaim the device
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d $DEVICE -o enable_ssd
esxcli storage core claiming reclaim -d $DEVICE
# Enable the VSAN traffic type on vmk0
esxcli vsan network ipv4 add -i vmk0
# Enable remote syslog and suppress the shell warning
esxcli system syslog config set --loghost=10.0.0.100
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

Step 8 - If you wish to set up 32 nodes with VSAN 1.0, then you will need to run this additional command:

esxcli system settings advanced set -o /CMMDS/goto11 -i 1

If you wish to set up 64 nodes with VSAN 6.0, then you will need to run this additional command:

esxcli system settings advanced set -o /VSAN/goto11 -i 1

At this point, you have completed preparing your Nested ESXi VM. You can now save your image to the Ravello Library, and once that has been done, you can easily clone additional Nested ESXi instances by simply dragging and dropping from the Ravello Library onto your canvas. For vCenter Server: if you are setting up a vSphere 5.x environment, you will need to upload the VCSA and go through the normal configuration using the VAMI UI. For vCenter Server 6.0, you will not be able to use the VCSA 6.0 because of a platform limitation today that does not support it; instead, you will need to deploy a Windows VM and run the vCenter Server 6.0 installation.

I of course had some fun with the Ravello platform, and below are some screenshots of running both a 32 Node VSAN Cluster (vSphere 5.5) as well as a 64 Node VSAN Cluster (vSphere 6.0). Overall, I thought it was a pretty good experience. There was definitely some sluggishness while installing the vCenter Server bits and navigating through the vSphere Web Client; it took a little over 40 minutes, which was almost double the time I have seen in my home lab. I was told that VNC might perform better than RDP, though RDP is what the Ravello folks recommend for connecting to a Windows based desktop. It is great to see another option for running vSphere home labs, and I think the performance is probably acceptable for most people and hopefully will continue to improve in the future. I definitely recommend giving Ravello a try and who knows, it might be the platform of choice for your vSphere home lab.

Nested ESXi 5.5 running 32 Node VSAN Cluster:

[Image: vghetto-nested-esxi-5.5-32-node-cluster-ravello-1]

[Image: vghetto-nested-esxi-5.5-32-node-cluster-ravello-0]

Nested ESXi 6.0 running 64 Node VSAN Cluster:

[Image: vghetto-nested-esxi-64-node-cluster-ravello-1]

[Image: vghetto-nested-esxi-64-node-cluster-ravello-0]

Categories // ESXi, Home Lab, Nested Virtualization, vSphere Tags // homelab, intel vt, nested, nested virtualization, ravello

vCenter Host Gateway ... more than meets the eye

04.10.2015 by William Lam // 9 Comments

While going through the downloads like many of you when vSphere 6.0 became generally available, something that caught my eye in the vCenter Server download area was the vCenter Host Gateway (vCHG) virtual appliance. At first, I did not know what it was, and it was not until I spoke to a few colleagues that I realized vCHG is the evolution of the Multi-Hypervisor Management (MHM) Plugin, which provides vSphere Administrators a way to natively manage Hyper-V hosts within the vCenter Server UI. MHM was originally released as a Fling and later productized within the vCenter Server product. At the time, it made sense for the plugin to be Windows based, as it needed to connect to Hyper-V, which obviously ran on Microsoft Windows.

It looks like the folks over on the MHM team have been quite busy: they have gotten rid of the Windows installer and now provide a virtual appliance which uses winrm to communicate directly with the Hyper-V hosts. In addition, you can now manage Hyper-V hosts within the vSphere Web Client, whereas previously this was only possible using the vSphere C# Client. vCHG works with both vCenter Server for Windows as well as the vCenter Server Appliance; there are no additional Windows hosts required for this new solution. Deploying and configuring vCHG is relatively straightforward and you can find all the instructions here. There were a few minor gotchas that I ran into which I thought would be worth sharing, especially figuring out what was needed on the Hyper-V hosts, which was mainly due to my lack of familiarity with winrm.

You have the option of configuring winrm to go over standard HTTP (port 5985) or HTTPS (port 5986) on the Hyper-V host, but the latter requires you to set up SSL certificates, on which you can find more details here. For that reason, I just went with the default HTTP method to quickly get going. To configure winrm, you will need to run the following command and accept the defaults with "y":

winrm quickconfig

Next, you can verify the winrm listener is configured, as shown in the screenshot below, by running the following command:

winrm e winrm/config/listener

[Image: vcenter-host-gateway-1]
At this point, you can login to your vSphere Web Client; to add a Hyper-V host, you will need to add it at the vSphere Datacenter level. This was another thing I missed, as I thought it could be added into its own vSphere Cluster. As you can see from the screenshot below, the "Add Host" workflow has been extended to natively support Hyper-V hosts, so that it is intuitive and familiar for vSphere Administrators.

[Image: vcenter-host-gateway-0]
You will need to specify the hostname/IP address of the Hyper-V host followed by the winrm port (e.g. hostname:5985), then select the Type as "Hyper-V". In just a few seconds, you will be able to see your Hyper-V host within vCenter Server and perform basic power operations as well as create and manage VMs running on Hyper-V. Below is a screenshot of my Hyper-V host; I had just finished creating a new VM using the vSphere Web Client, and you can see it seamlessly integrated into a single view.

[Image: vcenter-host-gateway-2]

This is a great enhancement for customers who have to run mixed workloads between vSphere and Hyper-V (I do apologize to those in advance ;)), but at least you now truly have a single integrated pane of glass to manage all your workloads. I also want to stress the word "integrated" beyond just the UI component that vCHG provides. I have found that all the operations available through the vSphere Web Client are also exposed through the rich vSphere API; for example, the AddHost_Task() method now includes a new hostGateway spec. This also means you will be able to use all the existing power operations and VM creation methods against your Hyper-V hosts, again tightly integrated into the existing tools you are familiar with, such as PowerCLI for Automation. How freaking cool is that!?

but wait ... there's more! 😀

While going through the exercise of deploying vCHG and adding a Hyper-V host, I could not help but wonder why we named this feature "Host Gateway", especially since we only supported a single third-party hypervisor; it did not really make sense to me. Well, it turns out there is a lot more coming! When you select the "Type" from the drop-down menu, I noticed there were a few more options: KVM and vCloud Air!

[Image: vcenter-host-gateway-4]
I of course tried to add a KVM host as well as my vCloud Air account, but it looks like those providers are not available just yet. I am actually quite excited to see support for vCloud Air. This has always been something I felt should be natively available within the vSphere inventory, so that an administrator could deploy their workloads either locally on-premises or hosted on vCloud Air without having to jump around. It should align with the existing vSphere Administrator workflows, and I am glad to see this change. This is definitely an area I recommend keeping an eye on, and hopefully we will see vCloud Air support soon!

Categories // vCloud Air, vSphere 6.0 Tags // hyper-v, kvm, mhm, multi-hypervisor, vcenter host gateway, vchg, vcloud air

