
EMC Project OnRack now RackHD

11.03.2015 by William Lam // 1 Comment

Back in May, EMC announced a new initiative at EMC World called Project OnRack, which had the ambitious goal of providing a new software abstraction layer that would sit on top of existing "industry standards" for server out-of-band management. Standards such as IPMI, CIM, SMI-S and CIM-SMASH, to name just a few, were supposed to help IT administrators manage and operate the life-cycle of their physical servers. Instead, we ended up with even more complexity and inconsistency due to the different implementations of these "industry standards" across vendors, and sometimes even within the same vendor. Keeping firmware, BIOS, hardware drivers, etc. up to date across different hardware platforms from the same vendor in a consistent and automated fashion was already painful enough. As if that were not challenging enough, try doing this for a mix of hardware platforms across different vendors, and you have just given your operational and datacenter teams a never-ending nightmare.

Frankly, I am pretty surprised that it has taken us this long to finally tackle this problem. This is something we have needed for quite some time now, and I still remember my early days as an admin, trying to script around the inconsistencies of IPMI to configure things like asset tags and serial numbers across different hardware platforms.

OnRack http://t.co/I6dpMSPgSB Interesting initiative from EMC. Something we've needed for a LONG time! Reminds me of few startups doing same

— William Lam (@lamw) May 7, 2015

In my opinion, having a consistent and programmable interface at this low level of the hardware is a critical component of a Software-Defined Datacenter, and one that has often been overlooked. Kudos to EMC for taking on this initiative and, more importantly, for driving this change through open source with the community in mind.

Since the announcement back in May, things had been pretty quiet about OnRack, until recently that is. I was listening to a recent episode of The Hot Aisle Podcast with guest Brad Maltz of EMC talking about Hyper-Converged Infrastructure. Among the different topics discussed, OnRack was brought up along with disaggregated hardware/infrastructure, where individual compute resources can scale up independently of each other. There were a couple of nice tidbits mentioned on the podcast. First, OnRack, which was the internal EMC project name, has now been renamed to RackHD as the external project name. Second, the RackHD repo is already on GitHub with some initial content, including some pretty detailed documentation on the architecture and components, which can be found here.

Per the documentation, the RackHD project is made up of the following sub-projects:

  • on-tftp - NodeJS application providing a TFTP service integrated with the workflow engine
  • on-http - internal HTTP REST API interfaces integrated with the workflow engine (see the example after this list)
  • on-syslog - syslog endpoint integrated to feed data into the workflow engine
  • on-taskgraph - NodeJS application providing the workflow engine
  • on-dhcp-proxy - NodeJS application providing DHCP proxy support integrated into the workflow engine
  • onserve - OnServe Engine
  • core library - Core libraries in use across NodeJS applications
  • task library - NodeJS task library for the workflow engine
  • tools - Useful dev tools for running locally
  • webui - Initial web interfaces to some of the APIs - multiple interfaces embedded into a single project
  • integration tests - Integration tests with code for deploying and running those tests, as well as the tests themselves
  • statsd - A local statsD implementation that makes it easy to deploy on a local machine for capturing application metrics
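Since on-http is the front door to the stack, one natural way to explore a running RackHD instance is through its REST API. The following is a hypothetical PowerShell sketch: the hostname, port and endpoint path are assumptions on my part, so check the RackHD documentation for the actual API version and routes:

# Query the node inventory from the on-http service
# (hostname, port and API path below are assumptions, not confirmed RackHD defaults)
$rackhd = 'http://rackhd.example.com:8080'
$nodes = Invoke-RestMethod -Uri "$rackhd/api/1.1/nodes" -Method Get

# Print an identifier and type for each discovered node
$nodes | ForEach-Object { '{0} ({1})' -f $_.id, $_.type }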

Brad mentioned that many of the GitHub repos were still marked private, as they were still working through the process of releasing RackHD to the public. It looks like RackHD and all relevant repos are now open source as of Monday, Nov 2nd; for more details, please visit the GitHub repo here. I am definitely excited to see how this project will evolve with the larger community, and to see some of the new innovations that will be unlocked now that this barrier has been removed. Hopefully we will see positive collaboration from other hardware vendors, which will help us move forward and finally solve this problem once and for all! I can already see huge benefits for software-only vendors like VMware, who could integrate RackHD directly into provisioning tools like Auto Deploy or configuration management tools like Puppet, Chef and Ansible, for example. It will also be interesting to see whether startups in this area, like NodePrime and another company still in stealth that is working on a similar problem, will leverage RackHD.

Categories // Automation Tags // cim, converged infrastructure, disaggregated infrastructure, EMC, hyper-converged infrastructure, ipmi, OnRack, RackHD, SMASH, SMI-S

Heads Up - Workaround for changing Mac OS X VM display resolution in vSphere & Fusion

10.22.2015 by William Lam // 51 Comments

For customers who are running Mac OS X 10.9 (Mavericks) or newer in a Virtual Machine, you may have noticed that you can no longer set a custom display resolution beyond the default 1024x768 in either VMware Fusion or vSphere, regardless of the amount of video memory that has been allocated. The reason for this behavior is that Apple has changed the way it remembers previously used display modes and will automatically fall back to the default instead of retaining the custom mode set in Display Preferences. Given this is a non-trivial fix, VMware Engineering has been working hard on providing a workaround that still allows users to set a custom resolution from within the GuestOS.

The workaround that has been developed is a tiny standalone command-line utility called vmware-resolutionSet which runs within the Mac OS X Guest and allows you to configure a custom display resolution. You will need to ensure you have VMware Tools installed and running before you can use this utility. As of right now, customers can get a hold of this utility by filing an SR with VMware Support and referencing PR 1385761. Although this tool has not been officially released and must go through the standard release process, the plan is to include it in a future update of VMware Tools, and it will be available for use with both VMware Fusion and vSphere.

UPDATE (12/11/15) - Thanks to reader @elvisizer, it looks like the latest VMware Fusion 8.1 release now includes an updated version of VMware Tools (10.0.5) which includes the vmware-resolutionSet utility. You can find it under '/Library/Application Support/VMware Tools'. One thing to note is that there is a known issue with VMware Fusion 8.1 related to NAT and port forwarding, so you may want to hold off on upgrading if you rely on this feature.


The syntax for the vmware-resolutionSet utility is pretty straightforward: it accepts a width and a height argument. Make sure to use "sudo" if you want the display resolution to persist through a system reboot. For example, to set a 1920x1080 resolution, you would run the following command:

./vmware-resolutionSet 1920 1080

Note: Ensure you have sufficient video memory configured for your VM for larger display resolutions. In the example above, I have 16MB configured for my Mac OS X VM, which gives a max resolution of 2560x1600.
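As a rough sanity check on that limit, assuming the video memory simply has to hold an uncompressed 32-bit framebuffer (4 bytes per pixel): 2560 x 1600 x 4 = 16,384,000 bytes, or roughly 15.6MB, which just squeezes into a 16MB allocation. A 2880x1800 framebuffer would need about 19.8MB, which is why resolutions beyond 2560x1600 require the additional settings covered later in this post.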

If everything was successful, you should see that both the "Requested resolution" and the "Effective resolution" match in the output. If the output does not match, it most likely means you need to increase the video memory; you can refer to VMware KB 1003 for more details. If we take a look at our Mac OS X VM, we should now see that the new custom display resolution has taken effect. Below is a screenshot of a Mac OS X 10.11 (El Capitan) VM running on vSphere 6.0 Update 1 configured with a 1920x1080 resolution.

One other thing to note: if you plan on using a display resolution higher than 2560x1600, you may need to configure some additional VM Advanced Settings due to the use of framebuffers larger than 16MB. In this case, you would need to add the following two advanced settings to the VM, which can be done using the vSphere Web/C# Client or the vSphere API. For example, if I want a 2880x1800 display resolution, I would add the following:

svga.maxWidth = "2880"
svga.maxHeight = "1800"
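If you would rather script this than click through the UI, a minimal PowerCLI sketch along these lines should do the trick (the vCenter address and VM name are placeholders; add the settings while the VM is powered off, or power cycle it afterwards for them to take effect):

# Connect to vCenter and locate the Mac OS X VM (placeholder names)
Connect-VIServer -Server vcenter.example.com
$vm = Get-VM -Name 'MacOSX-VM'

# Add the two SVGA advanced settings for a 2880x1800 framebuffer
New-AdvancedSetting -Entity $vm -Name 'svga.maxWidth' -Value '2880' -Confirm:$false
New-AdvancedSetting -Entity $vm -Name 'svga.maxHeight' -Value '1800' -Confirm:$false

For VMware Fusion, the same two lines can simply be appended to the VM's .vmx file while the VM is powered off.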

Lastly, I would like to give a big thanks to Michael Udaltsov, the Engineer who is responsible for creating the workaround and for providing me with some additional context on this change in behavior. I know our customers will greatly appreciate this workaround!

Categories // Apple, ESXi, Fusion, vSphere 6.0 Tags // apple, ESXi, fusion, osx, resolution, vmware-resolutionSet

How to VMFork aka Instant Clone Nested ESXi?

08.03.2015 by William Lam // 15 Comments

The VMware Flings team recently released an update to the existing PowerCLI Extensions which now exposes the new VMFork aka Instant Clone capability that was introduced in vSphere 6.0. The Fling contains a set of PowerCLI Extension Modules which in turn provide new PowerCLI cmdlets for accessing the Instant Clone feature. The idea behind the Fling is to help VMware understand how customers would like to consume the Instant Clone feature, not only from a CLI point of view but also from an API and UI standpoint. Prior to this, Instant Clone was only available through the use of either Horizon View or the Big Data Extensions product. I think this is a great opportunity for customers and partners to help shape how Instant Clone should be consumed more generally.

One of the use cases I had in mind when I first heard about the Instant Clone feature was being able to quickly instantiate new Nested ESXi VMs. When I got the opportunity to test out early prototypes of the Instant Clone cmdlets and provide feedback on usability improvements, I knew I had to give Nested ESXi a try!

Requirements:

  • Fresh installation of Nested ESXi 6.0 in VM (unconfigured)
  • PowerCLI 6.0 Release 1
  • Instant Clone PowerCLI Extensions Fling
  • Nested ESXi 6.0 Instant Clone Scripts

High level process:

  1. A "preparation" script will be manually uploaded & executed within the Nested ESXi VM (Parent VM) to prep the system for Instant Cloning
  2. As the Parent VM is quiesce, both the pre/post customization script will be uploaded to the Parent VM automatically. The "pre-customization" is also then executed within the Parent VM which properly setups the library path to the VMware Tools binary (applicable to ESXi 6.0 only) and is then placed in a ready state for creating Instant Clones
  3. As new Instant Clone (Child VMs) are spun up, the "post-customization" script is automatically executed to add additional configurations and most importantly ensure newly created Instant Cloned Nested ESXi VMs have unique network identities

Note: For Instant Cloning regular OSes, only steps 2 and 3 are really needed. Due to a known issue with VMware Tools for Nested ESXi, I have found that it is easier to prepare the Nested ESXi VM prior to quiescing it and creating Instant Clones from the Parent VM.

Instructions:

Step 1 - Download and install both PowerCLI 6.0 Release 1 & Instant Clone PowerCLI Extensions Fling.

Step 2 - Perform a fresh Nested ESXi 6.0 installation in a VM; do not configure any additional settings outside of enabling the ESXi Shell and SSH.

Step 3 - Download the Nested ESXi 6.0 Instant Clone Scripts which contains the following four files:

  • prep-esxi60.sh - Prepares the Nested ESXi VM and ensures that new Child VMs will not retain the Parent VM's MAC Address, which is baked into several places
  • pre-esxi60.sh - Pre-customization script which is used to properly set up the library paths so that the VMware Tools daemon can be used to retrieve guest properties from the PowerCLI script
  • post-esxi60.sh - Post-customization script which is used to apply the networking configuration and hostname, for example
  • vmfork-esxi60.ps1 - An example PowerCLI script which issues the Instant Clone cmdlets

Note: For out-of-the-box use, the only script that needs to be modified is the PowerCLI "vmfork-esxi60.ps1" script; the rest of the scripts should work with little to no modification, assuming you have followed the instructions thus far.

Step 4 - Upload prep-esxi60.sh to the Nested ESXi 6.0 VM (Parent VM) and then execute it using either the ESXi Shell over SSH or through a VMRC session. If you use SSH, you will notice that the script hangs; that is because the VMkernel interface is deleted as part of the script.
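If you prefer to avoid the manual upload, the same step can in principle be driven from PowerCLI guest operations, assuming VMware Tools is running in the Parent VM and its guest operations support works against an ESXi guest (a sketch only, reusing the example paths and credentials from Step 5 below):

# Upload the prep script into the Parent VM via guest operations
Copy-VMGuestFile -Source 'C:\Users\lamw\Desktop\vmfork\esxi60\prep-esxi60.sh' `
    -Destination '/tmp/prep-esxi60.sh' -VM 'vESXi6' -LocalToGuest `
    -GuestUser root -GuestPassword vmware123

# Execute it inside the guest; remember the script deletes the VMkernel
# interface, so do not expect an SSH session to survive this step
Invoke-VMScript -VM 'vESXi6' -ScriptType Bash `
    -ScriptText 'chmod +x /tmp/prep-esxi60.sh && /tmp/prep-esxi60.sh' `
    -GuestUser root -GuestPassword vmware123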

Step 5 - Next, we need to make a few edits to the vmfork-esxi60.ps1 script to update the name of your ESXi VM, along with its credentials and the full paths to both the pre and post customization scripts. Below is an example of the variables that you will need to edit:

$parentvm = 'vESXi6'
$parentvm_username = 'root'
$parentvm_password = 'vmware123'
$precust_script = 'C:\Users\lamw\Desktop\vmfork\esxi60\pre-esxi60.sh'
$postcust_script = 'C:\Users\lamw\Desktop\vmfork\esxi60\post-esxi60.sh'

The section shown below will also need to be edited; it contains the customization properties which are passed down to the guestOS for configuration as part of the Instant Clone process.

$configSettings = @{
    'hostname' = "$vmname.primp-industries";
    'ipaddress' = "192.168.1.$_";
    'netmask' = '255.255.255.0';
    'gateway' = '192.168.1.1';
}
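For context, the heart of the sample script is a simple quiesce-then-fork sequence built on the Fling's Instant Clone cmdlets. The sketch below is a simplified illustration; the parameter names are paraphrased, so treat the shipped vmfork-esxi60.ps1 and the Fling documentation as the authoritative reference:

# Quiesce the Parent VM, pushing the pre/post customization scripts into the guest
Enable-InstantCloneVM -VM (Get-VM -Name $parentvm) -GuestUser $parentvm_username `
    -GuestPassword $parentvm_password -PreQuiesceScript $precust_script `
    -PostCloneScript $postcust_script

# Fork the Child VMs; each one runs post-esxi60.sh with its own settings
1..3 | ForEach-Object {
    $vmname = "vesxi60-$_"
    $configSettings = @{ 'hostname' = "$vmname.primp-industries"; 'ipaddress' = "192.168.1.$_" }
    New-InstantCloneVM -ParentVM (Get-VM -Name $parentvm) -Name $vmname -ConfigParams $configSettings
}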

Step 6 - Lastly, it is time to run the script by issuing the following command:

.\vmfork-esxi60.ps1

If everything was successful, you should see a couple of new powered-on Instant Cloned Nested ESXi VMs that have been fully customized and are ready for use!

Note: There have been a couple of times where newly Instant Cloned VMs were not properly customized, and when looking at the Instant Clone log under /var/tmp/quiesce.log you may find an "Unable to fork" error message. I usually have to re-quiesce the Parent VM, which I do by reverting to a snapshot that captures the state after Step 4. Once I re-run the PowerCLI script, I am able to successfully deploy N-number of Instant Cloned Nested ESXi VMs. For additional best practices and tips/tricks, be sure to check out this blog post here.

Big thanks to Jim Mattson for some of his earlier research and work on this topic which made implementing these scripts much easier.

Categories // Automation, ESXi, Nested Virtualization, vSphere 6.0 Tags // ESXi 6.0, fling, instant clone, nested, nested virtualization, PowerCLI, vmfork

