New method of enabling Multiwriter VMDK flag in vSphere 6.0 Update 1 (UI + API)

10.19.2015 by William Lam // 22 Comments

Prior to vSphere 6.0, in order for multiple Virtual Machines to share a VMFS-backed VMDK, the Multiwriter VMDK flag had to be enabled by adding a specific VM Advanced Setting, as shown in VMware KB 1034165. Customers accustomed to this old method may have found that the option no longer works, regardless of whether the configuration was applied using the vSphere Web/C# Client or the vSphere API.

To provide a better user experience, this behavior was changed in vSphere 6.0 and a new API was introduced for enabling and disabling the Multiwriter VMDK flag. In vSphere 6.0, there is now a new sharing attribute on the Virtual Disk backing property which accepts one of two values, sharingMultiWriter or sharingNone, for specifying the Multiwriter flag. In my opinion, this is a positive change, as we too often rely on VM Advanced Settings as a generic "catch all" for enabling or configuring various settings instead of adding proper APIs to a VM.
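
To illustrate the new attribute, here is a minimal PowerCLI sketch (this is not one of the scripts referenced below; the VM name and disk label are placeholders) that sets the sharing property on an existing virtual disk of a powered-off VM:

# Minimal sketch: toggle the new vSphere 6.0 "sharing" attribute on an
# existing VMDK. "MyVM" and "Hard disk 2" are placeholder names.
$vm   = Get-VM -Name "MyVM"
$disk = Get-HardDisk -VM $vm -Name "Hard disk 2"

# Build a reconfigure spec that edits the existing disk device
$deviceChange = New-Object VMware.Vim.VirtualDeviceConfigSpec
$deviceChange.Operation = "edit"
$deviceChange.Device    = $disk.ExtensionData
# New in vSphere 6.0: Sharing lives on the flat VMDK backing
# (valid values: "sharingMultiWriter" or "sharingNone")
$deviceChange.Device.Backing.Sharing = "sharingMultiWriter"

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.DeviceChange = @($deviceChange)

# Apply while the VM is powered off; the disk should be eager-zeroed thick
$vm.ExtensionData.ReconfigVM($spec)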

Although there is now a proper API that can help enable new automation use cases, one thing that was still lacking was an easy way to enable the Multiwriter VMDK flag using the UI. In vSphere 6.0 Update 1, we have now introduced a new UI dropdown option called "Sharing" in the vSphere Web Client for configuring the Multiwriter VMDK flag, which can be found in the Virtual Disk section when editing a VM as shown in the screenshot below.

[Screenshot: the new Sharing dropdown in the Virtual Disk section of the vSphere Web Client]
Note: The new Sharing property is only available in the vSphere Web Client UI and is not available in the vSphere C# Client. If you need to configure the Multiwriter VMDK flag and do not have access to the vSphere Web Client, you can use the vSphere API to help automate this configuration change.

UPDATE (06/27/16) - I have created two scripts which now cover both the offline and online VM scenarios.

For those interested in automating the Multiwriter VMDK flag, I have created two PowerCLI scripts called configureMultiwriterVMDK.ps1 (offline VM configuration) and addMultiwriterVMDK.ps1 (online VM configuration) which demonstrate this new vSphere API.

The first script, configureMultiwriterVMDK.ps1, allows you to enable the Multiwriter flag for an existing VMDK that has already been added to a VM. This operation must be done while the VM is powered off, and to use the script you will need to specify the name of the VM as well as the label of the VMDK for which you wish to enable the Multiwriter VMDK flag (e.g. Hard disk 2). Below is an example of running the script.

[Screenshot: example of running configureMultiwriterVMDK.ps1]
The second script, addMultiwriterVMDK.ps1, allows you to hot-add a new VMDK with the Multiwriter flag enabled to a VM. This operation is done while the VM is powered on, which is a common workflow for customers needing to hot-add storage to an existing cluster solution such as Oracle RAC, all while the system is running. To use the script, there are a few variables you will need to edit (a simplified sketch of the underlying API call follows the list below):

  • vmName - The name of the VM you wish to perform the operation on
  • vmdkFileNamePath - The full datastore path to the underlying VMDK. See the script for more information, but the syntax will look like "[datastore-name] vm-home-dir/vmdk-name.vmdk"
  • diskSizeGB - The capacity of the VMDK to add (GB)
  • diskControllerNumber - The SCSI controller number (0-3)
  • diskUnitNumber - The Unit number (0-16)
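
For reference, below is a simplified PowerCLI sketch of what the hot-add boils down to. This is an illustration only, not the author's addMultiwriterVMDK.ps1 itself, and all values are placeholder examples.

# Simplified sketch of hot-adding a new Multiwriter VMDK to a running VM.
# All values below are placeholder examples; adjust them for your environment.
$vmName               = "Oracle-RAC-Node1"
$vmdkFileNamePath     = "[datastore1] Oracle-RAC-Node1/shared-disk1.vmdk"
$diskSizeGB           = 10
$diskControllerNumber = 1
$diskUnitNumber       = 0

$vm = Get-VM -Name $vmName

# Locate the virtual SCSI controller that matches the requested bus number
$controller = $vm.ExtensionData.Config.Hardware.Device | Where-Object {
    $_ -is [VMware.Vim.VirtualSCSIController] -and $_.BusNumber -eq $diskControllerNumber
}

# Describe the new disk; Multiwriter requires an eager-zeroed thick VMDK
$disk = New-Object VMware.Vim.VirtualDisk
$disk.ControllerKey = $controller.Key
$disk.UnitNumber    = $diskUnitNumber
$disk.CapacityInKB  = $diskSizeGB * 1024 * 1024
$disk.Backing = New-Object VMware.Vim.VirtualDiskFlatVer2BackingInfo
$disk.Backing.FileName        = $vmdkFileNamePath
$disk.Backing.DiskMode        = "persistent"
$disk.Backing.ThinProvisioned = $false
$disk.Backing.EagerlyScrub    = $true
$disk.Backing.Sharing         = "sharingMultiWriter"

# Add the device and create the backing file, then reconfigure the live VM
$deviceChange = New-Object VMware.Vim.VirtualDeviceConfigSpec
$deviceChange.Operation     = "add"
$deviceChange.FileOperation = "create"
$deviceChange.Device        = $disk

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.DeviceChange = @($deviceChange)
$vm.ExtensionData.ReconfigVM($spec)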

Categories // Automation, vSphere 6.0, vSphere Web Client Tags // multiwriter, vmdk, vSphere API, vsphere web client

Building minimal vSphere demo lab using VMware Fusion/Workstation with only 8GB memory?

10.16.2015 by William Lam // 7 Comments

After tweeting this update last week, I received quite a few questions on how I was able to squeeze a vCenter Server Appliance (VCSA) and ESXi 6.0 Update 1, along with a VMware Photon VM, onto my MacBook Air with only 8GB of memory. Although I was not able to make use of the demo, which was for my vSphere Content Library session at VMworld Europe this week, I thought I would still share the details on how I built this vSphere lab environment, as it could come in handy for others.

I was able to squeeze VCSA 6.0 & ESXi 6.0 Update 1 & Photon VM on Mac Book Air w/only 8GB of memory. Chrome & terminal ran fine as well!

— William Lam (@lamw) October 7, 2015

I wanted to run everything on my MacBook Air primarily for the convenience factor, so I did not have to bring my Mac Mini, which is not ideal when traveling abroad. The performance and responsiveness of the environment was actually pretty good, and I was able to access the vSphere Web Client using Google Chrome as well as the OS X terminal for CLI operations without any problems. It definitely helps to place all VMs on SSDs, which is especially useful if swapping occurs since we are overcommitting the physical memory.

Below are the instructions for building this environment and here is a quick summary of the expected memory configuration for the three VMs.

Virtual Machine                         Memory
Embedded vCenter Server Appliance VM    5 GB
ESXi VM                                 3 GB
Photon VM                               384 MB

Step 1 - Download the VCSA & ESXi 6.0 Update 1 ISOs (or any other version you wish to run). You will need to extract the contents of the VCSA ISO; the OVA is located at /vcsa/vmware-vcsa, and you will need to add the .ova extension.

  • Source: Ultimate automation guide to deploying VCSA 6.0 Part 1: Embedded Node

Step 2 - We will need to configure memory overcommitment for VMware Fusion/Workstation so that the majority of VM memory can be swapped, allowing our minimal vSphere environment to run. You will need to set the value of prefvmx.minVmMemPct to 25 by adding the following line to the respective configuration file shown in the table below.

prefvmx.minVmMemPct = 25

Hypervisor            Configuration File
VMware Workstation    C:\ProgramData\VMware\VMware Workstation\config.ini
VMware Fusion         /Library/Preferences/VMware\ Fusion/config
  • Source: Quick Tip – How to enable memory overcommitment in VMware Fusion?

Step 3 - Deploy the VCSA OVA to either your VMware Fusion or Workstation deployment and ensure you do not power on the VM. We will need to make the following edits to the VCSA's VMX file to ensure it is properly configured when it is powered on. Below is an example of the VMX parameters you will need to add before powering on the VM.

guestinfo.cis.deployment.node.type = "embedded"
guestinfo.cis.vmdir.domain-name = "vghetto.local"
guestinfo.cis.vmdir.site-name = "vghetto"
guestinfo.cis.vmdir.password = "VMware1!"
guestinfo.cis.appliance.net.addr.family = "ipv4"
guestinfo.cis.appliance.net.addr = "192.168.1.54"
guestinfo.cis.appliance.net.pnid = "192.168.1.54"
guestinfo.cis.appliance.net.prefix = "24"
guestinfo.cis.appliance.net.mode = "static"
guestinfo.cis.appliance.net.dns.servers = "192.168.1.1"
guestinfo.cis.appliance.net.gateway = "192.168.1.1"
guestinfo.cis.appliance.root.passwd = "VMware1!"
guestinfo.cis.appliance.ssh.enabled = "true"

  • Source: Quick Tip – How to enable memory overcommitment in VMware Fusion?

Step 4 - Once the VCSA has successfully been configured and you can connect to it using the vSphere Web Client, you can then power it off and reduce the memory from 8GB to 5GB.

Step 5 - Create a new VM using the ESXi 6.x GuestOS type for running your Nested ESXi VM and stick with the default of 4GB of memory to be able to install ESXi. Once the VM has been created, go ahead and install ESXi using the ISO as you normally would.

Step 6 - Once the ESXi VM has successfully been installed and booted up, you can then power it off and reduce the memory from 4GB to 3GB.

Step 7 (Optional) - If you wish to play with VMware Photon, you can also install Photon using the ISO which can be downloaded from here, or deploy it using the OVA which can be downloaded from here.

For folks who have more memory in their system, you could add an additional two Nested ESXi VMs to run a full VSAN setup, and then you will have a pretty powerful lab with a minimal resource footprint that you can bring with you anywhere to run demos or for development and testing purposes. I also highly recommend making use of the "Suspend" operation when you need to quickly get access to memory or run other applications; this also allows you to resume the entire environment in just a few seconds, rather than powering the entire setup down and back up, which takes much longer.

Categories // Apple, Fusion, vSphere 6.0 Tags // apple, ESXi, fusion, Photon, vcenter server appliance, VCSA, vcva, workstation

UEFI PXE boot is possible in ESXi 6.0

10.09.2015 by William Lam // 21 Comments

A couple of days ago I received an interesting question from fellow colleague Paudie O'Riordan, who works over in our Storage and Availability Business Unit at VMware. He was helping a customer who was interested in PXE booting/installing ESXi using UEFI, which is short for Unified Extensible Firmware Interface. Historically, we only had support for PXE booting/installing ESXi using the BIOS firmware. You could also boot an ESXi ISO using UEFI, but we did not have support for UEFI when it came to booting/installing ESXi over the network using PXE and other variants such as iPXE/gPXE.

For those of you who may not know, UEFI is meant to eventually replace the legacy BIOS firmware. There are many benefits to using UEFI over BIOS; a recent article that does a good job of explaining the differences can be found here. In doing some research and pinging a few of our ESXi experts internally, I found that UEFI PXE boot support is actually possible with ESXi 6.0. Not only is it possible to PXE boot/install ESXi 6.x using UEFI, but the changes in the EFI boot image are also backwards compatible, which means you could potentially PXE boot/install an older release of ESXi.

Note: Auto Deploy still requires the legacy BIOS firmware; UEFI is not currently supported. This is something we will be addressing in the future, so stay tuned.

Not having worked with ESXi and UEFI before, I thought this would be a great opportunity for me to give this a try in my homelab, which would also allow me to document the process in case others were interested. For my PXE server, I am using CentOS 6.7 Minimal (64-Bit), which runs both the DHCP and TFTP services, but you can use any distro that you are comfortable with.

[Read more...]

Categories // Automation, ESXi, vSphere 6.0 Tags // bios, boot.cfg, bootx64.efi, dhcp, ESXi 6.0, kickstart, mboot.efi, pxe boot, tftp, UEFI, vSphere 6.0
