VMware has the best platform to run the latest Windows 10 Desktop, Server & Hyper-V Tech Preview!

10.08.2014 by William Lam // 6 Comments

I am constantly amazed at the number of guest operating systems that are supported on VMware products like VMware vSphere, our enterprise hypervisor; vCloud Air, our public cloud offering which runs on vSphere; and our desktop products such as VMware Fusion and Workstation. If we just look at vSphere alone, it currently "lists" 101 supported guest operating systems (full list below)! However, this is actually a tiny subset of what is supported on vSphere, as new guest OSes are constantly being added to the support matrix. It also does not include any pre-release operating systems like the recent Apple OS X Yosemite (10.10) Tech Preview. Heck, you can even run Windows 3.11 if you really want to, as shown by my fellow VMware colleague Chris Colotti.

To get the complete list of currently supported operating systems for vSphere or any other VMware product, you will want to check the VMware HCL for Guest Operating Systems. Running a filter on the latest ESXi 5.5 Update 2 release for all Guest OSes, we can see that the total number of supported Guest OSes is an astounding 231! I suspect the real number is even greater, as we probably cannot capture every single x86 Guest OS that exists out there today which can run on VMware.

Getting back to the topic of this post, Microsoft has recently released a new Tech Preview of their upcoming Windows platform dubbed Windows 10 (not a typo, they decided to skip Windows 9), and I know some of you may be interested in trying out their latest release. What better way than to run it on VMware? There have already been a blog post or two about running Windows 10 on vSphere, however they contained some incorrect information claiming that you could not install VMware Tools or get the optimized VMXNET3 driver working. I decided to run all three flavors (Windows 10 Desktop, Server and Hyper-V) on the latest vSphere 5.5 release (it should also work on earlier 5.5 releases) and will share the Virtual Machine configurations below.

Note: You can also run the Windows 10 Tech Preview on both VMware Fusion and Workstation; take a look at this article for more details. These are great options in addition to vSphere and vCloud Air.

Windows 10 Desktop:

  • GuestOS: Windows 8 64-bit
  • Virtual HW: vHW10
  • Network Driver: VMXNET3
  • Storage Controller: LSI Logic SAS
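
For reference, here is a rough sketch of how these selections map to entries in the VM's .vmx file. The key names below are standard VMX configuration options and are my own mapping for illustration, not something taken from the original screenshots:

guestOS = "windows8-64"             # GuestOS: Windows 8 64-bit
virtualHW.version = "10"            # Virtual HW: vHW10
ethernet0.virtualDev = "vmxnet3"    # Network Driver: VMXNET3
scsi0.virtualDev = "lsisas1068"     # Storage Controller: LSI Logic SAS

The Server and Hyper-V configurations below follow the same pattern, with the "Windows 2012 64-bit" guest OS type selected instead.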

[Screenshot: windows10-desktop]

Windows 10 Server:

  • GuestOS: Windows 2012 64-bit
  • Virtual HW: vHW10
  • Network Driver: VMXNET3
  • Storage Controller: LSI Logic SAS

[Screenshot: windows10-server]

Windows 10 Hyper-V:

  • GuestOS: Windows 2012 64-bit
  • Virtual HW: vHW10
  • Network Driver: VMXNET3
  • Storage Controller: LSI Logic SAS
  • CPU Advanced Setting: Enable VHV
  • VM Advanced Setting: hypervisor.cpuid.v0

For more details about running Hyper-V and the last two advanced settings, please take a look at this article on running other Hypervisors.
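
If you prefer to set these two options by editing the VM's .vmx file directly (with the VM powered off) rather than through the vSphere Web Client, a minimal sketch of the corresponding entries would look like the following. These are the values commonly used for nested hypervisors; treat them as an assumption and verify against the article referenced above:

vhv.enable = "TRUE"              # CPU Advanced Setting: Enable VHV (expose hardware-assisted virtualization to the guest OS)
hypervisor.cpuid.v0 = "FALSE"    # VM Advanced Setting: hypervisor.cpuid.v0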

[Screenshot: windows10-hyper-v]

If you look closely at this last screenshot, you will see that I am not only running Windows 10 Hyper-V within a VM on ESXi, but I am also running a nested Windows 10 VM within that Hyper-V VM! How cool is that!? I am not sure there are many good use cases for this, but if you wanted to, you could! In my opinion (and although I may be biased because I work for VMware, the results speak for themselves), VMware truly provides the best platform for running the widest variety of x86 guest operating systems that exist.

Here are the guest operating systems that are currently "listed" in vSphere today and can be selected:

Apple Mac OS X 10.5 (32-bit)
Apple Mac OS X 10.5 (64-bit)
Apple Mac OS X 10.6 (32-bit)
Apple Mac OS X 10.6 (64-bit)
Apple Mac OS X 10.7 (32-bit)
Apple Mac OS X 10.7 (64-bit)
Apple Mac OS X 10.8 (64-bit)
Apple Mac OS X 10.9 (64-bit)
Asianux 3 (32-bit)
Asianux 3 (64-bit)
Asianux 4 (32-bit)
Asianux 4 (64-bit)
CentOS 4/5/6 (32-bit)
CentOS 4/5/6/7 (64-bit)
Debian GNU/Linux 4 (32-bit)
Debian GNU/Linux 4 (64-bit)
Debian GNU/Linux 5 (32-bit)
Debian GNU/Linux 5 (64-bit)
Debian GNU/Linux 6 (32-bit)
Debian GNU/Linux 6 (64-bit)
Debian GNU/Linux 7 (32-bit)
Debian GNU/Linux 7 (64-bit)
FreeBSD (32-bit)
FreeBSD (64-bit)
IBM OS/2
Microsoft MS-DOS
Microsoft Small Business Server 2003
Microsoft Windows 2000
Microsoft Windows 2000 Professional
Microsoft Windows 2000 Server
Microsoft Windows 3.1
Microsoft Windows 7 (32-bit)
Microsoft Windows 7 (64-bit)
Microsoft Windows 8 (32-bit)
Microsoft Windows 8 (64-bit)
Microsoft Windows 95
Microsoft Windows 98
Microsoft Windows NT
Microsoft Windows Server 2003 (32-bit)
Microsoft Windows Server 2003 (64-bit)
Microsoft Windows Server 2003 Datacenter (32-bit)
Microsoft Windows Server 2003 Datacenter (64-bit)
Microsoft Windows Server 2003 Standard (32-bit)
Microsoft Windows Server 2003 Standard (64-bit)
Microsoft Windows Server 2003 Web Edition (32-bit)
Microsoft Windows Server 2008 (32-bit)
Microsoft Windows Server 2008 (64-bit)
Microsoft Windows Server 2008 R2 (64-bit)
Microsoft Windows Server 2012 (64-bit)
Microsoft Windows Vista (32-bit)
Microsoft Windows Vista (64-bit)
Microsoft Windows XP Professional (32-bit)
Microsoft Windows XP Professional (64-bit)
Novell NetWare 5.1
Novell NetWare 6.x
Novell Open Enterprise Server
Oracle Linux 4/5/6 (32-bit)
Oracle Linux 4/5/6/7 (64-bit)
Oracle Solaris 10 (32-bit)
Oracle Solaris 10 (64-bit)
Oracle Solaris 11 (64-bit)
Other (32-bit)
Other (64-bit)
Other 2.4.x Linux (32-bit)
Other 2.4.x Linux (64-bit)
Other 2.6.x Linux (32-bit)
Other 2.6.x Linux (64-bit)
Other 3.x Linux (32-bit)
Other 3.x Linux (64-bit)
Other Linux (32-bit)
Other Linux (64-bit)
Red Hat Enterprise Linux 2.1
Red Hat Enterprise Linux 3 (32-bit)
Red Hat Enterprise Linux 3 (64-bit)
Red Hat Enterprise Linux 4 (32-bit)
Red Hat Enterprise Linux 4 (64-bit)
Red Hat Enterprise Linux 5 (32-bit)
Red Hat Enterprise Linux 5 (64-bit)
Red Hat Enterprise Linux 6 (32-bit)
Red Hat Enterprise Linux 6 (64-bit)
Red Hat Enterprise Linux 7 (32-bit)
Red Hat Enterprise Linux 7 (64-bit)
SCO OpenServer 5
SCO OpenServer 6
SCO UnixWare 7
SUSE Linux Enterprise 10 (32-bit)
SUSE Linux Enterprise 10 (64-bit)
SUSE Linux Enterprise 11 (32-bit)
SUSE Linux Enterprise 11 (64-bit)
SUSE Linux Enterprise 12 (32-bit)
SUSE Linux Enterprise 12 (64-bit)
SUSE Linux Enterprise 8/9 (32-bit)
SUSE Linux Enterprise 8/9 (64-bit)
Serenity Systems eComStation 1
Serenity Systems eComStation 2
Sun Microsystems Solaris 8
Sun Microsystems Solaris 9
Ubuntu Linux (32-bit)
Ubuntu Linux (64-bit)
VMware ESX 4.x
VMware ESXi 5.x

Categories // ESXi, Nested Virtualization, vSphere Tags // ESXi, guest os, hyper-v, Microsoft, vSphere, windows 10

How to automate VM deployment from large USB keys using ESXi Kickstart?

10.08.2014 by William Lam // 8 Comments

During VMworld US, I had the opportunity to speak with several customers to learn about their VMware environments and some of the challenges they were facing. In some scenarios, I was able to offer a solution or a different way of solving the problem. For others, it was primarily feedback on how we can improve some of our capabilities/features, or specific feature requests they would like to see added.

One interesting challenge that arose from a class of customers who manage hundreds of remote sites is the ability to fully automate the provisioning of an ESXi host as well as a set of Virtual Machines as part of the initial deployment. The provisioning is all done through Kickstart (an unattended installation of ESXi), usually from a USB device, though it could also be from a custom ISO. One ask that kept coming up was support for larger USB keys within ESXi so that they could be used to include additional payloads.

As some of you may or may not know, ESXi can only access USB devices from within the ESXi Shell if they are formatted with the FAT16 filesystem, which limits each partition to a maximum of 2GB. However, this limitation only applies to the ESXi Shell itself, and for the size of the ESXi installation media it is more than sufficient. If you wish to leverage larger USB keys, whose capacities have increased significantly in recent years to 32GB, 64GB and even 128GB, you can pass the device directly into any guest OS through the USB Arbitrator Service (enabled by default), where you will be able to consume the entire capacity of the USB device. The challenge is how to bootstrap ESXi as well as the initial set of Virtual Machines within these limitations, completely automated using an ESXi Kickstart.

Over the years I have seen some really creative solutions to this problem and, funnily enough, right before VMworld I had several folks reach out asking similar questions. I decided to take a look and also build upon some earlier work done by a fellow VMware SE (Tim S) to come up with a completely automated solution that would scale to any size USB device and hopefully be easy to extend if needed.

For this project, I used a 64GB USB key which I received from the folks over at Micron who I visited in the Solution Exchange during VMworld US (these guys are doing some really awesome stuff with VSAN and an All-Flash array, be sure to check them out).

[Screenshot: automate-esxi-kickstart-and-servicevm-usb-3]

Here is a diagram of the partition structure for the 64GB USB key which I will explain further:

[Diagram: esxi-usb-partition]

The first partition is 2GB using a FAT16 filesystem and this is used to store the actual ESXi media along with an embedded ESXi Kickstart configuration file. You can easily reference a remote Kickstart if you wish, but for simplicity purposes and to support some of the requested use cases from customers, I have embedded it.

The second partition is also 2GB using a FAT16 filesystem and is used to store a tiny VM which I am calling a "Service VM". This VM needs to be small enough to fit within the partition and will be used to read the remaining capacity of the USB device, which uses a more capable filesystem type. I have decided to store a pre-configured vMA appliance, which is tarred up to reduce the disk footprint.

The third and final partition consumes the remaining capacity of the USB device, in this case 60GB, and uses FAT32, which can support up to 2TB for a single volume. This is where the additional Virtual Machines are stored and accessed by the "Service VM".
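
To give a concrete idea of how such a layout could be created, here is a rough sketch using parted and mkfs.vfat from a Linux machine. The device name /dev/sdb and the volume labels are assumptions for illustration (the SERVICEVM label matches the volume referenced in the kickstart below), and making the first partition bootable with the ESXi installation media is a separate step not shown here:

# WARNING: double-check the device name (e.g. with lsblk) before running, as this wipes the key
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary fat16 1MiB 2GiB
parted -s /dev/sdb mkpart primary fat16 2GiB 4GiB
parted -s /dev/sdb mkpart primary fat32 4GiB 100%
mkfs.vfat -F 16 -n ESXI /dev/sdb1        # partition 1: ESXi media + embedded kickstart
mkfs.vfat -F 16 -n SERVICEVM /dev/sdb2   # partition 2: tarred-up "Service VM" (vMA)
mkfs.vfat -F 32 -n PAYLOAD /dev/sdb3     # partition 3: remaining capacity for additional VMs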

As you can probably guess, the idea is to install ESXi as you normally would, either to a local disk or directly onto the USB device, in which case an additional partition would be required. As part of the installation, the "Service VM" is bootstrapped: it is visible within the ESXi Shell and is registered and powered on during first boot. A firstboot script can then be included in the guestOS which receives some details about the ESXi deployment, which could be hard coded (not recommended) or dynamically discovered as I have implemented it. The USB device is then passed directly to this "Service VM" to mount, and from there it is able to deploy the remaining Virtual Machines stored on the larger partition.

Here is the complete ESXi Kickstart which implements what has been discussed so far; I have also included a breakdown of the kickstart below:

vmaccepteula
install --firstdisk --overwritevmfs
rootpw vmware123
reboot

network --bootproto=static --ip=192.168.1.200 --netmask=255.255.255.0 --gateway=192.168.1.1 --hostname=mini.primp-industries.com --nameserver=192.168.1.1 --addvmportgroup=1

%post --interpreter=busybox

# stop USB Arbitrator service to access USB device in ESXi Shell
/etc/init.d/usbarbitrator stop

# copy service VM to local VMFS datastore
cp /vmfs/volumes/SERVICEVM/vMA.tar.gz /vmfs/volumes/datastore1
tar -zvxC /vmfs/volumes/datastore1 -f /vmfs/volumes/datastore1/vMA.tar.gz
rm -f /vmfs/volumes/datastore1/vMA.tar.gz

# add guestinfo property for ESXi IP Address for adv. VM deployment
ESXI_IP=$(localcli network ip interface ipv4 get | grep vmk0 | awk '{print $2}')
echo "guestinfo.esxi_ip = ${ESXI_IP}" >> /vmfs/volumes/datastore1/vMA/vMA.vmx

%firstboot --interpreter=busybox

# Ensure hostd is ready
while ! vim-cmd hostsvc/runtimeinfo; do
sleep 10
done

# enable & start SSH
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

# enable & start ESXi Shell
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell

# Suppress ESXi Shell warning
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

# rename datastore1
vim-cmd hostsvc/datastore/rename datastore1 mini-local-datastore-1

# Register VM
vim-cmd solo/registervm /vmfs/volumes/mini-local-datastore-1/vMA/vMA.vmx

# connect USB device via passthrough
USB_DEV_NAME="Alcor Micro Corp"
USB_DEV_BUSID=$(lsusb | grep "${USB_DEV_NAME}" | awk '{print $2}' | cut -c 2)
vim-cmd vmsvc/device.connusbdev 1 "path:${USB_DEV_BUSID}/0/1 version:2"

# power on VM
vim-cmd vmsvc/power.on 1

usbarbitrator stop (%post section) - The USB Arbitrator Service needs to be stopped so the USB device can be seen by ESXi, since by default it is reserved to be exposed to a VM. The service is automatically re-enabled after the installation of ESXi, which allows the VM to connect to the USB device.

cp / tar / rm - Copies the "Service VM" from the USB device to the local VMFS datastore1. In this example, I have pre-configured the vMA appliance and tarred up the VMX and its respective VMDK.

ESXI_IP / guestinfo.esxi_ip - Extracts the ESXi IP Address and sets a custom guestInfo property so the "Service VM" knows where to deploy the additional VMs.

while ! vim-cmd hostsvc/runtimeinfo (%firstboot section) - Checks that hostd is up and running before continuing.

vim-cmd solo/registervm - Registers the "Service VM" within ESXi.

lsusb / USB_DEV_BUSID - Identifies the USB device bus ID which is required to connect the device to the "Service VM". You will need to update USB_DEV_NAME based on the USB device you are using.

vim-cmd vmsvc/device.connusbdev - Connects the USB device to the "Service VM".

vim-cmd vmsvc/power.on - Powers on the "Service VM".

At this point, you should be able to access the USB device from within the "Service VM". We can easily verify this by running the following command:

sudo fdisk -l

[Screenshot: automate-esxi-kickstart-and-servicevm-usb-0]

As seen in the screenshot above, we can see our three partitions, and the third is the FAT32 partition which contains a couple of Virtual Machines that I want to deploy. Of course, this partition can contain anything you wish to store, so the sky is the limit!

To mount the USB device and the specific partition, we will create a temporary directory and issue the mount command by running these two commands:

sudo mkdir -p /mnt/USB;sudo mount /dev/sdb3 /mnt/USB

[Screenshot: automate-esxi-kickstart-and-servicevm-usb-1]

For my USB key, I have stored both the VCSA and NSX Manager OVAs, which can then be deployed using ovftool. The last piece needed to make this as seamless and automated as possible is identifying the ESXi host information. If you recall, earlier we set a custom guestInfo property on our "Service VM". This custom property can be read by the guestOS leveraging VMware Tools and provides the ESXi host's IP Address to the guest. You could easily set other metadata as well, but to deploy these additional OVAs we need to know the IP Address of the ESXi host, and this means you do not need to hard code anything (except perhaps the ESXi host credentials).

To retrieve this custom property, you will need to run the following command:

vmtoolsd --cmd "info-get guestinfo.esxi_ip"

[Screenshot: automate-esxi-kickstart-and-servicevm-usb-2]

With these last few guestOS commands, you will be able to create a firstboot script which will automatically mount the appropriate USB partition and deploy these additional Virtual Machines. This is just one of many possible ways to deploy additional VMs as part of your ESXi Kickstart deployment. Hopefully this solution provides a base which you can easily customize based on your own requirements.
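
To tie the pieces together, here is a minimal sketch of what such a firstboot script inside the "Service VM" might look like. The device path, datastore name and ESXi credentials come from the examples in this post, while the OVA filename and portgroup name are assumptions for illustration, and it presumes ovftool has been installed in the appliance:

#!/bin/bash
# Mount the large FAT32 payload partition from the passed-through USB device
sudo mkdir -p /mnt/USB
sudo mount /dev/sdb3 /mnt/USB

# Retrieve the ESXi IP Address injected via the custom guestinfo property during kickstart
ESXI_IP=$(vmtoolsd --cmd "info-get guestinfo.esxi_ip")

# Deploy an additional VM stored on the USB key to the ESXi host using ovftool
ovftool --acceptAllEulas --noSSLVerify --datastore=mini-local-datastore-1 \
  --name=VCSA --network="VM Network" --diskMode=thin \
  /mnt/USB/VCSA.ova "vi://root:vmware123@${ESXI_IP}/"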

Categories // Automation, ESXi, vSphere Tags // ESXi, fat16, fat32, kickstart, ks.cfg, sd, usb, vSphere

How to move a VSAN Cluster from one vCenter Server to another?

09.26.2014 by William Lam // 42 Comments

I recently caught an interesting VMTN thread where a user wanted to move an existing VSAN Cluster from one vCenter Server to another vCenter Server with minimal impact to the ESXi hosts and running Virtual Machines. The great news is that this can be done without any impact to your ESXi hosts and, more importantly, without any impact to your workloads. I have personally performed this operation on several occasions without any problems, and the process is actually quite straightforward, so I thought I would walk you through it because it is literally only a couple of steps.

The main reason this is not a challenge is that VSAN has been architected not to rely on vCenter Server for its normal operations. It is true that vCenter Server is required for the configuration and management of the VSAN Cluster and VM Storage Policies, but once those configurations have been applied, vCenter Server is no longer in the picture from an operational point of view. This means that if you need to move your VSAN Cluster from a development vCenter Server to a production vCenter Server, or if you accidentally destroyed your original vCenter Server, the VSAN Cluster can easily be re-created on a new vCenter Server.

To demonstrate the process, I have a 3 Node VSAN Cluster with a running Virtual Machine on vCenter Server (vcenter55-1) and I have built a new vCenter Server (vcenter55-3) which I would like to move the existing VSAN Cluster over to.

UPDATE2 (11/02/17) - There was a question a couple of weeks back on whether the procedure outlined below could also apply to a vSAN Stretched Cluster. I did not see any technical reasons preventing this and one of our GSS Engineers had recently validated this with a customer and successfully moved a vSAN Stretched Cluster. I asked if he could share the modified instructions in case others were interested.

  1. Copy all VDS settings to new cluster
  2. Enable vSAN on new cluster (follow Step 2 below)
  3. Disable stretched cluster
  4. Move each host
  5. Move witness
  6. Re-enable stretched cluster (follow Step 4 below)

Step 1 - Deploy a new vCenter Server and create a vSphere Cluster with VSAN Enabled.

[Screenshot: migrate-vsan-cluster-from-one-vcenter-to-another-0]

Step 2 -

UPDATE1 (05/02/17) - Updated to include vSAN 6.6-specific instructions.

Pre-vSAN 6.6 - Disconnect one of the ESXi hosts from your existing VSAN Cluster and then add that to the VSAN Cluster in your new vCenter Server.

Note: Technically, you do not even have to disconnect the ESXi hosts from the old vCenter Server. You could just add the ESXi hosts to the new vCenter Server and once you have confirmed you wish to move the ESXi host, it will automatically be disconnected once added. This would actually save you an extra step.

vSAN 6.6 - An additional configuration needs to be applied to all ESXi hosts PRIOR to disconnecting them from the original vCenter Server and adding them into the new vCenter Server. Below are a few examples of how to apply the ESXi Advanced Setting, which should be set to a value of 1:

Here is an example using ESXCLI (local or remotely) on an individual ESXi host:

esxcli system settings advanced set -o /VSAN/IgnoreClusterMemberListUpdates -i 1
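
The same setting can also be applied remotely with the vCLI release of ESXCLI. Here is a sketch of what that might look like, assuming a host address of 192.168.1.100 (matching the example further below) and the root user; you will be prompted for the password if it is not supplied:

esxcli --server=192.168.1.100 --username=root system settings advanced set -o /VSAN/IgnoreClusterMemberListUpdates -i 1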

Here is an example of using PowerCLI to apply the setting across all ESXi hosts if the original vCenter Server is still available:

Foreach ($vmhost in (Get-Cluster -Name VSAN-Cluster | Get-VMHost)) {
    $vmhost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 1 -Confirm:$false
}

Here is an example of using PowerCLI to apply the setting directly to an ESXi host if the original vCenter Server is no longer available:

Get-VMHost -Name 192.168.1.100 | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 1 -Confirm:$false

[Screenshot: migrate-vsan-cluster-from-one-vcenter-to-another-1]

Once you have successfully added the ESXi host, you should see a warning within the VSAN Configuration page stating there is a "Misconfiguration detected", which is expected. What is happening is that this ESXi host has been configured in an existing VSAN Cluster, and the ESXi hosts it is supposed to be able to communicate with are not yet part of this VSAN Cluster. Once we add the remaining ESXi hosts, the VSAN Cluster will be happy and this error will go away.

Note: If you try to add all of the ESXi hosts from the existing VSAN Cluster to the new VSAN Cluster at once, you will see an error regarding a UUID mismatch. The trick is to add one host first; once that has been done, you can then bulk add the remaining ESXi hosts without an issue. This is handy if you are trying to automate the process.

Step 3 - Add the remaining ESXi hosts to the VSAN Cluster in the new vCenter Server. Once all hosts have been added to the new VSAN Cluster, you will see the warning icons disappear and your VSAN Cluster is now fully managed by the new vCenter Server. We can also confirm that there are no network partitions, as all of the original VSAN configurations have been retained on the ESXi hosts.

UPDATE1 (05/02/17)

Step 4 - This last step is ONLY applicable to vSAN 6.6 hosts. Once all hosts have been successfully added to the new vCenter Server and you have verified that the cluster status is healthy and there are no network partitions, we need to change the ESXi Advanced Setting we set earlier from a value of 1 back to a value of 0.

Here is a PowerCLI snippet which, given a vSAN Cluster, will automatically go through all ESXi hosts and update the setting:

Foreach ($vmhost in (Get-Cluster -Name VSAN-Cluster | Get-VMHost)) {
    $vmhost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 0 -Confirm:$false
}

[Screenshot: migrate-vsan-cluster-from-one-vcenter-to-another-2]

Disclaimer: As mentioned, there is no impact to the ESXi hosts (other than not being able to manage them while you disconnect and re-connect on the new vCenter Server) and there is no impact to the running Virtual Machines; any VM Storage Policies that have been applied to the VMs will still be enforced by each of the ESXi hosts. However, one thing to be aware of is that the VM Storage Policies in your original vCenter Server will not be available in the new vCenter Server. You will need to re-create each of the VM Storage Policies and re-attach them to the existing Virtual Machines. This can of course be automated by using the vSphere API or by leveraging the new PowerCLI 5.8 R1 release which includes VM Storage Policy cmdlets.

Here is an example of exporting a VM Storage Policy named "FTT=1" to a file called policy.xml on your desktop:

Export-SpbmStoragePolicy -StoragePolicy (Get-SpbmStoragePolicy -Name FTT=1) -FilePath C:\Users\Administrator\Desktop\policy.xml

Currently this is the only impact of moving a VSAN Cluster from one vCenter Server to another, and of course it assumes you have created VM Storage Policies aside from the default policies.

I received a couple of questions regarding the networking setup for my VSAN Cluster. In the above example I was using a VSS (Virtual Standard Switch). I did, however, retest this scenario completely on a VDS (Virtual Distributed Switch) and the results were the same. When all ESXi hosts have been added to the new vCenter Server, you will see a warning about a proxy host switch. The key to properly migrating the networks (VMkernel & VM Portgroup) is to add each ESXi host to the new VDS, which you will need to create. If your original vCenter Server is still available, you can export and import the VDS configuration. If it is not available, then you will need to manually re-create the Distributed Portgroups before proceeding.

The first step is to go to the Networking view, right-click and select "Add and Manage Hosts".

[Screenshot: migrate-vsan-cluster-from-one-vcenter-to-another-3]

Go ahead and walk through the guided wizard and make sure you only add one host at a time, as I saw issues when trying to add multiple hosts at once. Once the ESXi host has been added to the new VDS, its uplinks, VMkernel interfaces and VM Portgroups are all connected. You should now see two VDS under the Networking view of the ESXi host under "Manage".

[Screenshot: migrate-vsan-cluster-from-one-vcenter-to-another-4]

This can be seen clearly using the vSphere C# Client, as it allows you to view both on the same screen. Once you have confirmed that everything looks good, you can go ahead and remove the old VDS as shown in the screenshot above. At this point, your ESXi host's networking is running on the new VDS. Continue this same workflow for the remaining ESXi hosts until they have all been migrated over to the new VDS.

Categories // ESXi, VSAN, vSphere Tags // ESXi, Virtual SAN, VSAN, VSAN 6.6, vSphere 5.5
