
External replication of vSphere Content Library

11.15.2017 by William Lam // 17 Comments

As the adoption of vSphere Content Library continues to grow, I am seeing more questions from our field and customers around content distribution. In case you did not know, vSphere Content Library (CL, as I will be referring to it going forward) has its own built-in native replication mechanism which allows customers to easily publish and subscribe to libraries either within a single vCenter Server instance or between two completely different vCenter Servers (regardless of deployment topology and/or SSO Domain configurations).
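To make the publish side concrete, here is a minimal sketch using the vSphere 6.5 REST API; the vCenter hostname, credentials and datastore MoRef ID are placeholders, and this is just one of several ways (UI, PowerCLI, etc.) to accomplish the same thing:

# Log in and capture an API session token (placeholder credentials/hostname)
TOKEN=$(curl -sk -X POST -u 'administrator@vsphere.local:VMware1!' \
  https://vcsa-01.example.com/rest/com/vmware/cis/session | \
  python -c 'import json,sys; print(json.load(sys.stdin)["value"])')

# Create a local library backed by a datastore and publish it without authentication
curl -sk -X POST https://vcsa-01.example.com/rest/com/vmware/content/local-library \
  -H "vmware-api-session-id: ${TOKEN}" -H "Content-Type: application/json" \
  -d '{"create_spec":{"name":"Demo-CL","type":"LOCAL","publish_info":{"published":true,"authentication_method":"NONE"},"storage_backings":[{"type":"DATASTORE","datastore_id":"datastore-10"}]}}'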


Content distribution, or replication, is handled by CL, which is a service within vCenter Server. If content is being replicated within a single vCenter Server and the ESXi hosts can communicate with each other, then direct host-to-host transfer is used, also referred to as Network File Copy (NFC), rather than going through vCenter Server. When content is transferred between two vCenter Servers, the data travels through vCenter Server using standard HTTPS (443) by default. In the latter scenario, if you have configured Enhanced Linked Mode for your vCenter Servers, then NFC will be used, and if the ESXi hosts can not communicate with each other, it will automatically fall back to the default HTTPS, which is pretty cool.

One thing that may not be very well known is that customers actually have a choice in how their CL content is replicated. In addition to native replication, which currently does not support incremental/delta updates (meaning all file transfers are full copies), CL can also support external replication. In fact, many customers today already have existing methods for efficiently replicating large amounts of data across multiple datacenters, whether that is replication built into their storage arrays, network appliances or some other means. For these customers, you can still benefit from CL while continuing to take advantage of your existing methods of replication.
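For reference, the subscriber side simply points at the published library's lib.json endpoint. Here is a hedged sketch against the same REST API, where the URLs, library ID and datastore ID are placeholders and ${TOKEN} is a session on the subscribing vCenter Server obtained as shown earlier:

# Create a subscribed library that automatically syncs from the published URL
curl -sk -X POST https://vcsa-02.example.com/rest/com/vmware/content/subscribed-library \
  -H "vmware-api-session-id: ${TOKEN}" -H "Content-Type: application/json" \
  -d '{"create_spec":{"name":"Demo-CL-Sub","type":"SUBSCRIBED","subscription_info":{"subscription_url":"https://vcsa-01.example.com:443/cls/vcsp/lib/<library-id>/lib.json","authentication_method":"NONE","automatic_sync_enabled":true,"on_demand":false},"storage_backings":[{"type":"DATASTORE","datastore_id":"datastore-22"}]}}'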


[Read more...]

Categories // Automation, PowerCLI, vSphere 6.0

Moving ESXi hosts with LACP/LAG between vCenter Servers?

11.09.2017 by William Lam // 11 Comments

At VMworld this year, I received several questions from customers asking whether it was possible to move an ESXi host configured with LACP/LAG from one vCenter Server to another, similar to the workflows outlined here or here. Not having spent much time with LACP/LAG, I reached out to a fellow colleague who I knew would know the answer, Anthony Burke, who you may know as one of the co-creators of the popular automation tool PowerNSX.

Anthony not only confirmed that there was indeed a workflow for this scenario, but he was also kind enough to test it out in his lab. Below is the procedure he shared with me; I merely "prettified" the graphics he initially drafted up 🙂

At a high level, the workflow is similar to the ones shared earlier. The main difference is that for an LACP/LAG-based configuration, you must convert from VDS to VSS before disconnecting the host from one vCenter Server and moving it to the other; you can not simply disconnect and "swing" the ESXi host like you could for a non-LACP/LAG configuration, or you will run into issues. Once you have re-added the ESXi host to the new vCenter Server, you simply reverse the procedure from VSS to VDS and re-create your LACP/LAG configuration.
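To give a rough idea of what the VDS-to-VSS conversion looks like from the host itself, here is a hedged sketch of the per-host commands; vSwitch1, the portgroup name, DVSwitch and the dvport ID are placeholders, and your VMkernel interfaces and VM networking must of course be migrated over as part of the process as well:

# Create a Standard Switch and a portgroup to land the migrated networking on
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard portgroup add -p Management -v vSwitch1

# Unlink one uplink from the VDS (find the dvport ID with 'esxcfg-vswitch -l'),
# then attach it to the new VSS
esxcfg-vswitch -Q vmnic1 -V <dvport-id> DVSwitch
esxcli network vswitch standard uplink add -u vmnic1 -v vSwitch1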

Step 1 - Here is an example of a starting point, where we have an ESXi host with 2 pNICs (vmnic0 and vmnic1) connected to an LACP bundle which is then associated with a physical switch.


[Read more...]

Categories // vSphere Tags // distributed virtual switch, LACP, LAG, vds, vss

Workarounds for deploying PhotonOS 2.0 on vSphere, Fusion & Workstation

11.07.2017 by William Lam // 2 Comments

PhotonOS 2.0 was just released last week and it includes a number of exciting new enhancements which you can read more about here. Over the last few days, I had noticed quite a few folks having issues deploying the latest PhotonOS OVA, including myself. After reaching out to the PhotonOS team and seeing the number of questions both internally and externally, I figured I would share the current workarounds.

Deploying PhotonOS 2.0 on vSphere

If you are deploying the latest OVA using either the vSphere Web (Flex/H5) Client on vCenter Server or the ESXi Embedded Host Client on ESXi, you will notice that the import fails with the following error message:

The specified object /photon-custom-hw13-2.0-304b817/nvram could not be found.


This is apparently a known bug in the vSphere Web/H5 Client with exported vHW13 Virtual Machines. As I understand it, the actual fix did not make it into the latest vSphere 6.5 Update 1 release, but it should be available in a future update. After I reported this issue to the PhotonOS team, having run into it myself, the team quickly re-spun the vHW11 OVA (since that image also had a different issue), which can now be imported into a vSphere environment using any of the UI-based Clients and/or CLIs. For now, the workaround is to download the PhotonOS 2.0 "OVA with virtual hardware v11" if you are using vSphere, OR you can install PhotonOS using the ISO.
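If you prefer the CLI over the UI, OVFTool can also deploy the vHW11 OVA directly to vCenter Server. A minimal sketch, where the OVA filename, credentials, inventory path and datastore are all placeholders:

ovftool --acceptAllEulas --name=photon-2.0 --datastore=datastore1 \
  photon-custom-hw11-2.0-304b817.ova \
  'vi://administrator%40vsphere.local@vcsa-01.example.com/Datacenter/host/Cluster'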

Deploying PhotonOS 2.0 to Fusion/Workstation

UPDATE (11/08/17) - The PhotonOS team just published an additional OVA specifically for Fusion/Workstation which uses the LSI Logic storage adapter, since PVSCSI is currently not supported. You can easily import the latest PhotonOS 2.0 without needing to tweak the OVF as mentioned in the steps below; simply download the OVA with virtual hardware v11 (Workstation and Fusion) and import it normally via the UI or CLI.
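As an example of the CLI route, OVFTool (bundled with both Fusion and Workstation) can convert the OVA straight into a local VMX bundle that Fusion/Workstation can open; the OVA filename and output path below are illustrative:

ovftool photon-custom-hw11-2.0-304b817.ova ~/Documents/photon-2.0.vmx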

If you are deploying either the vHW11 or vHW13 OVA to Fusion/Workstation, you will see the following error message:

Invalid target disk adapter type: pvscsi


The reason for this issue is that neither Fusion nor Workstation currently supports the PVSCSI storage adapter type which the latest PhotonOS OVA uses. In the meantime, a workaround is to edit the OVA to use the LSI Logic adapter instead of PVSCSI. Below are the steps to convert the OVA to an OVF and then apply the single-line change.

Step 1 - Use OVFTool (included with both Fusion and Workstation) to convert the OVA to an OVF, which will allow us to edit the file. To do so, run the following command:

ovftool --allowExtraConfig photon-custom-hw13-2.0-304b817.ova photon-custom-hw13-2.0-304b817.ovf

Step 2 - Open photon-custom-hw13-2.0-304b817.ovf using a text editor like Visual Studio Code or vi and update the following line from:

<rasd:ResourceSubType>VirtualSCSI</rasd:ResourceSubType>

to

<rasd:ResourceSubType>lsilogic</rasd:ResourceSubType>

and save the change.
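If you would rather script the edit, a simple sed one-liner does the same thing (GNU sed shown; on macOS's BSD sed, use sed -i '' instead):

sed -i 's/VirtualSCSI/lsilogic/' photon-custom-hw13-2.0-304b817.ovf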

Step 3 - Delete the OVF manifest file named photon-custom-hw13-2.0-304b817.mf, since the contents of the OVF file have changed and the checksums stored in the manifest will no longer match

Step 4 - You can now import the modified OVF. If you wish to get back a single OVA file, you can re-run the command from Step 1 using the .ova extension for the target
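For reference, re-packaging the edited OVF back into a single OVA looks like this:

ovftool --allowExtraConfig photon-custom-hw13-2.0-304b817.ovf photon-custom-hw13-2.0-304b817.ova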

Upgrading from Photon 1.x to 2.0

I also noticed several folks asking about upgrading from PhotonOS 1.x to 2.0; you can find the instructions below:

Step 1 - You may need to run the following command to sync your installed packages with the latest repository, if you have not done so in a while:

tdnf distro-sync

Step 2 - Install the PhotonOS upgrade package by running the following command:

tdnf install photon-upgrade

Step 3 - Run the PhotonOS upgrade script and answer 'Y' to start the upgrade:

photon-upgrade.sh
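Once the upgrade completes and the system has rebooted, you can confirm you are on the new release (the exact version string may vary):

cat /etc/photon-release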

Categories // ESXi, Fusion, OVFTool, vSphere, Workstation Tags // fusion, Photon, vSphere, workstation

