WilliamLam.com


New SDDC Linking capability for VMware Cloud on AWS

11.03.2020 by William Lam // 1 Comment

Back in September, the VMware Transit Connect (vTGW) feature for VMware Cloud on AWS (VMConAWS) was released, giving users a simplified way of connecting AWS VPCs, AWS Direct Connect Gateways and customer on-premises datacenters from a networking connectivity standpoint. As part of this feature, a new logical construct called an SDDC Group was introduced, which allows customers to easily apply common networking connectivity policies across a number of SDDCs rather than managing each one separately, which can quickly become complex from an operational point of view.

The SDDC Group not only simplifies the initial setup, it also simplifies Day 2 operations when new SDDCs are provisioned and added to the SDDC Group. The networking policies that have been configured at the SDDC Group level automatically apply to all new SDDCs, which makes this a really slick solution. As SDDCs are removed from the SDDC Group, the related configurations are automatically un-provisioned and detached from the respective networking resources.


Simplified network connectivity using an SDDC Group was just the beginning! Today, the VMware Cloud team has released a new feature built on top of the SDDC Group construct called vCenter Linking for SDDC Groups. Just as the name implies, customers can now easily "Link" multiple vCenter Servers within an SDDC Group, enabling a single view of all vCenter Servers from any one of the vSphere UIs within the SDDC. For those familiar with Enhanced Linked Mode (ELM), this is basically that, but for SDDCs running in the Cloud!

The workflow could not be simpler, and last week I got to try it out and was quite impressed! Under the hood, this leverages the vCenter Convergence capability; when enabling vCenter Linking, the service automatically handles all of those details, including the necessary NSX-T firewall rules that need to be configured across ALL SDDCs to allow for secured connectivity. Just imagine having to do this each time an SDDC is added or removed: you would need to manually go to every SDDC and update or create new firewall rules! This is all hidden away from the user; by simply associating SDDCs in the SDDC Group, the configurations are applied automatically for you.

Just setup an upcoming feature which builds on top of VMware Transit Connect Gateway (vTGW) allowing #VMWonAWS customers to now “Link” multiple SDDCs together. Just 1-Click, you now can access all Cloud vCenter Servers using any one vSphere UI. ELM for Cloud!#VMwareCloud pic.twitter.com/dImg6Yloe3

— William Lam (@lamw) October 30, 2020

One question that I did have while trying out this new feature was how it works with existing capabilities such as Hybrid Linked Mode (HLM) and ELM.

[Read more...]

Categories // VMware Cloud, VMware Cloud on AWS Tags // ELM, Enhanced Linked Mode, HLM, Hybrid Linked Mode, SDDC Group, VMware Cloud, VMware Cloud on AWS

Stateless ESXi-Arm with Raspberry Pi

11.03.2020 by William Lam // 24 Comments

I am super excited to finally be able to share what I think is a really cool ESXi-Arm solution, which has been an evolution of this and this. This solution also incorporates a number of automation techniques I have shared over the years when it comes to ESXi scripted installations, aka Kickstart, so it was really neat to see all of those things get pulled into a single solution. Lastly, I also want to give a huge thanks to Cyprien Laplace, who threw the initial challenge my way after I had shared how to perform an ESXi-Arm scripted installation without using an SD Card.

ESXi-x86 can be deployed using either a stateful or stateless installation. In the latter case, ESXi is booted over the network using the vSphere Auto Deploy feature in vCenter Server, which does not require any local media for ESXi. Upon attaching itself to vCenter Server, Auto Deploy then leverages vSphere Host Profiles and its rules engine to determine which configurations or profiles should be applied, ensuring the ESXi hosts are configured per their desired state. Here is a quick video overview of how Auto Deploy and Host Profiles work.
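For reference, on ESXi-x86 an Auto Deploy rule can be wired up with just a few PowerCLI cmdlets. The following is only a minimal sketch; the depot URL, image profile, Host Profile, cluster and vendor pattern are all placeholder values for illustration:

# Minimal vSphere Auto Deploy sketch for ESXi-x86 (placeholder names/URLs)
Add-EsxSoftwareDepot "https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml"
$imageProfile = Get-EsxImageProfile -Name "ESXi-7.0*-standard" | Select-Object -First 1

# Create a rule that matches hosts by vendor and assigns an image profile,
# a Host Profile and a target cluster, then activate the rule
New-DeployRule -Name "Stateless-Lab" -Pattern "vendor=Supermicro" -Item $imageProfile, (Get-VMHostProfile -Name "Lab-HostProfile"), (Get-Cluster -Name "Lab-Cluster")
Add-DeployRule -DeployRule "Stateless-Lab"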

Fundamentally, vSphere Auto Deploy and Host Profiles could also work with ESXi-Arm, but today vCenter Server would require some code modifications for this to actually work.

OK, so am I teasing you with something that does not exist? Nope, but I just wanted to help set the context 🙂

The solution that I have created boots ESXi-Arm over the network in a "stateless" manner, so there is no need for an SD Card or USB device plugged into the Raspberry Pi (rPI). In addition to the ESXi-Arm files, it also includes a custom payload which runs to retrieve additional configurations that can automatically join a desired vCenter Server as well as apply further customizations to an ESXi-Arm host. As you can see, this solution behaves similarly to vSphere Auto Deploy and Host Profiles, but does not use either of those vSphere features and works with the ESXi-Arm Fling right now.
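To give a sense of what the auto-join accomplishes, the end result is equivalent to manually adding the host with PowerCLI, as sketched below; the hostnames and credentials here are placeholders, and in the actual solution the custom payload performs this step automatically at boot:

# Conceptual equivalent of the payload's vCenter auto-join step
# (all hostnames and credentials are placeholders)
Connect-VIServer -Server "vcsa.example.com" -User "administrator@vsphere.local" -Password "FILL-ME-IN"

# Join the freshly booted ESXi-Arm host to the desired cluster
Add-VMHost -Name "esxi-arm-01.example.com" -Location (Get-Cluster -Name "Arm-Cluster") -User "root" -Password "FILL-ME-IN" -Force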

Technically speaking, these techniques can also be applied to ESXi-x86, but I will leave that to the reader for further exploration.

[Read more...]

Categories // Automation, ESXi-Arm Tags // Arm, ESXi, Raspberry Pi, stateless

Automating HAProxy VM deployment with 3-NIC configuration using PowerCLI

11.02.2020 by William Lam // 2 Comments

When deploying the HAProxy VM as part of vSphere with Tanzu, customers have the option of deploying the HAProxy VM using either a 2-NIC or a 3-NIC configuration. The default OVF Deployment Option is the 2-NIC design, called "Default", and the 3-NIC design is called "Frontend".

From an automation point of view, you can use either OVFTool or PowerCLI to automate the deployment. For a 2-NIC example, you can refer to my Automated vSphere with Tanzu Lab Deployment Script. However, for the 3-NIC configuration, a few folks were running into some issues when using PowerCLI for the automation.

The main issue is that because the default OVF Deployment Option is the 2-NIC design (Default), the two additional OVF properties, frontend_ip and frontend_gateway, are basically hidden when PowerCLI processes the OVF properties.

Note: You can view these optional properties by running the following OVFTool command: ovftool --X:enableHiddenProperties vmware-haproxy-v0.1.8.ova


Even if you specify the "Frontend" OVF Deployment Option, PowerCLI does not seem to have the logic to retrieve the other optional parameters, and hence they can not be set as part of the initial deployment.
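To illustrate, here is a rough sketch of a 3-NIC deployment attempt using PowerCLI; the paths, host and VM names are placeholders, and as described above, frontend_ip and frontend_gateway never show up in the OvfConfiguration object even after selecting the "frontend" deployment option:

# Sketch of a 3-NIC HAProxy OVA deployment attempt (placeholder paths/names)
$ovaPath = "C:\Users\william\Desktop\vmware-haproxy-v0.1.8.ova"
$ovfConfig = Get-OvfConfiguration -Ovf $ovaPath

# Select the 3-NIC design ("frontend" is the Deployment Option ID in the OVA)
$ovfConfig.DeploymentOption.Value = "frontend"

# Inspect the properties PowerCLI exposes; frontend_ip and frontend_gateway
# are missing because they are hidden behind the non-default option
$ovfConfig.ToHashTable().Keys

# Deploy the appliance
$vmhost = Get-VMHost -Name "esxi-01.example.com"
Import-VApp -Source $ovaPath -OvfConfiguration $ovfConfig -Name "haproxy" -VMHost $vmhost -DiskStorageFormat Thin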

[Read more...]

Categories // Automation, PowerCLI, VMware Tanzu Tags // HAProxy, PowerCLI, vSphere Kubernetes Service

