
Using Packer vsphere-iso provider with VMware Cloud on AWS

05.24.2021 by William Lam // 1 Comment

I am a huge fan of HashiCorp Packer, which makes it extremely easy to automate building Virtual Machine images for vSphere, including OVF, OVA and vSphere Content Library Templates. Packer supports two vSphere providers: the first, vmware-iso, requires SSH access to an ESXi host, while the second, vsphere-iso, does not require ESXi access and instead connects to vCenter Server using the vSphere API, which is the preferred method for vSphere Automation.

I started working with Packer and the vmware-iso provider several years ago, and because there is not 100% parity between the two vSphere providers, I had not really looked at the vsphere-iso provider or attempted to transition over. I was recently working on some automation within my VMware Cloud on AWS (VMConAWS) SDDC, and since this is a VMware-managed service, customers have neither access to the underlying ESXi hosts nor SSH access. I thought this would be a good time to explore the vsphere-iso provider and see if I could make it work in a couple of different networking scenarios.

For customers who normally establish either a Direct Connect (DX) or VPN (Policy or Route-based) from their on-premises environment to their SDDC, nothing special needs to be set up to use Packer. However, if you are like me and may not always have these types of connectivity configured, or if you wish to use Packer directly over the internet to your SDDC, then some additional configuration will be needed.

UPDATE (04/12/22) - A floppy option can now be used with Photon OS to host the kickstart file; see this GitHub issue for an example.

Packer Connectivity Scenarios

In both scenarios below, DX/VPN connectivity to the VMConAWS SDDC is neither configured nor relied upon.

Scenario 1 - Run Packer locally on your desktop and connect to the VMConAWS SDDC over the public internet. Because the SDDC will not be able to communicate with your system running Packer, required configuration files cannot be served over Packer's HTTP server, nor can you use any of the communicators, which also expect the ability to SSH to the provisioned VM using the internal IPs that have been allocated.

Scenario 2 - Run Packer in a Linux bastion (jumphost) VM that runs within the VMConAWS SDDC and configure the SDDC firewall rules to allow connectivity between the VM and the SDDC Management CIDR. This will enable you to use Packer's built-in web server both to serve the kickstart configuration files to the VM being created and to use the SSH communicator for additional post-provisioning automation.
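To illustrate what Scenario 2 looks like in a Packer JSON template, here is a minimal sketch of the vsphere-iso builder settings involved. The hostname, credentials and kickstart file name are placeholders for illustration, not values from the repo:

```json
{
  "builders": [
    {
      "type": "vsphere-iso",
      "vcenter_server": "vcenter.sddc-x-x-x-x.vmwarevmc.com",
      "insecure_connection": true,

      "http_directory": "http",
      "http_port_min": 8600,
      "http_port_max": 8600,

      "communicator": "ssh",
      "ssh_username": "root",
      "ssh_password": "FILL-ME-IN",

      "boot_command": [
        "<esc><wait>",
        "vmlinuz initrd=initrd.img root=/dev/ram0 ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/photon-kickstart.json photon.media=cdrom",
        "<enter>"
      ]
    }
  ]
}
```

Since the bastion VM lives inside the SDDC, the {{ .HTTPIP }} placeholder resolves to an address the new VM can actually reach, and pinning http_port_min/http_port_max to a single port makes the SDDC firewall rule easier to write.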

To help demonstrate a solution for both of these scenarios, I have created a GitHub repo called vmc-packer-example which contains an example for both Ubuntu and PhotonOS. The Ubuntu example relies on a guest OS that supports a floppy device, which is how it retrieves the kickstart configuration file, and can be a solution for Scenario 1. The PhotonOS example relies on Packer's built-in HTTP server to serve the kickstart configuration and can be a solution for Scenario 2.
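For the Scenario 1 / Ubuntu case, the builder instead attaches the kickstart material via a floppy image, since the SDDC cannot reach back to the machine running Packer, and the communicator is disabled because Packer cannot SSH to the VM's internal IP. A hedged sketch of the relevant keys (file names are illustrative, not the repo's exact layout):

```json
{
  "floppy_files": [
    "ubuntu/preseed.cfg"
  ],
  "boot_command": [
    "<esc><esc><enter><wait>",
    "/install/vmlinuz initrd=/install/initrd.gz ",
    "file=/media/floppy/preseed.cfg ",
    "<enter>"
  ],
  "communicator": "none"
}
```

With "communicator": "none", Packer skips the SSH connection step entirely, which is exactly what you want when the only network path runs one way, from your desktop into the SDDC.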

Note: I was hoping PhotonOS could also read a kickstart file from a floppy device, but it looks like PhotonOS 3.0 does not see the device. Although PhotonOS 4.0 seems to have improved and lists the device, it is not automatically mounted and hence cannot be used. There have been several requests from the community to add this support, and I have shared my latest findings in this latest GitHub PR.

Packer Examples

To use either the Ubuntu or PhotonOS example, simply clone the GitHub repo and update the respective Packer JSON configuration file with your environment configuration. In addition, you will also need to download the respective ISO files (or another version based on your needs) and upload them to your SDDC.
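The environment-specific values to update are the vCenter connection and placement details. The names below are the defaults a VMConAWS SDDC ships with, but treat them as assumptions and substitute your own:

```json
{
  "vcenter_server": "vcenter.sddc-x-x-x-x.vmwarevmc.com",
  "username": "cloudadmin@vmc.local",
  "password": "FILL-ME-IN",
  "datacenter": "SDDC-Datacenter",
  "cluster": "Cluster-1",
  "datastore": "WorkloadDatastore",
  "folder": "Workloads",
  "iso_paths": ["[WorkloadDatastore] ISOs/photon.iso"]
}
```

Keep in mind that in VMConAWS the cloudadmin@vmc.local account can only deploy into the customer-accessible inventory (WorkloadDatastore, the Workloads folder, Compute-ResourcePool), so the placement values matter more here than they would on-premises.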

Once you have saved your configuration, simply run the following command to start the Packer build process:

packer build [JSON-CONFIGURATION]


If everything was configured correctly, you should see the newly built VM in your SDDC inventory, as shown in the screenshot below.


I would be remiss if I did not mention Ryan Johnson's amazing vSphere Packer repository, which includes complete working templates for automating Microsoft Windows Server, Red Hat Linux, CentOS and PhotonOS. Definitely check it out, as it is a great resource for using Packer's vsphere-iso provider.


Categories // Automation, VMware Cloud on AWS Tags // Packer, VMware Cloud on AWS

Comments

  1. Ray says

    05/25/2021 at 8:09 pm

    Really good article. I've been doing this for a while; Windows I've found to be far more complex for post-deployment customization, with an essential requirement for the floppy to mount for the kickstart and customization scripts.

    With Packer on Proxmox I have managed to get those customization scripts to work via an ISO mount; I wonder if that would work on vSphere.


Author

William is Distinguished Platform Engineering Architect in the VMware Cloud Foundation (VCF) Division at Broadcom. His primary focus is helping customers and partners build, run and operate a modern Private Cloud using the VMware Cloud Foundation (VCF) platform.
