WilliamLam.com


Automated Pivotal Container Service (PKS) Lab Deployment 

06.12.2018 by William Lam // 3 Comments

While working on my Getting started with VMware Pivotal Container Service (PKS) blog series a while back, one of the things I was also working on was some automation to help build out the required infrastructure: NSX-T (Manager, Controller & Edge), Nested ESXi hosts configured with VSAN for the Compute vSphere Cluster, and Pivotal Ops Manager. This was not only useful for my own learning purposes, but it also meant I could easily rebuild my lab if I messed something up, which allowed me to focus on the PKS solution rather than standing up the infrastructure itself.

To be honest, I had about 95% of the script done, but I was not able to figure out one of the NSX-T APIs, and as I got busy I left the script on the back burner. This past weekend, while cleaning out some of my PKS research documents, I came across the script and, funny enough, in about 30 minutes I was able to solve the problem that had me stuck for weeks prior. I just finished putting the final touches on the script along with adding some documentation. Similar to my other vGhetto Lab Automation scripts, I have created a GitHub repo: vGhetto Automated PKS Lab Deployment.
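
To give you an idea of what the script automates under the covers, below is a minimal PowerCLI sketch of the OVA deployment pattern it is built around. The hostnames, credentials, file paths and OVF property names are illustrative placeholders rather than the script's actual variables, so treat this as a sketch and not the real implementation:

  # Connect to the vCenter Server that will host the lab (placeholder hostname/credentials)
  Connect-VIServer -Server vcenter.primp-industries.com -User administrator@vsphere.local -Password VMware1!

  # Load the OVA's deployment options so its properties can be filled in before import
  $ovfConfig = Get-OvfConfiguration -Ovf "C:\Ovas\nsx-unified-appliance.ova"
  $ovfConfig.Common.nsx_hostname.Value = "nsx-mgr.primp-industries.com"   # illustrative OVF property path

  # Deploy the appliance as a thin-provisioned VM onto a specific host and datastore
  $vmhost = Get-VMHost -Name "esxi-01.primp-industries.com"
  Import-VApp -Source "C:\Ovas\nsx-unified-appliance.ova" -OvfConfiguration $ovfConfig -Name "nsx-mgr" -VMHost $vmhost -Datastore (Get-Datastore -Name "datastore1") -DiskStorageFormat Thin

The same Get-OvfConfiguration/Import-VApp pattern repeats for the NSX-T Controllers, Edge, Nested ESXi and Ops Manager OVAs, with only the property values changing.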

UPDATE (06/19/18) - I have just updated the script to also include the deployment and configuration of the PKS components (Ops Manager, BOSH Director, Harbor & Stemcell). The script by default will now configure everything end-to-end, leaving you with a fully functional PKS environment that you can start playing around with. For complete details, please see the GitHub repo, which has the updated requirements and documentation. Below is a screenshot of the PKS deployment and configuration, which requires the use of the Ops Manager CLI (om).
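
For those who have not used om before, the workflow boils down to a handful of commands. The following is a hedged sketch, run from PowerShell, where the target address, credentials and product file are placeholders; flags can vary between om releases, so consult om help for your version:

  # Configure the initial Ops Manager admin account (placeholder values)
  om --target https://opsmanager.primp-industries.com --skip-ssl-validation configure-authentication --username admin --password VMware1! --decryption-passphrase VMware1!

  # Upload, stage and then apply the PKS tile
  om --target https://opsmanager.primp-industries.com --skip-ssl-validation --username admin --password VMware1! upload-product --product pivotal-container-service-1.0.4.pivotal
  om --target https://opsmanager.primp-industries.com --skip-ssl-validation --username admin --password VMware1! stage-product --product-name pivotal-container-service --product-version 1.0.4
  om --target https://opsmanager.primp-industries.com --skip-ssl-validation --username admin --password VMware1! apply-changes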


The script will deploy the following components, which will be placed inside of a vApp as shown in the screenshot below:

  • NSX-T Manager
  • NSX-T Controller x 3 (though you technically only need one for lab/PoC purposes)
  • NSX-T Edge
  • Nested ESXi VMs x 3 (VSAN will be configured)
  • Ops Manager
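
If you are wondering how the vApp grouping itself might be scripted, a minimal PowerCLI sketch follows; the vApp, cluster and VM names are placeholders:

  # Create a vApp in the target cluster to group all of the lab VMs together (placeholder names)
  $cluster = Get-Cluster -Name "Primp-Cluster"
  $vapp = New-VApp -Name "PKS-Lab" -Location $cluster

  # Move the deployed appliances and Nested ESXi VMs into the vApp
  Get-VM -Name "nsx-mgr","nsx-ctr-01","nsx-edge","vesxi-01","vesxi-02","vesxi-03","opsmanager" | Move-VM -Destination $vapp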


The script follows my PKS blog series and automates Part 3 (NSX-T) and the start of Part 4 (Ops Manager deployment); please refer to those individual blog posts for more information. The goal of the script is to enable folks to jump right into the PKS configuration workflows without having to worry about setting up the actual infrastructure that is needed for PKS. Once the script has finished, you can jump right into Ops Manager and start your PKS journey.

Here is a sample execution of the script which took ~29 minutes to complete.


The full requirements for using the script can be found on the GitHub repo, and below are the software versions that I had used to deploy and configure PKS:

  • Pivotal Ops Manager for vSphere - 2.1-build.318
  • VMware Harbor Container Registry 1.4.2
  • Pivotal Container Service 1.0.4
  • Stemcell 3668.42 

Categories // Automation, Cloud Native, Home Lab, Kubernetes, NSX, PowerCLI Tags // BOSH, Kubernetes, NSX-T, Pivotal, PKS, PowerCLI

Getting started with VMware Pivotal Container Service (PKS) Part 3: NSX-T

03.28.2018 by William Lam // 6 Comments

In this article, we are now going to start configuring NSX-T so that it will be ready for us to install PKS and consume the networking and security services provided by NSX-T. The result is that PKS can deliver on-demand provisioning of all NSX-T components (Container Network Interface (CNI), NSX-T Container Plugin (NCP) POD, NSX Node Agent POD, etc.) automatically when a new Kubernetes (K8S) Cluster is requested, all done with a single CLI or API call. In addition, PKS also provides a unique capability through its integration with NSX-T to enable network micro-segmentation at the K8S namespace level, which allows Cloud/Platform Operators to manage access between applications and/or tenant users at a much finer-grained level than was possible before, which is really powerful!
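
To make that "single CLI call" concrete, here is what a cluster request looks like with the PKS CLI; the cluster name, external hostname and plan are placeholders for your own environment:

  # Request a new K8S cluster; PKS drives NSX-T to provision the networking automatically (placeholder values)
  pks create-cluster k8s-cluster-01 --external-hostname k8s-cluster-01.primp-industries.com --plan small

Behind that one command, PKS calls the NSX-T API to carve out the logical switches and routers that the new cluster needs.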

As mentioned in the previous blog post, I will not be walking through a step-by-step NSX-T installation, and I will assume that you already have a basic NSX-T environment deployed, which includes a few ESXi hosts prepped as Transport Nodes and at least 1 NSX-T Controller and 1 NSX-T Edge. If you would like a detailed step-by-step walkthrough, you can refer to the NSX-T documentation here, or you can even leverage my Automated NSX-T Lab Deployment script to set up the base environment and modify it based on the steps in this article. In fact, this is the same script I have used to deploy my own NSX-T environment for PKS, with some minor modifications which I will be sharing at a later date.
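
If you want to quickly sanity-check that base environment before continuing, the NSX-T Manager REST API makes it easy. Below is a small PowerShell sketch where the manager address and credentials are placeholders (note that -SkipCertificateCheck requires PowerShell Core 6.0 or later; on Windows PowerShell you would need to relax certificate validation another way):

  # Build a Basic Auth header for the NSX-T Manager API (placeholder credentials)
  $auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("admin:VMware1!"))
  $headers = @{ Authorization = "Basic $auth" }

  # List the registered Transport Nodes to confirm the ESXi hosts and Edge are prepped
  $response = Invoke-RestMethod -Uri "https://nsx-mgr.primp-industries.com/api/v1/transport-nodes" -Headers $headers -Method Get -SkipCertificateCheck
  $response.results | Select-Object display_name, id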

If you missed any of the previous articles, you can find the complete list here:

  • Getting started with VMware Pivotal Container Service (PKS) Part 1: Overview
  • Getting started with VMware Pivotal Container Service (PKS) Part 2: PKS Client
  • Getting started with VMware Pivotal Container Service (PKS) Part 3: NSX-T
  • Getting started with VMware Pivotal Container Service (PKS) Part 4: Ops Manager & BOSH
  • Getting started with VMware Pivotal Container Service (PKS) Part 5: PKS Control Plane
  • Getting started with VMware Pivotal Container Service (PKS) Part 6: Kubernetes Go!
  • Getting started with VMware Pivotal Container Service (PKS) Part 7: Harbor
  • Getting started with VMware Pivotal Container Service (PKS) Part 8: Monitoring Tool Overview
  • Getting started with VMware Pivotal Container Service (PKS) Part 9: Logging
  • Getting started with VMware Pivotal Container Service (PKS) Part 10: Infrastructure Monitoring
  • Getting started with VMware Pivotal Container Service (PKS) Part 11: Application Monitoring
  • vGhetto Automated Pivotal Container Service (PKS) Lab Deployment


Categories // Automation, Cloud Native, Kubernetes, NSX Tags // BOSH, cloud native apps, Kubernetes, NSX-T, PCF, Pivotal, PKS

ESXi host with network redundancy using NSX-T and only 2 pNICs?

03.27.2018 by William Lam // 8 Comments

In today's data centers, it is not uncommon to find servers with only 2 x 10GbE network interfaces; this is especially true with the rise of Hyper-Converged Infrastructure over the last several years. For customers looking to deploy NSX-T with ESXi, there is an important physical network constraint to be aware of, which is briefly mentioned in the NSX-T documentation here.

For example, your hypervisor host has two physical links that are up: vmnic0 and vmnic1. Suppose vmnic0 is used for management and storage networks, while vmnic1 is unused. This would mean that vmnic1 can be used as an NSX-T uplink, but vmnic0 cannot. To do link teaming, you must have two unused physical links available, such as vmnic1 and vmnic2.

As shown in the diagram below, an ESXi host with only two physical NICs cannot provide complete network redundancy, as each pNIC can only be associated with a single switch (VSS/VDS or the new N-VDS) since pNICs cannot be shared across switches.


For customers, this means that you need to allocate a minimum of 4 pNICs to provide redundancy for both overlay traffic and non-overlay VMkernel traffic such as Management, vMotion, VSAN, etc. This is much easier said than done, as not all hardware platforms can easily be expanded, and even if they can, there is still a huge cost in expanding the physical network footprint (switch ports, cabling, etc.).
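
If you want to check exactly which pNICs a given host has handed over to the N-VDS, you can pull its Transport Node configuration from the NSX-T API. A hedged PowerShell sketch is below; the manager address, credentials and transport node ID are placeholders, and the host_switches/pnics field names follow the NSX-T 2.x schema, which may differ in later releases:

  # Fetch a single Transport Node and list the vmnics claimed by its host switch (placeholder values)
  $auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("admin:VMware1!"))
  $headers = @{ Authorization = "Basic $auth" }
  $node = Invoke-RestMethod -Uri "https://nsx-mgr.primp-industries.com/api/v1/transport-nodes/4aa2ab3b-0000-0000-0000-000000000000" -Headers $headers -Method Get -SkipCertificateCheck
  $node.host_switches | ForEach-Object { $_.pnics }   # each entry maps a device_name (vmnic) to an uplink_name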

UPDATE (06/12/18) - As of NSX-T 2.2, which was recently released, there is now a UI in NSX-T Manager for managing the migration of VMkernel interfaces to the N-VDS. For automation purposes, you may still find this article useful, but you now have the option of using the UI.


Categories // Automation, ESXi, NSX Tags // ESXi, N-VDS, NSX-T, REST API
