Automating post-configurations for both PSC & VCSA 6.0u1 using appliancesh

11.23.2015 by William Lam // 4 Comments

In vSphere 6.0, we introduced a new command-line option to allow you to automate both the deployment and upgrade of a vCenter Server Appliance (VCSA) and Platform Services Controller (PSC) using a simple JSON configuration file. This had been a very popular request from customers, and one that I had been requesting for some time, so I was glad to see it finally made available with the VCSA. One thing that was still missing from an automation standpoint was being able to perform some basic post-configuration after the initial deployment. Common operations such as adding additional user accounts, configuring SNMP for monitoring or adding a proxy server were available, but had to be done interactively and manually.

In vSphere 6.0 Update 1, an enhancement was made to the appliancesh interface which now allows customers to automate the post-configuration of either a VCSA or PSC by simply redirecting a file containing a series of appliancesh commands over SSH. Although SSH may not be ideal for all customers, and a programmatic interface via an API is ultimately where we want to get to, this at least allows customers to automate the end-to-end deployment of both the VCSA and PSC as well as any additional post-configurations that might be required to stand up a vSphere environment.

To make use of this feature, you simply create a file that contains the list of appliancesh commands you wish to run on either the VCSA or the PSC. Here is an example configuration file called psc.config (you can name it anything you want):

access.shell.set --enabled false
access.ssh.set --enabled false
ntp.server.add --servers "0.pool.ntp.org,1.pool.ntp.org"
timesync.set --mode NTP
services.restart --name ntp
proxy.set --protocol https --server proxy.primp-industries.com
localaccounts.user.add --email *protected email* --role operator --fullname 'William Lam' --username lamw --password 'VMware1!'
snmp.set --communities public --targets 192.168.1.160@161/public
snmp.enable

Once you have saved the configuration file, you simply SSH to either your VCSA or PSC and redirect the configuration file by running the following command:

ssh *protected email* < psc.config

Once authenticated, the series of appliancesh commands will be executed and then you will be automatically logged off as seen in the screenshot below.
[Screenshot: appliancesh commands executing over SSH, followed by automatic logoff]
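
If you need to apply the same post-configuration to more than one appliance, the redirect is also easy to wrap in a small script. Here is a minimal Python sketch (the hostnames below are just examples, and it assumes you are logging in as root; you will be prompted for each password unless SSH key-based login has been set up):

#!/usr/bin/env python
# Minimal sketch: push the same appliancesh config file to several appliances.
import subprocess

appliances = ["psc.primp-industries.com", "vcsa.primp-industries.com"]  # example FQDNs
config_file = "psc.config"

for host in appliances:
    with open(config_file) as f:
        # equivalent to: ssh root@<host> < psc.config
        subprocess.run(["ssh", "root@" + host], stdin=f, check=True)
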
If you have any feedback in this particular area, please leave a comment as I know both PM/Engineering are interested in hearing your thoughts and what you might want to see in the future in terms of post-configuration of the VCSA and PSC.

Categories // Automation, VCSA, vSphere 6.0 Tags // appliancesh, psc, vami, vcenter server appliance, VCSA, vcva, vSphere 6.0 Update 1

Migrating ESXi to a Distributed Virtual Switch with a single NIC running vCenter Server

11.18.2015 by William Lam // 29 Comments

Earlier this week I needed to test something which required a VMware Distributed Virtual Switch (VDS) and this had to be a physical setup, so Nested ESXi was out of the question. I could have used my remote lab, but given that what I was testing was a bit "experimental", I preferred using my home lab in the event I needed direct console access. At home, I run ESXi on a single Apple Mac Mini, and one of the challenges with this and other similar platforms (e.g. Intel NUC) is that they only have a single network interface. As you might have guessed, this is a problem when looking to migrate from a Virtual Standard Switch (VSS) to a VDS, as the migration requires at least two NICs.

Unfortunately, I had no other choice and needed to find a solution. After a couple minutes of searching around the web, I stumbled across this serverfault thread here which provided a partial solution to my problem. In vSphere 5.1, we introduced a new feature which automatically rolls back a network configuration change if it negatively impacts network connectivity to your vCenter Server. This feature can be disabled temporarily by editing the vCenter Server Advanced Setting (config.vpxd.network.rollback), which allows us to bypass the two-NIC requirement, however this does not solve the problem entirely. What ends up happening is that the single pNIC is now associated with the VDS, but the VM portgroups are not migrated. This is problematic because the vCenter Server is running on the very ESXi host it is managing and has now lost network connectivity 🙂

I lost access to my vCenter Server, and even though I could connect directly to the ESXi host, I was not able to change the VM Network to the Distributed Virtual Portgroup (DVPG). This is actually expected behavior, and there is an easy workaround, let me explain. When you create a DVPG, there are three different port bindings that can be configured: Static, Dynamic, and Ephemeral; by default, Static binding is used. Both Static and Dynamic DVPGs can only be managed through vCenter Server, and because of this, you cannot change the VM network to a non-Ephemeral DVPG; in fact, it is not even listed when connecting with the vSphere C# Client. The simple workaround is to create a DVPG using the Ephemeral binding, which will then allow you to change the VM network of your vCenter Server; this is the last piece to solving the puzzle.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Here are the exact steps to take if you wish to migrate an ESXi host with a single NIC from a VSS to a VDS while it is running your vCenter Server:

Step 1 - Change the following vCenter Server Advanced Setting config.vpxd.network.rollback to false:

[Screenshot: config.vpxd.network.rollback set to false in the vCenter Server Advanced Settings]
Note: Remember to re-enable this setting once you have completed the migration.
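
If you would rather script this step than click through the Web Client, the setting can also be changed via the vSphere API. Here is a minimal pyVmomi sketch (hostname and credentials are placeholders) that updates it through vCenter's OptionManager:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

# vCenter Server Advanced Settings live in the OptionManager
opt_mgr = si.RetrieveContent().setting
opt_mgr.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="config.vpxd.network.rollback", value="false")])
Disconnect(si)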

Step 2 - Create a new VDS and the associated Portgroups for both your VMkernel interfaces and VM Networks. For the DVPG which will be used for the vCenter Server's VM network, be sure to change the binding to Ephemeral before proceeding with the VDS migration.

[Screenshot: DVPG port binding set to Ephemeral]
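
The portgroup can also be created programmatically. Below is a minimal pyVmomi sketch (the VDS and portgroup names are made up for the example); the key piece is setting the portgroup type to "ephemeral":

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# locate the existing VDS by name
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "VDS01")
view.DestroyView()

# "ephemeral" binding is what allows the portgroup to be consumed when
# connected directly to the ESXi host
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="VM-Network-Ephemeral", type="ephemeral")
dvs.AddDVPortgroup_Task(spec=[pg_spec])
Disconnect(si)
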
Step 3 - Proceed with the normal VDS Migration wizard using the vSphere Web/C# Client and ensure that you perform the correct mappings. Once completed, you should now be able to connect directly to the ESXi host using either the vSphere C# Client or ESXi Embedded Host Client to confirm that the VDS migration was successful, as seen in the screenshot below.

[Screenshot: ESXi host networking after a successful VDS migration]
Note: If you forgot to perform Step 2 (which I initially did), you will need to login to the DCUI of your ESXi host and restore the networking configurations.

Step 4 - The final step is to change the VM network for your vCenter Server. In my case, I am using the VCSA, and due to a bug I found in the Embedded Host Client, you will need to use the vSphere C# Client to perform this change if you are running VCSA 6.x. If you are running Windows vCenter Server or VCSA 5.x, then you can use the Embedded Host Client to modify the VM network to use the new DVPG.

[Screenshot: changing the vCenter Server VM network to the ephemeral DVPG]
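
If the C# Client is not handy, this reconfiguration can also be scripted against the host directly (remember, vCenter is unreachable at this point). Here is a minimal pyVmomi sketch, with placeholder names throughout, that connects straight to the ESXi host and repoints the VM's first vNIC at the ephemeral DVPG:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.example.com", user="root", pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# find the vCenter Server VM by name
vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vm_view.view if v.name == "VCSA-6.0u1")
vm_view.DestroyView()

# find the ephemeral DVPG as seen by the host
pg = next(n for n in vm.runtime.host.network
          if isinstance(n, vim.dvs.DistributedVirtualPortgroup)
          and n.name == "VM-Network-Ephemeral")

# repoint the first vNIC: a DVPG backing needs the portgroup key and switch UUID
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
    port=vim.dvs.PortConnection(
        portgroupKey=pg.key,
        switchUuid=pg.config.distributedVirtualSwitch.uuid))

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
Disconnect(si)
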
Once you have completed the VM reconfiguration, you should now be able to log in to your vCenter Server, which is now connected to a DVPG running on a VDS backed by a single NIC on your ESXi host 😀

There is probably no good use case for this outside of home labs, but I was happy to have found a solution, and hopefully it comes in handy for others in a similar situation who would like to use and learn more about VMware VDS.

Categories // ESXi, Not Supported, vSphere Tags // distributed portgroup, distributed virtual switch, dvs, ESXi, notsupported, vds

ESXi 6.0 on Apple Xserve 3,1

11.17.2015 by William Lam // 76 Comments

A couple of months ago, I shared a guest blog post from one of my readers, John Clendenen, who was able to get ESXi 6.0 running on an Apple Xserve 2,1. At the end of that article, it was hinted that John was also looking into getting ESXi 6.0 running on an Apple Xserve 3,1, and after several months of investigation, you can find the details below.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

*** This is a guest blog post from John Clendenen ***

First, an update on my Xserve 2,1s. I had them running for over 100 days without any issue! However, now that I have the 3,1 working reliably, it is time that I part ways with my Xserve 2,1s. I currently have them up on eBay. Here is the link: http://www.ebay.com/itm/231752771080?ssPageName=STRK:MESELX:IT&_trksid=p3984.m1555.l2649

Anyway, onto the Xserve 3,1.

--

I came across an Xserve 3,1 on eBay about a year ago. It was badly photographed, and the seller didn’t really know what he/she had. It wasn’t getting much attention, so I thought I might get it cheap. I ended up paying $500 for it which I felt ok about, but not great.

When it arrived, it had no processors, heatsinks or airflow duct. I immediately messaged the seller, and was able to get $350 refunded to me. I found the missing parts for under $100 over the next few weeks, and developed an intimate understanding of the Xserve 3,1 hardware.

At this point, I had no familiarity with vSphere at all. I was running OS X Server and virtualizing a few services in Fusion. It was only through researching the Xserve 3,1 to find the missing hardware that I discovered VMware had once supported it as an ESXi 5 host. This made me wonder if it might still be possible to run ESXi on it, despite it no longer being supported.

I have found, after a considerable time investment, that the Xserve 3,1 can run ESXi 6, just as I found the Xserve 2,1 can run ESXi 6. However, unlike the Xserve 2,1, the Xserve 3,1 took months of troubleshooting before I had it running as a reliable ESXi host.

--

As it turns out, despite how much time it took me to get it working, there are only 2 serious issues with the Xserve 3,1 running ESXi 6. The first is somewhat specific to my configuration, but the second will be relevant to all configurations.

The first issue concerns booting into ESXi on a headless Xserve 3,1. The issue is limited to configurations where ESXi is booting from a drive installed in the optical bay (my original configuration). I have since changed my configuration and swapped the ESXi boot drive from the optical bay to the first hard drive bay. I have had no issue since I made this change.

For my configuration, I used an OWC bracket to replace the optical drive with an SSD. I installed ESXi onto it without issue. During installation, it was connected to a monitor, keyboard, etc. I ran some VMs on it to make sure it worked, and there were zero issues. I was relieved! So, I put it in the rack, wired it up and turned it on. Nothing. The Xserve lit up, and it was clear that it got through POST, but ESXi was clearly not booting.

Long story short, when no monitor is plugged into the Xserve 3,1, it will not automatically boot into ESXi if the boot drive is installed in the optical bay. The Xserve boot options can even be programmed through the front panel, but no configuration will make it reliably boot from the optical bay when a hard drive is installed. It is truly baffling, and if anyone has some insight here, or if it is a problem specific to my particular Xserve, I would love to know.

The solution, in my case, was to plug a keyboard into the Xserve and hold down the Option key for a few minutes while it booted (bringing up the boot options). Once all LED activity had normalized and the fans had settled down, I released the Option key and pressed the arrow keys. I think you only need to press the up arrow, but I always pressed all of them to be sure. Then I pressed Enter, and ESXi would boot. I have since simply swapped the boot drive to the first drive bay. Ideally, I'd have left the hot-swap bays for the other drives, but I felt it was too much trouble to keep the boot drive in the optical bay.

The second issue concerns the onboard NIC. Once I had ESXi up and running, everything worked fine for anywhere between a few hours and 2 days, after which the Xserve 3,1 host would disappear from the VCSA and become completely unresponsive (no ping/ssh/etc). The length of time before failure made this issue especially difficult and time consuming to diagnose.

After nearly a month of frustration and disappointment, I determined that ESXi actually continued to run, but all network connectivity would cease. The only solution I have found is to install a 3rd party NIC and completely avoid using the onboard NIC. Even in standby, the onboard NIC can cause problems, but when it is completely unused, both for management and VM traffic, it no longer causes any problems.

This has been superficially improved with the last update, but use of the onboard NIC should still be completely avoided. The ESXi host will remain accessible via the VCSA, but the network management will become grayed out after a day or so. I suspect this is a driver issue in ESXi, but I really do not know.

--

Beyond these 2 issues, I have had no problems. Since the last update, even the performance and hardware status tabs are functional. RDM is not available, but it is not recommended in the first place. The Apple RAID backplane will not be recognized, but this was the case even in ESXi 5 when it was officially supported by VMware.

I hope that my efforts here will save others a lot of time and frustration. I think that for a lot of IT infrastructures, ESXi on an Xserve might make sense. It can run non-critical OS X services (which are hopefully the only kind of services you’re trying to run in OS X).

--

Summary

  • Completely avoid using the onboard NIC. Silicom NICs are recommended.
  • Find a standard backplane. The RAID backplane is useless in ESXi.
  • A 2.5” drive can be installed in the optical bay, but booting from it is problematic.

[Photo] The Xserve 3,1 with the Silicom NIC installed

[Photo] The 6 ports are a tight squeeze, but they just fit. My other 2 ESXi hosts are Supermicro nodes, also with Silicom NICs, and I had to use a Dremel to grind off part of the chassis to make all the ports accessible. But the Xserve works out of the box.

[Photo] The OWC SSD “Data Doubler” bracket in the optical bay. Booting from here is a pain, but putting an additional SSD here works great for host caching.

[Photo] The standard backplane is difficult to find, but is a great asset for vSphere. It is easy to distinguish it from the RAID backplane, which would have a heat sink here.

[Photo] There are no complications during installation/initial configuration.

[Photo] Apologies for not having a longer uptime. I updated to ESXi 6.0 U1a 12 days ago, but I’ve had the Xserve 3,1 up for months. If something changes, I will post an update here, but I am confident that the system is stable.

[Photo] This is the final stage of my home lab. The Xserve 3,1 is 1 of 3 ESXi hosts. These are accompanied by a primary domain controller (Samba4), a media server (Emby) and a home-grown NAS (CentOS 7). Networking in the back is Ubiquiti. I use this lab to prototype production environments for clients, and of course to run my home media services 🙂

Categories // Apple, ESXi, vSphere 6.0 Tags // apple, ESXi 6.0, osx, xserve
