WilliamLam.com

Automating the silent installation of Site Recovery Manager 6.0/6.1 w/Embedded vPostgres DB

11.09.2015 by William Lam // 4 Comments

For customers looking to automate the latest release of Site Recovery Manager 6.0 / 6.1 with an Embedded vPostgres DB, you may have found that my previous deployment scripts for SRM 5.8 no longer work with the latest release. The reason is that SRM 6.x now supports the Platform Services Controller (PSC), and with that come several new silent installer flags that are now required. With the help of the SRM Engineering team, I was able to modify my script to include these new options for automating the silent installation of both SRM 6.0 and 6.1. You can download the new script called install_srm6x.bat.

Before using this script, I highly recommend that you take a look at my previous article here, which provides more details on how the script works in general.

There are five new silent options introduced with SRM 6.x, all of which are required:

  • PLATFORM_SERVICES_CONTROLLER_HOST - The hostname of the Platform Services Controller
  • PLATFORM_SERVICES_CONTROLLER_PORT - The port for the PSC, default is 443 (recommend leaving this the default)
  • PLATFORM_SERVICES_CONTROLLER_THUMBPRINT - PSC SSL SHA1 Thumbprint (Must be in all CAPS)
  • SSO_ADMIN_USER - The SSO Administrator account (e.g. administrator@vsphere.local)
  • SSO_ADMIN_PASSWORD - The SSO Administrator password

In addition to the above, you will still need to populate the following options; the script outlines which ones need to be modified before it is run (see the excerpt after the note below).

  • SRM_INSTALLER - The full path to the SRM 6.x installer
  • DR_TXT_VCHOSTNAME - vCenter Server Hostname
  • DR_TXT_VCUSR - vCenter Server Username
  • DR_TXT_VCPWD - vCenter Server Password
  • VC_CERTIFICATE_THUMBPRINT - vCenter Server SSL SHA1 Thumbprint (Must be in all CAPS)
  • DR_TXT_LSN - SRM Local Site Name
  • DR_TXT_ADMINEMAIL - SRM Admin Email Address
  • DR_CB_HOSTNAME_IP - SRM Server IP/Hostname
  • DR_TXT_CERTPWD - SSL Certificate Password
  • DR_TXT_CERTORG - SSL Certificate Organization Name
  • DR_TXT_CERTORGUNIT - SSL Certificate Organization Unit Name
  • DR_EMBEDDED_DB_DSN - SRM DB DSN Name
  • DR_EMBEDDED_DB_USER - SRM DB Username
  • DR_EMBEDDED_DB_PWD - SRM DB Password
  • DR_SERVICE_ACCOUNT_NAME - Windows System Account to run SRM Service

Note: If you deployed either your vCenter Server or PSC using an FQDN, be sure to specify that for both DR_TXT_VCHOSTNAME and PLATFORM_SERVICES_CONTROLLER_HOST. This is a change in behavior compared to SRM 5.8, which only required the IP Address of the vCenter Server.
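
For reference, below is a minimal sketch of what the new section of the batch script might look like, assuming the same pattern as the SRM 5.8 script where each option is a plain set statement near the top of the file. The hostname, thumbprint and credentials shown are placeholder values only:

rem New SRM 6.x options (placeholder values, replace with your own environment details)
set PLATFORM_SERVICES_CONTROLLER_HOST=psc.yourdomain.com
set PLATFORM_SERVICES_CONTROLLER_PORT=443
rem SHA1 thumbprint must be the full colon-separated string in upper case
set PLATFORM_SERVICES_CONTROLLER_THUMBPRINT=AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD
set SSO_ADMIN_USER=administrator@vsphere.local
set SSO_ADMIN_PASSWORD=MySSOPassword

The remaining DR_* variables from the list above would be populated in the same way before the installer is invoked.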

If you run into any issues, you can take a look at the logs that are generated. From what I have seen, you will normally get a 1603 error code; step back through the logs and you will eventually find the actual error.

Categories // Automation, SRM, vSphere 6.0 Tags // site recovery manager, srm, vpostgres, VSAN, vSphere Replication

Using Ansible to provision a Kubernetes Cluster on VMware Photon

11.05.2015 by William Lam // 1 Comment

I am always interested in learning and playing with new technologies, solutions and tools. Ansible, a popular configuration management tool which was recently acquired by Red Hat, is one such tool that I have had on my to-do list for some time now. It is quite difficult to find extra free time, and with a new 7-month old, it has gotten even harder. However, in the last week or so I have been waking up randomly at 4-5am, and I figured I might as well put this time to good use and give Ansible a try.

As the title suggests, I will be using Ansible to deploy a Kubernetes Cluster running on top of VMware's Photon OS. The motivation behind this little project came after watching Kelsey Hightower's recorded session at HashiConf on Managing Applications at Scale, which compared HashiCorp's Nomad and Google's Kubernetes (K8s) schedulers. I knew there were already a dozen different ways to deploy K8s, but I figured I would try something new and add a VMware spin to it by using Photon OS.

I had found an outdated reference on setting up K8s in the Photon OS documentation, and though a few of the steps are no longer needed, it provided a good base for creating the Ansible playbook to set up a K8s Cluster. If you are not familiar with Ansible, this getting started guide was quite helpful. Our K8s environment will be a 2-node setup, one node being the Master and the other the Minion. If you are interested in an overview of K8s, be sure to check out the official documentation here.

Step 1 - You will need to deploy at least 2 Photon OS VMs, one for the Kubernetes Master and one for the Minion. This can be done using either the ISO or by deploying the pre-packaged OVA. For more details on how to set up Photon OS, please refer to the documentation here. This should only take a few minutes, as the installation or deployment of Photon OS is pretty quick. In my setup, I have 192.168.1.133 as the Master and 192.168.1.111 as the Minion.

Step 2 - Download and install Ansible on your client desktop. There are several options depending on the platform you plan to use. For more information take a look at the documentation here. In my setup, I will be using a Mac OS X system and you can easily install Ansible by running the following command:

brew install ansible
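
If you are not on a Mac, installing via pip generally works as well, assuming Python and pip are already present on your system:

pip install ansible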

Step 3 - Next, to verify that our installation of Ansible was successful, we will need to create our inventory host file (I called it hosts, but you can name it anything you want) which contains the mappings to our Photon OS VMs. The example below assumes you do not have DNS running in your environment; I am making use of the variable options in the host file to specify friendly names, rather than just the IP Addresses, which will be read in later. If you do have DNS in your environment, you do not need the last section of the file.

[kubernetes_cluster]
192.168.1.133
192.168.1.111

[masters]
192.168.1.133

[minions]
192.168.1.111

[kubernetes_cluster:vars]
master_hostname=photon-master
master_ip=192.168.1.133
minion_hostname=photon-node
minion_ip=192.168.1.111

Step 4 - We will be performing a basic "ping" test to validate that Ansible is in fact working and can communicate with our deployed Photon VMs. Run the following command, which specifies the inventory host file as input:

ansible -i hosts all -m ping --user root --ask-pass

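If everything is wired up correctly, both hosts should come back with a pong. The exact formatting varies by Ansible version, but the output will look roughly like this:

192.168.1.133 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.111 | success >> {
    "changed": false,
    "ping": "pong"
}
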
Step 5 - If the previous step was successful, we can now run our Ansible playbook, which contains the instructions for setting up our K8s Cluster. Download the kubernetes_cluster.yml to your desktop and then run the following command:

ansible-playbook -i hosts --user root --ask-pass kubernetes_cluster.yml

If you want to use SSH keys for authentication and if you have already uploaded the public keys to your Photon VMs, then you can replace --ask-pass with --private-key and specify the full path to your SSH private keys.
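
For example, assuming your private key lives at the default location of ~/.ssh/id_rsa (the path here is just an illustration), the command would look like this:

ansible-playbook -i hosts --user root --private-key ~/.ssh/id_rsa kubernetes_cluster.yml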

Step 6 - Once the Ansible playbook has been successfully executed, you should see a summary at the end showing everything was ok. To verify that our K8s Cluster has been properly set up, we will check the Minion's node status, which should show "Ready". To do so, we will log in to the K8s Master node and run the following command:

kubectl get nodes

You should see that the status field shows "Ready" which means the K8s Cluster has been properly configured.
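
The exact columns vary across Kubernetes versions, but the output should look roughly like the following, with the Minion node reporting Ready:

NAME            LABELS                                 STATUS
192.168.1.111   kubernetes.io/hostname=192.168.1.111   Ready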

At this point you have a basic K8s Cluster setup running on top of VMware Photon. If you are interested in exploring K8s further, there are some nice 101 and 201 official tutorials that can be found here. Another handy reference that I used for creating my Ansible playbook was this article here which provided a way to create loops using the lineinfile param.
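
For those curious, a lineinfile loop in a playbook looks roughly like the snippet below. This is purely an illustrative sketch; the file path and settings are hypothetical and not taken from the actual kubernetes_cluster.yml:

- name: Configure Kubernetes apiserver settings
  lineinfile:
    dest: /etc/kubernetes/apiserver
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  with_items:
    - { regexp: '^KUBE_API_ADDRESS=', line: 'KUBE_API_ADDRESS="--address=0.0.0.0"' }
    - { regexp: '^KUBE_ETCD_SERVERS=', line: 'KUBE_ETCD_SERVERS="--etcd_servers=http://{{ master_ip }}:4001"' }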

Categories // Automation, Cloud Native, vSphere Tags // Ansible, cloud native apps, K8s, Kubernetes, Photon

EMC Project OnRack now RackHD

11.03.2015 by William Lam // 1 Comment

Back in May, EMC announced a new initiative at EMC World called Project OnRack, which had an ambitious goal of providing a new software abstraction layer that would sit on top of existing "industry standards" for server out-of-band management. Standards such as IPMI, CIM, SMI-S and CIM-SMASH, to name just a few, were supposed to help IT administrators manage and operate the life-cycle of their physical servers. Instead, we ended up with even more complexity and inconsistency due to the different implementations of these "industry standards" across vendors, and sometimes even within the same vendor. Trying to keep firmware, BIOS, hardware drivers, etc. up to date across different hardware platforms from the same vendor in a consistent and automated fashion is already painful enough. As if that were not challenging enough, try doing it across a mix of hardware platforms from different vendors and you have just given your operations and datacenter teams a never-ending nightmare.

Frankly, I am pretty surprised that it has taken us this long to finally tackle this problem. This is something we have needed for quite some time now and I still remember the early days as an admin trying to script around the inconsistencies of IPMI to configure things like asset tags and serial numbers across different hardware platforms.

OnRack http://t.co/I6dpMSPgSB Interesting initiative from EMC. Something we've needed for a LONG time! Reminds me of few startups doing same

— William Lam (@lamw) May 7, 2015

In my opinion, having a consistent and programmable interface to this low level of hardware is a critical component to a Software-Defined Datacenter and has often been overlooked. Kudos to EMC for taking on this initiative and more importantly driving this change through open-source and the community in mind.

Since the announcement back in May, things have been pretty quiet about OnRack, until recently that is. I was listening to a recent episode of The Hot Aisle Podcast with guest Brad Maltz of EMC talking about Hyper-Converged Infrastructure. Among the different topics discussed, OnRack was brought up along with dis-aggregated hardware/infrastructure, where individual compute resources can scale up independently of each other. There were a couple of nice tidbits mentioned on the podcast. First, it looks like OnRack, which was the internal EMC project name, has now been renamed to RackHD as the external project name. Second, it looks like the RackHD repo is already on Github with some initial content, including some pretty detailed documentation on the architecture and components which can be found here.

The OnRack project looks to be made up of the following sub-projects per the documentation:

  • on-tftp - NodeJS application providing TFTP service integrated with the workflow engine
  • on-http - internal HTTP REST API interfaces integrated with the workflow engine
  • on-syslog - syslog endpoint integrated to feed data into workflow engine
  • on-taskgraph - NodeJS application providing the workflow engine
  • on-dhcp-proxy - NodeJS application providing DHCP proxy support integrated into the workflow engine
  • onserve - OnServe Engine
  • core library - Core libraries in use across NodeJS applications
  • task library - NodeJS task library for the workflow engine
  • tools - Useful dev tools for running locally
  • webui - Initial web interfaces to some of the APIs - multiple interfaces embedded into a single project
  • integration tests - Integration tests with code for deploying and running those tests, as well as the tests themselves
  • statsd - A local statsD implementation that makes it easy to deploy on a local machine for capturing application metrics

Brad mentioned that many of the Github repos are still marked private as they are still working through the process of releasing RackHD to the public. It looks like RackHD and all relevant repos are now open source as of Monday Nov 2nd; for more details please visit the Github repo here. I am definitely excited to see how this project will evolve with the larger community, and to see some of the new innovations which will be unlocked now that this barrier has been removed. Hopefully we will see positive collaboration from other hardware vendors, which will help us move forward and finally solve this problem once and for all! I can already see huge benefits for software-only vendors like VMware, which could integrate RackHD directly into provisioning tools like Auto Deploy or configuration management tools like Puppet, Chef and Ansible. It will also be interesting to see whether other startups in this area, such as NodePrime and another stealth company working on a similar problem, will end up leveraging RackHD.

Categories // Automation Tags // cim, converged infrastructure, disaggregated infrastructure, EMC, hyper-converged infrastructure, ipmi, OnRack, RackHD, SMASH, SMI-S

