In the previous article, I provided some background on the origin of the project. In this article, we will now focus on the technical details and how the solution actually works.
Hardware
This solution was originally developed against an Intel NUC, but I designed it to be generic so that it can run on any system that meets the minimum requirement: two disks (an HDD and an SSD, or two SSDs) that are used to create a vSAN datastore.
Here is the BOM for the Intel NUC that we used:
- 1 x Intel NUC 6th Gen NUC6i3SYH (supports 2 drives: M.2 & 2.5)
- 2 x Crucial 16GB DDR4
- 1 x Samsung SM951 NVMe 128GB M.2 for "Caching" Tier
- 1 x Samsung 850 EVO 500GB 2.5 SATA3 for "Capacity" Tier
During the Sydney VMUG, we did a live demo using an Intel NUC. Prior to the Melbourne VMUG, fellow VMware colleague Tai Ratcliff reached out and offered to let us borrow his Supermicro kit for the demo, which was great as the hardware was much beefier than the NUC. Thanks Tai!
I had already been hearing great things about the E200-8D platform, but I had not had the opportunity to get my hands on the system to play with it. After only spending a little bit of time with the platform while prepping for the VMUG event, I can see why it is a pretty slick system for a vSphere/vSAN-based home lab, especially if you need to go beyond 32GB of memory, which is where the Intel NUCs currently max out.
The other appealing features of this platform are that it comes with 2x10GbE, 2x1GbE and an IPMI interface for remote management, which is a huge benefit since you do not need to connect an external monitor and keyboard. The system is also Xeon-based with 6 cores and can go all the way up to 128GB of memory. Tai had also recently published a blog article comparing the Supermicro E200-8D and the Intel NUC, which I think is worth a read if you are deciding between these two platforms.
Note: If you are considering purchasing the Supermicro E200-8D or any other system for that matter, check out this exclusive vGhetto discount here.
Here is the BOM for the Supermicro E200-8D that we used:
- 1 x SYS-E200-8D
- 4 x 16GB DDR4
- 1 x 128GB SSD NVMe for "Caching" Tier
- 1 x 1TB SSD for "Capacity" Tier
Software
The following VMware products are automatically deployed and configured as part of the solution:
- ESXi 6.5a
- VCSA 6.5a
- vSAN 6.5a
The following tools were used to build this solution:
- ESXi Scripted Install (Kickstart)
- PowerCLI Multi-platform (MP)
- vSAN Management SDK for Python
- pyvmomi (vSphere SDK for Python)
- PhotonOS
- Docker
The following techniques/resources were used to build this solution:
- http://www.virtuallyghetto.com/2017/01/copying-files-from-a-usb-fat32-or-ntfs-device-to-esxi.html
- http://www.virtuallyghetto.com/2016/10/5-different-ways-to-run-powercli-script-using-powercli-core-docker-container.html
- http://www.virtuallyghetto.com/2016/03/quick-tip-vsan-6-2-vsphere-6-0-update-2-now-supports-creating-all-flash-diskgroup-using-esxcli.html
- http://www.virtuallyghetto.com/2014/10/how-to-automate-vm-deployment-from-large-usb-keys-using-esxi-kickstart.html
- http://www.virtuallyghetto.com/2014/07/esxi-5-5-kickstart-script-for-setting-up-vsan.html
- http://www.virtuallyghetto.com/2013/09/how-to-bootstrap-vcenter-server-onto_9.html
- http://www.virtuallyghetto.com/2011/01/how-to-extract-host-information-from.html
USB Configuration
In addition to the hardware where the SDDC will be deployed, the only other physical component required is a USB key that is at least 6GB in size. I recommend a USB 3.0 key if you can, as it will definitely speed up the deployment compared to older USB versions.
The USB device will contain two partitions:
- 1st Partition (BOOT)
- 2GB, FAT16, which houses the bootable ESXi installer as well as the ESXi Kickstart configuration file where all the magic happens. The installation will overwrite the USB device itself; this is expected, so there are no additional devices required
- 2nd Partition (PAYLOAD)
- >= 4GB, FAT32, which houses a special DeployVM based on PhotonOS as well as the VCSA ISO that is used to deploy the VCSA (a sketch of this partition layout follows the list)
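For reference, here is a minimal sketch of how this two-partition layout could be created by hand on a Linux system, assuming the USB key shows up as /dev/sdb and that parted and dosfstools are installed (the device path is a placeholder, and the fully automated preparation is what Part 3 covers):

```python
# Hypothetical sketch: carve a USB key into the BOOT (FAT16) and PAYLOAD (FAT32)
# partitions described above. /dev/sdb is an assumption; adjust for your system.
import subprocess

USB_DEVICE = "/dev/sdb"  # placeholder: your USB key may enumerate differently

commands = [
    # Fresh MBR partition table
    ["parted", "-s", USB_DEVICE, "mklabel", "msdos"],
    # 1st partition (BOOT): ~2GB FAT16 for the ESXi installer + Kickstart file
    ["parted", "-s", USB_DEVICE, "mkpart", "primary", "fat16", "1MiB", "2GiB"],
    # 2nd partition (PAYLOAD): rest of the key, FAT32 for the DeployVM + VCSA ISO
    ["parted", "-s", USB_DEVICE, "mkpart", "primary", "fat32", "2GiB", "100%"],
    # Mark the BOOT partition as bootable
    ["parted", "-s", USB_DEVICE, "set", "1", "boot", "on"],
    # Create the filesystems with matching labels
    ["mkfs.vfat", "-F", "16", "-n", "BOOT", USB_DEVICE + "1"],
    ["mkfs.vfat", "-F", "32", "-n", "PAYLOAD", USB_DEVICE + "2"],
]

for cmd in commands:
    subprocess.run(cmd, check=True)
```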
The content and layout of the USB key are also something that I have fully automated. Yes, I have a script that automates the automated installer 😉 We will go over that in greater detail in Part 3 when we take a look at how to actually consume this in your own environment.
Once our USB key has been prepared, the last step is to simply plug it into your system and power it on. This is where you go get a beer or five and come back ~45 minutes later.
For those of you interested in the details, the provisioning process consists of two phases, which are described below.
Provisioning Phase1
The first thing that happens is that the ESXi installer boots up and loads our Kickstart configuration file. It then executes the %pre section, which identifies the two free disks and claims them to bootstrap our vSAN Datastore. The script can handle both Hybrid and All-Flash configurations, and this is a configurable setting within the Kickstart file.
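To give you an idea of what this step involves, below is a minimal sketch of the single-node vSAN bootstrap technique referenced in the resources above. This is not the solution's exact %pre script; the device names are placeholders, and the real script discovers the two free disks on its own.

```python
# Hypothetical sketch of the vSAN bootstrap logic performed during Phase 1.
# Device names are placeholders for the cache- and capacity-tier disks.
import os

CACHE_DISK = "t10.NVMe____SAMSUNG_XXXX"     # assumption: your cache-tier device
CAPACITY_DISK = "t10.ATA_____SAMSUNG_XXXX"  # assumption: your capacity-tier device
ALL_FLASH = True                            # configurable, as in the Kickstart file

# Relax the default vSAN storage policy so a single node can satisfy it
# (force provisioning, since there is only one host at this point)
policy = '(("hostFailuresToTolerate" i1) ("forceProvisioning" i1))'
for policy_class in ["vdisk", "vmnamespace", "vmswap"]:
    os.system("esxcli vsan policy setdefault -c %s -p '%s'" % (policy_class, policy))

# Create a new single-node vSAN cluster
os.system("esxcli vsan cluster new")

# For an All-Flash configuration, tag the capacity device accordingly
if ALL_FLASH:
    os.system("esxcli vsan storage tag add -d %s -t capacityFlash" % CAPACITY_DISK)

# Claim the two disks to form the disk group backing the vSAN Datastore
os.system("esxcli vsan storage add -s %s -d %s" % (CACHE_DISK, CAPACITY_DISK))
```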
Next, it copies the content from our PAYLOAD partition onto the vSAN Datastore and encodes the specific configuration into the DeployVM, which will be used in Phase 2. If we did not copy the content off first, our PAYLOAD partition would automatically be wiped when ESXi installs itself onto the same USB key.
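The exact encoding is part of the solution's code, but conceptually it can be as simple as appending guestinfo.* entries to the DeployVM's VMX file after it has been copied over. Here is a hypothetical sketch; the path, key names and values are all made up for illustration:

```python
# Hypothetical sketch: pass deployment settings to the DeployVM by appending
# guestinfo.* entries to its VMX file (path and key names are placeholders).
VMX_PATH = "/vmfs/volumes/vsanDatastore/DeployVM/DeployVM.vmx"

config = {
    "guestinfo.vcsa.ipaddress": "192.168.1.100",  # example values only
    "guestinfo.vcsa.netmask": "255.255.255.0",
    "guestinfo.vcsa.gateway": "192.168.1.1",
    "guestinfo.vcsa.hostname": "vcsa.example.com",
}

with open(VMX_PATH, "a") as vmx:
    for key, value in config.items():
        vmx.write('%s = "%s"\n' % (key, value))
```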
Lastly, ESXi is installed and finishes up by performing a few configuration tasks in the %post section of the Kickstart, and then the system reboots.
Provisioning Phase2
Once ESXi boots up, a %firstboot script runs which registers the DeployVM that we copied earlier and automatically powers it on.
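Conceptually, that %firstboot logic boils down to something like the following sketch (the DeployVM path is a placeholder, and the real script does more than this):

```python
# Hypothetical sketch of the %firstboot step: register the DeployVM that was
# copied to the vSAN Datastore and power it on.
import os
import subprocess

DEPLOY_VMX = "/vmfs/volumes/vsanDatastore/DeployVM/DeployVM.vmx"  # placeholder path

# Register the VM with the host; vim-cmd prints the resulting VM ID
vmid = subprocess.check_output(["vim-cmd", "solo/registervm", DEPLOY_VMX]).decode().strip()

# Power it on so the rc.local script inside the DeployVM can take over
os.system("vim-cmd vmsvc/power.on %s" % vmid)
```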
Within the DeployVM, there is a startup script (rc.local) which automatically runs after 60 seconds. This script reads the encoded configuration that it received from the Kickstart script via guestinfo.* properties and uses that information to create the vCenter Server Appliance (VCSA) JSON configuration file. The JSON file is then passed to the VCSA CLI Installer, which initiates an automated deployment of the VCSA.
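Inside the DeployVM, the guestinfo.* values can be read back through the VMware Tools RPC channel and turned into the JSON template that the VCSA CLI Installer expects. The sketch below is illustrative only: the key names, paths and the heavily trimmed JSON structure are assumptions, not the solution's exact template.

```python
# Hypothetical sketch of the DeployVM's rc.local logic: read the guestinfo.*
# values pushed in by the Kickstart and feed them to the VCSA CLI Installer.
import json
import subprocess

def get_guestinfo(key):
    """Read a guestinfo property via the VMware Tools RPC channel."""
    return subprocess.check_output(
        ["vmtoolsd", "--cmd", "info-get guestinfo.%s" % key]).decode().strip()

# Illustrative (incomplete) subset of a VCSA 6.5 embedded-deployment JSON template
vcsa_config = {
    "new.vcsa": {
        "esxi": {
            "hostname": get_guestinfo("vcsa.esxi_host"),
            "username": "root",
            "password": get_guestinfo("vcsa.esxi_password"),
            "deployment.network": "VM Network",
            "datastore": "vsanDatastore",
        },
        "appliance": {
            "deployment.option": "tiny",
            "name": "vcenter",
            "thin.disk.mode": True,
        },
    }
}

with open("/root/vcsa-deploy.json", "w") as f:
    json.dump(vcsa_config, f, indent=2)

# Kick off the automated VCSA deployment (ISO assumed to be mounted at /mnt/vcsa)
subprocess.call(["/mnt/vcsa/vcsa-cli-installer/lin64/vcsa-deploy", "install",
                 "--accept-eula", "--no-esx-ssl-verify", "/root/vcsa-deploy.json"])
```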
Once vCenter Server is up and running, the rc.local script runs a vSAN Management SDK for Python script which sets up the vSAN Cluster and enables Dedupe/Compression (applicable to All-Flash vSAN configurations only) on the pre-created vSphere Cluster, based on the encoded configuration the user provided in the Kickstart. Lastly, PowerCLI-MP is launched via a Docker Container to finish up the vCenter Server configuration, which includes adding the physical ESXi host to the vCenter Server that we just deployed. The reason we had to use a couple of different tools is that not all of the vSAN Management APIs are currently available in PowerCLI-MP. Although we could have kept everything in one language, our initial goal was to rely on PowerCLI-MP for the majority of the work and only use other tools where necessary. Hopefully we will have parity between PowerCLI for Windows and PowerCLI-MP in the very near future, so this can be further simplified.
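For those curious what the Dedupe/Compression portion roughly looks like with the vSAN Management SDK for Python, here is a minimal sketch based on the publicly documented API samples. The connection details and cluster name are placeholders, and this is not the solution's exact script:

```python
# Hypothetical sketch: enable Dedupe/Compression on an All-Flash vSAN cluster
# using the vSAN Management SDK for Python.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim
import vsanmgmtObjects  # registers the vSAN managed object types with pyVmomi
import vsanapiutils

context = ssl._create_unverified_context()  # lab only: skip SSL verification
si = SmartConnect(host="vcenter.example.com",            # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=context)     # placeholder credentials

# Locate the pre-created vSphere Cluster (name is a placeholder)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = [c for c in view.view if c.name == "VSAN-Cluster"][0]

# Retrieve the vSAN Management API managed objects exposed by vCenter
vc_mos = vsanapiutils.GetVsanVcMos(si._stub, context=context)
vccs = vc_mos["vsan-cluster-config-system"]

# Reconfigure the cluster with Dedupe/Compression enabled (All-Flash only)
spec = vim.VimVsanReconfigSpec(
    modify=True,
    dataEfficiencyConfig=vim.VsanDataEfficiencyConfig(
        dedupEnabled=True, compressionEnabled=True))
task = vccs.VsanClusterReconfig(cluster, spec)
vsanapiutils.WaitForTasks([task], si)
```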
At this point, the deployment has completed and we have successfully automated the deployment and configuration of ESXi, vSAN and the VCSA, all without breaking a sweat or doing anything more than powering on the system. If you are using a USB 3.0 key, it should take roughly 45 minutes from the time you power on the system to the time you can log in to the vSphere Web Client, but YMMV. Pretty slick, right? We think so! 😀
There are definitely a lot more details behind the solution, but hopefully this gave you a fairly good understanding of all the moving pieces, and once the code is available you can take a closer look at the implementation. As you can imagine, this solution took literally months of trial and error and continuously going back to the drawing board to evaluate different methods. Even when something worked, I would come up with new ways of performing the task and want to evaluate them to see which worked best. I wanted to take a moment to give a huge thanks to both Timo Sugliani and Alan Renouf for their help on this project. There were several times where I was literally just stuck, and I was able to bounce ideas off of them or they gave me new ideas I had not thought about before. I know you guys definitely saved me hours of work 🙂
Lastly, what has been described here is really just one way in which you can deploy the environment. This example works very well for completely isolated environments where you do not want to rely on any external dependencies. In practice, this approach may be more restrictive since the code and binaries are fairly static. Another method that could be implemented is to have the binaries, and even the code, centrally stored, say behind a simple HTTP server. Some generic bootstrap code could run in the DeployVM and pull the script remotely, with instructions on what to do based on pre-defined rules set by the admins. Perhaps this will inspire other folks to come up with other creative solutions; I just thought I would give you some food for thought.
In Part 3, we will finally get our hands on the code. I know you are probably anxious, but I felt it was important to provide some background on how the solution works before jumping straight into it, even though it is pretty straightforward to use.