I recently published an article demonstrating how to inject OVF properties into the VCSA and other virtual appliances when deploying directly onto ESXi using an unreleased version of ovftool (4.0). A fellow reader, known as VirtualJMills on Twitter, left an interesting comment describing an alternate solution which I thought was actually pretty clever!
Quick stats for the VSAN HCL
I noticed a new blog post this morning from Wade Holmes on an update to the VSAN HCL, and I thought it might be useful to provide some quick stats on all the partners who have supported components listed on the VSAN HCL, such as storage controllers, SSDs and MDs (magnetic disks). As of today (06/13/14), the information below is the latest from the VSAN HCL. I will adjust the Google doc as updates are made to the VSAN HCL.
Disclaimer: The VMware VSAN HCL should still be used as the official source when selecting components for your VSAN environment.
Total VSAN Storage Controllers: 89
GDoc for All VSAN Controllers - https://docs.google.com/spreadsheets/d/1FHnGAHdQdCbmNJMyze-bmpTZ3cMjKrwLtda1Ry32bAQ
Vendor | Controllers |
---|---|
Cisco | 2 |
Dell | 5 |
Fujitsu | 11 |
HP | 7 |
IBM | 6 |
Intel | 18 |
LSI | 37 |
SuperMicro | 3 |
Note: If you would like to help contribute to the "Community" VSAN storage controller queue depth list, please take a look at this article for more details.
Total VSAN SSDs: 110
GDoc for All VSAN SSDs - https://docs.google.com/spreadsheets/d/1FHnGAHdQdCbmNJMyze-bmpTZ3cMjKrwLtda1Ry32bAQ/edit#gid=858526558
Vendor | SSDs |
---|---|
Cisco | 5 |
Dell | 15 |
EMC | 5 |
Fujitsu | 4 |
Fusion-IO | 15 |
Hitachi | 9 |
HP | 15 |
IBM | 9 |
Intel | 12 |
Micron | 7 |
Samsung | 3 |
SanDisk | 6 |
Virident Systems | 5 |
Total VSAN MDs: 97
GDoc for All VSAN MDs - https://docs.google.com/spreadsheets/d/1FHnGAHdQdCbmNJMyze-bmpTZ3cMjKrwLtda1Ry32bAQ/edit#gid=1993745998
Vendor | MDs |
---|---|
Cisco | 8 |
Dell | 20 |
Fujitsu | 13 |
Hitachi | 1 |
HP | 19 |
IBM | 20 |
Lenovo | 3 |
Seagate | 13 |
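If you would like to recompute these per-vendor tallies yourself, here is a quick Python sketch that pulls one tab of the Google doc above as an exported CSV and counts entries per vendor. The gid value comes from the links above (858526558 for the SSD tab, 1993745998 for the MD tab), and the "Vendor" column name is an assumption about how the sheet is laid out, so adjust as needed.

```python
# Rough sketch: tally per-vendor counts from one tab of the shared Google doc.
# Assumes the sheet allows CSV export and has a "Vendor" column (adjust if not).
import csv
import io
import urllib.request
from collections import Counter

SHEET_ID = "1FHnGAHdQdCbmNJMyze-bmpTZ3cMjKrwLtda1Ry32bAQ"
GID = "858526558"  # SSD tab per the link above; swap in the gid for the tab you want

url = ("https://docs.google.com/spreadsheets/d/%s/export?format=csv&gid=%s"
       % (SHEET_ID, GID))

with urllib.request.urlopen(url) as resp:
    rows = list(csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")))

# Count how many listed components each vendor has
counts = Counter(row["Vendor"] for row in rows if row.get("Vendor"))

for vendor, total in sorted(counts.items()):
    print("%s: %d" % (vendor, total))
print("Total: %d" % sum(counts.values()))
```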
Two coredump partitions in ESXi 5.5?
A couple of days back I had to re-install ESXi on a physical host for some troubleshooting purposes, and while looking at the partitions on the disk using ESXCLI, I noticed the fresh ESXi installation had created two coredump partitions.
I was quite surprised to see two, since normally you would just have one configured. I even asked a colleague if he had ever seen this before and he had not, so I wanted to double check that there were in fact two coredump partitions being created, which I verified by using partedUtil.
As you can see from the screenshot above, there are definitely two coredump partitions. I took a look at our vSphere documentation, but did not find any mention of this. I decided to look internally and found that this is actually a new behavior that was introduced in ESXi 5.5. From what I can tell, the second coredump partition, which is 2.5GB, was created to ensure there is sufficient space to handle a coredump from ESXi hosts configured with a huge amount of memory (up to 4TB). This new coredump partition is only created on a fresh ESXi install; for upgrade scenarios, the original partition structure is preserved. I suspect that even on a fresh install, the original coredump partition was kept for potential backwards compatibility.
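For reference, here is a rough sketch of the checks I ran, wrapped in a bit of Python you could run from the ESXi Shell. The device name is just a placeholder for your boot disk, and matching on "vmkDiagnostic" assumes that is how partedUtil labels coredump partitions on your build, so treat this as a starting point rather than a polished script.

```python
# Sketch: list the configured coredump partition via esxcli, then dump the
# partition table of the boot disk with partedUtil and look for the
# coredump (vmkDiagnostic) entries. Device name below is only a placeholder.
import subprocess

def run(cmd):
    """Run a shell command and return its stdout as text."""
    return subprocess.check_output(cmd, shell=True).decode()

# Which coredump partition is currently active/configured
print(run("esxcli system coredump partition list"))

# Raw partition table of the boot disk (replace with your device)
device = "/vmfs/devices/disks/mpx.vmhba0:C0:T0:L0"
table = run("partedUtil getptbl %s" % device)

# Coredump partitions are assumed to show up as 'vmkDiagnostic' entries
diag = [line for line in table.splitlines() if "vmkDiagnostic" in line]
print("coredump (vmkDiagnostic) partitions found: %d" % len(diag))
for line in diag:
    print(line)
```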
This definitely made sense given the reason. It also raises an interesting operational point: though upgrades may be preferred, there are good reasons to perform a fresh install instead. In this case, to avoid breaking past requirements/assumptions, the upgrade could not simply expand or create a larger coredump partition to satisfy the new requirements. This is actually not the first instance of this; here are two additional examples in which a fresh installation would have potentially yielded a more optimal environment: