A couple of days back I had to re-install ESXi on a physical host for some troubleshooting, and while looking at the disk partitions using ESXCLI, I noticed the fresh ESXi installation had created two coredump partitions.
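For those who want to check on their own host, here is the type of ESXCLI command I was using, along with some illustrative output (the device name, sector values, and sizes below are placeholders, yours will differ):

    ~ # esxcli storage core device partition list
    Device               Partition  Start Sector  End Sector  Type        Size
    -------------------  ---------  ------------  ----------  ----  ----------
    mpx.vmhba1:C0:T0:L0          7       1032224     1257471    fc   115326976
    mpx.vmhba1:C0:T0:L0          9       1843200     7086079    fc  2684354560

Partition type fc is the vmkDiagnostic (coredump) partition type, so the two fc entries above are the two coredump partitions I was seeing.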
I was quite surprised to see two, since normally you would just have one configured. I even asked a colleague if he had ever seen this before and he had not, so I wanted to double-check that there were in fact two coredump partitions being created, which I verified by using partedUtil.
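Here is the partedUtil command I used to verify, with illustrative output (again, the device name and sector values are placeholders):

    ~ # partedUtil getptbl /vmfs/devices/disks/mpx.vmhba1:C0:T0:L0
    gpt
    1305 255 63 20971520
    1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
    5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
    6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
    7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
    8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
    9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

The two entries labeled vmkDiagnostic are the coredump partitions.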
As you can see from the output above, there are definitely two coredump partitions. I took a look at our vSphere documentation but did not find any mention of this, so I decided to look internally and found that this is actually a new behavior introduced in ESXi 5.5. From what I can tell, the second coredump partition, which is 2.5GB, was created to ensure there is sufficient space to handle a coredump on ESXi hosts configured with very large amounts of memory (up to 4TB). This new coredump partition is only created on a fresh ESXi install; in upgrade scenarios the original partition structure is preserved. I suspect that even on a fresh install, the original coredump partition is kept for potential backwards compatibility.
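If you are wondering which of the two partitions the host will actually dump to, you can check with ESXCLI. In this illustrative output (placeholder device name again), the larger 2.5GB partition is the one shown as active and configured:

    ~ # esxcli system coredump partition list
    Name                   Path                                        Active  Configured
    ---------------------  ------------------------------------------  ------  ----------
    mpx.vmhba1:C0:T0:L0:7  /vmfs/devices/disks/mpx.vmhba1:C0:T0:L0:7    false       false
    mpx.vmhba1:C0:T0:L0:9  /vmfs/devices/disks/mpx.vmhba1:C0:T0:L0:9     true        true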
This definitely made sense given the reason. It also raises an interesting operational point: though upgrades may be preferred, there are good reasons to perform a fresh install instead. In this case, to ensure past requirements and assumptions were not broken, the upgrade could not simply expand the existing coredump partition or create a larger one to adhere to the new requirements. This is actually not the first instance of this behavior; here are two additional examples in which a fresh installation would potentially have yielded a more optimal environment: