Thunderbolt 3 (TB3), and eventually USB 4, is a really fascinating technology and I believe it still has so much untapped potential, especially when looking at Remote/Branch Office (ROBO), Edge and IoT types of deployments. TB3 was initially limited to Apple-based platforms, but in the last couple of years, adoption has been picking up across a number of PC desktops/laptops including the latest generations of Intel NUCs, which are quite popular for vSphere/vSAN/NSX Home Labs. My hope with USB 4 is that in the near future, we will start to see servers with this interface show up in the datacenter 🙂
In the meantime, I have been doing some work with TB3 from a home lab standpoint. Some of you may have noticed my recent work on enabling Thunderbolt 3 to 10GbE for ESXi, and it should be no surprise that the next logical step was TB3 storage. Using a Thunderbolt interface to connect to external storage, usually Fibre Channel, is something many of our customers have been doing for quite some time. In fact, I have a blog post from a few years back which goes over some of the solutions customers have implemented, the majority use case being virtualizing macOS on ESXi for iOS/macOS development. These solutions were usually not cheap and involved a sizable amount of infrastructure (e.g. storage arrays, network switches, etc.) but worked very well for large vSphere/macOS-based environments.
Putting aside the TB interface for a second, another exciting development in the last few years is the introduction of NVMe SSD devices; combined with the M.2 form factor, this makes for a killer combo in terms of performance, power and footprint. M.2 NVMe SSDs are almost a de facto standard these days for any vSphere/vSAN/NSX Home Lab and can really give your lab a boost in performance! Last year, I wrote about using a USB-C based enclosure which also supports M.2 NVMe devices and could be used for both a traditional VMFS datastore as well as for vSAN. In addition to requiring some tricks to get this working, you sacrifice the ability to provide pass-through of other USB-based devices, but more importantly, you do not get the full benefits of the NVMe SSDs: the maximum bandwidth of USB 3.1 Gen 2 (USB-C) is 10Gbps, which gives you a transfer rate of ~700-800 MB/s. With TB3, you get a whopping 40Gbps and a transfer rate of ~2750 MB/s, enabling you to take full advantage of what NVMe SSDs can offer.
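To make the comparison concrete, here is a quick back-of-the-envelope calculation (my own illustration, not from any official spec sheet); the TB3 figure reflects the ~22Gbps of the link that is actually available to PCIe data:

```python
# Rough conversion from link speed (Gbit/s) to transfer rate (MB/s).
# Real-world numbers are lower due to encoding and protocol overhead.

def line_rate_mb_s(gbps: float) -> float:
    """Divide Gbit/s by 8 bits per byte, using 1 Gb = 1000 Mb."""
    return gbps * 1000 / 8

print(line_rate_mb_s(10))  # USB 3.1 Gen 2: 1250 MB/s raw -> ~700-800 MB/s observed
print(line_rate_mb_s(40))  # TB3 full link: 5000 MB/s raw (shared with DisplayPort, etc.)
print(line_rate_mb_s(22))  # TB3 PCIe data is capped at ~22 Gbps -> ~2750 MB/s
```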
With the adoption of TB3 continuing to grow, the ecosystem of TB3 peripherals is also growing and innovating. While doing some research online, I came across a number of TB3 enclosures that finally support customizable storage options using M.2 NVMe SSDs. Historically, TB storage enclosures were fixed capacity that could not be modified, they were very bulky and they also came with an insane price tag. Overall, Thunderbolt-based devices are still pretty expensive and the solution shared here is definitely not focused on cost as a core benefit.
Below are 6 different TB3 storage enclosures (3 of which I have tried) supporting single, dual and quad M.2 NVMe SSDs that you can use with ESXi for both VMFS and vSAN; no additional drivers or tweaks are required. These devices can be consumed purely as external storage or used to create a "hybrid" vSAN datastore comprised of NVMe SSDs from both a system like an Intel NUC (both half height/full height) or Skull/Hades Canyon and NVMe SSDs from these TB3 enclosures. For those looking for maximum storage performance, extending an initial investment or having a modular and mobile VMFS/vSAN datastore for travel, this may be an interesting solution.
UPDATE (11/17/20) - Here is an additional single M.2 NVMe enclosure that is pretty inexpensive compared to the ones listed below: OWC Envoy Express Thunderbolt 3 ($79 USD)
Single M.2 NVMe Enclosure
Both the Trebleet and TEKQ are small and portable; the Trebleet is slightly bigger than the TEKQ. Other than the TEKQ, all other enclosures do NOT include an M.2 SSD. Although the TEKQ includes an SSD (Phison), unfortunately it is not recognized by ESXi, so you will need to install your own M.2 SSD such as a Samsung or Intel. If you are using a Crucial M.2 NVMe, like I was, you may run into issues getting ESXi to recognize the device; if you do, check out this blog post for the workaround. Overall, these are great for providing additional storage and can easily be moved from one host to another. For those wanting a super low power or mobile environment, you can construct a vSAN datastore using the half-height NUCs!
- Trebleet Thunderbolt 3 enclosure ($169-199 USD)
Dual M.2 NVMe Enclosure
Prior to a couple of weeks ago, I was not even aware that multi-M.2 NVMe TB3 enclosures were even a thing until I randomly stumbled onto a company called Netstor, which produces a number of TB3 storage solutions including the NA611TB3. The first thing that stood out to me, beyond the number of supported M.2 devices, was just how small this device is (it fits in the palm of your hand) and it includes built-in cooling. Although it is not cheap, especially after factoring in the cost of two M.2 devices, this could be an interesting solution for those wanting to add additional storage or have a completely "remote" vSAN that can easily be attached to an ESXi host, great for travel and demo purposes.
I was very fortunate to have been able to get my hands on an evaluation unit directly from Netstor. Below are some pictures of the enclosure; on the back of the unit, it includes two TB3 ports, and this is where things get interesting. With TB, you can actually "daisy-chain" multiple TB devices and access all of them from a single system! This Netstor unit can connect up to 6 other TB3 devices, so if you really want to go crazy, you can have up to 12 NVMe SSDs and break that up into multiple diskgroups.
- Netstor NA611TB3 ($299 USD)
Here is the storage adapter view for the Netstor unit in the vSphere H5 Client; each M.2 device will have its own adapter.
You can either set up two VMFS volumes or simply create a single vSAN datastore as shown in the screenshot below.
Here is a screenshot of daisy-chaining the TEKQ TB3 enclosure with the Netstor, and ESXi simply sees all three NVMe SSDs!
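If you prefer to verify this from the API rather than the H5 Client, below is a minimal pyVmomi sketch that lists the storage adapters and disks a host sees; each M.2 SSD in the enclosure should show up with its own adapter. The hostname and credentials are placeholders for your own environment:

```python
# Minimal pyVmomi sketch: enumerate storage adapters and disks on an
# ESXi host to confirm the enclosure's NVMe SSDs are visible.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only; validate certs in production
si = SmartConnect(host="esxi.lab.local", user="root",
                  pwd="VMware1!", sslContext=ctx)

# Grab the first (only) host when connecting directly to ESXi
host = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True).view[0]

storage = host.config.storageDevice
for hba in storage.hostBusAdapter:
    print(f"Adapter: {hba.device} ({hba.model})")

for lun in storage.scsiLun:
    if isinstance(lun, vim.host.ScsiDisk):
        size_gib = lun.capacity.block * lun.capacity.blockSize / 1024**3
        print(f"Disk: {lun.displayName} - {size_gib:.0f} GiB")

Disconnect(si)
```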
Quad M.2 NVMe Enclosure
In case the dual M.2 unit was not enough, both Netstor and OWC also make a quad M.2 unit! One thing that may stand out immediately is the price difference, but something that is unique to the Netstor unit, and looks to be an industry first, is the inclusion of a PCIe switch directly within the enclosure. What this means is that each M.2 device is guaranteed PCIe 3.0 x2 bandwidth of up to 1600 MB/s, giving you maximum performance from your NVMe devices (see the quick math after the list below). You can carve up the devices however you like, whether that is a couple of VMFS volumes + vSAN or creating two vSAN diskgroups. I did not get my hands on either of the quad M.2 units, but they are expected to just work like the others, and both of these enclosures include two TB3 ports which can be used to daisy-chain additional TB3 storage or network devices.
- OWC Express 4M2 ($299 USD)
- Netstor NA622TB3 ($529 USD)
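As a rough sanity check on those numbers (my own math, not from Netstor's spec sheet): the PCIe switch guarantees each drive its x2 link, but concurrent I/O across all four drives still funnels through the single TB3 uplink.

```python
# PCIe 3.0 delivers roughly 985 MB/s per lane after 128b/130b encoding.
PCIE3_LANE_MB_S = 985

per_drive  = 2 * PCIE3_LANE_MB_S   # x2 per M.2 slot: ~1970 MB/s raw,
                                   # ~1600 MB/s effective per Netstor
all_drives = 4 * per_drive         # ~7880 MB/s if all four run flat out
tb3_uplink = 22 * 1000 / 8         # ~2750 MB/s of TB3 PCIe bandwidth

print(f"Aggregate drive capability: {all_drives} MB/s")
print(f"TB3 uplink ceiling:         {tb3_uplink:.0f} MB/s")
# Single-drive performance is great; four drives at once are uplink-bound.
```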
Will,
I actually ordered the OWC unit. I had been looking at it for a while. It only has a max of 2800 MB/s, and in order to configure anything other than RAID 0 and 1 you need to purchase the SoftRAID Pro software for an additional $179.
Thanks for the nice post!
Other than building a vSAN array on this and sharing that out to a cluster of hosts (fine until the host it's attached to goes down), is there any way to turn this into shared storage?
No, Thunderbolt is a DAS-type technology. It would have been really cool to be able to hook up two hosts to the Netstor/OWC, which has 2xTB3 ports, but sadly that won't work. For multiple hosts to have access, you'd usually connect to a TB array which provides Ethernet connectivity, which would then allow for multi-host connections. Check out the TB3 storage article, which gives some examples of how customers have achieved this.
Good read, thank you!
With the increasing popularity of TB3, there are also basic TB3-based external PCIe chassis (intended for the eGPU market) that can host more conventional PCIe NVMe adapters, such as the classic Amfeltec Squid quad NVMe adapter card with a PCIe switch built in.
Of note, Amfeltec appears to have released a new full-height Squid card supporting 6 NVMe drives, so the new king of the hill is that hexa card (though it requires a full-height PCIe slot).
I guess you would carve up the overall bandwidth if you daisy-chain these together.
I had the Trebleet enclosure for only 24 hours before sending it back to Amazon. While it is a beautiful little enclosure, it does nothing to keep the NVMe chip cool.
When performing read/write tests, the memory chip would slow down in less than 45 seconds of use due to thermal throttling. The case would get very warm to the touch.
When you are recommending enclosures, especially for shared storage under continuous operation, cooling capacity is a major factor.
I have no experience with the other enclosures but the Trebleet is a non-starter.
Nice overview, thanks!
I had a hard time finding vendors that offer a true TB to NVMe experience.
What I bought was an i-tec case with an Intel M.2 card (2TB) that was also quite affordable at 330 EUR total:
https://i-tec.cz/en/produkt/tb3mysafem2-2/
https://www.alternate.de/html/product/1480341
It performs quite well on a MBP, writing at about 1 GB/s.
Cheers
I wonder why there are barely any enclosure-only options to buy. Can it cause any problems to replace the stock NVMe with any other? Also, why are they so expensive in comparison with USB 3.1 enclosures?
I am interested in the Akitio Thunder3 Quad Mini and would like to use it with my Intel NUC. I am using ESXi 6.7u2. Has anybody tested the Akitio device with ESXi?
Beware the performance limitations of multi-M.2 enclosures, as they all significantly limit single-drive throughput because all 4 PCIe lanes are not shared/switched to all drives. You can't get full drive throughput when using a single drive in a multi-drive enclosure like those listed above... unfortunately. William mentions the limit, sort of. What I haven't found is a multi-drive enclosure with a PCIe switch that provides x4 lanes to every drive.
As to why there are so few TB3 enclosures, look up Intel's restriction on bus-powered Thunderbolt 3 enclosures (short version for the lazy amongst you: Intel *requires* those enclosures include a drive, i.e. they can't be sold as just an enclosure). Intel's reasoning is self-serving and no longer reasonable (if it ever was). If the enclosure is AC powered, then Intel allows it to ship without an included drive [I got this info direct from Intel reps 8+ months ago].
What would be really nice is a TB3 NVMe enclosure only (I already have a number of spare M.2 drives laying around), using Intel's Titan Ridge controller (which, if I recall correctly, should enable using either TB3 or USB). What would be even nicer, for those of us who foresee storage needs greater than 1 or 2TB, is an enclosure which supported U.2 vs M.2 drives... oh, and for those U.2 drives to become more common with consumer models.
Possibly a silly question, or not relevant, BUT: if you made an external M.2 NVMe drive with Thunderbolt 3 the main drive instead of the onboard one, so as to make the external drive bootable for another OS, or just the one OS, does the external drive and booting from it put a strain on the GPU? (I read this somewhere but couldn't confirm.)
Gerry
Not a silly question, and in the situation you mention, no, the TB3 external drive wouldn't put a strain on the GPU (typically). HOWEVER, if your system is a laptop, and you are using a TB3 dock with an external GPU, and you are sharing that same 4-lane (PCIe 3 x4) bandwidth for both the NVMe drive and the external GPU (and USB devices?), then it's not a strain but potentially bandwidth starved.
The issue isn't TB3 but the CPU/chipset & motherboard, and especially PCIe lanes and how they are handled [..it depends]. Intel has been miserly with PCIe lanes, but that doesn't work in today's world with small (2TB and less) NVMe drives for a portion of the user world (me included). Thanks to AMD, the limited PCIe lane situation is improving, but Intel's disaster with 10nm production means Intel's response with more PCIe lanes was just announced (Cascade Lake-X), and that is with the decade+ old PCIe v3. An Intel PCIe v4 (or 5) system is still a ways out.
So - for your question, the detailed answer is: it depends on your specific system and motherboard config, and whether the PCIe lanes to the external TB3 connection are shared bandwidth (and with what) or not.
On to other considerations...
Then there is the issue of drive space. M.2 is OK, but those of us who professionally, or on the side, perform RAW photo and/or video editing need larger drives. U.2 NVMe fits the bill, but try finding those in consumer prices/configs. In the enterprise, we have 30TB U.2 enterprise SSDs [U.2 is the same protocol as M.2, just a cable connection vs. M.2's direct motherboard plug-in]. And then there is fear-of-missing-out, as Intel is likely to have a TB4 (or TB5) as the high-speed connection option 'upgrade' once their faster-than-PCIe-3 systems are released (and TB3/USB4 becomes ubiquitous). On the other hand, notice how hard it is to find USB 3.2 2x2 (20Gb/s) ports and devices. The technical challenges with these high-bandwidth connections aren't trivial (see active vs. passive TB3 cables). Bus powering a U.2 drive should be a non-issue over TB3/USB4... so large, fast external NVMe drives become practical.
Ah... the joy of progress...
Looking for something like the Trebleet or TEKQ that can hold the 380GB Optane, i.e. 22110 or 110mm long, and do proper heat dissipation. So far it doesn't seem to exist.