One thing I really enjoy at VMworld, when I have a few minutes to spare between sessions and customer meetings, is walking around the Solutions Exchange and learning about what our partners are doing in the VMware eco-system. I usually do not make it very far before bumping into an old colleague or customer and having to run off to my next engagement, but sometimes I get lucky.
While walking the show floor, I came across a really interesting company that immediately caught my eye, and you can probably guess why from the picture I took below.
The company is called Hivecell, and they make it super easy for Global 500 companies to deploy and maintain software at the Edge without requiring a large IT team to manage the deployments, which can be spread across hundreds if not thousands of sites with little to no local IT staff.
One of the biggest challenges with Edge Computing is processing the large quantity of data being generated in all of these remote locations on a daily basis. In some cases, the dataset can grow to several terabytes, and it is no longer feasible to send all of this data to the Cloud or back to your Datacenter to extract the business intelligence and value. In fact, depending on the connectivity of your remote site, it can take weeks before the data is available. For any type of real-time or near-real-time application, the window in which the data is of value can literally be hours if not minutes, so it must be processed immediately at the Edge.
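To make that idea a bit more concrete, here is a purely illustrative Python sketch of the kind of local filtering and aggregation an Edge node might do so that only a compact summary, rather than terabytes of raw readings, ever leaves the site. The sensor feed, window size, alert threshold and "send upstream" step are all hypothetical placeholders, not anything specific to Hivecell's software:

```python
# Illustrative only: aggregate raw sensor readings at the Edge and forward a small summary.
# The reading source, window size, threshold and upstream endpoint are hypothetical placeholders.
import json
import random
import statistics
import time

def read_sensor():
    """Stand-in for a real sensor feed (e.g. temperature or vibration readings)."""
    return random.gauss(mu=72.0, sigma=1.5)

def summarize(window):
    """Reduce a window of raw readings to a few numbers worth sending upstream."""
    return {
        "timestamp": time.time(),
        "count": len(window),
        "mean": round(statistics.mean(window), 2),
        "max": round(max(window), 2),
        "alert": max(window) > 80.0,  # only anomalies get flagged; raw data stays local
    }

def run(window_size=1000):
    window = []
    while True:
        window.append(read_sensor())
        if len(window) >= window_size:
            # In a real deployment this would be an MQTT/HTTPS call back to the cloud;
            # here we simply print the small payload that would be sent.
            print(json.dumps(summarize(window)))
            window.clear()

if __name__ == "__main__":
    run()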
Speaking of use cases, here are some of the scenarios where Hivecell believes their solution can really help; more details about each use case can be found here.
- Petrochemicals
- Renewable Energy
- Quick Service Restaurant Chains
- Manufacturing
- Healthcare
- Weather
- Data Science
- Hotels
OK, so what is a Hivecell? It is a small, low-energy and inexpensive server designed to run distributed software like Containers and Machine Learning models. In my opinion, what makes the Hivecell solution really unique is their innovative form factor design, which provides both form and function. The "stack" design is very intentional, and although you cannot see it in the picture below, there are actually magnets between the systems, or what Hivecell refers to as Baranovsky Connectors. These magnets not only provide an easy way to add or remove a unit from the cluster, but they also carry power and network connectivity between the units!
How freaking cool is that!? You just need a single power and ethernet cable and the rest of the units can communicate across the Baranovsky Connectors. No backplane, chassis or router is required and you can easily scale up by "stacking" another unit and then pressing the power button. Similarly, if you wish to scale down, simply remove a unit and their software will automatically handle the rest.
All required connectivity, including a backup battery and wireless connection for availability, is built directly into each unit, and setup could not be easier with their one-click management interface, which gives customers a single view into all of their "Hives" or deployments.
Today, Hivecell only supports an ARM-based deployment with the following specification:
- 64-bit ARMv8 Processing
- 6 CPU cores, 2.4GHz
- 256 GPU CUDA cores
- 8GB RAM LPDDR4
- 500GB SSD
- 1G Ethernet
- Wifi IEEE 802.11a/b/g/n/ac dual-band 2x2 MIMO
- Size 220x175x65 mm
- Weight 1.36 kg (3.0 lbs)
- Power 15W (Max 25W)
However, at VMworld they announced that they will also have an x86 version, and in fact, the prototype they had at the show was running the latest versions of vSphere and vSAN! With vSphere, they were able to provide an even higher level of application availability with the use of vSphere Fault Tolerance, which I thought was pretty cool. If you are interested in learning more, you can check out this VMworld breakout session #EIOT2715BU - Edge Computing Innovations in Office of the CTO and Dell Technologies (it just requires a free VMworld account to sign in) or reach out to the Hivecell folks.
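For those curious what turning on vSphere Fault Tolerance looks like against the vSphere API (not how Hivecell built their demo, which I did not get details on), here is a minimal pyVmomi sketch. The vCenter address, credentials and VM name are placeholders, and it assumes the cluster is already FT-capable (HA enabled, FT logging network configured, compatible VM hardware):

```python
# Minimal pyVmomi sketch: enable vSphere Fault Tolerance on an existing VM.
# The vCenter hostname, credentials and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_vm(content, name):
    """Walk the inventory and return the first VM whose name matches."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(vm for vm in view.view if vm.name == name)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_vm(content, "edge-app-vm")
    # CreateSecondaryVM_Task spins up the FT secondary; passing no host lets vSphere pick one.
    task = vm.CreateSecondaryVM_Task()
    print("FT enablement task submitted:", task.info.key)
finally:
    Disconnect(si)
```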
Another really cool thing I learned while talking to the team was that they have been making use of both our ESXi Native Driver for USB Network Fling and the recent 10GbE Aquantia Driver for ESXi to add additional network connectivity between the Hivecell systems. I am really excited to see what our vSphere and vSAN platform can enable for the Hivecell solution and look forward to hearing more from these folks in the near future!
Bee Kay says
So, if you have 5 of these in a stack, and #3 dies, if you try to take it out, then you lose power/ethernet... I take it you can do redundancy in the stack, like multiple power and ethernet connections...?