WilliamLam.com

Google Coral USB Edge TPU Accelerator on ESXi

05.10.2023 by William Lam // 58 Comments

Several weeks back, I came across a really strange post on the VMTN communities asking how to change the Device ID (DID) and Vendor ID (VID) for a USB device that has been passthrough'ed to a VM from ESXi. The device in question is the Google Coral USB Edge TPU (Tensor Processing Unit) Accelerator, a relatively inexpensive device that can help accelerate machine learning (ML) inferencing. With all the buzz these days around Generative AI and ChatGPT, I can only imagine its popularity has grown even further, but I had not realized how popular this device already was in the community, especially among those wanting to use it with ESXi.

The initial observation, reported by this user and also by many others in the Coral community, was that ESXi was showing the incorrect VID/DID for the Coral USB device. Because of this, it was not working correctly when passthrough'ed to a VM, and they were looking for a way to change the VID/DID value from 1a6e:089a (Global Unichip Corp.) to 18d1:9302 (Google Inc.).
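For anyone who wants to check this from inside the guest, below is a minimal Python sketch using pyusb (assuming a Linux VM with the pyusb package installed); for context, 1a6e:089a is the identity the Coral presents before its firmware is loaded, and it re-enumerates as 18d1:9302 once the Edge TPU runtime loads the firmware.

# Minimal sketch (assumes: Linux guest, "pip install pyusb", Coral attached)
# to report which of the two identities the Coral currently presents.
import usb.core

IDENTITIES = [
    (0x1A6E, 0x089A, "bootloader identity 1a6e:089a (Global Unichip Corp.)"),
    (0x18D1, 0x9302, "runtime identity 18d1:9302 (Google Inc.)"),
]

for vid, pid, label in IDENTITIES:
    # usb.core.find() returns the first matching device or None
    if usb.core.find(idVendor=vid, idProduct=pid) is not None:
        print(f"Coral detected with {label}")
        break
else:
    print("No Coral USB device visible to this guest")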

Interestingly enough, a couple of weeks ago, my buddy Alan Renouf had also shared that he recently purchased the Coral USB device, so I figured I would check with him first to see if he was observing the same behavior that was being reported, which he was. I had been going through the GitHub reports to try to better understand the issue and some of the previous workarounds users had applied, including disabling the vmkusb module, which I definitely do not recommend, especially on more recent releases of ESXi where that will simply disable all USB functionality on your ESXi host.

I still could not wrap my head around the issue, as the reports did not make any sense in terms of the DID/VID not being claimed correctly or needing to change for the device to function properly. This also did not make sense when speaking with our USB expert (Songtao, who also developed our USB Network Native Driver for ESXi), so I decided to bite the bullet and purchase the Coral USB device, which apparently is difficult to obtain unless you overpay on Amazon, which I did.

[Read more...]

Categories // ESXi, vSphere 7.0, vSphere 8.0 Tags // AI, Coral, ESXi 7.0, ESXi 8.0, ESXi 8.0 Update 1, TPU, usb

GPU Passthrough with Nested ESXi

05.09.2023 by William Lam // 9 Comments

Advancements in ESXi Nested Virtualization have given us the ability to run ESXi inside of a VM (Nested ESXi) and have allowed us to do just about anything you would do with a physical ESXi host for development, testing and learning purposes. In fact, I have shared many tips and tricks for using Nested ESXi and Nested Virtualization over the years on my blog, which is worth bookmarking in case you are trying to do something and run into an issue that, more than likely, I have come across.

Today, there is very little you cannot do using Nested ESXi; the limitations are typically confined to physical devices that cannot be virtualized or emulated in software.

I bring this up because I recently had a chat with Frank Denneman on an unrelated topic, and he raised the question of whether a GPU could be double passthrough'ed: from a physical ESXi host into a Nested ESXi VM, and then passthrough'ed again to a VM running on that Nested ESXi system. While this was not the first time I had heard of such a request, it does not come up often; this was only the second time. For context, his use case was for testing purposes, and I can certainly see some interesting scenarios where you want to run vSphere in a Nested environment and still access all the vSphere capabilities, including leveraging a physical GPU within that environment, whether for AI/ML or other graphics processing requirements.

My response to Frank was that this would not work, for a few reasons. One is that the use of Virtual Hardware-Assisted Virtualization (VHV) is not supported with DirectPath I/O, and if the GPU is passthrough'ed to a VM, even one running ESXi, that VM would be in control of the GPU, so how could one passthrough it again?
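As a quick illustration of that first constraint, here is a rough pyVmomi sketch that scans a vSphere inventory for VMs configured with both VHV and a PCI passthrough device; the vCenter hostname and credentials are placeholders, and this is just one way to surface the combination, not an official check.

# Rough sketch (assumes: "pip install pyvmomi", placeholder vCenter/credentials)
# flagging VMs that combine VHV with DirectPath I/O, which is not supported.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config is None:  # skip inaccessible VMs
        continue
    vhv = bool(vm.config.nestedHVEnabled)
    passthru = any(isinstance(d, vim.vm.device.VirtualPCIPassthrough)
                   for d in vm.config.hardware.device)
    if vhv and passthru:
        print(f"{vm.name}: VHV + DirectPath I/O (unsupported combination)")
view.DestroyView()
Disconnect(si)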

My curiosity got the better of me, and given this was only the second time I had ever been asked about this, I figured maybe it was worth exploring. But before I went down any more 🐇🕳️, I wanted to get a quick sanity check from one of our graphics engineers on the remote feasibility of this ask.

[Read more...]

Categories // vSphere 7.0 Tags // Dragon Canyon, GPU, Intel NUC, Serpent Canyon

Inventory standalone ESXi host core count for vSphere+

05.04.2023 by William Lam // Leave a Comment

To help customers inventory their existing vSphere and vSAN CPU core usage for the vSphere+ and vSAN+ Cloud Service, you can take advantage of the inventory script I created HERE, which generates a report (including CSV) by connecting to your existing vCenter Server(s) for analysis.

While the majority of our customers use vCenter Server to manage their ESXi host(s), we do have customers that run standalone ESXi hosts for various reasons. Recently, I had a few inquiries on how customers could inventory their standalone ESXi hosts as they plan to transition them into the vSphere+ Cloud Service.

The benefit of using the vSphere API when creating the initial script is that the exact same API model also applies to a standalone ESXi host, so the core logic was already there; only a small modification was required, since we now have individual ESXi hosts that we need to connect to and inventory. In addition, I wanted to make the script as easy as possible to run, requiring as little input as possible.
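As a rough illustration of that same API model (not the actual script linked above), here is a minimal pyVmomi sketch that connects directly to a list of standalone ESXi hosts, where the hostnames and credentials are placeholders, and tallies physical CPU cores into a CSV.

# Minimal sketch (assumes: "pip install pyvmomi", placeholder hosts/credentials)
# showing the same vSphere API calls pointed directly at standalone ESXi hosts.
import csv, ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

hosts = ["esxi-01.example.com", "esxi-02.example.com"]  # placeholder hosts

rows = []
for hostname in hosts:
    ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host=hostname, user="root", pwd="VMware1!", sslContext=ctx)
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        cpu = host.hardware.cpuInfo  # same property as when going via vCenter
        rows.append({"host": host.name, "sockets": cpu.numCpuPackages,
                     "cores": cpu.numCpuCores})
    view.DestroyView()
    Disconnect(si)

with open("esxi-core-report.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["host", "sockets", "cores"])
    writer.writeheader()
    writer.writerows(rows)
print(f"Total physical cores: {sum(r['cores'] for r in rows)}")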

[Read more...]

Categories // Automation, VMware Cloud, vSphere Tags // vSphere


Author

William is a Distinguished Platform Engineering Architect in the VMware Cloud Foundation (VCF) Division at Broadcom. His primary focus is helping customers and partners build, run and operate a modern Private Cloud using the VMware Cloud Foundation (VCF) platform.
