How to change/deploy VCSA 6.0 with default bash shell vs appliancesh?

03.06.2015 by William Lam // 10 Comments

When logging into the new VCSA 6.0 via SSH, you will notice that you are no longer dropped into a normal bash shell but into a new appliancesh (pronounced "appliance shell") environment. This new interface provides a basic set of virtual appliance management capabilities, including Ruby vSphere Console (RVC) access, which makes the majority of operations convenient for a vSphere Administrator. It also helps restrict unnecessary access to the underlying filesystem, which can be helpful from a security standpoint.

If you need to access the underlying filesystem, you can temporarily enable it by running the following two commands:

shell.set --enabled True
shell

[Screenshot: applianceshell-default-bash]

If you need to transfer files to/from the VCSA via SCP/WinSCP, you will need to change the default shell from /bin/appliancesh to /bin/bash or the operation will fail. You can easily do this by using the chsh command:

chsh -s "/bin/bash" root
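Once you have finished your SCP/WinSCP transfers, you can presumably switch the default shell back with the same command (this assumes the appliance shell binary remains at /bin/appliancesh, as noted above):

chsh -s "/bin/appliancesh" root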

If you would rather have the bash shell configured as the default after deployment, and not have to go through this manual process each time, you can actually configure it using a hidden option called guestinfo.cis.appliance.root.shell

This property allows you to specify the default shell for the "root" account and you can only modify this if you deploy the VCSA using ovftool. Here is the parameter you would append to the ovftool argument list:

--prop:guestinfo.cis.appliance.root.shell="/bin/bash"
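For example, here is a minimal sketch of where the property fits into an ovftool command line; the OVA filename, appliance name, and vi:// target below are placeholders for your own environment, and the other deployment properties and flags covered in the automation series are omitted for brevity:

ovftool --acceptAllEulas --noSSLVerify \
  --name="vcsa60" \
  --prop:guestinfo.cis.appliance.root.shell="/bin/bash" \
  VMware-VCSA-all-6.0.0.ova \
  "vi://root@esxi-host.example.com/"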

You can leverage this new property to automate the deployment of the new VCSA 6.0; for more details, be sure to check out my VCSA 6.0 Automation Series.

Categories // Automation, OVFTool, VCSA, vSphere 6.0 Tags // appliancesh, guestinfo, ovftool, VCSA, vcva, vSphere 6.0

Duplicate MAC Address concerns with xVC-vMotion in vSphere 6.0

03.05.2015 by William Lam // 4 Comments

In vSphere 6.0, the mobility options for a Virtual Machine are truly limitless. This has all been made possible by a new set of vMotion capabilities introduced in vSphere 6.0, which you can learn more about here and here. In the past, one area of concern when migrating a VM from one vCenter Server to another was the possibility that the migrated VM's MAC Address might be re-provisioned by the source vCenter Server, resulting in a MAC Address conflict. In fact, this is a topic I have covered before in my considerations when migrating VMs between vCenter Servers article. I highly encourage you to check out that article before proceeding further as it provides some additional and necessary context.

When looking to leverage the new Cross vCenter Server vMotion (xVC-vMotion) capability in vSphere 6.0, are MAC Address conflicts still a concern? To answer that question, let's take a look at an example. Below I have a diagram depicting two different vSphere 6.0 deployments. The first is comprised of three vCenter Servers that are joined to the same SSO Domain called vghetto.local, and VM1 is currently being managed by VC1. The second is a single vCenter Server connected to a completely different SSO Domain called vmware.local. I will also assume we are being good VI Admins and have deployed each vCenter Server using a unique ID (more details here on why having different VC IDs matters).

[Diagram: mac-address-xvc-vmotion-00]

Let's say we now migrate VM1 from VC1 to VC2. In previous releases of vSphere, this could potentially lead to VC1 re-provisioning the MAC Address that VM1 was associated with, because that MAC Address was no longer being managed by VC1 and, from its point of view, it is now available. Though this type of scenario is probably rare in most customer environments, in a high-churn continuous integration or continuous delivery environment this can be a real issue. So has anything been improved in vSphere 6.0? The answer is yes, of course 🙂

In vSphere 6.0, vCenter Server now maintains a VM MAC Address blacklist; upon a successful xVC-vMotion, this blacklist is updated with the MAC Addresses associated with the migrated VM. This ensures that the source vCenter Server will not re-provision these MAC Addresses to newly created VMs; they are basically "blacklisted" from being used again, as shown in the diagram below.

[Diagram: mac-address-xvc-vmotion-1]

If we decide to migrate VM1 from VC2 back to VC1, the blacklist is automatically updated and the "blacklisted" MAC Addresses will be removed. If we decide to migrate VM1 to a completely different vCenter Server which is not part of the same SSO Domain, then the MAC Address could potentially be re-used, but it will depend on your environment: if VC4 is on a completely different L2 segment, then a MAC Address conflict would not occur.

As of right now, there is no automatic way of reclaiming blacklisted MAC Addresses; it is a manual process that must be initiated through a private vSphere API. I am hoping we will be able to get this documented in an official VMware KB so that, in case this is required, you can easily follow the simple steps to execute the necessary APIs. Automatic reclamation is being looked at by Engineering, and hopefully we will see it in a future patch/update to vSphere. Overall, this should not really be a concern given that vCenter Server can generate about 65,000 unique MAC Addresses, and you would have to perform quite a few xVC-vMotions before ever needing to reclaim from the blacklist.

One thing to be aware of when performing an xVC-vMotion or ExVC-vMotion is that there are currently no pre-flight checks for MAC Address conflicts at the destination vCenter Server (something Engineering is looking to add in a future patch/update release). Having said that, there are two additional measures you can implement in your environment to prevent MAC Address conflicts:

  1. Create a vCenter Server alarm which can detect and notify you of a duplicate MAC Address in your environment (also applicable to vSphere 5.5)
  2. Pro-actively check whether the existing MAC Addresses of your VM are currently in use prior to performing an xVC-vMotion; this is especially useful when performing an ExVC-vMotion.

To help with number 2, I have created a simple PowerCLI script called check-vm-mac-conflict.ps1 which accepts both your source and destination vCenter Server as well as the name of the VM in the source VC to be migrated. It will check the VM's MAC Addresses against the destination VC and ensure that there are no conflicts. If there is a conflict, it will output the name of the destination VM and the MAC Address that is in conflict, as seen in the screenshot below.

[Screenshot: mac-address-xvc-vmotion-2]

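If you would like to see the core logic before grabbing the script, here is a minimal PowerCLI sketch of the same idea; the vCenter Server names and VM name are placeholders, and the actual check-vm-mac-conflict.ps1 script is more complete:

# Connect to the source and destination vCenter Servers (placeholder names)
$sourceVC = Connect-VIServer -Server vc1.vghetto.local
$destVC   = Connect-VIServer -Server vc2.vghetto.local

# Collect the MAC Addresses of the VM we plan to migrate
$vmMacs = Get-VM -Server $sourceVC -Name "VM1" | Get-NetworkAdapter |
    Select-Object -ExpandProperty MacAddress

# Compare against every VM NIC in the destination vCenter Server
foreach ($vm in Get-VM -Server $destVC) {
    foreach ($nic in ($vm | Get-NetworkAdapter)) {
        if ($vmMacs -contains $nic.MacAddress) {
            Write-Host "Conflict: $($vm.Name) is already using $($nic.MacAddress)"
        }
    }
}

Disconnect-VIServer -Server $sourceVC,$destVC -Confirm:$false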
Hopefully with these additional measures you can easily prevent MAC Address conflicts, which can be a pain to troubleshoot, when performing xVC-vMotions in your vSphere environment.

Categories // vSphere, vSphere 6.0 Tags // blacklist, Cross vMotion, Long Distance vMotion, mac address, vSphere 6.0, xVC-vMotion

Home Labs made easier with VSAN 6.0 + USB Disks

03.04.2015 by William Lam // 23 Comments

VSAN 6.0 includes a large number of new enhancements and capabilities that I am sure many of you are excited to try out in your lab. One of the challenges with running VSAN in a home lab environment (non-Nested ESXi) is trying to find a platform that is both functional and cost effective. Some of the most popular platforms that I have seen customers use for running VSAN in their home labs are the Intel NUC and the Apple Mac Mini. Putting aside the memory constraints of these platforms, the number of internal disk slots is usually limited to two. That is just enough to meet the minimum requirement for VSAN of at least a single SSD and MD (magnetic disk).

If you wanted to scale up and add additional drives, either for capacity purposes or for testing out new configurations, you are pretty much out of luck, right? Well, not necessarily. During the development of VSAN 6.0, I came across a cool little nugget from one of the VSAN Engineers: USB-based disks can be claimed by VSAN, which could be quite helpful for testing in a lab environment, especially on the hardware platforms that I mentioned earlier.

For a VSAN home lab, cheap consumer USB-based disks, where you can purchase several TBs for less than a hundred dollars or so, combined with USB 3.0 connectivity, are a pretty cost-effective way to enhance hardware platforms like the Apple Mac Mini and Intel NUC.

Disclaimer: This is not officially supported by VMware and should not be used in production or for evaluation of VSAN, especially when it comes to performance or expected behavior, as this is not how the product works. Please use supported hardware found on the VMware VSAN HCL for official testing or evaluations.

Below are the instructions on how to enable USB-based disks to be claimable by VSAN.

Step 1 - Disable the USB Arbitrator service so that USB devices can be seen by the ESXi host by running the following two commands in the ESXi Shell:

/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off

[Screenshot: vsan-usb-disk-1]

Step 2 - Enable the following ESXi Advanced Setting (/VSAN/AllowUsbDisks) to allow USB disks to be claimed by VSAN by running the following command in the ESXi Shell:

esxcli system settings advanced set -o /VSAN/AllowUsbDisks -i 1

[Screenshot: vsan-usb-disk-2]

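If you want to double-check that the advanced setting took effect, you can list it using the same esxcli namespace as the command above:

esxcli system settings advanced list -o /VSAN/AllowUsbDisks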
Step 3 - Connect your USB-based disks to your ESXi host (this can actually be done prior) and you can verify that they are seen by running the following command in the ESXi Shell:

vdq -q

[Screenshot: vsan-usb-disk-3]

Step 4 - If you are bootstrapping vCenter Server onto the VSAN Datastore, you can create a VSAN Cluster by running "esxcli vsan cluster new" and then contribute storage by adding the SSD device and the respective USB-based disks (using the information from the previous step) in the ESXi Shell:

esxcli vsan storage add -s t10.ATA_____Corsair_Force_GT________________________12136500000013420576 -d mpx.vmhba32:C0:T0:L0 -d mpx.vmhba33:C0:T0:L0 -d mpx.vmhba34:C0:T0:L0 -d mpx.vmhba40:C0:T0:L0

[Screenshot: vsan-usb-disk-4]

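To confirm the devices were claimed, you can list the VSAN storage configuration from the ESXi Shell (re-running vdq -q should also now report the disks as in use by VSAN):

esxcli vsan storage list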
If we take a look at the VSAN configuration in the vSphere Web Client, we can see that we now have 4 USB-based disks contributing storage to the VSAN Disk Group. In this particular configuration, I was using my Mac Mini, which has 4 x USB 3.0 devices connected to provide the "MD" disks, along with one of the internal drives providing the SSD. Ideally, you would probably want to boot ESXi from a USB device and then claim one of the internal drives along with 3 other USB devices for the most optimal configuration.

[Screenshot: vsan-usb-disk-5]

As a bonus, there is one other nugget that I discovered while testing out the USB-based disks for VSAN 6.0: another hidden option to support iSCSI-based disks with VSAN. You will need to enable the option called /VSAN/AllowISCSIDisks using the same method as the USB-based disk option. This is not something I have personally tested, so YMMV, but I suspect it will allow VSAN to claim an iSCSI device that has been connected to an ESXi host and let it contribute to a VSAN Disk Group, as another way of providing additional capacity to VSAN on platforms with a restricted number of disk slots. Remember, neither of these solutions should be used beyond home labs and they are not officially supported by VMware, so do not bother trying to do anything fancy or running performance tests; you are just going to let yourself down and not see the full potential of VSAN 🙂
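Although I have not tested it, enabling the iSCSI option presumably follows the same pattern as the USB option above:

esxcli system settings advanced set -o /VSAN/AllowISCSIDisks -i 1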

Categories // Apple, ESXi, Home Lab, Not Supported, VSAN, vSphere 6.0 Tags // AllowISCSIDisks, AllowUsbDisks, apple, esxcli, mac mini, usb, Virtual SAN, VSAN, vSphere 6.0
