Can You Backup & Restore Apple Mac OS X Guests Using vSphere Data Protection (VDP)?

06.14.2013 by William Lam // 1 Comment

It is really cool to see more and more customers showing interest in running Apple Mac OS X on vSphere. Just the other day, another interesting question was raised by a customer asking whether vSphere Data Protection (VDP) would be able to backup and restore Mac OS X guests. Apparently there is still an assumption out there that VMware Tools do not exist for Mac OS X guests. Perhaps virtualizing Mac OS X is still relatively new for some folks, but it is just like any other guest operating system that is supported on vSphere.

I think the following two statements should help clarify any confusion that may exist:

  • To virtualize an Apple Mac OS X guest, you need to be running vSphere on Apple hardware. This is due to a requirement in Apple's EULA and is also enforced within the vSphere platform. You can get more details in this article.
  • VMware Tools does exist for Apple Mac OS X guests; take a look at this article for more details. A quick way to verify the Tools status from the ESXi host is sketched right after this list.
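
If you ever want to double-check that VMware Tools is actually running inside a Mac OS X guest, a quick query from the ESXi Shell will do the trick. Here is a minimal sketch using the standard vim-cmd guest query (the VM ID of 42 below is just a placeholder; look up the real ID with the first command):

# List all registered VMs and note the Vmid of the Mac OS X guest
vim-cmd vmsvc/getallvms

# Query the guest info for that VM and check the Tools status
vim-cmd vmsvc/get.guest 42 | grep -E "toolsStatus|guestFullName"

A healthy guest should report a toolsStatus of toolsOk (or toolsOld if an older version of Tools is installed).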

Now, if we take a look at page 4 of VDP's evaluation guide, we will see that the prerequisite for backing up a guest OS is pretty straightforward:

At least one virtual machine running a supported guest operating system (OS) with VMware Tools installed

Since Apple Mac OS X (10.8, 10.7, 10.6 and 10.5) is a supported guest operating system and we have VMware Tools for it, then yes, VDP can be used to backup and restore an Apple Mac OS X guest. To demonstrate that this actually works, I have a Mac OS X 10.7 VM running in my home lab (on an Apple Mac Mini, which is not officially supported) and I have deployed the latest version of VDP.

I then set up a backup job for the Mac OS X guest using the super simple VDP backup wizard and initiate a backup.

Now, let's say I accidentally fat-fingered an operation and deleted this VM. Uh oh!? What am I to do? Well, don't worry, VDP is there to the rescue!

To restore the VM, it is simply a matter of going through the VDP restore wizard, and in just a few minutes I have recovered my Mac OS X guest and it is up and running again!

I have said this many times, but it still amazes me how many guest operating systems vSphere supports! There really is no workload that vSphere can not virtualize! So if you have any use cases for Mac OS X workloads, rest assured you can safely virtualize them and back them up on vSphere.

Note: Though I showed VMware VDP as the backup/recovery solution, you should also be able to leverage both VMware vSphere Replication and VMware Site Recovery Manager.

Categories // Apple, Automation, ESXi Tags // apple, mac, osx, vdp, vSphere data protection

How To Create Offline Update Repository For VMware Virtual Appliances

05.13.2013 by William Lam // 8 Comments

Virtual appliances built from VMware Studio provide a very easy mechanism for updating or upgrading the software on the appliance through the VAMI (Virtual Appliance Management Interface) web interface. The VAMI web interface provides three methods of updating or upgrading an appliance: an online update repository hosted by the author of the appliance, a CD-ROM, or an alternate update repository that can be specified.

UPDATE 02/23/16 - It looks like there were two tiny changes to the VAMI update repositories starting with vCenter Server Appliance (VCSA) 6.0 Update 1. The first is a new signature file called manifest-latest.xml.sign; the .sha256 and .sig files are no longer used or available for download. The second is an additional patch-metadata-scripts.zip file under the package-pool directory, which may be required depending on the virtual appliance in question. I have updated my script to take care of these files in case they are needed for newer versions of the VAMI interface.

UPDATE 07/10/15 - VMware has just released a new Fling called VAMI Update Repository Appliance (VURA) which provides an easy way for customers to create offline VAMI repositories for their Virtual Appliances.

For VMware virtual appliances, a VMware-hosted update repository is configured by default, and internet connectivity to the configured URL is required (proxy configurations are supported). However, there are environments where network connectivity to VMware's online repository is just not possible, or where the update repository must be hosted internally due to security requirements, and this is where the third option comes in.

The process to set up your own update repository is not really documented, and I have been noticing more requests from customers looking for a way to update or upgrade their VMware virtual appliances without requiring access to VMware's online repository. There are also virtual appliances such as the VCSA (vCenter Server Appliance) which ship their update contents as both an ISO and a zip file, which can then be used with the two other update/upgrade methods.

Though these files can be generated from VMware Studio as part of the appliance build process, the majority of the VMware virtual appliances do not provide these files for download.

I decided to research this topic a bit and look into building my own offline update repository based on the online update repository from VMware. I figured it should be fairly easy to replicate what is being hosted to a local web server running within your own datacenter. After some investigation, I found the process to be pretty straightforward; the only requirement is a web server that can host the contents of the update repository. In this example, I will show you how to build an update repository to upgrade VIN (vSphere Infrastructure Navigator) 1.2 to 2.0.

Step 1 - Login to the VAMI interface of VIN (https://[VIN-IP]:5480), go to the Update tab and make a note of the default repository URL.

Step 2 - Download the buildVARepo.sh shell script and upload it to a Linux-based web server; the script will automatically build out our update repository.

Step 3 - The script accepts two arguments: the default repository URL for a particular virtual appliance (from the previous step) and the name of the directory in which the repository will be created. In this example, I will be using the VIN repository URL and I will name the repository vin:

./buildVARepo.sh http://vapp-updates.vmware.com/vai-catalog/valm/vmw/302ce45f-64cc-4b34-b470-e9408dbbc60d/1.2.0.290.latest vin

The system that the script runs on will need access to the URL above, as it needs to download the required manifest files. Using the manifest files, the script parses out the package names and downloads the RPM packages to the web server using wget.
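
To give you an idea of what is happening under the hood, here is a simplified sketch of the core logic (the real buildVARepo.sh also handles the signature and metadata files mentioned in the updates above, and the manifest parsing shown here is deliberately naive):

REPO_URL=$1    # e.g. the default repository URL from Step 1
REPO_NAME=$2   # e.g. vin

# Create the two directories that make up a VAMI repository
mkdir -p ${REPO_NAME}/manifest ${REPO_NAME}/package-pool

# Mirror the manifest that describes the update
wget -q ${REPO_URL}/manifest/manifest-latest.xml -P ${REPO_NAME}/manifest

# Extract the RPM package names from the manifest and mirror each package
for PKG in $(grep -o '[^">]*\.rpm' ${REPO_NAME}/manifest/manifest-latest.xml); do
    wget -q ${REPO_URL}/package-pool/${PKG} -P ${REPO_NAME}/package-pool
done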

Step 4 - The result is a directory structure that will look like the following:

vin/manifest = List of XML manifest and signature files that describes the update and the path to the appliance packages
vin/package-pool = The RPM packages for the appliances for a given update

Depending on where the script was executed, you may need to move the directory to the proper path from which your web server is configured to serve content. You should be able to open a browser, point it to the /vin directory and view the contents.
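
A quick sanity check from any machine with network access to the web server (assuming the repository ended up under the document root) is to fetch the manifest directly:

curl http://[IP-OR-HOSTNAME]/vin/manifest/manifest-latest.xml

If that returns the XML manifest, the VAMI interface should be able to consume the repository as well.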

Step 5 - We now log back into the VAMI interface, specify our update repository URL, which will be http://[IP-OR-HOSTNAME]/[REPO-NAME], and save the settings.

Step 6 - Now we head over to the Status sub-tab under Update and click on "Check Updates", and we should see a new update for our virtual appliance. To update the appliance, we then select "Install Updates", and shortly after we should see our VIN appliance upgraded to 2.0.

Note: Not all virtual appliances provide upgrades to the latest versions of the appliance; be sure to check the documentation for each individual appliance to see what is supported.

Categories // Automation, VAMI Tags // update repository, vami, virtual appliance

How To Enable Nested ESXi Using VXLAN In vSphere & vCloud Director

05.06.2013 by William Lam // 9 Comments

Recently I received several inquiries asking how to configure nested ESXi (Nested Virtualization) to function in a VXLAN environment. I have written several articles in the past on configuring nested ESXi in a regular vSphere and vCloud Director environment, but with the use of a VXLAN-backed network, there are a few additional steps required. These steps involve additional configuration of the vCloud Networking and Security (vCNS) Manager (previously known as vShield Manager), which ensures that both the required promiscuous mode and forged transmits settings are automatically enabled for the VXLAN virtual wires (vWires), as they are managed exclusively by the vCNS Manager.

In this article, I will walk you through the configurations that are required when using VXLAN in both a vSphere-only environment as well as a vCloud Director environment. If you would like to learn more about how VXLAN works, be sure to check out the multi-part VXLAN series (Part 1/Part 2) by Venky Deshpande.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Configurations for VXLAN in vSphere Environment

Step 1 - Deploy the vCNS Manager and configure it to point to your vCenter Server (do not enable or prepare VXLAN yet; this must be done after the configurations below).

Step 2 - You will need to identify the VDS MoRef ID in your vCenter Server, which will be used in the next step. Since the configuration is applied at the VDS level, you may want to consider having a separate VDS serving Nested Virtualization traffic, since both promiscuous mode & forged transmits will automatically be enabled for all vWires. To locate the VDS MoRef ID, login to the vSphere Web Client and select the summary view for the VDS.

The VDS MoRef ID will be towards the end of the URL link, and it should start with dvs-X, where X is some arbitrary number. Record this value for the next step.

Step 3 - Download the enablePromForVDS.sh shell script, which will be used to prepare the VDS within the vCNS Manager. The script basically performs a POST against the vCNS Manager's REST API using cURL and accepts three input parameters: vCNS Manager IP Address/Hostname, VDS MoRef ID and VDS MTU. The username/password is hard-coded in the script to the defaults, admin/default. If you have modified the default password like any good admin, you will want to update the password in the script before running it. If you take a look at the request body, you will notice only promiscuous mode is set to true, but this will also automatically enable forged transmits as well.
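
For those curious about what the script is actually sending, here is a sketch of the REST call (the /api/2.0/vdn/switches endpoint and XML body below are my reading of the vCNS 5.x VDN API, so verify them against the vCNS API Programming Guide for your version before relying on this):

curl -k -u admin:default -X POST \
    -H "Content-Type: application/xml" \
    -d "<vdsContext><switch><objectId>[VDS-MOREF]</objectId></switch><mtu>[MTU]</mtu><promiscuousMode>true</promiscuousMode></vdsContext>" \
    https://[VCNS-IP]/api/2.0/vdn/switches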

In my lab environment, the vCNS Manager IP is 172.30.0.196, the VDS MoRef ID is dvs-13 and the VDS MTU is 9000. So the syntax to run the script would be:

./enablePromForVDS.sh 172.30.0.196 dvs-13 9000

Here is a screenshot of executing the script; you should see a 200 response back indicating successful execution.

Step 4 - Now we will proceed with the VXLAN preparation. Start off by logging into the vCNS Manager and selecting the vSphere Datacenter for which you wish to enable VXLAN. On the right you should see a tab called "Network Virtualization"; go ahead and click on that, then click on the sub-tab called "Preparation". Click on Edit, then select the vSphere Cluster and proceed through the wizard based on your environment configuration.

Step 5 - Once the VXLAN preparation has completed, click on "Segment ID" and configure that based on your environment.

Step 6 - Next, click on "Network Scopes"; here you will create a network scope and specify the set of vSphere Clusters the VXLAN network will span.

Step 7 - Lastly, click on "Networks"; this is where you will create your vWires, making sure the proper network scope is selected.

Step 8 - To confirm that everything has been configured properly, we now log back into the vSphere Web Client and head over to the VDS settings page. You should now see a new vWire portgroup; if we take a look at its settings, we should see that both promiscuous mode and forged transmits are enabled.

You are now done with the VXLAN configurations in the vCNS Manager and can proceed to the regular instructions for enabling Nested ESXi for vSphere.

Note: If you have already prepared VXLAN in your environment, you can still apply the above configuration without having to un-prepare your VXLAN setup. You just need to connect to the vCNS Manager via the REST API and perform a DELETE on the VDS switch (please refer to page 153 of the vCNS API Programming Guide), which will just delete the mapping from vCNS but will not destroy any of your VDS configuration. Once that is done, you will be able to use the script to configure the VDS with the proper settings.
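
As a rough sketch (again, verify the exact endpoint against the API Programming Guide), the DELETE would look something like this:

curl -k -u admin:default -X DELETE https://[VCNS-IP]/api/2.0/vdn/switches/[VDS-MOREF]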

Configurations for VXLAN in vCloud Director Environment

A VXLAN network pool is automatically created for you when using vCloud Director 5.1, so the steps for preparing Nested Virtualization for vCloud Director are extremely simple compared to the vSphere-only environment.

Note: VXLAN is only supported in vCloud Director 5.1; for previous versions you have the choice of using a VCD-NI or vSphere-backed network, and the configurations for that can be found here.

Step 1 - Please follow steps 1-5 from the vSphere-only environment above and then you are done. If you would like a more detailed walkthrough of configuring VXLAN for a vCloud Director environment, check out this article by Rawlinson Rivera, who takes you through the process step by step.

Step 2 - Proceed to the regular instructions for enabling Nested ESXi for vCloud Director.

Step 3 - Lastly, you will go through the vCloud Director setup: attach your vCenter Server & vCNS Manager, create a Provider VDC, create an Organization, and assign resources to your Organization VDC, ensuring that the OrgVDC consumes the VXLAN network pool that is automatically created for you when you create the Provider VDC. Once that is done, when you deploy your vApp, you will see a vWire that is automatically created for you. If you log into the vSphere Web Client and go to the VDS settings, you will see that the vWire has both promiscuous mode and forged transmits automatically enabled.

Additional Resources:

  • Nested Virtualization Resources

Categories // Automation, Nested Virtualization, NSX Tags // nested, vcloud director 5.1, vcloud networking and security, vcns, vhv, vSphere 5.1, VXLAN
