Search Results for: guest operations

Quickly getting started with VMware AppCatalyst & AppCatalyst Vagrant Plugin

06.24.2015 by William Lam // 5 Comments

At DockerCon this week, the Cloud-Native Apps team at VMware introduced two new Tech Previews: VMware AppCatalyst and Project Bonneville. In addition, I found out today that Fabio Rapposelli, who works over in the CNA team, has also released a Vagrant Plugin for VMware AppCatalyst. Having spent a couple of days playing with AppCatalyst before it was released, I thought this would be a good opportunity to show how to quickly get started with AppCatalyst as well as try out the newly released Vagrant Plugin.

Before jumping in, what exactly is VMware AppCatalyst? It is a slimmed-down desktop hypervisor based on VMware Fusion that has been optimized for developers who want to quickly spin up Virtual Machines to run Docker Containers. Included with AppCatalyst is VMware Photon, an open-source lightweight Linux container host that is used to quickly instantiate new VMs that are ready for running and building Docker Containers. Instead of a traditional GUI like VMware Fusion's, AppCatalyst is consumed completely via a REST API or CLI, which also makes it ideal for integrating with 3rd-party automation tools like Vagrant. Best of all, both VMware AppCatalyst and the AppCatalyst Vagrant Plugin are completely free!

Requirements:

  • Mac OS X 10.9.4 or 10.10
  • VMware Fusion must not be running

Quick INFO:

  • The AppCatalyst configuration file is located at ~/.appcatalyst.conf (if you wish to change the system defaults)
  • The AppCatalyst CLI is located at /opt/vmware/appcatalyst/bin/appcatalyst
  • The daemon for the AppCatalyst REST API is located at /opt/vmware/appcatalyst/bin/appcatalyst-daemon
  • The SSH private key for the Photon VM is located at /opt/vmware/appcatalyst/etc/appcatalyst_insecure_ssh_key
  • Additional AppCatalyst Documentation can be found here

Exploring the AppCatalyst CLI:

Step 1 - Download and install VMware AppCatalyst from here

Step 2 - Once AppCatalyst has been installed, you can explore the CLI and view the list of operations by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst

Step 3 - To create our first VM using AppCatalyst, run the following command:

/opt/vmware/appcatalyst/bin/appcatalyst vm create vGhetto1

Note: All VMs created by AppCatalyst are stored in /Users/[USER]/Documents/AppCatalyst; you can change this by editing DEFAULT_VM_PATH in the AppCatalyst configuration file.
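For example, to relocate newly created VMs, you could edit ~/.appcatalyst.conf so that it contains a line like the following (the path shown is purely an illustration):

DEFAULT_VM_PATH=/Users/wlam/AppCatalystVMs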

Step 4 - To power on our VM, run the following command:

/opt/vmware/appcatalyst/bin/appcatalyst vmpower on vGhetto1

Note: If you run into problems powering on your VM, there is a good chance you still have a VM running in VMware Fusion (if you use docker-machine, also check and power off any VMs provisioned with that tool)

Step 5 - Once the VM is powered on, we will want to retrieve its IP Address (which can take a few seconds to be displayed) by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst guest getip vGhetto1

Step 6 - Once we have the IP Address of our VM, we can then SSH to it using the SSH key included with AppCatalyst by running the following command:

ssh -i /opt/vmware/appcatalyst/etc/appcatalyst_insecure_ssh_key photon@[IP-ADDRESS]

At this point, you have successfully deployed a VM based on the VMware Photon image using the AppCatalyst CLI. You can now log in and start running and building Docker Containers, as Docker is pre-installed on Photon. Next, we will take a look at using the AppCatalyst API.
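If you plan on doing this often, the four CLI commands above can easily be strung together into a small script. Below is a minimal sketch; the VM name, the polling loop and the assumption that "guest getip" prints just the IP Address are my own additions, so adjust as needed:

#!/bin/bash
# Minimal sketch: create, power on and SSH into a new AppCatalyst VM
APPCATALYST=/opt/vmware/appcatalyst/bin/appcatalyst
SSH_KEY=/opt/vmware/appcatalyst/etc/appcatalyst_insecure_ssh_key
VM_NAME=vGhetto1

$APPCATALYST vm create $VM_NAME
$APPCATALYST vmpower on $VM_NAME

# The IP Address can take a few seconds to show up, so retry until we get one
until IP=$($APPCATALYST guest getip $VM_NAME 2>/dev/null) && [ -n "$IP" ]; do
    sleep 2
done

ssh -i $SSH_KEY photon@$IP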

Exploring the AppCatalyst API:

UPDATE (01/22/16) - For a complete cheatsheet of the AppCatalyst API using cURL, check out this article here for more examples.

Step 1 - To use the AppCatalyst REST API, you will need to start the AppCatalyst daemon by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst-daemon

Note: You can also run the AppCatalyst daemon in the background by adding '&' to the command
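For example, the following keeps the daemon running in the background and captures its output to a log file (the log path is just an illustration):

/opt/vmware/appcatalyst/bin/appcatalyst-daemon > /tmp/appcatalyst-daemon.log 2>&1 &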

Step 2 - You can easily view the AppCatalyst API by opening a browser to http://localhost:8080 (assuming you did not change the default port); the API explorer is built using Swagger. If you have not worked with a Swagger interface before, you can explore and even test the API directly from this page.

Step 3 - Let's quickly test the GET /vms operation, which lists the VMs being managed by AppCatalyst. We will use cURL by running the following command:

curl http://localhost:8080/api/vms

As long as the AppCatalyst daemon is running, you will be able to interact with the REST API using a variety of methods including the examples I have shown above.
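For example, assuming the Swagger interface lists a per-VM GET operation (which you can confirm at http://localhost:8080), you should also be able to retrieve the details of a single VM:

curl http://localhost:8080/api/vms/vGhetto1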

Note: A known issue is that VMs powered on using the AppCatalyst API cannot be managed by the CLI until they have been powered down using the API. Hopefully this issue will be resolved in a future update.

Exploring the AppCatalyst Vagrant Plugin:

Step 1 - Ensure you have Vagrant already installed on your system; if not, you can download it here.

Step 2 - Install the Vagrant Plugin by running the following command:

vagrant plugin install vagrant-vmware-appcatalyst

Step 3 - You can either run "vagrant init" or manually create a file named Vagrantfile that contains the following:

# Set our default provider for this Vagrantfile to 'vmware_appcatalyst'
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'vmware_appcatalyst'

nodes = [
  { hostname: 'gantry-test-1', box: 'vmware/photon' },
  { hostname: 'gantry-test-2', box: 'vmware/photon' }
]

Vagrant.configure('2') do |config|

  # Configure our boxes with 1 CPU and 384MB of RAM
  config.vm.provider 'vmware_appcatalyst' do |v|
    v.vmx['numvcpus'] = '1'
    v.vmx['memsize'] = '384'
  end

  # Go through nodes and configure each of them.
  nodes.each do |node|
    config.vm.define node[:hostname] do |node_config|
      node_config.vm.box = node[:box]
      node_config.vm.hostname = node[:hostname]
    end
  end
end

The above Vagrantfile will create two VMs called gantry-test-{1,2} using the VMware Photon Box hosted on Atlas by HashiCorp.

Step 4 (Optional) - If you decide to use the example above with the VMware Photon Box, you will need to install an additional Vagrant Plugin for managing the Photon guest by running the following command:

vagrant plugin install vagrant-guests-photon

Step 5 - Ensure that the AppCatalyst daemon is running before proceeding with the next step, as the Vagrant Plugin uses the AppCatalyst REST API.

Step 6 - To start the deployment, run the following command:

vagrant up --provider=vmware_appcatalyst

Step 7 - To log in to one of the VMs created by Vagrant, simply run the following command, specifying the name of the VM to SSH to:

vagrant ssh gantry-test-1

Step 8 - Finally, we can confirm that the VMs created by the AppCatalyst Vagrant Plugin are also visible using just the AppCatalyst REST API by performing another "GET" operation to verify the two VMs we just created.
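Once you are done with the environment, the standard Vagrant lifecycle commands will tear everything back down, for example:

vagrant halt
vagrant destroy -f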

Hopefully this quick primer has been useful in getting you started with VMware AppCatalyst and the new AppCatalyst Vagrant Plugin. If you have any feedback or questions, feel free to leave a comment in the VMware AppCatalyst Forum; for feedback or questions on the AppCatalyst Vagrant Plugin, you can file issues here.

Categories // Apple, Automation, Cloud Native, Docker Tags // appcatalyst, cloud native apps, DevOps, Docker, Photon, Vagrant

How to deploy vSphere 6.0 (VCSA & ESXi) on vCloud Director and vCloud Air?

04.27.2015 by William Lam // 13 Comments

In case you missed the awesome news last Friday, George Kobar, who works over in the vCloud Air team, shared a really cool solution in which he demonstrates how to efficiently set up Nested ESXi running in vCloud Air, including support for inner-VM guest communication without requiring Promiscuous Mode. Nested ESXi has been possible on vCloud Air for quite some time; in fact, when I was first granted access I had to try it out myself and wrote about it here. The great thing about vCloud Air is that it runs directly on vSphere, which means you get all the added benefits of the underlying vSphere platform, including things like VHV (Virtual Hardware-Assisted Virtualization) to ensure that your Nested ESXi VM and its virtual workloads run as efficiently as possible. If you are new to vCloud Air, I recommend checking out this tutorial here, which goes into some of the basic operations.

Given the updated news regarding Nested ESXi on vCloud Air, I am sure many of you are excited to try out this new trick if you require inner-VM guest communication. I figured most of you would be interested in trying out vSphere 6.0, especially some of the new capabilities like SMP-FT and VSAN 6.0, which run perfectly fine in a Nested ESXi environment for demo and learning purposes, as shown here and here. I thought I would put together a quick guide on how to set up both Nested ESXi 6.0 and the new VCSA 6.0 (which does have a few minor caveats but can definitely run in a vCloud Director or vCloud Air environment).

Disclaimer: The usual caveat ... Nested ESXi is not officially supported by VMware

ESXi 6.0

No version of vCloud Director for the Enterprise currently supports vSphere 6.0, which means there is no direct support for the latest virtual hardware release (version 11) or for the ESXi 6.x guestOS type. The same is true for vCloud Air, which is currently running on vSphere 5.5; for this reason, you will need to upload a VM that has been configured with the ESXi 5.x guestOS type when looking to install ESXi 6.0. Once vCloud Air supports vSphere 6.0, you will be able to upload a VM created with the ESXi 6.x guestOS type.
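For reference, the guestOS type is simply a line in the VM's VMX configuration; a VM intended to run Nested ESXi 5.x carries an entry like the following sketch (vmkernel5 is, to the best of my knowledge, the standard VMX identifier for the ESXi 5.x guest type):

guestOS = "vmkernel5"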

The easiest way to create a Nested ESXi VM in a vCloud Director or vCloud Air environment is to simply import a VM that has already been configured with the ESXi guestOS type (this does not need to be an already-installed image). To help expedite the deployment of Nested ESXi in vCloud Air, I have built several Nested ESXi OVF Templates that you can use, and if you prefer the command line for the upload, see the sketch below. You will also need to upload an ESXi 6.0 ISO, or whichever version of ESXi you plan on running, since both ESX(i) 4.x and 5.x are possible.
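If you prefer the command line over the vCloud Director UI for the upload, ovftool also understands vcloud:// locators. Below is a rough sketch; every name (user, host, org, catalog and template) is a placeholder for your own environment, and the exact locator options are covered in the ovftool documentation:

ovftool --acceptAllEulas Nested_ESXi5.x.ovf "vcloud://username@vcloud-host:443?org=MyOrg&catalog=MyCatalog&vappTemplate=Nested-ESXi-Template"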

VCSA 6.0

One of the challenges I came across when testing the new VCSA 6.0 in a vCloud Director-based environment, which also affects vCloud Air, is that they do not support a few capabilities within the OVF specification, namely Deployment Options. Due to this limitation and a few others, we cannot directly import the VCSA 6.0 OVA into vCloud Director. Luckily, there is a workaround which I had looked into a few months before the GA of vSphere 6.0, and below are the steps to import a VCSA 6.0 OVA into a vCloud Director environment. If you are looking to run VCSA 5.5, you can directly import the OVA without going through these steps.

Step 1 - Download and extract the contents of the VCSA 6.0 ISO (Build 2656757 was used)

Step 2 - Convert the VCSA 6.0 OVA located in vcsa/vmware-vcsa into an OVF by using ovftool, tar, or a tool like 7zip.

ovftool --sourceType=OVA vmware-vcsa vmware-vcsa.ovf

Next, you will need to make several modifications to the OVF file. I do have to warn you, there are a few tweaks involved, and I highly recommend that you use the OVF templates I have already created for you. Make sure to also delete the .mf (manifest) file since you are making changes to the OVF; otherwise, the OVF validation will throw an error because the files have been modified.
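For example, if you go the tar route mentioned in Step 2, the conversion and the manifest cleanup amount to just a couple of commands (an OVA is simply a tar archive):

tar -xvf vmware-vcsa
rm -f *.mf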

To save you some time, pain and trouble, I have pre-created the following three OVFs (based on the vSphere 6.0 GA release of VCSA 6.0), which contain all the modifications mentioned in Step 3, so you can download one and jump straight to Step 4:

  • VCSA 6.0 Embedded Tiny OVF
  • VCSA 6.0 vCenter Server Management Node Tiny ONLY OVF
  • VCSA 6.0 Platform Services Controller Node Tiny ONLY OVF

Step 3 - The first modification is to locate the "References" tag at the top of the OVF file and remove the line containing the RPM reference. At the end, it should look something like the following:

  <References>
    <ovf:File ovf:href="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759_OVF10-file1.json" ovf:id="layout.json_id" ovf:size="5756"/>
    <File ovf:href="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759_OVF10-disk1.vmdk" ovf:id="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759-system.vmdk_id" ovf:size="524469248"/>
    <File ovf:href="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759_OVF10-disk2.vmdk" ovf:id="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759-cloud-components.vmdk_id" ovf:size="1369250304"/>
    <File ovf:href="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759_OVF10-disk3.vmdk" ovf:id="VMware-vCenter-Server-Appliance-6.0.0.5100-2656759-swap.vmdk_id" ovf:size="74240"/>
  </References>

In addition, depending on the method you used to convert the OVA to an OVF, you may also need to rename the json and disk file names in this section to match the extracted contents.

The second modification is to delete the following section from the OVF, which starts with MigrationUpgradeRequisitesSection:

<vmw:MigrationUpgradeRequisitesSection ovf:required="false">
<Info>Files necessary for migration-based upgrade.</Info>
<vmw:Requisite ovf:fileRef="VMware-vCenter-Server-Appliance-6.0.0.5110-2656759-upgrade-requirements.rpm_id" vmw:purpose="requirements"/>
</vmw:MigrationUpgradeRequisitesSection>

The third modification is to specify the deployment option type you wish to use. VCSA 6.0 supports the following: embedded, infrastructure (PSC) and management (VC). Locate the following line containing guestinfo.cis.deployment.node.type and set the value property to one of the three options.

<Property ovf:key="guestinfo.cis.deployment.node.type" ovf:type="string" ovf:userConfigurable="false" ovf:value="infrastructure">

The fourth and final modification is to specify the deployment size you wish to use for your VCSA; there are nine different supported options:

  • Embedded
    • tiny
    • small
    • medium
    • large
  • vCenter Server Management Node (only)
    • management-tiny
    • management-small
    • management-medium
    • management-large
  • Platform Services Controller Node (only)
    • infrastructure

Since neither vCloud Director nor vCloud Air supports the Deployment Options OVF capability, you will need to specify the deployment you wish to use. Locate the DeploymentOptionSection and find the first entry showing "default=true"; change its id to match one of the entries shown above. For example, if you wanted an Embedded VCSA deployment using the tiny size, you would specify "tiny" in the id field.

  <DeploymentOptionSection>
    <Info>List of profiles</Info>
    <Configuration ovf:default="true" ovf:id="tiny">

Once you have selected the type of deployment, you will also need to remove ALL entries referencing the other deployment types, else it will always deploy an Embedded deployment.

Note: I would like to give a big shout-out to Doug Baer, who works over in the VMware HOL team; he actually discovered the initial issue with the Deployment Options and found the workaround of removing the other disk references. Without it, you would end up needing ~2TB of storage, as VCD tries to aggregate all nine deployments into one! When I initially worked out the steps to deploy a VCSA 6.0, I had only used the Embedded deployment option.

Step 4 - Lastly, you will need to change the "capacity" property, as seen below, from 1303 to 1306 due to a known vCloud Air issue documented in KB2094271.

<Disk ovf:capacity="1303" ovf:capacityAllocationUnits="byte * 2^20" ovf:diskId="cloudcomponents" ovf:fileRef="VMware-vCenter-Server-Appliance-6.0.0.5110-2656759-cloud-components.vmdk_id" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="1365573632"/>

Step 5 - You are now ready to upload your VCSA 6.0 OVF to your vCloud Director or vCloud Air environment.

Note: For vCloud Air, you will need to use the "Manage in vCloud Director" link to upload the OVF as the vCloud Air interface does not support direct OVA/OVF uploads.

Step 6 - When you are ready to deploy your VCSA, one very important step is to edit a few of the OVF properties on the VM before powering it on. If you power on the VCSA before performing this step, the system will need to be deleted and re-deployed, as the OVF properties are only read during the initial first boot, which is required for proper configuration.

  • Make sure to disable guest customization; to do so, right click on the VM, select Guest OS Customization and uncheck "Enable guest customization"
  • To edit the OVF properties, right click on the VM and select Properties. Click on Guest Properties; you will ONLY be editing the following three sections:

Networking Configuration

System Configuration

SSO Configuration

For an Embedded Configuration, you will need to edit the following (below is an example of the data input):

Host Network IP Address: 192.168.110.100
Host Network IP Address Family: ipv4
Host Network DNS Servers: 192.168.110.10
Host Network Default Gateway: 192.168.110.1
Host Network Mode: static
Host Network Identity: vc-01a.corp.local
Host Network Prefix: 24
Tools-based Time Synchronization Enable: check OR NTP Servers
Root Password: VMware1!
SSH Enabled: check/uncheck
Directory Domain Name: vghetto.local
New Identity Domain: check
Directory Password: VMware1!
Site Name: virtuallyGhetto

For a vCenter Server Management Node only, you will need to edit the following (below is an example of the data input):

Host Network IP Address: 192.168.110.100
Host Network IP Address Family: ipv4
Host Network DNS Servers: 192.168.110.10
Host Network Default Gateway: 192.168.110.1
Host Network Mode: static
Host Network Identity: vc-01a.corp.local
Host Network Prefix: 24
Tools-based Time Synchronization Enable: check OR NTP Servers
Platform Services Controller: psc-01a.corp.local
Root Password: VMware1!
SSH Enabled: check/uncheck
Directory Domain Name: vghetto.local
New Identity Domain: uncheck
Directory Password: VMware1!
Site Name: virtuallyGhetto

For a Platform Services Controller Node only, you will need to edit the following (below is an example of the data input):

Host Network IP Address: 192.168.110.110
Host Network IP Address Family: ipv4
Host Network DNS Servers: 192.168.110.10
Host Network Default Gateway: 192.168.110.1
Host Network Mode: static
Host Network Identity: psc-01a.corp.local
Host Network Prefix: 24
Tools-based Time Synchronization Enable: check OR NTP Servers
Root Password: VMware1!
SSH Enabled: check/uncheck
Directory Domain Name: vghetto.local
New Identity Domain: check
Directory Password: VMware1!
Site Name: virtuallyGhetto

If everything was deployed successfully, you should now have a VCSA 6.0 instance running in either your vCloud Director or vCloud Air environment.

Categories // Automation, OVFTool, vCloud Air, VCSA, vSphere 6.0 Tags // ova, ovf, ovftool, vcd, vcloud air, vcloud director, VCSA, vcva, vSphere 6.0

How to change/deploy VCSA 6.0 with default bash shell vs appliancesh?

03.06.2015 by William Lam // 10 Comments

When logging into the new VCSA 6.0 via SSH, you will notice that you are no longer dropped into a normal bash shell but into a new appliancesh (pronounced "appliance shell") environment. This new interface provides a basic set of virtual appliance management capabilities, including Ruby vSphere Console (RVC) access, which makes the majority of operations convenient for a vSphere Administrator, but it also helps restrict unnecessary access to the underlying filesystem, which can be helpful from a security standpoint.

If you need to access the underlying filesystem, you can temporarily enable it by running the following two commands:

shell.set --enabled True
shell

If you need to transfer files to or from the VCSA via SCP/WinSCP, you will need to change the default shell from /bin/appliancesh to /bin/bash, else the operation will fail. You can easily do this by using the chsh command:

chsh -s "/bin/bash" root

If you would rather have the bash shell configured as the default after deployment, and not have to go through this manual process each time, you can actually configure it using a hidden option called guestinfo.cis.appliance.root.shell.

This property allows you to specify the default shell for the "root" account, and you can only modify it if you deploy the VCSA using ovftool. Here is the parameter you would append to the ovftool argument list:

--prop:guestinfo.cis.appliance.root.shell="/bin/bash"
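As a rough illustration, the property simply rides along with the rest of your ovftool arguments when deploying the appliance; the OVA file name and target locator below are placeholders, and the many other required VCSA deployment properties are omitted for brevity:

ovftool --acceptAllEulas --prop:guestinfo.cis.appliance.root.shell="/bin/bash" VMware-vCenter-Server-Appliance-6.0.0.ova "vi://root@esxi-host/"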

You can leverage this new property to automate the deployment of the new VCSA 6.0; for more details, be sure to check out my VCSA 6.0 Automation Series.

Categories // Automation, OVFTool, VCSA, vSphere 6.0 Tags // appliancesh, guestinfo, ovftool, VCSA, vcva, vSphere 6.0

