
Whitepaper: Migrating From VIX API to the vSphere Guest Operations API

07.09.2013 by William Lam // 7 Comments

The VMware VIX API is, in my opinion, still one of the most powerful and undervalued APIs available to customers and partners for Virtual Machine guest operating system automation. The VIX API allows you to perform guest operations such as starting/stopping an application, manipulating files and directories, and uploading/downloading files, all within the guest operating system and without requiring any network connectivity to the Virtual Machine. This is all made possible through VMware Tools running inside of the Virtual Machine, and operations are only performed after a user of the guestOS is properly authenticated.

Guest Operations using vSphere API

The use cases for such an API are endless:

  • Network reconfiguration (Re-IP for DR or misconfiguration)
  • Operating System configurations
  • Application configurations or deployments (example of this)
  • Backup/Restore for individual files
  • Downloading log files for troubleshooting
  • The list goes on ....

The VIX API was first introduced as a separate client API supporting VMware's hosted products such as VMware Fusion, Workstation and Player, and it later added support for VMware vSphere. The API was quite popular for the hosted products, and with the release of vSphere 5.0 the VIX API was finally integrated into the vSphere API, providing a single API that can manage all aspects of vSphere as well as these new guest operations for your Virtual Machines. With this integration, these APIs are now known as the vSphere Guest Operations API.

If you are familiar with the VIX API and would like to migrate to the new Guest Operations API within vSphere, there is a really useful whitepaper I recently came across called Transporting VIX Guest Operations to the vSphere API, which provides a nice mapping of the API methods between VIX and the new vSphere Guest Operations API. The whitepaper also includes various code samples using Java, PowerCLI cmdlets and the vSphere SDK for Perl to demonstrate the new Guest Operations APIs.
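To give you a quick feel for what the Guest Operations API looks like in code, here is a minimal sketch using the open source pyVmomi Python bindings for the vSphere API. This is my own illustration rather than a sample from the whitepaper, and the hostnames, credentials and paths are placeholders:

# Minimal sketch: run a program in a guest via the vSphere Guest Operations API.
# Assumes pyVmomi is installed; all names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='vcenter-password',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the target VM (lookup by DNS name, for brevity)
vm = content.searchIndex.FindByDnsName(dnsName='myvm.example.com', vmSearch=True)

# Authenticate as a user inside the guestOS; no network access to the VM is needed
creds = vim.vm.guest.NamePasswordAuthentication(username='guestuser',
                                                password='guest-password')

# Start a program in the guest through the ProcessManager
pm = content.guestOperationsManager.processManager
spec = vim.vm.guest.ProcessManager.ProgramSpec(programPath='/bin/ls',
                                               arguments='-l /tmp')
pid = pm.StartProgramInGuest(vm=vm, auth=creds, spec=spec)
print('Started guest process with PID %d' % pid)

Disconnect(si)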

I think every vSphere administrator and developer should be familiar with the capabilities of the VIX and Guest Operations APIs and how they can help further automate and manage guestOSes and the applications that run inside of them.

Additional Resources:

  • VIX API Home Page
  • Automating the New Integrated VIX/Guest Operations API in vSphere 5

Categories // Uncategorized Tags // api, guest operations, vix, vix api, vSphere, vSphere 5.0

Emulating an SSD Virtual Disk in a VMware Environment

07.03.2013 by William Lam // 32 Comments

I continue to be amazed every day at all the awesome features and challenges being tackled by our VMware Engineering organization, and yesterday was another example of that. A question was posed internally about emulating an SSD device for a Nested ESXi environment running in VMware Fusion. I figured this would be an easy answer and pointed the user to a blog article I had written a few years ago on how to fake an SSD device in ESXi using SATP claim rules via ESXCLI. It turns out one of the engineers knew of a better way of emulating an SSD Virtual Disk, one that can be consumed beyond just Nested ESXi VMs, by any other guestOS that supports SSD devices.

So why would you want to emulate an SSD device? Well, for a vSphere environment, you may want to try out the new Swap to Host Cache feature from a functional perspective to see how it would work. You might be developing a script to enable this feature, and having a "fake" SSD device would allow you to create such a script and test it. For other guestOSes, maybe you want to see how the system would react to an SSD device; perhaps drivers or configurations may be needed, and you would like to run through those processes before installing a real SSD device.
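As an example of such a script, here is a hedged pyVmomi sketch of enabling Swap to Host Cache through the vSphere API's ConfigureHostCache_Task method; the host name, datastore name and swap size below are placeholders:

# Sketch: enable Swap to Host Cache on an SSD-backed datastore via the vSphere API.
# Assumes pyVmomi; host name, datastore name and swap size are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='vcenter-password',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the ESXi host and the SSD-backed datastore attached to it
host = content.searchIndex.FindByDnsName(dnsName='esxi01.example.com',
                                         vmSearch=False)
ds = [d for d in host.datastore if d.name == 'ssd-datastore'][0]

# Allocate 4096 MB of the datastore as host cache for VM swap
spec = vim.host.CacheConfigurationManager.CacheConfigurationSpec(
    datastore=ds, swapSize=4096)
task = host.configManager.cacheConfigurationManager.ConfigureHostCache_Task(spec=spec)

Disconnect(si)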

As for emulating the SSD itself, the solution is actually quite simple: it is just an advanced setting in the Virtual Machine's configuration file (VMX), which can also be appended using the vSphere Web Client, the vSphere C# Client or the vSphere API. This setting is only supported on Virtual Machines running virtual hardware 8 or greater. To configure a specific virtual disk to appear as an SSD, you just need to add the following:

scsiX:Y.virtualSSD = 1

where X is the controller ID and Y is the disk ID of the Virtual Disk.
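If you prefer to add the setting programmatically, here is a minimal pyVmomi sketch, assuming placeholder names and a disk at scsi0:1 on a VM that meets the virtual hardware 8 requirement:

# Sketch: append scsi0:1.virtualSSD = 1 to a VM's configuration via the vSphere API.
# Assumes pyVmomi; the VM name and disk position (scsi0:1) are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='vcenter-password',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

vm = content.searchIndex.FindByDnsName(dnsName='myvm.example.com', vmSearch=True)

# extraConfig entries map directly to advanced settings in the VMX file
opt = vim.option.OptionValue(key='scsi0:1.virtualSSD', value='1')
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(extraConfig=[opt]))

Disconnect(si)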

This configuration sets the mediumRotationRate field of the SCSI inquiry page 0xB1 response to 1, and the guestOS will then report the disk as a solid-state device. As you can see, this can benefit more than just running Nested ESXi; you can also perform various tests on any other guestOS that requires a "fake" SSD device.

Note: Though you can emulate an SSD device, it is no substitute for an actual SSD device, and any development or performance tests done in a simulated environment should also be vetted on a real SSD device, especially when it comes to performance.

It is also important to note that the reporting of an SSD device will depend highly on the guestOS. Here is a high-level table of how some of the common guestOSes recognize SSD devices.

GuestOS                  SSD Reporting
Windows 8                IDE, SCSI and SATA disks can be recognized as SSDs
Windows 7                IDE and SATA disks can be recognized as SSDs, but SCSI disks as mechanical
Linux (Ubuntu & RHEL)    IDE, SCSI and SATA disks can be recognized as SSDs
Mac OS X                 SATA disks can be recognized as SSDs, but IDE and SCSI disks as mechanical
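For Linux guests, a quick way to verify what the guestOS actually sees is the sysfs rotational flag, where 0 means the kernel treats the disk as solid-state; the device name below is a placeholder:

# Quick guest-side check on Linux: 0 = non-rotational (SSD), 1 = rotational disk.
# 'sda' is a placeholder; adjust it to the disk you tagged with virtualSSD.
with open('/sys/block/sda/queue/rotational') as f:
    rotational = f.read().strip()
print('sda is %s' % ('an SSD' if rotational == '0' else 'a rotational disk'))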

Here is a screenshot of a Nested ESXi host with an emulated SSD device:

Here is a screenshot of the new Windows 8.1 Preview with an emulated SSD device:

Note: Though I demonstrated this using vSphere, this also works for VMware Fusion (I tested this personally), Workstation and Player. The only requirements are that you run virtual hardware 8 or greater and that your guestOS supports SSD device reporting.

From a Nested ESXi perspective, I will definitely be using this method instead of going through the SATP claim rules with ESXCLI; this is much easier to remember. I would also like to thank Regis Duchesne for sharing this tip, and Srinivas Singavarapu and the virtual devices team for developing this awesome feature. You guys ROCK!

Categories // Uncategorized Tags // ESXi, solid state drive, ssd, virtual disk, vmdk, vSphere

How to Quickly Get Started with VMware vSphere & OpenStack?

07.01.2013 by William Lam // 20 Comments

Kenneth Hui recently published a number of interesting articles diving into the latest VMware vSphere integration with the OpenStack Grizzly release, called OpenStack For VMware Admins: Nova Compute With vSphere Part 1 and Part 2. There has definitely been a lot of chatter around OpenStack lately, and I agree with Kenneth that there is a lot of confusion around the topic in general. Although I have not used OpenStack personally, one very important concept to understand is that OpenStack is really just a framework for building a Cloud solution composed of best-of-breed products that plug into the underlying compute, network, storage and management infrastructure.

One example of this is OpenStack's Nova compute component, which supports a variety of hypervisor solutions including KVM, Xen and now also VMware vSphere. Another example is OpenStack's Neutron (formerly Quantum) networking component, which also supports a variety of networking platforms, including the leader in this space, VMware's Nicira NVP (Network Virtualization Platform).

Having said all that, since I have never worked with OpenStack before, I thought this would be a great opportunity to give OpenStack a test run in my vSphere home lab environment. With a quick Google search, I found an OpenStack Wiki guide for setting up VMware's Nova integration, and I thought I would be able to just follow that. As it turns out, some of the commands no longer function due to recent code changes in OpenStack, and the instructions were also incomplete for a few steps. With the assistance of the OpenStack development team at VMware, I was able to get everything working, and I wanted to share the details while the Wiki gets corrected.

Here is a diagram of what a vSphere and OpenStack solution could look like; we will primarily be focusing on the Nova component:

Prerequisites:

  • vSphere-ready environment with vCenter Server and at least one ESXi host (I recommend using the vCenter Server Appliance for a quick setup)
  • vanilla installation of Ubuntu 12.04 LTS (you can find more details here)

Here is what my vSphere inventory looks like, and the nice thing about this is that you can use an existing vSphere environment. As you can see, I have my Apple Mac Mini running ESXi, which is also hosting my vCenter Server along with my OpenStack virtual machine.

Installation:

Step 1 - Install git, which we will use to clone the latest DevStack. DevStack is basically a huge shell script that helps you quickly stand up an OpenStack instance for testing/development, since installing OpenStack is not a trivial task. Run the following commands on your Ubuntu OpenStack host:

sudo apt-get -y install git
git clone http://github.com/openstack-dev/devstack.git
cd devstack

Step 2 - Next, we will set up a tun/tap interface, which provides userspace networking and helps ensure we do not disturb the primary interface (eth0) used to connect to the OpenStack VM. Run the following commands:

sudo ip tuntap add dev tapfoo mode tap
sudo ifconfig tapfoo 172.30.0.1 up

Note: You can select any IP Address that is not being used; I chose 172.30.0.1

To confirm the software interface was created correctly, run the ifconfig command; you should see a "tapfoo" interface with the IP Address you specified above.

Step 3 - Now we need to create a file called localrc in the devstack directory with the configurations listed below, which DevStack will use to build and configure our OpenStack instance.

ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-cpu,n-net,n-cond,n-sch,rabbit,mysql,horizon
VIRT_DRIVER=vsphere
VMWAREAPI_IP=192.168.1.127
VMWAREAPI_USER=root
VMWAREAPI_PASSWORD=vmware
VMWAREAPI_CLUSTER=Cluster
DATABASE_PASSWORD=nova
RABBIT_PASSWORD=nova
SERVICE_TOKEN=nova
SERVICE_PASSWORD=nova
ADMIN_PASSWORD=nova
FLAT_INTERFACE=tapfoo
HOST_IP=192.168.1.143
SCREEN_LOGDIR=/home/primp/devstack-logs
SYSLOG=True
SYSLOG_HOST=192.168.1.104
SYSLOG_PORT=514

The configurations from ENABLED_SERVICES through HOST_IP are required, whereas the logging-related ones (SCREEN_LOGDIR, SYSLOG, SYSLOG_HOST and SYSLOG_PORT) are optional; I will explain those in a bit.

VMWAREAPI_IP is the IP Address of your vCenter Server
VMWAREAPI_PASSWORD is the password of your vCenter Server
VMWAREAPI_CLUSTER is the name of the vSphere Cluster if you have one, else you can leave it blank
HOST_IP is the IP Address of your OpenStack Ubuntu host

Optional configurations:

SCREEN_LOGDIR will write all of the OpenStack logs to a directory of your choice. By default, DevStack logs to standard out, which is only visible through the Screen session of each component; that is neither user friendly nor easy for troubleshooting.

If you wish to forward OpenStack logs to a remote syslog host, you can also enable the following three configurations, which should be pretty straightforward:

SYSLOG=True
SYSLOG_HOST is the IP Address of your remote syslog host (more details on this towards the bottom)
SYSLOG_PORT is the port of your remote syslog host, default will be 514

Note: If you want to learn about other DevStack localrc options, take a look at the documentation here

Step 4 - We are now ready to build and deploy our OpenStack instance. To start, just run the following command:

./stack.sh

This process will take a few minutes, depending on how fast your system is and on the connection used to download all the necessary packages. If everything was successful, you should see a summary of how to log in to your OpenStack instance along with the URL for the Horizon UI, as shown in the screenshot below.

Step 5 - Go ahead and confirm you can access the Horizon UI by opening up a browser and pointing it to the IP Address of your OpenStack instance.

Step 6 - To start using OpenStack, we first need to upload a virtual machine disk to OpenStack's Glance component, which handles VM images. There is a sample Debian VMDK available on the OpenStack Wiki that we will download to our OpenStack instance. To do so, we will set our credentials on the command line for the next step and perform a wget to download the VMDK by running the following commands:

source openrc demo demo
wget https://www.dropbox.com/s/utvri5bw3zztty6/Debian-flat.vmdk

Step 7 - We will now use the Glance CLI to upload our virtual disk, and we can also list it once it is uploaded:

glance image-create --name Debian --is-public=True --container-format=bare --disk-format=vmdk --property vmware-disktype="preallocated" < Debian-flat.vmdk
glance image-list

Step 8 - To deploy a new instance of the image we just uploaded, we will switch over to the Nova CLI, specify the Image ID from the previous step, and run the following command, which will deploy the VM to our vSphere environment.

nova boot --image <Image-ID> --flavor 1 my-first-openstack-vm
nova list

Step 9 - We can continue to run "nova list" to view the status, but it would be more interesting to see this from the Horizon UI. You can head over to the OpenStack UI and watch the progress under the Instances tab.

Once the VM is ready, we should see an IP Address assignment, the status set to ready, and the VM should show as powered on.

To confirm that we have actually provisioned the VM onto our vSphere compute cluster, we can log in to either the vSphere Web Client or the vSphere C# Client, and we should see our newly deployed VM running.

If you wish to deploy using the Horizon UI instead, you can go to Project -> Instances -> Launch Instance, go through the wizard selecting the image and specifying a name and configuration flavor, and then click Launch once you are ready to deploy.

Step 10 - Once you are finished, you can run the ./unstack.sh command, which will reset and clean up your environment and delete any images that were uploaded. Again, DevStack is not meant for running production workloads, but it can be used for quickly testing or developing against OpenStack. You can also view the consoles of each of the OpenStack components by using screen -x stack.

Using DevStack, you can quickly get a basic OpenStack instance up and running without too much hassle, but that is not to say that OpenStack is easy or trivial to install. If this is your first time, I would highly recommend configuring your localrc to store the logs in a directory so you can either go through them yourself if you run into any issues or, more likely, forward them to an OpenStack expert to help you decipher them. I personally ran into a few issues, partially due to some errors in the Wiki, and troubleshooting can be like searching for a needle in a haystack.

DevStack Syslog Configuration

If you recall, earlier in the localrc configuration there is a section that specifies remote syslog settings for the OpenStack instance. Since I am a fan of the new vCenter Log Insight product that was just released as a beta from VMware, I thought it would be neat to forward the OpenStack logs to it. After a bit of trial and error, it turns out that DevStack configures rsyslog (the syslog daemon running on the Ubuntu host) to forward logs using the RELP format, which is not supported by vCenter Log Insight. If you want to get this working, you will need to disable the RELP format by tweaking the rsyslog configuration in /etc/rsyslog.d/90-stack-s.conf.

You will need to replace the :omrelp: line:

*.*             :omrelp:192.168.1.104:514

with just @@, rsyslog's plain TCP forwarding syntax:

*.*             @@192.168.1.104:514

Finally, you need to restart the rsyslog service for the changes to take effect by running the following command:

service rsyslog restart

If we log in to our vCenter Log Insight UI, we should now see our OpenStack instance logging remotely. Note that once you unstack and run stack again, the configuration will revert to the original.

Additional Resources:

  • vSphere + OpenStack Nova wiki guide
  • OpenStack CLI reference
  • Screen command reference

Categories // Uncategorized Tags // DevStack, nova, OpenStack, SDDC, software defined datacenter, vC Log, vCenter Log Insight, vmware, vSphere

