vRealize Orchestrator additionalPropertyFilters Explanation


The Orchestrator VcPlugin getAll…() methods (such as VcPlugin.getAllDatastores(), VcPlugin.getAllVirtualMachines(), etc.) accept two parameters, additionalPropertyFilters and xpath.  Use of xpath has been well documented, but I had trouble finding information on additionalPropertyFilters.  Then, I found this explanation in the vRO Coding Design Guide, page 26:

For performance reasons, the returned objects are not fully populated with all of the properties but a pre-defined set of properties. The initial set can be determined by looking at the plugin inventory – all properties displayed in the inventory are in the initial properties set. If additional properties are needed, specify them during the getAll…(…) call so that the request from the vCenters can get a combined set of properties upfront. If this is not done and extra properties are referenced by the workflow or action then a remote call is made each time to vCenter to create a vCenter filter for the tuple <vm, property name>, which wastes resources both for remote execution and for filter management on vCenter side.

Ah hah!


How To Use additionalPropertyFilters

A quick look at vroapi.com tells me the syntax is getAll…(string[], string).  As a test, I created a string array containing the names of all properties of the VcVirtualMachine class and ran this code successfully:
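For reference, a sketch of the kind of call described (vRO JavaScript; the property list here is a small illustrative subset, not the full set of VcVirtualMachine properties):

```
// Ask vCenter for extra properties up front so that later property
// reads do not each trigger a remote call back to vCenter.
var extraProps = ["summary", "config", "guest", "runtime"];
var vms = VcPlugin.getAllVirtualMachines(extraProps, null);
System.log("Retrieved " + vms.length + " virtual machines");
```

The second parameter is the xpath filter, left null here since we only care about the property set.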



Performance Gains?

After performance testing, I found an average gain of 190ms per VM when reading all of the VcVirtualMachine’s base-level attributes.  However, I’m using Date.now() for timing, which is less granular than I would like, and this wasn’t exactly the most controlled test – other environmental factors may have impacted performance, so your results may vary.

For the performance test, I ran 30 iterations of VcPlugin.getAllVirtualMachines() and averaged execution time with and without the additionalPropertyFilters parameter: 195ms and 165ms, respectively.

Then I read all base-level properties from the first 200 virtual machines, using the object returned by each of the two types of calls, and averaged that time per VM.  I felt that this should trigger the “remote call… to vCenter” referenced above.


  • Per-VM property read time without additionalPropertyFilters: About 240ms
  • Per-VM property read time with additionalPropertyFilters: About 50ms
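The averaging itself is simple; a minimal sketch of such a harness in plain JavaScript (averageMs is an illustrative helper, not a vRO API, and Date.now() is the same coarse millisecond timer noted above):

```javascript
// Run fn n times and return the average wall-clock duration in ms,
// measured with Date.now() (millisecond granularity at best).
function averageMs(fn, n) {
  var total = 0;
  for (var i = 0; i < n; i++) {
    var start = Date.now();
    fn();
    total += Date.now() - start;
  }
  return total / n;
}
```

In vRO, the measured function would wrap VcPlugin.getAllVirtualMachines() or the per-VM property reads.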


Hope this helps.



Cross-vCenter VM Clone using vRealize Orchestrator

vRealize Orchestrator includes a built-in workflow to clone VMs, but it does not handle cross-vCenter VM cloning automatically.  This code is an early example of the necessary parameters to perform a cross-vCenter VM clone using Orchestrator.  I hope it can serve as a starting point for others.

There are some parts of this code that need work, namely:

  • The workflow relies on input values for username and password to authenticate to the destination vCenter.  I would prefer to use token-based authentication from the VcPlugin’s connection to vCenter.
  • vCenter SSL Thumbprint is provided manually via input value.  I would prefer to find the SSL thumbprint from the VcPlugin’s existing trusted relationship with the destination vCenter.
  • VcPlugin.allSdkConnections[1] is not the right way to get the destination vCenter sdk connection.  This is the easiest one to fix.
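For context, a hedged sketch of the pieces such a clone needs, using the vCenter plug-in’s scripting classes.  All of the input names here (destSdkUrl, destInstanceUuid, sslThumbprint, username, password, destHost, destPool, destDatastore, destFolder, cloneName, sourceVm) are illustrative workflow inputs, and the username/password and thumbprint caveats above apply:

```
// Illustrative sketch: the relocate portion of a cross-vCenter clone.
// The ServiceLocator tells the source vCenter how to reach the destination.
var serviceLocator = new VcServiceLocator();
serviceLocator.url = destSdkUrl;                 // e.g. the destination vCenter SDK URL
serviceLocator.instanceUuid = destInstanceUuid;  // destination vCenter instance UUID
serviceLocator.sslThumbprint = sslThumbprint;    // provided manually (see caveat above)

var cred = new VcServiceLocatorNamePassword();
cred.username = username;                        // caveat: prefer token-based auth
cred.password = password;
serviceLocator.credential = cred;

var relocateSpec = new VcVirtualMachineRelocateSpec();
relocateSpec.service = serviceLocator;
relocateSpec.host = destHost;
relocateSpec.pool = destPool;
relocateSpec.datastore = destDatastore;

var cloneSpec = new VcVirtualMachineCloneSpec();
cloneSpec.location = relocateSpec;
cloneSpec.powerOn = false;
cloneSpec.template = false;

var task = sourceVm.cloneVM_Task(destFolder, cloneName, cloneSpec);
```

The key difference from a same-vCenter clone is the service property on the relocate spec; everything else matches the normal clone workflow.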



vRA 7 vSphere Data Collection, Error processing [inventory]…


After vRealize Automation Center 7 initial data collection on a new vSphere compute resource, when trying to create a reservation, no compute, storage, or network paths are available for the resource.  This is because all hosts in the vSphere cluster were in maintenance mode.

I see the following error in Infrastructure -> Monitoring -> Log:

Error processing [inventory], error details: Value cannot be null. Parameter name: collection

And this related error (I’ve replaced identifiers):

DC: {guid}: inventory: {endpoint: compute}: Received failed data collection response, StatusID = {guid} : Value cannot be null. Parameter name: collection


All hosts in the vSphere cluster were in maintenance mode, so data collection failed with this error.  After exiting maintenance mode, I was able to run data collection again (Infrastructure -> Compute Resource -> Compute Resources -> Hover on resource -> Data Collection -> Inventory -> Request Now).  This fixed the issue.

Blog entry created as searching for the error verbatim did not yield any results.

ks.cfg Command Changes from ESXi 5.5 to ESXi 6.0

The vSphere 5.5 documentation includes a helpful page outlining the ESXi installer script changes between 4.x and 5.x.

The vSphere 6.0 documentation does not have a corresponding page documenting the ESXi scripted install changes from vSphere 5.5 to vSphere 6.0, probably because the changes are less significant. Two switches have been removed, and both are related to upgrading from ESX/ESXi 4.x.

ESXi Scripted Install, Switches Removed in vSphere 6.x

Commands: installorupgrade, upgrade

  • --forcemigrate: Force continue when upgrading from an ESX(i) 4.x host with custom software that is not found on the installation media
  • --deletecosvmdk: Previously used when migrating from ESX to ESXi
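For comparison, a minimal ESXi 6.x scripted upgrade section simply omits both switches; something like this (disk selection is illustrative):

```
vmaccepteula
upgrade --firstdisk=local
reboot
```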

vRA 7 Allow Override on VirtualMachine.Network0.Name Custom Property

In vRealize Automation Center 7, some changes have been made to the way vSphere machine networks are configured.  You may still want to use the VirtualMachine.Network0.Name custom property to specify a particular port group at request time.  However, in at least one case, I am seeing the “override” property value reset to “no” after saving the blueprint, which no longer allows the property value to be overridden when the blueprint is requested.  NOTE – it looks like the override behavior has been resolved; verified with vRA 7.0.1 build 3621464.


This is a very simple one-machine blueprint that references an external network (this issue occurs regardless of the presence of the external network config).

The external network is tied to a network profile that references multiple port groups.  So we want to use a custom property to let the port group be specified at request time.  Previously you would go to the Properties tab for this machine and set the custom property there.  However, vRA 7 doesn’t want us to do that:

OK, no problem.  Examining the Network tab of the vSphere machine component:

Clicking Custom Properties reveals this window where VirtualMachine.NetworkN.Name or other custom properties can be specified:

However, after ensuring Show in Request and Overridable are both checked, saving the blueprint, and clicking Finish, this option gets reset to Overridable=no.  NOTE – Looks like this is fixed in 7.0.1 build 3621464.  The overridable property can be set from the Network -> Custom Properties tab as shown here.

The end result is that the VM cannot be custom wired to a specific port group when a user requests this blueprint from the catalog.




So, let’s fix this.  The solution is to create a property group containing our custom property, and associate that property group to the VM component in this blueprint.

Administration -> Property Dictionary -> Property Groups, Add…

Here’s what the VM component properties look like now that we have added this:

Now we can specify the value for this property at request time:

Much better.