Connecting Ansible to query HP v1910-24G

I use an HP v1910-24G in my lab because it is low power, quiet and has 24 gig ports. It works great for me.  I’m working with Ansible and wanted to query the switch through its limited SSH CLI.

Getting Ansible to query the thing over SSH was a bit of a pain. I was able to use the raw module plus paramiko to finally get a semi-working connection. My Ansible inventory looks like this:

[hpsw]
ansible_shell_executable='' ansible_connection=paramiko ansible_user='admin' ansible_ssh_pass='admin'


  • The first field on the host line is the IP address of my switch
  • ansible_shell_executable='' instructs the raw module not to add the typical '/bin/sh -c' prefix to the commands I send
  • ansible_connection=paramiko specifies using the paramiko Python SSH library rather than sshpass

User and password are self-explanatory.

The ad-hoc command is:

ansible hpsw -m raw -a '?'

This runs "?", which shows a list of available commands. The results are mixed: the command succeeds, but Ansible reports a failure:

There appears to be a bug in paramiko that I am running into. I made an edit to the Ansible Python source for the paramiko connection plugin (./lib/ansible/plugins/connection/) to specify the look_for_keys=False parameter when using password-based authentication (around line 220).
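The gist of my edit, sketched as standalone Python. The helper function and example values here are my own; inside Ansible the equivalent keyword arguments are built inline and passed to paramiko.SSHClient.connect().

```python
# Sketch of the behavior patched into Ansible's paramiko connection
# plugin: when password authentication is in use, tell paramiko not
# to go hunting for local key files first. Helper name and the
# example IP are illustrative, not from the original post.

def build_connect_kwargs(host, user, password=None, key_filename=None):
    """Build keyword arguments for paramiko.SSHClient.connect()."""
    kwargs = {
        "hostname": host,
        "username": user,
        "allow_agent": True,
        "look_for_keys": True,
    }
    if password is not None:
        kwargs["password"] = password
        # The fix: with password auth, stop paramiko from trying
        # ~/.ssh keys first, which trips up the switch's limited SSH.
        kwargs["look_for_keys"] = False
    if key_filename is not None:
        kwargs["key_filename"] = key_filename
    return kwargs

kw = build_connect_kwargs("192.0.2.10", "admin", password="admin")
print(kw["look_for_keys"])  # prints False
```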

Of course, Ansible had no issue connecting to my ESXi hosts since they use a standard SSH implementation.  Yay!
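For repeated queries, the same setup works from a playbook rather than ad-hoc. A minimal sketch (the playbook filename and task names are my own):

```yaml
---
# query_hpsw.yml - run the limited-CLI help command via the raw module
- hosts: hpsw
  gather_facts: false        # no Python on the switch, so skip fact gathering
  tasks:
    - name: List available commands on the v1910
      raw: "?"
      register: cli_help

    - name: Show the switch output
      debug:
        var: cli_help.stdout_lines
```

Run it with `ansible-playbook -i <inventory> query_hpsw.yml`; gather_facts must stay off since the raw module is the only one that works without Python on the target.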

Datastore Cluster / Storage DRS Handling of Imported VMs

This post outlines the behavior of datastore clusters and Storage DRS in a specific scenario:  Two vCenters, but all ESXi hosts accessing common datastores across both vCenters.  Note that I am not advising this configuration, but instead documenting product behavior while working within constraints of an existing environment.  OK, now that we’re clear, here goes…

The question:

  • What happens to VMs’ Storage DRS and Datastore Cluster relationship when their ESXi host is moved from one vCenter to another?

The scenario:

  • vCenter 1 “vc” has ESXi hosts with access to datastores and datastore clusters configured
  • vCenter 2 “vc2” has an ESXi host with access to the same datastores, but without datastore clusters

I created a new VM in vc2, placed on a common datastore (store-1), again not in a datastore cluster.

Now I am going to move the ESXi host from vCenter 2 to vCenter 1, along with the running test VM “goose”.

I disconnect the host from vc2, and then add it to vc.

We can see that the datastore cluster detected that the new ESXi host has visibility to its datastores:

The VM “goose” shows up in the related virtual machines tab for the datastore cluster.

And the VM “goose” has been configured automatically with the default Storage DRS VM settings.

Note the VM also reports it is placed on the datastore cluster, rather than an individual datastore.

The bottom line: this VM behaves as if it had been placed on the datastore cluster initially, and inherits all the default Datastore Cluster VM Settings.
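One way to verify this placement programmatically: with pyVmomi, a datastore that belongs to a datastore cluster has a vim.StoragePod as its parent. The sketch below captures that check with stand-in classes (the helper and dummy object names are mine) so the logic runs without a live vCenter.

```python
# Sketch: check whether a VM's datastore sits in a datastore cluster.
# With pyVmomi, a clustered datastore's parent managed object is a
# vim.StoragePod; plain datastores live under an ordinary folder.
# The classes below are stand-ins so the logic is self-contained.

def in_datastore_cluster(datastore):
    """True if the datastore's parent managed object is a StoragePod."""
    return type(datastore.parent).__name__ == "StoragePod"

class StoragePod:           # stand-in for vim.StoragePod
    name = "datastore-cluster-1"

class Folder:               # stand-in for a plain datastore folder
    name = "datastore-folder"

class Datastore:            # stand-in for vim.Datastore
    def __init__(self, parent):
        self.parent = parent

print(in_datastore_cluster(Datastore(StoragePod())))  # prints True
print(in_datastore_cluster(Datastore(Folder())))      # prints False
```

Against a real vCenter the same check would run over vm.datastore for each VM, using the actual pyVmomi managed objects instead of the stand-ins.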