Generate example naming schemes

Compare and contrast two naming schemes. Playing with creative use of arrays.
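
To make this concrete, here’s a minimal Python sketch contrasting two kinds of schemes built from plain arrays. The scheme choices themselves (a functional site-role-index pattern versus a themed word list) and every site code, role, and theme word below are made-up placeholders for illustration only:

```python
# Sketch: build example hostnames under two naming schemes using plain arrays.
# All site codes, roles, and theme words are made-up placeholders.

sites = ["nyc", "lon"]          # location codes (hypothetical)
roles = ["esx", "sql", "web"]   # server roles (hypothetical)
themes = ["apollo", "artemis", "athena", "hermes", "helios", "hera"]

# Scheme 1: functional -- <site>-<role>-<index>, scales with the arrays
functional = [f"{site}-{role}-{i:02d}"
              for site in sites
              for role in roles
              for i in (1, 2)]

# Scheme 2: themed -- hand each new host the next unused word from the array
themed = themes[:len(functional)]   # runs dry once the word list is exhausted

print(f"Functional ({len(functional)} names):", functional)
print(f"Themed ({len(themed)} names):", themed)
```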

Normal Resource Pools

Goal:

To demonstrate that child objects in a resource pool receive a share of the parent object’s assigned resources.

This is in contrast to an incorrect model, in which assigning a VM to a same-level (“Normal” shares) resource pool results in top-level resources being distributed equally across all VMs at that “Normal” share level:

The correct model is shown on the left, and the incorrect model on the right.

The setup:

  • Three VMs running on a single host, CPU affinity set to pin all vCPUs to the same processor.  Each VM has a single vCPU assigned.
  • Two resource pools, set to “Normal” CPU shares
  • Used “stress” (Linux) and a pi digit calculator (Windows) to peg the guest OSes at 100% CPU

I assigned the VMs into pools as follows: node1 into the ImNormalTool pool, and winbox1 and winbox2 into the Normal pool.

With the VMs under stress, CPU is allocated as follows:

  • ImNormalTool, 50% -> node1 (50%)
  • Normal, 50% -> winbox1 (25%), winbox2 (25%)

Here’s a screenshot of esxtop to confirm this result:

When designing resource pools, keep this allocation model in mind.  For example, let’s say two equal resource pools exist:

  • Pool 1 (50% CPU) -> 1 VM (50%)
  • Pool 2 (50% CPU) -> 2 VMs (25% per VM)

Now, an administrator adds 3 more VMs to Pool 2:

  • Pool 1 (50% CPU) -> 1 VM (50%)
  • Pool 2 (50% CPU) -> 5 VMs (10% per VM)

At this point, the first VM receives 50% of CPU cycles (under contention), while the other 50% of CPU cycles are distributed among the other 5 VMs.

This may or may not be what was intended by the designer…!
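
For clarity, here’s a minimal sketch of that arithmetic. It assumes only two sibling pools with equal (“Normal”) shares, identical fully busy single-vCPU VMs, and no reservations or limits in play; the vm1–vm6 names in the second example are placeholders:

```python
# Sketch of CPU share allocation under contention: sibling pools with equal
# ("Normal") shares split the parent's resources evenly, then each pool splits
# its slice evenly among its own (identical, fully busy) VMs.

def allocate(pools: dict[str, list[str]]) -> dict[str, float]:
    """Return each VM's effective share of total CPU, in percent."""
    pool_share = 100.0 / len(pools)        # equal split between sibling pools
    return {vm: pool_share / len(vms)      # pool's slice divided among its VMs
            for vms in pools.values()
            for vm in vms}

# The lab setup from above: node1 alone vs. winbox1/winbox2 together
print(allocate({"ImNormalTool": ["node1"],
                "Normal": ["winbox1", "winbox2"]}))
# -> {'node1': 50.0, 'winbox1': 25.0, 'winbox2': 25.0}

# The design example: 5 VMs land in Pool 2 while Pool 1 still holds one VM
print(allocate({"Pool 1": ["vm1"],
                "Pool 2": [f"vm{i}" for i in range(2, 7)]}))
# -> {'vm1': 50.0, 'vm2': 10.0, ..., 'vm6': 10.0}
```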

vMotion Traffic Management on UCS

This has been discussed elsewhere; I’m just putting it here as a quick reference for UCS traffic flow as it pertains to vMotion.

To minimize north-south traffic caused by vMotions, the distributed virtual switch can be configured to prefer Fabric A for vMotion vmkernel ports.

As illustrated in the diagrams, Layer 2-adjacent traffic between vNICs provisioned to the same UCS Fabric will remain internal to the UCS domain.  The Fabric Interconnect handles the packet switching in this case.

(I color-coded the vNICs to show their relationship to Fabric A or Fabric B)

In the case of cross-fabric vNIC communication, packets will traverse the upstream LAN to bridge between UCS Fabric A and Fabric B.

L3 traffic always traverses the upstream LAN for routing, unless you are using some sort of virtual router like, say, NSX…
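
Purely as a toy model (not any real API), the path selection described above boils down to a couple of comparisons. The function name and its inputs are hypothetical; you would supply the fabric assignments and the same-subnet check yourself:

```python
# Toy model of the UCS traffic paths described above -- not a real API, just
# the decision logic: same-fabric L2 stays on the Fabric Interconnect,
# cross-fabric L2 bridges over the upstream LAN, and L3 is routed upstream
# (absent a virtual router such as NSX).

def traffic_path(src_fabric: str, dst_fabric: str,
                 same_subnet: bool, virtual_router: bool = False) -> str:
    if not same_subnet:
        return "virtual router" if virtual_router else "routed via upstream LAN"
    if src_fabric == dst_fabric:
        return "switched locally by the Fabric Interconnect"
    return "bridged across the upstream LAN (Fabric A <-> Fabric B)"

print(traffic_path("A", "A", same_subnet=True))   # stays inside the UCS domain
print(traffic_path("A", "B", same_subnet=True))   # hairpins via the upstream LAN
print(traffic_path("A", "A", same_subnet=False))  # L3: routed upstream
```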

The traffic pattern above can be configured in the VMware Distributed Virtual Switch by modifying the uplink failover order on the DVS port group used by the vMotion-enabled VMkernel ports: set the Fabric A vNICs to “active” and the Fabric B vNICs to “standby”.

The screenshot shown is an example for a DVS with two uplinks, and presumes that dvUplink1 is associated with Fabric A and dvUplink2 with Fabric B.
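
If you prefer to script the change, here is a rough pyVmomi sketch of the same idea. It assumes a port group object (pg) has already been retrieved for the vMotion VMkernel port group (connection and inventory lookup omitted) and reuses the dvUplink1/dvUplink2-to-fabric mapping assumed above; treat the exact type paths as a sketch rather than copy-paste-ready code.

```python
# Rough pyVmomi sketch: set active/standby uplinks on a vMotion dvPortgroup.
# Assumes 'pg' is a vim.dvs.DistributedVirtualPortgroup already looked up via
# pyVmomi, and that dvUplink1 / dvUplink2 map to Fabric A / Fabric B as above.
from pyVmomi import vim

def prefer_fabric_a(pg, active=("dvUplink1",), standby=("dvUplink2",)):
    order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False,
        activeUplinkPort=list(active),
        standbyUplinkPort=list(standby),
    )
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        inherited=False,
        uplinkPortOrder=order,
    )
    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming,
    )
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,   # required for reconfigure
        defaultPortConfig=port_config,
    )
    return pg.ReconfigureDVPortgroup_Task(spec=spec)  # returns a vCenter task
```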

This is just one approach; there are definitely alternatives.  Note that this approach doesn’t play so well with the Nexus 1000v: although you could pin the vMotion vEthernet interface to a manually defined channel subgroup, I haven’t been able to identify a way to keep the redundancy of failover in that model.

A totally different approach calls for configuring the vMotion network redundancy at the chassis level.  On the UCS this would mean configuring the vNIC with fabric failover enabled.  This is a possibility, although it is somewhat idiomatic to a pure UCS environment.

In this particular case I decided to keep all vNICs configured the same (as VLAN trunks) due to two design considerations: simplicity and consistency across platforms.