This has been discussed elsewhere; I’m just putting it here as a quick reference for UCS traffic flow as it pertains to vMotion.
To minimize north-south traffic caused by vMotions, the distributed virtual switch can be configured to prefer Fabric A for vMotion vmkernel ports.
As illustrated in the diagrams, layer 2-adjacent traffic between vNICs provisioned on the same UCS fabric remains internal to the UCS domain; the Fabric Interconnect handles the packet switching in this case.
(I color-coded the vNICs to show their relationship to Fabric A or Fabric B)
In the case of cross-fabric vNIC communication, packets will traverse the upstream LAN to bridge between UCS Fabric A and Fabric B.
L3 traffic always traverses the upstream LAN for routing, unless you are using some sort of virtual router like, say, NSX…
The traffic pattern above can be configured in the VMware Distributed Virtual Switch by modifying the uplink failover order for the DVS port group used by the vMotion-enabled VMkernel ports: “active” on the Fabric A vNICs and “standby” on the Fabric B vNICs.
The screenshot shows an example for a DVS with two uplinks and presumes that dvUplink1 is associated with Fabric A and dvUplink2 with Fabric B.
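If you’d rather script the change than click through the vSphere client, something like the following pyVmomi sketch should do it. The port group name, uplink names, and vCenter credentials here are placeholders for illustration, so adjust them to match your environment.

```python
# Rough pyVmomi sketch: set the vMotion port group's uplink failover order
# to active on dvUplink1 (Fabric A) and standby on dvUplink2 (Fabric B).
# Port group name, uplink names, and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def set_vmotion_uplink_order(si, pg_name, active, standby):
    content = si.RetrieveContent()
    # Locate the distributed port group by name (assumed unique here)
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in view.view if p.name == pg_name)
    view.DestroyView()

    # Build a reconfigure spec that only touches the uplink port order
    order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False, activeUplinkPort=active, standbyUplinkPort=standby)
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        inherited=False, uplinkPortOrder=order)
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            uplinkTeamingPolicy=teaming))
    WaitForTask(pg.ReconfigureDVPortgroup_Task(spec=spec))

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
set_vmotion_uplink_order(si, "dvPG-vMotion", ["dvUplink1"], ["dvUplink2"])
Disconnect(si)
```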
This is just one approach; there are definitely alternatives. Note that it doesn’t play so well with the Nexus 1000v: although you could pin the vMotion vEthernet interface to a manually defined channel subgroup, I haven’t been able to identify a way to keep the redundancy of failover in that model.
A totally different approach calls for configuring the vMotion network redundancy at the chassis level; on UCS this would mean configuring the vNIC with fabric failover enabled. This is a possibility, although it is somewhat specific to a pure UCS environment.
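For what it’s worth, that chassis-level alternative can also be scripted. Here’s a rough sketch using the ucsmsdk Python library; the UCSM address, credentials, service profile DN, and vNIC name are made-up placeholders, and setting switch_id to “A-B” is what enables fabric failover with Fabric A as primary.

```python
# Rough ucsmsdk sketch: add a vNIC with fabric failover (switch_id "A-B"
# means primary on Fabric A, failing over to Fabric B). The UCSM address,
# credentials, service profile DN, and vNIC name are placeholders.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.vnic.VnicEther import VnicEther

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

vnic = VnicEther(parent_mo_or_dn="org-root/ls-ESXi-Host-01",
                 name="vmotion-a",
                 switch_id="A-B",   # fabric failover enabled, Fabric A primary
                 mtu="9000")
handle.add_mo(vnic, modify_present=True)
handle.commit()
handle.logout()
```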
In this particular case I decided to keep all vNICs configured the same (as VLAN trunks) due to two design considerations – simplicity and consistency across platforms.