vSphere 6.0 PSC Replication Ring Topology

In vSphere 6.0, the Platform Services Controller (PSC) introduces multi-master replication. It is important to understand that the default replication topology is a chain of one-to-one agreements, which may not be immediately intuitive.

Design Examples

For example, a deployment with three sites, each with a single PSC, connected via Enhanced Linked Mode, may end up with this replication topology:

In this scenario, Site 1 and Site 3 do not replicate directly; they replicate only indirectly, through Site 2.

Here’s another topology that might “happen” if you aren’t careful about how you deploy:

In this scenario, Site 2 and Site 3 both replicate directly to Site 1, but not directly between Site 2 and Site 3.

In both of these scenarios, all three sites will converge on the same replication set. However, a single site failure will prevent replication between the other two sites. This design concern becomes more critical in larger environments with more sites.

Recommendation – Use a Ring

For our larger customers we have used a ring topology to improve this design, and a ring is also recommended in the VMware SDDC validated design. Following our three-site example, a ring topology would look like this:

So, how is the replication topology determined? When you deploy each PSC after the first, the install wizard (or install script) requires you to specify an existing PSC to join for Enhanced Linked Mode. A replication agreement is then formed between that target PSC and the PSC you are currently deploying.

How To Deploy a Ring Topology

A ring is simple: it is just a straight line, with the last deployed PSC given one additional replication agreement back to the first node in the ring. Carefully planning the order of PSC deployments will allow you to build the “straight line” replication topology (see the first image). Once that’s done, log on to the last PSC via SSH and add a new two-way replication agreement to the first PSC:
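Per KB 2127057, this is done with the vdcrepadmin utility on the PSC. A sketch, with the hostnames and SSO administrator password below as placeholders for your own environment:

```shell
# Run on the last-deployed PSC. Creates a two-way (-2) replication
# agreement between this node and the first PSC in the ring.
# psc03/psc01 and the password are placeholders.
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f createagreement -2 \
  -h psc03.example.com -H psc01.example.com \
  -u administrator -w 'VMware1!'

# Verify: each PSC in the ring should now show two partners.
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartners \
  -h psc03.example.com -u administrator -w 'VMware1!'
```

The -u account is the SSO administrator; check the KB for the exact syntax in your build before running this.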

For more details, see this incredibly helpful article: https://kb.vmware.com/kb/2127057

Sysprep Location in Windows vCenter 6.0

Howdy! It’s 2017, and so naturally a good time to load up sysprep files on vCenter 6.0 to support all those Windows Server 2003 deployments!




One of my customers experienced this issue, impacting their ability to use Guest Customization on W2K3 VMs.

    • Default vCenter install on Windows Server 2012 R2
    • C:\ProgramData\VMware\VMware VirtualCenter doesn’t exist, so… after poking around…
    • Placed the W2K3 sysprep files in

The result: they still could not customize a Windows 2003 Server.

Note: This was a pre-existing folder. Also, this path seems akin to the /etc/vmware-vpx/sysprep folder on the VCSA, which is referenced in the documentation. Thus, IMHO, not a crazy decision on the customer’s part to try dumping the files in that folder.


We manually created this path:

After copying the relevant sysprep files into this new directory, the vSphere client once again allowed selecting Guest Customization options while cloning a Windows 2003 VM.


Unable to Schedule Reports in vROPs 6.3 using AD Users

If you are like most of my customers, then when deploying vRealize Operations Manager, the automatic user integration that happens when registering a vCenter adapter is totally sufficient to provide sysadmins with access to the solution.

As you may be aware, vROPs Roles such as ContentAdmin map to individual vCenter SSO Role Privileges (All -> Global -> … vRealize Operations …).

Unfortunately, there is one caveat in the documentation to be aware of: scheduling reports in vROPs is not possible for vCenter Server users. This is despite the fact that ContentAdmin includes the vROPs permission Content -> Reports Management -> Schedule (!).

From the documentation, under “Generating Reports”:

“vCenter Server users cannot create or schedule reports in vRealize Operations Manager.”


So, the options are to 1) use a local vROPs account to schedule reports, or 2) set up an authentication source in vROPs to use Active Directory directly. Assuming that your scheduled reports do not change too often, option 1 is probably sufficient for most small or medium-sized environments.


ESXi 6.0+ and 3PAR, PDL issue causing host to hang

With certain versions of HPE 3PAR and ESXi 6.0, hosts can hang, crash, or fail to install. If this occurs, you will see many PDL errors in the ESXi logs against LUN 256.

This can be resolved by lowering an ESXi advanced setting so the host never scans LUN 256, or by applying a patch on the 3PAR.
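The ESXi-side workaround (per KB 2144657) is to lower the Disk.MaxLUN advanced setting so the host only scans LUN IDs below 256 and never probes the PE LUN. A sketch; run on each affected host, and only if none of your in-use LUN IDs are 256 or higher:

```shell
# Restrict LUN scanning to IDs 0-255 so the 3PAR PE LUN (ID 256)
# is never queried. Only safe if no in-use LUN has an ID >= 256.
esxcli system settings advanced set -o /Disk/MaxLUN -i 256

# Confirm the setting, then rescan storage adapters.
esxcli system settings advanced list -o /Disk/MaxLUN
esxcli storage core adapter rescan --all
```

Note that hiding LUN 256 also hides the VVol protocol endpoint, so this workaround is not appropriate if you actually use VVols on the array; in that case, apply the 3PAR patch instead.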

From the HPE release notes:

“Addresses an issue in which some VMware versions may send unsupported commands to the PE LUN (LUN 256). Such commands are currently returned with sense data 0x5 0x25 0x0. This response causes VMware to encounter an unexpected PDL alert on the LUN. The response is now changed to 0x5 0x20 0x0 to avoid this issue.”

This problem is exposed in ESXi 6.0, where the supported LUN ID range was increased to LUN 0 through LUN 1023 (1024 IDs). In ESXi 5.5 the range was LUN 0 through LUN 255 (256 IDs), so ESXi 5.5 hosts are not able to see the PE LUN and never query it.


Resources and Reference

  • https://www.virtual-allan.com/esxi-6-update-2-pdl-errors-on-vvol-pe-device/
  • https://communities.vmware.com/thread/533806?start=0&tstart=0
  • https://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf
  • https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf
  • https://kb.vmware.com/kb/2144657
  • https://kb.vmware.com/kb/2004684