vCenter Server Appliance Migration fails at 50%: shutting down source machine

I love the vCenter Server Appliance. The migration works pretty well. Still, from time to time I stumble across minor problems (which until now have always been quite easy to work around or fix).

One of these migration ‘issues’ I was faced with recently at a customer’s site.

We migrated a vCenter to an ESXi host that was using a distributed switch and the corresponding portgroup as the target network.

Since we add the virtual network adapter to the distributed switch directly on the ESXi host, we need an ephemeral portgroup (otherwise only the vCenter could add the VM’s network adapter to this portgroup).
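
If you want to verify up front that the target portgroup really uses ephemeral binding, you can query it via the vSphere API. Below is a minimal pyVmomi sketch; host name, credentials and the portgroup name are placeholders for this example:

```python
# Minimal sketch: check the binding type of a distributed portgroup via pyVmomi.
# Host, credentials and portgroup name are placeholders for this example.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only: skips certificate validation
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='***', sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)

for pg in view.view:
    if pg.name == 'dvpg-vcsa-target':               # placeholder portgroup name
        # config.type is 'earlyBinding', 'lateBinding' or 'ephemeral'
        print(pg.name, '->', pg.config.type)

Disconnect(si)
```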

The general process of the migration looks like the following:

  • Deploy a new and empty vCenter Server appliance and connect it to the network
  • A temporary IP address is given to this vCenter Server appliance
  • All relevant data of the source Windows-based vCenter Server is exported and transferred over the network to the new vCenter Server appliance
  • When the whole data set is transferred, shut down the original vCenter and give the new vCSA the network identity of the original vCenter

Unfortunately the last step was not working properly. After a certain amount of time (and coffee) the migration process was still stuck at 50% – Shutting down source machine.

Read more

vSphere 6.5: Virtual Disk / VMDK Hot-extend beyond or equal to 2TB is NOW supported

When vSphere 6.5 was announced I was quite impressed by the features. Gathering more and more hands-on experience, so far I am more than happy with it.

One of the new features that can have a real operational benefit hasn’t been documented that often so far (or at least I haven’t seen it anywhere).

Before vSphere 6.5 it was impossible to increase the size of a VMDK that was larger than 2 TB while the virtual machine was powered on. That was a fact not many organizations were aware of until they stumbled upon it.

From an architectural point of view there shouldn’t be many use cases where such a large disk layout is the best practice. But from an operational point of view this has been a bigger issue for many of my customers.
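
In 6.5 the extend simply goes through the normal VM reconfigure call, even while the VM is running. Here is a minimal pyVmomi sketch of such a hot-extend; VM name, disk label and target size are placeholders, so please test it on a lab VM first:

```python
# Minimal sketch: hot-extend a virtual disk via ReconfigVM_Task (pyVmomi).
# VM name, disk label and target size are placeholders - test before using in production.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='***', sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'big-disk-vm01')   # placeholder VM name

disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk)
            and d.deviceInfo.label == 'Hard disk 2')           # placeholder disk label

disk.capacityInKB = 3 * 1024 * 1024 * 1024                     # grow to 3 TB
spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(operation='edit', device=disk)])
task = vm.ReconfigVM_Task(spec)                                # returns a Task object
print('reconfigure task started:', task.info.key)

Disconnect(si)
```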

Read more

vCenter 6.5: #SRM, #vSphere Replication, #NSX problems after SSL change

After upgrading to vCenter 6.5 I replaced the Certificate Authority certificate of my (external) Platform Services Controller (PSC) with a ‘flenzquest-enterprise ;-)’ signed certificate.

The tasks to replace the SSL certificates haven’t changed much since version 6.0 and have been documented very well within the community.

After the successful replacement I realized that I had problems with vSphere Replication and NSX. I know that NSX is not yet supported with vSphere 6.5, but so far the NSX Manager connectivity with vCenter 6.5 had worked pretty well (until I replaced the certificates).

I had a very bad feeling about this issue, and googling it brought an old case to my attention which I thought had been fixed quite a while ago (obviously it hadn’t). I found an old Twitter chat log between me, Frank Büchsel and Feidhlim O’Leary.
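
If you run into similar symptoms, it helps to check which certificate each endpoint (PSC, vCenter, vSphere Replication appliance, NSX Manager) is actually presenting after the replacement. A small sketch using only the Python standard library; the host names are placeholders:

```python
# Minimal sketch: print the SHA-1 and SHA-256 thumbprints of the certificate an
# endpoint presents on port 443. Host names are placeholders.
import hashlib
import socket
import ssl

def thumbprints(host, port=443):
    ctx = ssl._create_unverified_context()   # we only want to read the cert, not trust it
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha1(der).hexdigest(), hashlib.sha256(der).hexdigest()

for host in ('psc.lab.local', 'vcenter.lab.local', 'vr-appliance.lab.local', 'nsx-manager.lab.local'):
    sha1, sha256 = thumbprints(host)
    print(host, 'SHA1:', sha1, 'SHA256:', sha256)
```

If the thumbprint a component still expects differs from the one actually presented, you know where to re-register or re-accept the certificate.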

Read more

Let’s troubleshoot User Environment Manager (#UEM) 9.X: How to avoid errors during the installation

As I wrote in my last post, UEM is a game changer in the way we can create great VDI solutions.

Having worked a lot with VMware’s User Environment Manager (UEM) within the last month, I have seen many errors being made during the installation phase.

Even though the installation is quite straightforward, some minor mistakes can happen from time to time. I am going to summarize which symptoms might occur, how to gather further information and what has most likely caused the malfunction.

At the moment the post is focusing on the Active Directory Group Policy based installation & configuration of UEM. The newest version 9.1 also allows us to do it in a non-AD way. If there is demand, I can add a section for that topic as well.

I am not going to cover the basic installation steps. Chris Halstead did a great job on that topic (as he does with all the things he works on [flings, blogs, etc.]).

Please use that one for the basic installation tasks and come back to this post if you need to troubleshoot further ;-) With the following list you can fix 98% of the problems that might occur during the UEM installation ;-).
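
A lot of the symptoms I will list boil down to the UEM configuration share not being readable or the profile archive share not being writable for the user. Here is a tiny, purely hypothetical pre-flight check; the UNC paths are placeholders and must match the paths configured in your GPO:

```python
# Minimal pre-flight sketch: verify that the UEM config share is readable and the
# profile archive share is writable for the current user.
# The UNC paths are placeholders - use the ones from your GPO settings.
import os
import tempfile

CONFIG_SHARE = r'\\fileserver\UEMConfig\General'                          # placeholder
PROFILE_SHARE = r'\\fileserver\UEMProfiles' + '\\' + os.environ.get('USERNAME', '')  # placeholder

def readable(path):
    try:
        return os.path.isdir(path) and os.listdir(path) is not None
    except OSError:
        return False

def writable(path):
    try:
        os.makedirs(path, exist_ok=True)
        with tempfile.TemporaryFile(dir=path):
            return True
    except OSError:
        return False

print('config share readable :', readable(CONFIG_SHARE))
print('profile share writable:', writable(PROFILE_SHARE))
```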

Read more

Deploying EUC Access Point via the Deployment Utility (fling) fails

Some of you might know that VMware finally created an alternative to the Horizon View Security Server, based on a virtual Linux appliance. I don’t want to argue about that topic, but I think we can all agree that many security people will like the idea of getting rid of our Windows-based Security Server within the DMZ.

Since the initial deployment of VMware Access Point can be really painful, Chris Halstead created a nice little GUI that helps us during the deployment.

Just fill in the relevant parameters and the small tool will create an output string that gets executed via my beloved ‘OVFtool’.
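
When the deployment fails, the interesting part is usually buried in the ovftool output. The sketch below is not part of the fling; it just runs the command string the Deployment Utility generates and keeps the complete output for troubleshooting (the command itself is a placeholder, paste in the generated string):

```python
# Minimal sketch: run the ovftool command string produced by the Deployment Utility
# and keep the full output in a log file for troubleshooting.
# The command string is a placeholder - paste the one the utility generates.
import subprocess

ovftool_cmd = 'ovftool --acceptAllEulas ...'   # placeholder

result = subprocess.run(ovftool_cmd, shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                        universal_newlines=True)

with open('accesspoint-deploy.log', 'w') as log:
    log.write(result.stdout)

print('ovftool exit code:', result.returncode)
```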


Read more

Horizon View Update to 7.0.1 warning: Connectivity problems due to changed tunnel settings

We observed a strange behaviour after updating from Horizon View 7.0.0 to 7.0.1. For whatever reason the tunneling settings within the Connection Server changed back to the initial settings (tunneled for HTTPS/Blast).


This might have a severe impact if you have a Load-Balancer in front of it configured for Src-IP session persistence where a direct connection is a MUST.

Those settings define whether the remote session between the Horizon View Client and the View Agent within your virtual desktop is tunneled over the Security/Connection Server or established directly between client and agent.

I know that this is not a common situation after the update, but I could not figure out what triggered this effect. Anyway, to be on the safe side, please check the values after updating to ensure they fit your system design:

–> Deselect both checkboxes and you are back in business.
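
If you rely on direct connections (as in the load balancer scenario above), a quick way to confirm that a client can actually reach a desktop directly is to test the Blast and PCoIP ports. The desktop FQDN is a placeholder; 22443/TCP (Blast) and 4172/TCP (PCoIP) are the usual defaults:

```python
# Minimal sketch: check from a client whether a virtual desktop is reachable
# directly on the Blast (22443/TCP) and PCoIP (4172/TCP) ports.
# The desktop address is a placeholder.
import socket

DESKTOP = 'vdi-desktop-042.lab.local'   # placeholder

for name, port in (('Blast', 22443), ('PCoIP', 4172)):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(3)
    try:
        sock.connect((DESKTOP, port))
        print('%s (%d/TCP): reachable' % (name, port))
    except OSError:
        print('%s (%d/TCP): NOT reachable' % (name, port))
    finally:
        sock.close()
```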


VMware #vSAN Queue Depth: Call for input/discussions

During the week I was at a customer site that is using vSAN 6.2 as the foundation for their upcoming virtual desktop infrastructure (it seems like 2016 really, really is the year of VDI). I love vSAN and believe that at the moment it’s a great fit for many dedicated use cases within the virtualization field.

During some load and failover tests of the vSAN installation I noticed something regarding the IO queues within the vSAN stack, and to be honest, I am not quite sure what the risks, mitigations and therefore the correct actions are.

We opened a VMware ticket in parallel, but if you have any more in-depth knowledge about this topic, please let me know, since this might be interesting to more people (the number of vSAN implementations is increasing).

With the integration of flash/SSDs in the performance/cache tier of vSAN, the performance is great compared to classic HDD-based solutions.

Regarding queuing and how to ensure a good performance level, Duncan Epping and Cormac Hogan have covered the topic in several blog posts and in the official vSAN Troubleshooting Reference Manual (great document, btw.).
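
To see what the hosts report as maximum queue depth for the devices behind the disk groups, you can pull the values straight from the ESXi shell. A minimal sketch; the parsing assumes the usual ‘esxcli storage core device list’ output (including the ‘Device Max Queue Depth’ field), so verify it on your build:

```python
# Minimal sketch: list the reported maximum queue depth per storage device.
# Run locally in the ESXi shell; parsing assumes the usual
# 'esxcli storage core device list' output format.
import subprocess

output = subprocess.check_output(
    ['esxcli', 'storage', 'core', 'device', 'list'], universal_newlines=True)

device = None
for line in output.splitlines():
    if line and not line.startswith(' '):     # unindented line = device identifier
        device = line.strip()
    elif 'Device Max Queue Depth:' in line:
        print(device, '->', line.split(':', 1)[1].strip())
```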


Read more

vSphere console: black screen on Windows 10 & Windows 2012 R2

If you get a black screen on the vSphere console (no matter whether you use the .NET client, the Web Client, the Remote Console, etc.), the following things typically help you out:

  • Having the correct port open between your client and the ESXi host the virtual machine is running on (TCP 903)
  • Correct working DNS: Your client must be able to lookup the ESXi FQDN
  • Permissions

For sure there are many more things… most of them are well documented nowadays… so check them out.
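
The first two items on the list above can be verified from the client in a few lines; the ESXi FQDN is a placeholder:

```python
# Minimal sketch: verify DNS resolution of the ESXi host and reachability of the
# console port (TCP 903) from the client. The FQDN is a placeholder.
import socket

ESXI_FQDN = 'esxi01.lab.local'   # placeholder

try:
    ip = socket.gethostbyname(ESXI_FQDN)
    print('DNS lookup ok:', ESXI_FQDN, '->', ip)
except socket.gaierror as err:
    print('DNS lookup failed:', err)
else:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(3)
    try:
        sock.connect((ip, 903))
        print('TCP 903 reachable')
    except OSError:
        print('TCP 903 NOT reachable')
    finally:
        sock.close()
```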

There might be another reason if you use Windows 10 or Windows Server 2012 R2 (not sure how it is in older versions) where the symptom (a black screen) might be similar, but the problem is not vSphere related (even though it seems to be).

Read more