Some notes on vIDM in general and ADFS integration

Over the last week I had my first contact with vIDM and the fine task of integrating it with ADFS for a customer, plus some tasks around NSX-T in my lab. There were some caveats I fell into, and this page is more like my online notes in case you stumble here with the help of Google-fu.

Here are the sources for reference for the ADFS integration:

The NSX-T integration with vIDM is described in great detail by my colleague Romain Decker (who has some awesome content on his blog, btw).

Access denied or how to force local login

When my configuration didn't work after following the guides, I was unable to get back to the admin console because I was redirected to my default authentication method. I was looking for a way to log in against the system domain instead. Use the following URL to enforce this:


vIDM authentication methods for an IDP

When you create an identity provider (IdP), vIDM forces you to specify an authentication method. Both guides specify the classes

  • urn:oasis:names:tc:SAML:2.0:ac:classes:Password
  • urn:federation:authentication:windows

During our debugging session I learned from the ADFS folks that these classes are neither a universal standard nor defaults, but depend on what your provider has configured. Unfortunately, this is a mandatory field, and hence you need to talk to your ADFS team first about what they expect from you. If nothing else helps, set this to

  • urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified
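For context: in a captured SAML response you can verify which class was actually negotiated, since it shows up in the AuthnContextClassRef element. A rough sketch of the relevant fragment (the exact element layout and namespace prefixes may vary by provider):

```
<saml:AuthnContext>
  <saml:AuthnContextClassRef>
    urn:oasis:names:tc:SAML:2.0:ac:classes:Password
  </saml:AuthnContextClassRef>
</saml:AuthnContext>
```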

Change the authentication method for an IDP

So you created an IdP but made an error? You can only change the "authentication method" if it is not in use. Change the policy that uses this authentication method to another setting, make your changes, and then re-include your authentication method.

Debugging SAML messages

When I configured the ADFS integration it didn't work and I didn't know why. The way forward was to capture the SAML message and see which failure was thrown. AWS provides a nice summary of how to capture the SAML response in your browser here:

Once you have the content, you can use this page to decode the response and try to make sense of it all:
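If you prefer not to paste a (potentially sensitive) response into a third-party site, you can decode it locally – a SAMLResponse value is just Base64-encoded XML. A minimal sketch with a stand-in value:

```shell
# Stand-in for a captured SAMLResponse value (paste your real one instead)
SAML_RESPONSE=$(printf '<samlp:Response ID="demo"/>' | base64)

# Decode it back to XML; in a real response, look for the StatusCode
# and AuthnContextClassRef elements
DECODED=$(printf '%s' "$SAML_RESPONSE" | base64 -d)
echo "$DECODED"
```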

Get the vIDM certificate thumbprint

There is an official page in the VMware documentation, however I found that you can shorten it down to this (with possible improvements to use OpenSSL from remote to reduce the steps further):

  • SSH to the vIDM host and log in as sshuser.
  • su root or sudo -s or whatever suits you to get root access
  • Change directory cd /usr/local/horizon/conf
  • Get the thumbprint: openssl x509 -in <FQDN of vIDM host>_cert.pem -noout -sha256 -fingerprint
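The fingerprint step in isolation, demonstrated with a throwaway self-signed certificate standing in for `<FQDN of vIDM host>_cert.pem` (all filenames here are illustrative):

```shell
# Create a throwaway cert as a stand-in for the vIDM certificate PEM
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo_cert.pem -days 1 -subj "/CN=vidm.example.com" 2>/dev/null

# Same command as on the appliance, pointed at the demo file
FP=$(openssl x509 -in /tmp/demo_cert.pem -noout -sha256 -fingerprint)
echo "$FP"
```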
Update 2019-08-01:

Find the vIDM debug logs

The bulk of the vIDM log files is not in the standard directory /var/log:

  • SSH to the vIDM host and log in as sshuser.
  • su root or sudo -s or whatever suits you to get root access
  • change directory to /opt/vmware/horizon/workspace/logs
  • If you need to increase the verbosity, edit /usr/local/horizon/conf/

VCDX: Some thoughts on requirements

What is this about?

This blog post does not intend to define what a requirement/constraint is; there are already very good posts out there. Lately I have pointed a lot of people over to Jeffrey Kusters, who did an excellent job of summarising it, but I have also included two other good links:

If you are like me, coming from a purely technical background, the conceptual model of the VCDX exam proves to be the hardest part – especially since your journey starts with it and there is no shortcut around it (Rene gives good advice on this, as always). Think of the conceptual model like the foundation of a house: if it is not solid, everything you build on top of it will eventually collapse (btw: it happened to me, forcing me to re-write more than once. Pro tip: don't be like me on this).

Summing it up: ideally this post makes you realise that you need to invest time into learning how to develop a conceptual model and, since this post focuses on requirements, that requirements do not come out of thin air. There is actually a whole field called "requirements engineering" whose goal is to gather and formulate requirements – just to give you a feeling for the relevance of this topic.

Requirements engineers have methods and techniques which you can study. Use these to build an understanding of what is relevant and how it is done. Try to apply this by formulating some solid requirements for your VCDX process (and keep that knowledge for your future projects).

Where do I start?

You know, there is always google 🙂

Seriously, (solid) requirements are needed in a lot of places; one of my favorite reads is provided by NASA.
They have a whole book online, for free – start with chapter 4, the System Design process:

Do you need to read all of this and how does this all apply to VCDX?

Heck, no, you don't need all of this! But start digging into it and you will learn some good stuff – the key here is building an understanding of why requirements are important. For instance, I like "TABLE 4.2-1 Benefits of Well-Written Requirements".

Also, have you considered talking to people who deal with requirements on a daily basis? Do you know any project managers or software developers/architects? They might be more than happy to help you out.

Can you sum it up – what does it mean for my VCDX document?

I cannot give you a definitive answer but a few personal opinions:

  • Write a requirement like the stakeholder investing money, not like the tech nerd you are (I include myself here).
  • Don't focus on the implementation and do not make a hidden design decision out of a requirement: focus on what the system/infrastructure needs to achieve, not how.
  • Have you tested whether other people understand your requirement? Ask around, also among non-technical people. Does everybody expect the same thing when reading your requirement?
  • For the majority of requirements, do not use subjective adjectives – e.g. what do you mean by "fast storage"? People might have different opinions on that.
  • Going in the same direction as the bullet point above: can you validate your requirement in any way? (Yes, this is one reason why there is a validation plan in the VCDX.)
  • Be specific, set scope and expectations: when you include growth in percent, is it measured from your baseline or as a "year over year" value? For how many years do you need to plan? Which areas (compute, storage, …) do you need to consider?
  • Avoid misinterpretation through negative requirements, e.g. "must not do X or Y". The "not" might easily be overlooked, and it still leaves open the question of what the design must do.

On the topic of how much meta-data a requirement needs, I had a table with the following information:

  • Unique ID: Allows you to reference the requirement in your design.
  • Description: The main matter of the requirement.
  • Design quality: More for my own sake, to ensure I got everything covered.
  • Issuer: Who signed off on the money going into this requirement?

I won’t say it is perfect but it did the job and it may be a good starting point if you haven’t considered anything in this regard.

The end

This is not much, but I hope it points candidates in the right direction. I am always open to discussion and feedback – hit me up on Twitter if you like!

Disclaimer: Honestly, I feel like an impostor for writing this, constantly debating with myself whether I dare put this out into the wild, as I feel that my own stuff was not stellar. However, with some support from Bilal and Chris I decided to go for it. After all, it is a topic most candidates struggle with, and I was no exception.

When you are using SPBM but the rest of the world is not (vSAN)

Today I came across an issue I did not immediately think about when selecting a data protection or replication solution for a vSAN deployment:

Let us say we have a vSAN datastore as the target for a replication (failover target) or a data restore from backup. What if your data protection or disaster recovery/replication product does not support storage policies?

You might find yourself facing some unexpected problems.

The restore or failover might succeed, but your VM files (including VMDKs) are subsequently protected with the vSAN default policy. If you did not modify it, this results in FTT=1 and FTM=RAID1. (If you are not familiar with FTT and FTM, search for these terms in conjunction with vSAN.)

At first glance, this does not look too bad, does it?

Now what if the source VM was protected with FTT=2 and FTM=RAID6?
The restored VM now has less protection with more space consumption, and it might not even fit on the datastore – even if the clusters are set up identically, or even if it is the same cluster (in the case of a restore).


A VM with a 100GB disk consumes 150GB on the source vSAN datastore (with FTT=2 and FTM=RAID6) and is able to withstand two host failures. However, it would consume 200GB on the destination datastore (with FTT=1 and FTM=RAID1), as the latter creates two full copies and only one host failure can be tolerated.
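The arithmetic above as a quick sketch (overhead factors per the standard vSAN policies: RAID-6 with FTT=2 is 1.5x, RAID-1 with FTT=1 is 2x):

```shell
disk_gb=100
raid6_gb=$((disk_gb * 3 / 2))   # FTT=2, FTM=RAID6: 1.5x capacity overhead
raid1_gb=$((disk_gb * 2))       # FTT=1, FTM=RAID1: two full copies, 2x
echo "RAID-6: ${raid6_gb} GB, RAID-1: ${raid1_gb} GB"   # → RAID-6: 150 GB, RAID-1: 200 GB
```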

Sure, you could modify the default policy for this, but what if you have different settings? The beauty of SPBM lies in the fact that you can apply it per disk, and re-applying the policy settings for a more complex setup will become messy and error-prone.

Now if you ask me for a good example on how to do it:

Veeam shows how to integrate this here.

VMware offers a storage policy mapping in SRM.

I/O acceleration at host level – Part II: PrimaryIO appliance deployment

In part 1 I talked about the basics of I/O acceleration and PrimaryIO as a possible alternative to PernixData FVP. In this (short) post we'll look at the deployment of the APA appliance.

I recently had the time to download the newest version, GA 2.0, in order to set up a customer proof of concept (PoC).

And I failed at the initial deployment.

Out of the box the OVA would throw an error about an unsupported chunk size.

PrimaryIO – chunk size error with vCenter 6.5

Now, I was already sitting in front of vCenter 6.5 (with ESXi hosts on 6.0), and as with FVP, this is currently not supported for APA (support told me that PrimaryIO targets April/May 2017 for 6.5 support).

But since this is a PoC/Lab I didn’t give up easily:

A nice VMware KB article describes the problem at hand and offers a solution.

Since OVAs are essentially just an archive, I used 7-Zip to extract the files and decided to have a look at the appliance definition file (.OVF).

Lines 5 and 6 contained the virtual disk definitions and the offending parameter:


Removing it, the .OVF looked like this after editing:

A gotcha which is also mentioned in the KB article: you have to delete the .mf file afterwards, or at least update the checksums, since the content was modified and they no longer match.
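The whole workaround can be sketched like this – note that the attribute name (ovf:chunkSize) and all filenames are my assumption based on the KB article, so check your own .OVF:

```shell
# An OVA is essentially a tar archive, so unpacking would be:
#   tar -xf appliance.ova
# Demo of the edit itself on a stand-in disk definition line:
printf '<File ovf:href="disk1.vmdk" ovf:chunkSize="7516192768"/>\n' > /tmp/demo.ovf

# Strip the offending attribute (assumed name: ovf:chunkSize)
sed -i 's/ ovf:chunkSize="[^"]*"//' /tmp/demo.ovf
cat /tmp/demo.ovf

# In the real case, also delete the manifest so stale checksums don't block you
rm -f /tmp/demo.mf
```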

I skipped the step of re-creating an .OVA file since we can use the .OVF and .VMDK files directly in the Flex Client deployment wizard. The only remaining adjustment was to relieve the .VMDK files of their trailing zeros.

This left me with these three files:

PrimaryIO GA 2.0 files after some adjustments


After that the deployment worked like a charm, and my next task was to set up networking, since I opted for "fixed IP" in the OVA deployment wizard. Unfortunately, the OVA does not include a script to set the IP information; however, this step is well documented in the manual.

Essentially the APA appliance is an Ubuntu system with "root" login enabled (default password: admin@123), and setting an IP is straightforward.

PrimaryIO – Linux Appliance screen after login

You might adjust additional Linux settings, like syslog and NTP, according to your needs.

However, from a security standpoint I am a bit worried.

The appliance is based on Ubuntu 12.04 LTS, which reaches end of life/support in a few weeks – after that there are no more updates.
As you can see, many updates are initially missing after deployment. I am not sure what the update policy for the appliance is (i.e. can I just use apt-get?).

I will raise these questions with PrimaryIO support and update this article.

Updated info from support:

We do not recommend an apt-get upgrade of the appliance. If you are facing any specific issue – we can help address that. […]
I have a confirmation that the APA 2.5 release scheduled for May 2017 GA will have the latest ubuntu LTS based PIO Appliance.


All right, for a few weeks I am OK with an "old version".

Part 3 will go into the APA configuration.

Adding a second syslog server to a VCSA 6.5 (Appliance MUI)

Beware: This is probably not supported

I was asked if I could add another syslog server to an existing VCSA deployment. With the nice blog post from William Lam in mind, adding the second server should have been easy: just edit the configuration and there you are.

The UI won’t allow this.

I guess it is CLI time then, luckily the blog post mentions this:

A minor change, but syslog-ng is no longer being used within the VCSA and has been replaced by rsyslog.

So we are just looking at a matter of finding the right config file.

In the main file

  • /etc/rsyslog.conf

you can find an "include" statement pointing towards the file

  • /etc/vmware-syslog/syslog.conf

Its only content is our first syslog server, configured as a remote syslog target.

At this point adding the second server is not a big deal, the file now looks like this:

Remember to reload the syslog service afterwards.
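For reference, a minimal sketch of what such a file could contain with two targets – the hostnames are placeholders, and in rsyslog's legacy forwarding syntax @@ selects TCP while a single @ selects UDP:

```
# existing first remote target (TCP)
*.* @@syslog1.example.com:514
# manually added second target
*.* @@syslog2.example.com:514
```

Afterwards, something like `systemctl restart rsyslog` reloads the service.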


Another gotcha (besides the lack of support):

Changing the settings via the VAMI/UI will overwrite your modifications.

I/O acceleration at host level – Part I: Overview & PrimaryIO

In my last two posts I have been rambling about certifications, but it is time to put something useful on this page.

When PernixData released 2.x of FVP they really got the attention of the vCommunity. Lots of posts and happy people all around about two years ago.

For those of you who are not familiar with the topic:
The idea of FVP and similar products is to move the I/O handling as close to the source as possible, which in the case of VMs is the ESXi host. You can use memory or flash storage devices as a read or (mirrored) write cache for your most demanding VMs. (Cached) requests are answered instantly and load is taken off your storage system. This is especially useful if you have an older model with very little or no flash and/or experience performance problems and want to protect your investment (read: no chance to upgrade hardware any time soon). There are of course other use cases, but I'll try to keep this simple.

But after Nutanix acquired PernixData, they essentially buried FVP as a product.
Nutanix does honor support for customers with an existing contract, but by now you should have a "plan B" if you need I/O acceleration at host level.
FVP lacks vSphere 6.5 support, and I find it hard to believe that great efforts are being made to deliver it anytime soon. (Side note: more than FVP, I'll miss PernixData Architect, which delivers quite a good view into the storage layer and is very easy to handle.) *UPDATE: I am very sorry, I didn't give the folks at Pernix/Nutanix the credit they deserve – an update is targeted for 04/17 according to support.

So, let’s have a look at a possible alternative:

Last year PrimaryIO offered PernixData FVP customers their Application Performance Accelerator (APA) for VMware at no charge except support costs.

APA uses vSphere APIs for I/O Filtering (VAIO) for its implementation, which is quite nice in my opinion since it is standardized within the vSphere environment. Here I take the liberty of quoting directly from VMware:

VAIO is a Framework that enables third parties (Partners) to develop filters that run in ESXi and can intercept any IO requests from a guest operating system to a virtual disk. An IO will not be issued or committed to disk without being processed by IO Filters created by 3rd parties (source)

Right now there are two supported use cases (caching and replication) and according to VMware

caching will significantly increase the IOPS available, reduce latency, and increase hardware utilization rates (quote from the source from above)

If someone is interested in an overview of currently supported VAIO solutions, you may find it here.

What does APA offer?
According to their technical brief, they do not cache randomly or just the most frequented blocks, but do this application-aware (hence the name, I guess):

Only the most important application components such as frequently accessed tables or indexes that speed up queries are optimized, while less critical elements such as log records, replicas, audit entries, or ad hoc user activity are de-prioritized.

I’m not sure how to track this in the future and verify the claim, so comments are welcome and if I find a way I’ll let you know 🙂

Like FVP, APA offers the possibility to mirror the write cache across hosts, which is a must if you take this into production. I'll need to check if they support fault domains (i.e. two data centers).

For more details on how it works, have a look at the VMware blogs, where Murali Nagaraj, CTO of PrimaryIO, posted about this.


Thank you for your attention – in part 2 I will continue with the APA appliance deployment.


Note: My posts on this topic are in no way sponsored by PrimaryIO.

Personal blog, remember? 🙂

Adding some thoughts on the value of VMware certifications

This morning I posted about how I feel about the VCDX price increase. TL;DR: I can understand the reasons behind it, but VMware has to deliver value for the money.

Having said that, there is a bigger issue in the room in my opinion.

Essentially this tweet from Jason Nash triggered this post:

For what it is worth, I think that with VMware, the partner tier does not say much about technical skills and qualification.

As you can read here, the requirements for the highest level, Premier Partner, are essentially revenue-driven. Sure, you need four VCPs, but when you are big enough for a million in sales within 12 months, sending four people to an ICM course is peanuts.

Do not get me wrong, the VCP has its place, but from the higher partner levels I would expect more to verify the expertise. The VCP is a multiple-choice exam; the VCAP deployment exam is hands-on (you cannot braindump that), and the design exam requires you to draw and place something (again, no braindumps here).

So why would or should a partner spend any money to certify its employees at the "VMware Certified Advanced Professional" level or even above?

The answer is: I do not know and I cannot see a business case for this at the moment.

Back on Twitter, Joe Silvagi from VMware pointed out that there are business benefits for a partner:


Nevertheless, here I try to see it from a potential customer point of view.

You cannot pick an enterprise/premier partner and know that they have at least a certain number of VCIX holders, or even a VCDX, in a given field (solution competency) to guarantee a certain amount of knowledge.

This would really count for something BUT…

… VMware needs to promote their advanced certifications so these get the attention and value they deserve.

I have had to explain to many people what my VCAP or VCIX actually means – customers and coworkers alike – and even what the next level, the VCDX, would be. For a VCDX candidate this is the worst case: you put effort into your certification in order to get benefits (from a pay increase to a new job), but if no one knows what the title is, you have a problem, because it lowers your ROI.

Compare this to Cisco: if you say "CCIE", everyone has an idea what you are talking about and goes "ahhh" and "ohhh".

This might be different in the US, but this is my unfortunate experience here in Germany.




My two cents on the VCDX price increase

Last week I completed my VCIX-DCV and started contemplating if I should take the long way towards the VCDX.

Currently I am doing a preliminary outline of a project in order to give it the Paul McSharry basic assessment. After that, it is about finding a VCDX mentor who is willing to give my project a quick review and discourage or encourage me 🙂

This morning there is a lot of buzz in the vCommunity about the certification exam price increase. Going from about $1,000 to $3,000 for the defense seems a bit too much for many.

Personally, I was always amazed how “inexpensive” the VCDX program was.

With this certification, you aim to be one of around 250 people in the world who hold VMware's top-tier certification.

Think of it economically:

If you personally pay for it, what would be the gain of this certification?

Think of it like a student loan. You will probably get a higher salary or can apply for a new job at architect level. I am going out on a limb here, but in those salary regions paying $3,000 is not totally off limits (not easily done, but you should earn it back over a few months).

If your company pays for this:

Your employer is probably using you and your certification to win new projects or customers. From there on, this is a simple investment calculation: split $3,000 over two or three projects and it is paid off.

Another important point in my opinion:

We are also finding ways to recognize our VCDX panelists who commit a tremendous amount of their own time and resources to support the program.

I was always wondering how many applications the panelists have to read and evaluate.

This is a manual process, unlike the VCAP exams, where there is one setup and a script does the evaluation. And don't forget this is done by highly qualified VCDX architects who most likely have other things to do 🙂

So where does it leave me?
I am still going forward with my outline – the exam price won't be make-or-break for me.




Hello world! Or: Do we need another IT blog?

This is my first post and as the topic suggests it is just another blog about IT stuff like so many already available.

So, why would I even bother?

Recently I switched from read-only to read-write on reddit and became active in /r/vmware.
I created a post about my exam experience for the VCAP6-DCV Design exam, like so many before me (if someone is contemplating whether he or she should post their experience as well: please do it!).

Since I failed on the first try, I was quite worked up about it and wanted to share my feelings (to put it mildly).
Other users joined the discussion, shared their experiences, and some even offered their help.
This was great – I took the offer of help and passed a week later on the second try.
Where does this blog come in?
Well, I would like to give back to the community by adding some posts about IT topics, and perhaps someone out there might find them helpful.
Otherwise I will be doing this for myself and for the fun of it.