NSX-T: A quick glance at the new management/control plane in version 2.4

I have worked quite a bit with NSX-v over the last few years, but I am just coming to terms with NSX-T. That makes me all the more excited to get my hands on the latest version (2.4) and try out the new way of handling the management/central control plane.

Status quo

For NSX-v, and for NSX-T prior to 2.4, the setup of the management and control plane looked like this:

  • A single manager VM that serves as the API endpoint.
  • Three controller VMs.
  • (NSX-T only) A policy manager VM.

Changes with version 2.4

Starting with this new version, all of the functions mentioned above (and hence their appliances) are consolidated into a single appliance role, with the ability to form a management cluster.
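
Once a 2.4 environment is up, you can see this consolidation through the API as well. Here is a minimal Python sketch, assuming the `requests` library plus a hypothetical manager FQDN and credentials, that pulls the cluster configuration; every node returned carries the consolidated manager/controller role.

```python
import requests
import urllib3

urllib3.disable_warnings()  # lab only: the appliance uses a self-signed certificate

NSX_MANAGER = "nsx-manager-01.lab.local"   # hypothetical FQDN of a manager node
AUTH = ("admin", "MySuperSecret12!")       # hypothetical credentials

# GET /api/v1/cluster returns the cluster configuration, including all
# member nodes of the consolidated management cluster.
resp = requests.get(f"https://{NSX_MANAGER}/api/v1/cluster", auth=AUTH, verify=False)
resp.raise_for_status()

cluster = resp.json()
print("Cluster ID:", cluster.get("cluster_id"))
for node in cluster.get("nodes", []):
    # Field names may differ slightly between versions; adjust as needed.
    print(" -", node.get("fqdn", "<no fqdn>"), node.get("node_uuid"))
```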

How do we get things going with 2.4?

As with prior versions, the first step is to deploy the NSX unified appliance. Since the dialog didn’t change much, I will omit the screenshots; the official documentation covers the deployment in detail. Just note the following (a scripted deployment sketch follows after these points):

  • You will now need a password that is at least 12 characters long; password managers FTW!
  • There is a new role “nsx-manager nsx-controller” for the appliance, reflecting the change.
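
If you would rather script the deployment than click through the OVF dialog, something along the lines of the following Python wrapper around ovftool should work. This is only a sketch: all paths, names, addresses and especially the OVF property names and values are assumptions on my side; running `ovftool <path-to-ova>` lists the properties your specific OVA actually expects.

```python
import subprocess

# Placeholders throughout; verify property names against your OVA.
OVA = "/isos/nsx-unified-appliance-2.4.0.ova"
TARGET = "vi://administrator@vsphere.local@vcenter.lab.local/Lab/host/Compute"

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--powerOn",
    "--datastore=datastore1",
    "--network=Management",
    "--name=nsx-manager-01",
    "--deploymentOption=medium",                       # appliance size
    "--prop:nsx_role=nsx-manager nsx-controller",      # the new consolidated role
    "--prop:nsx_hostname=nsx-manager-01.lab.local",
    "--prop:nsx_ip_0=192.168.10.11",
    "--prop:nsx_netmask_0=255.255.255.0",
    "--prop:nsx_gateway_0=192.168.10.1",
    "--prop:nsx_passwd_0=MySuperSecret12!",            # at least 12 characters
    "--prop:nsx_cli_passwd_0=MySuperSecret12!",
    OVA,
    TARGET,
]
subprocess.run(cmd, check=True)
```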

After deploying the appliance and logging in, you will see the new UI, which I really like as it is very clean.

Following the links to the NSX nodes, you can manage your installation; that includes adding a virtual IP for your management cluster.

Setting up the virtual IP was as easy as typing the desired value into the text box after hitting “EDIT”. As you can see above, I am already accessing the NSX UI via the virtual IP.
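
For the automation-minded, the same setting is exposed through the REST API. A minimal sketch, assuming the `requests` library, hypothetical credentials and the /api/v1/cluster/api-virtual-ip endpoint (double-check the call against the API guide for your version):

```python
import requests
import urllib3

urllib3.disable_warnings()  # lab only

NSX_MANAGER = "nsx-manager-01.lab.local"   # hypothetical first node
AUTH = ("admin", "MySuperSecret12!")       # hypothetical credentials
VIP = "192.168.10.10"                      # hypothetical virtual IP

# Set the cluster virtual IP (the same value the "EDIT" dialog writes).
resp = requests.post(
    f"https://{NSX_MANAGER}/api/v1/cluster/api-virtual-ip"
    f"?action=set_virtual_ip&ip_address={VIP}",
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
print(resp.json())  # should echo the configured virtual IP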

Break-out: The cluster status

The front end is running on the second node, as I decided to break a few things on the first one (intentionally *cough*). A click on “DEGRADED” reveals what is up with my NSX installation; neat!
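
The same information is available via GET /api/v1/cluster/status, which is handy for monitoring. Another minimal sketch; the field names below are what I would expect from the API guide, so treat them as assumptions and adjust if your version returns something slightly different:

```python
import requests
import urllib3

urllib3.disable_warnings()  # lab only

NSX_VIP = "192.168.10.10"             # hypothetical cluster virtual IP
AUTH = ("admin", "MySuperSecret12!")  # hypothetical credentials

# GET /api/v1/cluster/status reports overall and per-group health, which is
# what the "STABLE"/"DEGRADED" badge in the UI summarizes.
resp = requests.get(f"https://{NSX_VIP}/api/v1/cluster/status", auth=AUTH, verify=False)
resp.raise_for_status()
status = resp.json()

print("Management cluster:", status.get("mgmt_cluster_status", {}).get("status"))
print("Control cluster:   ", status.get("control_cluster_status", {}).get("status"))
for group in status.get("detailed_cluster_status", {}).get("groups", []):
    # Field names may vary slightly between versions; adjust as needed.
    print(group.get("group_type"), "->", group.get("group_status"))
```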

Adding a new node (UI)

With the first node in place, you will want to add two additional nodes for a proper cluster. The UI provides a nice wizard to do so; there are, of course, other, more automated ways of deployment, but I will leave those for another day.

To deploy nodes, a compute manager (vCenter) needs to be added to the NSX manager first. This process hasn’t changed, so I didn’t include it here.
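
For reference, registering a compute manager can also be scripted against the fabric API. Again just a sketch with hypothetical names, credentials and thumbprint; verify the payload against the API guide of your version:

```python
import requests
import urllib3

urllib3.disable_warnings()  # lab only

NSX_VIP = "192.168.10.10"             # hypothetical cluster virtual IP
AUTH = ("admin", "MySuperSecret12!")  # hypothetical credentials

# Register vCenter as a compute manager; server, credentials and the
# certificate thumbprint are placeholders.
body = {
    "display_name": "vcenter.lab.local",
    "server": "vcenter.lab.local",
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "AnotherSecret12!",
        "thumbprint": "AA:BB:CC:...",  # thumbprint of the vCenter certificate
    },
}
resp = requests.post(
    f"https://{NSX_VIP}/api/v1/fabric/compute-managers",
    json=body,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
print("Compute manager id:", resp.json().get("id"))
```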

  • Note that you can provide a common set of information for all nodes here (as the dialog title “common attributes” suggests).
  • The appliance sizes are relevant for your targeted deployment: most SMEs will probably use medium nodes, whereas bigger installations will go for large ones. The NSX-T documentation as well as configmax.vmware.com will guide you on sizing.
  • Take into account for your resource and HA planning that the RAM of all nodes is fully reserved.
  • I cannot give a support statement here, but as you can deploy new NSX nodes and decommission old ones on the fly, I would imagine that you can swap out smaller nodes one after the other in a merry-go-round fashion if you hit a limit.

In my case I wanted to add just one additional node, but the dialog would also allow you to deploy more nodes if needed.

After that, the node is deployed and shows up in the UI.
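
If you want to check on the deployment outside the UI, the auto-deployed nodes should also be visible through the API. A sketch, assuming the /api/v1/cluster/nodes/deployments endpoint and hypothetical credentials:

```python
import requests
import urllib3

urllib3.disable_warnings()  # lab only

NSX_VIP = "192.168.10.10"             # hypothetical cluster virtual IP
AUTH = ("admin", "MySuperSecret12!")  # hypothetical credentials

# List the manager nodes that were auto-deployed through the wizard (or API).
resp = requests.get(
    f"https://{NSX_VIP}/api/v1/cluster/nodes/deployments",
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

for deployment in resp.json().get("results", []):
    cfg = deployment.get("deployment_config", {})
    # Field names may vary slightly between versions; adjust as needed.
    print(deployment.get("vm_id"), "->", cfg.get("hostname"))
```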

Summary

I really like the new approach to managing the appliances. It means less administrative overhead and complexity for the operations folks while providing increased availability for the management plane.