Nested NSX-T cluster on vSphere 6.7U1

I took some time this week to update William Lam's Nested vSphere 6.5 with NSX-T lab to Nested vSphere 6.7U1/NSX-T 2.3.1, to kickstart a new customer project.

This version updates some of the vCSA OVF JSON fields and adds support for PowerShell 6.1. It was tested against PowerCLI 11.2 on a Windows 10 machine.

The original post can be found at virtuallyghetto.

You can find the new, updated file(s) at nested-nsxt231-vsphere67u1.


NvIPAM is Open Source

NvIPAM is open source as of today.

After some significant work over the past few weeks, I’m pleased to announce the publication of NvIPAM on Github.

This is really an Alpha release, as I’m sure lots of things will change over the course of time.

The installation is handled by an Ansible playbook, which installs and configures the CentOS machine. Once installed, you'll need to initialize the database, then start the service.

My plan is to start documenting the project on the Wiki page as I have time. These articles will cover installing and configuring the vRO plugin, integrating it with vRA, and deploying an external vRO appliance (for vSphere IPAM integration).

You can download your copy at

Please feel free to file any issues, or even fork it and take it in your own direction.

Stay tuned.


Basic vRA Endpoint workflow progress

Some really good progress was made this week, including:

  • Adding NvIPAM as a legitimate vRA Endpoint
  • Get IP Ranges
  • Allocate IP from IPAM
  • Release IP to IPAM

The first took off after looking at the code in the SDK package. One of the main things I found was that it required two actions and four workflows. I simply copied the ones listed in the SDK into my own folders, and off I went. Actually, the only real change was changing the IDs in my copied workflows to match the action path and the workflow IDs. Danged if it didn't get added the first time.



After the type was added, I simply went in and added my IPAM server as an NvIPAM IPAM endpoint.

The Get IP Ranges workflow took some major rework, as the SDK version uses hard-coded pools and does not support token authentication. Bearer tokens will be used throughout the project, so an action was developed for reuse. The username, password, and baseUrl are provided by vRA as an Endpoint composite type.
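As a rough illustration (in Python rather than vRO JavaScript), the reusable token action boils down to one login call and one header builder. The `/api/auth/login` path and the `access_token` response field here are assumptions for the sketch, not the documented NvIPAM API:

```python
import json
import urllib.request

def get_bearer_token(base_url, username, password):
    """Request a bearer token from the IPAM endpoint.

    base_url, username, and password correspond to the fields vRA
    supplies in the Endpoint composite type. The login path and the
    "access_token" field name are assumptions for this sketch.
    """
    body = json.dumps({"username": username, "password": password}).encode()
    req = urllib.request.Request(
        base_url.rstrip("/") + "/api/auth/login",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["access_token"]

def auth_header(token):
    # Every subsequent workflow call reuses this same header shape.
    return {"Authorization": "Bearer " + token}
```

Keeping the token logic in a single action means the Get IP Ranges, Allocate, and Release workflows can all call it rather than repeating the login handshake.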

After making some additional changes to one of the actions and the workflow, I was able to add an IPAM pool to vRA and assign it to a reservation. The Range Name is generated by IPAM by appending the pool name to the network name to simplify pool discovery (see previous posts).
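For illustration, the naming convention amounts to a one-liner; the exact separator used here is an assumption, not necessarily what NvIPAM emits:

```python
def range_name(network_name, pool_name):
    # Hypothetical: append the pool name to the network name so that
    # ranges sort next to their network in vRA's discovery list.
    return f"{network_name}-{pool_name}"
```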


The basic allocate and release workflows are also working for basic External network IP management.  I’ll go into more detail about those in later posts.

DNS management is next on the list.  Stay tuned.



NvIPAM Ansible Playbook

Man, I love it when a plan starts to come together.

First off, the refactoring is done.  Oh yeah.

Plus, an Ansible playbook to prepare and set up NvIPAM on a basic CentOS virtual machine is now working. The playbook installs and configures the following:

  1. Installs the basic OS requirements
  2. Installs and configures PostgreSQL
  3. Installs and configures the PowerDNS authoritative and recursor servers
  4. Installs and configures a Python virtual environment
  5. Installs and configures NGINX
  6. Sets up uWSGI
  7. And installs the application

And if that isn’t enough, I dumped the Swagger spec into a Postman collection to help with continuing development against vRealize Orchestrator (and other CMSes).

The current install script is available at NvIPAM setup

The next step is to start tinkering with the vRA IPAM SDK.

NvIPAM basic refactoring done

Virtualization aware IPAM

I’ve spent the last few days refactoring the project. The main reason for refactoring was access to the PowerDNS database. The previous version attempted to use two databases: one for IPAM/CMDB and a second, remote one for PowerDNS. This was causing all sorts of issues.

The new model has a single shared database containing all the tables. The PowerDNS tables are created manually by importing the schema from their site; the other tables are created using flask-migrate.
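As a sketch, the change amounts to collapsing two connection strings into one (the URIs below are placeholders, not the project's actual settings):

```python
# Before: two databases, which forced cross-database access to the
# PowerDNS tables and caused the issues described above.
OLD_CONFIG = {
    "SQLALCHEMY_DATABASE_URI": "postgresql://ipam@localhost/ipam",
    "SQLALCHEMY_BINDS": {"pdns": "postgresql://pdns@remote/pdns"},
}

# After: one shared database holding both the IPAM/CMDB tables
# (managed by flask-migrate) and the PowerDNS schema (imported
# manually from the PowerDNS site).
NEW_CONFIG = {
    "SQLALCHEMY_DATABASE_URI": "postgresql://nvipam@localhost/nvipam",
}
```

With everything in one database, the DNS routes can join IPAM and PowerDNS tables directly instead of coordinating two connections.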

My next step, after adding the few remaining DNS routes, is to move it up to my CentOS machine and front the whole thing with Gunicorn or some other WSGI server, using Ansible.

Stay tuned.

NvIPAM plans

Project goals

NvIPAM is an IPAM solution specifically targeting VMware vRealize Automation (vRA) deployments.

During one project, I had a customer with multiple networks using the same Network Profile. The profile had several IP ranges managed by an external IPAM solution. When the customer requested a machine, a network would be assigned, but the machine would receive an IP address from the wrong pool. The logical workaround was to have a unique network pool per network.

I think the main issue with legacy IPAM solutions is that they don’t understand virtual networks. Most that I’ve worked with are based on VLANs.

NvIPAM’s network schema includes the network-id, network name, datacenter and cluster.  The intent is to use the Event Broker payload to determine the network, then grab an IP address from the pool (or pools) associated with that network.
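The intended flow can be sketched in a few lines of Python. The record shape and field names below are illustrative assumptions, not the actual NvIPAM schema, and the payload handling is reduced to a simple lookup:

```python
import ipaddress

# Illustrative network records following the schema described above:
# network-id, name, datacenter, cluster, plus the bound pools.
NETWORKS = [
    {"network_id": "vxw-dvs-34-virtualwire-3", "name": "web-tier",
     "datacenter": "dc01", "cluster": "cluster01",
     "pools": [{"cidr": "10.10.10.0/29", "allocated": {"10.10.10.1"}}]},
]

def find_network(payload):
    """Match an Event Broker payload to a network record by network-id."""
    for net in NETWORKS:
        if net["network_id"] == payload["network_id"]:
            return net
    return None

def allocate_ip(network):
    """Return the first unallocated host address in the network's pools."""
    for pool in network["pools"]:
        for host in ipaddress.ip_network(pool["cidr"]).hosts():
            ip = str(host)
            if ip not in pool["allocated"]:
                pool["allocated"].add(ip)
                return ip
    return None  # all bound pools exhausted
```

Because the pools are bound to the network record rather than to a shared profile, the address always comes from a pool that actually exists on the assigned network.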

The beta version provides the following features:

  • NSX / vSphere network keys
  • Pools bound to the network (The network JSON includes associated networks)
  • Tags for Networks and Pools
  • Basic CMDB
  • PowerDNS A/PTR record management
  • Swagger API provided by flask_restplus
  • API ONLY – No UI other than Swagger
  • Ansible playbooks to install and configure base packages
  • Postgresql database
  • PowerShell scripts to capture vSphere network information (includes NSX logical wires)

The beta version includes a basic CMDB, and DNS through PowerDNS.
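For the DNS side, managing an A record and its matching PTR record essentially means writing two rows into the shared PowerDNS schema. The row shape below is a simplified assumption for illustration:

```python
import ipaddress

def a_and_ptr_records(fqdn, ip, ttl=3600):
    """Build the A record and its matching PTR record for one host.

    The dict shape loosely mirrors the PowerDNS `records` table;
    column names here are assumptions for the sketch.
    """
    # e.g. 10.0.0.10 -> "10.0.0.10.in-addr.arpa"
    ptr_name = ipaddress.ip_address(ip).reverse_pointer
    return [
        {"name": fqdn, "type": "A", "content": ip, "ttl": ttl},
        {"name": ptr_name, "type": "PTR", "content": fqdn, "ttl": ttl},
    ]
```

Generating both rows from one call keeps forward and reverse zones in sync whenever an IP is allocated or released.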

Stay tuned.