After some significant work over the past few weeks, I’m pleased to announce the publication of NvIPAM on GitHub.
This is really an Alpha release, as I’m sure lots of things will change over the course of time.
The installation is handled by an Ansible playbook, which will install and configure the CentOS machine. Once installed you’ll need to initialize the database, then start the service.
My plan is to start documenting the project on the Wiki page as I have time. These articles will cover installing and configuring the vRO plugin, integrating it with vRA, and deploying an external vRO appliance (for vSphere IPAM integration).
Some really good progress has been made this week, including:
Adding NvIPAM as a legitimate vRA Endpoint
Get IP Ranges
Allocate IP from IPAM
Release IP to IPAM
The first took off after looking at the code in the SDK package. One of the main things I found was that it required two actions and four workflows. I simply copied the ones listed in the SDK into my own folders, and off I went. Actually, the only real change was updating the IDs in my copied workflows to match the action path and the workflow IDs. Danged if it didn’t get added the first time.
After the type was added, I simply went in and added my IPAM server as an NvIPAM IPAM endpoint.
The Get IP Ranges workflow took some major rework, as the SDK version uses hard-coded pools and had no support for token authentication. Bearer tokens will be used throughout the project, so an action was developed for reuse. The username, password, and baseUrl are provided by vRA as an Endpoint composite type.
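Since the token action is described rather than shown, here’s a rough Python sketch of the same idea. The /api/auth/ route and the access_token field are my assumptions for illustration, not the project’s actual API.

```python
import requests


def bearer_header(token):
    # Build the Authorization header reused on every subsequent request.
    return {"Authorization": "Bearer " + token}


def login(base_url, username, password):
    # POST the vRA-supplied credentials to the (assumed) auth route
    # and turn the returned token into a reusable header.
    resp = requests.post(base_url + "/api/auth/",
                         json={"username": username, "password": password})
    resp.raise_for_status()
    return bearer_header(resp.json()["access_token"])
```

In the vRO action the same logic lives in JavaScript, but the shape is identical: authenticate once with the Endpoint credentials, then pass the header to every workflow call.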
After making some additional changes to one of the actions and the workflow, I was able to add an IPAM pool to vRA, and assign it to a reservation. The Range Name is generated by IPAM by appending the pool to the network to simplify pool discovery (See previous posts).
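Just to illustrate the naming scheme, a one-liner; the hyphen separator is my assumption, not necessarily what NvIPAM actually uses.

```python
def range_name(network, pool):
    # NvIPAM derives the range name by appending the pool name to the
    # network name (separator assumed here to be a hyphen), so ranges
    # sort next to their network and pool discovery is trivial.
    return "{}-{}".format(network, pool)
```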
The basic allocate and release workflows are also working for basic External network IP management. I’ll go into more detail about those in later posts.
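Until those posts land, here is a hedged sketch of what allocate and release might look like from a client’s point of view. The routes and field names here are guesses for illustration, not the real API.

```python
import requests


def pool_action_url(base_url, pool_id, action):
    # Build the URL for a pool action; the route layout is assumed.
    return "{}/api/pools/{}/{}".format(base_url, pool_id, action)


def allocate_ip(base_url, headers, pool_id, hostname):
    # Ask NvIPAM for the next free address in the pool.
    resp = requests.post(pool_action_url(base_url, pool_id, "allocate"),
                         headers=headers, json={"hostname": hostname})
    resp.raise_for_status()
    return resp.json()["ip_address"]


def release_ip(base_url, headers, pool_id, ip_address):
    # Hand the address back to the pool.
    resp = requests.post(pool_action_url(base_url, pool_id, "release"),
                         headers=headers, json={"ip_address": ip_address})
    resp.raise_for_status()
```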
I’ve spent the last few days refactoring the project. The main reason for refactoring was accessing the PowerDNS database. The previous version was attempting to use two databases: one for IPAM/CMDB, and a second, remote one for PowerDNS. This was causing all sorts of issues.
The new model has a single shared database containing all the tables. The PowerDNS tables are created manually by importing the schema from their site; the other tables are created using flask-migrate.
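Roughly, that two-step table setup looks like this. The database name is a placeholder and I’m assuming a MySQL backend here; adjust for whatever backend the shared database actually runs on.

```shell
# Import the PowerDNS tables into the shared database by hand,
# using the schema file downloaded from the PowerDNS site.
mysql nvipam < schema.mysql.sql

# Create (or upgrade) the remaining IPAM/CMDB tables with flask-migrate.
flask db upgrade
```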
My next step, after adding the few remaining DNS routes, is to move it up to my CentOS machine and front-end the whole thing with Gunicorn or some other WSGI server, using Ansible.
NvIPAM is an IPAM solution specifically targeting VMware vRealize Automation (vRA) deployments.
On one project, I had a customer with multiple networks using the same Network Profile. The profile had several IP ranges managed by an external IPAM solution. When the customer requested a machine, a network would be assigned, but the machine would receive an IP address from the wrong pool. The logical workaround was to have a unique network pool per network.
I think the main issue with legacy IPAM solutions is that they don’t understand virtual networks. Most that I’ve worked with are based on VLANs.
NvIPAM’s network schema includes the network-id, network name, datacenter and cluster. The intent is to use the Event Broker payload to determine the network, then grab an IP address from the pool (or pools) associated with that network.
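A sketch of that lookup from the client side; the route, parameter names, and response shape are assumptions based on the schema fields above, not the project’s finished API.

```python
import requests


def pools_from_networks(networks):
    # Flatten the pools out of a list of network records
    # (each record assumed to carry a "pools" list).
    return [pool for net in networks for pool in net.get("pools", [])]


def find_pools(base_url, headers, network_id, datacenter, cluster):
    # Query networks by the fields NvIPAM stores for each one
    # (network id, datacenter, cluster), then return every pool
    # associated with the matching records.
    params = {"network_id": network_id,
              "datacenter": datacenter,
              "cluster": cluster}
    resp = requests.get(base_url + "/api/networks/",
                        headers=headers, params=params)
    resp.raise_for_status()
    return pools_from_networks(resp.json())
```

The Event Broker payload would supply the network id, datacenter, and cluster values; everything after that is a single lookup.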
The beta version provides the following features:
NSX / vSphere network keys
Pools bound to the network (The network JSON includes associated networks)
Tags for Networks and Pools
PowerDNS A/PTR record management
Swagger API provided by flask_restplus
API ONLY – No UI other than Swagger
Ansible playbooks to install and configure base packages
PowerShell scripts to capture vSphere network information (includes NSX logical wires)
The beta version includes a basic CMDB, and DNS through PowerDNS.
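The A-record side of the PowerDNS integration is a straightforward insert, but PTR records need the reverse-zone name. For IPv4 that derivation is fixed by the DNS standard, so a small sketch:

```python
def ptr_name(ipv4):
    # Reverse the octets and append the standard reverse-zone suffix,
    # e.g. 10.1.2.3 becomes 3.2.1.10.in-addr.arpa.
    return ".".join(reversed(ipv4.split("."))) + ".in-addr.arpa."
```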