Changing vRealize Automation Cloud Proxy internal network ranges

My current customer needs to use 172.18.0.0/16 for their new VMware Cloud on AWS cluster. However, when we tried this in the past, we got a “NO ROUTE TO HOST” error while trying to add the VMC vCenter as a cloud account.

The problem was eventually traced back to the ‘on-prem-collector’ (br-57b69aa2bd0f) network in the Cloud Proxy, which uses the same subnet.

Let’s say the vCenter’s IP is 172.18.32.10. From inside the cloudassembly-sddc-agent container, any attempt to connect to the vCenter eventually fails with a ‘No route to host’ error. Can anyone say classic overlapping IP space?

We reached out to our VMware Customer Success Team and TAM, who eventually provided a way to change the Cloud Proxy docker and on-prem-collector subnets.

Now for the obligatory warning. Don’t try this in production without having GSS sign off on it.

In this example I’m going to change the docker network to 192.168.0.0/24 and the on-prem-collector network to 192.168.1.0/24.

First, update the docker interface range.

Add the following two lines to /etc/docker/daemon.json. Don’t forget to add the necessary comma(s). Then save and close.

{
  "bip": "192.168.0.1/24",
  "fixed-cidr": "192.168.0.1/25"
}

Restart the docker service.

# systemctl restart docker
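
Before moving on, it’s worth confirming the bridge actually picked up the new range. A quick sanity check (assuming the default docker0 bridge name):

# ip addr show docker0
# docker network inspect bridge -f '{{(index .IPAM.Config 0).Subnet}}'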

Now onto the on-prem-collector network.

Check which containers are using this network with docker network inspect on-prem-collector. Mine had two: cloudassembly-sddc-agent and cloudassembly-cmx-agent.

# docker network inspect on-prem-collector
[
    {
        "Name": "on-prem-collector",
        "Id": "57b69aa2bd0f694d76cc553769321deebcdb79e009e0964c4b5cc47aadb14684",
        "Created": "2021-02-10T16:05:21.953266873Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "05105324cff757d76de9e2f535cfb72d2e96094a630561aa141a40aa04095f00": {
                "Name": "cloudassembly-cmx-agent",
                "EndpointID": "8f6717a969b5a1edfea37b9e3d77565c38419de18774bebf4c3981e41c1ad017",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "b227cf1add6caca415b88f927fb10982b0cd846f71548f95071b65330e4024e1": {
                "Name": "cloudassembly-sddc-agent",
                "EndpointID": "4f802a81e0a5dfe50ca39675a5b5106a5fb647198f3bfa898f4f62793baad448",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
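
As an aside, if you only need the attached container names, docker’s Go-template format flag trims the JSON down. It also comes in handy for re-checking after the disconnects below:

# docker network inspect on-prem-collector -f '{{range .Containers}}{{.Name}} {{end}}'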

Disconnect those two containers from the on-prem-collector network.

# docker ps
CONTAINER ID        IMAGE                                                                          COMMAND                  CREATED             STATUS              PORTS                      NAMES
05105324cff7        symphony-docker-external.jfrog.io/vmware/cloudassembly-cmx-agent:207           "./run.sh --lemansDa…"   4 days ago          Up 5 minutes        127.0.0.1:8004->8004/tcp   cloudassembly-cmx-agent
b227cf1add6c        symphony-docker-external.jfrog.io/vmware/cloudassembly-sddc-agent:4cda576      "./run.sh --lemansDa…"   4 days ago          Up 5 minutes        127.0.0.1:8002->8002/tcp   cloudassembly-sddc-agent

# docker network disconnect on-prem-collector b227cf1add6c
# docker network disconnect on-prem-collector 05105324cff7
# docker network inspect on-prem-collector
[
    {
        "Name": "on-prem-collector",
        "Id": "57b69aa2bd0f694d76cc553769321deebcdb79e009e0964c4b5cc47aadb14684",
        "Created": "2021-02-10T16:05:21.953266873Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

Delete the on-prem-collector network, then re-create it using the new subnet (192.168.1.0/24).

# docker network rm on-prem-collector
on-prem-collector
# docker network create --subnet=192.168.1.0/24 --gateway=192.168.1.1 on-prem-collector
47e3d477a87c4459f57e3a7305754b1d91e4d13e645ad4c160de5b8e64fede1a

Reconnect the two containers to the new docker network.

# docker network connect on-prem-collector 05105324cff7
# docker network connect on-prem-collector b227cf1add6c
# 
# docker network inspect on-prem-collector
[
    {
        "Name": "on-prem-collector",
        "Id": "47e3d477a87c4459f57e3a7305754b1d91e4d13e645ad4c160de5b8e64fede1a",
        "Created": "2021-05-18T15:58:55.019732144Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.1.0/24",
                    "Gateway": "192.168.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "05105324cff757d76de9e2f535cfb72d2e96094a630561aa141a40aa04095f00": {
                "Name": "cloudassembly-cmx-agent",
                "EndpointID": "34df13b0accf2f561e0226918a7e84d02995a25f4cc3969758a913a3f6c4e8bb",
                "MacAddress": "02:42:c0:a8:01:02",
                "IPv4Address": "192.168.1.2/24",
                "IPv6Address": ""
            },
            "b227cf1add6caca415b88f927fb10982b0cd846f71548f95071b65330e4024e1": {
                "Name": "cloudassembly-sddc-agent",
                "EndpointID": "405e7e8e1a4ad09b4cc99b0661454a4b0f32687152ca2346daf72f5a424dcd4d",
                "MacAddress": "02:42:c0:a8:01:03",
                "IPv4Address": "192.168.1.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Reboot and do the happy dance.
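
After the reboot, a quick way to prove the original symptom is gone is to retry the vCenter connection from inside the agent container. A minimal check, using the example vCenter IP from earlier and assuming curl is present in the container image:

# docker exec -it cloudassembly-sddc-agent curl -kv https://172.18.32.10/sdk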

Happy not-coding.

Cloud Extensibility Appliance vRO Properties using PowerShell

In this article I’ll show you how to return JSON as a vRO Property type using vRA Cloud Extensibility Proxy (CEXP) vRO PowerShell 7 scriptable tasks.

First a couple of notes about the CEXP.

  • It is BIG: 32 GB of RAM. However, my lab instance is using less than 7 GB of active memory.
  • 8 vCPUs, running at about 50% utilization on average.
  • It deploys with 4 disks, using a tad less than 210 GB.

Why PowerShell 7? Well, it was a design decision based on the customer’s PowerShell proficiency.

Now down to the good stuff. Here are the details of this basic workflow using PowerShell 7 scriptable tasks.

  • Get a new vRA Cloud Bearer Token
    • Save it, along with other common header values to an output variable named ‘headers’ (Properties)
  • The second scriptable task will use the header and apiEndpoint to GET the vRAC version information (About).
    • Then save version information to an output variable named ‘vRacAbout’ (Properties)

Getting (actually POSTing for) the bearerToken is fairly simple. Here is the code for the first task.

function Handler($context, $inputs) {
    <#
    .PARAMETER $inputs.refreshToken (SecureString)
        vRAC Refresh Token

    .PARAMETER $inputs.apiEndpoint (String)
        vRAC Base API URL

    .OUTPUT headers (Properties)
        Headers including the bearerToken

    #>
    $body = @{ refreshToken = $inputs.refreshToken } | ConvertTo-Json

    $headers = @{'Accept' = 'application/json'
                'Content-Type' = 'application/json'}
    
    $Uri = $inputs.apiEndpoint + "/iaas/api/login"
    $requestResponse = Invoke-RestMethod -Uri $Uri -Method Post -Body $body -Headers $headers 

    $bearerToken = "Bearer " + $requestResponse.token 
    $authorization = @{ Authorization = $bearerToken}
    $headers += $authorization

    $output=@{headers = $headers}

    return $output
}

The second task consumes the headers produced by the first task, then GET(s) the Version Information from the vRA Cloud About route (‘/iaas/api/about’). The results are then returned as the vRacAbout (Properties) variable.

function Handler($context, $inputs) {
    <#
    .PARAMETER $inputs.headers (Properties)
        Headers including the bearerToken (output of the first task)

    .PARAMETER $inputs.apiEndpoint (String)
        vRAC Base API URL

    .OUTPUT vRacAbout (Properties)
        vRAC version information from the About route

    #>
    $requestUri = $inputs.apiEndpoint + "/iaas/api/about"
    $requestResponse = Invoke-RestMethod -Uri $requestUri -Method Get -Headers $inputs.headers

    $output=@{vRacAbout = $requestResponse}

    return $output
}
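
If you want to sanity-check both routes outside of vRO, the same two calls are easy to replay with curl. A minimal sketch, assuming the vRA Cloud public API endpoint as the apiEndpoint value; substitute your own refresh token:

# curl -s -X POST "https://api.mgmt.cloud.vmware.com/iaas/api/login" -H "Content-Type: application/json" -d '{"refreshToken": "<your-refresh-token>"}'
# curl -s "https://api.mgmt.cloud.vmware.com/iaas/api/about" -H "Authorization: Bearer <token-from-login>"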

Here, you can see the output variables for both tasks are populated. Pretty cool.

As you can see, returning the vRO Properties type is fairly simple with PowerShell on the CEXP vRO.

The working workflow package is available here.

Happy coding.

vExpert 2021 Applications are open

The 2021 vExpert applications are now open!

The program “is about giving back to the community beyond your day job”.

One way I give back is by posting new and unique content here once or twice a month. Sometimes a post is simply me clearing a thought before the weekend, completing a commitment to a BU, or documenting something before moving on to another task. It doesn’t take long, but could open the door for one of my peers.

My most frequently used benefit is the vExpert and Cloud Management Slack channels. I normally learn something new every week. And it sure does feel good to help a peer struggling with something I’ve already tinkered with.

Here’s a list of some of the benefits of receiving the award.

  • Networking with 2,000+ vExperts / Information Sharing
  • Knowledge Expansion on VMware & Partner Technology
  • Opportunity to apply for vExpert BU Lead Subprograms
  • Possible Job Opportunities
  • Direct Access to VMware Business Units via Subprograms
  • Blog Traffic Boost through Advocacy, @vExpert, @VMware, VMware Launch & Announcement Campaigns
  • 1 Year VMware Licenses for Home Labs for almost all Products & Some Partner Products
  • Private VMware & VMware Partner Sessions
  • Gifts from VMware and VMware Partners
  • vExpert Celebration Parties at both VMworld US and VMworld Europe with VMware CEO, Pat Gelsinger
  • VMware Advocacy Platform Invite (share your content to thousands of vExperts & VMware employees who amplify your content via their social channels)
  • Private Slack Channels for vExpert and the BU Lead Subprograms

The applications close on January 9th, 2021. Start working on those applications now.

vExpert Applications open

The midyear vExpert Applications are open until June 25th, 5 PM PDT.

What the heck is vExpert, you may ask? The VMware vExpert program is VMware’s global evangelism and advocacy program.

The award is for individuals who are sharing their VMware knowledge and contributing that back to their community.

How do you do that? Writing blog articles, participating in discussions on VMware Code (Slack), presenting at VMUGs, etc.

What is in it for you? Promotion of your articles, exposure at global events, co-op advertising, traffic analysis, and early access to beta programs and VMware’s roadmap.

Other vExpert Program Benefits

  • Invite to the private #Slack channel
  • vExpert certificate signed by CEO Pat Gelsinger.
  • Private forums on communities.vmware.com.
  • Permission to use the vExpert logo on cards, websites, etc. for one year
  • Access to a private directory for networking, etc.
  • Exclusive gifts from various VMware partners.
  • Private webinars with VMware partners as well as NFRs.
  • Access to private betas (subject to admission by beta teams).
  • 365-day eval licenses for most products for home lab / cloud providers.
  • Private pre-launch briefings via our blogger briefing pre-VMworld (subject to admission by product teams)
  • Blogger early access program for vSphere and some other products.
  • Featured in a public vExpert online directory.
  • Access to vetted VMware & Virtualization content for your social channels.
  • Yearly vExpert parties at both VMworld US and VMworld Europe events.
  • Identification as a vExpert at both VMworld US and VMworld EU.

The application process is pretty simple, just visit the vExpert site, create and submit your application.

Don’t forget, the midyear applications close at 5 PM PDT on June 25th, 2020.

vRA Cloud Sync Blueprint Versions to Github

The current implementation of vRealize Automation Cloud Git integration for Blueprints is read-only, meaning you download the new Blueprint version into a local repo, then push it. After a few minutes, vRA Cloud sees the new version and updates the design page. It’s really a pain, if you know what I mean.

What I really wanted was to automatically push the new or updated Blueprint when a new version is created.

The following details one potential solution using vRA Cloud ABX actions chained in a flow running on AWS Lambda.

The flow consists of three parts.

  1. Retrieve the vRA Cloud refresh token from an AWS Systems Manager Parameter, then exchange it for a bearer token (get_bearer_token_AWS). It returns the bearer token as ‘bearer_token’.
  2. Get Blueprint Version Content. This uses ‘bearer_token’ to get the new Blueprint version payload and returns it as ‘bp_version_content’ (steps one and two are sketched from a shell just after this list).
  3. Then Add or Update Blueprint on Github. This action converts ‘bp_version_content’ from JSON into YAML and adds or updates the two required properties, ‘name’ and ‘version’; both values come from the content retrieved in step two. It then clones the repo and checks whether the blueprint already exists, either creating a blueprint folder with a blueprint.yaml or updating the existing blueprint.yaml.
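
For reference, here is roughly what the first two steps boil down to from a shell. A hedged sketch: the SSM parameter name and blueprint/version IDs are placeholders, the token exchange itself is the same /iaas/api/login call shown in the CEXP article above, and the blueprint version route is worth verifying against the vRA Cloud API docs.

# aws ssm get-parameter --name "<refresh-token-parameter>" --with-decryption --query 'Parameter.Value' --output text
# curl -s "https://api.mgmt.cloud.vmware.com/blueprint/api/blueprints/<blueprint-id>/versions/<version>" -H "Authorization: Bearer <bearer_token>"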

The vRA Cloud Refresh Token and Github API key are stored in an AWS SSM Parameter. Please take a look at one of my previous articles on how to set this up.

‘get_bearer_token_AWS’ has two inputs. region_name is the AWS region, and refreshToken is the SSM Parameter containing the vRA Cloud refresh token.

Action 2 (Blueprint Version Content) uses the bearer token returned by Action 1 to get the blueprint version content.

The final action consumes the blueprint content returned by action 2. It has three inputs: githubRepo is the repo configured in your Github project, githubToken is the SSM Parameter holding the Github key, and region_name is the AWS region where the Parameter is configured.

Create a new Blueprint version configuration subscription, using the flow as the target action, and filtering the event to event.data.eventType == 'CREATE_BLUEPRINT_VERSION'.

Now to test the solution. Here I have a very basic blueprint. Make sure you add the name and version properties; the name value should match the actual blueprint name. Now create a new version, then wait until vRA Cloud runs its next inventory of the Github integration.

You may notice the versioned Blueprint shows up a second time, now being managed by Github. I think vRA Cloud adds the blueprints discovered on Github with a new Blueprint ID. The fix is pretty easy: just delete the original blueprint after making sure the imported one still works.

The flow bundle containing all of the actions is available in this repository.

Spas Kaloferov recently posted a similar solution for GitLab. Here is the link to his blog.

vRA Custom Form CSS rendering issues

I ran into some interesting issues when trying to develop a custom form in vRealize Automation.

The first one was noticed when I tried to apply font-size to a field. After doing some research and watching a few videos, it looked like some Field IDs simply could not be targeted in CSS. The main issue is that the Field ID has a tilde ‘~’ in it, which is not valid in a CSS ID selector.

[Screenshot: tildeIssue]

A quick search resulted in an amazingly simple solution.  Escape the tilde with a backslash ‘\’.  Stupid easy!

My CSS up to this point contained the following:

body {
font-size: 18px;
}

#vSphere__vCenter__Machine_1\~size {
font-size: 14px; 
}

#integerField_cda957b7 {
font-size: 20px;
font-style: italic;
}

#vSphere__vCenter__Machine_1\~VirtualMachine.Disk1.Size {
font-size: 32px;
}

Which produced this form.

[Screenshot: improperlyFormattedForm]

The integerField and Size (Component Profile) updated properly, but the Requested Disk Size didn’t. (This field is actually updated using an external vRO action that changes the Integer into a String, which is then applied to a custom property.) OK, what I was actually after was getting rid of the Label word wrap, but I stumbled across the font-size issue along the way.

After digging through some debugging info I ran into two classes, one was ‘grid-item’, and the second was ‘label.field-label.required’.

[Screenshot: label-field-required]

After updating the CSS, I was finally able to change the Requested Disk Size font, as well as fix the word wrap issue.

My final result is a good starting point.  Both Size and Requested Disk Size fields had a font size of 18, and the integer field had an italicized font with the correct size.  The shadow on the Size Label was just for fun 🙂

[Screenshot: FormattedCustomForm]

My final CSS looks like this:

body {
font-size: 18px;
}

.label.field-label.required {
width: 80%!important;
}

.grid-item {
width: 80%!important;
}

#vSphere__vCenter__Machine_1\~size {
/* font-size: 14px; */
color: blue;
text-shadow: 3px 2px red;
}
#integerField_cda957b7 {
font-size: 20px;
font-style: italic;
}
#vSphere__vCenter__Machine_1\~VirtualMachine.Disk1.Size {
/* does not work with custom properties */
/* need to dig into label.field.label.* */
font-size: 32px;
}

Well, it’s about that time of day. Be well.

Migrating from NSX-V to NSX-T Error

The project this week was to convert my lab environment from NSX-V to NSX-T to support some upcoming projects.

Apparently unconfiguring the hosts through the NSX-V interface didn’t work as expected, and in fact didn’t remove all of the NSX-V components.

When trying to install NSX-T (2.3.1) on my vSphere 6.7 hosts, I received the error ‘File path of /etc/vmsyslog.conf.d/dfwpktlogs.conf is claimed by multiple none-overlay VIBS’.

[Screenshot: vmsyslogError]

After logging into one of the hosts and running ‘esxcli software vib list | grep -i nsx’, I discovered a stray NSX-V package named esx-nsxv.

# esxcli software vib list | grep -i nsx
esx-nsxv 6.7.0-0.0.7563456 VMware VMwareCertified 2019-03-08

This was quickly rectified by placing the host in maintenance mode, then running the following command.

# esxcli software vib remove -n esx-nsxv

Which spit out the following.

Removal Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed:
VIBs Removed: VMware_bootbank_esx-nsxv_6.7.0-0.0.7563456
VIBs Skipped:
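
If more than one host is affected, the whole remediation can be scripted per host. A sketch using esxcli’s maintenanceMode namespace; evacuate or power off any running workloads first:

# esxcli system maintenanceMode set --enable true
# esxcli software vib remove -n esx-nsxv
# esxcli system maintenanceMode set --enable false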

After exiting maintenance mode I retried the installation by selecting ‘Resolve’ in NSX-T.

[Screenshot: vmsyslogResolve]

NSX-T installation on the host then completed successfully.

Nested NSX-T cluster on vSphere 6.7U1

I took some time this week to update William Lam’s Nested vSphere 6.5 with NSX-T content to Nested vSphere 6.7U1/NSX-T 2.3.1, to kickstart a new customer project.

This version updated some of the vCSA OVF JSON fields, and added support for PowerShell 6.1. It was tested against PowerCLI 11.2 on a Windows 10 machine.

The original post can be found at virtuallyghetto.

You can find the new, updated file(s) at nested-nsxt231-vsphere67u1.


NvIPAM is Open Source

NvIPAM is open source as of today.

After some significant work over the past few weeks, I’m pleased to announce the publication of NvIPAM on Github.

This is really an Alpha release, as I’m sure lots of things will change over the course of time.

The installation is handled by an Ansible playbook, which will install and configure the CentOS machine. Once installed, you’ll need to initialize the database, then start the service.
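
A rough shape of that flow from a shell (the repo path is inferred from the project URL below; the playbook, inventory, and service names are placeholders, and the database-initialization step between the playbook run and the service start is project-specific, so check the project README for the real commands):

# git clone https://github.com/kovarus/NvIPAM.git && cd NvIPAM
# ansible-playbook -i <your-inventory> <install-playbook>.yml
# systemctl start <nvipam-service>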

My plan is to start documenting the project on the Wiki page as I have time. These articles will cover installing and configuring the vRO plugin, integrating it with vRA, and deploying an external vRO appliance (for vSphere IPAM integration).

You can download your copy at https://kovarus.github.io/NvIPAM/.

Please feel free to add any issues, or even fork it and take it in your own direction.

Stay tuned.


Basic vRA Endpoint workflow progress

Some really good progress has been made this week, including:

  • Adding NvIPAM as legitimate vRA Endpoint
  • Get IP Ranges
  • Allocate IP from IPAM
  • Release IP to IPAM

The first took off after looking at the code in the SDK package. One of the main things I found was that it required two actions and four workflows. I simply copied the ones listed in the SDK into my own folders, and off I went. Actually, the only real change was updating the IDs in my copied workflows to match the action path and the workflow IDs. Danged if it didn’t get added the first time.

[Screenshot: EndpointType]

[Screenshot: vRAEndpoint]


After the type was added, I simply went in and added my IPAM server as an NvIPAM endpoint.

The Get IP Ranges workflow took some major rework, as the SDK version uses hard-coded pools and did not support token authentication. Bearer tokens will be used throughout the project, so an action was developed for reuse. The username, password, and baseUrl are provided by vRA as an Endpoint composite type.

After making some additional changes to one of the actions and the workflow, I was able to add an IPAM pool to vRA, and assign it to a reservation. The Range Name is generated by IPAM by appending the pool to the network to simplify pool discovery (See previous posts).

NvIPAMNetworkProfile

The basic allocate and release workflows are also working for external network IP management. I’ll go into more detail about those in later posts.

DNS management is next on the list.  Stay tuned.