vRO Action PowerShell Zip importing and use

One of my current tasks is to leverage vRealize Automation Orchestrator to meet the following use case.

  • Get the next available subnet from InfoBlox
  • Reserve the gateway and other IPs in the new subnet
  • Create a new NSX-T segment
  • Create new NSX-T security groups
  • Discover the new segment in vRealize Automation Cloud
  • Assign the new InfoBlox subnet to the discovered Fabric Network
  • Create a new Network Profile in vRA Cloud

This week’s goal was to get the InfoBlox part working. Well, I had it working two years ago but couldn’t remember how I did it (CRS).

Today I’ll discuss how to use vRO to get the next available subnet from InfoBlox. The solution uses a PowerShell module (PSM) I built, along with a PowerShell script that does the heavy lifting. One key difference between my solution and the one in VMware’s documentation is the naming of the zip file, which affects how it is imported and used in vRO.

The code used in this example is available in this GitHub repo. Clone the repo, then run the following command to zip up the files.

zip -r -x ".git/*" -x "README.md" -X nextibsubnet.zip .

Next import the zip file into vRO, add some inputs, modify the output, then finally run it.

Within vRO, add a new Action, then change the script type to “PowerCLI 12 (PowerShell 7.1)”.

PowerShell Script Type

Change the Type to ‘Zip’ by clicking the dropdown under ‘Type’ and selecting ‘Zip’.

Click ‘Import’, then browse to the folder containing the zip file from earlier in the article.

You will notice the name is not ‘nextibsubnet.zip’ but InfobloxGetNextAvailableSubnet.zip. The imported zip assumes the name of the vRO Action.

Now for the biggest difference between my approach and the VMware way. If you look at the cloned folder you will see a file named ‘getNextAvailableIbSubnet.ps1’. The VMware documentation calls this file ‘handler.ps1’. Instead of putting ‘handler.handler’ in ‘Entry Handler’, I’ll use ‘getNextAvailableIbSubnet.handler’. This tells vRO to look inside ‘getNextAvailableIbSubnet.ps1’ for a function called ‘handler’.

Next, we need to change the return type to Properties and add a few inputs.
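To make the entry-handler wiring concrete, here is a minimal sketch of what a file like ‘getNextAvailableIbSubnet.ps1’ could look like. This is illustrative only; the module name and the InfoBlox helper call are placeholders, and the real logic lives in the GitHub repo linked above.

function handler($context, $inputs) {
    # Load the PowerShell module bundled in the zip.
    # 'InfobloxHelper.psm1' is a hypothetical name used for illustration.
    Import-Module ./InfobloxHelper.psm1

    # Hypothetical helper that asks InfoBlox for the next available subnet
    # under the supplied network container (for example 10.10.0.0/24).
    $subnet = Get-NextAvailableIbSubnet -GridMaster $inputs.gridMaster `
        -NetworkContainer $inputs.networkContainer `
        -Credential $inputs.credential

    # Returning a hashtable lets vRO map the result to the Properties return type.
    return @{ subnet = $subnet }
}

The important parts are the function name (‘handler’, matching the Entry Handler) and the hashtable return, which vRO surfaces as a Properties object.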

Save and run. If everything is in order, you should get the next InfoBlox subnet from 10.10.0.0/24, as shown in the results from the action run.

So there you go. Now on to the next adventure.

Packer HCL and PVSCSI drivers

Just this last week I was updating an old Packer build configuration from JSON to HCL, but for the life of me I could not get a new vSphere Windows 2019 machine to find a disk attached to the VMware Paravirtual SCSI (PVSCSI) controller.

I repeatedly received this error after the new machine booted.

Error

While researching error 0x80042405 in C:\Windows\Panther\setuperr.log, I found Windows Setup simply could not find the attached disk.

setuperr.log

After some research I determined the PVSCSI drivers added to the floppy disk were not being discovered. Or, more specifically, the new machine didn’t know to search the floppy for additional drivers.

After an almost exhaustive online search, I finally found a configuration section for my autounattend.xml file that fixed it.

The magic section reads as follows.

<unattend xmlns="urn:schemas-microsoft-com:unattend">
    <settings pass="windowsPE">        
       <component name="Microsoft-Windows-PnpCustomizationsWinPE" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <DriverPaths>
                <PathAndCredentials wcm:action="add" wcm:keyValue="A">
                    <!-- pvscsi-Windows8.flp -->
                    <Path>A:\</Path>
                </PathAndCredentials>
            </DriverPaths>
        </component>
...
    </settings>
</unattend>

After adding this section, the new vSphere Windows machine easily found the additional drivers.

This was tested against Windows 2019 in both AWS and vSphere deployments.

The vSphere deployment took an hour, mostly waiting for the updates to be applied. AWS takes significantly less time as I’m using the most recently updated image they provide.

The working files are located in the packer-hcl-vsphere-aws GitHub repo.

Code Stream Nested ESXi pipeline Part 1

Been a while since my last post. Over the last couple of months I’ve been tinkering with using Code Stream to deploy a nested ESXi / vCenter environment.

My starting point is William Lam’s excellent PowerShell script (vsphere-with-tanzu-nsxt-automated-lab-deployment). I also wanted to use the official vmware/powerclicore Docker image.

Well let’s just say it’s been an adventure. Much has been learned through trial and (mostly) error.

For example, in William’s script all of the files are located on the workstation where the script runs. Creating a custom Docker image with those files would have resulted in a HUGE image, almost 16 GB (nested ESXi appliance, vCSA appliance and supporting files, and NSX-T OVA files). As one of my co-workers says, “Don’t be that guy”.

At first I tried pulling the files into the container as part of the CI setup. Downloading the ESXi OVA worked fine, but copying over the vCSA files failed. I think they’re simply too large.

I finally opted to use a Kubernetes workspace for the Code Stream pipeline instead of a Docker host. This allowed me to use a Persistent Volume Claim.

Kubernetes setup

Some of the steps may lack details, as this has been an ongoing effort and I just can’t remember everything. Sorry, peeps!

Create two namespaces, codestream-proxy and codestream-workspace. The codestream-proxy namespace is used by Code Stream to host a proxy pod.

The codestream-workspace namespace will host the containers running the pipeline code.

Next came the service account for Code Stream. The path of least resistance was to simply assign ‘cluster-admin’ to the new service account. NOTE: Don’t do this in a production environment.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cs-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: codestream
  apiGroup: ""
  namespace: default

Next came the Persistent Volume (PV) and Persistent Volume Claim (PVC). My original PV was set to 20Gi, which after some testing proved too small, so it was subsequently increased to 30Gi. The larger PV allowed me to retain logs and configurations between runs (for troubleshooting).

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cs-persistent-volume-cw
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 30Gi
  hostPath:
    path: /mnt/nested
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cs-pvc-cw
  namespace: codestream-workspace
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
  volumeName: cs-persistent-volume-cw

The final step in k8s is to get the Service Account token. In this example the SA is called ‘codestream’ (So creative).

k get secret codestream-token-blah!!! -o jsonpath={.data.token} | base64 -d | tr -d "\n"

eyJhbGciOiJSUzI1NiIsImtpZCI6IncxM0hIYTZndS1xcEdFVWR2X1Z4UFNLREdQcGdUWDJOWUF1NDE5YkZzb.........

Copy the token, then head off to Code Stream.

Code Stream setup

There I added a Variable to hold the token, called DAG-K8S-Secret.

Then I went over to Endpoints, where I added a new Kubernetes endpoint.

Repo setup

The original plan was to download the OVA/OVF files from a repo every time the pipeline ran. However, an error occurred on every VCSA file-set download. Adding more memory to the container didn’t fix the problem, so I had to go in another direction.

The repo is well connected to the k8s cluster, so the transfer is pretty quick. Here is the directory structure for the repo (http://repo.corp.local/repo/).

NOTE: You will need a valid account to download VCSA and NSX-T.

NOTE: NSX-T will be added to the pipeline later.

Simply copying the files interactively on the k8s node seemed like the next logical step. Yes, the files copied over nicely, but any attempt to deploy the VCSA appliance threw a Python error complaining about a missing ‘vmware’ module.

However, I was able to run the container manually, copy the files over, and run the scripts successfully. Maybe a file-permissions issue?

Finally, I ran the pipeline with a long sleep at the beginning, then used an interactive session to copy the files over. This fixed the problem.

Here are the commands I used to copy the files over interactively.

k -n codestream-workspace exec -it po/running-cs-pod-id bash
wget -mxnp -q -nH http://repo.corp.local/repo/ -P /var/workspace_cache/ -R "index.html*"
# /var/workspace_cache is the mount point for the persistent volume
# need to chmod +x a few files to get the vCSA to deploy
chmod +x /var/workspace_cache/repo/vcsa/VMware-VCSA-all-7.0.3/vcsa/ovftool/lin64/ovftool*
chmod +x /var/workspace_cache/repo/vcsa/VMware-VCSA-all-7.0.3/vcsa/vcsa-cli-installer/lin64/vcsa-deploy*

This should do it for now. The next article will cover some of the pipeline details, and some of the changes I had to make to William Lam’s PowerShell code.

Happy holidays.

Changing vRealize Automation Cloud Proxy internal network ranges

My current customer needs to use 172.18.0.0/16 for their new VMware Cloud on AWS cluster. However, when we tried this in the past we got a “NO ROUTE TO HOST” error when trying to add the VMC vCenter as a cloud account.

The problem was eventually traced back to the ‘on-prem-collector’ (br-57b69aa2bd0f) network in the Cloud Proxy, which uses the same subnet.

Let’s say the vCenter’s IP is 172.18.32.10. From inside the cloudassembly-sddc-agent container, I try to connect to the vCenter and eventually get a ‘No route to host’ error. Can anyone say classic overlapping IP space?

We reached out to our VMware Customer Success Team and TAM, who eventually provided a way to change the Cloud Proxy docker and on-prem-collector subnets.

Now for the obligatory warning. Don’t try this in production without having GSS sign off on it.

In this example I’m going to change the docker network to 192.168.0.0/24 and the on-prem-collector network to 192.168.1.0/24.

First, update the docker interface range.

Add the following two lines to /etc/docker/daemon.json. Don’t forget to add the necessary comma(s). Then save and close.

{
  "bip": "192.168.0.1/24",
  "fixed-cidr": "192.168.0.1/25"
}

Restart the docker service.

# systemctl restart docker

Now onto the on-prem-collector network.

Check to see which containers are using this network with docker network inspect on-prem-collector. Mine had two: cloudassembly-sddc-agent and cloudassembly-cmx-agent.

# docker network inspect on-prem-collector
[
    {
        "Name": "on-prem-collector",
        "Id": "57b69aa2bd0f694d76cc553769321deebcdb79e009e0964c4b5cc47aadb14684",
        "Created": "2021-02-10T16:05:21.953266873Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "05105324cff757d76de9e2f535cfb72d2e96094a630561aa141a40aa04095f00": {
                "Name": "cloudassembly-cmx-agent",
                "EndpointID": "8f6717a969b5a1edfea37b9e3d77565c38419de18774bebf4c3981e41c1ad017",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "b227cf1add6caca415b88f927fb10982b0cd846f71548f95071b65330e4024e1": {
                "Name": "cloudassembly-sddc-agent",
                "EndpointID": "4f802a81e0a5dfe50ca39675a5b5106a5fb647198f3bfa898f4f62793baad448",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Disconnect those two containers from the on-prem-collector network.

# docker ps
CONTAINER ID        IMAGE                                                                          COMMAND                  CREATED             STATUS              PORTS                      NAMES

05105324cff7        symphony-docker-external.jfrog.io/vmware/cloudassembly-cmx-agent:207           "./run.sh --lemansDa…"   4 days ago          Up 5 minutes        127.0.0.1:8004->8004/tcp   cloudassembly-cmx-agent

b227cf1add6c        symphony-docker-external.jfrog.io/vmware/cloudassembly-sddc-agent:4cda576      "./run.sh --lemansDa…"   4 days ago          Up 5 minutes        127.0.0.1:8002->8002/tcp   cloudassembly-sddc-agent

# docker network disconnect on-prem-collector b227cf1add6c
# docker network disconnect on-prem-collector 05105324cff7
# docker network inspect on-prem-collector
[
    {
        "Name": "on-prem-collector",
        "Id": "57b69aa2bd0f694d76cc553769321deebcdb79e009e0964c4b5cc47aadb14684",
        "Created": "2021-02-10T16:05:21.953266873Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

Delete the on-prem-collector network, then re-add it using the new subnet (192.168.1.0/24).

# docker network rm on-prem-collector
on-prem-collector
# docker network create --subnet=192.168.1.0/24 --gateway=192.168.1.1 on-prem-collector
47e3d477a87c4459f57e3a7305754b1d91e4d13e645ad4c160de5b8e64fede1a

Reconnect the two containers to the new docker network.

# docker network connect on-prem-collector 05105324cff7
# docker network connect on-prem-collector b227cf1add6c
# 
# docker network inspect on-prem-collector
[
    {
        "Name": "on-prem-collector",
        "Id": "47e3d477a87c4459f57e3a7305754b1d91e4d13e645ad4c160de5b8e64fede1a",
        "Created": "2021-05-18T15:58:55.019732144Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.1.0/24",
                    "Gateway": "192.168.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "05105324cff757d76de9e2f535cfb72d2e96094a630561aa141a40aa04095f00": {
                "Name": "cloudassembly-cmx-agent",
                "EndpointID": "34df13b0accf2f561e0226918a7e84d02995a25f4cc3969758a913a3f6c4e8bb",
                "MacAddress": "02:42:c0:a8:01:02",
                "IPv4Address": "192.168.1.2/24",
                "IPv6Address": ""
            },
            "b227cf1add6caca415b88f927fb10982b0cd846f71548f95071b65330e4024e1": {
                "Name": "cloudassembly-sddc-agent",
                "EndpointID": "405e7e8e1a4ad09b4cc99b0661454a4b0f32687152ca2346daf72f5a424dcd4d",
                "MacAddress": "02:42:c0:a8:01:03",
                "IPv4Address": "192.168.1.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Reboot and do the happy dance.

Happy not-coding.

Cloud Extensibility Appliance vRO Properties using PowerShell

In this article I’ll show you how to return JSON as a vRO Property type using vRA Cloud Extensibility Proxy (CEXP) vRO PowerShell 7 scriptable tasks.

First a couple of notes about the CEXP.

  • It is BIG: 32 GB of RAM. However, my lab instance is using less than 7 GB of active memory.
  • 8 vCPUs, running at about 50% utilization on average.
  • It deploys with 4 disks, using a tad less than 210 GB.

Why PowerShell 7? Well, it was a design decision based on the customer’s PowerShell proficiency.

Now down to the good stuff. Here are the details of this basic workflow using PowerShell 7 scriptable tasks.

  • Get a new vRA Cloud Bearer Token
    • Save it, along with other common header values, to an output variable named ‘headers’ (Properties)
  • The second scriptable task will use the headers and apiEndpoint to GET the vRAC version information (About).
    • Then save version information to an output variable named ‘vRacAbout’ (Properties)

Getting (actually POSTing for) the bearer token is fairly simple. Here is the code for the first task.

function Handler($context, $inputs) {
    <#
    .PARAMETER $inputs.refreshToken (SecureString)
        vRAC Refresh Token

    .PARAMETER $inputs.apiEndpoint (String)
        vRAC Base API URL

    .OUTPUT headers (Properties)
        Headers including the bearerToken

    #>
    $body = @{ refreshToken = $inputs.refreshToken } | ConvertTo-Json

    $headers = @{'Accept' = 'application/json'
                'Content-Type' = 'application/json'}
    
    $Uri = $inputs.apiEndpoint + "/iaas/api/login"
    $requestResponse = Invoke-RestMethod -Uri $Uri -Method Post -Body $body -Headers $headers 

    $bearerToken = "Bearer " + $requestResponse.token 
    $authorization = @{ Authorization = $bearerToken}
    $headers += $authorization

    $output=@{headers = $headers}

    return $output
}

The second task consumes the headers produced by the first task, then GETs the version information from the vRA Cloud About route (‘/iaas/api/about’). The results are then returned as the vRacAbout (Properties) variable.

function Handler($context, $inputs) {
    <#
    .PARAMETER $inputs.headers (Properties)
        Request headers, including the bearer token, from the first task

    .PARAMETER $inputs.apiEndpoint (String)
        vRAC Base API URL

    .OUTPUT vRacAbout (Properties)
        vRAC version information from the About route

    #>
    $requestUri = $inputs.apiEndpoint + "/iaas/api/about"
    $requestResponse = Invoke-RestMethod -Uri $requestUri -Method Get -Headers $inputs.headers

    $output=@{vRacAbout = $requestResponse}

    return $output
}

Here, you can see the output variables for both tasks are populated. Pretty cool.

As you can see, returning the vRO Properties type is fairly simple using PowerShell on the CEXP vRO.
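If a later task needs to consume those Properties, it can simply read them from its inputs. Below is a minimal, hypothetical example of a third scriptable task; the key name ‘latestApiVersion’ is an assumption, so inspect your own vRacAbout payload for the real keys.

function Handler($context, $inputs) {
    <#
    .PARAMETER $inputs.vRacAbout (Properties)
        vRAC version information produced by the previous task
    #>
    # 'latestApiVersion' is an assumed key name, used here for illustration only.
    $apiVersion = $inputs.vRacAbout.latestApiVersion

    Write-Host "vRA Cloud latest API version: $apiVersion"

    return @{ apiVersion = $apiVersion }
}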

The working workflow package is available here.

Happy coding.

vExpert 2021 Applications are open

The 2021 vExpert applications are now open!

The program “is about giving back to the community beyond your day job”.

One way I give back is by posting new and unique content here once or twice a month. Sometimes a post is simply me clearing a thought before the weekend, completing a commitment to a BU, or documenting something before moving on to another task. It doesn’t take long, but could open the door for one of my peers.

My most frequently used benefit is the vExpert and Cloud Management Slack channels. I normally learn something new every week. And it sure does feel good to help a peer struggling with something I’ve already tinkered with.

Here’s a list of some of the benefits for receiving the award.

  • Networking with 2,000+ vExperts / Information Sharing
  • Knowledge Expansion on VMware & Partner Technology
  • Opportunity to apply for vExpert BU Lead Subprograms
  • Possible Job Opportunities
  • Direct Access to VMware Business Units via Subprograms
  • Blog Traffic Boost through Advocacy, @vExpert, @VMware, VMware Launch & Announcement Campaigns
  • 1 Year VMware Licenses for Home Labs for almost all Products & Some Partner Products
  • Private VMware & VMware Partner Sessions
  • Gifts from VMware and VMware Partners
  • vExpert Celebration Parties at both VMworld US and VMworld Europe with VMware CEO, Pat Gelsinger
  • VMware Advocacy Platform Invite (share your content to thousands of vExperts & VMware employees who amplify your content via their social channels)
  • Private Slack Channels for vExpert and the BU Lead Subprograms

The applications close on January 9th, 2021. Start working on those applications now.

vExpert Applications open

The midyear vExpert Applications are open until June 25th, 5 PM PDT.

What the heck is vExpert you may ask? The VMware vExpert program is VMware’s global evangelism and advocacy program. 

The award is for individuals who are sharing their VMware knowledge and contributing that back to their community.

How do you do that? Writing blog articles, participating in discussions on VMware Code (Slack), presenting at VMUGs, etc.

What is in it for you? Promotion of your articles, exposure at global events, co-op advertising, traffic analysis, and early access to beta programs and VMware’s roadmap.

Other vExpert Program Benefits

  • Invite to the private #Slack channel
  • vExpert certificate signed by CEO Pat Gelsinger.
  • Private forums on communities.vmware.com.
  • Permission to use the vExpert logo on cards, website, etc for one year
  • Access to a private directory for networking, etc.
  • Exclusive gifts from various VMware partners.
  • Private webinars with VMware partners as well as NFRs.
  • Access to private betas (subject to admission by beta teams).
  • 365-day eval licenses for most products for home lab / cloud providers.
  • Private pre-launch briefings via our blogger briefing pre-VMworld (subject to admission by product teams)
  • Blogger early access program for vSphere and some other products.
  • Featured in a public vExpert online directory.
  • Access to vetted VMware & Virtualization content for your social channels.
  • Yearly vExpert parties at both VMworld US and VMworld Europe events.
  • Identification as a vExpert at both VMworld US and VMworld EU.

The application process is pretty simple, just visit the vExpert site, create and submit your application.

Don’t forget, the midyear applications close at 5PM PDT June 25th 2020.

vRA Cloud Sync Blueprint Versions to GitHub

The current implementation of the vRealize Automation Cloud Git integration for Blueprints is read only, meaning you download the new Blueprint version into a local repo and then push it. After a few minutes vRA Cloud will see the new version and update the design page. It’s really a pain, if you know what I mean.

What I really wanted was to automatically push the new or updated Blueprint when a new version is created.

The following details one potential solution using vRA Cloud ABX actions in a flow running on AWS Lambda.

The flow consists of three parts.

  1. Retrieve the vRA Cloud refresh token from an AWS Systems Manager Parameter, then exchange it for a bearer token (get_bearer_token_AWS). The action returns the bearer token as ‘bearer_token’.
  2. Get Blueprint Version Content. This uses ‘bearer_token’ to get the new Blueprint version payload and returns it as ‘bp_version_content’.
  3. Add or Update Blueprint on GitHub. This action converts ‘bp_version_content’ from JSON into YAML and adds or updates the two required properties, ‘name’ and ‘version’, both taken from the content retrieved in step two. It then clones the repo, checks whether the blueprint already exists, and either creates a blueprint folder with a blueprint.yaml or updates the existing blueprint.yaml.

The vRA Cloud refresh token and GitHub API key are stored in AWS SSM Parameters. Please take a look at one of my previous articles on how to set this up.

‘get_bearer_token_AWS’ has two inputs: region_name is the AWS region, and refreshToken is the name of the SSM Parameter containing the vRA Cloud refresh token.

Action 2 (Blueprint Version Content) uses the bearer token returned by Action 1 to get the blueprint version content.
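To make that step a bit more concrete, here is a minimal PowerShell-style sketch of the idea behind action 2. The actions in the repo are not necessarily written this way; the blueprint versions route and the input names below (apiEndpoint, blueprintId, version) are assumptions for illustration only.

function handler($context, $inputs) {
    # Build headers using the bearer token produced by the first action.
    # This assumes 'bearer_token' is the raw token without the 'Bearer ' prefix.
    $headers = @{
        'Accept'        = 'application/json'
        'Authorization' = 'Bearer ' + $inputs.bearer_token
    }

    # Assumed route for reading a specific blueprint version's content.
    $uri = $inputs.apiEndpoint + "/blueprint/api/blueprints/" +
        $inputs.blueprintId + "/versions/" + $inputs.version

    $bpVersion = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers

    # Hand the payload to the next action in the flow.
    return @{ bp_version_content = $bpVersion }
}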

The final action consumes the blueprint content returned by action 2. It has three inputs: githubRepo is the repo configured in your GitHub project, githubToken is the SSM Parameter holding the GitHub key, and region_name is the AWS region where the Parameter is configured.

Create a new blueprint version configuration subscription, using the flow as the target action and filtering the event with event.data.eventType == ‘CREATE_BLUEPRINT_VERSION’.

Now to test the solution. Here I have a very basic blueprint. Make sure you add the name and version properties; the name value should match the actual blueprint name. Now create a new version, then wait until vRA Cloud inventories the GitHub repo again.

You may notice the versioned blueprint shows up a second time, now being managed by GitHub. I think vRA Cloud adds the blueprints discovered on GitHub with a new blueprint ID. The fix is pretty easy: just delete the original blueprint after making sure the imported one still works.

The flow bundle containing all of the actions is available in this repository.

Spas Kaloferov recently posted a similar solution for GitLab. Here is the link to his blog.

vRA Custom Form CSS rendering issues

I ran into some interesting issues when trying to develop a custom form in vRealize Automation.

I noticed the first one when I tried to apply font-size to a field. After doing some research and watching a few videos, it looked like some field IDs simply could not be targeted in CSS. The main issue is that the field ID contains a tilde ‘~’, which is not a valid character in a CSS ID selector.

tildeIssue

A quick search turned up an amazingly simple solution: escape the tilde with a backslash ‘\’. Stupid easy!

My CSS up to this point contained the following:

body {
font-size: 18px;
}

#vSphere__vCenter__Machine_1\~size {
font-size: 14px; 
}

#integerField_cda957b7 {
font-size: 20px;
font-style: italic;
}

#vSphere__vCenter__Machine_1\~VirtualMachine.Disk1.Size {
font-size: 32px;
}

Which produced this form.

improperlyFormattedForm

The integerField and Size (Component Profile) fields updated properly, but the Requested Disk Size didn’t. (This field is actually updated using an external vRO action that changes the integer into a string, which is then applied to a custom property.) OK, what I was actually after was getting rid of the label word wrap, but I stumbled across the font-size issue along the way.

After digging through some debugging info I ran into two classes: ‘grid-item’ and ‘label.field-label.required’.

label-field-required

After updating the CSS, I was finally able to change the Requested Disk Size font and fix the word-wrap issue.

My final result is a good starting point. Both the Size and Requested Disk Size fields have a font size of 18px, and the integer field has an italicized font at the correct size. The shadow on the Size label was just for fun 🙂

FormattedCustomForm

My final CSS looks like this:

body {
font-size: 18px;
}

.label.field-label.required {
width: 80%!important;
}

.grid-item {
width: 80%!important;
}

#vSphere__vCenter__Machine_1\~size {
/* font-size: 14px; */
color: blue;
text-shadow: 3px 2px red;
}
#integerField_cda957b7 {
font-size: 20px;
font-style: italic;
}
#vSphere__vCenter__Machine_1\~VirtualMachine.Disk1.Size {
/* does not work with custom properties */
/* need to dig into label.field.label.* */
font-size: 32px;
}

Well, it’s about that time of day. Be well.

Migrating from NSX-V to NSX-T Error

The project this week was to convert my lab environment from NSX-V to NSX-T to support some upcoming projects.

Apparently unconfiguring the hosts through the NSX-V interface didn’t work as expected, and in fact didn’t remove all of the NSX-V components.

When trying to install NSX-T (2.3.1) on my vSphere 6.7 hosts, I received the error ‘File path of /etc/vmsyslog.conf.d/dfwpktlogs.conf is claimed by multiple non-overlay VIBs’.

vmsyslogError

After logging into one of the hosts and running ‘esxcli software vib list | grep -i nsx’ I discovered a stray NSX package called esx-nsxv.

# esxcli software vib list | grep -i nsx
esx-nsxv 6.7.0-0.0.7563456 VMware VMwareCertified 2019-03-08

This was quickly rectified by placing the host in maintenance mode, then running the following command.

# esxcli software vib remove -n esx-nsxv

Which spit out the following.

Removal Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed:
VIBs Removed: VMware_bootbank_esx-nsxv_6.7.0-0.0.7563456
VIBs Skipped:

After exiting maintenance mode I retried the installation by selecting ‘Resolve’ in NSX-T.

vmsyslogResolve

NSX-T installation on the host then completed successfully.