Managing Azure AKS clusters with VMware Aria Automation

The use case presented to me for a POC was to deploy a new Azure AKS cluster and then install a basic application. A simple use case, but those of you using vRA know its native Kubernetes capabilities are all but nonexistent.

But after digging around and tinkering, I figured a CodeStream pipeline (CodeStream is now called Pipelines) would probably fit the bill. The pipeline would run terraform plan and apply to build the deployment, and terraform destroy to tear it down later on.

Keeping track of the state file between runs also presented a ‘problem’. After lots of kicking the tires I came up with a way to store the state file securely in an Azure Storage account. The state file in the container is simply the deployment name plus .tfstate. This allowed me to refer to it from day 2 actions and Event Broker Subscriptions (EBS).
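
To make this concrete, here is a minimal sketch of what such a backend block might look like. The resource group, storage account, and container names are assumptions for illustration. Note that Terraform backend blocks cannot interpolate variables, which is exactly why the pipeline rewrites these values (more on that below).

terraform {
  backend "azurerm" {
    # These values are rewritten by the first pipeline task;
    # backend blocks cannot reference Terraform variables.
    resource_group_name  = "rg-tfstate"    # assumed name
    storage_account_name = "tfstatedemo"   # assumed name
    container_name       = "tfstate"       # assumed name
    key                  = "DEPLOYMENT_NAME.tfstate" # token replaced with the deployment name
  }
}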

Another issue that came up was cleaning up the ‘codestream.execution’ resources when the deployment is deleted. Since these deployments are handled by Terraform, I needed another workflow (WF), which calls a pipeline to destroy the deployment when the eventType is DESTROY_DEPLOYMENT.

The files for this article can be found at azure-terraform-blog.

Terraform is used to do the heavy lifting. The backend values get replaced with pipeline inputs in the first pipeline task; the most important one is the deployment name. When destroying the deployment, Terraform pulls the current state for that deployment and does its thing.
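
To give a feel for that heavy lifting, here is a minimal sketch of the kind of configuration involved. The resource names, node pool sizing, and the Bitnami WordPress chart are assumptions for illustration; the actual files are in the repo linked above.

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "rg-aks-demo" # assumed; in practice derived from the deployment name
  location = "eastus2"     # assumed region
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-demo"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "aks-demo"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}

# Helm talks to the new cluster using its generated kubeconfig.
provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
  }
}

# The 'basic application' from the use case, e.g. WordPress via Helm.
resource "helm_release" "wordpress" {
  name             = "wordpress"
  repository       = "https://charts.bitnami.com/bitnami"
  chart            = "wordpress"
  namespace        = "wordpress"
  create_namespace = true
}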

The CodeStream pipeline (now Pipelines) uses a custom Docker image. It includes the latest version of Terraform (currently 1.5.4), the AZ CLI, kubectl, and Helm (for another use case). It is stored on Docker Hub as americanbwana/cas-terraform-154:latest.

I didn’t come up with the basic template myself; I found this article on vEducate.co.uk, which is a very good starting point. ‘pipelineTask’ is used by the pipeline to either create (apply) or destroy the deployment. More on that later.

formatVersion: 1
inputs:
  pipelineTask:
    type: string
    title: Pipeline Task
    description: 'Create '
    readOnly: true
    default: create
resources:
  cs.pipeline:
    type: codestream.execution
    properties:
      pipelineId: 2b80427c...
      outputs:
        computed: true
      inputs:
        deploymentName: ${env.deploymentName}
        pipelineTask: ${input.pipelineTask}
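
Note the ${env.deploymentName} binding: the deployment name flows into the pipeline, where it becomes the <deploymentName>.tfstate key in the storage container described earlier.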

vRA doesn’t delete the actual codestream.execution items when you destroy the deployment. A workflow called ‘Terraform delete AKS and Helm deployment’ is therefore triggered by an Event Broker Subscription (EBS). Make sure to update the ‘codestreamPipelineId’ in the WF variables.
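
If you are building the subscription from scratch, the trigger is a condition on the event payload. Assuming the payload exposes the field as event.data.eventType, a filter along the lines of event.data.eventType == "DESTROY_DEPLOYMENT" limits the workflow to destroy requests.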

Event Broker Subscription

And finally, on to the pipeline. The initialize task copies several variables into a file, which is then sourced by most stages. Terraform apply only fires if pipelineTask = ‘create’, and terraform destroy only fires when pipelineTask = ‘destroy’.
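
In the pipeline itself this gating is done with task preconditions; assuming the usual ${...} expression syntax, that is roughly ${input.pipelineTask} == "create" on the apply task and ${input.pipelineTask} == "destroy" on the destroy task.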

Pipeline

‘Get Service IP’ also only fires if pipelineTask = ‘create’. This task gets the IP address of the WordPress service and exports it back to vRA.
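
The pipeline task presumably shells out to kubectl, but the same value can also be surfaced from Terraform itself. A sketch, assuming the chart’s default service name and a kubernetes provider configured like the helm provider above:

# Read the WordPress service once the Helm release is up.
data "kubernetes_service" "wordpress" {
  metadata {
    name      = "wordpress" # assumed service name from the chart
    namespace = "wordpress"
  }
  depends_on = [helm_release.wordpress]
}

output "wordpress_ip" {
  value = data.kubernetes_service.wordpress.status.0.load_balancer.0.ingress.0.ip
}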

Terraform Output
Assembler Output

Nuff for now. Happy coding.

VMware vRealize Automation Cloud Terraform Resources

I’d been anxiously waiting for the new Terraform Resources for vRealize Automation Cloud, and the wait is finally over. Version 8.20 was released on 8/30/2020. You can view the Release Notes here.

Now why would you want to use Terraform in vRA Cloud? You can already do a bunch out of the box. But what if you wanted to deploy an AWS EC2 instance with an encrypted boot disk? That isn’t available natively.

Terraform to the rescue. You can use Terraform to fill those kinds of gaps without using vRealize Orchestrator or an Extensibility Appliance.

In this article I’ll show you how to deploy a basic AWS EC2 instance with an encrypted boot disk using the new Terraform Resource.
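
To set expectations, here is a minimal sketch of the kind of configuration this takes. The AMI filter and instance type are assumptions for illustration; the actual files are linked below. The key piece is root_block_device with encrypted = true.

variable "region" {
  default = "us-east-2"
}

variable "ssh_key_name" {
  default = "changeMe"
}

variable "hostname" {
  default = "changeMe"
}

provider "aws" {
  region = var.region
}

# Most recent Amazon Linux 2 AMI (assumed choice of image).
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "vm" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
  key_name      = var.ssh_key_name

  # The gap vRAC cannot fill natively: an encrypted boot disk.
  root_block_device {
    encrypted = true
  }

  tags = {
    Name = var.hostname
  }
}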

First you will need to set up your vRAC environment for Terraform. This is well documented in the How to include Terraform configuration in Cloud Assembly documentation.

The Terraform configuration files and blueprint are available here.

Now on to the good stuff. Within vRAC, create a new Cloud Template (renamed from Blueprints) by clicking Design -> NEW FROM -> Terraform.

Enter the template name and project on the next page, then click Next. The following page is where you select your GitHub repository, commit, and source directory. Then click Next.

For this example I’m leaving all of the variables as they come from variables.tf in the source directory. Click Next. This will bring you to the designer page.

The code will look something like this, with one notable exception depending on how your repo’s directory structure is laid out. The wizard assumes your sourceDirectory is directly off the root, for example root/sourceDirectory. But what if you have a path that looks like root/terraform/sourceDirectory? The wizard will not add ‘/terraform’ automagically; you will need to fix it yourself (the sample below is already fixed).

inputs:
  region:
    type: string
    default: us-east-2
  ssh_key_name:
    type: string
    default: changeMe
  hostname:
    type: string
    default: changeMe
resources:
  terraform:
    type: Cloud.Terraform.Configuration
    properties:
      variables:
        region: '${input.region}'
        ssh_key_name: '${input.ssh_key_name}'
        hostname: '${input.hostname}'
      providers:
        - name: aws
          # List of available cloud zones: populated by the wizard during template creation
          cloudZone: *********
      terraformVersion: 0.12.26
      configurationSource:
        repositoryId: XXXXXXXXX
        commitId: XXXXXXXXX
        sourceDirectory: /terraform/Basic AWS

If all goes as expected, you can now deploy the new template.

Make sure to change the default variable values to match your environment. The ssh_key_name needs to exist in the region you are deploying into.

Click on the Deployment History to see the Terraform plan and apply logs. They can be viewed by clicking ‘Show Logs’ on the PLAN_IN_PROGRESS or CREATE_IN_PROGRESS status lines.

The logs can also be viewed in a new browser tab by clicking on ‘View as plain text’ from the expanded ‘Show Logs’ window.

Hopefully the instance deployed correctly. If so, take a coffee break. This gives vRAC time to discover the new ‘aws_instance’ and enable some day 2 actions.

The day 2 actions vary depending on the deployed machine type. You can find more information on page 416 of the Using and Managing VMware Cloud Assembly documentation.

But did the boot disk actually get encrypted? Yes! Here is a screenshot of the boot volume.

As you can see, the new Terraform Resource is a great way to fill in vRAC deployment gaps without using extensibility.

Stay tuned. More to come.