DefinIT

Three Tier App for vRealize Automation

One question I’m asked quite a lot is what I use for a 3-tier application when I’m testing things like NSX micro-segmentation with vRealize Automation. The simple answer is that I used to make something up as I went along, deploying components by hand and generally repeating myself a lot. I had some cut/paste commands in my notes application that sped things up a little, but nothing more developed than that. I’ve been meaning to rectify this for a while, and this is the result!

A lot of this is based on the excellent blog posts published on the VMware HOL blog by Doug Baer. Doug wrote five parts on creating his application on Photon OS and they’re well worth a read (start at part 1, here). I have changed a few things for my vRA Three Tier App, and some things are the same:

  • I’m using CentOS 7, as that’s what I see out in the wild with customers (RHEL 7) and what I’m most familiar with
  • The app itself is the PHP MySQL CRUD Application from Tutorial Republic
  • The DB tier uses MariaDB (MySQL) not SQLite
  • The App tier is an Apache/PHP server
  • The Web tier is still NGINX as a reverse proxy
  • I am including NSX on-demand load balancers in my blueprint, but you don’t actually need them for single-VM tiers
  • Finally, I want to be able to deploy my 3-tier application using vRA Software Components (though you can also use startup scripts in the customisation spec)

Based on this, my final application will look something like the image below: clients connect to the NSX load balancer on HTTPS/443, which sits in front of multiple NGINX reverse proxy servers; those proxies talk to a second NSX load balancer on HTTP/8080, which fronts multiple Apache web servers running the PHP application; and the web servers all talk to the MySQL database back end over MySQL/3306.

Three Tier App

When in use, the application looks like this:

(more…)

vRealize Automation 7.3 and NSX – Micro-segmentation strategies

vRealize Automation and NSX integration has introduced the ability to deploy multi-tiered applications with network services included. The current integration also enables out-of-the-box micro-segmentation, based on dynamic Security Group membership and the Service Composer. This method does have some limitations and can be inflexible for the ongoing management of deployed applications: it requires in-depth knowledge and understanding of NSX and the Distributed Firewall, as well as access to the Networking and Security interface in the vSphere Web Client.

For customers who have deployed a private cloud solution using vRealize Automation, an alternative is to develop a “Firewall-as-a-Service” approach, using automation to allow authorised end users to configure micro-segmentation themselves. This can be highly flexible and allows the delegation of firewall management to the application owners, who have intimate knowledge of the application. There are disadvantages to this approach too, including significantly increased effort to author and maintain the automation workflows.
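As a rough illustration of what a “Firewall-as-a-Service” workflow might do behind the scenes, here is a minimal vRO JavaScript sketch that creates a new Distributed Firewall section containing a single rule requested by the end user. It is not taken from the post: the REST host, input names and abbreviated XML body are assumptions based on the NSX-v 6.x DFW API, so check the schema against the NSX API guide for your version before building on it.

// Minimal sketch of a vRO scriptable task behind a "Firewall-as-a-Service" request.
// Assumed inputs (not from the post): nsxManager (REST:RESTHost pointing at NSX Manager),
// ruleName, sourceGroupId, destinationGroupId and port collected from an XaaS request form.

// Abbreviated NSX-v DFW payload - protocol 6 is TCP; no XML escaping, for brevity only.
var sectionXml =
    '<section name="' + ruleName + '-section">' +
    '<rule disabled="false" logged="true">' +
    '<name>' + ruleName + '</name>' +
    '<action>allow</action>' +
    '<sources excluded="false"><source><type>SecurityGroup</type><value>' + sourceGroupId + '</value></source></sources>' +
    '<destinations excluded="false"><destination><type>SecurityGroup</type><value>' + destinationGroupId + '</value></destination></destinations>' +
    '<services><service><protocol>6</protocol><destinationPort>' + port + '</destinationPort></service></services>' +
    '</rule></section>';

// POST a new layer 3 section so repeated deployments never edit anyone else's rules.
var request = nsxManager.createRequest("POST", "/api/4.0/firewall/globalroot-0/config/layer3sections", sectionXml);
request.contentType = "application/xml";
var response = request.execute();

if (response.statusCode < 200 || response.statusCode > 299) {
    throw "DFW section creation failed (HTTP " + response.statusCode + "): " + response.contentAsString;
}
System.log("Created DFW section for rule '" + ruleName + "'");

Wrapped in an XaaS blueprint, this kind of task lets application owners request rules without ever touching NSX directly.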

This blog post describes two possible micro-segmentation strategies for vRealize Automation with NSX and compares the two approaches against a common set of requirements.

This post was written based on the following software versions:

Software Component      Version (Build)
vRealize Automation     7.3 (5604410)
NSX                     6.3.5 (7119875) – 6.4
vSphere                 6.5 Update 1d (7312210)
ESXi                    6.5 Update 1 (5969303)

These are some generic considerations when deploying micro-segmentation with vRealize Automation.

  • An application blueprint is designed to be deployed multiple times from vRealize Automation; the automation shouldn’t break any existing micro-segmentation or firewall policy when that happens.
  • vRealize Automation blueprints can scale in and out, so the micro-segmentation strategy must accommodate this and ensure that the micro-segmentation actually implemented always matches what is required (see the sketch after this list).
  • vRealize Automation is a shared platform, so the micro-segmentation of one deployment should be limited in scope, but it should also allow for communication between applications of, for example, the same business group or tenant.
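One way to keep scaled-out machines inside the same policy is to drive dynamic Security Group membership from an NSX security tag applied as each machine is provisioned, so the DFW rules bound to that group follow the deployment automatically. The sketch below shows that tag being applied from a vRO task; the REST host, tag ID and VM reference are placeholder inputs and the API path is based on the NSX-v security tag API, so treat it as an illustration rather than a finished workflow.

// Assumed inputs (not from the post): nsxManager (REST:RESTHost), tagId (e.g. "securitytag-10")
// and vmMoref (the vCenter managed object reference of the machine being provisioned).

// NSX-v attaches a security tag to a VM with an empty-bodied PUT.
var path = "/api/2.0/services/securitytags/tag/" + tagId + "/vm/" + vmMoref;
var request = nsxManager.createRequest("PUT", path, "");
request.contentType = "application/xml";
var response = request.execute();

if (response.statusCode != 200) {
    throw "Failed to attach tag " + tagId + " to " + vmMoref + " (HTTP " + response.statusCode + ")";
}
// Dynamic Security Group membership (and therefore the DFW rules) updates on its own,
// and scale-in removes the VM and its membership without touching the firewall policy.
System.log("Attached " + tagId + " to " + vmMoref);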

Application XYZ requirements

For illustration purposes, an example 3-tier application deployment, “Application XYZ”, is shown below. It consists of Web, App and DB tiers, with a load balancer in front of the Web and App tiers.

Application XYZ Allowed Flows

(more…)

vRealize Automation 7.3 Distributed Install – Prerequisites

Pre-requisites - Get your ducks in a row!

As a consultant I’ve had the opportunity to design, install and configure dozens of production vRealize Automation deployments, from reasonably small proof-of-concept environments to globally-scaled, multi-datacenter, fully distributed behemoths. It’s fair to say that I’ve made mistakes along the way, and learned a lot of lessons about what makes a deployment a success.

In the end, pretty much everything comes down to getting the pre-requisites right. Everything I’ve written here is already covered in the official documentation, and the installation wizard does a huge amount of the work for you.

For the purposes of this post, I am working with the following components, which have been pre-deployed on a single flat network.

vRA Appliances

Server      CPU (vCPU)   RAM (GB)   Disk (GB)
vra-app-1   4            18         140
vra-app-2   4            18         140

vRA IaaS Windows Servers

Server      CPU (vCPU)   RAM (GB)   Disk (GB)
vra-web-1   2            8          60
vra-web-2   2            8          60
vra-man-1   2            8          60
vra-man-2   2            8          60
vra-dem-1   2            4          60
vra-dem-2   2            4          60
vra-sql     2            8          60

(more…)

vRA 7.2 and vSphere Integrated Containers

One of the cool new features released with vRealize Automation 7.2 was the integration of VMware Admiral (container management) into the product. Recently VMware also made version 1 of vSphere Integrated Containers generally available (GA), so I thought it was time I started playing around with the two.

In this article I’m going to cover deploying VIC to my vSphere environment and then adding that host to the vRA 7.2 container management.

Deploying vSphere Integrated Containers

VIC is deployed using a command-line interface, which deploys a vApp and a container host VM onto your ESXi host or vSphere cluster. There are a LOT of different ways to configure VIC, so I strongly suggest you read and digest the VIC Installation Guide. For the sake of simplicity, I’m going to deploy as basic a setup as I can figure out. (more…)

Deploying to AWS with Software Components on vRealize Automation 7


Recently I’ve been working on some ideas in my lab to leverage the AWS endpoint in vRealize Automation. One of the things I needed to do was get Software Components working on my AWS-deployed instances.

My end-state network is straightforward: the instance deployed by vRA into AWS sits in a private subnet in my VPC, uses my local lab DNS server, and can reach my vRA instance. This allows me to make use of the vRA guest agent for Software Components on the deployed VMs. I also wanted the deployed VMs to use their local NAT gateway for internet traffic, rather than paying for that data to cross my VPN connection. (more…)

Adding a vCloud Air (PAYG/Gen2) instance to vRealize Orchestrator as a vCloud Director host

Big thanks to Jose Luis Gomez for this solution; his response to my tweet was spot on and invaluable!

I’ve been trying to configure vCloud Air as a vCloud Director host in vRealize Orchestrator in order to create some custom resource actions for Day 2 operations in vRealize Automation. What I found was that there’s *very* little information out there on how to do this, and I ended up writing my own custom resource mapping for the virtual machines to VCAC:VirtualMachine objects – at least that way I could add my resource action. But this still didn’t expose the vCloud Director functionality for those machines. To do this I needed vCloud Air added as a vCloud Director host.

As per Jose’s advice, I duplicated the “com.vmware.library.vCloud.Host/addHost” action and named it “addHost_vCA_G2”.


I then modified the following line to include “/api/compute”:

newHost.url = "https://" + host + ":" + port;

Becomes

newHost.url = "https://" + host + ":" + port + "/api/compute";
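// vCloud Air (Gen2) exposes its vCloud Director-compatible API under the /api/compute path,
// so the duplicated action appends it to the URL the plugin uses when adding the host.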

I then duplicated the “Add a connection” workflow to create “Add a connection (vCloud Air Gen2)” and swapped the old action for the new one.


Now I can add vCloud Air (PAYG/Gen2) as an endpoint in the normal way.


The out-of-the-box “IaaS vCD VM” Resource Mapping now works in vRA and I can create custom Resource Actions against the vCloud:VM object type.

Once again, big thanks to Jose for this solution!

MindMap: vRealize Automation Roles

I use mind maps quite a lot for study; I find the visual representation of information makes it a lot easier for me to remember! Below is a mind map I created for learning the roles in vRealize Automation, which I used during my presentation for #vBrownBag on VCP6-CMA objective 2.

You can download a PDF version here: vRealize Automation Roles Mind Map


Building a vRealize Automation NSX Lab on Ravello

As a vExpert, I am blessed with 1,000 CPU hours of access to Ravello’s awesome platform, and recently I’ve been playing with the AutoLab deployments tailored for Ravello.

If you’re unfamiliar with Ravello’s offering (where have you been?!) then it’s basically a custom hypervisor (HVX) running on either AWS or Google Cloud that allows you to run nested environments on those platforms. I did say it’s awesome.

As an avid home-lab enthusiast I initially found Ravello a little weird, but having used it for a while I can definitely see the potential to augment, and in some cases completely replace, the home lab. I spent some time going through Nigel Poulton’s AWS course on Pluralsight to get a better understanding of the AWS platform and I think that helped, but it’s definitely not required to get started on Ravello.

One more thing to add before I start the setup – even if I didn’t have 1,000 hours free, the pricing model means that you could run your lab on Ravello for a fraction of the cost of a higher-spec home lab. It’s definitely an option to consider unless you’re running your lab 24/7.

(more…)

Deploying fully distributed vRealize Automation instance – Configuring NetScaler Monitors

With a fully distributed vRealize Automation instance, one of the critical components of maintaining uptime is determining whether any particular service is actually “up”. Out-of-the-box monitors allow us to detect whether the port we are load balancing is open, but not whether the service on that port is functioning correctly.

Important: None of these monitors should be created until vRealize Automation has been fully installed – creating them as you go along will result in installation failures. For example, if you create the monitor on the IaaS web service before the DEM roles are installed, the web service will always show as down because it is waiting for a DEM role.

Creating a NetScaler Monitor

To create the monitor, open the NetScaler configuration page and open Traffic Management, Load Balancing, then Monitors. Select the “https-ecv” monitor and click “Add” – this pre-loads the settings from that monitor, which populates most of the settings we need.
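To show the difference between a simple port check and a service check, here is a minimal vRO-style JavaScript sketch of the kind of HTTP probe such a monitor performs against a vRA appliance. The /vcac/services/api/health path and the expected 204 response are commonly used in vRA load-balancer health monitors, but verify them against the load balancing guide for your version; the vraHost REST host is an assumed input, not something from this post.

// Assumed input (not from the post): vraHost is a REST:RESTHost pointing at a single vRA appliance.
var request = vraHost.createRequest("GET", "/vcac/services/api/health", null);
var response = request.execute();

// A port-open monitor would pass as soon as the TCP connection succeeds;
// this only reports healthy when the appliance actually answers its health URL.
var healthy = (response.statusCode == 204 || response.statusCode == 200);
System.log("vRA appliance health check returned HTTP " + response.statusCode + " - " + (healthy ? "UP" : "DOWN"));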

(more…)