
vRealize Automation 7.3 Distributed Install – Prerequisites

Pre-requisites - Get your ducks in a row!

As a consultant I've had the opportunity to design, install and configure dozens of production vRealize Automation deployments, from reasonably small proof-of-concept environments to globally-scaled, multi-datacenter, fully distributed behemoths. It's fair to say that I've made mistakes along the way – and learned a lot of lessons about what makes a deployment a success.

In the end, pretty much everything comes down to getting the pre-requisites right. Everything I've written here is already covered in the official documentation, and the installation wizard does a huge amount of the work for you.

For the purposes of this post, I am working with the following components, which have been pre-deployed on a single flat network.

vRA Appliances

Server      CPU   RAM (GB)   Disk (GB)
vra-app-1   4     18         140
vra-app-2   4     18         140

vRA IaaS Windows Servers

Server      CPU   RAM (GB)   Disk (GB)
vra-web-1   2     8          60
vra-web-2   2     8          60
vra-man-1   2     8          60
vra-man-2   2     8          60
vra-dem-1   2     4          60
vra-dem-2   2     4          60
vra-sql     2     8          60


Deploying fully distributed vRealize Automation IaaS components – Part 1: Pre-requisites

14/08/2015

One of the trickiest parts of deploying vRealize Automation is the IaaS layer. People sometimes look at me like I'm a crazy person when I say that, normally because they've deployed a PoC or a small deployment with just a single IaaS server. Add in five more servers, some load balancers, certificates, a distributed setup and MSDTC, and you have huge potential for pain!

If you've followed my previous posts, you'll know that I've got an HA Platform Services Controller configured, and an HA vRealize Appliance cluster configured with Postgres replication – all good so far.

There are loads of ways to deploy the IaaS layer, and which one you choose depends on the requirements and the whim of the architect who designed it – but the setup I deploy most often is below:

  • Active/Active IaaS Web components on two servers and load balanced
  • Active/Passive IaaS Manager components on two more servers and load balanced, with a DEM Orchestrator also running on each host
  • Two more servers running the DEM Worker and Agents – typically vSphere Agents.

Keeping the web components separate ensures that your users' experience is snappy and isn't degraded during deployments. The Manager Service is load balanced, but has an active and a passive side, and the passive side needs to be manually started in the event of a failover. The DEM Orchestrator role orchestrates the tasks given to the DEM Workers and doesn't put much load on the servers. Finally, the DEM Workers and Agents are placed on two more servers – the DEM Orchestrators balance the workloads sent to each DEM Worker, and if a Worker fails its workload is resumed on the other Worker. The vSphere Agents are configured identically and provide high availability for the service; workloads will just use whichever agent is available for a particular resource.
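
Because all of this hangs off the load balancer's health monitors, it's worth being able to probe the component health endpoints directly while you troubleshoot. Here's a rough sketch; the endpoint paths follow VMware's vRA 7.x load-balancing guidance, but verify them against the documentation for your exact build, and the hostnames are placeholders from my lab:

```python
import ssl
import urllib.request

# Health endpoints commonly used as load-balancer monitors for vRA 7.x.
# Paths are taken from VMware's load-balancing guidance - verify them
# against the docs for your build. Hostnames are placeholders.
ENDPOINTS = {
    "vra-app-1": "https://vra-app-1/vcac/services/api/health",
    "vra-app-2": "https://vra-app-2/vcac/services/api/health",
    "vra-web-1": "https://vra-web-1/wapi/api/status/web",
    "vra-web-2": "https://vra-web-2/wapi/api/status/web",
    "vra-man-1": "https://vra-man-1/VMPSProvision",
    "vra-man-2": "https://vra-man-2/VMPSProvision",
}

# Lab certificates are often untrusted at this stage, so verification is
# skipped here - don't do this against production endpoints.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

for name, url in ENDPOINTS.items():
    try:
        with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
            print(f"{name:<12} HTTP {resp.status}")
    except Exception as exc:
        print(f"{name:<12} DOWN ({exc})")
```

For the manager service in particular, only the active node should report healthy – if both sides respond, check that the passive service really is stopped.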

I have already deployed and configured a Microsoft SQL Server 2008 R2 failover cluster, with a local MSDTC configured on each node. I hope to publish a blog post on this in the near future – for this article it's not important, just that there's an MSSQL database available and MSDTC is configured on the database server.
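
MSDTC problems are among the most common IaaS install failures, and they usually surface as RPC connectivity errors between the IaaS servers and the SQL node. A quick reachability sketch for the relevant ports, run from each IaaS server – the SQL host name is a placeholder, and note that MSDTC also negotiates a dynamic RPC port range that a static check like this can't cover:

```python
import socket

# The SQL host name is a placeholder; 1433 is the default SQL Server port
# and 135 is the RPC endpoint mapper that MSDTC depends on. MSDTC also
# uses a dynamic RPC port range, which this static check doesn't cover.
SQL_HOST = "vra-sql"
PORTS = {1433: "SQL Server", 135: "RPC endpoint mapper (MSDTC)"}

for port, label in PORTS.items():
    try:
        with socket.create_connection((SQL_HOST, port), timeout=3):
            print(f"{SQL_HOST}:{port} ({label}) reachable")
    except OSError as exc:
        print(f"{SQL_HOST}:{port} ({label}) UNREACHABLE ({exc})")
```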