DefinIT

Deploying fully distributed vRealize Automation IaaS components – Part 2: Database, Web and Manager services

Now that the prerequisites for the IaaS layer are complete, it’s time to move on to the actual installation of the IaaS components, starting with the database. We then move on to the first Web server, which also imports the ModelManagerData configuration into the database, populating it with everything the IaaS layer needs out of the box. We then install the second Web server before moving on to the active Manager server. The second Manager server is passive and its service should be disabled – I’ll cover installing the DEM Orchestrators, DEM Workers and vSphere Agents in the next article.
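Disabling the service on the passive Manager node can be scripted. Here’s a minimal Python sketch that simply wraps sc.exe, assuming the Windows service name shown (an assumption – confirm the exact name on your build with sc query state= all); run it elevated on the passive server:

    # Stop and disable the Manager Service on the passive node (run as admin).
    import subprocess

    # Assumed service name for the vRA 6.x Manager Service - verify locally.
    SERVICE = "VMware vCloud Automation Center Service"

    def run(args):
        # Echo then execute; sc.exe exits non-zero if, say, the service is
        # already stopped, and check=True surfaces that as an exception.
        print(">", " ".join(args))
        subprocess.run(args, check=True)

    run(["sc", "stop", SERVICE])                          # stop it now
    run(["sc", "config", SERVICE, "start=", "disabled"])  # block auto-start

(more…)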

Deploying fully distributed vRealize Automation IaaS components – Part 1: Pre-requisites

One of the trickiest parts of deploying vRealize Automation is the IaaS layer – people sometimes look at me like I’m a crazy person when I say that, normally because they’ve only ever done a PoC or a small deployment with a single IaaS server. Add five more servers, some load balancers, certificates, a distributed setup and MSDTC to the mix and you have huge potential for pain!

If you’ve followed my previous posts, you’ll know that I’ve got an HA Platform Services Controller configured, and an HA vRealize Appliance cluster configured with Postgres replication – all good so far.

There are loads of ways to deploy the IaaS layer, and they depend on the requirements and the whims of the architect who designed it – but the setup I deploy most often is below:

  • Active/Active IaaS Web components on two servers, load balanced
  • Active/Passive IaaS Manager components on two more servers and load balanced, with a DEM Orchestrator also running on each host
  • Two more servers running the DEM Worker and Agents – typically vSphere Agents.

Keeping the web components separate ensures that the user experience stays snappy and isn’t degraded during deployments. The Manager Service is load balanced, but it has an active side and a passive side, and the passive side must be started manually in the event of a failover. The DEM Orchestrators coordinate the tasks handed to the DEM Workers and don’t put much load on their servers. Finally, the DEM Workers and Agents sit on two more servers – the DEM Orchestrators balance workloads across the DEM Workers and, if a Worker fails, will resume its workload on the other Worker. The vSphere Agents are configured identically and provide high availability for the service; workloads simply use whichever agent is available for a particular resource.
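Once the load balancers are in place, it’s handy to probe the two virtual IPs and see what actually answers. Here’s a small Python sketch; the hostnames and health-check URLs are assumptions (substitute whatever paths your load balancer monitors actually poll), and it accepts self-signed certificates for lab use only:

    # Probe the load-balanced IaaS endpoints and report what answers.
    import ssl
    import urllib.request

    # Hostnames and health-check paths below are placeholders/assumptions.
    CHECKS = {
        "iaas-web":     "https://vra-web.lab.local/wapi/api/status",
        "iaas-manager": "https://vra-manager.lab.local/VMPSProvision",
    }

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # lab only: accept self-signed certs

    for name, url in CHECKS.items():
        try:
            with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
                print(f"{name}: HTTP {resp.status}")
        except Exception as exc:
            print(f"{name}: FAILED ({exc})")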

I have already deployed and configured a Microsoft SQL Server 2008 R2 Failover Cluster, with a local MSDTC configured on each node. I hope to publish a blog post on that in the near future – for this article all that matters is that an MSSQL database is available and MSDTC is configured on the database server.
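Before kicking off the installer, it’s worth confirming MSDTC is actually running everywhere it’s needed. A small Python sketch is below, assuming you have rights to query services remotely with sc.exe; the host names are placeholders for your own servers:

    # Check the MSDTC service state on each IaaS and database node.
    import subprocess

    HOSTS = ["iaas-web-01", "iaas-web-02", "iaas-mgr-01", "iaas-mgr-02", "sql-01"]

    for host in HOSTS:
        result = subprocess.run(
            ["sc", f"\\\\{host}", "query", "MSDTC"],
            capture_output=True, text=True,
        )
        state = "RUNNING" if "RUNNING" in result.stdout else "NOT RUNNING / UNREACHABLE"
        print(f"{host}: {state}")

(more…)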

vRealize Automation Infrastructure Tab displays incorrect labels

| 24/07/2015 |

Having just completed a particularly problem-prone distributed IaaS install, this was almost the straw that broke the camel’s back. Logging into vRealize Automation for the first time as an Infrastructure Admin displayed the Infrastructure tab and all of its menu labels as big ugly references, with no functionality:

{com.vmware.csp.component.iaas.proxy.provider@csp.places.iaas.label}

Rebooting the IaaS web servers restored the functionality of the IaaS layer but still did not fix the label issue; it took a further reboot of both vRealize Automation appliances, followed by the IaaS web servers, to finally see the correct labels.

Deploying vRealize Automation 6.2 Appliance Cluster with Postgres Replication

The recommendations for the vRealize Appliance have changed with 6.2: the published reference architecture no longer recommends an external Postgres database (whether the vPostgres appliance, a third-party Postgres deployment, or a third vRealize Appliance used as a stand-alone database). Instead, the recommended layout is shown in the diagram below. One Postgres instance on the primary node is active, replicating to a passive instance on the second node. In front of these, a load balancer or DNS entry points to the active node only. Failover is still a manual task, but it does provide better protection than a single instance.

The cafe portal and APIs are still load balanced in an active/active configuration and are clustered together.
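If you want to confirm which node currently holds the active Postgres instance, Postgres can tell you directly. Below is a small Python sketch using psycopg2; the node names and credentials are placeholders, and the vcac database name and remote access are assumptions about your appliance configuration:

    # Report each node's replication role: the active node should show
    # pg_is_in_recovery() = false and at least one streaming replica.
    import psycopg2

    NODES = ["vra-app-01.lab.local", "vra-app-02.lab.local"]

    for node in NODES:
        conn = psycopg2.connect(host=node, dbname="vcac",
                                user="postgres", password="changeme")
        with conn, conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery()")
            role = "passive (replica)" if cur.fetchone()[0] else "active (primary)"
            cur.execute("SELECT count(*) FROM pg_stat_replication")
            replicas = cur.fetchone()[0]
            print(f"{node}: {role}, streaming to {replicas} replica(s)")
        conn.close()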

(more…)

vRealize Automation (vRA) Manager Service error

| 14/01/2015 |

Recently, while doing a fresh install of vRealize Automation (vRA) 6.2, I came across the following errors after configuring the first endpoint.

Error log example

DataBaseStatsService: ignoring exception: Error executing query usp_SelectAgent Inner Exception: Error executing query usp_SelectAgentCapabilities

and

Error processing ping response Error executing query usp_SelectAgent Inner Exception: Error executing query usp_SelectAgentCapabilities


First of all I checked to see whether the endpoints were working, which in this case they appeared to be, but I wanted to clear the error before continuing the install.

It was clear from the error log that at least one machine was affected by the error: the IaaS server.

After a little bit of searching I came across this extremely helpful article – “Using a Cloned VM as a SQL Server – Gotcha for vCAC Install”.

Essentially, because I had deployed my VMs from a template or clone with MSDTC already installed, the deployed VMs shared the same GUID for their MSDTC, which causes these problems.
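You can spot the duplicate without third-party tools, since MSDTC’s identifiers live in the registry. Here’s a minimal Python sketch (Windows only, run on each node; the registry layout is as I understand it from the cloned-VM articles) – identical output on two machines means they share a CID, and the fix is to reinstall MSDTC on one of them with msdtc -uninstall, a reboot, then msdtc -install:

    # Print the local MSDTC CID GUIDs; identical GUIDs on two machines
    # indicate a cloned MSDTC that needs reinstalling on one of them.
    import winreg

    # Each CID lives under HKEY_CLASSES_ROOT\CID\{guid}\Description.
    with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, "CID") as cid_root:
        index = 0
        while True:
            try:
                guid = winreg.EnumKey(cid_root, index)
            except OSError:
                break  # no more subkeys
            try:
                with winreg.OpenKey(cid_root, guid + r"\Description") as desc:
                    label, _ = winreg.QueryValueEx(desc, "")
                print(f"{label}: {guid}")
            except OSError:
                pass  # subkey without a Description entry
            index += 1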

(more…)