DefinIT

vRealize Orchestrator Workflow: shutdownVSANCluster

My vSphere lab is split into two halves – a low-power management cluster, powered by 3 Intel NUCs, and a heftier workload cluster powered by a Dell C6100 chassis with 3 nodes. The workload servers are noisy and power-hungry, so they tend to be powered off when I am not using them, and since they live in my garage, I power them on and off remotely.

To automate the process, I wanted to write an Orchestrator workflow (vRO sits on my management cluster and is therefore always on) that could safely and robustly shut down the workload cluster. There were some design considerations:

  • Something about the C6100 IPMI implementation and the ESXi driver means the hosts don't like being woken from standby mode. It's annoying, but not the end of the world – I can use ipmitool to power the hosts on from a full shutdown. If you want to use host standby, there's a hostStandby and hostExitStandby workflow in the package on GitHub.
  • I run VSAN on the cluster, so I need to enter maintenance mode without evacuating the data (which would take a long time and be pointless) – see the sketch below this list.
  • All the running VMs on the cluster should be shut down before the hosts attempt to enter maintenance mode.
  • I want a “what if” option, to see what the workflow would do if I ran it, without actually executing the shutdown – it’s largely for development purposes, but the power down/power on cycle takes a good half hour and I don’t want to waste time on a typo!
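
The core logic is simple enough to sketch outside vRO. Here is a minimal PowerCLI equivalent, assuming a hypothetical cluster name and using a -WhatIf switch to mirror the “what if” option – a sketch of the idea, not the actual workflow (which is JavaScript inside vRO):

# Assumes an existing PowerCLI session (Connect-VIServer); cluster name is hypothetical
param(
    [string]$Cluster = "Workload",
    [switch]$WhatIf
)

# 1. Gracefully shut down every running VM in the cluster first
$vms = Get-Cluster $Cluster | Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" }
foreach ($vm in $vms) {
    if ($WhatIf) { Write-Host "Would shut down guest: $($vm.Name)" }
    else { Stop-VMGuest -VM $vm -Confirm:$false }   # async - a real run should wait for power-off
}

# 2. Enter maintenance mode WITHOUT evacuating VSAN data, then power off each host
foreach ($esx in Get-Cluster $Cluster | Get-VMHost) {
    if ($WhatIf) { Write-Host "Would enter maintenance mode and power off: $($esx.Name)"; continue }
    Set-VMHost -VMHost $esx -State Maintenance -VsanDataMigrationMode NoDataMigration | Out-Null
    Stop-VMHost -VMHost $esx -Confirm:$false
}

Powering back on is then an out-of-band job for ipmitool, since the hosts are fully powered off at that point.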

(more…)

Deploying vRealize Automation 6.2 Appliance Cluster with Postgres Replication

The recommendations for the vRealize Appliance have changed with 6.2: the published reference architecture no longer recommends using an external Postgres database (whether the vPostgres appliance, a third-party Postgres deployment, or a third vRealize Appliance acting as a stand-alone database installation). Instead, the recommended layout is shown in the diagram below. One instance of Postgres on the primary node becomes the active instance, replicating to the second node, which is passive. In front of these, a load balancer or DNS entry points to the active node only. Fail-over is still a manual task, but it provides better protection than a single instance.
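
Since fail-over is manual, the switch boils down to promoting the passive node and repointing whatever sits in front. For the DNS-entry variant, a hedged sketch (zone, record name and IP are hypothetical; assumes the Windows DnsServer module on the DNS server):

# Repoint the A record the appliances use to the newly promoted Postgres node
$old = Get-DnsServerResourceRecord -ZoneName "definit.local" -Name "vra-pg" -RRType A
$new = [ciminstance]::new($old)
$new.RecordData.IPv4Address = [ipaddress]"192.168.1.52"   # hypothetical IP of the new active node
Set-DnsServerResourceRecord -ZoneName "definit.local" -OldInputObject $old -NewInputObject $new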

The cafe portal and APIs are still load balanced in an active/active configuration and are clustered together.

(more…)

Creating a vRealize Log Insight 2.5 cluster with Integrated Load Balancing

vRealize Log Insight 2.5 improves on the clustering in previous versions with an Integrated Load Balancer (ILB), which allows you to distribute load across your cluster of Log Insight instances without needing an external load balancer. The advantage over an external load balancer is that the source IP is maintained, which allows for easier analysis.

The minimum number of nodes in a cluster is three: the first node becomes the Master node and the other two become Worker nodes. The maximum number of nodes supported is six, though according to Mr Log Insight himself, Steve Flanders, the hard limit is higher.

The Log Insight appliance comes in four sizes, from Extra Small, for use in labs or PoCs, through to Large, which can consume a whopping 112.5GB of logs per day. Those figures scale linearly for clusters, so a 3-node cluster of Large instances can consume 337.5GB per day, from 2,250 syslog connections at a rate of 22,500 events a second. See more on sizing vRealize Log Insight.
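
Because the figures scale linearly, sizing is easy to sanity-check. A trivial PowerShell helper (the function name is mine, not a Log Insight API) reproduces the arithmetic:

# Hypothetical helper reproducing the linear per-node sizing quoted above
function Get-LogInsightClusterCapacity {
    param(
        [double]$GbPerNodePerDay = 112.5,   # Large node ingest
        [int]$Nodes = 3
    )
    [pscustomobject]@{
        Nodes           = $Nodes
        GbPerDay        = $GbPerNodePerDay * $Nodes
        EventsPerSecond = 7500 * $Nodes     # per-node rate implied by 22,500/s for 3 nodes
    }
}

Get-LogInsightClusterCapacity -Nodes 3   # 337.5 GB/day, 22,500 events/s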

(more…)

vCAC 6.1 build out to distributed model: Clustered vCAC Appliances

With the release of vCAC 6.1 there have been some great improvements in the setup of clustered vCAC appliances – no more copying configuration files between appliances, just a simple wizard that does it all for you. In my opinion this is superb.

You’ll need to have deployed a load balancer of some sort – see vCAC 6.0 build-out to distributed model – Part 3.1: Configure Load Balancing with vCNS or vCAC 6.0 build-out to distributed model – Part 3.2: Configure load balancing with NSX.

Deploy vCAC Appliances

Deploy three vCAC appliances by running through the OVF deployment wizard: two to be configured as vCAC Appliance nodes and one to be the external vPostgres database. A scripted alternative is sketched after the list.

  • vCAC-61-PG-01.definit.local
  • vCAC-61-VA-01.definit.local
  • vCAC-61-VA-02.definit.local
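
If you’d rather not click through the wizard three times, the same deployment can be scripted. A hedged PowerCLI sketch – the OVA path, target host and datastore are hypothetical placeholders:

$ova    = "C:\Images\VMware-vCAC-Appliance-6.1.ova"   # hypothetical path to the appliance OVA
$vmhost = Get-VMHost "esxi01.definit.local"           # hypothetical target host

foreach ($name in "vCAC-61-PG-01","vCAC-61-VA-01","vCAC-61-VA-02") {
    # The properties the OVA exposes (IP, netmask, DNS...) vary by version - set them on $config
    $config = Get-OvfConfiguration -Ovf $ova
    Import-VApp -Source $ova -OvfConfiguration $config -Name $name `
        -VMHost $vmhost -Datastore (Get-Datastore "Datastore01") -DiskStorageFormat Thin
}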

(more…)

vCAC 6.0 build-out to distributed model – Part 4: Deploying and clustering a secondary vCAC Appliance

This is the fourth article in a series about how to build-out a simple vCAC 6 installation to a distributed model.

By the end of this post we will have deployed a second vCAC Appliance, clustered it with the first appliance, and registered the load-balanced URL with the Identity Appliance. This will mean that logging on to https://vcloud.definit.local/shell-ui-app will succeed.
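
A quick way to verify that, once the clustering is complete, is a one-off check from PowerShell (the certificate callback is a lab shortcut for self-signed certs – drop it once proper certificates are installed):

# Lab-only: accept the appliance's self-signed certificate in Windows PowerShell
[Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
(Invoke-WebRequest -Uri "https://vcloud.definit.local/shell-ui-app" -UseBasicParsing).StatusCode   # expect 200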

vCAC deployment with clustered and load balanced vCAC Appliances

An overview of the steps required is below:

  • Issue and install certificates
  • Deploy an external vPostgres appliance and migrate the vCAC database
  • Configure load balancing
  • Deploy a second vCAC appliance and configure clustering
  • Install and configure additional IaaS server
  • Deploy vCenter Orchestrator Appliance cluster

(more…)

Extending a vCenter Orchestrator (vCO) Workflow with ForEach – Backing up all ESXi hosts in a Cluster

In my previous post, Backing up ESXi 5.5 host configurations with vCenter Orchestrator (vCO) – Workflow design walkthrough, I showed how to create a workflow to back up host configurations, but it was limited to one host at a time. In this post I’m going to show how to create a new workflow that calls the previous one on multiple hosts, using a ForEach loop to run it against each one. This was actually easier than I had anticipated (having read posts about previous versions of vCO that involved creating the loops manually).

Ready? Here we go…

Create a new workflow – I’ve called mine “DefinIT-Backup-ESXi-Config-Cluster” – and open the Schema view. Drag a “ForEach” element onto the workflow and the Chooser window pops up. This is a bit deceptive at first because it’s blank! However, if you enter some search text it will bring up existing workflows, so I searched for the “DefinIT-Backup-ESXi-Config” workflow created in that previous post. (more…)
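
For comparison, the per-host iteration the ForEach element provides looks like this in PowerCLI (cluster name and backup path are hypothetical; the vRO workflow does the equivalent in JavaScript):

# Back up the configuration of every host in the cluster, one host at a time
foreach ($esx in Get-Cluster "Workload" | Get-VMHost) {
    Get-VMHostFirmware -VMHost $esx -BackupConfiguration -DestinationPath "C:\HostBackups"
}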

vSphere HA agent for host [Host’s Name] has an error in [Cluster’s Name] in [Datacenter’s Name]: vSphere HA agent cannot be correctly installed or configured

Here’s a lesson in checking the basics! I added a new ESXi 5 host to a cluster today and spent a good couple of hours troubleshooting the error:

vSphere HA agent for host [Host’s Name] has an error in [Cluster’s Name] in [Datacenter’s Name]: vSphere HA agent cannot be correctly installed or configured

After a few basic checks – migrating the host in and out of the cluster and rebooting – I headed off to Google and began troubleshooting.

Cannot install the vSphere HA (FDM) agent on an ESXi host – this article suggests that the host is in lockdown mode. This is unlikely since we don’t use lockdown mode, but I checked anyway:

Get-vmhost esxi001.definit.co.uk | select Name,@{N="LockDown";E={$_.Extensiondata.Config.adminDisabled}} | ft -auto Name,LockDown

This returned false – no lockdown.

To exit lockdown mode, you can use:

(Get-VMHost esxi001.definit.co.uk | Get-View).ExitLockdownMode()

I spent a good amount of time going through the list on Troubleshooting VMware High Availability (HA) in vSphere which isn’t entirely ESXi relevant but has some good pointers nonetheless.

I finally got to Reconfiguring HA (FDM) on a cluster fails with the error: Operation timed out, with the following gem of info:

“This issue occurs if the vSphere High Availability Agent service on the ESXi host is stopped.”

*Facepalm* – I checked the host’s services, started the vSphere HA agent, and set the service to start and stop automatically. HA is now happily configured.
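
For reference, the same check and fix can be done with PowerCLI – a sketch, assuming the agent’s service key is vmware-fdm (it shows up as “vSphere High Availability Agent” in the client):

# Find the HA (FDM) agent service on the host, start it, and make it start with the host
$svc = Get-VMHostService -VMHost "esxi001.definit.co.uk" | Where-Object { $_.Key -eq "vmware-fdm" }
Start-VMHostService -HostService $svc -Confirm:$false
Set-VMHostService -HostService $svc -Policy "On"   # "On" = start and stop with host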

No matter how much you know, you gotta check the basics!