vSphere 6 Lab Upgrade – Overview

I tested vSphere 6 quite intensively when it was in beta, but I never upgraded my lab – mainly because I need a stable environment to work on, and I wasn't sure I could maintain that with the beta.

Now that 6 has been GA for a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

I will be upgrading:

  • vCenter Server Appliance – currently 5.5 update 1
  • vSphere Update Manager – currently 5.5 update 1
  • 3 HP N54L resource hosts
  • 1 Intel NUC management host

In my lab I run the various VMware software suites listed below, although I typically run them in nested environments to keep my lab install relatively clean.

  • vCloud Director
  • vRealize Automation
  • vRealize Orchestrator
  • NSX

Other considerations:

  • VSAN – I currently run VSAN 5.5 and will need to upgrade to 6.0
  • Update Manager – I’d prefer to update my hosts using Update Manager where possible
  • Certificates – I currently use a Microsoft CA; I’d like to move to the VMCA acting as a subordinate CA
  • Drivers – VMware has changed the drivers supported in ESXi 6.0, and some consumer-grade drivers are blacklisted
  • Backup – I use the excellent Veeam Backup & Replication to protect key lab machines, and I know it doesn’t yet support vSphere 6. That’s a hit I can take in my lab.

To upgrade, I first need to verify everything is compatible using the VMware Product Interoperability Matrices.
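
Before checking the matrices, it helps to capture exactly what's running now. Here's a minimal sketch using pyVmomi (pip install pyvmomi) that lists the vCenter and host builds – the hostname and credentials are placeholders for my lab, so adjust to suit:

```python
# Pre-upgrade inventory: print the vCenter and ESXi versions to check
# against the VMware Product Interoperability Matrices.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: self-signed certs
si = SmartConnect(host="vcsa.definit.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    print("vCenter:", content.about.fullName)
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name, host.config.product.fullName)
    view.Destroy()
finally:
    Disconnect(si)
```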

High level plan

Having read a lot of the vSphere 6 docs, my upgrade plan is as follows, with a quick order-dependency sketch after the list:

  1. Upgrade vCenter Server Appliance
  2. Upgrade vSphere Update Manager
  3. Upgrade ESXi
  4. Upgrade VSAN
  5. Upgrade nested labs and other software suites
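
The order matters: vCenter has to go first because it must be at an equal or higher version than the hosts it manages, and the VSAN on-disk format can only be upgraded once every host in the cluster is on 6.0. A trivial sketch encoding those dependencies (purely illustrative, nothing VMware-specific):

```python
# Illustrative only: encode the upgrade-order dependencies as a simple check.
plan = ["vcsa", "vum", "esxi", "vsan", "nested"]

dependencies = {
    "esxi": ["vcsa", "vum"],   # vCenter (and VUM) before hosts
    "vsan": ["esxi"],          # on-disk format needs all hosts on 6.0
    "nested": ["vsan"],        # nested labs last
}

done = set()
for step in plan:
    missing = [d for d in dependencies.get(step, []) if d not in done]
    assert not missing, f"{step} attempted before {missing}"
    done.add(step)
print("Plan order satisfies all dependencies")
```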

vRealize Orchestrator (vRO/vCO) and vCloud Director – fixing bugs in "Add a VDC"

When you are using a VMware orchestration platform with an official VMware plugin to manage a VMware product, you don’t really expect to have to fix the out-of-the-box workflows. However, while testing some workflows with a client the other day, we ran into a couple of issues with the vCloud Director plugin workflows.

Software versions used

  • vCloud Director 5.5.1 (appliance for development) and 5.5.2 (production deployment)
  • vRealize Orchestrator Appliance 5.5.2.1
  • vCloud Director plugin 5.5.1.2

CPU allocations are incorrect for both "Add a VDC" workflows

When you provide the CPU properties for the Allocation Pool model, the first problem is deciphering the naming – it doesn’t match the names in the vCloud Director interface!

The "CPU (GHz)" value is the vCPU speed, and the "CPU Quota (GHz)" value is the CPU allocation

(more…)

vCAC 6.1 build out to distributed model: Clustered vCAC Appliances

With the release of vCAC 6.1 there have been some great improvements to the setup of clustered vCAC appliances – no more copying configuration files between appliances, just a simple wizard that does it all for you. In my opinion this is superb.

You’ll need to have deployed a load balancer of some sort first – see vCAC 6.0 build-out to distributed model – Part 3.1: Configure Load Balancing with vCNS or vCAC 6.0 build-out to distributed model – Part 3.2: Configure load balancing with NSX.

Deploy vCAC Appliances

Deploy three vCAC appliances by running through the OVF deployment wizard, two to be configured as vCAC Appliance nodes and one to be the external vPostgres database.

  • vCAC-61-PG-01.definit.local
  • vCAC-61-VA-01.definit.local
  • vCAC-61-VA-02.definit.local

(more…)

vCAC 6.0 build-out to distributed model – Part 4: Deploying and clustering a secondary vCAC Appliance

This is the fourth article in a series about how to build out a simple vCAC 6 installation to a distributed model.

By the end of this post we will have deployed a second vCAC Appliance, clustered it with the first appliance and registered the load balanced URL with the Identity Appliance. This will mean logging on to https://vcloud.definit.local/shell-ui-app will be successful.
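
A quick way to confirm that once the clustering is done: hit the load-balanced URL from a script and check it answers. A sketch using the requests library (verify=False because my lab certs aren't trusted by the client; expect a 200, possibly after a redirect to the SSO login page):

```python
# Sanity-check the load-balanced vCAC URL after clustering.
import requests

url = "https://vcloud.definit.local/shell-ui-app"
resp = requests.get(url, verify=False, timeout=10)  # lab certs, untrusted
print(resp.status_code, resp.url)
assert resp.status_code == 200, "shell-ui-app not answering via the VIP"
```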

vCAC deployment with clustered and load balanced vCAC Appliances

An overview of the required steps is below:

  • Issue and install certificates (a CSR sketch for the load-balanced name follows this list)
  • Deploy an external vPostgres appliance and migrate the vCAC database
  • Configure load balancing
  • Deploy a second vCAC appliance and configure clustering
  • Install and configure additional IaaS server
  • Deploy vCenter Orchestrator Appliance cluster
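
Because the appliances will be addressed by the load-balanced name as well as their own FQDNs, the certificate needs subject alternative names covering all of them. Here's a sketch of generating such a CSR with Python's cryptography library – the VIP name is the one from this post, but the node hostnames are assumptions, so swap in your own:

```python
# Generate a key and a CSR whose SANs cover the VIP and both appliance nodes
# (pip install cryptography). Hand the CSR to your CA for signing.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
names = ["vcloud.definit.local",        # load-balanced name
         "vcac-va-01.definit.local",    # node 1 (hostname assumed)
         "vcac-va-02.definit.local"]    # node 2 (hostname assumed)

csr = (x509.CertificateSigningRequestBuilder()
       .subject_name(x509.Name(
           [x509.NameAttribute(NameOID.COMMON_NAME, names[0])]))
       .add_extension(x509.SubjectAlternativeName(
           [x509.DNSName(n) for n in names]), critical=False)
       .sign(key, hashes.SHA256()))

with open("vcac.key", "wb") as f:
    f.write(key.private_bytes(serialization.Encoding.PEM,
                              serialization.PrivateFormat.TraditionalOpenSSL,
                              serialization.NoEncryption()))
with open("vcac.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```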

(more…)

vCAC 6.0 build-out to distributed model – Part 3.2: Configure load balancing with NSX

This is the second part of the third article in a series about how to build out a simple vCAC 6 installation to a distributed model.

By the end of this part we will not have modified the vCAC deployment in any way; we’ll just have three configured load-balanced URLs (a quick VIP check is sketched after the steps list).

vCAC Simple Install with vPostgres deployed and load balancers prepared

An overview of the required steps is below:

  • Issue and install certificates
  • Deploy an external vPostgres appliance and migrate the vCAC database
  • Configure load balancing
  • Deploy a second vCAC appliance and configure clustering
  • Install and configure additional IaaS server
  • Deploy vCenter Orchestrator Appliance cluster
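
Since the vCAC deployment itself hasn't been touched yet, a quick sanity check is simply that each VIP answers on 443. The three hostnames below are placeholders for illustration – substitute the load-balanced names you configured:

```python
# Probe each load-balancer VIP for a TCP answer on 443 before changing vCAC.
import socket

vips = ["vcloud.definit.local",      # placeholder VIP names -
        "iaas-web.definit.local",    # use whatever you configured
        "iaas-mgr.definit.local"]
for vip in vips:
    try:
        with socket.create_connection((vip, 443), timeout=5):
            print(f"{vip}: listening on 443")
    except OSError as exc:
        print(f"{vip}: no answer ({exc})")
```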

(more…)

#LonVMUG 15th May – another great VMUG!

Yesterday saw another fantastic London VMUG, with lots of quality sessions and opportunities to network with peers and friends. The committee do a superb job every time and this one was no exception, so thanks to Alaric Davies, Jane Rimmer, Stuart Thompson and Simon Gallagher!

One of the best things for me about the VMUG is the chance to chat with some of the smartest and most influential people in the VMware world – a trip to the coffee table provided a great opportunity to “chew the vfat” with two of the VMUG’s biggest characters, Mike Laverick and Ricky El-Qasem – all before any sessions had started.

The first session of the day, after the obligatory coffee and biscuits, was presented by Itzik Reich of EMC, talking about the XtremIO all-flash offering. English isn’t his first language, but I was thoroughly impressed with how he spoke and engaged with the audience. My main take-away was that you can’t treat flash in the same way as magnetic disk – it’s not just a faster version of the traditional spinning platter, but requires a whole new approach to how it’s used and managed. That may sound obvious, but I think a lot of solutions do treat flash as just faster disk, imposing magnetic-disk concepts like RAID which don’t make the best use of it. Flash != magnetic disk, don’t treat it the same! (more…)