DefinIT

Upgrading PKS with NSX-T from 1.0.x to 1.1

Yesterday, Pivotal Container Service (PKS) 1.1 dropped and, as it’s something I’ve been actively learning in my lab, I wanted to jump on the upgrade straight away. PKS with NSX-T is a really hot topic right now and I think it’s going to be a big part of the future CNA landscape.

My Lab PKS 1.0.4 deployment is configured as a “NO-NAT with Logical Switch (NSX-T) Topology” as depicted in the diagram below (from the PKS documentation). My setup has these network characteristics:

  • PKS control plane (Ops Manager, BOSH Director, and PKS VM) components are using routable IP addresses.
  • Kubernetes cluster master and worker nodes are using routable IP addresses.
  • The PKS control plane is deployed inside the NSX-T network.

I used William Lam’s excellent series on PKS with NSX-T to configure a lot of the settings, so I am going to assume a familiarity with that series. If not, I suggest you start there to get a good understanding of how everything is laid out.

NO-NAT with Logical Switch (NSX-T) Topology (more…)

NSX-T 2.0 Lab Build: Upgrading to NSX-T 2.1

22/12/2017

Yesterday saw the release of NSX-T 2.1, with some new features and some usability enhancements. You can check out the release notes here: https://docs.vmware.com/en/VMware-NSX-T/2.1/rn/VMware-NSX-T-21-Release-Notes.html

As I’m mid-way through this blog series, I thought I’d stick in the upgrade as a little bonus!

Download the upgrade bundle

Validate the version and status of NSX-T components

Check that the Controller cluster status is healthy and the Manager connections are up.

Validate that the hosts have NSX installed and are connected to the Controller and Manager.

Ensure the Edges are deployed and connected to the Manager.

Finally, check that the Transport Nodes are all in a “Success” state.

You can also validate the state of NSX-T via the command line

SSH to the controller and use “get control-cluster status verbose”
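For example, a quick pre-upgrade health check might look like this (a sketch – the exact output varies between NSX-T versions):

# On a Controller node – verify the control cluster is stable
get control-cluster status verbose

# On an ESXi transport node – confirm the NSX VIBs are installed
esxcli software vib list | grep -i nsx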

Uploading the NSX-T Upgrade bundle

Navigate to System > Utilities > Upgrade, then click the “PROCEED TO UPGRADE” button

Select the upgrade .mub file and click “UPLOAD”

Since the upgrade bundle is fairly hefty (3.7GB), the upload will take a while. Once it’s uploaded, it is extracted and verified, which again takes some time.


Once the package has uploaded, click to begin the upgrade. The upgrade coordinator will then check the install for any potential issues. In my environment there are two warnings that Edge connectivity is degraded – this is caused by the disconnected 4th VMNIC on my Edge VMs and is safe to ignore.

Click Next to view the Hosts Upgrade page. Here you can define the order and method of upgrade for each host, and define host groups to control the order of upgrade. I’ve gone with the defaults: serial (one at a time) upgrades rather than parallel (up to 5 at once). All three hosts in this environment are in an automatic group for Pod200-Cluster-1.

Click START to begin the upgrade. Each host is put into maintenance mode, then upgraded and rebooted if necessary (a reboot shouldn’t be necessary!). Bear in mind that you need DRS enabled, and the VMs on each host must be able to vMotion off before it enters maintenance mode. Once a host has upgraded and its MPA (Management Plane Agent) has reported back to the Manager, the Upgrade Coordinator moves on to the next host in the group.
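If you want to sanity-check a host before the coordinator reaches it, you can query its maintenance mode state directly from the ESXi shell (a minimal sketch):

# Returns Enabled or Disabled for the host’s maintenance mode
esxcli system maintenanceMode get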

Once the hosts are upgraded, click NEXT to move to the Edge Upgrade page

Edge Clusters can be upgraded in serial or parallel, but the Edges within each cluster are always upgraded serially to ensure connectivity is maintained. I have a single Edge Cluster with two Edge VMs, so this will be upgraded one Edge at a time. Click START to begin the upgrade, and select the Edge Cluster to view the status of the Edge VMs within it.
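To keep an eye on an individual Edge while its cluster is being upgraded, the node CLI is useful (a sketch – I’m assuming the standard NSX-T node commands here):

# On an Edge node – confirm connectivity to the management plane
get managers

# Check the node version before and after the upgrade
get version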

Once the Edge Cluster upgrades are complete, click NEXT to move to the Controllers. You can’t change the upgrade settings for the Controllers; they are upgraded in parallel by default. Click START to begin the upgrade – this step took by far the longest in my lab, so be patient!

Once the upgrade has completed, click NEXT to move to the NSX Manager upgrade page. The NSX Manager will become unavailable about 2-4 minutes after you click START, and may take 10-15 minutes to become available again afterwards – don’t worry about errors that come up in the meantime!
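One way to tell when the Manager is back is to poll its node API until it responds (a sketch – nsx-manager.fqdn.com is a placeholder for your Manager’s address):

# Prompts for the admin password and returns node details, including the version
curl -k -u admin https://nsx-manager.fqdn.com/api/v1/node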

Once the Manager upgrade has completed, you can re-validate the installation as I did at the start, checking that we have all the green lights and the versions have increased.

Over the next few posts I will be exploring some of the new features introduced in 2.1.


#vROps 6.2 – upgrade and utilization dashboards

29/01/2016

As you will see, the upgrade is simple and, even though it’s early days, I haven’t seen anything break!

If you are interested in seeing an example of the new utilization dashboard scroll to the bottom of this article.

Upgrading from 6.1

I will be doing this upgrade on a VA, so to begin with I will need the vRealize Operations Manager – Virtual Appliance Operating System upgrade .pak file (vRealize_Operations_Manager-VA-OS-6.2.0.3445569.pak).

Once applied, I will then perform the product upgrade of the VA using vRealize_Operations_Manager-VA-6.2.0.3445569.pak.
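Before uploading anything, it’s worth verifying the .pak files against the checksums published on the download page (a quick sketch, assuming a Linux workstation):

# Compare the output against the MD5 sums listed alongside the downloads
md5sum vRealize_Operations_Manager-VA-OS-6.2.0.3445569.pak
md5sum vRealize_Operations_Manager-VA-6.2.0.3445569.pak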

Performing the upgrade is simple enough: log in to the admin area, click Software Update, and then click Install a Software Update.


(more…)

vSphere 6 Lab Upgrade – vCenter Orchestrator to vRealize Orchestrator

02/04/2015

I tested vSphere 6 quite intensively when it was in beta, but I didn’t ever upgrade my lab – basically because I need a stable environment to work on and I wasn’t sure that I could maintain that with the beta.

Now that 6 has been GA a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

Upgrading the vCenter Orchestrator Appliance

Upgrading the vCenter Orchestrator Appliance is child’s play – just log onto the admin interface at https://vco.fqdn.com:5480 using the root credentials.

Select the Update tab, then click “Check Updates”. You should see appliance version 6.0.1 available; click Install Updates. (more…)

vSphere 6 Lab Upgrade – VSAN

I tested vSphere 6 quite intensively when it was in beta, but I didn’t ever upgrade my lab – basically because I need a stable environment to work on and I wasn’t sure that I could maintain that with the beta.

Now that 6 has been GA a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

Upgrading to VSAN 6.0

The upgrade process for VSAN 5.5 to 6.0 is fairly straightforward:

  • Upgrade vCenter Server
  • Upgrade ESXi hosts
  • Upgrade the on-disk format to the new VSAN FS

Other parts of this guide have covered the vCenter and ESXi upgrades, so this one will focus on the disk format upgrade. Once you’ve upgraded these, you’ll get a warning on your VSAN cluster about the older on-disk format.

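If you’d rather drive the on-disk format upgrade from the Ruby vSphere Console (RVC) than the Web Client, it looks roughly like this (a sketch – /localhost/DC/computers/Cluster is a placeholder for your own datacenter and cluster path):

# Check the current on-disk format version of each disk
vsan.disks_stats /localhost/DC/computers/Cluster

# Kick off the on-disk format upgrade for the whole cluster
vsan.ondisk_upgrade /localhost/DC/computers/Cluster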

(more…)

vSphere 6 Lab Upgrade – Overview

I tested vSphere 6 quite intensively when it was in beta, but I didn’t ever upgrade my lab – basically because I need a stable environment to work on and I wasn’t sure that I could maintain that with the beta.

Now that 6 has been GA a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

I will be upgrading:

  • vCenter Server Appliance – currently 5.5 update 1
  • vSphere Update Manager – currently 5.5 update 1
  • 3 HP N54L resource hosts
  • 1 Intel NUC management host

In my lab I run various VMware software suites listed below, although I typically run them in nested environments to keep my lab install relatively clean.

  • vCloud Director
  • vRealize Automation
  • vRealize Orchestrator
  • NSX

Other considerations:

  • VSAN – I currently run VSAN 5.5 and will need to upgrade to 6.0
  • Update Manager – I’d prefer to update my hosts using Update Manager where possible
  • Certificates – I currently use a Microsoft CA; I’d like to move to the VMCA as a subordinate CA
  • Drivers – VMware changed the drivers supported in ESXi; some consumer-grade drivers are blacklisted
  • Backup – I use the excellent Veeam Backup and Replication to protect key lab machines, and I know that it doesn’t yet support vSphere 6. That’s a hit I can take in my lab.

To upgrade, I first need to verify everything is compatible using the VMware Product Interoperability Matrices.

High level plan

Having read a lot of vSphere 6 docs, my upgrade plan is as follows:

  1. Upgrade vCenter Server Appliance
  2. Upgrade vSphere Update Manager
  3. Upgrade ESXi
  4. Upgrade VSAN
  5. Upgrade nested labs and other software suites

vSphere 6 Lab Upgrade – Upgrading ESXi 5.5

I tested vSphere 6 quite intensively when it was in beta, but I didn’t ever upgrade my lab – basically because I need a stable environment to work on and I wasn’t sure that I could maintain that with the beta.

Now that 6 has been GA a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

Checking for driver compatibility

In vSphere 5.5, VMware dropped the drivers for quite a few consumer grade NICs – in 6 they’ve gone a step further and actually blocked quite a few of these using a VIB package. For more information, see this excellent article by Andreas Peetz.

To list the NIC drivers you’re using on your ESXi hosts, use the following command:

# Print each vmnic’s driver, version and firmware details
esxcli network nic list | awk '{print $1}' | grep "[0-9]" | while read a; do ethtool -i $a; done
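Alternatively, esxcli can report the same driver details for a single NIC without ethtool (a sketch – substitute the vmnic you’re interested in):

# Shows the driver name, version and firmware for one NIC
esxcli network nic get -n vmnic0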


My HP N54Ls are running three NICs: a Broadcom onboard and two Intel PCI NICs. Fortunately, the Broadcom chip is supported, and the e1000e driver I’m using is compatible with vSphere 6 – in fact, it’s superseded by a native driver package. (more…)

vSphere 6 Lab Upgrade – vCenter Server Appliance

I tested vSphere 6 quite intensively when it was in beta, but I didn’t ever upgrade my lab – basically because I need a stable environment to work on and I wasn’t sure that I could maintain that with the beta.

Now that 6 has been GA a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

(more…)