NSX-T 2.0 Lab Build: Logical Router Configuration
Posts in this series
- NSX-T 2.0 Lab Build: Deploying NSX Manager
- NSX-T 2.0 Lab Build: Deploying Controller Cluster
- NSX-T 2.0 Lab Build: ESXi Host Preparation
- NSX-T 2.0 Lab Build: Adding a vCenter Compute Manager and Preparing Hosts
- NSX-T 2.0 Lab Build: Edge Installation
- NSX-T 2.0 Lab Build: Transport Zones and Transport Nodes
- NSX-T 2.0 Lab Build: Logical Router Configuration
- NSX-T 2.0 Lab Build: Upgrading to NSX-T 2.1
Disclaimer! I am learning NSX-T, and part of my learning is to deploy it in my lab – if I contradict the official docs, go with the docs!
This NSX-T lab environment is built as a nested lab on my physical hosts. There are four physical ESXi hosts, onto which I will deploy three ESXi VMs, a vCenter Server Appliance, NSX Manager, an NSX Controller cluster, and two NSX Edge Nodes.
I will follow the deployment plan from the NSX-T 2.0 documentation:
- Install NSX Manager.
- Install NSX Controllers.
- Join NSX Controllers with the management plane.
- Initialize the control cluster to create a master controller.
- Join NSX Controllers into a control cluster.
- Join hypervisor hosts with the management plane.
- Install NSX Edges.
- Join NSX Edges with the management plane.
- Create transport zones and transport nodes.
- Configure logical routing and BGP.
When this post series is complete, the network topology should look something like this, with two hostswitches configured. The ESXi hosts will each have a tunnel endpoint (TEP) IP address, as will the Edge. The Edge will also have an interface configured for a VLAN uplink.
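The TEP addressing and VLAN settings described above come from an uplink profile applied to each transport node. As a rough sketch of what that looks like under the hood, the snippet below builds an uplink profile body in the shape the NSX-T 2.x Manager API (`POST /api/v1/host-switch-profiles`) expects. The display name, uplink name, transport VLAN, and MTU are made-up lab values, and the field names are assumptions based on the Manager API rather than anything from this post:

```python
# Sketch of an UplinkHostSwitchProfile payload for the NSX-T Manager API
# (POST /api/v1/host-switch-profiles). All names and numbers here are
# illustrative lab assumptions, not values from this post.

def make_uplink_profile(name, transport_vlan, mtu=1600):
    """Build an uplink profile body with a single active uplink."""
    return {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": name,
        "teaming": {
            "policy": "FAILOVER_ORDER",
            "active_list": [
                {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            ],
        },
        # VLAN carrying the overlay (TEP) traffic for this transport node.
        "transport_vlan": transport_vlan,
        # Geneve encapsulation needs at least a 1600-byte MTU end to end.
        "mtu": mtu,
    }


profile = make_uplink_profile("esxi-uplink-profile", transport_vlan=150)
print(profile["transport_vlan"], profile["teaming"]["policy"])
```

In the lab walkthrough the same settings are entered through the NSX Manager UI; the payload above is just a way of seeing which knobs (teaming policy, transport VLAN, MTU) an uplink profile actually carries.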
In this post I will walk through configuring a VLAN logical switch, a Tier-0 router, a Tier-1 router, uplink profiles, and BGP dynamic routing to the physical router.
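To make the Tier-0/Tier-1/BGP relationship concrete, here is a hedged sketch of the kind of Manager API bodies that sit behind those UI steps in NSX-T 2.x: a Tier-0 router backed by an Edge cluster, a Tier-1 router (linked to the Tier-0 via a separate router-link port call, not shown), and a BGP configuration on the Tier-0. The endpoint paths in the comments and the exact field names are assumptions from the Manager API, and every display name, ID, and ASN is a placeholder:

```python
# Hedged sketch of NSX-T 2.x Manager API payloads for logical routing.
# Endpoints referenced in comments: POST /api/v1/logical-routers and
# PUT /api/v1/logical-routers/<t0-id>/routing/bgp. All names, IDs, and
# ASNs below are illustrative placeholders.

def tier0_payload(name, edge_cluster_id):
    """Tier-0 router: north-south routing, runs on the Edge cluster."""
    return {
        "resource_type": "LogicalRouter",
        "display_name": name,
        "router_type": "TIER0",
        "high_availability_mode": "ACTIVE_ACTIVE",
        "edge_cluster_id": edge_cluster_id,
    }


def tier1_payload(name):
    """Tier-1 router: east-west/tenant routing, linked up to a Tier-0."""
    return {
        "resource_type": "LogicalRouter",
        "display_name": name,
        "router_type": "TIER1",
    }


def bgp_payload(local_as):
    """BGP config applied to the Tier-0, peering with the physical router."""
    return {
        "resource_type": "BgpConfig",
        "enabled": True,
        "as_num": local_as,
    }


t0 = tier0_payload("t0-lab", "edge-cluster-uuid-placeholder")
t1 = tier1_payload("t1-lab")
bgp = bgp_payload(65001)
print(t0["router_type"], t1["router_type"], bgp["as_num"])
```

The split mirrors the NSX-T routing model: the Tier-0 holds the uplinks and the BGP session to the physical network, while Tier-1 routers hang off it for tenant segments.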