NSX-T 2.0 Lab Build: Logical Router Configuration
Posts in this series
- NSX-T 2.0 Lab Build: Deploying NSX Manager
- NSX-T 2.0 Lab Build: Deploying Controller Cluster
- NSX-T 2.0 Lab Build: ESXi Host Preparation
- NSX-T 2.0 Lab Build: Adding a vCenter Compute Manager and Preparing Hosts
- NSX-T 2.0 Lab Build: Edge Installation
- NSX-T 2.0 Lab Build: Transport Zones and Transport Nodes
- NSX-T 2.0 Lab Build: Logical Router Configuration
- NSX-T 2.0 Lab Build: Upgrading to NSX-T 2.1
Disclaimer! I am learning NSX-T, part of my learning is to deploy in my lab – if I contradict the official docs then go with the docs!
This NSX-T lab environment is built as a nested lab on my physical hosts. There are four physical ESXi hosts, onto which I will deploy three ESXi VMs, a vCenter Server Appliance, NSX Manager, an NSX Controller cluster, and two NSX Edge Nodes.
I will follow the deployment plan from the NSX-T 2.0 documentation:
- Install NSX Manager.
- Install NSX Controllers.
- Join NSX Controllers with the management plane.
- Initialize the control cluster to create a master controller.
- Join NSX Controllers into a control cluster.
- Join hypervisor hosts with the management plane.
- Install NSX Edges.
- Join NSX Edges with the management plane.
- Create transport zones and transport nodes.
- Configure logical routing and BGP.
When this post series is complete, the network topology should be something like this, with two hostswitches configured. The ESXi Hosts will have a Tunnel Endpoint IP address, as will the Edge. The Edge will also have an interface configured for a VLAN uplink.
In this post I will walk through configuring a VLAN Logical Switch, a Tier-0 Router, a Tier-1 Router, Uplink Profiles, and BGP dynamic routing to the physical router.
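Everything in this post is driven through the NSX Manager UI, but the same objects can also be created through the NSX-T REST API. As a hedged sketch only (the endpoint paths in the comments are from the NSX-T API guide; the display names, transport zone ID and edge cluster ID are placeholders, not values from my lab), the request bodies look roughly like this:

```python
import json

# Placeholder object IDs -- substitute the UUIDs from your own environment.
VLAN_TZ_ID = "vlan-tz-uuid"            # VLAN transport zone ID (placeholder)
EDGE_CLUSTER_ID = "edge-cluster-uuid"  # edge cluster ID (placeholder)

def vlan_logical_switch_payload(name, vlan_id, tz_id):
    """Body for POST /api/v1/logical-switches (VLAN-backed switch)."""
    return {
        "display_name": name,
        "transport_zone_id": tz_id,
        "admin_state": "UP",
        "vlan": vlan_id,
    }

def tier0_router_payload(name, edge_cluster_id):
    """Body for POST /api/v1/logical-routers (Tier-0)."""
    return {
        "display_name": name,
        "router_type": "TIER0",
        "edge_cluster_id": edge_cluster_id,
        # ACTIVE_ACTIVE is the HA mode that allows ECMP on the Tier-0
        "high_availability_mode": "ACTIVE_ACTIVE",
    }

def tier1_router_payload(name):
    """Body for POST /api/v1/logical-routers (Tier-1)."""
    return {
        "display_name": name,
        "router_type": "TIER1",
    }

if __name__ == "__main__":
    print(json.dumps(vlan_logical_switch_payload("Uplink-LS", 100, VLAN_TZ_ID), indent=2))
    print(json.dumps(tier0_router_payload("T0-LR", EDGE_CLUSTER_ID), indent=2))
    print(json.dumps(tier1_router_payload("T1-LR"), indent=2))
```

I'll be doing everything through the UI below, but it's useful to see that each UI object maps to a simple REST resource if you later want to automate the build.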
Deploying ECMP with NSX for a Provider Logical Router
Equal Cost Multipathing (ECMP), for the vSphere admin, is the ability to create multiple routes with equal cost, so that more than one path to the same network exists and traffic can be distributed across those paths. This is good for a couple of reasons. The first is availability: if we were to lose a host, and with it an NSX Edge, the route will time out more quickly than NSX Edge High Availability can fail over – thus providing higher availability for our network traffic. The second reason is throughput: each NSX Edge is capable of ~10Gbps of throughput, but with ECMP we can have multiple NSX Edges (up to 8) providing 10Gbps each – that’s a significant performance boost.
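One detail worth understanding: ECMP doesn’t spray individual packets across paths – it hashes flow headers so that a single flow sticks to one next hop (avoiding reordering), and traffic only balances out across many flows. A minimal Python illustration of the idea (a generic 5-tuple hash, not NSX’s actual algorithm):

```python
import hashlib

def pick_path(src_ip, dst_ip, src_port, dst_port, proto, num_paths):
    """Hash the flow 5-tuple to choose one of num_paths equal-cost next hops.
    Deterministic per flow, so packets of one flow never reorder across paths."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# The same flow always lands on the same edge...
a = pick_path("10.0.1.5", "8.8.8.8", 40001, 443, "tcp", 2)
b = pick_path("10.0.1.5", "8.8.8.8", 40001, 443, "tcp", 2)
assert a == b

# ...while many distinct flows spread across both edges.
paths = {pick_path("10.0.1.5", "8.8.8.8", p, 443, "tcp", 2)
         for p in range(40000, 40100)}
print(sorted(paths))  # both next hops in use across the flow set
```

This is also why a single iPerf stream won’t show you 2×10Gbps through two edges – you need multiple flows to see the aggregate benefit.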
Below I’ve mapped out what I want to build in my lab – it’s a simplification of a design I’ve used for some Service Provider customers (and you’ll see similar in the vCloud Architecture Toolkit documentation from VMware).
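On the physical router side, ECMP toward the edges only works if the router is willing to install multiple equal-cost BGP paths. As a hedged sketch in FRR/Quagga syntax (the AS numbers and neighbour addresses are placeholders, not values from the design above):

```
router bgp 65000
 ! The two NSX Edge uplinks as eBGP neighbours
 neighbor 192.168.100.2 remote-as 65001
 neighbor 192.168.100.3 remote-as 65001
 address-family ipv4 unicast
  ! Install up to 8 equal-cost paths, matching the NSX Edge ECMP limit
  maximum-paths 8
 exit-address-family
```

Without something like `maximum-paths`, the router will pick a single best path and you’ll only ever see traffic on one edge.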
More notes on Threat Management Gateway Arrays
Get the NICs right first
In this case I came to a project after the initial installation of the array, and there was no dedicated intra-array network in place. I added a new NIC to each VM and configured the IP addressing, VLANs and routing, but could not get the servers to ping each other across the intra-array network, let alone communicate. So the lesson here is to set up the servers with their NICs before you install TMG – Microsoft recommends a dedicated intra-array network, and every bit of experience I have with TMG arrays confirms that.
Get the NIC Binding order right
This is simple, the order I have found to work is:
- Intra-array Network
- Private/Internal Network
- Public/External Network
Some people recommend the Private/Internal network first, then the Intra-array, but I have found that this order works better (anyone able to dispute this or give me a reason why it should be the other way?). The key thing is that the External Network (which should be your default Gateway) is last in the binding order, which brings me to the next point…
Get the gateway and routing right
- Default Gateway: The only NIC with a Default Gateway set should be the Public/External NIC
- DNS: The only NIC with DNS configured should be your Private/Internal NIC
- Register in DNS: The only NIC registering in DNS should be the Private/Internal NIC
- Client for Microsoft Networks: Only enabled on the Private/Internal NIC
- File and Print Sharing for Microsoft Networks: Only enabled on the Private/Internal NIC
- NetBIOS over TCP/IP: Only enabled on the Private/Internal NIC
Add any static and persistent routes required and make sure you can access those networks before installing TMG. This allows you to get the routing right without the complication of TMG rules and firewall policies.
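As an example of the kind of route I mean (the subnet and gateway addresses are placeholders): on each array member, add a route to a remote internal subnet via the internal router, using the `-p` flag so it persists across reboots, and verify it before TMG goes on.

```
:: Persistent route to a remote internal subnet via the internal router
route -p add 10.20.0.0 mask 255.255.0.0 192.168.10.1

:: Confirm the route is installed and the network is reachable
route print
ping 10.20.0.10
```

If the ping fails at this stage, fix the plumbing now – once TMG is installed you won’t be able to tell a routing problem from a firewall policy problem.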
Then, and only then, install TMG 🙂