Sam has been working in the IT industry for nearly 20 years, and is currently a Senior Technical Marketing Manager in VMware’s Cloud Management Business Unit (CMBU), focused on Automation. Previously, he worked as a consultant for VMware PSO, specialising in cloud automation and network virtualisation. His technical experience includes the design, development and implementation of cloud solutions, network function virtualisation and the software-defined datacentre. Sam specialises in the automation of network virtualisation for cloud infrastructure, enabling public cloud solutions for service providers and private or hybrid cloud solutions for the enterprise.
Sam holds multiple high-level industry certifications, including the VMware Certified Design Expert (VCDX) for Cloud Management and Automation. He is also a proud member of the vExpert community, holding the vExpert accolade from 2013 to the present, as well as being selected for the vExpert NSX, vExpert vSAN and vExpert Cloud sub-programs.
Up until recently I’ve been running a Windows Server Core VM with Active Directory, DNS and Certificate Services deployed to provide some core features in my home lab. However, I’ve also been conscious that running a lab on old hardware doesn’t exactly have much in the way of green credentials. So, in an effort to reduce my carbon footprint (and electricity bill) I’ve been looking for ways to shut down my lab when it’s not in use.
Firstly, I need to mention the three different repositories I’m using for my code base, and why. The three repositories are:
definit-hugo - this contains the Hugo site configuration
definit-content - this contains the site content - markdown files, images etc
definit-theme - this contains the VMware Clarity-based theme I use for my site
definit-content and definit-theme are git submodules in the definit-hugo project, mapped into the /content and /themes folders respectively. This allows me to keep the configuration, content and theme separate, and to manage them as distinct entities. The aim is that the theme will eventually be in a position to be released, and I don’t want to have to extract it from my Hugo code base later on.
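For anyone wanting to replicate the layout, the wiring is just a couple of submodule adds from the root of the Hugo repository - a minimal sketch, assuming the repositories live at URLs like these (the remotes and theme folder name here are illustrative, not my real ones):

```
# Run from the root of the definit-hugo repository
# (repository URLs below are illustrative, not the real remotes)
git submodule add https://github.com/example/definit-content.git content
git submodule add https://github.com/example/definit-theme.git themes/definit-theme
git submodule update --init --recursive
```

From then on, cloning the site elsewhere just needs `git clone --recurse-submodules` to pull all three repositories in one go.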
Autumn seems to be a time for the winds of change to blow through our industry, and this year that’s true for me.
TL;DR - I’m leaving VMware PSO to join the Cloud Management Business Unit as a Technical Marketing Manager for Cloud Automation!
It’s been a little over two years since I joined VMware as a Senior Consultant in the EMEA NSX Practice, and in that time I’ve enjoyed some great opportunities and worked with some great people and technologies. And I’ve learned a lot since taking on my first NSX-T design just days after joining. But, as I mentioned when I joined a couple of years ago, I’ve always said NSX comes alive when it’s automated by vRA. In fact, almost every engagement I’ve done over the past two years has had a strong vein of automation running through it.
I run quite a few applications in Docker as part of my home network - there’s a small selection below, but at any one time there might be 10-15 more apps I’m playing around with:
plex - Streaming media server
unifi - Ubiquiti Network Controller
homebridge - Apple Homekit compatible smart home integration
influxdb - Open source time series database
grafana - Data visualization & Monitoring
pihole - internet tracking and ad blocker
vault - HashiCorp secret management
Until recently a single PhotonOS VM with Docker was all I needed - everything shared the same host IP, stored its configuration locally or on an NFS mount, and generally ran fine. However, my wife and kids have become more dependent on plex and homebridge (which I use to control the air conditioning in my house), and if they’re down, it’s a problem. So, I embarked on a little project to provide some better availability, and learn a little in the process.
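For context, the starting point looked something like this - a hedged sketch of the single-host setup, where the image names and NFS paths are examples rather than my exact configuration:

```
# Single Docker host: every container shares the host's IP stack and
# stores its configuration locally or on an NFS mount
# (image names and paths are illustrative)
docker run -d --name plex --network host \
  -v /mnt/nfs/docker/plex:/config \
  plexinc/pms-docker

docker run -d --name homebridge --network host \
  -v /mnt/nfs/docker/homebridge:/homebridge \
  oznu/homebridge
```

The obvious weakness is that the VM itself is a single point of failure - if it’s down for patching (or because I’ve broken something), everything goes down with it.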
Following on from my recent post deploying Kubernetes with the NSX-T NCP, I wanted to extend my environment to make use of the vSphere Cloud Provider to enable Persistent Volumes backed by vSphere storage. This allows me to use Storage Policy to create Persistent Volumes based on policy. For example, I’m going to create two classes of storage, Fast and Slow - Fast will be vSAN-based and Slow will be NFS-based.
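In Kubernetes terms, each class maps to a StorageClass that references a vCenter storage policy. A minimal sketch, assuming the in-tree vSphere Cloud Provider and SPBM policies named fast-vsan and slow-nfs (the policy names are examples, not required values):

```
kubectl apply -f - <<EOF
# "Fast" class, backed by a vSAN-based storage policy
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  storagePolicyName: fast-vsan
---
# "Slow" class, backed by an NFS-based storage policy
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  storagePolicyName: slow-nfs
EOF
```

A PersistentVolumeClaim that specifies `storageClassName: fast` will then get a VMDK provisioned on a datastore compatible with that policy.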
I’ve done a fair amount of work learning VMware PKS and NSX-T, but I wanted to drop down a level and get more familiar with the inner workings of Kubernetes, as well as explore some of the newer features that are exposed by the NSX Container Plugin that are not yet in the PKS integrations.

The NSX-T docs are…not great - I certainly don’t think you can work out the steps required from the official NCP installation guide without a healthy dollop of background knowledge and familiarity with Kubernetes and CNI. Anthony Burke published this guide, which is great, and I am lucky enough to be able to pick his brains on our corporate Slack.
I ran into this UI bug the other day when I was trying to enable route redistribution on an Edge in a Secondary site of a cross-vCenter NSX deployment. The Edge itself was deployed correctly, and configured to peer with a physical northbound router; however, when I attempted to configure the route redistribution I was unable to do so.
Fortunately, the solution was simple - use the API.
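As a rough sketch of the workaround (the NSX Manager address, edge ID and rule contents below are examples - check the NSX-v API guide for the exact schema), the flow is: GET the Edge’s BGP routing config, add the redistribution rules to the XML, and PUT it back:

```
# Sketch only: manager address, edge ID and rule values are examples
# 1. Fetch the current BGP config, which includes the <redistribution> section
curl -k -u admin -X GET \
  https://nsxmgr.lab.local/api/4.0/edges/edge-7/routing/config/bgp \
  -o bgp-config.xml

# 2. Edit bgp-config.xml to enable redistribution, e.g. add something like:
#    <redistribution>
#      <enabled>true</enabled>
#      <rules><rule>
#        <from><connected>true</connected></from>
#        <action>permit</action>
#      </rule></rules>
#    </redistribution>

# 3. Push the updated config back to the Edge
curl -k -u admin -X PUT \
  https://nsxmgr.lab.local/api/4.0/edges/edge-7/routing/config/bgp \
  -H "Content-Type: application/xml" -d @bgp-config.xml
```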
When I started my blog back in May 2007 (12 years ago!) I was running WordPress, then switched to DotNetNuke, then BlogEngine, then finally back to WordPress - which I’ve used since 2010. Today I’ve cut over to a new architecture based on Hugo and hosted on AWS using a combination of Route 53, CloudFront and S3.
Why the change? If it ain’t broke…
You may well ask why I’ve made the move, or you may not…I’m going to tell you anyway…
Most vSphere admins are more than comfortable using Update Manager to download patches and update their environment, but few that I talk to actually know a huge amount about the Update Manager Download Service (UMDS). UMDS is a tool you can install to download patches (and third-party VIBs - I’ll get to that) for Update Manager. It’s useful for environments that don’t have access to the internet or are air-gapped, and also for environments with multiple vCenter Servers where you don’t necessarily want to download the same patches on every server. You can control which patches you download (for example, limiting to ESXi 6.7+ only) and you can add third-party vendor repositories (e.g. Dell or HPE).
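To give a flavour of how it’s driven, here’s a hedged sketch of the vmware-umds command line on the Linux appliance (the vendor depot URL and export path are examples):

```
# Download host patches only, and only for ESXi 6.7
vmware-umds -S --enable-host --disable-va
vmware-umds -S -e embeddedEsx-6.7.0

# Add a third-party vendor repository (depot URL is an example)
vmware-umds -S --add-url https://vendor.example.com/vibs/index.xml --url-type HOST

# Download the enabled patches, then export them for Update Manager to consume
vmware-umds -G
vmware-umds -E --export-store /var/lib/umds-export
```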
This series was originally going to be a more polished endeavour, but unfortunately time got in the way. A prod from James Kilby (@jameskilbynet) has convinced me to publish as is, as a series of lab notes. Maybe one day I’ll loop back and finish them…
Requirements
Routing
Because I’m backing my vCloud Director installation with NSX-T, I will be using my existing Tier-0 router, which interfaces with my physical router via BGP. The Tier-0 router will be connected to the Tier-1 router, the NSX-T logical switches will be connected to the Tier-1, and the IP networks advertised to the Tier-0 (using NSX-T’s internal routing mechanism) and up via eBGP to the physical router.
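As a sketch of the NSX-T side of that design (using the Policy API; the Tier-1 ID, Tier-0 path and manager address are examples, not from my actual environment):

```
# Attach the Tier-1 to the Tier-0 and advertise its connected segments,
# so the Tier-0 can redistribute them northbound via eBGP
curl -k -u admin -X PATCH \
  https://nsxmgr.lab.local/policy/api/v1/infra/tier-1s/vcd-tier1 \
  -H "Content-Type: application/json" \
  -d '{
        "tier0_path": "/infra/tier-0s/lab-tier0",
        "route_advertisement_types": ["TIER1_CONNECTED"]
      }'
```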