As more services go live on my Kubernetes clusters and more people start relying on them, I get nervous. For the most part, I try to keep my applications and configurations stateless, relying on ConfigMaps, for example, to store application configuration. This means that with a handful of YAML files in my Git repository I can restore everything to working order. Sometimes, though, there’s no choice but to use a PersistentVolume to provide some data persistence where you can’t capture it in a config file.
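As a rough illustration of the stateless approach, a ConfigMap like the sketch below keeps application settings in a plain YAML manifest that can live in Git alongside everything else; the names and keys here are hypothetical, not taken from any particular app I run.

```yaml
# Hypothetical ConfigMap holding an app's settings - applied with `kubectl apply -f`
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-app-config      # hypothetical name
  namespace: default
data:
  app.properties: |
    log.level=info
    cache.size=128
```

A Deployment can then mount this as a volume or pull individual keys in as environment variables, so rebuilding the cluster is largely a matter of re-applying the manifests.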
If you’re anything like me, your home lab is constantly changing, evolving, breaking, and rebuilding. For the last year or so I’ve been running all my home Kubernetes workloads on a Raspberry Pi cluster - and it’s been working really well! I’ve been through several iterations - first running on SD cards (tl;dr - it’s bad, they wear out really fast with Kubernetes on board!), then PXE booting them from my Synology, to the current state of booting directly from SSDs.
VMware have been very busy in the last few months across many of their products, and Operations Management has not gone untouched. We saw glimpses of things to come late last year, and now we see the Cloud Management team in VMware really ramping things up with vRealize Operations 8.1. One of the bigger items to arrive very shortly is vRealize Operations Cloud (SaaS), which means you do not need to worry about standing vROps up in your environment to see what value it can bring to your infrastructure monitoring.
Since I started learning Kubernetes, the Certified Kubernetes Administrator (CKA) exam has been a target for me, but it’s always seemed to be out of reach. The whole Kubernetes ecosystem is a vast and nebulous beast, with new projects rising to the fore all the time and old projects fading from favour. The size and rapid development that make the field so interesting and powerful are the same properties that make the learning curve so steep and the entry bar so high.
Up until recently I’ve been running a Windows Server Core VM with Active Directory, DNS and Certificate Services deployed to provide some core features in my home lab. However, I’ve also been conscious that running a lab on old hardware doesn’t exactly have much in the way of green credentials. So, in an effort to reduce my carbon footprint (and electricity bill) I’ve been looking for ways to shut down my lab when it’s not in use.
I run quite a few applications in Docker as part of my home network - there’s a small selection below, but at any one time there might be 10-15 more apps I’m playing around with:

- plex - Streaming media server
- unifi - Ubiquiti Network Controller
- homebridge - Apple HomeKit compatible smart home integration
- influxdb - Open source time series database
- grafana - Data visualization & monitoring
- pihole - Internet tracking and ad blocker
- vault - HashiCorp secret management

Until recently a single PhotonOS VM with Docker was all I needed to run - everything shared the same host IP, stored its configuration locally or on an NFS mount, and generally ran fine.
Following on from my recent post on deploying Kubernetes with the NSX-T CNP, I wanted to extend my environment to make use of the vSphere Cloud Provider to enable Persistent Volumes backed by vSphere storage. This allows me to create Persistent Volumes driven by vSphere Storage Policy. For example, I’m going to create two classes of storage, Fast and Slow - Fast will be vSAN-based and Slow will be NFS-based.
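For a rough idea of where this is heading, the sketch below shows what those two classes might look like using the in-tree vSphere volume provisioner; the storage policy names are hypothetical placeholders rather than the ones from my environment.

```yaml
# Hypothetical StorageClasses backed by vSphere Storage Policies (SPBM)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast                              # vSAN-backed class
provisioner: kubernetes.io/vsphere-volume
parameters:
  storagePolicyName: vsan-gold-policy     # hypothetical SPBM policy
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow                              # NFS-backed class
provisioner: kubernetes.io/vsphere-volume
parameters:
  storagePolicyName: nfs-bronze-policy    # hypothetical SPBM policy
```

A PersistentVolumeClaim then simply references `storageClassName: fast` or `slow`, and the vSphere Cloud Provider takes care of placing the backing VMDK on a datastore that satisfies the policy.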
I’ve done a fair amount of work learning VMware PKS and NSX-T, but I wanted to drop down a level and get more familiar with the inner workings of Kubernetes, as well as explore some of the newer features exposed by the NSX Container Plugin that are not yet in the PKS integrations. The NSX-T docs are…not great - I certainly don’t think you can work out the steps required from the official NCP installation guide without a healthy dollop of background knowledge and familiarity with Kubernetes and CNI.
It will be no surprise, given my impending move to the VMware PSO NSX Practice, that this morning I’ve been focussing on NSX-T. The two sessions I attended were Introduction to NSX-T Architecture and Integrating NSX-T with Kubernetes. In a weird twist of scheduling, the Kubernetes session came before the introduction session, but it worked out OK. I found the Kubernetes session really enjoyable and felt the speakers delivered a great overview of the integration and how the two technologies work together.