In my previous post I walked through configuring Kubernetes ingress with automatically generated SSL certificates and DNS registration using Tanzu Kubernetes Grid’s Packages. Another of the packaged applications available is Fluent Bit, which enables log forwarding from your Kubernetes cluster and workloads to a range of supported logging endpoints. A couple of tweaks are required to forward logs to vRealize Log Insight Cloud: we need to use the HTTP output in the Fluent Bit configuration to forward the logs as a JSON payload to the Log Insight API.
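For context, the relevant piece is an HTTP `[OUTPUT]` stanza in the Fluent Bit configuration along these lines - a minimal sketch, where the host, URI and Authorization header value are placeholders rather than the exact values from the post:

```
[OUTPUT]
    Name         http
    Match        *
    # Placeholder Log Insight Cloud ingestion endpoint - substitute your region's host/URI
    Host         data.mgmt.cloud.vmware.com
    Port         443
    URI          /le-mans/v1/streams/ingestion-pipeline-stream
    # API key generated in Log Insight Cloud (placeholder value)
    Header       Authorization Bearer <api-token>
    # Send the records as a JSON payload over TLS
    Format       json
    tls          On
    tls.verify   On
```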
So, you’ve set up your shiny new Workload Management on vSphere, created a Namespace and deployed a cluster…now what?! When you deploy a workload cluster from Workload Management on vSphere 7, it comes with basic functionality, but in order to start running workloads you will inevitably need to install additional tools. That’s where Tanzu Packages come into play. Tanzu’s User Managed Packages are based on a project called Carvel, which:
Most of my home network runs on my Raspberry Pi Kubernetes cluster, and for the most part it’s rock solid. However, applications being applications, sometimes they become less responsive than they should be (for example, when my Synology updates itself and reboots, any mounted NFS volumes can cause the running pods to degrade in performance). This isn’t an issue with service liveness, which can be mitigated with a liveness probe that restarts the pod if a service isn’t running.
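For reference, a bare-bones liveness probe looks something like the sketch below - the pod name, image, health endpoint and timings are made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                # hypothetical pod, for illustration only
spec:
  containers:
    - name: example-app
      image: example/app:1.0       # placeholder image
      livenessProbe:
        httpGet:
          path: /healthz           # assumes the app exposes an HTTP health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
        failureThreshold: 3        # kubelet restarts the container after 3 consecutive failures
```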
When I deploy a new service into a namespace, I need to create a new DNS record that makes it available. I’ve previously talked about using CoreDNS to host my lab DNS zones, but this is something different. I want to make a Kubernetes Service available using an existing Microsoft DNS server - which is already used by all the clients who would need to access the service. To do this I will create a delegated zone under my existing zone cmbu.
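As a rough sketch of the delegation step on the Microsoft DNS side, something like the following PowerShell would delegate a child zone to the nameserver answering for Kubernetes services - the zone names and IP address here are hypothetical, since the real parent zone name is truncated above:

```powershell
# Delegate a child zone (e.g. "k8s") from the existing parent zone to the
# nameserver that will answer for Kubernetes services. Names and IP are placeholders.
Add-DnsServerZoneDelegation -Name "cmbu.example.com" `
                            -ChildZoneName "k8s.cmbu.example.com" `
                            -NameServer "ns1.k8s.cmbu.example.com" `
                            -IPAddress 192.168.10.50
```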
As more services go live on my Kubernetes clusters and more people start relying on them, I get nervous. For the most part, I try to keep my applications and configurations stateless - relying on ConfigMaps, for example, to store application configuration. This means that with a handful of YAML files in my Git repository I can restore everything to working order. Sometimes, though, there’s no choice but to use a PersistentVolume to provide some data persistence where you can’t capture it in a config file.
If you’re anything like me, your home lab is constantly changing, evolving, breaking, rebuilding. For the last year or so I’ve been running all my home Kubernetes workloads on a Raspberry Pi cluster - and it’s been working really well! I’ve been through several iterations - for example, first running on SD cards (tl;dr - it’s bad, they wear out really fast with Kubernetes on board!), then PXE booting them from my Synology, to its current state of booting directly from SSDs.
VMware have been very busy in the last few months across many of their products, and Operations Management has not gone untouched. We saw glimpses of things to come late last year, and now we see the Cloud Management Team in VMware really ramping things up with vRealize Operations 8.1. One of the bigger items to arrive very shortly is vRealize Operations Cloud (SaaS), which means you do not need to worry about standing vROps up in your environment to see what value it can bring to your infrastructure monitoring.
Since I started learning Kubernetes, the Certified Kubernetes Administrator (CKA) exam has been a target for me, but it’s always seemed to be out of reach. The whole Kubernetes ecosystem is a vast and nebulous beast, with new projects rising to the fore all the time and old projects fading from favour. The size and rapid development that make the field so interesting and powerful are the same properties that make the learning curve so steep and the entry bar so high.
Up until recently I’ve been running a Windows Server Core VM with Active Directory, DNS and Certificate Services deployed to provide some core features in my home lab. However, I’ve also been conscious that running a lab on old hardware doesn’t exactly have much in the way of green credentials. So, in an effort to reduce my carbon footprint (and electricity bill) I’ve been looking for ways to shut down my lab when it’s not in use.
I run quite a few applications in Docker as part of my home network - there’s a small selection below, but at any one time there might be 10-15 more apps I’m playing around with:

- plex - Streaming media server
- unifi - Ubiquiti Network Controller
- homebridge - Apple HomeKit compatible smart home integration
- influxdb - Open source time series database
- grafana - Data visualization & monitoring
- pihole - Internet tracking and ad blocker
- vault - HashiCorp secret management

Until recently a single PhotonOS VM with Docker was all I needed - everything shared the same host IP, stored its configuration locally or on an NFS mount, and generally ran fine.