Most of my home network runs on my Raspberry Pi Kubernetes cluster, and for the most part it’s rock solid. However, applications being applications, they sometimes become less responsive than they should be (for example, when my Synology updates itself and reboots, stale NFS mounts can degrade the performance of running pods). This isn’t an issue with service liveness, which can be mitigated with a liveness probe that restarts the pod if a service isn’t running.
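A minimal sketch of such a probe (the pod name, image, and /healthz endpoint are all hypothetical), applied straight from the shell:

```sh
# Minimal sketch, hypothetical names: the kubelet polls /healthz and
# restarts the container after three consecutive failed checks.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
      failureThreshold: 3
EOF
```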
Integrating vROps with vRA 8 using Workspace ONE Auth Source
Product Version - vRealize Automation 8.x

Why integrate vROps with vRA 8? vRealize Automation can work with vRealize Operations Manager to perform advanced workload placement, provide deployment health and virtual machine metrics, and display pricing. So what is the problem? When configuring the integration, you input the vROps URL, and you are also asked for the username and password of the service account you wish to use.
Why use Content Libraries with vRA 8
Product Version - vRealize Automation 8.x

What are vSphere Content Libraries? A content library stores and manages content in the form of library items. A single library item can consist of one file or multiple files. For example, an OVF template is a set of files (.ovf, .vmdk, and .mf). When you upload an OVF template to the library, you upload the entire set of files, but the result is a single library item of the OVF Template type.
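To illustrate that single-item behaviour, here is a sketch using the govc CLI (my assumption for illustration - the post doesn’t require it); importing the .ovf descriptor brings its sibling files along as one item:

```sh
# Hypothetical names; assumes govc is already configured (GOVC_URL etc.).
govc library.create -ds=datastore1 my-templates  # create a content library
govc library.import my-templates ./photon.ovf    # .vmdk and .mf come along too
govc library.ls my-templates/                    # lists a single OVF item
```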
Using the vRealize Orchestrator MP in vROps
Product Version - vRealize Operations 7.x and 8.x

A few years ago VMware released the Orchestrator MP, which is a superb way to call vRO workflows directly from vROps by way of alerts and actions. This opens the door to all manner of ideas for conditional automation using vROps. The limitation: we recently planned to use the vRO MP to help a customer with a niche challenge.
vROps Remote Collectors - Design Considerations
Product Version - vRealize Operations 7.x and 8.x

As part of VMware Validated Designs, you would typically use Remote Collectors not just in different data centers but also local to the analytics cluster. However, there are circumstances where the rules “change”.

When using RecoverPoint or Site Recovery Manager
Design Assumption - you have Remote Collectors on your Primary Site and on your Failover Site.
When I deploy a new service into a namespace, I need to create a new DNS record that makes it available. I’ve previously talked about using CoreDNS to host my lab DNS zones, but this is something different. I want to make a Kubernetes Service available using an existing Microsoft DNS server - which is already used by all the clients who would need to access the service. To do this I will create a delegated zone under my existing zone cmbu.
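Delegation itself is just NS records in the parent zone pointing at the child zone’s nameserver, so it’s easy to verify from any client; a quick sketch with hypothetical zone and server names:

```sh
# Hypothetical names: the Microsoft DNS server at 192.168.1.10 owns the
# parent zone and delegates the k8s child zone to another nameserver.
dig +short NS k8s.example.com @192.168.1.10       # returns the delegated NS
dig +short grafana.k8s.example.com @192.168.1.10  # resolves via the delegation
```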
To generate a basic authentication header from a username and password in Code Stream, you could use a CI task and execute echo -n username:password | base64 in the shell, then export the result for use later on. A more repeatable way is to create a Custom Integration that takes the two inputs and returns the encoded header as an output.

To create the Custom Integration:

1. Create a new Custom Integration named “Create Basic Authentication Header”
2. Select the Runtime - the examples below are shell and python3 respectively
3. Replace the placeholder code with the example from below
4. Save and version the Custom Integration, ensuring you enable the “Release Version” toggle

Creating the Custom Integration

To use the Custom Integration in a pipeline:
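As a rough shell-only sketch of the encoding logic behind step 3 (the Code Stream input/output wiring is omitted, and all names are placeholders):

```sh
# Hypothetical helper; Code Stream's input/output plumbing is omitted.
make_basic_auth_header() {
  local username="$1" password="$2"
  # printf avoids the trailing newline a bare echo would add, which
  # would corrupt the encoded value (hence the -n in echo -n above).
  printf '%s:%s' "$username" "$password" | base64
}

make_basic_auth_header "svc-account" "s3cret"
# -> c3ZjLWFjY291bnQ6czNjcmV0 (used as "Authorization: Basic <value>")
```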
As more services go live on my Kubernetes clusters and more people start relying on them, I get nervous. For the most part, I try to keep my applications and configurations stateless - relying on ConfigMaps, for example, to store application configuration. This means that with a handful of YAML files in my Git repository I can restore everything to working order. Sometimes, though, there’s no choice but to use a PersistentVolume to provide data persistence where you can’t capture it in a config file.
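When that happens, even a low-tech copy of the volume contents is better than nothing; a sketch with hypothetical names, assuming a running pod mounts the volume at /data:

```sh
# Hypothetical names: stream a tarball of the volume contents out of the
# pod and onto the local machine, ready to be copied somewhere safe.
kubectl exec -n my-namespace my-app-pod -- tar cf - -C /data . \
  > my-app-data-$(date +%F).tar
```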
If you’re anything like me, your home lab is constantly changing, evolving, breaking, rebuilding. For the last year or so I’ve been running all my home Kubernetes workloads on a Raspberry Pi cluster - and it’s been working really well! I’ve been through several iterations - for example, first running on SD cards (tl;dr - it’s bad, they wear out really fast with Kubernetes on board!), then PXE booting them from my Synology, to its current state of booting directly from SSDs.
Where can you find me at VMworld 2020
VMworld 2020 - possible together

VMworld 2020 is fast approaching (Sept 29th - October 1st), and in case you hadn’t heard, it’s online and free! If you normally struggle to get funding for tickets and flights, this could be a golden opportunity to get involved! Register for VMworld 2020 for FREE here! Please come and join me for my round table session - it will be awkward by myself!