As a Consultant within a VMware Principal Partner, there are standards we need to meet and preferably exceed. Master Services Competencies (MSCs) are VMware partner designations designed to recognize services-capable partners with delivery expertise and experience within a VMware solution area. With that in mind, I sat the VCF Specialist exam. The exam covers a broad spectrum of VMware technologies (vSphere, NSX-T, vSAN, Tanzu), and I was apprehensive about it, as there is an awful lot of material the exam expects you to know well.
There have been some really exciting things announced and released by VMware and their respective product teams recently, so let's take a look at all the new goodness in vRealize Operations 8.4. This release delivers new and enhanced capabilities for self-driving operations to help customers optimize, plan, and scale VMware Cloud, which includes on-premises private cloud or VMware SDDC in public clouds such as VMware Cloud on AWS, Azure VMware Solution (AVS), and Google Cloud VMware Engine (GCVE), while at the same time unifying multi-cloud monitoring and supporting the AWS, Azure, and Google Cloud platforms.
Most of my home network runs on my Raspberry Pi Kubernetes cluster, and for the most part it’s rock solid. However, applications being applications, they sometimes become less responsive than they should (for example, when my Synology updates itself and reboots, any mounted NFS volumes can cause the running pods to degrade in performance). This isn’t an issue of service liveness, which can be mitigated with a liveness probe that restarts the pod if a service isn’t running.
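For reference, a standard Kubernetes liveness probe looks something like the sketch below; the pod name, image, and health endpoint are all hypothetical placeholders, not the actual workloads from my cluster.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app             # hypothetical pod name
spec:
  containers:
    - name: app
      image: example/app:latest # hypothetical image
      livenessProbe:
        httpGet:
          path: /healthz        # assumes the app exposes a health endpoint
          port: 8080
        initialDelaySeconds: 10 # give the app time to start before probing
        periodSeconds: 15       # probe every 15 seconds
        failureThreshold: 3     # restart after three consecutive failures
```

The kubelet restarts the container when the probe fails repeatedly, which covers a dead service but not one that is merely slow.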
Integrating vROps with vRA 8 using the Workspace ONE Auth Source. Product Version - vRealize Automation 8.x. Why integrate vROps with vRA 8? vRealize Automation can work with vRealize Operations Manager to perform advanced workload placement, provide deployment health and virtual machine metrics, and display pricing. So what is the problem? When configuring the integration, you input the vROps URL and are also asked for the username and password of the service account you wish to use.
Why use Content Libraries with vRA 8 Product Version - vRealize Automation 8.x What are vSphere Content Libraries? A content library stores and manages content in the form of library items. A single library item can consist of one file or multiple files. For example, the OVF template is a set of files (.ovf, .vmdk, and .mf). When you upload an OVF template to the library, you upload the entire set of files, but the result is a single library item of the OVF Template type.
Using the vRealize Orchestrator MP in vROps. Product Version - vRealize Operations 7.x and 8.x. A few years ago VMware released the Orchestrator MP, which is a superb way to call vRO workflows directly from vROps by way of alerts and actions. This opens the door to all manner of ideas for conditional automation using vROps. The limitation: we recently planned to use the vRO MP to assist a customer with a very niche challenge.
vROps Remote Collectors - Design Considerations. Product Version - vRealize Operations 7.x and 8.x. As part of the VMware Validated Designs, you would typically use Remote Collectors not just in different DCs but also local to the analytics cluster. However, there are circumstances where the rules “change”, such as when using RecoverPoint or Site Recovery Manager. Design assumption - you have Remote Collectors on your Primary Site and on your Failover Site.
When I deploy a new service into a namespace, I need to create a new DNS record that makes it available. I’ve previously talked about using CoreDNS to host my lab DNS zones, but this is something different. I want to make a Kubernetes Service available using an existing Microsoft DNS server - which is already used by all the clients who would need to access the service. To do this I will create a delegated zone under my existing zone cmbu.
To generate a basic authentication header from a username and password in Code Stream, you could use a CI task, execute echo -n username:password | base64 in the shell, and export the result for use later on. A more repeatable way is to create a Custom Integration that takes the two inputs and returns the encoded header as an output. To create the Custom Integration: create a new Custom Integration named “Create Basic Authentication Header”; select the Runtime (the examples below are shell and python3 respectively); replace the placeholder code with the example below; then save and version the Custom Integration, ensuring you enable the “Release Version” toggle. To use the Custom Integration in a pipeline:
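As a sketch of the encoding step itself, runnable outside Code Stream (the function name and example credentials are hypothetical; the Custom Integration input/output binding is omitted), the header can be built like this:

```shell
#!/bin/bash
# Build an HTTP Basic Authentication header from a username and password.
# printf '%s' (like echo -n) avoids encoding a trailing newline into the header.
make_basic_auth_header() {
  local username="$1"
  local password="$2"
  printf 'Basic %s' "$(printf '%s' "${username}:${password}" | base64)"
}

# Example: prints "Basic dXNlcm5hbWU6cGFzc3dvcmQ="
make_basic_auth_header "username" "password"
```

The resulting string can be passed straight into an Authorization header in a later pipeline task.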
As more services go live on my Kubernetes clusters and more people start relying on them, I get nervous. For the most part, I try to keep my applications and configurations stateless, relying on ConfigMaps, for example, to store application configuration. This means that with a handful of YAML files in my Git repository I can restore everything to working order. Sometimes, though, there’s no choice but to use a PersistentVolume to provide some data persistence where you can’t capture it in a config file.
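For illustration, claiming that persistent storage looks something like the minimal PersistentVolumeClaim below; the claim name, size, and StorageClass are hypothetical placeholders.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce       # mounted read/write by a single node
  resources:
    requests:
      storage: 1Gi        # hypothetical size
  storageClassName: nfs   # assumes an NFS-backed StorageClass exists
```

Unlike the YAML in Git, the data behind a bound claim can’t be restored from the repository, which is exactly what makes backups matter here.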