Recently I stumbled upon a limitation in VCF that isn’t very clearly documented. While it isn’t an issue you would regularly come across, it is an important limitation to be aware of if you plan to adjust the pNIC configuration of any VCF hosts post deployment/commissioning.
The Problem
We have a few customers who will not be able to commission their new hosts with the desired pNIC configuration due to current hardware availability.
I have been working with VMware Cloud Foundation recently, and while for the most part things went well, there were occasions where challenges were encountered that made the delivery to the customer all the trickier. This article is a list of observations and things to check or watch out for when delivering a VCF project. We were working with VCF version 3.7.2 (yes I am aware 3.
Extending a vCenter Orchestrator (vCO) Workflow with ForEach – Backing up all ESXi hosts in a Cluster
In my previous post Backing up ESXi 5.5 host configurations with vCenter Orchestrator (vCO) – Workflow design walkthrough I showed how to create a workflow to back up host configurations, but it was limited to one host at a time. In this post I’m going to show how to create a new workflow that uses a ForEach loop to call the previous workflow against each host in a cluster.
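The ForEach pattern from that post can be sketched in vCO-style JavaScript. Note this is a minimal, self-contained sketch: the `clusterHosts` array and the `backupHostConfig` function are hypothetical stand-ins for the real workflow input (an array of VC:HostSystem objects) and the nested single-host backup workflow.

```javascript
// Sketch of the ForEach pattern: iterate the hosts of a cluster and
// invoke the single-host backup step for each one.
// `clusterHosts` and `backupHostConfig` are hypothetical stand-ins so
// this runs outside vCO; in vCO the ForEach element does the iteration.

var clusterHosts = ["esxi01.lab.local", "esxi02.lab.local", "esxi03.lab.local"];
var results = [];

function backupHostConfig(hostName) {
  // In the real workflow this would be the nested backup workflow
  // called with the VC:HostSystem object; here we just record the name.
  return "backed up " + hostName;
}

for (var i = 0; i < clusterHosts.length; i++) {
  results.push(backupHostConfig(clusterHosts[i]));
}

results.forEach(function (r) { console.log(r); });
```

In vCO itself you wouldn't write this loop by hand: you drag a ForEach element onto the canvas, bind its array input to the cluster's host list, and point it at the existing backup workflow.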
Backing up ESXi 5.5 host configurations with vCenter Orchestrator (vCO) – Workflow design walkthrough
As a little learning project, I thought I’d take on Simon’s previous post about backing up ESXi configurations and extend it to vCenter Orchestrator (vCO), and document how I go about building up a workflow. I’m learning more and more about vCO all the time, but I found it has a really steep entry point, and finding use cases is hard if you haven’t explored its capabilities. The steps I want to create in this post are:
After having a play with Virtual Flash and Host Caching on one of my lab hosts I wanted to re-use the SSD drive, but couldn’t seem to get vFlash to release the drive. I disabled flash usage on all VMs and disabled the Host Cache, then went to the Virtual Flash Resource Management page to click the “Remove All” button. That failed with errors: “Host’s virtual flash resource is inaccessible.
Problem
Fairly recently I came across this error message on one of my hosts: “esx.problem.visorfs.ramdisk.full”.
Fallout
While trying to address the issue, I had the following problems when the ramdisk did indeed “fill up”:
- PSOD (worst case; happened only once in my experience)
- VMs struggling to vMotion from the affected host when putting it into maintenance mode
Temporary workaround
A reboot of the host would clear the problem (clear out the ramdisk) for a short while, but the problem will return if not addressed properly.
There are different schools of thought as to whether you should have SSH enabled on your hosts. VMware recommends it is disabled. With SSH disabled there is no possibility of attack via that vector, so that’s the “most secure” option. Of course, in the real world there’s a balance between “most secure” and “usability” (e.g. the most secure host is powered off and physically isolated from the network, but then you can’t run any workloads).
Losing a root password isn’t something that happens often, but when it does it’s normally a really irritating time. I have to rotate the password of all hosts once a month for compliance, but sometimes a host drops out of the loop and the root password gets lost. Fortunately, as the vpxuser is still valid I can manage the host via vCenter - this lends itself to this little recovery process:
This is the second article in a series of vSphere Security articles that I have planned. The majority of this article is based on vSphere/ESXi 5.1, though I will include any 5.5 information that I find relevant. The first article in this series was vSphere Security: Understanding ESXi 5.x Lockdown Mode. Why would you want to join an ESXi host to an Active Directory domain? Well, you’re not going to get Group Policies applied; what you’re really doing is adding another authentication provider directly to the ESXi host.
John Troyer (@jtroyer) asked a question on Twitter last night about a CloudCred prize of $1000-2000: @jtroyer a nice lab setup! — Sam McGeown (@sammcgeown) September 19, 2013 @jtroyer I guess a couple of hosts, storage and a switch, wouldn't get HCL certified for that but I'm sure it's doable! — Sam McGeown (@sammcgeown) September 19, 2013 That got me thinking – was it possible to create an entire 2 host lab with storage on a $2000 budget?