ESXi

Written by Simon Eady on 4/4/2022
Published under VCF

Recently I stumbled upon a limitation in VCF that isn’t very clearly documented. While it isn’t an issue you would regularly come across, it is an important limitation to be aware of if you plan to adjust the pNIC configuration of any VCF hosts post deployment/commissioning.

The Problem

We have a few customers who will not be able to commission their new hosts with the desired pNIC configuration, as global supply challenges have meant severe delays in pNIC availability.

Written by Simon Eady on 29/7/2019

I have been working with VMware Cloud Foundation recently, and while for the most part things went well, there were occasions where challenges made the delivery to the customer trickier than expected.

This article is a list of observations and things to most definitely check or watch out for when delivering a VCF project.

We were working with VCF version 3.7.2 (yes, I am aware 3.8 has arrived, but that was too late for this project’s delivery).

Written by Sam McGeown on 20/3/2014

In my previous post, Backing up ESXi 5.5 host configurations with vCenter Orchestrator (vCO) – Workflow design walkthrough, I showed how to create a workflow to back up host configurations, but it was limited to one host at a time. In this post I’m going to show how to create a new workflow that calls the previous one against multiple hosts, using a ForEach loop to run it for each one. This was actually easier than I had anticipated (having read posts on previous versions of vCO that involved creating the loops manually).
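The post builds this in vCO itself; purely as an illustration of the same loop-over-hosts pattern (and not the vCO workflow from the post), here is a minimal pyVmomi sketch that requests a configuration backup bundle from every host in the inventory. The vCenter address and credentials are placeholders.

```python
# Minimal sketch of the same "back up every host" loop using pyVmomi rather
# than vCO. The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        container=content.rootFolder, type=[vim.HostSystem], recursive=True)
    for host in view.view:
        # BackupFirmwareConfiguration returns the URL of the host's backup bundle
        url = host.configManager.firmwareSystem.BackupFirmwareConfiguration()
        print(host.name, url.replace("*", host.name))
finally:
    Disconnect(si)
```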

Written by Sam McGeown on 13/3/2014
Published under VMware, vRealize Orchestrator

As a little learning project, I thought I’d take on Simon’s previous post about backing up ESXi configurations and extend it to vCenter Orchestrator (vCO), documenting how I go about building up a workflow. I’m learning more and more about vCO all the time, but I found it has a really steep entry point, and finding use cases is hard if you haven’t explored its capabilities.

Written by Sam McGeown on 27/2/2014
Published under VMware, vSphere

After having a play with Virtual Flash and Host Caching on one of my lab hosts I wanted to re-use the SSD drive, but couldn’t seem to get vFlash to release the drive. I disabled flash usage on all VMs and disabled the Host Cache, then went to the Virtual Flash Resource Management page to click the “Remove All” button. That failed with errors:

“Host’s virtual flash resource is inaccessible.”

Written by Simon Eady on 4/2/2014
Published under

Problem

Fairly recently I came across this error message on one of my hosts: “esx.problem.visorfs.ramdisk.full”.

Fallout

While trying to address the issue, I hit the following problems when the ramdisk did indeed “fill up”:

  • PSOD (the worst case, which happened only once in my experience)
  • VMs struggling to vMotion off the affected host when putting it into maintenance mode.

Temporary workaround

A reboot of the host would clear the problem (emptying the ramdisk) for a short while, but it would return if not addressed properly.
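The diagnosis itself isn’t shown in this excerpt; as a hedged illustration only, a sketch like the one below (assuming SSH access to the host and the paramiko library, with host name and credentials as placeholders) pulls the ramdisk usage figures so you can see which ramdisk is actually filling up.

```python
# Hedged diagnostic sketch: check ESXi ramdisk usage over SSH to see which
# ramdisk is filling up. Host name and credentials are placeholders, and
# paramiko plus SSH access to the host are assumed.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("esxi01.lab.local", username="root", password="VMware1!")
try:
    for cmd in ("vdf -h",                                # per-ramdisk usage summary
                "esxcli system visorfs ramdisk list"):   # configured ramdisks and their limits
        _, stdout, _ = client.exec_command(cmd)
        print("###", cmd)
        print(stdout.read().decode())
finally:
    client.close()
```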

Written by Sam McGeown on 22/10/2013
Published under Networking, VMware

There are different schools of thought as to whether you should have SSH enabled on your hosts. VMware recommend it is disabled, and with SSH disabled there is no possibility of attack via that channel, so that’s the “most secure” option. Of course, in the real world there’s a balance between “most secure” and “usable” (e.g. the most secure host is powered off and physically isolated from the network, but then you can’t run any workloads). My preferred route is to have it enabled but locked down.
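As one hedged illustration of “enabled but locked down” (not necessarily the approach taken in the full post), the sketch below uses paramiko to run esxcli commands that restrict the sshServer firewall ruleset to a management subnet; the host name, credentials and subnet are placeholders.

```python
# Hedged sketch of "enabled but locked down": restrict the ESXi sshServer
# firewall ruleset to a management subnet. Host name, credentials and subnet
# are placeholders; paramiko and SSH access to the host are assumed.
import paramiko

commands = [
    # Stop the sshServer ruleset accepting connections from any IP
    "esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false",
    # Allow SSH only from the management subnet
    "esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 192.168.1.0/24",
    # Confirm the allowed IP list
    "esxcli network firewall ruleset allowedip list --ruleset-id sshServer",
]

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("esxi01.lab.local", username="root", password="VMware1!")
try:
    for cmd in commands:
        _, stdout, stderr = client.exec_command(cmd)
        print(stdout.read().decode(), stderr.read().decode())
finally:
    client.close()
```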

Written by Sam McGeown on 7/10/2013
Published under VMware, vSphere

Losing a root password isn’t something that happens often, but when it does it’s normally a really irritating time. I have to rotate the root password on all hosts once a month for compliance, but sometimes a host drops out of the loop and the password gets lost. Fortunately, as the vpxuser is still valid I can manage the host via vCenter, which lends itself to this little recovery process:

Written by Sam McGeown on 4/10/2013
Published under VMware

This is the second article in a series of vSphere Security articles that I have planned. The majority of this article is based on vSphere/ESXi 5.1, though I will include any 5.5 information that I find relevant. The first article in this series was vSphere Security: Understanding ESXi 5.x Lockdown Mode.

Why would you want to join an ESXi host to an Active Directory domain? Well, you’re not going to get Group Policies applying; what you’re really doing is adding another authentication provider directly to the ESXi host. You will see a computer object created in AD, but you will still need to create a DNS entry (or configure DHCP to do it for you). What you do get is a way to audit root access to your hosts, a single sign-on for administrators managing all aspects of your virtual environment, and more options in your administrative arsenal – for example, if you’re using an AD group to manage host root access, you don’t have to log on to however many ESXi hosts you have to remove a user’s permissions; simply remove them from the group. You can keep your root passwords in a sealed envelope for emergencies! 😉
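The full post covers the join itself; purely as a hedged sketch (not the post’s method), the pyVmomi snippet below joins a single host to a domain via the host’s Active Directory authentication store. The vCenter and host names, the domain and the join account are all placeholder assumptions.

```python
# Hedged sketch: join one ESXi host to Active Directory via pyVmomi.
# vCenter/host names, domain and credentials below are all placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    host = content.searchIndex.FindByDnsName(dnsName="esxi01.lab.local", vmSearch=False)
    auth_mgr = host.configManager.authenticationManager
    # Locate the Active Directory authentication store on the host
    ad_store = next(s for s in auth_mgr.supportedStore
                    if isinstance(s, vim.host.ActiveDirectoryAuthentication))
    # Join with an account that is allowed to create computer objects in the domain
    WaitForTask(ad_store.JoinDomain_Task(domainName="lab.local",
                                         userName="svc-esxi-join",
                                         password="JoinPassword!"))
finally:
    Disconnect(si)
```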

Written by Sam McGeown on 20/9/2013
Published under VMware

John Troyer (@jtroyer) asked a question on Twitter last night about a CloudCred prize of $1000-2000:


That got me thinking – was it possible to create an entire two-host lab with storage on a $2000 budget? My first step was to convert it into a proper currency:

I figured that I’d stick to the Intel NUC route that I’ve gone down for my lab at home – I love the NUC for its tiny form factor, silent operation and really low power consumption. There are downsides – it can only take 16GB of RAM, has only one mSATA slot and only one gigabit NIC. I don’t think any of those are too big a deal for a personal lab though – certainly I’ve not had any problems building and testing VMware products on my single NUC. I’d drop in an 8GB stick of RAM and an Intel 60GB mSATA SSD per NUC – you could always go to 16GB later by adding another 8GB stick in the second slot. I picked the Intel mSATA disk for its controller and throughput figures – there are larger and cheaper ones, but not with the same write performance. Since the use of SSD is massively in focus with vFlash, PernixData FVP and several other technologies, you wouldn’t want to miss out. I’ve also added an 8GB USB3 flash drive per NUC to boot ESXi from.