After having a play with Virtual Flash and Host Caching on one of my lab hosts I wanted to re-use the SSD drive, but couldn’t seem to get vFlash to release the drive. I disabled flash usage on all VMs and disabled the Host Cache, then went to the Virtual Flash Resource Management page to click the “Remove All” button. That failed with errors:
“Host’s virtual flash resource is inaccessible.”
“The object or item referred to could not be found.”
In order to reclaim the SSD you need to erase the proprietary vFlash File System partition using some command line kung fu. SSH into your host and list the disks:
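One straightforward way to do that is simply to list the device nodes (esxcli storage core device list works too, but its output is a lot noisier):
ls /vmfs/devices/disks/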
You’ll see something similar to this:
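For example, trimmed to just the relevant device (the vml.* symlinks and any other disks are left out, and your disk ID will obviously differ):
t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F
t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F:1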
You can see the disk ID “t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F” and, below it, the same ID appended with “:1”, which is partition 1 on the disk – the partition I need to delete. I then use partedUtil to delete the partition I just identified, using the format below:
partedUtil delete "/vmfs/devices/disks/<disk ID>" <partition number>
partedUtil delete "/vmfs/devices/disks/t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F" 1
There’s no output from the command if it succeeds.
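If you want a sanity check before or after the delete, partedUtil can also print the device’s partition table – once the partition has gone it should just show the disk label and geometry with no partition entries:
partedUtil getptbl "/vmfs/devices/disks/<disk ID>"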
Now I can go and reclaim the SSD as a VMFS volume as required:
Hope that helps!
Fairly recently I came across this error message on one of my hosts: "esx.problem.visorfs.ramdisk.full".
While trying to address the issue I ran into the following problems when the ramdisk did indeed "fill up":
- PSOD (worst case – happened only once in my experience)
- VMs struggling to vMotion off the affected host when putting it into maintenance mode.
A reboot of the host would clear the problem (clear out the ramdisk) for a short while, but it would return if not addressed properly.
The environment in question:
- Clustered ESXi hosts (version 5.1)
- vCenter 5.1
First of all I had to see just how bad things were, so I connected to the affected host via SSH (you may need to start the SSH service on the host, as it is usually stopped by default).
Using the following command I could see how full each ramdisk was.
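The sketch below assumes vdf for the check, which lists the host's tardisks and ramdisks; the listing is trimmed to the Ramdisk section and the figures are purely illustrative rather than the exact output from my host:
vdf -h
Ramdisk                   Size      Used Available Use% Mounted on
root                       32M       32M        0K 100% --
etc                        28M      296K       27M   1% --
tmp                       192M      624K      191M   0% --
hostdstats                833M        3M      829M   0% --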
It was clear the root ramdisk was full, so the next step was to find out what was filling it up. I searched KB articles for answers; below are the more helpful ones.
Unlike many of the other articles I had read, which all seemed to point to /var/log being the cause, the culprit in this instance was the /scratch location.
After doing some quick reading it was clear I could set a separate persistent location for the /scratch disk. So I followed the VMware KB article on this process and rebooted the host to apply the changes.
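For reference, the same change can be made from the shell instead of the client. The placeholders below need substituting for your own datastore and host names, and the host still needs a reboot for the new location to take effect:
mkdir /vmfs/volumes/<datastore>/.locker-<hostname>
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/<datastore>/.locker-<hostname>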
Even though I had followed the article to the letter and rebooted the host, the changes did not apply on this host. When I double checked the ramdisk the output showed the used space already growing again, and with the console windows side by side it was growing rapidly on this host whereas the other hosts were quite static.
It was not until I rebooted the host a second time that the changes applied. I will point out that this was the only host where this little issue occurred – the others only required one reboot after changing the /scratch location.
After the second reboot I could see the host was as it should be.
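A quick way to confirm the new location has actually stuck, rather than eyeballing the ramdisk again, is to query the advanced setting (the option path below is from memory, so check it against your build):
esxcli system settings advanced list -o /ScratchConfig/CurrentScratchLocation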
There are different schools of thought as to whether you should have SSH enabled on your hosts. VMware recommend it is disabled, and with SSH disabled that particular attack surface disappears, so that’s the “most secure” option. Of course in the real world there’s a balance between “most secure” and “usability” (e.g. the most secure host is powered off and physically isolated from the network, but you can’t run any workloads). My preferred route is to have it enabled but locked down.
Note: VMware use the term “ESXi Shell”; most of us would just call it “SSH”. The two are used interchangeably in this article, although there is a slight difference: you can have the ESXi Shell enabled but SSH disabled, which means you can only access the shell via the DCUI. For the sake of this article, assume ESXi Shell and SSH are the same.
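As a flavour of what “locked down” can look like in practice, the shell/SSH services can be set to switch themselves off after a period, and idle sessions timed out, via a couple of advanced settings. The 900-second values below are examples only, not a recommendation:
esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 900   # example value, in seconds
esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 900   # example value, in seconds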
Losing a root password isn’t something that happens often, but when it does it’s normally really irritating. I have to rotate the password of all hosts once a month for compliance, but sometimes a host drops out of the loop and the root password gets lost. Fortunately, as the vpxuser account is still valid I can manage the host via vCenter – which lends itself to this little recovery process:
- Join the host to the domain (I’ve got a handy post for that here)
- Create the “ESX Admins” group in your AD and ensure that you are a member. The AD group will automatically be given full administrator rights on the host (the group name is configurable – see the note after this list).
- Wait for replication, and the host to pick up the group and membership – it took about 15 minutes for me.
- You can now connect directly to the host using the vSphere Client – head on to the “Local Users & Groups” page and edit “root”:
- You should now be able to connect to the host using your new root password.
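A note on that group name: “ESX Admins” is just the default the host looks for. It’s driven by the Config.HostAgent.plugins.hostsvc.esxAdminsGroup advanced setting, so if creating a new group isn’t an option you can point the host at an existing one (the option path is from memory, so verify it on your build; the group name below is a placeholder):
esxcli system settings advanced set -o /Config/HostAgent/plugins/hostsvc/esxAdminsGroup -s "<your AD group>"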
This is the second article in a series of vSphere Security articles that I have planned. The majority of this article is based on vSphere/ESXi 5.1, though I will include any 5.5 information that I find relevant. The first article in this series was vSphere Security: Understanding ESXi 5.x Lockdown Mode.
Why would you want to join an ESXi host to an Active Directory domain? Well, you’re not going to get Group Policies applying; what you’re really doing is adding another authentication provider directly to the ESXi host. You will see a computer object created in AD, but you will still need to create a DNS entry (or configure DHCP to do it for you). What you will get is a way to audit root access to your hosts, single sign-on for administrators managing all aspects of your virtual environment, and more options in your administrative arsenal – for example, if you’re using an AD group to manage host root access, you don’t have to log onto however many ESXi hosts you have to remove a user’s permissions; you simply remove them from the group. You can keep your root passwords in a sealed envelope for emergencies!