After having a play with Virtual Flash and Host Caching on one of my lab hosts I wanted to re-use the SSD drive, but couldn’t seem to get vFlash to release the drive. I disabled flash usage on all VMs and disabled the Host Cache, then went to the Virtual Flash Resource Management page to click the “Remove All” button. That failed with errors:
“Host’s virtual flash resource is inaccessible.”
“The object or item referred to could not be found.”
In order to reclaim the SSD you need to erase the proprietary vFlash File System partition using some command line kung fu. SSH into your host and list the disks:
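The listing itself is just a directory listing of the device nodes (a minimal sketch; on ESXi the raw disks live under `/vmfs/devices/disks`):

```shell
# Raw disks and their partitions live under /vmfs/devices/disks.
# A partition shows up as the disk ID with ":<n>" appended to it.
ls /vmfs/devices/disks/
```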
You’ll see something similar to this:
You can see the disk ID "t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F" and, below it, the same ID with ":1" appended, which is partition 1 on the disk. This is the partition I need to delete. I then use partedUtil to delete the partition I just identified, using the format below:
partedUtil delete "/vmfs/devices/disks/<disk ID>" <partition number>
partedUtil delete "/vmfs/devices/disks/t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F" 1
There’s no output after the command:
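If you want to confirm the partition really is gone before moving on, partedUtil can also print the partition table (same disk ID as above; after the delete, no partition entries should be listed):

```shell
# Dump the partition table for the disk; once the vFlash partition is
# deleted, only the disk label and geometry line should remain.
partedUtil getptbl "/vmfs/devices/disks/t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F"
```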
Now I can go and reclaim the SSD as a VMFS volume as required:
Hope that helps!
In case you missed it, VMware have now released vCHS (VMware vCloud Hybrid Service) in Europe! The first data center resides in Slough, with more data centers planned across Europe in the near future.
Working in an SME with several existing vSphere environments, I found this of real interest, as the need to scale out quickly from our private clouds is rapidly becoming a requirement.
Having already spoken to VMware on the phone to get a rough idea on options and costs I decided to take a look at the Hands-on-labs to see how easy it really is to use and migrate VMs from an existing private cloud to vCHS.
The lab gives you 3 hours for its 128 steps, but to be honest this is very generous (no bad thing), so I was done and dusted in 1.5 hours. The option to split the screen across multiple windows was also very useful. (HoL FTW!)
vCHS has a very simple dashboard, as you can see below.
Management seems very straightforward, with familiar terminology; all of the components you would expect and need are easily accessible via the web interface or via links to your own vCloud Director window.
What I was very keen to learn, though, was how to migrate a VM from my private cloud to vCHS. In this case VMware uses vCloud Connector, which you install in your own environment. You can see the vCloud Connector here at the bottom of the vSphere console window.
Once inside the application, you simply add your local environment first, then the vCHS environment (naturally you would have set up a VPN tunnel in advance). You can see the hands-on-labs example below.
Once both sites are added, you select a VM of your choice in your private cloud, click Actions, choose "Copy", answer the various questions it asks you, and the process begins.
What struck me once this process had completed is how straightforward it all was. I had visions of it being potentially overly complex with numerous caveats, but I simply didn't see any immediate deal breakers for the kinds of usage I would envisage for a vCHS environment.
Of course it's all early days, and this was a Hands-on-lab, but what I saw was very encouraging.
For those of you unaware, VMware recently released the VMware vSphere Mobile Watchlist.
What does it do?
"VMware vSphere Mobile Watchlist allows you to monitor the virtual machines you care about in your vSphere infrastructure remotely on your phone. Discover diagnostic information about any alerts on your VMs using VMware Knowledge Base Articles and the web. Remediate problems from your phone by using power operations or delegate the problem to someone on your team back at the datacenter."
- REMEDIATE REMOTELY
Use power operations to remediate many situations remotely from your device.
- VMS AT A GLANCE
Review the status of these VMs from your device including: their state, health, console and related objects.
I have been using it for a day or so and have found it very useful; presently I have it installed on my Android phone and tablet.
If you use this in conjunction with a VPN, or whatever your preferred secure method of connecting to your work LAN when you are "out and about", it's a great way to quickly take a look at any problematic VMs without needing to fire up your laptop.
It's available on Android and iOS and is well worth a quick look.
Fairly recently I came across this error message on one of my hosts "esx.problem.visorfs.ramdisk.full"
While trying to address the issue, I had the following problems when the ramdisk did indeed "fill up":
- PSOD (worst case happened only once in my experience)
- VMs struggling to vMotion from the affected host when putting it into maintenance mode.
A reboot of the host would clear the problem (clear out the ramdisk) for a short while, but it will return if not addressed properly.
- Clustered ESXi hosts (version 5.1)
- vCenter 5.1
First of all I had to see just how bad things were, so I connected to the affected host via SSH (you may need to start the SSH service on the host, as it is usually stopped by default).
Using the following command I could determine how full the ramdisk was (example output shown).
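One way to get this information from the ESXi shell is `vdf -h`, which reports visorfs usage including each ramdisk:

```shell
# vdf lists visorfs usage on an ESXi host; the Ramdisk section shows
# Size/Used/Use% for each ramdisk, including "root" -- the one to watch
# when chasing esx.problem.visorfs.ramdisk.full.
vdf -h
```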
It was clear that root was full, so the next step was to find out what was filling it up. I searched KB articles for answers; below are the more helpful ones.
Unlike many of the other articles I had read, which all seemed to point to /var/log as the cause, the culprit in this instance was the /scratch disk.
After doing some quick reading it was clear I could set a separate persistent location for the /scratch disk. So I followed the VMware KB article on this process and rebooted the host to apply the changes.
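The KB's approach boils down to pointing the host at a directory on persistent storage via the ScratchConfig.ConfiguredScratchLocation advanced option. A sketch from the ESXi shell, with hypothetical datastore and directory names:

```shell
# Create a unique directory on a persistent VMFS datastore
# ("datastore1" and the .locker name are placeholders -- use your own).
mkdir -p /vmfs/volumes/datastore1/.locker-esx01

# Point the scratch location at it; the change only takes effect
# after the host is rebooted.
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esx01
```

The same option can also be set from the vSphere Client under the host's advanced settings.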
Even though I had followed the article to the letter and rebooted the host, the changes did not apply on this host. When I double-checked the ramdisk, this was the output: the used space was already growing again, and when I put the console window side by side with those of the other hosts it was growing rapidly, whereas the other hosts were quite static in used space.
It was not until I rebooted the host a second time that the changes I made applied. I will point out that this was the only time this little issue occurred; the other hosts only required one reboot after changing the /scratch location.
After the second reboot I could see the host was as it should be.
Just a quick post on something that was not immediately obvious when it happened to me.
When deploying vCSA 5.5 and trying to add it to the domain, I was presented with the following error.
I immediately did all the usual checks, making sure it had a static IP, correct DNS servers, etc.
The one thing missing however was a FQDN for the hostname (in the network tab).
All I had was "vCSAname"
But what was required to join a domain was "vCSAname.domain.local"
After I applied this change the vCSA connected to the domain without a problem.
As always with these niggles, it's simple when you know how!