DefinIT
vSphere 6 Lab Upgrade – Upgrading ESXi 5.5
Sam
01/04/2015

I tested vSphere 6 quite intensively when it was in beta, but I never upgraded my lab – basically because I need a stable environment to work on, and I wasn't sure I could maintain that with the beta.

Now that vSphere 6 has been GA for a while and I have a little bit of time, I have begun the lab upgrade process. You can read a bit more about my lab hardware over on my lab page.

Checking for driver compatibility

In vSphere 5.5, VMware dropped the drivers for quite a few consumer-grade NICs – in 6 they have gone a step further and actually blocked quite a few of them using a VIB package. For more information, see this excellent article by Andreas Peetz.
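
If you want to check which driver VIBs are actually installed on a host before upgrading, a quick look from the ESXi shell does the trick (assuming SSH is enabled on the host; network driver VIBs are typically named net-*):

esxcli software vib list | grep net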

To list the NIC drivers you’re using on your ESXi hosts, use the following command:

esxcli network nic list | awk '{print $1}' | grep '[0-9]' | while read a; do ethtool -i $a; done

As you can see from the results, my HP N54Ls are running three NICs: an onboard Broadcom and two Intel PCI NICs. Fortunately the Broadcom chip is supported, and the e1000e driver I'm using is compatible with vSphere 6 – in fact, it is superseded by a native driver package. (more…)

Some useful VMware related diagrams
Simon
01/04/2014

One of the things that never fails to amaze me is the superb PDF diagrams I occasionally stumble upon, so I thought it would be useful to list some of the ones I have found on my travels.

vSphere 6 ESXTOP quick Overview for Troubleshooting

http://www.running-system.com/wp-content/uploads/2015/04/ESXTOP_vSphere6.pdf

VMware vSphere 5 Memory Management and Monitoring diagram

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2017642

Concepts and best practices in Resource Pools

http://federicocinalli.com/blog/item/194-concepts-and-best-practices-in-resource-pools#.Uzre1_ldV8F

(more…)

Path Selection Policies after removing PernixData
Simon
26/03/2014

Recently I have had the pleasure of using PernixData, but I did come across a bit of a 'gotcha' after uninstalling it from my hosts.

If, like me, you use iSCSI then you will likely have spent a bit of time setting up your Path Selection Policies to suit your specific needs, so it was interesting to note the following.

When you uninstall and remove PernixData from your hosts, your Path Selection Policies do not revert to your original configuration; rather, they revert to the default vSphere setting of MRU (Most Recently Used).

This is worth noting, as it is not mentioned in the documentation PernixData provides.
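
If you need to put your preferred policy back by hand, a sketch from the ESXi shell looks like this – check the current policy on each device first, and substitute your own device identifier for the placeholder:

esxcli storage nmp device list

esxcli storage nmp device set --device <device ID> --psp VMW_PSP_RR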

UPDATE – After being contacted by the guys at PernixData I can confirm they will be updating their documentation shortly to reflect this outcome.

Extending a vCenter Orchestrator (vCO) Workflow with ForEach – Backing up all ESXi hosts in a Cluster
Sam
20/03/2014

In my previous post Backing up ESXi 5.5 host configurations with vCenter Orchestrator (vCO) – Workflow design walkthrough I showed how to create a workflow to back up host configurations, but it was limited to one host at a time. For this post I'm going to show how to create a new workflow that calls the previous one on multiple hosts, using a ForEach loop to run it against each one. This was actually easier than I had anticipated (having read posts on previous versions of vCO that involved creating the loops manually).

Ready? Here we go…

Create a new workflow – I've called mine "DefinIT-Backup-ESXi-Config-Cluster" – and open the Schema view. Drag a "ForEach" element onto the workflow and the Chooser window pops up. This is a bit deceptive at first because it's blank! However, if you enter some search text it will bring up existing workflows, so I searched for the "DefinIT-Backup-ESXi-Config" workflow that was created in the previous post. (more…)

Backing up ESXi 5.5 host configurations with vCenter Orchestrator (vCO) – Workflow design walkthrough
Sam
13/03/2014

As a little learning project, I thought I'd take on Simon's previous post about backing up ESXi configurations and extend it to vCenter Orchestrator (vCO), and document how I go about building up a workflow. I'm learning more and more about vCO all the time, but I found it has a really steep entry point, and finding use cases is hard if you haven't explored its capabilities.

The steps I want to create in this post are:

  1. Right-click a host to trigger the workflow
  2. Sync and then back up the configuration to a file
  3. Copy the file to a datastore, creating a folder based on the date and the host

Ideas for future extensions of this workflow include: triggering from a cluster/datastore/host folder, email notifications, emailing the backup files, and backing up the config before triggering a VMware Update Manager remediation. Hopefully all this will come in future posts – for now, let's get stuck into the creation of this workflow.

(more…)

How to back up and restore your ESXi host config
Simon
05/03/2014

There are many ways to tackle the problem of quickly redeploying or recovering ESXi hosts – Host Profiles, Auto Deploy and so on. However, such options are often out of reach for SME/SMB users, whose licensing does not cover those features, or whose clusters are so small that Auto Deploy would be considered overkill.

So how can we back up the config of our ESXi hosts? There is a great command you can use in the vSphere CLI, "vicfg-cfgbackup.pl", which when used with certain switches can either back up or restore your ESXi host config.

Backing up a host

Quite simply, you fire up your vSphere CLI client and run the command as shown below. Make sure you define a file name as well as the destination folder, or it will error.
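
A sketch of the backup call – the hostname and output path here are examples, so substitute your own:

vicfg-cfgbackup.pl --server <hostname or IP> -s C:\backups\esxi01-config.tgz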

You will then be prompted for authentication to the host; assuming you input the correct credentials, the firmware configuration will be saved to the folder you specified.

You may notice that in my example I saved the file as a .tgz; you can drill into the .tgz file and see all of the config this process saves, which is kind of handy if you want to be doubly sure it did the job correctly.

Restoring a host

So now you want to restore a host from a backup you have taken. We can use the same command, but with the -l switch.
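
The restore call mirrors the backup one – again, the hostname and path are examples:

vicfg-cfgbackup.pl --server <hostname or IP> -l C:\backups\esxi01-config.tgz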

Important things to note

  • This action will reboot your host.
  • The command will place your host in maintenance mode, so you will need to evacuate any VMs on the host first.
  • Placing the host into maintenance mode yourself before running the command will not work, and it will error – the process needs to put the host into maintenance mode itself.
  • If you are running a small cluster, you will likely need to disable HA while you perform this action, to avoid errors being generated due to the lack of available resources.

I have found this really handy when I wish to restore a host to a previous running config – for example, it saves having to re-enter all of your network config.

Reclaiming an SSD device from ESXi 5.5 Virtual Flash
Sam
27/02/2014

After having a play with Virtual Flash and Host Caching on one of my lab hosts I wanted to re-use the SSD drive, but couldn’t seem to get vFlash to release the drive. I disabled flash usage on all VMs and disabled the Host Cache, then went to the Virtual Flash Resource Management page to click the “Remove All” button. That failed with errors:

“Host’s virtual flash resource is inaccessible.”

“The object or item referred to could not be found.”

In order to reclaim the SSD you need to erase the proprietary vFlash File System partition using some command line kung fu. SSH into your host and list the disks:

ls /vmfs/devices/disks

You’ll see something similar to this:

image

You can see the disk ID “t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F” and below it appended with the “:1” which is partition 1 on the disk. This is the partition that I need to delete. I then use partedUtil to delete the partition I just identified using the format below:

partedUtil delete "/vmfs/devices/disks/<disk ID>" <partition number>

partedUtil delete "/vmfs/devices/disks/t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F" 1

There’s no output after the command:

image
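
To double-check, you can dump the partition table again – after the delete it should come back empty (same disk ID placeholder as above):

partedUtil getptbl "/vmfs/devices/disks/<disk ID>"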

Now I can go and reclaim the SSD as a VMFS volume as required.

Hope that helps!

Troubleshooting – esx.problem.visorfs.ramdisk.full
Simon
04/02/2014

Problem

Fairly recently I came across this error message on one of my hosts: "esx.problem.visorfs.ramdisk.full".

Fallout

While trying to address the issue, I had the following problems when the ramdisk did indeed fill up:

  • PSOD (the worst case – this happened only once in my experience)
  • VMs struggling to vMotion from the affected host when putting it into maintenance mode

Temporary workaround

A reboot of the host would clear the problem (empty the ramdisk) for a short while, but it will return if not addressed properly.

Environment

  • Clustered ESXi hosts (version 5.1)
  • vCenter 5.1

Steps taken

First of all I had to see just how bad things were, so I connected to the affected host via SSH (you may need to start the SSH service on the host, as by default it is usually stopped).

Using the following command I could determine how full the ramdisk was:

vdf -h

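As a cross-check, the host's ramdisks and their usage can also be listed with esxcli – I believe this namespace is present in 5.1, but treat that as an assumption:

esxcli system visorfs ramdisk list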

It was clear the root was full, so the next step was to find out what was filling it up; I went searching through VMware KB articles for answers.

Unlike many of the other articles I had read, which all seemed to point to /var/log as the cause, the culprit in this instance was the /scratch disk.

After doing some quick reading it was clear I could set a separate, persistent location for the /scratch disk, so I followed the VMware KB article on this process and rebooted the host to apply the changes.
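
For reference, the change amounts to pointing the ScratchConfig.ConfiguredScratchLocation advanced setting at a directory on persistent storage – a sketch from the ESXi shell, where the datastore and directory names are examples:

mkdir /vmfs/volumes/datastore1/.locker-esxi01
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esxi01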

The gotcha

Even though I had followed the article to the letter and rebooted the host, the changes did not apply on this host. When I double-checked the ramdisk, the used space was already growing – and when I put the console window side by side with the other hosts', it was growing rapidly, whereas the other hosts were quite static in used space.

It was not until I rebooted the host again that the changes I made applied. I will point out that this was the only time this little issue occurred – the other hosts only required one reboot after changing the /scratch location.

Finally

After the second reboot I could see the host was as it should be.

How to Virtualize Mac OS X using ESXi 5.1
Simon
17/01/2014

As a proof of concept I recently tried to virtualize OS X (Mountain Lion) – it is important to note that VMware is now licensed to do so, and you can read more here.

The following is an overview of the steps I followed to achieve my goal – in some cases it was trial and error, as I am not a regular Mac user.

The Hardware

As OS X requires Apple hardware to run, you will have to find yourself a Mac that will install and run ESXi. You can check VMware's HCL; even though the results only listed the MacPro5,1, I was able to run ESXi 5.1 on a MacPro4,1. I did try it on an earlier MacPro, but no joy. For this proof of concept I have the following hardware:

  • 2x 4core MacPro4,1
  • 7GB RAM
  • Single 1TB SATA Drive

I am also aware others have used Mac minis as lab machines, but I will not cover that here.

ESXi installation

The installation is simple: burn the ESXi 5.1 ISO to a CD, boot the MacPro from it, and then follow the usual steps to deploy ESXi.

Note – if you find nothing happens and you end up with a black screen saying "Select CD-ROM boot type", it's likely your MacPro cannot run ESXi, though I have read a few articles where individuals have performed firmware updates and the like to get past this.

Once you have ESXi installed, configure it in whatever fashion you wish (a static IP is never a bad idea).
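
If you'd rather set the static IP from the shell than the DCUI, something along these lines should work – the interface name and addresses here are examples for illustration:

esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.50 -N 255.255.255.0
esxcfg-route 192.168.1.1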

(more…)