VMware ESX 3.5

Written by Sam McGeown on 25/5/2010
Published under VMware and vSphere
I rebuilt an ESX host in my HA/DRS cluster today, following my build procedure to configure it as per VMware best practices and internal guidelines. When the host was fully configured and up to date, I added it to the cluster and enabled HA and DRS. Then I went to generate some DRS recommendations to balance the load and ease the strain on my overstretched host, but no recommendations were made. I couldn’t manually migrate any VMs either – it was odd, because both hosts had been added to the cluster and could ping and vmkping each other from the console.
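As a quick sanity check of the connectivity mentioned above, a couple of commands from the service console will confirm that both the management network and the VMkernel network (the path VMotion uses) are reachable; the IP addresses below are hypothetical placeholders for the other host's interfaces.

    # Service console connectivity to the other host (hypothetical IP)
    ping 192.168.8.12

    # VMkernel connectivity, i.e. the path VMotion traffic takes (hypothetical IP)
    vmkping 192.168.107.12

    # Confirm which VMkernel NICs and IPs this host actually has configured
    esxcfg-vmknic -l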
Written by Sam McGeown on 14/10/2009
Published under VMware and vSphere
I’m migrating some hosts off of an older storage LUN, but when I drag the disk to the new Datastore with the SVMotion plug-in the job fails with an error. The error occurs because the virtual disk cannot be moved without also moving the VM’s source files (the .vmx, .vswp and so on). The fix is simple: drag the entire VM to the new Datastore, rather than just the virtual disk. If you’re trying to move a 2nd, 3rd or nth disk and you get this error, drag the entire VM across as above; once that has completed, go back into SVMotion and drag the whole VM across again, only this time, before you apply, drag the nth disk back to the new Datastore.
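If you would rather avoid the plug-in altogether, the same move can be driven from the svmotion command in the VMware Remote CLI. This is only a sketch: the VirtualCenter URL, datacenter, VM and datastore names below are hypothetical placeholders, and interactive mode is the easiest way to get the paths right.

    # Interactive mode prompts for each value in turn
    svmotion --interactive

    # Non-interactive sketch: relocate the VM's configuration (and unlisted disks) to NewLUN,
    # placing a specific additional disk explicitly via --disks (all names hypothetical)
    svmotion --url=https://vcenter.example.local/sdk --username=admin \
             --datacenter=MyDatacenter \
             --vm='[OldLUN] SQL01/SQL01.vmx:NewLUN' \
             --disks='[OldLUN] SQL01/SQL01_1.vmdk:NewLUN'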
Written by Sam McGeown on 2/10/2009
Published under VMware
A.K.A. why not to use snapshots. I ran into a slightly confusing problem today: our SQL servers are all created with 4 disks on 4 separate LUNs (System, Swap, SQL Data and SQL Logs). When viewing the server through Virtual Center I couldn’t see all of the LUNs, just the System LUN. It’s not a major problem, as the VM can still see the storage, but it is a little annoying when you have to remember which LUN each disk is on.
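One quick way to confirm where each disk really lives is to read the disk entries out of the VM’s .vmx file on the service console; the datastore and VM names below are hypothetical placeholders.

    # Show every virtual disk the VM is configured with, and the path recorded for each
    grep -i 'fileName.*\.vmdk' /vmfs/volumes/System_LUN/SQL01/SQL01.vmx

    # List the files (including any snapshot delta disks) sitting in the VM's home directory
    ls -lh /vmfs/volumes/System_LUN/SQL01/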
Written by Sam McGeown on 30/9/2009
Published under Networking and VMware
Here’s the setup: the core switch is a pair of Cisco 3750s, connected together as a single logical switch for fault tolerance, and we have several ESX 3.5 hosts, each with four Gigabit Ethernet NICs installed. The Virtual Machines will all be on VLAN 8 (reserved for internal servers) and the VMKernel will be on VLAN 107 (reserved for VMKernel traffic such as VMotion). I want to create a load-balanced, fault-tolerant aggregate of these four NICs across the core switch.
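On the ESX side, the vSwitch, its four uplinks and the VLAN-tagged port groups can be built from the service console roughly as follows. This is a sketch only: the vSwitch name, port group names and VMkernel IP are hypothetical, and the IP-hash teaming policy that a cross-switch EtherChannel requires is set per vSwitch/port group in the VI Client rather than shown here.

    # Create the vSwitch and attach all four physical uplinks (names assumed)
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic0 vSwitch1
    esxcfg-vswitch -L vmnic1 vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1

    # Port group for Virtual Machines, tagged with VLAN 8
    esxcfg-vswitch -A "VM Network" vSwitch1
    esxcfg-vswitch -v 8 -p "VM Network" vSwitch1

    # Port group and VMkernel interface for VMotion, tagged with VLAN 107 (IP hypothetical)
    esxcfg-vswitch -A "VMkernel" vSwitch1
    esxcfg-vswitch -v 107 -p "VMkernel" vSwitch1
    esxcfg-vmknic -a -i 192.168.107.11 -n 255.255.255.0 "VMkernel"

    # Check the result
    esxcfg-vswitch -l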