I flew from Gatwick to Barcelona last night to my very first VMworld!
I’m staying in a hotel that is actually quite far from the conference: it’s a metro, train, and bus journey away from the conference center, and it takes about 40 minutes to get there. On the plus side, I was only 5 minutes away from the VMUG party last night, so I went over for an hour or so.
Losing a root password isn’t something that happens often, but when it does it’s normally really irritating. I have to rotate the passwords of all hosts once a month for compliance, but sometimes a host drops out of the loop and the root password gets lost. Fortunately, because the vpxuser account is still valid, I can manage the host via vCenter, which lends itself to this little recovery process:
This is the second article in a series of vSphere Security articles that I have planned. The majority of this article is based on vSphere/ESXi 5.1, though I will include any 5.5 information that I find relevant. The first article in this series was vSphere Security: Understanding ESXi 5.x Lockdown Mode.
Why would you want to join an ESXi host to an Active Directory domain? Well, you’re not going to get Group Policies applied; what you’re really doing is adding another authentication provider directly to the ESXi host.
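To make that concrete, here is a minimal sketch of the domain join through the vSphere API using pyVmomi. It assumes you already have a `vim.HostSystem` object from an existing connection; the domain and credential names are placeholders, and I duck-type the Active Directory store by its `JoinDomain_Task` method (with pyVmomi proper you could also check `isinstance(store, vim.host.ActiveDirectoryAuthentication)`).

```python
def find_ad_store(supported_stores):
    """Pick the Active Directory provider out of a host's
    authenticationManager.supportedStore list, or return None.
    Duck-typed: only the AD store exposes JoinDomain_Task."""
    for store in supported_stores:
        if hasattr(store, "JoinDomain_Task"):
            return store
    return None


def join_host_to_domain(host, domain, ad_user, ad_password):
    """Join one ESXi host to AD. `host` is a vim.HostSystem; the task
    runs asynchronously on the host itself."""
    stores = host.configManager.authenticationManager.supportedStore
    ad_store = find_ad_store(stores)
    if ad_store is None:
        raise RuntimeError("host has no Active Directory authentication support")
    return ad_store.JoinDomain_Task(
        domainName=domain, userName=ad_user, password=ad_password)
```

Once joined, members of the relevant AD group can authenticate against the host directly, which is exactly the “another authentication provider” point above.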
This is the first article in a series of vSphere Security articles that I have planned. The majority of this article is based on vSphere/ESXi 5.1, though I will include any 5.5 information that I find relevant.
I think lockdown mode is a feature that is rarely understood, and even more rarely used. While researching this article I’ve already encountered several different definitions that weren’t quite right. As far as I can see, there are no differences between lockdown mode in 5.
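For reference, toggling lockdown mode programmatically is a one-liner against the `vim.HostSystem` managed object. This is a sketch assuming pyVmomi and a host managed by vCenter; in the 5.x API the current state is reflected by `config.adminDisabled` (a host in lockdown only accepts requests via vpxuser, i.e. through vCenter).

```python
def set_lockdown(host, enable=True):
    """Enable or disable lockdown mode on a vim.HostSystem, skipping the
    call if the host is already in the requested state."""
    in_lockdown = bool(host.config.adminDisabled)  # True while locked down (5.x)
    if enable and not in_lockdown:
        host.EnterLockdownMode()
    elif not enable and in_lockdown:
        host.ExitLockdownMode()
```

The state check matters because calling `EnterLockdownMode()` on a host that is already locked down raises a fault rather than silently succeeding.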
With vSphere 5.5 announced at VMworld San Francisco, I was very eager to see what was new, and after devouring all of the great blog posts from the guys in attendance I wanted to summarize, in my own way, the aspects I think are great!
**VMDK 2TB limitation removed! (also virtual mode RDMs)** This has to be one of the best pieces of news, as it has been a pain in the rear trying to accommodate really large VMs (the change affects both VMFS and NFS).
You’d be surprised how many times I see a datastore that’s just been unpresented from its hosts rather than decommissioned correctly – in one notable case I saw a distributed switch crippled for a whole cluster because the datastore in question was being used to store the VDS configuration.
This is the process that I follow to ensure datastores are decommissioned without any issues – they need to comply with these requirements:
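A few of those checks can be automated. Here is a pre-flight sketch against a `vim.Datastore` object, loosely following VMware’s unmount guidance (KB 2004605). It is not exhaustive: the property names match the vSphere API, but the datastore-cluster check uses a class-name comparison for illustration (with pyVmomi you would use `isinstance(ds.parent, vim.StoragePod)`), and some conditions, like vSphere HA heartbeating, still need a manual look.

```python
def check_decommission_ready(ds):
    """Return a list of blocking issues for a datastore about to be
    decommissioned; an empty list means these automated checks passed."""
    issues = []
    if ds.vm:  # any registered VMs or templates still live here
        issues.append("virtual machines or templates still registered")
    if type(ds.parent).__name__ == "StoragePod":  # member of a datastore cluster
        issues.append("still a member of a datastore cluster")
    iorm = getattr(ds, "iormConfiguration", None)
    if iorm is not None and iorm.enabled:  # Storage I/O Control
        issues.append("Storage I/O Control still enabled")
    # Still check manually: not used for vSphere HA heartbeating, and not
    # holding host-level files (VDS backing, scratch, dump files, ...)
    return issues
```

Running this across every datastore before an unmount run catches the “still in use” cases, like the VDS-configuration one above, before they bite.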
The vSphere Update Manager Download Service (UMDS) provides a way to download patches for environments that are air-gapped, or for some reason aren’t allowed to go out to the internet themselves – in my case a security policy prevented a DMZ vCenter Server from connecting to the internet directly. The solution is to use UMDS to download the updates to a second server and then update the DMZ vCenter Server from there.
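The recurring part of that workflow is just two UMDS invocations: download, then export the patch store for transfer. A small Python wrapper for scheduling it might look like the sketch below; the install path is a hypothetical Windows default and the `-D`/`-E --export-store` flags are the UMDS 5.x command-line options, so check `vmware-umds --help` on your version.

```python
import subprocess

# Hypothetical install path; adjust for your UMDS version and OS.
UMDS = r"C:\Program Files (x86)\VMware\Infrastructure\Update Manager\vmware-umds.exe"


def umds_commands(export_dir):
    """The two UMDS steps: download new patch content, then export the
    store to a location the isolated Update Manager server can import."""
    return [
        [UMDS, "-D"],                                # download patches and metadata
        [UMDS, "-E", "--export-store", export_dir],  # export for transfer
    ]


def run_umds(export_dir):
    for cmd in umds_commands(export_dir):
        subprocess.check_call(cmd)
```

Point `export_dir` at a share the second server can reach, and the DMZ Update Manager just imports from there on its own schedule.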
If you work in a company with strict password compliance rules, for example under SOX, you might well have to change administrator passwords every month. Doing this on any more than a few hosts is tedious work – even on two hosts it seems like a waste of time logging on to each host via SSH (or even enabling it first) before changing the password. Then we also need to audit the change: there’s no point making it for compliance reasons if we can’t then prove we did it!
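A per-host rotation can be scripted against the API instead of SSH. The sketch below assumes pyVmomi and a *direct* connection to each host (the local account manager isn’t exposed through vCenter); host names and the audit fields are placeholders, and older pyVmomi builds may need an `sslContext` for self-signed certificates.

```python
import datetime


def rotate_root_password(host_name, old_password, new_password):
    """Connect straight to one ESXi host as root and update the local
    root account. Requires pyVmomi and the current root credentials."""
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim
    si = SmartConnect(host=host_name, user="root", pwd=old_password)
    try:
        spec = vim.host.LocalAccountManager.AccountSpecification(
            id="root", password=new_password)
        # content.accountManager is only populated on direct host connections
        si.content.accountManager.UpdateUser(spec)
    finally:
        Disconnect(si)


def audit_record(host_name, changed_by):
    """One row for the compliance audit trail: who changed what, and when."""
    when = datetime.datetime.utcnow().isoformat() + "Z"
    return [when, host_name, changed_by, "root password rotated"]
```

Looping `rotate_root_password` over the host list and appending each `audit_record` to a CSV gives you both halves: the change and the proof.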
A problem reared its head over the weekend with one of our hosts’ Fibre Channel HBAs negotiating its way down to 2Gb/s, and consequently introducing massive latency on the LUNs behind it. Analysis showed that the drivers for the HBA were over a year out of date, so the fix suggested by VMware was to update the drivers. This is fine to do manually for a few hosts, but would be a real pain for the 300+ hosts in the environment I manage.
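At that scale it helps to generate the rollout steps per host up front, so each change window is just executing a known list. A sketch, assuming the driver ships as an offline bundle already staged on a shared datastore (the path below is hypothetical); the install step is the standard ESXi 5.x `esxcli software vib update -d <zip>` syntax:

```python
def driver_update_plan(hosts, bundle_path):
    """Build a per-host step list for rolling out an HBA driver bundle.
    bundle_path is the full datastore path to the offline bundle .zip,
    e.g. '/vmfs/volumes/shared01/fc-driver-bundle.zip' (hypothetical)."""
    plan = {}
    for host in hosts:
        plan[host] = [
            "enter maintenance mode (evacuate VMs first)",
            "esxcli software vib update -d %s" % bundle_path,
            "reboot",
            "exit maintenance mode",
        ]
    return plan
```

Feeding batches of hosts through the plan (a cluster at a time, keeping enough capacity for the evacuated VMs) turns a 300-host chore into a repeatable loop.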
I’ve previously posted around this topic as part of another problem, but having had to figure out the process again I think it’s worth re-posting a proper script for it. VMware KB 1016106 is snappily titled “ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may take a long time to boot or during LUN rescan” and describes the situation where an ESXi host (5.1 in my case) takes a huge amount of time to boot because it’s attempting to gain SCSI reservations on RDM disks used by Microsoft Cluster Service.
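The KB’s fix is to flag each affected RDM device as perennially reserved so the host stops trying to reserve it at boot or rescan. Given a list of the NAA IDs (which you still have to collect yourself from the MSCS RDMs), generating the per-device esxcli lines is trivial; the command string below is the one from KB 1016106:

```python
def perennial_reservation_cmds(naa_ids, reserved=True):
    """Build the esxcli lines that mark MSCS RDM LUNs perennially
    reserved (or clear the flag with reserved=False). Run the output on
    each host that can see the LUNs."""
    flag = "true" if reserved else "false"
    return [
        "esxcli storage core device setconfig -d %s --perennially-reserved=%s"
        % (naa, flag)
        for naa in naa_ids
    ]
```

Note the setting is per host and doesn’t survive being forgotten: any new host presented with the same RDM LUNs needs the same treatment, or its boot times regress.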