There are different schools of thought as to whether you should have SSH enabled on your hosts. VMware recommends disabling it. With SSH disabled there is no possibility of attack via that channel, so that's the "most secure" option. Of course, in the real world there's a balance between "most secure" and "usable" (e.g. the most secure host is powered off and physically isolated from the network, but then you can't run any workloads on it). My preferred route is to have it enabled but locked down.
Note: VMware uses the term "ESXi Shell", while most of us would call it "SSH" – the two are used interchangeably in this article, although there is a slight difference: you can have the ESXi Shell enabled but SSH disabled, which means you can only access the shell via the DCUI. For the sake of this article, assume ESXi Shell and SSH are the same.
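One way of expressing "enabled but locked down" is to leave the SSH service's startup policy set to off and only start it on demand. A minimal PowerCLI sketch of that approach – the host name is a placeholder:

```powershell
# Placeholder host name; connect to vCenter first with Connect-VIServer
$vmhost = Get-VMHost -Name "esx01.example.local"

# Show the ESXi Shell (TSM) and SSH (TSM-SSH) services and their current state
Get-VMHostService -VMHost $vmhost |
    Where-Object { $_.Key -eq "TSM" -or $_.Key -eq "TSM-SSH" } |
    Select-Object Key, Label, Policy, Running

# Start SSH on demand (the startup policy stays "off")
Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq "TSM-SSH" } | Start-VMHostService

# ...and stop it again when you're done
Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq "TSM-SSH" } | Stop-VMHostService -Confirm:$false
```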
Losing a root password isn't something that happens often, but when it does it's usually thoroughly irritating. I have to rotate the root password on all hosts once a month for compliance, but occasionally a host drops out of the loop and its root password gets lost. Fortunately, because the vpxuser account is still valid I can manage the host via vCenter, which lends itself to this little recovery process:
- Join the host to the domain (I’ve got a handy post for that here)
- Create the “ESX Admins” group in your AD and ensure that you are a member. The AD group will be given full administrator rights on the host automatically.
- Wait for replication, and for the host to pick up the group and its membership – it took about 15 minutes for me.
- You can now connect directly to the host using the vSphere Client – head to the "Local Users & Groups" tab and edit "root" to set a new password.
- You should now be able to connect to the host using your new root password.
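If you prefer to script those final steps, here's a rough PowerCLI equivalent – the host name, domain account and new password below are all placeholders:

```powershell
# Connect *directly* to the host (not vCenter) with an AD account that is a
# member of "ESX Admins" – placeholder names throughout
Connect-VIServer -Server "esx01.example.local" -Credential (Get-Credential "DOMAIN\admin.user")

# Reset the root password – the scripted equivalent of editing "root" in the client
$root = Get-VMHostAccount -User -Id "root"
Set-VMHostAccount -UserAccount $root -Password 'N3w-C0mpl3x-P@ssw0rd!'

Disconnect-VIServer -Confirm:$false
```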
This is the second article in a series of vSphere Security articles that I have planned. The majority of this article is based on vSphere/ESXi 5.1, though I will include any 5.5 information that I find relevant. The first article in this series was vSphere Security: Understanding ESXi 5.x Lockdown Mode.
Why would you want to join an ESXi host to an Active Directory domain? Well, you're not going to get Group Policies applying; what you're really doing is adding another authentication provider directly to the ESXi host. You will see a computer object created in AD, but you will still need to create a DNS entry (or configure DHCP to do it for you). What you do get is a way to audit root access to your hosts, single sign-on for administrators managing all aspects of your virtual environment, and more options in your administrative arsenal – for example, if you're using an AD group to manage host root access, you don't have to log on to however many ESXi hosts you have to remove a user's permissions; you simply remove them from the group. You can keep your root passwords in a sealed envelope for emergencies!
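As a quick illustration of what's involved, the join itself is only a couple of lines of PowerCLI – the host name, domain and service account below are placeholders:

```powershell
# Placeholder host, domain and service account
$vmhost = Get-VMHost -Name "esx01.example.local"

Get-VMHostAuthentication -VMHost $vmhost |
    Set-VMHostAuthentication -JoinDomain -Domain "example.local" `
        -Credential (Get-Credential "EXAMPLE\svc-adjoin") -Confirm:$false

# DomainMembershipStatus should move to "Ok" once the join completes
Get-VMHostAuthentication -VMHost $vmhost | Select-Object Domain, DomainMembershipStatus
```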
John Troyer (@jtroyer) asked a question on Twitter last night about a CloudCred prize of $1000-2000:
@jtroyer a nice lab setup!
— Sam McGeown (@sammcgeown) September 19, 2013
@jtroyer I guess a couple of hosts, storage and a switch, wouldn't get HCL certified for that but I'm sure it's doable!
— Sam McGeown (@sammcgeown) September 19, 2013
With the release of vCenter Log Insight Public Beta (http://communities.vmware.com/community/vmtn/vcenter/vcenter-log-insight) I thought I’d strike while the iron is hot and run through the installation and configuration.
Deploying the OVF
This is such a bread-and-butter task that it doesn't require more than a few words – but it's definitely worth looking at the sizing PDF (VMware-vCenter-Log-Insight-1.0-Beta-Virtual-Appliance-Sizing.pdf) before you deploy, as the appliance isn't small even for a test installation. If you're allocating less than the recommended 8GB of RAM, there are additional steps to change the heap size for performance.
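If you'd rather not click through the wizard, the same deployment can be scripted – a hedged PowerCLI sketch, where the OVA path, appliance name, host and datastore are all placeholders for my environment:

```powershell
# Placeholder paths and names – adjust for your own environment
$vmhost = Get-VMHost -Name "esx01.example.local"
$ds     = Get-Datastore -Name "datastore1"

Import-VApp -Source "C:\OVA\VMware-vCenter-Log-Insight-1.0-Beta.ova" `
    -Name "loginsight01" -VMHost $vmhost -Datastore $ds -DiskStorageFormat Thin
```

Thin provisioning keeps the disk footprint down for a lab deployment; for anything more serious, size it against the PDF above.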
The vSphere Update Manager Download Service (UMDS) provides a way to download patches for VMware servers that are air-gapped or, for some reason, aren't allowed to reach the internet themselves – in my case a security policy prevented a DMZ vCenter Server from connecting to the internet directly. The solution is to use UMDS to download the updates to a second server that was hosted in the DMZ and then update the vCenter Server from there. It can also save on bandwidth if you're running multiple vCenter Servers, which again was the case here (though bandwidth isn't really a constraint).
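From memory, the core of the UMDS workflow on the download server looks roughly like this – the install path and export location are placeholders, and the exact switches vary between versions, so check the documentation and `vmware-umds -G` before relying on them:

```powershell
# Run from the UMDS install directory on the server that *can* reach the internet
# (path and export share below are placeholders; switches may differ by version)
cd "C:\Program Files (x86)\VMware\Infrastructure\Update Manager"

.\vmware-umds.exe -G                                   # show the current configuration
.\vmware-umds.exe -D                                   # download patch metadata and payloads
.\vmware-umds.exe -E --export-store "D:\UMDS-Export"   # export for the Update Manager server to import
```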
DataStore conflicts with an existing DataStore in the DataCenter – Manually disabling Storage I/O Control
I ran into this issue yesterday while reconnecting hosts in our vCenter Server following a complete reinstall - the reasons for which are a long story, but suffice it to say that there were new certificates and the host passwords had been encrypted with the old ones.
The LUNs had been unpresented at the hardware level by the storage team, but had not been unmounted or removed from vCenter. This is *not* the way to remove storage - let me reiterate: remove storage properly. Unfortunately, in this case the storage was removed badly, and doing this can lead to a condition called "All Paths Down" (APD), which is best explained by Cormac Hogan (@vmwarestorage) in the article Handling the All Paths Down (APD) condition.
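As an aside, on a datastore that is still mounted and healthy, Storage I/O Control can be switched off with PowerCLI before the storage is retired properly – a minimal sketch, where the datastore name is a placeholder:

```powershell
# Disable Storage I/O Control on a healthy, still-mounted datastore
# ("Prod-LUN-04" is a placeholder name)
Get-Datastore -Name "Prod-LUN-04" | Set-Datastore -StorageIOControlEnabled $false
```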
Simply because of the size and age of the environment, some of our production clusters have now reached the maximum supported number of paths per host (including local paths) - you'll see this message in the logs:
[2012-08-20 01:48:52.256 77C3DB90 info 'ha-eventmgr'] Event 2003 : The maximum number of supported paths of 1024 has been reached. Path vmhba3:C0:T4:L0 could not be added.
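To see how close each host is to that 1024-path ceiling before it starts logging the message above, something along these lines works in PowerCLI – the cluster name is a placeholder:

```powershell
# Count every storage path (including local devices) per host in a cluster;
# "Production-01" is a placeholder cluster name
Get-Cluster -Name "Production-01" | Get-VMHost | ForEach-Object {
    [PSCustomObject]@{
        Host      = $_.Name
        PathCount = ($_ | Get-ScsiLun | Get-ScsiLunPath).Count
    }
} | Sort-Object PathCount -Descending
```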
Recently I installed and configured a client's new ESXi host; they're a small company and only require a single host. The host in question was an IBM x3650 M3, an excellent workhorse for virtualisation and one of five or six of the same model that I've installed in the last year. In addition to the onboard Broadcom Dual Gigabit NIC, we always install at least a second Intel PCIx Dual Gigabit card for resilience, redundancy and performance.
Recently I had cause to configure iSCSI multipathing on a test ESXi server. The production environment servers use iSCSI HBAs to connect to the back-end storage, so multipathing them is a straightforward setup.
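For a host using the software iSCSI initiator rather than HBAs, the multipathing is built with port binding instead. A rough PowerCLI sketch of that step, assuming the VMkernel ports already exist with a single active uplink each – the host, adapter and vmk names are placeholders:

```powershell
# Bind two VMkernel ports to the software iSCSI adapter so each provides a
# separate path to the array – all names below are placeholders
$vmhost = Get-VMHost -Name "esx-test01.example.local"
$esxcli = Get-EsxCli -VMHost $vmhost -V2

$esxcli.iscsi.networkportal.add.Invoke(@{ adapter = "vmhba33"; nic = "vmk1" })
$esxcli.iscsi.networkportal.add.Invoke(@{ adapter = "vmhba33"; nic = "vmk2" })

# Confirm the bindings
$esxcli.iscsi.networkportal.list.Invoke() | Select-Object Adapter, Vmknic
```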