Recently I installed and configured a new ESXi host for a client; they're a small company and only require a single host. The host in question was an IBM x3650 M3, an excellent workhorse for virtualisation and one of five or six of the same model that I've installed in the last year. In addition to the onboard Broadcom dual gigabit NIC, we always install at least a second Intel dual gigabit card for resilience, redundancy and performance.
For some reason, this time the ESXi installation didn't pick up the additional NIC. After reseating the card, checking the UEFI (new-fangled BIOS) settings and getting to the point of requesting a returns code from our suppliers, I thought I'd try installing the drivers… you know, just in case.
What you need:
- VMware vSphere CLI installed on your admin machine (mine's a Windows 7 desktop)
- The latest driver CD image for your ESXi component; burn it to a CD or mount the ISO
- Any running VMs shut down and the host in Maintenance mode
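That last step can be done from the vSphere CLI itself rather than the vSphere Client. A sketch, assuming a hypothetical host name of esxi01.example.com (substitute your own; leaving out --password makes the tool prompt for it):

```shell
# vicfg-hostops.pl ships with the vSphere CLI alongside vihostupdate.pl.

# Enter maintenance mode (VMs must already be powered off or suspended):
vicfg-hostops.pl --server esxi01.example.com --username root --operation enter

# "info" reports the host's details, including whether it is in maintenance mode:
vicfg-hostops.pl --server esxi01.example.com --username root --operation info
```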
From there, open a CLI window, which looks just like a command prompt, because it is. Run the vihostupdate.pl script:
vihostupdate.pl --server <host> --username <user> --password <password> --install --bundle "e:\path\to\bundle.zip"
Reboot the host and the driver will be loaded, with the new hardware ready to use.
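If you want to confirm the bundle took before handing the host back, the same vSphere CLI can query it. A sketch using the same hypothetical host name as above:

```shell
# List the bulletins/bundles installed on the host:
vihostupdate.pl --server esxi01.example.com --username root --query

# List the physical NICs ESXi can now see (esxcfg-nics.pl is part of vCLI):
esxcfg-nics.pl --server esxi01.example.com --username root -l

# Don't forget to take the host out of maintenance mode afterwards:
vicfg-hostops.pl --server esxi01.example.com --username root --operation exit
```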
I am pretty sure that I didn’t need to do that on any of the other identical x3650s with Intel NICs that I have set up over the last year, so what has changed? The install media I used was freshly downloaded, so that could be a factor, as could the UEFI version on the server. Whatever it was, it won’t surprise me again!