Written by Sam McGeown on 4/10/2013
Published under VMware

This is the second article in a series of vSphere Security articles that I have planned. The majority of this article is based on vSphere/ESXi 5.1, though I will include any 5.5 information that I find relevant. The first article in this series was vSphere Security: Understanding ESXi 5.x Lockdown Mode.

Why would you want to join an ESXi host to an Active Directory domain? Well, you're not going to get Group Policies applying; what you're really doing is adding another authentication provider directly to the ESXi host. You will see a computer object created in AD, but you will still need to create a DNS entry (or configure DHCP to do it for you). What you do get is a way to audit root access to your hosts, single sign-on for administrators managing all aspects of your virtual environment, and more options in your administrative arsenal – for example, if you're using an AD group to manage host root access, you don't have to log on to however many ESXi hosts you have to remove a user's permissions – simply remove them from the group. You can keep your root passwords in a sealed envelope for emergencies! 😉
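For reference, the domain join can be done from the DCUI, Host Profiles, the vSphere Client or PowerCLI. Here's a minimal PowerCLI sketch – the vCenter, host, domain and service account names are placeholders of my own, not from the post:

```powershell
# Connect to vCenter first (hypothetical server name)
Connect-VIServer -Server "vcenter.example.local"

# Hypothetical host, domain and service account - replace with your own
$vmhost = Get-VMHost -Name "esxi01.example.local"

# Join the host to Active Directory using an account allowed to create computer objects
Get-VMHostAuthentication -VMHost $vmhost |
    Set-VMHostAuthentication -JoinDomain -Domain "example.local" -Username "EXAMPLE\svc-esxi-join" -Password "********" -Confirm:$false

# Verify the membership
Get-VMHostAuthentication -VMHost $vmhost | Select-Object Domain, DomainMembershipStatus
```

Worth noting: once joined, an ESXi 5.x host grants administrator access to the AD group "ESX Admins" by default (the group name is configurable via the Config.HostAgent.plugins.hostsvc.esxAdminsGroup advanced setting), which is exactly the group-based root access described above.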

Written by Sam McGeown on 26/9/2013
Published under VMware, vSphere

This is the first article in a series of vSphere Security articles that I have planned. The majority of this article is based on vSphere/ESXi 5.1, though I will include any 5.5 information that I find relevant.

I think lockdown mode is a feature that is rarely understood, and even more rarely used. Researching this article I've already encountered several different definitions that weren't quite right. As far as I can see there are no differences between lockdown mode in 5.5 and 5.1.
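For context, a quick way to check or toggle lockdown mode on a 5.x host from PowerCLI is via the host's ExtensionData – a minimal sketch, assuming a connection to vCenter and a hypothetical host name:

```powershell
# Hypothetical host name - replace with your own
$vmhost = Get-VMHost -Name "esxi01.example.local"

# On a 5.x host, lockdown mode is reported by the AdminDisabled flag
$vmhost.ExtensionData.Config.AdminDisabled

# Enable lockdown mode (only meaningful for a host managed by vCenter)
$vmhost.ExtensionData.EnterLockdownMode()

# Disable lockdown mode again
$vmhost.ExtensionData.ExitLockdownMode()
```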

Written by Sam McGeown on 20/9/2013
Published under VMware

John Troyer (@jtroyer) asked a question on Twitter last night about a CloudCred prize of $1000-2000:


That got me thinking – was it possible to create an entire 2-host lab with storage on a $2000 budget? My first step was to convert it into a proper currency:

I figured that I’d stick to the Intel NUC route that I’ve gone down for my lab at home – I love the NUC for its tiny form factor, silent operation and really low power consumption. There are downsides – it can only take 16GB RAM, only one mSATA disk and only has one gigabit NIC. I don’t think any of those are too big a deal for a personal lab though – certainly I’ve not had any problems building and testing VMware products on my single NUC. I’d drop in an 8GB stick of RAM and an Intel 60GB mSATA SSD per NUC – you could always go to 16GB later by adding another 8GB stick in the 2nd slot. I picked the Intel mSATA disk for its controller and throughput figures – there are larger and cheaper ones, but not with the same write performance. Since the use of SSD is massively in focus with vFlash, PernixData FVP and several other technologies, you wouldn’t want to miss out. I’ve also added an 8GB USB3 flash drive per NUC to boot ESXi from.

Written by Simon Eady on 11/9/2013
Published under

The South West VMware User Group launches in the UK, bringing the best of VMware and the user community to The West, South West and South Wales.

The leadership team is pleased to announce the South West VMware User Group (VMUG). Meetings will be held in Bristol, at the crossroads of the South West, the West Country, South Wales and the Midlands, beginning early in 2014, bringing together virtualization customers, end users and enthusiasts in an informal social setting for discussion, learning and engagement.

Written by Simon Eady on 6/9/2013
Published under VMware, vSphere

With vSphere 5.5 being announced at VMworld San Francisco, I was very eager to see what was new, and after devouring all of the great blog posts from the guys in attendance I wanted to summarize, in my own way, the aspects I think are great!

  • **VMDK 2TB limitation removed! (also virtual mode RDMs)** This has to be one of the best pieces of news, as it has been a pain in the rear trying to accommodate really large VMs (the change affects both VMFS and NFS).
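As an illustration of what the lifted limit allows, here's a short PowerCLI sketch – the VM name and 3TB size are hypothetical examples of my own, not from the post:

```powershell
# Hypothetical VM name; a 3TB virtual disk would have failed prior to vSphere 5.5
Get-VM -Name "BigFileServer01" | New-HardDisk -CapacityGB 3072 -StorageFormat Thin
```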

Written by Sam McGeown on 2/9/2013
Published under VMware

After my previous post about studying and the exam experience of the VCAP5-DCA exam (and 3 weeks of waking up all night to check my phone for the email), I am pleased to say that I received my exam score last week and it was a pass! I was really happy to see that I passed with a very decent margin too. The rushed nature of the exam and the long wait for the results leaves you going over the exam in your head, convincing yourself how badly you’ve done, so it came as a huge relief and surprise.

Written by Sam McGeown on 28/8/2013
Published under VMware

There’s not a lot more to say than the title of this post – if you create a new Virtual Switch using PowerCLI without specifying the NumPorts parameter, it defaults to 64 ports. This strikes me as odd when the default for a standard switch is 120.

You can see in the screenshot below that when I create a Virtual Switch without the parameter, it is created with 64 ports. Once you subtract the 8 reserved for physical NIC ports (uplinks), CDP traffic and network discovery, that leaves you with 56 ports available for VMs.
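To make the difference explicit, here's a small PowerCLI sketch – the host name is a placeholder – showing the default behaviour and how to request the 120 ports you'd get from the vSphere Client:

```powershell
# Hypothetical host name - replace with your own
$vmhost = Get-VMHost -Name "esxi01.example.local"

# Without -NumPorts the new vSwitch is created with 64 ports
New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1"

# Specify -NumPorts to match the 120-port default of the vSphere Client
New-VirtualSwitch -VMHost $vmhost -Name "vSwitch2" -NumPorts 120

# Confirm the configured port counts
Get-VirtualSwitch -VMHost $vmhost | Select-Object Name, NumPorts
```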

Written by Simon Eady on 21/8/2013
Published under

Just a quick note to say that I made it into a recent book release as a contributor, and naturally I am delighted and proud!

You can find out more here - vSphere Design Pocketbook


Written by Sam McGeown on 20/8/2013
Published under VMware

One of the many perks of being a vExpert is the cool vexpert.me URL shortener provided by Darren Woollard (@dawoo). There are several ways for vExperts to use it once they’ve signed up – there’s a PowerShell script by Jonathan Medd (@jonathanmedd) and Maish Saidel-Keesing (@maishsk), and now even a GUI based on the PowerShell script.

Written by Sam McGeown on 14/8/2013
Published under VMware, vSphere

You’d be surprised how many times I see a datastore that’s just been unpresented from hosts rather than decommissioned correctly – in one notable case I saw a distributed switch crippled for a whole cluster because the datastore in question was being used to store the VDS configuration.

This is the process that I follow to ensure datastores are decommissioned without any issues – they need to comply with these requirements
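As a quick pre-flight check alongside those requirements, a minimal PowerCLI sketch – the datastore name below is hypothetical – to confirm nothing is still registered on the datastore and to see which hosts have it mounted:

```powershell
# Hypothetical datastore name - replace with the datastore being decommissioned
$ds = Get-Datastore -Name "Datastore01"

# Any VMs or templates still registered on the datastore?
Get-VM -Datastore $ds
Get-Template -Datastore $ds

# Which hosts currently have the datastore mounted?
Get-VMHost -Datastore $ds | Select-Object Name
```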