DefinIT

One Node vSAN Lab Host

| 12/12/2017 |

A little while ago I replaced my three ageing Intel NUC hosts with a single (still ageing) Dell T7500 workstation. The workstation provides 24 processor cores and 96GB RAM for a really reasonable price, while still being quiet enough to sit in my home office. One of the driving factors in retiring the old NUCs was vSAN – newer generations of NUC can take both an M.2 and a SATA SSD, but my 1st gen. models could only fit a single M.2.

This new single host presents a challenge though – a single node vSAN is not a supported configuration! To get it working, we have to force vSAN to do things it doesn’t want to do. To this end, let me be very clear: this is not a supported configuration. It is not for production. Don’t do it without understanding the consequences – and don’t put data you can’t afford to lose on it. Back up everything.

Enabling vSAN on a single host

Firstly, enable vSAN on either the existing VMkernel interface, or create a new VMkernel interface dedicated to vSAN. If the host is currently standalone (because you’ll deploy vCenter to vSAN later, for example), you can use an esxcli command to “tick the box” using the VMkernel ID (e.g. vmk0):

esxcli vsan network ipv4 add -i vmk0
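To confirm the interface is now tagged for vSAN traffic, you can list the vSAN network configuration – the VMkernel interface you chose should appear with a traffic type of vsan:

esxcli vsan network list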


Next, we need to update the default vSAN policy to allow us to use a single-host vSAN. Querying the default policy via esxcli shows that the cluster, vdisk and vmnamespace classes are configured with a hostFailuresToTolerate (FTT) value of 1, while vmswap and vmem have FTT set to 0 and forceProvisioning enabled (that is, provision even if the policy cannot be met).

[root@t7500:~] esxcli vsan policy getdefault
Policy Class Policy Value 
------------ --------------------------------------------------------
cluster (("hostFailuresToTolerate" i1))
vdisk (("hostFailuresToTolerate" i1))
vmnamespace (("hostFailuresToTolerate" i1))
vmswap (("hostFailuresToTolerate" i0) ("forceProvisioning" i1))
vmem (("hostFailuresToTolerate" i0) ("forceProvisioning" i1))

Using esxcli we can update the policy to set hostFailuresToTolerate to zero, which means vSAN will not attempt to replicate data to mitigate a host failure, and enable forceProvisioning on the cluster, vdisk and vmnamespace classes (vmswap is re-applied with the values it already has by default):

esxcli vsan policy setdefault -c cluster -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmswap -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"

Now, re-running the getdefault command shows the policy has updated.
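Based on the values we’ve just set, the output should now look something like this (vmem already had these values by default):

[root@t7500:~] esxcli vsan policy getdefault
Policy Class Policy Value 
------------ --------------------------------------------------------
cluster (("hostFailuresToTolerate" i0) ("forceProvisioning" i1))
vdisk (("hostFailuresToTolerate" i0) ("forceProvisioning" i1))
vmnamespace (("hostFailuresToTolerate" i0) ("forceProvisioning" i1))
vmswap (("hostFailuresToTolerate" i0) ("forceProvisioning" i1))
vmem (("hostFailuresToTolerate" i0) ("forceProvisioning" i1))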

From here you can create your vSAN cluster and claim the disks on the host using esxcli:

esxcli vsan cluster new
esxcli vsan storage add -s <SSD identifier> -d <HDD identifier>
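If you’re not sure which device identifiers to use, vdq -q lists the disks ESXi considers eligible for vSAN (including whether each is seen as an SSD), and esxcli storage core device list gives the full device details:

vdq -q
esxcli storage core device list

The -d parameter can be given multiple times to claim more than one capacity disk into the disk group.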

The vSAN storage is now available, and you can deploy your vCenter (or other VMs) to the datastore.
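Before deploying anything, it’s worth a quick sanity check – esxcli vsan cluster get should show the host as the sole member of the new cluster, and the vsanDatastore should now be visible on the host:

esxcli vsan cluster get
esxcli storage filesystem list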

And finally…

If you are running a single-node vSAN under vCenter, you’ll want to enable vSAN from within vCenter, and also update the default vSAN storage policy to match the settings above:

Create vSAN One Node Policy

Simon’s #VMworld 2017: Sunday and Monday

Sunday

I arrived early on Sunday, as the local flight choices from Bristol are more limited than from a larger airport. I’m very fortunate to have a hotel so close to the VMworld venue; it’s perhaps not so great for the evening activities, but I am happy with it this way around.

Other than registration (4pm-8pm), the plan is just to hopefully catch up with a few folk who have also arrived early.

In the evening I had the pleasure of meeting many awesome people from the vCommunity.

My current focus has been on vRA, so it was great to meet some well-known and knowledgeable people in that space.


Sam’s #VMworld 2017: vSAN Specialist and VMware {Code} Hackathon

VMware vSAN 2017 Specialist Exam

I always like to take a discounted exam at VMworld; this year I opted for the VMware vSAN 2017 Specialist exam, which was released a few weeks ago. Having delivered quite a few vSAN-based solutions over the last few years, I was fairly confident in the blueprint. I am pleased to say that I passed the exam with a score of 422, way higher than I expected! I thought the exam itself was fair, and covered and tested the basics of vSAN well. You definitely need to know the supported architectures and how storage policies affect and apply to the data on vSAN.

You can check out my new badge on Acclaim.

VMware Code Hackathon

Having taken part in, and thoroughly enjoyed, last year’s hackathon at VMworld Las Vegas, I was definitely keen to get involved with the event this year. When I signed up there were no teams that took my fancy, so I created my own based on PowerCLI and some of my pod deployment scripts.

I was joined by some really good guys who all put in a serious amount of work in the short time we had – most importantly, we had a great time and learned a lot! We managed to get a mostly working deployment of vROps, an HTML5 interface for the script config, and the beginnings of the PowerShell required to deploy OVF templates from the vSphere Content Library – the Content Library script is available on my GitHub account. I’m hoping to continue working on it to develop a module to contribute to the PowerCLI script examples.

#vSAN Cache Performance Dashboard in #vROps

| 01/02/2017 |

I have recently been able to get vSAN properly up and running in my lab, and took a look at the OOTB dashboards that come with the MPSD (Management Pack for Storage Devices).

As I have a hybrid build (I know many have all-flash arrays), I was interested in how hard my flash cache was working, so I built a dashboard purely focused on this aspect of the vSAN product.

It is a simple dashboard, but it contains what I think is useful information.

You can download it here.

Feedback and questions very welcome.

vRealize Orchestrator Workflow: shutdownVSANCluster

My vSphere lab is split into two halves – a low-power management cluster, powered by 3 Intel NUCs, and a more hefty workload cluster powered by a Dell C6100 chassis with 3 nodes. The workload servers are noisy and power-hungry, so they tend to be powered off when I am not using them, and since they live in my garage, I power them on and off remotely.

To automate the process, I wanted to write an Orchestrator workflow (vRO sits on my management cluster and is therefore always on) that could safely and robustly shut down the workload cluster. There were some design considerations:

  • Something about the C6100 IPMI implementation and the ESXi driver means the hosts do not like being woken from standby mode. It’s annoying, but not the end of the world – I can use ipmitool to power the hosts on from a full shutdown instead (the command-line equivalents of this and the maintenance mode step are shown after this list). If you want to use host standby, there’s a hostStandby and hostExitStandby workflow in the package on GitHub.
  • I run VSAN on the cluster, so I need to enter maintenance mode without evacuating the data (which would take a long time and be pointless).
  • All the running VMs on the cluster should be shut down before the hosts attempt to go into maintenance mode.
  • I want a “what if” option, to see what the workflow would do if I ran it, without actually executing the shutdown – it’s largely for development purposes, but the power down/power on cycle takes a good half hour and I don’t want to waste time on a typo!
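For reference, these are the host-side equivalents of the power-on and “maintenance mode without evacuation” steps above (the BMC address and credentials are placeholders, and the workflow itself drives the vCenter API rather than these commands):

ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> chassis power on
esxcli system maintenanceMode set --enable true --vsanmode noAction

The --vsanmode noAction option is what enters maintenance mode without evacuating or protecting vSAN data – the same behaviour the workflow requests via the API.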


vSphere Web Client – VSAN is Turned Off – Edit button disappears

| 16/09/2016 |

I ran into a strange one with my lab today where the previously working VSAN cluster couldn’t be enabled. Symptoms included:

  • The Edit button to enable VSAN was missing from the vSphere Web Client, which showed “VSAN is Turned OFF”
  • vsphere_client_virgo.log had the following error:

[2016-09-16T14:49:03.473Z] [ERROR] http-bio-9090-exec-18 70001918 100023 200008 com.vmware.vise.data.query.impl.DataServiceImpl Error occurred while executing query:
QuerySpec
QueryName: dam-auto-generated: ConfigureVsanActionResolver:dr-57
ResourceSpec
Constraint: ObjectIdentityConstraint
TargetType: ClusterComputeResource
Target: ManagedObjectReference: type = ClusterComputeResource, value = domain-c481, serverGuid = a44e7d15-e63f-46c2-a1aa-b9b1cbf972be

I was able to enable VSAN on the cluster using rvc commands:

  1. SSH to VCSA
  2. Enable bash shell
  3. rvc administrator@vsphere.local@localhost
  4. vsan.enable_vsan_on_cluster /localhost/<datacenter name>/computers/<cluster name>

After enabling VSAN on the cluster, I was still getting errors:

  • “Unable to load VSAN configuration” when viewing the VSAN configuration for the cluster in the vSphere Web Client
  • “HTTP400 Error” when viewing the cluster summary tab, on the VSAN health widget

The HTTP 400 error led me to KB 2133384, “VMware Virtual SAN 6.x health plug-in fails to load with the error: Unexpected status code: 400”; following the resolution in this KB resolved the issue.

It seems that, yet again, VMware’s certificate tooling does not replace a key certificate, and this is the root cause of the problem. When I deployed the VCSA, I configured the PSC as a subordinate Certificate Authority and followed the documented procedure to replace the certificates. Clearly this one was missed!
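If you hit a similar problem, a quick generic check (plain openssl, not part of the KB procedure – the hostname here is a placeholder) shows which certificate a given endpoint is actually presenting, and whether its issuer matches your subordinate CA:

echo | openssl s_client -connect vcsa.lab.local:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates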

vSphere 6 Lab Upgrade – VSAN

| 02/04/2015 |

I tested vSphere 6 quite intensively when it was in beta, but I didn’t ever upgrade my lab – basically because I need a stable environment to work on and I wasn’t sure that I could maintain that with the beta.

Now that 6 has been GA a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

Upgrading to VSAN 6.0

The upgrade process for VSAN 5.5 to 6.0 is fairly straightforward:

  • Upgrade vCenter Server
  • Upgrade ESXi hosts
  • Upgrade the on-disk format to the new VSAN FS

Other parts of this guide have covered the vCenter and ESXi upgrades, so this one will focus on the disk format upgrade. Once you’ve upgraded both of those, you’ll get a warning on your VSAN cluster that the on-disk format is out of date.

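The disk format upgrade itself is driven from RVC on the vCenter Server. A minimal sketch – I believe the command in the vSAN 6.0 timeframe is vsan.v2_ondisk_upgrade (later RVC builds rename it to vsan.ondisk_upgrade), so check what your version offers:

rvc administrator@vsphere.local@localhost
vsan.v2_ondisk_upgrade /localhost/<datacenter name>/computers/<cluster name>

The upgrade evacuates and reformats one disk group at a time, so expect it to take a while on a populated cluster.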


vSphere 6 Lab Upgrade – Overview

I tested vSphere 6 quite intensively when it was in beta, but I didn’t ever upgrade my lab – basically because I need a stable environment to work on and I wasn’t sure that I could maintain that with the beta.

Now that 6 has been GA a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

I will be upgrading:

  • vCenter Server Appliance – currently 5.5 update 1
  • vSphere Update Manager – currently 5.5 update 1
  • 3 HP N54L resource hosts
  • 1 Intel NUC management host

In my lab I run the various VMware software suites listed below, although I typically run them in nested environments to keep my lab install relatively clean.

  • vCloud Director
  • vRealize Automation
  • vRealize Orchestrator
  • NSX

Other considerations:

  • VSAN – I currently run VSAN 5.5 and will need to upgrade to 6.0
  • Update Manager – I’d prefer to update my hosts using Update Manager where possible
  • Certificates – I currently use a Microsoft CA; I’d like to move to the VMCA as a subordinate CA
  • Drivers – VMware changed the drivers supported in ESXi, and some consumer-grade drivers are blacklisted
  • Backup – I use the excellent Veeam Backup and Replication to protect key lab machines, and I know that it doesn’t yet support vSphere 6. That’s a hit I can take in my lab.

To upgrade, I first need to verify everything is compatible using the VMware Product Interoperability Matrices.

High level plan

Having read a lot of vSphere 6 docs, my upgrade plan is as follows:

  1. Upgrade vCenter Server Appliance
  2. Upgrade vSphere Update Manager
  3. Upgrade ESXi
  4. Upgrade VSAN
  5. Upgrade nested labs and other software suites

VMworld 2014: Day Three and wrap-up

| 20/10/2014 |


*This post was meant to be published on Friday; VMworld sleep deprivation meant I didn’t click the button!*

This is the last post, and a bit of a wrap-up, in my VMworld 2014 series!

There isn’t a keynote on day three, and there’s definitely a “winding down” feel as people tend to arrive later (if at all) and many are…feeling the effects of the previous night shall we say! That said, every session I wanted to attend was still fully booked and it was a case of queuing for the spare seats.

I managed to get into the #SDDC1337 Technical Deep Dive on EVO:RAIL, really to get a good view of what the EVO:RAIL offering is. The session was presented very well and told the story of EVO:RAIL from inception to birth. There was a lot of information about the technologies involved in getting EVO:RAIL to a fully functional product. I was impressed with the 8-month timescale and the team’s focus on doing the core things right rather than the feature creep which VMware can be guilty of.

I think the fact that the hardware is partner-based means that it’s much more accessible for environments that are single-vendor (e.g. HP, EMC, Dell, HDS or Fujitsu shops), because they can purchase under existing agreements without needing to get new suppliers approved, and there’s already familiarity and an ecosystem in place.

With VSAN still really a tier-2 storage solution, I’d expect these to go into remote office environments for large enterprises. I haven’t seen the pricing for EVO:RAIL, but I suspect all that packaged goodness will have a price – probably not one SMBs will like. An interesting idea discussed with Michael Poore (@michaelpoore) was having EVO:RAIL clusters as vCloud endpoints.

That was the last technical session I was able to attend as I had to catch my flight home! It’s hard to summarise in a blog post the value that you get from attending VMworld – as a vExpert I have access to a lot of VMworld sessions online after the event, but VMworld is a lot more than just the sessions. It’s a crazy mix of sessions, networking, meeting old and new friends, vendor parties, sleep deprivation, walking (lots of walking), exams, the solutions exchange and generally being immersed in all things VMware for the duration of the conference.

I would definitely encourage anyone who can get to VMworld, and who loves the technology and community around VMware, to go next year. It’s much more than the sum of its parts!