I ran into a strange one in my lab today where the previously working VSAN cluster couldn’t be enabled. Symptoms included:

- The button to enable VSAN was missing from the vSphere Web Client.
- `vsphere_client_virgo.log` contained the following error:

```
[2016-09-16T14:49:03.473Z] [ERROR] http-bio-9090-exec-18 70001918 100023 200008 com.vmware.vise.data.query.impl.DataServiceImpl Error occurred while executing query: QuerySpec QueryName: dam-auto-generated: ConfigureVsanActionResolver:dr-57 ResourceSpec Constraint: ObjectIdentityConstraint TargetType: ClusterComputeResource
```
<img class="alignright size-medium wp-image-3968" src="/images/2014/02/pernixdata1.png" alt="pernixdata" width="300" height="80" /> I’ve been without FVP since vSphere 6 was released, simply because I can’t afford to wait on learning new versions until 3rd-party software catches up. It makes you truly appreciate the awesome power of FVP, even on my less-than-spectacular lab hardware, when it’s taken away for a while. Now that FVP 3.0 has GA’d, I’m looking forward to getting my lab storage accelerated again – it makes a huge difference.
Recently I had the good fortune to be invited along to a blogger briefing with Satyam Vaghani, CTO and co-founder of PernixData. For those of you not in the know, Satyam already has quite the track record, most notably authoring 50+ patents and serving as Principal Engineer and Storage CTO at VMware for 10 years. So it is safe to say he knows a thing or two about storage and related technology! Nine of us (bloggers) were in attendance.
I recently came across Infinio and, after reading about the unique way it tackles the problem of increasing I/O and reducing latency, I was curious to see how it would perform in my lab. A few things are worth pointing out up front. First of all, Infinio works only with NFS storage; secondly, it does not require flash storage, as it uses host RAM instead; however, it provides only read acceleration.
You’d be surprised how many times I see a datastore that’s just been un-presented from its hosts rather than decommissioned correctly – in one notable case I saw a distributed switch crippled for a whole cluster because the datastore in question was being used to store the VDS configuration. This is the process I follow to ensure datastores are decommissioned without any issues – before unmounting, they need to comply with these requirements:
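The pre-unmount checks can be sketched as a simple gate: refuse to proceed while any blocking condition remains. The sketch below is illustrative only – the `DatastoreState` fields and `safe_to_unmount` helper are hypothetical names I’ve chosen to model the kinds of requirements described above, not an actual vSphere API.

```python
# Hypothetical sketch of a datastore decommissioning pre-flight check.
# The field names below are assumptions modelling common unmount blockers,
# not real vSphere/PowerCLI properties.
from dataclasses import dataclass

@dataclass
class DatastoreState:
    name: str
    registered_vms: int           # VMs still registered on the datastore
    used_for_vds_config: bool     # holds distributed switch (VDS) files
    is_ha_heartbeat: bool         # selected as a vSphere HA heartbeat datastore
    in_storage_drs_cluster: bool  # still a member of a datastore cluster

def safe_to_unmount(ds: DatastoreState) -> list:
    """Return a list of blocking reasons; an empty list means safe to unmount."""
    reasons = []
    if ds.registered_vms > 0:
        reasons.append("%d VM(s) still registered" % ds.registered_vms)
    if ds.used_for_vds_config:
        reasons.append("datastore holds distributed switch files")
    if ds.is_ha_heartbeat:
        reasons.append("datastore is a vSphere HA heartbeat datastore")
    if ds.in_storage_drs_cluster:
        reasons.append("datastore is still in a Storage DRS cluster")
    return reasons

ds = DatastoreState("old-lun-01", registered_vms=0, used_for_vds_config=True,
                    is_ha_heartbeat=False, in_storage_drs_cluster=False)
print(safe_to_unmount(ds))  # one blocker: the VDS files scenario from above
```

In practice you would populate a structure like this from your inventory tooling before touching the LUN; the point is simply that unmounting is gated on every blocker being cleared, not on the datastore merely looking empty.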
As some of you read previously, I had been experiencing disk latency issues on our SAN and tried many initial methods to troubleshoot and understand the root cause. Due to other, more pressing issues this was set aside until we started to experience VMs being occasionally restarted by vSphere HA because the lock had been lost on a given VMDK file. (NOT GOOD!!)

The Environment:

- 3x vSphere 5.1 hosts
- 2x 4-port 1GbE NICs per host (allowing 2x iSCSI vmkernel ports per host for redundancy)
- Dedicated switching (isolated from the LAN) for iSCSI and vMotion (on separate respective VLANs)
- MSA2312i SAN G2 (with 4 shelves)

The iSCSI multipathing policy was set to Round Robin.