As promised, vROps Webinar Series 2017 is back with the second episode of the year. Last time around we looked closely at the features of vROps 6.5, and, as stated during that webinar, we will now show you how you can unlock the full capabilities of vROps using the extensibility of the platform.
If you have been following the Webinar Series, by now you have complete visibility into the capabilities of vROps when it comes to monitoring the vSphere infrastructure.
I ran into a strange one with my lab today where the previously working VSAN cluster couldn’t be enabled. Symptoms included:
- The button to enable VSAN was missing from the vSphere Web Client
- vsphere_client_virgo.log had the following error:

[2016-09-16T14:49:03.473Z] [ERROR] http-bio-9090-exec-18 70001918 100023 200008 com.vmware.vise.data.query.impl.DataServiceImpl Error occurred while executing query:
QueryName: dam-auto-generated: ConfigureVsanActionResolver:dr-57
Target: ManagedObjectReference: type = ClusterComputeResource, value = domain-c481, serverGuid = a44e7d15-e63f-46c2-a1aa-b9b1cbf972be
I was able to enable VSAN on the cluster using RVC commands.
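For reference, an RVC session of this kind looks roughly like the sketch below – a minimal example assuming a vCenter at vcenter.lab.local with a datacenter named Datacenter and a cluster named Cluster01 (all placeholder names, not the actual objects from my lab):

```
# Connect RVC to vCenter (SSO user and hostname are placeholders)
rvc administrator@vsphere.local@vcenter.lab.local

# Navigate to the cluster object in the inventory
cd /vcenter.lab.local/Datacenter/computers

# Enable VSAN on the cluster, then confirm its state
vsan.enable_vsan_on_cluster Cluster01
vsan.cluster_info Cluster01
```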
This time around, Iwan Rahabok will lead the next session of the vROps Webinar Series, while Sunny and I will support him in delivering some awesome content that Iwan has developed over the past few months.
Yes, this time around we will move our focus from vROps as a product and its related features to the concept of running your SDDC operations with vRealize Operations Dashboards. Just to clarify, this is not a session where we will teach you to create dashboards; rather, it is a session where we share how a set of Customised Dashboards can help any organisation’s IT gain insight into Storage, Network & Compute within your SDDC.
I have been running my lab without FVP since vSphere 6 was released, simply because I can’t afford to wait on learning new versions until 3rd party software catches up. It makes you truly appreciate the awesome power of FVP, even on the less than spectacular hardware in my lab, when it’s taken away for a while.
Now that FVP 3.0 has GA’d, I’m looking forward to getting my lab storage accelerated - it makes a huge difference.
Recently I had the good fortune to be invited along to a blogger briefing with Satyam Vaghani, CTO and co-founder of PernixData.
For those of you not in the know, Satyam already has quite the track record, most notably authoring 50+ patents and spending 10 years at VMware as Principal Engineer and Storage CTO. So it is safe to say he knows a thing or two about storage and related technology!
Nine of us (bloggers) were in attendance.
I recently came across Infinio and after reading about the unique way it tackled the problem of increasing I/O and reducing latency I was curious to see how it would perform in my lab.
A few things I would like to point out: first of all, Infinio works only with NFS storage; secondly, it does not require flash storage, as it utilizes host RAM instead – however, it provides only read acceleration.
I recently got my hands on a copy* of Chris Wahl and Steve Pantol’s Networking for VMware Administrators and was very keen to read it – especially given the reputation of the authors. I came to the book as someone at CCNA level (although the certification has now expired) who regularly designs complex VMware networks using standard and distributed switches. I would class myself as having a fairly decent understanding of networking, though not a networking specialist.
Since the keynote by Frank Denneman at the LonVMUG many months ago, the PernixData product has been something I have wanted to test, to see what benefits it may or may not bring to our SQL environment. I did have the good fortune to briefly beta test it last year, but this blog post will cover the current full release (FVP 1.0). I am aware that 1.5 is just around the corner, and with it comes full support for vSphere 5.5.
You’d be surprised how many times I see a datastore that’s just been un-presented from hosts rather than decommissioned correctly – in one notable case I saw a distributed switch crippled for a whole cluster because the datastore in question was being used to store the VDS configuration.
This is the process that I follow to ensure datastores are decommissioned without any issues – they need to comply with these requirements.
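As a rough illustration of the unmount and detach steps in such a process, the commands below are a minimal sketch run from the ESXi shell on each host – the datastore label Old_Datastore01 and the naa identifier are placeholders, and this is not the complete checklist (things like registered VMs and templates, HA datastore heartbeating, and files such as the VDS configuration mentioned above all need to be checked first):

```
# List mounted volumes and note the device (naa ID) backing the datastore
esxcli storage filesystem list

# Unmount the datastore from this host (repeat on every host it is presented to)
esxcli storage filesystem unmount --volume-label=Old_Datastore01

# Detach the backing device so the LUN can then be un-presented from the array
esxcli storage core device set --state=off --device=naa.60060160xxxxxxxxxxxxxxxx
```

Detaching the device before the storage team un-presents the LUN avoids the all-paths-down condition that an abrupt removal can otherwise trigger.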
As some of you read previously, I had been experiencing disk latency issues on our SAN and tried many initial methods to troubleshoot and understand the root cause. Due to other more pressing issues this was set aside, until we started to see VMs occasionally being restarted by vSphere HA because the lock on a given VMDK file had been lost. (NOT GOOD!!)
3x vSphere 5.1 Hosts