Announcing Part 2 – #vROps Webinar series 2017 – “Full Stack” monitoring with vRealize Operations Manager
As promised, the vROps Webinar Series 2017 is back with the second episode of the year. Last time we looked closely at the features of vROps 6.5, and as stated during that webinar, we will now show you how to unlock the full capabilities of vROps using the extensibility of the platform. If you have been following the Webinar Series, by now you have complete visibility into the capabilities of vROps when it comes to monitoring the vSphere infrastructure.
I ran into a strange one with my lab today where the previously working VSAN cluster couldn't be enabled. Symptoms included:

- The button to enable VSAN was missing from the vSphere Web Client.
- vsphere_client_virgo.log had the following error:

[2016-09-16T14:49:03.473Z] [ERROR] http-bio-9090-exec-18 70001918 100023 200008 com.vmware.vise.data.query.impl.DataServiceImpl Error occurred while executing query: QuerySpec QueryName: dam-auto-generated: ConfigureVsanActionResolver:dr-57 ResourceSpec Constraint: ObjectIdentityConstraint TargetType: ClusterComputeResource
This time around Iwan Rahabok will lead the next session of the vROps Webinar Series, while Sunny and I support him in delivering some awesome content which Iwan has developed over the past few months. Yes, this time we will move our focus from vROps as a product and its features to the concept of running your SDDC operations with vRealize Operations dashboards. Just to clarify, this is not a session where we will teach you to create dashboards; rather, it is a session where we will share how a set of customised dashboards can help any organisation's IT team gain insight into Storage, Network & Compute within the SDDC.
<img class="alignright size-medium wp-image-3968" src="/images/2014/02/pernixdata1.png" alt="pernixdata" width="300" height="80" /> … since vSphere 6 was released, simply because I can't afford to wait on learning new versions until 3rd-party software catches up. It makes you truly appreciate the awesome power of FVP, even on the less-than-spectacular hardware in my lab, when it's taken away for a while. Now that FVP 3.0 has GA'd, I'm looking forward to getting my lab storage accelerated again - it makes a huge difference.
Recently I had the good fortune to be invited along to a blogger briefing with Satyam Vaghani, CTO and co-founder of PernixData. For those of you not in the know, Satyam already has quite the track record, most notably authoring 50+ patents and spending ten years at VMware as Principal Engineer and Storage CTO. So it is safe to say he knows a thing or two about storage and related technology! Nine of us bloggers were in attendance.
I recently got my hands on a copy* of Chris Wahl and Steve Pantol’s Networking for VMware Administrators and was very keen to read it – especially given the reputation of the authors. I came to the book as someone who is at CCNA level (although now expired) and someone who regularly designs complex VMware networks using standard and distributed switches. I would class myself as having a fairly decent understanding of networking, though not a networking specialist.
Since the keynote by Frank Denneman at the LonVMUG many months ago, the PernixData product has been something I wanted to test to see what benefits it may or may not bring to our SQL environment. I did have the good fortune to briefly beta test it last year, but this blog post will cover the current full version (FVP 188.8.131.52). I am aware that 1.5 is just around the corner, and with it comes full support for vSphere 5.
If you are close to the VMware ESXi storage path limit of 1024 paths per host, you may want to consider the following: local storage, including CD-ROMs, is counted in your total paths. Simply because of the size and age of the environment, some of our production clusters have now reached the limit (including local paths). You will see this message in the logs: [2012-08-20 01:48:52.256 77C3DB90 info 'ha-eventmgr'] Event 2003 : The maximum number of supported paths of 1024 has been reached.
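To check how close a host is to the limit, one option is to count the paths reported by `esxcli storage core path list`, which prints a block of fields per path. The helper below is a minimal sketch that counts the "Runtime Name:" lines in that output; the sample text is illustrative, not real host data, so verify the field name against your own esxcli output before relying on it.

```python
def count_paths(esxcli_output: str) -> int:
    """Count storage paths in `esxcli storage core path list` output.

    Each path block typically contains a line like
    'Runtime Name: vmhba0:C0:T0:L0', so counting those lines
    approximates the total path count (local paths included).
    """
    return sum(
        1
        for line in esxcli_output.splitlines()
        if line.strip().startswith("Runtime Name:")
    )


# Illustrative sample: one SAS disk path plus a local CD-ROM path.
sample = """\
sas.example-disk-path
   Runtime Name: vmhba0:C0:T0:L0
   Device: naa.example
mpx.vmhba32:C0:T0:L0
   Runtime Name: vmhba32:C0:T0:L0
   Device: mpx.vmhba32:C0:T0:L0
"""

print(count_paths(sample))  # prints 2 - note the CD-ROM counts too
```

Note that the local CD-ROM path (vmhba32 here) contributes to the total just like SAN paths do, which is exactly why hosts can hit the 1024 limit earlier than the fabric zoning alone would suggest.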