A few things I would like to point out. First of all, Infinio works only with NFS storage. Secondly, it does not require flash storage, as it utilises host RAM instead; however, it provides only read acceleration.
I chose to set up one NFS store per array (RAID1 & RAID0) to see if there were any discernible differences. I had several existing lab VMs (a DC and other general VMs), but for additional I/O load I added three I/O-intensive VMs to each NFS volume.
You can see distinct increases in I/O when acceleration is switched on.
I recently got my hands on a copy* of Chris Wahl and Steve Pantol’s Networking for VMware Administrators and was very keen to read it – especially given the reputation of the authors. I came to the book as someone at CCNA level (although my certification has now expired) who regularly designs complex VMware networks using standard and distributed switches. I would class myself as having a fairly decent understanding of networking, though not as a networking specialist.
The book starts out from a really basic level, explaining OSI, what a protocol is, etc., and builds on that foundation as it progresses. Part I of the book gives a really good explanation of not only the basics of networking, but a lot of the “why” as well. If you’ve done CCNA-level networking exams then you will know most of this stuff – but it’s always good to refresh, and maybe cover any gaps.
Part II of the book translates the foundations set out in Part I into the virtual world and takes you through the similarities and differences between the virtual and the physical. It gives a good overview of the vSphere Standard Switch (VSS) and vSphere Distributed Switch (vDS) and even has a chapter on the Cisco Nexus 1000V. One of the really useful parts of the book is the lab examples and designs, which take you through the design process and the considerations needed to get to the solution.
You can view the announcement and the full list here - http://blogs.vmware.com/vmtn/2014/04/vexpert-2014-announcement.html
VMware vSphere 5 Memory Management and Monitoring diagram
Concepts and best practices in Resource Pools
If, like me, you use iSCSI then you will likely spend a bit of time setting up your Path Selection Policies to suit your specific needs, so it was interesting to note the following.
When you uninstall and remove PernixData from your hosts, your Path Selection Policies do not revert to your original configuration; rather, they revert to the default vSphere setting of MRU (Most Recently Used).
This is worth noting, as it is not mentioned in the documentation PernixData provides.
UPDATE - After being contacted by the guys at PernixData, I can confirm they will be updating their documentation shortly to reflect this behaviour.
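If you want to verify which PSP each device ended up with after the uninstall, and set it back, the esxcli NMP commands below are a sketch of one way to do it on an ESXi host. The device identifier shown is a placeholder – substitute the naa ID of your own iSCSI LUN, and the SATP name will depend on which plugin claims your array.

```shell
# List all devices claimed by the Native Multipathing Plugin (NMP),
# including the Path Selection Policy currently assigned to each one
esxcli storage nmp device list

# Set a specific device back to Round Robin
# (naa.xxxxxxxx is a placeholder - use your own LUN's identifier)
esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR

# Optionally, change the default PSP for the SATP that claims your array,
# so newly presented devices pick up Round Robin automatically
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR
```

These are host configuration commands, so run them per host (or wrap them in PowerCLI/host profiles if you have many hosts to put right).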