DefinIT

PernixData FVP 3.0 GA and Lab Install

I’ve been running “Pernix-less” since vSphere 6 was released, simply because I can’t afford to put off learning new versions until third-party software catches up. Having FVP taken away for a while makes you truly appreciate its awesome power, even on the less-than-spectacular hardware in my lab.

Now that FVP 3.0 has GA’d, I’m looking forward to getting my lab storage accelerated – it makes a huge difference.

What’s new in FVP 3.0? Well, to quote the release notes:

  • A standalone, browser-based FVP Management Console.
  • Support for vSphere 6.0.
  • Performance and scalability improvements.
  • Ability to rename the PRNXMS database.
  • Online and offline license activation via the new standalone UI.

Obviously support for vSphere 6.0 was the big one I was waiting for, but don’t discount the rather understated “Performance and scalability improvements”. I’m not sure renaming the database is a headline feature for a release, but I’ll let that go. I’m really, really, REALLY hoping the license activation has improved, because I found it a little clunky and frustrating before… we’ll see…

Blogger briefing with Satyam Vaghani – PernixData

Recently I had the good fortune to be invited along to a blogger briefing with Satyam Vaghani, CTO and co-founder of PernixData.

For those of you not in the know, Satyam already has quite the track record, most notably authoring 50+ patents and spending 10 years at VMware as Principal Engineer and Storage CTO. So it is safe to say he knows a thing or two about storage and related technology!

Nine of us (bloggers) were in attendance. (@dawoo @Archie_Hendryx @Craig_Kilborn @GreggRobertson5 @dellock6 @egrigson @julian_wood @simoneady @virtualisedgeek)

At the time of meeting Satyam, PernixData was 2 years and 2 months old and had already had a large impact on the storage industry.

FVP was of course at the forefront of discussion, and in particular how it stands alone in the storage marketplace by providing clustered read and write acceleration of any shared storage.

Satyam was very clear: he believes customers should be less focused on renewing or replacing their shared storage in an effort to maintain or improve performance, and should instead focus on simply increasing overall shared storage capacity and scaling out the caching layer (clustered flash) to deliver the consistent, predictable high performance that applications and end users demand and expect. He also highlighted that the storage industry has never been more fluid; after 20 years of predictable changes and advances, the emergence of SSDs and flash has turned the industry upside down. Flash-based technologies have already proven able to exceed the performance assumptions baked into well-known products like SQL Server, whose code is now having to be reviewed to take advantage of the new speeds available.


PernixData – ICM and initial impressions – part 1

| 28/02/2014 |

Since the keynote by Frank Denneman at the LonVMUG many months ago, PernixData has been something I wanted to test to see what benefits it may (or may not) bring to our SQL environment. I had the good fortune to briefly beta test it last year, but this blog post will cover the current full version (FVP 1.0.2.0). I am aware that 1.5 is just around the corner, bringing full support for vSphere 5.5, whereas the version I will be installing supports ESXi hosts on 5.0 or 5.1 and vCenter 5.5 (not mentioned in the minimum requirements).

Environment

  • 3x Dell R715
  • 3x Dell SSD (1 installed in each host)
  • iSCSI connected SAN

ESXi Host preparation

The first job is to install the PernixData host extension on the hosts. I opted to copy the extension to a datastore that was accessible to all the hosts. After putting the first host into maintenance mode I quickly encountered my first issue.

[Screenshot: error during host extension installation]

This was simply the result of not removing a previous install from this particular host, so it was easy enough to fix by removing the previous installation with the following command (as outlined in the PernixData FVP install guide):

cp /opt/pernixdata/bin/prnxuninstall.sh /tmp/ && /tmp/prnxuninstall.sh
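
Before re-running the install, it’s worth confirming that the old host extension has actually gone. This is just a quick sanity check from the ESXi shell; the grep pattern assumes the VIB name contains “pernix”, which may not match your version:

esxcli software vib list | grep -i pernix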

After a reboot of the host (just to make sure) I reran the installation with success.
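
For reference, the host extension install itself is a standard offline-bundle install from the ESXi shell. A minimal sketch, assuming the bundle has been copied to a shared datastore; the datastore and bundle filenames below are placeholders rather than the real names:

# enter maintenance mode, install the offline bundle, then exit maintenance mode
esxcli system maintenanceMode set --enable true
esxcli software vib install -d /vmfs/volumes/shared-datastore/PernixData-host-extension-offline-bundle.zip
esxcli system maintenanceMode set --enable false

Note that -d expects the full path to an offline bundle zip; a single .vib file would be installed with -v instead.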

Management server install

As per the PernixData documentation, I created a new AD account with the appropriate admin permissions on vCenter and local admin rights on the dedicated VM for the FVP management server.

Because this environment uses a vCenter 5.5 Appliance, I created a small dedicated VM (Server 2008 R2) for the FVP management server, then installed SQL Express 2008 R2 followed by SQL Express Management Studio. Once SQL was installed I proceeded to install the FVP Management Server, and the installation went ahead with no problems. I rebooted the VM (just to be sure) and, once it was back up, reopened my vSphere client hoping to see the management plugin listed in the Plugins, but it was not there. I checked the PernixData Windows service, which had indeed started successfully.
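
If you prefer to check from the command line rather than the Services console, a quick PowerShell query on the management VM will show the state of the service; the display-name wildcard is an assumption about how the FVP installer registers the service, so adjust it if nothing matches:

Get-Service -DisplayName "*Pernix*"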

Checking the logs (<INSTALLDIR>\server\log\prnxms.log), there was clearly a problem:

2014-02-28 11:50:53,371 [pool-3-thread-1] ERROR Context – Logging by SSPI failed
javax.xml.ws.soap.SOAPFaultException: A general system error occurred: User mydomain\pernixuseraccount, cause: N3Vpx6Common3Sso23DomainNotFoundExceptionE(No Domain found with ID: mydomain)

I went and double-checked the username and its credentials, and everything seemed perfectly fine. I restarted the service, but got the same error.

I wanted to see what configuration was actually being used, so I took a quick look at the configuration file (<INSTALLDIR>\server\conf\prnxms.config).

The following lines in the config file were empty:

prnxms.vcserver.username=
prnxms.vcserver.password=

So, as a test, I populated the fields with the correct information:

prnxms.vcserver.username=username@domain
prnxms.vcserver.password=userpassword

It is also important to ensure the following line is set to cleartext (as shown) before restarting the service:

prnxms.vcserver.password.format=cleartext

After restarting the management server service, it will encrypt the password text and reset the line entry to the following:

prnxms.vcserver.password.format=encrypted
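
For completeness, the service restart can also be done from an elevated PowerShell prompt rather than services.msc; again, the display-name filter is an assumption, so substitute whatever name the installer actually registered:

Get-Service -DisplayName "*Pernix*" | Restart-Service

Once the service is back up, the cleartext password in prnxms.config should have been replaced with its encrypted form.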

I then closed and reopened the vSphere client and voila! The FVP Management plugin was listed as an available plugin.

After installing the plugin I created a flash cluster, but at this point did not add any SSD devices to it. This allows us to add the targeted VMs and gather existing metrics for a few days, so we can then compare how much benefit the targeted VMs actually get after “switching it on”.

In my next post I will go over the results and my overall experience of using the PernixData product.

Reclaiming an SSD device from ESXi 5.5 Virtual Flash

| 27/02/2014 |

After having a play with Virtual Flash and Host Caching on one of my lab hosts, I wanted to re-use the SSD drive but couldn’t seem to get vFlash to release it. I disabled flash usage on all VMs and disabled the Host Cache, then went to the Virtual Flash Resource Management page to click the “Remove All” button. That failed with errors:

“Host’s virtual flash resource is inaccessible.”

“The object or item referred to could not be found.”


In order to reclaim the SSD you need to erase the proprietary vFlash File System partition using some command line kung fu. SSH into your host and list the disks:

ls /vmfs/devices/disks

You’ll see something similar to this:

[Screenshot: listing of /vmfs/devices/disks showing the SSD device and its partition]

You can see the disk ID “t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F” and, below it, the same ID appended with “:1”, which is partition 1 on the disk. This is the partition that I need to delete. I then use partedUtil to delete the partition I just identified, using the format below:

partedUtil delete "/vmfs/devices/disks/<disk ID>" <partition number>

partedUtil delete "/vmfs/devices/disks/t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F" 1

There’s no output after the command:

[Screenshot: partedUtil delete completes with no output]
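
If you want to double-check that the partition really has gone before moving on, partedUtil can print the remaining partition table for the device; this uses the same disk ID identified earlier:

partedUtil getptbl "/vmfs/devices/disks/t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F"

With the vFlash partition deleted, the output should show only the partition table type and disk geometry, with no partition entries.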

Now I can go and reclaim the SSD as a VMFS volume as required:

[Screenshot: adding the reclaimed SSD as a new VMFS datastore]

Hope that helps!