I’m fairly new to SRM, but even so this one seemed like a real head-scratcher! If you use CA-signed certificates on your “protected site” and “recovery site” vCenter Servers, you’ll hit SSLHandshake errors when you come to linking the two SRM sites – SRM assumes you want certificate-based authentication because you’re using signed certificates. With the default self-signed certificates, SRM falls back to password authentication (see SRM Authentication). The process fails at the “Configure Connection” stage if one of your vCenter Servers has a CA-signed certificate and the other does not (it throws an error that they are using different authentication methods), or if either SRM installation uses self-signed certificates (it throws an error that the certificate or CA could not be trusted).
SRM server 'vc-02.definit.local' cannot do a pair operation. The reason is: Local and remote servers are using different authentication methods.
This had me scratching my head – what seemed to be a common problem wasn’t fixed by the common solution. It was actually my fault: too familiar with the product, and setting things up too quickly to test.
I installed a VCSA 5.5 instance in my lab as a secondary site for some testing and during the process found I couldn’t log on to the web client – it failed with the error:
Failed to connect to VMware Lookup Service https://vCVA_IP_address:7444/lookupservice/sdk - SSL certificate verification failed.
I had a closer look at the certificate being generated and noticed that the Subject Name was malformed – “CN=vc-02.definit.loca” – which led me to the network config of the VCSA. I’d entered the FQDN into the “host name” field, which was in turn being passed to the certificate generation and truncated, throwing the SSL error. Changing the field back to the short host name “VC-02” and regenerating the certificate resolved the issue.
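A quick way to spot this kind of truncation is to read the Subject straight off the certificate with openssl. The sketch below generates a throwaway self-signed cert (a stand-in for the VCSA’s, using my lab’s hostname) and reads its Subject back; against a live host you’d point `openssl s_client` at the lookup service port instead, as shown in the comment:

```shell
# Generate a throwaway self-signed cert as a stand-in for the VCSA's
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=vc-02.definit.local" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null

# Read the Subject back - a truncated CN shows up immediately
openssl x509 -in /tmp/demo.crt -noout -subject

# Against a live VCSA you'd inspect the presented cert instead:
#   echo | openssl s_client -connect vc-02.definit.local:7444 2>/dev/null \
#     | openssl x509 -noout -subject
```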
If you do have to follow that process, remember to disable the SSL certificate regeneration after it’s fixed – otherwise you’ll suffer slow boot times!
I’ll put that one down to over-familiarity with the product!
Since Frank Denneman’s keynote at the LonVMUG many months ago, the PernixData product has been something I wanted to test to see what benefits it may or may not bring to our SQL environment. I did have the good fortune to briefly beta test it last year, but this blog post will cover the current full release of FVP. I am aware that 1.5 is just around the corner, bringing full support for vSphere 5.5, whereas the current version I will be installing supports ESXi hosts on 5.0 or 5.1 and vCenter 5.5 (not mentioned in the minimum requirements).
- 3x Dell R715
- 3x Dell SSD (1 installed in each host)
- iSCSI connected SAN
ESXi Host preparation
The first job is to install the PernixData host extension on each host; I opted to copy the extension to a datastore that was accessible to all the hosts. After putting the first host into maintenance mode I quickly encountered my first issue.
This was simply the result of not removing a previous install from this particular host, so it was easy enough to fix by removing the old installation with the following command (as outlined in the PernixData FVP install guide): "cp /opt/pernixdata/bin/prnxuninstall.sh /tmp/ && /tmp/prnxuninstall.sh"
After a reboot of the host (just to make sure) I reran the installation with success.
Management server install
As per the PernixData documentation I created a new AD account which had the appropriate admin permissions on vCenter and local admin rights on the dedicated VM for the FVP management server.
Because this environment uses a vCenter 5.5 Appliance, I created a small dedicated VM (Server 2008 R2) for the FVP Management server, then installed SQL Express 2008 R2 and the SQL Express Management Studio. Once SQL was in place I proceeded to install the FVP Management server, and the installation went ahead with no problems. I rebooted the VM (just to be sure), and once it was back up I reopened my vSphere Client hoping to see the Management plugin listed in the Plug-ins – however, it was not there. I checked the PernixData Windows service, which had indeed started successfully.
Checking the logs (<INSTALLDIR>\server\log\prnxms.log) there was clearly a problem.
"2014-02-28 11:50:53,371 [pool-3-thread-1] ERROR Context - Logging by SSPI failed
javax.xml.ws.soap.SOAPFaultException: A general system error occurred: User mydomain\pernixuseraccount, cause: N3Vpx6Common3Sso23DomainNotFoundExceptionE(No Domain found with ID: mydomain)"
I went and double-checked the username and its credentials – everything seemed perfectly fine. I restarted the service; still the same error.
I wanted to see what configuration was actually being used so I took a quick look at the Configuration file (<INSTALLDIR>\server\conf\prnxms.config)
The vCenter host, username and password lines in the config file were empty, so as a test I populated them with the correct information. It is also important to ensure the password format line is set to cleartext before restarting the service – after the restart, the Management server encrypts the password text and rewrites that entry itself.
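I no longer have the exact file to hand, so the key names in this sketch are approximations rather than lines taken from the FVP documentation, but the populated section looked something like this:

```ini
# Illustrative sketch only - key names are approximations,
# not taken from the FVP documentation
vcenter.host = vc01.mydomain.local
vcenter.user = mydomain\pernixuseraccount
vcenter.password = Passw0rd!
vcenter.password.format = cleartext
```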
I then closed and reopened the vSphere Client and voila! The FVP Management plugin was listed as an available plugin.
After installing the plugin I created a flash cluster, but at this point did not add any SSD devices to it. This allows us to add the targeted VMs and gather existing metrics for a few days, so we can then compare how much benefit those VMs actually get after "switching it on".
In my next post I will go over the results and my overall experience of using the PernixData product.
After having a play with Virtual Flash and Host Caching on one of my lab hosts I wanted to re-use the SSD drive, but couldn’t seem to get vFlash to release the drive. I disabled flash usage on all VMs and disabled the Host Cache, then went to the Virtual Flash Resource Management page to click the “Remove All” button. That failed with errors:
“Host’s virtual flash resource is inaccessible.”
“The object or item referred to could not be found.”
In order to reclaim the SSD you need to erase the proprietary vFlash File System partition using some command-line kung fu. SSH into your host and list the disks with “ls /vmfs/devices/disks”. In the listing you’ll see the disk ID “t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F” and below it the same ID with “:1” appended – partition 1 on the disk. This is the partition that I need to delete, using partedUtil in the format below:
partedUtil delete "/vmfs/devices/disks/<disk ID>" <partition number>
partedUtil delete "/vmfs/devices/disks/t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F" 1
There’s no output once the command completes.
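If you want a sanity check that you’ve identified the right device, partedUtil can also print the partition table – the getptbl subcommand against the same device path, run on the ESXi host (before the delete it should show the partition you’re about to remove; afterwards, an empty table):

```shell
# Print the SSD's partition table to confirm the layout
partedUtil getptbl "/vmfs/devices/disks/t10.ATA_____M42DCT032M4SSD3__________________________00000000121903600F1F"
```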
Now I can go and reclaim the SSD as a VMFS volume as required:
Hope that helps!
In case you missed it, VMware have now released vCHS (VMware vCloud Hybrid Service) in Europe! The first data center resides in Slough, with more data centers planned across Europe in the near future.
Working in an SME with several existing vSphere environments, I found this of real interest, as the need to scale out quickly from our private clouds is rapidly becoming a requirement.
Having already spoken to VMware on the phone to get a rough idea on options and costs I decided to take a look at the Hands-on-labs to see how easy it really is to use and migrate VMs from an existing private cloud to vCHS.
The Lab gives you 3 hours for 128 steps, which to be honest is very generous (no bad thing) – I was done and dusted in 1.5 hours. The option to split the screen across multiple windows was also very useful. (HoL FTW!)
vCHS has a very simple dashboard as you can see below
Management seems very straight forward with familiar terminology, all of the components you would expect and need are easily accessible via the web interface or links to your own vCloud Director window.
What I was very keen to learn, though, was how to migrate a VM from my private cloud to vCHS. In this case VMware uses vCloud Connector, which you install in your own environment. You can see here the vCloud Connector at the bottom of the vSphere console window.
Once you are inside the application, you simply add your local environment first, then the vCHS environment (naturally you would have set up a VPN tunnel in advance). You can see the hands-on labs example below.
Once both sites are added, you select a VM of your choice in your private cloud, click Actions and choose "Copy", populate the various questions it asks you, and then the process begins.
What struck me once this process had completed was how straightforward it all was. I had visions of it being potentially overly complex with numerous caveats, but I simply didn't see any immediate deal breakers for the kinds of usage I would envisage applying to a vCHS environment.
Of course it's all early days and this was a hands-on lab, but what I saw was very encouraging.