So the other day my Skype account was briefly compromised: after digging through the activity logs I found a successful login from Russia, which followed many failed attempts from IP addresses all around the world (China, Korea, Argentina, the list goes on). You can see the successful login attempt in the picture below.
If the vROps appliance needs to be hardened, there is already a VMware-provided guide and tool to accommodate this.
“The documentation for Secure Configuration is intended to serve as a secure baseline for the deployment of vRealize Operations Manager.”
The documentation covers the Virtual Appliance, Linux deployments and Windows deployments.
Update 6th April 2016 – https://my.vmware.com/web/vmware/details?downloadGroup=VR-HARDENING-200&productId=563
VMware vRealize Hardening Tool 2.0.0
The vRealize Hardening Tool automates the hardening activity by applying appliance-specific configuration changes to a system. For more information about hardening vRealize, and on how to use the vRealize Hardening Tool, see the accompanying documentation.
vRealize Log Insight 2.5 improves on the clustering in previous versions with an Integrated Load Balancer (ILB) which allows you to distribute load across your cluster of Log Insight instances without actually needing an external load balancer. The advantage of this over an external load balancer is that the source IP is maintained which allows for easier analysis.
The minimum number of nodes in a cluster is three: the first node becomes the Master node and the other two become Worker nodes. The maximum number of nodes supported is six, though according to Mr Log Insight himself, Steve Flanders, the hard limit is higher:
@sammcgeown yes though hard limit in product is much higher. How big do you need?
— Steve Flanders (@smflanders) February 11, 2015
The Log Insight appliance comes in four sizes, from Extra Small, for use in labs or proof-of-concept deployments, through to Large, which can consume a whopping 112.5GB of logs per day. Those figures scale linearly for clusters, so a 3-node cluster of Large instances can consume 337.5GB per day, from 2,250 syslog connections, at a rate of 22,500 events a second. See more on sizing vRealize Log Insight.
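To make the linear scaling concrete, here is a minimal sketch in Python. The per-node figures are derived from the cluster numbers quoted above (a 3-node Large cluster handling 337.5GB/day, 2,250 connections and 22,500 events per second); the function and dictionary names are my own, not anything from the product.

```python
# Per-node capacity of a Large Log Insight node, derived from the
# 3-node cluster figures in this post (337.5 / 3, 2250 / 3, 22500 / 3).
LARGE_NODE = {
    "gb_per_day": 112.5,
    "syslog_connections": 750,
    "events_per_second": 7500,
}

def cluster_capacity(node_profile, nodes):
    """Scale a single node's capacity linearly across a cluster."""
    if not 3 <= nodes <= 6:
        raise ValueError("supported cluster size is 3-6 nodes")
    return {k: v * nodes for k, v in node_profile.items()}

capacity = cluster_capacity(LARGE_NODE, 3)
print(capacity)  # {'gb_per_day': 337.5, 'syslog_connections': 2250, 'events_per_second': 22500}
```

A 6-node Large cluster, the supported maximum, would come out at 675GB per day by the same arithmetic.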
It is with great relief that I can announce I have passed my VCP NV (Network Virtualisation) having been caught out by the difficulty of the exam and failing previously.
I was fortunate to attend a VMware internal bootcamp (roughly equivalent to the ICM course) for NSX and have had experience deploying production NSX environments, so that is by far the best preparation. As always, the exam blueprint is crucial: you *have* to know all areas covered there. I’ve also been reading the documentation and the design and deploy guides published by VMware, and completed the basic and advanced hands-on labs that are also freely available. On top of that there is the official practice exam, which I strongly suggest you take as it reflects the real exam well, and there is a series of fantastic practice tests by Paul McSharry which provide a decent test of knowledge.
It’s a typical VMware VCP-level exam consisting of 120 multiple-choice questions with 120 minutes to answer them. That’s one minute per question, which may not sound like a lot, but there are plenty of questions you will answer in seconds. I completed the exam in about 1h25m. Other than that, there’s not a huge amount to say about the exam itself due to NDAs!
Advice for takers
Study the blueprint, it really does cover everything you need!
It seems obvious, but know the packet walks and understand how encapsulation changes packets
Have a clear and precise understanding of the components and architecture, and what the use cases are
If you have access to the binaries: install, break, fix, remove, repeat! If not, use the Hands-on Labs; you don’t have to follow the guides, you can do your own thing.
My score wasn’t great (a pass is a pass, right?) so I’m keen to go back over some weaker areas to start with. I am definitely going to look at recertifying my expired CCNA, as this is really good knowledge to take into any NSX engagement. With the VCIX exam recently released, I’ll look towards that also. Finally, lots of lab work with vCAC 6.1 and NSX to really maximise their potential. NSX shines when you see it automated.
The NSX Edge Gateway comes pre-armed with the ability to provide an SSL VPN for remote access into your network. This isn’t a new feature (SSL VPN was available in vCloud Networking and Security), but it’s worth a run-through. I’m configuring remote access to my lab, since it’s often useful to access it when on a client site, and traditional VPN connections are often blocked on corporate networks where HTTPS isn’t.
This is the first article in a series about how to build out a simple vCAC 6 installation into a distributed model.
In a simple installation you have the Identity Appliance, the vCAC appliance (which includes a vPostgres DB and vCenter Orchestrator instance) and an IaaS server. The distributed model still has a single Identity Appliance but clusters 2 or more vCAC appliances behind a load balancer, backed by a separate vPostgres database appliance. The IaaS components are installed on 2 or more IaaS Windows servers and are load balanced, backed by an external MSSQL database. Additionally, the vCenter Orchestrator appliance is used in a failover cluster, backed by the external vPostgres database appliance.
The distributed model can improve availability, redundancy, disaster recovery and performance; however, it is more complex to install and manage, and there are still single points of failure – for example, the vPostgres database is not highly available and, although protected by vSphere HA, could still be the cause of an outage. Clustering the database would provide an improved level of availability but may not be supported by VMware. Similarly, the Identity Appliance is currently a single point of failure, although there are options for high availability there too.
An overview of the steps required is below:
- Issue and install certificates
- Deploy an external vPostgres appliance and migrate the vCAC database
- Configure load balancing
- Deploy a second vCAC appliance and configure clustering
- Install and configure additional IaaS server
- Deploy vCenter Orchestrator Appliance cluster
In my previous post Backing up ESXi 5.5 host configurations with vCenter Orchestrator (vCO) – Workflow design walkthrough, I showed how to create a workflow to back up host configurations, but it was limited to one host at a time. In this post I’m going to show how to create a new workflow that calls the previous one on multiple hosts, using a ForEach loop to run it against each one. This was actually easier than I had anticipated, having read posts about previous versions of vCO that involved creating the loops manually.
Ready? Here we go…
Create a new workflow – I’ve called mine “DefinIT-Backup-ESXi-Config-Cluster” – and open the Schema view. Drag a “ForEach” element onto the workflow and the Chooser window pops up. This is a bit deceptive at first because it’s blank! However, if you enter some search text it will bring up existing workflows, so I searched for the “DefinIT-Backup-ESXi-Config” workflow that was created in the previous post.
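Conceptually, all the ForEach element does is invoke the wrapped workflow once per element of an input array. This little Python sketch models that behaviour (the function names are stand-ins for the real workflows, and vCO itself scripts in JavaScript, so this is purely illustrative):

```python
def backup_esxi_config(host):
    # Stand-in for the "DefinIT-Backup-ESXi-Config" workflow from the
    # previous post; here it just records which host it ran against.
    return f"backed up {host}"

def for_each(workflow, items):
    """Model of vCO's ForEach element: run a sub-workflow per item."""
    return [workflow(item) for item in items]

results = for_each(backup_esxi_config, ["esxi-01", "esxi-02", "esxi-03"])
```

The ForEach element handles the iteration plumbing for you, which is why no manual loop construction was needed.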
As a little learning project, I thought I’d take on Simon’s previous post about backing up ESXi configurations and extend it to vCenter Orchestrator (vCO), and document how I go about building up a workflow. I’m learning more and more about vCO all the time, but I found it has a really steep entry point, and finding use cases is hard if you haven’t explored its capabilities.
The steps I want to create in this post are:
- Right click host to trigger
- Sync and then back up the configuration to file
- Copy the file to a datastore, creating a folder based on date and the host
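The folder-naming part of the last step is simple enough to sketch up front. This is a hypothetical Python illustration of the naming scheme (date, then host), not the workflow’s actual scripting code:

```python
from datetime import date

def backup_folder(hostname, on=None):
    """Build a per-date, per-host folder path for the backup file."""
    on = on or date.today()  # default to today's date
    return f"{on.isoformat()}/{hostname}"
```

So a backup of `esxi-01` taken on 11 February 2015 would land in `2015-02-11/esxi-01`.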
Ideas for future extensions of this workflow include: triggering from a cluster, datastore or host folder; email notifications; emailing the backup files; and backing up the config before triggering a VMware Update Manager remediation. Hopefully all this will come in future posts – for now, let’s get stuck into creating this workflow.
I’m fairly new to SRM, but even so this one seemed like a real head-scratcher! If you happen to be using CA-signed certificates on your “protected site” and “recovery site” vCenter servers, you will encounter SSL handshake errors when you come to link the two SRM sites: SRM assumes you want to use certificate-based authentication because you’re using signed certificates. If you use the default self-signed certificates, SRM defaults to password authentication (see SRM Authentication). The process fails at the “configure connection” stage: if one of your vCenter servers has a CA-signed certificate and the other does not, SRM throws an error that they are using different authentication methods; if you are using self-signed certificates for either SRM installation, it throws an error that the certificate or CA could not be trusted.
SRM server ‘vc-02.definit.local’ cannot do a pair operation. The reason is: Local and remote servers are using different authentication methods.
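The selection logic, as I read it, can be modelled in a few lines. This is my own illustrative sketch of the behaviour described above, not VMware’s actual code, and it only covers the vCenter certificate mismatch case:

```python
def srm_auth_method(local_ca_signed, remote_ca_signed):
    """Model of how SRM appears to pick an authentication method
    from the two sites' vCenter certificate types."""
    if local_ca_signed and remote_ca_signed:
        return "certificate"          # both CA-signed: certificate auth
    if not local_ca_signed and not remote_ca_signed:
        return "password"             # both self-signed: password auth
    # One CA-signed, one self-signed: pairing fails with the error above.
    raise ValueError(
        "Local and remote servers are using different authentication methods."
    )
```

The practical takeaway: make the two sites consistent – either CA-sign both vCenter servers or leave both self-signed – before attempting the pairing.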