There are many improvements, changes and new additions to vROps in version 6.7, but one aspect that stands out to me personally is the direction VMware is taking with the product. Aside from the obvious addition of cloud costings and comparisons, capacity planning reworked from the ground up, and a new hook into Wavefront (which I really like), there has been some real effort to improve how you can further automate things from vROps.
The list of out-of-the-box (OOTB) actions has grown a lot (a sample is shown below).
Couple this with the vRO Management Pack and you can have some serious proactive and reactive automation taking place based on effective monitoring.
If you were already considering using vROps for automation, perhaps this will tip the balance; and if you were in the no camp, perhaps you will reconsider?
Either way, 6.7 looks great (loving the dark theme).
You can check out more about the overall changes and improvements at the following sites.
Nice blog summary – https://lukaswinn.net/2018/04/12/welcome-to-vrealize-operations-6-7/
One question I’m asked quite a lot is what I use for a 3-tier application when I’m testing things like NSX micro-segmentation with vRealize Automation. The simple answer is that I used to make something up as I went along, deploying components by hand and generally repeating myself a lot. I had some cut/paste commands in my notes application that sped things up a little, but nothing more developed than that. I’ve been meaning to rectify this for a while, and this is the result!
A lot of this is based on the excellent blog posts published on the VMware HOL blog by Doug Baer. Doug wrote five parts on creating his application on Photon OS and they’re well worth a read (start at part 1, here). I have changed a few things for my vRA Three Tier App, and some things are the same:
- I’m using CentOS 7, as that’s what I see out in the wild with customers (as RHEL 7) and it’s what I am most familiar with
- The app itself is the PHP MySQL CRUD Application from Tutorial Republic
- The DB tier uses MariaDB (MySQL) not SQLite
- The App tier is an Apache/PHP server
- The Web tier is still NGINX as a reverse proxy
- I am including NSX on-demand load balancers in my blueprint, but you don’t actually need them for single-VM tiers
- Finally, I want to be able to deploy my 3-tier application using vRA Software Components (though you can also use startup scripts in the customisation spec)
Based on this, my final application will look something like the image below. Clients connect to the NSX load balancer on HTTPS/443, which distributes traffic to multiple NGINX reverse proxy servers. These in turn talk to a second NSX load balancer on HTTP/8080, which sits in front of multiple Apache web servers running the PHP application, and all of the app servers talk to the MySQL database back end over MySQL/3306.
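To make the middle hop concrete, here is a minimal sketch of what one web-tier NGINX server’s reverse-proxy configuration could look like. This is not the exact configuration from my build – the listen port and the `app-lb.corp.local` name (standing in for the app-tier NSX load balancer VIP) are illustrative assumptions:

```nginx
# Web tier: NGINX reverse proxy sitting behind the front NSX load balancer.
server {
    listen 8080;                         # port the NSX load balancer forwards to (assumed)
    server_name web.corp.local;          # illustrative name

    location / {
        # app-lb.corp.local is the app-tier NSX load balancer VIP (assumed name),
        # which fronts the Apache/PHP servers on HTTP/8080.
        proxy_pass http://app-lb.corp.local:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```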
When in use, the application looks like this:
vRealize Automation and NSX integration has introduced the ability to deploy multi-tiered applications with network services included. The current integration also enables a method to deploy micro-segmentation out of the box, based on dynamic Security Group membership and the Service Composer. This method does have some limitations, and can be inflexible for the on-going management of deployed applications. It requires in-depth knowledge and understanding of NSX and the Distributed Firewall, as well as access to the Networking and Security manager that is hosted by vCenter Server.
For customers who have deployed a private cloud solution using vRealize Automation, an alternative is to develop a “Firewall-as-a-Service” approach, using automation to allow authorised end users to configure micro-segmentation. This can be highly flexible, and allow the delegation of firewall management to the application owners who have intimate knowledge of the application. There are disadvantages to this approach, including significantly increased effort to author and maintain the automation workflows.
This blog post describes two possible micro-segmentation strategies for vRealize Automation with NSX and compares the two approaches against a common set of requirements.
This post was written based on the following software versions:
| Software Component | Version (Build) |
| --- | --- |
| vRealize Automation | 7.3 (5604410) |
| NSX | 6.3.5 (7119875) – 6.4 |
| vSphere | 6.5 Update 1d (7312210) |
| ESXi | 6.5 Update 1 (5969303) |
These are some generic considerations when deploying micro-segmentation with vRealize Automation.
- An application blueprint is designed to be deployed multiple times from vRealize Automation; the automation shouldn’t break any micro-segmentation or firewall policy when that happens.
- vRealize Automation blueprints can scale in and out – the micro-segmentation strategy should accommodate this, so that the implemented micro-segmentation always matches what is required.
- vRealize Automation is a shared platform, so the micro-segmentation of one deployment should be limited in scope, but should also consider communications between applications – for example, those of the same business group or tenant.
Application XYZ requirements
For illustration purposes, an example 3-tier application deployment, “Application XYZ”, is shown below. It consists of a Web, App and DB tier, with a load balancer for the Web and App tiers.
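One way to keep the considerations above manageable is to describe an application’s allowed flows as data, and have the automation derive per-deployment firewall rules from it. The sketch below does this for Application XYZ; the tier names and ports (borrowed from a typical 443/8080/3306 layout) are illustrative assumptions, not taken from the diagram:

```javascript
// Sketch: Application XYZ's allowed flows as data, so automation can create
// per-deployment firewall rules. Tier names and ports are illustrative.
var appXyzFlows = [
  { source: "external", destination: "web-lb",  port: 443,  protocol: "TCP" },
  { source: "web-tier", destination: "app-lb",  port: 8080, protocol: "TCP" },
  { source: "app-tier", destination: "db-tier", port: 3306, protocol: "TCP" }
];

// Prefix every rule with a unique deployment ID so that repeated blueprint
// deployments never collide, and scale in/out keeps rules consistent.
function buildRuleNames(deploymentId, flows) {
  return flows.map(function (f) {
    return deploymentId + "-" + f.source + "-to-" + f.destination + "-" + f.port;
  });
}
```

For example, `buildRuleNames("xyz-001", appXyzFlows)` yields three rule names, each scoped to deployment `xyz-001`.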
While we all know that automating day-to-day operations tasks is becoming the choice of IT organizations, it is hard for them to find a solution that fully understands their business policies and provides the efficiency they need from automating even simple operational tasks.
Based on our research, we found that one of the most time-consuming activities for a Virtual Infrastructure Administrator is juggling resources between changing business requirements while ensuring that every VM hosted in their environment is being served well.
VMware vRealize Operations 6.6 introduces a new and innovative solution to cater to this use case. While there are other products in the market which claim to have done this, what we have learned from our customers is that those solutions quickly forget that automation should also consider business policies.
With Workload Balance as the next topic of our webinar series, we will talk about some real-world use cases and demonstrate how we can automate operations while keeping business goals and requirements in mind. We have selected a date after VMworld Barcelona, and hopefully it will suit you.
Make sure that you save the invite. Here are the details:
Session Title – Part 4 – Optimizing Workload Performance using Automation
Date – Tuesday, 19th September 2017
Time – 1:00 PM to 2:30 PM Pacific Time
Speakers – Sunny Dua & Simon Eady
Webinar Link – Click here to join the session when it’s time
Save Invite – Click here to save invite
Like many other geeks out there, I received an Amazon Echo device this Christmas, and whether it’s a fad or not, I’ve spent a few happy hours setting up my Hue lights and some other automation. The room in the house with the most automation is my office – the novelty may wear off, but walking in each morning and saying “Alexa, turn on my office” and having everything wake up for me is really cool.
I already have a vRealize Orchestrator workflow to shut down my workload cluster. What I want to do is trigger that by a voice command from Alexa.
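The glue between Alexa and vRO is an HTTP call from the skill’s backend (for example, a Lambda function) to vRO’s REST API, which can start a workflow with `POST /vco/api/workflows/{id}/executions`. A sketch of building that request – the host, workflow ID and credentials are placeholders:

```javascript
// Sketch: request an Alexa skill backend might send to start a vRO workflow.
// Endpoint: POST /vco/api/workflows/{id}/executions on vRO's default API port 8281.
function buildVroExecutionRequest(vroHost, workflowId, user, password) {
  return {
    method: "POST",
    url: "https://" + vroHost + ":8281/vco/api/workflows/" + workflowId + "/executions",
    headers: {
      "Content-Type": "application/json",
      // Basic auth: base64("user:password")
      "Authorization": "Basic " + Buffer.from(user + ":" + password).toString("base64")
    },
    // The shutdown workflow takes no inputs; an empty parameter list starts it
    body: JSON.stringify({ parameters: [] })
  };
}
```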
My vSphere lab is split into two halves – a low power management cluster, powered by 3 Intel NUCs, and a more hefty workload cluster powered by a Dell C6100 chassis with 3 nodes. The workload servers are noisy and power hungry so they tend to be powered off when I am not using them, and since they live in my garage, I power them on and off remotely.
To automate the process, I wanted to write an Orchestrator workflow (vRO sits on my management cluster and is therefore always on) that could safely and robustly shut down the workload cluster. There were some design considerations:
- Something about the C6100 IPMI implementation and the ESXi driver means the hosts do not like being woken from standby mode. It’s annoying, but not the end of the world – I can use ipmitool to power the hosts on from shutdown. If you want to use host standby, there are hostStandby and hostExitStandby workflows in the package on GitHub.
- I run VSAN on the cluster, so I need to enter maintenance mode without evacuating the data (which would take a long time and be pointless).
- All the running VMs on the cluster should be shut down before the hosts attempt to go into maintenance mode.
- I want a “what if” option, to see what the workflow would do if I ran it, without actually executing the shutdown – it’s largely for development purposes, but the power down/power on cycle takes a good half hour and I don’t want to waste time on a typo!
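The design considerations above boil down to an ordering problem plus a dry-run switch. The real workflow drives vCenter objects (VcVirtualMachine, VcHostSystem) through the vRO API; the plain-JavaScript sketch below is just a simplified, runnable model of the decision logic, with hypothetical names:

```javascript
// Simplified model of the shutdown workflow's logic (not the real vRO code).
// Order matters: shut down all running VMs first, then put each host into
// maintenance mode WITHOUT evacuating VSAN data, then power the hosts off.
function planClusterShutdown(cluster, whatIf) {
  var actions = [];
  cluster.vms.forEach(function (vm) {
    if (vm.poweredOn) {
      actions.push("shutdownGuest:" + vm.name);
    }
  });
  cluster.hosts.forEach(function (host) {
    // No-evacuation maintenance mode keeps this fast on a VSAN cluster
    actions.push("enterMaintenanceModeNoEvacuate:" + host);
    actions.push("powerOff:" + host);
  });
  if (whatIf) {
    // "What if" mode: report the plan instead of executing it
    return { executed: false, plan: actions };
  }
  return { executed: true, plan: actions };
}
```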
Time to publish the recording for the 10th episode of the vROps Webinar Series. This time around we spoke about the vRealize Operations Manager RESTful API and how to use it. After 20 minutes of slide-ware, I jumped into the lab and, thanks to the demo gods, we demonstrated a number of use cases and browsed through the documentation to make it easier for you to consume and use.
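As a quick taster of the API we covered, authentication against the suite-api follows a two-step token flow: POST your credentials to `/suite-api/api/auth/token/acquire`, then send the returned token in a `vRealizeOpsToken` Authorization header. A sketch, with the host and credentials as placeholders:

```javascript
// Sketch of the vROps suite-api token flow:
// 1. POST /suite-api/api/auth/token/acquire with username/password
// 2. Use the returned token in "Authorization: vRealizeOpsToken <token>"
function buildTokenRequest(vropsHost, username, password) {
  return {
    method: "POST",
    url: "https://" + vropsHost + "/suite-api/api/auth/token/acquire",
    headers: { "Content-Type": "application/json", "Accept": "application/json" },
    body: JSON.stringify({ username: username, password: password })
  };
}

function authHeader(token) {
  return { "Authorization": "vRealizeOpsToken " + token };
}
```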
Big thanks to @sunny_dua for doing the session while I was MIA – you are a legend, buddy!
So without further ado, here is the recording for this session:
Recently I was asked to develop some vRealize Orchestrator workflows against the F5 BIG-IP iControl REST API, but I was not able to test freely against a production appliance. After a lot of attempts to get in contact with F5 for a 90-day trial of the full version, or to purchase a lab license, I came up empty-handed. The free version you can download from F5’s website is 11.3, which predates the iControl REST API introduced in 11.4.
What I did notice while on the F5 site was the links to the AWS Marketplace where you can rent F5 BIG-IP Virtual Editions by the hour – $0.83/hr + AWS usage fees.
If you happen to have a license for BIG-IP there’s also a Bring Your Own License version, which would be handy. You can view all the options on F5 Network’s AWS Marketplace page.
Here’s how you set up your F5-in-AWS!
Big thanks to Jose Luis Gomez for this solution, his response to my tweet was spot on and invaluable!
I’ve been trying to configure vCloud Air as a vCloud Director host in vRealize Orchestrator in order to create some custom resource actions for Day 2 operations in vRealize Automation. What I found was that there’s *very* little information out there on how to do this, and I ended up writing my own custom resource mapping for the virtual machines to VCAC:VirtualMachine objects – at least that way I could add my resource action. But this still didn’t expose the vCloud Director functionality for those machines. To do this I needed vCloud Air added as a vCloud Director host.
As per Jose’s advice, I duplicated the “com.vmware.library.vCloud.Host/addHost” action, named it “addHost_vCA_G2”:
I then modified the following line to append “/api/compute” to the URL:
newHost.url = "https://" + host + ":" + port;
so that it becomes:
newHost.url = "https://" + host + ":" + port + "/api/compute";
I then duplicated the “Add a connection” workflow to create “Add a connection (vCloud Air Gen2)” and swapped the old action for the new action:
Now I can add vCloud Air (PAYG/Gen2) as an endpoint in the normal way:
The out-of-the-box “IaaS vCD VM” Resource Mapping now works in vRA and I can create custom Resource Actions against the vCloud:VM object type.
Once again, big thanks to Jose for this solution!
— Jose Luis Gomez (@pipoe2h) April 19, 2016