VREALIZE AUTOMATION

In this episode, Sunny gave us a deep dive into the Workload Placement (WLP) and Workload Balancing (WLB) features of vROps.
We were also joined by a special guest, Jad El-Zein, who gave us great insight into how vRA utilises vROps for the initial placement of freshly provisioned VMs.
We would really appreciate it if you could spend 30 seconds filling out this quick and simple survey to give us your feedback. You can also request topics of your choice through the survey.



I already have a vRealize Orchestrator workflow to shut down my workload cluster. What I want to do is trigger that with a voice command from Alexa.
Now, the correct and proper thing to do here would be to create a new Alexa skill, write the function in Lambda, and connect that to my Orchestrator REST API to execute the workflow. That way I could control the “intents” and “utterances” and have verbal feedback.
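As a rough sketch of what that Lambda function might look like, here's a minimal Python handler that starts a vRO workflow via the Orchestrator REST API and hands a spoken reply back to Alexa. The host name, workflow ID, credentials, region and response text are all placeholders for my own lab, not values you can lift verbatim:

```python
# A minimal sketch of the Alexa-triggered Lambda handler, assuming the vRO
# REST API is reachable from Lambda. VRO_HOST, WORKFLOW_ID and the
# credentials below are hypothetical placeholders for my environment.
import base64
import json
import os
import ssl
import urllib.request

VRO_HOST = os.environ.get("VRO_HOST", "vro.lab.local")           # hypothetical hostname
WORKFLOW_ID = os.environ.get("WORKFLOW_ID", "your-workflow-id")   # hypothetical workflow ID
VRO_USER = os.environ.get("VRO_USER", "vcoadmin")                 # hypothetical credentials
VRO_PASS = os.environ.get("VRO_PASS", "changeme")


def lambda_handler(event, context):
    """Triggered by the Alexa skill; starts the vRO shutdown workflow."""
    # 8281 is the standalone vRO appliance default; adjust if vRO is embedded in vRA.
    url = "https://{0}:8281/vco/api/workflows/{1}/executions".format(
        VRO_HOST, WORKFLOW_ID)
    body = json.dumps({"parameters": []}).encode("utf-8")

    auth = base64.b64encode(
        "{0}:{1}".format(VRO_USER, VRO_PASS).encode("utf-8")).decode("ascii")

    request = urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic " + auth,
        },
    )

    # Lab-only shortcut: skip certificate verification for the self-signed vRO cert.
    insecure = ssl.create_default_context()
    insecure.check_hostname = False
    insecure.verify_mode = ssl.CERT_NONE

    with urllib.request.urlopen(request, context=insecure) as response:
        # vRO answers 202 Accepted when the workflow execution has been queued.
        status = response.status

    if status == 202:
        speech = "OK, shutting down the workload cluster."
    else:
        speech = "Sorry, something went wrong talking to Orchestrator."

    # Standard Alexa Skills Kit response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

In a real skill you'd map the intent from the incoming event to a specific workflow rather than hard-coding one, but this is the shape of it.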

In this humble consultant’s opinion, Log Insight is one of the most useful tools in the administrator’s tool belt for troubleshooting vRealize Automation. I have lost count of the number of times I’ve been asked to help troubleshoot an issue where, when I ask, nobody knows which log they should be looking at. The simple fact is that vRealize Automation has a lot of log files. Correlating these log sources to provide an overall picture is a painful, manual process - unless you have Log Insight!

One of the cool new features released with vRealize Automation 7.2 was the integration of VMware Admiral (container management) into the product, and recently VMware made version 1 of vSphere Integrated Containers (VIC) generally available (GA), so I thought it was time I started playing around with the two.
In this article I’m going to cover deploying VIC to my vSphere environment and then adding that host to vRA 7.2’s container management.

Recently I’ve been working on some ideas in my lab to leverage the AWS endpoint in vRealize Automation. One of the things I needed to do was get Software Components working on my AWS-deployed instances.
The diagram to the right shows my end-state network - the instance deployed by vRA into AWS should sit in a private subnet in my VPC, use my local lab DNS server, and be able to reach my vRA instance.
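To sketch how that hangs together on the AWS side, here's a minimal boto3 example that points the VPC at my lab DNS server via a DHCP options set and creates the private subnet the instances will land in. The VPC ID, DNS address, domain, region and CIDR are all placeholder values for my lab:

```python
# A minimal sketch, assuming an existing VPC. The VPC ID, lab DNS server,
# domain, region and subnet CIDR are hypothetical placeholders.
import boto3

VPC_ID = "vpc-0123456789abcdef0"   # hypothetical VPC ID
LAB_DNS = "192.168.1.10"           # hypothetical lab DNS server
LAB_DOMAIN = "lab.local"           # hypothetical lab domain

ec2 = boto3.client("ec2", region_name="eu-west-1")  # hypothetical region

# Point instances in the VPC at the lab DNS server instead of AmazonProvidedDNS,
# so anything vRA deploys can resolve names in my lab (including the vRA appliance).
dhcp = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "domain-name-servers", "Values": [LAB_DNS]},
        {"Key": "domain-name", "Values": [LAB_DOMAIN]},
    ]
)
ec2.associate_dhcp_options(
    DhcpOptionsId=dhcp["DhcpOptions"]["DhcpOptionsId"],
    VpcId=VPC_ID,
)

# Create the private subnet the vRA-deployed instances will sit in.
subnet = ec2.create_subnet(
    VpcId=VPC_ID,
    CidrBlock="10.0.2.0/24",
    AvailabilityZone="eu-west-1a",
)
print("Private subnet:", subnet["Subnet"]["SubnetId"])
```

The routing back to the lab (VPN or Direct Connect) still has to exist, of course - this only covers the DNS and subnet pieces.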



Although it’s fairly limited, you can add AWS as an endpoint for vRealize Automation 7 and consume EC2 AMIs as part of a blueprint. You can even add the deployed instances to an existing Elastic Load Balancer at deploy time. In this post I’ll run through the basics to get up and running and deploy your first highly available (multiple Availability Zone, load balanced) blueprint.
Preparing AWS for use as a vRA endpoint
There are some obvious pre-requisites for attaching an AWS endpoint - for example, you need to have a VPC configured.
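As a rough pre-flight check before adding the endpoint in vRA, a short boto3 script can confirm the AWS side is ready: the VPC exists, there are subnets in more than one Availability Zone, and the Elastic Load Balancer you want to attach instances to is already there. The VPC ID, ELB name and region below are placeholders for my lab:

```python
# A rough pre-flight check, assuming placeholder values for the VPC ID,
# classic ELB name and region - adjust for your own environment.
import boto3

VPC_ID = "vpc-0123456789abcdef0"   # hypothetical VPC ID
ELB_NAME = "vra-web-elb"           # hypothetical existing Elastic Load Balancer

ec2 = boto3.client("ec2", region_name="eu-west-1")
elb = boto3.client("elb", region_name="eu-west-1")

# The VPC must already exist - vRA will not create one for you.
vpcs = ec2.describe_vpcs(VpcIds=[VPC_ID])["Vpcs"]
assert vpcs, "VPC not found"

# For a highly available blueprint you want subnets in more than one AZ.
subnets = ec2.describe_subnets(
    Filters=[{"Name": "vpc-id", "Values": [VPC_ID]}]
)["Subnets"]
zones = {s["AvailabilityZone"] for s in subnets}
print("Availability Zones with subnets:", sorted(zones))
assert len(zones) >= 2, "Need subnets in at least two AZs for HA"

# The existing Elastic Load Balancer the blueprint will add instances to.
lbs = elb.describe_load_balancers(
    LoadBalancerNames=[ELB_NAME]
)["LoadBalancerDescriptions"]
print("ELB spans:", lbs[0]["AvailabilityZones"])
```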

Big thanks to Jose Luis Gomez for this solution; his response to my tweet was spot on and invaluable!
I’ve been trying to configure vCloud Air as a vCloud Director host in vRealize Orchestrator in order to create some custom resource actions for Day 2 operations in vRealize Automation. What I found was that there’s *very* little information out there on how to do this, and I ended up writing my own custom resource mapping for the virtual machines to VCAC:VirtualMachine objects - at least that way I could add my resource action.