DefinIT
Building a vRealize Automation NSX Lab on Ravello
Sam
29/09/2015

As a vExpert, I am blessed with 1000 CPU hours of access to Ravello's awesome platform, and recently I've been playing with the AutoLab deployments tailored for Ravello.

If you’re unfamiliar with Ravello’s offering (where have you been?!) then it’s basically a custom hypervisor (HVX) running on either AWS or Google Cloud that allows you to run nested environments on those platforms. I did say it’s awesome.

As an avid home-lab enthusiast, I initially found Ravello a little strange, but having used it for a while I can definitely see its potential to augment, and in some cases completely replace, the home lab. I spent some time going through Nigel Poulton's AWS course on Pluralsight to get a better understanding of the AWS platform, and I think that helped, but it's definitely not required to get started on Ravello.

One more thing to add before I start the setup – even if I didn’t have 1000 hours free, the pricing model means that you could run your lab on Ravello for a fraction of the cost of a higher spec home lab. It’s definitely an option to consider unless you’re running your lab 24/7.

Building AutoLab in Ravello

I used the vBrownbag walkthrough video to build AutoLab 2.6 on Ravello; below is a quick run-through based on it.

Add the AutoLab 2.6 Final blueprint to your library – log in, click on “Ravello Repo” and add the blueprint from there.

Using the Ravello import tool, upload the following ISOs

  • ESXi (VMware-VMvisor-Installer-6.0.0-2494585.x86_64.iso)
  • vSphere 6 (VMware-VIMSetup-all-6.0.0-2562643.iso)
  • Windows Server 2008 R2 Evaluation (7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso)

Open the Blueprint from your Library and hit “Create Application”

Give it a good name

Select the NAS VM, open the “Disks” tab, mount the ESXi and vCenter installer ISOs and click Save.

Likewise, attach the Windows 2008 R2 ISO to the DC and VC virtual machines:

Now hit Publish and configure your application. I have had the best success selecting Performance and Amazon; set a decent length of time for the Auto-stop and make sure the option to start all VMs automatically is unchecked – AutoLab needs the VMs to build in the correct sequence.

Once it’s published, power on the NAS and DC VMs by selecting both and clicking Start – this will start the build.

Those VMs will need about an hour to build, so in the meantime you can sort out NSX and vRA.

Once the build has completed, connect to the DC VM using RDP and run the validate PowerShell script. This will let you change the default password and set the VC config to add ESXi hosts automatically. It will also prompt you to download PowerCLI, which you should save to the B:\ network drive.

The first run will fail – that's OK; it's because PowerCLI hasn't been downloaded yet. Once PowerCLI is in B:\, run the script again and it will complete.
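
If you want to sanity-check things before re-running the script, a quick PowerShell snippet like the one below (run on the DC) will confirm the installer is on the build share and, once installed, that the PowerCLI snap-in is registered. The installer file name pattern is just an example – match it to the version you actually downloaded.

```powershell
# Confirm the PowerCLI installer has been saved to the AutoLab build share (B:\)
# The file name pattern is an example - adjust it to the version you downloaded
Test-Path "B:\VMware-PowerCLI-*.exe"

# After installing PowerCLI, confirm the core snap-in is registered
Get-PSSnapin -Registered | Where-Object { $_.Name -like "VMware.VimAutomation*" }
```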

Back in the Ravello Application, select the three hosts and the VC and start them.

Once Host1, 2 and 3 have started, open each host's console and select the PXE build option:

They can then be left to their own devices while they build.

Once the ESXi hosts have deployed and the vCenter Server has built, we can log on to the vCenter server using RDP, run the AutoLab Script Menu and select option 1 to validate the build:

The vCenter should now be available – use the vSphere Client to connect and you can view the standard AutoLab setup.
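
If you'd rather check from PowerCLI than the vSphere Client, something like this works from the DC or VC. The vCenter hostname follows the lab.local naming used above, but treat it (and the credentials) as an assumption – use whatever your AutoLab build ended up with.

```powershell
# Connect to the lab vCenter - the hostname is an assumption based on the lab.local domain
Connect-VIServer -Server vc.lab.local -Credential (Get-Credential)

# The three nested hosts and the cluster AutoLab builds should all be visible
Get-VMHost | Select-Object Name, ConnectionState, Version
Get-Cluster

Disconnect-VIServer -Server vc.lab.local -Confirm:$false
```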

Deploying NSX Manager to Ravello

The extended OVA functionality used to deploy the NSX Manager is not available directly on Ravello, so you can't just upload the OVA and expect it to work. The easiest method I found was to extract the NSX OVA using 7-Zip and use the import tool to upload the OVF.
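
An OVA is just a tar archive, so 7-Zip (or any tar-aware tool) can unpack it. Roughly, the extraction looks like this from PowerShell – the file name and output folder are placeholders for whichever NSX Manager build you've downloaded:

```powershell
# Unpack the OVA - it contains the .ovf descriptor, manifest and VMDK disk(s)
# File name and output folder are placeholders
& "$env:ProgramFiles\7-Zip\7z.exe" x ".\VMware-NSX-Manager.ova" "-oC:\Temp\nsx-manager"

# The extracted .ovf and .vmdk files are what you point the Ravello import tool at
Get-ChildItem "C:\Temp\nsx-manager"
```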

Once the appliance has been uploaded it needs to be verified; I'll be using the settings below:

  • Hostname: nsx
  • IP: 192.168.199.20
  • Subnet: 255.255.255.0
  • Gateway: 192.168.199.1
  • DNS: 192.168.199.4
  • Search: lab.local

Make sure you drop the RAM down to 8GB, otherwise it won't start on Ravello. (I've configured a static IP address and added a NAT (port forwarding) rule to provide HTTPS (443) access, but I'll remove that later – it's better to use the VC or DC to access it!)

Once the VM is verified, you can drag it onto your canvas and publish the application. The VM will boot and you can then configure the NSX manager via the console:

Log in using admin/default, and enter privileged mode (enable) using the password “default”. Type setup to begin the initial configuration:

Once rebooted, check you can access the NSX admin console via the DC or VC:

From here, the NSX install/deploy is as you would do it in a physical environment.

Deploying vRealize Automation to Ravello

Using the same method as the NSX Manager deployment, extract the Identity Appliance and vRealize Automation Appliance OVAs and upload the OVFs directly to Ravello using the import tool.

Identity Appliance:

  • Hostname: sso
  • IP: 192.168.199.21
  • Subnet: 255.255.255.0
  • Gateway: 192.168.199.1
  • DNS: 192.168.199.4
  • Search: lab.local

vRealize Automation Appliance:

  • Hostname: vra
  • IP: 192.168.199.22
  • Subnet: 255.255.255.0
  • Gateway: 192.168.199.1
  • DNS: 192.168.199.4
  • Search: lab.local

Once the Identity Appliance is uploaded, navigate to your Ravello Library > VMs and select it. The configuration needs to be verified before you can move on:

Now drag the SSO appliance from the add-VM list onto your canvas, then publish and power it on:

Once it's published, switch to the console, where you'll see a warning about the hypervisor – press any key and ignore it! It will also prompt for a password, since the OVF environment it's expecting isn't there – enter a password twice to continue. The appliance will pick up a DHCP address, as the IP configuration isn't included in the OVF. From the console, take a note of the IP and access it via the VC or DC servers.

Log in using the root user and the password you specified on boot. Change the IP address to the specified static IP and reboot. The SSO appliance is now ready to continue with the normal install process.

Next, open the Library > VM page and select the vRealize Automation appliance to verify the VM settings. The only real configuration I’ve made is to add the static IP address:

Once again, drag the vRA VM onto the canvas and publish the application using the “Update” button. Once the VM is provisioned, open the console to acknowledge the warning, as with the SSO appliance:

Enter an initial password and let the appliance boot until you see the splash screen – grab the IP address and then use it to configure the appliance, as with the Identity Appliance, from the DC or VC server.

Log in using root and the password you entered in the console, and configure the IP address:

The vRealize Automation appliance is now ready to configure.
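
Before carrying on, it's worth a quick check from the DC or VC that the appliance is answering on its static address; the appliance management (VAMI) interface listens on port 5480.

```powershell
# Basic reachability check from the DC or VC server (IP from the table above)
Test-Connection -ComputerName 192.168.199.22 -Count 2

# The appliance management UI (VAMI) is then at:
#   https://192.168.199.22:5480  (log in as root with the password set at the console)
```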

To create the last component, the IaaS server, click + on the Application canvas and drag the “Empty” VM template onto the canvas.

Select it and configure the name, IP addressing, CD mapping and resources as follows. Click Save, then publish the VM.

Install and configure the Windows OS with a static IP address, ready for the IaaS installation.
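
On a Windows 2008 R2 guest the quickest way to do that is netsh from an elevated prompt. The interface name and the .23 address are assumptions for my lab – substitute your own values, and point DNS at the DC so the IaaS server can resolve (and later join) lab.local.

```powershell
# Set a static IP on the IaaS server - interface name and addresses are examples
netsh interface ip set address name="Local Area Connection" static 192.168.199.23 255.255.255.0 192.168.199.1
netsh interface ip set dns name="Local Area Connection" static 192.168.199.4

# Optionally join the lab domain ready for the IaaS install, then reboot
Add-Computer -DomainName lab.local -Credential (Get-Credential)
Restart-Computer
```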

My application now looks like this:

That concludes this post – getting everything ready for configuration has been a long process, but overall a lot less taxing than I expected!