Tag Archives: vRO

Beyond automated deployment

I have been involved in quite a lot of automation projects over the last five years, all of them centered around VMware vRealize Automation and vRealize Orchestrator. During these projects customers throw all kinds of challenges at me, most of which I can solve. Over the years, however, I found two challenges that go beyond automated deployment which I can't really solve using vRA/vRO:

  1. If you update a vSphere template, how do you make sure all machines deployed from that template are also updated?
  2. If you change a blueprint, how do you make sure those changes are also made to existing deployments from that blueprint?

The answer to both really is: you can't. Not if you're using vRA/vRO. Don't get me wrong, I'm not trying to bash these products here. It's just a result of how these products are designed and how they work.

In my opinion both problems boil down to the fact that in vRA blueprints you define the initial state of a deployment, not the desired state. So if you deploy a blueprint you get whatever was specified in that blueprint. Which is fine initially. But if you change the blueprint or update the template, nothing will be changed on the existing deployments. The other way around is true as well: If you change/damage your deployment, vRA won’t come in and fix it for you.

Now this seems obvious and not a big problem. After all: getting deployment times down from weeks to minutes using automation tools is a pretty good improvement in its own right. But if you think about it for a minute you'll realize that once you have automated deployment, you need to spend the rest of your days automating day 2 operations. After all, the tool isn't doing it for you.

For example, you'll have to introduce a tool which manages patches and updates on existing deployments. You also need to figure out a way to keep your template up-to-date, preferably automated. And if somebody breaks their deployment, you need to spend time fixing it.

Now, if you've been following my blog recently you probably already guessed the solution to this problem: BOSH :). Here are four reasons why BOSH makes your life as a platform operator easier:

  1. In BOSH a template is called a stemcell and stemcells are versioned. You don't have to make your own; up-to-date versions of CentOS and Ubuntu stemcells are available online at bosh.io.
  2. When you're using BOSH, software is installed on stemcells by using BOSH releases, which are versioned, available online and actively maintained.
  3. A BOSH deployment defines a desired state. So if a VM disappears BOSH will just re-create it, re-install the software and attach the persistent disk. Also, when you update the deployment manifest to use a newer stemcell version, BOSH will just swap out the current OS disk with the new one in a few seconds and everything will still work afterwards.
  4. All these parts can be pushed through a Concourse pipeline! The pipeline will even trigger automatically when a new stemcell version, release version or deployment manifest version is available. Below is a screenshot of a very simple pipeline I built. This pipeline keeps both the software and the OS of my Redis server up-to-date without me ever touching anything.

You can find the source files for this pipeline here. In real life you'd probably want to add a few steps to this pipeline: first deploy to a test environment, then run some automated tests, and only then push into production.

In summary: if you're using BOSH, not only do you get all the goodness of versioning and desired state config, it also enables you to employ Continuous Deployment for all your servers and software. You can even test new versions automatically, so you don't have to spend all your time just keeping your platform up-to-date.

NLVMUG UserCon Session: The Why, What and How of Automation

On March 16th the Dutch VMUG UserCon took place. Again a big event with around 1000 attendees. And again I had the honor to fill one of the breakout sessions. This year I presented with my co-worker Ruurd Keizer. Our session was titled: “The Why, What and How of Automation”.

In this session we talked about digitization, the differences between power tools and factories, containers, Cloud Foundry and more.

The recording of our session is now available. It's in Dutch, no subtitles. But the demos are towards the end, so feel free to skip the first part if you just want to watch the awesomeness 🙂

This presentation also inspired a whitepaper which you can find here.

The Why, What and How of Automation

Today my first ever whitepaper was published. It's titled: The Why, What and How of Automation. Here is the teaser:

The current digitization wave puts an ever-increasing load on enterprise IT departments. At the same time, the business is expecting shorter delivery times for IT services just to stay ahead of the competition. To keep delivering the right services on time, enterprise IT needs a high degree of automation.

The whitepaper explains why automation is so important, what you need to automate and how this can be done. Those who attended my NLVMUG session might notice that this whitepaper has the same title as my presentation. That’s obviously not a coincidence. If you missed the session make sure to download and read the whitepaper here: http://itq.nl/the-why-what-and-how-of-automation/

I’ll be posting a few more blogs on some of the topics in the whitepaper as well so stay tuned :).

The right tool for the job

I work with vRealize Automation and vRealize Orchestrator on a daily basis. And I really enjoy doing so, especially the custom code development part. vRO gives a lot of flexibility and it's not often that I'm unable to build what my customers need. Whatever the request, I usually find a way to employ vRA and vRO in such a way that it fulfills the customer's needs. But more and more often I wonder if we're using the right tool for the job.

Today I presented a break-out session during the annual NLVMUG UserCon. In the presentation we emphasized the importance of using the right tool for the job. After all, you don't drive a nail into the wall with a power drill. You can do so if you really want to, but you'll probably spend more time than needed putting up your new painting and likely destroy your power drill in the process. It's similar in enterprise IT: you can use a customizable tool like vRA/vRO for nearly anything. But that doesn't mean you should.

But if you can make it work anyway, then why not? First of all: if you're using a product to do something that it wasn't originally intended to do, you'll spend a lot of time and money to make it do what you actually want. And getting the product to do that is only the beginning. Now you need to maintain the product customizations. Chances are something will break at the next product upgrade. So you postpone the upgrade, then postpone again, and in the end the upgrade never happens because the risk is just too high.

Let me give an example: let's say you're trying to deploy in-house developed code through different life cycle stages. You could argue that everything needs to run on a virtual machine, so you start out by automating virtual machine deployment. You'll probably use vRA or something similar to do that for you. After this first step you realize that the code does not run on a bare OS: you may need IIS or .NET or Java or a bunch of shared libraries. So you decide to automate the deployment of middleware software as well. But that still isn't enough to run the code. You also need a database, a load balancer, an SSL certificate and, last but not least, a way to deploy the code to your machines and configure the way it runs. Oh, and of course all this needs to be triggered by the code repository and be completely self-service. By the time you have implemented all this you'll have written tons of custom installation scripts and integration workflows.

Automating code deployment can be tricky, to say the least. And in my opinion all this difficulty stems from the fact that we're starting with the VM as the unit of deployment. The actual unit of deployment is the code/application your developers are writing. By using the wrong data as input for the tool selection, you end up with the wrong tool.

Luckily there are tools designed for application deployment. One of them is called Cloud Foundry. If you use the Pivotal distribution you can set it up in a day or so. And then your developers can just run cf push and their code is running. In the cloud. Sounds a lot better than writing countless installation scripts and custom integrations, doesn't it? Also, the Cloud Foundry platform gives you loads of options you wouldn't have out of the box with tools like vRA: auto-scaling, easy manual scaling, application health monitoring, service bindings, application statistics, centralized logging, custom logging endpoints and lots more.

There is one major “drawback” however: your applications need to be cloud native or 12-factor apps. But you'll have to transform your apps into cloud native apps at some point in the future anyway, so why not start now?


vRO Code – Finding VirtualMachines by Custom property

For the current project I’m involved in, I was asked to deliver a list of vRA deployed machines that have a Production status.

At first I wrote a short piece of code that retrieved all vRA managed machines and, for each machine, gathered the custom properties. Creating this workflow actually took less time than the execution itself, as the environment has about 4200 managed objects. Besides being time-consuming to wait for, this also generates a lot of load on the vRO service and the vRA IaaS API.

The developer in me felt like improving this and moving the functionality to the vRA IaaS API, which, after all, has the custom properties linked to the virtual machine entity object. Eventually, after some research on OData queries and how to query for properties within linked entities, I was able to write the following OData filter:
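
The embedded snippet is not shown in this archive view, so here is a sketch of such a filter; the property name and value are illustrative placeholders:

```
VirtualMachineProperties/any(p: p/PropertyName eq 'Status' and p/PropertyValue eq 'Production')
```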

Putting the filter and the vCAC IaaS plugin logic together forms the following script, which can be used in either a workflow or an action:
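
The embedded code is missing here as well, so the snippet below is a minimal sketch of the logic elaborated on below. It assumes a vCAC:VCACHost input called host, and the property name and value are placeholders:

```javascript
// Sketch: find IaaS VirtualMachines entities by custom property.
// 'host' is assumed to be a vCAC:VCACHost workflow/action input.
var propertyName = "Status";       // illustrative custom property name
var propertyValue = "Production";  // illustrative property value

// Build the OData filter on the linked VirtualMachineProperties entities
var filter = "VirtualMachineProperties/any(p: p/PropertyName eq '" +
    propertyName + "' and p/PropertyValue eq '" + propertyValue + "')";

// Query the IaaS model for matching entities; the trailing arguments
// (orderBy, top, skip, headers) are left empty in this sketch
var entities = vCACEntityManager.readModelEntitiesBySystemQuery(
    host.id, "ManagementModelEntities.svc", "VirtualMachines",
    filter, null, null, null, null);

// Log the names of the VirtualMachines that match the query
for each (var entity in entities) {
    System.log(entity.getProperties().get("VirtualMachineName"));
}
```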

To elaborate a little bit on the code snippet above:

  • First the property and its value are specified.
  • The second step is to set up the filter with the property and value.
  • The third step is to actually perform the call to vRA IaaS, which returns an array of vCAC:Entity objects based on the filter.
  • The last step in the code is to System.log() the names of the VirtualMachines that match the query.

If you need vCAC:VirtualMachine objects instead of vCAC:Entity objects, change the last part of the code to:
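
A sketch of that change, assuming the same entities array and using the entity's getInventoryObject() method to obtain the corresponding inventory object:

```javascript
// Convert each vCAC:Entity into its vCAC:VirtualMachine inventory object
var virtualMachines = [];
for each (var entity in entities) {
    virtualMachines.push(entity.getInventoryObject());
}
System.log("Found " + virtualMachines.length + " virtual machines");
```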


Conclusion

Gathering virtual machines based on specific properties can be a hassle using OData queries, as it is not always completely clear how to structure the query. But once the query is ready and working, it proves to be much faster than writing a script that “hammers” the API for data. The two screenshots below show the actual difference between the initial code and the improved code. The first screenshot is the original code; it errors out after 30 minutes of API calls. The second screenshot is a capture of the improved code; it runs for only 1 second to return the list of VirtualMachines matching the filter.

Screenshot: the first attempt ended up in an error returned by the vRA IaaS API after 30 minutes of performing API calls.


Screenshot: the second attempt with the improved code. The runtime of the script is now only a matter of seconds.

vRO Code – Calculate CIDR notation

Recently I ran into situations where I needed to supply a network address in CIDR notation to external systems (Infoblox and the Oracle Grid installer in my case) from a vRealize Orchestrator workflow.

CIDR notation looks like this: 192.168.1.20/24. So instead of specifying the subnet mask as four octets separated by dots, it just tells how many bits are used for the network number.

The thing is, all you can get from vRA is the regular subnet mask (255.255.255.0 in this example). Sometimes you can get away with a solution as simple as a few ifs or a switch/case, but that's not really the way to properly fix this. I wanted to solve this once and for all, so I wrote some JavaScript code for vRealize Orchestrator that calculates the number of bits in the mask and creates the CIDR network notation for you. Here it is:

  • Inputs for the script object or action:
    • gateway (String)
    • subnetMask (String)
  • Output:
    • cidr (String)
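
The embedded script is not shown in this archive view; below is a minimal sketch matching the inputs and output listed above:

```javascript
// Sketch: count the bits set in the subnet mask and combine that with
// the network number derived from the gateway address.
var maskOctets = subnetMask.split(".");
var gatewayOctets = gateway.split(".");
var bits = 0;
var networkOctets = [];

for (var i = 0; i < 4; i++) {
    var maskOctet = parseInt(maskOctets[i], 10);
    var gatewayOctet = parseInt(gatewayOctets[i], 10);

    // Count how many bits are set in this octet of the mask
    for (var b = 7; b >= 0; b--) {
        if ((maskOctet >> b) & 1) {
            bits++;
        }
    }

    // ANDing the gateway with the mask yields the network number octet
    networkOctets.push(gatewayOctet & maskOctet);
}

// Assign the output binding (in an action, return cidr instead)
cidr = networkOctets.join(".") + "/" + bits;
System.log("CIDR notation: " + cidr);
```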

This script generates the network number, but it can easily be adapted to return the IP address itself in CIDR notation.


vRO Code – Finding vRA IaaS entities using OData query


As I've explained in an earlier blog, the vRA API is split into two parts: CAFE and IaaS. This post is about the latter, which still contains a lot of the vRA entities. When working with those entities you regularly need to find them first. One of the methods for finding vRA IaaS entities is using an OData query.
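
The full walkthrough is in the post itself; as a quick taste, a system query with a simple OData filter on an entity property looks something like this sketch (the entity set, filter and host input are illustrative):

```javascript
// Sketch: find IaaS entities whose VirtualMachineName matches exactly.
// 'host' is assumed to be a vCAC:VCACHost input.
var filter = "VirtualMachineName eq 'myvm-001'";
var entities = vCACEntityManager.readModelEntitiesBySystemQuery(
    host.id, "ManagementModelEntities.svc", "VirtualMachines",
    filter, null, null, null, null);
System.log("Found " + entities.length + " matching entities");
```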

Continue reading

Free tool: vRO API Explorer

If you're working with vRealize Orchestrator you are probably spending a lot of time in the API Explorer. I find myself using it all the time. But it has some flaws and the functionality is limited. Two of my colleagues recognized that and decided to build their own vRO API Explorer. And they didn't stop there… it is now available online for everybody to use: vroapi.com


Continue reading

Orchestration and your configuration data

As this is my first blog post at automate-it.today, I would like to start off with something less technical: a monologue about orchestration and your configuration data. This post will be the first in a series of four; the following posts will be more technical, describing the possibilities, the technical implementations and their pros and cons.

First off, what is configuration/automation data? In short: the data that supports your automation in terms of decision making and logic.

Continue reading