Tag Archives: Automation

NLVMUG UserCon Session: The Why, What and How of Automation

On March 16th the Dutch VMUG UserCon took place. It was again a big event, with around 1000 attendees, and again I had the honor of filling one of the breakout sessions. This year I presented with my co-worker Ruurd Keizer. Our session was titled “The Why, What and How of Automation”.

In this session we talked about digitization, the differences between power tools and factories, containers, Cloud Foundry and more.

The recording of our session is now available. It’s in Dutch, with no subtitles. The demos are towards the end, so feel free to skip the first part if you just want to watch the awesomeness 🙂

This presentation also inspired a whitepaper which you can find here.

The Why, What and How of Automation

Today my first ever whitepaper was published. It’s titled “The Why, What and How of Automation”. Here is the teaser:

The current digitization wave puts an ever-increasing load on enterprise IT departments. At the same time, the business is expecting shorter delivery times for IT services just to stay ahead of the competition. To keep delivering the right services on time, enterprise IT needs a high degree of automation.

The whitepaper explains why automation is so important, what you need to automate and how this can be done. Those who attended my NLVMUG session might notice that this whitepaper has the same title as my presentation. That’s obviously not a coincidence. If you missed the session make sure to download and read the whitepaper here: http://itq.nl/the-why-what-and-how-of-automation/

I’ll be posting a few more blogs on some of the topics in the whitepaper as well so stay tuned :).

NLVMUG UserCon 2017

On Thursday March 16th the annual NLVMUG UserCon will take place. The venue will be the same as the last few years: 1931 Congrescentrum Brabanthallen.

All authors of this site will be participating in the conference: Dennis is the Dutch VMUG leader, so in that capacity he’ll have the honor of opening the conference. Olivier and I will be hosting a group discussion titled “Automation in real life”. And I’ll also be presenting during a break-out session, together with my colleague Ruurd Keizer (author of vroapi.net). The title of this year’s presentation is “The why, what and how of automation”.

Contrary to previous years (here, here and here) I won’t be bringing any physical DIYed gadgets onto the stage :(. Not because I didn’t want to, but because I’m moving house and all my stuff is in storage at the moment. It would be mission impossible to get all the way to the back of the darn storage box to retrieve my soldering iron, for example. If you’ve ever put your whole life in a storage box you know what I’m talking about…

But of course there will still be demos! And I still did some DIYing; it’s just all virtual this year 🙂

See you all this Thursday!

Automating vRA (vCAC) using vRO – Split Brain

Recently I have done some work on automating vRA (vCAC) using vRO (vCO). This meant I had to dive into the vCAC APIs. The bad news is that this felt like diving into a pool of dark muddy water. The good news is that I’m still alive, my headache is gone and I’ll try to capture some of the things I learned on this blog.

Split brain

In this post I’ll start out with an introduction to the vCAC APIs. Yes, plural. Not just one API.

vCAC, ahem… vRA, is actually not just one product; it’s two products which are loosely coupled and sold as one. The first product is the vRA appliance, also known as CAFE. This is a new product that was introduced with vCAC version 6.0. It is developed in Java (Spring), runs on Linux, uses PostgreSQL as a data persistence layer, seems to use a microservices architecture, supports multi-tenancy and provides a REST API.

But there is also the old product that was originally developed at Credit Suisse, spun off as DynamicOps and then acquired by VMware. It was sold as vCAC 5.x, is developed in .NET, uses an MS SQL back-end, runs .NET workflows, has no notion of multi-tenancy and provides an OData API. This part is usually called the IaaS part.

The two products are also reflected in two separate vCO, ahem… vRO, plugins. Although you download and install just one package, there are really two plugins installed. One is called vCAC and has the description “vCloud Automation Center Infrastructure Administration plug-in for vCenter Orchestrator”; the other one is called CAFE and is described as “vCloud Automation Center plug-in for vCenter Orchestrator”.

Confusing. Right? So let’s clear things up:

CAFE is the virtual appliance. All new features are developed in CAFE, so anything that was added since 6.0 runs on the appliance and can be used from the REST API. On top of that, some functionality was moved to the appliance. Functionality running in CAFE in version 6.1 includes:

  • Business Groups and Tenants
  • Advanced Service Designer
  • The Catalog
  • Resource Actions
  • Approval policies
  • Notifications

So if you want to automate anything regarding any of these features, you’ll need the CAFE plugin, which talks to the REST API running on the virtual appliance.
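To give you an idea of what that looks like, here is a minimal vRO JavaScript sketch that lists the catalog items CAFE knows about. It assumes the CAFE plugin is installed, that a host of type vCACCAFE:VCACHost has been added to the vRO inventory, and that the vCACCAFEEntitiesFinder scripting class behaves as it did in the 6.x plugin I used; treat it as an illustration rather than copy-paste material.

```javascript
// Minimal sketch: list CAFE catalog items through the CAFE plugin.
// Assumes a vCAC appliance has been registered in the vRO inventory.
var cafeHosts = Server.findAllForType("vCACCAFE:VCACHost");
if (cafeHosts == null || cafeHosts.length == 0) {
    throw "No CAFE host found in the vRO inventory";
}
var cafeHost = cafeHosts[0];

// The CAFE plugin wraps the REST API running on the appliance.
var catalogItems = vCACCAFEEntitiesFinder.getCatalogItems(cafeHost);
for each (var item in catalogItems) {
    System.log("Catalog item: " + item.getName() + " (" + item.getId() + ")");
}
```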

IaaS is the name of everything that’s not on the appliance. It is the reason you need a Windows server to run vRA, not just the appliance. This Windows server (or multiple servers) runs the old DynamicOps software with some modifications. Features provided by this part of vRA include:

  • Virtual Machine Blueprints
  • Machine Prefixes
  • Provisioning Groups (these map to Business Groups in CAFE; the GUI only knows Business Groups in the current version)
  • Reservations
  • VirtualMachines (vCAC VM objects which map to vSphere/vCloud VMs or even physical machines)

If you want to automate any of the above, you’ll need to use the vCAC plugin or the OData API. If you’re not familiar with OData APIs, there is something you should know: it’s not an actual API. It’s just a representation of the database. There is no application logic behind it, just database constraints. This means that creating new things (called entities) is rather difficult: you have to figure out all the links between the different database tables yourself. I’ll try to dive deeper into this in another blog post.
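Reading entities is the easy part, though. Below is a hedged vRO JavaScript sketch that queries the VirtualMachines entity set through the vCAC (IaaS) plugin. It assumes a host of type vCAC:VCACHost in the vRO inventory, the vCACEntityManager scripting class with its readModelEntitiesByCustomFilter method and the ManagementModelEntities.svc model name; the VM name used in the filter is obviously made up.

```javascript
// Minimal sketch: query the IaaS OData model for a virtual machine by name.
// Assumes the vCAC (IaaS) plugin is installed and an IaaS host is registered.
var iaasHosts = Server.findAllForType("vCAC:VCACHost");
var iaasHost = iaasHosts[0];

// The filter is an exact-match property list; joins across entity sets are up to you.
var filter = new Properties();
filter.put("VirtualMachineName", "my-test-vm"); // hypothetical machine name

var entities = vCACEntityManager.readModelEntitiesByCustomFilter(
    iaasHost.id, "ManagementModelEntities.svc", "VirtualMachines", filter, null);

for each (var entity in entities) {
    System.log("Found VM entity: " + entity.getProperties().get("VirtualMachineName"));
}
```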

There is another peculiarity I want to point out: there is no multi-tenancy in the IaaS part. This means that a lot of items from the IaaS part (machine prefixes, for example) are shown to all tenants!

Touchpoints

The fact that vRA basically has a split brain creates some challenges when automating things in vRA. For example: you’ll have to create a blueprint in the IaaS part, but when you want to publish it you have to create a catalog item in the CAFE part of the product. Which brings me to the last part of this post.

As I said before, the two products are loosely coupled. The actual touchpoints are not documented, or at least I couldn’t find anything. But after spending a lot of hours trying to find out how to automate the publishing of a blueprint, I found these touchpoints between the two APIs:

  • The Business Group ID in CAFE is identical to the Provisioning Group ID in IaaS. If you create a Business Group in the REST API then vRA also creates the ProvisioningGroup in IaaS for you.
  • The catalog actually consists of three catalogs (more on this later). One of the catalogs is the provider catalog. Each provider manages its own provider catalog, and IaaS is one of the providers. Somehow CAFE knows where to find certain provider IDs; I’m not sure where to find or set that mapping.
  • Every catalog item has a providerBinding attribute, which contains a bindingId. This binding ID is the blueprint ID (virtualMachineTemplateID) from the IaaS part. This is how vRA figures out which blueprint to deploy when you request a catalog item (see the sketch after this list).
  • A resource operation also has a bindingId, which maps the CAFE action to the IaaS action (powering on a VM, for example).
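To make that catalog item touchpoint concrete, here is a small vRO JavaScript sketch that walks the CAFE catalog items and prints the IaaS binding ID each one points at. It assumes the CAFE plugin exposes getProviderBinding() and getBindingId() on its catalog item objects, which is how it looked in the 6.x plugin; your mileage may vary per version.

```javascript
// Minimal sketch: show which IaaS blueprint each CAFE catalog item is bound to.
var cafeHost = Server.findAllForType("vCACCAFE:VCACHost")[0];
var catalogItems = vCACCAFEEntitiesFinder.getCatalogItems(cafeHost);

for each (var item in catalogItems) {
    var binding = item.getProviderBinding(); // assumed getter on the catalog item
    if (binding != null) {
        // For IaaS-provided items this is the blueprint (virtualMachineTemplate) ID.
        System.log(item.getName() + " -> bindingId: " + binding.getBindingId());
    }
}
```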

What does this button do?

Dennis and I will be presenting at the NLVMUG event on March 19th 2015. We will talk about automation in general, how to optimize for automation, how cool Docker is, and we’ll be doing some awesome interactive demos.

This year we won’t take a 3D printed datacenter with us. Instead we’ll bring our magic button. It’s battery powered, WiFi connected and looks like this:

[Photo of the magic button]

What does this button do? Join us on March 19th to find out.

Workflows vs. Scripts (or: Scripts are scary)

I originally wrote this blog post a couple of weeks ago and was planning on publishing it today. I started the post with this line: “Let me start by saying this: Scripts are awesome!”. But today I read Duncan’s post titled “Automation is scary!”. And I partly agree with Duncan: automation can be scary. But I think not all automation is scary, so I’ll start this post with a slightly modified line…

Let me start by saying this: Scripts are scary! They can do the tedious tasks for you, they can be scheduled so you never forget to delete a snapshot again, they can be used to offload your work to less skilled colleagues, and they can save you time and frustration. However, most scripts in use are not written by the user. They are usually left by your predecessor, downloaded from blogs, copied from tutorials and whatnot. On top of that, if somebody tells you he “hacked some scripts together”, you are left with the feeling it’s a quick and dirty solution. So while scripts are awesome, there appears to be something wrong with them which makes them scary as well. Let’s dig a little deeper into what’s scary about scripts.

Have you ever taken over the responsibilities of another administrator who was really into “automating” his work, only to discover that he has a million scripts somewhere on a file server which do all his work for him? How did that work out for you? Would you use all of those scripts?
Probably not. I know I wouldn’t, because I don’t have a clue what each script exactly does, for which version of whatever software it was developed, whether it was tested thoroughly, in which order they are supposed to run, and so on and so forth. In all likelihood there will be no, or very little, documentation. So when you have to deploy a new VM you have no clue whether you need to run the “reserve IP” script first and then the “deploy VM” script or the other way around. So the best course of action would be to not use the scripts and first figure out how the process is done manually.

But that’s a shame really, because a lot of time was spent on developing those scripts and they could have saved you a lot of time as well. Over time you will probably develop new scripts for the same, or new, tasks. Which is just more time wasted.

Now let’s compare this with workflows in vCenter Orchestrator. Instead of a bunch of undocumented scripts you have a collection of workflows. Each one has a meaningful name, comes with a version history, displays what it is going to do and can hold a description or even nice “sticky notes” explaining what is going on. But more importantly, they execute a whole process instead of just one task.

And that’s the major difference between scripts and workflows: workflows automate processes, scripts automate tasks. Actually, workflows largely consist of a number of scripts, because that’s what workflows do: execute all the tasks in a certain process.

The good news is: if you happen to have a ton of PowerShell scripts lying around, you can use these in a vCO workflow just fine. One fair warning for the PowerCLI enthusiasts out there though: once you start learning vCO you’ll probably use less and less PowerShell…

How to Automate: vCO

In my previous “How to automate” blog I wrote a little bit about PowerCLI. I concluded that piece with the statement that it might not be the right tool for the job if you’re looking to automate IT operations processes. Today we’ll take a quick glance at vCO to see if that is a better fit for the job.

So what is vCO? vCO is vCenter Orchestrator, an automation tool which allows you to develop workflows by dragging and dropping pre-built parts or by creating your own elements using JavaScript.

The Upsides

The good news is that if you run vSphere you already have vCO; it actually gets installed with your vCenter Server. But there is also a virtual appliance available, or you can install it on a separate Windows machine. There are no additional license costs, it’s included with vSphere. So I’ll call that a big upside.

Another upside is the fact that vCO is a workflow tool. For some people this might be a downside though, especially if you are very used to writing PowerCLI scripts. For those people there is actually a plug-in which enables you to run PowerCLI scripts from vCO. The upside of vCO being a workflow tool, however, is that it is easy to keep things organized in a library. The workflows kind of document themselves because you can use meaningful names for elements in the workflow, insert descriptions or even add “sticky notes” to workflows. It is also very easy to re-use actions and workflows you created before, so the longer you work with vCO the faster you can develop new workflows.

There are also a lot of plug-ins available for vCO. This makes it really easy to interact with other systems like vCAC, Infoblox, ServiceNow and so on.

The last upside is, in my opinion, the fact that all actions and scriptable objects in vCO are actually JavaScript. Depending on your previous experience, JS is probably pretty easy to learn: the syntax looks like PHP and C, and you’ll be using very basic instructions most of the time.
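To illustrate how plain that JavaScript really is, here is a trivial scriptable-task sketch. It assumes the workflow has an input parameter named vm of type VC:VirtualMachine (from the vCenter plugin); the attribute names come from the vSphere API as exposed by that plugin.

```javascript
// Trivial scriptable task: log some facts about the VM passed in as input 'vm'.
System.log("VM name: " + vm.name);
System.log("vCPUs:   " + vm.summary.config.numCpu);
System.log("Memory:  " + vm.summary.config.memorySizeMB + " MB");
```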

The Downsides

The community around vCO is probably smaller than the one around PowerCLI. This could make finding solutions and pre-built scripts and workflows a bit harder. Of course there is this blog to help improve that :).

Another downside might be the fact that everything is basically JavaScript. Wait… what? Didn’t I just say that’s an upside? Again, that depends on your current skill set. It might be a bit hard to learn JS if you’re a hardcore PowerCLI guy; there are no “|” and “$” in JavaScript.

Overall it seems that the learning curve for vCO is quite steep. This is probably caused by the fact that the interaction between different steps in a workflow is a little different from what you’re used to when you just use scripts, or any other programming language for that matter.

The Right Tool for the job?

In my opinion a workflow tool is the right tool for the job when you are automating processes, because that’s exactly what these tools were built for. When you are automating a vSphere or vCloud environment it makes perfect sense to learn and use vCO.
On the other hand, if you just want a table of VMs with their number of vCPUs and RAM size, you might want to consider using PowerCLI. But if you’re running that script every day, it makes sense to call it from a workflow which sends the result to you by e-mail, and then schedule the workflow for a daily run.

Selecting Network Through vCAC Property

vCAC is a really powerful automation tool, primarily giving users the opportunity to request their own applications without intervention from the IT department. In this particular case I was configuring a blueprint to be used by developers.

Using vCAC to clone vCenter templates is really simple; users request a machine to use in the development or the production environment. While I was testing the deployment of this blueprint, the department responsible for Active Directory complained because my test machines, which did not follow the correct naming convention, appeared in the production AD.

The reservation used for this blueprint contained two networks: a development and a production network. Normally, after deploying a vCenter template, you would edit the virtual machine hardware and select the correct network before powering on the VM. In this case the requesting user does not have permission to connect to vCenter and change this setting, and vCAC will just power on the VM. So we need another solution.

The solution is surprisingly simple: you can use a custom property within vCAC. I will describe this in a few steps.

The first step is to find out which networks are available; this can be done by editing the reservation for this particular blueprint.

[Screenshot: networkproperty-1]

The next step is creating a new property definition within the property dictionary:

[Screenshot: networkproperty-2]

Name: VirtualMachine.Network0.Name
Display Name: Select Network
Control Type: DropDownList
Required: Yes

After you have created the definition you can edit the property attributes:

[Screenshot: networkproperty-3]

Type: ValueList
Name: Network
Value: Add all choices comma separated.

Now when the user requests the blueprint, there is a required choice named Network with the choices you added, in this case Development-1 and Production-1.

[Screenshot: networkproperty-4]
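The selected value simply ends up as a custom property on the requested machine, so the provisioning engine connects the first NIC to that network (assuming, of course, that the chosen network path exists in the reservation). Using this example's names it boils down to a name/value pair like:

```
VirtualMachine.Network0.Name = Development-1
```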

How to Automate: PowerCLI

After writing my blog posts about the why and the what of automation I decided to write a bit about how to automate. This series of posts won’t go into much technical detail but rather offer some pointers to help you choose the right tool for the job.

When you think of automating tasks in a vSphere environment, the first tool you probably think about is PowerCLI. For all less technical readers out there: VMware vSphere PowerCLI is a command line interface tool for automating vSphere tasks.

The Upsides

PowerCLI has the backing of a huge community. There are also plenty of books on the subject. On top of that, there are a lot of cmdlets available from all kinds of vendors, so it integrates easily with a lot of products.

Thanks to the huge community it is pretty easy to learn PowerCLI or to build your scripts by just recycling code that can be found online and in books.

Another upside is that it is object oriented. This makes passing around and manipulating information pretty easy.

The Downsides

The object oriented nature of PowerCLI could also be a downside, depending on where you’re coming from. If you are used to writing bash scripts then some of the PowerCLI syntax may seem familiar, but handling objects can be confusing.

Another disadvantage, in my opinion, is that you need a Windows machine to run PowerShell. Being a Linux guy, I really don’t feel a need to run any Windows machine, especially in lab environments where you can get away with running the vCenter appliance instead of the Windows installation.

The last disadvantage is not specific to PowerCLI but applies to scripts in general: simple tasks are easy to automate, but more complex tasks or long processes will result in huge scripts, or piles of scripts, which are hard to understand and very hard to troubleshoot and modify if needed.

The right tool for the job?

So is PowerCLI the right tool for your job? Of course this depends on what the job is. I think it’s the right tool if you’re automating vSphere administration tasks. In that case it can make your life easier and save time. But if you’re trying to automate whole IT operations processes, I highly doubt this is the right tool for you. More on that in a future blog post.

vCAC Blueprint Configuration

Below is the vCAC configuration workflow for configuring a blueprint. This blog post is the fifth in the vCAC configuration series.

The action blocks are actually clickable and will show you the matching parts of the VMware documentation in a popup window.

Go back to the configuration steps overview.

 

A couple of interesting vCAC documentation links about Blueprints: